A behavioral cloning agent.

Inherits From: TFAgent

Implements behavioral cloning, wherein the network learns to clone given experience. Users may provide their own loss function; if none is given, a default is chosen based on the action dtype (see loss_fn below). Note that this implementation uses a QPolicy. To use other policies, subclass this agent and override the _get_policies method. Note that the cloning_network must match the requirements of the generated policies.

Behavioral cloning was proposed in the following articles:

Pomerleau, D.A., 1991. Efficient training of artificial neural networks for autonomous navigation. Neural Computation, 3(1), pp.88-97.

Russell, S., 1998, July. Learning agents for uncertain environments. In Proceedings of the eleventh annual conference on Computational learning theory (pp. 101-103). ACM.

Args:

time_step_spec: A TimeStep spec of the expected time_steps.

action_spec: A nest of BoundedTensorSpec representing the actions.

cloning_network: A tf_agents.networks.Network to be used by the agent. The network will be called as

network(observation, step_type, network_state=initial_state)

and must return a 2-tuple with elements (output, next_network_state) where output will be passed as the first argument to loss_fn, and used by a Policy. Input tensors will be shaped [batch, time, ...] when training, and they will be shaped [batch, ...] when the network is called within a Policy. If cloning_network has an empty network state, then for training time will always be 1 (individual examples).
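
To illustrate this call contract, here is a minimal stateless network sketch (hypothetical; the class name, layer size, and flat float observations are assumptions for illustration):

import tensorflow as tf
from tf_agents.networks import network

class CloningNet(network.Network):  # hypothetical example network
  def __init__(self, input_tensor_spec, num_actions, name='CloningNet'):
    # An empty state_spec marks the network as stateless, so during
    # training time will always be 1 (see above).
    super(CloningNet, self).__init__(
        input_tensor_spec=input_tensor_spec, state_spec=(), name=name)
    self._logits = tf.keras.layers.Dense(num_actions)

  def call(self, observation, step_type=None, network_state=(), training=False):
    # Produce one logit per action; return the (empty) state unchanged
    # as the second element of the required 2-tuple.
    output = self._logits(tf.cast(observation, tf.float32))
    return output, network_state

In practice, the ready-made q_network.QNetwork from tf_agents.networks satisfies this contract for discrete actions.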

optimizer: The optimizer to use for training.

num_outer_dims: The number of outer dimensions for the agent. Must be either 1 or 2. If 2, training will require both a batch_size and time dimension on every Tensor; if 1, training will require only a batch_size outer dimension.

epsilon_greedy: Probability of choosing a random action in the default epsilon-greedy collect policy (used only if a wrapper is not provided to the collect_policy method).

loss_fn: A function for computing the error between the output of the cloning network and the action that was taken. If None, the loss depends on the action dtype. If the dtype is integer, then loss_fn is

def loss_fn(logits, action):
  return tf.compat.v1.nn.sparse_softmax_cross_entropy_with_logits(
      labels=action - action_spec.minimum, logits=logits)

If the dtype is floating point, the loss is tf.math.squared_difference.

loss_fn must return a loss value for each element of the batch.
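
As an example, a custom loss_fn for vector-valued continuous actions might average the squared error over action dimensions so that exactly one loss value remains per batch (and time) element (a sketch only; the reduction choice is an assumption):

import tensorflow as tf

def mse_loss_fn(network_output, action):
  # Mean squared error across action dimensions, leaving one loss value
  # per batch/time element, as required.
  return tf.reduce_mean(
      tf.math.squared_difference(network_output, action), axis=-1)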

gradient_clipping: Norm length to clip gradients.
debug_summaries: A bool to gather debug summaries.
summarize_grads_and_vars: If True, gradient and network variable summaries will be written during training.
train_step_counter: An optional counter to increment every time the train op is run. Defaults to the global_step.
name: The name of this agent. All variables in this module will fall under that name. Defaults to the class name.

Raises:

ValueError: If action_spec contains more than one action, but a custom loss_fn is not provided.
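
Putting the constructor arguments together, a typical setup for a discrete-action environment might look like the following sketch (the environment name, layer sizes, and learning rate are placeholder assumptions):

import tensorflow as tf
from tf_agents.agents.behavioral_cloning import behavioral_cloning_agent
from tf_agents.environments import suite_gym, tf_py_environment
from tf_agents.networks import q_network

env = tf_py_environment.TFPyEnvironment(suite_gym.load('CartPole-v0'))

# A QNetwork emits one logit per discrete action, matching both the
# default integer-action loss and the QPolicy generated by this agent.
cloning_net = q_network.QNetwork(
    env.observation_spec(), env.action_spec(), fc_layer_params=(100,))

agent = behavioral_cloning_agent.BehavioralCloningAgent(
    env.time_step_spec(),
    env.action_spec(),
    cloning_network=cloning_net,
    optimizer=tf.compat.v1.train.AdamOptimizer(learning_rate=1e-3))
agent.initialize()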

Attributes:

action_spec: TensorSpec describing the action produced by the agent.
collect_data_spec: Returns a Trajectory spec, as expected by the collect_policy.
collect_policy: Returns a policy that can be used to collect data from the environment.
policy: Returns the current policy held by the agent.
time_step_spec: Describes the TimeStep tensors expected by the agent.
train_argspec: TensorSpec describing extra supported kwargs to train().
train_sequence_length: The number of time steps needed in experience tensors passed to train.

Train requires experience to be a Trajectory containing tensors shaped [B, T, ...]. This property describes the value of T required.

For example, for non-RNN DQN training, T=2 because DQN requires single transitions, and a single transition spans two adjacent time steps.

If this value is None, then train can handle an unknown T (it can be determined at runtime from the data). Most RNN-based agents fall into this category.


training_data_spec: Returns a trajectory spec, as expected by the train() function.
validate_args: Whether train & preprocess_sequence validate input & output args.



Methods

initialize()

Initializes the agent.

Returns:

An operation that can be used to initialize the agent.

Raises:

RuntimeError: If the class was not initialized properly (super().__init__ was not called).


preprocess_sequence(experience)

Defines preprocess_sequence function to be fed into replay buffers.

This defines how we preprocess the collected data before training. Defaults to pass through for most agents. Structure of experience must match that of self.collect_data_spec.

Args:

experience: A Trajectory shaped [batch, time, ...] or [time, ...] which represents the collected experience data.

Returns:

A post-processed Trajectory with the same shape as the input.

Raises:

TypeError: If experience does not match self.collect_data_spec structure types.
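
A usage sketch (agent and experience, a Trajectory matching agent.collect_data_spec, are assumed to exist):

# Defaults to a pass-through for this agent; subclasses may transform
# collected data here before it reaches the replay buffer.
processed_experience = agent.preprocess_sequence(experience)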


train(experience, weights=None, **kwargs)

Trains the agent.

Args:

experience: A batch of experience data in the form of a Trajectory. The structure of experience must match that of self.training_data_spec. All tensors in experience must be shaped [batch, time, ...], where time must be equal to self.train_sequence_length if that property is not None.

weights: (Optional.) A Tensor, either 0-D or shaped [batch], containing weights to be used when calculating the total train loss. Weights are typically multiplied elementwise against the per-batch loss, but the implementation is up to the Agent.

**kwargs: Any additional data as declared by self.train_argspec.

Returns:

A LossInfo loss tuple containing loss and info tensors.

  • In eager mode, the loss values are first calculated, then a train step is performed before they are returned.
  • In graph mode, executing any or all of the loss tensors will first calculate the loss value(s), then perform a train step, and return the pre-train-step LossInfo.

Raises:

TypeError: If validate_args is True and: experience is not type Trajectory; or if experience does not match self.training_data_spec structure types.

ValueError: If validate_args is True and: experience tensors' time axes are not compatible with self.train_sequence_length; or if experience does not match self.training_data_spec structure.

ValueError: If validate_args is True and the user does not pass **kwargs matching self.train_argspec.

RuntimeError: If the class was not initialized properly (super().__init__ was not called).
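
End to end, training from a replay buffer of demonstration data might look like the following sketch (the buffer is assumed to already contain trajectories matching agent.collect_data_spec; the batch size and step count are arbitrary):

# Sample training batches; num_steps honors train_sequence_length above.
dataset = replay_buffer.as_dataset(
    sample_batch_size=64,
    num_steps=agent.train_sequence_length,
    num_parallel_calls=3).prefetch(3)

iterator = iter(dataset)
for _ in range(1000):
  experience, _ = next(iterator)
  loss_info = agent.train(experience)
  # loss_info.loss holds the scalar training loss for this step.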