An Enum of type ExplorationPolicy. The kind of
policy we use for exploration. Currently supported policies are
LinUCBPolicy and LinearThompsonSamplingPolicy.
A TimeStep spec describing the expected TimeSteps.
A scalar BoundedTensorSpec with int32 or int64 dtype
describing the number of actions for this agent.
Instance of LinearBanditVariableCollection.
Collection of variables to be updated by the agent. If None, a new
instance of LinearBanditVariableCollection will be created.
(float) positive scalar. This is the exploration parameter that
multiplies the confidence intervals.
A float forgetting factor in [0.0, 1.0]. When set to 1.0, the
algorithm does not forget.
Whether to use eigen-decomposition or not. If False, the default
Conjugate Gradient solver is used.
(float) Tikhonov regularization term.
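
Taken together, alpha, gamma, and the Tikhonov term enter the textbook
linear-UCB computation roughly as in the sketch below. This is a minimal
numpy illustration of the standard recursive ridge-regression form, not the
agent's actual implementation; all names are illustrative.

    import numpy as np

    context_dim = 4
    alpha = 2.0            # exploration parameter scaling the confidence interval
    gamma = 0.9            # forgetting factor; 1.0 means no forgetting
    tikhonov_weight = 1.0  # strength of the Tikhonov (ridge) regularizer

    # Sufficient statistics of the ridge-regression reward model for one arm.
    a_mat = tikhonov_weight * np.eye(context_dim)  # ~ lambda*I + sum of x x^T
    b_vec = np.zeros(context_dim)                  # ~ sum of reward * x

    def update(a_mat, b_vec, x, reward):
      # Forgetting discounts old evidence before adding the new observation.
      return gamma * a_mat + np.outer(x, x), gamma * b_vec + reward * x

    def ucb_score(a_mat, b_vec, x):
      a_inv = np.linalg.inv(a_mat)
      theta = a_inv @ b_vec            # ridge estimate of the reward weights
      width = np.sqrt(x @ a_inv @ x)   # confidence-interval half-width
      return theta @ x + alpha * width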
If true, a bias term will be added to the linear reward estimation.
(tuple of strings) what side information we want to get
as part of the policy info. Allowed values can be found in
policy_utilities.PolicyInfo.
Whether the policy emits log-probabilities or not.
Since the policy is deterministic, the probability is just 1.
A function used for masking
valid/invalid actions with each state of the environment. The function
takes in a full observation and returns a tuple consisting of 1) the
part of the observation intended as input to the bandit agent and
policy, and 2) the boolean mask. This function should also work with a
TensorSpec as input, and should output TensorSpec objects for the
observation and mask.
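
For example, if the environment packs the context and the per-action mask
into a dict observation, the splitter can be as simple as the sketch below;
the 'observation' and 'mask' keys are an assumption about this particular
environment, not part of the agent's API.

    def observation_and_action_constraint_splitter(obs):
      # Dict indexing behaves the same for concrete tensors and for
      # TensorSpec structures, so this function also works on specs.
      return obs['observation'], obs['mask']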
A Python bool, default False. When True, debug summaries are gathered.
A Python bool, default False. When True,
gradients and network variable summaries are written during training.
A Python bool, default True. When False, no summaries
(debug or otherwise) are written.
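
Putting the constructor arguments together, a minimal construction sketch
follows. It assumes the TF-Agents LinUCBAgent convenience class, which fixes
the exploration policy to LinUCBPolicy; the specs and hyperparameter values
are purely illustrative.

    import tensorflow as tf
    from tf_agents.bandits.agents import lin_ucb_agent
    from tf_agents.specs import tensor_spec
    from tf_agents.trajectories import time_step as ts

    context_dim, num_actions = 4, 3

    # Observations are context vectors; actions are arm indices.
    observation_spec = tensor_spec.TensorSpec([context_dim], tf.float32)
    time_step_spec = ts.time_step_spec(observation_spec)
    action_spec = tensor_spec.BoundedTensorSpec(
        shape=(), dtype=tf.int32, minimum=0, maximum=num_actions - 1)

    agent = lin_ucb_agent.LinUCBAgent(
        time_step_spec=time_step_spec,
        action_spec=action_spec,
        alpha=1.0,            # confidence-interval multiplier
        gamma=1.0,            # 1.0: the algorithm does not forget
        tikhonov_weight=1.0,  # Tikhonov regularization term
        debug_summaries=False,
        summarize_grads_and_vars=False,
        enable_summaries=True)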
A batch of experience data in the form of a Trajectory. The
structure of experience must match that of self.collect_data_spec.
All tensors in experience must be shaped [batch, time, ...] where
time must be equal to self.train_sequence_length if that
property is not None.
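
To make the shape requirement concrete, the sketch below builds a conforming
batch by hand, continuing the construction example and assuming single-step
bandit trajectories (a time axis of length 1). In practice such batches
normally come from a driver and a replay buffer, and the trajectory must
match agent.collect_data_spec, including its policy_info structure.

    from tf_agents.trajectories import trajectory

    batch_size, time, context_dim = 8, 1, 4

    # Every tensor is shaped [batch, time, ...].
    experience = trajectory.Trajectory(
        step_type=tf.fill([batch_size, time], ts.StepType.FIRST),
        observation=tf.zeros([batch_size, time, context_dim]),
        action=tf.zeros([batch_size, time], dtype=tf.int32),
        policy_info=(),  # must mirror the structure in collect_data_spec
        next_step_type=tf.fill([batch_size, time], ts.StepType.LAST),
        reward=tf.zeros([batch_size, time]),
        discount=tf.ones([batch_size, time]))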
(optional). A Tensor, either 0-D or shaped [batch],
containing weights to be used when calculating the total train loss.
Weights are typically multiplied elementwise against the per-batch loss,
but the implementation is up to the Agent.
Any additional data as declared by self.train_argspec.
A LossInfo loss tuple containing loss and info tensors.
In eager mode, the loss values are first calculated, then a train step
is performed before they are returned.
In graph mode, executing any or all of the loss tensors
will first calculate the loss value(s), then perform a train step,
and return the pre-train-step LossInfo.
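
A usage sketch, continuing the examples above; in eager mode the returned
LossInfo already reflects the completed train step.

    loss_info = agent.train(experience)
    print(loss_info.loss.numpy())  # scalar loss, computed before the update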
If experience is not of type Trajectory, or if experience
does not match self.collect_data_spec structure types.
If experience tensors' time axes are not compatible with
self.train_sequence_length, or if experience does not match
self.collect_data_spec structure.
If the user does not pass **kwargs matching self.train_argspec.
If the class was not initialized properly (super.__init__
was not called).