
tf_agents.bandits.agents.linear_thompson_sampling_agent.LinearThompsonSamplingAgent

View source on GitHub

Linear Thompson Sampling Agent.

Inherits From: LinearBanditAgent

tf_agents.bandits.agents.linear_thompson_sampling_agent.LinearThompsonSamplingAgent(
    *args, **kwargs
)

Implements the Linear Thompson Sampling Agent from the following paper: "Thompson Sampling for Contextual Bandits with Linear Payoffs", Shipra Agrawal, Navin Goyal, ICML 2013. The algorithm implemented is Algorithm 3 from the paper's supplementary material, available at http://proceedings.mlr.press/v28/agrawal13-supp.pdf.

In a nutshell, the agent maintains two sets of parameters, weight_covariances and parameter_estimators, and updates them based on experience. The inverses of the weight covariance parameters are updated with the outer products of the observations using the Woodbury inverse matrix update, while the parameter estimators are updated by accumulating the reward-weighted observation vectors for every action.
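
For intuition, here is a minimal NumPy sketch of that per-arm update (illustration only, not the agent's internal code; the variable names, dimensions, and single-step form are assumptions):

import numpy as np

context_dim = 4
# Hypothetical per-arm state: inverse weight covariance and reward-weighted data vector.
cov_inv = np.eye(context_dim)        # inverse covariance, starting from the identity
data_vector = np.zeros(context_dim)  # accumulated reward-weighted observations

def update_arm(cov_inv, data_vector, obs, reward):
  """Rank-1 Sherman-Morrison (Woodbury) update for one observed step of an arm."""
  v = cov_inv @ obs
  cov_inv = cov_inv - np.outer(v, v) / (1.0 + obs @ v)
  data_vector = data_vector + reward * obs
  return cov_inv, data_vector

obs = np.array([1.0, 0.5, -0.2, 0.3])
cov_inv, data_vector = update_arm(cov_inv, data_vector, obs, reward=1.0)
theta_hat = cov_inv @ data_vector  # per-arm parameter estimate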

Args:

  • time_step_spec: A TimeStep spec describing the expected TimeSteps.
  • action_spec: A scalar BoundedTensorSpec with int32 or int64 dtype describing the number of actions for this agent.
  • alpha: (float) positive scalar. This is the exploration parameter that multiplies the confidence intervals.
  • gamma: a float forgetting factor in [0.0, 1.0]. When set to 1.0, the algorithm does not forget.
  • use_eigendecomp: whether to use eigen-decomposition or not. The default solver is Conjugate Gradient.
  • tikhonov_weight: (float) Tikhonov regularization term.
  • add_bias: If true, a bias term will be added to the linear reward estimation.
  • emit_policy_info: (tuple of strings) what side information we want to get as part of the policy info. Allowed values can be found in policy_utilities.PolicyInfo.
  • observation_and_action_constraint_splitter: A function used for masking valid/invalid actions with each state of the environment. The function takes in a full observation and returns a tuple consisting of 1) the part of the observation intended as input to the bandit agent and policy, and 2) the boolean mask. This function should also work with a TensorSpec as input, and should output TensorSpec objects for the observation and mask.
  • debug_summaries: A Python bool, default False. When True, debug summaries are gathered.
  • summarize_grads_and_vars: A Python bool, default False. When True, gradients and network variable summaries are written during training.
  • enable_summaries: A Python bool, default True. When False, all summaries (debug or otherwise) should not be written.
  • dtype: The type of the parameters stored and updated by the agent. Should be one of tf.float32 and tf.float64. Defaults to tf.float32.
  • name: a name for this instance of LinearThompsonSamplingAgent.
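
A minimal construction sketch follows (the observation shape and number of actions are illustrative assumptions; the keyword names follow the Args list above):

import tensorflow as tf
from tf_agents.bandits.agents import linear_thompson_sampling_agent
from tf_agents.specs import tensor_spec
from tf_agents.trajectories import time_step as ts

# Hypothetical specs: a 4-dimensional float context and 3 discrete actions.
observation_spec = tensor_spec.TensorSpec([4], tf.float32)
time_step_spec = ts.time_step_spec(observation_spec)
action_spec = tensor_spec.BoundedTensorSpec(
    shape=(), dtype=tf.int32, minimum=0, maximum=2)

agent = linear_thompson_sampling_agent.LinearThompsonSamplingAgent(
    time_step_spec=time_step_spec,
    action_spec=action_spec,
    dtype=tf.float32)
agent.initialize()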

Attributes:

  • action_spec: TensorSpec describing the action produced by the agent.

  • alpha

  • collect_data_spec: Returns a Trajectory spec, as expected by the collect_policy.

  • collect_policy: Returns a policy that can be used to collect data from the environment (a short usage sketch follows this attribute list).

  • cov_matrix

  • data_vector

  • debug_summaries

  • eig_matrix

  • eig_vals

  • name: Returns the name of this module as passed or determined in the ctor.

    NOTE: This is not the same as the self.name_scope.name which includes parent module names.

  • name_scope: Returns a tf.name_scope instance for this class.

  • num_actions

  • num_samples

  • policy: Returns the current policy held by the agent.

  • submodules: Sequence of all sub-modules.

    Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

    import tensorflow as tf

    a = tf.Module()
    b = tf.Module()
    c = tf.Module()
    a.b = b   # b becomes a direct submodule of a
    b.c = c   # c becomes a direct submodule of b, hence a transitive submodule of a
    assert list(a.submodules) == [b, c]
    assert list(b.submodules) == [c]
    assert list(c.submodules) == []

  • summaries_enabled

  • summarize_grads_and_vars

  • theta: Returns the matrix of per-arm feature weights.

    The returned matrix has shape (num_actions, context_dim). It's equivalent to a stacking of theta vectors from the paper.

  • time_step_spec: Describes the TimeStep tensors expected by the agent.

  • train_argspec: TensorSpec describing extra supported kwargs to train().

  • train_sequence_length: The number of time steps needed in experience tensors passed to train.

    Train requires experience to be a Trajectory containing tensors shaped [B, T, ...]. This argument describes the value of T required.

    For example, for non-RNN DQN training, T=2 because DQN requires single transitions.

    If this value is None, then train can handle an unknown T (it can be determined at runtime from the data). Most RNN-based agents fall into this category.

  • train_step_counter

  • trainable_variables: Sequence of trainable variables owned by this module and its submodules.

  • variables: Sequence of variables owned by this module and its submodules.
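
As referenced under collect_policy above, a short usage sketch; env is an assumed TFEnvironment whose specs match the agent's and is not defined on this page:

# Collect one step of experience with the agent's collection policy.
time_step = env.reset()
action_step = agent.collect_policy.action(time_step)
time_step = env.step(action_step.action)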

Raises:

  • ValueError: If dtype is not one of tf.float32 or tf.float64.

Methods

compute_summaries

View source

compute_summaries(
    loss
)

initialize

View source

initialize()

Initializes the agent.

Returns:

An operation that can be used to initialize the agent.

Raises:

  • RuntimeError: If the class was not initialized properly (super.__init__ was not called).

train

View source

train(
    experience, weights=None, **kwargs
)

Trains the agent.

Args:

  • experience: A batch of experience data in the form of a Trajectory. The structure of experience must match that of self.collect_data_spec. All tensors in experience must be shaped [batch, time, ...], where time must be equal to self.train_sequence_length if that property is not None.
  • weights: (optional). A Tensor, either 0-D or shaped [batch], containing weights to be used when calculating the total train loss. Weights are typically multiplied elementwise against the per-batch loss, but the implementation is up to the Agent.
  • **kwargs: Any additional data as declared by self.train_argspec.

Returns:

A LossInfo loss tuple containing loss and info tensors.

  • In eager mode, the loss values are first calculated, then a train step is performed before they are returned.
  • In graph mode, executing any or all of the loss tensors will first calculate the loss value(s), then perform a train step, and return the pre-train-step LossInfo.

Raises:

  • TypeError: If experience is not type Trajectory. Or if experience does not match self.collect_data_spec structure types.
  • ValueError: If experience tensors' time axes are not compatible with self.train_sequence_length. Or if experience does not match self.collect_data_spec structure.
  • ValueError: If the user does not pass **kwargs matching self.train_argspec.
  • RuntimeError: If the class was not initialized properly (super.__init__ was not called).
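
A rough sketch of the call pattern; driver and replay_buffer are assumed helpers (e.g. a DynamicStepDriver and a TFUniformReplayBuffer built from agent.collect_policy and agent.collect_data_spec) and are not defined on this page:

driver.run()                             # collect a batch of bandit steps
experience = replay_buffer.gather_all()  # Trajectory shaped [batch, time, ...]
loss_info = agent.train(experience)      # runs the linear Thompson sampling update
replay_buffer.clear()
print(loss_info.loss)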

update_alpha

View source

update_alpha(
    alpha
)

with_name_scope

@classmethod
with_name_scope(
    cls, method
)

Decorator to automatically enter the module name scope.

import tensorflow as tf

class MyModule(tf.Module):
  @tf.Module.with_name_scope
  def __call__(self, x):
    # Variables created inside the wrapped method are named under the module's scope.
    if not hasattr(self, 'w'):
      self.w = tf.Variable(tf.random.normal([x.shape[1], 64]))
    return tf.matmul(x, self.w)

Using the above module would produce tf.Variables and tf.Tensors whose names included the module name:

mod = MyModule()
mod(tf.ones([8, 32]))
# ==> <tf.Tensor: ...>
mod.w
# ==> <tf.Variable ...'my_module/w:0'>

Args:

  • method: The method to wrap.

Returns:

The original method wrapped such that it enters the module's name scope.