tf_agents.bandits.agents.ranking_agent.RankingAgent

Ranking agent class.

Inherits From: TFAgent

Args
time_step_spec A TimeStep spec of the expected time_steps.
action_spec A nest of BoundedTensorSpec representing the actions.
scoring_network The network that outputs scores for items.
optimizer The optimizer for the agent.
policy_type The type of policy used. The only type currently available is COSINE_DISTANCE, which invokes the PenalizeCosineDistanceRankingPolicy. This policy uses cosine similarity to penalize the scores of items that have not yet been selected. If set to UNKNOWN, falls back to COSINE_DISTANCE.
error_loss_fn The loss function used.
feedback_model The type of feedback model. Implemented models are: CASCADING: the feedback is a tuple (k, v), where k is the index of the chosen item and v is the value of the choice. If no item was chosen, k=num_slots is used and v is ignored. SCORE_VECTOR: the feedback is a vector of length num_slots, containing scores for every item in the recommendation. If set to UNKNOWN, falls back to CASCADING.
non_click_score (float) For the cascading feedback model, this is the score value for items lying "before" the clicked item. If not set, -1 is used. It is recommended (but not enforced) to use a negative value.
positional_bias_type (string) If not set (or set to None), the agent does not apply bias adjustment. If set to either base or exponent, this parameter determines how the positional bias is accounted for. base: the bias weight for each slot position is k^s, where s is the bias severity (set in the next parameter) and k is the position. exponent: the weights are s^k. These bias adjustment types are inspired by Ovaisi et al., "Correcting for Selection Bias in Learning-to-rank Systems" (WWW 2020).
positional_bias_severity (float) The severity s, used as explained above. If positional_bias_type is unset, this parameter has no effect.
positional_bias_positive_only Whether to use the above bias weights only for positives (that is, clicked items). If positional_bias_type is unset, this parameter has no effect.
logits_temperature Temperature parameter for non-deterministic policies. This value must be positive.
summarize_grads_and_vars A Python bool, default False. When True, gradient and network variable summaries are written during training.
enable_summaries A Python bool, default True. When False, no summaries (debug or otherwise) are written.
train_step_counter An optional tf.Variable to increment every time the train op is run. Defaults to the global_step.
penalty_mixture_coefficient A parameter responsible for the balance between selecting high-scoring items and enforcing diversity. Used only by diversity-based policies.
name The name of this agent instance.
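
A minimal construction sketch follows. The specs and the scoring network are placeholders assumed to come from your own setup (e.g. a ranking environment and a per-arm scoring network); the enum values shown are the ones exported by this module.

```python
import tensorflow as tf
from tf_agents.bandits.agents import ranking_agent

# `time_step_spec`, `action_spec`, and `scoring_network` are assumed to be
# built elsewhere from the environment and network setup.
agent = ranking_agent.RankingAgent(
    time_step_spec=time_step_spec,
    action_spec=action_spec,
    scoring_network=scoring_network,
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
    policy_type=ranking_agent.RankingPolicyType.COSINE_DISTANCE,
    feedback_model=ranking_agent.FeedbackModel.CASCADING,
    non_click_score=-1.0,
    positional_bias_type='base',
    positional_bias_severity=1.0,
    penalty_mixture_coefficient=1.0)
```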

Attributes
action_spec TensorSpec describing the action produced by the agent.
collect_data_context

collect_data_spec Returns a Trajectory spec, as expected by the collect_policy.
collect_policy Return a policy that can be used to collect data from the environment.
data_context

debug_summaries

policy Return the current policy held by the agent.
summaries_enabled

summarize_grads_and_vars

time_step_spec Describes the TimeStep tensors expected by the agent.
train_sequence_length The number of time steps needed in experience tensors passed to train.

Train requires experience to be a Trajectory containing tensors shaped [B, T, ...]. This argument describes the value of T required.

For example, for non-RNN DQN training, T=2 because DQN requires single transitions.

If this value is None, then train can handle an unknown T (it can be determined at runtime from the data). Most RNN-based agents fall into this category.

train_step_counter

training_data_spec Returns a trajectory spec, as expected by the train() function.
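
As noted under train_sequence_length above, experience tensors must carry a matching time dimension T when that property is set. A small illustrative check (this helper is hypothetical, not part of the API):

```python
import tensorflow as tf

# Hypothetical helper: validate the time dimension T of an experience batch
# against the agent's train_sequence_length before calling train.
def check_time_dimension(agent, experience):
  t = agent.train_sequence_length
  if t is None:
    return  # Unknown T: the agent determines it at runtime from the data.
  for tensor in tf.nest.flatten(experience):
    if tensor.shape.rank >= 2 and tensor.shape[1] not in (None, t):
      raise ValueError(
          'Expected tensors shaped [B, %d, ...]; got time dimension %s.'
          % (t, tensor.shape[1]))
```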

Methods

initialize

Initializes the agent.

Returns
An operation that can be used to initialize the agent.

Raises
RuntimeError If the class was not initialized properly (super().__init__ was not called).

loss

Gets loss from the agent.

If the user calls this from _train, it must be in a tf.GradientTape scope in order to apply gradients to trainable variables. If intermediate gradient steps are needed, _loss and _train will return different values since _loss only supports updating all gradients at once after all losses have been calculated.
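
A sketch of that contract (the subclass internals shown here are illustrative assumptions, not the agent's actual implementation):

```python
import tensorflow as tf

# Hypothetical `_train` body: `loss` is called under a `tf.GradientTape`
# so that gradients can be applied to the trainable variables.
def _train(self, experience, weights=None):
  with tf.GradientTape() as tape:
    loss_info = self.loss(experience, weights=weights, training=True)
  variables = self._scoring_network.trainable_variables  # assumed attribute
  grads = tape.gradient(loss_info.loss, variables)
  self._optimizer.apply_gradients(zip(grads, variables))
  self.train_step_counter.assign_add(1)
  return loss_info
```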

Args
experience A batch of experience data in the form of a Trajectory. The structure of experience must match that of self.training_data_spec. All tensors in experience must be shaped [batch, time, ...] where time must be equal to self.train_sequence_length if that property is not None.
weights (optional). A Tensor, either 0-D or shaped [batch], containing weights to be used when calculating the total train loss. Weights are typically multiplied elementwise against the per-batch loss, but the implementation is up to the Agent.
training Explicit argument to pass to loss. This typically affects network computation paths like dropout and batch normalization.
**kwargs Any additional data as args to loss.

Returns
A LossInfo loss tuple containing loss and info tensors.

Raises
RuntimeError If the class was not initialized properly (super().__init__ was not called).

post_process_policy

Post process policies after training.

The policies of some agents require expensive post processing after training before they can be used, e.g. a recommender agent might require rebuilding an index of actions. For such agents, this method returns a post-processed version of the policy. The post processing may either update the existing policy in place or create a new policy, depending on the agent. The default implementation, for agents that do not override this method, is to return agent.policy.
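
A typical usage sketch (the training step details are elided):

```python
# Train, then fetch a policy that is safe to use for serving. For agents
# that need no post processing, this simply returns agent.policy.
loss_info = agent.train(experience)
serving_policy = agent.post_process_policy()
```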

Returns
The post processed policy.

preprocess_sequence

Defines preprocess_sequence function to be fed into replay buffers.

This defines how we preprocess the collected data before training. It defaults to a pass-through for most agents. The structure of experience must match that of self.collect_data_spec.
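
A usage sketch (the replay buffer is an assumed TFUniformReplayBuffer-style object):

```python
# Preprocess collected experience so its structure matches training
# expectations before writing it into a replay buffer.
processed = agent.preprocess_sequence(collected_experience)
replay_buffer.add_batch(processed)
```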

Args
experience a Trajectory shaped [batch, time, ...] or [time, ...] which represents the collected experience data.

Returns
A post processed Trajectory with the same shape as the input.

train

Trains the agent.

Args
experience A batch of experience data in the form of a Trajectory. The structure of experience must match that of self.training_data_spec. All tensors in experience must be shaped [batch, time, ...] where time must be equal to self.train_sequence_length if that property is not None.
weights (optional). A Tensor, either 0-D or shaped [batch], containing weights to be used when calculating the total train loss. Weights are typically multiplied elementwise against the per-batch loss, but the implementation is up to the Agent.
**kwargs Any additional data to pass to the subclass.

Returns
A LossInfo loss tuple containing loss and info tensors.

  • In eager mode, the loss values are first calculated, then a train step is performed before they are returned.
  • In graph mode, executing any or all of the loss tensors will first calculate the loss value(s), then perform a train step, and return the pre-train-step LossInfo.
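
A minimal training-loop sketch in eager mode (the replay buffer and dataset settings are assumptions):

```python
# Sample [B, T, ...]-shaped experience from a replay buffer and train on it.
dataset = replay_buffer.as_dataset(
    sample_batch_size=64,
    num_steps=agent.train_sequence_length)
for experience, _ in dataset.take(1000):
  loss_info = agent.train(experience)
  print(agent.train_step_counter.numpy(), loss_info.loss.numpy())
```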

Raises
RuntimeError If the class was not initialized properly (super().__init__ was not called).