A neural-network-based bandit agent for multi-objective optimization.
Inherits From: TFAgent
tf_agents.bandits.agents.greedy_multi_objective_neural_agent.GreedyMultiObjectiveNeuralAgent(
    time_step_spec: Optional[tf_agents.trajectories.TimeStep],
    action_spec: Optional[tf_agents.typing.types.BoundedTensorSpec],
    scalarizer: tf_agents.bandits.multi_objective.multi_objective_scalarizer.Scalarizer,
    objective_network_and_loss_fn_sequence: Sequence[Tuple[Network, Callable[..., tf.Tensor]]],
    optimizer: tf.keras.optimizers.Optimizer,
    observation_and_action_constraint_splitter: tf_agents.typing.types.Splitter = None,
    accepts_per_arm_features: bool = False,
    gradient_clipping: Optional[float] = None,
    debug_summaries: bool = False,
    summarize_grads_and_vars: bool = False,
    enable_summaries: bool = True,
    emit_policy_info: Tuple[Text, ...] = (),
    train_step_counter: Optional[tf.Variable] = None,
    laplacian_matrix: Optional[types.Float] = None,
    laplacian_smoothing_weights: Optional[Sequence[float]] = None,
    name: Optional[Text] = None
)
This agent receives multiple neural networks. Each network is trained by the agent to predict a specific objective. The agent also receives a Scalarizer, which transforms the multiple predicted objectives into a single scalar reward. The action is then chosen greedily by the policy with respect to the scalarized predicted reward.
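For orientation, here is a minimal construction sketch (not part of the original docs): the observation size, layer widths, scalarization weights, and loss choice are illustrative assumptions.

import tensorflow as tf
from tf_agents.bandits.agents import greedy_multi_objective_neural_agent
from tf_agents.bandits.multi_objective import multi_objective_scalarizer
from tf_agents.networks import q_network
from tf_agents.specs import tensor_spec
from tf_agents.trajectories import time_step as ts

num_actions = 3  # Illustrative assumption.
obs_spec = tensor_spec.TensorSpec([4], tf.float32)
time_step_spec = ts.time_step_spec(obs_spec)
# The action spec must be a bounded scalar int32 spec with minimum 0.
action_spec = tensor_spec.BoundedTensorSpec(
    shape=(), dtype=tf.int32, minimum=0, maximum=num_actions - 1)

# One (network, loss function) pair per objective; at least two are required.
networks_and_losses = [
    (q_network.QNetwork(obs_spec, action_spec, fc_layer_params=(16,)),
     tf.compat.v1.losses.mean_squared_error)
    for _ in range(2)
]

agent = greedy_multi_objective_neural_agent.GreedyMultiObjectiveNeuralAgent(
    time_step_spec=time_step_spec,
    action_spec=action_spec,
    scalarizer=multi_objective_scalarizer.LinearScalarizer(weights=[0.5, 0.5]),
    objective_network_and_loss_fn_sequence=networks_and_losses,
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.01))
agent.initialize()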
Args

time_step_spec: A TimeStep spec of the expected time_steps.

action_spec: A nest of BoundedTensorSpec representing the actions.

scalarizer: A tf_agents.bandits.multi_objective.multi_objective_scalarizer.Scalarizer object that implements scalarization of multiple objectives into a single scalar reward.

objective_network_and_loss_fn_sequence: A Sequence of Tuples (tf_agents.network.Network, error loss function) to be used by the agent. Each network net will be called as net(observation, training=...) and is expected to output a tf.Tensor of predicted values for a specific objective for all actions, shaped as [batch_size, num_actions]. Each network will be trained by minimizing the accompanying error loss function, which takes the parameters labels, predictions, and weights (any function from tf.losses would work).

optimizer: A tf.keras.optimizers.Optimizer object, the optimizer to use for training.

observation_and_action_constraint_splitter: A function used for masking valid/invalid actions with each state of the environment. The function takes in a full observation and returns a tuple consisting of 1) the part of the observation intended as input to the bandit agent and policy, and 2) the boolean mask of shape [batch_size, num_actions]. This function should also work with a TensorSpec as input, and should output TensorSpec objects for the observation and mask. A sketch of such a splitter follows this argument list.

accepts_per_arm_features: (bool) Whether the agent accepts per-arm features.

gradient_clipping: A float representing the norm length to clip gradients (or None for no clipping).

debug_summaries: A Python bool, default False. When True, debug summaries are gathered.

summarize_grads_and_vars: A Python bool, default False. When True, gradient and network variable summaries are written during training.

enable_summaries: A Python bool, default True. When False, no summaries (debug or otherwise) are written.

emit_policy_info: (tuple of strings) What side information we want to get as part of the policy info. Allowed values can be found in policy_utilities.PolicyInfo.

train_step_counter: An optional tf.Variable to increment every time the train op is run. Defaults to the global_step.

laplacian_matrix: A float Tensor or a numpy array shaped [num_actions, num_actions]. This holds the Laplacian matrix used to regularize the smoothness of the estimated expected reward function. This only applies to problems where the actions have a graph structure. If None, the regularization is not applied. A construction sketch for a simple action graph also follows the argument list.

laplacian_smoothing_weights: A Sequence of floats that determines the per-objective weight of the regularization term. Note that this has no effect if laplacian_matrix above is None.

name: Python str name of this agent. All variables in this module will fall under that name. Defaults to the class name.
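A minimal sketch of an observation_and_action_constraint_splitter, assuming (hypothetically) that observations are dicts with a 'global' feature entry and a boolean 'mask' entry; the real keys depend on your environment.

def splitter(obs):
    # Returns 1) the part of the observation fed to the agent and policy,
    # and 2) the boolean action mask of shape [batch_size, num_actions].
    # The keys 'global' and 'mask' are hypothetical.
    return obs['global'], obs['mask']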
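For laplacian_matrix, the standard graph Laplacian L = D - A (degree matrix minus adjacency matrix) can be built from an adjacency structure over actions. A sketch for a simple chain of actions; the chain structure is an illustrative assumption.

import numpy as np

num_actions = 4
adjacency = np.zeros((num_actions, num_actions), dtype=np.float32)
for i in range(num_actions - 1):
    # Chain graph: action i is adjacent to action i + 1.
    adjacency[i, i + 1] = adjacency[i + 1, i] = 1.0
degree = np.diag(adjacency.sum(axis=1))
laplacian_matrix = degree - adjacency  # L = D - A.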
Raises

ValueError:
- If the action spec contains more than one action, or if it is not a bounded scalar int32 spec with minimum 0.
- If the length of objective_network_and_loss_fn_sequence is less than two.
- If the Laplacian matrix is provided and is invalid.
Attributes

action_spec: TensorSpec describing the action produced by the agent.

collect_data_context

collect_data_spec: Returns a Trajectory spec, as expected by the collect_policy.

collect_policy: Returns a policy that can be used to collect data from the environment.

data_context

debug_summaries

policy: Returns the current policy held by the agent.

summaries_enabled

summarize_grads_and_vars

time_step_spec: Describes the TimeStep tensors expected by the agent.

train_sequence_length: The number of time steps needed in experience tensors passed to train. Train requires experience to be a Trajectory containing tensors shaped [B, T, ...]. This argument describes the value of T required. For example, for non-RNN DQN training, T=2 because DQN requires single transitions. If this value is None, then train can handle an unknown T (it can be determined at runtime from the data). Most RNN-based agents fall into this category.

train_step_counter

training_data_spec: Returns a trajectory spec, as expected by the train() function.
Methods

compute_summaries

compute_summaries(
    losses: Sequence[tf.Tensor]
)

initialize

initialize() -> Optional[tf.Operation]

Initializes the agent.

Returns
An operation that can be used to initialize the agent.

Raises
RuntimeError: If the class was not initialized properly (super().__init__ was not called).
loss

loss(
    experience: tf_agents.typing.types.NestedTensor,
    weights: Optional[types.Tensor] = None,
    training: bool = False,
    **kwargs
) -> tf_agents.agents.tf_agent.LossInfo

Gets loss from the agent.

If the user calls this from _train, it must be in a tf.GradientTape scope in order to apply gradients to trainable variables. If intermediate gradient steps are needed, _loss and _train will return different values since _loss only supports updating all gradients at once after all losses have been calculated.

Args

experience: A batch of experience data in the form of a Trajectory. The structure of experience must match that of self.training_data_spec. All tensors in experience must be shaped [batch, time, ...], where time must be equal to self.train_sequence_length if that property is not None.

weights: (optional) A Tensor, either 0-D or shaped [batch], containing weights to be used when calculating the total train loss. Weights are typically multiplied elementwise against the per-batch loss, but the implementation is up to the Agent.

training: Explicit argument to pass to loss. This typically affects network computation paths like dropout and batch normalization.

**kwargs: Any additional data as args to loss.

Returns
A LossInfo loss tuple containing loss and info tensors.

Raises
RuntimeError: If the class was not initialized properly (super().__init__ was not called).
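A minimal sketch of the tf.GradientTape pattern described above, assuming an agent built as in the construction sketch and an experience batch that matches agent.training_data_spec (both assumptions).

# Sketch only: `agent` and `experience` are assumed to exist already.
with tf.GradientTape() as tape:
    loss_info = agent.loss(experience, training=True)
variables = agent.policy.variables()
grads = tape.gradient(loss_info.loss, variables)

In practice, agent.train performs this tape-and-apply step internally; calling loss directly is mainly useful for inspection or custom training loops.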
post_process_policy

post_process_policy() -> tf_agents.policies.TFPolicy

Post process policies after training.

The policies of some agents require expensive post processing after training before they can be used. For example, a Recommender agent might require rebuilding an index of actions. For such agents, this method will return a post processed version of the policy. The post processing may either update the existing policies in place or create a new policy, depending on the agent. The default implementation for agents that do not want to override this method is to return agent.policy.

Returns
The post processed policy.
preprocess_sequence

preprocess_sequence(
    experience: tf_agents.typing.types.NestedTensor
) -> tf_agents.typing.types.NestedTensor

Defines preprocess_sequence function to be fed into replay buffers.

This defines how we preprocess the collected data before training. Defaults to pass-through for most agents. The structure of experience must match that of self.collect_data_spec.

Args

experience: A Trajectory shaped [batch, time, ...] or [time, ...] which represents the collected experience data.

Returns
A post processed Trajectory with the same shape as the input.
train

train(
    experience: tf_agents.typing.types.NestedTensor,
    weights: Optional[types.Tensor] = None,
    **kwargs
) -> tf_agents.agents.tf_agent.LossInfo

Trains the agent.

Args

experience: A batch of experience data in the form of a Trajectory. The structure of experience must match that of self.training_data_spec. All tensors in experience must be shaped [batch, time, ...], where time must be equal to self.train_sequence_length if that property is not None.

weights: (optional) A Tensor, either 0-D or shaped [batch], containing weights to be used when calculating the total train loss. Weights are typically multiplied elementwise against the per-batch loss, but the implementation is up to the Agent.

**kwargs: Any additional data to pass to the subclass.

Returns
A LossInfo loss tuple containing loss and info tensors.
- In eager mode, the loss values are first calculated, then a train step is performed before they are returned.
- In graph mode, executing any or all of the loss tensors will first calculate the loss value(s), then perform a train step, and return the pre-train-step LossInfo.

Raises
RuntimeError: If the class was not initialized properly (super().__init__ was not called).
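A hypothetical end-to-end sketch of one training step, assuming the agent from the construction sketch above. The single-step trajectory layout, the random data, and the per-objective reward columns are assumptions about how logged bandit data might be arranged, not part of this API's contract.

import tensorflow as tf
from tf_agents.trajectories import trajectory

batch_size = 8
num_actions = 3     # As in the construction sketch above.
num_objectives = 2  # Must match the number of networks given to the agent.

# Single-step bandit experience with a time dimension of 1; the reward
# tensor carries one column per objective. policy_info is empty because
# the default emit_policy_info=() requests no side information.
experience = trajectory.Trajectory(
    step_type=tf.zeros([batch_size, 1], dtype=tf.int32),  # StepType.FIRST
    observation=tf.random.uniform([batch_size, 1, 4]),
    action=tf.random.uniform(
        [batch_size, 1], maxval=num_actions, dtype=tf.int32),
    policy_info=(),
    next_step_type=tf.fill([batch_size, 1], 2),            # StepType.LAST
    reward=tf.random.uniform([batch_size, 1, num_objectives]),
    discount=tf.ones([batch_size, 1]))

loss_info = agent.train(experience)
print(loss_info.loss.numpy())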