tf.contrib.opt.ModelAverageOptimizer

Class ModelAverageOptimizer

Inherits From: Optimizer

Defined in tensorflow/contrib/opt/python/training/model_average_optimizer.py.

Wrapper optimizer that implements the Model Average algorithm.

This is a sync optimizer. During training, each worker updates its local variables and maintains its own local_step, which starts at 0 and is incremented by 1 after each update of the local variables. Whenever local_step is a multiple of interval_steps, the local variables from all the workers are averaged and assigned to the global center variables; the local variables are then reset to the values of the global center variables.
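
For example, a typical between-graph replication setup first wraps variable creation in a ModelAverageCustomGetter, so each worker gets local copies of the variables while the global center variables live on the parameter servers. A minimal sketch follows; the cluster addresses, device string, and toy loss are illustrative placeholders, not part of this API.

import tensorflow as tf

# Illustrative two-worker cluster; a real job builds this from its own config.
cluster_spec = tf.train.ClusterSpec({
    "ps": ["localhost:2222"],
    "worker": ["localhost:2223", "localhost:2224"],
})
worker_device = "/job:worker/task:0"

ma_custom_getter = tf.contrib.opt.ModelAverageCustomGetter(
    worker_device=worker_device)

# Variables created under this scope get a local copy on the worker, while the
# replica device setter places the global center copies on the ps tasks.
with tf.device(
    tf.train.replica_device_setter(worker_device=worker_device,
                                   cluster=cluster_spec)), \
    tf.variable_scope("", custom_getter=ma_custom_getter):
  w = tf.get_variable("w", shape=[10, 1])
  loss = tf.reduce_sum(tf.square(w))  # stand-in for a real model loss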

__init__

__init__(
    opt,
    num_worker,
    is_chief,
    ma_custom_getter,
    interval_steps=100,
    use_locking=True,
    name='ModelAverageOptimizer'
)

Construct a new model average optimizer.

Args:

  • opt: The actual optimizer that will be used to update local variables.
  • num_worker: The number of workers.
  • is_chief: Whether this worker is the chief worker.
  • ma_custom_getter: A ModelAverageCustomGetter instance.
  • interval_steps: An int value that controls how frequently the local variables are averaged into the global center variables.
  • use_locking: If True, use locks for update operations.
  • name: string. Optional name of the returned operation.
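
Continuing the sketch from the class overview, construction looks like this; the inner GradientDescentOptimizer, worker count, and chief flag are illustrative choices, not requirements:

ma_optimizer = tf.contrib.opt.ModelAverageOptimizer(
    opt=tf.train.GradientDescentOptimizer(0.1),  # updates local variables
    num_worker=2,
    is_chief=True,  # typically task index 0 is the chief
    ma_custom_getter=ma_custom_getter,
    interval_steps=100)  # average local variables every 100 local steps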

Methods

apply_gradients

apply_gradients(
    grads_and_vars,
    global_step=None,
    name=None
)

Apply gradients to variables.

This contains most of the synchronization implementation and also wraps the apply_gradients() from the real optimizer. The chief worker updates the global variables (see the sketch below).

Args:

  • grads_and_vars: List of (gradient, variable) pairs as returned by compute_gradients().
  • global_step: Optional Variable to increment by one after the variables have been updated.
  • name: Optional name for the returned operation. Default to the name passed to the Optimizer constructor.

Returns:

A conditional Operation that updates both local and global variables, or just the local variables.

Raises:

  • ValueError: If grads_and_vars is empty.
  • ValueError: If global_step is not provided; without it, staleness cannot be checked.
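
A sketch of the two-step usage, reusing loss and ma_optimizer from the earlier snippets; note that global_step must be supplied:

global_step = tf.train.get_or_create_global_step()

grads_and_vars = ma_optimizer.compute_gradients(loss)
# Omitting global_step raises ValueError: staleness cannot be checked.
train_op = ma_optimizer.apply_gradients(grads_and_vars,
                                        global_step=global_step)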

compute_gradients

compute_gradients(
    *args,
    **kwargs
)

Compute gradients of "loss" for the variables in "var_list".

This simply wraps the compute_gradients() from the real optimizer.

Args:

  • *args: Arguments for compute_gradients().
  • **kwargs: Keyword arguments for compute_gradients().

Returns:

A list of (gradient, variable) pairs.
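
Because the call is forwarded to the real optimizer, the returned pairs can be processed before being passed to apply_gradients(). A sketch reusing loss, ma_optimizer, and global_step from the earlier snippets; the clipping threshold is an arbitrary example:

grads_and_vars = ma_optimizer.compute_gradients(loss)
# Example processing step: clip each gradient before applying it.
clipped = [(tf.clip_by_value(g, -1.0, 1.0), v)
           for g, v in grads_and_vars if g is not None]
train_op = ma_optimizer.apply_gradients(clipped, global_step=global_step)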

get_init_op

get_init_op()

Returns the initialization op.

This op sets all the local variables equal to the global center variables before training begins.
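
The hook returned by make_session_run_hook() normally runs this op automatically; when driving a raw session by hand, a sketch would be:

init_op = ma_optimizer.get_init_op()
with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  sess.run(init_op)  # local variables now equal the global center variables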

get_name

get_name()

Returns the name of the optimizer, as passed to the constructor.

get_slot

get_slot(
    var,
    name
)

Return a slot named name created for var by the Optimizer.

Some Optimizer subclasses use additional variables. For example, Momentum and Adagrad use variables to accumulate updates. This method gives access to these Variable objects if for some reason you need them.

Use get_slot_names() to get the list of slot names created by the Optimizer.

Args:

  • var: A variable passed to minimize() or apply_gradients().
  • name: A string.

Returns:

The Variable for the slot if it was created, None otherwise.
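
For example, if the wrapped optimizer were tf.train.MomentumOptimizer, its accumulator slots could be inspected through this wrapper (w is the variable from the earlier sketch):

# Inspect the slots the inner optimizer created for a given variable.
for slot_name in ma_optimizer.get_slot_names():  # e.g. ["momentum"]
  slot_var = ma_optimizer.get_slot(w, slot_name)
  if slot_var is not None:
    print(slot_name, slot_var.shape)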

get_slot_names

get_slot_names()

Return a list of the names of slots created by the Optimizer.

See get_slot().

Returns:

A list of strings.

make_session_run_hook

make_session_run_hook()

Creates a hook to handle ModelAverage ops such as initialization.
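
A sketch of wiring the hook into tf.train.MonitoredTrainingSession, reusing cluster_spec and train_op from the earlier snippets; the server construction and loop condition are illustrative:

server = tf.train.Server(cluster_spec, job_name="worker", task_index=0)
ma_hook = ma_optimizer.make_session_run_hook()

with tf.train.MonitoredTrainingSession(master=server.target,
                                       is_chief=True,
                                       hooks=[ma_hook]) as sess:
  while not sess.should_stop():
    sess.run(train_op)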

minimize

minimize(
    loss,
    global_step=None,
    var_list=None,
    gate_gradients=GATE_OP,
    aggregation_method=None,
    colocate_gradients_with_ops=False,
    name=None,
    grad_loss=None
)

Add operations to minimize loss by updating var_list.

This method simply combines calls to compute_gradients() and apply_gradients(). If you want to process the gradients before applying them, call compute_gradients() and apply_gradients() explicitly instead of using this function (see the sketch below).

Args:

  • loss: A Tensor containing the value to minimize.
  • global_step: Optional Variable to increment by one after the variables have been updated.
  • var_list: Optional list or tuple of Variable objects to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES.
  • gate_gradients: How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH.
  • aggregation_method: Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod.
  • colocate_gradients_with_ops: If True, try colocating gradients with the corresponding op.
  • name: Optional name for the returned operation.
  • grad_loss: Optional. A Tensor holding the gradient computed for loss.

Returns:

An Operation that updates the variables in var_list. If global_step was not None, that operation also increments global_step.

Raises:

  • ValueError: If some of the variables are not Variable objects.

Eager Compatibility

When eager execution is enabled, loss should be a Python function that takes elements of var_list as arguments and computes the value to be minimized. If var_list is None, loss should take no arguments. Minimization (and gradient computation) is done with respect to the elements of var_list if not None, else with respect to any trainable variables created during the execution of the loss function. gate_gradients, aggregation_method, colocate_gradients_with_ops and grad_loss are ignored when eager execution is enabled.
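
A one-step sketch using the objects from the earlier snippets:

global_step = tf.train.get_or_create_global_step()
# Equivalent to compute_gradients() followed by apply_gradients().
train_op = ma_optimizer.minimize(loss, global_step=global_step)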

variables

variables()

A list of variables which encode the current state of the Optimizer.

Includes slot variables and additional global variables created by the optimizer in the current default graph.

Returns:

A list of variables.
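
For instance, after building the train op:

# Slot variables plus any extra globals the optimizer created.
for v in ma_optimizer.variables():
  print(v.name)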

Class Members

GATE_GRAPH

GATE_NONE

GATE_OP