
## Class `AdditiveExternalRegretOptimizer`

A `ConstrainedOptimizer` based on external-regret minimization.

This `ConstrainedOptimizer` uses the given `tf.compat.v1.train.Optimizer`s to jointly minimize over the model parameters and maximize over the Lagrange multipliers, with the latter maximization using additive updates and an algorithm that minimizes external regret.

For more specifics, please refer to:

Cotter, Jiang and Sridharan. "Two-Player Games for Efficient Non-Convex Constrained Optimization". https://arxiv.org/abs/1804.06500

The formulation used by this optimizer (which is simply the usual Lagrangian formulation) can be found in Definition 1, and is discussed in Section 3. It is most similar to Algorithm 3 in Appendix C.3, with two differences: it uses proxy constraints (if they're provided) in the update of the model parameters, and it uses `tf.compat.v1.train.Optimizer`s, instead of SGD, for the "inner" updates.
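As a reminder, that Lagrangian formulation can be sketched as follows (notation follows the paper rather than identifiers in the code; here $f$ is the `objective`, the $g_i$ are the constraint functions, and $R$ is the optional `maximum_multiplier_radius`):

```latex
\min_{\theta} \;
\max_{\substack{\lambda \ge 0 \\ \|\lambda\|_1 \le R}} \;
  f(\theta) + \sum_{i} \lambda_i \, g_i(\theta)
```

The model parameters $\theta$ are updated to minimize this quantity (with proxy constraints standing in for the $g_i$, if provided), while the Lagrange multipliers $\lambda$ are updated additively to maximize it.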

## `__init__`


```
__init__(
    optimizer,
    constraint_optimizer=None,
    maximum_multiplier_radius=None,
)
```

Constructs a new `AdditiveExternalRegretOptimizer`.

#### Args:

• `optimizer`: a `tf.compat.v1.train.Optimizer`, used to optimize the `objective` and `proxy_constraints` portions of a `ConstrainedMinimizationProblem`. If `constraint_optimizer` is not provided, this will also be used to optimize the Lagrange multipliers.
• `constraint_optimizer`: optional `tf.compat.v1.train.Optimizer`, used to optimize the Lagrange multipliers.
• `maximum_multiplier_radius`: float, an optional upper bound to impose on the sum of the Lagrange multipliers.

#### Returns:

A new `AdditiveExternalRegretOptimizer`.

#### Raises:

• `ValueError`: If the `maximum_multiplier_radius` parameter is nonpositive.
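To make the roles of the additive update and `maximum_multiplier_radius` concrete, here is a minimal pure-Python sketch of one multiplier update. This is not the library's implementation (the real optimizer builds TensorFlow ops and performs an exact Euclidean projection; here a simple rescaling stands in for that projection):

```python
def additive_multiplier_update(multipliers, violations, step_size, radius):
    """One illustrative additive update of the Lagrange multipliers.

    The gradient of the Lagrangian w.r.t. each multiplier is simply the
    corresponding constraint violation, so the multipliers take a
    gradient-ascent step on the violations, and are then pushed back into
    the feasible set {lambda : lambda_i >= 0, sum_i lambda_i <= radius}.
    """
    # Additive (gradient-ascent) step on the constraint violations.
    updated = [m + step_size * v for m, v in zip(multipliers, violations)]
    # Clip to the nonnegative orthant.
    updated = [max(0.0, m) for m in updated]
    # If the sum exceeds the radius, rescale back onto the bounded simplex
    # (a simplification of the exact Euclidean projection).
    total = sum(updated)
    if total > radius:
        updated = [m * radius / total for m in updated]
    return updated
```

A violated constraint (positive violation) grows its multiplier, a satisfied one (negative violation) shrinks it toward zero, and the radius caps the total weight the constraints can receive.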

## Properties

### `constraint_optimizer`

Returns the `tf.compat.v1.train.Optimizer` used for the Lagrange multipliers.

### `optimizer`

Returns the `tf.compat.v1.train.Optimizer` used for optimization.

## Methods

### `minimize`


```
minimize(
    minimization_problem,
    unconstrained_steps=None,
    global_step=None,
    var_list=None,
    aggregation_method=None,
    name=None,
)
```

Returns an `Operation` for minimizing the constrained problem.

This method combines the functionality of `minimize_unconstrained` and `minimize_constrained`. If `global_step` < `unconstrained_steps`, it will perform an unconstrained update; if `global_step` >= `unconstrained_steps`, it will perform a constrained update.

The reason for this functionality is that it may be best to initialize the constrained optimizer with an approximate optimum of the unconstrained problem.

#### Returns:

`Operation`, the train_op.

#### Raises:

• `ValueError`: If unconstrained_steps is provided, but global_step is not.
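The dispatch described above can be sketched in plain Python (a hypothetical helper for illustration only; the real method builds a conditional TensorFlow op rather than branching eagerly):

```python
def choose_update(global_step, unconstrained_steps):
    """Returns which kind of update `minimize` would perform at this step."""
    # With unconstrained_steps set, early steps warm-start on the
    # unconstrained problem; later steps use the constrained update.
    if unconstrained_steps is not None and global_step < unconstrained_steps:
        return "unconstrained"
    # Without unconstrained_steps, every update is constrained.
    return "constrained"
```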

### `minimize_constrained`


```
minimize_constrained(
    minimization_problem,
    global_step=None,
    var_list=None,
    aggregation_method=None,
    name=None,
)
```

Returns an `Operation` for minimizing the constrained problem.

Unlike `minimize_unconstrained`, this function attempts to find a solution that minimizes the `objective` portion of the minimization problem while satisfying the `constraints` portion.

#### Returns:

`Operation`, the train_op.

### `minimize_unconstrained`


```
minimize_unconstrained(
    minimization_problem,
    global_step=None,
    var_list=None,
    aggregation_method=None,
    name=None,
)
```

Returns an `Operation` for minimizing the unconstrained problem.

Unlike `minimize_constrained`, this function ignores the `constraints` (and `proxy_constraints`) portion of the minimization problem entirely, and only minimizes `objective`.

#### Returns:

`Operation`, the train_op.