
# tf.contrib.opt.ScipyOptimizerInterface

Wrapper allowing `scipy.optimize.minimize` to operate a `tf.compat.v1.Session`.

Inherits From: `ExternalOptimizerInterface`

#### Example:

```python
import tensorflow as tf

ScipyOptimizerInterface = tf.contrib.opt.ScipyOptimizerInterface

vector = tf.Variable([7., 7.], name='vector')

# Make vector norm as small as possible.
loss = tf.reduce_sum(tf.square(vector))

optimizer = ScipyOptimizerInterface(loss, options={'maxiter': 100})

with tf.compat.v1.Session() as session:
  session.run(tf.compat.v1.global_variables_initializer())
  optimizer.minimize(session)

  # The value of vector should now be [0., 0.].
```

Example with simple bound constraints:

```python
import numpy as np

vector = tf.Variable([7., 7.], name='vector')

# Make vector norm as small as possible.
loss = tf.reduce_sum(tf.square(vector))

optimizer = ScipyOptimizerInterface(
    loss, var_to_bounds={vector: ([1, 2], np.infty)})

with tf.compat.v1.Session() as session:
  session.run(tf.compat.v1.global_variables_initializer())
  optimizer.minimize(session)

  # The value of vector should now be [1., 2.].
```

Example with more complicated constraints:

```python
vector = tf.Variable([7., 7.], name='vector')

# Make vector norm as small as possible.
loss = tf.reduce_sum(tf.square(vector))
# Ensure the vector's y component is = 1.
equalities = [vector[1] - 1.]
# Ensure the vector's x component is >= 1.
inequalities = [vector[0] - 1.]

# The default SciPy optimization algorithm, L-BFGS-B, does not support
# general constraints. Thus we use SLSQP instead.
optimizer = ScipyOptimizerInterface(
    loss, equalities=equalities, inequalities=inequalities, method='SLSQP')

with tf.compat.v1.Session() as session:
  session.run(tf.compat.v1.global_variables_initializer())
  optimizer.minimize(session)

  # The value of vector should now be [1., 1.].
```
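Under the hood, the wrapper translates `equalities` and `inequalities` into SciPy constraint specifications before delegating to `scipy.optimize.minimize`. Roughly, the SLSQP problem above corresponds to the following standalone sketch (plain NumPy/SciPy, no TensorFlow; the constraint functions here are hand-written stand-ins for what the wrapper derives from the graph):

```python
import numpy as np
from scipy.optimize import minimize

# Objective mirroring the example: squared norm of a 2-vector.
loss = lambda x: np.sum(x ** 2)

constraints = [
    {'type': 'eq', 'fun': lambda x: x[1] - 1.0},    # y component == 1
    {'type': 'ineq', 'fun': lambda x: x[0] - 1.0},  # x component >= 1
]

result = minimize(loss, x0=np.array([7.0, 7.0]), method='SLSQP',
                  constraints=constraints)

print(result.x)  # close to [1., 1.]
```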

Args
`loss` A scalar `Tensor` to be minimized.
`var_list` Optional `list` of `Variable` objects to update to minimize `loss`. Defaults to the list of variables collected in the graph under the key `GraphKeys.TRAINABLE_VARIABLES`.
`equalities` Optional `list` of equality constraint scalar `Tensor`s to be held equal to zero.
`inequalities` Optional `list` of inequality constraint scalar `Tensor`s to be held nonnegative.
`var_to_bounds` Optional `dict` where each key is an optimization `Variable` and each corresponding value is a length-2 tuple of `(low, high)` bounds. Although this kind of simple constraint could also be expressed through the `inequalities` arg, not all optimization algorithms support general inequality constraints (L-BFGS-B, for example, supports only bounds). Both `low` and `high` can be numbers or anything convertible to a NumPy array that can be broadcast to the shape of `var` (using `np.broadcast_to`). To indicate that there is no bound, use `None` (or `+/- np.infty`). For example, if `var` is a 2x3 matrix, any of the following `bounds` could be supplied:

• `(0, np.infty)`: Each element of `var` held positive.
• `(-np.infty, [1, 2, 3])`: Columns bounded above: the first by 1, the second by 2, the third by 3.
• `(-np.infty, [[1], [2]])`: Rows bounded above: the first by 1, the second by 2.
• `(-np.infty, [[1, 2, 3], [4, 5, 6]])`: Entry `var[0, 0]` bounded above by 1, `var[0, 1]` by 2, etc.
`**optimizer_kwargs` Other subclass-specific keyword arguments.
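A quick way to check whether a candidate bound is compatible with a variable's shape is to broadcast it yourself with `np.broadcast_to` (a standalone NumPy sketch, using the hypothetical 2x3 variable from the description above):

```python
import numpy as np

var_shape = (2, 3)  # shape of the hypothetical 2x3 variable

# Per-column upper bounds: a length-3 vector broadcasts across rows.
high_cols = np.broadcast_to([1, 2, 3], var_shape)

# Per-row upper bounds: a (2, 1) column broadcasts across columns.
high_rows = np.broadcast_to([[1], [2]], var_shape)

# A scalar broadcasts to every element.
low = np.broadcast_to(0, var_shape)
```

If `np.broadcast_to` raises a `ValueError`, the bound cannot be used for a variable of that shape.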

## Methods

### `minimize`


Minimize a scalar `Tensor`.

Variables subject to optimization are updated in-place at the end of optimization.

Note that this method does not just return a minimization `Op`, unlike `Optimizer.minimize()`; instead it actually performs minimization by executing commands to control a `Session`.

Args
`session` A `Session` instance.
`feed_dict` A feed dict to be passed to calls to `session.run`.
`fetches` A list of `Tensor`s to fetch and supply to `loss_callback` as positional arguments.
`step_callback` A function to be called at each optimization step; arguments are the current values of all optimization variables flattened into a single vector.
`loss_callback` A function to be called every time the loss and gradients are computed, with evaluated fetches supplied as positional arguments.
`**run_kwargs` Keyword arguments to pass to `session.run`.
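Under the hood, `minimize` packs all optimization variables into a single flat vector and hands the problem to `scipy.optimize.minimize`; `step_callback` plays the role of SciPy's per-iteration `callback`. A standalone sketch of that mechanism (plain NumPy/SciPy, no TensorFlow; the quadratic objective is a stand-in for the loss graph):

```python
import numpy as np
from scipy.optimize import minimize

trajectory = []  # records the flattened variable vector at each step

def step_callback(x):
    # SciPy invokes this once per iteration with the current point.
    trajectory.append(x.copy())

result = minimize(lambda x: np.sum(x ** 2),
                  x0=np.array([7.0, 7.0]),
                  method='L-BFGS-B',
                  callback=step_callback)
```

After the call, `trajectory` holds one packed vector per optimization step, which is the same shape of data a `step_callback` passed to `ScipyOptimizerInterface.minimize` would receive.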
