Wrapper allowing `scipy.optimize.minimize` to operate a `tf.compat.v1.Session`.
Inherits From: ExternalOptimizerInterface
```python
tf.contrib.opt.ScipyOptimizerInterface(
    loss, var_list=None, equalities=None, inequalities=None, var_to_bounds=None,
    **optimizer_kwargs
)
```
Example:

```python
import tensorflow as tf

vector = tf.Variable([7., 7.], name='vector')

# Make vector norm as small as possible.
loss = tf.reduce_sum(tf.square(vector))

optimizer = tf.contrib.opt.ScipyOptimizerInterface(loss, options={'maxiter': 100})

with tf.compat.v1.Session() as session:
  session.run(tf.compat.v1.global_variables_initializer())
  optimizer.minimize(session)

# The value of vector should now be [0., 0.].
```
Example with simple bound constraints:

```python
import numpy as np
import tensorflow as tf

vector = tf.Variable([7., 7.], name='vector')

# Make vector norm as small as possible.
loss = tf.reduce_sum(tf.square(vector))

optimizer = tf.contrib.opt.ScipyOptimizerInterface(
    loss, var_to_bounds={vector: ([1, 2], np.infty)})

with tf.compat.v1.Session() as session:
  session.run(tf.compat.v1.global_variables_initializer())
  optimizer.minimize(session)

# The value of vector should now be [1., 2.].
```
Example with more complicated constraints:

```python
import tensorflow as tf

vector = tf.Variable([7., 7.], name='vector')

# Make vector norm as small as possible.
loss = tf.reduce_sum(tf.square(vector))

# Ensure the vector's y component is = 1.
equalities = [vector[1] - 1.]

# Ensure the vector's x component is >= 1.
inequalities = [vector[0] - 1.]

# Our default SciPy optimization algorithm, L-BFGS-B, does not support
# general constraints. Thus we use SLSQP instead.
optimizer = tf.contrib.opt.ScipyOptimizerInterface(
    loss, equalities=equalities, inequalities=inequalities, method='SLSQP')

with tf.compat.v1.Session() as session:
  session.run(tf.compat.v1.global_variables_initializer())
  optimizer.minimize(session)

# The value of vector should now be [1., 1.].
```
| Args | |
|---|---|
| `loss` | A scalar `Tensor` to be minimized. |
| `var_list` | Optional `list` of `Variable` objects to update to minimize `loss`. Defaults to the list of variables collected in the graph under the key `GraphKeys.TRAINABLE_VARIABLES`. |
| `equalities` | Optional `list` of equality constraint scalar `Tensor`s to be held equal to zero. |
| `inequalities` | Optional `list` of inequality constraint scalar `Tensor`s to be held nonnegative. |
| `var_to_bounds` | Optional `dict` where each key is an optimization `Variable` and each corresponding value is a length-2 tuple of `(low, high)` bounds. Although enforcing this kind of simple constraint could be accomplished with the `inequalities` arg, not all optimization algorithms support general inequality constraints, e.g. L-BFGS-B. Both `low` and `high` can be numbers or anything convertible to a NumPy array that can be broadcast to the shape of `var` (using `np.broadcast_to`). To indicate that there is no bound, use `None` (or `+/- np.infty`). For example, if `var` is a 2x3 matrix, each bound could be a scalar (applied to every element), a length-3 vector (one bound per column), a 2x1 array (one bound per row), or a full 2x3 array (one bound per element); see the sketch after this table. |
| `**optimizer_kwargs` | Other subclass-specific keyword arguments. |
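As a minimal sketch of how broadcastable bounds might look in practice (the `matrix` variable is illustrative, not part of the API), the following bounds a hypothetical 2x3 variable below by a scalar and above by a per-row array:

```python
import tensorflow as tf

# A hypothetical 2x3 matrix variable.
matrix = tf.Variable(tf.ones([2, 3]), name='matrix')
loss = tf.reduce_sum(tf.square(matrix - 2.))

# The scalar low bound 0 applies to every element; the (2, 1)-shaped high
# bound broadcasts across columns, capping row 0 at 1 and row 1 at 2.
optimizer = tf.contrib.opt.ScipyOptimizerInterface(
    loss, var_to_bounds={matrix: (0, [[1.], [2.]])})

with tf.compat.v1.Session() as session:
  session.run(tf.compat.v1.global_variables_initializer())
  optimizer.minimize(session)

# Row 0 should now be clipped at [1., 1., 1.] and row 1 at [2., 2., 2.].
```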
## Methods

### minimize

```python
minimize(
    session=None, feed_dict=None, fetches=None, step_callback=None,
    loss_callback=None, **run_kwargs
)
```

Minimize a scalar `Tensor`.

Variables subject to optimization are updated in-place at the end of optimization.

Note that this method does not just return a minimization `Op`, unlike `Optimizer.minimize()`; instead it actually performs minimization by executing commands to control a `Session`.
| Args | |
|---|---|
| `session` | A `Session` instance. |
| `feed_dict` | A feed dict to be passed to calls to `session.run`. |
| `fetches` | A list of `Tensor`s to fetch and supply to `loss_callback` as positional arguments. |
| `step_callback` | A function to be called at each optimization step; its argument is the current values of all optimization variables flattened into a single vector. |
| `loss_callback` | A function to be called every time the loss and gradients are computed, with evaluated fetches supplied as positional arguments. |
| `**run_kwargs` | Keyword arguments to pass to `session.run`. |
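A minimal sketch of how the callback hooks might be combined (the callback function names here are illustrative): `fetches` supplies extra tensors to `loss_callback`, while `step_callback` observes the packed variable vector after each step.

```python
import tensorflow as tf

vector = tf.Variable([7., 7.], name='vector')
loss = tf.reduce_sum(tf.square(vector))

optimizer = tf.contrib.opt.ScipyOptimizerInterface(loss, options={'maxiter': 100})

def report_loss(loss_value):
  # Receives the evaluated `fetches` (here just `loss`) as positional args.
  print('loss:', loss_value)

def report_step(packed_vector):
  # Receives all optimization variables flattened into a single vector.
  print('variables:', packed_vector)

with tf.compat.v1.Session() as session:
  session.run(tf.compat.v1.global_variables_initializer())
  optimizer.minimize(session, fetches=[loss],
                     loss_callback=report_loss,
                     step_callback=report_step)
```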