Base class for interfaces with external optimization algorithms.
Subclass this and implement `_minimize` in order to wrap a new optimization algorithm.

`ExternalOptimizerInterface` should not be instantiated directly; instead use e.g. `ScipyOptimizerInterface`.
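For example, a minimal sketch of typical usage through the `ScipyOptimizerInterface` subclass (assuming TensorFlow 1.x, where `tf.contrib` is available):

```python
import tensorflow as tf
from tensorflow.contrib.opt import ScipyOptimizerInterface

vector = tf.Variable([7., 7.], name='vector')

# Make the vector norm as small as possible.
loss = tf.reduce_sum(tf.square(vector))

optimizer = ScipyOptimizerInterface(loss, options={'maxiter': 100})

with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    optimizer.minimize(session)
    # Optimized variables are updated in-place.
    print(session.run(vector))
```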
`__init__(loss, var_list=None, equalities=None, inequalities=None, var_to_bounds=None, **optimizer_kwargs)`
Initialize a new interface instance.
Args:

* `loss`: A scalar `Tensor` to be minimized.
* `var_list`: Optional list of `Variable` objects to update to minimize `loss`. Defaults to the list of variables collected in the graph under the key `GraphKeys.TRAINABLE_VARIABLES`.
* `equalities`: Optional list of equality constraint scalar `Tensor`s to be held equal to zero.
* `inequalities`: Optional list of inequality constraint scalar `Tensor`s to be held nonnegative.
* `var_to_bounds`: Optional `dict` where each key is an optimization `Variable` and each corresponding value is a length-2 tuple of `(low, high)` bounds. Although enforcing this kind of simple constraint could be accomplished with the `inequalities` arg, not all optimization algorithms support general inequality constraints, e.g. L-BFGS-B. Both `low` and `high` can either be numbers or anything convertible to a NumPy array that can be broadcast to the shape of `var` (using `np.broadcast_to`). To indicate that there is no bound, use `None` (or `+/- np.infty`). For example, if `var` is a 2x3 matrix, then any of the following corresponding `bounds` could be supplied (a usage sketch follows this argument list):
  * `(0, np.infty)`: Each element of `var` held positive.
  * `(-np.infty, [1, 2, 3])`: First column less than 1, second column less than 2, etc.
  * `(-np.infty, [[1], [2]])`: First row less than 1, second row less than 2.
  * `(-np.infty, [[1, 2, 3], [4, 5, 6]])`: Entry `var[0, 0]` less than 1, `var[0, 1]` less than 2, etc.
* `**optimizer_kwargs`: Other subclass-specific keyword arguments.
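Below is a minimal sketch of how the constraint and bound arguments might be combined, using the `ScipyOptimizerInterface` subclass for concreteness (the choice of SLSQP and the specific constraint expressions are illustrative assumptions, not requirements of the interface):

```python
import numpy as np
import tensorflow as tf
from tensorflow.contrib.opt import ScipyOptimizerInterface

vector = tf.Variable([7., 7.], name='vector')
loss = tf.reduce_sum(tf.square(vector))

optimizer = ScipyOptimizerInterface(
    loss,
    # Equality constraints are held equal to zero: vector[0] == 1.
    equalities=[vector[0] - 1.],
    # Inequality constraints are held nonnegative: vector[1] >= 1.
    inequalities=[vector[1] - 1.],
    # Broadcastable (low, high) bounds: every element of `vector` kept >= 0.
    var_to_bounds={vector: (0, np.infty)},
    # L-BFGS-B does not support general constraints; SLSQP does.
    method='SLSQP')

with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    optimizer.minimize(session)
```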
`minimize(session=None, feed_dict=None, fetches=None, step_callback=None, loss_callback=None, **run_kwargs)`
Minimize a scalar `Tensor`.

Variables subject to optimization are updated in-place at the end of optimization.

Note that this method does not just return a minimization `Op`, unlike `Optimizer.minimize()`; instead it actually performs minimization by executing commands to control a `Session`.
Args:

* `session`: A `Session` instance.
* `feed_dict`: A feed dict to be passed to calls to `session.run`.
* `fetches`: A list of `Tensor`s to fetch and supply to `loss_callback` as positional arguments.
* `step_callback`: A function to be called at each optimization step; arguments are the current values of all optimization variables flattened into a single vector.
* `loss_callback`: A function to be called every time the loss and gradients are computed, with evaluated fetches supplied as positional arguments (see the sketch after this list).
* `**run_kwargs`: kwargs to pass to `session.run`.
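As an illustration, here is a sketch of the two callback hooks, assuming `optimizer`, `loss`, and the variables are set up as in the earlier sketches (the callback bodies and print statements are illustrative only):

```python
def step_callback(packed_vector):
    # Called once per optimization step with all optimization variables
    # flattened into a single NumPy vector.
    print('current point:', packed_vector)

def loss_callback(loss_value):
    # Called each time loss and gradients are computed; receives the
    # evaluated `fetches` (here just `loss`) as positional arguments.
    print('loss:', loss_value)

with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    optimizer.minimize(session,
                       fetches=[loss],
                       step_callback=step_callback,
                       loss_callback=loss_callback)
```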