class tf.contrib.keras.optimizers.Adadelta
Defined in tensorflow/contrib/keras/python/keras/optimizers.py.
Adadelta optimizer.
It is recommended to leave the parameters of this optimizer at their default values.
Arguments:
lr: float >= 0. Learning rate.
It is recommended to leave it at the default value.
rho: float >= 0. Decay factor for the running average of squared gradients.
epsilon: float >= 0. Fuzz factor (a small constant added for numerical stability).
decay: float >= 0. Learning rate decay applied after each update.
References:
- Adadelta - an adaptive learning rate method
Methods
__init__
__init__(
lr=1.0,
rho=0.95,
epsilon=1e-08,
decay=0.0,
**kwargs
)
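The update rule that `get_updates` builds follows Zeiler's Adadelta method: a decaying average of squared gradients normalizes each step, and a decaying average of squared updates rescales it. The following is a minimal NumPy sketch of one parameter update under those equations; `adadelta_step` is a hypothetical helper for illustration, not the library's implementation (which operates on TensorFlow variables).

```python
import numpy as np

def adadelta_step(param, grad, accum_grad, accum_delta,
                  lr=1.0, rho=0.95, epsilon=1e-08):
    # Decaying average of squared gradients.
    accum_grad = rho * accum_grad + (1.0 - rho) * grad ** 2
    # Scale the gradient by the ratio of the two RMS terms.
    delta = grad * np.sqrt(accum_delta + epsilon) / np.sqrt(accum_grad + epsilon)
    # Decaying average of squared updates.
    accum_delta = rho * accum_delta + (1.0 - rho) * delta ** 2
    param = param - lr * delta
    return param, accum_grad, accum_delta

# One step on a scalar parameter with gradient 0.5 and fresh accumulators:
p, ag, ad = adadelta_step(np.array(1.0), np.array(0.5),
                          np.array(0.0), np.array(0.0))
```

Because the accumulators start at zero, the first step is tiny (on the order of `sqrt(epsilon)`); the effective step size adapts as the averages fill in, which is why the defaults are usually left alone.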
from_config
from_config(
cls,
config
)
get_config
get_config()
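`get_config` returns a Python dict of the optimizer's constructor arguments, and the classmethod `from_config` rebuilds an equivalent instance from that dict. A minimal pure-Python sketch of this serialization pattern, using a hypothetical `ToyOptimizer` stand-in:

```python
class ToyOptimizer(object):
    """Hypothetical stand-in illustrating the get_config/from_config pattern."""

    def __init__(self, lr=1.0, rho=0.95, epsilon=1e-08, decay=0.0):
        self.lr, self.rho, self.epsilon, self.decay = lr, rho, epsilon, decay

    def get_config(self):
        # Everything needed to rebuild the optimizer, as plain Python types.
        return {'lr': self.lr, 'rho': self.rho,
                'epsilon': self.epsilon, 'decay': self.decay}

    @classmethod
    def from_config(cls, config):
        # Round-trip: cls(**config) recreates an equivalent instance.
        return cls(**config)

opt = ToyOptimizer(lr=0.5)
clone = ToyOptimizer.from_config(opt.get_config())
```

Note that the config holds only hyperparameters; the optimizer's state (its accumulator weights) is transferred separately via `get_weights`/`set_weights`.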
get_gradients
get_gradients(
loss,
params
)
get_updates
get_updates(
params,
constraints,
loss
)
get_weights
get_weights()
Returns the current value of the weights of the optimizer.
Returns:
A list of numpy arrays.
set_weights
set_weights(weights)
Sets the weights of the optimizer from Numpy arrays.
Should only be called after computing the gradients (otherwise the optimizer has no weights).
Arguments:
weights: a list of Numpy arrays. The number
of arrays and their shapes must match the
weights of the optimizer (i.e. it should
match the output of `get_weights`).
Raises:
ValueError: in case of incompatible weight shapes.
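The shape compatibility that `set_weights` enforces can be sketched in NumPy; `check_weight_shapes` below is a hypothetical helper showing the kind of validation that raises `ValueError`, not the library's code.

```python
import numpy as np

def check_weight_shapes(current_weights, new_weights):
    # Same number of arrays...
    if len(current_weights) != len(new_weights):
        raise ValueError('Length mismatch: expected %d arrays, got %d'
                         % (len(current_weights), len(new_weights)))
    # ...and each array must keep its shape.
    for cur, new in zip(current_weights, new_weights):
        if cur.shape != new.shape:
            raise ValueError('Shape mismatch: expected %s, got %s'
                             % (cur.shape, new.shape))

current = [np.zeros((3, 2)), np.zeros((2,))]
check_weight_shapes(current, [np.ones((3, 2)), np.ones((2,))])  # compatible
```

Passing arrays of the wrong shape (e.g. `(3, 3)` where `(3, 2)` is expected) triggers the `ValueError` described above.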
