Ftrl

public class Ftrl

Optimizer that implements the FTRL (Follow The Regularized Leader) algorithm.

This version supports both online L2 regularization (the L2 penalty given in the FTRL-Proximal paper, McMahan et al., 2013, "Ad Click Prediction: a View from the Trenches") and shrinkage-type L2 regularization, which is the addition of an L2 penalty to the loss function.
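
For illustration, a minimal sketch that exercises only the constructors and method documented on this page (the package and class names assume TensorFlow Java's org.tensorflow.framework.optimizers; the hyperparameter values are arbitrary):

import org.tensorflow.Graph;
import org.tensorflow.framework.optimizers.Ftrl;

public class FtrlExample {
  public static void main(String[] args) {
    try (Graph graph = new Graph()) {
      // All hyperparameters left at their defaults (see the constants below).
      Ftrl defaults = new Ftrl(graph);

      // Fully specified: a named optimizer using both kinds of L2 penalty.
      Ftrl tuned = new Ftrl(
          graph,
          "my-ftrl", // name of this Optimizer
          0.01f,     // learningRate
          -0.5f,     // learningRatePower (<= 0; zero means a fixed rate)
          0.1f,      // initialAccumulatorValue (>= 0)
          0.001f,    // l1Strength (>= 0)
          0.001f,    // l2Strength (>= 0, the stabilization penalty)
          0.0001f);  // l2ShrinkageRegularizationStrength (>= 0, the magnitude penalty)

      System.out.println(tuned.getOptimizerName());
    }
  }
}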

Constants

String ACCUMULATOR
float INITIAL_ACCUMULATOR_VALUE_DEFAULT
float L1STRENGTH_DEFAULT
float L2STRENGTH_DEFAULT
float L2_SHRINKAGE_REGULARIZATION_STRENGTH_DEFAULT
float LEARNING_RATE_DEFAULT
float LEARNING_RATE_POWER_DEFAULT
String LINEAR_ACCUMULATOR

Public Constructors

Ftrl(Graph graph)
Creates an Ftrl Optimizer with default values for its parameters.
Ftrl(Graph graph, String name)
Creates an Ftrl Optimizer with the given name and default values for its other parameters.
Ftrl(Graph graph, float learningRate)
Creates an Ftrl Optimizer with the given learning rate and default values for its other parameters.
Ftrl(Graph graph, String name, float learningRate)
Creates an Ftrl Optimizer with the given name and learning rate, and default values for its other parameters.
Ftrl(Graph graph, float learningRate, float learningRatePower, float initialAccumulatorValue, float l1Strength, float l2Strength, float l2ShrinkageRegularizationStrength)
Creates an Ftrl Optimizer with the given hyperparameters.
Ftrl(Graph graph, String name, float learningRate, float learningRatePower, float initialAccumulatorValue, float l1Strength, float l2Strength, float l2ShrinkageRegularizationStrength)
Creates a named Ftrl Optimizer with the given hyperparameters.

Public Methods

String
getOptimizerName()
Gets the name of the optimizer.

Constants

public static final String ACCUMULATOR

Constant Value: "gradient_accumulator"

public static final float INITIAL_ACCUMULATOR_VALUE_DEFAULT

Constant Value: 0.1

public static final float L1STRENGTH_DEFAULT

Constant Value: 0.0

public static final float L2STRENGTH_DEFAULT

Constant Value: 0.0

public static final float L2_SHRINKAGE_REGULARIZATION_STRENGTH_DEFAULT

Constant Value: 0.0

public static final float LEARNING_RATE_DEFAULT

Constant Value: 0.001

public static final float LEARNING_RATE_POWER_DEFAULT

Constant Value: -0.5

public static final String LINEAR_ACCUMULATOR

Constant Value: "linear_accumulator"

Public Constructors

public Ftrl (Graph graph)

Creates an Ftrl Optimizer with default values for its parameters.

Parameters
graph the TensorFlow Graph

public Ftrl (Graph graph, String name)

Creates an Ftrl Optimizer with the given name and default values for its other parameters.

Parameters
graph the TensorFlow Graph
name the name of this Optimizer

public Ftrl (Graph graph, float learningRate)

Creates an Ftrl Optimizer with the given learning rate and default values for its other parameters.

Parameters
graph the TensorFlow Graph
learningRate the learning rate

public Ftrl (Graph graph, String name, float learningRate)

Creates an Ftrl Optimizer with the given name and learning rate, and default values for its other parameters.

Parameters
graph the TensorFlow Graph
name the name of this Optimizer
learningRate the learning rate

public Ftrl (Graph graph, float learningRate, float learningRatePower, float initialAccumulatorValue, float l1Strength, float l2Strength, float l2ShrinkageRegularizationStrength)

Creates an Ftrl Optimizer with the given hyperparameters.

Parameters
graph the TensorFlow Graph
learningRate the learning rate
learningRatePower controls how the learning rate decreases during training; must be less than or equal to zero. Use zero for a fixed learning rate.
initialAccumulatorValue the starting value for the accumulators; only zero or positive values are allowed.
l1Strength the L1 regularization strength; must be greater than or equal to zero.
l2Strength the L2 regularization strength; must be greater than or equal to zero. This acts as a stabilization penalty.
l2ShrinkageRegularizationStrength the L2 shrinkage regularization strength; must be greater than or equal to zero. This differs from l2Strength in that l2Strength is a stabilization penalty, whereas this shrinkage penalty is a magnitude penalty added to the loss function.
Throws
IllegalArgumentException if initialAccumulatorValue, l1Strength, l2Strength, or l2ShrinkageRegularizationStrength is less than 0.0, or if learningRatePower is greater than 0.0.
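
As a sketch of how the two L2 terms differ in effect (based on TensorFlow's FTRL kernel with shrinkage; this behavior is an assumption about the implementation, not part of this class's documented contract), shrinkage-type L2 adjusts the gradient before the update:

    grad_with_shrinkage = grad + 2 * l2ShrinkageRegularizationStrength * var

while the online L2 term (l2Strength) enters only the closed-form FTRL update itself and leaves the gradient unchanged.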

public Ftrl (Graph graph, String name, float learningRate, float learningRatePower, float initialAccumulatorValue, float l1Strength, float l2Strength, float l2ShrinkageRegularizationStrength)

Creates a named Ftrl Optimizer with the given hyperparameters.

Parameters
graph the TensorFlow Graph
name the name of this Optimizer
learningRate the learning rate
learningRatePower controls how the learning rate decreases during training; must be less than or equal to zero. Use zero for a fixed learning rate.
initialAccumulatorValue the starting value for the accumulators; only zero or positive values are allowed.
l1Strength the L1 regularization strength; must be greater than or equal to zero.
l2Strength the L2 regularization strength; must be greater than or equal to zero. This acts as a stabilization penalty.
l2ShrinkageRegularizationStrength the L2 shrinkage regularization strength; must be greater than or equal to zero. This differs from l2Strength in that l2Strength is a stabilization penalty, whereas this shrinkage penalty is a magnitude penalty added to the loss function.
Throws
IllegalArgumentException if initialAccumulatorValue, l1Strength, l2Strength, or l2ShrinkageRegularizationStrength is less than 0.0, or if learningRatePower is greater than 0.0.

Public Methods

public String getOptimizerName ()

Gets the name of the optimizer.

Returns
The optimizer name.