Distributed version of Stochastic Dual Coordinate Ascent (SDCA) optimizer for linear models with L1 + L2 regularization.
tf.raw_ops.SdcaOptimizerV2(
sparse_example_indices,
sparse_feature_indices,
sparse_feature_values,
dense_features,
example_weights,
example_labels,
sparse_indices,
sparse_weights,
dense_weights,
example_state_data,
loss_type,
l1,
l2,
num_loss_partitions,
num_inner_iterations,
adaptive=True,
name=None
)
As the global optimization objective is strongly convex, the optimizer optimizes the dual objective at each step. The optimizer applies each update one example at a time. Examples are sampled uniformly; the optimizer is learning-rate free and enjoys a linear convergence rate.
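To make the calling convention concrete, below is a minimal, unverified sketch of a single SDCA step on a toy batch that uses only dense features (so every sparse input is an empty list); all tensor values, shapes, and hyperparameters here are hypothetical. The op returns updated per-example state together with weight deltas, which the caller is responsible for adding back into the model weights.

import tensorflow as tf

num_examples = 4
num_dense_features = 2

# Hypothetical toy batch: one dense feature group of shape [num_examples, num_dense_features].
dense_features = [tf.constant([[1.0, 0.5],
                               [0.2, 1.3],
                               [0.8, 0.1],
                               [0.4, 0.9]], dtype=tf.float32)]
dense_weights = [tf.zeros([num_dense_features], dtype=tf.float32)]  # current linear weights
example_weights = tf.ones([num_examples], dtype=tf.float32)
example_labels = tf.constant([1.0, 0.0, 1.0, 0.0], dtype=tf.float32)
# Per-example dual state, zero-initialized; assumed shape [num_examples, 4].
example_state_data = tf.zeros([num_examples, 4], dtype=tf.float32)

out_state, delta_sparse, delta_dense = tf.raw_ops.SdcaOptimizerV2(
    sparse_example_indices=[],
    sparse_feature_indices=[],
    sparse_feature_values=[],
    dense_features=dense_features,
    example_weights=example_weights,
    example_labels=example_labels,
    sparse_indices=[],
    sparse_weights=[],
    dense_weights=dense_weights,
    example_state_data=example_state_data,
    loss_type="logistic_loss",
    l1=0.0,
    l2=1.0,
    num_loss_partitions=1,
    num_inner_iterations=1,
    adaptive=True,
)

# The op emits deltas, not updated weights; the caller applies them to the model.
updated_dense_weights = [w + d for w, d in zip(dense_weights, delta_dense)]

Feeding the returned example state and updated weights back into subsequent calls is what makes the procedure iterative; in practice this raw op is typically driven by higher-level linear-model training code rather than called directly.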
Proximal Stochastic Dual Coordinate Ascent.
Shai Shalev-Shwartz, Tong Zhang. 2012
\[\text{Loss Objective} = \sum_{i} f_{i}(w x_{i}) + \frac{l_2}{2} \lVert w \rVert_2^2 + l_1 \lVert w \rVert_1\]
Adding vs. Averaging in Distributed Primal-Dual Optimization.
Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan,
Peter Richtarik, Martin Takac. 2015
Stochastic Dual Coordinate Ascent with Adaptive Probabilities.
Dominik Csiba, Zheng Qu, Peter Richtarik. 2015
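As a concrete reading of the loss objective above, the following is a small illustrative sketch (not part of this op's API) that evaluates the regularized objective when f_i is the logistic loss; w, x, and y are hypothetical placeholders for the model weights, the feature matrix, and the {0, 1} labels.

import tensorflow as tf

def regularized_objective(w, x, y, l1, l2):
    # sum_i f_i(w . x_i), with f_i taken to be the logistic loss on labels y_i in {0, 1}.
    logits = tf.linalg.matvec(x, w)  # w . x_i for every example
    data_loss = tf.reduce_sum(
        tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits))
    l2_penalty = 0.5 * l2 * tf.reduce_sum(tf.square(w))  # (l2 / 2) * ||w||_2^2
    l1_penalty = l1 * tf.reduce_sum(tf.abs(w))           # l1 * ||w||_1
    return data_loss + l2_penalty + l1_penalty

The l1, l2, and loss_type arguments of the op correspond to the l1 and l2 coefficients and the choice of f_i in this objective.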