

Builds the TFF computations for optimization using federated SGD.

This function creates a tff.templates.IterativeProcess that performs federated SGD on client models. The iterative process has the following methods inherited from tff.templates.IterativeProcess:

  • initialize: A tff.Computation with the functional type signature ( -> S@SERVER), where S is a tff.learning.framework.ServerState representing the initial state of the server.
  • next: A tff.Computation with the functional type signature (<S@SERVER, {B*}@CLIENTS> -> <S@SERVER, T@SERVER>) where S is a tff.learning.framework.ServerState whose type matches that of the output of initialize, and {B*}@CLIENTS represents the client datasets, where B is the type of a single batch. This computation returns a tff.learning.framework.ServerState representing the updated server state and metrics that are the result of tff.learning.Model.federated_output_computation during client training and any other metrics from broadcast and aggregation processes.
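The initialize/next contract above can be illustrated with a plain-Python stand-in. The stub below is purely illustrative (it is not the TFF API): it only mimics the shape of the driver loop a simulation runs, where state plays the role of S@SERVER and the list of client datasets plays the role of {B*}@CLIENTS.

```python
# Illustrative stand-in for the returned tff.templates.IterativeProcess.
# A real process is built by tff.learning.build_federated_sgd_process.
class StubIterativeProcess:
    def initialize(self):
        # Stand-in for the initial ServerState S.
        return {"round": 0}

    def next(self, state, client_data):
        # (<S@SERVER, {B*}@CLIENTS> -> <S@SERVER, T@SERVER>)
        new_state = {"round": state["round"] + 1}
        metrics = {"num_clients": len(client_data)}
        return new_state, metrics

process = StubIterativeProcess()
state = process.initialize()
for _ in range(3):
    state, metrics = process.next(state, [["batch_a"], ["batch_b"]])
```

A real simulation loop has the same shape: threading the server state through successive calls to next while feeding in per-round client datasets.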

The iterative process also has the following method not inherited from tff.templates.IterativeProcess:

  • get_model_weights: A tff.Computation that takes as input a tff.learning.framework.ServerState and returns a tff.learning.ModelWeights containing the state's model weights.

Each time the next method is called, the server model is broadcast to each client using a broadcast function. Each client sums the gradients at each batch in the client's local dataset. These gradient sums are then aggregated at the server using an aggregation function. The aggregate gradients are applied at the server by using the tf.keras.optimizers.Optimizer.apply_gradients method of the server optimizer.

This implements the original FedSGD algorithm in McMahan et al., 2017.
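The round described above (broadcast, per-client gradient sums, example-weighted aggregation, server apply) can be sketched in plain NumPy, independent of TFF. All names below are illustrative, and the toy loss f(w) = 0.5 * ||w||^2 (whose per-example gradient is simply w) stands in for a real model.

```python
import numpy as np

def client_gradient_sum(server_weights, batches):
    # Each client sums the loss gradient over every batch in its local dataset.
    grad_sum = np.zeros_like(server_weights)
    num_examples = 0
    for batch in batches:
        # Per-example gradient of 0.5 * ||w||^2 is w, summed over the batch.
        grad_sum += server_weights * len(batch)
        num_examples += len(batch)
    return grad_sum, num_examples

def federated_sgd_round(server_weights, client_datasets, lr=0.1):
    # "Broadcast": every client reads the same server_weights.
    results = [client_gradient_sum(server_weights, ds) for ds in client_datasets]
    total_examples = sum(n for _, n in results)
    # Aggregate: example-weighted mean gradient across all clients
    # (the default weighting, matching total examples processed on device).
    mean_grad = sum(g for g, _ in results) / total_examples
    # Server applies the aggregate gradient, as apply_gradients would.
    return server_weights - lr * mean_grad

w = np.array([2.0])
clients = [[[1, 2], [3]], [[4]]]  # two clients; inner lists are batches
w_next = federated_sgd_round(w, clients)
```

With these inputs the mean gradient equals w itself, so one round moves the weights from 2.0 to 1.8 at lr=0.1.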

  • model_fn: A no-arg function that returns a tff.learning.Model. This function must not capture TensorFlow tensors or variables and use them. The model must be constructed entirely from scratch on each invocation; returning the same pre-constructed model on each call will result in an error.
  • server_optimizer_fn: A no-arg function that returns a tf.keras.optimizers.Optimizer. The apply_gradients method of this optimizer is used to apply client updates to the server model.
  • client_weight_fn: An optional function that takes the output of model.report_local_outputs and returns a tensor providing the weight in the federated average of the aggregated gradients. If not provided, the default weight is the total number of examples processed on device.
  • broadcast_process: A tff.templates.MeasuredProcess that broadcasts the model weights on the server to the clients. It must support the signature (input_values@SERVER -> output_values@CLIENT).
  • aggregation_process: A tff.templates.MeasuredProcess that aggregates the model updates on the clients back to the server. It must support the signature ({input_values}@CLIENTS -> output_values@SERVER). Must be None if model_update_aggregation_factory is not None.
  • model_update_aggregation_factory: An optional tff.aggregators.WeightedAggregationFactory that constructs a tff.templates.AggregationProcess for aggregating the client model updates on the server. If None, a default tff.aggregators.MeanFactory is used, creating a stateless mean aggregation. Must be None if aggregation_process is not None.
  • use_experimental_simulation_loop: Controls the reduce loop function for the input dataset; an experimental reduce loop is used for simulation.
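The model_fn constraint above (fresh construction on every call, no shared pre-built object) can be illustrated with a plain-Python analogue. DummyModel and both factory functions below are hypothetical, not TFF APIs; only the construction pattern matters.

```python
# Illustrative analogue of the model_fn contract: build a fresh model per call.
class DummyModel:
    def __init__(self):
        self.weights = [0.0]

def good_model_fn():
    # Constructs the model entirely from scratch on each invocation: OK.
    return DummyModel()

_shared = DummyModel()

def bad_model_fn():
    # Returns the same pre-constructed model on each call: TFF would raise
    # an error for a model_fn written this way.
    return _shared
```

TFF serializes the model into a computation graph, so model_fn must be able to rebuild all of its variables inside whatever graph context TFF invokes it in.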

Returns a tff.templates.IterativeProcess.