Using TFF for Federated Learning Research

Note: This page is currently being populated

Overview

TFF is an extensible, powerful framework for conducting federated learning (FL) research by simulating federated computations on realistic proxy datasets. This page describes the main concepts and components that are relevant for research simulations, as well as detailed guidance for conducting different kinds of research in TFF.

The typical structure of research code in TFF

A research FL simulation implemented in TFF typically consists of three main types of logic.

  1. Individual pieces of TensorFlow code, typically tf.functions, that encapsulate logic that runs in a single location (e.g., on clients or on a server). This code is typically written and tested without any tff.* references, and can be re-used outside of TFF. For example, the client training loop in Federated Averaging is implemented at this level.

  2. TensorFlow Federated orchestration logic, which binds together the individual tf.functions from 1. by wrapping them as tff.tf_computations and then orchestrating them using abstractions like tff.federated_broadcast and tff.federated_mean inside a tff.federated_computation. See, for example, this orchestration for Federated Averaging.

  3. An outer driver script that simulates the control logic of a production FL system, selecting simulated clients from a dataset and then executing federated computations defined in 2. on those clients. For example, a Federated EMNIST experiment driver. A toy sketch of all three layers follows this list.
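The sketch below illustrates how these three layers fit together; the computation itself (scaling client values by a server-held factor and averaging the results) is invented for illustration and is not taken from the linked examples.

import tensorflow as tf
import tensorflow_federated as tff

# (1) Pure TensorFlow logic that runs in a single location.
@tf.function
def scale(value, factor):
  return value * factor

# Wrap the TF logic so TFF can orchestrate it.
@tff.tf_computation(tf.float32, tf.float32)
def scale_tf(value, factor):
  return scale(value, factor)

# (2) Federated orchestration: broadcast a server value, apply the TF
# logic on each client, then average the results back at the server.
@tff.federated_computation(
    tff.FederatedType(tf.float32, tff.SERVER),
    tff.FederatedType(tf.float32, tff.CLIENTS))
def scale_and_average(server_factor, client_values):
  factor_at_clients = tff.federated_broadcast(server_factor)
  scaled = tff.federated_map(scale_tf, (client_values, factor_at_clients))
  return tff.federated_mean(scaled)

# (3) Driver logic: invoke the computation on simulated client data.
print(scale_and_average(2.0, [1.0, 2.0, 3.0]))  # mean of [2.0, 4.0, 6.0] -> 4.0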

Federated learning datasets

TensorFlow Federated hosts multiple datasets that are representative of the characteristics of real-world problems that could be solved with federated learning. Datasets include:

  • StackOverflow. A realistic text dataset for language modeling or supervised learning tasks, with 342,477 unique users and 135,818,730 examples (sentences) in the training set.

  • Federated EMNIST. A federated pre-processing of the EMNIST character and digit dataset, where each client corresponds to a different writer. The full training set contains 3,400 users with 671,585 examples across 62 labels.

  • Shakespeare. A smaller character-level text dataset based on the complete works of William Shakespeare. The dataset consists of 715 users (characters of Shakespeare plays), where each example corresponds to a contiguous set of lines spoken by the character in a given play.
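These datasets ship with TFF's simulation APIs. For instance, Federated EMNIST can be loaded and partitioned per client along these lines:

import tensorflow_federated as tff

# Downloads and caches the federated EMNIST data on first use.
emnist_train, emnist_test = tff.simulation.datasets.emnist.load_data()

# Each client ID corresponds to a single writer; build that writer's
# local tf.data.Dataset.
client_id = emnist_train.client_ids[0]
client_dataset = emnist_train.create_tf_dataset_for_client(client_id)

for example in client_dataset.take(1):
  print(example['label'])  # each example holds 'pixels' and 'label' features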

High performance simulations

While the wall-clock time of an FL simulation is not a relevant metric for evaluating algorithms (simulation hardware isn't representative of real FL deployment environments), being able to run FL simulations quickly is critical for research productivity. Hence, TFF has invested heavily in providing high-performance single- and multi-machine runtimes. Documentation is under development, but for now see the High-performance simulations with TFF tutorial as well as the instructions on setting up simulations with TFF on GCP. For fast single-machine experiments, use:

# Route all TFF computations through the high-performance local executor.
tff.framework.set_default_executor(tff.framework.create_local_executor())

This should become the default soon.

TFF for different research areas

Federated optimization algorithms

Research on federated optimization algorithms can be done in different ways in TFF, depending on the desired level of customization.

A minimal implementation of the Federated Averaging algorithm (https://arxiv.org/abs/1602.05629) is provided here, along with an example federated EMNIST experiment. The training example can easily be adapted for simple experiment changes; a sketch of a typical driver loop follows.
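As a rough illustration, a driver loop looks like the following; here the stock tff.learning API stands in for the minimal implementation, and model_fn and sampled_client_data are assumed to be defined elsewhere in the experiment.

import tensorflow as tf
import tensorflow_federated as tff

# `model_fn` is assumed: a no-argument function returning a tff.learning.Model.
iterative_process = tff.learning.build_federated_averaging_process(
    model_fn,
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02))

state = iterative_process.initialize()
for round_num in range(10):
  # `sampled_client_data` is assumed: the list of client tf.data.Datasets
  # selected for this round.
  state, metrics = iterative_process.next(state, sampled_client_data)
  print(round_num, metrics)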

To implement more complicated federated optimization algorithms, you may need to customize your federated training loop in order to gain more control over the orchestration and optimization logic of the experiment. Again, simple_fedavg may be a good place to start. For example, you could change the client update function to implement a custom local training procedure, modify the tff.federated_computation that controls the orchestration to change what is broadcast from the server to clients and what is aggregated back, and alter the server update to change how the server model is learned from the client updates.
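A toy skeleton of such a round is sketched below; the single-float "model" and the client/server update rules are invented stand-ins for the structures simple_fedavg defines, but the customization points called out above appear in the same places.

import tensorflow as tf
import tensorflow_federated as tff

# Custom local training procedure: here, one step toward the client's value.
@tff.tf_computation(tf.float32, tf.float32)
def client_update_fn(local_value, server_weights):
  return server_weights + 0.1 * (local_value - server_weights)

# How the server model is learned from the aggregated client updates.
@tff.tf_computation(tf.float32, tf.float32)
def server_update_fn(server_weights, mean_client_update):
  del server_weights  # this toy rule simply adopts the mean update
  return mean_client_update

@tff.federated_computation(
    tff.FederatedType(tf.float32, tff.SERVER),
    tff.FederatedType(tf.float32, tff.CLIENTS))
def run_one_round(server_weights, client_values):
  # Customize what is broadcast from the server to clients.
  weights_at_clients = tff.federated_broadcast(server_weights)
  # Customize the local training procedure.
  client_updates = tff.federated_map(
      client_update_fn, (client_values, weights_at_clients))
  # Customize what is aggregated back; here, an unweighted mean.
  mean_update = tff.federated_mean(client_updates)
  # Customize how the server model is updated.
  return tff.federated_map(server_update_fn, (server_weights, mean_update))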

Model and update compression

TFF uses the tensor_encoding API to enable lossy compression algorithms that reduce communication costs between the server and clients. For an example of training with server-to-client and client-to-server compression using the Federated Averaging algorithm, see this experiment.

To implement a custom compression algorithm and apply it to the training loop, you can:

  1. Implement a new compression algorithm as a subclass of EncodingStageInterface or its more general variant, AdaptiveEncodingStageInterface, following this example.
  2. Construct your new Encoder and specialize it for model broadcast or model update averaging (see the sketch after this list).
  3. Use those objects to build the entire training computation.
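A rough sketch of step 2, using the uniform quantization stage that ships with the tensor_encoding API (the 8-bit setting is a placeholder):

import tensorflow as tf
from tensorflow_model_optimization.python.core.internal import tensor_encoding as te

# Encoder for server-to-client model broadcast.
def broadcast_encoder_fn(value):
  spec = tf.TensorSpec(value.shape, value.dtype)
  return te.encoders.as_simple_encoder(
      te.encoders.uniform_quantization(bits=8), spec)

# Encoder for client-to-server model update averaging; aggregation uses
# the gather-style encoder variant.
def mean_encoder_fn(value):
  spec = tf.TensorSpec(value.shape, value.dtype)
  return te.encoders.as_gather_encoder(
      te.encoders.uniform_quantization(bits=8), spec)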

Differential privacy

TFF is interoperable with the TensorFlow Privacy library to enable research in new algorithms for federated training of models with differential privacy. For an example of training with DP using the basic DP-FedAvg algorithm and extensions, see this experiment driver.

If you want to implement a custom DP algorithm and apply it to the aggregate updates of federated averaging, you can (a short sketch follows the list):

  1. Implement a new DP mean algorithm as a subclass of tensorflow_privacy.DPQuery,
  2. construct your new DPQuery similarly to the way standard DPQueries are constructed here,
  3. and pass your query instance into tff.utils.build_dp_aggregate() similarly to run_dp_experiment.
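For instance, steps 2 and 3 might look like the following, with TensorFlow Privacy's stock Gaussian average query standing in for a custom DPQuery; the clipping, noise, and denominator values are placeholders.

import tensorflow_privacy
import tensorflow_federated as tff

# Clip each client update to L2 norm 1.0, add Gaussian noise to the sum,
# and divide by the expected number of clients per round.
dp_query = tensorflow_privacy.GaussianAverageQuery(
    l2_norm_clip=1.0, sum_stddev=0.5, denominator=100)

# Wrap the query as a TFF aggregation that can replace the default mean.
dp_aggregate_fn = tff.utils.build_dp_aggregate(dp_query)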

Federated GANs (described below) are another example of a TFF project implementing user-level differential privacy (e.g., here in code).

Robustness and attacks

TFF can also be used to simulate the targeted attacks on federated learning systems and differential-privacy-based defenses considered in Can You Really Backdoor Federated Learning?. This is done by building an iterative process with potentially malicious clients (see build_federated_averaging_process_attacked). The targeted_attack directory contains more details.

  • New attacking algorithms can be implemented by writing a client update function which is a TensorFlow function; see ClientProjectBoost for an example.
  • New defenses can be implemented by customizing tff.utils.StatefulAggregateFn, which aggregates client outputs to get a global update (a minimal sketch follows this list).
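As a rough illustration of the shape such a defense takes, the stateless aggregator below just computes a federated mean; a real defense would clip, filter, or otherwise robustify the client outputs inside next_fn.

import tensorflow_federated as tff

def aggregate_next_fn(state, value, weight=None):
  # Insert norm clipping or other robust aggregation of `value`
  # (the client outputs) here before combining them.
  return state, tff.federated_mean(value, weight=weight)

robust_aggregate_fn = tff.utils.StatefulAggregateFn(
    initialize_fn=lambda: (),  # no aggregator state in this sketch
    next_fn=aggregate_next_fn)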

For an example script for simulation, see emnist_with_targeted_attack.py.

Generative Adversarial Networks

GANs make for an interesting federated orchestration pattern that looks a little different from standard Federated Averaging. They involve two distinct networks (the generator and the discriminator), each trained with its own optimization step.

TFF can be used for research on federated training of GANs. For example, the DP-FedAvg-GAN algorithm presented in recent work is implemented in TFF. This work demonstrates the effectiveness of combining federated learning, generative models, and differential privacy.