```python
import tensorflow as tf
tf.compat.v1.enable_v2_behavior()
import tensorflow_federated as tff

# Load simulation data.
source, _ = tff.simulation.datasets.emnist.load_data()

def client_data(n):
  return source.create_tf_dataset_for_client(source.client_ids[n]).map(
      lambda e: (tf.reshape(e['pixels'], [-1]), e['label'])
  ).repeat(10).batch(20)

# Pick a subset of client devices to participate in training.
train_data = [client_data(n) for n in range(3)]

# Grab a single batch of data so that TFF knows what data looks like.
sample_batch = tf.nest.map_structure(
    lambda x: x.numpy(), next(iter(train_data[0])))

# Wrap a Keras model for use with TFF.
def model_fn():
  model = tf.keras.models.Sequential([
      tf.keras.layers.Dense(10, tf.nn.softmax, input_shape=(784,),
                            kernel_initializer='zeros')
  ])
  model.compile(
      loss=tf.keras.losses.SparseCategoricalCrossentropy(),
      optimizer=tf.keras.optimizers.SGD(0.1),
      metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
  return tff.learning.from_compiled_keras_model(model, sample_batch)

# Simulate a few rounds of training with the selected client devices.
trainer = tff.learning.build_federated_averaging_process(model_fn)
state = trainer.initialize()
for _ in range(5):
  state, metrics = trainer.next(state, train_data)
  print(metrics.loss)
```
TensorFlow Federated (TFF) is an open-source framework for machine learning and other computations on decentralized data. TFF has been developed to facilitate open research and experimentation with Federated Learning (FL), an approach to machine learning where a shared global model is trained across many participating clients that keep their training data locally. For example, FL has been used to train prediction models for mobile keyboards without uploading sensitive typing data to servers.
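The core idea can be illustrated with a toy, plain-Python sketch of Federated Averaging (this is an analogy for the concept, not the TFF API; the function names and the stand-in "training" step are purely illustrative):

```python
# Toy sketch of the Federated Averaging idea: each client refines the
# shared model using only its own private data, and the server merges
# the resulting weights. The "local training" here is a stand-in that
# simply nudges weights toward the mean of the client's data.

def local_update(weights, client_dataset, lr=0.1):
    """Stand-in for local training on one client's private data."""
    target = sum(client_dataset) / len(client_dataset)
    return [w - lr * (w - target) for w in weights]

def federated_average(client_weights):
    """Server step: element-wise average of the clients' updated weights."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

global_weights = [0.0, 0.0]
# Each inner list is one client's private data; it never leaves the client.
client_datasets = [[1.0, 3.0], [2.0, 4.0], [6.0, 6.0]]

for _ in range(3):  # a few federated rounds
    updates = [local_update(global_weights, d) for d in client_datasets]
    global_weights = federated_average(updates)
```

Only the weight updates travel to the server; the raw per-client data stays local, which is the property that makes FL suitable for sensitive data such as keyboard input.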
TFF enables developers to simulate the included federated learning algorithms on their models and data, as well as to experiment with novel algorithms. The building blocks provided by TFF can also be used to implement non-learning computations, such as aggregated analytics over decentralized data. TFF’s interfaces are organized in two layers:
Federated Learning (FL) API: This layer offers a set of high-level interfaces that allow developers to apply the included implementations of federated training and evaluation to their existing TensorFlow models.
Federated Core (FC) API: At the core of the system is a set of lower-level interfaces for concisely expressing novel federated algorithms by combining TensorFlow with distributed communication operators within a strongly typed functional programming environment. This layer also serves as the foundation upon which we've built Federated Learning.
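To make the distributed-operator idea concrete, here is a plain-Python sketch (an analogy, not the `tff` API) of the kind of computation the Federated Core expresses: a collection of values "placed" at the clients is combined with a single aggregation operator. The same pattern underlies the non-learning aggregated analytics mentioned above.

```python
# In the Federated Core, a value like sensor readings has a federated
# type placed at CLIENTS, and operators such as a federated mean move
# only the aggregate to the server. Here, each list slot models one
# client's locally held value; the names are illustrative.

def federated_mean(client_values):
    """Aggregation operator: combines client-held values into one result."""
    return sum(client_values) / len(client_values)

# One temperature reading held privately by each of three clients.
sensor_readings_at_clients = [68.5, 70.3, 69.8]

# The server receives only the aggregate, never the individual readings.
average_temperature = federated_mean(sensor_readings_at_clients)
```

In actual TFF, this computation would be written as a `tff.federated_computation` over a federated type placed at clients, with the aggregation performed by a built-in communication operator rather than a local loop.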
TFF enables developers to declaratively express federated computations, so that they can be deployed to diverse runtime environments. TFF also includes a single-machine simulation runtime for experiments. Please visit the tutorials and try it out yourself!