
Creates a baseline task for autoencoding on EMNIST.

This task performs autoencoding on the EMNIST dataset using a densely connected bottleneck network. The model consists of 8 fully connected layers of widths [1000, 500, 250, 30, 250, 500, 1000, 784], with the final layer being the output layer. Every layer uses a sigmoid activation function except the smallest (30-unit bottleneck) layer, which uses a linear activation function.

The goal of the task is to minimize the mean squared error between the input to the network and the output of the network.
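The architecture and objective above can be sketched in plain NumPy. This is a hypothetical illustration of the layer widths, activation pattern, and MSE loss described here, not the actual TFF implementation; all function and variable names below are assumptions for the sketch.

```python
import numpy as np

# Layer widths from the task description: a bottleneck autoencoder whose
# final 784-unit layer reconstructs a flattened 28x28 EMNIST image.
WIDTHS = [1000, 500, 250, 30, 250, 500, 1000, 784]
INPUT_DIM = 784

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def init_params(rng, input_dim=INPUT_DIM, widths=WIDTHS):
    """Randomly initialize (weight, bias) pairs for each dense layer."""
    params = []
    fan_in = input_dim
    for fan_out in widths:
        w = rng.normal(0.0, 0.05, size=(fan_in, fan_out))
        b = np.zeros(fan_out)
        params.append((w, b))
        fan_in = fan_out
    return params

def forward(params, x):
    """Forward pass: sigmoid everywhere except the linear 30-unit bottleneck."""
    h = x
    for (w, b), width in zip(params, WIDTHS):
        z = h @ w + b
        h = z if width == min(WIDTHS) else sigmoid(z)
    return h

def mse(x, x_hat):
    """The task objective: mean squared error between input and reconstruction."""
    return np.mean((x - x_hat) ** 2)
```

A forward pass on a batch of flattened images returns a same-shaped reconstruction, and the scalar `mse` between the two is the quantity the task minimizes.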

train_client_spec: A tff.simulation.baselines.ClientSpec specifying how to preprocess train client data.
eval_client_spec: An optional tff.simulation.baselines.ClientSpec specifying how to preprocess evaluation client data. If set to None, the evaluation datasets will use a batch size of 64 with no extra preprocessing.
only_digits: A boolean indicating whether to use the digit-only EMNIST-10 dataset with 10 numeric classes (True) or the full EMNIST-62 dataset with 62 alphanumeric classes (False).
cache_dir: An optional directory to cache the downloaded datasets. If None, they will be cached to ~/.tff/.
use_synthetic_data: A boolean indicating whether to use synthetic EMNIST data. This option should only be used for testing purposes, in order to avoid downloading the entire EMNIST dataset.

Returns a tff.simulation.baselines.BaselineTask.