tff.backends.test.create_async_experimental_distributed_cpp_execution_context

Creates a local async execution context backed by the TFF-C++ runtime.

When using this context, local sequence reductions are assumed to be expressed using tff.sequence_reduce. Iterating over a dataset, or calling dataset.reduce inside a TF graph, is currently not supported.
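As an illustration of this constraint, the minimal sketch below expresses a per-sequence sum with tff.sequence_reduce rather than a dataset.reduce call inside the TF graph. It assumes the classic tff.tf_computation and tff.federated_computation decorators and tf.int32 element types; newer TFF releases may spell these decorators and type specs differently.

```python
import tensorflow as tf
import tensorflow_federated as tff


# Reduction operator expressed as a TF computation over (accumulator, element).
@tff.tf_computation(tf.int32, tf.int32)
def add(accumulator, element):
  return accumulator + element


# Supported: the reduction over the sequence is expressed at the TFF level
# with tff.sequence_reduce, not with dataset.reduce inside a TF graph.
@tff.federated_computation(tff.SequenceType(tf.int32))
def sum_sequence(values):
  return tff.sequence_reduce(values, 0, add)
```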

Args:
  distributed_config: A runtime configuration for running TF computations in a distributed manner. A server-side and/or client-side mesh can be supplied in the configuration if TF computations should be executed with the DTensor executor.
  default_num_clients: The number of clients to use as the default cardinality, if this number cannot be inferred from the arguments of a computation.
  max_concurrent_computation_calls: The maximum number of concurrent calls to a single computation in the C++ runtime. If nonpositive, no limit is applied.
  stream_structs: A flag that enables decomposing and streaming struct values.

Returns:
  An instance of context_base.AsyncContext representing the TFF-C++ runtime.
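A minimal usage sketch follows, assuming the module path and keyword parameters documented above. The distributed configuration object (shown as a placeholder) and the helper used to install the returned context as the default (tff.framework.set_default_context) are assumptions and may be named differently in your TFF release.

```python
import tensorflow_federated as tff

# Placeholder for a runtime configuration object describing the server-side
# and/or client-side meshes; construct it per your deployment (not shown here).
my_distributed_config = ...

context = (
    tff.backends.test
    .create_async_experimental_distributed_cpp_execution_context(
        distributed_config=my_distributed_config,
        default_num_clients=10,               # default cardinality when it cannot be inferred
        max_concurrent_computation_calls=-1,  # nonpositive means no limit
        stream_structs=False,                 # struct streaming disabled
    )
)

# Assumption: install the context as the process-wide default so that
# subsequent computation calls are dispatched to the TFF-C++ runtime.
tff.framework.set_default_context(context)
```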