Create an executor backed by remote workers.
```python
tff.framework.remote_executor_factory_from_stubs(
    stubs: list[Union[remote_executor_grpc_stub.RemoteExecutorGrpcStub,
                      remote_executor_stub.RemoteExecutorStub]],
    thread_pool_executor: Optional[futures.Executor] = None,
    dispose_batch_size: int = 20,
    max_fanout: int = 100,
    default_num_clients: int = 0,
    stream_structs: bool = False
) -> tff.framework.ExecutorFactory
```
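As a minimal construction sketch, the list passed as `stubs` is typically built by wrapping gRPC channels to the workers. The worker addresses below are placeholders, and the sketch assumes `RemoteExecutorGrpcStub` is accessible under `tff.framework`; in some releases it may need to be imported from its own module instead.

```python
import grpc
import tensorflow_federated as tff

# Placeholder worker addresses; replace with the hosts running the TFF
# executor service.
worker_targets = ['worker-0.example.com:8000', 'worker-1.example.com:8000']

# One gRPC channel per remote worker.
channels = [grpc.insecure_channel(target) for target in worker_targets]

# Assumption: RemoteExecutorGrpcStub is exported under tff.framework; some
# releases expose it only via the remote_executor_grpc_stub module.
stubs = [tff.framework.RemoteExecutorGrpcStub(channel) for channel in channels]

factory = tff.framework.remote_executor_factory_from_stubs(
    stubs, default_num_clients=10)
```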
Args

| Argument | Description |
| --- | --- |
| `stubs` | A list of stubs to the TFF executor service, running on remote machines. |
| `thread_pool_executor` | Optional `concurrent.futures.Executor` used to wait for the reply to a streaming RPC message. Uses the default executor if not specified. |
| `dispose_batch_size` | The batch size for requests to dispose of remote worker values. Lower values result in more requests to the remote worker, but values are cleaned up sooner, which may reduce memory usage on the remote worker. |
| `max_fanout` | The maximum fanout at any point in the aggregation hierarchy. If `num_clients > max_fanout`, the constructed executor stack will consist of multiple levels of aggregators. The height of the stack will be on the order of `log(default_num_clients) / log(max_fanout)`; for example, with `default_num_clients = 10000` and `max_fanout = 100`, the stack has roughly two levels of aggregation. |
| `default_num_clients` | The number of clients to use for simulations where the number of clients cannot be inferred. Usually the number of clients is inferred from the number of values passed to computations that accept client-placed values; when this inference isn't possible (such as for a no-argument or non-federated computation), this default is used instead. |
| `stream_structs` | Whether to decompose struct values and stream them. |
Returns

An instance of `executor_factory.ExecutorFactory` encapsulating the executor construction logic specified above.
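Continuing the construction sketch above, the returned factory can either be asked directly for an executor at a given set of cardinalities, or installed as the default execution context so that federated computations are dispatched to the remote workers. The context class name is an assumption and varies across TFF releases.

```python
# Request an executor configured for a fixed number of clients.
executor = factory.create_executor({tff.CLIENTS: 10})

# Alternatively, install the factory as the default execution context.
# Assumption: the synchronous context class is exposed as
# tff.framework.SyncExecutionContext in the release in use; older releases
# named it tff.framework.ExecutionContext.
context = tff.framework.SyncExecutionContext(executor_fn=factory)
tff.framework.set_default_context(context)
```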