Initializes a distributed TPU system for use with TensorFlow.
tf.compat.v1.tpu.initialize_system(
    embedding_config=None, job=None, compilation_failure_closes_chips=True
)
Args:
embedding_config: If not None, a TPUEmbeddingConfiguration proto describing the desired configuration of the hardware embedding lookup tables. If embedding_config is None, no hardware embeddings can be used.
job: The job (the XXX in TensorFlow device specification /job:XXX) that contains the TPU devices that will be initialized. If job=None it is assumed there is only one job in the TensorFlow flock, and an error will be returned if this assumption does not hold.
compilation_failure_closes_chips: Whether to close TPU chips when a compilation failure occurs.
Returns:
A serialized TopologyProto that describes the TPU system. Note: the topology must be evaluated using Session.run before it can be used.
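A minimal graph-mode sketch of building the initialization op (the `grpc://<tpu-worker>` address in the comment is a placeholder; constructing the op does not by itself require TPU hardware, but running it does):

```python
import tensorflow as tf

# initialize_system builds graph-mode ops, so eager execution is disabled first.
tf.compat.v1.disable_eager_execution()

# Build the initialization op. job=None assumes a single-job TensorFlow cluster.
topology_op = tf.compat.v1.tpu.initialize_system(job=None)

# The op yields the topology as a serialized TopologyProto in a string tensor;
# it must be evaluated with Session.run before the topology can be used, e.g.:
#     with tf.compat.v1.Session("grpc://<tpu-worker>") as sess:  # placeholder address
#         serialized_topology = sess.run(topology_op)
```

Because the result is a serialized proto, the value returned by `Session.run` is bytes, not a structured object.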