Return status of soft device placement flag.
`tf.config.get_soft_device_placement()`
If enabled, ops can be placed on different devices than the device explicitly assigned by the user. This potentially has a large performance cost due to an increase in data communication between devices.
Some cases where soft device placement would modify device assignment are:
- there is no GPU/TPU implementation for the op
- no GPU devices are known or registered
- the op needs to be co-located with reftype input(s) that are on the CPU
- the op cannot be compiled by XLA (common on TPU, which always requires the XLA compiler)
For TPUs, if this option is true, a feature called automatic outside compilation is enabled. Automatic outside compilation will move uncompilable ops within a TPU program to instead run on the host. This can be used when encountering compilation failures due to unsupported ops.
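A minimal sketch of reading the flag, using its companion setter `tf.config.set_soft_device_placement` to toggle it first:

```python
import tensorflow as tf

# Enable soft device placement, then read the flag back.
tf.config.set_soft_device_placement(True)
print(tf.config.get_soft_device_placement())  # True

# Disable it again; the getter reflects the change.
tf.config.set_soft_device_placement(False)
print(tf.config.get_soft_device_placement())  # False
```

Toggling the flag affects where subsequently created ops may be placed; it does not move ops that have already been placed.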
| Returns |
|---|
| A boolean indicating whether soft device placement is enabled. |