Compilation with XLA can greatly improve the performance of your programs, but the TensorFlow interop has a number of known sharp corners.
TensorArray TF/XLA interconversion
The problem manifests itself as the error message
Support for TensorList crossing the XLA/TF boundary is not implemented.
XLA supports tf.TensorArray. However, the interconversion between the TF and
XLA representations is not implemented yet.
This error often arises when the
TensorArray is used inside the compiled
block, but the derivative is taken outside.
Workaround: compile the outermost scope which is taking the derivative.
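As a minimal sketch of that workaround (the function name grad_of_sum is hypothetical), the gradient computation itself sits inside the compiled function, so the TensorArray never crosses the XLA/TF boundary:

```python
import tensorflow as tf

@tf.function(jit_compile=True)
def grad_of_sum(x):
    # Both the TensorArray and the derivative live inside the compiled scope.
    with tf.GradientTape() as tape:
        tape.watch(x)
        ta = tf.TensorArray(tf.float32, size=3)
        for i in range(3):  # Python loop: unrolled at trace time
            ta = ta.write(i, x * float(i))
        y = tf.reduce_sum(ta.stack())  # y = 0*x + 1*x + 2*x = 3x
    return tape.gradient(y, x)

g = grad_of_sum(tf.constant(2.0))  # gradient of 3x is 3.0
```

Calling tape.gradient outside the @tf.function would reintroduce the boundary crossing and trigger the error above.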
Dynamic tf.TensorArray is not supported
Writes to tf.TensorArray(..., dynamic_size=True) are not compilable with
XLA, as such writes require an unknown number of reallocations when the array
exceeds its original bound.
Workaround: provide a statically known bound to your arrays.
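A minimal sketch of a compilable array, assuming the bound is known ahead of time (the function name first_squares is hypothetical):

```python
import tensorflow as tf

@tf.function(jit_compile=True)
def first_squares():
    # size is a static bound; dynamic_size=True here would fail to compile.
    ta = tf.TensorArray(tf.float32, size=4, dynamic_size=False)
    for i in range(4):
        ta = ta.write(i, float(i) ** 2)
    return ta.stack()

out = first_squares()  # [0., 1., 4., 9.]
```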
Random number generation
XLA currently ignores TF seeds to random operations. This affects stateful TF
random operations, such as
tf.nn.dropout. XLA will
behave as if the compilation was seeded with a new unique seed at each run. This
limitation does not apply to stateless random ops.
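Stateless ops derive their output purely from an explicit seed argument, so they stay reproducible under compilation. A small sketch (the function name noise is hypothetical):

```python
import tensorflow as tf

@tf.function(jit_compile=True)
def noise(seed):
    # Stateless: identical seed -> identical output, even under XLA.
    return tf.random.stateless_uniform([3], seed=seed)

a = noise(tf.constant([1, 2]))
b = noise(tf.constant([1, 2]))
# a and b are identical; a stateful op such as tf.random.uniform
# would not honor its seed under XLA.
```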
TensorFlow while loops need to be bounded (or have backprop disabled)
Since XLA only supports bounded
TensorArrays, all compiled while loops need to have the
maximum_iterations parameter set to a constant value known at
compile time, or backpropagation disabled using back_prop=False.
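A minimal sketch of the first option, with a compile-time constant bound (the function name sum_to is hypothetical):

```python
import tensorflow as tf

@tf.function(jit_compile=True)
def sum_to(n):
    def cond(i, s):
        return i < n

    def body(i, s):
        return i + 1, s + i

    # maximum_iterations is a constant known at compile time,
    # so XLA can bound the loop.
    _, s = tf.while_loop(cond, body,
                         [tf.constant(0), tf.constant(0)],
                         maximum_iterations=10)
    return s

total = sum_to(tf.constant(5))  # 0 + 1 + 2 + 3 + 4 = 10
```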