Set the global op latency mode in execution context.

This is an advanced TFQ feature that should be used only in very specific cases: when the memory requirements of a simulation are extremely large, or when executing against a real quantum chip.

If you are going to make use of this function, call it at the top of your module, immediately after the import:

import tensorflow_quantum as tfq

# Note: the call path below is the module location at time of writing.
tfq.python.quantum_context.set_quantum_concurrent_op_mode(False)

Args:
  mode: Python bool indicating whether or not circuit-executing ops should block graph-level parallelism. Advanced users should set `mode=False` when executing very large simulation workloads or when executing against a real quantum chip.
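To make the semantics concrete, here is a minimal pure-Python sketch of the process-global flag pattern such a setting relies on. The names (`set_op_mode`, `get_op_mode`, `_quantum_concurrent_op_mode`) are illustrative stand-ins, not the actual TFQ internals:

```python
# Process-global flag; True means circuit-executing ops block
# graph-level parallelism (the safe default).
_quantum_concurrent_op_mode = True

def set_op_mode(mode):
    """Set the global op mode (illustrative stand-in, not TFQ's API)."""
    if not isinstance(mode, bool):
        raise TypeError("mode must be a Python bool.")
    global _quantum_concurrent_op_mode
    _quantum_concurrent_op_mode = mode

def get_op_mode():
    """Read back the current global op mode."""
    return _quantum_concurrent_op_mode

# Advanced users flip the flag once, right after import, before any
# ops are built, so every subsequent op sees the same setting.
set_op_mode(False)
print(get_op_mode())  # -> False
```

Because the flag is read at op-construction time, setting it mid-program would leave earlier ops with the old behavior; that is why the docs ask for the call to happen at the top of the module.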