Enum defining options for variable handling when saving.
NONE No policy applied: Distributed variables are saved as one variable, with no device attached.
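For illustration, a minimal sketch (the save path is hypothetical) of saving a variable created under tf.distribute.MirroredStrategy with the NONE policy; the distributed variable is written out as a single, device-free variable:

```python
import tensorflow as tf

# Create a distributed variable; under NONE it is saved as one variable.
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    ckpt = tf.train.Checkpoint(v=tf.Variable(2.0))

tf.saved_model.save(
    ckpt, '/tmp/none_policy_model',  # hypothetical export directory
    options=tf.saved_model.SaveOptions(
        experimental_variable_policy=
            tf.saved_model.experimental.VariablePolicy.NONE))
```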
SAVE_VARIABLE_DEVICES When saving variables, also save their device assignment. This is useful if one wants to hardcode devices in saved models, but it also makes them non-portable if soft device placement is disabled (see tf.config.set_soft_device_placement for details). This is currently not fully supported by tf.saved_model.load, and is mainly intended for cases where the saved model will be read at a lower API level (see the sketch after the example below). In the example below, the graph saved by the call to tf.saved_model.save will have the variable devices correctly specified:
```python
exported = tf.train.Checkpoint()
with tf.device('/GPU:0'):
    exported.x_gpu = tf.Variable(1.0)
with tf.device('/CPU:0'):
    exported.x_cpu = tf.Variable(1.0)
tf.saved_model.save(
    exported, export_dir,
    options=tf.saved_model.SaveOptions(
        experimental_variable_policy=
            tf.saved_model.experimental.VariablePolicy.SAVE_VARIABLE_DEVICES))
```
Distributed variables are still saved as one variable under this policy.
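Because tf.saved_model.load does not fully restore these device assignments, one lower-level way to read them back is to parse the SavedModel proto directly. A minimal sketch, assuming the export_dir from the example above:

```python
import os
from tensorflow.core.protobuf import saved_model_pb2

# Read the serialized SavedModel and print the device recorded for each
# variable handle op in the saved graph.
with open(os.path.join(export_dir, 'saved_model.pb'), 'rb') as f:
    saved_model = saved_model_pb2.SavedModel.FromString(f.read())

for node in saved_model.meta_graphs[0].graph_def.node:
    if node.op == 'VarHandleOp':
        print(node.name, node.device)
```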
EXPAND_DISTRIBUTED_VARIABLES Distributed variables will be saved with information about their components, allowing for their restoration on load. Also, the saved graph will contain references to those variables. This is useful when one wants to use the model for training in environments where the original distribution strategy is not available.
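For comparison, a minimal sketch (export_dir is assumed to be defined) of saving under EXPAND_DISTRIBUTED_VARIABLES so that component information for distributed variables is preserved:

```python
import tensorflow as tf

# Create a distributed variable under a MirroredStrategy.
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    ckpt = tf.train.Checkpoint(v=tf.Variable(3.0))

# Save with component information so the distributed variable can be
# restored on load, even without the original strategy.
policy = tf.saved_model.experimental.VariablePolicy.EXPAND_DISTRIBUTED_VARIABLES
tf.saved_model.save(
    ckpt, export_dir,  # export_dir is assumed to be defined
    options=tf.saved_model.SaveOptions(experimental_variable_policy=policy))
```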