Checkpoints input pipeline state every N steps or seconds.
Compat aliases for migration

See Migration guide for more details.

`tf.compat.v1.data.experimental.CheckpointInputPipelineHook`

```python
tf.data.experimental.CheckpointInputPipelineHook(
    estimator, external_state_policy=None
)
```
This hook saves the state of the iterators in the Graph so that when training is resumed, the input pipeline continues from where it left off. This can help avoid overfitting in certain pipelines where the number of training steps per evaluation is small compared to the dataset size, or when the training pipeline is pre-empted.
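As an aside, the iterator-state saving this hook automates can be reproduced in isolation: in TF2 eager mode, a dataset iterator's position is trackable and can be saved and restored with plain `tf.train.Checkpoint`. A minimal sketch (the `/tmp` path is arbitrary):

```python
import tensorflow as tf

# Build a small pipeline and advance its iterator one step.
dataset = tf.data.Dataset.range(10)
iterator = iter(dataset)
print(next(iterator).numpy())  # 0

# Save the iterator's position, consume more elements, then restore.
ckpt = tf.train.Checkpoint(iterator=iterator)
path = ckpt.save('/tmp/iterator_ckpt')
print(next(iterator).numpy())  # 1
print(next(iterator).numpy())  # 2

ckpt.restore(path)
print(next(iterator).numpy())  # 1 -- the pipeline resumed where it was saved
```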
Differences from `CheckpointSaverHook`:

- Saves only the input pipelines in the "iterators" collection and not the global variables or other saveable objects.
- Does not write the `MetaGraphDef` to the summary.
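A minimal sketch of constructing the hook (assuming `model_fn` is defined elsewhere, as in the training example below; note that in recent TF releases `external_state_policy` takes one of the strings `'ignore'`, `'warn'`, or `'fail'`):

```python
import tensorflow as tf

# Assumption: `model_fn` is an Estimator model function defined elsewhere.
est = tf.estimator.Estimator(model_fn, model_dir='/tmp/pipeline_ckpt_demo')

# 'warn' logs a warning instead of raising an error when the input
# pipeline depends on external state such as a tf.py_function; the
# default policy fails in that case.
hook = tf.data.experimental.CheckpointInputPipelineHook(
    est, external_state_policy='warn')
```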
Example of checkpointing the training pipeline:

```python
est = tf.estimator.Estimator(model_fn)
while True:
  est.train(
      train_input_fn,
      hooks=[tf.data.experimental.CheckpointInputPipelineHook(est)],
      steps=train_steps_per_eval)
  # Note: We do not pass the hook here.
  metrics = est.evaluate(eval_input_fn)
  if should_stop_the_training(metrics):
    break
```
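The hook is deliberately omitted from the `evaluate` call: it checkpoints and restores training-iterator state, and attaching it during evaluation could override the evaluation input pipeline with the saved training state.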
This hook should be used if the input pipeline state needs to be saved separately from the model checkpoint. Doing so may be useful for a few reasons: