

Runs experiment with Orbit training loop.

The default experiment runner for Model Garden experiments. Users can customize the experiment pipeline by subclassing this class and replacing components or functions.

For example, an experiment runner with customized checkpoint manager:

class MyExpRunnerWithExporter(OrbitExperimentRunner):
  def _maybe_build_checkpoint_manager(self):
    # Replaces the default CheckpointManager with a customized one.
    return MyCheckpointManager(*args)

# In user code, instead of the original
# `OrbitExperimentRunner(...).run(mode)`, now the user can do:
MyExpRunnerWithExporter(...).run(mode)

Similar overrides can be applied to other components.
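The override mechanism above is ordinary Python subclassing: the runner builds its components through overridable methods, so a subclass can swap any one of them. A minimal self-contained sketch of the pattern with stand-in classes (not the real tfm/Orbit API):

```python
class CheckpointManager:
    """Stand-in for the default checkpoint manager."""
    def save(self):
        return "default checkpoint"


class MyCheckpointManager(CheckpointManager):
    """Stand-in for a customized checkpoint manager (e.g. one that exports)."""
    def save(self):
        return "custom checkpoint with export"


class ExperimentRunner:
    """Stand-in runner: builds components via overridable builder methods."""
    def _maybe_build_checkpoint_manager(self):
        return CheckpointManager()

    def run(self):
        # The runner uses whatever manager the builder method returns.
        return self._maybe_build_checkpoint_manager().save()


class MyExpRunnerWithExporter(ExperimentRunner):
    def _maybe_build_checkpoint_manager(self):
        # Replaces the default checkpoint manager with a customized one.
        return MyCheckpointManager()


ExperimentRunner().run()           # "default checkpoint"
MyExpRunnerWithExporter().run()    # "custom checkpoint with export"
```

Because the base runner only depends on the builder method's return value, no other code changes when the component is swapped.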

distribution_strategy: A distribution strategy.
task: A Task instance.
mode: A str specifying the mode. Can be 'train', 'eval', 'train_and_eval' or 'continuous_eval'.
params: An ExperimentConfig instance.
model_dir: A str path for storing model checkpoints and summaries.
run_post_eval: Whether to run evaluation once after training; if True, its metrics logs are returned.
save_summary: Whether to save train and validation summaries.
train_actions: Optional list of Orbit train actions.
eval_actions: Optional list of Orbit eval actions.
trainer: The base_trainer.Trainer instance. It should be created within strategy.scope().
controller_cls: The controller class that manages the train and eval process. Must be an orbit.Controller subclass.
summary_manager: A summary manager instance that overrides the default summary manager.
eval_summary_manager: An eval summary manager instance that overrides the default eval summary manager.
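The `mode` argument selects which loops the experiment runs. A self-contained sketch of how the four documented mode strings could map to loops (stand-in dispatch logic, not the real implementation; the real runner drives these loops through the Orbit controller):

```python
VALID_MODES = ('train', 'eval', 'train_and_eval', 'continuous_eval')


def loops_for_mode(mode):
    """Stand-in dispatch: maps a documented mode string to the loops it runs."""
    if mode not in VALID_MODES:
        raise ValueError(f"mode must be one of {VALID_MODES}, got {mode!r}")
    return {
        'train': ['train'],
        'eval': ['eval'],
        'train_and_eval': ['train', 'eval'],
        'continuous_eval': ['eval'],  # repeatedly evaluates as checkpoints appear
    }[mode]


loops_for_mode('train_and_eval')  # ['train', 'eval']
```

Validating the mode string up front, as sketched here, turns a typo into an immediate error rather than a silently idle job.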

checkpoint_manager: The CheckpointManager that stores the checkpoints of a training job.
controller: The Orbit controller object.
model_dir: Path to the model directory, which stores checkpoints, params, logs, etc.
params: The whole experiment parameters object.
trainer: The underlying Orbit Trainer object.




Runs the experiment in the given mode.

A 2-tuple of (model, eval_logs): model is a tf.keras.Model instance; eval_logs contains the eval metrics logs when run_post_eval is set to True, otherwise it is {}.
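Callers should handle both shapes of the return value, since eval_logs is {} unless run_post_eval is True. A self-contained sketch of that handling (the helper name `consume_run_result` is hypothetical, and the float() coercion assumes metric values may arrive as tensor-like scalars):

```python
def consume_run_result(result):
    """Unpacks the (model, eval_logs) 2-tuple returned by run()."""
    model, eval_logs = result
    if eval_logs:  # non-empty only when run_post_eval=True
        # Coerce metric values to plain floats for logging/serialization.
        return {name: float(value) for name, value in eval_logs.items()}
    return {}  # run_post_eval=False: nothing to report


consume_run_result((object(), {"accuracy": 0.91}))  # {'accuracy': 0.91}
consume_run_result((object(), {}))                  # {}
```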