Condition that is satisfied when a new best metric is achieved.
```python
orbit.actions.NewBestMetric(
    metric: Union[str, MetricFn],
    higher_is_better: bool = True,
    filename: Optional[str] = None,
    write_metric=True
)
```
This class keeps track of the best metric value seen so far, optionally in a persistent (preemption-safe) way.

Two methods are provided, each of which satisfies the `Action` protocol: `test`, which only tests whether a new best metric is achieved by a given train/eval output, and `commit`, which both tests and records the new best metric value if it is achieved. These separate methods enable the same `NewBestMetric` instance to be reused as a condition multiple times, and can also provide additional preemption/failure safety. For example, to avoid updating the best metric if a model export fails or is preempted:
```python
new_best_metric = orbit.actions.NewBestMetric(
    'accuracy', filename='/model/dir/best_metric')
action = orbit.actions.ConditionalAction(
    condition=new_best_metric.test,
    action=[
        orbit.actions.ExportSavedModel(...),
        new_best_metric.commit
    ])
```
The default `__call__` implementation is equivalent to `commit`.
This class is safe to use in multi-client settings if all clients can be guaranteed to compute the same metric. However, when saving metrics it may be helpful to avoid unnecessary writes by setting the `write_metric` parameter to `False` for most clients.
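As a minimal sketch of that setup (assuming a chief/worker arrangement; how the task index is obtained is left as a placeholder):

```python
import orbit

# Assumption: each client knows its task index; index 0 is the chief.
task_id = 0  # placeholder; obtain from your cluster configuration
is_chief = task_id == 0

new_best_metric = orbit.actions.NewBestMetric(
    'accuracy',
    filename='/model/dir/best_metric',
    # Only the chief writes updated best values; other clients just read
    # the file to obtain the initial value.
    write_metric=is_chief)
```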
| Args | |
|---|---|
| `metric` | Either a string key name to use to look up a metric (assuming the train/eval output is a dictionary), or a callable that accepts the train/eval output and returns a metric value. |
| `higher_is_better` | Whether higher metric values are better. If `True`, a new best metric is achieved when the metric value is strictly greater than the previous best metric. If `False`, a new best metric is achieved when the metric value is strictly less than the previous best metric. |
| `filename` | A filename to use for storage of the best metric value seen so far, to allow persistence of the value across preemptions. If `None` (the default), values aren't persisted. |
| `write_metric` | If `filename` is set, this controls whether this instance will write new best metric values to the file, or just read from the file to obtain the initial value. Setting this to `False` for most clients in some multi-client setups can avoid unnecessary file writes. Has no effect if `filename` is `None`. |
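To illustrate the two accepted forms of `metric`, here is a brief sketch (the output keys are hypothetical):

```python
import orbit

# String form: the value is looked up in the train/eval output dictionary.
best_accuracy = orbit.actions.NewBestMetric('accuracy')

# Callable form: the value is computed directly from the output. Here lower
# loss is better, so higher_is_better is set to False.
best_loss = orbit.actions.NewBestMetric(
    metric=lambda output: float(output['loss']),
    higher_is_better=False)
```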
| Attributes | |
|---|---|
| `metric` | The metric passed to `__init__` (may be a string key or a callable that can be applied to train/eval output). |
| `higher_is_better` | Whether higher metric values are better. |
| `best_value` | Returns the best metric value seen so far. |
## Methods

### commit

```python
commit(
    output: runner.Output
) -> bool
```
Tests `output` and updates the current best value if necessary.

Unlike `test` below, if `output` does contain a new best metric value, this method does save it (i.e., subsequent calls to this method with the same `output` will return `False`).
| Args | |
|---|---|
| `output` | The train or eval output to test. |

| Returns | |
|---|---|
| `True` if `output` contains a new best metric value, `False` otherwise. | |
### metric_value

```python
metric_value(
    output: runner.Output
) -> float
```

Computes the metric value for the given `output`.
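For example (the output dictionary here is hypothetical):

```python
import orbit

new_best_metric = orbit.actions.NewBestMetric('accuracy')
value = new_best_metric.metric_value({'accuracy': 0.92})  # returns 0.92
```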
### test

```python
test(
    output: runner.Output
) -> bool
```

Tests `output` to see if it contains a new best metric value.

If `output` does contain a new best metric value, this method does not save it (i.e., calling this method multiple times in a row with the same `output` will continue to return `True`).
| Args | |
|---|---|
| `output` | The train or eval output to test. |

| Returns | |
|---|---|
| `True` if `output` contains a new best metric value, `False` otherwise. | |
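A short sketch contrasting `test` and `commit` (metric values are illustrative, and this assumes the initial best value for `higher_is_better=True` is low enough that any real value beats it):

```python
import orbit

new_best_metric = orbit.actions.NewBestMetric('accuracy')
output = {'accuracy': 0.9}

assert new_best_metric.test(output)      # True, and the best value is unchanged
assert new_best_metric.test(output)      # still True for the same output
assert new_best_metric.commit(output)    # True, and 0.9 is recorded as the best
assert not new_best_metric.test(output)  # False: 0.9 is not strictly greater
```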
### `__call__`

```python
__call__(
    output: runner.Output
) -> bool
```

Tests `output` and updates the current best value if necessary.

This is equivalent to `commit` above.
| Args | |
|---|---|
| `output` | The train or eval output to test. |

| Returns | |
|---|---|
| `True` if `output` contains a new best metric value, `False` otherwise. | |