Estimator with TPU support.
Inherits From: Estimator
tf.compat.v1.estimator.tpu.TPUEstimator(
    model_fn=None, model_dir=None, config=None, params=None, use_tpu=True,
    train_batch_size=None, eval_batch_size=None, predict_batch_size=None,
    batch_axis=None, eval_on_tpu=True, export_to_tpu=True, export_to_cpu=True,
    warm_start_from=None, embedding_config_spec=None,
    export_saved_model_api_version=ExportSavedModelApiVersion.V1
)
TPUEstimator also supports training on CPU and GPU. You don't need to define
a separate tf.estimator.Estimator.
TPUEstimator handles many of the details of running on TPU devices, such as replicating inputs and models for each core, and returning to host periodically to run hooks.
TPUEstimator transforms a global batch size in params to a per-shard batch
size when calling the input_fn and model_fn. Users should specify the global
batch size in the constructor, and then get the batch size for each shard in
input_fn and model_fn from params['batch_size'] (see the sketch below).
- For training, model_fn gets the per-core batch size; input_fn may get the
  per-core or per-host batch size depending on per_host_input_for_training in
  TPUConfig (see the TPUConfig docstring for details).
- For evaluation and prediction, model_fn gets the per-core batch size and
  input_fn gets the per-host batch size.
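For illustration, here is a minimal sketch, not taken from the original example code, of reading the per-shard batch size from params (the dataset contents are placeholders):

import tensorflow as tf

def train_input_fn(params):
  # params['batch_size'] is the per-shard size TPUEstimator derives from the
  # global train_batch_size passed to the constructor.
  batch_size = params['batch_size']
  images = tf.random.uniform([1024, 28, 28, 1])   # placeholder data
  labels = tf.zeros([1024], dtype=tf.int32)       # placeholder labels
  dataset = tf.data.Dataset.from_tensor_slices((images, labels))
  return dataset.repeat().batch(batch_size, drop_remainder=True)

def model_fn(features, labels, mode, params):
  batch_size = params['batch_size']   # per-core batch size during training
  ...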
Evaluation
model_fn should return TPUEstimatorSpec, which expects the eval_metrics for
TPU evaluation. If eval_on_tpu is False, the evaluation will execute on CPU
or GPU; in this case the following discussion on TPU evaluation does not
apply.

TPUEstimatorSpec.eval_metrics is a tuple of metric_fn and tensors, where
tensors could be a list of any nested structure of Tensors (see
TPUEstimatorSpec for details). metric_fn takes the tensors and returns a dict
from metric string name to the result of calling a metric function, namely a
(metric_tensor, update_op) tuple.
One can set use_tpu to False for testing. All training, evaluation, and
prediction will be executed on CPU. input_fn and model_fn will receive
train_batch_size or eval_batch_size unmodified as params['batch_size'].
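For example, a hedged construction sketch for CPU/GPU testing (my_model_fn and the paths below are placeholders, not values from this page):

import tensorflow as tf

# The same TPUEstimator can be exercised on CPU/GPU by flipping use_tpu.
run_config = tf.compat.v1.estimator.tpu.RunConfig(model_dir='/tmp/test_model')
estimator = tf.compat.v1.estimator.tpu.TPUEstimator(
    model_fn=my_model_fn,       # placeholder model_fn
    config=run_config,
    use_tpu=False,              # run everything on CPU/GPU
    train_batch_size=64,        # passed through unmodified as params['batch_size']
    eval_batch_size=64)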
Current limitations:
- TPU evaluation only works on a single host (one TPU worker), except in
  BROADCAST mode.
- input_fn for evaluation should NOT raise an end-of-input exception
  (OutOfRangeError or StopIteration), and all evaluation steps and all
  batches should have the same size.
Example (MNIST):
# The metric Fn which runs on CPU.
def metric_fn(labels, logits):
  predictions = tf.argmax(logits, 1)
  return {
      'accuracy': tf.compat.v1.metrics.accuracy(
          labels=labels, predictions=predictions),
  }
# Your model Fn which runs on TPU (eval_metrics is a list in this example)
def model_fn(features, labels, mode, config, params):
  ...
  logits = ...

  if mode == tf.estimator.ModeKeys.EVAL:
    return tpu_estimator.TPUEstimatorSpec(
        mode=mode,
        loss=loss,
        eval_metrics=(metric_fn, [labels, logits]))

# or specify the eval_metrics tensors as a dict.
def model_fn(features, labels, mode, config, params):
  ...
  final_layer_output = ...

  if mode == tf.estimator.ModeKeys.EVAL:
    return tpu_estimator.TPUEstimatorSpec(
        mode=mode,
        loss=loss,
        eval_metrics=(metric_fn, {
            'labels': labels,
            'logits': final_layer_output,
        }))
Prediction
Prediction on TPU is an experimental feature to support large batch inference.
It is not designed for latency-critical systems. In addition, due to some
usability issues, for prediction with a small dataset, CPU-based .predict,
i.e., creating a new TPUEstimator instance with use_tpu=False, might be more
convenient.
Current limitations:
- TPU prediction only works on a single host (one TPU worker).
- input_fn must return a Dataset instance rather than features. In fact,
  .train() and .evaluate() also support Dataset as a return value.
Example (MNIST):
height = 32
width = 32
total_examples = 100

def predict_input_fn(params):
  batch_size = params['batch_size']

  images = tf.random.uniform(
      [total_examples, height, width, 3], minval=-1, maxval=1)

  dataset = tf.data.Dataset.from_tensor_slices(images)
  dataset = dataset.map(lambda images: {'image': images})
  dataset = dataset.batch(batch_size)
  return dataset

def model_fn(features, labels, params, mode):
  # Generate predictions, called 'output', from features['image']

  if mode == tf.estimator.ModeKeys.PREDICT:
    return tf.compat.v1.estimator.tpu.TPUEstimatorSpec(
        mode=mode,
        predictions={
            'predictions': output,
            'is_padding': features['is_padding']
        })

tpu_est = TPUEstimator(
    model_fn=model_fn,
    ...,
    predict_batch_size=16)

# Fully consume the generator so that TPUEstimator can shut down the TPU
# system.
for item in tpu_est.predict(input_fn=predict_input_fn):
  # Filter out item if `is_padding` is 1.
  # Process the 'predictions'
Exporting
export_saved_model exports 2 metagraphs, one with saved_model.SERVING, and
another with saved_model.SERVING and saved_model.TPU tags. At serving time,
these tags are used to select the appropriate metagraph to load.

Before running the graph on TPU, the TPU system needs to be initialized. If
TensorFlow Serving model-server is used, this is done automatically. If not,
please use session.run(tpu.initialize_system()).
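If initializing manually, a minimal sketch could look like the following (the session target tpu_grpc_url is an assumption, not from this page):

import tensorflow as tf

# Initialize the TPU system before running the TPU-tagged metagraph when
# TensorFlow Serving is not managing it; tpu_grpc_url is a placeholder.
with tf.compat.v1.Session(target=tpu_grpc_url) as sess:
  sess.run(tf.compat.v1.tpu.initialize_system())
  # ... load the SavedModel tagged with serve/tpu and run inference ...
  sess.run(tf.compat.v1.tpu.shutdown_system())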
There are two versions of the API: V1 and V2 (selected via
export_saved_model_api_version).
In V1, the exported CPU graph is model_fn as it is. The exported TPU graph
wraps tpu.rewrite() and TPUPartitionedCallOp around model_fn, so model_fn is
on TPU by default. To place ops on CPU,
tpu.outside_compilation(host_call, logits) can be used.
Example:
def model_fn(features, labels, mode, config, params):
  ...
  logits = ...
  export_outputs = {
      'logits': export_output_lib.PredictOutput({'logits': logits})
  }

  def host_call(logits):
    class_ids = math_ops.argmax(logits)
    classes = string_ops.as_string(class_ids)
    export_outputs['classes'] = export_output_lib.ClassificationOutput(
        classes=classes)

  tpu.outside_compilation(host_call, logits)
  ...
In V2, export_saved_model() sets up the params['use_tpu'] flag to let the
user know whether the code is exporting to TPU (or not). When
params['use_tpu'] is True, users need to call tpu.rewrite(),
TPUPartitionedCallOp and/or batch_function(). Alternatively, use
inference_on_tpu(), which is a convenience wrapper for the three.
def model_fn(features, labels, mode, config, params):
  ...
  # This could be some pre-processing on CPU like calls to input layer with
  # embedding columns.
  x2 = features['x'] * 2

  def computation(input_tensor):
    return layers.dense(
        input_tensor, 1, kernel_initializer=init_ops.zeros_initializer())

  inputs = [x2]
  if params['use_tpu']:
    predictions = array_ops.identity(
        tpu_estimator.inference_on_tpu(
            computation, inputs,
            num_batch_threads=1, max_batch_size=2, batch_timeout_micros=100),
        name='predictions')
  else:
    predictions = array_ops.identity(
        computation(*inputs), name='predictions')
  key = signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY
  export_outputs = {
      key: export_lib.PredictOutput({'prediction': predictions})
  }
  ...
TIP: V2 is recommended as it is more flexible (e.g., batching).
| Args | |
|---|---|
| model_fn | Model function as required by Estimator which returns EstimatorSpec or TPUEstimatorSpec. training_hooks, evaluation_hooks, and prediction_hooks must not capture any TPU Tensor inside the model_fn. |
| model_dir | Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model. If None, the model_dir in config will be used if set. If both are set, they must be the same. If both are None, a temporary directory will be used. |
| config | A tpu_config.RunConfig configuration object. Cannot be None. |
| params | An optional dict of hyperparameters that will be passed into input_fn and model_fn. Keys are names of parameters, values are basic python types. There are reserved keys for TPUEstimator, including 'batch_size'. |
| use_tpu | A bool indicating whether TPU support is enabled. Currently, TPU training and evaluation respect this bit, but eval_on_tpu can override execution of eval. See below. |
| train_batch_size | An int representing the global training batch size. TPUEstimator transforms this global batch size to a per-shard batch size, as params['batch_size'], when calling input_fn and model_fn. Cannot be None if use_tpu is True. Must be divisible by the total number of replicas. |
| eval_batch_size | An int representing the evaluation batch size. Must be divisible by the total number of replicas. |
| predict_batch_size | An int representing the prediction batch size. Must be divisible by the total number of replicas. |
| batch_axis | A python tuple of int values describing how each tensor produced by the Estimator input_fn should be split across the TPU compute shards. For example, if your input_fn produced (images, labels) where the images tensor is in HWCN format, your shard dimensions would be [3, 0], where 3 corresponds to the N dimension of your images Tensor, and 0 corresponds to the dimension along which to split the labels to match up with the corresponding images. If None is supplied, and per_host_input_for_training is True, batches will be sharded based on the major dimension. If tpu_config.per_host_input_for_training is False or PER_HOST_V2, batch_axis is ignored. |
| eval_on_tpu | If False, evaluation runs on CPU or GPU. In this case, the model_fn must return EstimatorSpec when called with mode as EVAL. |
| export_to_tpu | If True, export_saved_model() exports a metagraph for serving on TPU. Note that unsupported export modes such as EVAL will be ignored. For those modes, only a CPU model will be exported. Currently, export_to_tpu only supports PREDICT. |
| export_to_cpu | If True, export_saved_model() exports a metagraph for serving on CPU. |
| warm_start_from | Optional string filepath to a checkpoint or SavedModel to warm-start from, or a tf.estimator.WarmStartSettings object to fully configure warm-starting. If the string filepath is provided instead of a WarmStartSettings, then all variables are warm-started, and it is assumed that vocabularies and Tensor names are unchanged. |
| embedding_config_spec | Optional EmbeddingConfigSpec instance to support using TPU embedding. |
| export_saved_model_api_version | An integer: 1 or 2. 1 corresponds to V1, 2 corresponds to V2 (defaults to V1). With V1, export_saved_model() adds rewrite() and TPUPartitionedCallOp() for the user; with V2, the user is expected to add rewrite(), TPUPartitionedCallOp(), etc. in their model_fn. A helper function, inference_on_tpu, is provided for V2. brn_tpu_estimator.py includes examples for both versions, i.e. TPUEstimatorExportTest and TPUEstimatorExportV2Test. |
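To show how these arguments typically fit together, here is a hedged construction sketch; the TPU name, my_model_fn, paths, and batch sizes below are illustrative assumptions, not values from this page:

import tensorflow as tf

# Hedged sketch of a TPU-enabled TPUEstimator; all concrete values are
# placeholders.
cluster = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='my-tpu')
run_config = tf.compat.v1.estimator.tpu.RunConfig(
    cluster=cluster,
    model_dir='/tmp/model',
    tpu_config=tf.compat.v1.estimator.tpu.TPUConfig(
        iterations_per_loop=100, num_shards=8))
estimator = tf.compat.v1.estimator.tpu.TPUEstimator(
    model_fn=my_model_fn,        # placeholder model_fn
    config=run_config,
    use_tpu=True,
    train_batch_size=1024,       # global batch size; divisible by the 8 replicas
    eval_batch_size=1024,
    predict_batch_size=1024)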
| Raises | |
|---|---|
| ValueError | params has reserved keys already. |
| Attributes | |
|---|---|
| config | |
| model_dir | |
| model_fn | Returns the model_fn which is bound to self.params. |
| params | |
Methods
eval_dir
eval_dir(
    name=None
)
Shows the directory name where evaluation metrics are dumped.
| Args | |
|---|---|
| name | Name of the evaluation if the user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in TensorBoard. |
| Returns |
|---|
| A string which is the path of the directory containing evaluation metrics. |
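A small usage sketch (the evaluation name is illustrative, and estimator is a previously constructed TPUEstimator):

# Locate the metrics written by a named evaluation run.
metrics_dir = estimator.eval_dir(name='validation')
print(metrics_dir)   # typically <model_dir>/eval_validation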
evaluate
evaluate(
    input_fn, steps=None, hooks=None, checkpoint_path=None, name=None
)