
The BulkInferrer TFX Pipeline Component

The BulkInferrer TFX component performs offline batch inference on unlabelled data using a trained model. The generated InferenceResult (tensorflow_serving.apis.prediction_log_pb2.PredictionLog) contains the original features and the prediction results.

BulkInferrer consumes:

* A trained model in SavedModel format.
* A model validation result from the ModelValidator component.
* Unlabelled tf.Examples that contain features.

BulkInferrer emits:

* InferenceResult
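
The emitted InferenceResult is written as serialized PredictionLog records. As a rough sketch of how the output might be inspected downstream, assuming it is stored as gzipped TFRecord files under a hypothetical artifact URI (in a real pipeline, resolve the URI from the metadata store rather than hard-coding it):

import tensorflow as tf
from tensorflow_serving.apis import prediction_log_pb2

# Hypothetical output location of the BulkInferrer component; resolve the
# real URI from the pipeline's metadata store.
inference_result_uri = '/path/to/pipeline_root/BulkInferrer/inference_result/1'

# Assumes the output is gzipped TFRecord files of serialized PredictionLog
# protos (the loop below requires eager execution, i.e. TF 2.x).
dataset = tf.data.TFRecordDataset(
    tf.io.gfile.glob(inference_result_uri + '/*.gz'),
    compression_type='GZIP')

for record in dataset.take(3):
  prediction_log = prediction_log_pb2.PredictionLog.FromString(record.numpy())
  print(prediction_log)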

Using the BulkInferrer Component

A BulkInferrer TFX component is used to perform batch inference on unlabelled tf.Examples. It is typically deployed after a ModelValidator component to perform inference with a validated model, or after a Trainer component to perform inference directly on an exported model.

Typical code looks like this:

from tfx import components
from tfx.proto import bulk_inferrer_pb2

...

bulk_inferrer = components.BulkInferrer(
    examples=examples_gen.outputs['examples'],
    model=trainer.outputs['model'],
    model_blessing=model_validator.outputs['blessing'],
    data_spec=bulk_inferrer_pb2.DataSpec(),
    model_spec=bulk_inferrer_pb2.ModelSpec())