The BulkInferrer TFX component performs batch inference on unlabeled data. The generated InferenceResult, which contains tensorflow_serving.apis.prediction_log_pb2.PredictionLog protos, pairs the original features with the prediction results.
BulkInferrer consumes:
- A trained model in SavedModel format.
- Unlabeled tf.Examples that contain features.
- (Optional) A validation result from an Evaluator component.
Using the BulkInferrer Component
A BulkInferrer TFX component is used to perform batch inference on unlabeled tf.Examples. It is typically deployed after an Evaluator component to perform inference with a validated model, or after a Trainer component to directly perform inference on an exported model.
It currently supports both in-memory model inference and remote inference. Remote inference requires the model to be hosted on Cloud AI Platform.
Typical code looks like this:
```python
from tfx.components import BulkInferrer
from tfx.proto import bulk_inferrer_pb2

bulk_inferrer = BulkInferrer(
    examples=examples_gen.outputs['examples'],
    model=trainer.outputs['model'],
    model_blessing=evaluator.outputs['blessing'],
    data_spec=bulk_inferrer_pb2.DataSpec(),
    model_spec=bulk_inferrer_pb2.ModelSpec()
)
```
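An empty DataSpec, as above, runs inference over every split emitted by ExampleGen. A DataSpec with example_splits set restricts inference to the named splits. This is a configuration fragment, not a runnable program; the split name 'unlabelled' is a hypothetical example and must match a split actually produced by your ExampleGen configuration.

```python
from tfx.proto import bulk_inferrer_pb2

# Run inference only on the 'unlabelled' split (hypothetical split name;
# replace with a split defined in your ExampleGen's output config).
data_spec = bulk_inferrer_pb2.DataSpec(example_splits=['unlabelled'])
```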
More details are available in the BulkInferrer API reference.