tfx_bsl.public.beam.run_inference.CreateModelHandler

Creates a Beam ModelHandler based on the inference spec type.

There are two model handlers:

  1. In-process inference from a SavedModel instance. Used when the saved_model_spec field is set in inference_spec_type.
  2. Remote inference through a service endpoint. Used when the ai_platform_prediction_model_spec field is set in inference_spec_type. (Constructing each spec is sketched below.)
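
A minimal sketch of building each kind of spec, assuming the model_spec_pb2 protos from tfx_bsl.public.proto; the model path and Cloud AI Platform resource names are placeholders:

```python
from tfx_bsl.public.proto import model_spec_pb2

# In-process inference: point saved_model_spec at an exported SavedModel
# directory (placeholder path).
local_spec = model_spec_pb2.InferenceSpecType(
    saved_model_spec=model_spec_pb2.SavedModelSpec(
        model_path='/tmp/exported_model'))

# Remote inference: address a deployed Cloud AI Platform model
# (placeholder project/model/version names).
remote_spec = model_spec_pb2.InferenceSpecType(
    ai_platform_prediction_model_spec=(
        model_spec_pb2.AIPlatformPredictionModelSpec(
            project_id='my-gcp-project',
            model_name='my_model',
            version_name='v1')))
```

Either spec can then be passed to CreateModelHandler: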

```python
from apache_beam.ml.inference import base

tf_handler = CreateModelHandler(inference_spec_type)

# Unkeyed:
base.RunInference(tf_handler)

# Keyed:
base.RunInference(base.KeyedModelHandler(tf_handler))
```
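
For context, a minimal end-to-end sketch of the keyed form, assuming the handler consumes tf.train.Example protos; the SavedModel path and record key are placeholders:

```python
import apache_beam as beam
import tensorflow as tf
from apache_beam.ml.inference import base
from tfx_bsl.public.beam.run_inference import CreateModelHandler
from tfx_bsl.public.proto import model_spec_pb2

# Illustrative spec; the SavedModel path is a placeholder.
inference_spec_type = model_spec_pb2.InferenceSpecType(
    saved_model_spec=model_spec_pb2.SavedModelSpec(
        model_path='/tmp/exported_model'))
tf_handler = CreateModelHandler(inference_spec_type)

with beam.Pipeline() as pipeline:
    _ = (
        pipeline
        # Keyed input: (key, tf.train.Example) pairs.
        | beam.Create([('example-0', tf.train.Example())])
        | base.RunInference(base.KeyedModelHandler(tf_handler))
        # Each result is assumed to be a (key, prediction) pair.
        | beam.Map(print))
```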

Args:
  inference_spec_type: Model inference endpoint.

Returns:
  A Beam RunInference ModelHandler for TensorFlow.