tfx_bsl.public.beam.RunInference


Run inference with a model.

There are two types of inference you can perform using this PTransform:

  1. In-process inference from a SavedModel instance. Used when the saved_model_spec field is set in inference_spec_type.
  2. Remote inference via a service endpoint. Used when the ai_platform_prediction_model_spec field is set in inference_spec_type.

Args

examples: A PCollection containing examples of one of the following kinds, each mapped to its corresponding output type.

  • PCollection[Example] -> PCollection[PredictionLog]

    • Works with Classify, Regress, MultiInference, Predict and RemotePredict.
  • PCollection[SequenceExample] -> PCollection[PredictionLog]

    • Works with Predict and (serialized) RemotePredict.
  • PCollection[bytes] -> PCollection[PredictionLog]

    • For serialized Example: Works with Classify, Regress, MultiInference, Predict and RemotePredict.

    • For everything else: Works with Predict and RemotePredict.

  • PCollection[Tuple[K, Example]] -> PCollection[Tuple[K, PredictionLog]]

    • Works with Classify, Regress, MultiInference, Predict and RemotePredict.
  • PCollection[Tuple[K, SequenceExample]] -> PCollection[Tuple[K, PredictionLog]]

    • Works with Predict and (serialized) RemotePredict.
  • PCollection[Tuple[K, bytes]] -> PCollection[Tuple[K, PredictionLog]]

    • For serialized Example: Works with Classify, Regress, MultiInference, Predict and RemotePredict.
    • For everything else: Works with Predict and RemotePredict.
inference_spec_type: Specification of the model inference endpoint.
load_override_fn: Optional function that takes a model path and a sequence of tags and returns a TF SavedModel. The loaded model must be equivalent in interface to the model that would otherwise be loaded; it is up to the caller to ensure compatibility. This argument is experimental and subject to change.
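One possible shape for such an override is a memoizing loader. This is a hypothetical sketch, not part of the API; it assumes the default-loaded model's interface is compatible with what tf.saved_model.load returns, which you should verify against the tfx_bsl version you run:

```python
import tensorflow as tf

_MODEL_CACHE = {}


def cached_load_override_fn(model_path, tags):
    """Hypothetical load_override_fn: memoizes loaded SavedModels by path and tags."""
    key = (model_path, tuple(tags or ()))
    if key not in _MODEL_CACHE:
        # tags is falsy for the default serving graph; pass None in that case.
        _MODEL_CACHE[key] = tf.saved_model.load(model_path, tags=tags or None)
    return _MODEL_CACHE[key]
```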

Returns

A PCollection (possibly keyed) containing prediction logs.