
tfx_bsl.public.beam.RunInference

Run inference with a model.

There are two types of inference you can perform using this PTransform:

  1. In-process inference from a SavedModel instance. Used when the saved_model_spec field is set in inference_spec_type.
  2. Remote inference via a service endpoint. Used when the ai_platform_prediction_model_spec field is set in inference_spec_type.

Support for the following is planned but not yet implemented:

  • tf.train.SequenceExample as input for RemotePredict.
  • beam.Shared() initialization via fingerprint for model CSE.
  • Models as side inputs.
  • TPU models.

Args

examples: A PCollection containing examples of one of the following kinds, each with its corresponding return type.

  • PCollection[Example] -> PCollection[PredictionLog]
    Works with Classify, Regress, MultiInference, Predict and RemotePredict.

  • PCollection[SequenceExample] -> PCollection[PredictionLog]
    Works with Predict and (serialized) RemotePredict.

  • PCollection[bytes] -> PCollection[PredictionLog]
    For serialized Example: works with Classify, Regress, MultiInference, Predict and RemotePredict.
    For everything else: works with Predict and RemotePredict.

  • PCollection[Tuple[K, Example]] -> PCollection[Tuple[K, PredictionLog]]
    Works with Classify, Regress, MultiInference, Predict and RemotePredict.

  • PCollection[Tuple[K, SequenceExample]] -> PCollection[Tuple[K, PredictionLog]]
    Works with Predict and (serialized) RemotePredict.

  • PCollection[Tuple[K, bytes]] -> PCollection[Tuple[K, PredictionLog]]
    For serialized Example: works with Classify, Regress, MultiInference, Predict and RemotePredict.
    For everything else: works with Predict and RemotePredict.

inference_spec_type: Model inference endpoint.

Returns

A PCollection (possibly keyed) containing prediction logs.