Module: tfx_bsl.public.beam.run_inference

Public API for batch inference with Apache Beam.


model_spec_pb2 module: Generated protocol buffer code.


class ModelHandler: Loads an ML model and applies it to batches of examples.


CreateModelHandler(...): Creates a Beam ModelHandler based on the inference spec type.

RunInference(...): Run inference over a PCollection of examples with a single model.
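A minimal sketch of attaching RunInference to a Beam pipeline. The model path, example data, and the `build_inference_pipeline` helper are illustrative placeholders, and the imports are guarded so the sketch degrades gracefully when apache-beam or tfx-bsl is not installed:

```python
# Hedged sketch: model_path and serialized_examples are placeholders,
# not part of the original doc.
try:
    import apache_beam as beam
    from tfx_bsl.public.beam import RunInference
    from tfx_bsl.public.proto import model_spec_pb2
    HAVE_DEPS = True
except ImportError:
    # apache-beam / tfx-bsl not installed; the sketch below still parses.
    HAVE_DEPS = False


def build_inference_pipeline(pipeline, serialized_examples, model_path):
    """Attaches a RunInference stage to an existing Beam pipeline.

    Inputs are serialized tf.Example bytes; the model is an exported
    SavedModel addressed by its path in the inference spec proto.
    """
    inference_spec = model_spec_pb2.InferenceSpecType(
        saved_model_spec=model_spec_pb2.SavedModelSpec(model_path=model_path))
    return (
        pipeline
        | 'CreateExamples' >> beam.Create(serialized_examples)
        | 'RunInference' >> RunInference(inference_spec))
```

The same `InferenceSpecType` proto also drives `CreateModelHandler`, which adapts the spec into a Beam `ModelHandler` for use with Beam's own inference transforms.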

RunInferenceOnKeyedBatches(...): Run inference over pre-batched keyed inputs.
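"Pre-batched keyed inputs" means each element already has the shape `(key, [examples])` before it reaches the transform. A plain-Python sketch (no Beam) of producing that shape; the `to_keyed_batches` helper is illustrative and not part of the library:

```python
from itertools import groupby
from operator import itemgetter


def to_keyed_batches(keyed_examples):
    """Groups (key, example) pairs into (key, [examples]) batches.

    Illustrative only: it mimics the element shape that
    RunInferenceOnKeyedBatches expects, i.e. batching happens
    upstream rather than inside the inference transform.
    """
    ordered = sorted(keyed_examples, key=itemgetter(0))
    return [(key, [ex for _, ex in group])
            for key, group in groupby(ordered, key=itemgetter(0))]


batches = to_keyed_batches([
    ('user_a', b'example-1'),
    ('user_b', b'example-2'),
    ('user_a', b'example-3'),
])
# batches == [('user_a', [b'example-1', b'example-3']),
#             ('user_b', [b'example-2'])]
```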

RunInferencePerModel(...): Vectorized variant of RunInference (useful for ensembles).

RunInferencePerModelOnKeyedBatches(...): Run inference over pre-batched keyed inputs on multiple models.
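The per-model variants fan each input out to several models and pair it with a tuple of predictions, one per model, in model order. A plain-Python sketch of that contract, with trivial callables standing in for real inference specs:

```python
def run_inference_per_model(examples, models):
    """For each example, returns a tuple of predictions, one per model,
    in the same order as `models`. Sketch only: it mirrors the output
    shape of RunInferencePerModel, not its implementation.
    """
    return [tuple(model(example) for model in models)
            for example in examples]


# Two stand-in "models": trivial callables, not real inference specs.
double = lambda x: 2 * x
square = lambda x: x * x

results = run_inference_per_model([1, 2, 3], (double, square))
# results == [(2, 1), (4, 4), (6, 9)]
```

This tuple-per-example shape is what makes the variant convenient for ensembles: downstream stages can combine the aligned predictions without re-joining on a key.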