Module: tfx_bsl.public.beam.run_inference


Public API for batch inference.


Modules

model_spec_pb2 module: Generated protocol buffer code.


Functions

RunInference(...): Run inference with a model.

RunInferenceOnKeyedBatches(...): Run inference over pre-batched keyed inputs.

RunInferencePerModel(...): Vectorized variant of RunInference (useful for ensembles).

RunInferencePerModelOnKeyedBatches(...): Run inference over pre-batched keyed inputs on multiple models.