Public API of batch inference.
Modules
model_spec_pb2 module: Generated protocol buffer code.
Classes
class ModelHandler: Has the ability to load and apply an ML model.
Functions
CreateModelHandler(...): Creates a Beam ModelHandler based on the inference spec type.
RunInference(...): Run inference with a model.
RunInferenceOnKeyedBatches(...): Run inference over pre-batched keyed inputs.
RunInferencePerModel(...): Vectorized variant of RunInference (useful for ensembles).
RunInferencePerModelOnKeyedBatches(...): Run inference over pre-batched keyed inputs on multiple models.