Run inference over pre-batched keyed inputs on multiple models.
tfx_bsl.public.beam.run_inference.RunInferencePerModelOnKeyedBatches(
    examples: beam.pvalue.PCollection,
    inference_spec_types: Iterable[tfx_bsl.public.proto.model_spec_pb2.InferenceSpecType],
    load_override_fn: Optional[run_inference.LoadOverrideFnType] = None
) -> beam.pvalue.PCollection
This API is experimental and may change in the future.
Supports the same inference specs as RunInferencePerModel. Inputs must consist of a keyed list of examples, and outputs consist of a keyed list of prediction logs that correspond to the input examples by index.
Returns
A PCollection containing tuples of a key and lists of batched prediction logs from each model provided in inference_spec_types. The tuple of batched prediction logs is 1-1 aligned with inference_spec_types. The individual prediction logs in each batch are 1-1 aligned with the rows of data under the batch's key.
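The two alignment guarantees above can be illustrated schematically in plain Python. This is a mock sketch, not the real API: strings stand in for tf.train.Example and PredictionLog protos, and the hypothetical helpers fake_predict and run_inference_per_model_on_keyed_batches stand in for actual model inference and the Beam transform.

```python
# Schematic sketch of the keyed-batch I/O shapes (mock data, hypothetical helpers).

def fake_predict(model_name, examples):
    # Hypothetical stand-in for model inference: one mock "prediction log"
    # per input example, aligned 1-1 by index.
    return [f"{model_name}:{ex}" for ex in examples]

def run_inference_per_model_on_keyed_batches(keyed_batches, model_names):
    # Input: iterable of (key, [example, ...]) pairs.
    # Output: (key, (batch_for_model_0, batch_for_model_1, ...)) pairs.
    # The outer tuple is 1-1 aligned with model_names; each inner batch is
    # 1-1 aligned with the input examples under that key.
    for key, examples in keyed_batches:
        yield key, tuple(fake_predict(m, examples) for m in model_names)

batches = [("user_1", ["ex_a", "ex_b"]), ("user_2", ["ex_c"])]
results = dict(run_inference_per_model_on_keyed_batches(batches, ["m0", "m1"]))
print(results["user_1"])  # (['m0:ex_a', 'm0:ex_b'], ['m1:ex_a', 'm1:ex_b'])
```

In the real transform the same shapes apply, but the input is a Beam PCollection of keyed batches and the outputs are PredictionLog protos produced by each model in inference_spec_types.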