Run inference over pre-batched keyed inputs.
```python
tfx_bsl.public.beam.run_inference.RunInferenceOnKeyedBatches(
    examples: beam.pvalue.PCollection,
    inference_spec_type: tfx_bsl.public.proto.model_spec_pb2.InferenceSpecType,
    load_override_fn: Optional[run_inference.LoadOverrideFnType] = None
) -> beam.pvalue.PCollection
```
This API is experimental and may change in the future.
Supports the same inference specs as RunInference. Inputs must consist of a keyed list of examples, and outputs consist of a keyed list of prediction logs whose elements correspond to the input examples by index.
| Returns |
|---|
| A PCollection of `Tuple[K, List[PredictionLog]]`. |
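As an illustration of the input contract, the sketch below builds keyed batches of the expected element type, `Tuple[K, List[Example]]`, and shows (in comments) where this transform would be applied in a Beam pipeline. The keys, serialized payloads, and model path are hypothetical; the commented pipeline assumes `apache_beam` and `tfx_bsl` are installed and that a SavedModel exists at the given path.

```python
from typing import Dict, List, Tuple

# Hypothetical keyed inputs. In a real pipeline each bytes value would be a
# serialized tf.train.Example proto.
serialized_examples: Dict[str, List[bytes]] = {
    "user_1": [b"example_a", b"example_b"],
    "user_2": [b"example_c"],
}

# Each input element must be a (key, list-of-examples) pair.
keyed_batches: List[Tuple[str, List[bytes]]] = list(serialized_examples.items())

# Sketch of the pipeline wiring (not executed here):
#
#   import apache_beam as beam
#   from tfx_bsl.public.beam import run_inference
#   from tfx_bsl.public.proto import model_spec_pb2
#
#   spec = model_spec_pb2.InferenceSpecType(
#       saved_model_spec=model_spec_pb2.SavedModelSpec(
#           model_path="/path/to/saved_model"))  # hypothetical path
#
#   with beam.Pipeline() as p:
#       predictions = (
#           p
#           | beam.Create(keyed_batches)
#           | run_inference.RunInferenceOnKeyedBatches(spec))
#
# `predictions` is then a PCollection of Tuple[K, List[PredictionLog]], where
# the i-th PredictionLog in each output list corresponds to the i-th input
# example under the same key.
```

Because outputs correspond to inputs by index, each key's output list has the same length as its input list, so downstream code can zip them back together per key.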