Creates an extractor for performing predictions over a batch.
```
tfma.extractors.PredictionsExtractor(
    eval_config: tfma.EvalConfig,
    eval_shared_model: Optional[tfma.types.EvalSharedModel] = None,
    experimental_bulk_inference: bool = False,
    batch_size: Optional[int] = None
) -> extractor.Extractor
```
The extractor's PTransform loads and runs the serving saved_model(s) against every extract, yielding a copy of the incoming extracts with an additional entry for the predictions keyed by tfma.PREDICTIONS_KEY. The model inputs are searched for under tfma.FEATURES_KEY (Keras only) or tfma.INPUT_KEY (if tfma.FEATURES_KEY is not set or the model is non-Keras). If multiple models are used, the predictions are stored in a dict keyed by model name.
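As an illustration of the resulting extracts layout, the sketch below uses plain Python dicts standing in for tfma's Extracts type; the helper, key names, and toy "models" are assumptions for illustration only, not the library's actual implementation.

```python
# Illustrative sketch only: plain dicts stand in for tfma's Extracts,
# and the key strings mirror the tfma.PREDICTIONS_KEY / tfma.FEATURES_KEY
# constants. add_predictions() is a hypothetical helper, not the real API.
PREDICTIONS_KEY = "predictions"
FEATURES_KEY = "features"

def add_predictions(extracts, models):
    """Mimic the extractor: run each model over the features and attach
    the results under PREDICTIONS_KEY, keyed by model name when there is
    more than one model."""
    features = extracts[FEATURES_KEY]
    if len(models) == 1:
        # Single model: predictions are stored directly.
        (predict_fn,) = models.values()
        preds = predict_fn(features)
    else:
        # Multiple models: predictions are stored in a dict keyed by model name.
        preds = {name: fn(features) for name, fn in models.items()}
    out = dict(extracts)  # copy of the incoming extracts
    out[PREDICTIONS_KEY] = preds
    return out

extracts = {FEATURES_KEY: {"x": [1.0, 2.0]}}
models = {
    "model_a": lambda f: [v * 2 for v in f["x"]],
    "model_b": lambda f: [v + 1 for v in f["x"]],
}
result = add_predictions(extracts, models)
# result now holds both the original features and a per-model predictions dict.
```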
Note that the prediction_key set in the ModelSpec also serves as a key into the dict of the model's prediction outputs.
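To illustrate the prediction_key lookup, the sketch below uses a plain dict in place of a real model's output; the output names are hypothetical, not prescribed by the library.

```python
# Illustrative sketch: when a model emits multiple named outputs, the
# prediction_key configured in the ModelSpec selects which output to use.
# The output names "probabilities" and "logits" are hypothetical examples.
prediction_output = {"probabilities": [0.1, 0.9], "logits": [-2.2, 2.2]}

prediction_key = "probabilities"  # value that would be set in the ModelSpec
selected = prediction_output[prediction_key]
```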
Returns: Extractor for extracting predictions.