Bulk inferrer executor for inference on AI Platform.
```python
tfx.extensions.google_cloud_ai_platform.bulk_inferrer.executor.Executor(
    context: Optional[tfx.dsl.components.base.base_executor.BaseExecutor.Context] = None
)
```
```python
Do(
    input_dict: Dict[Text, List[tfx.types.Artifact]],
    output_dict: Dict[Text, List[tfx.types.Artifact]],
    exec_properties: Dict[Text, Any]
) -> None
```
Runs batch inference on a given model with given input examples.
This function creates a new model (if necessary) and a new model version before running inference, and cleans up these resources afterwards. It is safely re-executable: even if the inference job fails, it cleans up (only) the model resources that were created during the process.
Args:
  input_dict: Input dict from input key to a list of Artifacts.
  output_dict: Output dict from output key to a list of Artifacts.
  exec_properties: A dict of execution properties.
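For orientation, below is a minimal sketch of calling Do directly, outside a pipeline. In practice this executor is driven by the corresponding bulk inferrer component, which assembles these dicts; the artifact keys ('examples', 'model', 'inference_result'), the URIs, the 'custom_config' payload, and the project/model names here are all assumptions made for illustration, not the definitive contract.

```python
# A minimal sketch, not a definitive invocation. Artifact keys, URIs, and
# the exec_properties payload below are assumptions for illustration only.
import json

from tfx.extensions.google_cloud_ai_platform.bulk_inferrer import executor
from tfx.types import standard_artifacts

# Hypothetical input artifacts (uri values are placeholders).
examples = standard_artifacts.Examples()
examples.uri = 'gs://my-bucket/examples'
model = standard_artifacts.Model()
model.uri = 'gs://my-bucket/model'

# Hypothetical output artifact that will receive the inference results.
inference_result = standard_artifacts.InferenceResult()
inference_result.uri = 'gs://my-bucket/inference_result'

bulk_inferrer = executor.Executor()
bulk_inferrer.Do(
    input_dict={'examples': [examples], 'model': [model]},
    output_dict={'inference_result': [inference_result]},
    exec_properties={
        # Assumed layout: serving args for the temporary AI Platform
        # model/version that Do creates before inference and cleans up after.
        'custom_config': json.dumps({
            'ai_platform_serving_args': {
                'project_id': 'my-gcp-project',    # hypothetical project
                'model_name': 'bulk_infer_model',  # hypothetical model name
            }
        }),
    },
)
```

Because Do removes only the model resources it created, re-running the same invocation after a failed job does not leave stray models or versions behind.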