tfx.extensions.google_cloud_ai_platform.bulk_inferrer.executor.Executor

Bulk inferrer executor for inference on AI Platform.

Inherits From: Executor, BaseExecutor

Child Classes

class Context

Methods

Do


Runs batch inference on a given model with given input examples.

This function creates a new model (if necessary) and a new model version before running inference, and cleans up those resources afterwards. It is re-executable because it deletes only the model resources it created during the run, even if the inference job failed.

Args
input_dict Input dict from input key to a list of Artifacts.

  • examples: examples for inference.
  • model: exported model.
  • model_blessing: model blessing result.
output_dict Output dict from output key to a list of Artifacts.
  • output: bulk inference results.
exec_properties A dict of execution properties.
  • data_spec: JSON string of a bulk_inferrer_pb2.DataSpec instance.
  • custom_config: custom_config.ai_platform_serving_args needs to contain the serving job parameters sent to Google Cloud AI Platform. For the full set of parameters, refer to https://cloud.google.com/ml-engine/reference/rest/v1/projects.models

Returns
None
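
The shape of the `exec_properties` dict described above can be sketched as follows. This is an illustrative sketch only, not the authoritative TFX API: the concrete values (`model_name`, `project_id`, region) are placeholder assumptions, and `data_spec` is shown as an empty spec; in a real pipeline these values are produced by the TFX framework from the component's configuration.

```python
import json

# Assumed example serving args forwarded to Google Cloud AI Platform.
# See https://cloud.google.com/ml-engine/reference/rest/v1/projects.models
# for the full set of accepted parameters.
ai_platform_serving_args = {
    "model_name": "my_model",        # hypothetical model name
    "project_id": "my-gcp-project",  # hypothetical GCP project
    "regions": ["us-central1"],
}

# Sketch of the executor's exec_properties, per the Args list above:
# data_spec is a JSON-serialized bulk_inferrer_pb2.DataSpec (empty here
# for illustration), and custom_config nests the serving args under
# the ai_platform_serving_args key.
exec_properties = {
    "data_spec": json.dumps({}),
    "custom_config": json.dumps(
        {"ai_platform_serving_args": ai_platform_serving_args}
    ),
}

print(sorted(exec_properties))
```

In an actual pipeline you would not build this dict by hand; you would pass `custom_config` to the corresponding bulk inferrer component and let the orchestrator invoke `Do` with these properties.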