tfx.extensions.google_cloud_ai_platform.pusher.executor.Executor


Deploy a model to Google Cloud AI Platform serving.

Inherits From: Executor
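In a pipeline, this executor is typically supplied to the standard Pusher component through a custom executor spec rather than instantiated directly. The following is a minimal sketch of that wiring, assuming `trainer` and `evaluator` are upstream components already defined in the pipeline; the project, region, and model names are placeholders, and the exact executor_spec import path varies across TFX releases.

```python
from tfx.components import Pusher
from tfx.dsl.components.base import executor_spec  # older releases: tfx.components.base
from tfx.extensions.google_cloud_ai_platform.pusher import executor as ai_platform_pusher_executor

# Placeholder serving arguments; see the Cloud AI Platform docs linked below for the full set.
ai_platform_serving_args = {
    'model_name': 'my_model',
    'project_id': 'my-gcp-project',
    'regions': ['us-central1'],
}

pusher = Pusher(
    model=trainer.outputs['model'],                # `trainer` is an upstream Trainer component
    model_blessing=evaluator.outputs['blessing'],  # `evaluator` is an upstream Evaluator component
    custom_executor_spec=executor_spec.ExecutorClassSpec(
        ai_platform_pusher_executor.Executor),
    custom_config={
        'ai_platform_serving_args': ai_platform_serving_args,
    },
)
```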

Child Classes

class Context

Methods

CheckBlessing


Check that the model is blessed by upstream validators.

Args
input_dict Input dict from input key to a list of artifacts:

  • model_blessing: A ModelBlessing artifact from the model validator or evaluator. Pusher looks for a custom property blessed in the artifact to check whether the model is safe to push.
  • infra_blessing: An InfraBlessing artifact from the infra validator. Pusher looks for a custom property blessed in the artifact to determine whether the model is mechanically servable from the model server to which Pusher is going to push.

Returns
True if the model is blessed by the validator(s).
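For illustration, here is a minimal sketch (with placeholder values) of an input_dict that would satisfy this check, built from standard TFX artifact types; the exact artifact API may differ across TFX versions.

```python
from tfx.types import standard_artifacts

# Build a blessing artifact the way an upstream Evaluator/ModelValidator would publish it.
model_blessing = standard_artifacts.ModelBlessing()
model_blessing.uri = '/tmp/blessing'                  # placeholder URI
model_blessing.set_int_custom_property('blessed', 1)  # 1 = blessed, 0 = not blessed

input_dict = {
    'model_blessing': [model_blessing],
}
# CheckBlessing(input_dict) is expected to return True for this input.
```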

Do


Overrides the Do() method of the base tfx_pusher_executor.

Args
input_dict Input dict from input key to a list of artifacts, including:

  • model_export: exported model from trainer.
  • model_blessing: model blessing path from model_validator.
output_dict Output dict from key to a list of artifacts, including:

  • model_push: A list containing a single 'ModelPushPath' artifact. It will include the model in this push execution if the model was pushed.

exec_properties Mostly a passthrough input dict for tfx.components.Pusher.executor. custom_config.ai_platform_serving_args is consumed by this class. For the full set of parameters supported by Google Cloud AI Platform, refer to https://cloud.google.com/ml-engine/docs/tensorflow/deploying-models#creating_a_model_version

Raises
ValueError If ai_platform_serving_args is not in exec_properties.custom_config, or if the serving model path does not start with gs://.
RuntimeError If the deployment to Google Cloud AI Platform failed.
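Below is a sketch of how exec_properties might be laid out for this executor, with placeholder project, model, and region values; depending on the TFX release, custom_config may be passed as a JSON-serialized string rather than a plain dict.

```python
# Placeholder serving arguments consumed by Do(); additional keys
# (e.g. runtime_version) follow the Cloud AI Platform model-version API
# linked above.
ai_platform_serving_args = {
    'model_name': 'my_model',        # AI Platform model to create or update
    'project_id': 'my-gcp-project',  # GCP project hosting the model
    'regions': ['us-central1'],      # serving regions
}

exec_properties = {
    'custom_config': {
        'ai_platform_serving_args': ai_platform_serving_args,
    },
    # Remaining properties (e.g. push_destination) are passed through to the
    # base Pusher executor.
}
```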