tfx.components.Pusher

View source on GitHub

A TFX component to push validated TensorFlow models to a model serving platform.

Inherits From: BaseComponent

tfx.components.Pusher(
    model=None, model_blessing=None, infra_blessing=None, push_destination=None,
    custom_config=None, custom_executor_spec=None, output=None, model_export=None,
    instance_name=None
)

The Pusher component pushes a validated SavedModel from the output of the Trainer component to TensorFlow Serving. The Pusher checks the validation results from the ModelValidator component before deploying the model; if the model has not been blessed, it will not be pushed.

Note: The executor for this component can be overridden to push the model to serving platforms other than TensorFlow Serving. The Cloud AI Platform custom executor provides an example of how to implement this.

Example

  # Checks whether the model passed the validation steps and pushes the model
  # to a file destination if the check passed.
  from tfx.components import Pusher
  from tfx.proto import pusher_pb2

  pusher = Pusher(
      model=trainer.outputs['model'],
      model_blessing=model_validator.outputs['blessing'],
      push_destination=pusher_pb2.PushDestination(
          filesystem=pusher_pb2.PushDestination.Filesystem(
              base_directory=serving_model_dir)))
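As noted above, the default executor can be overridden to target other serving platforms. A sketch of using the Cloud AI Platform executor follows; the module paths and the `ai_platform_serving_args` keys are assumptions based on TFX releases of this era and may differ in your version:

```python
# Sketch only: pushing to Cloud AI Platform instead of a filesystem.
# Module paths and custom_config keys below are assumptions; check your
# installed TFX version. Assumes `trainer` and `model_validator` are
# upstream components defined elsewhere in the pipeline.
from tfx.components import Pusher
from tfx.components.base import executor_spec
from tfx.extensions.google_cloud_ai_platform.pusher import (
    executor as ai_platform_pusher_executor)

pusher = Pusher(
    model=trainer.outputs['model'],
    model_blessing=model_validator.outputs['blessing'],
    custom_executor_spec=executor_spec.ExecutorClassSpec(
        ai_platform_pusher_executor.Executor),
    custom_config={
        'ai_platform_serving_args': {
            'model_name': 'my_model',        # hypothetical model name
            'project_id': 'my-gcp-project',  # hypothetical GCP project
        }
    })
```

Because the custom executor reads its deployment parameters from custom_config, no push_destination is needed in this form.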

Args:

  • model: A Channel of type standard_artifacts.Model, usually produced by a Trainer component.
  • model_blessing: A Channel of type standard_artifacts.ModelBlessing, usually produced by a ModelValidator component. Required.
  • infra_blessing: An optional Channel of type standard_artifacts.InfraBlessing, usually produced from an InfraValidator component.
  • push_destination: A pusher_pb2.PushDestination instance, providing info for TensorFlow Serving to load models. Optional if executor_class doesn't require push_destination. If any field is provided as a RuntimeParameter, push_destination should be constructed as a dict with the same field names as the PushDestination proto message.
  • custom_config: A dict of deployment job parameters to be passed to cloud-based platforms. The Kubeflow example shows how this can be used by custom executors.
  • custom_executor_spec: Optional custom executor spec.
  • output: Optional output standard_artifacts.PushedModel channel with result of push.
  • model_export: Backwards compatibility alias for the 'model' argument.
  • instance_name: Optional unique instance name. Necessary if multiple Pusher components are declared in the same pipeline.
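When any field of push_destination is supplied as a RuntimeParameter, the proto is replaced by a plain dict whose keys mirror the PushDestination message, per the push_destination description above. A minimal sketch (the directory path is a placeholder):

```python
# Dict form of push_destination: keys mirror the field names of the
# pusher_pb2.PushDestination proto message, which is what allows
# RuntimeParameter values to be substituted at run time.
# The base_directory value here is a placeholder, not a real path.
push_destination = {
    'filesystem': {
        'base_directory': '/serving_model/my_model',
    }
}
```

The dict can then be passed to Pusher exactly where the proto instance would go.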

Attributes:

  • component_id: DEPRECATED FUNCTION

  • component_type: DEPRECATED FUNCTION

  • downstream_nodes

  • exec_properties

  • id: Node id, unique across all TFX nodes in a pipeline.

    If an instance name is available, node_id will be <node_class_name>.<instance_name>; otherwise, node_id will be <node_class_name>.

  • inputs

  • outputs

  • type

  • upstream_nodes

Child Classes

class DRIVER_CLASS

class SPEC_CLASS

Methods

add_downstream_node

View source

add_downstream_node(
    downstream_node
)

add_upstream_node

View source

add_upstream_node(
    upstream_node
)

from_json_dict

View source

@classmethod
from_json_dict(
    cls, dict_data
)

Convert from dictionary data to an object.

get_id

View source

@classmethod
get_id(
    cls, instance_name=None
)

Gets the id of a node.

This can be used at pipeline authoring time. For example:

  from tfx.components import Trainer

  resolver = ResolverNode(
      ...,
      model=Channel(
          type=Model,
          producer_component_id=Trainer.get_id('my_trainer')))

Args:

  • instance_name: (Optional) instance name of a node. If given, the instance name will be taken into consideration when generating the id.

Returns:

An id for the node.

to_json_dict

View source

to_json_dict()

Convert from an object to a JSON serializable dictionary.

Class Variables

  • EXECUTOR_SPEC