Defined in tensorflow/python/saved_model/

Convenience function to build a SavedModel suitable for serving.

In many common cases, saving models for serving will be as simple as:

    simple_save(session,
                export_dir,
                inputs={"x": x, "y": y},
                outputs={"z": z})

Although in many cases it's not necessary to understand all of the many ways to configure a SavedModel, this method has a few practical implications:

  • The resulting graph will be treated as a graph for inference / serving (i.e. it uses the tag tag_constants.SERVING).
  • The SavedModel will load in TensorFlow Serving and supports the Predict API. To use the Classify, Regress, or MultiInference APIs, please use either tf.Estimator or the lower-level SavedModel APIs.
  • Some TensorFlow ops depend on information on disk or other information, called "assets". These are generally handled automatically by adding the assets to the GraphKeys.ASSET_FILEPATHS collection. Only assets in that collection are exported; if you need more custom behavior, you'll need to use the SavedModelBuilder.
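As a concrete illustration of the points above, here is a minimal end-to-end export sketch. It assumes TensorFlow 2.x run through the v1 compatibility API (on TF 1.x, `import tensorflow as tf` directly); the tensor names and the temporary export path are illustrative, not prescribed by the API:

```python
import os
import tempfile

import tensorflow.compat.v1 as tf  # assumption: TF 2.x; on TF 1.x use `import tensorflow as tf`

tf.disable_eager_execution()

export_dir = os.path.join(tempfile.mkdtemp(), "model")

# Build a trivial graph: z = x + y + b, where b is a (toy) trained variable.
with tf.Session() as sess:
    x = tf.placeholder(tf.float32, shape=(), name="x")
    y = tf.placeholder(tf.float32, shape=(), name="y")
    b = tf.Variable(1.0, name="b")
    z = tf.add(x + y, b, name="z")
    sess.run(tf.global_variables_initializer())

    # Export under the SERVING tag with a default serving signature.
    tf.saved_model.simple_save(sess,
                               export_dir,
                               inputs={"x": x, "y": y},
                               outputs={"z": z})

print(sorted(os.listdir(export_dir)))  # includes saved_model.pb and variables/
```

The export directory then contains the serialized graph (saved_model.pb) alongside a variables/ subdirectory, which is the on-disk layout TensorFlow Serving expects to find.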

More information about SavedModel and signatures can be found in the TensorFlow SavedModel documentation.

Args:
  • session: The TensorFlow session from which to save the meta graph and variables.
  • export_dir: The path to which the SavedModel will be stored.
  • inputs: dict mapping string input names to tensors. These are added to the SignatureDef as the inputs.
  • outputs: dict mapping string output names to tensors. These are added to the SignatureDef as the outputs.
  • legacy_init_op: Legacy support for op or group of ops to execute after the restore op upon a load.
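To see how these arguments surface in the exported SignatureDef, the model can be loaded back under the same SERVING tag and run. The following round-trip sketch again assumes TensorFlow 2.x via the v1 compatibility API; the model, tensor names, and export path are illustrative:

```python
import os
import tempfile

import tensorflow.compat.v1 as tf  # assumption: TF 2.x; on TF 1.x use `import tensorflow as tf`

tf.disable_eager_execution()

export_dir = os.path.join(tempfile.mkdtemp(), "model")

# Export a trivial model: z = x * y.
with tf.Session() as sess:
    x = tf.placeholder(tf.float32, shape=(), name="x")
    y = tf.placeholder(tf.float32, shape=(), name="y")
    z = tf.multiply(x, y, name="z")
    tf.saved_model.simple_save(sess, export_dir,
                               inputs={"x": x, "y": y},
                               outputs={"z": z})

# Load into a fresh graph under the SERVING tag.
with tf.Session(graph=tf.Graph()) as sess:
    meta_graph_def = tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], export_dir)

    # simple_save registered a single default serving signature whose
    # input/output keys are the dict keys passed at export time.
    signature = meta_graph_def.signature_def["serving_default"]
    x_name = signature.inputs["x"].name
    y_name = signature.inputs["y"].name
    z_name = signature.outputs["z"].name

    result = sess.run(z_name, feed_dict={x_name: 2.0, y_name: 3.0})
```

Note that the consumer never needs the original Python tensor names: it resolves tensors through the SignatureDef keys ("x", "y", "z"), which is exactly how TensorFlow Serving's Predict API addresses them.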