tfma.run_model_analysis

tfma.run_model_analysis(
    eval_shared_model,
    data_location,
    file_format='tfrecords',
    slice_spec=None,
    output_path=None,
    extractors=None,
    evaluators=None,
    writers=None,
    write_config=True,
    pipeline_options=None,
    num_bootstrap_samples=1
)

Defined in api/model_eval_lib.py.

Runs TensorFlow model analysis.

It runs a Beam pipeline to compute the slicing metrics exported in the TensorFlow EvalSavedModel and returns the results.

This is a simplified API for users who want to quickly get something running locally. Users who wish to create their own Beam pipelines can use the Evaluate PTransform instead.
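As a minimal sketch, a local run might look like the following (the model export path, data location, and output path are placeholders, and `tfma.default_eval_shared_model` is the usual way to construct the `eval_shared_model` argument):

```python
import tensorflow_model_analysis as tfma

# Load the EvalSavedModel exported during training (path is a placeholder).
eval_shared_model = tfma.default_eval_shared_model(
    eval_saved_model_path='/path/to/eval_saved_model')

# Run the analysis locally over TFRecord files and write results to output_path.
eval_result = tfma.run_model_analysis(
    eval_shared_model=eval_shared_model,
    data_location='/path/to/eval_data*.tfrecord',
    file_format='tfrecords',
    output_path='/path/to/output')
```

Because no `extractors`, `evaluators`, or `writers` are passed, the defaults described below are used.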

Args:

  • eval_shared_model: Shared model parameters for EvalSavedModel including any additional metrics (see EvalSharedModel for more information on how to configure additional metrics).
  • data_location: The location of the data files.
  • file_format: The file format of the data; currently either 'text' or 'tfrecords'. Defaults to 'tfrecords'.
  • slice_spec: A list of tfma.slicer.SingleSliceSpec. Each spec represents a way to slice the data. If None, defaults to the overall slice. Example usages:
    • tfma.slicer.SingleSliceSpec(): no slice; metrics are computed on the overall data.
    • tfma.slicer.SingleSliceSpec(columns=['country']): slice based on the feature in column "country". The results may contain metrics for slices such as "country:us", "country:jp", etc.
    • tfma.slicer.SingleSliceSpec(features=[('country', 'us')]): metrics are computed only on the slice "country:us".
  • output_path: The directory to which metrics and results are written. If None, a temporary directory is used.
  • extractors: Optional list of Extractors to apply to Extracts. Typically these will be added by calling the default_extractors function. If no extractors are provided, default_extractors (non-materialized) will be used.
  • evaluators: Optional list of Evaluators for evaluating Extracts. Typically these will be added by calling the default_evaluators function. If no evaluators are provided, default_evaluators will be used.
  • writers: Optional list of Writers for writing Evaluation output. Typically these will be added by calling the default_writers function. If no writers are provided, default_writers will be used.
  • write_config: If True, the config is written along with the results.
  • pipeline_options: Optional arguments to run the Pipeline, for instance whether to run directly.
  • num_bootstrap_samples: Optional; defaults to 1 (no bootstrap sampling). Set to at least 20 to compute metrics with confidence intervals.

Returns:

An EvalResult that can be used with the TFMA visualization functions.
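For instance, assuming a notebook environment with the TFMA extension enabled, the returned EvalResult (named `eval_result` here for illustration) can be passed to the slicing-metrics viewer:

```python
# In a Jupyter notebook with the TFMA extension enabled:
tfma.view.render_slicing_metrics(eval_result)
```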

Raises:

  • ValueError: If the file_format is unknown.