Fairness Indicators

Fairness Indicators is a library that enables easy computation of commonly identified fairness metrics for binary and multiclass classifiers. With the Fairness Indicators tool suite, you can:

  • Compute commonly identified fairness metrics for classification models
  • Compare model performance across subgroups to a baseline, or to other models
  • Use confidence intervals to surface statistically significant disparities
  • Perform evaluation over multiple thresholds
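To make the list above concrete, here is an illustrative sketch (not the Fairness Indicators API) of one commonly used fairness metric, false positive rate, computed per subgroup and per decision threshold from model scores and binary labels. The scores, labels, and group values are hypothetical.

```python
def false_positive_rate(scores, labels, threshold):
    """FPR = FP / (FP + TN), i.e. the fraction of negative-labeled
    examples that the model scores at or above the threshold."""
    negatives = [s for s, y in zip(scores, labels) if y == 0]
    if not negatives:
        return 0.0
    false_positives = sum(1 for s in negatives if s >= threshold)
    return false_positives / len(negatives)

# Hypothetical scores, labels, and a slicing feature for each example.
scores = [0.9, 0.4, 0.6, 0.2, 0.8, 0.3]
labels = [1, 0, 0, 0, 1, 0]
groups = ["a", "a", "b", "b", "a", "b"]

# Evaluate each subgroup at multiple thresholds, mirroring the
# per-slice, per-threshold view that Fairness Indicators surfaces.
for threshold in (0.25, 0.5, 0.75):
    for group in ("a", "b"):
        idx = [i for i, g in enumerate(groups) if g == group]
        fpr = false_positive_rate([scores[i] for i in idx],
                                  [labels[i] for i in idx], threshold)
        print(f"threshold={threshold} group={group} fpr={fpr:.2f}")
```

Comparing the printed rates across groups at a fixed threshold is the kind of subgroup-to-baseline comparison the tool suite automates, along with confidence intervals for each slice.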

Fairness Indicators is enabled by adding a FairnessIndicators metric to a TensorFlow Model Analysis (TFMA) EvalConfig, written here as a Python template string:

eval_config_pbtxt = """

model_specs {
    label_key: "%s"
}

metrics_specs {
    metrics {
        class_name: "FairnessIndicators"
        config: '{ "thresholds": [0.25, 0.5, 0.75] }'
    }
    metrics {
        class_name: "ExampleCount"
    }
}

slicing_specs {}
slicing_specs {
    feature_keys: "%s"
}

options {
    compute_confidence_intervals { value: false }
    disabled_outputs { values: "analysis" }
}
""" % (LABEL_KEY, GROUP_KEY)

Resources