Computes best specificity where sensitivity is >= specified value.
tfma.metrics.SpecificityAtSensitivity(
    sensitivity: float,
    num_thresholds: Optional[int] = None,
    class_id: Optional[int] = None,
    name: Optional[str] = None,
    top_k: Optional[int] = None
)
Sensitivity measures the proportion of actual positives that are correctly
identified as such (tp / (tp + fn)).
Specificity measures the proportion of actual negatives that are correctly
identified as such (tn / (tn + fp)).
The threshold for the given sensitivity value is computed and used to evaluate the corresponding specificity.
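The search described above can be illustrated with a plain-Python sketch (the function name and the brute-force threshold scan are illustrative only, not the TFMA implementation): scan a fixed grid of thresholds, keep those whose sensitivity meets the target, and report the best specificity among them.

```python
def specificity_at_sensitivity(labels, scores, target_sensitivity,
                               num_thresholds=1000):
    """Best specificity over thresholds where sensitivity >= target.

    Simplified sketch of the metric's semantics (unweighted examples).
    """
    thresholds = [i / (num_thresholds - 1) for i in range(num_thresholds)]
    best = 0.0
    for t in thresholds:
        # Confusion-matrix counts at threshold t (predict positive if score >= t).
        tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= t)
        fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < t)
        tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < t)
        fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= t)
        sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
        specificity = tn / (tn + fp) if (tn + fp) else 0.0
        if sensitivity >= target_sensitivity:
            best = max(best, specificity)
    return best
```

For example, with labels `[0, 0, 1, 1]` and scores `[0.1, 0.6, 0.4, 0.9]`, requiring sensitivity >= 0.5 lets a threshold above 0.6 classify both negatives correctly, so the best specificity is 1.0.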
If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values.
For additional information about specificity and sensitivity, see the following.
|`sensitivity`|A scalar value in range `[0, 1]`.|
|`num_thresholds`|(Optional) Defaults to 1000. The number of thresholds to use for matching the given sensitivity.|
|`class_id`|(Optional) Used with a multi-class model to specify which class to compute the confusion matrix for. When class_id is used, metrics_specs.binarize settings must not be present. Only one of class_id or top_k should be configured.|
|`name`|(Optional) String name of the metric instance.|
|`top_k`|(Optional) Used with a multi-class model to specify that the top-k values should be used to compute the confusion matrix. The net effect is that the non-top-k values are set to -inf and the matrix is then constructed from the average TP, FP, TN, FN across the classes. When top_k is used, metrics_specs.binarize settings must not be present. Only one of class_id or top_k should be configured.|
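As a usage sketch (assuming a working TFMA evaluation pipeline; `tfma.metrics.specs_from_metrics`, `tfma.EvalConfig`, and `tfma.ModelSpec` are standard TFMA APIs, while the `label` key is a placeholder for your dataset's label feature):

```python
import tensorflow_model_analysis as tfma

# Request the best specificity achievable at >= 80% sensitivity.
metrics_specs = tfma.metrics.specs_from_metrics([
    tfma.metrics.SpecificityAtSensitivity(sensitivity=0.8),
])

eval_config = tfma.EvalConfig(
    model_specs=[tfma.ModelSpec(label_key='label')],
    metrics_specs=metrics_specs,
)
```

The resulting `eval_config` can then be passed to a TFMA evaluation run; `class_id` or `top_k` would be added to the metric constructor for multi-class models.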
compute_confidence_interval: Whether to compute confidence intervals for this metric. Note that this may not completely remove the computational overhead involved in computing a given metric. This is only respected by the jackknife confidence interval method.
computations(
    eval_config: Optional[tfma.EvalConfig] = None,
    schema: Optional[schema_pb2.Schema] = None,
    model_names: Optional[List[str]] = None,
    output_names: Optional[List[str]] = None,
    sub_keys: Optional[List[Optional[SubKey]]] = None,
    aggregation_type: Optional[AggregationType] = None,
    class_weights: Optional[Dict[int, float]] = None,
    example_weighted: bool = False,
    query_key: Optional[str] = None
) ->
Creates the computations associated with the metric.
get_config() -> Dict[str, Any]
Returns serializable config.