Computes the precision of the predictions with respect to the labels.
Inherits From: Metric
```python
tfma.metrics.Precision(
    thresholds: Optional[Union[float, List[float]]] = None,
    top_k: Optional[int] = None,
    class_id: Optional[int] = None,
    name: Optional[str] = None
)
```
The metric uses true positives and false positives to compute precision by dividing the true positives by the sum of true positives and false positives.

If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values.
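For example, the metric can be added to an evaluation through the metrics specs. The snippet below is a minimal sketch; the threshold values and label key are illustrative assumptions, not part of this page.

```python
import tensorflow_model_analysis as tfma

# Minimal sketch: add Precision to an evaluation's metrics specs.
# Threshold values here are illustrative assumptions.
metrics_specs = tfma.metrics.specs_from_metrics([
    tfma.metrics.Precision(thresholds=[0.3, 0.5, 0.7], name='precision'),
])

eval_config = tfma.EvalConfig(
    model_specs=[tfma.ModelSpec(label_key='label')],  # hypothetical label key
    metrics_specs=metrics_specs,
)
```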
Args | |
---|---|
thresholds | (Optional) A float value or a python list/tuple of float threshold values in [0, 1]. A threshold is compared with prediction values to determine the truth value of predictions (i.e., above the threshold is true, below is false). One metric value is generated for each threshold value. If neither thresholds nor top_k are set, the default is to calculate precision with thresholds=0.5.
top_k | (Optional) Used with a multi-class model to specify that the top-k values should be used to compute the confusion matrix. The net effect is that the non-top-k values are set to -inf and the matrix is then constructed from the average TP, FP, TN, FN across the classes. When top_k is used, metrics_specs.binarize settings must not be present. Only one of class_id or top_k should be configured.
class_id | (Optional) Used with a multi-class model to specify which class to compute the confusion matrix for. When class_id is used, metrics_specs.binarize settings must not be present. Only one of class_id or top_k should be configured.
name | (Optional) string name of the metric instance.
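For multi-class models, class_id and top_k control how the confusion matrix is built, and only one of them should be set. A minimal sketch (the class index and k value are illustrative assumptions):

```python
import tensorflow_model_analysis as tfma

# Precision restricted to a single class (class index is an illustrative assumption).
precision_for_class = tfma.metrics.Precision(class_id=2, name='precision_class_2')

# Or precision computed over the top-3 predicted classes instead.
precision_top_k = tfma.metrics.Precision(top_k=3, name='precision_top_3')
```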
Attributes | |
---|---|
compute_confidence_interval | Whether to compute confidence intervals for this metric. Note that this may not completely remove the computational overhead involved in computing a given metric. This is only respected by the jackknife confidence interval method.
Methods
computations
```python
computations(
    eval_config: Optional[tfma.EvalConfig] = None,
    schema: Optional[schema_pb2.Schema] = None,
    model_names: Optional[List[str]] = None,
    output_names: Optional[List[str]] = None,
    sub_keys: Optional[List[Optional[SubKey]]] = None,
    aggregation_type: Optional[AggregationType] = None,
    class_weights: Optional[Dict[int, float]] = None,
    example_weighted: bool = False,
    query_key: Optional[str] = None
) -> tfma.metrics.MetricComputations
```
Creates computations associated with metric.
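This is normally invoked indirectly by the TFMA evaluation pipeline, but since every argument is optional it can also be called directly; a minimal sketch:

```python
import tensorflow_model_analysis as tfma

metric = tfma.metrics.Precision(thresholds=0.5)

# Build the computations the evaluation pipeline would use for this metric.
# All arguments are optional; example_weighted is shown explicitly for clarity.
computations = metric.computations(example_weighted=False)
```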
get_config
```python
get_config() -> Dict[str, Any]
```
Returns serializable config.
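As a sketch, the returned config can be used to recreate an equivalent metric instance (the keyword values shown are illustrative, and passing the config back to the constructor assumes its keys match the __init__ arguments):

```python
import tensorflow_model_analysis as tfma

metric = tfma.metrics.Precision(thresholds=0.5, name='precision')
config = metric.get_config()                 # e.g. contains thresholds and name
rebuilt = tfma.metrics.Precision(**config)   # assumes config keys match __init__ args
```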
result
```python
result(
    tp: float, tn: float, fp: float, fn: float
) -> float
```
Function for computing metric value from TP, TN, FP, FN values.
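For precision, the result depends only on the true-positive and false-positive counts. A minimal sketch of the equivalent computation (the zero-denominator fallback is an assumption, not taken from this page):

```python
def precision(tp: float, tn: float, fp: float, fn: float) -> float:
    # precision = TP / (TP + FP); tn and fn are unused for this metric.
    denominator = tp + fp
    return tp / denominator if denominator > 0 else 0.0

print(precision(tp=30.0, tn=50.0, fp=10.0, fn=10.0))  # 0.75
```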