Computes precision for sets of labels and predictions.
Inherits From: Precision, Metric
tfma.metrics.SetMatchPrecision(
thresholds: Optional[Union[float, List[float]]] = None,
top_k: Optional[int] = None,
name: Optional[str] = None,
prediction_class_key: str = 'classes',
prediction_score_key: str = 'scores',
class_key: Optional[str] = None,
weight_key: Optional[str] = None,
**kwargs
)
The metric operates on labels and predictions provided as sets (stored as variable-length numpy arrays). The precision is the micro-averaged classification precision. This metric is suitable when the number of classes is large or the full list of classes cannot be provided in advance.
Example:
Label: ['cats'], Predictions: {'classes': ['cats', 'dogs']}
One of the two predicted classes matches the label, so the precision is 0.5.
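The micro-averaged computation above can be illustrated with a plain-Python sketch. This is not the tfma implementation or its input format (the real metric consumes extracts with `prediction_class_key`/`prediction_score_key`); it only shows how set-matched true and false positives aggregate across examples, with a hypothetical helper name:

```python
def set_match_precision(labels, predictions, top_k=None):
    """Illustrative micro-averaged precision over sets.

    labels: list of label sets, one per example.
    predictions: list of predicted class lists, one per example.
    """
    tp = fp = 0
    for label_set, predicted in zip(labels, predictions):
        if top_k is not None:
            # Keep only the top_k predicted classes.
            predicted = predicted[:top_k]
        for cls in predicted:
            if cls in label_set:
                tp += 1  # predicted class appears in the label set
            else:
                fp += 1  # predicted class not in the label set
    return tp / (tp + fp) if (tp + fp) else 0.0

# Mirrors the example above: one of two predicted classes is correct.
print(set_match_precision([{'cats'}], [['cats', 'dogs']]))  # 0.5
```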
Methods
computations
computations(
eval_config: Optional[tfma.EvalConfig] = None,
schema: Optional[schema_pb2.Schema] = None,
model_names: Optional[List[str]] = None,
output_names: Optional[List[str]] = None,
sub_keys: Optional[List[Optional[SubKey]]] = None,
aggregation_type: Optional[AggregationType] = None,
class_weights: Optional[Dict[int, float]] = None,
example_weighted: bool = False,
query_key: Optional[str] = None
) -> tfma.metrics.MetricComputations
Creates computations associated with the metric.
from_config
@classmethod
from_config(config: Dict[str, Any]) -> 'Metric'
get_config
get_config() -> Dict[str, Any]
Returns serializable config.
result
result(
tp: float, tn: float, fp: float, fn: float
) -> float
Computes the metric value from the TP, TN, FP, and FN counts.
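For precision, only the true-positive and false-positive counts matter. A minimal sketch of what `result` computes (the zero-denominator handling here is an assumption, not taken from the tfma source):

```python
def result(tp: float, tn: float, fp: float, fn: float) -> float:
    # Precision = TP / (TP + FP); tn and fn are ignored.
    denominator = tp + fp
    # Assumed behavior: return 0.0 when nothing was predicted positive.
    return tp / denominator if denominator > 0 else 0.0

# The example from above: 1 true positive, 1 false positive.
print(result(tp=1.0, tn=0.0, fp=1.0, fn=0.0))  # 0.5
```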