Min label position metric.
tfma.metrics.MinLabelPosition(name=MIN_LABEL_POSITION_NAME)
Calculates the smallest rank at which a positive label appears in each query. The final returned value is the weighted average over all queries in the evaluation set that have at least one labeled entry. Note that ranking is indexed from one, so the optimal value for this metric is one. If there are no labeled rows in the evaluation set, the final output will be zero.
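The weighted-average computation described above can be sketched in pure Python as follows. This is a minimal illustration, not the tfma implementation; the function name and input layout are assumptions, and queries without any positive label are simply skipped here.

```python
def min_label_position(queries):
    """Weighted average of the smallest one-based rank with a positive label.

    `queries` is a list of (labels, weight) pairs, where `labels` gives a
    0/1 label for each example in ranked order (rank 1 first).
    Hypothetical helper, not part of the tfma API.
    """
    total = 0.0
    total_weight = 0.0
    for labels, weight in queries:
        # One-based ranks of positively labeled examples in this query.
        positions = [rank for rank, label in enumerate(labels, start=1) if label]
        if not positions:
            continue  # simplification: skip queries with no positive label
        total += weight * min(positions)
        total_weight += weight
    # Zero when no query contributed, matching the documented behavior.
    return total / total_weight if total_weight else 0.0
```

For example, two equally weighted queries whose first positive labels sit at ranks 2 and 1 yield an average of 1.5.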
This is a query/ranking-based metric, so a query_key must also be provided in the associated metrics spec.
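A metrics spec with a query_key might look like the following sketch in EvalConfig text-proto form; the key name "query_id" is an assumption and should match a feature in your data:

```
metrics_specs {
  query_key: "query_id"  # feature grouping examples into queries (assumed name)
  metrics {
    class_name: "MinLabelPosition"
  }
}
```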
compute_confidence_interval: Whether to compute confidence intervals for this metric. Note that this may not completely remove the computational overhead involved in computing a given metric. This is only respected by the jackknife confidence interval method.
computations(
    eval_config: Optional[config.EvalConfig] = None,
    schema: Optional[schema_pb2.Schema] = None,
    model_names: Optional[List[Text]] = None,
    output_names: Optional[List[Text]] = None,
    sub_keys: Optional[List[SubKey]] = None,
    class_weights: Optional[Dict[int, float]] = None,
    query_key: Optional[Text] = None,
    is_diff: Optional[bool] = False
) -> MetricComputations
Creates the computations associated with the metric.
get_config() -> Dict[Text, Any]
Returns serializable config.
is_model_independent() -> bool
Returns true if the metric does not depend on a model.