Relative coefficient of discrimination metric.
tfma.metrics.RelativeCoefficientOfDiscrimination( name=RELATIVE_COEFFICIENT_OF_DISCRIMINATION_NAME )
The relative coefficient of discrimination measures the ratio between the average prediction for the positive examples and the average prediction for the negative examples. This has a simple intuitive interpretation: it measures how much higher the prediction is expected to be for a positive example than for a negative example.
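The definition above can be sketched in plain Python (a minimal illustration, not the TFMA implementation, which runs as a Beam computation over extracts):

```python
def relative_coefficient_of_discrimination(labels, predictions):
    """Ratio of mean prediction on positives to mean prediction on negatives."""
    pos = [p for y, p in zip(labels, predictions) if y == 1]
    neg = [p for y, p in zip(labels, predictions) if y == 0]
    return (sum(pos) / len(pos)) / (sum(neg) / len(neg))

labels = [1, 1, 0, 0]
predictions = [0.9, 0.7, 0.4, 0.1]
# Mean positive prediction is 0.8, mean negative is 0.25, so the
# ratio is 3.2: positives score 3.2x higher on average than negatives.
print(relative_coefficient_of_discrimination(labels, predictions))
```

A value of 1.0 means the model's predictions do not separate the classes at all; larger values indicate better discrimination.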
compute_confidence_interval: Whether to compute confidence intervals for this metric. Note that disabling this may not completely remove the computational overhead involved in computing a given metric. This setting is only respected by the jackknife confidence interval method.
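For intuition, the jackknife confidence interval mentioned above can be sketched as a leave-one-out resampling of the metric (a simplified illustration of the general technique; TFMA's actual implementation uses a block-based jackknife over sample partitions):

```python
import math

def rcd(labels, preds):
    """Relative coefficient of discrimination on one sample."""
    pos = [p for y, p in zip(labels, preds) if y == 1]
    neg = [p for y, p in zip(labels, preds) if y == 0]
    return (sum(pos) / len(pos)) / (sum(neg) / len(neg))

def jackknife_ci(labels, preds, z=1.96):
    """Approximate 95% CI from leave-one-out jackknife replicates."""
    n = len(labels)
    reps = [rcd(labels[:i] + labels[i + 1:], preds[:i] + preds[i + 1:])
            for i in range(n)]
    mean_rep = sum(reps) / n
    # Jackknife standard error: sqrt((n-1)/n * sum of squared deviations).
    se = math.sqrt((n - 1) / n * sum((r - mean_rep) ** 2 for r in reps))
    point = rcd(labels, preds)
    return point - z * se, point + z * se

labels = [1, 1, 1, 0, 0, 0]
preds = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]
print(jackknife_ci(labels, preds))
```

Because each replicate recomputes the metric on a subsample, enabling confidence intervals adds work proportional to the number of replicates, which is why the overhead note above applies.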
computations(
    eval_config: Optional[config.EvalConfig] = None,
    schema: Optional[schema_pb2.Schema] = None,
    model_names: Optional[List[Text]] = None,
    output_names: Optional[List[Text]] = None,
    sub_keys: Optional[List[SubKey]] = None,
    class_weights: Optional[Dict[int, float]] = None,
    query_key: Optional[Text] = None,
    is_diff: Optional[bool] = False
) -> MetricComputations
Creates the computations associated with the metric.
get_config() -> Dict[Text, Any]
Returns serializable config.
is_model_independent() -> bool
Returns true if the metric does not depend on a model.