Calculates Cohen's kappa.
tf.contrib.metrics.cohen_kappa(
    labels,
    predictions_idx,
    num_classes,
    weights=None,
    metrics_collections=None,
    updates_collections=None,
    name=None
)
Cohen's kappa is a statistic that measures inter-annotator agreement.
The cohen_kappa function calculates the confusion matrix and creates three
local variables to compute the Cohen's kappa: po, pe_row, and pe_col,
which refer to the diagonal part, row totals, and column totals of the
confusion matrix, respectively. This value is ultimately returned as
kappa, an idempotent operation that is calculated by

    pe = (pe_row * pe_col) / N
    kappa = (sum(po) - sum(pe)) / (N - sum(pe))
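The formula above can be sketched in NumPy, starting from a complete confusion matrix. This is an illustrative helper (`kappa_from_confusion` is a hypothetical name, not part of the TF API), following the same po / pe_row / pe_col definitions:

```python
import numpy as np

def kappa_from_confusion(cm):
    """Illustrative sketch: Cohen's kappa from a confusion matrix,
    using the po / pe_row / pe_col decomposition described above."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()                 # N: total number of examples
    po = np.diag(cm)             # diagonal: per-class agreement counts
    pe_row = cm.sum(axis=1)      # row totals
    pe_col = cm.sum(axis=0)      # column totals
    pe = pe_row * pe_col / n     # expected agreement counts by chance
    return (po.sum() - pe.sum()) / (n - pe.sum())
```

Perfect agreement yields kappa = 1, while agreement no better than chance yields kappa = 0.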
For estimation of the metric over a stream of data, the function creates an
update_op operation that updates these variables and returns the kappa.
update_op weights each prediction by the corresponding value in weights.
Class labels are expected to start at 0. E.g., if num_classes is three,
then the possible labels are [0, 1, 2].
If weights is None, weights default to 1. Use weights of 0 to mask values.
NOTE: Equivalent to sklearn.metrics.cohen_kappa_score, but this method
doesn't support weighted matrices yet.
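For a quick sanity check outside a TF graph, the scikit-learn equivalent named above can be called directly (requires scikit-learn to be installed; the example data is made up):

```python
from sklearn.metrics import cohen_kappa_score

# Illustrative data: 5 of 6 predictions agree with the labels.
labels      = [0, 1, 2, 2, 1, 0]
predictions = [0, 1, 1, 2, 1, 0]

print(cohen_kappa_score(labels, predictions))  # 0.75
```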
Args:
  labels: 1-D Tensor of real labels for the classification task. Must be
    one of the following types: int16, int32, int64.
  predictions_idx: 1-D Tensor of predicted class indices for a given
    classification. Must have the same type as labels.
  num_classes: The possible number of labels.
  weights: Optional Tensor whose shape matches predictions.
  metrics_collections: An optional list of collections that kappa should
    be added to.
  updates_collections: An optional list of collections that update_op
    should be added to.
  name: An optional variable_scope name.
Returns:
  kappa: Scalar float Tensor representing the current Cohen's kappa.
  update_op: Operation that increments the po, pe_row, and pe_col
    variables appropriately and whose value matches kappa.
Raises:
  ValueError: If num_classes is less than 2, or if predictions and labels
    have mismatched shapes, or if weights is not None and its shape
    doesn't match predictions, or if either metrics_collections or
    updates_collections are not a list or tuple.
  RuntimeError: If eager execution is enabled.