# tf.keras.losses.cosine_similarity

Computes the cosine similarity between labels and predictions.

Note that it is a number between -1 and 1. When it is a negative number between -1 and 0, 0 indicates orthogonality and values closer to -1 indicate greater similarity. This makes it usable as a loss function in a setting where you try to maximize the proximity between predictions and targets. If either `y_true` or `y_pred` is a zero vector, the cosine similarity will be 0 regardless of the proximity between predictions and targets.

`loss = -sum(l2_norm(y_true) * l2_norm(y_pred))`

#### Usage:

```
y_true = [[0., 1.], [1., 1.]]
y_pred = [[1., 0.], [1., 1.]]
loss = tf.keras.losses.cosine_similarity(y_true, y_pred, axis=1)
# l2_norm(y_true) = [[0., 1.], [1./1.414, 1./1.414]]
# l2_norm(y_pred) = [[1., 0.], [1./1.414, 1./1.414]]
# l2_norm(y_true) . l2_norm(y_pred) = [[0., 0.], [0.5, 0.5]]
# loss = -sum(l2_norm(y_true) . l2_norm(y_pred), axis=1)
#      = [-(0. + 0.), -(0.5 + 0.5)] = [-0., -1.]
loss.numpy()
array([-0., -0.999], dtype=float32)
```
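The commented values above can also be reproduced by hand. The sketch below is only an illustration of the formula, not part of the API: it normalizes each row with `tf.nn.l2_normalize`, takes the negative per-row dot product, and compares the result against the function's output.

```
import tensorflow as tf

y_true = tf.constant([[0., 1.], [1., 1.]])
y_pred = tf.constant([[1., 0.], [1., 1.]])

# Normalize each row to unit L2 norm, then take the negative dot product per row.
l2_true = tf.nn.l2_normalize(y_true, axis=1)
l2_pred = tf.nn.l2_normalize(y_pred, axis=1)
manual = -tf.reduce_sum(l2_true * l2_pred, axis=1)

print(manual.numpy())  # approximately [-0., -1.]
print(tf.keras.losses.cosine_similarity(y_true, y_pred, axis=1).numpy())  # same values
```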

#### Args:

- `y_true`: Tensor of true targets.
- `y_pred`: Tensor of predicted targets.
- `axis`: Axis along which to determine similarity.

#### Returns:

Cosine similarity tensor.
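As a rough usage sketch (the tiny model below is a placeholder, not part of this API), the function itself can be passed as the `loss` argument to `Model.compile`; Keras then averages the per-sample values it returns over each batch:

```
import tensorflow as tf

# Placeholder model; any Keras model with a compatible output shape works.
model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(4,))])

# Pass the loss function directly to compile().
model.compile(optimizer='sgd', loss=tf.keras.losses.cosine_similarity)
```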

[{ "type": "thumb-down", "id": "missingTheInformationINeed", "label":"Missing the information I need" },{ "type": "thumb-down", "id": "tooComplicatedTooManySteps", "label":"Too complicated / too many steps" },{ "type": "thumb-down", "id": "outOfDate", "label":"Out of date" },{ "type": "thumb-down", "id": "samplesCodeIssue", "label":"Samples / code issue" },{ "type": "thumb-down", "id": "otherDown", "label":"Other" }]
[{ "type": "thumb-up", "id": "easyToUnderstand", "label":"Easy to understand" },{ "type": "thumb-up", "id": "solvedMyProblem", "label":"Solved my problem" },{ "type": "thumb-up", "id": "otherUp", "label":"Other" }]