# tf.keras.losses.binary_crossentropy

Computes the binary crossentropy loss.


#### Standalone usage:

```python
y_true = [[0, 1], [0, 0]]
y_pred = [[0.6, 0.4], [0.4, 0.6]]
loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
assert loss.shape == (2,)
loss.numpy()
# array([0.916, 0.714], dtype=float32)
```

#### Args:

- `y_true`: Ground-truth values. shape = `[batch_size, d0, .. dN]`.
- `y_pred`: The predicted values. shape = `[batch_size, d0, .. dN]`.
- `from_logits`: Whether `y_pred` is expected to be a logits tensor. By default, `y_pred` is assumed to encode a probability distribution.
- `label_smoothing`: Float in `[0, 1]`. If > `0`, smooth the labels by squeezing them toward 0.5; that is, use `1. - 0.5 * label_smoothing` for the target class and `0.5 * label_smoothing` for the non-target class.
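To make the parameters concrete, here is a minimal NumPy sketch of the underlying math, not TensorFlow's implementation: per-element crossentropy averaged over the last axis, with the label-smoothing transform described above and a sigmoid applied when `from_logits=True`. The clipping epsilon `1e-7` is an assumption for numerical safety, not a documented TensorFlow constant.

```python
import numpy as np

def binary_crossentropy(y_true, y_pred, from_logits=False, label_smoothing=0.0):
    """NumPy sketch of binary crossentropy (illustrative, not tf.keras itself)."""
    y_true = np.asarray(y_true, dtype=np.float64)
    y_pred = np.asarray(y_pred, dtype=np.float64)
    # Squeeze labels toward 0.5: the target class becomes 1 - 0.5*ls,
    # the non-target class becomes 0.5*ls.
    y_true = y_true * (1.0 - label_smoothing) + 0.5 * label_smoothing
    if from_logits:
        y_pred = 1.0 / (1.0 + np.exp(-y_pred))  # sigmoid
    eps = 1e-7  # assumed clipping constant to avoid log(0)
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    # Per-element crossentropy, then mean over the last axis.
    ce = -(y_true * np.log(y_pred) + (1.0 - y_true) * np.log(1.0 - y_pred))
    return ce.mean(axis=-1)

y_true = [[0, 1], [0, 0]]
y_pred = np.array([[0.6, 0.4], [0.4, 0.6]])
loss = binary_crossentropy(y_true, y_pred)
print(np.round(loss, 3))  # matches the standalone usage above: [0.916 0.714]

# The same loss from raw logits, since logit(p) = log(p / (1 - p)):
logits = np.log(y_pred / (1.0 - y_pred))
loss_from_logits = binary_crossentropy(y_true, logits, from_logits=True)
```

Passing probabilities with `from_logits=True` (or logits without it) silently computes the wrong quantity, which is why the flag matters.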

#### Returns:

Binary crossentropy loss value. shape = `[batch_size, d0, .. dN-1]`.
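The reduction of rank by one comes from averaging over the last axis, which the following NumPy sketch (illustrative, not TensorFlow code) makes explicit for a rank-3 input:

```python
import numpy as np

rng = np.random.default_rng(0)
# Rank-3 inputs: [batch_size=2, d0=3, d1=4]
y_true = rng.integers(0, 2, size=(2, 3, 4)).astype(np.float64)
y_pred = rng.uniform(0.01, 0.99, size=(2, 3, 4))

# Per-element crossentropy has the input shape...
ce = -(y_true * np.log(y_pred) + (1.0 - y_true) * np.log(1.0 - y_pred))
# ...and averaging over the last axis drops dN, leaving [batch_size, d0].
loss = ce.mean(axis=-1)
print(loss.shape)  # (2, 3)
```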
