Computes Softmax cross-entropy loss between `y_true` and `y_pred`.
```python
tfr.keras.losses.SoftmaxLoss(
    reduction=tf.losses.Reduction.AUTO,
    name=None,
    lambda_weight=None,
    temperature=1.0,
    ragged=False
)
```
For each list of scores `s` in `y_pred` and list of labels `y` in `y_true`:

```
loss = - sum_i y_i * log(softmax(s_i))
```
Standalone usage:

```python
>>> y_true = [[1., 0.]]
>>> y_pred = [[0.6, 0.8]]
>>> loss = tfr.keras.losses.SoftmaxLoss()
>>> loss(y_true, y_pred).numpy()
0.7981389
```

```python
>>> # Using ragged tensors
>>> y_true = tf.ragged.constant([[1., 0.], [0., 1., 0.]])
>>> y_pred = tf.ragged.constant([[0.6, 0.8], [0.5, 0.8, 0.4]])
>>> loss = tfr.keras.losses.SoftmaxLoss(ragged=True)
>>> loss(y_true, y_pred).numpy()
0.83911896
```
Usage with the `compile()` API:

```python
model.compile(optimizer='sgd', loss=tfr.keras.losses.SoftmaxLoss())
```
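For a fuller illustration (not from the original docs), the sketch below builds a toy scoring model and trains it with this loss; the list size, feature count, and layer choices are made-up assumptions for the example:

```python
import tensorflow as tf
import tensorflow_ranking as tfr

# Hypothetical toy setup: lists of 3 items, 2 features per item.
inputs = tf.keras.Input(shape=(3, 2))
scores = tf.keras.layers.Dense(1)(inputs)   # [batch, 3, 1] per-item scores
scores = tf.keras.layers.Flatten()(scores)  # [batch, 3], one score per item
model = tf.keras.Model(inputs, scores)

model.compile(optimizer='sgd', loss=tfr.keras.losses.SoftmaxLoss())

# Made-up features and graded relevance labels, just to exercise fit().
x = tf.random.uniform((8, 3, 2))
y = tf.cast(tf.random.uniform((8, 3), maxval=2, dtype=tf.int32), tf.float32)
model.fit(x, y, epochs=1, verbose=0)
```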
Definition:

$$
\mathcal{L}(\{y\}, \{s\}) = - \sum_i y_i \cdot \log\left(\frac{\exp(s_i)}{\sum_j \exp(s_j)}\right)
$$
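To see how the standalone value above arises from this definition, here is a plain NumPy reproduction for the single-list example (natural logarithm assumed):

```python
import numpy as np

y = np.array([1., 0.])    # labels
s = np.array([0.6, 0.8])  # scores
softmax = np.exp(s) / np.sum(np.exp(s))
loss = -np.sum(y * np.log(softmax))
print(loss)  # ~0.7981389, matching the standalone SoftmaxLoss() example
```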
| Args | |
|---|---|
| `reduction` | (Optional) The `tf.keras.losses.Reduction` to use (see `tf.keras.losses.Loss`). |
| `name` | (Optional) The name for the op. |
| `lambda_weight` | (Optional) A lambdaweight to apply to the loss. Can be one of `tfr.keras.losses.DCGLambdaWeight`, `tfr.keras.losses.NDCGLambdaWeight`, or `tfr.keras.losses.PrecisionLambdaWeight`. |
| `temperature` | (Optional) The temperature to use for scaling the logits. |
| `ragged` | (Optional) If True, this loss will accept ragged tensors. If False, this loss will accept dense tensors. |
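For example, a minimal sketch combining a lambda weight, a non-default temperature, and an explicit reduction; the specific values are illustrative assumptions, not recommendations:

```python
import tensorflow as tf
import tensorflow_ranking as tfr

loss = tfr.keras.losses.SoftmaxLoss(
    lambda_weight=tfr.keras.losses.NDCGLambdaWeight(),   # weight the loss by NDCG impact
    temperature=0.5,                                      # temperature scaling of the logits
    reduction=tf.losses.Reduction.SUM_OVER_BATCH_SIZE)
```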
Methods
from_config
```python
@classmethod
from_config(
    config, custom_objects=None
)
```

Instantiates a `Loss` from its config (output of `get_config()`).
| Args | |
|---|---|
| `config` | Output of `get_config()`. |
| Returns |
|---|
| A `Loss` instance. |
get_config
```python
get_config()
```

Returns the config dictionary for a `Loss` instance.
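A minimal round-trip sketch, assuming `get_config()` captures the constructor arguments shown above:

```python
import tensorflow_ranking as tfr

loss = tfr.keras.losses.SoftmaxLoss(temperature=0.5)
config = loss.get_config()                               # plain dict of constructor arguments
restored = tfr.keras.losses.SoftmaxLoss.from_config(config)  # rebuild an equivalent loss
```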
__call__
```python
__call__(
    y_true, y_pred, sample_weight=None
)
```

See `_RankingLoss`.
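A minimal sketch of calling the loss directly with per-list weights; the accepted `sample_weight` shapes are assumed to follow the usual `tf.keras.losses.Loss` broadcasting rules:

```python
import tensorflow as tf
import tensorflow_ranking as tfr

y_true = tf.constant([[1., 0.], [0., 1.]])
y_pred = tf.constant([[0.6, 0.8], [0.5, 0.8]])
loss = tfr.keras.losses.SoftmaxLoss()
# Weight the second list half as much as the first (illustrative values).
loss(y_true, y_pred, sample_weight=tf.constant([1.0, 0.5])).numpy()
```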