Computes unique softmax cross-entropy loss between `y_true` and `y_pred`.
tfr.keras.losses.UniqueSoftmaxLoss(
    reduction=tf.losses.Reduction.AUTO,
    name=None,
    lambda_weight=None,
    temperature=1.0,
    ragged=False
)
Implements the unique rating softmax loss (Zhu et al., 2020).
For each list of scores `s` in `y_pred` and list of labels `y` in `y_true`:

loss = - sum_i (2^{y_i} - 1) *
    log(exp(s_i) / (sum_j I(y_i > y_j) exp(s_j) + exp(s_i)))
Standalone usage:

>>> y_true = [[1., 0.]]
>>> y_pred = [[0.6, 0.8]]
>>> loss = tfr.keras.losses.UniqueSoftmaxLoss()
>>> loss(y_true, y_pred).numpy()
0.7981389

>>> # Using ragged tensors
>>> y_true = tf.ragged.constant([[1., 0.], [0., 1., 0.]])
>>> y_pred = tf.ragged.constant([[0.6, 0.8], [0.5, 0.8, 0.4]])
>>> loss = tfr.keras.losses.UniqueSoftmaxLoss(ragged=True)
>>> loss(y_true, y_pred).numpy()
0.83911896
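To check the formula against the numbers above, here is a minimal NumPy sketch of the per-list computation. The helper `unique_softmax_loss` is illustrative, not part of the library, and averaging the per-list losses is an assumption about how the default reduction plays out in these examples:

```python
import numpy as np

def unique_softmax_loss(y, s):
    """Per-list unique softmax loss, following the formula above."""
    y, s = np.asarray(y), np.asarray(s)
    total = 0.0
    for i in range(len(y)):
        # Denominator: exp(s_j) summed over items j rated strictly lower
        # than item i, plus exp(s_i) itself.
        denom = np.exp(s[y[i] > y]).sum() + np.exp(s[i])
        total += -(2.0 ** y[i] - 1.0) * np.log(np.exp(s[i]) / denom)
    return total

print(unique_softmax_loss([1., 0.], [0.6, 0.8]))  # ~0.7981389
print(np.mean([unique_softmax_loss([1., 0.], [0.6, 0.8]),
               unique_softmax_loss([0., 1., 0.], [0.5, 0.8, 0.4])]))  # ~0.83911896
```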
Usage with the `compile()` API:
model.compile(optimizer='sgd', loss=tfr.keras.losses.UniqueSoftmaxLoss())
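For context, a minimal end-to-end sketch; the scoring model, feature shapes, and training data below are hypothetical stand-ins, not part of the API:

```python
import tensorflow as tf
import tensorflow_ranking as tfr

# Hypothetical setup: each example is a list of 10 items with 16 features,
# and the model emits one score per item, shape [batch_size, list_size].
inputs = tf.keras.Input(shape=(10, 16))
scores = tf.keras.layers.Dense(1)(inputs)        # [batch, 10, 1]
scores = tf.keras.layers.Reshape((10,))(scores)  # [batch, 10]
model = tf.keras.Model(inputs, scores)

model.compile(optimizer='sgd', loss=tfr.keras.losses.UniqueSoftmaxLoss())

# Random stand-in data: graded relevance labels in {0, 1, 2}.
x = tf.random.normal((32, 10, 16))
y = tf.cast(tf.random.uniform((32, 10), maxval=3, dtype=tf.int32), tf.float32)
model.fit(x, y, epochs=1)
```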
Definition:

$$ \mathcal{L}(\{y\}, \{s\}) =
- \sum_i (2^{y_i} - 1) \cdot \log\left(\frac{\exp(s_i)}{\sum_j I\{y_i > y_j\} \exp(s_j) + \exp(s_i)}\right) $$
References | |
---|---|
Zhu & Klabjan, "Listwise Learning to Rank by Exploring Unique Ratings", WSDM 2020. |
Args | |
---|---|
`reduction` | Type of `tf.keras.losses.Reduction` to apply to the loss. Default value is `AUTO`. `AUTO` indicates that the reduction option will be determined by the usage context; for almost all cases this defaults to `SUM_OVER_BATCH_SIZE`. When used with `tf.distribute.Strategy`, outside of built-in training loops such as `tf.keras` `compile` and `fit`, using `AUTO` or `SUM_OVER_BATCH_SIZE` will raise an error. Please see this custom training tutorial for more details. |
`name` | Optional name for the instance. |
Methods
from_config
@classmethod
from_config(
    config, custom_objects=None
)

Instantiates a `Loss` from its config (output of `get_config()`).
Args | |
---|---|
`config` | Output of `get_config()`. |
Returns | |
---|---|
A `Loss` instance. |
get_config
get_config()

Returns the config dictionary for a `Loss` instance.
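A short sketch of the round trip between `get_config()` and `from_config()`, assuming the config captures the constructor arguments shown in the signature above, as is standard for Keras losses:

```python
import tensorflow_ranking as tfr

loss = tfr.keras.losses.UniqueSoftmaxLoss(temperature=0.5)
config = loss.get_config()  # plain dict of constructor arguments
restored = tfr.keras.losses.UniqueSoftmaxLoss.from_config(config)
assert isinstance(restored, tfr.keras.losses.UniqueSoftmaxLoss)
```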
__call__
__call__(
    y_true, y_pred, sample_weight=None
)

See `tf.keras.losses.Loss`.
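For illustration, calling the instance directly with an optional per-list `sample_weight`; the weighting semantics follow `tf.keras.losses.Loss`, and the weight value here is arbitrary:

```python
import tensorflow_ranking as tfr

loss = tfr.keras.losses.UniqueSoftmaxLoss()
y_true = [[1., 0.]]
y_pred = [[0.6, 0.8]]

# Unweighted call, as in the standalone usage above.
unweighted = loss(y_true, y_pred)

# sample_weight scales each list's contribution to the reduced loss.
weighted = loss(y_true, y_pred, sample_weight=[2.0])
```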