The Jensen-Shannon Csiszar-function in log-space.
tfp.substrates.jax.vi.jensen_shannon(
    logu, self_normalized=False, name=None
)
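As a minimal usage sketch (the import alias follows the TFP convention, and the example values of `logu` are ours, not from this page):

```python
import jax.numpy as jnp
from tensorflow_probability.substrates import jax as tfp

logu = jnp.log(jnp.array([0.5, 1.0, 2.0]))  # u = exp(logu)

# Self-normalized variant: f(1) = 0, since u = 1 corresponds to p = q.
f_sn = tfp.vi.jensen_shannon(logu, self_normalized=True)

# Default variant: the (u + 1) log(2) term is omitted.
f = tfp.vi.jensen_shannon(logu, self_normalized=False)
```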
A Csiszar-function is a member of

F = { f: R_+ to R : f convex }.
When self_normalized = True, the Jensen-Shannon Csiszar-function is:

f(u) = u log(u) - (1 + u) log(1 + u) + (u + 1) log(2)

When self_normalized = False, the (u + 1) log(2) term is omitted.
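A direct transcription of this formula, as a sketch in plain JAX (not necessarily how TFP implements it internally):

```python
import jax.numpy as jnp

def jensen_shannon_f(logu, self_normalized=False):
  # u log(u) - (1 + u) log(1 + u), written with u = exp(logu).
  u = jnp.exp(logu)
  y = u * logu - (1. + u) * jnp.log1p(u)
  if self_normalized:
    # The (u + 1) log(2) term present only in the self-normalized variant.
    y = y + (1. + u) * jnp.log(2.)
  return y
```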
Observe that as an f-Divergence, this Csiszar-function implies:
D_f[p, q] = KL[p, m] + KL[q, m]
m(x) = 0.5 p(x) + 0.5 q(x)
In a sense, this divergence is the "reverse" of the Arithmetic-Geometric f-Divergence.
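This identity can be checked numerically on two small discrete distributions. The distributions below are arbitrary examples, and the check assumes self_normalized=True, since the identity relies on the (u + 1) log(2) term:

```python
import jax.numpy as jnp
from tensorflow_probability.substrates import jax as tfp

p = jnp.array([0.1, 0.6, 0.3])
q = jnp.array([0.3, 0.3, 0.4])
m = 0.5 * p + 0.5 * q

# For discrete distributions: D_f[p, q] = sum_x q(x) f(p(x) / q(x)).
d_f = jnp.sum(q * tfp.vi.jensen_shannon(jnp.log(p) - jnp.log(q),
                                        self_normalized=True))

# KL[p, m] + KL[q, m], computed directly.
kl = lambda a, b: jnp.sum(a * (jnp.log(a) - jnp.log(b)))
print(d_f, kl(p, m) + kl(q, m))  # agree up to floating-point error
```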
This Csiszar-function induces a symmetric f-Divergence, i.e., D_f[p, q] = D_f[q, p].
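The symmetry can likewise be checked numerically, again with arbitrary example distributions:

```python
import jax.numpy as jnp
from tensorflow_probability.substrates import jax as tfp

def js(p, q):
  # D_f[p, q] = E_q[f(p(X) / q(X))] with the self-normalized JS function.
  return jnp.sum(q * tfp.vi.jensen_shannon(jnp.log(p) - jnp.log(q),
                                           self_normalized=True))

p = jnp.array([0.2, 0.5, 0.3])
q = jnp.array([0.4, 0.1, 0.5])
print(js(p, q), js(q, p))  # equal up to floating-point error
```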
For more information, see: Lin, J. "Divergence measures based on the Shannon entropy." IEEE Trans. Inf. Th., 37, 145-151, 1991.
Returns | |
---|---|
jensen_shannon_of_u | float-like Tensor of the Csiszar-function evaluated at u = exp(logu). |