An autoregressively masked dense layer. (deprecated)
```python
tf.contrib.distributions.bijectors.masked_dense(
    inputs, units, num_blocks=None, exclusive=False, kernel_initializer=None,
    reuse=None, name=None, *args, **kwargs
)
```
Analogous to tf.compat.v1.layers.dense.
See [Germain et al. (2015)][1] for a detailed explanation.
Arguments | |
---|---|
inputs | Tensor input.
units | Python int scalar representing the dimensionality of the output space.
num_blocks | Python int scalar representing the number of blocks for the MADE masks.
exclusive | Python bool scalar representing whether to zero the diagonal of the mask, used for the first layer of a MADE (see the illustrative mask sketch after this table).
kernel_initializer | Initializer function for the weight matrix. If None (default), weights are initialized with a Glorot (Xavier) random initializer.
reuse | Python bool scalar representing whether to reuse the weights of a previous layer by the same name.
name | Python str used to describe ops managed by this function.
*args | tf.compat.v1.layers.dense arguments.
**kwargs | tf.compat.v1.layers.dense keyword arguments.
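To make num_blocks and exclusive concrete, here is an illustrative NumPy sketch of a block-autoregressive mask in the spirit of Germain et al. (2015). It is not the library's internal implementation; the kernel orientation [n_in, n_out] and the helper name are assumptions for illustration only.

```python
import numpy as np

def block_autoregressive_mask(n_in, n_out, num_blocks, exclusive):
  """Illustrative [n_in, n_out] mask: entry (i, j) == 1 means output j may see input i."""
  d_in, d_out = n_in // num_blocks, n_out // num_blocks
  mask = np.zeros([n_in, n_out], dtype=np.float32)
  for k in range(num_blocks):
    # Input block k feeds output blocks strictly after k (exclusive=True)
    # or from block k onward (exclusive=False).
    first_out = (k + 1) * d_out if exclusive else k * d_out
    mask[k * d_in:(k + 1) * d_in, first_out:] = 1.0
  return mask

# exclusive=True zeros the (block) diagonal, as used for the first layer of a MADE.
print(block_autoregressive_mask(4, 4, num_blocks=4, exclusive=True))
```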
Returns | |
---|---|
Output tensor. |
Raises | |
---|---|
NotImplementedError | If the rightmost dimension of inputs is unknown prior to graph execution.
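The following is a minimal usage sketch, not from the original docs, assuming TensorFlow 1.x where tf.contrib is available; the event size and hidden width are illustrative. It stacks two masked_dense layers MADE-style, with exclusive=True on the first layer, and forwards activation to tf.compat.v1.layers.dense via **kwargs.

```python
# Minimal MADE-style sketch (assumes TensorFlow 1.x with tf.contrib available).
import tensorflow as tf

tfb = tf.contrib.distributions.bijectors

event_size = 4  # illustrative block count; one block per input dimension here

# The rightmost dimension is statically known (event_size), so the
# NotImplementedError described above is not triggered.
x = tf.placeholder(tf.float32, shape=[None, event_size])

# First layer of the MADE: exclusive=True zeros the diagonal blocks of the mask,
# so hidden block i does not see input block i.
h = tfb.masked_dense(
    inputs=x,
    units=16,                # hidden units, split across num_blocks blocks
    num_blocks=event_size,
    exclusive=True,
    activation=tf.nn.relu)   # forwarded to tf.compat.v1.layers.dense via **kwargs

# Subsequent layer: the default exclusive=False keeps the diagonal blocks,
# preserving the autoregressive property established by the first layer.
y = tfb.masked_dense(
    inputs=h,
    units=event_size,
    num_blocks=event_size)
```

With this arrangement, output y[..., i] depends only on x[..., :i], which is the autoregressive property MADE relies on.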
References
[1]: Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. MADE: Masked Autoencoder for Distribution Estimation. In International Conference on Machine Learning, 2015. https://arxiv.org/abs/1502.03509