fully_connected creates a variable called weights, representing a fully
connected weight matrix, which is multiplied by the inputs to produce a
Tensor of hidden units. If a normalizer_fn is provided (such as
batch_norm), it is then applied. Otherwise, if normalizer_fn is
None and a biases_initializer is provided, a biases variable is
created and added to the hidden units. Finally, if activation_fn is not None,
it is applied to the hidden units as well.
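As a minimal sketch of that sequence, assuming the TF 1.x
tf.contrib.layers API (the shapes and placeholder here are illustrative):

```python
import tensorflow as tf

# Input of shape [batch_size, depth].
x = tf.placeholder(tf.float32, shape=[None, 128])

# weights ([128, 64]) and biases ([64]) are created internally;
# the default activation_fn is a ReLU.
h = tf.contrib.layers.fully_connected(x, num_outputs=64)

# With the defaults this is roughly equivalent to:
#   h = tf.nn.relu(tf.matmul(x, weights) + biases)
```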
Args:
  inputs: A tensor of at least rank 2 with a static value for the last
    dimension; e.g. [batch_size, depth] or [None, None, None, channels].
  num_outputs: Integer or long, the number of output units in the layer.
  activation_fn: Activation function. The default value is a ReLU function.
    Explicitly set it to None to skip it and maintain a linear activation.
  normalizer_fn: Normalization function to use instead of biases. If
    normalizer_fn is provided then biases_initializer and
    biases_regularizer are ignored and biases are not created nor added.
    Default is None, meaning no normalizer function is applied (see the
    sketch after this argument list).
  normalizer_params: Normalization function parameters.
  weights_initializer: An initializer for the weights.
  weights_regularizer: Optional regularizer for the weights.
  biases_initializer: An initializer for the biases. If None, biases are
    skipped.
  biases_regularizer: Optional regularizer for the biases.
  reuse: Whether or not the layer and its variables should be reused. To be
    able to reuse the layer, scope must be given.
  variables_collections: Optional list of collections for all the variables,
    or a dictionary containing a different list of collections per variable.
  outputs_collections: Collection to which the outputs are added.
  trainable: If True, also add variables to the graph collection
    GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
  scope: Optional scope for variable_scope.
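A sketch of normalizer_fn and reuse together, again assuming the TF 1.x
contrib API; the scope name 'fc1' and the batch_norm parameters are
illustrative assumptions, not part of this reference:

```python
# First call creates the variables under scope 'fc1'. Because a
# normalizer_fn is given, no biases variable is created.
net = tf.contrib.layers.fully_connected(
    x, 64,
    normalizer_fn=tf.contrib.layers.batch_norm,
    normalizer_params={'is_training': True},
    scope='fc1')

# Second call reuses the same variables: reuse=True requires the
# same scope to be given.
net_eval = tf.contrib.layers.fully_connected(
    x, 64,
    normalizer_fn=tf.contrib.layers.batch_norm,
    normalizer_params={'is_training': False},
    reuse=True,
    scope='fc1')
```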
Returns:
  The tensor variable representing the result of the series of operations.
Raises:
  ValueError: If inputs has rank less than 2 or if its last dimension is
    not set.
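To illustrate the error condition, an input whose last dimension is not
statically known should trigger this ValueError, assuming the contrib API
above (the exact error message may differ):

```python
# Rank 2, but the last dimension is not set, so the layer cannot
# build its weight matrix.
bad = tf.placeholder(tf.float32, shape=[None, None])
try:
    tf.contrib.layers.fully_connected(bad, 64)
except ValueError as err:
    print(err)
```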