
# Dense

```swift
@frozen
public struct Dense<Scalar>: Layer where Scalar: TensorFlowFloatingPoint
```

A densely-connected neural network layer.

`Dense` implements the operation `activation(matmul(input, weight) + bias)`, where `weight` is a weight matrix, `bias` is a bias vector, and `activation` is an element-wise activation function.

This layer also supports 3-D weight tensors with 2-D bias matrices. In this case, the first dimension of both is treated as the batch size, aligned with the first dimension of `input`, and the batched variant of the `matmul(_:_:)` operation is used, applying a different weight and bias to each element of the input batch.
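As a small sketch of the operation above (tensor values are illustrative, assuming the `TensorFlow` module from swift-apis is imported):

```swift
import TensorFlow

// What Dense computes by hand: activation(matmul(input, weight) + bias).
let input = Tensor<Float>([[1, 2]])           // shape [1, 2]
let weight = Tensor<Float>([[1, 0], [0, 1]])  // shape [2, 2]
let bias = Tensor<Float>([0.5, -0.5])         // shape [2]

let manual = relu(matmul(input, weight) + bias)

// The same computation through the layer itself.
let dense = Dense<Float>(weight: weight, bias: bias, activation: relu)
let output = dense(input)  // matches `manual`
```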

• ``` weight ```

The weight matrix.

#### Declaration

``public var weight: Tensor<Scalar>``
• ``` bias ```

The bias vector.

#### Declaration

``public var bias: Tensor<Scalar>``
• ``` activation ```

The element-wise activation function.

#### Declaration

```swift
@noDerivative
public let activation: Activation
```
• ``` Activation ```

The element-wise activation function type.

#### Declaration

``public typealias Activation = @differentiable (Tensor<Scalar>) -> Tensor<Scalar>``
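Any `@differentiable` function from `Tensor<Scalar>` to `Tensor<Scalar>` satisfies this typealias, including closures over library activations. A hedged sketch (the closure here wraps `leakyRelu(_:alpha:)` from swift-apis):

```swift
import TensorFlow

// A custom activation expressed as a Dense<Float>.Activation closure.
let leaky: Dense<Float>.Activation = { x in leakyRelu(x, alpha: 0.1) }
let layer = Dense<Float>(inputSize: 3, outputSize: 3, activation: leaky)
```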
• ``` init(weight:bias:activation:) ```

Creates an instance from the given weight, optional bias, and activation function.

Note: Currently, `weight` is the only differentiability parameter. `bias` can be made a differentiability parameter once `Optional` conditionally conforms to `Differentiable` (TF-499).

#### Declaration

```swift
@differentiable(wrt: weight)
public init(
  weight: Tensor<Scalar>,
  bias: Tensor<Scalar>? = nil,
  activation: @escaping Activation
)
```
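A brief usage sketch of this initializer (tensor shapes and values chosen only for illustration):

```swift
import TensorFlow

// Build a Dense layer from explicit parameters.
let weight = Tensor<Float>(randomNormal: [4, 2])
let dense = Dense<Float>(
  weight: weight,
  bias: Tensor<Float>(zeros: [2]),
  activation: tanh
)

// Omitting `bias` (or passing nil) creates a layer without a bias term.
let noBias = Dense<Float>(weight: weight, activation: identity)
```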
• ``` forward(_:) ```

Returns the output obtained from applying the layer to the given input.

#### Declaration

```swift
@differentiable
public func forward(_ input: Tensor<Scalar>) -> Tensor<Scalar>
```

#### Parameters

 ``` input ``` The input to the layer.

#### Return Value

The output.

• ``` init(inputSize:outputSize:activation:useBias:weightInitializer:biasInitializer:) ```

Creates a `Dense` layer with the specified input size, output size, and element-wise activation function. The weight matrix is created with shape `[inputSize, outputSize]` and the bias vector is created with shape `[outputSize]`.

#### Declaration

```swift
public init(
  inputSize: Int,
  outputSize: Int,
  activation: @escaping Activation = identity,
  useBias: Bool = true,
  weightInitializer: ParameterInitializer<Scalar> = glorotUniform(),
  biasInitializer: ParameterInitializer<Scalar> = zeros()
)
```

#### Parameters

 ``` inputSize ``` The dimensionality of the input space. ``` outputSize ``` The dimensionality of the output space. ``` activation ``` The activation function to use. The default value is `identity(_:)`. ``` useBias ``` Whether the layer includes a bias term. The default value is `true`. ``` weightInitializer ``` Initializer to use for `weight`. ``` biasInitializer ``` Initializer to use for `bias`.
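A hedged sketch of this size-based initializer in use (layer and input names are illustrative; `truncatedNormalInitializer` is one of the parameter initializers provided by swift-apis):

```swift
import TensorFlow

// A 784 → 10 fully connected layer with the default identity activation.
// Glorot-uniform weights and zero biases are the defaults.
var classifier = Dense<Float>(inputSize: 784, outputSize: 10)

// A variant with a custom activation and weight initializer.
let custom = Dense<Float>(
  inputSize: 784,
  outputSize: 10,
  activation: relu,
  weightInitializer: truncatedNormalInitializer(
    standardDeviation: Tensor(0.02))
)

let batch = Tensor<Float>(zeros: [32, 784])
let logits = classifier(batch)  // shape [32, 10]
```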