
# BatchNorm

```swift
@frozen
public struct BatchNorm<Scalar>: Layer where Scalar: TensorFlowFloatingPoint
```

A batch normalization layer.

Normalizes the activations of the previous layer at each batch, i.e. applies a transformation that maintains the mean activation close to `0` and the activation standard deviation close to `1`.
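The underlying transformation can be sketched in plain Swift. The helper below is purely illustrative (it is not part of the library's API) and normalizes a single feature column the way batch normalization does at training time: subtract the batch mean, divide by the batch standard deviation (stabilized by `epsilon`), then apply the learned `scale` and `offset`:

```swift
// Illustrative batch normalization of one feature column.
// Hypothetical helper, not the library's implementation.
func batchNorm(_ x: [Float], scale: Float = 1, offset: Float = 0,
               epsilon: Float = 0.001) -> [Float] {
    let mean = x.reduce(0, +) / Float(x.count)
    let variance = x.map { ($0 - mean) * ($0 - mean) }
                    .reduce(0, +) / Float(x.count)
    let std = (variance + epsilon).squareRoot()
    // y = scale * (x - mean) / sqrt(variance + epsilon) + offset
    return x.map { scale * ($0 - mean) / std + offset }
}
```

With the default `scale` and `offset`, the result has mean `0` and standard deviation close to `1` (slightly below `1` because of `epsilon`).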

## `axis`

The feature dimension.

#### Declaration

```swift
@noDerivative
public let axis: Int
```
## `momentum`

The momentum for the running mean and running variance.

#### Declaration

```swift
@noDerivative
public let momentum: Scalar
```
## `offset`

The offset value, also known as beta.

#### Declaration

```swift
public var offset: Tensor<Scalar>
```
## `scale`

The scale value, also known as gamma.

#### Declaration

```swift
public var scale: Tensor<Scalar>
```
## `epsilon`

The variance epsilon value.

#### Declaration

```swift
@noDerivative
public let epsilon: Scalar
```
## `runningMean`

The running mean.

#### Declaration

```swift
@noDerivative
public var runningMean: Parameter<Scalar>
```
## `runningVariance`

The running variance.

#### Declaration

```swift
@noDerivative
public var runningVariance: Parameter<Scalar>
```
## `init(axis:momentum:offset:scale:epsilon:runningMean:runningVariance:)`

Creates a batch normalization layer.

#### Declaration

```swift
public init(
  axis: Int,
  momentum: Scalar,
  offset: Tensor<Scalar>,
  scale: Tensor<Scalar>,
  epsilon: Scalar,
  runningMean: Tensor<Scalar>,
  runningVariance: Tensor<Scalar>
)
```

#### Parameters

- `axis`: The axis that should not be normalized (typically the feature axis).
- `momentum`: The momentum for the moving average.
- `offset`: The offset to be added to the normalized tensor.
- `scale`: The scale to multiply the normalized tensor by.
- `epsilon`: A small scalar added to the denominator to improve numerical stability.
- `runningMean`: The running mean.
- `runningVariance`: The running variance.
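This memberwise initializer is useful when restoring a layer from previously saved statistics. The snippet below is an illustrative sketch (the feature count of `4` and the freshly initialized state are assumptions, not values from this page):

```swift
import TensorFlow

// Recreate a 4-feature batch-norm layer from explicit state.
let restored = BatchNorm<Float>(
  axis: -1,                                // normalize over all but the last axis
  momentum: 0.99,
  offset: Tensor<Float>(zeros: [4]),       // beta
  scale: Tensor<Float>(ones: [4]),         // gamma
  epsilon: 0.001,
  runningMean: Tensor<Float>(zeros: [4]),
  runningVariance: Tensor<Float>(ones: [4])
)
```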
## `forward(_:)`

Returns the output obtained from applying the layer to the given input.

#### Declaration

```swift
@differentiable
public func forward(_ input: Tensor<Scalar>) -> Tensor<Scalar>
```

#### Parameters

- `input`: The input to the layer.

#### Return Value

The output.
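The layer behaves differently depending on the learning phase: during training it normalizes with the current batch's statistics and updates `runningMean`/`runningVariance`, while during inference it uses the accumulated running statistics. A hedged sketch of applying the layer (the shapes are illustrative):

```swift
import TensorFlow

let layer = BatchNorm<Float>(featureCount: 3)
let input = Tensor<Float>(randomNormal: [8, 3])  // batch of 8, 3 features

// Select the training behavior (batch statistics + running-stat updates).
Context.local.learningPhase = .training
let output = layer(input)                        // same shape as `input`
```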

## `init(featureCount:axis:momentum:epsilon:)`

Creates a batch normalization layer.

#### Declaration

```swift
public init(
  featureCount: Int,
  axis: Int = -1,
  momentum: Scalar = 0.99,
  epsilon: Scalar = 0.001
)
```

#### Parameters

- `featureCount`: The number of features.
- `axis`: The axis that should be normalized (typically the features axis).
- `momentum`: The momentum for the moving average.
- `epsilon`: A small scalar added to the denominator to improve numerical stability.
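This is the usual way to construct the layer: only `featureCount` is required, and the remaining parameters fall back to their defaults. A brief sketch (the feature count of `32` is an illustrative choice):

```swift
import TensorFlow

// Typical construction: offset starts at zeros, scale at ones.
let bn = BatchNorm<Float>(featureCount: 32)

// Equivalent, with every default spelled out:
let same = BatchNorm<Float>(
  featureCount: 32, axis: -1, momentum: 0.99, epsilon: 0.001
)
```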