# Structures

The following structures are available globally.

• ``` Concatenation ```

A concatenation of two sequences with the same element type.

#### Declaration

``````public struct Concatenation<Base1: Sequence, Base2: Sequence>: Sequence
where Base1.Element == Base2.Element``````
``extension Concatenation: Collection where Base1: Collection, Base2: Collection``
``````extension Concatenation: BidirectionalCollection
where Base1: BidirectionalCollection, Base2: BidirectionalCollection``````
``````extension Concatenation: RandomAccessCollection
where Base1: RandomAccessCollection, Base2: RandomAccessCollection``````
• ``` RotatedCollection ```

A rotated view onto a collection.

#### Declaration

``public struct RotatedCollection<Base> : Collection where Base : Collection``
``````extension RotatedCollection: BidirectionalCollection
where Base: BidirectionalCollection``````
``````extension RotatedCollection: RandomAccessCollection
where Base: RandomAccessCollection``````
• ``` AnyDifferentiable ```

#### Declaration

``public struct AnyDifferentiable : Differentiable``
• ``` AnyDerivative ```

A type-erased derivative value.

The `AnyDerivative` type forwards its operations to an arbitrary underlying base derivative value conforming to `Differentiable` and `AdditiveArithmetic`, hiding the specifics of the underlying value.

#### Declaration

``````@frozen
public struct AnyDerivative : Differentiable & AdditiveArithmetic``````
• ``` Tensor ```

A multidimensional array of elements that is a generalization of vectors and matrices to potentially higher dimensions.

The generic parameter `Scalar` describes the type of scalars in the tensor (such as `Int32`, `Float`, etc).

#### Declaration

``````@frozen
public struct Tensor<Scalar> where Scalar : TensorFlowScalar``````
``extension Tensor: Collatable``
``extension Tensor: CopyableToDevice``
``extension Tensor: AnyTensor``
``extension Tensor: ExpressibleByArrayLiteral``
``extension Tensor: CustomStringConvertible``
``extension Tensor: CustomPlaygroundDisplayConvertible``
``extension Tensor: CustomReflectable``
``extension Tensor: TensorProtocol``
``extension Tensor: TensorGroup``
``extension Tensor: ElementaryFunctions where Scalar: TensorFlowFloatingPoint``
``extension Tensor: VectorProtocol where Scalar: TensorFlowFloatingPoint``
``extension Tensor: Mergeable where Scalar: TensorFlowFloatingPoint``
``extension Tensor: Equatable where Scalar: Equatable``
``extension Tensor: Codable where Scalar: Codable``
``extension Tensor: AdditiveArithmetic where Scalar: Numeric``
``extension Tensor: PointwiseMultiplicative where Scalar: Numeric``
``extension Tensor: Differentiable & EuclideanDifferentiable where Scalar: TensorFlowFloatingPoint``
``````extension Tensor: DifferentiableTensorProtocol
where Scalar: TensorFlowFloatingPoint``````
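
For example, tensors can be built from array literals or repeated values and combined with element-wise operators (a minimal sketch; the array-literal form follows from the `ExpressibleByArrayLiteral` conformance above, and the `Tensor(repeating:shape:)` initializer is assumed):

```swift
import TensorFlow

// Construct tensors from an array literal and from a repeated value.
let matrix: Tensor<Float> = [[1, 2], [3, 4]]
let halves = Tensor<Float>(repeating: 0.5, shape: [2, 2])

print(matrix.shape)    // [2, 2]
print(matrix + halves) // element-wise addition
```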
• ``` BroadcastingPullback ```

A pullback function that performs the transpose of broadcasting two `Tensors`.

#### Declaration

``public struct BroadcastingPullback``
• ``` Context ```

A context that stores thread-local contextual information used by deep learning APIs such as layers.

Use `Context.local` to retrieve the current thread-local context.

Examples:

• Set the current learning phase to training so that layers like `BatchNorm` will compute mean and variance when applied to inputs.
```swift
Context.local.learningPhase = .training
```
• Set the current learning phase to inference so that layers like `Dropout` will not drop out units when applied to inputs.
```swift
Context.local.learningPhase = .inference
```

#### Declaration

``public struct Context``
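
As an illustration, a function can branch on the thread-local learning phase so that noise is only added during training (a minimal sketch; the `Tensor(randomNormal:)` initializer is assumed):

```swift
// Adds Gaussian noise while training; passes inputs through at inference.
func noisyIdentity(_ input: Tensor<Float>) -> Tensor<Float> {
  switch Context.local.learningPhase {
  case .training:
    return input + Tensor<Float>(randomNormal: input.shape)
  case .inference:
    return input
  }
}
```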
• ``` Conv1D ```

A 1-D convolution layer (e.g. temporal convolution over a time-series).

This layer creates a convolution filter that is convolved with the layer input to produce a tensor of outputs.

#### Declaration

``````@frozen
public struct Conv1D<Scalar> : Layer where Scalar : TensorFlowFloatingPoint``````
• ``` Conv2D ```

A 2-D convolution layer (e.g. spatial convolution over images).

This layer creates a convolution filter that is convolved with the layer input to produce a tensor of outputs.

#### Declaration

``````@frozen
public struct Conv2D<Scalar> : Layer where Scalar : TensorFlowFloatingPoint``````
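
For instance, a `Conv2D` layer can be applied to a batch of images in NHWC layout (a hedged sketch; the `filterShape: (height, width, inputChannels, outputChannels)` initializer and `Tensor(randomNormal:)` are assumptions here):

```swift
let conv = Conv2D<Float>(filterShape: (3, 3, 3, 16), padding: .same, activation: relu)
let images = Tensor<Float>(randomNormal: [8, 32, 32, 3]) // batch of 8 RGB images
let features = conv(images)                              // shape [8, 32, 32, 16]
```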
• ``` Conv3D ```

A 3-D convolution layer for spatial/spatio-temporal convolution over images.

This layer creates a convolution filter that is convolved with the layer input to produce a tensor of outputs.

#### Declaration

``````@frozen
public struct Conv3D<Scalar> : Layer where Scalar : TensorFlowFloatingPoint``````
• ``` TransposedConv1D ```

A 1-D transposed convolution layer (e.g. temporal transposed convolution over a time-series).

This layer creates a convolution filter that is transpose-convolved with the layer input to produce a tensor of outputs.

#### Declaration

``````@frozen
public struct TransposedConv1D<Scalar> : Layer where Scalar : TensorFlowFloatingPoint``````
• ``` TransposedConv2D ```

A 2-D transposed convolution layer (e.g. spatial transposed convolution over images).

This layer creates a convolution filter that is transpose-convolved with the layer input to produce a tensor of outputs.

#### Declaration

``````@frozen
public struct TransposedConv2D<Scalar> : Layer where Scalar : TensorFlowFloatingPoint``````
• ``` TransposedConv3D ```

A 3-D transposed convolution layer (e.g. spatial transposed convolution over images).

This layer creates a convolution filter that is transpose-convolved with the layer input to produce a tensor of outputs.

#### Declaration

``````@frozen
public struct TransposedConv3D<Scalar> : Layer where Scalar : TensorFlowFloatingPoint``````
• ``` DepthwiseConv2D ```

A 2-D depthwise convolution layer.

This layer creates separable convolution filters that are convolved with the layer input to produce a tensor of outputs.

#### Declaration

``````@frozen
public struct DepthwiseConv2D<Scalar> : Layer where Scalar : TensorFlowFloatingPoint``````
• ``` ZeroPadding1D ```

#### Declaration

``public struct ZeroPadding1D<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint``
• ``` ZeroPadding2D ```

#### Declaration

``public struct ZeroPadding2D<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint``
• ``` ZeroPadding3D ```

#### Declaration

``public struct ZeroPadding3D<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint``
• ``` SeparableConv1D ```

A 1-D separable convolution layer.

This layer performs a depthwise convolution that acts separately on channels followed by a pointwise convolution that mixes channels.

#### Declaration

``````@frozen
public struct SeparableConv1D<Scalar> : Layer where Scalar : TensorFlowFloatingPoint``````
• ``` SeparableConv2D ```

A 2-D separable convolution layer.

This layer performs a depthwise convolution that acts separately on channels followed by a pointwise convolution that mixes channels.

#### Declaration

``````@frozen
public struct SeparableConv2D<Scalar> : Layer where Scalar : TensorFlowFloatingPoint``````
• ``` Flatten ```

A flatten layer.

A flatten layer flattens the input when applied without affecting the batch size.

#### Declaration

``````@frozen
public struct Flatten<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint``````
• ``` Reshape ```

A reshape layer.

#### Declaration

``````@frozen
public struct Reshape<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint``````
• ``` Function ```

A layer that encloses a custom differentiable function.

#### Declaration

``public struct Function<Input, Output> : ParameterlessLayer where Input : Differentiable, Output : Differentiable``
• ``` TensorDataType ```

A TensorFlow dynamic type value that can be created from types that conform to `TensorFlowScalar`.

#### Declaration

``public struct TensorDataType : Equatable``
• ``` BFloat16 ```

#### Declaration

``````@frozen
public struct BFloat16``````
``extension BFloat16: TensorFlowScalar``
``extension BFloat16: XLAScalarType``
• ``` Dataset ```

Represents a potentially large set of elements.

A `Dataset` can be used to represent an input pipeline as a collection of element tensors.

#### Declaration

``````@available(*, deprecated, message: "Datasets will be removed in S4TF v0.10. Please use the new Batches API instead.")
@frozen
public struct Dataset<Element> where Element : TensorGroup``````
``extension Dataset: Sequence``
• ``` DatasetIterator ```

The type that allows iteration over a dataset’s elements.

#### Declaration

``````@available(*, deprecated)
@frozen
public struct DatasetIterator<Element> where Element : TensorGroup``````
``extension DatasetIterator: IteratorProtocol``
• ``` Zip2TensorGroup ```

A 2-tuple-like struct that conforms to `TensorGroup` and represents a tuple of two types that also conform to `TensorGroup`.

#### Declaration

``````@frozen
public struct Zip2TensorGroup<T, U> : TensorGroup where T : TensorGroup, U : TensorGroup``````
• ``` Dense ```

A densely-connected neural network layer.

`Dense` implements the operation `activation(matmul(input, weight) + bias)`, where `weight` is a weight matrix, `bias` is a bias vector, and `activation` is an element-wise activation function.

This layer also supports 3-D weight tensors with 2-D bias matrices. In this case, the first dimension of both is treated as the batch size aligned with the first dimension of `input`, and the batch variant of the `matmul(_:_:)` operation is used, thus using a different weight and bias for each element in the input batch.

#### Declaration

``````@frozen
public struct Dense<Scalar> : Layer where Scalar : TensorFlowFloatingPoint``````
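
The operation described above can be seen by applying a small `Dense` layer to a batch of feature vectors (a minimal sketch; `Dense(inputSize:outputSize:activation:)` is the commonly used initializer, and `Tensor(randomNormal:)` is assumed):

```swift
let layer = Dense<Float>(inputSize: 4, outputSize: 2, activation: relu)
let batch = Tensor<Float>(randomNormal: [3, 4]) // 3 examples, 4 features each
let output = layer(batch)                       // shape [3, 2]
```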
• ``` Device ```

A device on which `Tensor`s can be allocated.

#### Declaration

``public struct Device``
``extension Device: Equatable``
``extension Device: CustomStringConvertible``
• ``` Dropout ```

A dropout layer.

Dropout consists of randomly setting a fraction of input units to `0` at each update during training time, which helps prevent overfitting.

#### Declaration

``````@frozen
public struct Dropout<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint``````
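
Dropout is only active while the learning phase is `.training`; during inference it passes inputs through unchanged (a minimal sketch assuming the `Dropout(probability:)` initializer):

```swift
let dropout = Dropout<Float>(probability: 0.25)

Context.local.learningPhase = .training
let noisy = dropout(Tensor<Float>(ones: [2, 4]))     // some units zeroed, remainder rescaled

Context.local.learningPhase = .inference
let unchanged = dropout(Tensor<Float>(ones: [2, 4])) // identical to the input
```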
• ``` GaussianNoise ```

`GaussianNoise` adds noise sampled from a normal distribution.

The noise added always has mean zero, but has a configurable standard deviation.

#### Declaration

``public struct GaussianNoise<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint``
• ``` GaussianDropout ```

`GaussianDropout` multiplies the input by noise sampled from a normal distribution with mean 1.0.

Because this is a regularization layer, it is only active during training time. During inference, `GaussianDropout` passes through the input unmodified.

#### Declaration

``public struct GaussianDropout<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint``
• ``` AlphaDropout ```

An Alpha dropout layer.

Alpha dropout is a `Dropout` variant that keeps the mean and variance of its inputs at their original values, in order to preserve the self-normalizing property even after dropout is applied. Alpha dropout pairs well with scaled exponential linear units (SELUs) by randomly setting activations to the negative saturation value.

Source: Self-Normalizing Neural Networks: https://arxiv.org/abs/1706.02515

#### Declaration

``````@frozen
public struct AlphaDropout<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint``````
• ``` Embedding ```

An embedding layer.

`Embedding` is effectively a lookup table that maps indices from a fixed vocabulary to fixed-size (dense) vector representations, e.g. mapping indices to vectors such as `[[0.25, 0.1], [0.6, -0.2]]`.

#### Declaration

``public struct Embedding<Scalar> : Module where Scalar : TensorFlowFloatingPoint``
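
For example, an `Embedding` layer maps a tensor of indices to a tensor of embedding vectors (a hedged sketch; the `Embedding(vocabularySize:embeddingSize:)` initializer is an assumption here):

```swift
let embedding = Embedding<Float>(vocabularySize: 1_000, embeddingSize: 16)
let indices: Tensor<Int32> = [3, 17, 256]
let vectors = embedding(indices)   // shape [3, 16]
```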
• ``` EmptyTangentVector ```

An empty struct representing empty `TangentVector`s for parameterless layers.

#### Declaration

``````public struct EmptyTangentVector: EuclideanDifferentiable, VectorProtocol, ElementaryFunctions,
PointwiseMultiplicative, KeyPathIterable``````
• ``` Moments ```

Pair of first and second moments (i.e., mean and variance).

Note

This is needed because tuple types are not differentiable.

#### Declaration

``public struct Moments<Scalar> : Differentiable where Scalar : TensorFlowFloatingPoint``
• ``` Dilation2D ```

A 2-D morphological dilation layer.

This layer returns the morphological dilation of the input tensor with the provided filters.

#### Declaration

``````@frozen
public struct Dilation2D<Scalar> : Layer where Scalar : TensorFlowFloatingPoint``````
• ``` Erosion2D ```

A 2-D morphological erosion layer.

This layer returns the morphological erosion of the input tensor with the provided filters.

#### Declaration

``````@frozen
public struct Erosion2D<Scalar> : Layer where Scalar : TensorFlowFloatingPoint``````
• ``` Sampling ```

A lazy selection of elements, in a given order, from some base collection.

#### Declaration

``````public struct Sampling<Base: Collection, Selection: Collection>
where Selection.Element == Base.Index``````
``extension Sampling: SamplingProtocol``
``extension Sampling: Collection``
``````extension Sampling: BidirectionalCollection
where Selection: BidirectionalCollection``````
``````extension Sampling: RandomAccessCollection
where Selection: RandomAccessCollection``````
• ``` Slices ```

A collection of the longest non-overlapping contiguous slices of some `Base` collection, starting with its first element, and having some fixed maximum length.

All elements of this collection except the last have a `count` of `batchSize`; if `base.count % batchSize != 0`, the last batch's `count` is `base.count % batchSize`.

#### Declaration

``public struct Slices<Base> where Base : Collection``
``extension Slices: Collection``
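
As an illustration of the batching rule described above, splitting ten elements into batches of four yields two full batches and a final batch of `10 % 4 == 2` elements (a hedged sketch; it assumes `inBatches(of:)` is the Epochs helper that produces a `Slices` collection over its base):

```swift
let numbers = Array(0..<10)
for batch in numbers.inBatches(of: 4) {
  print(Array(batch))
}
// [0, 1, 2, 3]
// [4, 5, 6, 7]
// [8, 9]
```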
• ``` BatchNorm ```

A batch normalization layer.

Normalizes the activations of the previous layer at each batch, i.e. applies a transformation that maintains the mean activation close to `0` and the activation standard deviation close to `1`.

#### Declaration

``````@frozen
public struct BatchNorm<Scalar> : Layer where Scalar : TensorFlowFloatingPoint``````
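
For example, a `BatchNorm` layer normalizes each feature over the batch dimension during training (a minimal sketch assuming the `BatchNorm(featureCount:)` initializer and `Tensor(randomNormal:)`):

```swift
let norm = BatchNorm<Float>(featureCount: 16)
let activations = Tensor<Float>(randomNormal: [8, 16]) // batch of 8, 16 features
Context.local.learningPhase = .training
let normalized = norm(activations) // per-feature mean near 0, std near 1
```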
• ``` LayerNorm ```

A layer that applies layer normalization over a mini-batch of inputs.

Reference: Layer Normalization.

#### Declaration

``````@frozen
public struct LayerNorm<Scalar> : Layer where Scalar : TensorFlowFloatingPoint``````
• ``` GroupNorm ```

A layer that applies group normalization over a mini-batch of inputs.

Reference: Group Normalization.

#### Declaration

``````@frozen
public struct GroupNorm<Scalar> : Layer where Scalar : TensorFlowFloatingPoint``````
• ``` InstanceNorm ```

A layer that applies instance normalization over a mini-batch of inputs.

#### Declaration

``````@frozen
public struct InstanceNorm<Scalar> : Layer where Scalar : TensorFlowFloatingPoint``````
• ``` OptimizerWeightStepState ```

State for a single step of a single weight inside an optimizer.

#### Declaration

``public struct OptimizerWeightStepState``
• ``` OptimizerState ```

Global state accessed through `StateAccessor`.

#### Declaration

``public struct OptimizerState``
• ``` HyperparameterDictionary ```

`[String: Float]` but elements can be accessed as though they were members.

#### Declaration

``````@dynamicMemberLookup
public struct HyperparameterDictionary``````
• ``` ParameterGroupOptimizer ```

An optimizer that works on a single parameter group.

#### Declaration

``public struct ParameterGroupOptimizer``
• ``` LocalAccessor ```

A type-safe wrapper around an `Int` index value for optimizer local values.

#### Declaration

``public struct LocalAccessor``
• ``` GlobalAccessor ```

A type-safe wrapper around an `Int` index value for optimizer global values.

#### Declaration

``public struct GlobalAccessor``
• ``` StateAccessor ```

A type-safe wrapper around an `Int` index value for optimizer state values.

#### Declaration

``public struct StateAccessor``
• ``` ParameterGroupOptimizerBuilder ```

Builds a `ParameterGroupOptimizer`. This is used at essentially the level of a single weight in the model. A mapping from parameter groups (selected by `[Bool]`) to `ParameterGroupOptimizer` defines the final optimizer.

#### Declaration

``public struct ParameterGroupOptimizerBuilder``
• ``` MaxPool1D ```

A max pooling layer for temporal data.

#### Declaration

``````@frozen
public struct MaxPool1D<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint``````
• ``` MaxPool2D ```

A max pooling layer for spatial data.

#### Declaration

``````@frozen
public struct MaxPool2D<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint``````
• ``` MaxPool3D ```

A max pooling layer for spatial or spatio-temporal data.

#### Declaration

``````@frozen
public struct MaxPool3D<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint``````
• ``` AvgPool1D ```

An average pooling layer for temporal data.

#### Declaration

``````@frozen
public struct AvgPool1D<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint``````
• ``` AvgPool2D ```

An average pooling layer for spatial data.

#### Declaration

``````@frozen
public struct AvgPool2D<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint``````
• ``` AvgPool3D ```

An average pooling layer for spatial or spatio-temporal data.

#### Declaration

``````@frozen
public struct AvgPool3D<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint``````
• ``` GlobalAvgPool1D ```

A global average pooling layer for temporal data.

#### Declaration

``````@frozen
public struct GlobalAvgPool1D<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint``````
• ``` GlobalAvgPool2D ```

A global average pooling layer for spatial data.

#### Declaration

``````@frozen
public struct GlobalAvgPool2D<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint``````
• ``` GlobalAvgPool3D ```

A global average pooling layer for spatial and spatio-temporal data.

#### Declaration

``````@frozen
public struct GlobalAvgPool3D<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint``````
• ``` GlobalMaxPool1D ```

A global max pooling layer for temporal data.

#### Declaration

``````@frozen
public struct GlobalMaxPool1D<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint``````
• ``` GlobalMaxPool2D ```

A global max pooling layer for spatial data.

#### Declaration

``````@frozen
public struct GlobalMaxPool2D<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint``````
• ``` GlobalMaxPool3D ```

A global max pooling layer for spatial and spatio-temporal data.

#### Declaration

``````@frozen
public struct GlobalMaxPool3D<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint``````
• ``` FractionalMaxPool2D ```

A fractional max pooling layer for spatial data. Note: `FractionalMaxPool` does not have an XLA implementation, and thus may have performance implications.

#### Declaration

``````@frozen
public struct FractionalMaxPool2D<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint``````
• ``` PythonObject ```

`PythonObject` represents an object in Python and supports dynamic member lookup. Any member access like `object.foo` will dynamically ask the Python runtime for a member with the specified name on this object.

`PythonObject` is passed to and returned from all Python function calls and member references. It supports standard Python arithmetic and comparison operators.

Internally, `PythonObject` is implemented as a reference-counted pointer to a Python C API `PyObject`.

#### Declaration

``````@dynamicCallable
@dynamicMemberLookup
public struct PythonObject``````
``extension PythonObject : CustomStringConvertible``
``extension PythonObject : CustomPlaygroundDisplayConvertible``
``extension PythonObject : CustomReflectable``
``extension PythonObject : PythonConvertible, ConvertibleFromPython``
``extension PythonObject : SignedNumeric``
``extension PythonObject : Strideable``
``extension PythonObject : Equatable, Comparable``
``extension PythonObject : Hashable``
``extension PythonObject : MutableCollection``
``extension PythonObject : Sequence``
``````extension PythonObject : ExpressibleByBooleanLiteral, ExpressibleByIntegerLiteral,
ExpressibleByFloatLiteral, ExpressibleByStringLiteral``````
``extension PythonObject : ExpressibleByArrayLiteral, ExpressibleByDictionaryLiteral``
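
For example, Python values flow in and out of Swift as `PythonObject`s (a minimal sketch; it assumes a Python runtime with NumPy installed is available at run time):

```swift
import Python   // `import PythonKit` on toolchains that ship it as a separate package

let np = Python.import("numpy")
let array: PythonObject = np.array([1, 2, 3])
print(array.sum())           // 6
let total = Int(array.sum()) // conversion back to Swift yields an Optional
```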
• ``` ThrowingPythonObject ```

A `PythonObject` wrapper that enables throwing method calls. Exceptions produced by Python functions are reflected as Swift errors and thrown.

Note

It is intentional that `ThrowingPythonObject` does not have the `@dynamicCallable` attribute because the call syntax is unintuitive: `x.throwing(arg1, arg2, ...)`. The methods will still be named `dynamicallyCall` until further discussion/design.

#### Declaration

``public struct ThrowingPythonObject``
• ``` CheckingPythonObject ```

A `PythonObject` wrapper that enables member accesses. Member access operations return an `Optional` result. When member access fails, `nil` is returned.

#### Declaration

``````@dynamicMemberLookup
public struct CheckingPythonObject``````
• ``` PythonInterface ```

An interface for Python.

`PythonInterface` allows interaction with Python. It can be used to import modules and dynamically access Python builtin types and functions.

Note

It is not intended for `PythonInterface` to be initialized directly. Instead, please use the global instance of `PythonInterface` called `Python`.

#### Declaration

``````@dynamicMemberLookup
public struct PythonInterface``````
• ``` PythonLibrary ```

#### Declaration

``public struct PythonLibrary``
• ``` AnyRandomNumberGenerator ```

A type-erased random number generator.

The `AnyRandomNumberGenerator` type forwards random number generating operations to an underlying random number generator, hiding its specific underlying type.

#### Declaration

``public struct AnyRandomNumberGenerator : RandomNumberGenerator``
• ``` ARC4RandomNumberGenerator ```

An implementation of `SeedableRandomNumberGenerator` using ARC4.

ARC4 is a stream cipher that generates a pseudo-random stream of bytes. This PRNG uses the seed as its key.

ARC4 is described in Schneier, B., “Applied Cryptography: Protocols, Algorithms, and Source Code in C”, 2nd Edition, 1996.

An individual generator is not thread-safe, but distinct generators do not share state. The random data generated is of high quality, but is not suitable for cryptographic applications.

#### Declaration

``````@frozen
public struct ARC4RandomNumberGenerator : SeedableRandomNumberGenerator``````
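
Seeding two generators with the same value reproduces the same stream (a hedged sketch; it assumes the integer-seed convenience initializer provided through `SeedableRandomNumberGenerator`):

```swift
var a = ARC4RandomNumberGenerator(seed: 42)
var b = ARC4RandomNumberGenerator(seed: 42)
print(a.next() == b.next())   // true: identical seeds give identical streams
```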
• ``` ThreefryRandomNumberGenerator ```

An implementation of `SeedableRandomNumberGenerator` using Threefry. Salmon et al. SC 2011. Parallel random numbers: as easy as 1, 2, 3. http://www.thesalmons.org/john/random123/papers/random123sc11.pdf

This struct implements a 20-round Threefry2x32 PRNG. It must be seeded with a 64-bit value.

An individual generator is not thread-safe, but distinct generators do not share state. The random data generated is of high quality, but is not suitable for cryptographic applications.

#### Declaration

``public struct ThreefryRandomNumberGenerator : SeedableRandomNumberGenerator``
• ``` PhiloxRandomNumberGenerator ```

An implementation of `SeedableRandomNumberGenerator` using Philox. Salmon et al. SC 2011. Parallel random numbers: as easy as 1, 2, 3. http://www.thesalmons.org/john/random123/papers/random123sc11.pdf

This struct implements a 10-round Philox4x32 PRNG. It must be seeded with a 64-bit value.

An individual generator is not thread-safe, but distinct generators do not share state. The random data generated is of high quality, but is not suitable for cryptographic applications.

#### Declaration

``public struct PhiloxRandomNumberGenerator : SeedableRandomNumberGenerator``
• ``` UniformIntegerDistribution ```

#### Declaration

``````@frozen
public struct UniformIntegerDistribution<T> : RandomDistribution where T : FixedWidthInteger``````
• ``` UniformFloatingPointDistribution ```

#### Declaration

``````@frozen
public struct UniformFloatingPointDistribution<T: BinaryFloatingPoint>: RandomDistribution
where T.RawSignificand: FixedWidthInteger``````
• ``` NormalDistribution ```

#### Declaration

``````@frozen
public struct NormalDistribution<T: BinaryFloatingPoint>: RandomDistribution
where T.RawSignificand: FixedWidthInteger``````
• ``` BetaDistribution ```

#### Declaration

``@frozen``
• ``` RNNCellInput ```

An input to a recurrent neural network.

#### Declaration

``public struct RNNCellInput<Input, State> : Differentiable where Input : Differentiable, State : Differentiable``
``````extension RNNCellInput: EuclideanDifferentiable
where Input: EuclideanDifferentiable, State: EuclideanDifferentiable``````
• ``` RNNCellOutput ```

An output from a recurrent neural network.

#### Declaration

``public struct RNNCellOutput<Output, State> : Differentiable where Output : Differentiable, State : Differentiable``
``````extension RNNCellOutput: EuclideanDifferentiable
where Output: EuclideanDifferentiable, State: EuclideanDifferentiable``````
• ``` BasicRNNCell ```

A basic RNN cell.

#### Declaration

``public struct BasicRNNCell<Scalar> : RecurrentLayerCell where Scalar : TensorFlowFloatingPoint``
• ``` LSTMCell ```

An LSTM cell.

#### Declaration

``public struct LSTMCell<Scalar> : RecurrentLayerCell where Scalar : TensorFlowFloatingPoint``
• ``` GRUCell ```

A GRU cell.

#### Declaration

``public struct GRUCell<Scalar> : RecurrentLayerCell where Scalar : TensorFlowFloatingPoint``
• ``` RecurrentLayer ```

#### Declaration

``public struct RecurrentLayer<Cell> : Layer where Cell : RecurrentLayerCell``
``extension RecurrentLayer: Equatable where Cell: Equatable``
``extension RecurrentLayer: AdditiveArithmetic where Cell: AdditiveArithmetic``
• ``` BidirectionalRecurrentLayer ```

#### Declaration

``````public struct BidirectionalRecurrentLayer<Cell: RecurrentLayerCell>: Layer
where Cell.TimeStepOutput: Mergeable``````
• ``` Sequential ```

A layer that sequentially composes two or more other layers.

### Examples:

• Build a simple 2-layer perceptron model for MNIST:
```swift
let inputSize = 28 * 28
let hiddenSize = 300
var classifier = Sequential {
  Dense<Float>(inputSize: inputSize, outputSize: hiddenSize, activation: relu)
  Dense<Float>(inputSize: hiddenSize, outputSize: 3, activation: identity)
}
```
• Build an autoencoder for MNIST:
```swift
var autoencoder = Sequential {
  // The encoder.
  Dense<Float>(inputSize: 28 * 28, outputSize: 128, activation: relu)
  Dense<Float>(inputSize: 128, outputSize: 64, activation: relu)
  Dense<Float>(inputSize: 64, outputSize: 12, activation: relu)
  Dense<Float>(inputSize: 12, outputSize: 3, activation: relu)
  // The decoder.
  Dense<Float>(inputSize: 3, outputSize: 12, activation: relu)
  Dense<Float>(inputSize: 12, outputSize: 64, activation: relu)
  Dense<Float>(inputSize: 64, outputSize: 128, activation: relu)
  Dense<Float>(inputSize: 128, outputSize: imageHeight * imageWidth, activation: tanh)
}
```

#### Declaration

``````public struct Sequential<Layer1: Module, Layer2: Layer>: Module
where Layer1.Output == Layer2.Input,
Layer1.TangentVector.VectorSpaceScalar == Layer2.TangentVector.VectorSpaceScalar``````
``extension Sequential: Layer where Layer1: Layer``
• ``` LayerBuilder ```

#### Declaration

``````@_functionBuilder
public struct LayerBuilder``````
• ``` ShapedArray ```

`ShapedArray` is a multi-dimensional array. It has a shape, which has type `[Int]` and defines the array dimensions, and uses a `TensorBuffer` internally as storage.

#### Declaration

``````@frozen
public struct ShapedArray<Scalar> : _ShapedArrayProtocol``````
``extension ShapedArray: RandomAccessCollection, MutableCollection``
``extension ShapedArray: CustomStringConvertible``
``extension ShapedArray: CustomPlaygroundDisplayConvertible``
``extension ShapedArray: CustomReflectable``
``extension ShapedArray: ExpressibleByArrayLiteral where Scalar: TensorFlowScalar``
``extension ShapedArray: Equatable where Scalar: Equatable``
``extension ShapedArray: Hashable where Scalar: Hashable``
``extension ShapedArray: Codable where Scalar: Codable``
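
For example, a `ShapedArray` can be built from a shape and a flat list of scalars and then inspected through its `shape` and `scalars` properties (a minimal sketch using the `ShapedArray(shape:scalars:)` initializer shown in the `ShapedArraySlice` examples below):

```swift
let array = ShapedArray(shape: [2, 3], scalars: [1, 2, 3, 4, 5, 6])
print(array.shape)   // [2, 3]
print(array.scalars) // [1, 2, 3, 4, 5, 6]
```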
• ``` ShapedArraySlice ```

A contiguous slice of a `ShapedArray` or `ShapedArraySlice` instance.

`ShapedArraySlice` enables fast, efficient operations on contiguous slices of `ShapedArray` instances. `ShapedArraySlice` instances do not have their own storage. Instead, they provide a view onto the storage of their base `ShapedArray`. `ShapedArraySlice` can represent two different kinds of slices: element arrays and subarrays.

Element arrays are subdimensional elements of a `ShapedArray`: their rank is one less than that of their base. Element array slices are obtained by indexing a `ShapedArray` instance with a single `Int32` index.

For example:

```swift
var matrix = ShapedArray(shape: [2, 2], scalars: [0, 1, 2, 3])
// `matrix` represents [[0, 1], [2, 3]].

let element = matrix[0]
// `element` is a `ShapedArraySlice` with shape [2]. It is an element
// array, specifically the first element in `matrix`: [0, 1].

matrix[1] = ShapedArraySlice(shape: [2], scalars: [4, 8])
// The second element in `matrix` has been mutated.
// `matrix` now represents [[0, 1], [4, 8]].
```

Subarrays are a contiguous range of the elements in a `ShapedArray`. The rank of a subarray is the same as that of its base, but its leading dimension is the count of the slice range. Subarray slices are obtained by indexing a `ShapedArray` with a `Range<Int32>` that represents a range of elements (in the leading dimension). Methods like `prefix(_:)` and `suffix(_:)` that internally index with a range also produce subarrays.

For example:

```swift
let zeros = ShapedArray(repeating: 0, shape: [3, 2])
var matrix = ShapedArray(shape: [3, 2], scalars: Array(0..<6))
// `zeros` represents [[0, 0], [0, 0], [0, 0]].
// `matrix` represents [[0, 1], [2, 3], [4, 5]].

let subarray = matrix.prefix(2)
// `subarray` is a `ShapedArraySlice` with shape [2, 2]. It is a slice
// of the first 2 elements in `matrix` and represents [[0, 1], [2, 3]].

matrix[0..<2] = zeros.prefix(2)
// The first 2 elements in `matrix` have been mutated.
// `matrix` now represents [[0, 0], [0, 0], [4, 5]].
```

#### Declaration

``````@frozen
public struct ShapedArraySlice<Scalar> : _ShapedArrayProtocol``````
``extension ShapedArraySlice: RandomAccessCollection, MutableCollection``
``extension ShapedArraySlice: CustomStringConvertible``
``extension ShapedArraySlice: CustomPlaygroundDisplayConvertible``
``extension ShapedArraySlice: CustomReflectable``
``extension ShapedArraySlice: ExpressibleByArrayLiteral where Scalar: TensorFlowScalar``
``extension ShapedArraySlice: Equatable where Scalar: Equatable``
``extension ShapedArraySlice: Hashable where Scalar: Hashable``
``extension ShapedArraySlice: Codable where Scalar: Codable``
• ``` StringTensor ```

`StringTensor` is a multi-dimensional array whose elements are `String`s.

#### Declaration

``````@frozen
public struct StringTensor``````
``extension StringTensor: TensorGroup``
• ``` TensorHandle ```

`TensorHandle` is the type used by ops. It includes a `Scalar` type, which compiler internals can use to determine the datatypes of parameters when they are extracted into a tensor program.

#### Declaration

``public struct TensorHandle<Scalar> where Scalar : _TensorFlowDataTypeCompatible``
``extension TensorHandle: TensorGroup``
• ``` ResourceHandle ```

#### Declaration

``public struct ResourceHandle``
``extension ResourceHandle: TensorGroup``
• ``` VariantHandle ```

#### Declaration

``public struct VariantHandle``
``extension VariantHandle: TensorGroup``
• ``` TensorShape ```

A struct representing the shape of a tensor.

`TensorShape` is a thin wrapper around an array of integers that represent shape dimensions. All tensor types use `TensorShape` to represent their shape.

#### Declaration

``````@frozen
public struct TensorShape : ExpressibleByArrayLiteral``````
``extension TensorShape: Collection, MutableCollection``
``extension TensorShape: RandomAccessCollection``
``extension TensorShape: RangeReplaceableCollection``
``extension TensorShape: Equatable``
``extension TensorShape: Codable``
``extension TensorShape: CustomStringConvertible``
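
Because `TensorShape` is expressible by an array literal and conforms to the collection protocols listed above, dimensions can be read and written like array elements (a minimal sketch):

```swift
var shape: TensorShape = [2, 3, 4]
print(shape.count) // 3 dimensions
print(shape[1])    // 3
shape[1] = 5       // MutableCollection: shape is now [2, 5, 4]
```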
• ``` TensorVisitorPlan ```

`TensorVisitorPlan` approximates `[WritableKeyPath<Base, Tensor<Float>>]` but is more efficient. This is useful for writing generic optimizers that map over the gradients, the existing weights, and an index that can be used to find auxiliary stored weights. This is slightly more efficient (~2x), but it could be better still: it trades slightly higher overhead (an extra pointer dereference) for not having to do the O(depth_of_tree) work required with a plain list to track down each individual `KeyPath`.

#### Declaration

``public struct TensorVisitorPlan<Base>``
• ``` UpSampling1D ```

An upsampling layer for 1-D inputs.

#### Declaration

``````@frozen
public struct UpSampling1D<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint``````
• ``` UpSampling2D ```

An upsampling layer for 2-D inputs.

#### Declaration

``````@frozen
public struct UpSampling2D<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint``````
• ``` UpSampling3D ```

An upsampling layer for 3-D inputs.

#### Declaration

``````@frozen
public struct UpSampling3D<Scalar> : ParameterlessLayer where Scalar : TensorFlowFloatingPoint``````
• ``` HostStatistics ```

Collects correct prediction counters and loss totals.

#### Declaration

``public struct HostStatistics``
[{ "type": "thumb-down", "id": "missingTheInformationINeed", "label":"Missing the information I need" },{ "type": "thumb-down", "id": "tooComplicatedTooManySteps", "label":"Too complicated / too many steps" },{ "type": "thumb-down", "id": "outOfDate", "label":"Out of date" },{ "type": "thumb-down", "id": "samplesCodeIssue", "label":"Samples / code issue" },{ "type": "thumb-down", "id": "otherDown", "label":"Other" }]
[{ "type": "thumb-up", "id": "easyToUnderstand", "label":"Easy to understand" },{ "type": "thumb-up", "id": "solvedMyProblem", "label":"Solved my problem" },{ "type": "thumb-up", "id": "otherUp", "label":"Other" }]