The following functions are available globally.

• ``` withoutDerivative(at:) ```

Returns `x` like an identity function. When used in a context where `x` is being differentiated with respect to, this function will not produce any derivative at `x`.

#### Declaration

``````@_semantics("autodiff.nonvarying")
public func withoutDerivative<T>(at x: T) -> T``````
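
For example, a minimal sketch: a value wrapped in `withoutDerivative(at:)` is treated as a constant by differentiation.

``````import TensorFlow

// Differentiate f(x) = c * x, where c = withoutDerivative(at: x) is held
// constant. At x = 3 the gradient is c = 3 rather than 2 * x = 6.
let g = gradient(at: Float(3)) { x -> Float in
    let c = withoutDerivative(at: x)
    return c * x
}
print(g)  // 3.0``````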
• ``` withoutDerivative(at:in:) ```

Applies the given closure `body` to `x`. When used in a context where `x` is being differentiated with respect to, this function will not produce any derivative at `x`.

#### Declaration

``````@_semantics("autodiff.nonvarying")
public func withoutDerivative<T, R>(at x: T, in body: (T) -> R) -> R``````
• ``` differentiableFunction(from:) ```

Creates a differentiable function from a vector-Jacobian product (VJP) function.

#### Declaration

``````public func differentiableFunction<T : Differentiable, R : Differentiable>(
from vjp: @escaping (T)
-> (value: R, pullback: (R.TangentVector) -> T.TangentVector)
) -> @differentiable (T) -> R``````
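
A minimal sketch: building a differentiable square function from a hand-written pullback, then differentiating it with `gradient(at:in:)` (documented later in this reference).

``````import TensorFlow

// value: x^2, pullback: v -> 2 * x * v (the hand-written vector-Jacobian product).
let square = differentiableFunction { (x: Float) in
    (value: x * x, pullback: { v in 2 * x * v })
}
print(gradient(at: Float(5), in: square))  // 10.0``````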
• ``` differentiableFunction(from:) ```

Creates a differentiable function from a vector-Jacobian product (VJP) function.

#### Declaration

``````public func differentiableFunction<T, U, R>(
from vjp: @escaping (T, U)
-> (value: R, pullback: (R.TangentVector)
-> (T.TangentVector, U.TangentVector))
) -> @differentiable (T, U) -> R
where T : Differentiable, U : Differentiable, R : Differentiable``````
• ``` withRecomputationInPullbacks(_:) ```

Returns a function that recomputes `body` in its pullback, a technique known as checkpointing in traditional automatic differentiation.

#### Declaration

``````public func withRecomputationInPullbacks<T, U>(
_ body: @escaping @differentiable (T) -> U
) -> @differentiable (T) -> U where T : Differentiable, U : Differentiable``````
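
A minimal usage sketch, assuming the `gelu(_:)` and `gradient(at:in:)` functions documented elsewhere in this reference; the wrapped computation is redone during the backward pass instead of being stored, trading compute for memory.

``````import TensorFlow

// Checkpointed GELU: its intermediate values are recomputed in the pullback.
let checkpointedGelu = withRecomputationInPullbacks { (x: Tensor<Float>) in
    gelu(x)
}
let grad = gradient(at: Tensor<Float>([1, 2, 3])) { x in
    checkpointedGelu(x).sum()
}``````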
• ``` valueWithDifferential(at:in:) ```

#### Declaration

``````public func valueWithDifferential<T, R>(
at x: T, in f: @differentiable (T) -> R
) -> (value: R, differential: (T.TangentVector) -> R.TangentVector)``````
• ``` valueWithDifferential(at:_:in:) ```

#### Declaration

``````public func valueWithDifferential<T, U, R>(
at x: T, _ y: U, in f: @differentiable (T, U) -> R
) -> (value: R,
differential: (T.TangentVector, U.TangentVector) -> R.TangentVector)``````
• ``` valueWithDifferential(at:_:_:in:) ```

#### Declaration

``````public func valueWithDifferential<T, U, V, R>(
at x: T, _ y: U, _ z: V, in f: @differentiable (T, U, V) -> R
) -> (value: R,
differential: (T.TangentVector, U.TangentVector, V.TangentVector)
-> (R.TangentVector))``````
• ``` valueWithPullback(at:in:) ```

#### Declaration

``````public func valueWithPullback<T, R>(
at x: T, in f: @differentiable (T) -> R
) -> (value: R, pullback: (R.TangentVector) -> T.TangentVector)``````
• ``` valueWithPullback(at:_:in:) ```

#### Declaration

``````public func valueWithPullback<T, U, R>(
at x: T, _ y: U, in f: @differentiable (T, U) -> R
) -> (value: R,
pullback: (R.TangentVector) -> (T.TangentVector, U.TangentVector))``````
• ``` valueWithPullback(at:_:_:in:) ```

#### Declaration

``````public func valueWithPullback<T, U, V, R>(
at x: T, _ y: U, _ z: V, in f: @differentiable (T, U, V) -> R
) -> (value: R,
pullback: (R.TangentVector)
-> (T.TangentVector, U.TangentVector, V.TangentVector))``````
• ``` differential(at:in:) ```

#### Declaration

``````public func differential<T, R>(
at x: T, in f: @differentiable (T) -> R
) -> (T.TangentVector) -> R.TangentVector``````
• ``` differential(at:_:in:) ```

#### Declaration

``````public func differential<T, U, R>(
at x: T, _ y: U, in f: @differentiable (T, U) -> R
) -> (T.TangentVector, U.TangentVector) -> R.TangentVector``````
• ``` differential(at:_:_:in:) ```

#### Declaration

``````public func differential<T, U, V, R>(
at x: T, _ y: U, _ z: V, in f: @differentiable (T, U, V) -> R
) -> (T.TangentVector, U.TangentVector, V.TangentVector) -> (R.TangentVector)``````
• ``` pullback(at:in:) ```

#### Declaration

``````public func pullback<T, R>(
at x: T, in f: @differentiable (T) -> R
) -> (R.TangentVector) -> T.TangentVector``````
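
For example, a minimal sketch:

``````import TensorFlow

// The pullback of f(x) = x^2 at x = 3 maps a cotangent v to 2 * 3 * v.
let pb = pullback(at: Float(3)) { x in x * x }
print(pb(1))  // 6.0``````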
• ``` pullback(at:_:in:) ```

#### Declaration

``````public func pullback<T, U, R>(
at x: T, _ y: U, in f: @differentiable (T, U) -> R
) -> (R.TangentVector) -> (T.TangentVector, U.TangentVector)``````
• ``` pullback(at:_:_:in:) ```

#### Declaration

``````public func pullback<T, U, V, R>(
at x: T, _ y: U, _ z: V, in f: @differentiable (T, U, V) -> R
) -> (R.TangentVector)
-> (T.TangentVector, U.TangentVector, V.TangentVector)``````
• ``` derivative(at:in:) ```

#### Declaration

``````public func derivative<T: FloatingPoint, R>(
at x: T, in f: @differentiable (T) -> R
) -> R.TangentVector
where T.TangentVector == T``````
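
For example, a minimal sketch:

``````import TensorFlow

// Derivative of x^3 at x = 2 is 3 * 2^2 = 12.
let d = derivative(at: Float(2)) { x in x * x * x }
print(d)  // 12.0``````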
• ``` derivative(at:_:in:) ```

#### Declaration

``````public func derivative<T: FloatingPoint, U: FloatingPoint, R>(
at x: T, _ y: U, in f: @differentiable (T, U) -> R
) -> R.TangentVector
where T.TangentVector == T,
U.TangentVector == U``````
• ``` derivative(at:_:_:in:) ```

#### Declaration

``````public func derivative<T: FloatingPoint, U: FloatingPoint, V: FloatingPoint, R>(
at x: T, _ y: U, _ z: V, in f: @differentiable (T, U, V) -> R
) -> R.TangentVector
where T.TangentVector == T,
U.TangentVector == U,
V.TangentVector == V``````
• ``` gradient(at:in:) ```

#### Declaration

``````public func gradient<T, R>(
at x: T, in f: @differentiable (T) -> R
) -> T.TangentVector
where R : FloatingPoint, R.TangentVector == R``````
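
For example, a minimal sketch:

``````import TensorFlow

// Gradient of x^2 at x = 3 is 2 * 3 = 6.
let dydx = gradient(at: Float(3)) { x in x * x }
print(dydx)  // 6.0``````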
• ``` gradient(at:_:in:) ```

#### Declaration

``````public func gradient<T, U, R>(
at x: T, _ y: U, in f: @differentiable (T, U) -> R
) -> (T.TangentVector, U.TangentVector)
where R : FloatingPoint, R.TangentVector == R``````
• ``` gradient(at:_:_:in:) ```

#### Declaration

``````public func gradient<T, U, V, R>(
at x: T, _ y: U, _ z: V, in f: @differentiable (T, U, V) -> R
) -> (T.TangentVector, U.TangentVector, V.TangentVector)
where R : FloatingPoint, R.TangentVector == R``````
• ``` valueWithDerivative(at:in:) ```

#### Declaration

``````public func valueWithDerivative<T: FloatingPoint, R>(
at x: T, in f: @escaping @differentiable (T) -> R
) -> (value: R, derivative: R.TangentVector)
where T.TangentVector == T``````
• ``` valueWithDerivative(at:_:in:) ```

#### Declaration

``````public func valueWithDerivative<T: FloatingPoint, U: FloatingPoint, R>(
at x: T, _ y: U, in f: @escaping @differentiable (T, U) -> R
) -> (value: R, derivative: R.TangentVector)
where T.TangentVector == T,
U.TangentVector == U``````
• ``` valueWithDerivative(at:_:_:in:) ```

#### Declaration

``````public func valueWithDerivative<
T: FloatingPoint, U: FloatingPoint, V: FloatingPoint, R>(
at x: T, _ y: U, _ z: V, in f: @escaping @differentiable (T, U, V) -> R
) -> (value: R, derivative: R.TangentVector)
where T.TangentVector == T,
U.TangentVector == U,
V.TangentVector == V``````
• ``` valueWithGradient(at:in:) ```

#### Declaration

``````public func valueWithGradient<T, R>(
at x: T, in f: @differentiable (T) -> R
) -> (value: R, gradient: T.TangentVector)
where R : FloatingPoint, R.TangentVector == R``````
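
For example, a minimal sketch that obtains the value and the gradient in a single call:

``````import TensorFlow

// value = 3^2 = 9, gradient = 2 * 3 = 6.
let (value, grad) = valueWithGradient(at: Float(3)) { x in x * x }
print(value, grad)  // 9.0 6.0``````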
• ``` valueWithGradient(at:_:in:) ```

#### Declaration

``````public func valueWithGradient<T, U, R>(
at x: T, _ y: U, in f: @differentiable (T, U) -> R
) -> (value: R, gradient: (T.TangentVector, U.TangentVector))
where R : FloatingPoint, R.TangentVector == R``````
• ``` valueWithGradient(at:_:_:in:) ```

#### Declaration

``````public func valueWithGradient<T, U, V, R>(
at x: T, _ y: U, _ z: V, in f: @differentiable (T, U, V) -> R
) -> (value: R,
gradient: (T.TangentVector, U.TangentVector, V.TangentVector))
where R : FloatingPoint, R.TangentVector == R``````
• ``` derivative(of:) ```

#### Declaration

``````public func derivative<T: FloatingPoint, R>(
of f: @escaping @differentiable (T) -> R
) -> (T) -> R.TangentVector
where T.TangentVector == T``````
• ``` derivative(of:) ```

#### Declaration

``````public func derivative<T: FloatingPoint, U: FloatingPoint, R>(
of f: @escaping @differentiable (T, U) -> R
) -> (T, U) -> R.TangentVector
where T.TangentVector == T,
U.TangentVector == U``````
• ``` derivative(of:) ```

#### Declaration

``````public func derivative<T: FloatingPoint, U: FloatingPoint, V: FloatingPoint, R>(
of f: @escaping @differentiable (T, U, V) -> R
) -> (T, U, V) -> R.TangentVector
where T.TangentVector == T,
U.TangentVector == U,
V.TangentVector == V``````
• ``` gradient(of:) ```

#### Declaration

``````public func gradient<T, R>(
of f: @escaping @differentiable (T) -> R
) -> (T) -> T.TangentVector
where R : FloatingPoint, R.TangentVector == R``````
• ``` gradient(of:) ```

#### Declaration

``````public func gradient<T, U, R>(
of f: @escaping @differentiable (T, U) -> R
) -> (T, U) -> (T.TangentVector, U.TangentVector)
where R : FloatingPoint, R.TangentVector == R``````
• ``` gradient(of:) ```

#### Declaration

``````public func gradient<T, U, V, R>(
of f: @escaping @differentiable (T, U, V) -> R
) -> (T, U, V) -> (T.TangentVector, U.TangentVector, V.TangentVector)
where R : FloatingPoint, R.TangentVector == R``````
• ``` valueWithDerivative(of:) ```

#### Declaration

``````public func valueWithDerivative<T: FloatingPoint, R>(
of f: @escaping @differentiable (T) -> R
) -> (T) -> (value: R, derivative: R.TangentVector)
where T.TangentVector == T``````
• ``` valueWithDerivative(of:) ```

#### Declaration

``````public func valueWithDerivative<T: FloatingPoint, U: FloatingPoint, R>(
of f: @escaping @differentiable (T, U) -> R
) -> (T, U) -> (value: R, derivative: R.TangentVector)
where T.TangentVector == T,
U.TangentVector == U``````
• ``` valueWithDerivative(of:) ```

#### Declaration

``````public func valueWithDerivative<
T: FloatingPoint, U: FloatingPoint, V: FloatingPoint, R>(
of f: @escaping @differentiable (T, U, V) -> R
) -> (T, U, V) -> (value: R, derivative: R.TangentVector)
where T.TangentVector == T,
U.TangentVector == U,
V.TangentVector == V``````
• ``` valueWithGradient(of:) ```

#### Declaration

``````public func valueWithGradient<T, R>(
of f: @escaping @differentiable (T) -> R
) -> (T) -> (value: R, gradient: T.TangentVector)
where R : FloatingPoint, R.TangentVector == R``````
• ``` valueWithGradient(of:) ```

#### Declaration

``````public func valueWithGradient<T, U, R>(
of f: @escaping @differentiable (T, U) -> R
) -> (T, U) -> (value: R, gradient: (T.TangentVector, U.TangentVector))
where R : FloatingPoint, R.TangentVector == R``````
• ``` valueWithGradient(of:) ```

#### Declaration

``````public func valueWithGradient<T, U, V, R>(
of f: @escaping @differentiable (T, U, V) -> R
) -> (T, U, V)
-> (value: R,
gradient: (T.TangentVector, U.TangentVector, V.TangentVector))
where R : FloatingPoint, R.TangentVector == R``````
• ``` l1Loss(predicted:expected:) ```

Returns the L1 loss between predictions and expectations.

#### Declaration

``````@differentiable
public func l1Loss<Scalar: TensorFlowFloatingPoint>(
predicted: Tensor<Scalar>,
expected: Tensor<Scalar>
) -> Tensor<Scalar>``````

#### Parameters

• ``` predicted ``` Predicted outputs from a neural network.
• ``` expected ``` Expected values, i.e. targets, that correspond to the correct output.
• ``` l2Loss(predicted:expected:) ```

Returns the L2 loss between predictions and expectations.

#### Declaration

``````@differentiable
public func l2Loss<Scalar: TensorFlowFloatingPoint>(
predicted: Tensor<Scalar>,
expected: Tensor<Scalar>
) -> Tensor<Scalar>``````

#### Parameters

• ``` predicted ``` Predicted outputs from a neural network.
• ``` expected ``` Expected values, i.e. targets, that correspond to the correct output.
• ``` hingeLoss(predicted:expected:) ```

Returns the hinge loss between predictions and expectations.

#### Declaration

``````@differentiable
public func hingeLoss<Scalar: TensorFlowFloatingPoint>(
predicted: Tensor<Scalar>,
expected: Tensor<Scalar>
) -> Tensor<Scalar>``````

#### Parameters

• ``` predicted ``` Predicted outputs from a neural network.
• ``` expected ``` Expected values, i.e. targets, that correspond to the correct output.
• ``` squaredHingeLoss(predicted:expected:) ```

Returns the squared hinge loss between predictions and expectations.

#### Declaration

``````@differentiable
public func squaredHingeLoss<Scalar: TensorFlowFloatingPoint>(
predicted: Tensor<Scalar>,
expected: Tensor<Scalar>
) -> Tensor<Scalar>``````

#### Parameters

• ``` predicted ``` Predicted outputs from a neural network.
• ``` expected ``` Expected values, i.e. targets, that correspond to the correct output.
• ``` categoricalHingeLoss(predicted:expected:) ```

Returns the categorical hinge loss between predictions and expectations.

#### Declaration

``````@differentiable
public func categoricalHingeLoss<Scalar: TensorFlowFloatingPoint>(
predicted: Tensor<Scalar>,
expected: Tensor<Scalar>
) -> Tensor<Scalar>``````

#### Parameters

• ``` predicted ``` Predicted outputs from a neural network.
• ``` expected ``` Expected values, i.e. targets, that correspond to the correct output.
• ``` logCoshLoss(predicted:expected:) ```

Returns the logarithm of the hyperbolic cosine of the error between predictions and expectations.

#### Declaration

``````@differentiable
public func logCoshLoss<Scalar: TensorFlowFloatingPoint>(
predicted: Tensor<Scalar>,
expected: Tensor<Scalar>
) -> Tensor<Scalar>``````

#### Parameters

• ``` predicted ``` Predicted outputs from a neural network.
• ``` expected ``` Expected values, i.e. targets, that correspond to the correct output.
• ``` poissonLoss(predicted:expected:) ```

Returns the Poisson loss between predictions and expectations.

#### Declaration

``````@differentiable
public func poissonLoss<Scalar: TensorFlowFloatingPoint>(
predicted: Tensor<Scalar>,
expected: Tensor<Scalar>
) -> Tensor<Scalar>``````

#### Parameters

• ``` predicted ``` Predicted outputs from a neural network.
• ``` expected ``` Expected values, i.e. targets, that correspond to the correct output.
• ``` kullbackLeiblerDivergence(predicted:expected:) ```

Returns the Kullback-Leibler divergence (KL divergence) between expectations and predictions. Given two distributions `p` and `q`, KL divergence computes `p * log(p / q)`.

#### Declaration

``````@differentiable
public func kullbackLeiblerDivergence<Scalar: TensorFlowFloatingPoint>(
predicted: Tensor<Scalar>,
expected: Tensor<Scalar>
) -> Tensor<Scalar>``````

#### Parameters

• ``` predicted ``` Predicted outputs from a neural network.
• ``` expected ``` Expected values, i.e. targets, that correspond to the correct output.
• ``` softmaxCrossEntropy(logits:probabilities:) ```

Returns the softmax cross entropy (categorical cross entropy) between logits and labels.

#### Declaration

``````@differentiable
public func softmaxCrossEntropy<Scalar: TensorFlowFloatingPoint>(
logits: Tensor<Scalar>,
probabilities: Tensor<Scalar>
) -> Tensor<Scalar>``````

#### Parameters

• ``` logits ``` Unscaled log probabilities from a neural network.
• ``` probabilities ``` Probability values that correspond to the correct output. Each row must be a valid probability distribution.
• ``` sigmoidCrossEntropy(logits:labels:) ```

Returns the sigmoid cross entropy (binary cross entropy) between logits and labels.

#### Declaration

``````@differentiable
public func sigmoidCrossEntropy<Scalar: TensorFlowFloatingPoint>(
logits: Tensor<Scalar>,
labels: Tensor<Scalar>
) -> Tensor<Scalar>``````

#### Parameters

• ``` logits ``` The unscaled output of a neural network.
• ``` labels ``` Integer values that correspond to the correct output.
• ``` identity(_:) ```

Returns a tensor with the same shape and scalars as the specified tensor.

#### Declaration

``````@differentiable
public func identity<Scalar>(_ x: Tensor<Scalar>) -> Tensor<Scalar> where Scalar : TensorFlowScalar``````
• ``` withContext(_:_:) ```

Calls the given closure within the given context, which is set before the closure is called and restored after it returns.

#### Declaration

``public func withContext<R>(_ context: Context, _ body: () throws -> R) rethrows -> R``

#### Parameters

• ``` context ``` A context that will be set before the closure gets called and restored after the closure returns.
• ``` body ``` A nullary closure. If the closure has a return value, that value is also used as the return value of the `withContext(_:_:)` function.

#### Return Value

The return value, if any, of the `body` closure.

• ``` withLearningPhase(_:_:) ```

Calls the given closure within a context that has everything identical to the current context except for the given learning phase.

#### Declaration

``````public func withLearningPhase<R>(
_ learningPhase: LearningPhase,
_ body: () throws -> R
) rethrows -> R``````

#### Parameters

• ``` learningPhase ``` A learning phase that will be set before the closure gets called and restored after the closure returns.
• ``` body ``` A nullary closure. If the closure has a return value, that value is also used as the return value of the `withLearningPhase(_:_:)` function.

#### Return Value

The return value, if any, of the `body` closure.
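
A minimal sketch, assuming the `Dropout` layer from this library; dropout is active only in the `.training` phase.

``````import TensorFlow

let dropout = Dropout<Float>(probability: 0.5)
let x = Tensor<Float>(ones: [4])
let trainingOut = withLearningPhase(.training) { dropout(x) }    // entries randomly zeroed
let inferenceOut = withLearningPhase(.inference) { dropout(x) }  // identity``````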

• ``` withRandomSeedForTensorFlow(_:_:) ```

Calls the given closure within a context that has everything identical to the current context except for the given random seed.

#### Declaration

``````public func withRandomSeedForTensorFlow<R>(
_ randomSeed: TensorFlowSeed,
_ body: () throws -> R
) rethrows -> R``````

#### Parameters

• ``` randomSeed ``` A random seed that will be set before the closure gets called and restored after the closure returns.
• ``` body ``` A nullary closure. If the closure has a return value, that value is also used as the return value of the `withRandomSeedForTensorFlow(_:_:)` function.

#### Return Value

The return value, if any, of the `body` closure.

• ``` withRandomNumberGeneratorForTensorFlow(_:_:) ```

Calls the given closure within a context that has everything identical to the current context except for the given random number generator.

#### Declaration

``````public func withRandomNumberGeneratorForTensorFlow<G: RandomNumberGenerator, R>(
_ randomNumberGenerator: inout G,
_ body: () throws -> R
) rethrows -> R``````

#### Parameters

• ``` randomNumberGenerator ``` A random number generator that will be set before the closure gets called and restored after the closure returns.
• ``` body ``` A nullary closure. If the closure has a return value, that value is also used as the return value of the `withRandomNumberGeneratorForTensorFlow(_:_:)` function.

#### Return Value

The return value, if any, of the `body` closure.

• ``` zip(_:_:) ```

#### Declaration

``````public func zip<T: TensorGroup, U: TensorGroup>(
_ dataset1: Dataset<T>, _ dataset2: Dataset<U>
) -> Dataset<Zip2TensorGroup<T, U>>``````
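
A usage sketch; the `first` and `second` property names on `Zip2TensorGroup` are assumed here for illustration.

``````import TensorFlow

let xs = Dataset(elements: Tensor<Float>([1, 2, 3]))
let ys = Dataset(elements: Tensor<Float>([4, 5, 6]))
for pair in zip(xs, ys) {
    print(pair.first, pair.second)  // (1, 4), (2, 5), (3, 6)
}``````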
• ``` valueWithGradient(at:in:) ```

#### Declaration

``````public func valueWithGradient<T, R>(
at x: T,
in f: @differentiable (T) -> Tensor<R>
) -> (value: Tensor<R>, gradient: T.TangentVector)
where T: Differentiable, R: TensorFlowFloatingPoint``````
• ``` valueWithGradient(at:_:in:) ```

#### Declaration

``````public func valueWithGradient<T, U, R>(
at x: T,
_ y: U,
in f: @differentiable (T, U) -> Tensor<R>
) -> (value: Tensor<R>, gradient: (T.TangentVector, U.TangentVector))
where T: Differentiable, U: Differentiable, R: TensorFlowFloatingPoint``````
• ``` valueWithGradient(of:) ```

#### Declaration

``````public func valueWithGradient<T, R>(
of f: @escaping @differentiable (T) -> Tensor<R>
) -> (T) -> (value: Tensor<R>, gradient: T.TangentVector)
where T: Differentiable, R: TensorFlowFloatingPoint``````
• ``` valueWithGradient(of:) ```

#### Declaration

``````public func valueWithGradient<T, U, R>(
of f: @escaping @differentiable (T, U) -> Tensor<R>
) -> (T, U) -> (value: Tensor<R>, gradient: (T.TangentVector, U.TangentVector))
where T: Differentiable, U: Differentiable, R: TensorFlowFloatingPoint``````
• ``` gradient(at:in:) ```

#### Declaration

``````public func gradient<T, R>(
at x: T,
in f: @differentiable (T) -> Tensor<R>
) -> T.TangentVector where T: Differentiable, R: TensorFlowFloatingPoint``````
• ``` gradient(at:_:in:) ```

#### Declaration

``````public func gradient<T, U, R>(
at x: T,
_ y: U,
in f: @differentiable (T, U) -> Tensor<R>
) -> (T.TangentVector, U.TangentVector)
where T: Differentiable, U: Differentiable, R: TensorFlowFloatingPoint``````
• ``` gradient(of:) ```

#### Declaration

``````public func gradient<T, R>(
of f: @escaping @differentiable (T) -> Tensor<R>
) -> (T) -> T.TangentVector where T: Differentiable, R: TensorFlowFloatingPoint``````
• ``` gradient(of:) ```

#### Declaration

``````public func gradient<T, U, R>(
of f: @escaping @differentiable (T, U) -> Tensor<R>
) -> (T, U) -> (T.TangentVector, U.TangentVector)
where T: Differentiable, U: Differentiable, R: TensorFlowFloatingPoint``````
• ``` withDevice(_:_:perform:) ```

Executes a closure, making TensorFlow operations run on a specific kind of device.

#### Declaration

``````public func withDevice<R>(
_ kind: DeviceKind,
_ index: UInt = 0,
perform body: () throws -> R
) rethrows -> R``````

#### Parameters

• ``` kind ``` A kind of device to run TensorFlow operations on.
• ``` index ``` The device to run the ops on.
• ``` body ``` A closure whose TensorFlow operations are to be executed on the specified kind of device.
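
For example, a minimal sketch (device availability depends on the machine):

``````import TensorFlow

// Pin the enclosed TensorFlow operations to the first CPU device.
let result = withDevice(.cpu) {
    Tensor<Float>(ones: [2, 2]) + 1
}
print(result)``````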
• ``` withDevice(named:perform:) ```

Executes a closure, making TensorFlow operations run on a device with a specific name.

Some examples of device names:

• `/device:CPU:0`: The CPU of your machine.
• `/GPU:0`: Shorthand notation for the first GPU of your machine that is visible to TensorFlow.
• `/job:localhost/replica:0/task:0/device:GPU:1`: Fully qualified name of the second GPU of your machine that is visible to TensorFlow.

#### Declaration

``public func withDevice<R>(named name: String, perform body: () throws -> R) rethrows -> R``

#### Parameters

• ``` name ``` Device name.
• ``` body ``` A closure whose TensorFlow operations are to be executed on the specified kind of device.
• ``` withDefaultDevice(perform:) ```

Executes a closure, allowing TensorFlow to place TensorFlow operations on any device. This should restore the default placement behavior.

#### Declaration

``public func withDefaultDevice<R>(perform body: () throws -> R) rethrows -> R``

#### Parameters

• ``` body ``` A closure whose TensorFlow operations are to be executed on the specified kind of device.
• ``` zeros() ```

Returns a function that creates a tensor by initializing all its values to zeros.

#### Declaration

``public func zeros<Scalar>() -> ParameterInitializer<Scalar> where Scalar : TensorFlowFloatingPoint, Scalar : TensorFlowScalar``
• ``` glorotUniform(seed:) ```

Returns a function that creates a tensor by performing Glorot uniform initialization for the specified shape. Scalar values are randomly sampled from a uniform distribution between `-limit` and `limit`, generated by the default random number generator, where `limit` is `sqrt(6 / (fanIn + fanOut))` and `fanIn`/`fanOut` represent the number of input and output features multiplied by the receptive field size, if present.

#### Declaration

``````public func glorotUniform<Scalar: TensorFlowFloatingPoint>(
seed: TensorFlowSeed = Context.local.randomSeed
) -> ParameterInitializer<Scalar>``````
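
A minimal sketch; `ParameterInitializer<Scalar>` is a function from a shape to a tensor, and the shape below is chosen purely for illustration.

``````import TensorFlow

let initialize: ParameterInitializer<Float> = glorotUniform()
let weights = initialize([784, 256])
// Values are sampled from U(-limit, limit) with limit = sqrt(6 / (784 + 256)).``````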
• ``` ==(_:_:) ```

#### Declaration

``public func == (lhs: TFETensorHandle, rhs: TFETensorHandle) -> Bool``
• ``` l1Loss(predicted:expected:reduction:) ```

Returns the L1 loss between predictions and expectations.

#### Declaration

``````@differentiable
public func l1Loss<Scalar: TensorFlowFloatingPoint>(
predicted: Tensor<Scalar>,
expected: Tensor<Scalar>,
reduction: @differentiable (Tensor<Scalar>) -> Tensor<Scalar> = { $0.sum() }
) -> Tensor<Scalar>``````

#### Parameters

• ``` predicted ``` Predicted outputs from a neural network.
• ``` expected ``` Expected values, i.e. targets, that correspond to the correct output.
• ``` reduction ``` Reduction to apply on the computed element-wise loss values.
• ``` l2Loss(predicted:expected:reduction:) ```

Returns the L2 loss between predictions and expectations.

#### Declaration

``````@differentiable
public func l2Loss<Scalar: TensorFlowFloatingPoint>(
predicted: Tensor<Scalar>,
expected: Tensor<Scalar>,
reduction: @differentiable (Tensor<Scalar>) -> Tensor<Scalar> = { $0.sum() }
) -> Tensor<Scalar>``````

#### Parameters

• ``` predicted ``` Predicted outputs from a neural network.
• ``` expected ``` Expected values, i.e. targets, that correspond to the correct output.
• ``` reduction ``` Reduction to apply on the computed element-wise loss values.
• ``` meanAbsoluteError(predicted:expected:) ```

Returns the mean absolute error between predictions and expectations.

#### Declaration

``````@differentiable
public func meanAbsoluteError<Scalar: TensorFlowFloatingPoint>(
predicted: Tensor<Scalar>,
expected: Tensor<Scalar>
) -> Tensor<Scalar>``````

#### Parameters

• ``` predicted ``` Predicted outputs from a neural network.
• ``` expected ``` Expected values, i.e. targets, that correspond to the correct output.
• ``` meanSquaredError(predicted:expected:) ```

Returns the mean squared error between predictions and expectations.

#### Declaration

``````@differentiable
public func meanSquaredError<Scalar: TensorFlowFloatingPoint>(
predicted: Tensor<Scalar>,
expected: Tensor<Scalar>
) -> Tensor<Scalar>``````

#### Parameters

• ``` predicted ``` Predicted outputs from a neural network.
• ``` expected ``` Expected values, i.e. targets, that correspond to the correct output.
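
A worked sketch:

``````import TensorFlow

let predicted = Tensor<Float>([1.5, 2.0, 3.5])
let expected = Tensor<Float>([1.0, 2.0, 3.0])
let mse = meanSquaredError(predicted: predicted, expected: expected)
// (0.5^2 + 0^2 + 0.5^2) / 3 ≈ 0.1667``````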
• ``` meanSquaredLogarithmicError(predicted:expected:) ```

Returns the mean squared logarithmic error between predictions and expectations.

Note

Negative tensor entries will be clamped at `0` to avoid undefined logarithmic behavior, as `log(_:)` is undefined for negative reals.

#### Declaration

``````@differentiable
public func meanSquaredLogarithmicError<Scalar: TensorFlowFloatingPoint>(
predicted: Tensor<Scalar>,
expected: Tensor<Scalar>
) -> Tensor<Scalar>``````

#### Parameters

• ``` predicted ``` Predicted outputs from a neural network.
• ``` expected ``` Expected values, i.e. targets, that correspond to the correct output.
• ``` meanAbsolutePercentageError(predicted:expected:) ```

Returns the mean absolute percentage error between predictions and expectations.

#### Declaration

``````@differentiable
public func meanAbsolutePercentageError<Scalar: TensorFlowFloatingPoint>(
predicted: Tensor<Scalar>,
expected: Tensor<Scalar>
) -> Tensor<Scalar>``````

#### Parameters

• ``` predicted ``` Predicted outputs from a neural network.
• ``` expected ``` Expected values, i.e. targets, that correspond to the correct output.
• ``` hingeLoss(predicted:expected:reduction:) ```

Returns the hinge loss between predictions and expectations.

#### Declaration

``````@differentiable
public func hingeLoss<Scalar: TensorFlowFloatingPoint>(
predicted: Tensor<Scalar>,
expected: Tensor<Scalar>,
reduction: @differentiable (Tensor<Scalar>) -> Tensor<Scalar> = _mean
) -> Tensor<Scalar>``````

#### Parameters

• ``` predicted ``` Predicted outputs from a neural network.
• ``` expected ``` Expected values, i.e. targets, that correspond to the correct output.
• ``` reduction ``` Reduction to apply on the computed element-wise loss values.
• ``` squaredHingeLoss(predicted:expected:reduction:) ```

Returns the squared hinge loss between predictions and expectations.

#### Declaration

``````@differentiable
public func squaredHingeLoss<Scalar: TensorFlowFloatingPoint>(
predicted: Tensor<Scalar>,
expected: Tensor<Scalar>,
reduction: @differentiable (Tensor<Scalar>) -> Tensor<Scalar> = _mean
) -> Tensor<Scalar>``````

#### Parameters

• ``` predicted ``` Predicted outputs from a neural network.
• ``` expected ``` Expected values, i.e. targets, that correspond to the correct output.
• ``` reduction ``` Reduction to apply on the computed element-wise loss values.
• ``` categoricalHingeLoss(predicted:expected:reduction:) ```

Returns the categorical hinge loss between predictions and expectations.

#### Declaration

``````@differentiable
public func categoricalHingeLoss<Scalar: TensorFlowFloatingPoint>(
predicted: Tensor<Scalar>,
expected: Tensor<Scalar>,
reduction: @differentiable (Tensor<Scalar>) -> Tensor<Scalar> = _mean
) -> Tensor<Scalar>``````

#### Parameters

• ``` predicted ``` Predicted outputs from a neural network.
• ``` expected ``` Expected values, i.e. targets, that correspond to the correct output.
• ``` reduction ``` Reduction to apply on the computed element-wise loss values.
• ``` logCoshLoss(predicted:expected:reduction:) ```

Returns the logarithm of the hyperbolic cosine of the error between predictions and expectations.

#### Declaration

``````@differentiable
public func logCoshLoss<Scalar: TensorFlowFloatingPoint>(
predicted: Tensor<Scalar>,
expected: Tensor<Scalar>,
reduction: @differentiable (Tensor<Scalar>) -> Tensor<Scalar> = _mean
) -> Tensor<Scalar>``````

#### Parameters

• ``` predicted ``` Predicted outputs from a neural network.
• ``` expected ``` Expected values, i.e. targets, that correspond to the correct output.
• ``` reduction ``` Reduction to apply on the computed element-wise loss values.
• ``` poissonLoss(predicted:expected:reduction:) ```

Returns the Poisson loss between predictions and expectations.

#### Declaration

``````@differentiable
public func poissonLoss<Scalar: TensorFlowFloatingPoint>(
predicted: Tensor<Scalar>,
expected: Tensor<Scalar>,
reduction: @differentiable (Tensor<Scalar>) -> Tensor<Scalar> = _mean
) -> Tensor<Scalar>``````

#### Parameters

• ``` predicted ``` Predicted outputs from a neural network.
• ``` expected ``` Expected values, i.e. targets, that correspond to the correct output.
• ``` reduction ``` Reduction to apply on the computed element-wise loss values.
• ``` kullbackLeiblerDivergence(predicted:expected:reduction:) ```

Returns the Kullback-Leibler divergence (KL divergence) between expectations and predictions. Given two distributions `p` and `q`, KL divergence computes `p * log(p / q)`.

#### Declaration

``````@differentiable
public func kullbackLeiblerDivergence<Scalar: TensorFlowFloatingPoint>(
predicted: Tensor<Scalar>,
expected: Tensor<Scalar>,
reduction: @differentiable (Tensor<Scalar>) -> Tensor<Scalar> = { $0.sum() }
) -> Tensor<Scalar>``````

#### Parameters

• ``` predicted ``` Predicted outputs from a neural network.
• ``` expected ``` Expected values, i.e. targets, that correspond to the correct output.
• ``` reduction ``` Reduction to apply on the computed element-wise loss values.
• ``` softmaxCrossEntropy(logits:labels:reduction:) ```

Returns the softmax cross entropy (categorical cross entropy) between logits and labels.

#### Declaration

``````@differentiable
public func softmaxCrossEntropy<Scalar: TensorFlowFloatingPoint>(
logits: Tensor<Scalar>,
labels: Tensor<Int32>,
reduction: @differentiable (Tensor<Scalar>) -> Tensor<Scalar> = _mean
) -> Tensor<Scalar>``````

#### Parameters

• ``` logits ``` One-hot encoded outputs from a neural network.
• ``` labels ``` Indices (zero-indexed) of the correct outputs.
• ``` reduction ``` Reduction to apply on the computed element-wise loss values.
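
For example, a minimal sketch with a batch of two examples over three classes:

``````import TensorFlow

let logits = Tensor<Float>([[2.0, 1.0, 0.1],
                            [0.5, 2.5, 0.0]])
let labels = Tensor<Int32>([0, 1])  // correct class index for each example
let loss = softmaxCrossEntropy(logits: logits, labels: labels)``````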
• ``` softmaxCrossEntropy(logits:probabilities:reduction:) ```

Returns the softmax cross entropy (categorical cross entropy) between logits and labels.

#### Declaration

``````@differentiable
public func softmaxCrossEntropy<Scalar: TensorFlowFloatingPoint>(
logits: Tensor<Scalar>,
probabilities: Tensor<Scalar>,
reduction: @differentiable (Tensor<Scalar>) -> Tensor<Scalar> = _mean
) -> Tensor<Scalar>``````

#### Parameters

• ``` logits ``` Unscaled log probabilities from a neural network.
• ``` probabilities ``` Probability values that correspond to the correct output. Each row must be a valid probability distribution.
• ``` reduction ``` Reduction to apply on the computed element-wise loss values.
• ``` sigmoidCrossEntropy(logits:labels:reduction:) ```

Returns the sigmoid cross entropy (binary cross entropy) between logits and labels.

The reduction is applied over all elements. If a reduction over the batch size is intended, consider scaling the loss accordingly.

#### Declaration

``````@differentiable
public func sigmoidCrossEntropy<Scalar: TensorFlowFloatingPoint>(
logits: Tensor<Scalar>,
labels: Tensor<Scalar>,
reduction: @differentiable (Tensor<Scalar>) -> Tensor<Scalar> = _mean
) -> Tensor<Scalar>``````

#### Parameters

• ``` logits ``` The unscaled output of a neural network.
• ``` labels ``` Integer values that correspond to the correct output.
• ``` reduction ``` Reduction to apply on the computed element-wise loss values.
• ``` abs(_:) ```

Returns the absolute value of the specified tensor element-wise.

#### Declaration

``````@differentiable
public func abs<T>(_ x: Tensor<T>) -> Tensor<T> where T : SignedNumeric, T : TensorFlowScalar``````
• ``` log(_:) ```

Returns the natural logarithm of the specified tensor element-wise.

#### Declaration

``````@differentiable
public func log<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` log2(_:) ```

Returns the base-two logarithm of the specified tensor element-wise.

#### Declaration

``````@differentiable
public func log2<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` log10(_:) ```

Returns the base-ten logarithm of the specified tensor element-wise.

#### Declaration

``````@differentiable
public func log10<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` log1p(_:) ```

Returns the logarithm of `1 + x` element-wise.

#### Declaration

``````@differentiable
public func log1p<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` log1mexp(_:) ```

Returns `log(1 - exp(x))` using a numerically stable approach.

#### Declaration

``````@differentiable
public func log1mexp<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` sin(_:) ```

Returns the sine of the specified tensor element-wise.

#### Declaration

``````@differentiable
public func sin<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` cos(_:) ```

Returns the cosine of the specified tensor element-wise.

#### Declaration

``````@differentiable
public func cos<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` tan(_:) ```

Returns the tangent of the specified tensor element-wise.

#### Declaration

``````@differentiable
public func tan<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` sinh(_:) ```

Returns the hyperbolic sine of the specified tensor element-wise.

#### Declaration

``````@differentiable
public func sinh<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` cosh(_:) ```

Returns the hyperbolic cosine of the specified tensor element-wise.

#### Declaration

``````@differentiable
public func cosh<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` tanh(_:) ```

Returns the hyperbolic tangent of the specified tensor element-wise.

#### Declaration

``````@differentiable
public func tanh<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` acos(_:) ```

Returns the inverse cosine of the specified tensor element-wise.

#### Declaration

``````@differentiable
public func acos<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` asin(_:) ```

Returns the inverse sine of the specified tensor element-wise.

#### Declaration

``````@differentiable
public func asin<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` atan(_:) ```

Returns the inverse tangent of the specified tensor element-wise.

#### Declaration

``````@differentiable
public func atan<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` acosh(_:) ```

Returns the inverse hyperbolic cosine of the specified tensor element-wise.

#### Declaration

``````@differentiable
public func acosh<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` asinh(_:) ```

Returns the inverse hyperbolic sine of the specified tensor element-wise.

#### Declaration

``````@differentiable
public func asinh<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` atanh(_:) ```

Returns the inverse hyperbolic tangent of the specified tensor element-wise.

#### Declaration

``````@differentiable
public func atanh<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` sqrt(_:) ```

Returns the square root of the specified tensor element-wise.

#### Declaration

``````@differentiable
public func sqrt<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` rsqrt(_:) ```

Returns the inverse square root of the specified tensor element-wise.

#### Declaration

``````@differentiable
public func rsqrt<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` exp(_:) ```

Returns the exponential of the specified tensor element-wise.

#### Declaration

``````@differentiable
public func exp<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` exp2(_:) ```

Returns two raised to the power of the specified tensor element-wise.

#### Declaration

``````@differentiable
public func exp2<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` exp10(_:) ```

Returns ten raised to the power of the specified tensor element-wise.

#### Declaration

``````@differentiable
public func exp10<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` expm1(_:) ```

Returns the exponential of `x - 1` element-wise.

#### Declaration

``````@differentiable
public func expm1<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` round(_:) ```

Returns the values of the specified tensor rounded to the nearest integer, element-wise.

#### Declaration

``````@differentiable
public func round<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` ceil(_:) ```

Returns the ceiling of the specified tensor element-wise.

#### Declaration

``````@differentiable
public func ceil<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` floor(_:) ```

Returns the floor of the specified tensor element-wise.

#### Declaration

``````@differentiable
public func floor<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` sign(_:) ```

Returns an indication of the sign of the specified tensor element-wise. Specifically, computes `y = sign(x) = -1` if `x < 0`; 0 if `x == 0`; 1 if `x > 0`.

#### Declaration

``````@differentiable
public func sign<T>(_ x: Tensor<T>) -> Tensor<T> where T : Numeric, T : TensorFlowScalar``````
• ``` sigmoid(_:) ```

Returns the sigmoid of the specified tensor element-wise. Specifically, computes `1 / (1 + exp(-x))`.

#### Declaration

``````@differentiable
public func sigmoid<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` logSigmoid(_:) ```

Returns the log-sigmoid of the specified tensor element-wise. Specifically, `log(1 / (1 + exp(-x)))`. For numerical stability, we use `-softplus(-x)`.

#### Declaration

``````@differentiable
public func logSigmoid<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` softplus(_:) ```

Returns the softplus of the specified tensor element-wise. Specifically, computes `log(exp(features) + 1)`.

#### Declaration

``````@differentiable
public func softplus<T>(_ features: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` softsign(_:) ```

Returns the softsign of the specified tensor element-wise. Specifically, computes `features / (abs(features) + 1)`.

#### Declaration

``````@differentiable
public func softsign<T>(_ features: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` softmax(_:) ```

Returns the softmax of the specified tensor along the last axis. Specifically, computes `exp(x) / exp(x).sum(alongAxes: -1)`.

#### Declaration

``````@differentiable
public func softmax<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` softmax(_:alongAxis:) ```

Returns the softmax of the specified tensor along the specified axis. Specifically, computes `exp(x) / exp(x).sum(alongAxes: axis)`.

#### Declaration

``````@differentiable
public func softmax<T>(_ x: Tensor<T>, alongAxis axis: Int) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` logSoftmax(_:) ```

Returns the log-softmax of the specified tensor element-wise.

#### Declaration

``````@differentiable
public func logSoftmax<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` elu(_:) ```

Returns a tensor by applying an exponential linear unit. Specifically, computes `exp(x) - 1` if `x < 0`, and `x` otherwise. See Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs).

#### Declaration

``````@differentiable
public func elu<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` gelu(_:) ```

Returns the Gaussian Error Linear Unit (GELU) activations of the specified tensor element-wise.

Specifically, `gelu` approximates `x * P(X <= x)`, where `P(X <= x)` is the standard Gaussian cumulative distribution, by computing `x * 0.5 * (1 + tanh(sqrt(2 / π) * (x + 0.044715 * x^3)))`.

#### Declaration

``````@differentiable
public func gelu<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` relu(_:) ```

Returns a tensor by applying the ReLU activation function to the specified tensor element-wise. Specifically, computes `max(0, x)`.

#### Declaration

``````@differentiable
public func relu<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` relu6(_:) ```

Returns a tensor by applying the ReLU6 activation function, namely `min(max(0, x), 6)`.

#### Declaration

``````@differentiable
public func relu6<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` leakyRelu(_:alpha:) ```

Returns a tensor by applying the leaky ReLU activation function to the specified tensor element-wise. Specifically, computes `max(x, x * alpha)`.

#### Declaration

``````@differentiable
public func leakyRelu<T: TensorFlowFloatingPoint>(
_ x: Tensor<T>,
alpha: Double = 0.2
) -> Tensor<T>``````
• ``` selu(_:) ```

Returns a tensor by applying the SELU activation function, namely `scale * alpha * (exp(x) - 1)` if `x < 0`, and `scale * x` otherwise.

Note

This is designed to be used together with the variance scaling layer initializers. Please refer to Self-Normalizing Neural Networks for more information.

#### Declaration

``````@differentiable
public func selu<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` pow(_:_:) ```

Returns the power of the first tensor to the second tensor.

#### Declaration

``````@differentiable
public func pow<T>(_ lhs: Tensor<T>, _ rhs: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` pow(_:_:) ```

Returns the power of the scalar to the tensor, broadcasting the scalar.

#### Declaration

``````@differentiable
public func pow<T>(_ lhs: T, _ rhs: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` pow(_:_:) ```

Returns the power of the tensor to the scalar, broadcasting the scalar.

#### Declaration

``````@differentiable
public func pow<T>(_ lhs: Tensor<T>, _ rhs: T) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` pow(_:_:) ```

Returns the power of the tensor to the scalar, broadcasting the scalar.

#### Declaration

``````@differentiable
public func pow<T>(_ x: Tensor<T>, _ n: Int) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` root(_:_:) ```

Returns the element-wise `n`th root of the tensor.

#### Declaration

``````@differentiable
public func root<T>(_ x: Tensor<T>, _ n: Int) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar``````
• ``` squaredDifference(_:_:) ```

Returns the squared difference between `x` and `y`.

#### Declaration

``````@differentiable
public func squaredDifference<T>(_ x: Tensor<T>, _ y: Tensor<T>) -> Tensor<T> where T : Numeric, T : TensorFlowScalar``````

#### Return Value

`(x - y) ^ 2`.

• ``` max(_:_:) ```

Returns the element-wise maximum of two tensors.

Note

`max` supports broadcasting.

#### Declaration

``````@differentiable
public func max<T>(_ lhs: Tensor<T>, _ rhs: Tensor<T>) -> Tensor<T> where T : Comparable, T : Numeric, T : TensorFlowScalar``````
• ``` max(_:_:) ```

Returns the element-wise maximum of the scalar and the tensor, broadcasting the scalar.

#### Declaration

``````@differentiable
public func max<T>(_ lhs: T, _ rhs: Tensor<T>) -> Tensor<T> where T : Comparable, T : Numeric, T : TensorFlowScalar``````
• ``` max(_:_:) ```

Returns the element-wise maximum of the scalar and the tensor, broadcasting the scalar.

#### Declaration

``````@differentiable
public func max<T>(_ lhs: Tensor<T>, _ rhs: T) -> Tensor<T> where T : Comparable, T : Numeric, T : TensorFlowScalar``````
• ``` min(_:_:) ```

Returns the element-wise minimum of two tensors.

Note

`min` supports broadcasting.

#### Declaration

``````@differentiable
public func min<T>(_ lhs: Tensor<T>, _ rhs: Tensor<T>) -> Tensor<T> where T : Comparable, T : Numeric, T : TensorFlowScalar``````
• ``` min(_:_:) ```

Returns the element-wise minimum of the scalar and the tensor, broadcasting the scalar.

#### Declaration

``````@differentiable
public func min<T>(_ lhs: T, _ rhs: Tensor<T>) -> Tensor<T> where T : Comparable, T : Numeric, T : TensorFlowScalar``````
• ``` min(_:_:) ```

Returns the element-wise minimum of the scalar and the tensor, broadcasting the scalar.

#### Declaration

``````@differentiable
public func min<T>(_ lhs: Tensor<T>, _ rhs: T) -> Tensor<T> where T : Comparable, T : Numeric, T : TensorFlowScalar``````
• ``` cosineSimilarity(_:_:) ```

Returns the cosine similarity between `x` and `y`.

#### Declaration

``````@differentiable
public func cosineSimilarity<Scalar: TensorFlowFloatingPoint>(
_ x: Tensor<Scalar>,
_ y: Tensor<Scalar>
) -> Tensor<Scalar>``````
• ``` cosineDistance(_:_:) ```

Returns the cosine distance between `x` and `y`. Cosine distance is defined as `1 - cosineSimilarity(x, y)`.

#### Declaration

``````@differentiable
public func cosineDistance<Scalar: TensorFlowFloatingPoint>(
_ x: Tensor<Scalar>,
_ y: Tensor<Scalar>
) -> Tensor<Scalar>``````
• ``` matmul(_:transposed:_:transposed:) ```

Performs matrix multiplication of two tensors and produces the result.

#### Declaration

``````@differentiable
public func matmul<Scalar: Numeric>(
_ lhs: Tensor<Scalar>,
transposed transposeLhs: Bool = false,
_ rhs: Tensor<Scalar>,
transposed transposeRhs: Bool = false
) -> Tensor<Scalar>``````
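
For example, a minimal sketch:

``````import TensorFlow

let a = Tensor<Float>([[1, 2], [3, 4]])
let b = Tensor<Float>([[5, 6], [7, 8]])
let c = matmul(a, b)                    // [[19, 22], [43, 50]]
let d = matmul(a, transposed: true, b)  // transpose(a) * b``````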
• ``` conv2D(_:filter:strides:padding:dilations:) ```

Returns a 2-D convolution with the specified input, filter, strides, and padding.

Precondition

`input` must have rank `4`.

Precondition

`filter` must have rank `4`.

#### Declaration

``````@differentiable
public func conv2D<Scalar: TensorFlowFloatingPoint>(
_ input: Tensor<Scalar>,
filter: Tensor<Scalar>,
strides: (Int, Int, Int, Int) = (1, 1, 1, 1),
padding: Padding = .valid,
dilations: (Int, Int, Int, Int) = (1, 1, 1, 1)
) -> Tensor<Scalar>``````

#### Parameters

• ``` input ``` The input.
• ``` filter ``` The convolution filter.
• ``` strides ``` The strides of the sliding filter for each dimension of the input.
• ``` padding ``` The padding for the operation.
• ``` dilations ``` The dilation factor for each dimension of the input.
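
A usage sketch assuming NHWC layout, with shapes chosen purely for illustration:

``````import TensorFlow

// Batch of 1, a 5x5 image with 1 channel; a 3x3 filter mapping 1 channel to 1.
let image = Tensor<Float>(randomNormal: [1, 5, 5, 1])
let filter = Tensor<Float>(randomNormal: [3, 3, 1, 1])
let output = conv2D(image, filter: filter, strides: (1, 1, 1, 1), padding: .same)
print(output.shape)  // [1, 5, 5, 1] with .same padding and unit strides``````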
• ``` conv3D(_:filter:strides:padding:) ```

Returns a 3-D convolution with the specified input, filter, strides, and padding.

Precondition

`input` must have rank `5`.

Precondition

`filter` must have rank `5`.

#### Declaration

``````@differentiable
public func conv3D<Scalar: TensorFlowFloatingPoint>(
_ input: Tensor<Scalar>,
filter: Tensor<Scalar>,
strides: (Int, Int, Int, Int, Int) = (1, 1, 1, 1, 1),
padding: Padding = .valid
) -> Tensor<Scalar>``````

#### Parameters

• ``` input ``` The input.
• ``` filter ``` The convolution filter.
• ``` strides ``` The strides of the sliding filter for each dimension of the input.
• ``` padding ``` The padding for the operation.
• ``` depthwiseConv2D(_:filter:strides:padding:) ```

Returns a 2-D depthwise convolution with the specified input, filter, strides, and padding.

Precondition

`input` must have rank 4.

Precondition

`filter` must have rank 4.

#### Declaration

``````@differentiable
public func depthwiseConv2D<Scalar: TensorFlowFloatingPoint>(
_ input: Tensor<Scalar>,
filter: Tensor<Scalar>,
strides: (Int, Int, Int, Int),
padding: Padding
) -> Tensor<Scalar>``````

#### Parameters

• ``` input ``` The input.
• ``` filter ``` The depthwise convolution filter.
• ``` strides ``` The strides of the sliding filter for each dimension of the input.
• ``` padding ``` The padding for the operation.
• ``` maxPool2D(_:filterSize:strides:padding:) ```

Returns a 2-D max pooling, with the specified filter sizes, strides, and padding.

#### Declaration

``````@differentiable
public func maxPool2D<Scalar: TensorFlowFloatingPoint>(
_ input: Tensor<Scalar>,
filterSize: (Int, Int, Int, Int),
strides: (Int, Int, Int, Int),
padding: Padding
) -> Tensor<Scalar>``````

#### Parameters

• ``` input ``` The input.
• ``` filterSize ``` The dimensions of the pooling kernel.
• ``` strides ``` The strides of the sliding filter for each dimension of the input.
• ``` padding ``` The padding for the operation.
• ``` maxPool3D(_:filterSize:strides:padding:) ```

Returns a 3-D max pooling, with the specified filter sizes, strides, and padding.

#### Declaration

``````@differentiable
public func maxPool3D<Scalar: TensorFlowFloatingPoint>(
_ input: Tensor<Scalar>,
filterSize: (Int, Int, Int, Int, Int),
strides: (Int, Int, Int, Int, Int),
padding: Padding
) -> Tensor<Scalar>``````

#### Parameters

• ``` input ``` The input.
• ``` filterSize ``` The dimensions of the pooling kernel.
• ``` strides ``` The strides of the sliding filter for each dimension of the input.
• ``` padding ``` The padding for the operation.
• ``` avgPool2D(_:filterSize:strides:padding:) ```

Returns a 2-D average pooling, with the specified filter sizes, strides, and padding.

#### Declaration

``````@differentiable
public func avgPool2D<Scalar: TensorFlowFloatingPoint>(
_ input: Tensor<Scalar>,
filterSize: (Int, Int, Int, Int),
strides: (Int, Int, Int, Int),
padding: Padding
) -> Tensor<Scalar>``````

#### Parameters

• ``` input ``` The input.
• ``` filterSize ``` The dimensions of the pooling kernel.
• ``` strides ``` The strides of the sliding filter for each dimension of the input.
• ``` padding ``` The padding for the operation.
• ``` avgPool3D(_:filterSize:strides:padding:) ```

Returns a 3-D average pooling, with the specified filter sizes, strides, and padding.

#### Declaration

``````@differentiable
public func avgPool3D<Scalar: TensorFlowFloatingPoint>(
_ input: Tensor<Scalar>,
filterSize: (Int, Int, Int, Int, Int),
strides: (Int, Int, Int, Int, Int),
padding: Padding
) -> Tensor<Scalar>``````

#### Parameters

• ``` input ``` The input.
• ``` filterSize ``` The dimensions of the pooling kernel.
• ``` strides ``` The strides of the sliding filter for each dimension of the input.
• ``` padding ``` The padding for the operation.
• ``` randomSeedForTensorFlow(using:) ```

Returns a random seed for TensorFlow, derived from the given seed if one is provided.

#### Declaration

``public func randomSeedForTensorFlow(using seed: TensorFlowSeed? = nil) -> TensorFlowSeed``