Functions

The following functions are available globally.

  • Returns the L1 loss between predictions and expectations.

    Declaration

    @differentiable
    public func l1Loss<Scalar: TensorFlowFloatingPoint>(
        predicted: Tensor<Scalar>,
        expected: Tensor<Scalar>
    ) -> Tensor<Scalar>

    Parameters

    predicted

    Predicted outputs from a neural network.

    expected

    Expected values, i.e. targets, that correspond to the correct output.
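
    As a quick illustration, the L1 loss reduces the element-wise absolute differences to a scalar. A minimal sketch with hypothetical values, assuming the TensorFlow module from this library is imported:

```swift
import TensorFlow

// Hypothetical predictions and targets for illustration.
let predicted = Tensor<Float>([0.9, 0.2, 0.4])
let expected = Tensor<Float>([1.0, 0.0, 0.5])

// Element-wise absolute differences are 0.1, 0.2, and 0.1; l1Loss
// reduces them to a single scalar tensor.
let loss = l1Loss(predicted: predicted, expected: expected)
print(loss)
```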

  • Returns the L2 loss between predictions and expectations.

    Declaration

    @differentiable
    public func l2Loss<Scalar: TensorFlowFloatingPoint>(
        predicted: Tensor<Scalar>,
        expected: Tensor<Scalar>
    ) -> Tensor<Scalar>

    Parameters

    predicted

    Predicted outputs from a neural network.

    expected

    Expected values, i.e. targets, that correspond to the correct output.

  • Returns the hinge loss between predictions and expectations.

    Declaration

    @differentiable
    public func hingeLoss<Scalar: TensorFlowFloatingPoint>(
        predicted: Tensor<Scalar>,
        expected: Tensor<Scalar>
    ) -> Tensor<Scalar>

    Parameters

    predicted

    Predicted outputs from a neural network.

    expected

    Expected values, i.e. targets, that correspond to the correct output.

  • Returns the squared hinge loss between predictions and expectations.

    Declaration

    @differentiable
    public func squaredHingeLoss<Scalar: TensorFlowFloatingPoint>(
        predicted: Tensor<Scalar>,
        expected: Tensor<Scalar>
    ) -> Tensor<Scalar>

    Parameters

    predicted

    Predicted outputs from a neural network.

    expected

    Expected values, i.e. targets, that correspond to the correct output.

  • Returns the categorical hinge loss between predictions and expectations.

    Declaration

    @differentiable
    public func categoricalHingeLoss<Scalar: TensorFlowFloatingPoint>(
        predicted: Tensor<Scalar>,
        expected: Tensor<Scalar>
    ) -> Tensor<Scalar>

    Parameters

    predicted

    Predicted outputs from a neural network.

    expected

    Expected values, i.e. targets, that correspond to the correct output.

  • Returns the logarithm of the hyperbolic cosine of the error between predictions and expectations.

    Declaration

    @differentiable
    public func logCoshLoss<Scalar: TensorFlowFloatingPoint>(
        predicted: Tensor<Scalar>,
        expected: Tensor<Scalar>
    ) -> Tensor<Scalar>

    Parameters

    predicted

    Predicted outputs from a neural network.

    expected

    Expected values, i.e. targets, that correspond to the correct output.

  • Returns the Poisson loss between predictions and expectations.

    Declaration

    @differentiable
    public func poissonLoss<Scalar: TensorFlowFloatingPoint>(
        predicted: Tensor<Scalar>,
        expected: Tensor<Scalar>
    ) -> Tensor<Scalar>

    Parameters

    predicted

    Predicted outputs from a neural network.

    expected

    Expected values, i.e. targets, that correspond to the correct output.

  • Returns the Kullback-Leibler divergence (KL divergence) between expectations and predictions. Given two distributions p and q, the KL divergence computes p * log(p / q).

    Declaration

    @differentiable
    public func kullbackLeiblerDivergence<Scalar: TensorFlowFloatingPoint>(
        predicted: Tensor<Scalar>,
        expected: Tensor<Scalar>
    ) -> Tensor<Scalar>

    Parameters

    predicted

    Predicted outputs from a neural network.

    expected

    Expected values, i.e. targets, that correspond to the correct output.

  • Returns the softmax cross entropy (categorical cross entropy) between logits and probabilities.

    Declaration

    @differentiable
    public func softmaxCrossEntropy<Scalar: TensorFlowFloatingPoint>(
        logits: Tensor<Scalar>,
        probabilities: Tensor<Scalar>
    ) -> Tensor<Scalar>

    Parameters

    logits

    Unscaled log probabilities from a neural network.

    probabilities

    Probability values that correspond to the correct output. Each row must be a valid probability distribution.
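
    This overload takes full probability distributions as targets rather than class indices. A hedged sketch with hypothetical values:

```swift
import TensorFlow

// One batch element with three classes. The target row places all of its
// probability mass on the third class (a one-hot row, though any valid
// probability distribution is accepted).
let logits = Tensor<Float>([[1.0, 2.0, 3.0]])
let probabilities = Tensor<Float>([[0.0, 0.0, 1.0]])

let loss = softmaxCrossEntropy(logits: logits, probabilities: probabilities)
print(loss)
```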

  • Returns the sigmoid cross entropy (binary cross entropy) between logits and labels.

    Declaration

    @differentiable
    public func sigmoidCrossEntropy<Scalar: TensorFlowFloatingPoint>(
        logits: Tensor<Scalar>,
        labels: Tensor<Scalar>
    ) -> Tensor<Scalar>

    Parameters

    logits

    The unscaled output of a neural network.

    labels

    Values (typically 0 or 1) that correspond to the correct output.
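
    A minimal sketch, with hypothetical values, of binary cross entropy over two independent predictions:

```swift
import TensorFlow

// Two independent binary predictions as unscaled logits, with their
// corresponding 0/1 labels.
let logits = Tensor<Float>([1.2, -0.5])
let labels = Tensor<Float>([1, 0])

let loss = sigmoidCrossEntropy(logits: logits, labels: labels)
print(loss)
```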

  • Returns a tensor with the same shape and scalars as the specified tensor.

    Declaration

    @differentiable
    public func identity<Scalar>(_ x: Tensor<Scalar>) -> Tensor<Scalar> where Scalar : TensorFlowScalar
  • Calls the given closure within the given context, which replaces the current context before the closure is called and is restored after it returns.

    Declaration

    public func withContext<R>(_ context: Context, _ body: () throws -> R) rethrows -> R

    Parameters

    context

    A context that will be set before the closure gets called and restored after the closure returns.

    body

    A nullary closure. If the closure has a return value, that value is also used as the return value of the withContext(_:_:) function.

    Return Value

    The return value, if any, of the body closure.

  • Calls the given closure within a context that has everything identical to the current context except for the given learning phase.

    Declaration

    public func withLearningPhase<R>(
        _ learningPhase: LearningPhase,
        _ body: () throws -> R
    ) rethrows -> R

    Parameters

    learningPhase

    A learning phase that will be set before the closure gets called and restored after the closure returns.

    body

    A nullary closure. If the closure has a return value, that value is also used as the return value of the withLearningPhase(_:_:) function.

    Return Value

    The return value, if any, of the body closure.
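
    The phase set by this function is visible through Context.local while the closure runs, so phase-dependent layers (such as dropout) behave accordingly. A minimal sketch:

```swift
import TensorFlow

// Run code under the inference phase; the surrounding context is
// restored when the closure returns.
withLearningPhase(.inference) {
    precondition(Context.local.learningPhase == .inference)
    // Evaluation code goes here.
}
```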

  • Calls the given closure within a context that has everything identical to the current context except for the given random seed.

    Declaration

    public func withRandomSeedForTensorFlow<R>(
        _ randomSeed: TensorFlowSeed,
        _ body: () throws -> R
    ) rethrows -> R

    Parameters

    randomSeed

    A random seed that will be set before the closure gets called and restored after the closure returns.

    body

    A nullary closure. If the closure has a return value, that value is also used as the return value of the withRandomSeedForTensorFlow(_:_:) function.

    Return Value

    The return value, if any, of the body closure.

  • Calls the given closure within a context that has everything identical to the current context except for the given random number generator.

    Declaration

    public func withRandomNumberGeneratorForTensorFlow<G: RandomNumberGenerator, R>(
        _ randomNumberGenerator: inout G,
        _ body: () throws -> R
    ) rethrows -> R

    Parameters

    randomNumberGenerator

    A random number generator that will be set before the closure gets called and restored after the closure returns.

    body

    A nullary closure. If the closure has a return value, that value is also used as the return value of the withRandomNumberGeneratorForTensorFlow(_:_:) function.

    Return Value

    The return value, if any, of the body closure.

  • Declaration

    public func zip<T: TensorGroup, U: TensorGroup>(
        _ dataset1: Dataset<T>, _ dataset2: Dataset<U>
    ) -> Dataset<Zip2TensorGroup<T, U>>
  • Declaration

    public func valueWithGradient<T, R>(
        at x: T,
        in f: @differentiable (T) -> Tensor<R>
    ) -> (value: Tensor<R>, gradient: T.TangentVector)
    where T: Differentiable, R: TensorFlowFloatingPoint
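
    A brief sketch of the single-argument form, evaluating a function and its gradient in one call:

```swift
import TensorFlow

// Value and gradient of f(x) = x^3 at x = 2: the value is 8 and the
// gradient is 3 * x^2 = 12.
let (value, grad) = valueWithGradient(at: Tensor<Float>(2.0)) { x in
    x * x * x
}
print(value, grad)
```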
  • Declaration

    public func valueWithGradient<T, U, R>(
        at x: T,
        _ y: U,
        in f: @differentiable (T, U) -> Tensor<R>
    ) -> (value: Tensor<R>, gradient: (T.TangentVector, U.TangentVector))
        where T: Differentiable, U: Differentiable, R: TensorFlowFloatingPoint
  • Declaration

    public func valueWithGradient<T, U, V, R>(
        at x: T,
        _ y: U,
        _ z: V,
        in f: @differentiable (T, U, V) -> Tensor<R>
    ) -> (value: Tensor<R>, gradient: (T.TangentVector, U.TangentVector, V.TangentVector))
      where T: Differentiable, U: Differentiable, V: Differentiable, R: TensorFlowFloatingPoint
  • Declaration

    public func valueWithGradient<T, R>(
        of f: @escaping @differentiable (T) -> Tensor<R>
    ) -> (T) -> (value: Tensor<R>, gradient: T.TangentVector)
        where T: Differentiable, R: TensorFlowFloatingPoint
  • Declaration

    public func valueWithGradient<T, U, R>(
        of f: @escaping @differentiable (T, U) -> Tensor<R>
    ) -> (T, U) -> (value: Tensor<R>, gradient: (T.TangentVector, U.TangentVector))
      where T: Differentiable, U: Differentiable, R: TensorFlowFloatingPoint
  • Declaration

    public func valueWithGradient<T, U, V, R>(
        of f: @escaping @differentiable (T, U, V) -> Tensor<R>
    ) -> (T, U, V) -> (
        value: Tensor<R>,
        gradient: (T.TangentVector, U.TangentVector, V.TangentVector))
        where T: Differentiable, U: Differentiable, V: Differentiable, R: TensorFlowFloatingPoint
  • Declaration

    public func gradient<T, R>(
        at x: T,
        in f: @differentiable (T) -> Tensor<R>
    ) -> T.TangentVector where T: Differentiable, R: TensorFlowFloatingPoint
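
    A minimal sketch of the single-argument form:

```swift
import TensorFlow

// d/dx (x * x) = 2x, so the gradient at x = 3 is 6.
let x = Tensor<Float>(3.0)
let dydx = gradient(at: x) { x in x * x }
print(dydx)
```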
  • Declaration

    public func gradient<T, U, R>(
        at x: T,
        _ y: U,
        in f: @differentiable (T, U) -> Tensor<R>
    ) -> (T.TangentVector, U.TangentVector)
        where T: Differentiable, U: Differentiable, R: TensorFlowFloatingPoint
  • Declaration

    public func gradient<T, U, V, R>(
        at x: T,
        _ y: U,
        _ z: V,
        in f: @differentiable (T, U, V) -> Tensor<R>
    ) -> (T.TangentVector, U.TangentVector, V.TangentVector)
        where T: Differentiable, U: Differentiable, V: Differentiable, R: TensorFlowFloatingPoint
  • Declaration

    public func gradient<T, R>(
        of f: @escaping @differentiable (T) -> Tensor<R>
    ) -> (T) -> T.TangentVector where T: Differentiable, R: TensorFlowFloatingPoint
  • Declaration

    public func gradient<T, U, R>(
        of f: @escaping @differentiable (T, U) -> Tensor<R>
    ) -> (T, U) -> (T.TangentVector, U.TangentVector)
        where T: Differentiable, U: Differentiable, R: TensorFlowFloatingPoint
  • Declaration

    public func gradient<T, U, V, R>(
        of f: @escaping @differentiable (T, U, V) -> Tensor<R>
    ) -> (T, U, V) -> (T.TangentVector, U.TangentVector, V.TangentVector)
        where T: Differentiable, U: Differentiable, V: Differentiable, R: TensorFlowFloatingPoint
  • Returns x like an identity function. When used in a context where x is being differentiated with respect to, this function will not produce any derivative at x.

    Declaration

    @_semantics("autodiff.nonvarying")
    public func withoutDerivative<T>(at x: T) -> T
  • Applies the given closure body to x. When used in a context where x is being differentiated with respect to, this function will not produce any derivative at x.

    Declaration

    @_semantics("autodiff.nonvarying")
    public func withoutDerivative<T, R>(at x: T, in body: (T) -> R) -> R
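
    These functions act like a gradient stop. A sketch of the effect, with hypothetical values:

```swift
import TensorFlow

// The second factor is treated as a constant for differentiation, so the
// gradient of x * withoutDerivative(at: x) at x = 3 is the constant's
// value, 3, rather than the 2x = 6 one would get for x * x.
let g = gradient(at: Tensor<Float>(3.0)) { x in
    x * withoutDerivative(at: x)
}
print(g)
```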
  • Creates a differentiable function from a vector-Jacobian product (VJP) function.

    Declaration

    public func differentiableFunction<T : Differentiable, R : Differentiable>(
      from vjp: @escaping (T)
               -> (value: R, pullback: (R.TangentVector) -> T.TangentVector)
    ) -> @differentiable (T) -> R
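
    A sketch of defining a differentiable square function from an explicit VJP; the pullback scales the incoming cotangent by 2x (tensor-scalar multiplication is assumed, as defined in this library):

```swift
import TensorFlow

// value: the function's result; pullback: the vector-Jacobian product.
let square = differentiableFunction { (x: Tensor<Float>) ->
    (value: Tensor<Float>, pullback: (Tensor<Float>) -> Tensor<Float>) in
    (value: x * x, pullback: { v in v * 2 * x })
}

// The custom pullback yields the expected gradient 2 * 3 = 6.
let g = gradient(at: Tensor<Float>(3.0)) { square($0) }
print(g)
```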
  • Creates a differentiable function from a vector-Jacobian product (VJP) function.

    Declaration

    public func differentiableFunction<T, U, R>(
      from vjp: @escaping (T, U)
               -> (value: R, pullback: (R.TangentVector)
                 -> (T.TangentVector, U.TangentVector))
    ) -> @differentiable (T, U) -> R
  • Makes a function be recomputed in its pullback, known as “checkpointing” in traditional automatic differentiation.

    Declaration

    public func withRecomputationInPullbacks<T, U>(
      _ body: @escaping @differentiable (T) -> U
    ) -> @differentiable (T) -> U where T : Differentiable, U : Differentiable
  • Declaration

    public func transpose<T, R>(
      of body: @escaping @differentiable(linear) (T) -> R
    ) -> @differentiable(linear) (R) -> T
  • Declaration

    public func valueWithDifferential<T, R>(
      at x: T, in f: @differentiable (T) -> R
    ) -> (value: R, differential: (T.TangentVector) -> R.TangentVector)
  • Declaration

    public func valueWithDifferential<T, U, R>(
      at x: T, _ y: U, in f: @differentiable (T, U) -> R
    ) -> (value: R,
          differential: (T.TangentVector, U.TangentVector) -> R.TangentVector)
  • Declaration

    public func valueWithDifferential<T, U, V, R>(
      at x: T, _ y: U, _ z: V, in f: @differentiable (T, U, V) -> R
    ) -> (value: R,
          differential: (T.TangentVector, U.TangentVector, V.TangentVector)
            -> (R.TangentVector))
  • Declaration

    public func valueWithPullback<T, R>(
      at x: T, in f: @differentiable (T) -> R
    ) -> (value: R, pullback: (R.TangentVector) -> T.TangentVector)
  • Declaration

    public func valueWithPullback<T, U, R>(
      at x: T, _ y: U, in f: @differentiable (T, U) -> R
    ) -> (value: R,
          pullback: (R.TangentVector) -> (T.TangentVector, U.TangentVector))
  • Declaration

    public func valueWithPullback<T, U, V, R>(
      at x: T, _ y: U, _ z: V, in f: @differentiable (T, U, V) -> R
    ) -> (value: R,
          pullback: (R.TangentVector)
            -> (T.TangentVector, U.TangentVector, V.TangentVector))
  • Declaration

    public func differential<T, R>(
      at x: T, in f: @differentiable (T) -> R
    ) -> (T.TangentVector) -> R.TangentVector
  • Declaration

    public func differential<T, U, R>(
      at x: T, _ y: U, in f: @differentiable (T, U) -> R
    ) -> (T.TangentVector, U.TangentVector) -> R.TangentVector
  • Declaration

    public func differential<T, U, V, R>(
      at x: T, _ y: U, _ z: V, in f: @differentiable (T, U, V) -> R
    ) -> (T.TangentVector, U.TangentVector, V.TangentVector) -> (R.TangentVector)
  • Declaration

    public func pullback<T, R>(
      at x: T, in f: @differentiable (T) -> R
    ) -> (R.TangentVector) -> T.TangentVector
  • Declaration

    public func pullback<T, U, R>(
      at x: T, _ y: U, in f: @differentiable (T, U) -> R
    ) -> (R.TangentVector) -> (T.TangentVector, U.TangentVector)
  • Declaration

    public func pullback<T, U, V, R>(
      at x: T, _ y: U, _ z: V, in f: @differentiable (T, U, V) -> R
    ) -> (R.TangentVector)
        -> (T.TangentVector, U.TangentVector, V.TangentVector)
  • Declaration

    public func derivative<T: FloatingPoint, R>(
      at x: T, in f: @differentiable (T) -> R
    ) ->  R.TangentVector
      where T.TangentVector == T
  • Declaration

    public func derivative<T: FloatingPoint, U: FloatingPoint, R>(
      at x: T, _ y: U, in f: @differentiable (T, U) -> R
    ) -> R.TangentVector
      where T.TangentVector == T,
            U.TangentVector == U
  • Declaration

    public func derivative<T: FloatingPoint, U: FloatingPoint, V: FloatingPoint, R>(
      at x: T, _ y: U, _ z: V, in f: @differentiable (T, U, V) -> R
    ) -> R.TangentVector
      where T.TangentVector == T,
            U.TangentVector == U,
            V.TangentVector == V
  • Declaration

    public func gradient<T, R>(
      at x: T, in f: @differentiable (T) -> R
    ) -> T.TangentVector
      where R : FloatingPoint, R.TangentVector == R
  • Declaration

    public func gradient<T, U, R>(
      at x: T, _ y: U, in f: @differentiable (T, U) -> R
    ) -> (T.TangentVector, U.TangentVector)
      where R : FloatingPoint, R.TangentVector == R
  • Declaration

    public func gradient<T, U, V, R>(
      at x: T, _ y: U, _ z: V, in f: @differentiable (T, U, V) -> R
    ) -> (T.TangentVector, U.TangentVector, V.TangentVector)
      where R : FloatingPoint, R.TangentVector == R
  • Declaration

    public func valueWithDerivative<T: FloatingPoint, R>(
      at x: T, in f: @escaping @differentiable (T) -> R
    ) -> (value: R, derivative: R.TangentVector)
      where T.TangentVector == T
  • Declaration

    public func valueWithDerivative<T: FloatingPoint, U: FloatingPoint, R>(
      at x: T, _ y: U, in f: @escaping @differentiable (T, U) -> R
    ) -> (value: R, derivative: R.TangentVector)
      where T.TangentVector == T,
            U.TangentVector == U
  • Declaration

    public func valueWithDerivative<
      T: FloatingPoint, U: FloatingPoint, V: FloatingPoint, R>(
      at x: T, _ y: U, _ z: V, in f: @escaping @differentiable (T, U, V) -> R
    ) -> (value: R, derivative: R.TangentVector)
      where T.TangentVector == T,
            U.TangentVector == U,
            V.TangentVector == V
  • Declaration

    public func valueWithGradient<T, R>(
      at x: T, in f: @differentiable (T) -> R
    ) -> (value: R, gradient: T.TangentVector)
      where R : FloatingPoint, R.TangentVector == R
  • Declaration

    public func valueWithGradient<T, U, R>(
      at x: T, _ y: U, in f: @differentiable (T, U) -> R
    ) -> (value: R, gradient: (T.TangentVector, U.TangentVector))
      where R : FloatingPoint, R.TangentVector == R
  • Declaration

    public func valueWithGradient<T, U, V, R>(
      at x: T, _ y: U, _ z: V, in f: @differentiable (T, U, V) -> R
    ) -> (value: R,
          gradient: (T.TangentVector, U.TangentVector, V.TangentVector))
      where R : FloatingPoint, R.TangentVector == R
  • Declaration

    public func derivative<T: FloatingPoint, R>(
      of f: @escaping @differentiable (T) -> R
    ) -> (T) -> R.TangentVector
      where T.TangentVector == T
  • Declaration

    public func derivative<T: FloatingPoint, U: FloatingPoint, R>(
      of f: @escaping @differentiable (T, U) -> R
    ) -> (T, U) -> R.TangentVector
      where T.TangentVector == T,
            U.TangentVector == U
  • Declaration

    public func derivative<T: FloatingPoint, U: FloatingPoint, V: FloatingPoint, R>(
      of f: @escaping @differentiable (T, U, V) -> R
    ) -> (T, U, V) -> R.TangentVector
      where T.TangentVector == T,
            U.TangentVector == U,
            V.TangentVector == V
  • Declaration

    public func gradient<T, R>(
      of f: @escaping @differentiable (T) -> R
    ) -> (T) -> T.TangentVector
      where R : FloatingPoint, R.TangentVector == R
  • Declaration

    public func gradient<T, U, R>(
      of f: @escaping @differentiable (T, U) -> R
    ) -> (T, U) -> (T.TangentVector, U.TangentVector)
      where R : FloatingPoint, R.TangentVector == R
  • Declaration

    public func gradient<T, U, V, R>(
      of f: @escaping @differentiable (T, U, V) -> R
    ) -> (T, U, V) -> (T.TangentVector, U.TangentVector, V.TangentVector)
      where R : FloatingPoint, R.TangentVector == R
  • Declaration

    public func valueWithDerivative<T: FloatingPoint, R>(
      of f: @escaping @differentiable (T) -> R
    ) -> (T) -> (value: R, derivative: R.TangentVector)
      where T.TangentVector == T
  • Declaration

    public func valueWithDerivative<T: FloatingPoint, U: FloatingPoint, R>(
      of f: @escaping @differentiable (T, U) -> R
    ) -> (T, U) -> (value: R, derivative: R.TangentVector)
      where T.TangentVector == T,
            U.TangentVector == U
  • Declaration

    public func valueWithDerivative<
      T: FloatingPoint, U: FloatingPoint, V: FloatingPoint, R>(
      of f: @escaping @differentiable (T, U, V) -> R
    ) -> (T, U, V) -> (value: R, derivative: R.TangentVector)
      where T.TangentVector == T,
            U.TangentVector == U,
            V.TangentVector == V
  • Declaration

    public func valueWithGradient<T, R>(
      of f: @escaping @differentiable (T) -> R
    ) -> (T) -> (value: R, gradient: T.TangentVector)
      where R : FloatingPoint, R.TangentVector == R
  • Declaration

    public func valueWithGradient<T, U, R>(
      of f: @escaping @differentiable (T, U) -> R
    ) -> (T, U) -> (value: R, gradient: (T.TangentVector, U.TangentVector))
      where R : FloatingPoint, R.TangentVector == R
  • Declaration

    public func valueWithGradient<T, U, V, R>(
      of f: @escaping @differentiable (T, U, V) -> R
    ) -> (T, U, V)
      -> (value: R,
          gradient: (T.TangentVector, U.TangentVector, V.TangentVector))
      where R : FloatingPoint, R.TangentVector == R
  • Executes a closure, making TensorFlow operations run on a specific kind of device.

    Declaration

    public func withDevice<R>(
        _ kind: DeviceKind,
        _ index: UInt = 0,
        perform body: () throws -> R
    ) rethrows -> R

    Parameters

    kind

    A kind of device to run TensorFlow operations on.

    index

    The index of the device to run the operations on. Defaults to 0.

    body

    A closure whose TensorFlow operations are to be executed on the specified kind of device.
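
    A sketch of pinning a computation to the first CPU device, assuming DeviceKind exposes a .cpu case as in this library:

```swift
import TensorFlow

// All TensorFlow operations inside the closure run on CPU 0 (the default
// index). Device placement is restored when the closure returns.
let result = withDevice(.cpu) { () -> Tensor<Float> in
    Tensor<Float>([1, 2, 3]) + 1
}
print(result)
```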

  • Executes a closure, making TensorFlow operations run on a device with a specific name.

    Some examples of device names:

    • “/device:CPU:0”: The CPU of your machine.
    • “/GPU:0”: Short-hand notation for the first GPU of your machine that is visible to TensorFlow.
    • “/job:localhost/replica:0/task:0/device:GPU:1”: Fully qualified name of the second GPU of your machine that is visible to TensorFlow.

    Declaration

    public func withDevice<R>(named name: String, perform body: () throws -> R) rethrows -> R

    Parameters

    name

    Device name.

    body

    A closure whose TensorFlow operations are to be executed on the device with the specified name.

  • Executes a closure, allowing TensorFlow to place TensorFlow operations on any device. This should restore the default placement behavior.

    Declaration

    public func withDefaultDevice<R>(perform body: () throws -> R) rethrows -> R

    Parameters

    body

    A closure whose TensorFlow operations are to be executed with default device placement.

  • Returns a function that creates a tensor by initializing all its values to zeros.

    Declaration

    public func zeros<Scalar>() -> ParameterInitializer<Scalar> where Scalar : TensorFlowFloatingPoint, Scalar : TensorFlowScalar
  • Returns a function that creates a tensor by initializing all its values to the provided value.

    Declaration

    public func constantInitializer<Scalar: TensorFlowFloatingPoint>(
        value: Scalar
    ) -> ParameterInitializer<Scalar>
  • Returns a function that creates a tensor by initializing it to the provided value. Note that broadcasting of the provided value is not supported.

    Declaration

    public func constantInitializer<Scalar: TensorFlowFloatingPoint>(
        value: Tensor<Scalar>
    ) -> ParameterInitializer<Scalar>
  • Returns a function that creates a tensor by performing Glorot (Xavier) uniform initialization for the specified shape. Scalar values are sampled randomly from a uniform distribution between -limit and limit, generated by the default random number generator, where limit is sqrt(6 / (fanIn + fanOut)) and fanIn/fanOut represent the number of input and output features multiplied by the receptive field size, if present.

    Declaration

    public func glorotUniform<Scalar: TensorFlowFloatingPoint>(
        seed: TensorFlowSeed = Context.local.randomSeed
    ) -> ParameterInitializer<Scalar>
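
    A sketch of using the returned initializer, assuming ParameterInitializer<Scalar> is a function from a tensor shape to a tensor, as it is used throughout these initializer factories:

```swift
import TensorFlow

// Obtain an initializer, then apply it to the desired parameter shape.
let initializer: ParameterInitializer<Float> = glorotUniform()
let weights = initializer([3, 4])
print(weights.shape)  // Expected to be [3, 4].
```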
  • Returns a function that creates a tensor by performing Glorot (Xavier) normal initialization for the specified shape. Scalar values are sampled randomly from a truncated normal distribution centered on 0 with standard deviation sqrt(2 / (fanIn + fanOut)), where fanIn/fanOut represent the number of input and output features multiplied by the receptive field size, if present.

    Declaration

    public func glorotNormal<Scalar: TensorFlowFloatingPoint>(
        seed: TensorFlowSeed = Context.local.randomSeed
    ) -> ParameterInitializer<Scalar>
  • Returns a function that creates a tensor by performing He (Kaiming) uniform initialization for the specified shape. Scalar values are sampled randomly from a uniform distribution between -limit and limit, generated by the default random number generator, where limit is sqrt(6 / fanIn) and fanIn represents the number of input features multiplied by the receptive field size, if present.

    Declaration

    public func heUniform<Scalar: TensorFlowFloatingPoint>(
        seed: TensorFlowSeed = Context.local.randomSeed
    ) -> ParameterInitializer<Scalar>
  • Returns a function that creates a tensor by performing He (Kaiming) normal initialization for the specified shape. Scalar values are sampled randomly from a truncated normal distribution centered on 0 with standard deviation sqrt(2 / fanIn), where fanIn represents the number of input features multiplied by the receptive field size, if present.

    Declaration

    public func heNormal<Scalar: TensorFlowFloatingPoint>(
        seed: TensorFlowSeed = Context.local.randomSeed
    ) -> ParameterInitializer<Scalar>
  • Returns a function that creates a tensor by performing LeCun uniform initialization for the specified shape. Scalar values are sampled randomly from a uniform distribution between -limit and limit, generated by the default random number generator, where limit is sqrt(3 / fanIn) and fanIn represents the number of input features multiplied by the receptive field size, if present.

    Declaration

    public func leCunUniform<Scalar: TensorFlowFloatingPoint>(
        seed: TensorFlowSeed = Context.local.randomSeed
    ) -> ParameterInitializer<Scalar>
  • Returns a function that creates a tensor by performing LeCun normal initialization for the specified shape. Scalar values are sampled randomly from a truncated normal distribution centered on 0 with standard deviation sqrt(1 / fanIn), where fanIn represents the number of input features multiplied by the receptive field size, if present.

    Declaration

    public func leCunNormal<Scalar: TensorFlowFloatingPoint>(
        seed: TensorFlowSeed = Context.local.randomSeed
    ) -> ParameterInitializer<Scalar>
  • Returns a function that creates a tensor by initializing all its values randomly from a truncated Normal distribution. The generated values follow a Normal distribution with mean mean and standard deviation standardDeviation, except that values whose magnitude is more than two standard deviations from the mean are dropped and resampled.

    Declaration

    public func truncatedNormalInitializer<Scalar: TensorFlowFloatingPoint>(
        mean: Tensor<Scalar> = Tensor<Scalar>(0),
        standardDeviation: Tensor<Scalar> = Tensor<Scalar>(1),
        seed: TensorFlowSeed = Context.local.randomSeed
    ) -> ParameterInitializer<Scalar>

    Parameters

    mean

    Mean of the Normal distribution.

    standardDeviation

    Standard deviation of the Normal distribution.

    Return Value

    A truncated normal parameter initializer function.

  • Declaration

    public func == (lhs: TFETensorHandle, rhs: TFETensorHandle) -> Bool
  • Returns an identity matrix or a batch of matrices.

    Declaration

    public func eye<Scalar: Numeric>(
        rowCount: Int,
        columnCount: Int? = nil,
        batchShape: [Int] = []
    ) -> Tensor<Scalar>

    Parameters

    rowCount

    The number of rows in each batch matrix.

    columnCount

    The number of columns in each batch matrix.

    batchShape

    The leading batch dimensions of the returned tensor.
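
    A minimal sketch of the unbatched and batched forms:

```swift
import TensorFlow

// A 3x3 identity matrix.
let identity3: Tensor<Float> = eye(rowCount: 3)

// A batch of four 2x2 identity matrices, shape [4, 2, 2].
let batched: Tensor<Float> = eye(rowCount: 2, batchShape: [4])

print(identity3.shape, batched.shape)
```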

  • Computes the trace of an optionally batched matrix. The trace is the sum along the main diagonal of each inner-most matrix.

    The input is a tensor with shape [..., M, N]. The output is a tensor with shape [...].

    Precondition

    matrix must be a tensor with shape [..., M, N].

    Declaration

    @differentiable
    public func trace<T>(_ matrix: Tensor<T>) -> Tensor<T> where T : Numeric, T : TensorFlowScalar

    Parameters

    matrix

    A tensor of shape [..., M, N].
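
    A minimal sketch with a single 2x2 matrix:

```swift
import TensorFlow

// The trace is the sum along the main diagonal: 1 + 4 = 5.
let matrix = Tensor<Float>([[1, 2], [3, 4]])
let t = trace(matrix)
print(t)
```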

  • Returns the Cholesky decomposition of one or more square matrices.

    The input is a tensor of shape [..., M, M] whose inner-most 2 dimensions form square matrices.

    The input must be symmetric and positive definite. Only the lower-triangular part of the input is used for this operation; the upper-triangular part is never read.

    The output is a tensor of the same shape as the input containing the Cholesky decompositions for all input submatrices [..., :, :].

    Declaration

    @differentiable
    public func cholesky<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar

    Parameters

    input

    A tensor of shape [..., M, M].

  • Computes the L1 loss between expected and predicted. loss = reduction(abs(expected - predicted))

    Declaration

    @differentiable
    public func l1Loss<Scalar: TensorFlowFloatingPoint>(
        predicted: Tensor<Scalar>,
        expected: Tensor<Scalar>,
        reduction: @differentiable (Tensor<Scalar>) -> Tensor<Scalar> = _sum
    ) -> Tensor<Scalar>

    Parameters

    predicted

    Predicted outputs from a neural network.

    expected

    Expected values, i.e. targets, that correspond to the correct output.

    reduction

    Reduction to apply on the computed element-wise loss values.

  • Computes the L2 loss between expected and predicted. loss = reduction(square(expected - predicted))

    Declaration

    @differentiable
    public func l2Loss<Scalar: TensorFlowFloatingPoint>(
        predicted: Tensor<Scalar>,
        expected: Tensor<Scalar>,
        reduction: @differentiable (Tensor<Scalar>) -> Tensor<Scalar> = _sum
    ) -> Tensor<Scalar>

    Parameters

    predicted

    Predicted outputs from a neural network.

    expected

    Expected values, i.e. targets, that correspond to the correct output.

    reduction

    Reduction to apply on the computed element-wise loss values.

  • Computes the mean absolute error between labels and predictions. loss = mean(abs(expected - predicted))

    Declaration

    @differentiable
    public func meanAbsoluteError<Scalar: TensorFlowFloatingPoint>(
        predicted: Tensor<Scalar>,
        expected: Tensor<Scalar>
    ) -> Tensor<Scalar>

    Parameters

    predicted

    Predicted outputs from a neural network.

    expected

    Expected values, i.e. targets, that correspond to the correct output.

  • Computes the mean squared error between labels and predictions. loss = mean(square(expected - predicted))

    Declaration

    @differentiable
    public func meanSquaredError<Scalar: TensorFlowFloatingPoint>(
        predicted: Tensor<Scalar>,
        expected: Tensor<Scalar>
    ) -> Tensor<Scalar>

    Parameters

    predicted

    Predicted outputs from a neural network.

    expected

    Expected values, i.e. targets, that correspond to the correct output.

  • Computes the mean squared logarithmic error between predicted and expected. loss = mean(square(log(expected) - log(predicted)))

    Note

    Negative tensor entries will be clamped at 0 to avoid undefined logarithmic behavior, as log(_:) is undefined for negative reals.

    Declaration

    @differentiable
    public func meanSquaredLogarithmicError<Scalar: TensorFlowFloatingPoint>(
        predicted: Tensor<Scalar>,
        expected: Tensor<Scalar>
    ) -> Tensor<Scalar>

    Parameters

    predicted

    Predicted outputs from a neural network.

    expected

    Expected values, i.e. targets, that correspond to the correct output.

  • Computes the mean absolute percentage error between predicted and expected. loss = 100 * mean(abs((expected - predicted) / abs(expected)))

    Declaration

    @differentiable
    public func meanAbsolutePercentageError<Scalar: TensorFlowFloatingPoint>(
        predicted: Tensor<Scalar>,
        expected: Tensor<Scalar>
    ) -> Tensor<Scalar>

    Parameters

    predicted

    Predicted outputs from a neural network.

    expected

    Expected values, i.e. targets, that correspond to the correct output.
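    A plain-Swift sketch of the formula above on flat [Double] arrays (illustrative only; the name is invented for this example):

    ```swift
    import Foundation

    /// Sketch of 100 * mean(abs((expected - predicted) / abs(expected))).
    func meanAbsolutePercentageErrorSketch(predicted: [Double], expected: [Double]) -> Double {
        precondition(predicted.count == expected.count, "shapes must match")
        let relativeErrors = zip(expected, predicted).map { abs(($0 - $1) / abs($0)) }
        return 100 * relativeErrors.reduce(0, +) / Double(relativeErrors.count)
    }
    ```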

  • Computes the hinge loss between predicted and expected. loss = reduction(max(0, 1 - predicted * expected)). The expected values are expected to be -1 or 1.

    Declaration

    @differentiable
    public func hingeLoss<Scalar: TensorFlowFloatingPoint>(
        predicted: Tensor<Scalar>,
        expected: Tensor<Scalar>,
        reduction: @differentiable (Tensor<Scalar>) -> Tensor<Scalar> = _mean
    ) -> Tensor<Scalar>

    Parameters

    predicted

    Predicted outputs from a neural network.

    expected

    Expected values, i.e. targets, that correspond to the correct output.

    reduction

    Reduction to apply on the computed element-wise loss values.
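    A plain-Swift sketch of the element-wise math with a mean reduction (illustrative only; the name is invented for this example):

    ```swift
    import Foundation

    /// Sketch of mean(max(0, 1 - predicted * expected)),
    /// where expected entries are assumed to be -1 or 1.
    func hingeLossSketch(predicted: [Double], expected: [Double]) -> Double {
        precondition(predicted.count == expected.count, "shapes must match")
        let losses = zip(predicted, expected).map { max(0, 1 - $0 * $1) }
        return losses.reduce(0, +) / Double(losses.count)
    }
    ```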

  • Computes the squared hinge loss between predicted and expected. loss = reduction(square(max(0, 1 - predicted * expected))). The expected values are expected to be -1 or 1.

    Declaration

    @differentiable
    public func squaredHingeLoss<Scalar: TensorFlowFloatingPoint>(
        predicted: Tensor<Scalar>,
        expected: Tensor<Scalar>,
        reduction: @differentiable (Tensor<Scalar>) -> Tensor<Scalar> = _mean
    ) -> Tensor<Scalar>

    Parameters

    predicted

    Predicted outputs from a neural network.

    expected

    Expected values, i.e. targets, that correspond to the correct output.

    reduction

    Reduction to apply on the computed element-wise loss values.

  • Computes the categorical hinge loss between predicted and expected. loss = maximum(negative - positive + 1, 0), where negative = max((1 - expected) * predicted) and positive = sum(predicted * expected).

    Declaration

    @differentiable
    public func categoricalHingeLoss<Scalar: TensorFlowFloatingPoint>(
        predicted: Tensor<Scalar>,
        expected: Tensor<Scalar>,
        reduction: @differentiable (Tensor<Scalar>) -> Tensor<Scalar> = _mean
    ) -> Tensor<Scalar>

    Parameters

    predicted

    Predicted outputs from a neural network.

    expected

    Expected values, i.e. targets, that correspond to the correct output.

    reduction

    Reduction to apply on the computed element-wise loss values.
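    For a single one-hot example, the formula above can be sketched in plain Swift (illustrative only; the name is invented for this example, and the real function additionally applies the reduction across examples):

    ```swift
    import Foundation

    /// Sketch of max(negative - positive + 1, 0) for one example, where
    /// positive = sum(predicted * expected) and
    /// negative = max((1 - expected) * predicted).
    func categoricalHingeLossSketch(predicted: [Double], expected: [Double]) -> Double {
        precondition(predicted.count == expected.count, "shapes must match")
        let positive = zip(predicted, expected).map(*).reduce(0, +)
        let negative = zip(predicted, expected).map { (1 - $1) * $0 }.max() ?? 0
        return max(negative - positive + 1, 0)
    }
    ```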

  • Computes the logarithm of the hyperbolic cosine of the prediction error. logcosh = log((exp(x) + exp(-x))/2), where x is the error predicted - expected.

    Declaration

    @differentiable
    public func logCoshLoss<Scalar: TensorFlowFloatingPoint>(
        predicted: Tensor<Scalar>,
        expected: Tensor<Scalar>,
        reduction: @differentiable (Tensor<Scalar>) -> Tensor<Scalar> = _mean
    ) -> Tensor<Scalar>

    Parameters

    predicted

    Predicted outputs from a neural network.

    expected

    Expected values, i.e. targets, that correspond to the correct output.

    reduction

    Reduction to apply on the computed element-wise loss values.
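    A plain-Swift sketch of the element-wise math with a mean reduction (the name is invented for this example; it uses an algebraically equivalent, numerically stable rewriting of log(cosh(x))):

    ```swift
    import Foundation

    /// Sketch of mean(log(cosh(predicted - expected))).
    func logCoshLossSketch(predicted: [Double], expected: [Double]) -> Double {
        precondition(predicted.count == expected.count, "shapes must match")
        let losses = zip(predicted, expected).map { (p, e) -> Double in
            let x = abs(p - e)                        // log(cosh(x)) is even in x
            return x + log1p(exp(-2 * x)) - log(2.0)  // stable form of log(cosh(x))
        }
        return losses.reduce(0, +) / Double(losses.count)
    }
    ```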

  • Computes the Poisson loss between predicted and expected. The Poisson loss is the mean of the elements of the Tensor predicted - expected * log(predicted).

    Declaration

    @differentiable
    public func poissonLoss<Scalar: TensorFlowFloatingPoint>(
        predicted: Tensor<Scalar>,
        expected: Tensor<Scalar>,
        reduction: @differentiable (Tensor<Scalar>) -> Tensor<Scalar> = _mean
    ) -> Tensor<Scalar>

    Parameters

    predicted

    Predicted outputs from a neural network.

    expected

    Expected values, i.e. targets, that correspond to the correct output.

    reduction

    Reduction to apply on the computed element-wise loss values.
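    A plain-Swift sketch of the element-wise math with a mean reduction (illustrative only; the name is invented for this example, and predicted entries must be positive for the log):

    ```swift
    import Foundation

    /// Sketch of mean(predicted - expected * log(predicted)).
    func poissonLossSketch(predicted: [Double], expected: [Double]) -> Double {
        precondition(predicted.count == expected.count, "shapes must match")
        let losses = zip(predicted, expected).map { (p, e) in p - e * log(p) }
        return losses.reduce(0, +) / Double(losses.count)
    }
    ```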

  • Computes Kullback-Leibler divergence loss between expected and predicted. loss = reduction(expected * log(expected / predicted))

    Declaration

    @differentiable
    public func kullbackLeiblerDivergence<Scalar: TensorFlowFloatingPoint>(
        predicted: Tensor<Scalar>,
        expected: Tensor<Scalar>,
        reduction: @differentiable (Tensor<Scalar>) -> Tensor<Scalar> = _sum
    ) -> Tensor<Scalar>

    Parameters

    predicted

    Predicted outputs from a neural network.

    expected

    Expected values, i.e. targets, that correspond to the correct output.

    reduction

    Reduction to apply on the computed element-wise loss values.
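    A plain-Swift sketch of the element-wise math, using a sum reduction to match the _sum default in the declaration above (illustrative only; the name is invented for this example, and entries must be positive for the log):

    ```swift
    import Foundation

    /// Sketch of sum(expected * log(expected / predicted)).
    func klDivergenceSketch(predicted: [Double], expected: [Double]) -> Double {
        precondition(predicted.count == expected.count, "shapes must match")
        return zip(expected, predicted).map { $0 * log($0 / $1) }.reduce(0, +)
    }
    ```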

  • Computes the sparse softmax cross entropy (categorical cross entropy) between logits and labels. Use this cross-entropy loss function when there are two or more label classes. Labels are expected to be provided as integers. There should be # classes floating-point values per feature for logits and a single floating-point value per feature for labels.

    Declaration

    @differentiable
    public func softmaxCrossEntropy<Scalar: TensorFlowFloatingPoint>(
        logits: Tensor<Scalar>,
        labels: Tensor<Int32>,
        reduction: @differentiable (Tensor<Scalar>) -> Tensor<Scalar> = _mean
    ) -> Tensor<Scalar>

    Parameters

    logits

    One-hot encoded outputs from a neural network.

    labels

    Indices (zero-indexed) of the correct outputs.

    reduction

    Reduction to apply on the computed element-wise loss values.
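    For a single example, the loss is -log(softmax(logits)[label]). A plain-Swift sketch using the log-sum-exp trick for numerical stability (illustrative only; the name is invented for this example, and the real function additionally applies the reduction across examples):

    ```swift
    import Foundation

    /// Sketch of -log(softmax(logits)[label]) for one example.
    func softmaxCrossEntropySketch(logits: [Double], label: Int) -> Double {
        let m = logits.max()!
        // log(sum(exp(logits))) computed stably by factoring out the max.
        let logSumExp = m + log(logits.map { exp($0 - m) }.reduce(0, +))
        return logSumExp - logits[label]
    }
    ```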

  • Computes the softmax cross entropy (categorical cross entropy) between logits and labels. Use this cross-entropy loss function when there are two or more label classes. Labels are expected to be provided in a one-hot representation. There should be # classes floating-point values per feature.

    Declaration

    @differentiable
    public func softmaxCrossEntropy<Scalar: TensorFlowFloatingPoint>(
        logits: Tensor<Scalar>,
        probabilities: Tensor<Scalar>,
        reduction: @differentiable (Tensor<Scalar>) -> Tensor<Scalar> = _mean
    ) -> Tensor<Scalar>

    Parameters

    logits

    Unscaled log probabilities from a neural network.

    probabilities

    Probability values that correspond to the correct output. Each row must be a valid probability distribution.

    reduction

    Reduction to apply on the computed element-wise loss values.

  • Computes the sigmoid cross entropy (binary cross entropy) between logits and labels. Use this cross-entropy loss when there are only two label classes (assumed to be 0 and 1). For each example, there should be a single floating-point value per prediction.

    Declaration

    @differentiable
    public func sigmoidCrossEntropy<Scalar: TensorFlowFloatingPoint>(
        logits: Tensor<Scalar>,
        labels: Tensor<Scalar>,
        reduction: @differentiable (Tensor<Scalar>) -> Tensor<Scalar> = _mean
    ) -> Tensor<Scalar>

    Parameters

    logits

    The unscaled output of a neural network.

    labels

    Integer values that correspond to the correct output.

    reduction

    Reduction to apply on the computed element-wise loss values.
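    A plain-Swift sketch of the element-wise math with a mean reduction (the name is invented for this example; it uses the standard numerically stable rewriting of the binary cross-entropy on logits):

    ```swift
    import Foundation

    /// Sketch of mean over elements of
    /// -labels * log(sigmoid(logits)) - (1 - labels) * log(1 - sigmoid(logits)),
    /// written in the stable form max(z, 0) - z*y + log(1 + exp(-|z|)).
    func sigmoidCrossEntropySketch(logits: [Double], labels: [Double]) -> Double {
        precondition(logits.count == labels.count, "shapes must match")
        let losses = zip(logits, labels).map { (z, y) -> Double in
            max(z, 0) - z * y + log1p(exp(-abs(z)))
        }
        return losses.reduce(0, +) / Double(losses.count)
    }
    ```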

  • Computes the Huber loss between predicted and expected.

    For each value x in error = expected - predicted:

    • 0.5 * x^2 if |x| <= δ.
    • 0.5 * δ^2 + δ * (|x| - δ) otherwise.

    Source: Wikipedia article.

    Declaration

    @differentiable
    public func huberLoss<Scalar: TensorFlowFloatingPoint>(
        predicted: Tensor<Scalar>,
        expected: Tensor<Scalar>,
        delta: Scalar,
        reduction: @differentiable (Tensor<Scalar>) -> Tensor<Scalar> = _sum
    ) -> Tensor<Scalar>

    Parameters

    predicted

    Predicted outputs from a neural network.

    expected

    Expected values, i.e. targets, that correspond to the correct output.

    delta

    A floating point scalar representing the point where the Huber loss function changes from quadratic to linear.

    reduction

    Reduction to apply on the computed element-wise loss values.
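    A plain-Swift sketch of the piecewise formula, using a sum reduction to match the _sum default in the declaration above (illustrative only; the name is invented for this example):

    ```swift
    import Foundation

    /// Sketch of the Huber loss: for each x = expected - predicted,
    /// 0.5 * x^2 if |x| <= delta, else 0.5 * delta^2 + delta * (|x| - delta).
    func huberLossSketch(predicted: [Double], expected: [Double], delta: Double) -> Double {
        precondition(predicted.count == expected.count, "shapes must match")
        let losses = zip(expected, predicted).map { (e, p) -> Double in
            let x = abs(e - p)
            return x <= delta ? 0.5 * x * x
                              : 0.5 * delta * delta + delta * (x - delta)
        }
        return losses.reduce(0, +)  // the documented default reduction is _sum
    }
    ```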

  • Returns the absolute value of the specified tensor element-wise.

    Declaration

    @differentiable
    public func abs<T>(_ x: Tensor<T>) -> Tensor<T> where T : SignedNumeric, T : TensorFlowScalar
  • Returns the natural logarithm of the specified tensor element-wise.

    Declaration

    @differentiable
    public func log<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns the base-two logarithm of the specified tensor element-wise.

    Declaration

    @differentiable
    public func log2<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns the base-ten logarithm of the specified tensor element-wise.

    Declaration

    @differentiable
    public func log10<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns the logarithm of 1 + x element-wise.

    Declaration

    @differentiable
    public func log1p<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns log(1 - exp(x)) using a numerically stable approach.

    Declaration

    @differentiable
    public func log1mexp<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns the sine of the specified tensor element-wise.

    Declaration

    @differentiable
    public func sin<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns the cosine of the specified tensor element-wise.

    Declaration

    @differentiable
    public func cos<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns the tangent of the specified tensor element-wise.

    Declaration

    @differentiable
    public func tan<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns the hyperbolic sine of the specified tensor element-wise.

    Declaration

    @differentiable
    public func sinh<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns the hyperbolic cosine of the specified tensor element-wise.

    Declaration

    @differentiable
    public func cosh<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns the hyperbolic tangent of the specified tensor element-wise.

    Declaration

    @differentiable
    public func tanh<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns the inverse cosine of the specified tensor element-wise.

    Declaration

    @differentiable
    public func acos<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns the inverse sine of the specified tensor element-wise.

    Declaration

    @differentiable
    public func asin<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns the inverse tangent of the specified tensor element-wise.

    Declaration

    @differentiable
    public func atan<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns the inverse hyperbolic cosine of the specified tensor element-wise.

    Declaration

    @differentiable
    public func acosh<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns the inverse hyperbolic sine of the specified tensor element-wise.

    Declaration

    @differentiable
    public func asinh<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns the inverse hyperbolic tangent of the specified tensor element-wise.

    Declaration

    @differentiable
    public func atanh<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns the square root of the specified tensor element-wise.

    Declaration

    @differentiable
    public func sqrt<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns the inverse square root of the specified tensor element-wise.

    Declaration

    @differentiable
    public func rsqrt<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns the exponential of the specified tensor element-wise.

    Declaration

    @differentiable
    public func exp<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns two raised to the power of the specified tensor element-wise.

    Declaration

    @differentiable
    public func exp2<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns ten raised to the power of the specified tensor element-wise.

    Declaration

    @differentiable
    public func exp10<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns the exponential of x - 1 element-wise.

    Declaration

    @differentiable
    public func expm1<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns the values of the specified tensor rounded to the nearest integer, element-wise.

    Declaration

    @differentiable
    public func round<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns the ceiling of the specified tensor element-wise.

    Declaration

    @differentiable
    public func ceil<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns the floor of the specified tensor element-wise.

    Declaration

    @differentiable
    public func floor<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns an indication of the sign of the specified tensor element-wise. Specifically, computes y = sign(x) = -1 if x < 0; 0 if x == 0; 1 if x > 0.

    Declaration

    @differentiable
    public func sign<T>(_ x: Tensor<T>) -> Tensor<T> where T : Numeric, T : TensorFlowScalar
  • Returns the sigmoid of the specified tensor element-wise. Specifically, computes 1 / (1 + exp(-x)).

    Declaration

    @differentiable
    public func sigmoid<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns the log-sigmoid of the specified tensor element-wise. Specifically, log(1 / (1 + exp(-x))). For numerical stability, we use -softplus(-x).

    Declaration

    @differentiable
    public func logSigmoid<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns the softplus of the specified tensor element-wise. Specifically, computes log(exp(features) + 1).

    Declaration

    @differentiable
    public func softplus<T>(_ features: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns the softsign of the specified tensor element-wise. Specifically, computes features / (abs(features) + 1).

    Declaration

    @differentiable
    public func softsign<T>(_ features: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns the softmax of the specified tensor along the last axis. Specifically, computes exp(x) / exp(x).sum(alongAxes: -1).

    Declaration

    @differentiable
    public func softmax<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns the softmax of the specified tensor along the specified axis. Specifically, computes exp(x) / exp(x).sum(alongAxes: axis).

    Declaration

    @differentiable
    public func softmax<T>(_ x: Tensor<T>, alongAxis axis: Int) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
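    A plain-Swift sketch of the softmax over one axis represented as a flat [Double] array (illustrative only; the name is invented for this example). Subtracting the maximum before exponentiating does not change the result but avoids overflow:

    ```swift
    import Foundation

    /// Sketch of exp(x) / sum(exp(x)) over a single axis.
    func softmaxSketch(_ x: [Double]) -> [Double] {
        let m = x.max() ?? 0
        let exps = x.map { exp($0 - m) }  // subtract the max for numerical stability
        let sum = exps.reduce(0, +)
        return exps.map { $0 / sum }
    }
    ```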
  • Returns the log-softmax of the specified tensor element-wise.

    Declaration

    @differentiable
    public func logSoftmax<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns a tensor by applying an exponential linear unit. Specifically, computes exp(x) - 1 if x < 0, x otherwise. See Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)

    Declaration

    @differentiable
    public func elu<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns the Gaussian Error Linear Unit (GELU) activations of the specified tensor element-wise.

    Specifically, gelu approximates xP(X <= x), where P(X <= x) is the Standard Gaussian cumulative distribution, by computing: x * [0.5 * (1 + tanh[√(2/π) * (x + 0.044715 * x^3)])].

    See Gaussian Error Linear Units.

    Declaration

    @differentiable
    public func gelu<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
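    The tanh-based approximation quoted above, sketched as a scalar function in plain Swift (illustrative only; the name is invented for this example):

    ```swift
    import Foundation

    /// Sketch of the GELU approximation
    /// x * 0.5 * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3))).
    func geluSketch(_ x: Double) -> Double {
        let c = sqrt(2.0 / Double.pi)
        return x * 0.5 * (1 + tanh(c * (x + 0.044715 * x * x * x)))
    }
    ```

    For large positive x the tanh saturates at 1, so geluSketch(x) approaches x; for large negative x it approaches 0.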
  • Returns a tensor by applying the ReLU activation function to the specified tensor element-wise. Specifically, computes max(0, x).

    Declaration

    @differentiable
    public func relu<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns a tensor by applying the ReLU6 activation function, namely min(max(0, x), 6).

    Declaration

    @differentiable
    public func relu6<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns a tensor by applying the leaky ReLU activation function to the specified tensor element-wise. Specifically, computes max(x, x * alpha).

    Declaration

    @differentiable
    public func leakyRelu<T: TensorFlowFloatingPoint>(
        _ x: Tensor<T>,
        alpha: Double = 0.2
    ) -> Tensor<T>
  • Returns a tensor by applying the SeLU activation function, namely scale * alpha * (exp(x) - 1) if x < 0, and scale * x otherwise.

    Note

    This is designed to be used together with the variance scaling layer initializers. Please refer to Self-Normalizing Neural Networks for more information.

    Declaration

    @differentiable
    public func selu<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns a tensor by applying the swish activation function, namely x * sigmoid(x).

    Source: “Searching for Activation Functions” (Ramachandran et al. 2017) https://arxiv.org/abs/1710.05941

    Declaration

    @differentiable
    public func swish<T>(_ x: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns the power of the first tensor to the second tensor.

    Declaration

    @differentiable
    public func pow<T>(_ lhs: Tensor<T>, _ rhs: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns the power of the scalar to the tensor, broadcasting the scalar.

    Declaration

    @differentiable
    public func pow<T>(_ lhs: T, _ rhs: Tensor<T>) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns the power of the tensor to the scalar, broadcasting the scalar.

    Declaration

    @differentiable
    public func pow<T>(_ lhs: Tensor<T>, _ rhs: T) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns the tensor raised to the integer power n, element-wise.

    Declaration

    @differentiable
    public func pow<T>(_ x: Tensor<T>, _ n: Int) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns the element-wise nth root of the tensor.

    Declaration

    @differentiable
    public func root<T>(_ x: Tensor<T>, _ n: Int) -> Tensor<T> where T : TensorFlowFloatingPoint, T : TensorFlowScalar
  • Returns the squared difference between x and y.

    Declaration

    @differentiable
    public func squaredDifference<T>(_ x: Tensor<T>, _ y: Tensor<T>) -> Tensor<T> where T : Numeric, T : TensorFlowScalar

    Return Value

    (x - y) ^ 2.

  • Returns the element-wise maximum of two tensors.

    Note

    max supports broadcasting.

    Declaration

    @differentiable
    public func max<T>(_ lhs: Tensor<T>, _ rhs: Tensor<T>) -> Tensor<T> where T : Comparable, T : Numeric, T : TensorFlowScalar
  • Returns the element-wise maximum of the scalar and the tensor, broadcasting the scalar.

    Declaration

    @differentiable
    public func max<T>(_ lhs: T, _ rhs: Tensor<T>) -> Tensor<T> where T : Comparable, T : Numeric, T : TensorFlowScalar
  • Returns the element-wise maximum of the tensor and the scalar, broadcasting the scalar.

    Declaration

    @differentiable
    public func max<T>(_ lhs: Tensor<T>, _ rhs: T) -> Tensor<T> where T : Comparable, T : Numeric, T : TensorFlowScalar
  • Returns the element-wise minimum of two tensors.

    Note

    min supports broadcasting.

    Declaration

    @differentiable
    public func min<T>(_ lhs: Tensor<T>, _ rhs: Tensor<T>) -> Tensor<T> where T : Comparable, T : Numeric, T : TensorFlowScalar
  • Returns the element-wise minimum of the scalar and the tensor, broadcasting the scalar.

    Declaration

    @differentiable
    public func min<T>(_ lhs: T, _ rhs: Tensor<T>) -> Tensor<T> where T : Comparable, T : Numeric, T : TensorFlowScalar
  • Returns the element-wise minimum of the tensor and the scalar, broadcasting the scalar.

    Declaration

    @differentiable
    public func min<T>(_ lhs: Tensor<T>, _ rhs: T) -> Tensor<T> where T : Comparable, T : Numeric, T : TensorFlowScalar
  • Returns the cosine similarity between x and y.

    Declaration

    @differentiable
    public func cosineSimilarity<Scalar: TensorFlowFloatingPoint>(
        _ x: Tensor<Scalar>,
        _ y: Tensor<Scalar>
    ) -> Tensor<Scalar>
  • Returns the cosine distance between x and y. Cosine distance is defined as 1 - cosineSimilarity(x, y).

    Declaration

    @differentiable
    public func cosineDistance<Scalar: TensorFlowFloatingPoint>(
        _ x: Tensor<Scalar>,
        _ y: Tensor<Scalar>
    ) -> Tensor<Scalar>
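    Both definitions sketched in plain Swift on flat [Double] vectors (illustrative only; the names are invented for this example):

    ```swift
    import Foundation

    /// Sketch of cosine similarity: dot(x, y) / (||x|| * ||y||).
    func cosineSimilaritySketch(_ x: [Double], _ y: [Double]) -> Double {
        precondition(x.count == y.count, "shapes must match")
        let dot = zip(x, y).map(*).reduce(0, +)
        let normX = sqrt(x.map { $0 * $0 }.reduce(0, +))
        let normY = sqrt(y.map { $0 * $0 }.reduce(0, +))
        return dot / (normX * normY)
    }

    /// Sketch of cosine distance: 1 - cosineSimilarity(x, y).
    func cosineDistanceSketch(_ x: [Double], _ y: [Double]) -> Double {
        1 - cosineSimilaritySketch(x, y)
    }
    ```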
  • Performs matrix multiplication of two tensors and produces the result.

    Declaration

    @differentiable
    public func matmul<Scalar: Numeric>(
        _ lhs: Tensor<Scalar>,
        transposed transposeLhs: Bool = false,
        _ rhs: Tensor<Scalar>,
        transposed transposeRhs: Bool = false
    ) -> Tensor<Scalar>
  • Returns a 1-D convolution with the specified input, filter, stride, and padding.

    Precondition

    input must have rank 3.

    Precondition

    filter must have rank 3.

    Declaration

    @differentiable
    public func conv1D<Scalar: TensorFlowFloatingPoint>(
        _ input: Tensor<Scalar>,
        filter: Tensor<Scalar>,
        stride: Int = 1,
        padding: Padding = .valid,
        dilation: Int = 1
    ) -> Tensor<Scalar>

    Parameters

    input

    The input.

    filter

    The convolution filter.

    stride

    The stride of the sliding filter.

    padding

    The padding for the operation.

    dilation

    The dilation factor.

  • Returns a 2-D convolution with the specified input, filter, strides, and padding.

    Precondition

    input must have rank 4.

    Precondition

    filter must have rank 4.

    Declaration

    @differentiable
    public func conv2D<Scalar: TensorFlowFloatingPoint>(
        _ input: Tensor<Scalar>,
        filter: Tensor<Scalar>,
        strides: (Int, Int, Int, Int) = (1, 1, 1, 1),
        padding: Padding = .valid,
        dilations: (Int, Int, Int, Int) = (1, 1, 1, 1)
    ) -> Tensor<Scalar>

    Parameters

    input

    The input.

    filter

    The convolution filter.

    strides

    The strides of the sliding filter for each dimension of the input.

    padding

    The padding for the operation.

    dilations

    The dilation factor for each dimension of the input.

  • Returns a 3-D convolution with the specified input, filter, strides, padding and dilations.

    Precondition

    input must have rank 5.

    Precondition

    filter must have rank 5.

    Declaration

    @differentiable
    public func conv3D<Scalar: TensorFlowFloatingPoint>(
        _ input: Tensor<Scalar>,
        filter: Tensor<Scalar>,
        strides: (Int, Int, Int, Int, Int) = (1, 1, 1, 1, 1),
        padding: Padding = .valid,
        dilations: (Int, Int, Int, Int, Int) = (1, 1, 1, 1, 1)
    ) -> Tensor<Scalar>

    Parameters

    input

    The input.

    filter

    The convolution filter.

    strides

    The strides of the sliding filter for each dimension of the input.

    padding

    The padding for the operation.

    dilations

    The dilation factor for each dimension of the input.

  • Returns a 2-D depthwise convolution with the specified input, filter, strides, and padding.

    Precondition

    input must have rank 4.

    Precondition

    filter must have rank 4.

    Declaration

    @differentiable
    public func depthwiseConv2D<Scalar: TensorFlowFloatingPoint>(
        _ input: Tensor<Scalar>,
        filter: Tensor<Scalar>,
        strides: (Int, Int, Int, Int),
        padding: Padding
    ) -> Tensor<Scalar>

    Parameters

    input

    The input.

    filter

    The depthwise convolution filter.

    strides

    The strides of the sliding filter for each dimension of the input.

    padding

    The padding for the operation.

  • Returns a 2-D max pooling, with the specified filter sizes, strides, and padding.

    Declaration

    @differentiable
    public func maxPool2D<Scalar: TensorFlowFloatingPoint>(
        _ input: Tensor<Scalar>,
        filterSize: (Int, Int, Int, Int),
        strides: (Int, Int, Int, Int),
        padding: Padding
    ) -> Tensor<Scalar>

    Parameters

    input

    The input.

    filterSize

    The dimensions of the pooling kernel.

    strides

    The strides of the sliding filter for each dimension of the input.

    padding

    The padding for the operation.

  • Returns a 3-D max pooling, with the specified filter sizes, strides, and padding.

    Declaration

    @differentiable
    public func maxPool3D<Scalar: TensorFlowFloatingPoint>(
        _ input: Tensor<Scalar>,
        filterSize: (Int, Int, Int, Int, Int),
        strides: (Int, Int, Int, Int, Int),
        padding: Padding
    ) -> Tensor<Scalar>

    Parameters

    input

    The input.

    filterSize

    The dimensions of the pooling kernel.

    strides

    The strides of the sliding filter for each dimension of the input.

    padding

    The padding for the operation.

  • Returns a 2-D average pooling, with the specified filter sizes, strides, and padding.

    Declaration

    @differentiable
    public func avgPool2D<Scalar: TensorFlowFloatingPoint>(
        _ input: Tensor<Scalar>,
        filterSize: (Int, Int, Int, Int),
        strides: (Int, Int, Int, Int),
        padding: Padding
    ) -> Tensor<Scalar>

    Parameters

    input

    The input.

    filterSize

    The dimensions of the pooling kernel.

    strides

    The strides of the sliding filter for each dimension of the input.

    padding

    The padding for the operation.

  • Returns a 3-D average pooling, with the specified filter sizes, strides, and padding.

    Declaration

    @differentiable
    public func avgPool3D<Scalar: TensorFlowFloatingPoint>(
        _ input: Tensor<Scalar>,
        filterSize: (Int, Int, Int, Int, Int),
        strides: (Int, Int, Int, Int, Int),
        padding: Padding
    ) -> Tensor<Scalar>

    Parameters

    input

    The input.

    filterSize

    The dimensions of the pooling kernel.

    strides

    The strides of the sliding filter for each dimension of the input.

    padding

    The padding for the operation.

  • Generates a new random seed for TensorFlow.

    Declaration

    public func randomSeedForTensorFlow(using seed: TensorFlowSeed? = nil) -> TensorFlowSeed