@frozen
public struct Tensor<Scalar> : TensorProtocol where Scalar : TensorFlowScalar

A multidimensional array of elements that is a generalization of vectors and matrices to potentially higher dimensions.

The generic parameter Scalar describes the type of scalars in the tensor (such as Int32, Float, etc.).

  • Returns a Boolean value indicating whether the results of element-wise comparison lhs .< rhs are all true.

    Declaration

    @available(*, deprecated, message: "This API will be removed after Swift for TensorFlow 0.5.\nUse `(lhs .< rhs).all()` instead.")
    public static func < (lhs: Tensor, rhs: Tensor) -> Bool
  • Returns a Boolean value indicating whether the results of element-wise comparison lhs .<= rhs are all true.

    Declaration

    @available(*, deprecated, message: "This API will be removed after Swift for TensorFlow 0.5.\nUse `(lhs .<= rhs).all()` instead.")
    public static func <= (lhs: Tensor, rhs: Tensor) -> Bool
  • Returns a Boolean value indicating whether the results of element-wise comparison lhs .> rhs are all true.

    Declaration

    @available(*, deprecated, message: "This API will be removed after Swift for TensorFlow 0.5.\nUse `(lhs .> rhs).all()` instead.")
    public static func > (lhs: Tensor, rhs: Tensor) -> Bool
  • Returns a Boolean value indicating whether the results of element-wise comparison lhs .>= rhs are all true.

    Declaration

    @available(*, deprecated, message: "This API will be removed after Swift for TensorFlow 0.5.\nUse `(lhs .>= rhs).all()` instead.")
    public static func >= (lhs: Tensor, rhs: Tensor) -> Bool
  • Returns a Boolean value indicating whether all scalars in the first argument are less than the second argument.

    Declaration

    @available(*, deprecated, message: "This API will be removed after Swift for TensorFlow 0.5.\nUse `(lhs .< rhs).all()` instead.")
    static func < (lhs: Tensor, rhs: Scalar) -> Bool
  • Returns a Boolean value indicating whether all scalars in the first argument are less than or equal to the second argument.

    Declaration

    @available(*, deprecated, message: "This API will be removed after Swift for TensorFlow 0.5.\nUse `(lhs .<= rhs).all()` instead.")
    static func <= (lhs: Tensor, rhs: Scalar) -> Bool
  • Returns a Boolean value indicating whether all scalars in the first argument are greater than the second argument.

    Declaration

    @available(*, deprecated, message: "This API will be removed after Swift for TensorFlow 0.5.\nUse `(lhs .> rhs).all()` instead.")
    static func > (lhs: Tensor, rhs: Scalar) -> Bool
  • Returns a Boolean value indicating whether all scalars in the first argument are greater than or equal to the second argument.

    Declaration

    @available(*, deprecated, message: "This API will be removed after Swift for TensorFlow 0.5.\nUse `(lhs .>= rhs).all()` instead.")
    static func >= (lhs: Tensor, rhs: Scalar) -> Bool
  • Unpacks the given dimension of a rank-R tensor into multiple rank-(R-1) tensors. Unpacks N tensors from this tensor by chipping it along the axis dimension, where N is inferred from this tensor’s shape. For example, given a tensor with shape [A, B, C, D]:

    • If axis == 0 then the i-th tensor in the returned array is the slice self[i, :, :, :] and each tensor in that array will have shape [B, C, D]. (Note that the dimension unpacked along is gone, unlike Tensor.split(count:alongAxis:) or Tensor.split(sizes:alongAxis:).)
    • If axis == 1 then the i-th tensor in the returned array is the slice self[:, i, :, :] and each tensor in that array will have shape [A, C, D].
    • Etc.

    This is the opposite of Tensor.init(stacking:alongAxis:).

    Precondition

    axis must be in the range [-rank, rank), where rank is the rank of the provided tensors.

    Declaration

    @differentiable
    func unstacked(alongAxis axis: Int = 0) -> [Tensor]

    Parameters

    axis

    Dimension along which to unstack. Negative values wrap around.

    Return Value

    Array containing the unstacked tensors.
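
    For example, here is a small illustrative sketch (it assumes import TensorFlow from a Swift for TensorFlow toolchain):

    import TensorFlow

    let t = Tensor<Float>([[1, 2], [3, 4], [5, 6]])  // shape [3, 2]
    let rows = t.unstacked()                         // 3 tensors, each with shape [2]
    // rows[0] is [1.0, 2.0], rows[2] is [5.0, 6.0]
    let columns = t.unstacked(alongAxis: 1)          // 2 tensors, each with shape [3]
    // columns[0] is [1.0, 3.0, 5.0]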

  • Splits a tensor into multiple tensors. The tensor is split along dimension axis into count smaller tensors. This requires that count evenly divides shape[axis].

    For example:

    // 'value' is a tensor with shape [5, 30]
    // Split 'value' into 3 tensors along dimension 1:
    let parts = value.split(count: 3, alongAxis: 1)
    parts[0] // has shape [5, 10]
    parts[1] // has shape [5, 10]
    parts[2] // has shape [5, 10]
    

    Precondition

    count must divide the size of dimension axis evenly.

    Precondition

    axis must be in the range [-rank, rank), where rank is the rank of the provided tensors.

    Declaration

    @differentiable
    func split(count: Int, alongAxis axis: Int = 0) -> [Tensor]

    Parameters

    count

    Number of splits to create.

    axis

    The dimension along which to split this tensor. Negative values wrap around.

    Return Value

    An array containing the tensor's parts.

  • Splits a tensor into multiple tensors. The tensor is split into sizes.shape[0] pieces. The shape of the i-th piece has the same shape as this tensor except along dimension axis where the size is sizes[i].

    For example:

    // 'value' is a tensor with shape [5, 30]
    // Split 'value' into 3 tensors with sizes [4, 15, 11] along dimension 1:
    let parts = value.split(sizes: Tensor<Int32>([4, 15, 11]), alongAxis: 1)
    parts[0] // has shape [5, 4]
    parts[1] // has shape [5, 15]
    parts[2] // has shape [5, 11]
    

    Precondition

    The values in sizes must add up to the size of dimension axis.

    Precondition

    axis must be in the range [-rank, rank), where rank is the rank of the provided tensors.

    Declaration

    @differentiable
    func split(sizes: Tensor<Int32>, alongAxis axis: Int = 0) -> [Tensor]

    Parameters

    sizes

    1-D tensor containing the size of each split.

    axis

    Dimension along which to split this tensor. Negative values wrap around.

    Return Value

    Array containing the tensor's parts.

  • Returns a tiled tensor, constructed by tiling this tensor.

    This constructor creates a new tensor by replicating this tensor multiples times. The constructed tensor's i'th dimension has self.shape[i] * multiples[i] elements, and the values of this tensor are replicated multiples[i] times along the i'th dimension. For example, tiling [a b c d] by [2] produces [a b c d a b c d].

    Precondition

    The shape of multiples must be [tensor.rank].

    Declaration

    @differentiable
    func tiled(multiples: Tensor<Int32>) -> Tensor
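
    For example (an illustrative sketch; assumes import TensorFlow):

    let v = Tensor<Float>([1, 2, 3])
    v.tiled(multiples: Tensor<Int32>([2]))       // is [1, 2, 3, 1, 2, 3]
    let m = Tensor<Float>([[1, 2], [3, 4]])      // shape [2, 2]
    m.tiled(multiples: Tensor<Int32>([1, 3]))    // has shape [2, 6]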
  • Reshape to the shape of the specified Tensor.

    Precondition

    The number of scalars matches the new shape.

    Declaration

    @differentiable
    func reshaped<T>(like other: Tensor<T>) -> Tensor where T : TensorFlowScalar
  • Reshape to the specified shape.

    Precondition

    The number of scalars matches the new shape.

    Declaration

    @differentiable
    func reshaped(to newShape: TensorShape) -> Tensor
  • Reshape to the specified Tensor representing a shape.

    Precondition

    The number of scalars matches the new shape.

    Declaration

    @differentiable
    func reshaped(toShape newShape: Tensor<Int32>) -> Tensor
  • Returns a copy of the tensor collapsed into a 1-D Tensor, in row-major order.

    Declaration

    @differentiable
    func flattened() -> Tensor
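
    For example (illustrative sketch; assumes import TensorFlow):

    let x = Tensor<Float>([[1, 2, 3], [4, 5, 6]])    // shape [2, 3]
    x.flattened()                                    // is [1, 2, 3, 4, 5, 6]
    x.reshaped(to: [3, 2])                           // is [[1, 2], [3, 4], [5, 6]]
    x.reshaped(like: Tensor<Float>(zeros: [6, 1]))   // has shape [6, 1]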
  • Returns a shape-expanded Tensor, with a dimension of 1 inserted at the specified shape indices.

    Declaration

    @differentiable
    func expandingShape(at axes: Int...) -> Tensor
  • Returns a shape-expanded Tensor, with a dimension of 1 inserted at the specified shape indices.

    Declaration

    @differentiable
    func expandingShape(at axes: [Int]) -> Tensor
  • Returns a rank-lifted Tensor with a leading dimension of 1.

    Declaration

    @differentiable
    func rankLifted() -> Tensor
  • Removes the specified dimensions of size 1 from the shape of a tensor. If no dimensions are specified, then all dimensions of size 1 will be removed.

    Declaration

    @differentiable
    func squeezingShape(at axes: Int...) -> Tensor
  • Removes the specified dimensions of size 1 from the shape of a tensor. If no dimensions are specified, then all dimensions of size 1 will be removed.

    Declaration

    @differentiable
    func squeezingShape(at axes: [Int]) -> Tensor
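
    For example (illustrative sketch; assumes import TensorFlow):

    let v = Tensor<Float>([1, 2, 3])       // shape [3]
    v.rankLifted()                         // has shape [1, 3]
    v.expandingShape(at: 1)                // has shape [3, 1]
    let u = Tensor<Float>(zeros: [1, 3, 1])
    u.squeezingShape()                     // has shape [3]
    u.squeezingShape(at: 2)                // has shape [1, 3]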
  • Returns a transposed tensor, with dimensions permuted in the specified order.

    Declaration

    @differentiable
    func transposed(withPermutations permutations: Tensor<Int32>) -> Tensor
  • Returns a transposed tensor, with dimensions permuted in the specified order.

    Declaration

    @differentiable
    func transposed(withPermutations permutations: [Int]) -> Tensor
  • Returns a transposed tensor, with dimensions permuted in the specified order.

    Declaration

    @differentiable
    func transposed(withPermutations permutations: Int...) -> Tensor
  • Returns a transposed tensor, with dimensions permuted in reverse order.

    Declaration

    @differentiable
    func transposed() -> Tensor
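
    For example (illustrative sketch; assumes import TensorFlow):

    let m = Tensor<Float>([[1, 2, 3], [4, 5, 6]])    // shape [2, 3]
    m.transposed()                                   // is [[1, 4], [2, 5], [3, 6]]
    let t = Tensor<Float>(zeros: [2, 3, 4])
    t.transposed(withPermutations: 2, 0, 1)          // has shape [4, 2, 3]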
  • Returns a concatenated tensor along the specified axis.

    Precondition

    The tensors must have the same dimensions, except for the specified axis.

    Precondition

    The axis must be in the range -rank..<rank.

    Declaration

    @differentiable
    func concatenated(with other: Tensor, alongAxis axis: Int = 0) -> Tensor
  • Concatenation operator.

    Note

    ++ is a custom operator that does not exist in Swift, but does in Haskell/Scala. Its addition is not an insignificant language change and may be controversial. The existence/naming of ++ will be discussed during a later API design phase.

    Declaration

    @differentiable
    static func ++ (lhs: Tensor, rhs: Tensor) -> Tensor
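
    For example (illustrative sketch; assumes import TensorFlow):

    let a = Tensor<Float>([[1, 2], [3, 4]])
    let b = Tensor<Float>([[5, 6], [7, 8]])
    a.concatenated(with: b)                // has shape [4, 2]
    a.concatenated(with: b, alongAxis: 1)  // is [[1, 2, 5, 6], [3, 4, 7, 8]]
    a ++ b                                 // same as a.concatenated(with: b)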
  • Returns a tensor by gathering slices of the input at indices along the axis dimension.

    For 0-D (scalar) indices:

    result[p_0,          ..., p_{axis-1},
           p_{axis + 1}, ..., p_{N-1}] =
    self[p_0,          ..., p_{axis-1},
         indices,
         p_{axis + 1}, ..., p_{N-1}]
    

    For 1-D (vector) indices:

    result[p_0,          ..., p_{axis-1},
           i,
           p_{axis + 1}, ..., p_{N-1}] =
    self[p_0,          ..., p_{axis-1},
         indices[i],
         p_{axis + 1}, ..., p_{N-1}]
    

    In the general case, produces a resulting tensor where:

    result[p_0,             ..., p_{axis-1},
           i_{batch\_dims}, ..., i_{M-1},
           p_{axis + 1},    ..., p_{N-1}] =
    self[p_0,             ..., p_{axis-1},
         indices[i_0,     ..., i_{M-1}],
         p_{axis + 1},    ..., p_{N-1}]
    

    where N = self.rank and M = indices.rank.

    The shape of the resulting tensor is: self.shape[..<axis] + indices.shape + self.shape[(axis + 1)...].

    Note

    On CPU, if an out-of-range index is found, an error is thrown. On GPU, if an out-of-range index is found, a 0 is stored in the corresponding output values.

    Precondition

    axis must be in the range [-rank, rank).

    Declaration

    @differentiable
    func gathering<Index: TensorFlowIndex>(
        atIndices indices: Tensor<Index>,
        alongAxis axis: Int = 0
    ) -> Tensor

    Parameters

    indices

    Contains the indices to gather at.

    axis

    Dimension along which to gather. Negative values wrap around.

    Return Value

    The gathered tensor.
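
    For example (illustrative sketch; assumes import TensorFlow):

    let params = Tensor<Float>([10, 20, 30, 40])
    params.gathering(atIndices: Tensor<Int32>([3, 0]))           // is [40, 10]
    let m = Tensor<Float>([[1, 2], [3, 4], [5, 6]])              // shape [3, 2]
    m.gathering(atIndices: Tensor<Int32>([2, 0]))                // is [[5, 6], [1, 2]]
    m.gathering(atIndices: Tensor<Int32>([1, 1]), alongAxis: 1)  // is [[2, 2], [4, 4], [6, 6]]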

  • Returns slices of this tensor at indices along the axis dimension, while ignoring the first batchDimensionCount dimensions that correspond to batch dimensions. The gather is performed along the first non-batch dimension.

    Performs similar functionality to gathering, except that the resulting tensor shape is now shape[..<axis] + indices.shape[batchDimensionCount...] + shape[(axis + 1)...].

    Precondition

    axis must be in the range -rank..<rank, while also being greater than or equal to batchDimensionCount.

    Precondition

    batchDimensionCount must be less than indices.rank.

    Declaration

    @differentiable
    func batchGathering<Index: TensorFlowIndex>(
        atIndices indices: Tensor<Index>,
        alongAxis axis: Int = 1,
        batchDimensionCount: Int = 1
    ) -> Tensor

    Parameters

    indices

    Contains the indices to gather.

    axis

    Dimension along which to gather. Negative values wrap around.

    batchDimensionCount

    Number of leading batch dimensions to ignore.

    Return Value

    The gathered tensor.
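
    For example, a sketch of the default case (axis == 1, one batch dimension; assumes import TensorFlow). For each batch index b, the result gathers params[b, indices[b, ...]]:

    let params = Tensor<Float>([[10, 20, 30], [40, 50, 60]])   // shape [2, 3]
    let indices = Tensor<Int32>([[2, 0], [1, 1]])              // shape [2, 2]
    params.batchGathering(atIndices: indices)                  // is [[30, 10], [50, 50]]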

  • Returns a tensor by gathering the values after applying the provided boolean mask to the input.

    For example:

    // 1-D example
    // tensor is [0, 1, 2, 3]
    // mask is [true, false, true, false]
    tensor.gathering(where: mask) // is [0, 2]
    
    // 2-D example
    // tensor is [[1, 2], [3, 4], [5, 6]]
    // mask is [true, false, true]
    tensor.gathering(where: mask) // is [[1, 2], [5, 6]]
    

    In general, 0 < mask.rank = K <= tensor.rank, and the mask's shape must match the first K dimensions of the tensor's shape. We then have: tensor.gathering(where: mask)[i, j1, ..., jd] = tensor[i1, ..., iK, j1, ..., jd], where [i1, ..., iK] is the i-th true entry of mask (row-major order).

    The axis could be used with mask to indicate the axis to mask from. In that case, axis + mask.rank <= tensor.rank and the mask's shape must match the first axis + mask.rank dimensions of the tensor's shape.

    Precondition

    The mask cannot be a scalar: mask.rank != 0.

    Declaration

    @differentiable
    func gathering(where mask: Tensor<Bool>, alongAxis axis: Int = 0) -> Tensor

    Parameters

    mask

    K-D boolean tensor, where K <= self.rank.

    axis

    0-D integer tensor representing the axis in self to mask from, where K + axis <= self.rank.

    Return Value

    (self.rank - K + 1)-dimensional tensor populated by entries in this tensor corresponding to true values in mask.

  • Returns the locations of non-zero / true values in this tensor.

    The coordinates are returned in a 2-D tensor where the first dimension (rows) represents the number of non-zero elements, and the second dimension (columns) represents the coordinates of the non-zero elements. Keep in mind that the shape of the output tensor can vary depending on how many true values there are in this tensor. Indices are output in row-major order.

    For example:

    // 'input' is [[true, false], [true, false]]
    // 'input' has 2 true values and so the output has 2 rows.
    // 'input' has rank of 2, and so the second dimension of the output has size 2.
    input.nonZeroIndices() // is [[0, 0], [1, 0]]
    
    // 'input' is [[[ true, false], [ true, false]],
    //             [[false,  true], [false,  true]],
    //             [[false, false], [false,  true]]]
    // 'input' has 5 true values and so the output has 5 rows.
    // 'input' has rank 3, and so the second dimension of the output has size 3.
    input.nonZeroIndices() // is [[0, 0, 0],
                           //     [0, 1, 0],
                           //     [1, 0, 1],
                           //     [1, 1, 1],
                           //     [2, 1, 1]]
    

    Declaration

    func nonZeroIndices() -> Tensor<Int64>

    Return Value

    A tensor with shape (num_true, rank(condition)).

  • Broadcast to the specified shape.

    Declaration

    @differentiable
    func broadcasted(toShape shape: Tensor<Int32>) -> Tensor
  • Broadcast to the specified shape.

    Declaration

    @differentiable
    func broadcasted(to shape: TensorShape) -> Tensor
  • Broadcast to the same shape as the specified Tensor.

    Precondition

    The specified shape must be compatible for broadcasting.

    Declaration

    @differentiable
    func broadcasted<OtherScalar>(like other: Tensor<OtherScalar>) -> Tensor where OtherScalar : TensorFlowScalar
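
    For example (illustrative sketch; assumes import TensorFlow):

    let row = Tensor<Float>([1, 2, 3])                   // shape [3]
    row.broadcasted(to: [2, 3])                          // is [[1, 2, 3], [1, 2, 3]]
    row.broadcasted(like: Tensor<Float>(zeros: [4, 3]))  // has shape [4, 3]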
  • Declaration

    static func .= (lhs: inout Tensor, rhs: Tensor)
  • Returns a padded tensor according to the specified padding sizes.

    Declaration

    @differentiable
    func padded(forSizes sizes: [(before: Int, after: Int)], with value: Scalar = 0) -> Tensor
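
    For example (illustrative sketch; assumes import TensorFlow):

    let m = Tensor<Float>(ones: [2, 2])
    m.padded(forSizes: [(before: 1, after: 1), (before: 0, after: 2)])   // has shape [4, 4]
    m.padded(forSizes: [(before: 0, after: 1), (before: 0, after: 0)], with: 9)
    // appends a row of 9s, producing shape [3, 2]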
  • Extracts a slice from the tensor defined by lower and upper bounds for each dimension.

    Declaration

    @differentiable
    func slice(lowerBounds: [Int], upperBounds: [Int]) -> Tensor

    Parameters

    lowerBounds

    The lower bounds at each dimension.

    upperBounds

    The upper bounds at each dimension.
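
    For example (illustrative sketch; assumes import TensorFlow):

    let t = Tensor<Float>([[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]])   // shape [3, 4]
    t.slice(lowerBounds: [1, 0], upperBounds: [3, 2])                     // is [[4, 5], [8, 9]]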

  • Extracts a slice from the tensor, defined by lower bounds and sizes for each dimension.

    Declaration

    @differentiable
    func slice(lowerBounds: Tensor<Int32>, sizes: Tensor<Int32>) -> Tensor
  • Returns a tensor of Boolean scalars by computing lhs < rhs element-wise.

    Declaration

    static func .< (lhs: Tensor, rhs: Tensor) -> Tensor<Bool>
  • Returns a tensor of Boolean scalars by computing lhs <= rhs element-wise.

    Declaration

    static func .<= (lhs: Tensor, rhs: Tensor) -> Tensor<Bool>
  • Returns a tensor of Boolean scalars by computing lhs > rhs element-wise.

    Declaration

    static func .> (lhs: Tensor, rhs: Tensor) -> Tensor<Bool>
  • Returns a tensor of Boolean scalars by computing lhs >= rhs element-wise.

    Declaration

    static func .>= (lhs: Tensor, rhs: Tensor) -> Tensor<Bool>
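
    For example (illustrative sketch; assumes import TensorFlow):

    let a = Tensor<Float>([1, 2, 3])
    let b = Tensor<Float>([3, 2, 1])
    a .< b             // is [true, false, false]
    a .>= 2            // is [false, true, true]
    (a .<= b).all()    // is false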
  • Returns a tensor of Boolean scalars by computing lhs < rhs element-wise.

    Note

    .< supports broadcasting.

    Declaration

    static func .< (lhs: Scalar, rhs: Tensor) -> Tensor<Bool>
  • Returns a tensor of Boolean scalars by computing lhs <= rhs element-wise.

    Note

    .<= supports broadcasting.

    Declaration

    static func .<= (lhs: Scalar, rhs: Tensor) -> Tensor<Bool>
  • Returns a tensor of Boolean scalars by computing lhs > rhs element-wise.

    Note

    .> supports broadcasting.

    Declaration

    static func .> (lhs: Scalar, rhs: Tensor) -> Tensor<Bool>
  • Returns a tensor of Boolean scalars by computing lhs >= rhs element-wise.

    Note

    .>= supports broadcasting.

    Declaration

    static func .>= (lhs: Scalar, rhs: Tensor) -> Tensor<Bool>
  • Returns a tensor of Boolean scalars by computing lhs < rhs element-wise.

    Note

    .< supports broadcasting.

    Declaration

    static func .< (lhs: Tensor, rhs: Scalar) -> Tensor<Bool>
  • Returns a tensor of Boolean scalars by computing lhs <= rhs element-wise.

    Note

    .<= supports broadcasting.

    Declaration

    static func .<= (lhs: Tensor, rhs: Scalar) -> Tensor<Bool>
  • Returns a tensor of Boolean scalars by computing lhs > rhs element-wise.

    Note

    .> supports broadcasting.

    Declaration

    static func .> (lhs: Tensor, rhs: Scalar) -> Tensor<Bool>
  • Returns a tensor of Boolean scalars by computing lhs >= rhs element-wise.

    Note

    .>= supports broadcasting.

    Declaration

    static func .>= (lhs: Tensor, rhs: Scalar) -> Tensor<Bool>
  • Returns a tensor of Boolean scalars by computing lhs == rhs element-wise.

    Note

    .== supports broadcasting.

    Declaration

    static func .== (lhs: Tensor, rhs: Tensor) -> Tensor<Bool>
  • Returns a tensor of Boolean scalars by computing lhs != rhs element-wise.

    Note

    .!= supports broadcasting.

    Declaration

    static func .!= (lhs: Tensor, rhs: Tensor) -> Tensor<Bool>
  • Returns a tensor of Boolean scalars by computing lhs == rhs element-wise.

    Note

    .== supports broadcasting.

    Declaration

    static func .== (lhs: Scalar, rhs: Tensor) -> Tensor<Bool>
  • Returns a tensor of Boolean scalars by computing lhs != rhs element-wise.

    Note

    .!= supports broadcasting.

    Declaration

    static func .!= (lhs: Scalar, rhs: Tensor) -> Tensor<Bool>
  • Returns a tensor of Boolean scalars by computing lhs == rhs element-wise.

    Note

    .== supports broadcasting.

    Declaration

    static func .== (lhs: Tensor, rhs: Scalar) -> Tensor<Bool>
  • Returns a tensor of Boolean scalars by computing lhs != rhs element-wise.

    Note

    .!= supports broadcasting.

    Declaration

    static func .!= (lhs: Tensor, rhs: Scalar) -> Tensor<Bool>
  • Returns a tensor of Boolean values indicating whether the elements of self are approximately equal to those of other.

    Precondition

    self and other must be of the same shape.

    Declaration

    func elementsAlmostEqual(
        _ other: Tensor,
        tolerance: Scalar = Scalar.ulpOfOne.squareRoot()
    ) -> Tensor<Bool>
  • Returns true if all elements of self are approximately equal to those of other.

    Precondition

    self and other must be of the same shape.

    Declaration

    func isAlmostEqual(
        to other: Tensor,
        tolerance: Scalar = Scalar.ulpOfOne.squareRoot()
    ) -> Bool
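
    For example (illustrative sketch; assumes import TensorFlow):

    let x = Tensor<Float>([1, 2, 3])
    let y = x + 1e-7
    x.elementsAlmostEqual(y)   // is [true, true, true]
    x.isAlmostEqual(to: y)     // is true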
  • Computes dropout given a probability.

    Declaration

    @differentiable
    func droppingOut(probability: Double) -> Tensor
  • Creates a tensor with the specified shape and a single, repeated scalar value.

    Declaration

    @available(*, deprecated, renamed: "init(repeating:shape:)")
    init(shape: TensorShape, repeating repeatedValue: Scalar)

    Parameters

    shape

    The dimensions of the tensor.

    repeatedValue

    The scalar value to repeat.

  • Creates a tensor with the specified shape and a single, repeated scalar value.

    Declaration

    @differentiable
    init(repeating repeatedValue: Scalar, shape: TensorShape)

    Parameters

    repeatedValue

    The scalar value to repeat.

    shape

    The dimensions of the tensor.

  • Creates a tensor by broadcasting the given scalar to a given rank with all dimensions being 1.

    Declaration

    init(broadcasting scalar: Scalar, rank: Int)
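
    For example (illustrative sketch; assumes import TensorFlow):

    Tensor<Float>(repeating: 7, shape: [2, 3])   // is [[7, 7, 7], [7, 7, 7]]
    Tensor<Float>(broadcasting: 1, rank: 3)      // has shape [1, 1, 1]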
  • Perform an element-wise type conversion from a Bool tensor.

    Declaration

    init(_ other: Tensor<Bool>)
  • Perform an element-wise conversion from another Tensor.

    Declaration

    @differentiable
    init<OtherScalar>(_ other: Tensor<OtherScalar>) where OtherScalar : Numeric, OtherScalar : TensorFlowScalar
  • Creates a tensor from an array of tensors (which may themselves be scalars).

    Declaration

    @differentiable
    init(_ elements: [Tensor])
  • Stacks tensors, along the axis dimension, into a new tensor with rank one higher than each tensor in tensors.

    Given that tensors all have shape [A, B, C], and tensors.count = N, then:

    • if axis == 0 then the resulting tensor will have the shape [N, A, B, C].
    • if axis == 1 then the resulting tensor will have the shape [A, N, B, C].
    • etc.

    For example:

    // 'x' is [1, 4]
    // 'y' is [2, 5]
    // 'z' is [3, 6]
    Tensor(stacking: [x, y, z]) // is [[1, 4], [2, 5], [3, 6]]
    Tensor(stacking: [x, y, z], alongAxis: 1) // is [[1, 2, 3], [4, 5, 6]]
    

    This is the opposite of Tensor.unstacked(alongAxis:).

    Precondition

    All tensors must have the same shape.

    Precondition

    axis must be in the range [-rank, rank), where rank is the rank of the provided tensors.

    Declaration

    @differentiable
    init(stacking tensors: [Tensor], alongAxis axis: Int = 0)

    Parameters

    tensors

    Tensors to stack.

    axis

    Dimension along which to stack. Negative values wrap around.

    Return Value

    The stacked tensor.

  • Concatenates tensors along the axis dimension.

    Given that tensors[i].shape = [D0, D1, ... Daxis(i), ...Dn], then the concatenated result has shape [D0, D1, ... Raxis, ...Dn], where Raxis = sum(Daxis(i)). That is, the data from the input tensors is joined along the axis dimension.

    For example:

    // t1 is [[1, 2, 3], [4, 5, 6]]
    // t2 is [[7, 8, 9], [10, 11, 12]]
    Tensor(concatenating: [t1, t2]) // is [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
    Tensor(concatenating: [t1, t2], alongAxis: 1) // is [[1, 2, 3, 7, 8, 9], [4, 5, 6, 10, 11, 12]]
    
    // t3 has shape [2, 3]
    // t4 has shape [2, 3]
    Tensor(concatenating: [t3, t4]) // has shape [4, 3]
    Tensor(concatenating: [t3, t4], alongAxis: 1) // has shape [2, 6]
    

    Note

    If you are concatenating along a new axis consider using Tensor.init(stacking:alongAxis:).

    Precondition

    All tensors must have the same rank and all dimensions except axis must be equal.

    Precondition

    axis must be in the range [-rank, rank), where rank is the rank of the provided tensors.

    Declaration

    @differentiable
    init(concatenating tensors: [Tensor], alongAxis axis: Int = 0)

    Parameters

    tensors

    Tensors to concatenate.

    axis

    Dimension along which to concatenate. Negative values wrap around.

    Return Value

    The concatenated tensor.

  • Creates a tensor with all scalars set to zero.

    Declaration

    init(zeros shape: TensorShape)

    Parameters

    shape

    Shape of the tensor.

  • Creates a tensor with all scalars set to one.

    Declaration

    init(ones shape: TensorShape)

    Parameters

    shape

    Shape of the tensor.

  • Creates a tensor with all scalars set to zero that has the same shape and type as the provided tensor.

    Declaration

    init(zerosLike other: Tensor)

    Parameters

    other

    Tensor whose shape and data type to use.

  • Creates a tensor with all scalars set to one that has the same shape and type as the provided tensor.

    Declaration

    init(onesLike other: Tensor)

    Parameters

    other

    Tensor whose shape and data type to use.
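
    For example (illustrative sketch; assumes import TensorFlow):

    let z = Tensor<Float>(zeros: [2, 2])   // is [[0, 0], [0, 0]]
    let o = Tensor<Float>(ones: [2, 2])    // is [[1, 1], [1, 1]]
    Tensor(zerosLike: o)                   // is [[0, 0], [0, 0]]
    Tensor(onesLike: z)                    // is [[1, 1], [1, 1]]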

  • Creates a 1-D tensor representing a sequence from a starting value to, but not including, an end value, stepping by the specified amount.

    Declaration

    init(rangeFrom start: Scalar, to end: Scalar, stride: Scalar)

    Parameters

    start

    The starting value to use for the sequence. If the sequence contains any values, the first one is start.

    end

    An end value to limit the sequence. end is never an element of the resulting sequence.

    stride

    The amount to step by with each iteration. stride must be positive.

  • Creates a 1-D tensor representing a sequence from a starting value to, but not including, an end value, stepping by the specified amount.

    Declaration

    init(rangeFrom start: Tensor<Scalar>, to end: Tensor<Scalar>, stride: Tensor<Scalar>)

    Parameters

    start

    The starting value to use for the sequence. If the sequence contains any values, the first one is start.

    end

    An end value to limit the sequence. end is never an element of the resulting sequence.

    stride

    The amount to step by with each iteration. stride must be positive.
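
    For example (illustrative sketch; assumes import TensorFlow):

    Tensor<Float>(rangeFrom: 0, to: 10, stride: 2)     // is [0, 2, 4, 6, 8]
    Tensor<Float>(rangeFrom: 1, to: 2, stride: 0.25)   // is [1.0, 1.25, 1.5, 1.75]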

  • Creates a one-hot tensor at given indices. The locations represented by indices take value onValue (1 by default), while all other locations take value offValue (0 by default). If the input indices is rank n, the new tensor will have rank n+1. The new axis is created at dimension axis (by default, the new axis is appended at the end).

    If indices is a scalar, the new tensor’s shape will be a vector of length depth.

    If indices is a vector of length features, the output shape will be:

    • features x depth, if axis == -1
    • depth x features, if axis == 0

    If indices is a matrix (batch) with shape [batch, features], the output shape will be:

    • batch x features x depth, if axis == -1
    • batch x depth x features, if axis == 1
    • depth x batch x features, if axis == 0

    Declaration

    init(
        oneHotAtIndices indices: Tensor<Int32>,
        depth: Int,
        onValue: Scalar = 1,
        offValue: Scalar = 0,
        axis: Int = -1
    )

    Parameters

    indices

    A Tensor of indices.

    depth

    A scalar defining the depth of the one hot dimension.

    onValue

    A scalar defining the value at the location referred to by some index in indices.

    offValue

    A scalar defining the value at a location that is not referred to by any index in indices.

    axis

    The axis to fill. The default is -1, a new inner-most axis.
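
    For example (illustrative sketch; assumes import TensorFlow):

    let labels = Tensor<Int32>([0, 2, 1])
    Tensor<Float>(oneHotAtIndices: labels, depth: 3)
    // is [[1, 0, 0],
    //     [0, 0, 1],
    //     [0, 1, 0]]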

  • Creates a 1-D tensor representing a sequence from a starting value, up to and including an end value, spaced evenly to generate the number of values specified.

    Declaration

    init(linearSpaceFrom start: Scalar, to end: Scalar, count: Int)

    Parameters

    start

    The starting value to use for the sequence. If the sequence contains any values, the first one is start.

    end

    An end value to limit the sequence. end is the last element of the resulting sequence.

    count

    The number of values in the resulting sequence. count must be positive.

  • Creates a 1-D tensor representing a sequence from a starting value, up to and including an end value, spaced evenly to generate the number of values specified.

    Precondition

    start, to, and count must be Tensors containing a single Scalar value.

    Declaration

    init(linearSpaceFrom start: Tensor<Scalar>, to end: Tensor<Scalar>, count: Tensor<Int32>)

    Parameters

    start

    The starting value to use for the sequence. If the sequence contains any values, the first one is start.

    end

    An end value to limit the sequence. end is the last element of the resulting sequence.

    count

    The number of values in the resulting sequence. count must be positive.
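
    For example (illustrative sketch; assumes import TensorFlow):

    Tensor<Float>(linearSpaceFrom: 0, to: 1, count: 5)   // is [0.0, 0.25, 0.5, 0.75, 1.0]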

  • Creates a tensor with the specified shape, randomly sampling scalar values from a uniform distribution between lowerBound and upperBound.

    Declaration

    init(
        randomUniform shape: TensorShape,
        lowerBound: Tensor<Scalar> = Tensor<Scalar>(0),
        upperBound: Tensor<Scalar> = Tensor<Scalar>(1),
        seed: TensorFlowSeed = Context.local.randomSeed
    )

    Parameters

    shape

    The dimensions of the tensor.

    lowerBound

    The lower bound of the distribution.

    upperBound

    The upper bound of the distribution.

    seed

    The seed value.

  • Creates a tensor with the specified shape, randomly sampling scalar values from a normal distribution.

    Declaration

    init(
        randomNormal shape: TensorShape,
        mean: Tensor<Scalar> = Tensor<Scalar>(0),
        standardDeviation: Tensor<Scalar> = Tensor<Scalar>(1),
        seed: TensorFlowSeed = Context.local.randomSeed
    )

    Parameters

    shape

    The dimensions of the tensor.

    mean

    The mean of the distribution.

    standardDeviation

    The standard deviation of the distribution.

    seed

    The seed value.
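
    For example (illustrative sketch; assumes import TensorFlow):

    let uniform = Tensor<Float>(randomUniform: [2, 3])   // values between the default bounds 0 and 1
    let normal = Tensor<Float>(
        randomNormal: [2, 3],
        mean: Tensor<Float>(10),
        standardDeviation: Tensor<Float>(2))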

  • Creates a tensor by performing Glorot uniform initialization for the specified shape, randomly sampling scalar values from a uniform distribution between -limit and limit, generated by the default random number generator, where limit is sqrt(6 / (fanIn + fanOut)) and fanIn/fanOut represent the number of input and output features multiplied by the receptive field if present.

    Declaration

    init(glorotUniform shape: TensorShape, seed: TensorFlowSeed = Context.local.randomSeed)

    Parameters

    shape

    The dimensions of the tensor.

  • Creates a tensor by performing Glorot normal initialization for the specified shape, randomly sampling scalar values from a truncated normal distribution centered on zero with standard deviation sqrt(2 / (fanIn + fanOut)), generated by the default random number generator, where fanIn/fanOut represent the number of input and output features multiplied by the receptive field if present.

    Declaration

    init(glorotNormal shape: TensorShape, seed: TensorFlowSeed = Context.local.randomSeed)

    Parameters

    shape

    The dimensions of the tensor.
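
    For example, a sketch for a weight matrix with 64 inputs and 32 outputs (assumes import TensorFlow). Here limit = sqrt(6 / (64 + 32)) = 0.25, so Glorot uniform initialization draws values between -0.25 and 0.25:

    let w = Tensor<Float>(glorotUniform: [64, 32])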

  • Creates an orthogonal matrix or tensor.

    If the shape of the tensor to initialize is two-dimensional, it is initialized with an orthogonal matrix obtained from the QR decomposition of a matrix of random numbers drawn from a normal distribution. If the matrix has fewer rows than columns then the output will have orthogonal rows. Otherwise, the output will have orthogonal columns.

    If the shape of the tensor to initialize is more than two-dimensional, a matrix of shape [shape[0] * ... * shape[rank - 2], shape[rank - 1]] is initialized. The matrix is subsequently reshaped to give a tensor of the desired shape.

    Declaration

    init(
        orthogonal shape: TensorShape,
        gain: Tensor<Scalar> = Tensor<Scalar>(1),
        seed: TensorFlowSeed = Context.local.randomSeed
    )

    Parameters

    shape

    The shape of the tensor.

    gain

    A multiplicative factor to apply to the orthogonal tensor.

    seed

    A tuple of two integers to seed the random number generator.

  • The square root of x.

    For real types, if x is negative the result is .nan. For complex types there is a branch cut on the negative real axis.

    Declaration

    @differentiable
    public static func sqrt(_ x: `Self`) -> Tensor<Scalar>
  • The cosine of x, interpreted as an angle in radians.

    Declaration

    @differentiable
    public static func cos(_ x: `Self`) -> Tensor<Scalar>
  • The sine of x, interpreted as an angle in radians.

    Declaration

    @differentiable
    public static func sin(_ x: `Self`) -> Tensor<Scalar>
  • The tangent of x, interpreted as an angle in radians.

    Declaration

    @differentiable
    public static func tan(_ x: `Self`) -> Tensor<Scalar>
  • The inverse cosine of x in radians.

    Declaration

    @differentiable
    public static func acos(_ x: `Self`) -> Tensor<Scalar>
  • The inverse sine of x in radians.

    Declaration

    @differentiable
    public static func asin(_ x: `Self`) -> Tensor<Scalar>
  • The inverse tangent of x in radians.

    Declaration

    @differentiable
    public static func atan(_ x: `Self`) -> Tensor<Scalar>
  • The hyperbolic cosine of x.

    Declaration

    @differentiable
    public static func cosh(_ x: `Self`) -> Tensor<Scalar>
  • The hyperbolic sine of x.

    Declaration

    @differentiable
    public static func sinh(_ x: `Self`) -> Tensor<Scalar>
  • The hyperbolic tangent of x.

    Declaration

    @differentiable
    public static func tanh(_ x: `Self`) -> Tensor<Scalar>
  • The inverse hyperbolic cosine of x.

    Declaration

    @differentiable
    public static func acosh(_ x: `Self`) -> Tensor<Scalar>
  • The inverse hyperbolic sine of x.

    Declaration

    @differentiable
    public static func asinh(_ x: `Self`) -> Tensor<Scalar>
  • The inverse hyperbolic tangent of x.

    Declaration

    @differentiable
    public static func atanh(_ x: `Self`) -> Tensor<Scalar>
  • The exponential function applied to x, or e**x.

    Declaration

    @differentiable
    public static func exp(_ x: `Self`) -> Tensor<Scalar>
  • Two raised to the power x.

    Declaration

    @differentiable
    public static func exp2(_ x: `Self`) -> Tensor<Scalar>
  • Ten raised to the power x.

    Declaration

    @differentiable
    public static func exp10(_ x: `Self`) -> Tensor<Scalar>
  • exp(x) - 1 evaluated so as to preserve accuracy close to zero.

    Declaration

    @differentiable
    public static func expm1(_ x: `Self`) -> Tensor<Scalar>
  • The natural logarithm of x.

    Declaration

    @differentiable
    public static func log(_ x: `Self`) -> Tensor<Scalar>
  • The base-two logarithm of x.

    Declaration

    @differentiable
    public static func log2(_ x: `Self`) -> Tensor<Scalar>
  • The base-ten logarithm of x.

    Declaration

    @differentiable
    public static func log10(_ x: `Self`) -> Tensor<Scalar>
  • log(1 + x) evaluated so as to preserve accuracy close to zero.

    Declaration

    @differentiable
    public static func log1p(_ x: `Self`) -> Tensor<Scalar>
  • exp(y log(x)) computed without loss of intermediate precision.

    For real types, if x is negative the result is NaN, even if y has an integral value. For complex types, there is a branch cut on the negative real axis.

    Declaration

    @differentiable
    public static func pow(_ x: `Self`, _ y: `Self`) -> Tensor<Scalar>
  • x raised to the nth power.

    The product of n copies of x.

    Declaration

    @differentiable
    public static func pow(_ x: `Self`, _ n: Int) -> Tensor<Scalar>
  • The nth root of x.

    For real types, if x is negative and n is even, the result is NaN. For complex types, there is a branch cut along the negative real axis.

    Declaration

    @differentiable
    public static func root(_ x: `Self`, _ n: Int) -> Tensor<Scalar>
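
    For example (illustrative sketch; assumes import TensorFlow):

    let x = Tensor<Float>([1, 4, 9])
    Tensor<Float>.sqrt(x)                  // is [1.0, 2.0, 3.0]
    Tensor<Float>.pow(x, 2)                // is [1.0, 16.0, 81.0]
    Tensor<Float>.log(Tensor<Float>([1]))  // is [0.0]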
  • Declaration

    public typealias VectorSpaceScalar = Float
  • Declaration

    public func scaled(by scale: Float) -> Tensor<Scalar>
  • Declaration

    public func adding(_ scalar: Float) -> Tensor<Scalar>
  • Declaration

    public func subtracting(_ scalar: Float) -> Tensor<Scalar>
  • Adds the scalar to every scalar of the tensor and produces the sum.

    Declaration

    @differentiable
    static func + (lhs: Scalar, rhs: Tensor) -> Tensor
  • Adds the scalar to every scalar of the tensor and produces the sum.

    Declaration

    @differentiable
    static func + (lhs: Tensor, rhs: Scalar) -> Tensor
  • Subtracts every scalar of the tensor from the scalar and produces the difference.

    Declaration

    @differentiable
    static func - (lhs: Scalar, rhs: Tensor) -> Tensor
  • Subtracts the scalar from every scalar of the tensor and produces the difference.

    Declaration

    @differentiable
    static func - (lhs: Tensor, rhs: Scalar) -> Tensor
  • Adds two tensors and stores the result in the left-hand-side variable.

    Note

    += supports broadcasting.

    Declaration

    static func += (lhs: inout Tensor, rhs: Tensor)
  • Adds the scalar to every scalar of the tensor and stores the result in the left-hand-side variable.

    Declaration

    static func += (lhs: inout Tensor, rhs: Scalar)
  • Subtracts the second tensor from the first and stores the result in the left-hand-side variable.

    Note

    -= supports broadcasting.

    Declaration

    static func -= (lhs: inout Tensor, rhs: Tensor)
  • Subtracts the scalar from every scalar of the tensor and stores the result in the left-hand-side variable.

    Declaration

    static func -= (lhs: inout Tensor, rhs: Scalar)
  • Returns the tensor produced by multiplying the two tensors.

    Note

    * supports broadcasting.

    Declaration

    @differentiable
    static func * (lhs: Tensor, rhs: Tensor) -> Tensor
  • Multiplies the scalar with every scalar of the tensor and produces the product.

    Declaration

    @differentiable
    static func * (lhs: Scalar, rhs: Tensor) -> Tensor
  • Multiplies the scalar with every scalar of the tensor and produces the product.

    Declaration

    @differentiable
    static func * (lhs: Tensor, rhs: Scalar) -> Tensor
  • Multiplies two tensors and stores the result in the left-hand-side variable.

    Note

    *= supports broadcasting.

    Declaration

    static func *= (lhs: inout Tensor, rhs: Tensor)
  • Multiplies the tensor with the scalar, broadcasting the scalar, and stores the result in the left-hand-side variable.

    Declaration

    static func *= (lhs: inout Tensor, rhs: Scalar)
  • Returns the quotient of dividing the first tensor by the second.

    Note

    / supports broadcasting.

    Declaration

    @differentiable
    static func / (lhs: Tensor, rhs: Tensor) -> Tensor
  • Returns the quotient of dividing the scalar by the tensor, broadcasting the scalar.

    Declaration

    @differentiable
    static func / (lhs: Scalar, rhs: Tensor) -> Tensor
  • Returns the quotient of dividing the tensor by the scalar, broadcasting the scalar.

    Declaration

    @differentiable
    static func / (lhs: Tensor, rhs: Scalar) -> Tensor
  • Divides the first tensor by the second and stores the quotient in the left-hand-side variable.

    Declaration

    static func /= (lhs: inout Tensor, rhs: Tensor)
  • Divides the tensor by the scalar, broadcasting the scalar, and stores the quotient in the left-hand-side variable.

    Declaration

    static func /= (lhs: inout Tensor, rhs: Scalar)
  • Returns the remainder of dividing the first tensor by the second.

    Note

    % supports broadcasting.

    Declaration

    static func % (lhs: Tensor, rhs: Tensor) -> Tensor
  • Returns the remainder of dividing the tensor by the scalar, broadcasting the scalar.

    Declaration

    static func % (lhs: Tensor, rhs: Scalar) -> Tensor
  • Returns the remainder of dividing the scalar by the tensor, broadcasting the scalar.

    Declaration

    static func % (lhs: Scalar, rhs: Tensor) -> Tensor
  • Divides the first tensor by the second and stores the remainder in the left-hand-side variable.

    Declaration

    static func %= (lhs: inout Tensor, rhs: Tensor)
  • Divides the tensor by the scalar and stores the remainder in the left-hand-side variable.

    Declaration

    static func %= (lhs: inout Tensor, rhs: Scalar)
  • Returns !self element-wise.

    Declaration

    func elementsLogicalNot() -> Tensor
  • Returns self && other element-wise.

    Note

    && supports broadcasting.

    Declaration

    func elementsLogicalAnd(_ other: Tensor) -> Tensor
  • Returns self && other element-wise, broadcasting other.

    Declaration

    func elementsLogicalAnd(_ other: Scalar) -> Tensor
  • Returns self || other element-wise.

    Declaration

    func elementsLogicalOr(_ other: Tensor) -> Tensor
  • Returns self || other element-wise, broadcasting other.

    Declaration

    func elementsLogicalOr(_ other: Scalar) -> Tensor
  • Returns max(min(self, max), min).

    Declaration

    @differentiable
    func clipped(min: Tensor, max: Tensor) -> Tensor
  • Returns max(min(self, max), min).

    Declaration

    @differentiable
    func clipped(min: Tensor, max: Scalar) -> Tensor
  • Returns max(min(self, max), min).

    Declaration

    @differentiable
    func clipped(min: Scalar, max: Tensor) -> Tensor
  • Returns max(min(self, max), min).

    Declaration

    @differentiable
    func clipped(min: Scalar, max: Scalar) -> Tensor
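
    For example (illustrative sketch; assumes import TensorFlow):

    let x = Tensor<Float>([-2, -0.5, 0.5, 2])
    x.clipped(min: -1, max: 1)                            // is [-1.0, -0.5, 0.5, 1.0]
    x.clipped(min: Tensor<Float>([0, 0, 0, 0]), max: 1)   // is [0.0, 0.0, 0.5, 1.0]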
  • Returns the negation of the specified tensor element-wise.

    Declaration

    @differentiable
    prefix static func - (rhs: Tensor) -> Tensor
  • Returns the element-wise square of this tensor.

    Declaration

    @differentiable
    func squared() -> Tensor
  • Returns a boolean tensor indicating which elements of x are finite.

    Declaration

    var isFinite: Tensor<Bool> { get }
  • Returns a boolean tensor indicating which elements of x are infinite.

    Declaration

    var isInfinite: Tensor<Bool> { get }
  • Returns a boolean tensor indicating which elements of x are NaN-valued.

    Declaration

    var isNaN: Tensor<Bool> { get }
  • Replaces elements of this tensor with other in the lanes where mask is true.

    Precondition

    self and other must have the same shape. If self and other are scalar, then mask must also be scalar. If self and other have rank greater than or equal to 1, then mask must either have the same shape as self or be a 1-D Tensor such that mask.scalarCount == self.shape[0].

    Declaration

    @differentiable
    func replacing(with other: Tensor, where mask: Tensor<Bool>) -> Tensor
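
    For example (illustrative sketch; assumes import TensorFlow):

    let x = Tensor<Float>([1, 2, 3, 4])
    let mask = Tensor<Bool>([true, false, true, false])
    x.replacing(with: Tensor<Float>(zeros: [4]), where: mask)   // is [0, 2, 0, 4]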
  • Returns true if all scalars are equal to true. Otherwise, returns false.

    Declaration

    func all() -> Bool
  • Returns true if any scalars are equal to true. Otherwise, returns false.

    Declaration

    func any() -> Bool
  • Performs a logical AND operation along the specified axes. The reduced dimensions are removed.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    func all(squeezingAxes axes: Int...) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Performs a logical OR operation along the specified axes. The reduced dimensions are removed.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    func any(squeezingAxes axes: Int...) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Performs a logical AND operation along the specified axes. The reduced dimensions are retained with value 1.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    func all(alongAxes axes: Int...) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Performs a logical OR operation along the specified axes. The reduced dimensions are retained with value 1.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    func any(alongAxes axes: Int...) -> Tensor

    Parameters

    axes

    The dimensions to reduce.
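
    For example (illustrative sketch; assumes import TensorFlow):

    let flags = Tensor<Bool>([[true, false], [true, true]])
    flags.all()                     // is false
    flags.any()                     // is true
    flags.all(squeezingAxes: 1)     // is [false, true]
    flags.any(alongAxes: 0)         // is [[true, true]], shape [1, 2]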

  • Returns the minimum value of all scalars in this tensor.

    Declaration

    @differentiable
    func min() -> Tensor
  • Returns the maximum value of all scalars in this tensor.

    Declaration

    @differentiable
    func max() -> Tensor
  • Returns the maximum values along the specified axes. The reduced dimensions are removed.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func max(squeezingAxes axes: Tensor<Int32>) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the maximum values along the specified axes. The reduced dimensions are removed.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func max(squeezingAxes axes: [Int]) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the maximum values along the specified axes. The reduced dimensions are removed.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func max(squeezingAxes axes: Int...) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the minimum values along the specified axes. The reduced dimensions are removed.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func min(squeezingAxes axes: Tensor<Int32>) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the minimum values along the specified axes. The reduced dimensions are removed.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func min(squeezingAxes axes: [Int]) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the minimum values along the specified axes. The reduced dimensions are removed.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func min(squeezingAxes axes: Int...) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the indices of the maximum values along the specified axis. The reduced dimension is removed.

    Precondition

    axis must be in the range -rank..<rank.

    Declaration

    func argmax(squeezingAxis axis: Int) -> Tensor<Int32>

    Parameters

    axis

    The dimension to reduce.

  • Returns the indices of the minimum values along the specified axis. The reduced dimension is removed.

    Precondition

    axis must be in the range -rank..<rank.

    Declaration

    func argmin(squeezingAxis axis: Int) -> Tensor<Int32>

    Parameters

    axis

    The dimension to reduce.

  • Returns the minimum along the specified axes. The reduced dimensions are retained with value 1.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func min(alongAxes axes: Tensor<Int32>) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the minimum along the specified axes. The reduced dimensions are retained with value 1.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func min(alongAxes axes: [Int]) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the minimum along the specified axes. The reduced dimensions are retained with value 1.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func min(alongAxes axes: Int...) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the maximum along the specified axes. The reduced dimensions are retained with value 1.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func max(alongAxes axes: Tensor<Int32>) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the maximum along the specified axes. The reduced dimensions are retained with value 1.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func max(alongAxes axes: [Int]) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the maximum along the specified axes. The reduced dimensions are retained with value 1.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func max(alongAxes axes: Int...) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the index of the maximum value of the flattened scalars.

    Declaration

    func argmax() -> Tensor<Int32>
  • Returns the index of the minimum value of the flattened scalars.

    Declaration

    func argmin() -> Tensor<Int32>
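
    For example (illustrative sketch; assumes import TensorFlow):

    let v = Tensor<Float>([3, 1, 4, 1, 5])
    v.argmax()                     // is 4
    v.argmin()                     // is 1
    let m = Tensor<Float>([[1, 5], [4, 2]])
    m.argmax(squeezingAxis: 1)     // is [1, 0]
    m.max(squeezingAxes: 1)        // is [5.0, 4.0]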
  • Returns the sum along the specified axes. The reduced dimensions are removed.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func sum(squeezingAxes axes: Tensor<Int32>) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the sum along the specified axes. The reduced dimensions are removed.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func sum(squeezingAxes axes: [Int]) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the sum along the specified axes. The reduced dimensions are removed.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func sum(squeezingAxes axes: Int...) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the sum of all scalars in this tensor.

    Declaration

    @differentiable
    func sum() -> Tensor
  • Returns the sum along the specified axes. The reduced dimensions are retained with value 1.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func sum(alongAxes axes: Tensor<Int32>) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the sum along the specified axes. The reduced dimensions are retained with value 1.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func sum(alongAxes axes: [Int]) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the sum along the specified axes. The reduced dimensions are retained with value 1.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func sum(alongAxes axes: Int...) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the product along the specified axes. The reduced dimensions are removed.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    func product(squeezingAxes axes: Tensor<Int32>) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the product along the specified axes. The reduced dimensions are removed.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    func product(squeezingAxes axes: [Int]) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the product along the specified axes. The reduced dimensions are removed.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    func product(squeezingAxes axes: Int...) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the product of all scalars in this tensor.

    Declaration

    func product() -> Tensor
  • Returns the product along the specified axes. The reduced dimensions are retained with value 1.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    func product(alongAxes axes: Tensor<Int32>) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the product along the specified axes. The reduced dimensions are retained with value 1.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    func product(alongAxes axes: [Int]) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the product along the specified axes. The reduced dimensions are retained with value 1.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    func product(alongAxes axes: Int...) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the arithmetic mean along the specified axes. The reduced dimensions are removed.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func mean(squeezingAxes axes: Tensor<Int32>) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the arithmetic mean along the specified axes. The reduced dimensions are removed.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func mean(squeezingAxes axes: [Int]) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the arithmetic mean along the specified axes. The reduced dimensions are removed.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func mean(squeezingAxes axes: Int...) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the arithmetic mean of all scalars in this tensor.

    Declaration

    @differentiable
    func mean() -> Tensor
  • Returns the arithmetic mean along the specified axes. The reduced dimensions are retained with value 1.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func mean(alongAxes axes: Tensor<Int32>) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the arithmetic mean along the specified axes. The reduced dimensions are retained with value 1.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func mean(alongAxes axes: [Int]) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the arithmetic mean along the specified axes. The reduced dimensions are retained with value 1.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func mean(alongAxes axes: Int...) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the variance along the specified axes. The reduced dimensions are removed. Does not apply Bessel’s correction.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func variance(squeezingAxes axes: Tensor<Int32>) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the variance along the specified axes. The reduced dimensions are removed. Does not apply Bessel’s correction.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func variance(squeezingAxes axes: [Int]) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the variance along the specified axes. The reduced dimensions are removed. Does not apply Bessel's correction.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func variance(squeezingAxes axes: Int...) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the variance of all scalars in this tensor. Does not apply Bessel's correction.

    Declaration

    @differentiable
    func variance() -> Tensor
  • Returns the variance along the specified axes. The reduced dimensions are retained with value 1. Does not apply Bessel’s correction.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func variance(alongAxes axes: Tensor<Int32>) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the variance along the specified axes. The reduced dimensions are retained with value 1. Does not apply Bessel’s correction.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func variance(alongAxes axes: [Int]) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the variance along the specified axes. The reduced dimensions are retained with value 1. Does not apply Bessel’s correction.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func variance(alongAxes axes: Int...) -> Tensor

    Parameters

    axes

    The dimensions to reduce.
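
    As an illustration of the lack of Bessel’s correction, the divisor is the element count rather than count - 1 (a minimal sketch):

    let x = Tensor<Float>([1, 2, 3, 4])
    x.variance()               // 1.25, the mean of squared deviations from 2.5
    x.variance(alongAxes: 0)   // shape [1]: [1.25]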

  • Returns the cumulative sum of this tensor along the specified axis. By default, this function performs an inclusive cumulative sum which means that the first element of the input is identical to the first element of the output:

    Tensor<Float>([a, b, c]).cumulativeSum() == Tensor<Float>([a, a + b, a + b + c])
    

    By setting the exclusive argument to true, an exclusive cumulative sum is performed instead:

    Tensor<Float>([a, b, c]).cumulativeSum(exclusive: true) == Tensor<Float>([0, a, a + b])
    

    By setting the reverse argument to true, the cumulative sum is performed in the opposite direction:

    Tensor<Float>([a, b, c]).cumulativeSum(reverse: true) ==
        Tensor<Float>([a + b + c, a + b, a])
    

    This is more efficient than separately reversing the resulting tensor.

    Precondition

    axis must be in the range -rank..<rank.

    Declaration

    @differentiable
    func cumulativeSum(
        alongAxis axis: Int,
        exclusive: Bool = false,
        reverse: Bool = false
    ) -> Tensor

    Parameters

    axis

    Axis along which to perform the cumulative sum operation.

    exclusive

    Indicates whether to perform an exclusive cumulative sum.

    reverse

    Indicates whether to perform the cumulative sum in reversed order.

    Return Value

    Result of the cumulative sum operation.
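
    With concrete values, a minimal sketch (expected results shown in comments):

    let x = Tensor<Float>([1, 2, 3, 4])
    x.cumulativeSum(alongAxis: 0)                    // [1, 3, 6, 10]
    x.cumulativeSum(alongAxis: 0, exclusive: true)   // [0, 1, 3, 6]
    x.cumulativeSum(alongAxis: 0, reverse: true)     // [10, 9, 7, 4]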

  • Returns the cumulative sum of this tensor along the specified axis. By default, this function performs an inclusive cumulative sum which means that the first element of the input is identical to the first element of the output:

    Tensor<Float>([a, b, c]).cumulativeSum() == Tensor<Float>([a, a + b, a + b + c])
    

    By setting the exclusive argument to true, an exclusive cumulative sum is performed instead:

    Tensor<Float>([a, b, c]).cumulativeSum(exclusive: true) == Tensor<Float>([0, a, a + b])
    

    By setting the reverse argument to true, the cumulative sum is performed in the opposite direction:

    Tensor<Float>([a, b, c]).cumulativeSum(reverse: true) ==
        Tensor<Float>([a + b + c, a + b, a])
    

    This is more efficient than separately reversing the resulting tensor.

    Precondition

    axis.rank must be 0.

    Precondition

    axis must be in the range -rank..<rank.

    Declaration

    @differentiable
    func cumulativeSum(
        alongAxis axis: Tensor<Int32>,
        exclusive: Bool = false,
        reverse: Bool = false
    ) -> Tensor

    Parameters

    axis

    Axis along which to perform the cumulative sum operation.

    exclusive

    Indicates whether to perform an exclusive cumulative sum.

    reverse

    Indicates whether to perform the cumulative sum in reversed order.

    Return Value

    Result of the cumulative sum operation.

  • Returns the cumulative product of this tensor along the specified axis. By default, this function performs an inclusive cumulative product which means that the first element of the input is identical to the first element of the output:

    Tensor<Float>([a, b, c]).cumulativeProduct() == Tensor<Float>([a, a * b, a * b * c])
    

    By setting the exclusive argument to true, an exclusive cumulative product is performed instead:

    Tensor<Float>([a, b, c]).cumulativeProduct(exclusive: true) == Tensor<Float>([1, a, a * b])
    

    By setting the reverse argument to true, the cumulative product is performed in the opposite direction:

    Tensor<Float>([a, b, c]).cumulativeProduct(reverse: true) ==
        Tensor<Float>([a * b * c, a * b, a])
    

    This is more efficient than separately reversing the resulting tensor.

    Precondition

    axis must be in the range -rank..<rank.

    Declaration

    @differentiable
    func cumulativeProduct(
        alongAxis axis: Int,
        exclusive: Bool = false,
        reverse: Bool = false
    ) -> Tensor

    Parameters

    axis

    Axis along which to perform the cumulative product operation.

    exclusive

    Indicates whether to perform an exclusive cumulative product.

    reverse

    Indicates whether to perform the cumulative product in reversed order.

    Return Value

    Result of the cumulative product operation.
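
    With concrete values, a minimal sketch (expected results shown in comments):

    let x = Tensor<Float>([1, 2, 3, 4])
    x.cumulativeProduct(alongAxis: 0)                    // [1, 2, 6, 24]
    x.cumulativeProduct(alongAxis: 0, exclusive: true)   // [1, 1, 2, 6]
    x.cumulativeProduct(alongAxis: 0, reverse: true)     // [24, 24, 12, 4]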

  • Returns the cumulative product of this tensor along the specified axis. By default, this function performs an inclusive cumulative product which means that the first element of the input is identical to the first element of the output:

    Tensor<Float>([a, b, c]).cumulativeProduct() == Tensor<Float>([a, a * b, a * b * c])
    

    By setting the exclusive argument to true, an exclusive cumulative product is performed instead:

    Tensor<Float>([a, b, c]).cumulativeProduct(exclusive: true) == Tensor<Float>([1, a, a * b])
    

    By setting the reverse argument to true, the cumulative product is performed in the opposite direction:

    Tensor<Float>([a, b, c]).cumulativeProduct(reverse: true) ==
        Tensor<Float>([a * b * c, a * b, a])
    

    This is more efficient than separately reversing the resulting tensor.

    Precondition

    axis must be in the range -rank..<rank.

    Declaration

    @differentiable
    func cumulativeProduct(
        alongAxis axis: Tensor<Int32>,
        exclusive: Bool = false,
        reverse: Bool = false
    ) -> Tensor

    Parameters

    axis

    Axis along which to perform the cumulative product operation.

    exclusive

    Indicates whether to perform an exclusive cumulative product.

    reverse

    Indicates whether to perform the cumulative product in reversed order.

    Return Value

    Result of the cumulative product operation.

  • Returns the standard deviation of the elements along the specified axes. The reduced dimensions are removed. Does not apply Bessel’s correction.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func standardDeviation(squeezingAxes axes: Tensor<Int32>) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the standard deviation of the elements along the specified axes. The reduced dimensions are removed. Does not apply Bessel’s correction.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func standardDeviation(squeezingAxes axes: [Int]) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the standard deviation of the elements along the specified axes. The reduced dimensions are removed. Does not apply Bessel’s correction.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func standardDeviation(squeezingAxes axes: Int...) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the standard deviation of all elements in this tensor. Does not apply Bessel’s correction.

    Declaration

    @differentiable
    func standardDeviation() -> Tensor
  • Returns the standard deviation of the elements along the specified axes. The reduced dimensions are retained with value 1. Does not apply Bessel’s correction.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func standardDeviation(alongAxes axes: Tensor<Int32>) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the standard deviation of the elements along the specified axes. The reduced dimensions are retained with value 1. Does not apply Bessel’s correction.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func standardDeviation(alongAxes axes: [Int]) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns the standard deviation of the elements along the specified axes. The reduced dimensions are retained with value 1. Does not apply Bessel’s correction.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func standardDeviation(alongAxes axes: Int...) -> Tensor

    Parameters

    axes

    The dimensions to reduce.
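
    As a sketch, the standard deviation is the square root of the uncorrected variance:

    let x = Tensor<Float>([1, 2, 3, 4])
    x.standardDeviation()               // ≈ 1.118, i.e. sqrt(1.25)
    x.standardDeviation(alongAxes: 0)   // shape [1]: [≈ 1.118]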

  • Returns log(exp(self).sum(squeezingAxes: axes)). The reduced dimensions are removed.

    This function is more numerically stable than computing log(exp(self).sum(squeezingAxes: axes)) directly. It avoids overflows caused by computing the exp of large inputs and underflows caused by computing the log of small inputs.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func logSumExp(squeezingAxes axes: Tensor<Int32>) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns log(exp(self).sum(squeezingAxes: axes)). The reduced dimensions are removed.

    This function is more numerically stable than computing log(exp(self).sum(squeezingAxes: axes)) directly. It avoids overflows caused by computing the exp of large inputs and underflows caused by computing the log of small inputs.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func logSumExp(squeezingAxes axes: [Int]) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns log(exp(self).sum(squeezingAxes: axes)). The reduced dimensions are removed.

    This function is more numerically stable than computing log(exp(self).sum(squeezingAxes: axes)) directly. It avoids overflows caused by computing the exp of large inputs and underflows caused by computing the log of small inputs.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func logSumExp(squeezingAxes axes: Int...) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns log(exp(self).sum()). The result is a scalar.

    This function is more numerically stable than computing log(exp(self).sum()) directly. It avoids overflows caused by computing the exp of large inputs and underflows caused by computing the log of small inputs.

    Declaration

    @differentiable
    func logSumExp() -> Tensor
  • Returns log(exp(self).sum(alongAxes: axes)). The reduced dimensions are retained with value 1.

    This function is more numerically stable than computing log(exp(self).sum(alongAxes: axes)) directly. It avoids overflows caused by computing the exp of large inputs and underflows caused by computing the log of small inputs.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func logSumExp(alongAxes axes: Tensor<Int32>) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns log(exp(self).sum(alongAxes: axes)). The reduced dimensions are retained with value 1.

    This function is more numerically stable than computing log(exp(self).sum(alongAxes: axes)) directly. It avoids overflows caused by computing the exp of large inputs and underflows caused by computing the log of small inputs.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func logSumExp(alongAxes axes: [Int]) -> Tensor

    Parameters

    axes

    The dimensions to reduce.

  • Returns log(exp(self).sum(alongAxes: axes)). The reduced dimensions are retained with value 1.

    This function is more numerically stable than computing log(exp(self).sum(alongAxes: axes)) directly. It avoids overflows caused by computing the exp of large inputs and underflows caused by computing the log of small inputs.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func logSumExp(alongAxes axes: Int...) -> Tensor

    Parameters

    axes

    The dimensions to reduce.
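
    A sketch of why the fused form matters: exp of large inputs overflows Float, yet logSumExp stays finite (typically by factoring out the maximum before exponentiating):

    let x = Tensor<Float>([1000, 1001, 1002])
    log(exp(x).sum())   // inf: exp(1000) overflows Float
    x.logSumExp()       // ≈ 1002.41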

  • Returns the mean and variance of this tensor along the specified axes. The reduced dimensions are removed.

    Precondition

    axes must have rank 1.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func moments(squeezingAxes axes: Tensor<Int32>) -> Moments<Scalar>

    Parameters

    axes

    The dimensions to reduce.

  • Returns the mean and variance of this tensor along the specified axes. The reduced dimensions are removed.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func moments(squeezingAxes axes: [Int]) -> Moments<Scalar>

    Parameters

    axes

    The dimensions to reduce.

  • Returns the mean and variance of this tensor along the specified axes. The reduced dimensions are removed.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func moments(squeezingAxes axes: Int...) -> Moments<Scalar>

    Parameters

    axes

    The dimensions to reduce.

  • Returns the mean and variance of this tensor’s elements.

    Declaration

    @differentiable
    func moments() -> Moments<Scalar>
  • Returns the mean and variance of this tensor along the specified axes. The reduced dimensions are retained with value 1.

    Precondition

    axes must have rank 1.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func moments(alongAxes axes: Tensor<Int32>) -> Moments<Scalar>

    Parameters

    axes

    The dimensions to reduce.

  • Returns the mean and variance of this tensor along the specified axes. The reduced dimensions are retained with value 1.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func moments(alongAxes axes: [Int]) -> Moments<Scalar>

    Parameters

    axes

    The dimensions to reduce.

  • Returns the mean and variance of this tensor along the specified axes. The reduced dimensions are retained with value 1.

    Precondition

    Each value in axes must be in the range -rank..<rank.

    Declaration

    @differentiable
    func moments(alongAxes axes: Int...) -> Moments<Scalar>

    Parameters

    axes

    The dimensions to reduce.
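
    For example, a minimal sketch; the returned Moments value carries both statistics:

    let x = Tensor<Float>(shape: [2, 2], scalars: [1, 2, 3, 4])
    let m = x.moments(alongAxes: 0)
    m.mean       // [[2.0, 3.0]]
    m.variance   // [[1.0, 1.0]]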

  • Performs matrix multiplication between two tensors and produces the result.

    Declaration

    @differentiable
    static func • (lhs: Tensor, rhs: Tensor) -> Tensor
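
    For example, assuming the • operator above performs the same computation as matmul(_:_:), a minimal sketch:

    let a = Tensor<Float>(shape: [2, 3], scalars: [1, 2, 3, 4, 5, 6])
    let b = Tensor<Float>(shape: [3, 2], scalars: [7, 8, 9, 10, 11, 12])
    a • b   // shape [2, 2]: [[58, 64], [139, 154]]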
  • Returns the QR decomposition of each inner matrix in the tensor, a tensor with inner orthogonal matrices q and a tensor with inner upper triangular matrices r, such that the tensor is equal to matmul(q, r).

    Declaration

    func qrDecomposition(fullMatrices: Bool = false) -> (q: Tensor<Scalar>, r: Tensor<Scalar>)

    Parameters

    fullMatrices

    If true, compute full-sized q and r. Otherwise compute only the leading min(shape[rank - 1], shape[rank - 2]) columns of q.
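
    A minimal sketch of using the decomposition; q and r reconstruct the original matrix:

    let a = Tensor<Float>(shape: [3, 2], scalars: [1, 2, 3, 4, 5, 6])
    let (q, r) = a.qrDecomposition()
    // q has shape [3, 2], r has shape [2, 2], and matmul(q, r) ≈ a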

  • Returns the diagonal part of the tensor.

    For example:

    // 't' is [[1, 0, 0, 0]
    //         [0, 2, 0, 0]
    //         [0, 0, 3, 0]
    //         [0, 0, 0, 4]]
    t.diagonalPart()
    // [1, 2, 3, 4]
    

    Declaration

    func diagonalPart() -> Tensor<Scalar>
  • Returns a tensor computed from batch-normalizing the input along the specified axis.

    Specifically, returns (self - mu) / sqrt(var + epsilon) * gamma + beta, where mu and var are respectively the mean and variance of self along axis.

    Declaration

    @differentiable
    func batchNormalized(
        alongAxis axis: Int,
        offset: Tensor = Tensor(0),
        scale: Tensor = Tensor(1),
        epsilon: Scalar = 0.001
    ) -> Tensor

    Parameters

    axis

    The batch dimension.

    offset

    The offset, also known as beta.

    scale

    The scale, also known as gamma.

    epsilon

    A small value added to the denominator for numerical stability.
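
    For example, a minimal sketch with the default offset, scale, and epsilon:

    let x = Tensor<Float>(shape: [4, 2], scalars: [1, 2, 3, 4, 5, 6, 7, 8])
    let y = x.batchNormalized(alongAxis: 0)
    // Each column of y now has approximately zero mean and unit variance.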

  • Creates a tensor with the same shape and scalars as the specified numpy.ndarray instance.

    Precondition

    The numpy Python package must be installed.

    Declaration

    public init?(numpy numpyArray: PythonObject)

    Parameters

    numpyArray

    The numpy.ndarray instance to convert.

    Return Value

    numpyArray converted to a Tensor. Returns nil if numpyArray does not have a compatible scalar dtype.

  • Creates a numpy.ndarray instance with the same shape and scalars as this tensor.

    Precondition

    The numpy Python package must be installed.

    Declaration

    public func makeNumpyArray() -> PythonObject
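
    A sketch of a round trip through NumPy, assuming Python interoperability is available (the module is named Python or PythonKit depending on the toolchain) and numpy is installed:

    import Python

    let np = Python.import("numpy")
    let ndarray = np.array([[1.0, 2.0], [3.0, 4.0]], dtype: np.float32)
    let tensor = Tensor<Float>(numpy: ndarray)!   // nil if the dtype is incompatible
    let roundTripped = tensor.makeNumpyArray()    // back to a numpy.ndarray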
  • The number of dimensions of the Tensor.

    Declaration

    var rank: Int { get }
  • The shape of the Tensor.

    Declaration

    var shape: TensorShape { get }
  • The number of scalars in the Tensor.

    Declaration

    var scalarCount: Int { get }
  • The rank of the tensor, represented as a Tensor<Int32>.

    Declaration

    var rankTensor: Tensor<Int32> { get }
  • The dimensions of the tensor, represented as a Tensor<Int32>.

    Declaration

    var shapeTensor: Tensor<Int32> { get }
  • The number of scalars in the tensor, represented as a Tensor<Int32>.

    Declaration

    var scalarCountTensor: Tensor<Int32> { get }
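
    For example, a minimal sketch of the shape-related properties:

    let x = Tensor<Float>(shape: [2, 3], scalars: [1, 2, 3, 4, 5, 6])
    x.rank          // 2
    x.shape         // [2, 3]
    x.scalarCount   // 6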
  • Returns true if rank is equal to 0 and false otherwise.

    Declaration

    var isScalar: Bool { get }
  • Returns the single scalar element if rank is equal to 0 and nil otherwise.

    Declaration

    var scalar: Scalar? { get }
  • Reshape to scalar.

    Precondition

    The tensor has exactly one scalar.

    Declaration

    @differentiable
    func scalarized() -> Scalar
  • Creates a 0-D tensor from a scalar value.

    Declaration

    @differentiable
    init(_ value: Scalar)
  • Creates a 1D tensor from scalars.

    Declaration

    init(_ scalars: [Scalar])
  • Creates a 1D tensor from scalars.

    Declaration

    init<C>(_ vector: C) where Scalar == C.Element, C : RandomAccessCollection
  • Creates a tensor with the specified shape and contiguous scalars in row-major order.

    Precondition

    The product of the dimensions of the shape must equal the number of scalars.

    Declaration

    init(shape: TensorShape, scalars: [Scalar])

    Parameters

    shape

    The shape of the tensor.

    scalars

    The scalar contents of the tensor.
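
    For example, a minimal sketch that satisfies the precondition (2 * 3 == 6 scalars):

    let m = Tensor<Float>(shape: [2, 3], scalars: [1, 2, 3, 4, 5, 6])
    // m is [[1.0, 2.0, 3.0],
    //       [4.0, 5.0, 6.0]]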

  • Creates a tensor with the specified shape and contiguous scalars in row-major order.

    Precondition

    The product of the dimensions of the shape must equal the number of scalars.

    Declaration

    init(shape: TensorShape, scalars: UnsafeBufferPointer<Scalar>)

    Parameters

    shape

    The shape of the tensor.

    scalars

    The scalar contents of the tensor.

  • Creates a tensor with the specified shape and contiguous scalars in row-major order.

    Precondition

    The product of the dimensions of the shape must equal the number of scalars.

    Declaration

    init<C>(shape: TensorShape, scalars: C) where Scalar == C.Element, C : RandomAccessCollection

    Parameters

    shape

    The shape of the tensor.

    scalars

    The scalar contents of the tensor.

  • The type of the elements of an array literal.

    Declaration

    public typealias ArrayLiteralElement = _TensorElementLiteral<Scalar>
  • Creates a tensor initialized with the given elements.

    Declaration

    public init(arrayLiteral elements: _TensorElementLiteral<Scalar>...)
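
    For example, a nested array literal produces a rank-2 tensor (a minimal sketch):

    let m: Tensor<Float> = [[1, 2, 3],
                            [4, 5, 6]]   // shape [2, 3]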
  • Declaration

    public static func == (lhs: Tensor, rhs: Tensor) -> Bool
  • Declaration

    public static func != (lhs: Tensor, rhs: Tensor) -> Bool
  • A textual representation of the tensor.

    Note

    Use fullDescription for a non-pretty-printed description showing all scalars.

    Declaration

    public var description: String { get }
  • A textual representation of the tensor. Returns a summarized description if summarizing is true and the element count exceeds twice the edgeElementCount.

    Declaration

    func description(
        lineWidth: Int = 80,
        edgeElementCount: Int = 3,
        summarizing: Bool = false
    ) -> String

    Parameters

    lineWidth

    The maximum line width for printing. Used to determine the number of scalars to print per line.

    edgeElementCount

    The maximum number of elements to print before and after summarization via ellipses (...).

    summarizing

    If true, summarizes the description when the element count exceeds twice edgeElementCount.

  • A full, non-pretty-printed textual representation of the tensor, showing all scalars.

    Declaration

    var fullDescription: String { get }
  • Declaration

    public var customMirror: Mirror { get }
  • Declaration

    public func encode(to encoder: Encoder) throws
  • Declaration

    public init(from decoder: Decoder) throws
  • The scalar zero tensor.

    Declaration

    public static var zero: Tensor { get }
  • Adds two tensors and produces their sum.

    Note

    + supports broadcasting.

    Declaration

    @differentiable
    public static func + (lhs: Tensor, rhs: Tensor) -> Tensor
  • Subtracts one tensor from another and produces their difference.

    Note

    - supports broadcasting.

    Declaration

    @differentiable
    public static func - (lhs: Tensor, rhs: Tensor) -> Tensor
  • The scalar one tensor.

    Declaration

    public static var one: Tensor { get }
  • Returns the element-wise reciprocal of self.

    Declaration

    public var reciprocal: Tensor { get }
  • Multiplies two tensors element-wise and produces their product.

    Note

    .* supports broadcasting.

    Declaration

    public static func .* (lhs: Tensor, rhs: Tensor) -> Tensor
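
    For example, a minimal sketch of broadcasting addition and element-wise multiplication:

    let a = Tensor<Float>(shape: [2, 2], scalars: [1, 2, 3, 4])
    let row = Tensor<Float>([10, 20])
    a + row   // [[11, 22], [13, 24]] (row is broadcast across rows)
    a .* a    // [[1, 4], [9, 16]]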
  • Declaration

    public typealias TangentVector = Tensor
  • Declaration

    public init(_owning tensorHandles: UnsafePointer<CTensorHandle>?)
  • Declaration

    public init<C: RandomAccessCollection>(
        _handles: C
    ) where C.Element: _AnyTensorHandle