Raw TensorFlow operators

Building on TensorFlow, Swift for TensorFlow takes a fresh approach to API design. APIs are carefully curated from established libraries and combined with new language idioms. This means that not all TensorFlow APIs will be directly available as Swift APIs, and our API curation needs time and dedicated effort to evolve. However, do not worry if your favorite TensorFlow operator is not available in Swift -- the TensorFlow Swift library gives you transparent access to most TensorFlow operators, under the _Raw namespace.

Import TensorFlow to get started.

import TensorFlow

Calling raw operators

Simply find the function you need under the _Raw namespace via code completion.

print(_Raw.mul(Tensor([2.0, 3.0]), Tensor([5.0, 6.0])))
[10.0, 18.0]
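
Raw operators follow the semantics of the underlying TensorFlow ops, including broadcasting. As a small sketch reusing _Raw.mul from above, a scalar tensor broadcasts against a vector:

// Broadcasting a scalar tensor across a vector with the same raw operator.
// TensorFlow's Mul op broadcasts its inputs, so the scalar is applied elementwise.
let scaled = _Raw.mul(Tensor<Float>(10.0), Tensor<Float>([1.0, 2.0, 3.0]))
print(scaled)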

Defining a new multiply operator

Multiplication is already available as the * operator on Tensor, but let us pretend that we want to make it available under a new name, .*. Swift allows you to retroactively add methods or computed properties to existing types using extension declarations.

Now, let us add .* to Tensor by declaring an extension, and make the operator available only when the tensor's Scalar type conforms to Numeric.

infix operator .* : MultiplicationPrecedence

extension Tensor where Scalar: Numeric {
    static func .* (_ lhs: Tensor, _ rhs: Tensor) -> Tensor {
        return _Raw.mul(lhs, rhs)
    }
}

let x: Tensor<Double> = [[1.0, 2.0], [3.0, 4.0]]
let y: Tensor<Double> = [[8.0, 7.0], [6.0, 5.0]]
print(x .* y)
[[ 8.0, 14.0],
 [18.0, 20.0]]
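
Because the extension only requires Scalar to conform to Numeric, the same operator works for integer tensors as well. A minimal sketch, assuming Int32 (one of the integer scalar types TensorFlow supports):

// `.*` is not limited to floating-point tensors, thanks to the `Scalar: Numeric` constraint.
let a: Tensor<Int32> = [[1, 2], [3, 4]]
let b: Tensor<Int32> = [[10, 20], [30, 40]]
print(a .* b)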

Defining a derivative of a wrapped function

Not only can you easily define a Swift API for a raw TensorFlow operator, you can also make it differentiable to work with Swift's first-class automatic differentiation.

To make .* differentiable, use the @derivative attribute on the derivative function and specify the original function as an attribute argument under the of: label. Since the .* operator is defined for any Scalar conforming to Numeric, that constraint alone is not enough to make Tensor<Scalar> conform to the Differentiable protocol. Swift's type safety comes to the rescue: the compiler reminds us to add a generic constraint to the @differentiable attribute requiring Scalar to conform to the TensorFlowFloatingPoint protocol, which in turn makes Tensor<Scalar> conform to Differentiable.

infix operator .* : MultiplicationPrecedence

extension Tensor where Scalar: Numeric {
    @differentiable(where Scalar: TensorFlowFloatingPoint)
    static func .* (_ lhs: Tensor,  _ rhs: Tensor) -> Tensor {
        return _Raw.mul(lhs, rhs)
    }
}

extension Tensor where Scalar: TensorFlowFloatingPoint {
    @derivative(of: .*)
    static func multiplyDerivative(
        _ lhs: Tensor, _ rhs: Tensor
    ) -> (value: Tensor, pullback: (Tensor) -> (Tensor, Tensor)) {
        return (lhs * rhs, { v in
            ((rhs * v).unbroadcasted(to: lhs.shape),
            (lhs * v).unbroadcasted(to: rhs.shape))
        })
    }
}

// Now we can take the derivative of a function that uses the `.*` operator we just defined.
print(gradient(at: x, y) { x, y in
    (x .* y).sum()
})
([[8.0, 7.0],
 [6.0, 5.0]], [[1.0, 2.0],
 [3.0, 4.0]])
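
Since .* now participates in automatic differentiation, it composes with other differentiable operations. As a sketch using the same gradient(at:) API as above, we can differentiate a slightly larger expression:

// `.*` composes with other differentiable ops; the chain rule flows through the
// custom pullback defined above.
print(gradient(at: x, y) { x, y in
    ((x .* y) + x).sum()
})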

More examples

let matrix = Tensor<Float>([[1, 2], [3, 4]])

print(_Raw.matMul(matrix, matrix, transposeA: true, transposeB: true))
print(_Raw.matMul(matrix, matrix, transposeA: true, transposeB: false))
print(_Raw.matMul(matrix, matrix, transposeA: false, transposeB: true))
print(_Raw.matMul(matrix, matrix, transposeA: false, transposeB: false))
[[ 7.0, 15.0],
 [10.0, 22.0]]
[[10.0, 14.0],
 [14.0, 20.0]]
[[ 5.0, 11.0],
 [11.0, 25.0]]
[[ 7.0, 10.0],
 [15.0, 22.0]]
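
As with .*, a raw operator can be wrapped in a more Swift-friendly function. Below is a minimal sketch (the helper name gram is made up for illustration) that wraps _Raw.matMul to compute a matrix times its own transpose:

// Hypothetical convenience wrapper around `_Raw.matMul`:
// computes `a` multiplied by its transpose.
func gram(_ a: Tensor<Float>) -> Tensor<Float> {
    return _Raw.matMul(a, a, transposeA: false, transposeB: true)
}

print(gram(matrix))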