Creates a positional embedding.

This layer calculates the position encoding as a mix of sine and cosine functions with geometrically increasing wavelengths, as defined and formalized in "Attention is All You Need", section 3.5.

hidden_size Size of the hidden layer.
min_timescale Minimum scale that will be applied at each position.
max_timescale Maximum scale that will be applied at each position.
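The encoding described above can be sketched in NumPy. This is an illustrative implementation under the common convention (used, for example, in the tensor2tensor codebase): wavelengths grow geometrically from min_timescale to max_timescale across the channel dimension, with the first half of the channels holding sines and the second half cosines. The function name and even hidden_size assumption are this sketch's, not necessarily the layer's internals.

```python
import numpy as np

def sinusoidal_position_encoding(length, hidden_size,
                                 min_timescale=1.0, max_timescale=1.0e4):
    """Sketch of the sine/cosine position encoding from
    'Attention is All You Need', section 3.5.

    Assumes hidden_size is even: the first hidden_size // 2 channels
    are sines, the rest are cosines.
    """
    position = np.arange(length, dtype=np.float64)
    num_timescales = hidden_size // 2
    # Timescales increase geometrically from min_timescale to max_timescale.
    log_increment = (np.log(max_timescale / min_timescale)
                     / max(num_timescales - 1, 1))
    inv_timescales = min_timescale * np.exp(
        -np.arange(num_timescales, dtype=np.float64) * log_increment)
    # Outer product: one row per position, one column per timescale.
    scaled_time = position[:, None] * inv_timescales[None, :]
    # Concatenate sin and cos halves into a (length, hidden_size) table.
    return np.concatenate([np.sin(scaled_time), np.cos(scaled_time)], axis=1)
```

Because the table depends only on position, it can be computed once and added to the input embeddings by broadcasting over the batch dimension.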




Implements call() for the layer.

inputs A tensor whose second dimension will be used as the length. If None, the length argument must be specified.
length An optional integer specifying the number of positions. If both inputs and length are specified, length must equal the second dimension of inputs.

A tensor of shape (length, hidden_size).
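The inputs/length contract above can be sketched as a small helper. This is a hypothetical illustration of the documented rules, not the layer's actual code: the length comes from the second dimension of inputs, or from length when inputs is None, and the two must agree when both are given.

```python
import numpy as np

def resolve_length(inputs=None, length=None):
    """Illustrative helper mirroring the documented call() contract."""
    if inputs is None:
        if length is None:
            raise ValueError("If inputs is None, length must be specified.")
        return length
    input_length = inputs.shape[1]  # second dimension supplies the length
    if length is not None and length != input_length:
        raise ValueError(
            "length must equal the second dimension of inputs.")
    return input_length
```

For example, a batch of token ids shaped (batch_size, seq_len) yields seq_len positions, so the returned encoding has shape (seq_len, hidden_size) and broadcasts against the batch.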