A layer that uses tf.einsum as the backing computation.
tfnlp.layers.EinsumDense(
    equation, output_shape, activation=None, bias_axes=None,
    kernel_initializer='glorot_uniform', bias_initializer='zeros',
    kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None,
    kernel_constraint=None, bias_constraint=None, **kwargs
)
This layer can perform einsum calculations of arbitrary dimensionality.
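The kernel (and optional bias) shapes follow directly from the equation and output_shape. A minimal sketch, assuming the import path shown in the signature above and that the layer exposes its weights as kernel and bias, as the standard Keras EinsumDense layer does:

import tensorflow as tf
from tfnlp.layers import EinsumDense  # import path assumed from the signature above

layer = EinsumDense("ab,bc->ac", output_shape=64, bias_axes="c")
layer(tf.keras.Input(shape=[32]))  # builds the layer: 'b' = 32 from the input, 'c' = 64
layer.kernel.shape  # TensorShape([32, 64]) -- the 'bc' operand of the equation
layer.bias.shape    # TensorShape([64])     -- one bias entry per unit of axis 'c'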
Args  

equation

An equation describing the einsum to perform. This equation must
be a valid einsum string of the form ab,bc->ac , ...ab,bc->...ac , or
ab...,bc->ac... where 'ab', 'bc', and 'ac' can be any valid einsum axis
expression sequence.

output_shape

The expected shape of the output tensor (excluding the batch dimension and any dimensions represented by ellipses). You can specify None for any dimension that is unknown or can be inferred from the input shape. 
activation

Activation function to use. If you don't specify anything, no
activation is applied (that is, a "linear" activation: a(x) = x ). A short
sketch combining this argument with the regularizer arguments follows this
argument list.

bias_axes

A string containing the output dimension(s) to apply a bias to.
Each character in the bias_axes string should correspond to a character
in the output portion of the equation string.

kernel_initializer

Initializer for the kernel weights matrix.

bias_initializer

Initializer for the bias vector. 
kernel_regularizer

Regularizer function applied to the kernel weights
matrix.

bias_regularizer

Regularizer function applied to the bias vector. 
activity_regularizer

Regularizer function applied to the output of the layer (its "activation").
kernel_constraint

Constraint function applied to the kernel weights
matrix.

bias_constraint

Constraint function applied to the bias vector. 
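The arguments above compose the same way as in other Keras layers. A minimal sketch, assuming the standard Keras string identifiers for activations and initializers and a tf.keras regularizer object are accepted here:

layer = EinsumDense(
    "ab,bc->ac",
    output_shape=64,
    activation="relu",                                # applied to the einsum (plus bias) output
    bias_axes="c",                                    # bias over the output axis 'c'
    kernel_initializer="glorot_uniform",
    kernel_regularizer=tf.keras.regularizers.l2(1e-4),
)
output_tensor = layer(tf.keras.Input(shape=[32]))  # shape (None, 64)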
Examples:
Biased dense layer with einsums
This example shows how to instantiate a standard Keras dense layer using
einsum operations. This example is equivalent to
tf.keras.layers.Dense(64, use_bias=True).
layer = EinsumDense("ab,bc>ac", output_shape=64, bias_axes="c")
input_tensor = tf.keras.Input(shape=[32])
output_tensor = layer(input_tensor)
output_tensor
<... shape=(None, 64) dtype=...>
Applying a dense layer to a sequence
This example shows how to instantiate a layer that applies the same dense
operation to every element in a sequence. Here, the output_shape has two
values (since there are two non-batch dimensions in the output); the first
dimension in the output_shape is None, because the sequence dimension b has
an unknown shape.
layer = EinsumDense("abc,cd>abd",
output_shape=(None, 64),
bias_axes="d")
input_tensor = tf.keras.Input(shape=[32, 128])
output_tensor = layer(input_tensor)
output_tensor
<... shape=(None, 32, 64) dtype=...>
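Because the sequence axis b was declared as None, the built layer is not tied to a single sequence length. A minimal sketch of that behaviour, reusing the same equation (an illustrative addition, not part of the original example):

layer = EinsumDense("abc,cd->abd", output_shape=(None, 64), bias_axes="d")
layer(tf.ones([2, 10, 128])).shape  # TensorShape([2, 10, 64])
layer(tf.ones([2, 50, 128])).shape  # TensorShape([2, 50, 64]) -- same weights, longer sequence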
Applying a dense layer to a sequence using ellipses
This example shows how to instantiate a layer that applies the same dense operation to every element in a sequence, but uses the ellipsis notation instead of specifying the batch and sequence dimensions.
Because we are using ellipsis notation and have specified only one axis, the output_shape arg is a single value. When instantiated in this way, the layer can handle any number of sequence dimensions, including the case where no sequence dimension exists.
layer = EinsumDense("...x,xy>...y", output_shape=64, bias_axes="y")
input_tensor = tf.keras.Input(shape=[32, 128])
output_tensor = layer(input_tensor)
output_tensor
<... shape=(None, 32, 64) dtype=...>
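To illustrate that last point, a minimal sketch (an assumption-level illustration): the ellipsis may match only the batch dimension, so the same equation also accepts an input with no sequence axis at all.

layer = EinsumDense("...x,xy->...y", output_shape=64, bias_axes="y")
output_tensor = layer(tf.keras.Input(shape=[128]))
output_tensor
<... shape=(None, 64) dtype=...>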
Methods
call
call(
    inputs, *args, **kwargs
)
This is where the layer's logic lives.
Note that the call() method in tf.keras differs slightly from the standalone
keras API: in standalone keras, masking support can be passed to layers as
additional call arguments, whereas tf.keras layers support masking through the
compute_mask() method.
Args  

inputs

Input tensor, or list/tuple of input tensors. 
*args

Additional positional arguments. Currently unused. 
**kwargs

Additional keyword arguments. Currently unused. 
Returns  

A tensor or list/tuple of tensors. 
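In normal use the layer is invoked through __call__ rather than by calling call() directly; __call__ builds the weights on first use and then dispatches to call(). A minimal sketch:

layer = EinsumDense("ab,bc->ac", output_shape=64, bias_axes="c")
outputs = layer(tf.ones([8, 32]))  # __call__ builds the layer, then runs call()
outputs.shape  # TensorShape([8, 64])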