Creates kernel attention mask.
tfm.nlp.layers.KernelMask(trainable=True, name=None, dtype=None, dynamic=False, **kwargs)
inputs:
from_tensor: 2D or 3D Tensor of shape [batch_size, from_seq_length, ...].
mask: a Tensor of shape [batch_size, from_seq_length] that indicates which parts of the inputs should not be attended to.
Returns: a float Tensor of shape [batch_size, from_seq_length] that KernelAttention takes as mask.
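As a hedged sketch of what this mask looks like (the padding id of 0 and the token ids below are illustrative assumptions, not part of the API): a boolean padding pattern over the batch is cast to the float [batch_size, from_seq_length] mask that KernelAttention consumes.

```python
import numpy as np

# Hypothetical token ids; 0 is assumed to be the padding id.
token_ids = np.array([[5, 9, 3, 0],
                      [7, 2, 0, 0]])

# 1.0 where a real token exists, 0.0 at padded positions --
# the float [batch_size, from_seq_length] mask KernelAttention expects.
kernel_mask = (token_ids != 0).astype(np.float32)
print(kernel_mask)
```

Positions with 0.0 are the parts of the inputs that should not be attended to.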
call(inputs, mask)
This is where the layer's logic lives.
The call() method may not create state (except in its first invocation, wrapping the creation of variables or other resources in tf.init_scope()). It is recommended to create state, including tf.Variable instances and nested Layer instances, in __init__(), or in the build() method that is called automatically before call() executes for the first time.
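A minimal sketch of the recommended pattern above: variables are created in build(), which Keras runs automatically before the first call(), while call() itself stays stateless. ScaleLayer is a hypothetical example layer, not part of tfm.

```python
import tensorflow as tf

class ScaleLayer(tf.keras.layers.Layer):
    # Hypothetical layer illustrating state creation in build().
    def build(self, input_shape):
        # State is created here, once, before the first call().
        self.scale = self.add_weight(
            name="scale", shape=(), initializer="ones", trainable=True)

    def call(self, inputs):
        # call() only computes; it creates no new state.
        return inputs * self.scale

layer = ScaleLayer()
out = layer(tf.ones([2, 3]))  # build() runs automatically before call()
```

Creating the weight in build() rather than call() keeps repeated invocations cheap and avoids the restrictions on state creation inside call().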
inputs: Input tensor, or dict/list/tuple of input tensors. The first positional inputs argument is subject to special rules: it must be explicitly passed (a layer cannot have zero arguments, and inputs cannot be provided via the default value of a keyword argument); NumPy array or Python scalar values in inputs get cast as tensors; and Keras mask metadata, shape info for build(), input_spec compatibility checks, mixed precision casting, and the SavedModel input specification are derived from inputs only, not from tensors in other positional or keyword arguments.
*args: Additional positional arguments. May contain tensors, although this is not recommended, for the reasons above.
**kwargs: Additional keyword arguments. May contain tensors, although this is not recommended, for the reasons above. The following optional keyword arguments are reserved:
training: Boolean scalar tensor or Python boolean indicating whether the call is meant for training or inference.
mask: Boolean input mask. If the layer's call() method takes a mask argument, its default value will be set to the mask generated for inputs by the previous layer (if inputs did come from a layer that generated a corresponding mask, i.e. if it came from a Keras layer with masking support).
Returns: a tensor or list/tuple of tensors.
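As a hedged illustration of the reserved training keyword argument, the sketch below uses tf.keras.layers.Dropout (chosen only because its behavior visibly depends on training; it is unrelated to KernelMask).

```python
import tensorflow as tf

x = tf.ones([1, 4])
dropout = tf.keras.layers.Dropout(rate=0.5)

# training=False: the layer is an identity at inference time.
inference_out = dropout(x, training=False)

# training=True: elements are randomly zeroed (and the rest rescaled).
training_out = dropout(x, training=True)
```

Passing training explicitly at call time overrides whatever mode Keras would otherwise infer.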