
Module: tfnlp.layers

Layers package definition.

Classes

class BertPackInputs: Packs tokens into model inputs for BERT.

class BertTokenizer: Wraps BertTokenizer with pre-defined vocab as a Keras Layer.
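
For illustration, a minimal tokenize-and-pack sketch combining BertTokenizer with BertPackInputs. The `tfnlp` import name and the `vocab.txt` path are assumptions for this sketch, not part of this page:

```python
import tensorflow as tf
import tfnlp  # import name assumed to match this page's namespace

# "vocab.txt" is a hypothetical path to a BERT WordPiece vocabulary file.
tokenizer = tfnlp.layers.BertTokenizer(vocab_file="vocab.txt", lower_case=True)
packer = tfnlp.layers.BertPackInputs(
    seq_length=128,
    special_tokens_dict=tokenizer.get_special_tokens_dict())

sentences = tf.constant(["hello world", "the quick brown fox"])
token_ids = tokenizer(sentences)       # ragged token ids, one row per sentence
encoder_inputs = packer([token_ids])
# -> dict with input_word_ids, input_mask, input_type_ids
```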

class CachedAttention: Attention layer with cache used for autoregressive decoding.

class ClassificationHead: Pooling head for sentence-level classification tasks.
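
A hedged sketch of the pooling-head pattern; the inner_dim/num_classes constructor arguments and the `tfnlp` import name are assumptions:

```python
import tensorflow as tf
import tfnlp  # import name assumed

head = tfnlp.layers.ClassificationHead(inner_dim=768, num_classes=3)
encoder_outputs = tf.ones([2, 16, 768])  # [batch, seq_length, hidden]
logits = head(encoder_outputs)           # [2, 3], pooled from the leading token
```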

class CompiledTransformer: Transformer layer.

class DenseEinsum: A densely connected layer that uses tf.einsum as the backing computation.

class EinsumDense: A layer that uses tf.einsum as the backing computation.
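
This layer mirrors the Keras EinsumDense API (an equation, an output shape, and optional bias axes); a minimal sketch, with the `tfnlp` import name assumed:

```python
import tensorflow as tf
import tfnlp  # import name assumed

# Project the last axis of a [batch, seq, 128] tensor down to 64 units.
dense = tfnlp.layers.EinsumDense(
    "abc,cd->abd", output_shape=(None, 64), bias_axes="d")
y = dense(tf.ones([2, 16, 128]))  # shape [2, 16, 64]
```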

class GatedFeedforward: Gated linear feedforward layer.

class MaskedLM: Masked language model network head for BERT modeling.

class MaskedSoftmax: Performs a softmax with optional masking on a tensor.

class MatMulWithMargin: This layer computes a dot product matrix given two encoded inputs.

class MobileBertEmbedding: Performs an embedding lookup for MobileBERT.

class MobileBertMaskedLM: Masked language model network head for BERT modeling.

class MobileBertTransformer: Transformer block for MobileBERT.

class MultiChannelAttention: Multi-channel Attention layer.

class MultiClsHeads: Pooling heads sharing the same pooling stem.

class MultiHeadAttention: MultiHeadAttention layer.
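
A minimal sketch assuming the query/value calling convention familiar from tf.keras.layers.MultiHeadAttention; the `tfnlp` import name is an assumption:

```python
import tensorflow as tf
import tfnlp  # import name assumed

mha = tfnlp.layers.MultiHeadAttention(num_heads=8, key_dim=64)
query = tf.ones([2, 16, 512])
value = tf.ones([2, 16, 512])
output = mha(query, value)  # self/cross-attention output, shape [2, 16, 512]
```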

class MultiHeadRelativeAttention: A multi-head attention layer with relative attention + position encoding.

class OnDeviceEmbedding: Performs an embedding lookup suitable for accelerator devices.
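
A minimal sketch, assuming the vocab_size/embedding_width constructor arguments and the `tfnlp` import name:

```python
import tensorflow as tf
import tfnlp  # import name assumed

embedding = tfnlp.layers.OnDeviceEmbedding(vocab_size=30522, embedding_width=768)
ids = tf.constant([[101, 2023, 102]])  # [batch, seq_length] token ids
vectors = embedding(ids)               # [1, 3, 768]
```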

class ReZeroTransformer: Transformer layer with ReZero.

class RelativePositionBias: Relative position embedding via per-head bias in T5 style.

class RelativePositionEmbedding: Creates a positional embedding.

class SelfAttentionMask: Creates a 3D attention mask from a 2D tensor mask.
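
A hedged sketch of the mask expansion, assuming the layer accepts a [from_tensor, to_mask] list and that the `tfnlp` import name matches this page:

```python
import tensorflow as tf
import tfnlp  # import name assumed

embeddings = tf.ones([2, 16, 768])           # [batch, seq, width]
pad_mask = tf.ones([2, 16], dtype=tf.int32)  # 1 = real token, 0 = padding
attention_mask = tfnlp.layers.SelfAttentionMask()([embeddings, pad_mask])
# attention_mask: [2, 16, 16], broadcastable over attention scores
```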

class SentencepieceTokenizer: Wraps tf_text.SentencepieceTokenizer as a Keras Layer.

class TNTransformerExpandCondense: Transformer layer using a tensor network Expand-Condense layer.

class TalkingHeadsAttention: Implements Talking-Heads Attention.

class Transformer: Transformer layer.
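
A minimal encoder-block sketch, assuming the num_attention_heads/intermediate_size/intermediate_activation arguments, the [tensor, attention_mask] list call, and the `tfnlp` import name:

```python
import tensorflow as tf
import tfnlp  # import name assumed

block = tfnlp.layers.Transformer(
    num_attention_heads=8,
    intermediate_size=2048,
    intermediate_activation="gelu")
x = tf.ones([2, 16, 512])              # [batch, seq, hidden]
attention_mask = tf.ones([2, 16, 16])  # e.g. from SelfAttentionMask
y = block([x, attention_mask])         # same shape as x
```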

class TransformerDecoderBlock: Single transformer layer for decoder.

class TransformerScaffold: Transformer scaffold layer.

class TransformerXL: Transformer XL.

class TransformerXLBlock: Transformer XL block.

class TwoStreamRelativeAttention: Two-stream relative self-attention for XLNet.

class VotingAttention: Voting Attention layer.

Functions

tf_function_if_eager(...): Applies the @tf.function decorator only if running in eager mode.
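
A hedged usage sketch: the decorator wraps the function in tf.function only when executing eagerly and otherwise leaves it as a plain Python function; the experimental_compile keyword argument and the `tfnlp` import name are assumptions:

```python
import tensorflow as tf
import tfnlp  # import name assumed

@tfnlp.layers.tf_function_if_eager(experimental_compile=True)
def fused_step(x):
  # Compiled via tf.function under eager execution; traced inline as
  # ordinary Python when called inside an outer tf.function.
  return x * x + x

print(fused_step(tf.constant(2.0)))  # tf.Tensor(6.0, ...)
```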