Module google/random-nnlm-en-dim50/1

Token-based text embedding initialized randomly.

Module URL: https://tfhub.dev/google/random-nnlm-en-dim50/1

Overview

Text embedding whose weights are initialized with tf.random_normal([vocabulary_size, 50]). It contains no linguistic "knowledge", but is a convenient baseline against which to compare other modules.
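
For illustration, the initialization amounts to something like the following minimal sketch, assuming TensorFlow 1.x; vocabulary_size below is a hypothetical placeholder, not the module's actual vocabulary size:

import tensorflow as tf

vocabulary_size = 100000  # hypothetical placeholder, not the module's actual vocabulary size
embedding_dim = 50

# Embedding weights drawn once from a normal distribution and never trained on any text corpus.
random_embeddings = tf.Variable(tf.random_normal([vocabulary_size, embedding_dim]))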

Example use

import tensorflow_hub as hub

embed = hub.Module("https://tfhub.dev/google/random-nnlm-en-dim50/1")
embeddings = embed(["cat is on the mat", "dog is in the fog"])
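
hub.Module builds a TensorFlow 1.x graph, so the embeddings tensor above is evaluated inside a session. A minimal sketch continuing from the snippet above:

import tensorflow as tf

with tf.Session() as sess:
    # Initialize the module's variables and the lookup table backing its vocabulary.
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    vectors = sess.run(embeddings)
    print(vectors.shape)  # expected: (2, 50)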

Details

The vocabulary of the module is based on nnlm-en-dim50 (https://tfhub.dev/google/nnlm-en-dim50/1).

Input

The module takes a batch of sentences in a 1-D tensor of strings as input.
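
The Python list in the example above is converted to exactly such a tensor; a sketch, assuming TensorFlow 1.x:

import tensorflow as tf

# A batch of two sentences as a 1-D string tensor of shape [batch_size] = [2].
sentences = tf.constant(["cat is on the mat", "dog is in the fog"])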

Preprocessing

The module preprocesses its input by splitting on spaces.
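
Conceptually, this corresponds to plain whitespace tokenization on single spaces; a sketch in Python (the module performs this step inside its own graph):

# Conceptual equivalent of the module-internal preprocessing: split each sentence on spaces.
tokens = "cat is on the mat".split(" ")
# -> ['cat', 'is', 'on', 'the', 'mat']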

Out of vocabulary tokens

A small fraction of the least frequent tokens from the original vocabulary (~2.5%) is replaced by hash buckets, which are also initialized randomly from the same distribution.
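
The effect is roughly that a token missing from the vocabulary is hashed deterministically to one of a fixed number of extra randomly initialized embedding rows. A sketch of that mapping (num_oov_buckets is a hypothetical placeholder; the module's actual bucket count and hashing scheme are internal details):

import tensorflow as tf

num_oov_buckets = 1000  # hypothetical placeholder, not the module's real bucket count

# Deterministically assign an out-of-vocabulary token to one of the extra embedding rows.
bucket_id = tf.string_to_hash_bucket_fast(["supercalifragilistic"], num_oov_buckets)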