Modules pre-trained to embed words, phrases, and sentences as high-dimensional vectors.
Click on a module to view its documentation, or reference its URL from the TensorFlow Hub library like so:
import tensorflow_hub as hub
m = hub.Module("https://tfhub.dev/...")
Universal Sentence Encoder
Encoder for greater-than-word-length text (sentences and short paragraphs), trained on a variety of data sources.
- universal-sentence-encoder-lite (text preprocessing required)
ELMo
Deep contextualized word representations trained on the 1 Billion Word Benchmark.
NNLM embedding trained on Google News
Embedding from a neural network language model trained on the Google News dataset.
Word2vec trained on Wikipedia
Embedding trained by word2vec on Wikipedia.
Available in 250-dimension and 500-dimension variants.
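Whichever module produces the vectors, a common downstream step is comparing them with cosine similarity. The sketch below uses small toy vectors as stand-ins for module output (the real modules return vectors with hundreds of dimensions); the vector values and names here are illustrative, not actual embeddings.

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors:
    # dot(a, b) / (|a| * |b|). Ranges from -1 to 1; higher means
    # the texts that produced the vectors are more similar.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy stand-ins for sentence embeddings (not real module output).
vec_question = [0.2, 0.9, 0.1]
vec_paraphrase = [0.25, 0.85, 0.15]
vec_unrelated = [0.9, 0.05, 0.4]

print(cosine_similarity(vec_question, vec_paraphrase))  # close to 1.0
print(cosine_similarity(vec_question, vec_unrelated))   # noticeably lower
```

With real module output, you would replace the toy lists with the vectors returned by the module (e.g. the 512-dimension sentence vectors from the Universal Sentence Encoder, or the 250- or 500-dimension word vectors above).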