lm1b

  • Description:

A benchmark corpus for measuring progress in statistical language modeling; the training data contains almost one billion words.

  • Splits:

Split     Examples
'test'    306,688
'train'   30,301,028

  • Citation:

@article{DBLP:journals/corr/ChelbaMSGBK13,
  author    = {Ciprian Chelba and
               Tomas Mikolov and
               Mike Schuster and
               Qi Ge and
               Thorsten Brants and
               Phillipp Koehn},
  title     = {One Billion Word Benchmark for Measuring Progress in Statistical Language
               Modeling},
  journal   = {CoRR},
  volume    = {abs/1312.3005},
  year      = {2013},
  url       = {http://arxiv.org/abs/1312.3005},
  archivePrefix = {arXiv},
  eprint    = {1312.3005},
  timestamp = {Mon, 13 Aug 2018 16:46:16 +0200},
  biburl    = {https://dblp.org/rec/bib/journals/corr/ChelbaMSGBK13},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}

lm1b/plain_text (default config)

  • Config description: Plain text

  • Features:

FeaturesDict({
    'text': Text(shape=(), dtype=tf.string),
})

lm1b/bytes

  • Config description: Uses byte-level text encoding with tfds.deprecated.text.ByteTextEncoder

  • Features:

FeaturesDict({
    'text': Text(shape=(None,), dtype=tf.int64, encoder=<ByteTextEncoder vocab_size=257>),
})
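The vocab size of 257 follows from byte-level encoding: 256 possible byte values plus one id reserved for padding. A minimal pure-Python sketch of this scheme (an illustration of the idea, not the `ByteTextEncoder` implementation itself; it assumes each UTF-8 byte maps to `byte_value + 1`, with id 0 reserved for padding):

```python
def byte_encode(text: str) -> list[int]:
    # Encode to UTF-8 bytes, shifting each value by +1 so that id 0
    # stays free for padding: 256 byte values + 1 pad id = 257.
    return [b + 1 for b in text.encode("utf-8")]

def byte_decode(ids: list[int]) -> str:
    # Drop padding ids (0), undo the +1 shift, decode the bytes.
    return bytes(i - 1 for i in ids if i > 0).decode("utf-8")

# Round-trips arbitrary text, including multi-byte characters:
ids = byte_encode("héllo")
assert byte_decode(ids) == "héllo"
```

Because every byte is one token, sequences are longer than with subword encodings, but there is no out-of-vocabulary problem.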

lm1b/subwords8k

  • Config description: Uses tfds.deprecated.text.SubwordTextEncoder with 8k vocab size

  • Features:

FeaturesDict({
    'text': Text(shape=(None,), dtype=tf.int64, encoder=<SubwordTextEncoder vocab_size=8189>),
})
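A subword encoder splits text into pieces drawn from a learned vocabulary, so common words become single tokens while rare words decompose into fragments. The following is a toy sketch of greedy longest-match subword tokenization with a hypothetical hand-made vocabulary; the real `SubwordTextEncoder` learns its ~8k vocabulary from the corpus and also falls back to byte-level ids for out-of-vocabulary characters:

```python
def subword_encode(text: str, vocab: dict[str, int]) -> list[int]:
    # Greedy longest-match from the left: at each position, take the
    # longest vocabulary piece that matches, then advance past it.
    ids = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):
            piece = text[i:j]
            if piece in vocab:
                ids.append(vocab[piece])
                i = j
                break
        else:
            raise ValueError(f"no subword covers {text[i]!r}")
    return ids

# Hypothetical tiny vocabulary for illustration only.
vocab = {"one": 1, "billion": 2, "bill": 3, "ion": 4,
         " ": 5, "word": 6, "s": 7}
subword_encode("one billion words", vocab)  # → [1, 5, 2, 5, 6, 7]
```

Note that "billion" matches as one piece rather than "bill" + "ion" because the longest match wins; this is why larger vocabularies (as in `subwords32k` below) yield shorter token sequences.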

lm1b/subwords32k

  • Config description: Uses tfds.deprecated.text.SubwordTextEncoder with 32k vocab size

  • Features:

FeaturesDict({
    'text': Text(shape=(None,), dtype=tf.int64, encoder=<SubwordTextEncoder vocab_size=32711>),
})