
reddit

  • Description:

This corpus contains preprocessed posts from the Reddit dataset. The dataset consists of 3,848,330 posts with an average length of 270 words for the content and 28 words for the summary.

Features include strings: author, body, normalizedBody, content, summary, subreddit, subreddit_id. The content is used as the document and the summary is used as the summary.

Split     Examples
'train'   3,848,330
  • Features:
FeaturesDict({
    'author': tf.string,
    'body': tf.string,
    'content': tf.string,
    'id': tf.string,
    'normalizedBody': tf.string,
    'subreddit': tf.string,
    'subreddit_id': tf.string,
    'summary': tf.string,
})
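
A minimal loading sketch, assuming the standard tensorflow_datasets API and that the dataset has already been downloaded and prepared; the field names mirror the FeaturesDict above, and everything else (variable names, the printed preview) is illustrative only.

import tensorflow_datasets as tfds

# Load the single 'train' split of the Reddit (Webis-TLDR-17) corpus.
ds = tfds.load('reddit', split='train')

# Inspect one example: 'content' is the document, 'summary' is the TL;DR.
for example in ds.take(1):
    document = example['content'].numpy().decode('utf-8')
    summary = example['summary'].numpy().decode('utf-8')
    print(document[:200], '...')
    print('TL;DR:', summary)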
  • Citation:
@inproceedings{volske-etal-2017-tl,
    title = "{TL};{DR}: Mining {R}eddit to Learn Automatic Summarization",
    author = {V{\"o}lske, Michael  and
      Potthast, Martin  and
      Syed, Shahbaz  and
      Stein, Benno},
    booktitle = "Proceedings of the Workshop on New Frontiers in Summarization",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/W17-4508",
    doi = "10.18653/v1/W17-4508",
    pages = "59--63",
    abstract = "Recent advances in automatic text summarization have used deep neural networks to generate high-quality abstractive summaries, but the performance of these models strongly depends on large amounts of suitable training data. We propose a new method for mining social media for author-provided summaries, taking advantage of the common practice of appending a {``}TL;DR{''} to long posts. A case study using a large Reddit crawl yields the Webis-TLDR-17 dataset, complementing existing corpora primarily from the news genre. Our technique is likely applicable to other social media sites and general web crawls.",
}