• Description:

This corpus contains preprocessed posts from the Reddit dataset. The dataset consists of 3,848,330 posts with an average length of 270 words for the content and 28 words for the summary.

Features include strings: author, body, normalizedBody, content, summary, subreddit, subreddit_id. The content field is used as the document and the summary field is used as the summary.

Split      Examples
'train'    3,848,330
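
The 'train' split can be loaded directly with TensorFlow Datasets. The snippet below is a minimal sketch, assuming the corpus is registered under the TFDS name 'reddit' and that the tensorflow-datasets package is installed.

    import tensorflow_datasets as tfds

    # Load the single 'train' split listed above (3,848,330 examples).
    ds = tfds.load('reddit', split='train')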
  • Features (see the access sketch after this list):
    'author': tf.string,
    'body': tf.string,
    'content': tf.string,
    'id': tf.string,
    'normalizedBody': tf.string,
    'subreddit': tf.string,
    'subreddit_id': tf.string,
    'summary': tf.string,
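
Each example is a dictionary of tf.string tensors keyed by the feature names above. The following is a hedged sketch of reading the document ('content') and its reference summary ('summary') from the ds object loaded in the previous sketch:

    # Inspect one record; the byte strings are decoded for display.
    for example in ds.take(1):
        document = example['content'].numpy().decode('utf-8')
        reference = example['summary'].numpy().decode('utf-8')
        print(document[:200])
        print(reference)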
  • Citation:
    title = "{TL};{DR}: Mining {R}eddit to Learn Automatic Summarization",
    author = {V{\"o}lske, Michael  and
      Potthast, Martin  and
      Syed, Shahbaz  and
      Stein, Benno},
    booktitle = "Proceedings of the Workshop on New Frontiers in Summarization",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "",
    doi = "10.18653/v1/W17-4508",
    pages = "59--63",
    abstract = "Recent advances in automatic text summarization have used deep neural networks to generate high-quality abstractive summaries, but the performance of these models strongly depends on large amounts of suitable training data. We propose a new method for mining social media for author-provided summaries, taking advantage of the common practice of appending a {``}TL;DR{''} to long posts. A case study using a large Reddit crawl yields the Webis-TLDR-17 dataset, complementing existing corpora primarily from the news genre. Our technique is likely applicable to other social media sites and general web crawls.",