reddit

  • Description:

This corpus contains preprocessed posts from the Reddit dataset. The dataset consists of 3,848,330 posts, with an average length of 270 words for content and 28 words for the summary.

Features include strings: author, body, normalizedBody, content, summary, subreddit, subreddit_id. Content is used as the document and summary is used as the summary.

  • Splits:
Split     Examples
'train'   3,848,330
  • Features:
FeaturesDict({
    'author': tf.string,
    'body': tf.string,
    'content': tf.string,
    'id': tf.string,
    'normalizedBody': tf.string,
    'subreddit': tf.string,
    'subreddit_id': tf.string,
    'summary': tf.string,
})
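
For reference, a minimal loading sketch in Python using tensorflow_datasets; the dataset name "reddit" and the single 'train' split are taken from this entry, and the field accesses assume the FeaturesDict above:

import tensorflow_datasets as tfds

# Load the Webis-TLDR-17 Reddit corpus; only a 'train' split is listed above.
ds = tfds.load("reddit", split="train")

# Each example is a dict of tf.string tensors matching the FeaturesDict:
# "content" is the source document, "summary" the author-written TL;DR.
for example in ds.take(1):
    document = example["content"].numpy().decode("utf-8")
    summary = example["summary"].numpy().decode("utf-8")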
  • Citation:
@inproceedings{volske-etal-2017-tl,
    title = "{TL};{DR}: Mining {R}eddit to Learn Automatic Summarization",
    author = {V{\"o}lske, Michael  and
      Potthast, Martin  and
      Syed, Shahbaz  and
      Stein, Benno},
    booktitle = "Proceedings of the Workshop on New Frontiers in Summarization",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/W17-4508",
    doi = "10.18653/v1/W17-4508",
    pages = "59--63",
    abstract = "Recent advances in automatic text summarization have used deep neural networks to generate high-quality abstractive summaries, but the performance of these models strongly depends on large amounts of suitable training data. We propose a new method for mining social media for author-provided summaries, taking advantage of the common practice of appending a {``}TL;DR{''} to long posts. A case study using a large Reddit crawl yields the Webis-TLDR-17 dataset, complementing existing corpora primarily from the news genre. Our technique is likely applicable to other social media sites and general web crawls.",
}