
  • Description:

NEWSROOM is a large dataset for training and evaluating summarization systems. It contains 1.3 million articles and summaries written by authors and editors in the newsrooms of 38 major publications.

Dataset features include:

  • text: input news text.
  • summary: summary of the news.

Additional features:

  • title: news title.
  • url: URL of the news article.
  • date: date of the article.
  • density: extractive density.
  • coverage: extractive coverage.
  • compression: compression ratio.
  • density_bin: low, medium, high.
  • coverage_bin: extractive, abstractive.
  • compression_bin: low, medium, high.
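The density, coverage, and compression features follow the extractive-fragment definitions of Grusky et al. (2018). A simplified whitespace-tokenized sketch of those metrics is below; the released scores use the paper's exact tokenization and fragment algorithm, so values here are illustrative only:

```python
def extractive_fragments(article_tokens, summary_tokens):
    """Greedily find lengths of shared token spans between summary and article
    (a simplified version of the fragment matching in Grusky et al., 2018)."""
    fragments, i = [], 0
    while i < len(summary_tokens):
        best = 0
        for j in range(len(article_tokens)):
            k = 0
            while (i + k < len(summary_tokens) and j + k < len(article_tokens)
                   and summary_tokens[i + k] == article_tokens[j + k]):
                k += 1
            best = max(best, k)
        if best > 0:
            fragments.append(best)
            i += best  # skip past the matched span
        else:
            i += 1     # summary token not found in article
    return fragments

def newsroom_metrics(article, summary):
    """Return (coverage, density, compression) for a non-empty summary."""
    a, s = article.split(), summary.split()
    frags = extractive_fragments(a, s)
    coverage = sum(frags) / len(s)            # fraction of summary in fragments
    density = sum(f * f for f in frags) / len(s)  # mean squared fragment length
    compression = len(a) / len(s)             # article-to-summary word ratio
    return coverage, density, compression
```

A fully extractive summary has coverage 1.0 and a density that grows with fragment length, while an abstractive summary scores near zero on both.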

This dataset can be downloaded upon request. Unzip all the contents (train.jsonl, dev.jsonl, test.jsonl) to the tfds folder.
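Each of the three JSONL files holds one article per line as a JSON object. A minimal reading sketch is below; the per-line field names are assumed to match the feature list above, and the file path is hypothetical:

```python
import json

def read_jsonl(path):
    """Yield one dict per line from a NEWSROOM-style JSONL file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines
                yield json.loads(line)

# Toy record illustrating the assumed per-line schema (not real data).
record = {
    "text": "Full article body ...",
    "summary": "Short summary.",
    "title": "Example",
    "url": "https://example.com",
    "date": "20180101",
}
print(json.dumps(record))
```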

  • Homepage:

  • Source code: tfds.summarization.Newsroom

  • Versions:

    • 1.0.0 (default): No release notes.
  • Download size: Unknown size

  • Dataset size: Unknown size

  • Manual download instructions: This dataset requires you to download the source data manually into download_config.manual_dir (defaults to ~/tensorflow_datasets/downloads/manual/):
    You should download the dataset from the homepage; the webpage requires registration. After downloading, please put the dev.jsonl, test.jsonl and train.jsonl files in the manual_dir.

  • Auto-cached (documentation): Unknown

  • Splits:

Split          Examples
'test'         108,862
'train'        995,041
'validation'   108,837
  • Feature structure:

FeaturesDict({
    'compression': tf.float32,
    'compression_bin': Text(shape=(), dtype=tf.string),
    'coverage': tf.float32,
    'coverage_bin': Text(shape=(), dtype=tf.string),
    'date': Text(shape=(), dtype=tf.string),
    'density': tf.float32,
    'density_bin': Text(shape=(), dtype=tf.string),
    'summary': Text(shape=(), dtype=tf.string),
    'text': Text(shape=(), dtype=tf.string),
    'title': Text(shape=(), dtype=tf.string),
    'url': Text(shape=(), dtype=tf.string),
})
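When working with the raw JSONL files instead of TFDS, it can help to mirror this schema in plain Python. A hypothetical container is sketched below; the field names come from the feature structure above, with Python `float`/`str` standing in for `tf.float32`/`tf.string`:

```python
from dataclasses import dataclass

@dataclass
class NewsroomExample:
    """One NEWSROOM record; floats mirror tf.float32, strings tf.string."""
    text: str
    summary: str
    title: str
    url: str
    date: str
    density: float
    coverage: float
    compression: float
    density_bin: str
    coverage_bin: str
    compression_bin: str

# Illustrative values only, not taken from the dataset.
ex = NewsroomExample(
    text="Full article body ...", summary="Short summary.",
    title="Example", url="https://example.com", date="20180101",
    density=1.5, coverage=0.9, compression=20.0,
    density_bin="medium", coverage_bin="extractive", compression_bin="high")
```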
  • Feature documentation:
Feature          Class   Shape  Dtype       Description
compression      Tensor         tf.float32
compression_bin  Text           tf.string
coverage         Tensor         tf.float32
coverage_bin     Text           tf.string
date             Text           tf.string
density          Tensor         tf.float32
density_bin      Text           tf.string
summary          Text           tf.string
text             Text           tf.string
title            Text           tf.string
url              Text           tf.string
  • Citation:
@article{grusky2018newsroom,
   title={Newsroom: A Dataset of 1.3 Million Summaries with Diverse Extractive Strategies},
   journal={Proceedings of the 2018 Conference of the North American Chapter of
          the Association for Computational Linguistics: Human Language
          Technologies, Volume 1 (Long Papers)},
   publisher={Association for Computational Linguistics},
   author={Grusky, Max and Naaman, Mor and Artzi, Yoav},
   year={2018}
}