natural_questions


  • Description:

The NQ corpus contains questions from real users, and it requires QA systems to read and comprehend an entire Wikipedia article that may or may not contain the answer to the question. The inclusion of real user questions, and the requirement that solutions read an entire page to find the answer, make NQ a more realistic and challenging task than prior QA datasets.

Split         Examples
'train'       307,373
'validation'  7,830
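
A minimal loading sketch, assuming the tensorflow-datasets package is installed; tfds.load downloads and prepares the data on first use, and the split sizes above can be read from the returned DatasetInfo:

import tensorflow_datasets as tfds

# Downloads and prepares the dataset on first use (the default config is large).
ds, info = tfds.load('natural_questions', split='train', with_info=True)

print(info.splits['train'].num_examples)       # 307373
print(info.splits['validation'].num_examples)  # 7830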
  • Citation:

@article{47761,
  title   = {Natural Questions: a Benchmark for Question Answering Research},
  author  = {Tom Kwiatkowski and Jennimaria Palomaki and Olivia Redfield and Michael Collins and Ankur Parikh and Chris Alberti and Danielle Epstein and Illia Polosukhin and Matthew Kelcey and Jacob Devlin and Kenton Lee and Kristina N. Toutanova and Llion Jones and Ming-Wei Chang and Andrew Dai and Jakob Uszkoreit and Quoc Le and Slav Petrov},
  year    = {2019},
  journal = {Transactions of the Association for Computational Linguistics}
}

natural_questions/default (default config)

  • Config description: Default natural_questions config

  • Dataset size: 90.26 GiB

  • Feature structure:

FeaturesDict({
    'annotations': Sequence({
        'id': tf.string,
        'long_answer': FeaturesDict({
            'end_byte': tf.int64,
            'end_token': tf.int64,
            'start_byte': tf.int64,
            'start_token': tf.int64,
        }),
        'short_answers': Sequence({
            'end_byte': tf.int64,
            'end_token': tf.int64,
            'start_byte': tf.int64,
            'start_token': tf.int64,
            'text': Text(shape=(), dtype=tf.string),
        }),
        'yes_no_answer': ClassLabel(shape=(), dtype=tf.int64, num_classes=2),
    }),
    'document': FeaturesDict({
        'html': Text(shape=(), dtype=tf.string),
        'title': Text(shape=(), dtype=tf.string),
        'tokens': Sequence({
            'is_html': tf.bool,
            'token': Text(shape=(), dtype=tf.string),
        }),
        'url': Text(shape=(), dtype=tf.string),
    }),
    'id': tf.string,
    'question': FeaturesDict({
        'text': Text(shape=(), dtype=tf.string),
        'tokens': Sequence(tf.string),
    }),
})
  • Feature documentation:
Feature                                Class             Shape    Dtype      Description
                                       FeaturesDict
annotations                            Sequence
annotations/id                         Tensor                     tf.string
annotations/long_answer                FeaturesDict
annotations/long_answer/end_byte       Tensor                     tf.int64
annotations/long_answer/end_token      Tensor                     tf.int64
annotations/long_answer/start_byte     Tensor                     tf.int64
annotations/long_answer/start_token    Tensor                     tf.int64
annotations/short_answers              Sequence
annotations/short_answers/end_byte     Tensor                     tf.int64
annotations/short_answers/end_token    Tensor                     tf.int64
annotations/short_answers/start_byte   Tensor                     tf.int64
annotations/short_answers/start_token  Tensor                     tf.int64
annotations/short_answers/text         Text                       tf.string
annotations/yes_no_answer              ClassLabel                 tf.int64
document                               FeaturesDict
document/html                          Text                       tf.string
document/title                         Text                       tf.string
document/tokens                        Sequence
document/tokens/is_html                Tensor                     tf.bool
document/tokens/token                  Text                       tf.string
document/url                           Text                       tf.string
id                                     Tensor                     tf.string
question                               FeaturesDict
question/text                          Text                       tf.string
question/tokens                        Sequence(Tensor)  (None,)  tf.string
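
A minimal consumption sketch for the default config, assuming it has already been prepared locally. The long-answer byte offsets index into the raw document HTML; in the NQ format, a start_byte of -1 marks an annotation with no long answer:

import tensorflow_datasets as tfds

ds = tfds.load('natural_questions/default', split='validation')

for ex in tfds.as_numpy(ds.take(1)):
    print(ex['question']['text'].decode('utf-8'))
    html = ex['document']['html']                              # raw page bytes
    start = ex['annotations']['long_answer']['start_byte'][0]  # first annotator
    end = ex['annotations']['long_answer']['end_byte'][0]
    if start >= 0:                    # -1: this annotator found no long answer
        print(html[start:end][:200])  # first bytes of the long-answer span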

natural_questions/longt5

  • Config description: natural_questions preprocessed as in the LongT5 benchmark

  • Dataset size: 8.91 GiB

  • Feature structure:

FeaturesDict({
    'all_answers': Sequence(Text(shape=(), dtype=tf.string)),
    'answer': Text(shape=(), dtype=tf.string),
    'context': Text(shape=(), dtype=tf.string),
    'id': Text(shape=(), dtype=tf.string),
    'question': Text(shape=(), dtype=tf.string),
    'title': Text(shape=(), dtype=tf.string),
})
  • Feature documentation:
Feature      Class           Shape    Dtype      Description
             FeaturesDict
all_answers  Sequence(Text)  (None,)  tf.string
answer       Text                     tf.string
context      Text                     tf.string
id           Text                     tf.string
question     Text                     tf.string
title        Text                     tf.string
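
A minimal sketch for this config, again assuming a local copy; every feature is flat text, so examples can feed a sequence-to-sequence pipeline directly:

import tensorflow_datasets as tfds

ds = tfds.load('natural_questions/longt5', split='train')

for ex in tfds.as_numpy(ds.take(1)):
    print('question:', ex['question'].decode('utf-8'))
    print('answer:', ex['answer'].decode('utf-8'))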