anli

  • Description:

Adversarial NLI (ANLI) is a large-scale NLI benchmark dataset, collected via an iterative, adversarial human-and-model-in-the-loop procedure.

  • Features:

FeaturesDict({
    'context': Text(shape=(), dtype=tf.string),
    'hypothesis': Text(shape=(), dtype=tf.string),
    'label': ClassLabel(shape=(), dtype=tf.int64, num_classes=3),
    'uid': Text(shape=(), dtype=tf.string),
})
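The feature spec above can be consumed with TensorFlow Datasets via `tfds.load`. A minimal loading sketch, assuming `tensorflow-datasets` is installed (the `anli_name` and `load_anli` helpers are illustrative, not part of the TFDS API; the first load triggers a download):

```python
ANLI_CONFIGS = ("r1", "r2", "r3")


def anli_name(round_name: str) -> str:
    """Build the TFDS dataset name for one ANLI round, e.g. 'anli/r1'."""
    if round_name not in ANLI_CONFIGS:
        raise ValueError(f"unknown ANLI round: {round_name!r}")
    return f"anli/{round_name}"


def load_anli(round_name: str = "r1", split: str = "train"):
    """Load one ANLI round and its metadata; downloads on first call."""
    # Lazy import so the helpers above stay usable without TFDS installed.
    import tensorflow_datasets as tfds

    ds, info = tfds.load(anli_name(round_name), split=split, with_info=True)
    return ds, info


if __name__ == "__main__":
    ds, _ = load_anli("r1", "validation")
    for ex in ds.take(1):
        # Each example is a dict with 'uid', 'context', 'hypothesis', 'label'.
        print(ex["uid"].numpy(), ex["label"].numpy())
```

Note that `label` is a `ClassLabel` with three classes; check `info.features['label'].names` for the exact class-name ordering on your install.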
  • Citation:

@inproceedings{Nie2019AdversarialNA,
    title = "Adversarial NLI: A New Benchmark for Natural Language Understanding",
    author = "Nie, Yixin and
      Williams, Adina and
      Dinan, Emily and
      Bansal, Mohit and
      Weston, Jason and
      Kiela, Douwe",
    year = "2019",
    url = "https://arxiv.org/abs/1910.14599"
}

anli/r1 (default config)

  • Config description: Round One

  • Dataset size: 9.04 MiB

  • Splits:

Split         Examples
'test'        1,000
'train'       16,946
'validation'  1,000

anli/r2

  • Config description: Round Two

  • Dataset size: 22.39 MiB

  • Splits:

Split         Examples
'test'        1,000
'train'       45,460
'validation'  1,000

anli/r3

  • Config description: Round Three

  • Dataset size: 47.03 MiB

  • Splits:

Split         Examples
'test'        1,200
'train'       100,459
'validation'  1,200
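Each round is strictly larger than the last; a quick tally of the training splits listed in the tables above (figures copied verbatim from this page):

```python
# Training-split sizes per round, as listed in the per-config tables above.
TRAIN_SIZES = {"r1": 16_946, "r2": 45_460, "r3": 100_459}

total_train = sum(TRAIN_SIZES.values())
print(total_train)  # 162865 training examples across all three rounds
```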