- Description:
Adversarial NLI (ANLI) is a large-scale NLI benchmark dataset, collected via an iterative, adversarial human-and-model-in-the-loop procedure.
Additional Documentation: Explore on Papers With Code
Homepage: https://github.com/facebookresearch/anli
Source code: `tfds.datasets.anli.Builder`
Versions: 0.1.0 (default): No release notes.
Download size: 17.76 MiB
Auto-cached (documentation): Yes
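A minimal sketch of inspecting and preparing the dataset with TensorFlow Datasets (assumes `tensorflow-datasets` is installed; `anli/r1` is the default config documented below):

```python
import tensorflow_datasets as tfds

# Inspect the builder's metadata without downloading anything.
builder = tfds.builder("anli/r1")
print(builder.info.features)
print(builder.info.splits)

# Download and prepare the dataset locally (download size listed above).
builder.download_and_prepare()
```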
Feature structure:

```python
FeaturesDict({
    'context': Text(shape=(), dtype=string),
    'hypothesis': Text(shape=(), dtype=string),
    'label': ClassLabel(shape=(), dtype=int64, num_classes=3),
    'uid': Text(shape=(), dtype=string),
})
```
- Feature documentation:

| Feature | Class | Shape | Dtype | Description |
|---|---|---|---|---|
| | FeaturesDict | | | |
| context | Text | | string | |
| hypothesis | Text | | string | |
| label | ClassLabel | | int64 | |
| uid | Text | | string | |
Supervised keys (See `as_supervised` doc): `None` (so `as_supervised=True` is not available; see the loading sketch below).
Figure (tfds.show_examples): Not supported.
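Because the supervised keys are `None`, `tfds.load(..., as_supervised=True)` cannot be used. A minimal sketch of loading the default config and building `((context, hypothesis), label)` pairs manually (assumes `tensorflow-datasets` is installed):

```python
import tensorflow_datasets as tfds

# Each element is a dict matching the FeaturesDict above.
ds = tfds.load("anli/r1", split="train")

# Supervised keys are None, so build ((context, hypothesis), label)
# pairs explicitly instead of using as_supervised=True.
ds = ds.map(lambda ex: ((ex["context"], ex["hypothesis"]), ex["label"]))

for (context, hypothesis), label in ds.take(1):
    print(context.numpy().decode("utf-8"))
    print(hypothesis.numpy().decode("utf-8"))
    print(int(label.numpy()))
```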
Citation:

```bibtex
@inproceedings{Nie2019AdversarialNA,
  title  = "Adversarial NLI: A New Benchmark for Natural Language Understanding",
  author = "Nie, Yixin and
            Williams, Adina and
            Dinan, Emily and
            Bansal, Mohit and
            Weston, Jason and
            Kiela, Douwe",
  year   = "2019",
  url    = "https://arxiv.org/abs/1910.14599"
}
```
anli/r1 (default config)
Config description: Round One
Dataset size: 9.04 MiB
Splits:

Split | Examples |
---|---|
'test' | 1,000 |
'train' | 16,946 |
'validation' | 1,000 |
- Examples (tfds.as_dataframe):
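The rendered examples are not reproduced here; a small sketch of viewing a few rows with `tfds.as_dataframe` (assumes `tensorflow-datasets` and pandas are installed):

```python
import tensorflow_datasets as tfds

# Render a few anli/r1 validation examples as a pandas DataFrame.
ds, info = tfds.load("anli/r1", split="validation", with_info=True)
df = tfds.as_dataframe(ds.take(3), info)
print(df[["uid", "context", "hypothesis", "label"]])
```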
anli/r2
Config description: Round Two
Dataset size: 22.39 MiB
Splits:

Split | Examples |
---|---|
'test' | 1,000 |
'train' | 45,460 |
'validation' | 1,000 |
- Examples (tfds.as_dataframe):
anli/r3
Config description: Round Three
Dataset size: 47.03 MiB
Splits:

Split | Examples |
---|---|
'test' | 1,200 |
'train' | 100,459 |
'validation' | 1,200 |
- Examples (tfds.as_dataframe):
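ANLI results are commonly reported for models trained on all three rounds together; a minimal sketch (not part of the builder itself) of concatenating the per-round training splits:

```python
import tensorflow_datasets as tfds

# Combine the training splits of rounds 1-3
# (16,946 + 45,460 + 100,459 examples).
train = (
    tfds.load("anli/r1", split="train")
    .concatenate(tfds.load("anli/r2", split="train"))
    .concatenate(tfds.load("anli/r3", split="train"))
)
```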