unified_qa

  • Description:

The UnifiedQA benchmark consists of 20 main question answering (QA) datasets (each of which may have multiple versions) that target different formats as well as various complex linguistic phenomena. These datasets are grouped into several formats/categories, including: extractive QA, abstractive QA, multiple-choice QA, and yes/no QA. Additionally, contrast sets are used for several datasets (denoted with "contrast_sets"). These evaluation sets are expert-generated perturbations that deviate from the patterns common in the original dataset. For several datasets that do not come with evidence paragraphs, two variants are included: one where the datasets are used as-is, and another that uses paragraphs fetched via an information retrieval system as additional evidence, indicated with "_ir" tags.

More information can be found at: https://github.com/allenai/unifiedqa

FeaturesDict({
    'input': string,
    'output': string,
})
  • Feature documentation:
Feature Class Shape Dtype Description
FeaturesDict
input Tensor string
output Tensor string
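
For orientation, a minimal loading sketch (assuming the tensorflow_datasets package is installed and that this benchmark is registered under the name unified_qa, as in the catalog entry above); each example is a plain input/output string pair:

import tensorflow_datasets as tfds

# Load the default config (unified_qa/ai2_science_elementary), train split.
ds = tfds.load("unified_qa", split="train")

# Each example is a dict of two string tensors: 'input' and 'output'.
for example in tfds.as_numpy(ds.take(1)):
    print(example["input"].decode("utf-8"))   # question text (with choices/context where the config provides them)
    print(example["output"].decode("utf-8"))  # answer string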

unified_qa/ai2_science_elementary (default config)

  • Config description: The AI2 Science Questions dataset consists of questions used in student assessments in the United States at both elementary and middle school grade levels. Each question is in 4-way multiple-choice format and may or may not include a diagram element. This set consists of questions used at the elementary school grade levels.

  • Download size: 345.59 KiB

  • Dataset size: 390.02 KiB

  • Auto-cached (documentation): Yes

  • Splits:

Split Examples
'test' 542
'train' 623
'validation' 123
  • Citation:
http://data.allenai.org/ai2-science-questions

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."
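
For reference, a hedged sketch of requesting this specific config and a single split by name (config and split names as listed in the entry above); with_info also exposes the split sizes and the input/output feature spec:

import tensorflow_datasets as tfds

# Request one config and split explicitly; info carries split sizes and the feature spec.
ds, info = tfds.load("unified_qa/ai2_science_elementary", split="test", with_info=True)
print(info.splits["test"].num_examples)  # 542 according to the table above
print(info.features)                     # FeaturesDict({'input': string, 'output': string})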

unified_qa/ai2_science_middle

  • Config description: The AI2 Science Questions dataset consists of questions used in student assessments in the United States at both elementary and middle school grade levels. Each question is in 4-way multiple-choice format and may or may not include a diagram element. This set consists of questions used at the middle school grade levels.

  • Download size: 428.41 KiB

  • Dataset size: 477.40 KiB

  • Auto-cached (documentation): Yes

  • Splits:

Split Examples
'test' 679
'train' 605
'validation' 125
  • Citation:
http://data.allenai.org/ai2-science-questions

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/ambigqa

  • Config description: AmbigQA is an open-domain question answering task that involves finding every plausible answer and then rewriting the question for each one to resolve the ambiguity.

  • Download size: 2.27 MiB

  • Dataset size: 3.04 MiB

  • Auto-cached (documentation): Yes

  • Splits:

Split Examples
'train' 19,806
'validation' 5,674
  • Citation:
@inproceedings{min-etal-2020-ambigqa,
    title = "{A}mbig{QA}: Answering Ambiguous Open-domain Questions",
    author = "Min, Sewon  and
      Michael, Julian  and
      Hajishirzi, Hannaneh  and
      Zettlemoyer, Luke",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.emnlp-main.466",
    doi = "10.18653/v1/2020.emnlp-main.466",
    pages = "5783--5797",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/arc_easy

  • Config description: This dataset consists of genuine grade-school level, multiple-choice science questions, assembled to encourage research in advanced question answering. The dataset is partitioned into a Challenge Set and an Easy Set, where the former contains only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. This set consists of the "easy" questions.

  • Download size: 1.24 MiB

  • Dataset size: 1.42 MiB

  • Auto-cached (documentation): Yes

  • Splits:

Split Examples
'test' 2,376
'train' 2,251
'validation' 570
  • Citation:
@article{clark2018think,
    title={Think you have solved question answering? try arc, the ai2 reasoning challenge},
    author={Clark, Peter and Cowhey, Isaac and Etzioni, Oren and Khot, Tushar and Sabharwal, Ashish and Schoenick, Carissa and Tafjord, Oyvind},
    journal={arXiv preprint arXiv:1803.05457},
    year={2018}
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/arc_easy_dev

  • Config description: This dataset consists of genuine grade-school level, multiple-choice science questions, assembled to encourage research in advanced question answering. The dataset is partitioned into a Challenge Set and an Easy Set, where the former contains only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. This set consists of the "easy" questions.

  • Download size: 1.24 MiB

  • Dataset size: 1.42 MiB

  • Auto-cached (documentation): Yes

  • Splits:

Split Examples
'test' 2,376
'train' 2,251
'validation' 570
  • Citation:
@article{clark2018think,
    title={Think you have solved question answering? try arc, the ai2 reasoning challenge},
    author={Clark, Peter and Cowhey, Isaac and Etzioni, Oren and Khot, Tushar and Sabharwal, Ashish and Schoenick, Carissa and Tafjord, Oyvind},
    journal={arXiv preprint arXiv:1803.05457},
    year={2018}
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/arc_easy_with_ir

  • Config description: This dataset consists of genuine grade-school level, multiple-choice science questions, assembled to encourage research in advanced question answering. The dataset is partitioned into a Challenge Set and an Easy Set, where the former contains only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. This set consists of the "easy" questions. This version includes paragraphs fetched via an information retrieval system as additional evidence.

  • Download size: 7.00 MiB

  • Dataset size: 7.17 MiB

  • Auto-cached (documentation): Yes

  • Splits:

Split Examples
'test' 2,376
'train' 2,251
'validation' 570
  • Citation:
@article{clark2018think,
    title={Think you have solved question answering? try arc, the ai2 reasoning challenge},
    author={Clark, Peter and Cowhey, Isaac and Etzioni, Oren and Khot, Tushar and Sabharwal, Ashish and Schoenick, Carissa and Tafjord, Oyvind},
    journal={arXiv preprint arXiv:1803.05457},
    year={2018}
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/arc_easy_with_ir_dev

  • Config description: This dataset consists of genuine grade-school level, multiple-choice science questions, assembled to encourage research in advanced question answering. The dataset is partitioned into a Challenge Set and an Easy Set, where the former contains only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. This set consists of the "easy" questions. This version includes paragraphs fetched via an information retrieval system as additional evidence.

  • Download size: 7.00 MiB

  • Dataset size: 7.17 MiB

  • Auto-cached (documentation): Yes

  • Splits:

Split Examples
'test' 2,376
'train' 2,251
'validation' 570
  • Citation:
@article{clark2018think,
    title={Think you have solved question answering? try arc, the ai2 reasoning challenge},
    author={Clark, Peter and Cowhey, Isaac and Etzioni, Oren and Khot, Tushar and Sabharwal, Ashish and Schoenick, Carissa and Tafjord, Oyvind},
    journal={arXiv preprint arXiv:1803.05457},
    year={2018}
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/arc_hard

  • Config description: This dataset consists of genuine grade-school level, multiple-choice science questions, assembled to encourage research in advanced question answering. The dataset is partitioned into a Challenge Set and an Easy Set, where the former contains only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. This set consists of the "hard" questions.

  • Download size: 758.03 KiB

  • Dataset size: 848.28 KiB

  • Auto-cached (documentation): Yes

  • Splits:

Split Examples
'test' 1,172
'train' 1,119
'validation' 299
  • Citation:
@article{clark2018think,
    title={Think you have solved question answering? try arc, the ai2 reasoning challenge},
    author={Clark, Peter and Cowhey, Isaac and Etzioni, Oren and Khot, Tushar and Sabharwal, Ashish and Schoenick, Carissa and Tafjord, Oyvind},
    journal={arXiv preprint arXiv:1803.05457},
    year={2018}
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/arc_hard_dev

  • Config description: This dataset consists of genuine grade-school level, multiple-choice science questions, assembled to encourage research in advanced question answering. The dataset is partitioned into a Challenge Set and an Easy Set, where the former contains only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. This set consists of the "hard" questions.

  • Download size: 758.03 KiB

  • Dataset size: 848.28 KiB

  • Auto-cached (documentation): Yes

  • Splits:

Split Examples
'test' 1,172
'train' 1,119
'validation' 299
  • Citation:
@article{clark2018think,
    title={Think you have solved question answering? try arc, the ai2 reasoning challenge},
    author={Clark, Peter and Cowhey, Isaac and Etzioni, Oren and Khot, Tushar and Sabharwal, Ashish and Schoenick, Carissa and Tafjord, Oyvind},
    journal={arXiv preprint arXiv:1803.05457},
    year={2018}
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/arc_hard_with_ir

  • Config description: This dataset consists of genuine grade-school level, multiple-choice science questions, assembled to encourage research in advanced question answering. The dataset is partitioned into a Challenge Set and an Easy Set, where the former contains only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. This set consists of the "hard" questions. This version includes paragraphs fetched via an information retrieval system as additional evidence.

  • Download size: 3.53 MiB

  • Dataset size: 3.62 MiB

  • Auto-cached (documentation): Yes

  • Splits:

Split Examples
'test' 1,172
'train' 1,119
'validation' 299
  • Citation:
@article{clark2018think,
    title={Think you have solved question answering? try arc, the ai2 reasoning challenge},
    author={Clark, Peter and Cowhey, Isaac and Etzioni, Oren and Khot, Tushar and Sabharwal, Ashish and Schoenick, Carissa and Tafjord, Oyvind},
    journal={arXiv preprint arXiv:1803.05457},
    year={2018}
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/arc_hard_with_ir_dev

  • Config description: This dataset consists of genuine grade-school level, multiple-choice science questions, assembled to encourage research in advanced question answering. The dataset is partitioned into a Challenge Set and an Easy Set, where the former contains only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. This set consists of the "hard" questions. This version includes paragraphs fetched via an information retrieval system as additional evidence.

  • Download size: 3.53 MiB

  • Dataset size: 3.62 MiB

  • Auto-cached (documentation): Yes

  • Splits:

Split Examples
'test' 1,172
'train' 1,119
'validation' 299
  • Citation:
@article{clark2018think,
    title={Think you have solved question answering? try arc, the ai2 reasoning challenge},
    author={Clark, Peter and Cowhey, Isaac and Etzioni, Oren and Khot, Tushar and Sabharwal, Ashish and Schoenick, Carissa and Tafjord, Oyvind},
    journal={arXiv preprint arXiv:1803.05457},
    year={2018}
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/boolq

  • Config description: BoolQ is a question answering dataset for yes/no questions. These questions are naturally occurring: they are generated in unprompted and unconstrained settings. Each example is a triplet of (question, passage, answer), with the page title as optional additional context. The text-pair classification setup is similar to existing natural language inference tasks.

  • Download size: 7.77 MiB

  • Dataset size: 8.20 MiB

  • Auto-cached (documentation): Yes

  • Splits:

Split Examples
'train' 9,427
'validation' 3,270
  • Citation:
@inproceedings{clark-etal-2019-boolq,
    title = "{B}ool{Q}: Exploring the Surprising Difficulty of Natural Yes/No Questions",
    author = "Clark, Christopher  and
      Lee, Kenton  and
      Chang, Ming-Wei  and
      Kwiatkowski, Tom  and
      Collins, Michael  and
      Toutanova, Kristina",
    booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)",
    month = jun,
    year = "2019",
    address = "Minneapolis, Minnesota",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/N19-1300",
    doi = "10.18653/v1/N19-1300",
    pages = "2924--2936",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/boolq_np

  • Config description: BoolQ is a question answering dataset for yes/no questions. These questions are naturally occurring: they are generated in unprompted and unconstrained settings. Each example is a triplet of (question, passage, answer), with the page title as optional additional context. The text-pair classification setup is similar to existing natural language inference tasks. This version adds natural perturbations to the original version.

  • Download size: 10.80 MiB

  • Dataset size: 11.40 MiB

  • Auto-cached (documentation): Yes

  • Splits:

Split Examples
'train' 9,727
'validation' 7,596
  • Citation:
@inproceedings{khashabi-etal-2020-bang,
    title = "More Bang for Your Buck: Natural Perturbation for Robust Question Answering",
    author = "Khashabi, Daniel  and
      Khot, Tushar  and
      Sabharwal, Ashish",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.emnlp-main.12",
    doi = "10.18653/v1/2020.emnlp-main.12",
    pages = "163--170",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/commonsenseqa

  • Config description: CommonsenseQA is a new multiple-choice question answering dataset that requires different types of commonsense knowledge to predict the correct answers. It contains questions with one correct answer and four distractor answers.

  • Download size: 1.79 MiB

  • Dataset size: 2.19 MiB

  • Auto-cached (documentation): Yes

  • Splits:

Split Examples
'test' 1,140
'train' 9,741
'validation' 1,221
  • Citation:
@inproceedings{talmor-etal-2019-commonsenseqa,
    title = "{C}ommonsense{QA}: A Question Answering Challenge Targeting Commonsense Knowledge",
    author = "Talmor, Alon  and
      Herzig, Jonathan  and
      Lourie, Nicholas  and
      Berant, Jonathan",
    booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)",
    month = jun,
    year = "2019",
    address = "Minneapolis, Minnesota",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/N19-1421",
    doi = "10.18653/v1/N19-1421",
    pages = "4149--4158",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/commonsenseqa_test

  • Config description: CommonsenseQA is a new multiple-choice question answering dataset that requires different types of commonsense knowledge to predict the correct answers. It contains questions with one correct answer and four distractor answers.

  • Download size: 1.79 MiB

  • Dataset size: 2.19 MiB

  • Auto-cached (documentation): Yes

  • Splits:

Split Examples
'test' 1,140
'train' 9,741
'validation' 1,221
  • Citation:
@inproceedings{talmor-etal-2019-commonsenseqa,
    title = "{C}ommonsense{QA}: A Question Answering Challenge Targeting Commonsense Knowledge",
    author = "Talmor, Alon  and
      Herzig, Jonathan  and
      Lourie, Nicholas  and
      Berant, Jonathan",
    booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)",
    month = jun,
    year = "2019",
    address = "Minneapolis, Minnesota",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/N19-1421",
    doi = "10.18653/v1/N19-1421",
    pages = "4149--4158",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/contrast_sets_boolq

  • Config description: BoolQ is a question answering dataset for yes/no questions. These questions are naturally occurring: they are generated in unprompted and unconstrained settings. Each example is a triplet of (question, passage, answer), with the page title as optional additional context. The text-pair classification setup is similar to existing natural language inference tasks. This version uses contrast sets. These evaluation sets are expert-generated perturbations that deviate from the patterns common in the original dataset.

  • Download size: 438.51 KiB

  • Dataset size: 462.35 KiB

  • Auto-cached (documentation): Yes

  • Splits:

Split Examples
'train' 340
'validation' 340
  • Citation:
@inproceedings{clark-etal-2019-boolq,
    title = "{B}ool{Q}: Exploring the Surprising Difficulty of Natural Yes/No Questions",
    author = "Clark, Christopher  and
      Lee, Kenton  and
      Chang, Ming-Wei  and
      Kwiatkowski, Tom  and
      Collins, Michael  and
      Toutanova, Kristina",
    booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)",
    month = jun,
    year = "2019",
    address = "Minneapolis, Minnesota",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/N19-1300",
    doi = "10.18653/v1/N19-1300",
    pages = "2924--2936",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/contrast_sets_drop

  • Config description: DROP is a crowdsourced, adversarially-created QA benchmark in which a system must resolve references in a question, perhaps to multiple input positions, and perform discrete operations over them (such as addition, counting, or sorting). These operations require a much more comprehensive understanding of the content of paragraphs than was necessary for prior datasets. This version uses contrast sets. These evaluation sets are expert-generated perturbations that deviate from the patterns common in the original dataset.

  • Download size: 2.20 MiB

  • Dataset size: 2.26 MiB

  • Auto-cached (documentation): Yes

  • Splits:

Split Examples
'train' 947
'validation' 947
  • Citation:
@inproceedings{dua-etal-2019-drop,
    title = "{DROP}: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs",
    author = "Dua, Dheeru  and
      Wang, Yizhong  and
      Dasigi, Pradeep  and
      Stanovsky, Gabriel  and
      Singh, Sameer  and
      Gardner, Matt",
    booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)",
    month = jun,
    year = "2019",
    address = "Minneapolis, Minnesota",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/N19-1246",
    doi = "10.18653/v1/N19-1246",
    pages = "2368--2378",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/contrast_sets_quoref

  • Config description: This dataset tests the coreferential reasoning capability of reading comprehension systems. In this span-selection benchmark containing questions over paragraphs from Wikipedia, a system must resolve hard coreferences before selecting the appropriate spans in the paragraphs to answer the questions. This version uses contrast sets. These evaluation sets are expert-generated perturbations that deviate from the patterns common in the original dataset.

  • Download size: 2.60 MiB

  • Dataset size: 2.65 MiB

  • Auto-cached (documentation): Yes

  • Splits:

Split Examples
'train' 700
'validation' 700
  • Citation:
@inproceedings{dasigi-etal-2019-quoref,
    title = "{Q}uoref: A Reading Comprehension Dataset with Questions Requiring Coreferential Reasoning",
    author = "Dasigi, Pradeep  and
      Liu, Nelson F.  and
      Marasovi{\'c}, Ana  and
      Smith, Noah A.  and
      Gardner, Matt",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
    month = nov,
    year = "2019",
    address = "Hong Kong, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D19-1606",
    doi = "10.18653/v1/D19-1606",
    pages = "5925--5932",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/contrast_sets_ropes

  • Config description: This dataset tests a system's ability to apply knowledge from a text passage to a new situation. A system is presented with a background passage containing a causal or qualitative relation (e.g., "animal pollinators increase the efficiency of flower fertilization"), a novel situation that uses this background, and questions that require reasoning about the effects of the relationships in the background passage in the context of the situation. This version uses contrast sets. These evaluation sets are expert-generated perturbations that deviate from the patterns common in the original dataset.

  • Download size: 1.97 MiB

  • Dataset size: 2.04 MiB

  • Auto-cached (documentation): Yes

  • Splits:

Split Examples
'train' 974
'validation' 974
  • Citation:
@inproceedings{lin-etal-2019-reasoning,
    title = "Reasoning Over Paragraph Effects in Situations",
    author = "Lin, Kevin  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Gardner, Matt",
    booktitle = "Proceedings of the 2nd Workshop on Machine Reading for Question Answering",
    month = nov,
    year = "2019",
    address = "Hong Kong, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D19-5808",
    doi = "10.18653/v1/D19-5808",
    pages = "58--62",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/drop

  • Config description: DROP is a crowdsourced, adversarially-created QA benchmark in which a system must resolve references in a question, perhaps to multiple input positions, and perform discrete operations over them (such as addition, counting, or sorting). These operations require a much more comprehensive understanding of the content of paragraphs than was necessary for prior datasets.

  • Download size: 105.18 MiB

  • Dataset size: 108.16 MiB

  • Auto-cached (documentation): Yes

  • Splits:

Split Examples
'train' 77,399
'validation' 9,536
  • Citation:
@inproceedings{dua-etal-2019-drop,
    title = "{DROP}: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs",
    author = "Dua, Dheeru  and
      Wang, Yizhong  and
      Dasigi, Pradeep  and
      Stanovsky, Gabriel  and
      Singh, Sameer  and
      Gardner, Matt",
    booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)",
    month = jun,
    year = "2019",
    address = "Minneapolis, Minnesota",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/N19-1246",
    doi = "10.18653/v1/N19-1246",
    pages = "2368--2378",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/mctest

  • Config description: MCTest requires machines to answer multiple-choice reading comprehension questions about fictional stories, directly tackling the high-level goal of open-domain machine comprehension. Reading comprehension can test advanced abilities such as causal reasoning and understanding the world; yet, by being multiple choice, it still provides a clear metric. By being fictional, the answer can typically be found only in the story itself. The stories and questions are also carefully limited to those a young child would understand, reducing the world knowledge required for the task.

  • Download size: 2.14 MiB

  • Dataset size: 2.20 MiB

  • Auto-cached (documentation): Yes

  • Splits:

Split Examples
'train' 1,480
'validation' 320
  • Citation:
@inproceedings{richardson-etal-2013-mctest,
    title = "{MCT}est: A Challenge Dataset for the Open-Domain Machine Comprehension of Text",
    author = "Richardson, Matthew  and
      Burges, Christopher J.C.  and
      Renshaw, Erin",
    booktitle = "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
    month = oct,
    year = "2013",
    address = "Seattle, Washington, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D13-1020",
    pages = "193--203",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/mctest_corrected_the_separator

  • Config description: MCTest requires machines to answer multiple-choice reading comprehension questions about fictional stories, directly tackling the high-level goal of open-domain machine comprehension. Reading comprehension can test advanced abilities such as causal reasoning and understanding the world; yet, by being multiple choice, it still provides a clear metric. By being fictional, the answer can typically be found only in the story itself. The stories and questions are also carefully limited to those a young child would understand, reducing the world knowledge required for the task.

  • Download size: 2.15 MiB

  • Dataset size: 2.21 MiB

  • Auto-cached (documentation): Yes

  • Splits:

Split Examples
'train' 1,480
'validation' 320
  • Citation:
@inproceedings{richardson-etal-2013-mctest,
    title = "{MCT}est: A Challenge Dataset for the Open-Domain Machine Comprehension of Text",
    author = "Richardson, Matthew  and
      Burges, Christopher J.C.  and
      Renshaw, Erin",
    booktitle = "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
    month = oct,
    year = "2013",
    address = "Seattle, Washington, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D13-1020",
    pages = "193--203",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/multirc

  • Config description: MultiRC is a reading comprehension challenge in which questions can only be answered by taking into account information from multiple sentences. Questions and answers for this challenge were solicited and verified through a 4-step crowdsourcing experiment. The dataset contains questions for paragraphs across 7 different domains (elementary school science, news, travel guides, fiction stories, etc.), bringing linguistic diversity to the texts and to the wording of the questions.

  • Download size: 897.09 KiB

  • Dataset size: 918.42 KiB

  • Auto-cached (documentation): Yes

  • Splits:

Split Examples
'train' 312
'validation' 312
  • Citation:
@inproceedings{khashabi-etal-2018-looking,
    title = "Looking Beyond the Surface: A Challenge Set for Reading Comprehension over Multiple Sentences",
    author = "Khashabi, Daniel  and
      Chaturvedi, Snigdha  and
      Roth, Michael  and
      Upadhyay, Shyam  and
      Roth, Dan",
    booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)",
    month = jun,
    year = "2018",
    address = "New Orleans, Louisiana",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/N18-1023",
    doi = "10.18653/v1/N18-1023",
    pages = "252--262",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/narrativeqa

  • Config description: NarrativeQA is an English-language dataset of stories and corresponding questions designed to test reading comprehension, especially on long documents.

  • Download size: 308.28 MiB

  • Dataset size: 311.22 MiB

  • Auto-cached (documentation): No

  • Splits:

Split Examples
'test' 21,114
'train' 65,494
'validation' 6,922
  • Citation:
@article{kocisky-etal-2018-narrativeqa,
    title = "The {N}arrative{QA} Reading Comprehension Challenge",
    author = "Ko{
{c} }isk{'y}, Tom{'a}{
{s} }  and
      Schwarz, Jonathan  and
      Blunsom, Phil  and
      Dyer, Chris  and
      Hermann, Karl Moritz  and
      Melis, G{\'a}bor  and
      Grefenstette, Edward",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "6",
    year = "2018",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q18-1023",
    doi = "10.1162/tacl_a_00023",
    pages = "317--328",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/narrativeqa_dev

  • Config description: NarrativeQA is an English-language dataset of stories and corresponding questions designed to test reading comprehension, especially on long documents.

  • Download size: 308.28 MiB

  • Dataset size: 311.22 MiB

  • Auto-cached (documentation): No

  • Splits:

Split Examples
'test' 21,114
'train' 65,494
'validation' 6,922
  • Citation:
@article{kocisky-etal-2018-narrativeqa,
    title = "The {N}arrative{QA} Reading Comprehension Challenge",
    author = "Ko{
{c} }isk{'y}, Tom{'a}{
{s} }  and
      Schwarz, Jonathan  and
      Blunsom, Phil  and
      Dyer, Chris  and
      Hermann, Karl Moritz  and
      Melis, G{\'a}bor  and
      Grefenstette, Edward",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "6",
    year = "2018",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q18-1023",
    doi = "10.1162/tacl_a_00023",
    pages = "317--328",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/natural_questions

  • Config description: The NQ corpus contains questions from real users, and it requires QA systems to read and comprehend an entire Wikipedia article that may or may not contain the answer to the question. The inclusion of real user questions, and the requirement that solutions should read an entire page to find the answer, make NQ a more realistic and challenging task than prior QA datasets.

  • Download size: 6.95 MiB

  • Dataset size: 9.88 MiB

  • Auto-cached (documentation): Yes

  • Splits:

Split Examples
'train' 96,075
'validation' 2,295
  • Citation:
@article{kwiatkowski-etal-2019-natural,
    title = "Natural Questions: A Benchmark for Question Answering Research",
    author = "Kwiatkowski, Tom  and
      Palomaki, Jennimaria  and
      Redfield, Olivia  and
      Collins, Michael  and
      Parikh, Ankur  and
      Alberti, Chris  and
      Epstein, Danielle  and
      Polosukhin, Illia  and
      Devlin, Jacob  and
      Lee, Kenton  and
      Toutanova, Kristina  and
      Jones, Llion  and
      Kelcey, Matthew  and
      Chang, Ming-Wei  and
      Dai, Andrew M.  and
      Uszkoreit, Jakob  and
      Le, Quoc  and
      Petrov, Slav",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "7",
    year = "2019",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q19-1026",
    doi = "10.1162/tacl_a_00276",
    pages = "452--466",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/natural_questions_direct_ans

  • Config description: The NQ corpus contains questions from real users, and it requires QA systems to read and comprehend an entire Wikipedia article that may or may not contain the answer to the question. The inclusion of real user questions, and the requirement that solutions should read an entire page to find the answer, make NQ a more realistic and challenging task than prior QA datasets. This version consists of direct-answer questions.

  • Download size: 6.82 MiB

  • Dataset size: 10.19 MiB

  • Auto-cached (documentation): Yes

  • Splits:

Split Examples
'test' 6,468
'train' 96,676
'validation' 10,693
  • Citation:
@article{kwiatkowski-etal-2019-natural,
    title = "Natural Questions: A Benchmark for Question Answering Research",
    author = "Kwiatkowski, Tom  and
      Palomaki, Jennimaria  and
      Redfield, Olivia  and
      Collins, Michael  and
      Parikh, Ankur  and
      Alberti, Chris  and
      Epstein, Danielle  and
      Polosukhin, Illia  and
      Devlin, Jacob  and
      Lee, Kenton  and
      Toutanova, Kristina  and
      Jones, Llion  and
      Kelcey, Matthew  and
      Chang, Ming-Wei  and
      Dai, Andrew M.  and
      Uszkoreit, Jakob  and
      Le, Quoc  and
      Petrov, Slav",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "7",
    year = "2019",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q19-1026",
    doi = "10.1162/tacl_a_00276",
    pages = "452--466",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/natural_questions_direct_ans_test

  • Config description : The NQ corpus contains questions from real users, and it requires QA systems to read and comprehend an entire Wikipedia article that may or may not contain the answer to the question. The inclusion of real user questions, and the requirement that solutions should read an entire page to find the answer, make NQ a more realistic and challenging task than earlier QA datasets. This version consists of direct-answer questions.

  • Download size : 6.82 MiB

  • Dataset size : 10.19 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'test' 6,468
'train' 96,676
'validation' 10,693
  • Citation :
@article{kwiatkowski-etal-2019-natural,
    title = "Natural Questions: A Benchmark for Question Answering Research",
    author = "Kwiatkowski, Tom  and
      Palomaki, Jennimaria  and
      Redfield, Olivia  and
      Collins, Michael  and
      Parikh, Ankur  and
      Alberti, Chris  and
      Epstein, Danielle  and
      Polosukhin, Illia  and
      Devlin, Jacob  and
      Lee, Kenton  and
      Toutanova, Kristina  and
      Jones, Llion  and
      Kelcey, Matthew  and
      Chang, Ming-Wei  and
      Dai, Andrew M.  and
      Uszkoreit, Jakob  and
      Le, Quoc  and
      Petrov, Slav",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "7",
    year = "2019",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q19-1026",
    doi = "10.1162/tacl_a_00276",
    pages = "452--466",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/natural_questions_with_dpr_para

  • Config description : The NQ corpus contains questions from real users, and it requires QA systems to read and comprehend an entire Wikipedia article that may or may not contain the answer to the question. The inclusion of real user questions, and the requirement that solutions should read an entire page to find the answer, make NQ a more realistic and challenging task than earlier QA datasets. This version includes additional paragraphs (obtained with the DPR retrieval engine) to augment each question. (A caching sketch for this large, non-auto-cached config follows this section's citation.)

  • Download size : 319.22 MiB

  • Dataset size : 322.91 MiB

  • Auto-cached ( documentation ): No

  • Splits :

Split Examples
'train' 96,676
'validation' 10,693
  • Citation :
@article{kwiatkowski-etal-2019-natural,
    title = "Natural Questions: A Benchmark for Question Answering Research",
    author = "Kwiatkowski, Tom  and
      Palomaki, Jennimaria  and
      Redfield, Olivia  and
      Collins, Michael  and
      Parikh, Ankur  and
      Alberti, Chris  and
      Epstein, Danielle  and
      Polosukhin, Illia  and
      Devlin, Jacob  and
      Lee, Kenton  and
      Toutanova, Kristina  and
      Jones, Llion  and
      Kelcey, Matthew  and
      Chang, Ming-Wei  and
      Dai, Andrew M.  and
      Uszkoreit, Jakob  and
      Le, Quoc  and
      Petrov, Slav",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "7",
    year = "2019",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q19-1026",
    doi = "10.1162/tacl_a_00276",
    pages = "452--466",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."
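
Because this config is several hundred MiB and is not auto-cached, it may be worth caching it explicitly; a rough sketch, assuming TensorFlow Datasets is installed and using a purely illustrative cache path and pipeline settings:

    import tensorflow_datasets as tfds

    ds = tfds.load("unified_qa/natural_questions_with_dpr_para", split="train")
    # Cache to a local file instead of relying on TFDS auto-caching;
    # "/tmp/nq_dpr_train.cache" is a hypothetical path.
    ds = ds.cache("/tmp/nq_dpr_train.cache")
    ds = ds.shuffle(10_000).batch(32).prefetch(1)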

unified_qa/natural_questions_with_dpr_para_test

  • Config description : The NQ corpus contains questions from real users, and it requires QA systems to read and comprehend an entire Wikipedia article that may or may not contain the answer to the question. The inclusion of real user questions, and the requirement that solutions should read an entire page to find the answer, make NQ a more realistic and challenging task than earlier QA datasets. This version includes additional paragraphs (obtained with the DPR retrieval engine) to augment each question.

  • Download size : 306.94 MiB

  • Dataset size : 310.48 MiB

  • Auto-cached ( documentation ): No

  • Splits :

Split Examples
'test' 6,468
'train' 96,676
  • Citation :
@article{kwiatkowski-etal-2019-natural,
    title = "Natural Questions: A Benchmark for Question Answering Research",
    author = "Kwiatkowski, Tom  and
      Palomaki, Jennimaria  and
      Redfield, Olivia  and
      Collins, Michael  and
      Parikh, Ankur  and
      Alberti, Chris  and
      Epstein, Danielle  and
      Polosukhin, Illia  and
      Devlin, Jacob  and
      Lee, Kenton  and
      Toutanova, Kristina  and
      Jones, Llion  and
      Kelcey, Matthew  and
      Chang, Ming-Wei  and
      Dai, Andrew M.  and
      Uszkoreit, Jakob  and
      Le, Quoc  and
      Petrov, Slav",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "7",
    year = "2019",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q19-1026",
    doi = "10.1162/tacl_a_00276",
    pages = "452--466",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/newsqa

  • Config description : NewsQA is a challenging machine comprehension dataset of human-generated question-answer pairs. Crowdworkers supply questions and answers based on a set of CNN news articles, with answers consisting of spans of text from the corresponding articles.

  • Download size : 283.33 MiB

  • Dataset size : 285.94 MiB

  • Auto-cached ( documentation ): No

  • Splits :

Split Examples
'train' 75,882
'validation' 4,309
  • Citation :
@inproceedings{trischler-etal-2017-newsqa,
    title = "{N}ews{QA}: A Machine Comprehension Dataset",
    author = "Trischler, Adam  and
      Wang, Tong  and
      Yuan, Xingdi  and
      Harris, Justin  and
      Sordoni, Alessandro  and
      Bachman, Philip  and
      Suleman, Kaheer",
    booktitle = "Proceedings of the 2nd Workshop on Representation Learning for {NLP}",
    month = aug,
    year = "2017",
    address = "Vancouver, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/W17-2623",
    doi = "10.18653/v1/W17-2623",
    pages = "191--200",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/openbookqa

  • Config description : OpenBookQA aims to promote research in advanced question answering, probing a deeper understanding of both the topic (with salient facts summarized as an open book, also provided with the dataset) and the language it is expressed in. In particular, it contains questions that require multi-step reasoning, use of additional common and commonsense knowledge, and rich text comprehension. OpenBookQA is a new kind of question-answering dataset modeled after open-book exams for assessing human understanding of a subject.

  • Download size : 942.34 KiB

  • Dataset size : 1.11 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'test' 500
'train' 4,957
'validation' 500
  • Citation :
@inproceedings{mihaylov-etal-2018-suit,
    title = "Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering",
    author = "Mihaylov, Todor  and
      Clark, Peter  and
      Khot, Tushar  and
      Sabharwal, Ashish",
    booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
    month = oct # "-" # nov,
    year = "2018",
    address = "Brussels, Belgium",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D18-1260",
    doi = "10.18653/v1/D18-1260",
    pages = "2381--2391",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/openbookqa_dev

  • Config description : OpenBookQA aims to promote research in advanced question answering, probing a deeper understanding of both the topic (with salient facts summarized as an open book, also provided with the dataset) and the language it is expressed in. In particular, it contains questions that require multi-step reasoning, use of additional common and commonsense knowledge, and rich text comprehension. OpenBookQA is a new kind of question-answering dataset modeled after open-book exams for assessing human understanding of a subject.

  • Download size : 942.34 KiB

  • Dataset size : 1.11 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'test' 500
'train' 4,957
'validation' 500
  • Citation :
@inproceedings{mihaylov-etal-2018-suit,
    title = "Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering",
    author = "Mihaylov, Todor  and
      Clark, Peter  and
      Khot, Tushar  and
      Sabharwal, Ashish",
    booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
    month = oct # "-" # nov,
    year = "2018",
    address = "Brussels, Belgium",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D18-1260",
    doi = "10.18653/v1/D18-1260",
    pages = "2381--2391",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/openbookqa_with_ir

  • Config description : OpenBookQA aims to promote research in advanced question answering, probing a deeper understanding of both the topic (with salient facts summarized as an open book, also provided with the dataset) and the language it is expressed in. In particular, it contains questions that require multi-step reasoning, use of additional common and commonsense knowledge, and rich text comprehension. OpenBookQA is a new kind of question-answering dataset modeled after open-book exams for assessing human understanding of a subject. This version includes paragraphs fetched via an information retrieval system as additional evidence. (A sketch comparing this variant with the plain openbookqa config follows this section's citation.)

  • Download size : 6.08 MiB

  • Dataset size : 6.28 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'test' 500
'train' 4,957
'validation' 500
  • Citation :
@inproceedings{mihaylov-etal-2018-suit,
    title = "Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering",
    author = "Mihaylov, Todor  and
      Clark, Peter  and
      Khot, Tushar  and
      Sabharwal, Ashish",
    booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
    month = oct # "-" # nov,
    year = "2018",
    address = "Brussels, Belgium",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D18-1260",
    doi = "10.18653/v1/D18-1260",
    pages = "2381--2391",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."
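
As a quick sanity check on what the "_with_ir" variant adds, the sketch below compares the average input length of the plain and retrieval-augmented OpenBookQA configs; the split and the 100-example sample size are arbitrary choices, not part of the benchmark:

    import tensorflow_datasets as tfds

    for config in ("unified_qa/openbookqa", "unified_qa/openbookqa_with_ir"):
        ds = tfds.load(config, split="validation")
        # Average length (in bytes) of the 'input' feature over a small sample;
        # the IR variant should be much longer because retrieved paragraphs
        # are included as extra evidence.
        lengths = [len(ex["input"].numpy()) for ex in ds.take(100)]
        print(config, sum(lengths) / len(lengths))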

unified_qa/openbookqa_with_ir_dev

  • Config description : OpenBookQA aims to promote research in advanced question answering, probing a deeper understanding of both the topic (with salient facts summarized as an open book, also provided with the dataset) and the language it is expressed in. In particular, it contains questions that require multi-step reasoning, use of additional common and commonsense knowledge, and rich text comprehension. OpenBookQA is a new kind of question-answering dataset modeled after open-book exams for assessing human understanding of a subject. This version includes paragraphs fetched via an information retrieval system as additional evidence.

  • Download size : 6.08 MiB

  • Dataset size : 6.28 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'test' 500
'train' 4,957
'validation' 500
  • Citation :
@inproceedings{mihaylov-etal-2018-suit,
    title = "Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering",
    author = "Mihaylov, Todor  and
      Clark, Peter  and
      Khot, Tushar  and
      Sabharwal, Ashish",
    booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
    month = oct # "-" # nov,
    year = "2018",
    address = "Brussels, Belgium",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D18-1260",
    doi = "10.18653/v1/D18-1260",
    pages = "2381--2391",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/physical_iqa

  • Config description : This is a dataset for benchmarking progress in physical commonsense understanding. The underlying task is multiple-choice question answering: given a question q and two possible solutions s1, s2, a model or a human must choose the most appropriate solution, of which exactly one is correct. The dataset focuses on everyday situations with a preference for atypical solutions. The dataset is inspired by instructables.com, which provides users with instructions on how to build, craft, bake, or manipulate objects using everyday materials. Annotators are asked to provide semantic perturbations or alternative approaches that are otherwise syntactically and topically similar, to ensure physical knowledge is targeted. The dataset is further cleaned of basic artifacts using the AFLite algorithm.

  • Download size : 6.01 MiB

  • Dataset size : 6.59 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'train' 16,113
'validation' 1,838
  • Citation :
@inproceedings{bisk2020piqa,
    title={Piqa: Reasoning about physical commonsense in natural language},
    author={Bisk, Yonatan and Zellers, Rowan and Gao, Jianfeng and Choi, Yejin and others},
    booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
    volume={34},
    number={05},
    pages={7432--7439},
    year={2020}
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/qasc

  • Config description : QASC is a question-answering dataset with a focus on sentence composition. It consists of 8-way multiple-choice questions about grade-school science, and comes with a corpus of 17M sentences.

  • Download size : 1.75 MiB

  • Dataset size : 2.09 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'test' 920
'train' 8,134
'validation' 926
  • Citation :
@inproceedings{khot2020qasc,
    title={Qasc: A dataset for question answering via sentence composition},
    author={Khot, Tushar and Clark, Peter and Guerquin, Michal and Jansen, Peter and Sabharwal, Ashish},
    booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
    volume={34},
    number={05},
    pages={8082--8090},
    year={2020}
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/qasc_test

  • Config description : QASC is a question-answering dataset with a focus on sentence composition. It consists of 8-way multiple-choice questions about grade-school science, and comes with a corpus of 17M sentences.

  • Download size : 1.75 MiB

  • Dataset size : 2.09 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'test' 920
'train' 8,134
'validation' 926
  • Citation :
@inproceedings{khot2020qasc,
    title={Qasc: A dataset for question answering via sentence composition},
    author={Khot, Tushar and Clark, Peter and Guerquin, Michal and Jansen, Peter and Sabharwal, Ashish},
    booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
    volume={34},
    number={05},
    pages={8082--8090},
    year={2020}
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/qasc_with_ir

  • Config description : QASC is a question-answering dataset with a focus on sentence composition. It consists of 8-way multiple-choice questions about grade-school science, and comes with a corpus of 17M sentences. This version includes paragraphs fetched via an information retrieval system as additional evidence.

  • Download size : 16.95 MiB

  • Dataset size : 17.30 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'test' 920
'train' 8,134
'validation' 926
  • Citation :
@inproceedings{khot2020qasc,
    title={Qasc: A dataset for question answering via sentence composition},
    author={Khot, Tushar and Clark, Peter and Guerquin, Michal and Jansen, Peter and Sabharwal, Ashish},
    booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
    volume={34},
    number={05},
    pages={8082--8090},
    year={2020}
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/qasc_with_ir_test

  • Config description : QASC is a question-answering dataset with a focus on sentence composition. It consists of 8-way multiple-choice questions about grade-school science, and comes with a corpus of 17M sentences. This version includes paragraphs fetched via an information retrieval system as additional evidence.

  • Download size : 16.95 MiB

  • Dataset size : 17.30 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'test' 920
'train' 8,134
'validation' 926
  • Citation :
@inproceedings{khot2020qasc,
    title={Qasc: A dataset for question answering via sentence composition},
    author={Khot, Tushar and Clark, Peter and Guerquin, Michal and Jansen, Peter and Sabharwal, Ashish},
    booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
    volume={34},
    number={05},
    pages={8082--8090},
    year={2020}
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/quoref

  • Config description : This dataset tests the coreferential reasoning capability of reading comprehension systems. In this span-selection benchmark containing questions over paragraphs from Wikipedia, a system must resolve hard coreferences before selecting the appropriate spans in the paragraphs to answer the questions.

  • Download size : 51.43 MiB

  • Dataset size : 52.29 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'train' 22,265
'validation' 2,768
  • Citation :
@inproceedings{dasigi-etal-2019-quoref,
    title = "{Q}uoref: A Reading Comprehension Dataset with Questions Requiring Coreferential Reasoning",
    author = "Dasigi, Pradeep  and
      Liu, Nelson F.  and
      Marasovi{\'c}, Ana  and
      Smith, Noah A.  and
      Gardner, Matt",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
    month = nov,
    year = "2019",
    address = "Hong Kong, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D19-1606",
    doi = "10.18653/v1/D19-1606",
    pages = "5925--5932",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/race_string

  • Config description : RACE is a large-scale reading comprehension dataset. The dataset is collected from English examinations in China, which are designed for middle school and high school students. The dataset can serve as training and test sets for machine comprehension. (A loading sketch that passes shuffle_files=False follows this section's citation.)

  • Download size : 167.97 MiB

  • Dataset size : 171.23 MiB

  • Auto-cached ( documentation ): Yes (test, validation), only when shuffle_files=False (train)

  • Splits :

Split Examples
'test' 4,934
'train' 87,863
'validation' 4,887
  • Citation :
@inproceedings{lai-etal-2017-race,
    title = "{RACE}: Large-scale {R}e{A}ding Comprehension Dataset From Examinations",
    author = "Lai, Guokun  and
      Xie, Qizhe  and
      Liu, Hanxiao  and
      Yang, Yiming  and
      Hovy, Eduard",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1082",
    doi = "10.18653/v1/D17-1082",
    pages = "785--794",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."
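
Since the train split of this config is only auto-cached when file shuffling is disabled, the flag can be passed explicitly; a minimal sketch (the split choice is only an example):

    import tensorflow_datasets as tfds

    # Request the train split without shuffling source files so that
    # TFDS auto-caching can apply to it.
    train_ds = tfds.load("unified_qa/race_string", split="train",
                         shuffle_files=False)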

unified_qa/race_string_dev

  • Config description : RACE is a large-scale reading comprehension dataset. The dataset is collected from English examinations in China, which are designed for middle school and high school students. The dataset can serve as training and test sets for machine comprehension.

  • Download size : 167.97 MiB

  • Dataset size : 171.23 MiB

  • Auto-cached ( documentation ): Yes (test, validation), only when shuffle_files=False (train)

  • Splits :

Split Examples
'test' 4,934
'train' 87,863
'validation' 4,887
  • Citation :
@inproceedings{lai-etal-2017-race,
    title = "{RACE}: Large-scale {R}e{A}ding Comprehension Dataset From Examinations",
    author = "Lai, Guokun  and
      Xie, Qizhe  and
      Liu, Hanxiao  and
      Yang, Yiming  and
      Hovy, Eduard",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1082",
    doi = "10.18653/v1/D17-1082",
    pages = "785--794",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/ropes

  • Config description : This dataset tests a system's ability to apply knowledge from a passage of text to a new situation. A system is presented with a background passage containing a causal or qualitative relation (e.g., "animal pollinators increase the efficiency of flower fertilization"), a novel situation that uses this background, and questions that require reasoning about the effects of the relationships in the background passage in the context of the situation.

  • Download size : 12.91 MiB

  • Dataset size : 13.35 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'train' 10,924
'validation' 1,688
  • Citation :
@inproceedings{lin-etal-2019-reasoning,
    title = "Reasoning Over Paragraph Effects in Situations",
    author = "Lin, Kevin  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Gardner, Matt",
    booktitle = "Proceedings of the 2nd Workshop on Machine Reading for Question Answering",
    month = nov,
    year = "2019",
    address = "Hong Kong, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D19-5808",
    doi = "10.18653/v1/D19-5808",
    pages = "58--62",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/social_iqa

  • Config description : This is a large-scale benchmark for commonsense reasoning about social situations. Social IQa contains multiple-choice questions for probing emotional and social intelligence in a variety of everyday situations. Through crowdsourcing, commonsense questions along with correct and incorrect answers about social interactions are collected, using a new framework that mitigates stylistic artifacts in the incorrect answers by asking workers to provide the right answer to a different but related question.

  • Download size : 7.08 MiB

  • Dataset size : 8.22 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'train' 33,410
'validation' 1,954
  • Citation :
@inproceedings{sap-etal-2019-social,
    title = "Social {IQ}a: Commonsense Reasoning about Social Interactions",
    author = "Sap, Maarten  and
      Rashkin, Hannah  and
      Chen, Derek  and
      Le Bras, Ronan  and
      Choi, Yejin",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
    month = nov,
    year = "2019",
    address = "Hong Kong, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D19-1454",
    doi = "10.18653/v1/D19-1454",
    pages = "4463--4473",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/squad1_1

  • Config description : This is a reading comprehension dataset consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage.

  • Download size : 80.62 MiB

  • Dataset size : 83.99 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'train' 87,514
'validation' 10,570
  • Citation :
@inproceedings{rajpurkar-etal-2016-squad,
    title = "{SQ}u{AD}: 100,000+ Questions for Machine Comprehension of Text",
    author = "Rajpurkar, Pranav  and
      Zhang, Jian  and
      Lopyrev, Konstantin  and
      Liang, Percy",
    booktitle = "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2016",
    address = "Austin, Texas",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D16-1264",
    doi = "10.18653/v1/D16-1264",
    pages = "2383--2392",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/squad2

  • Config description : This dataset combines the original Stanford Question Answering Dataset (SQuAD) with unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. (A sketch for reading the split sizes programmatically follows this section's citation.)

  • Download size : 116.56 MiB

  • Dataset size : 121.43 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'train' 130,149
'validation' 11,873
  • Citation :
@inproceedings{rajpurkar-etal-2018-know,
    title = "Know What You Don{'}t Know: Unanswerable Questions for {SQ}u{AD}",
    author = "Rajpurkar, Pranav  and
      Jia, Robin  and
      Liang, Percy",
    booktitle = "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
    month = jul,
    year = "2018",
    address = "Melbourne, Australia",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P18-2124",
    doi = "10.18653/v1/P18-2124",
    pages = "784--789",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."
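
The split sizes quoted above can also be read programmatically from the dataset metadata; a sketch using the standard TFDS builder API (the data is prepared first so that the split information is populated):

    import tensorflow_datasets as tfds

    builder = tfds.builder("unified_qa/squad2")
    builder.download_and_prepare()
    for split_name, split_info in builder.info.splits.items():
        # Expected output: 'train' 130149 and 'validation' 11873.
        print(split_name, split_info.num_examples)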

unified_qa/winogrande_l

  • Config description : This dataset is inspired by the original Winograd Schema Challenge design, but adjusted to improve both the scale and the hardness of the dataset. The key steps of dataset construction consist of (1) a carefully designed crowdsourcing procedure, followed by (2) systematic bias reduction using a novel AfLite algorithm that generalizes human-detectable word associations to machine-detectable embedding associations. Training sets of varying sizes are provided. This set corresponds to size l.

  • Download size : 1.49 MiB

  • Dataset size : 1.83 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'train' 10,234
'validation' 1,267
  • Citation :
@inproceedings{sakaguchi2020winogrande,
  title={Winogrande: An adversarial winograd schema challenge at scale},
  author={Sakaguchi, Keisuke and Le Bras, Ronan and Bhagavatula, Chandra and Choi, Yejin},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={34},
  number={05},
  pages={8732--8740},
  year={2020}
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/winogrande_m

  • Config description : This dataset is inspired by the original Winograd Schema Challenge design, but adjusted to improve both the scale and the hardness of the dataset. The key steps of dataset construction consist of (1) a carefully designed crowdsourcing procedure, followed by (2) systematic bias reduction using a novel AfLite algorithm that generalizes human-detectable word associations to machine-detectable embedding associations. Training sets of varying sizes are provided. This set corresponds to size m.

  • Download size : 507.46 KiB

  • Dataset size : 623.15 KiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'train' 2,558
'validation' 1,267
  • Citation :
@inproceedings{sakaguchi2020winogrande,
  title={Winogrande: An adversarial winograd schema challenge at scale},
  author={Sakaguchi, Keisuke and Le Bras, Ronan and Bhagavatula, Chandra and Choi, Yejin},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={34},
  number={05},
  pages={8732--8740},
  year={2020}
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/winogrande_s

  • Config description : This dataset is inspired by the original Winograd Schema Challenge design, but adjusted to improve both the scale and the hardness of the dataset. The key steps of dataset construction consist of (1) a carefully designed crowdsourcing procedure, followed by (2) systematic bias reduction using a novel AfLite algorithm that generalizes human-detectable word associations to machine-detectable embedding associations. Training sets of varying sizes are provided. This set corresponds to size s. (A sketch comparing the three winogrande training-set sizes follows this section's citation.)

  • Download size : 479.24 KiB

  • Dataset size : 590.47 KiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'test' 1,767
'train' 640
'validation' 1,267
  • Citation :
@inproceedings{sakaguchi2020winogrande,
  title={Winogrande: An adversarial winograd schema challenge at scale},
  author={Sakaguchi, Keisuke and Le Bras, Ronan and Bhagavatula, Chandra and Choi, Yejin},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={34},
  number={05},
  pages={8732--8740},
  year={2020}
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."
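
Since the winogrande_s, winogrande_m, and winogrande_l configs differ only in the size of their training sets, they can be compared by loading each one's train split; a small illustrative sketch:

    import tensorflow_datasets as tfds

    for size in ("s", "m", "l"):
        ds = tfds.load(f"unified_qa/winogrande_{size}", split="train")
        # cardinality() reports the number of training examples for each config.
        print(size, ds.cardinality().numpy())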
unificado_qa/ambigqa

  • Descripción de la configuración : AmbigQA es una tarea de respuesta a preguntas de dominio abierto que implica encontrar todas las respuestas plausibles y luego volver a escribir la pregunta para cada una para resolver la ambigüedad.

  • Tamaño de descarga : 2.27 MiB

  • Tamaño del conjunto de datos : 3.04 MiB

  • Almacenamiento automático en caché ( documentación ): Sí

  • Divisiones :

Separar Ejemplos
'train' 19,806
'validation' 5,674
  • Cita :
@inproceedings{min-etal-2020-ambigqa,
    title = "{A}mbig{QA}: Answering Ambiguous Open-domain Questions",
    author = "Min, Sewon  and
      Michael, Julian  and
      Hajishirzi, Hannaneh  and
      Zettlemoyer, Luke",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.emnlp-main.466",
    doi = "10.18653/v1/2020.emnlp-main.466",
    pages = "5783--5797",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/arc_easy

  • Config description : This dataset consists of genuine grade-school level, multiple-choice science questions, assembled to encourage research in advanced question-answering. The dataset is partitioned into a Challenge Set and an Easy Set, where the former contains only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. This set consists of "easy" questions.

  • Download size : 1.24 MiB

  • Dataset size : 1.42 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'test' 2,376
'train' 2,251
'validation' 570
  • Citation :
@article{clark2018think,
    title={Think you have solved question answering? try arc, the ai2 reasoning challenge},
    author={Clark, Peter and Cowhey, Isaac and Etzioni, Oren and Khot, Tushar and Sabharwal, Ashish and Schoenick, Carissa and Tafjord, Oyvind},
    journal={arXiv preprint arXiv:1803.05457},
    year={2018}
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/arc_easy_dev

  • Config description : This dataset consists of genuine grade-school level, multiple-choice science questions, assembled to encourage research in advanced question-answering. The dataset is partitioned into a Challenge Set and an Easy Set, where the former contains only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. This set consists of "easy" questions.

  • Download size : 1.24 MiB

  • Dataset size : 1.42 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'test' 2,376
'train' 2,251
'validation' 570
  • Citation :
@article{clark2018think,
    title={Think you have solved question answering? try arc, the ai2 reasoning challenge},
    author={Clark, Peter and Cowhey, Isaac and Etzioni, Oren and Khot, Tushar and Sabharwal, Ashish and Schoenick, Carissa and Tafjord, Oyvind},
    journal={arXiv preprint arXiv:1803.05457},
    year={2018}
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/arc_easy_with_ir

  • Config description : This dataset consists of genuine grade-school level, multiple-choice science questions, assembled to encourage research in advanced question-answering. The dataset is partitioned into a Challenge Set and an Easy Set, where the former contains only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. This set consists of "easy" questions. This version includes paragraphs fetched via an information retrieval system as additional evidence.

  • Download size : 7.00 MiB

  • Dataset size : 7.17 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'test' 2,376
'train' 2,251
'validation' 570
  • Citation :
@article{clark2018think,
    title={Think you have solved question answering? try arc, the ai2 reasoning challenge},
    author={Clark, Peter and Cowhey, Isaac and Etzioni, Oren and Khot, Tushar and Sabharwal, Ashish and Schoenick, Carissa and Tafjord, Oyvind},
    journal={arXiv preprint arXiv:1803.05457},
    year={2018}
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/arc_easy_with_ir_dev

  • Config description : This dataset consists of genuine grade-school level, multiple-choice science questions, assembled to encourage research in advanced question-answering. The dataset is partitioned into a Challenge Set and an Easy Set, where the former contains only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. This set consists of "easy" questions. This version includes paragraphs fetched via an information retrieval system as additional evidence.

  • Download size : 7.00 MiB

  • Dataset size : 7.17 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'test' 2,376
'train' 2,251
'validation' 570
  • Citation :
@article{clark2018think,
    title={Think you have solved question answering? try arc, the ai2 reasoning challenge},
    author={Clark, Peter and Cowhey, Isaac and Etzioni, Oren and Khot, Tushar and Sabharwal, Ashish and Schoenick, Carissa and Tafjord, Oyvind},
    journal={arXiv preprint arXiv:1803.05457},
    year={2018}
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/arc_hard

  • Config description : This dataset consists of genuine grade-school level, multiple-choice science questions, assembled to encourage research in advanced question-answering. The dataset is partitioned into a Challenge Set and an Easy Set, where the former contains only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. This set consists of "hard" questions.

  • Download size : 758.03 KiB

  • Dataset size : 848.28 KiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'test' 1,172
'train' 1,119
'validation' 299
  • Citation :
@article{clark2018think,
    title={Think you have solved question answering? try arc, the ai2 reasoning challenge},
    author={Clark, Peter and Cowhey, Isaac and Etzioni, Oren and Khot, Tushar and Sabharwal, Ashish and Schoenick, Carissa and Tafjord, Oyvind},
    journal={arXiv preprint arXiv:1803.05457},
    year={2018}
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/arc_hard_dev

  • Config description : This dataset consists of genuine grade-school level, multiple-choice science questions, assembled to encourage research in advanced question-answering. The dataset is partitioned into a Challenge Set and an Easy Set, where the former contains only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. This set consists of "hard" questions.

  • Download size : 758.03 KiB

  • Dataset size : 848.28 KiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'test' 1,172
'train' 1,119
'validation' 299
  • Citation :
@article{clark2018think,
    title={Think you have solved question answering? try arc, the ai2 reasoning challenge},
    author={Clark, Peter and Cowhey, Isaac and Etzioni, Oren and Khot, Tushar and Sabharwal, Ashish and Schoenick, Carissa and Tafjord, Oyvind},
    journal={arXiv preprint arXiv:1803.05457},
    year={2018}
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/arc_hard_with_ir

  • Config description : This dataset consists of genuine grade-school level, multiple-choice science questions, assembled to encourage research in advanced question-answering. The dataset is partitioned into a Challenge Set and an Easy Set, where the former contains only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. This set consists of "hard" questions. This version includes paragraphs fetched via an information retrieval system as additional evidence.

  • Download size : 3.53 MiB

  • Dataset size : 3.62 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'test' 1,172
'train' 1,119
'validation' 299
  • Citation :
@article{clark2018think,
    title={Think you have solved question answering? try arc, the ai2 reasoning challenge},
    author={Clark, Peter and Cowhey, Isaac and Etzioni, Oren and Khot, Tushar and Sabharwal, Ashish and Schoenick, Carissa and Tafjord, Oyvind},
    journal={arXiv preprint arXiv:1803.05457},
    year={2018}
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/arc_hard_with_ir_dev

  • Config description : This dataset consists of genuine grade-school level, multiple-choice science questions, assembled to encourage research in advanced question-answering. The dataset is partitioned into a Challenge Set and an Easy Set, where the former contains only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. This set consists of "hard" questions. This version includes paragraphs fetched via an information retrieval system as additional evidence.

  • Download size : 3.53 MiB

  • Dataset size : 3.62 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'test' 1,172
'train' 1,119
'validation' 299
  • Citation :
@article{clark2018think,
    title={Think you have solved question answering? try arc, the ai2 reasoning challenge},
    author={Clark, Peter and Cowhey, Isaac and Etzioni, Oren and Khot, Tushar and Sabharwal, Ashish and Schoenick, Carissa and Tafjord, Oyvind},
    journal={arXiv preprint arXiv:1803.05457},
    year={2018}
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/boolq

  • Config description : BoolQ is a question answering dataset for yes/no questions. These questions are naturally occurring ---they are generated in unprompted and unconstrained settings. Each example is a triplet of (question, passage, answer), with the title of the page as optional additional context. The text-pair classification setup is similar to existing natural language inference tasks.

  • Download size : 7.77 MiB

  • Dataset size : 8.20 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'train' 9,427
'validation' 3,270
  • Citation :
@inproceedings{clark-etal-2019-boolq,
    title = "{B}ool{Q}: Exploring the Surprising Difficulty of Natural Yes/No Questions",
    author = "Clark, Christopher  and
      Lee, Kenton  and
      Chang, Ming-Wei  and
      Kwiatkowski, Tom  and
      Collins, Michael  and
      Toutanova, Kristina",
    booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)",
    month = jun,
    year = "2019",
    address = "Minneapolis, Minnesota",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/N19-1300",
    doi = "10.18653/v1/N19-1300",
    pages = "2924--2936",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."
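
All configs share the flat text-to-text schema ('input', 'output') listed at the top of this page, so the quickest way to see how a particular format (here, yes/no questions with a passage) is serialized is to print a raw example. A minimal sketch, assuming the boolq config documented above:

import tensorflow_datasets as tfds

# Decode one (input, output) pair from the BoolQ validation split to see
# how the question and passage are packed into a single input string.
ds = tfds.load('unified_qa/boolq', split='validation')
for example in ds.take(1):
    print(example['input'].numpy().decode('utf-8'))
    print(example['output'].numpy().decode('utf-8'))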

unified_qa/boolq_np

  • Config description : BoolQ is a question answering dataset for yes/no questions. These questions are naturally occurring ---they are generated in unprompted and unconstrained settings. Each example is a triplet of (question, passage, answer), with the title of the page as optional additional context. The text-pair classification setup is similar to existing natural language inference tasks. This version adds natural perturbations to the original version.

  • Download size : 10.80 MiB

  • Dataset size : 11.40 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'train' 9,727
'validation' 7,596
  • Citation :
@inproceedings{khashabi-etal-2020-bang,
    title = "More Bang for Your Buck: Natural Perturbation for Robust Question Answering",
    author = "Khashabi, Daniel  and
      Khot, Tushar  and
      Sabharwal, Ashish",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.emnlp-main.12",
    doi = "10.18653/v1/2020.emnlp-main.12",
    pages = "163--170",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/commonsenseqa

  • Config description : CommonsenseQA is a new multiple-choice question answering dataset that requires different types of commonsense knowledge to predict the correct answers. It contains questions with one correct answer and four distractor answers.

  • Download size : 1.79 MiB

  • Dataset size : 2.19 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'test' 1,140
'train' 9,741
'validation' 1,221
  • Citation :
@inproceedings{talmor-etal-2019-commonsenseqa,
    title = "{C}ommonsense{QA}: A Question Answering Challenge Targeting Commonsense Knowledge",
    author = "Talmor, Alon  and
      Herzig, Jonathan  and
      Lourie, Nicholas  and
      Berant, Jonathan",
    booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)",
    month = jun,
    year = "2019",
    address = "Minneapolis, Minnesota",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/N19-1421",
    doi = "10.18653/v1/N19-1421",
    pages = "4149--4158",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."
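
Since the gold answer is stored as a plain string in 'output', a normalized exact-match score is a simple way to evaluate multiple-choice configs such as this one. A minimal sketch; predict() is a hypothetical placeholder for any model that maps an input string to an answer string and is not part of the dataset:

import tensorflow_datasets as tfds

def normalize(text):
    # Lowercase and collapse whitespace before comparing strings.
    return ' '.join(text.lower().split())

def exact_match_accuracy(predict, split='validation'):
    # Score a hypothetical predict() function against the gold 'output'
    # strings of the commonsenseqa config.
    ds = tfds.load('unified_qa/commonsenseqa', split=split)
    correct, total = 0, 0
    for example in ds:
        question = example['input'].numpy().decode('utf-8')
        gold = example['output'].numpy().decode('utf-8')
        correct += int(normalize(predict(question)) == normalize(gold))
        total += 1
    return correct / total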

unified_qa/commonsenseqa_test

  • Config description : CommonsenseQA is a new multiple-choice question answering dataset that requires different types of commonsense knowledge to predict the correct answers. It contains questions with one correct answer and four distractor answers.

  • Download size : 1.79 MiB

  • Dataset size : 2.19 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'test' 1,140
'train' 9,741
'validation' 1,221
  • Citation :
@inproceedings{talmor-etal-2019-commonsenseqa,
    title = "{C}ommonsense{QA}: A Question Answering Challenge Targeting Commonsense Knowledge",
    author = "Talmor, Alon  and
      Herzig, Jonathan  and
      Lourie, Nicholas  and
      Berant, Jonathan",
    booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)",
    month = jun,
    year = "2019",
    address = "Minneapolis, Minnesota",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/N19-1421",
    doi = "10.18653/v1/N19-1421",
    pages = "4149--4158",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/contrast_sets_boolq

  • Config description : BoolQ is a question answering dataset for yes/no questions. These questions are naturally occurring ---they are generated in unprompted and unconstrained settings. Each example is a triplet of (question, passage, answer), with the title of the page as optional additional context. The text-pair classification setup is similar to existing natural language inference tasks. This version uses contrast sets. These evaluation sets are expert-generated perturbations that deviate from the patterns common in the original dataset.

  • Download size : 438.51 KiB

  • Dataset size : 462.35 KiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'train' 340
'validation' 340
  • Citation :
@inproceedings{clark-etal-2019-boolq,
    title = "{B}ool{Q}: Exploring the Surprising Difficulty of Natural Yes/No Questions",
    author = "Clark, Christopher  and
      Lee, Kenton  and
      Chang, Ming-Wei  and
      Kwiatkowski, Tom  and
      Collins, Michael  and
      Toutanova, Kristina",
    booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)",
    month = jun,
    year = "2019",
    address = "Minneapolis, Minnesota",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/N19-1300",
    doi = "10.18653/v1/N19-1300",
    pages = "2924--2936",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/contrast_sets_drop

  • Config description : DROP is a crowdsourced, adversarially-created QA benchmark, in which a system must resolve references in a question, perhaps to multiple input positions, and perform discrete operations over them (such as addition, counting, or sorting). These operations require a much more comprehensive understanding of the content of paragraphs than what was necessary for prior datasets. This version uses contrast sets. These evaluation sets are expert-generated perturbations that deviate from the patterns common in the original dataset.

  • Download size : 2.20 MiB

  • Dataset size : 2.26 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'train' 947
'validation' 947
  • Citation :
@inproceedings{dua-etal-2019-drop,
    title = "{DROP}: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs",
    author = "Dua, Dheeru  and
      Wang, Yizhong  and
      Dasigi, Pradeep  and
      Stanovsky, Gabriel  and
      Singh, Sameer  and
      Gardner, Matt",
    booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)",
    month = jun,
    year = "2019",
    address = "Minneapolis, Minnesota",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/N19-1246",
    doi = "10.18653/v1/N19-1246",
    pages = "2368--2378",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/contrast_sets_quoref

  • Config description : This dataset tests the coreferential reasoning capability of reading comprehension systems. In this span-selection benchmark containing questions over paragraphs from Wikipedia, a system must resolve hard coreferences before selecting the appropriate span(s) in the paragraphs for answering questions. This version uses contrast sets. These evaluation sets are expert-generated perturbations that deviate from the patterns common in the original dataset.

  • Download size : 2.60 MiB

  • Dataset size : 2.65 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'train' 700
'validation' 700
  • Citation :
@inproceedings{dasigi-etal-2019-quoref,
    title = "{Q}uoref: A Reading Comprehension Dataset with Questions Requiring Coreferential Reasoning",
    author = "Dasigi, Pradeep  and
      Liu, Nelson F.  and
      Marasovi{\'c}, Ana  and
      Smith, Noah A.  and
      Gardner, Matt",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
    month = nov,
    year = "2019",
    address = "Hong Kong, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D19-1606",
    doi = "10.18653/v1/D19-1606",
    pages = "5925--5932",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/contrast_sets_ropes

  • Config description : This dataset tests a system's ability to apply knowledge from a passage of text to a new situation. A system is presented a background passage containing a causal or qualitative relation(s) (e.g., "animal pollinators increase efficiency of fertilization in flowers"), a novel situation that uses this background, and questions that require reasoning about effects of the relationships in the background passage in the context of the situation. This version uses contrast sets. These evaluation sets are expert-generated perturbations that deviate from the patterns common in the original dataset.

  • Download size : 1.97 MiB

  • Dataset size : 2.04 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'train' 974
'validation' 974
  • Citation :
@inproceedings{lin-etal-2019-reasoning,
    title = "Reasoning Over Paragraph Effects in Situations",
    author = "Lin, Kevin  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Gardner, Matt",
    booktitle = "Proceedings of the 2nd Workshop on Machine Reading for Question Answering",
    month = nov,
    year = "2019",
    address = "Hong Kong, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D19-5808",
    doi = "10.18653/v1/D19-5808",
    pages = "58--62",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/drop

  • Config description : DROP is a crowdsourced, adversarially-created QA benchmark, in which a system must resolve references in a question, perhaps to multiple input positions, and perform discrete operations over them (such as addition, counting, or sorting). These operations require a much more comprehensive understanding of the content of paragraphs than what was necessary for prior datasets.

  • Download size : 105.18 MiB

  • Dataset size : 108.16 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'train' 77,399
'validation' 9,536
  • Citation :
@inproceedings{dua-etal-2019-drop,
    title = "{DROP}: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs",
    author = "Dua, Dheeru  and
      Wang, Yizhong  and
      Dasigi, Pradeep  and
      Stanovsky, Gabriel  and
      Singh, Sameer  and
      Gardner, Matt",
    booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)",
    month = jun,
    year = "2019",
    address = "Minneapolis, Minnesota",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/N19-1246",
    doi = "10.18653/v1/N19-1246",
    pages = "2368--2378",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/mctest

  • Config description : MCTest requires machines to answer multiple-choice reading comprehension questions about fictional stories, directly tackling the high-level goal of open-domain machine comprehension. Reading comprehension can test advanced abilities such as causal reasoning and understanding the world, yet, by being multiple-choice, still provide a clear metric. By being fictional, the answer typically can be found only in the story itself. The stories and questions are also carefully limited to those a young child would understand, reducing the world knowledge that is required for the task.

  • Download size : 2.14 MiB

  • Dataset size : 2.20 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'train' 1,480
'validation' 320
  • Citation :
@inproceedings{richardson-etal-2013-mctest,
    title = "{MCT}est: A Challenge Dataset for the Open-Domain Machine Comprehension of Text",
    author = "Richardson, Matthew  and
      Burges, Christopher J.C.  and
      Renshaw, Erin",
    booktitle = "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
    month = oct,
    year = "2013",
    address = "Seattle, Washington, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D13-1020",
    pages = "193--203",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/mctest_corrected_the_separator

  • Config description : MCTest requires machines to answer multiple-choice reading comprehension questions about fictional stories, directly tackling the high-level goal of open-domain machine comprehension. Reading comprehension can test advanced abilities such as causal reasoning and understanding the world, yet, by being multiple-choice, still provide a clear metric. By being fictional, the answer typically can be found only in the story itself. The stories and questions are also carefully limited to those a young child would understand, reducing the world knowledge that is required for the task.

  • Download size : 2.15 MiB

  • Dataset size : 2.21 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'train' 1,480
'validation' 320
  • Citation :
@inproceedings{richardson-etal-2013-mctest,
    title = "{MCT}est: A Challenge Dataset for the Open-Domain Machine Comprehension of Text",
    author = "Richardson, Matthew  and
      Burges, Christopher J.C.  and
      Renshaw, Erin",
    booktitle = "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
    month = oct,
    year = "2013",
    address = "Seattle, Washington, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D13-1020",
    pages = "193--203",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/multirc

  • Config description : MultiRC is a reading comprehension challenge in which questions can only be answered by taking into account information from multiple sentences. Questions and answers for this challenge were solicited and verified through a 4-step crowdsourcing experiment. The dataset contains questions for paragraphs across 7 different domains (elementary school science, news, travel guides, fiction stories, etc.), bringing linguistic diversity to the texts and to the question wordings.

  • Download size : 897.09 KiB

  • Dataset size : 918.42 KiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'train' 312
'validation' 312
  • Citation :
@inproceedings{khashabi-etal-2018-looking,
    title = "Looking Beyond the Surface: A Challenge Set for Reading Comprehension over Multiple Sentences",
    author = "Khashabi, Daniel  and
      Chaturvedi, Snigdha  and
      Roth, Michael  and
      Upadhyay, Shyam  and
      Roth, Dan",
    booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)",
    month = jun,
    year = "2018",
    address = "New Orleans, Louisiana",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/N18-1023",
    doi = "10.18653/v1/N18-1023",
    pages = "252--262",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/narrativeqa

  • Config description : NarrativeQA is an English-language dataset of stories and corresponding questions designed to test reading comprehension, especially on long documents.

  • Download size : 308.28 MiB

  • Dataset size : 311.22 MiB

  • Auto-cached ( documentation ): No

  • Splits :

Split Examples
'test' 21,114
'train' 65,494
'validation' 6,922
  • Citation :
@article{kocisky-etal-2018-narrativeqa,
    title = "The {N}arrative{QA} Reading Comprehension Challenge",
    author = "Ko{
{c} }isk{'y}, Tom{'a}{
{s} }  and
      Schwarz, Jonathan  and
      Blunsom, Phil  and
      Dyer, Chris  and
      Hermann, Karl Moritz  and
      Melis, G{'a}bor  and
      Grefenstette, Edward",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "6",
    year = "2018",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q18-1023",
    doi = "10.1162/tacl_a_00023",
    pages = "317--328",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/narrativeqa_dev

  • Config description : NarrativeQA is an English-language dataset of stories and corresponding questions designed to test reading comprehension, especially on long documents.

  • Download size : 308.28 MiB

  • Dataset size : 311.22 MiB

  • Auto-cached ( documentation ): No

  • Splits :

Split Examples
'test' 21,114
'train' 65,494
'validation' 6,922
  • Citation :
@article{kocisky-etal-2018-narrativeqa,
    title = "The {N}arrative{QA} Reading Comprehension Challenge",
    author = "Ko{
{c} }isk{'y}, Tom{'a}{
{s} }  and
      Schwarz, Jonathan  and
      Blunsom, Phil  and
      Dyer, Chris  and
      Hermann, Karl Moritz  and
      Melis, G{'a}bor  and
      Grefenstette, Edward",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "6",
    year = "2018",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q18-1023",
    doi = "10.1162/tacl_a_00023",
    pages = "317--328",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/natural_questions

  • Config description : The NQ corpus contains questions from real users, and it requires QA systems to read and comprehend an entire Wikipedia article that may or may not contain the answer to the question. The inclusion of real user questions, and the requirement that solutions should read an entire page to find the answer, makes NQ a more realistic and challenging task than prior QA datasets.

  • Download size : 6.95 MiB

  • Dataset size : 9.88 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'train' 96,075
'validation' 2,295
  • Citation :
@article{kwiatkowski-etal-2019-natural,
    title = "Natural Questions: A Benchmark for Question Answering Research",
    author = "Kwiatkowski, Tom  and
      Palomaki, Jennimaria  and
      Redfield, Olivia  and
      Collins, Michael  and
      Parikh, Ankur  and
      Alberti, Chris  and
      Epstein, Danielle  and
      Polosukhin, Illia  and
      Devlin, Jacob  and
      Lee, Kenton  and
      Toutanova, Kristina  and
      Jones, Llion  and
      Kelcey, Matthew  and
      Chang, Ming-Wei  and
      Dai, Andrew M.  and
      Uszkoreit, Jakob  and
      Le, Quoc  and
      Petrov, Slav",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "7",
    year = "2019",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q19-1026",
    doi = "10.1162/tacl_a_00276",
    pages = "452--466",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/natural_questions_direct_ans

  • Config description : The NQ corpus contains questions from real users, and it requires QA systems to read and comprehend an entire Wikipedia article that may or may not contain the answer to the question. The inclusion of real user questions, and the requirement that solutions should read an entire page to find the answer, makes NQ a more realistic and challenging task than prior QA datasets. This version consists of direct-answer questions.

  • Download size : 6.82 MiB

  • Dataset size : 10.19 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'test' 6,468
'train' 96,676
'validation' 10,693
  • Citation :
@article{kwiatkowski-etal-2019-natural,
    title = "Natural Questions: A Benchmark for Question Answering Research",
    author = "Kwiatkowski, Tom  and
      Palomaki, Jennimaria  and
      Redfield, Olivia  and
      Collins, Michael  and
      Parikh, Ankur  and
      Alberti, Chris  and
      Epstein, Danielle  and
      Polosukhin, Illia  and
      Devlin, Jacob  and
      Lee, Kenton  and
      Toutanova, Kristina  and
      Jones, Llion  and
      Kelcey, Matthew  and
      Chang, Ming-Wei  and
      Dai, Andrew M.  and
      Uszkoreit, Jakob  and
      Le, Quoc  and
      Petrov, Slav",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "7",
    year = "2019",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q19-1026",
    doi = "10.1162/tacl_a_00276",
    pages = "452--466",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/natural_questions_direct_ans_test

  • Config description : The NQ corpus contains questions from real users, and it requires QA systems to read and comprehend an entire Wikipedia article that may or may not contain the answer to the question. The inclusion of real user questions, and the requirement that solutions should read an entire page to find the answer, makes NQ a more realistic and challenging task than prior QA datasets. This version consists of direct-answer questions.

  • Download size : 6.82 MiB

  • Dataset size : 10.19 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'test' 6,468
'train' 96,676
'validation' 10,693
  • Citation :
@article{kwiatkowski-etal-2019-natural,
    title = "Natural Questions: A Benchmark for Question Answering Research",
    author = "Kwiatkowski, Tom  and
      Palomaki, Jennimaria  and
      Redfield, Olivia  and
      Collins, Michael  and
      Parikh, Ankur  and
      Alberti, Chris  and
      Epstein, Danielle  and
      Polosukhin, Illia  and
      Devlin, Jacob  and
      Lee, Kenton  and
      Toutanova, Kristina  and
      Jones, Llion  and
      Kelcey, Matthew  and
      Chang, Ming-Wei  and
      Dai, Andrew M.  and
      Uszkoreit, Jakob  and
      Le, Quoc  and
      Petrov, Slav",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "7",
    year = "2019",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q19-1026",
    doi = "10.1162/tacl_a_00276",
    pages = "452--466",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/natural_questions_with_dpr_para

  • Config description : The NQ corpus contains questions from real users, and it requires QA systems to read and comprehend an entire Wikipedia article that may or may not contain the answer to the question. The inclusion of real user questions, and the requirement that solutions should read an entire page to find the answer, makes NQ a more realistic and challenging task than prior QA datasets. This version includes additional paragraphs (obtained using the DPR retrieval engine) to augment each question.

  • Download size : 319.22 MiB

  • Dataset size : 322.91 MiB

  • Auto-cached ( documentation ): No

  • Splits :

Split Examples
'train' 96,676
'validation' 10,693
  • Citation :
@article{kwiatkowski-etal-2019-natural,
    title = "Natural Questions: A Benchmark for Question Answering Research",
    author = "Kwiatkowski, Tom  and
      Palomaki, Jennimaria  and
      Redfield, Olivia  and
      Collins, Michael  and
      Parikh, Ankur  and
      Alberti, Chris  and
      Epstein, Danielle  and
      Polosukhin, Illia  and
      Devlin, Jacob  and
      Lee, Kenton  and
      Toutanova, Kristina  and
      Jones, Llion  and
      Kelcey, Matthew  and
      Chang, Ming-Wei  and
      Dai, Andrew M.  and
      Uszkoreit, Jakob  and
      Le, Quoc  and
      Petrov, Slav",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "7",
    year = "2019",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q19-1026",
    doi = "10.1162/tacl_a_00276",
    pages = "452--466",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/natural_questions_with_dpr_para_test

  • Config description : The NQ corpus contains questions from real users, and it requires QA systems to read and comprehend an entire Wikipedia article that may or may not contain the answer to the question. The inclusion of real user questions, and the requirement that solutions should read an entire page to find the answer, makes NQ a more realistic and challenging task than prior QA datasets. This version includes additional paragraphs (obtained using the DPR retrieval engine) to augment each question.

  • Download size : 306.94 MiB

  • Dataset size : 310.48 MiB

  • Auto-cached ( documentation ): No

  • Splits :

Split Examples
'test' 6,468
'train' 96,676
  • Citation :
@article{kwiatkowski-etal-2019-natural,
    title = "Natural Questions: A Benchmark for Question Answering Research",
    author = "Kwiatkowski, Tom  and
      Palomaki, Jennimaria  and
      Redfield, Olivia  and
      Collins, Michael  and
      Parikh, Ankur  and
      Alberti, Chris  and
      Epstein, Danielle  and
      Polosukhin, Illia  and
      Devlin, Jacob  and
      Lee, Kenton  and
      Toutanova, Kristina  and
      Jones, Llion  and
      Kelcey, Matthew  and
      Chang, Ming-Wei  and
      Dai, Andrew M.  and
      Uszkoreit, Jakob  and
      Le, Quoc  and
      Petrov, Slav",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "7",
    year = "2019",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/Q19-1026",
    doi = "10.1162/tacl_a_00276",
    pages = "452--466",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/newsqa

  • Config description : NewsQA is a challenging machine comprehension dataset of human-generated question-answer pairs. Crowdworkers supply questions and answers based on a set of news articles from CNN, with answers consisting of spans of text from the corresponding articles.

  • Download size : 283.33 MiB

  • Dataset size : 285.94 MiB

  • Auto-cached ( documentation ): No

  • Splits :

Split Examples
'train' 75,882
'validation' 4,309
  • Citation :
@inproceedings{trischler-etal-2017-newsqa,
    title = "{N}ews{QA}: A Machine Comprehension Dataset",
    author = "Trischler, Adam  and
      Wang, Tong  and
      Yuan, Xingdi  and
      Harris, Justin  and
      Sordoni, Alessandro  and
      Bachman, Philip  and
      Suleman, Kaheer",
    booktitle = "Proceedings of the 2nd Workshop on Representation Learning for {NLP}",
    month = aug,
    year = "2017",
    address = "Vancouver, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/W17-2623",
    doi = "10.18653/v1/W17-2623",
    pages = "191--200",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/openbookqa

  • Config description : OpenBookQA aims to promote research in advanced question-answering, probing a deeper understanding of both the topic (with salient facts summarized as an open book, also provided with the dataset) and the language it is expressed in. In particular, it contains questions that require multi-step reasoning, use of additional common and commonsense knowledge, and rich text comprehension. OpenBookQA is a new kind of question-answering dataset modeled after open book exams for assessing human understanding of a subject.

  • Download size : 942.34 KiB

  • Dataset size : 1.11 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'test' 500
'train' 4,957
'validation' 500
  • Citation :
@inproceedings{mihaylov-etal-2018-suit,
    title = "Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering",
    author = "Mihaylov, Todor  and
      Clark, Peter  and
      Khot, Tushar  and
      Sabharwal, Ashish",
    booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
    month = oct # "-" # nov,
    year = "2018",
    address = "Brussels, Belgium",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D18-1260",
    doi = "10.18653/v1/D18-1260",
    pages = "2381--2391",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/openbookqa_dev

  • Config description : OpenBookQA aims to promote research in advanced question-answering, probing a deeper understanding of both the topic (with salient facts summarized as an open book, also provided with the dataset) and the language it is expressed in. In particular, it contains questions that require multi-step reasoning, use of additional common and commonsense knowledge, and rich text comprehension. OpenBookQA is a new kind of question-answering dataset modeled after open book exams for assessing human understanding of a subject.

  • Download size : 942.34 KiB

  • Dataset size : 1.11 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'test' 500
'train' 4,957
'validation' 500
  • Citation :
@inproceedings{mihaylov-etal-2018-suit,
    title = "Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering",
    author = "Mihaylov, Todor  and
      Clark, Peter  and
      Khot, Tushar  and
      Sabharwal, Ashish",
    booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
    month = oct # "-" # nov,
    year = "2018",
    address = "Brussels, Belgium",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D18-1260",
    doi = "10.18653/v1/D18-1260",
    pages = "2381--2391",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/openbookqa_with_ir

  • Config description : OpenBookQA aims to promote research in advanced question-answering, probing a deeper understanding of both the topic (with salient facts summarized as an open book, also provided with the dataset) and the language it is expressed in. In particular, it contains questions that require multi-step reasoning, use of additional common and commonsense knowledge, and rich text comprehension. OpenBookQA is a new kind of question-answering dataset modeled after open book exams for assessing human understanding of a subject. This version includes paragraphs fetched via an information retrieval system as additional evidence. A short sketch contrasting this variant with the plain openbookqa config is shown after this config's citation.

  • Download size : 6.08 MiB

  • Dataset size : 6.28 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'test' 500
'train' 4,957
'validation' 500
  • Citation :
@inproceedings{mihaylov-etal-2018-suit,
    title = "Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering",
    author = "Mihaylov, Todor  and
      Clark, Peter  and
      Khot, Tushar  and
      Sabharwal, Ashish",
    booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
    month = oct # "-" # nov,
    year = "2018",
    address = "Brussels, Belgium",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D18-1260",
    doi = "10.18653/v1/D18-1260",
    pages = "2381--2391",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."
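
To see what the "_with_ir" suffix adds in practice, the sketch below compares typical input lengths between this config and the plain openbookqa config on their validation splits. It is a rough sketch assuming the standard tfds.load API; the averaging helper and the sample size of 50 examples are arbitrary choices for illustration.

import tensorflow_datasets as tfds

def mean_input_len(ds, n=50):
    """Average byte length of the 'input' feature over the first n examples."""
    lengths = [len(ex["input"]) for ex in tfds.as_numpy(ds.take(n))]
    return sum(lengths) / len(lengths)

plain = tfds.load("unified_qa/openbookqa", split="validation")
with_ir = tfds.load("unified_qa/openbookqa_with_ir", split="validation")

# Inputs in the _with_ir variant also carry retrieved paragraphs, so they
# should be noticeably longer on average.
print("openbookqa        :", mean_input_len(plain))
print("openbookqa_with_ir:", mean_input_len(with_ir))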

unified_qa/openbookqa_with_ir_dev

  • Config description : OpenBookQA aims to promote research in advanced question-answering, probing a deeper understanding of both the topic (with salient facts summarized as an open book, also provided with the dataset) and the language it is expressed in. In particular, it contains questions that require multi-step reasoning, use of additional common and commonsense knowledge, and rich text comprehension. OpenBookQA is a new kind of question-answering dataset modeled after open book exams for assessing human understanding of a subject. This version includes paragraphs fetched via an information retrieval system as additional evidence.

  • Download size : 6.08 MiB

  • Dataset size : 6.28 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'test' 500
'train' 4,957
'validation' 500
  • Citation :
@inproceedings{mihaylov-etal-2018-suit,
    title = "Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering",
    author = "Mihaylov, Todor  and
      Clark, Peter  and
      Khot, Tushar  and
      Sabharwal, Ashish",
    booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
    month = oct # "-" # nov,
    year = "2018",
    address = "Brussels, Belgium",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D18-1260",
    doi = "10.18653/v1/D18-1260",
    pages = "2381--2391",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/physical_iqa

  • Config description : This is a dataset for benchmarking progress in physical commonsense understanding. The underlying task is multiple choice question answering: given a question q and two possible solutions s1, s2, a model or a human must choose the most appropriate solution, of which exactly one is correct. The dataset focuses on everyday situations with a preference for atypical solutions. The dataset is inspired by instructables.com, which provides users with instructions on how to build, craft, bake, or manipulate objects using everyday materials. Annotators are asked to provide semantic perturbations or alternative approaches which are otherwise syntactically and topically similar to ensure physical knowledge is targeted. The dataset is further cleaned of basic artifacts using the AFLite algorithm.

  • Download size : 6.01 MiB

  • Dataset size : 6.59 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'train' 16,113
'validation' 1,838
  • Citation :
@inproceedings{bisk2020piqa,
    title={Piqa: Reasoning about physical commonsense in natural language},
    author={Bisk, Yonatan and Zellers, Rowan and Gao, Jianfeng and Choi, Yejin and others},
    booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
    volume={34},
    number={05},
    pages={7432--7439},
    year={2020}
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/qasc

  • Config description : QASC is a question-answering dataset with a focus on sentence composition. It consists of 8-way multiple-choice questions about grade school science, and comes with a corpus of 17M sentences.

  • Download size : 1.75 MiB

  • Dataset size : 2.09 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'test' 920
'train' 8,134
'validation' 926
  • Citation :
@inproceedings{khot2020qasc,
    title={Qasc: A dataset for question answering via sentence composition},
    author={Khot, Tushar and Clark, Peter and Guerquin, Michal and Jansen, Peter and Sabharwal, Ashish},
    booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
    volume={34},
    number={05},
    pages={8082--8090},
    year={2020}
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/qasc_test

  • Config description : QASC is a question-answering dataset with a focus on sentence composition. It consists of 8-way multiple-choice questions about grade school science, and comes with a corpus of 17M sentences.

  • Download size : 1.75 MiB

  • Dataset size : 2.09 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'test' 920
'train' 8,134
'validation' 926
  • Citation :
@inproceedings{khot2020qasc,
    title={Qasc: A dataset for question answering via sentence composition},
    author={Khot, Tushar and Clark, Peter and Guerquin, Michal and Jansen, Peter and Sabharwal, Ashish},
    booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
    volume={34},
    number={05},
    pages={8082--8090},
    year={2020}
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/qasc_with_ir

  • Config description : QASC is a question-answering dataset with a focus on sentence composition. It consists of 8-way multiple-choice questions about grade school science, and comes with a corpus of 17M sentences. This version includes paragraphs fetched via an information retrieval system as additional evidence.

  • Download size : 16.95 MiB

  • Dataset size : 17.30 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'test' 920
'train' 8,134
'validation' 926
  • Citation :
@inproceedings{khot2020qasc,
    title={Qasc: A dataset for question answering via sentence composition},
    author={Khot, Tushar and Clark, Peter and Guerquin, Michal and Jansen, Peter and Sabharwal, Ashish},
    booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
    volume={34},
    number={05},
    pages={8082--8090},
    year={2020}
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/qasc_with_ir_test

  • Config description : QASC is a question-answering dataset with a focus on sentence composition. It consists of 8-way multiple-choice questions about grade school science, and comes with a corpus of 17M sentences. This version includes paragraphs fetched via an information retrieval system as additional evidence.

  • Download size : 16.95 MiB

  • Dataset size : 17.30 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'test' 920
'train' 8,134
'validation' 926
  • Citation :
@inproceedings{khot2020qasc,
    title={Qasc: A dataset for question answering via sentence composition},
    author={Khot, Tushar and Clark, Peter and Guerquin, Michal and Jansen, Peter and Sabharwal, Ashish},
    booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
    volume={34},
    number={05},
    pages={8082--8090},
    year={2020}
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/quoref

  • Config description : This dataset tests the coreferential reasoning capability of reading comprehension systems. In this span-selection benchmark containing questions over paragraphs from Wikipedia, a system must resolve hard coreferences before selecting the appropriate span(s) in the paragraphs for answering questions.

  • Download size : 51.43 MiB

  • Dataset size : 52.29 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'train' 22,265
'validation' 2,768
  • Citation :
@inproceedings{dasigi-etal-2019-quoref,
    title = "{Q}uoref: A Reading Comprehension Dataset with Questions Requiring Coreferential Reasoning",
    author = "Dasigi, Pradeep  and
      Liu, Nelson F.  and
      Marasovi{\'c}, Ana  and
      Smith, Noah A.  and
      Gardner, Matt",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
    month = nov,
    year = "2019",
    address = "Hong Kong, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D19-1606",
    doi = "10.18653/v1/D19-1606",
    pages = "5925--5932",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/race_string

  • Config description : RACE is a large-scale reading comprehension dataset collected from English examinations in China, which are designed for middle school and high school students. The dataset can serve as training and test sets for machine comprehension.

  • Download size : 167.97 MiB

  • Dataset size : 171.23 MiB

  • Auto-cached ( documentation ): Yes (test, validation), only when shuffle_files=False (train); a short caching sketch is shown after this config's citation

  • Splits :

Split Examples
'test' 4,934
'train' 87,863
'validation' 4,887
  • Citation :
@inproceedings{lai-etal-2017-race,
    title = "{RACE}: Large-scale {R}e{A}ding Comprehension Dataset From Examinations",
    author = "Lai, Guokun  and
      Xie, Qizhe  and
      Liu, Hanxiao  and
      Yang, Yiming  and
      Hovy, Eduard",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1082",
    doi = "10.18653/v1/D17-1082",
    pages = "785--794",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."
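
The auto-caching note for this config singles out the train split, which is auto-cached only when file shuffling is disabled. The minimal sketch below, assuming the standard tfds.load API, shows the two ways of reading the train split that the note distinguishes; nothing beyond the shuffle_files flag is specific to this config.

import tensorflow_datasets as tfds

# Train split read in deterministic file order: eligible for auto-caching
# according to the note above.
train_cached = tfds.load("unified_qa/race_string", split="train",
                         shuffle_files=False)

# Train split read with shuffled files: not auto-cached per the note above.
train_shuffled = tfds.load("unified_qa/race_string", split="train",
                           shuffle_files=True)

# Test and validation splits are auto-cached regardless of shuffling.
validation = tfds.load("unified_qa/race_string", split="validation")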

unified_qa/race_string_dev

  • Config description : RACE is a large-scale reading comprehension dataset collected from English examinations in China, which are designed for middle school and high school students. The dataset can serve as training and test sets for machine comprehension.

  • Download size : 167.97 MiB

  • Dataset size : 171.23 MiB

  • Auto-cached ( documentation ): Yes (test, validation), only when shuffle_files=False (train)

  • Splits :

Split Examples
'test' 4,934
'train' 87,863
'validation' 4,887
  • Citation :
@inproceedings{lai-etal-2017-race,
    title = "{RACE}: Large-scale {R}e{A}ding Comprehension Dataset From Examinations",
    author = "Lai, Guokun  and
      Xie, Qizhe  and
      Liu, Hanxiao  and
      Yang, Yiming  and
      Hovy, Eduard",
    booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
    month = sep,
    year = "2017",
    address = "Copenhagen, Denmark",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D17-1082",
    doi = "10.18653/v1/D17-1082",
    pages = "785--794",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/ropes

  • Config description : This dataset tests a system's ability to apply knowledge from a passage of text to a new situation. A system is presented with a background passage containing a causal or qualitative relation(s) (e.g., "animal pollinators increase efficiency of fertilization in flowers"), a novel situation that uses this background, and questions that require reasoning about effects of the relationships in the background passage in the context of the situation.

  • Download size : 12.91 MiB

  • Dataset size : 13.35 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'train' 10,924
'validation' 1,688
  • Citation :
@inproceedings{lin-etal-2019-reasoning,
    title = "Reasoning Over Paragraph Effects in Situations",
    author = "Lin, Kevin  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Gardner, Matt",
    booktitle = "Proceedings of the 2nd Workshop on Machine Reading for Question Answering",
    month = nov,
    year = "2019",
    address = "Hong Kong, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D19-5808",
    doi = "10.18653/v1/D19-5808",
    pages = "58--62",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/social_iqa

  • Config description : This is a large-scale benchmark for commonsense reasoning about social situations. Social IQa contains multiple choice questions for probing emotional and social intelligence in a variety of everyday situations. Through crowdsourcing, commonsense questions along with correct and incorrect answers about social interactions are collected, using a new framework that mitigates stylistic artifacts in incorrect answers by asking workers to provide the right answer to a different but related question.

  • Download size : 7.08 MiB

  • Dataset size : 8.22 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'train' 33,410
'validation' 1,954
  • Citation :
@inproceedings{sap-etal-2019-social,
    title = "Social {IQ}a: Commonsense Reasoning about Social Interactions",
    author = "Sap, Maarten  and
      Rashkin, Hannah  and
      Chen, Derek  and
      Le Bras, Ronan  and
      Choi, Yejin",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
    month = nov,
    year = "2019",
    address = "Hong Kong, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D19-1454",
    doi = "10.18653/v1/D19-1454",
    pages = "4463--4473",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/squad1_1

  • Config description : This is a reading comprehension dataset consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage.

  • Download size : 80.62 MiB

  • Dataset size : 83.99 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'train' 87,514
'validation' 10,570
  • Citation :
@inproceedings{rajpurkar-etal-2016-squad,
    title = "{SQ}u{AD}: 100,000+ Questions for Machine Comprehension of Text",
    author = "Rajpurkar, Pranav  and
      Zhang, Jian  and
      Lopyrev, Konstantin  and
      Liang, Percy",
    booktitle = "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2016",
    address = "Austin, Texas",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D16-1264",
    doi = "10.18653/v1/D16-1264",
    pages = "2383--2392",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/squad2

  • Config description : This dataset combines the original Stanford Question Answering Dataset (SQuAD) dataset with unanswerable questions written adversarially by crowdworkers to look similar to answerable ones.

  • Download size : 116.56 MiB

  • Dataset size : 121.43 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'train' 130,149
'validation' 11,873
  • Citation :
@inproceedings{rajpurkar-etal-2018-know,
    title = "Know What You Don{'}t Know: Unanswerable Questions for {SQ}u{AD}",
    author = "Rajpurkar, Pranav  and
      Jia, Robin  and
      Liang, Percy",
    booktitle = "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
    month = jul,
    year = "2018",
    address = "Melbourne, Australia",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P18-2124",
    doi = "10.18653/v1/P18-2124",
    pages = "784--789",
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/winogrande_l

  • Config description : This dataset is inspired by the original Winograd Schema Challenge design, but adjusted to improve both the scale and the hardness of the dataset. The key steps of the dataset construction consist of (1) a carefully designed crowdsourcing procedure, followed by (2) systematic bias reduction using a novel AfLite algorithm that generalizes human-detectable word associations to machine-detectable embedding associations. Training sets with different sizes are provided. This set corresponds to size l. A short sketch for comparing the size variants is shown after this config's citation.

  • Download size : 1.49 MiB

  • Dataset size : 1.83 MiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'train' 10,234
'validation' 1,267
  • Citation :
@inproceedings{sakaguchi2020winogrande,
  title={Winogrande: An adversarial winograd schema challenge at scale},
  author={Sakaguchi, Keisuke and Le Bras, Ronan and Bhagavatula, Chandra and Choi, Yejin},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={34},
  number={05},
  pages={8732--8740},
  year={2020}
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."
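
Because the winogrande_l, winogrande_m, and winogrande_s configs differ only in the size of their training sets, one way to compare them is to load each config by name and read its train-split cardinality from the builder metadata. This is a minimal sketch assuming the standard tfds.builder API; the size suffixes are the ones documented in this catalog.

import tensorflow_datasets as tfds

# Compare the train-split sizes of the WinoGrande size variants.
for size in ("l", "m", "s"):
    builder = tfds.builder(f"unified_qa/winogrande_{size}")
    builder.download_and_prepare()  # no-op if the data is already prepared
    n_train = builder.info.splits["train"].num_examples
    print(f"winogrande_{size}: {n_train} training examples")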

unified_qa/winogrande_m

  • Config description : This dataset is inspired by the original Winograd Schema Challenge design, but adjusted to improve both the scale and the hardness of the dataset. The key steps of the dataset construction consist of (1) a carefully designed crowdsourcing procedure, followed by (2) systematic bias reduction using a novel AfLite algorithm that generalizes human-detectable word associations to machine-detectable embedding associations. Training sets with different sizes are provided. This set corresponds to size m.

  • Download size : 507.46 KiB

  • Dataset size : 623.15 KiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'train' 2,558
'validation' 1,267
  • Citation :
@inproceedings{sakaguchi2020winogrande,
  title={Winogrande: An adversarial winograd schema challenge at scale},
  author={Sakaguchi, Keisuke and Le Bras, Ronan and Bhagavatula, Chandra and Choi, Yejin},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={34},
  number={05},
  pages={8732--8740},
  year={2020}
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."

unified_qa/winogrande_s

  • Config description : This dataset is inspired by the original Winograd Schema Challenge design, but adjusted to improve both the scale and the hardness of the dataset. The key steps of the dataset construction consist of (1) a carefully designed crowdsourcing procedure, followed by (2) systematic bias reduction using a novel AfLite algorithm that generalizes human-detectable word associations to machine-detectable embedding associations. Training sets with different sizes are provided. This set corresponds to size s.

  • Download size : 479.24 KiB

  • Dataset size : 590.47 KiB

  • Auto-cached ( documentation ): Yes

  • Splits :

Split Examples
'test' 1,767
'train' 640
'validation' 1,267
  • Citation :
@inproceedings{sakaguchi2020winogrande,
  title={Winogrande: An adversarial winograd schema challenge at scale},
  author={Sakaguchi, Keisuke and Le Bras, Ronan and Bhagavatula, Chandra and Choi, Yejin},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={34},
  number={05},
  pages={8732--8740},
  year={2020}
}

@inproceedings{khashabi-etal-2020-unifiedqa,
    title = "{UNIFIEDQA}: Crossing Format Boundaries with a Single {QA} System",
    author = "Khashabi, Daniel  and
      Min, Sewon  and
      Khot, Tushar  and
      Sabharwal, Ashish  and
      Tafjord, Oyvind  and
      Clark, Peter  and
      Hajishirzi, Hannaneh",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.171",
    doi = "10.18653/v1/2020.findings-emnlp.171",
    pages = "1896--1907",
}

Note that each UnifiedQA dataset has its own citation. Please see the source to
see the correct citation for each contained dataset."