- Description:
LongT5 benchmark
This dataset collection comprises the evaluation benchmarks used in the paper "LongT5: Efficient Text-To-Text Transformer for Long Sequences".
LongT5 is an extension of the T5 model that handles long input sequences more efficiently. It achieves state-of-the-art performance on several summarization benchmarks that require longer context or multi-document understanding.
- Homepage:
https://github.com/google-research/longt5
- Versions:
  - 1.0.0 (default): Initial release
- Datasets in the default version:
  - natural_questions: natural_questions/longt5:0.1.0
  - media_sum: media_sum:1.0.0
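
The identifiers above follow the TensorFlow Datasets naming convention of `name/config:version`. Below is a minimal loading sketch, assuming TensorFlow Datasets is installed and that these datasets are available under the listed names; the split name `"train"` is an assumption, not taken from this entry.

```python
# Minimal sketch of loading the collection's datasets with TensorFlow Datasets.
# The dataset identifiers come from the catalog entries listed above; the
# "train" split name is an assumption and may differ per dataset.
import tensorflow_datasets as tfds

# Natural Questions, LongT5 config, at the listed version.
nq = tfds.load("natural_questions/longt5:0.1.0", split="train")

# MediaSum at the listed version.
media_sum = tfds.load("media_sum:1.0.0", split="train")

# Inspect the feature keys of one example.
for example in nq.take(1):
    print(example.keys())
```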
- Citation: