ted_multi_translate

  • Description:

Massively multilingual (60 languages) dataset derived from TED Talk transcripts. Each record consists of parallel arrays of language and text. Missing and incomplete translations will be filtered out.

Split          Examples
'test'         7,213
'train'        258,098
'validation'   6,049
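
Each split can be loaded by name; a minimal sketch using the public tfds.load API (this assumes the tensorflow_datasets package is installed and is not the only way to load the data):

import tensorflow_datasets as tfds

# Split names match the table above.
train_ds = tfds.load('ted_multi_translate', split='train')
valid_ds = tfds.load('ted_multi_translate', split='validation')
test_ds = tfds.load('ted_multi_translate', split='test')
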
  • Feature structure:
FeaturesDict({
    'talk_name': Text(shape=(), dtype=string),
    'translations': TranslationVariableLanguages({
        'language': Text(shape=(), dtype=string),
        'translation': Text(shape=(), dtype=string),
    }),
})
  • Feature documentation:
Feature                   Class                          Shape  Dtype   Description
                          FeaturesDict
talk_name                 Text                                  string
translations              TranslationVariableLanguages
translations/language     Text                                  string
translations/translation  Text                                  string
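
As the description notes, 'language' and 'translation' are parallel variable-length arrays. A hedged sketch of reading one record, with field names taken from the feature structure above:

import tensorflow_datasets as tfds

ds = tfds.load('ted_multi_translate', split='validation')

for example in ds.take(1):
    # talk_name is a scalar string tensor.
    print(example['talk_name'].numpy().decode('utf-8'))
    # language and translation are parallel 1-D string tensors:
    # langs[i] is the language code of texts[i].
    langs = example['translations']['language'].numpy()
    texts = example['translations']['translation'].numpy()
    for lang, text in zip(langs, texts):
        print(lang.decode('utf-8'), text.decode('utf-8')[:60])
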
  • Citation:
@InProceedings{qi-EtAl:2018:N18-2,
  author    = {Qi, Ye  and  Sachan, Devendra  and  Felix, Matthieu  and  Padmanabhan, Sarguna  and  Neubig, Graham},
  title     = {When and Why Are Pre-Trained Word Embeddings Useful for Neural Machine Translation?},
  booktitle = {Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)},
  month     = {June},
  year      = {2018},
  address   = {New Orleans, Louisiana},
  publisher = {Association for Computational Linguistics},
  pages     = {529--535},
  abstract  = {The performance of Neural Machine Translation (NMT) systems often suffers in low-resource scenarios where sufficiently large-scale parallel corpora cannot be obtained. Pre-trained word embeddings have proven to be invaluable for improving performance in natural language analysis tasks, which often suffer from paucity of data. However, their utility for NMT has not been extensively explored. In this work, we perform five sets of experiments that analyze when we can expect pre-trained word embeddings to help in NMT tasks. We show that such embeddings can be surprisingly effective in some cases -- providing gains of up to 20 BLEU points in the most favorable setting.},
  url       = {http://www.aclweb.org/anthology/N18-2084}
}