cs_restaurants

  • Description:

A Czech data-to-text dataset in the restaurant domain. The input meaning representations contain a dialogue act type (inform, confirm, etc.), slots (food, area, etc.) and their values. It originated as a translation of the English San Francisco Restaurants dataset by Wen et al. (2015).

Split          Examples
'test'         842
'train'        3,569
'validation'   781
  • Features:
FeaturesDict({
    'delex_input_text': FeaturesDict({
        'table': Sequence({
            'column_header': tf.string,
            'content': tf.string,
            'row_number': tf.int16,
        }),
    }),
    'delex_target_text': tf.string,
    'input_text': FeaturesDict({
        'table': Sequence({
            'column_header': tf.string,
            'content': tf.string,
            'row_number': tf.int16,
        }),
    }),
    'target_text': tf.string,
})
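
The splits and feature structure above can be inspected directly through the standard tfds.load API. The following is a minimal sketch, assuming TensorFlow and tensorflow-datasets are installed; the nested dictionary access mirrors the FeaturesDict layout shown above, but the exact tensor shapes returned for the Sequence feature are an assumption.

import tensorflow_datasets as tfds

# Load the training split together with the dataset metadata.
ds, info = tfds.load('cs_restaurants', split='train', with_info=True)

# Split sizes: train 3,569 / validation 781 / test 842.
print(info.splits)

# Inspect one example: 'input_text' / 'delex_input_text' hold the meaning
# representation as a table of (column_header, content, row_number) entries,
# while 'target_text' / 'delex_target_text' hold the Czech output sentence.
for example in ds.take(1):
    print(example['input_text']['table']['column_header'])
    print(example['target_text'].numpy().decode('utf-8'))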
  • Citation:
@inproceedings{dusek_neural_2019,
        author = {Dušek, Ondřej and Jurčíček, Filip},
        title = {Neural {Generation} for {Czech}: {Data} and {Baselines}},
        shorttitle = {Neural {Generation} for {Czech}},
        url = {https://www.aclweb.org/anthology/W19-8670/},
        urldate = {2019-10-18},
        booktitle = {Proceedings of the 12th {International} {Conference} on {Natural} {Language} {Generation} ({INLG} 2019)},
        month = oct,
        address = {Tokyo, Japan},
        year = {2019},
        pages = {563--574},
        abstract = {We present the first dataset targeted at end-to-end NLG in Czech in the restaurant domain, along with several strong baseline models using the sequence-to-sequence approach. While non-English NLG is under-explored in general, Czech, as a morphologically rich language, makes the task even harder: Since Czech requires inflecting named entities, delexicalization or copy mechanisms do not work out-of-the-box and lexicalizing the generated outputs is non-trivial. In our experiments, we present two different approaches to this problem: (1) using a neural language model to select the correct inflected form while lexicalizing, (2) a two-step generation setup: our sequence-to-sequence model generates an interleaved sequence of lemmas and morphological tags, which are then inflected by a morphological generator.},
}