• Description:

Czech data-to-text dataset in the restaurant domain. The input meaning representations contain a dialogue act type (inform, confirm, etc.), slots (food, area, etc.) and their values. It originated as a translation of the English San Francisco Restaurants dataset by Wen et al. (2015).
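The meaning representations described above can be parsed into an act type plus a slot dictionary. The sketch below assumes a typical NLG surface syntax of the form `act(slot='value', ...)`; the dataset's exact MR formatting may differ, so treat the pattern as illustrative only.

```python
import re

def parse_mr(mr):
    """Split an MR like "inform(food='Czech', area='centre')" into
    (dialogue_act, {slot: value}).  The quoted key=value syntax is an
    assumption for illustration, not the dataset's guaranteed format."""
    match = re.fullmatch(r"(\w+)\((.*)\)", mr.strip())
    if not match:
        raise ValueError(f"Unrecognized MR: {mr!r}")
    act, slot_str = match.groups()
    slots = {}
    if slot_str:  # acts such as a bare request may carry no slots
        for pair in slot_str.split(","):
            key, _, value = pair.partition("=")
            slots[key.strip()] = value.strip().strip("'\"")
    return act, slots

act, slots = parse_mr("inform(food='Czech', area='centre')")
```

A slot-less act such as `reqmore()` parses to an empty dictionary under the same rule.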

Split          Examples
'test'         842
'train'        3,569
'validation'   781
  • Feature structure:
FeaturesDict({
    'delex_input_text': FeaturesDict({
        'table': Sequence({
            'column_header': string,
            'content': string,
            'row_number': int16,
        }),
    }),
    'delex_target_text': string,
    'input_text': FeaturesDict({
        'table': Sequence({
            'column_header': string,
            'content': string,
            'row_number': int16,
        }),
    }),
    'target_text': string,
})
  • Feature documentation:
Feature                               Class         Shape  Dtype   Description
delex_input_text                      FeaturesDict
delex_input_text/table                Sequence
delex_input_text/table/column_header  Tensor               string
delex_input_text/table/content        Tensor               string
delex_input_text/table/row_number     Tensor               int16
delex_target_text                     Tensor               string
input_text                            FeaturesDict
input_text/table                      Sequence
input_text/table/column_header        Tensor               string
input_text/table/content              Tensor               string
input_text/table/row_number           Tensor               int16
target_text                           Tensor               string
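Each example follows the nested structure documented above: a `table` sequence of `{column_header, content, row_number}` rows plus flat text fields. The sketch below linearizes such a table into a compact input string using a hand-built example that mirrors the schema; in real use the examples would come from TensorFlow Datasets (e.g. via `tfds.load`), which is not exercised here, and the sample values are invented for illustration.

```python
# Hand-built example matching the 'input_text/table' feature structure:
# a Sequence of rows, each with column_header, content and row_number.
example = {
    "input_text": {
        "table": [
            {"column_header": "food", "content": "Czech", "row_number": 1},
            {"column_header": "area", "content": "centre", "row_number": 1},
        ]
    }
}

def table_to_string(table):
    """Linearize table rows into comma-separated 'header=content' pairs."""
    return ", ".join(
        f"{row['column_header']}={row['content']}" for row in table
    )

flat = table_to_string(example["input_text"]["table"])
```

The `delex_input_text` field has the identical structure, with slot values replaced by delexicalized placeholders.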
  • Citation:
        author = {Dušek, Ondřej and Jurčíček, Filip},
        title = {Neural {Generation} for {Czech}: {Data} and {Baselines}},
        shorttitle = {Neural {Generation} for {Czech}},
        url = {https://www.aclweb.org/anthology/W19-8670/},
        urldate = {2019-10-18},
        booktitle = {Proceedings of the 12th {International} {Conference} on {Natural} {Language} {Generation} ({INLG} 2019)},
        month = oct,
        address = {Tokyo, Japan},
        year = {2019},
        pages = {563--574},
        abstract = {We present the first dataset targeted at end-to-end NLG in Czech in the restaurant domain, along with several strong baseline models using the sequence-to-sequence approach. While non-English NLG is under-explored in general, Czech, as a morphologically rich language, makes the task even harder: Since Czech requires inflecting named entities, delexicalization or copy mechanisms do not work out-of-the-box and lexicalizing the generated outputs is non-trivial. In our experiments, we present two different approaches to this problem: (1) using a neural language model to select the correct inflected form while lexicalizing, (2) a two-step generation setup: our sequence-to-sequence model generates an interleaved sequence of lemmas and morphological tags, which are then inflected by a morphological generator.},