large_spanish_corpus

References:

JRC

Use the following command to load this dataset in TFDS:

ds = tfds.load('huggingface:large_spanish_corpus/JRC')
  • Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
  • License: MIT
  • Version: 1.1.0
  • Splits:
Split     Examples
'train'   3410620
  • Features:
{
    "text": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    }
}
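The load command above can be wrapped in a small sketch. This is an illustrative example, not part of the original page: the helper name `dataset_name` is ours, and the split slice `'train[:1000]'` is just one way to avoid downloading a multi-million-example corpus in full.

```python
CONFIG = "JRC"  # any of the 15 corpus configs, or "combined"

def dataset_name(config: str) -> str:
    """Build the TFDS name string for a given large_spanish_corpus config."""
    return f"huggingface:large_spanish_corpus/{config}"

if __name__ == "__main__":
    # Imported here so the helper above stays usable without the
    # (heavy) tensorflow_datasets dependency installed.
    import tensorflow_datasets as tfds

    # Slice the split to keep the download small; full 'train' is ~3.4M examples.
    ds = tfds.load(dataset_name(CONFIG), split="train[:1000]")
    for example in ds.take(3):
        # Each example is a dict with a single "text" feature (tf.string).
        print(example["text"].numpy().decode("utf-8")[:80])
```

The same pattern applies to every config listed below; only the config name in the load string changes.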

EMEA

Use the following command to load this dataset in TFDS:

ds = tfds.load('huggingface:large_spanish_corpus/EMEA')
  • Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
  • License: MIT
  • Version: 1.1.0
  • Splits:
Split     Examples
'train'   1221233
  • Features:
{
    "text": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    }
}

GlobalVoices

Use the following command to load this dataset in TFDS:

ds = tfds.load('huggingface:large_spanish_corpus/GlobalVoices')
  • Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
  • License: MIT
  • Version: 1.1.0
  • Splits:
Split     Examples
'train'   897075
  • Features:
{
    "text": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    }
}

ECB

Use the following command to load this dataset in TFDS:

ds = tfds.load('huggingface:large_spanish_corpus/ECB')
  • Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
  • License: MIT
  • Version: 1.1.0
  • Splits:
Split     Examples
'train'   1875738
  • Features:
{
    "text": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    }
}

DOGC

Use the following command to load this dataset in TFDS:

ds = tfds.load('huggingface:large_spanish_corpus/DOGC')
  • Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
  • License: MIT
  • Version: 1.1.0
  • Splits:
Split     Examples
'train'   10917053
  • Features:
{
    "text": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    }
}

all_wikis

Use the following command to load this dataset in TFDS:

ds = tfds.load('huggingface:large_spanish_corpus/all_wikis')
  • Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
  • License: MIT
  • Version: 1.1.0
  • Splits:
Split     Examples
'train'   28109484
  • Features:
{
    "text": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    }
}

TED

Use the following command to load this dataset in TFDS:

ds = tfds.load('huggingface:large_spanish_corpus/TED')
  • Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
  • License: MIT
  • Version: 1.1.0
  • Splits:
Split     Examples
'train'   157910
  • Features:
{
    "text": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    }
}

multiUN

Use the following command to load this dataset in TFDS:

ds = tfds.load('huggingface:large_spanish_corpus/multiUN')
  • Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
  • License: MIT
  • Version: 1.1.0
  • Splits:
Split     Examples
'train'   13127490
  • Features:
{
    "text": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    }
}

Europarl

Use the following command to load this dataset in TFDS:

ds = tfds.load('huggingface:large_spanish_corpus/Europarl')
  • Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
  • License: MIT
  • Version: 1.1.0
  • Splits:
Split     Examples
'train'   2174141
  • Features:
{
    "text": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    }
}

NewsCommentary11

Use the following command to load this dataset in TFDS:

ds = tfds.load('huggingface:large_spanish_corpus/NewsCommentary11')
  • Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
  • License: MIT
  • Version: 1.1.0
  • Splits:
Split     Examples
'train'   288771
  • Features:
{
    "text": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    }
}

UN

Use the following command to load this dataset in TFDS:

ds = tfds.load('huggingface:large_spanish_corpus/UN')
  • Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
  • License: MIT
  • Version: 1.1.0
  • Splits:
Split     Examples
'train'   74067
  • Features:
{
    "text": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    }
}

EUBookShop

Use the following command to load this dataset in TFDS:

ds = tfds.load('huggingface:large_spanish_corpus/EUBookShop')
  • Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
  • License: MIT
  • Version: 1.1.0
  • Splits:
Split     Examples
'train'   8214959
  • Features:
{
    "text": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    }
}

ParaCrawl

Use the following command to load this dataset in TFDS:

ds = tfds.load('huggingface:large_spanish_corpus/ParaCrawl')
  • Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
  • License: MIT
  • Version: 1.1.0
  • Splits:
Split     Examples
'train'   15510649
  • Features:
{
    "text": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    }
}

OpenSubtitles2018

Use the following command to load this dataset in TFDS:

ds = tfds.load('huggingface:large_spanish_corpus/OpenSubtitles2018')
  • Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
  • License: MIT
  • Version: 1.1.0
  • Splits:
Split     Examples
'train'   213508602
  • Features:
{
    "text": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    }
}

DGT

Use the following command to load this dataset in TFDS:

ds = tfds.load('huggingface:large_spanish_corpus/DGT')
  • Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
  • License: MIT
  • Version: 1.1.0
  • Splits:
Split     Examples
'train'   3168368
  • Features:
{
    "text": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    }
}

combined

Use the following command to load this dataset in TFDS:

ds = tfds.load('huggingface:large_spanish_corpus/combined')
  • Description:
The Large Spanish Corpus is a compilation of 15 unlabelled Spanish corpora spanning Wikipedia to European parliament notes. Each config contains the data corresponding to a different corpus. For example, "all_wiki" only includes examples from Spanish Wikipedia. By default, the config is set to "combined" which loads all the corpora; with this setting you can also specify the number of samples to return per corpus by configuring the "split" argument.
  • License: MIT
  • Version: 1.1.0
  • Splits:
Split     Examples
'train'   302656160
  • Features:
{
    "text": {
        "dtype": "string",
        "id": null,
        "_type": "Value"
    }
}
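The per-config 'train' counts listed on this page can be tabulated for quick size budgeting before committing to a download. A minimal sketch (the dictionary below simply restates the counts from the sections above; note the 15 corpora sum exactly to the "combined" count):

```python
# 'train' example counts per config, as listed on this page.
TRAIN_EXAMPLES = {
    "JRC": 3_410_620,
    "EMEA": 1_221_233,
    "GlobalVoices": 897_075,
    "ECB": 1_875_738,
    "DOGC": 10_917_053,
    "all_wikis": 28_109_484,
    "TED": 157_910,
    "multiUN": 13_127_490,
    "Europarl": 2_174_141,
    "NewsCommentary11": 288_771,
    "UN": 74_067,
    "EUBookShop": 8_214_959,
    "ParaCrawl": 15_510_649,
    "OpenSubtitles2018": 213_508_602,
    "DGT": 3_168_368,
}

COMBINED_EXAMPLES = 302_656_160  # 'combined' config

# The 15 individual corpora account for the full "combined" split.
assert sum(TRAIN_EXAMPLES.values()) == COMBINED_EXAMPLES

# Largest corpora first, to see what dominates "combined".
for name, n in sorted(TRAIN_EXAMPLES.items(), key=lambda kv: -kv[1])[:3]:
    print(f"{name}: {n:,}")
```

Running this shows that OpenSubtitles2018 alone contributes roughly 70% of the combined corpus, which is worth keeping in mind when sampling from "combined".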