- Description:
The comments in this dataset come from an archive of Wikipedia talk page comments. These have been annotated by Jigsaw for toxicity, as well as (for the main config) a variety of toxicity subtypes, including severe toxicity, obscenity, threatening language, insulting language, and identity attacks. This dataset is a replica of the data released for the Jigsaw Toxic Comment Classification Challenge and Jigsaw Multilingual Toxic Comment Classification competition on Kaggle, with the test dataset merged with the test_labels released after the end of the competitions. Test data not used for scoring has been dropped. This dataset is released under CC0, as is the underlying comment text.
Source code:
tfds.text.WikipediaToxicitySubtypes
Versions:
- 0.2.0: Updated features for consistency with CivilComments dataset.
- 0.3.0: Added WikipediaToxicityMultilingual config.
- 0.3.1 (default): Added a unique id for each comment. (For the Multilingual config, these are only unique within each split.)
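A specific version can be pinned at load time using the `name:version` convention supported by `tfds.load`; a minimal sketch, assuming TensorFlow Datasets is installed:

```python
import tensorflow_datasets as tfds

# Pin the default config to version 0.3.1 via the name:version convention.
ds, info = tfds.load(
    'wikipedia_toxicity_subtypes:0.3.1',
    split='train',
    with_info=True,
)
print(info.features)  # the FeaturesDict described below
```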
Download size:
50.57 MiB
Auto-cached (documentation): Yes
Supervised keys (See as_supervised doc): ('text', 'toxicity')
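With `as_supervised=True`, each example arrives as a `(text, toxicity)` tuple rather than a feature dictionary; a short sketch (TF2 eager mode assumed):

```python
import tensorflow_datasets as tfds

ds = tfds.load('wikipedia_toxicity_subtypes', split='train',
               as_supervised=True)
for text, toxicity in ds.take(2):
    # text is a scalar tf.string tensor, toxicity a scalar float32.
    print(text.numpy()[:60], float(toxicity.numpy()))
```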
Figure (tfds.show_examples): Not supported.
Citation:
@inproceedings{10.1145/3038912.3052591,
author = {Wulczyn, Ellery and Thain, Nithum and Dixon, Lucas},
title = {Ex Machina: Personal Attacks Seen at Scale},
year = {2017},
isbn = {9781450349130},
publisher = {International World Wide Web Conferences Steering Committee},
address = {Republic and Canton of Geneva, CHE},
url = {https://doi.org/10.1145/3038912.3052591},
doi = {10.1145/3038912.3052591},
booktitle = {Proceedings of the 26th International Conference on World Wide Web},
pages = {1391--1399},
numpages = {9},
keywords = {online discussions, wikipedia, online harassment},
location = {Perth, Australia},
series = {WWW '17}
}
wikipedia_toxicity_subtypes/EnglishSubtypes (default config)
- Config description: The comments in the WikipediaToxicitySubtypes config are from an archive of English Wikipedia talk page comments which have been annotated by Jigsaw for toxicity, as well as five toxicity subtype labels (severe toxicity, obscene, threat, insult, identity_attack). The toxicity and toxicity subtype labels are binary values (0 or 1) indicating whether the majority of annotators assigned that attribute to the comment text. This config is a replica of the data released for the Jigsaw Toxic Comment Classification Challenge on Kaggle, with the test dataset joined with the test_labels released after the competition, and test data not used for scoring dropped.
See the Kaggle documentation https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/data or https://figshare.com/articles/Wikipedia_Talk_Labels_Toxicity/4563973 for more details.
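Because the labels are 0/1 floats reflecting the annotator majority, toxic comments can be selected with a simple threshold; a minimal sketch using `tf.data.Dataset.filter`:

```python
import tensorflow_datasets as tfds

ds = tfds.load('wikipedia_toxicity_subtypes/EnglishSubtypes', split='train')
# Keep only comments the majority of annotators labeled toxic.
toxic_only = ds.filter(lambda ex: ex['toxicity'] > 0.5)
```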
Homepage: https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/data
Dataset size:
128.32 MiB
Splits:
Split | Examples
---|---
'test' | 63,978
'train' | 159,571
- Feature structure:
FeaturesDict({
'id': Text(shape=(), dtype=string),
'identity_attack': float32,
'insult': float32,
'language': Text(shape=(), dtype=string),
'obscene': float32,
'severe_toxicity': float32,
'text': Text(shape=(), dtype=string),
'threat': float32,
'toxicity': float32,
})
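A minimal sketch of reading one example from this structure; the field names match the FeaturesDict above:

```python
import tensorflow_datasets as tfds

ds = tfds.load('wikipedia_toxicity_subtypes', split='test')
for example in ds.take(1):
    print(example['id'].numpy())         # unique comment id (bytes)
    print(example['text'].numpy()[:80])  # raw comment text
    # Each of the six labels is a 0.0/1.0 float.
    for label in ('toxicity', 'severe_toxicity', 'obscene',
                  'threat', 'insult', 'identity_attack'):
        print(label, float(example[label].numpy()))
```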
- Feature documentation:
Feature | Class | Shape | Dtype | Description
---|---|---|---|---
 | FeaturesDict | | |
id | Text | | string |
identity_attack | Tensor | | float32 |
insult | Tensor | | float32 |
language | Text | | string |
obscene | Tensor | | float32 |
severe_toxicity | Tensor | | float32 |
text | Text | | string |
threat | Tensor | | float32 |
toxicity | Tensor | | float32 |
- Examples (tfds.as_dataframe): Not shown.
wikipedia_toxicity_subtypes/Multilingual
- Config description: The comments in the WikipediaToxicityMultilingual config are from an archive of non-English Wikipedia talk page comments annotated by Jigsaw for toxicity, with a binary value (0 or 1) indicating whether the majority of annotators rated the comment text as toxic. The comments in this config are in multiple different languages (Turkish, Italian, Spanish, Portuguese, Russian, and French). This config is a replica of the data released for the Jigsaw Multilingual Toxic Comment Classification competition on Kaggle, with the test dataset joined with the test_labels released after the competition.
See the Kaggle documentation https://www.kaggle.com/c/jigsaw-multilingual-toxic-comment-classification/data for more details.
Homepage: https://www.kaggle.com/c/jigsaw-multilingual-toxic-comment-classification/data
Dataset size:
35.13 MiB
Splits:
Split | Examples
---|---
'test' | 63,812
'validation' | 8,000
- Feature structure:
FeaturesDict({
'id': Text(shape=(), dtype=string),
'language': Text(shape=(), dtype=string),
'text': Text(shape=(), dtype=string),
'toxicity': float32,
})
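A minimal sketch of loading this config and reading the `language` field; the split name follows the table above:

```python
import tensorflow_datasets as tfds

ds = tfds.load('wikipedia_toxicity_subtypes/Multilingual',
               split='validation')
for example in ds.take(3):
    # language identifies the comment's language (e.g. a short code
    # such as 'tr' for Turkish; exact format is an assumption here).
    print(example['language'].numpy().decode(),
          float(example['toxicity'].numpy()))
```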
- Feature documentation:
Feature | Class | Shape | Dtype | Description |
---|---|---|---|---|
FeaturesDict | ||||
id | Text | string | ||
language | Text | string | ||
text | Text | string | ||
toxicity | Tensor | float32 |
- Examples (tfds.as_dataframe): Not shown.