civil_comments

  • Description:

This version of the CivilComments dataset provides access to the primary seven labels annotated by crowd workers; the toxicity and other tags are values between 0 and 1 indicating the fraction of annotators who assigned those attributes to the comment text.

The other tags are only available for a fraction of the input examples. They are currently ignored for the main dataset; the CivilCommentsIdentities set includes those labels, but consists only of the subset of the data that has them. The other attributes that were part of the original CivilComments release are included only in the raw data. See the Kaggle documentation for more details about the available features.
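
Because each label is the fraction of annotators who assigned that attribute, downstream classification work typically binarizes these scores. A minimal sketch, assuming the 0.5 cut-off commonly used (e.g., in the Kaggle challenge evaluation); the threshold is a modelling choice, not something stored in the dataset:

# Minimal sketch: binarize the fractional annotator-agreement scores.
# The 0.5 threshold is an assumption (a common convention), not part of the dataset.
LABELS = ["toxicity", "severe_toxicity", "obscene", "threat",
          "insult", "identity_attack", "sexual_explicit"]

def binarize(example, threshold=0.5):
    """Map each fractional label in [0, 1] to a 0/1 class."""
    return {name: int(float(example[name]) >= threshold) for name in LABELS}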

The comments in this dataset come from an archive of the Civil Comments platform, a commenting plugin for independent news sites. These public comments were created from 2015 to 2017 and appeared on approximately 50 English-language news sites across the world. When Civil Comments shut down in 2017, they chose to make the public comments available in a lasting open archive to enable future research. The original data, published on figshare, includes the public comment text and some associated metadata such as article IDs, timestamps, and commenter-generated "civility" labels, but does not include user IDs. Jigsaw extended this dataset by adding additional labels for toxicity and identity mentions. This dataset is an exact replica of the data released for the Jigsaw Unintended Bias in Toxicity Classification Kaggle challenge. This dataset is released under CC0, as is the underlying comment text.

@article{DBLP:journals/corr/abs-1903-04561,
  author    = {Daniel Borkan and
               Lucas Dixon and
               Jeffrey Sorensen and
               Nithum Thain and
               Lucy Vasserman},
  title     = {Nuanced Metrics for Measuring Unintended Bias with Real Data for Text
               Classification},
  journal   = {CoRR},
  volume    = {abs/1903.04561},
  year      = {2019},
  url       = {http://arxiv.org/abs/1903.04561},
  archivePrefix = {arXiv},
  eprint    = {1903.04561},
  timestamp = {Sun, 31 Mar 2019 19:01:24 +0200},
  biburl    = {https://dblp.org/rec/bib/journals/corr/abs-1903-04561},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}

civil_comments/CivilComments (default config)

  • Config description: The CivilComments set here includes all the data, but only the basic seven labels (toxicity, severe_toxicity, obscene, threat, insult, identity_attack, and sexual_explicit).

  • Dataset size: 929.13 MiB

  • Splits:

Split          Examples
'test'         97,320
'train'        1,804,874
'validation'   97,320

  • Features:
FeaturesDict({
    'identity_attack': tf.float32,
    'insult': tf.float32,
    'obscene': tf.float32,
    'severe_toxicity': tf.float32,
    'sexual_explicit': tf.float32,
    'text': Text(shape=(), dtype=tf.string),
    'threat': tf.float32,
    'toxicity': tf.float32,
})
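
A minimal loading sketch for this config with tensorflow_datasets (assuming TFDS and TensorFlow are installed; the data is downloaded on first use). Each example is a dict matching the FeaturesDict above:

import tensorflow_datasets as tfds

# Load the default config; each example holds the comment text
# plus the seven fractional labels.
ds = tfds.load("civil_comments/CivilComments", split="train")

for example in ds.take(3):
    text = example["text"].numpy().decode("utf-8")
    toxicity = float(example["toxicity"].numpy())
    print(f"{toxicity:.2f}  {text[:80]!r}")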

civil_comments/CivilCommentsIdentities

  • Config description: The CivilCommentsIdentities set here includes an extended set of identity labels in addition to the basic seven labels. However, it only includes the subset (roughly a quarter) of the data with all these features.

  • Dataset size: 503.34 MiB

  • Splits:

Split          Examples
'test'         21,577
'train'        405,130
'validation'   21,293

  • Features:
FeaturesDict({
    'asian': tf.float32,
    'atheist': tf.float32,
    'bisexual': tf.float32,
    'black': tf.float32,
    'buddhist': tf.float32,
    'christian': tf.float32,
    'female': tf.float32,
    'heterosexual': tf.float32,
    'hindu': tf.float32,
    'homosexual_gay_or_lesbian': tf.float32,
    'identity_attack': tf.float32,
    'insult': tf.float32,
    'intellectual_or_learning_disability': tf.float32,
    'jewish': tf.float32,
    'latino': tf.float32,
    'male': tf.float32,
    'muslim': tf.float32,
    'obscene': tf.float32,
    'other_disability': tf.float32,
    'other_gender': tf.float32,
    'other_race_or_ethnicity': tf.float32,
    'other_religion': tf.float32,
    'other_sexual_orientation': tf.float32,
    'physical_disability': tf.float32,
    'psychiatric_or_mental_illness': tf.float32,
    'severe_toxicity': tf.float32,
    'sexual_explicit': tf.float32,
    'text': Text(shape=(), dtype=tf.string),
    'threat': tf.float32,
    'toxicity': tf.float32,
    'transgender': tf.float32,
    'white': tf.float32,
})
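
A sketch of loading this config and keeping only comments where a given identity label is present. Filtering on a fraction greater than zero is an illustrative choice, not part of the dataset:

import tensorflow_datasets as tfds

# Load the identity-annotated subset; examples carry the extended labels above.
ds = tfds.load("civil_comments/CivilCommentsIdentities", split="validation")

# Keep comments where at least some annotators marked a mention of "muslim".
# The > 0 cut-off is an assumption for illustration.
mentions_muslim = ds.filter(lambda ex: ex["muslim"] > 0)

for example in mentions_muslim.take(3):
    print(float(example["toxicity"].numpy()),
          example["text"].numpy().decode("utf-8")[:80])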