
  • Description:

An audio dataset of spoken words designed to help train and evaluate keyword-spotting systems. Its primary goal is to support building and testing small models that detect when one of ten target words is spoken, with as few false positives as possible from background noise or unrelated speech. Note that in the train and validation splits, the label "unknown" is much more prevalent than the target-word or background-noise labels. One difference from the release version is the handling of silence segments: in the test set, silence segments are regular 1-second files, while in the training data they are provided as long recordings in the "background_noise" folder. Here those recordings are split into 1-second clips, and one of the files is held out for the validation set.
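The background-noise splitting described above can be sketched as follows. This is a minimal illustration, not the dataset's actual preprocessing code; the `split_into_clips` helper and the dummy waveform are hypothetical, and only the 16 kHz, 16-bit PCM format matches the dataset's audio feature.

```python
import numpy as np

SAMPLE_RATE = 16000  # Speech Commands audio is 16 kHz, 16-bit PCM

def split_into_clips(waveform, clip_len=SAMPLE_RATE):
    """Split a long background-noise waveform into 1-second clips,
    dropping any trailing partial clip (hypothetical helper mirroring
    the preprocessing described above)."""
    n_clips = len(waveform) // clip_len
    return [waveform[i * clip_len:(i + 1) * clip_len] for i in range(n_clips)]

# A 3.5-second dummy waveform yields three full 1-second clips.
noise = np.zeros(int(3.5 * SAMPLE_RATE), dtype=np.int16)
clips = split_into_clips(noise)
print(len(clips))  # → 3
```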

Split Examples
'test' 4,890
'train' 85,511
'validation' 10,102
  • Feature structure:
    'audio': Audio(shape=(None,), dtype=tf.int16),
    'label': ClassLabel(shape=(), dtype=tf.int64, num_classes=12),
  • Feature documentation:
Feature Class      Shape   Dtype    Description
audio   Audio      (None,) tf.int16
label   ClassLabel ()      tf.int64
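The 12 classes combine the ten target words with "silence" and "unknown". A minimal sketch of mapping the integer label back to a class name; the label order below is an assumption based on the dataset description (the canonical order comes from the dataset's ClassLabel feature):

```python
# Assumed class order: ten target words, then silence and unknown
# (verify against the dataset's ClassLabel feature before relying on it).
LABELS = ["down", "go", "left", "no", "off", "on", "right",
          "stop", "up", "yes", "_silence_", "_unknown_"]

def label_name(index):
    """Map an integer label from the dataset to its class name."""
    return LABELS[index]

print(len(LABELS))  # → 12
```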
  • Citation:
@article{speechcommandsv2,
   author = {{Warden}, P.},
    title = "{Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition}",
  journal = {ArXiv e-prints},
  archivePrefix = "arXiv",
  eprint = {1804.03209},
  primaryClass = "cs.CL",
  keywords = {Computer Science - Computation and Language, Computer Science - Human-Computer Interaction},
     year = 2018,
    month = apr,
      url = {https://arxiv.org/abs/1804.03209},
}