- Description:
The NSynth Dataset is an audio dataset containing ~300k musical notes, each with a unique pitch, timbre, and envelope. Each note is annotated with three additional pieces of information based on a combination of human evaluation and heuristic algorithms: Source, Family, and Qualities.
Additional Documentation: Explore on Papers With Code
Homepage: https://g.co/magenta/nsynth-dataset
Source code:
tfds.datasets.nsynth.Builder
Versions:
- 2.3.0: New loudness_db feature in decibels (unnormalized).
- 2.3.1: F0 computed with normalization fix in CREPE.
- 2.3.2: Use Audio feature.
- 2.3.3 (default): F0 computed with fix in CREPE wave normalization (https://github.com/marl/crepe/issues/49).
Auto-cached (documentation): No
Supervised keys (See as_supervised doc): None
Figure (tfds.show_examples): Not supported.
Citation:
@InProceedings{pmlr-v70-engel17a,
title = {Neural Audio Synthesis of Musical Notes with {W}ave{N}et Autoencoders},
author = {Jesse Engel and Cinjon Resnick and Adam Roberts and Sander Dieleman and Mohammad Norouzi and Douglas Eck and Karen Simonyan},
booktitle = {Proceedings of the 34th International Conference on Machine Learning},
pages = {1068--1077},
year = {2017},
editor = {Doina Precup and Yee Whye Teh},
volume = {70},
series = {Proceedings of Machine Learning Research},
address = {International Convention Centre, Sydney, Australia},
month = {06--11 Aug},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v70/engel17a/engel17a.pdf},
url = {http://proceedings.mlr.press/v70/engel17a.html},
}
nsynth/full (default config)
Config description: Full NSynth Dataset is split into train, valid, and test sets, with no instruments overlapping between the train set and the valid/test sets.
Download size:
73.07 GiB
Dataset size:
73.09 GiB
Splits:
Split | Examples |
---|---|
'test' | 4,096 |
'train' | 289,205 |
'valid' | 12,678 |
- Feature structure:
FeaturesDict({
'audio': Audio(shape=(64000,), dtype=float32),
'id': string,
'instrument': FeaturesDict({
'family': ClassLabel(shape=(), dtype=int64, num_classes=11),
'label': ClassLabel(shape=(), dtype=int64, num_classes=1006),
'source': ClassLabel(shape=(), dtype=int64, num_classes=3),
}),
'pitch': ClassLabel(shape=(), dtype=int64, num_classes=128),
'qualities': FeaturesDict({
'bright': bool,
'dark': bool,
'distortion': bool,
'fast_decay': bool,
'long_release': bool,
'multiphonic': bool,
'nonlinear_env': bool,
'percussive': bool,
'reverb': bool,
'tempo-synced': bool,
}),
'velocity': ClassLabel(shape=(), dtype=int64, num_classes=128),
})
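The `pitch` and `velocity` fields above are 128-class MIDI values. For reference, a MIDI note number maps to frequency via the standard equal-temperament formula (A4 = MIDI 69 = 440 Hz); a minimal sketch:

```python
def midi_to_hz(midi_pitch: float) -> float:
    """Convert a MIDI note number to frequency in Hz (A4 = MIDI 69 = 440 Hz)."""
    return 440.0 * 2.0 ** ((midi_pitch - 69.0) / 12.0)

# Middle C (MIDI 60) is ~261.63 Hz.
print(round(midi_to_hz(60), 2))
```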
- Feature documentation:
Feature | Class | Shape | Dtype | Description |
---|---|---|---|---|
 | FeaturesDict | | | |
audio | Audio | (64000,) | float32 | |
id | Tensor | | string | |
instrument | FeaturesDict | | | |
instrument/family | ClassLabel | | int64 | |
instrument/label | ClassLabel | | int64 | |
instrument/source | ClassLabel | | int64 | |
pitch | ClassLabel | | int64 | |
qualities | FeaturesDict | | | |
qualities/bright | Tensor | | bool | |
qualities/dark | Tensor | | bool | |
qualities/distortion | Tensor | | bool | |
qualities/fast_decay | Tensor | | bool | |
qualities/long_release | Tensor | | bool | |
qualities/multiphonic | Tensor | | bool | |
qualities/nonlinear_env | Tensor | | bool | |
qualities/percussive | Tensor | | bool | |
qualities/reverb | Tensor | | bool | |
qualities/tempo-synced | Tensor | | bool | |
velocity | ClassLabel | | int64 | |
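The `qualities` field is a dict of ten booleans; models often consume it as a fixed-order multi-hot vector. A sketch, using the quality names from the feature structure above (the example dict is hypothetical):

```python
# Quality names in the order they appear in the feature structure.
QUALITY_NAMES = [
    "bright", "dark", "distortion", "fast_decay", "long_release",
    "multiphonic", "nonlinear_env", "percussive", "reverb", "tempo-synced",
]

def qualities_to_multihot(qualities: dict) -> list:
    """Flatten the per-note quality booleans into a fixed-order 10-dim vector."""
    return [int(bool(qualities.get(name, False))) for name in QUALITY_NAMES]

# Hypothetical note annotated as both bright and percussive.
example_qualities = {"bright": True, "percussive": True}
print(qualities_to_multihot(example_qualities))
# -> [1, 0, 0, 0, 0, 0, 0, 1, 0, 0]
```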
- Examples (tfds.as_dataframe):
nsynth/gansynth_subset
Config description: NSynth Dataset limited to acoustic instruments in the MIDI pitch interval [24, 84]. Uses alternate splits that have overlap in instruments (but not exact notes) between the train set and valid/test sets. This variant was originally introduced in the ICLR 2019 GANSynth paper (https://arxiv.org/abs/1902.08710).
Download size:
73.08 GiB
Dataset size:
20.73 GiB
Splits:
Split | Examples |
---|---|
'test' | 8,518 |
'train' | 60,788 |
'valid' | 17,469 |
- Feature structure:
FeaturesDict({
'audio': Audio(shape=(64000,), dtype=float32),
'id': string,
'instrument': FeaturesDict({
'family': ClassLabel(shape=(), dtype=int64, num_classes=11),
'label': ClassLabel(shape=(), dtype=int64, num_classes=1006),
'source': ClassLabel(shape=(), dtype=int64, num_classes=3),
}),
'pitch': ClassLabel(shape=(), dtype=int64, num_classes=128),
'qualities': FeaturesDict({
'bright': bool,
'dark': bool,
'distortion': bool,
'fast_decay': bool,
'long_release': bool,
'multiphonic': bool,
'nonlinear_env': bool,
'percussive': bool,
'reverb': bool,
'tempo-synced': bool,
}),
'velocity': ClassLabel(shape=(), dtype=int64, num_classes=128),
})
- Feature documentation:
Feature | Class | Shape | Dtype | Description |
---|---|---|---|---|
 | FeaturesDict | | | |
audio | Audio | (64000,) | float32 | |
id | Tensor | | string | |
instrument | FeaturesDict | | | |
instrument/family | ClassLabel | | int64 | |
instrument/label | ClassLabel | | int64 | |
instrument/source | ClassLabel | | int64 | |
pitch | ClassLabel | | int64 | |
qualities | FeaturesDict | | | |
qualities/bright | Tensor | | bool | |
qualities/dark | Tensor | | bool | |
qualities/distortion | Tensor | | bool | |
qualities/fast_decay | Tensor | | bool | |
qualities/long_release | Tensor | | bool | |
qualities/multiphonic | Tensor | | bool | |
qualities/nonlinear_env | Tensor | | bool | |
qualities/percussive | Tensor | | bool | |
qualities/reverb | Tensor | | bool | |
qualities/tempo-synced | Tensor | | bool | |
velocity | ClassLabel | | int64 | |
- Examples (tfds.as_dataframe):
nsynth/gansynth_subset.f0_and_loudness
Config description: NSynth Dataset limited to acoustic instruments in the MIDI pitch interval [24, 84]. Uses alternate splits that have overlap in instruments (but not exact notes) between the train set and valid/test sets. This variant was originally introduced in the ICLR 2019 GANSynth paper (https://arxiv.org/abs/1902.08710). This version additionally contains estimates for F0 using CREPE (Kim et al., 2018) and A-weighted perceptual loudness in decibels. Both signals are provided at a frame rate of 250Hz.
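The (1000,)-length f0 and loudness tensors follow directly from the frame rate: NSynth notes are 4-second, 16 kHz clips (64,000 samples), and 4 s × 250 Hz = 1000 frames. A quick consistency check:

```python
# NSynth notes are 4-second clips (64,000 samples at a 16 kHz sample rate).
num_samples = 64000
sample_rate = 16000
note_seconds = num_samples / sample_rate  # 4.0

frame_rate = 250  # Hz, per the config description
num_frames = int(note_seconds * frame_rate)
print(num_frames)  # 1000 -- matches the (1000,) f0 and loudness shapes
```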
Download size:
73.08 GiB
Dataset size:
22.03 GiB
Splits:
Split | Examples |
---|---|
'test' | 8,518 |
'train' | 60,788 |
'valid' | 17,469 |
- Feature structure:
FeaturesDict({
'audio': Audio(shape=(64000,), dtype=float32),
'f0': FeaturesDict({
'confidence': Tensor(shape=(1000,), dtype=float32),
'hz': Tensor(shape=(1000,), dtype=float32),
'midi': Tensor(shape=(1000,), dtype=float32),
}),
'id': string,
'instrument': FeaturesDict({
'family': ClassLabel(shape=(), dtype=int64, num_classes=11),
'label': ClassLabel(shape=(), dtype=int64, num_classes=1006),
'source': ClassLabel(shape=(), dtype=int64, num_classes=3),
}),
'loudness': FeaturesDict({
'db': Tensor(shape=(1000,), dtype=float32),
}),
'pitch': ClassLabel(shape=(), dtype=int64, num_classes=128),
'qualities': FeaturesDict({
'bright': bool,
'dark': bool,
'distortion': bool,
'fast_decay': bool,
'long_release': bool,
'multiphonic': bool,
'nonlinear_env': bool,
'percussive': bool,
'reverb': bool,
'tempo-synced': bool,
}),
'velocity': ClassLabel(shape=(), dtype=int64, num_classes=128),
})
- Feature documentation:
Feature | Class | Shape | Dtype | Description |
---|---|---|---|---|
 | FeaturesDict | | | |
audio | Audio | (64000,) | float32 | |
f0 | FeaturesDict | | | |
f0/confidence | Tensor | (1000,) | float32 | |
f0/hz | Tensor | (1000,) | float32 | |
f0/midi | Tensor | (1000,) | float32 | |
id | Tensor | | string | |
instrument | FeaturesDict | | | |
instrument/family | ClassLabel | | int64 | |
instrument/label | ClassLabel | | int64 | |
instrument/source | ClassLabel | | int64 | |
loudness | FeaturesDict | | | |
loudness/db | Tensor | (1000,) | float32 | |
pitch | ClassLabel | | int64 | |
qualities | FeaturesDict | | | |
qualities/bright | Tensor | | bool | |
qualities/dark | Tensor | | bool | |
qualities/distortion | Tensor | | bool | |
qualities/fast_decay | Tensor | | bool | |
qualities/long_release | Tensor | | bool | |
qualities/multiphonic | Tensor | | bool | |
qualities/nonlinear_env | Tensor | | bool | |
qualities/percussive | Tensor | | bool | |
qualities/reverb | Tensor | | bool | |
qualities/tempo-synced | Tensor | | bool | |
velocity | ClassLabel | | int64 | |
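`f0/confidence` gives CREPE's per-frame confidence for the pitch estimate. A common downstream step (e.g. in DDSP-style pipelines) is to mask f0 in low-confidence, presumably unvoiced frames; a sketch, where the 0.85 threshold is an illustrative choice and not part of the dataset:

```python
import numpy as np

def mask_unvoiced(f0_hz, confidence, threshold=0.85):
    """Zero out f0 frames whose CREPE confidence falls below a threshold.

    The 0.85 default is an illustrative choice, not from the dataset docs.
    """
    f0_hz = np.asarray(f0_hz, dtype=np.float32)
    confidence = np.asarray(confidence, dtype=np.float32)
    return np.where(confidence >= threshold, f0_hz, 0.0)

# Frames 0 and 2 are kept; the low-confidence frame 1 is zeroed out.
masked = mask_unvoiced([440.0, 442.0, 431.0], [0.95, 0.40, 0.90])
print(masked.tolist())  # [440.0, 0.0, 431.0]
```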
- Examples (tfds.as_dataframe):