imagenet2012_real

This dataset contains ILSVRC-2012 (ImageNet) validation images augmented with a new set of "Re-Assessed" (ReaL) labels from the "Are we done with ImageNet?" paper (see https://arxiv.org/abs/2006.07159). These labels were collected using an enhanced annotation protocol, resulting in multi-label and more accurate annotations.

Important note: about 3500 examples contain no label; these should be excluded from the average when computing accuracy. One possible way of doing this is with the following NumPy code:

import numpy as np

# Keep only examples whose ReaL label set is non-empty.
is_correct = [pred in real_labels[i] for i, pred in enumerate(predictions) if real_labels[i]]
real_accuracy = np.mean(is_correct)
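
For an end-to-end evaluation, the sketch below loads the validation split with tensorflow_datasets and builds is_correct directly from the dataset. Here predict_fn is a hypothetical stand-in for a real classifier, not part of the dataset API:

import numpy as np
import tensorflow_datasets as tfds

def predict_fn(image):
    # Hypothetical stand-in for a real classifier; returns a class id in [0, 1000).
    return 0

ds = tfds.load('imagenet2012_real', split='validation')

is_correct = []
for ex in tfds.as_numpy(ds):
    real = ex['real_label']      # variable-length array of ReaL class ids
    if len(real) == 0:
        continue                 # unlabeled example: excluded from the average
    is_correct.append(predict_fn(ex['image']) in real)

real_accuracy = np.mean(is_correct)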
Split          Examples
'validation'   50,000
  • Feature structure:
FeaturesDict({
    'file_name': Text(shape=(), dtype=string),
    'image': Image(shape=(None, None, 3), dtype=uint8),
    'original_label': ClassLabel(shape=(), dtype=int64, num_classes=1000),
    'real_label': Sequence(ClassLabel(shape=(), dtype=int64, num_classes=1000)),
})
  • Feature documentation:
Feature         Class                 Shape            Dtype   Description
                FeaturesDict
file_name       Text                                   string
image           Image                 (None, None, 3)  uint8
original_label  ClassLabel                             int64
real_label      Sequence(ClassLabel)  (None,)          int64
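
Note that real_label is a variable-length sequence per image, while original_label is always a single class id. A minimal sketch for inspecting one example (assuming the dataset has already been prepared locally):

import tensorflow_datasets as tfds

ds = tfds.load('imagenet2012_real', split='validation')
for ex in tfds.as_numpy(ds.take(1)):
    print(ex['file_name'], ex['original_label'], ex['real_label'])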


  • Citation:
@article{beyer2020imagenet,
  title={Are we done with ImageNet?},
  author={Lucas Beyer and Olivier J. Hénaff and Alexander Kolesnikov and Xiaohua Zhai and Aäron van den Oord},
  journal={arXiv preprint arXiv:2006.07159},
  year={2020}
}
@article{ILSVRC15,
  title={{ImageNet Large Scale Visual Recognition Challenge}},
  author={Olga Russakovsky and Jia Deng and Hao Su and Jonathan Krause and Sanjeev Satheesh and Sean Ma and Zhiheng Huang and Andrej Karpathy and Aditya Khosla and Michael Bernstein and Alexander C. Berg and Li Fei-Fei},
  journal={International Journal of Computer Vision (IJCV)},
  year={2015},
  doi={10.1007/s11263-015-0816-y},
  volume={115},
  number={3},
  pages={211-252}
}