robo_set

  • Description:

Real-world dataset of a single robot arm demonstrating 12 non-trivial manipulation skills across 38 tasks, comprising 7,500 trajectories.

Split      Examples
'train'    18,250
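
Split sizes can also be read programmatically from the dataset builder. A minimal sketch, assuming the dataset is registered in the TFDS catalog under the name 'robo_set':

import tensorflow_datasets as tfds

# Inspect split metadata without loading any data.
builder = tfds.builder('robo_set')
print(builder.info.splits['train'].num_examples)  # 18250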
  • Feature structure:
FeaturesDict({
    'episode_metadata': FeaturesDict({
        'file_path': string,
        'trial_id': string,
    }),
    'steps': Dataset({
        'action': Tensor(shape=(8,), dtype=float32),
        'discount': Scalar(shape=(), dtype=float32),
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'language_instruction': string,
        'observation': FeaturesDict({
            'image_left': Image(shape=(240, 424, 3), dtype=uint8),
            'image_right': Image(shape=(240, 424, 3), dtype=uint8),
            'image_top': Image(shape=(240, 424, 3), dtype=uint8),
            'image_wrist': Image(shape=(240, 424, 3), dtype=uint8),
            'state': Tensor(shape=(8,), dtype=float32),
            'state_velocity': Tensor(shape=(8,), dtype=float32),
        }),
        'reward': Scalar(shape=(), dtype=float32),
    }),
})
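
Note that 'steps' is itself a nested Dataset, so each episode must be iterated a second time to reach per-timestep features. A minimal loading sketch, assuming the dataset is available via tfds.load under the name 'robo_set' (a full download may require substantial disk space):

import tensorflow_datasets as tfds

ds = tfds.load('robo_set', split='train')

for episode in ds.take(1):
    # Episode-level metadata.
    print(episode['episode_metadata']['trial_id'].numpy())
    # 'steps' is a nested tf.data.Dataset of per-timestep features.
    for step in episode['steps'].take(1):
        action = step['action']                    # (8,) float32
        state = step['observation']['state']       # (8,) float32
        image = step['observation']['image_top']   # (240, 424, 3) uint8
        instr = step['language_instruction']       # scalar string
        print(instr.numpy().decode('utf-8'), action.shape, image.shape)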
  • Feature documentation:
Feature                           Class         Shape          Dtype    Description
                                  FeaturesDict
episode_metadata                  FeaturesDict
episode_metadata/file_path        Tensor                       string
episode_metadata/trial_id         Tensor                       string
steps                             Dataset
steps/action                      Tensor        (8,)           float32
steps/discount                    Scalar                       float32
steps/is_first                    Tensor                       bool
steps/is_last                     Tensor                       bool
steps/is_terminal                 Tensor                       bool
steps/language_instruction        Tensor                       string
steps/observation                 FeaturesDict
steps/observation/image_left      Image         (240, 424, 3)  uint8
steps/observation/image_right     Image         (240, 424, 3)  uint8
steps/observation/image_top       Image         (240, 424, 3)  uint8
steps/observation/image_wrist     Image         (240, 424, 3)  uint8
steps/observation/state           Tensor        (8,)           float32
steps/observation/state_velocity  Tensor        (8,)           float32
steps/reward                      Scalar                       float32
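
For step-level training pipelines, the nested per-episode 'steps' datasets can be flattened with tf.data. A sketch under the same assumptions as above; the (state, action) pairing is illustrative only, e.g. for behavior cloning:

import tensorflow_datasets as tfds

episodes = tfds.load('robo_set', split='train')

# Flatten the per-episode 'steps' datasets into one step-level dataset.
steps = episodes.flat_map(lambda episode: episode['steps'])

# Pair each state with its action, following the feature table above.
pairs = steps.map(
    lambda step: (step['observation']['state'], step['action']))

for state, action in pairs.take(2):
    print(state.numpy(), action.numpy())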
  • Citation:
@misc{bharadhwaj2023roboagent,
  title={RoboAgent: Generalization and Efficiency in Robot Manipulation via Semantic Augmentations and Action Chunking},
  author={Homanga Bharadhwaj and Jay Vakil and Mohit Sharma and Abhinav Gupta and Shubham Tulsiani and Vikash Kumar},
  year={2023},
  eprint={2309.01918},
  archivePrefix={arXiv},
  primaryClass={cs.RO}
}