robosuite_panda_pick_place_can


  • Description:

These datasets have been created with the PickPlaceCan environment of the robosuite robotic arm simulator. The human datasets were recorded by a single operator using the RLDS Creator and a gamepad controller.

The synthetic datasets have been recorded using the EnvLogger library.

The datasets follow the RLDS format to represent steps and episodes.

Episodes consist of 400 steps. In each episode, a tag is added when the task is completed; this tag is stored as part of the custom step metadata.
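Because the completion tag lives in the per-step metadata, locating the step at which the can was placed is a simple scan. A minimal sketch over mock step dicts (stand-ins for the real step elements, not the actual `tf.data` values):

```python
def completion_index(steps):
    """Index of the first step tagged as placed, or None if the task was never completed."""
    for i, step in enumerate(steps):
        if step["tag:placed"]:
            return i
    return None

# Mock episode: the can is placed at step 3.
mock_steps = [{"tag:placed": False}] * 3 + [{"tag:placed": True}]
print(completion_index(mock_steps))  # -> 3
```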

Note that, due to the EnvLogger dependency, generation of this dataset is currently supported on Linux environments only.
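Since every episode is capped at 400 steps, the RLDS `is_last`/`is_terminal` flags let a consumer distinguish a time-limit truncation from a genuine environment termination. A sketch of that bookkeeping with mock step dicts (again stand-ins for the real step elements):

```python
def episode_truncated(steps):
    """True if the episode ended by time limit rather than environment termination."""
    last = steps[-1]
    return last["is_last"] and not last["is_terminal"]

# Mock 400-step episode that runs out the clock without terminating.
steps = (
    [{"is_first": True, "is_last": False, "is_terminal": False}]
    + [{"is_first": False, "is_last": False, "is_terminal": False}] * 398
    + [{"is_first": False, "is_last": True, "is_terminal": False}]
)
print(len(steps), episode_truncated(steps))  # -> 400 True
```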

  • Citation:

@misc{ramos2021rlds,
      title={RLDS: an Ecosystem to Generate, Share and Use Datasets in Reinforcement Learning},
      author={Sabela Ramos and Sertan Girgin and Léonard Hussenot and Damien Vincent and Hanna Yakubovich and Daniel Toyama and Anita Gergely and Piotr Stanczyk and Raphael Marinier and Jeremiah Harmsen and Olivier Pietquin and Nikola Momchev},
      year={2021},
      eprint={2111.02767},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

robosuite_panda_pick_place_can/human_dc29b40a (default config)

  • Splits:

Split     Examples
'train'   50
  • Feature structure:
FeaturesDict({
    'agent_id': tf.string,
    'episode_id': tf.string,
    'episode_index': tf.int32,
    'steps': Dataset({
        'action': Tensor(shape=(7,), dtype=tf.float64),
        'discount': tf.float64,
        'image': Image(shape=(None, None, 3), dtype=tf.uint8),
        'is_first': tf.bool,
        'is_last': tf.bool,
        'is_terminal': tf.bool,
        'observation': FeaturesDict({
            'Can_pos': Tensor(shape=(3,), dtype=tf.float64),
            'Can_quat': Tensor(shape=(4,), dtype=tf.float64),
            'Can_to_robot0_eef_pos': Tensor(shape=(3,), dtype=tf.float64),
            'Can_to_robot0_eef_quat': Tensor(shape=(4,), dtype=tf.float32),
            'object-state': Tensor(shape=(14,), dtype=tf.float64),
            'robot0_eef_pos': Tensor(shape=(3,), dtype=tf.float64),
            'robot0_eef_quat': Tensor(shape=(4,), dtype=tf.float64),
            'robot0_gripper_qpos': Tensor(shape=(2,), dtype=tf.float64),
            'robot0_gripper_qvel': Tensor(shape=(2,), dtype=tf.float64),
            'robot0_joint_pos_cos': Tensor(shape=(7,), dtype=tf.float64),
            'robot0_joint_pos_sin': Tensor(shape=(7,), dtype=tf.float64),
            'robot0_joint_vel': Tensor(shape=(7,), dtype=tf.float64),
            'robot0_proprio-state': Tensor(shape=(32,), dtype=tf.float64),
        }),
        'reward': tf.float64,
        'tag:placed': tf.bool,
    }),
})
  • Feature documentation:
Feature                                       Class         Shape             Dtype       Description
                                              FeaturesDict
agent_id                                      Tensor                          tf.string
episode_id                                    Tensor                          tf.string
episode_index                                 Tensor                          tf.int32
steps                                         Dataset
steps/action                                  Tensor        (7,)              tf.float64
steps/discount                                Tensor                          tf.float64
steps/image                                   Image         (None, None, 3)   tf.uint8
steps/is_first                                Tensor                          tf.bool
steps/is_last                                 Tensor                          tf.bool
steps/is_terminal                             Tensor                          tf.bool
steps/observation                             FeaturesDict
steps/observation/Can_pos                     Tensor        (3,)              tf.float64
steps/observation/Can_quat                    Tensor        (4,)              tf.float64
steps/observation/Can_to_robot0_eef_pos       Tensor        (3,)              tf.float64
steps/observation/Can_to_robot0_eef_quat      Tensor        (4,)              tf.float32
steps/observation/object-state                Tensor        (14,)             tf.float64
steps/observation/robot0_eef_pos              Tensor        (3,)              tf.float64
steps/observation/robot0_eef_quat             Tensor        (4,)              tf.float64
steps/observation/robot0_gripper_qpos         Tensor        (2,)              tf.float64
steps/observation/robot0_gripper_qvel         Tensor        (2,)              tf.float64
steps/observation/robot0_joint_pos_cos        Tensor        (7,)              tf.float64
steps/observation/robot0_joint_pos_sin        Tensor        (7,)              tf.float64
steps/observation/robot0_joint_vel            Tensor        (7,)              tf.float64
steps/observation/robot0_proprio-state        Tensor        (32,)             tf.float64
steps/reward                                  Tensor                          tf.float64
steps/tag:placed                              Tensor                          tf.bool
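Once an episode's steps have been materialized (e.g. via `tfds.as_numpy`), the per-step fields above stack into fixed-shape arrays. A sketch using zero-filled mock steps that follow the documented shapes (`action`: (7,), `Can_pos`: (3,)):

```python
import numpy as np

def stack_field(steps, key):
    """Stack one per-step field across an episode into a single array."""
    return np.stack([step[key] for step in steps])

# Mock 400-step episode with the documented field shapes.
mock_steps = [
    {"action": np.zeros(7), "observation": {"Can_pos": np.zeros(3)}}
    for _ in range(400)
]
actions = stack_field(mock_steps, "action")
can_pos = stack_field([s["observation"] for s in mock_steps], "Can_pos")
print(actions.shape, can_pos.shape)  # -> (400, 7) (400, 3)
```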

robosuite_panda_pick_place_can/human_images_dc29b40a

  • Splits:

Split     Examples
'train'   50
  • Feature structure:
FeaturesDict({
    'agent_id': tf.string,
    'episode_id': tf.string,
    'episode_index': tf.int32,
    'steps': Dataset({
        'action': Tensor(shape=(7,), dtype=tf.float64),
        'discount': tf.float64,
        'image': Image(shape=(None, None, 3), dtype=tf.uint8),
        'is_first': tf.bool,
        'is_last': tf.bool,
        'is_terminal': tf.bool,
        'observation': FeaturesDict({
            'Can_pos': Tensor(shape=(3,), dtype=tf.float64),
            'Can_quat': Tensor(shape=(4,), dtype=tf.float64),
            'Can_to_robot0_eef_pos': Tensor(shape=(3,), dtype=tf.float64),
            'Can_to_robot0_eef_quat': Tensor(shape=(4,), dtype=tf.float32),
            'agentview_image': Image(shape=(256, 256, 3), dtype=tf.uint8),
            'birdview_image': Image(shape=(256, 256, 3), dtype=tf.uint8),
            'object-state': Tensor(shape=(14,), dtype=tf.float64),
            'robot0_eef_pos': Tensor(shape=(3,), dtype=tf.float64),
            'robot0_eef_quat': Tensor(shape=(4,), dtype=tf.float64),
            'robot0_eye_in_hand_image': Image(shape=(256, 256, 3), dtype=tf.uint8),
            'robot0_gripper_qpos': Tensor(shape=(2,), dtype=tf.float64),
            'robot0_gripper_qvel': Tensor(shape=(2,), dtype=tf.float64),
            'robot0_joint_pos_cos': Tensor(shape=(7,), dtype=tf.float64),
            'robot0_joint_pos_sin': Tensor(shape=(7,), dtype=tf.float64),
            'robot0_joint_vel': Tensor(shape=(7,), dtype=tf.float64),
            'robot0_proprio-state': Tensor(shape=(32,), dtype=tf.float64),
            'robot0_robotview_image': Image(shape=(256, 256, 3), dtype=tf.uint8),
        }),
        'reward': tf.float64,
        'tag:placed': tf.bool,
    }),
})
  • Feature documentation:
Feature                                       Class         Shape             Dtype       Description
                                              FeaturesDict
agent_id                                      Tensor                          tf.string
episode_id                                    Tensor                          tf.string
episode_index                                 Tensor                          tf.int32
steps                                         Dataset
steps/action                                  Tensor        (7,)              tf.float64
steps/discount                                Tensor                          tf.float64
steps/image                                   Image         (None, None, 3)   tf.uint8
steps/is_first                                Tensor                          tf.bool
steps/is_last                                 Tensor                          tf.bool
steps/is_terminal                             Tensor                          tf.bool
steps/observation                             FeaturesDict
steps/observation/Can_pos                     Tensor        (3,)              tf.float64
steps/observation/Can_quat                    Tensor        (4,)              tf.float64
steps/observation/Can_to_robot0_eef_pos       Tensor        (3,)              tf.float64
steps/observation/Can_to_robot0_eef_quat      Tensor        (4,)              tf.float32
steps/observation/agentview_image             Image         (256, 256, 3)     tf.uint8
steps/observation/birdview_image              Image         (256, 256, 3)     tf.uint8
steps/observation/object-state                Tensor        (14,)             tf.float64
steps/observation/robot0_eef_pos              Tensor        (3,)              tf.float64
steps/observation/robot0_eef_quat             Tensor        (4,)              tf.float64
steps/observation/robot0_eye_in_hand_image    Image         (256, 256, 3)     tf.uint8
steps/observation/robot0_gripper_qpos         Tensor        (2,)              tf.float64
steps/observation/robot0_gripper_qvel         Tensor        (2,)              tf.float64
steps/observation/robot0_joint_pos_cos        Tensor        (7,)              tf.float64
steps/observation/robot0_joint_pos_sin        Tensor        (7,)              tf.float64
steps/observation/robot0_joint_vel            Tensor        (7,)              tf.float64
steps/observation/robot0_proprio-state        Tensor        (32,)             tf.float64
steps/observation/robot0_robotview_image      Image         (256, 256, 3)     tf.uint8
steps/reward                                  Tensor                          tf.float64
steps/tag:placed                              Tensor                          tf.bool

robosuite_panda_pick_place_can/synthetic_stochastic_sac_afe13968

  • Config description: Synthetic dataset generated by a stochastic agent trained with SAC (200 episodes).

  • Homepage: https://github.com/google-research/rlds

  • Download size: 144.44 MiB

  • Dataset size: 622.86 MiB

  • Splits:

Split     Examples
'train'   200
  • Feature structure:
FeaturesDict({
    'agent_id': tf.string,
    'episode_id': tf.string,
    'episode_index': tf.int32,
    'steps': Dataset({
        'action': Tensor(shape=(7,), dtype=tf.float32),
        'discount': tf.float64,
        'image': Image(shape=(None, None, 3), dtype=tf.uint8),
        'is_first': tf.bool,
        'is_last': tf.bool,
        'is_terminal': tf.bool,
        'observation': FeaturesDict({
            'Can_pos': Tensor(shape=(3,), dtype=tf.float32),
            'Can_quat': Tensor(shape=(4,), dtype=tf.float32),
            'Can_to_robot0_eef_pos': Tensor(shape=(3,), dtype=tf.float32),
            'Can_to_robot0_eef_quat': Tensor(shape=(4,), dtype=tf.float32),
            'object-state': Tensor(shape=(14,), dtype=tf.float32),
            'robot0_eef_pos': Tensor(shape=(3,), dtype=tf.float32),
            'robot0_eef_quat': Tensor(shape=(4,), dtype=tf.float32),
            'robot0_gripper_qpos': Tensor(shape=(2,), dtype=tf.float32),
            'robot0_gripper_qvel': Tensor(shape=(2,), dtype=tf.float32),
            'robot0_joint_pos_cos': Tensor(shape=(7,), dtype=tf.float32),
            'robot0_joint_pos_sin': Tensor(shape=(7,), dtype=tf.float32),
            'robot0_joint_vel': Tensor(shape=(7,), dtype=tf.float32),
            'robot0_proprio-state': Tensor(shape=(32,), dtype=tf.float32),
        }),
        'reward': tf.float64,
        'tag:placed': tf.bool,
    }),
})
  • Feature documentation:
Feature                                       Class         Shape             Dtype       Description
                                              FeaturesDict
agent_id                                      Tensor                          tf.string
episode_id                                    Tensor                          tf.string
episode_index                                 Tensor                          tf.int32
steps                                         Dataset
steps/action                                  Tensor        (7,)              tf.float32
steps/discount                                Tensor                          tf.float64
steps/image                                   Image         (None, None, 3)   tf.uint8
steps/is_first                                Tensor                          tf.bool
steps/is_last                                 Tensor                          tf.bool
steps/is_terminal                             Tensor                          tf.bool
steps/observation                             FeaturesDict
steps/observation/Can_pos                     Tensor        (3,)              tf.float32
steps/observation/Can_quat                    Tensor        (4,)              tf.float32
steps/observation/Can_to_robot0_eef_pos       Tensor        (3,)              tf.float32
steps/observation/Can_to_robot0_eef_quat      Tensor        (4,)              tf.float32
steps/observation/object-state                Tensor        (14,)             tf.float32
steps/observation/robot0_eef_pos              Tensor        (3,)              tf.float32
steps/observation/robot0_eef_quat             Tensor        (4,)              tf.float32
steps/observation/robot0_gripper_qpos         Tensor        (2,)              tf.float32
steps/observation/robot0_gripper_qvel         Tensor        (2,)              tf.float32
steps/observation/robot0_joint_pos_cos        Tensor        (7,)              tf.float32
steps/observation/robot0_joint_pos_sin        Tensor        (7,)              tf.float32
steps/observation/robot0_joint_vel            Tensor        (7,)              tf.float32
steps/observation/robot0_proprio-state        Tensor        (32,)             tf.float32
steps/reward                                  Tensor                          tf.float64
steps/tag:placed                              Tensor                          tf.bool
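The per-step `reward` and `discount` scalars combine into an episode return in the usual way. A minimal sketch over mock steps (stand-ins for the real step elements):

```python
def discounted_return(steps):
    """Sum of rewards weighted by the running product of per-step discounts."""
    g, running = 0.0, 1.0
    for step in steps:
        g += running * step["reward"]
        running *= step["discount"]
    return g

mock = [{"reward": 1.0, "discount": 0.5}] * 3
print(discounted_return(mock))  # -> 1.0 + 0.5 + 0.25 = 1.75
```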