robosuite_panda_pick_place_can

  • Description:

These datasets were created with the PickPlaceCan environment of the robosuite robotic arm simulator. The human datasets were recorded by a single operator using the RLDS Creator and a gamepad controller.

The synthetic datasets have been recorded using the EnvLogger library.
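
For reference, a minimal sketch of what such a recording loop can look like with EnvLogger, assuming its step_fn hook for custom per-step metadata; the environment wrapper, the agent, and the task_completed helper are hypothetical stand-ins, not part of this dataset's actual generation code:

import envlogger

# Assumption: `env` is a dm_env-compatible wrapper around robosuite's
# PickPlaceCan environment, and `agent` is any policy exposing a
# `select_action(timestep)` method (both hypothetical).
def step_fn(timestep, action, env):
    # Custom per-step metadata; `task_completed()` is a hypothetical helper
    # standing in for whatever success check produced the 'tag:placed' tag.
    return {'tag:placed': bool(env.task_completed())}

with envlogger.EnvLogger(env,
                         data_directory='/tmp/pick_place_logs',
                         step_fn=step_fn) as logged_env:
    timestep = logged_env.reset()
    while not timestep.last():
        timestep = logged_env.step(agent.select_action(timestep))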

Episodes consist of 400 steps. In each episode, a tag is added when the task is completed; this tag is stored as part of the custom step metadata.
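
As a quick check, the tag can be read back after loading the dataset with TFDS; a minimal sketch using the default config and eager iteration:

import tensorflow_datasets as tfds

# Each element is one episode; 'steps' is a nested tf.data.Dataset.
ds = tfds.load('robosuite_panda_pick_place_can', split='train')
for episode in ds.take(1):
    placed = [bool(step['tag:placed'])
              for step in episode['steps'].as_numpy_iterator()]
    print('episode length:', len(placed), '| task completed:', any(placed))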

  • Citation:

@misc{ramos2021rlds,
      title={RLDS: an Ecosystem to Generate, Share and Use Datasets in Reinforcement Learning},
      author={Sabela Ramos and Sertan Girgin and Léonard Hussenot and Damien Vincent and Hanna Yakubovich and Daniel Toyama and Anita Gergely and Piotr Stanczyk and Raphael Marinier and Jeremiah Harmsen and Olivier Pietquin and Nikola Momchev},
      year={2021},
      eprint={2111.02767},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

robosuite_panda_pick_place_can/human_dc29b40a (default config)

  • Config description: Human generated dataset (50 episodes).

  • Splits:

Split      Examples
'train'    50
  • Features:
FeaturesDict({
    'agent_id': tf.string,
    'episode_id': tf.string,
    'episode_index': tf.int32,
    'steps': Dataset({
        'action': Tensor(shape=(7,), dtype=tf.float64),
        'discount': tf.float64,
        'image': Image(shape=(None, None, 3), dtype=tf.uint8),
        'is_first': tf.bool,
        'is_last': tf.bool,
        'is_terminal': tf.bool,
        'observation': FeaturesDict({
            'Can_pos': Tensor(shape=(3,), dtype=tf.float64),
            'Can_quat': Tensor(shape=(4,), dtype=tf.float64),
            'Can_to_robot0_eef_pos': Tensor(shape=(3,), dtype=tf.float64),
            'Can_to_robot0_eef_quat': Tensor(shape=(4,), dtype=tf.float32),
            'object-state': Tensor(shape=(14,), dtype=tf.float64),
            'robot0_eef_pos': Tensor(shape=(3,), dtype=tf.float64),
            'robot0_eef_quat': Tensor(shape=(4,), dtype=tf.float64),
            'robot0_gripper_qpos': Tensor(shape=(2,), dtype=tf.float64),
            'robot0_gripper_qvel': Tensor(shape=(2,), dtype=tf.float64),
            'robot0_joint_pos_cos': Tensor(shape=(7,), dtype=tf.float64),
            'robot0_joint_pos_sin': Tensor(shape=(7,), dtype=tf.float64),
            'robot0_joint_vel': Tensor(shape=(7,), dtype=tf.float64),
            'robot0_proprio-state': Tensor(shape=(32,), dtype=tf.float64),
        }),
        'reward': tf.float64,
        'tag:placed': tf.bool,
    }),
})
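
A short sketch of consuming the nested steps of this config, printing a few observation and action values; all field names come from the feature spec above:

import tensorflow_datasets as tfds

ds = tfds.load('robosuite_panda_pick_place_can/human_dc29b40a', split='train')
episode = next(iter(ds))
for step in episode['steps'].take(3):
    obs = step['observation']
    print('eef pos:', obs['robot0_eef_pos'].numpy(),
          '| gripper qpos:', obs['robot0_gripper_qpos'].numpy(),
          '| action:', step['action'].numpy(),
          '| reward:', float(step['reward']))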

robosuite_panda_pick_place_can/human_images_dc29b40a

  • Config description: Human generated dataset, including images from different camera angles in the observation (50 episodes).

  • Splits:

Split      Examples
'train'    50
  • Features:
FeaturesDict({
    'agent_id': tf.string,
    'episode_id': tf.string,
    'episode_index': tf.int32,
    'steps': Dataset({
        'action': Tensor(shape=(7,), dtype=tf.float64),
        'discount': tf.float64,
        'image': Image(shape=(None, None, 3), dtype=tf.uint8),
        'is_first': tf.bool,
        'is_last': tf.bool,
        'is_terminal': tf.bool,
        'observation': FeaturesDict({
            'Can_pos': Tensor(shape=(3,), dtype=tf.float64),
            'Can_quat': Tensor(shape=(4,), dtype=tf.float64),
            'Can_to_robot0_eef_pos': Tensor(shape=(3,), dtype=tf.float64),
            'Can_to_robot0_eef_quat': Tensor(shape=(4,), dtype=tf.float32),
            'agentview_image': Image(shape=(256, 256, 3), dtype=tf.uint8),
            'birdview_image': Image(shape=(256, 256, 3), dtype=tf.uint8),
            'object-state': Tensor(shape=(14,), dtype=tf.float64),
            'robot0_eef_pos': Tensor(shape=(3,), dtype=tf.float64),
            'robot0_eef_quat': Tensor(shape=(4,), dtype=tf.float64),
            'robot0_eye_in_hand_image': Image(shape=(256, 256, 3), dtype=tf.uint8),
            'robot0_gripper_qpos': Tensor(shape=(2,), dtype=tf.float64),
            'robot0_gripper_qvel': Tensor(shape=(2,), dtype=tf.float64),
            'robot0_joint_pos_cos': Tensor(shape=(7,), dtype=tf.float64),
            'robot0_joint_pos_sin': Tensor(shape=(7,), dtype=tf.float64),
            'robot0_joint_vel': Tensor(shape=(7,), dtype=tf.float64),
            'robot0_proprio-state': Tensor(shape=(32,), dtype=tf.float64),
            'robot0_robotview_image': Image(shape=(256, 256, 3), dtype=tf.uint8),
        }),
        'reward': tf.float64,
        'tag:placed': tf.bool,
    }),
})
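
The four camera observations can be pulled out the same way; a minimal sketch that checks their shapes, with all names taken from the feature spec above:

import tensorflow_datasets as tfds

ds = tfds.load('robosuite_panda_pick_place_can/human_images_dc29b40a',
               split='train')
episode = next(iter(ds))
step = next(iter(episode['steps']))
for cam in ('agentview_image', 'birdview_image',
            'robot0_eye_in_hand_image', 'robot0_robotview_image'):
    frame = step['observation'][cam].numpy()
    print(cam, frame.shape, frame.dtype)  # expected: (256, 256, 3) uint8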

robosuite_panda_pick_place_can/synthetic_stochastic_sac_afe13968

  • Config description: Synthetic dataset generated by a stochastic agent trained with SAC (200 episodes).

  • Homepage: https://github.com/google-research/rlds

  • Download size: 144.44 MiB

  • Dataset size: 622.86 MiB

  • Splits:

Split      Examples
'train'    200
  • Features:
FeaturesDict({
    'agent_id': tf.string,
    'episode_id': tf.string,
    'episode_index': tf.int32,
    'steps': Dataset({
        'action': Tensor(shape=(7,), dtype=tf.float32),
        'discount': tf.float64,
        'image': Image(shape=(None, None, 3), dtype=tf.uint8),
        'is_first': tf.bool,
        'is_last': tf.bool,
        'is_terminal': tf.bool,
        'observation': FeaturesDict({
            'Can_pos': Tensor(shape=(3,), dtype=tf.float32),
            'Can_quat': Tensor(shape=(4,), dtype=tf.float32),
            'Can_to_robot0_eef_pos': Tensor(shape=(3,), dtype=tf.float32),
            'Can_to_robot0_eef_quat': Tensor(shape=(4,), dtype=tf.float32),
            'object-state': Tensor(shape=(14,), dtype=tf.float32),
            'robot0_eef_pos': Tensor(shape=(3,), dtype=tf.float32),
            'robot0_eef_quat': Tensor(shape=(4,), dtype=tf.float32),
            'robot0_gripper_qpos': Tensor(shape=(2,), dtype=tf.float32),
            'robot0_gripper_qvel': Tensor(shape=(2,), dtype=tf.float32),
            'robot0_joint_pos_cos': Tensor(shape=(7,), dtype=tf.float32),
            'robot0_joint_pos_sin': Tensor(shape=(7,), dtype=tf.float32),
            'robot0_joint_vel': Tensor(shape=(7,), dtype=tf.float32),
            'robot0_proprio-state': Tensor(shape=(32,), dtype=tf.float32),
        }),
        'reward': tf.float64,
        'tag:placed': tf.bool,
    }),
})
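
A stochastic SAC agent will not necessarily succeed in every episode, so a natural first analysis is to aggregate episode returns and the 'tag:placed' success tag over the split; a minimal sketch:

import tensorflow as tf
import tensorflow_datasets as tfds

ds = tfds.load('robosuite_panda_pick_place_can/synthetic_stochastic_sac_afe13968',
               split='train')

def episode_summary(episode):
    steps = episode['steps']
    # Sum of per-step rewards (tf.float64 in this config).
    ret = steps.reduce(tf.constant(0.0, tf.float64),
                       lambda acc, step: acc + step['reward'])
    # True if any step carries the completion tag.
    placed = steps.reduce(tf.constant(False),
                          lambda acc, step: tf.logical_or(acc, step['tag:placed']))
    return ret, placed

total_return, successes, n = 0.0, 0, 0
for episode in ds:
    ret, placed = episode_summary(episode)
    total_return += float(ret)
    successes += int(bool(placed))
    n += 1
print(f'episodes: {n} | mean return: {total_return / n:.2f} '
      f'| success rate: {successes / n:.1%}')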