robomimic_mg

  • Description:

The Robomimic machine-generated datasets were collected using a Soft Actor-Critic (SAC) agent trained with a dense reward. Each dataset consists of the agent's replay buffer.

Each task has two versions: one with low-dimensional observations (low_dim) and one with image observations (image).

The datasets follow the RLDS format to represent steps and episodes.
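
For orientation, below is a minimal sketch of loading the default config with tensorflow_datasets and walking the RLDS episode/step structure; it assumes the package is installed and the data has been downloaded and prepared:

import tensorflow_datasets as tfds

# Load the default config (lift_mg_image); 'train' is the only split.
ds = tfds.load('robomimic_mg', split='train')

for episode in ds.take(1):
    print(episode['episode_id'].numpy(), episode['horizon'].numpy())
    # Per the RLDS convention, 'steps' is itself a nested tf.data.Dataset.
    for step in episode['steps'].take(3):
        print(step['is_first'].numpy(),
              step['reward'].numpy(),
              step['action'].numpy())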

@inproceedings{robomimic2021,
  title={What Matters in Learning from Offline Human Demonstrations for Robot Manipulation},
  author={Ajay Mandlekar and Danfei Xu and Josiah Wong and Soroush Nasiriany
          and Chen Wang and Rohun Kulkarni and Li Fei-Fei and Silvio Savarese
          and Yuke Zhu and Roberto Mart\'{i}n-Mart\'{i}n},
  booktitle={Conference on Robot Learning},
  year={2021}
}

robomimic_mg/lift_mg_image (default config)

  • Download size: 18.04 GiB

  • Dataset size: 2.73 GiB

  • Auto-cached (documentation): No

  • Splits:

Split      Examples
'train'    1,500

  • Feature structure:
FeaturesDict({
    'episode_id': string,
    'horizon': int32,
    'steps': Dataset({
        'action': Tensor(shape=(7,), dtype=float64),
        'discount': int32,
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'observation': FeaturesDict({
            'agentview_image': Image(shape=(84, 84, 3), dtype=uint8),
            'object': Tensor(shape=(10,), dtype=float64),
            'robot0_eef_pos': Tensor(shape=(3,), dtype=float64),
            'robot0_eef_quat': Tensor(shape=(4,), dtype=float64),
            'robot0_eef_vel_ang': Tensor(shape=(3,), dtype=float64),
            'robot0_eef_vel_lin': Tensor(shape=(3,), dtype=float64),
            'robot0_eye_in_hand_image': Image(shape=(84, 84, 3), dtype=uint8),
            'robot0_gripper_qpos': Tensor(shape=(2,), dtype=float64),
            'robot0_gripper_qvel': Tensor(shape=(2,), dtype=float64),
            'robot0_joint_pos': Tensor(shape=(7,), dtype=float64),
            'robot0_joint_pos_cos': Tensor(shape=(7,), dtype=float64),
            'robot0_joint_pos_sin': Tensor(shape=(7,), dtype=float64),
            'robot0_joint_vel': Tensor(shape=(7,), dtype=float64),
        }),
        'reward': float64,
        'states': Tensor(shape=(32,), dtype=float64),
    }),
})
  • Feature documentation:
Feature                                     Class         Shape        Dtype    Description
                                            FeaturesDict
episode_id                                  Tensor                     string
horizon                                     Tensor                     int32
steps                                       Dataset
steps/action                                Tensor        (7,)         float64
steps/discount                              Tensor                     int32
steps/is_first                              Tensor                     bool
steps/is_last                               Tensor                     bool
steps/is_terminal                           Tensor                     bool
steps/observation                           FeaturesDict
steps/observation/agentview_image           Image         (84, 84, 3)  uint8
steps/observation/object                    Tensor        (10,)        float64
steps/observation/robot0_eef_pos            Tensor        (3,)         float64  End-effector position
steps/observation/robot0_eef_quat           Tensor        (4,)         float64  End-effector orientation
steps/observation/robot0_eef_vel_ang        Tensor        (3,)         float64  End-effector angular velocity
steps/observation/robot0_eef_vel_lin        Tensor        (3,)         float64  End-effector Cartesian velocity
steps/observation/robot0_eye_in_hand_image  Image         (84, 84, 3)  uint8
steps/observation/robot0_gripper_qpos       Tensor        (2,)         float64  Gripper position
steps/observation/robot0_gripper_qvel       Tensor        (2,)         float64  Gripper velocity
steps/observation/robot0_joint_pos          Tensor        (7,)         float64  7DOF joint positions
steps/observation/robot0_joint_pos_cos      Tensor        (7,)         float64
steps/observation/robot0_joint_pos_sin      Tensor        (7,)         float64
steps/observation/robot0_joint_vel          Tensor        (7,)         float64  7DOF joint velocities
steps/reward                                Tensor                     float64
steps/states                                Tensor        (32,)        float64
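
A hedged usage sketch for this image config: the Image features decode automatically, so both camera views arrive as uint8 tensors of shape (84, 84, 3) alongside the float64 proprioceptive vectors.

import tensorflow_datasets as tfds

ds = tfds.load('robomimic_mg/lift_mg_image', split='train')

for episode in ds.take(1):
    for step in episode['steps'].take(1):
        obs = step['observation']
        agent_view = obs['agentview_image']           # uint8, (84, 84, 3)
        wrist_view = obs['robot0_eye_in_hand_image']  # uint8, (84, 84, 3)
        eef_pos = obs['robot0_eef_pos']               # float64, (3,)
        print(agent_view.shape, wrist_view.shape, eef_pos.numpy())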

robomimic_mg/lift_mg_low_dim

  • Download size: 302.25 MiB

  • Dataset size: 195.10 MiB

  • Auto-cached (documentation): Only when shuffle_files=False (train)

  • Splits:

Split      Examples
'train'    1,500

  • Feature structure:
FeaturesDict({
    'episode_id': string,
    'horizon': int32,
    'steps': Dataset({
        'action': Tensor(shape=(7,), dtype=float64),
        'discount': int32,
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'observation': FeaturesDict({
            'object': Tensor(shape=(10,), dtype=float64),
            'robot0_eef_pos': Tensor(shape=(3,), dtype=float64),
            'robot0_eef_quat': Tensor(shape=(4,), dtype=float64),
            'robot0_eef_vel_ang': Tensor(shape=(3,), dtype=float64),
            'robot0_eef_vel_lin': Tensor(shape=(3,), dtype=float64),
            'robot0_gripper_qpos': Tensor(shape=(2,), dtype=float64),
            'robot0_gripper_qvel': Tensor(shape=(2,), dtype=float64),
            'robot0_joint_pos': Tensor(shape=(7,), dtype=float64),
            'robot0_joint_pos_cos': Tensor(shape=(7,), dtype=float64),
            'robot0_joint_pos_sin': Tensor(shape=(7,), dtype=float64),
            'robot0_joint_vel': Tensor(shape=(7,), dtype=float64),
        }),
        'reward': float64,
        'states': Tensor(shape=(32,), dtype=float64),
    }),
})
  • Feature documentation:
Feature                                     Class         Shape        Dtype    Description
                                            FeaturesDict
episode_id                                  Tensor                     string
horizon                                     Tensor                     int32
steps                                       Dataset
steps/action                                Tensor        (7,)         float64
steps/discount                              Tensor                     int32
steps/is_first                              Tensor                     bool
steps/is_last                               Tensor                     bool
steps/is_terminal                           Tensor                     bool
steps/observation                           FeaturesDict
steps/observation/object                    Tensor        (10,)        float64
steps/observation/robot0_eef_pos            Tensor        (3,)         float64  End-effector position
steps/observation/robot0_eef_quat           Tensor        (4,)         float64  End-effector orientation
steps/observation/robot0_eef_vel_ang        Tensor        (3,)         float64  End-effector angular velocity
steps/observation/robot0_eef_vel_lin        Tensor        (3,)         float64  End-effector Cartesian velocity
steps/observation/robot0_gripper_qpos       Tensor        (2,)         float64  Gripper position
steps/observation/robot0_gripper_qvel       Tensor        (2,)         float64  Gripper velocity
steps/observation/robot0_joint_pos          Tensor        (7,)         float64  7DOF joint positions
steps/observation/robot0_joint_pos_cos      Tensor        (7,)         float64
steps/observation/robot0_joint_pos_sin      Tensor        (7,)         float64
steps/observation/robot0_joint_vel          Tensor        (7,)         float64  7DOF joint velocities
steps/reward                                Tensor                     float64
steps/states                                Tensor        (32,)        float64
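
Because the low_dim configs are small (and this one auto-caches when shuffle_files=False), they suit simple offline pipelines. The sketch below shows one possible way to flatten episodes into batched (observation, action, reward) transitions; the particular observation keys concatenated here are an illustrative choice, not a prescribed recipe.

import tensorflow as tf
import tensorflow_datasets as tfds

ds = tfds.load('robomimic_mg/lift_mg_low_dim', split='train')

# Unnest the per-episode step datasets into one flat stream of steps.
steps = ds.flat_map(lambda episode: episode['steps'])

def make_transition(step):
    obs = step['observation']
    # Concatenate a few float64 observation vectors into a single feature.
    flat_obs = tf.concat(
        [obs['object'],            # (10,)
         obs['robot0_eef_pos'],    # (3,)
         obs['robot0_eef_quat'],   # (4,)
         obs['robot0_gripper_qpos']], axis=0)  # (2,) -> (19,) total
    return flat_obs, step['action'], step['reward']

transitions = steps.map(make_transition).shuffle(10_000).batch(256)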

robomimic_mg/can_mg_image

  • Download size: 47.14 GiB

  • Dataset size: 11.15 GiB

  • Auto-cached (documentation): No

  • Splits:

Split      Examples
'train'    3,900

  • Feature structure:
FeaturesDict({
    'episode_id': string,
    'horizon': int32,
    'steps': Dataset({
        'action': Tensor(shape=(7,), dtype=float64),
        'discount': int32,
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'observation': FeaturesDict({
            'agentview_image': Image(shape=(84, 84, 3), dtype=uint8),
            'object': Tensor(shape=(14,), dtype=float64),
            'robot0_eef_pos': Tensor(shape=(3,), dtype=float64),
            'robot0_eef_quat': Tensor(shape=(4,), dtype=float64),
            'robot0_eef_vel_ang': Tensor(shape=(3,), dtype=float64),
            'robot0_eef_vel_lin': Tensor(shape=(3,), dtype=float64),
            'robot0_eye_in_hand_image': Image(shape=(84, 84, 3), dtype=uint8),
            'robot0_gripper_qpos': Tensor(shape=(2,), dtype=float64),
            'robot0_gripper_qvel': Tensor(shape=(2,), dtype=float64),
            'robot0_joint_pos': Tensor(shape=(7,), dtype=float64),
            'robot0_joint_pos_cos': Tensor(shape=(7,), dtype=float64),
            'robot0_joint_pos_sin': Tensor(shape=(7,), dtype=float64),
            'robot0_joint_vel': Tensor(shape=(7,), dtype=float64),
        }),
        'reward': float64,
        'states': Tensor(shape=(71,), dtype=float64),
    }),
})
  • Feature documentation:
Feature                                     Class         Shape        Dtype    Description
                                            FeaturesDict
episode_id                                  Tensor                     string
horizon                                     Tensor                     int32
steps                                       Dataset
steps/action                                Tensor        (7,)         float64
steps/discount                              Tensor                     int32
steps/is_first                              Tensor                     bool
steps/is_last                               Tensor                     bool
steps/is_terminal                           Tensor                     bool
steps/observation                           FeaturesDict
steps/observation/agentview_image           Image         (84, 84, 3)  uint8
steps/observation/object                    Tensor        (14,)        float64
steps/observation/robot0_eef_pos            Tensor        (3,)         float64  End-effector position
steps/observation/robot0_eef_quat           Tensor        (4,)         float64  End-effector orientation
steps/observation/robot0_eef_vel_ang        Tensor        (3,)         float64  End-effector angular velocity
steps/observation/robot0_eef_vel_lin        Tensor        (3,)         float64  End-effector Cartesian velocity
steps/observation/robot0_eye_in_hand_image  Image         (84, 84, 3)  uint8
steps/observation/robot0_gripper_qpos       Tensor        (2,)         float64  Gripper position
steps/observation/robot0_gripper_qvel       Tensor        (2,)         float64  Gripper velocity
steps/observation/robot0_joint_pos          Tensor        (7,)         float64  7DOF joint positions
steps/observation/robot0_joint_pos_cos      Tensor        (7,)         float64
steps/observation/robot0_joint_pos_sin      Tensor        (7,)         float64
steps/observation/robot0_joint_vel          Tensor        (7,)         float64  7DOF joint velocities
steps/reward                                Tensor                     float64
steps/states                                Tensor        (71,)        float64
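
The same metadata is available programmatically through tfds.builder, which can be handy given the size of this config; a minimal sketch:

import tensorflow_datasets as tfds

builder = tfds.builder('robomimic_mg/can_mg_image')
builder.download_and_prepare()  # triggers the full ~47 GiB download noted above
print(builder.info.features)                      # the FeaturesDict shown above
print(builder.info.splits['train'].num_examples)  # 3,900 episodes
ds = builder.as_dataset(split='train', shuffle_files=True)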

robomimic_mg/can_mg_low_dim

  • Download size: 1.01 GiB

  • Dataset size: 697.71 MiB

  • Auto-cached (documentation): No

  • Splits:

Split      Examples
'train'    3,900

  • Feature structure:
FeaturesDict({
    'episode_id': string,
    'horizon': int32,
    'steps': Dataset({
        'action': Tensor(shape=(7,), dtype=float64),
        'discount': int32,
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'observation': FeaturesDict({
            'object': Tensor(shape=(14,), dtype=float64),
            'robot0_eef_pos': Tensor(shape=(3,), dtype=float64),
            'robot0_eef_quat': Tensor(shape=(4,), dtype=float64),
            'robot0_eef_vel_ang': Tensor(shape=(3,), dtype=float64),
            'robot0_eef_vel_lin': Tensor(shape=(3,), dtype=float64),
            'robot0_gripper_qpos': Tensor(shape=(2,), dtype=float64),
            'robot0_gripper_qvel': Tensor(shape=(2,), dtype=float64),
            'robot0_joint_pos': Tensor(shape=(7,), dtype=float64),
            'robot0_joint_pos_cos': Tensor(shape=(7,), dtype=float64),
            'robot0_joint_pos_sin': Tensor(shape=(7,), dtype=float64),
            'robot0_joint_vel': Tensor(shape=(7,), dtype=float64),
        }),
        'reward': float64,
        'states': Tensor(shape=(71,), dtype=float64),
    }),
})
  • Feature documentation:
Feature                                     Class         Shape        Dtype    Description
                                            FeaturesDict
episode_id                                  Tensor                     string
horizon                                     Tensor                     int32
steps                                       Dataset
steps/action                                Tensor        (7,)         float64
steps/discount                              Tensor                     int32
steps/is_first                              Tensor                     bool
steps/is_last                               Tensor                     bool
steps/is_terminal                           Tensor                     bool
steps/observation                           FeaturesDict
steps/observation/object                    Tensor        (14,)        float64
steps/observation/robot0_eef_pos            Tensor        (3,)         float64  End-effector position
steps/observation/robot0_eef_quat           Tensor        (4,)         float64  End-effector orientation
steps/observation/robot0_eef_vel_ang        Tensor        (3,)         float64  End-effector angular velocity
steps/observation/robot0_eef_vel_lin        Tensor        (3,)         float64  End-effector Cartesian velocity
steps/observation/robot0_gripper_qpos       Tensor        (2,)         float64  Gripper position
steps/observation/robot0_gripper_qvel       Tensor        (2,)         float64  Gripper velocity
steps/observation/robot0_joint_pos          Tensor        (7,)         float64  7DOF joint positions
steps/observation/robot0_joint_pos_cos      Tensor        (7,)         float64
steps/observation/robot0_joint_pos_sin      Tensor        (7,)         float64
steps/observation/robot0_joint_vel          Tensor        (7,)         float64  7DOF joint velocities
steps/reward                                Tensor                     float64
steps/states                                Tensor        (71,)        float64
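
Finally, an illustrative sketch of reducing each nested step dataset to a per-episode undiscounted return (field names as documented above):

import tensorflow as tf
import tensorflow_datasets as tfds

ds = tfds.load('robomimic_mg/can_mg_low_dim', split='train')

def undiscounted_return(episode):
    # Sum the float64 per-step rewards over the whole episode.
    return episode['steps'].reduce(
        tf.constant(0.0, tf.float64),
        lambda total, step: total + step['reward'])

for episode in ds.take(5):
    print(episode['episode_id'].numpy(), undiscounted_return(episode).numpy())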