kuka

  • Description:

Bin picking and rearrangement tasks

  • Splits:
Split     Examples
'train'   580,392
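A minimal loading sketch, assuming the dataset is registered with TFDS under the name 'kuka'; if it is only distributed as files (for example as part of a larger robotics collection), tfds.builder_from_directory pointed at its storage path would be used instead:

import tensorflow_datasets as tfds

# 'kuka' as a registered builder name is an assumption; adjust the name,
# or switch to tfds.builder_from_directory(...) if the data lives at a path.
ds = tfds.load('kuka', split='train')   # 580,392 episodes in the 'train' split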
  • Feature structure:
FeaturesDict({
    'steps': Dataset({
        'action': FeaturesDict({
            'base_displacement_vector': Tensor(shape=(2,), dtype=float32),
            'base_displacement_vertical_rotation': Tensor(shape=(1,), dtype=float32),
            'gripper_closedness_action': Tensor(shape=(1,), dtype=float32),
            'rotation_delta': Tensor(shape=(3,), dtype=float32),
            'terminate_episode': Tensor(shape=(3,), dtype=int32),
            'world_vector': Tensor(shape=(3,), dtype=float32),
        }),
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'observation': FeaturesDict({
            'clip_function_input/base_pose_tool_reached': Tensor(shape=(7,), dtype=float32),
            'clip_function_input/workspace_bounds': Tensor(shape=(3, 3), dtype=float32),
            'gripper_closed': Tensor(shape=(1,), dtype=float32),
            'height_to_bottom': Tensor(shape=(1,), dtype=float32),
            'image': Image(shape=(512, 640, 3), dtype=uint8),
            'natural_language_embedding': Tensor(shape=(512,), dtype=float32),
            'natural_language_instruction': string,
            'task_id': Tensor(shape=(1,), dtype=float32),
        }),
        'reward': Scalar(shape=(), dtype=float32),
    }),
    'success': bool,
})
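Each element of the dataset is one episode with this nested structure. A short iteration sketch, with field names taken from the structure listed above (the builder name remains an assumption, as noted earlier):

import tensorflow_datasets as tfds

ds = tfds.load('kuka', split='train')  # builder name is an assumption

for episode in ds.take(1):
    # Episode-level flag.
    print('success:', bool(episode['success'].numpy()))
    # 'steps' is itself a nested tf.data.Dataset of per-timestep dicts.
    for step in episode['steps']:
        image = step['observation']['image']                   # (512, 640, 3) uint8
        instruction = step['observation']['natural_language_instruction']
        world_vector = step['action']['world_vector']           # (3,) float32
        gripper = step['action']['gripper_closedness_action']   # (1,) float32
        reward = float(step['reward'].numpy())
        if bool(step['is_last'].numpy()):
            break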
  • Feature documentation:
Feature Class Shape Dtype Description
FeaturesDict
steps Dataset
steps/action FeaturesDict
steps/action/base_displacement_vector Tensor (2,) float32
steps/action/base_displacement_vertical_rotation Tensor (1,) float32
steps/action/gripper_closedness_action Tensor (1,) float32
steps/action/rotation_delta Tensor (3,) float32
steps/action/terminate_episode Tensor (3,) int32
steps/action/world_vector Tensor (3,) float32
steps/is_first Tensor bool
steps/is_last Tensor bool
steps/is_terminal Tensor bool
steps/observation FeaturesDict
steps/observation/clip_function_input/base_pose_tool_reached Tensor (7,) float32
steps/observation/clip_function_input/workspace_bounds Tensor (3, 3) float32
steps/observation/gripper_closed Tensor (1,) float32
steps/observation/height_to_bottom Tensor (1,) float32
steps/observation/image Image (512, 640, 3) uint8
steps/observation/natural_language_embedding Tensor (512,) float32
steps/observation/natural_language_instruction Tensor string
steps/observation/task_id Tensor (1,) float32
steps/reward Scalar float32
success Tensor bool
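As a worked example of the shapes above, the per-field action tensors can be concatenated into a single flat vector. The field order below is an arbitrary choice for this sketch, not something the dataset documentation prescribes:

import tensorflow as tf

def flatten_action(action):
    """Concatenate the documented action fields into one (13,) float32 vector.

    2 + 1 + 1 + 3 + 3 + 3 = 13 dimensions; terminate_episode is cast from
    int32 to float32. The ordering here is illustrative only.
    """
    return tf.concat([
        action['base_displacement_vector'],             # (2,)
        action['base_displacement_vertical_rotation'],  # (1,)
        action['gripper_closedness_action'],            # (1,)
        action['rotation_delta'],                       # (3,)
        action['world_vector'],                         # (3,)
        tf.cast(action['terminate_episode'], tf.float32),  # (3,)
    ], axis=0)

Applied per step, e.g. flatten_action(step['action']), this yields a fixed-size action per timestep, a common preprocessing step when training policies on this kind of data.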
  • Citation:
@article{kalashnikov2018qt,
  title={{QT-Opt}: Scalable deep reinforcement learning for vision-based robotic manipulation},
  author={Kalashnikov, Dmitry and Irpan, Alex and Pastor, Peter and Ibarz, Julian and Herzog, Alexander and Jang, Eric and Quillen, Deirdre and Holly, Ethan and Kalakrishnan, Mrinal and Vanhoucke, Vincent and others},
  journal={arXiv preprint arXiv:1806.10293},
  year={2018}
}