- Description: Table-top manipulation with 17 objects.
- Homepage: https://ai.googleblog.com/2022/12/rt-1-robotics-transformer-for-real.html
- Source code: tfds.robotics.rtx.Fractal20220817Data
- Versions:
  - 0.1.0 (default): Initial release.
- Download size: Unknown size
- Dataset size: 111.38 GiB
- Auto-cached (documentation): No
- Splits:

Split | Examples |
---|---|
'train' | 87,212 |
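
A minimal loading sketch for reference. The registered dataset name is assumed to be `fractal20220817_data` (inferred from the builder class above); since no download size is published, you may need to point `data_dir` at an already-prepared copy of the dataset.

```python
import tensorflow_datasets as tfds

# Minimal loading sketch. The name 'fractal20220817_data' is inferred from the
# builder class above; pass data_dir=... if the prepared dataset lives
# somewhere other than the default ~/tensorflow_datasets.
builder = tfds.builder('fractal20220817_data')
builder.download_and_prepare()
ds = builder.as_dataset(split='train')

print(builder.info.splits['train'].num_examples)  # 87,212 episodes
print(ds.element_spec)  # nested spec matching the feature structure below
```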
- Feature structure:
FeaturesDict({
    'aspects': FeaturesDict({
        'already_success': bool,
        'feasible': bool,
        'has_aspects': bool,
        'success': bool,
        'undesirable': bool,
    }),
    'attributes': FeaturesDict({
        'collection_mode': int64,
        'collection_mode_name': string,
        'data_type': int64,
        'data_type_name': string,
        'env': int64,
        'env_name': string,
        'location': int64,
        'location_name': string,
        'objects_family': int64,
        'objects_family_name': string,
        'task_family': int64,
        'task_family_name': string,
    }),
    'steps': Dataset({
        'action': FeaturesDict({
            'base_displacement_vector': Tensor(shape=(2,), dtype=float32),
            'base_displacement_vertical_rotation': Tensor(shape=(1,), dtype=float32),
            'gripper_closedness_action': Tensor(shape=(1,), dtype=float32, description=continuous gripper position),
            'rotation_delta': Tensor(shape=(3,), dtype=float32, description=rpy commanded orientation displacement, in base-relative frame),
            'terminate_episode': Tensor(shape=(3,), dtype=int32),
            'world_vector': Tensor(shape=(3,), dtype=float32, description=commanded end-effector displacement, in base-relative frame),
        }),
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'observation': FeaturesDict({
            'base_pose_tool_reached': Tensor(shape=(7,), dtype=float32, description=end-effector base-relative position+quaternion pose),
            'gripper_closed': Tensor(shape=(1,), dtype=float32),
            'gripper_closedness_commanded': Tensor(shape=(1,), dtype=float32, description=continuous gripper position),
            'height_to_bottom': Tensor(shape=(1,), dtype=float32, description=height of end-effector from ground),
            'image': Image(shape=(256, 320, 3), dtype=uint8),
            'natural_language_embedding': Tensor(shape=(512,), dtype=float32),
            'natural_language_instruction': string,
            'orientation_box': Tensor(shape=(2, 3), dtype=float32),
            'orientation_start': Tensor(shape=(4,), dtype=float32),
            'robot_orientation_positions_box': Tensor(shape=(3, 3), dtype=float32),
            'rotation_delta_to_go': Tensor(shape=(3,), dtype=float32, description=rotational displacement from current orientation to target),
            'src_rotation': Tensor(shape=(4,), dtype=float32),
            'vector_to_go': Tensor(shape=(3,), dtype=float32, description=displacement from current end-effector position to target),
            'workspace_bounds': Tensor(shape=(3, 3), dtype=float32),
        }),
        'reward': Scalar(shape=(), dtype=float32),
    }),
})
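
Note that steps is a nested Dataset feature: each episode element contains its own tf.data.Dataset of timesteps. A sketch of walking one episode (dataset name as assumed above):

```python
import tensorflow_datasets as tfds

# Walk the nested structure of a single episode. The dataset name
# 'fractal20220817_data' is an assumption carried over from the sketch above.
ds = tfds.load('fractal20220817_data', split='train')

for episode in ds.take(1):
    print(episode['attributes']['task_family_name'].numpy())
    # 'steps' is itself a tf.data.Dataset of timesteps within the episode.
    for step in episode['steps'].take(1):
        print(step['observation']['natural_language_instruction'].numpy())
        image = step['observation']['image']        # (256, 320, 3) uint8
        action = step['action']['world_vector']     # (3,) float32, base-relative displacement
```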
- Feature documentation:
Feature | Class | Shape | Dtype | Description |
---|---|---|---|---|
 | FeaturesDict | | | |
aspects | FeaturesDict | | | Session Aspects for crowdcompute ratings |
aspects/already_success | Tensor | | bool | |
aspects/feasible | Tensor | | bool | |
aspects/has_aspects | Tensor | | bool | |
aspects/success | Tensor | | bool | |
aspects/undesirable | Tensor | | bool | |
attributes | FeaturesDict | | | |
attributes/collection_mode | Tensor | | int64 | |
attributes/collection_mode_name | Tensor | | string | |
attributes/data_type | Tensor | | int64 | |
attributes/data_type_name | Tensor | | string | |
attributes/env | Tensor | | int64 | |
attributes/env_name | Tensor | | string | |
attributes/location | Tensor | | int64 | |
attributes/location_name | Tensor | | string | |
attributes/objects_family | Tensor | | int64 | |
attributes/objects_family_name | Tensor | | string | |
attributes/task_family | Tensor | | int64 | |
attributes/task_family_name | Tensor | | string | |
steps | Dataset | | | |
steps/action | FeaturesDict | | | |
steps/action/base_displacement_vector | Tensor | (2,) | float32 | |
steps/action/base_displacement_vertical_rotation | Tensor | (1,) | float32 | |
steps/action/gripper_closedness_action | Tensor | (1,) | float32 | continuous gripper position |
steps/action/rotation_delta | Tensor | (3,) | float32 | rpy commanded orientation displacement, in base-relative frame |
steps/action/terminate_episode | Tensor | (3,) | int32 | |
steps/action/world_vector | Tensor | (3,) | float32 | commanded end-effector displacement, in base-relative frame |
steps/is_first | Tensor | | bool | |
steps/is_last | Tensor | | bool | |
steps/is_terminal | Tensor | | bool | |
steps/observation | FeaturesDict | | | |
steps/observation/base_pose_tool_reached | Tensor | (7,) | float32 | end-effector base-relative position+quaternion pose |
steps/observation/gripper_closed | Tensor | (1,) | float32 | |
steps/observation/gripper_closedness_commanded | Tensor | (1,) | float32 | continuous gripper position |
steps/observation/height_to_bottom | Tensor | (1,) | float32 | height of end-effector from ground |
steps/observation/image | Image | (256, 320, 3) | uint8 | |
steps/observation/natural_language_embedding | Tensor | (512,) | float32 | |
steps/observation/natural_language_instruction | Tensor | | string | |
steps/observation/orientation_box | Tensor | (2, 3) | float32 | |
steps/observation/orientation_start | Tensor | (4,) | float32 | |
steps/observation/robot_orientation_positions_box | Tensor | (3, 3) | float32 | |
steps/observation/rotation_delta_to_go | Tensor | (3,) | float32 | rotational displacement from current orientation to target |
steps/observation/src_rotation | Tensor | (4,) | float32 | |
steps/observation/vector_to_go | Tensor | (3,) | float32 | displacement from current end-effector position to target |
steps/observation/workspace_bounds | Tensor | (3, 3) | float32 | |
steps/reward | Scalar | | float32 | |
- Supervised keys (see as_supervised doc): None
- Figure (tfds.show_examples): Not supported.
- Examples (tfds.as_dataframe):
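
Because there are no supervised keys, (input, target) pairs have to be assembled manually. A sketch, under the same naming assumptions as above, that flattens episodes into a stream of (image, action) timesteps:

```python
import tensorflow_datasets as tfds

# Flatten episodes into per-timestep (image, action) pairs. The dataset name
# and the choice of fields are illustrative assumptions, not part of this page.
ds = tfds.load('fractal20220817_data', split='train')

def to_timesteps(episode):
    # Each episode carries a nested tf.data.Dataset of steps.
    return episode['steps'].map(
        lambda step: (step['observation']['image'],
                      step['action']['world_vector']))

timesteps = ds.flat_map(to_timesteps).batch(32)
images, actions = next(iter(timesteps))
print(images.shape, actions.shape)  # (32, 256, 320, 3) (32, 3)
```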
- Citation:
@article{brohan2022rt,
  title={Rt-1: Robotics transformer for real-world control at scale},
  author={Brohan, Anthony and Brown, Noah and Carbajal, Justice and Chebotar, Yevgen and Dabis, Joseph and Finn, Chelsea and Gopalakrishnan, Keerthana and Hausman, Karol and Herzog, Alex and Hsu, Jasmine and others},
  journal={arXiv preprint arXiv:2212.06817},
  year={2022}
}