viola

  • Description:

Franka robot interacting with stylized kitchen tasks

  • Splits:

Split | Examples
:---- | -------:
`'test'` | 15
`'train'` | 135
  • Feature structure:
```
FeaturesDict({
    'steps': Dataset({
        'action': FeaturesDict({
            'gripper_closedness_action': float32,
            'rotation_delta': Tensor(shape=(3,), dtype=float32),
            'terminate_episode': float32,
            'world_vector': Tensor(shape=(3,), dtype=float32),
        }),
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'observation': FeaturesDict({
            'agentview_rgb': Image(shape=(224, 224, 3), dtype=uint8, description=RGB captured by the workspace camera),
            'ee_states': Tensor(shape=(16,), dtype=float32, description=Pose of the end effector, specified as a homogeneous matrix.),
            'eye_in_hand_rgb': Image(shape=(224, 224, 3), dtype=uint8, description=RGB captured by the in-hand camera),
            'gripper_states': Tensor(shape=(1,), dtype=float32, description=Gripper width of the Franka Panda gripper; a value of 0 means the gripper is fully closed.),
            'joint_states': Tensor(shape=(7,), dtype=float32, description=Joint values),
            'natural_language_embedding': Tensor(shape=(512,), dtype=float32),
            'natural_language_instruction': string,
        }),
        'reward': Scalar(shape=(), dtype=float32),
    }),
})
```
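
To make the nested `steps` structure concrete, the snippet below loads one episode and inspects its first step. This is a minimal sketch: it assumes the dataset is registered with TFDS under the name `viola` and is read in eager mode; point `tfds.load` at your own data directory if your setup differs.

```python
import tensorflow_datasets as tfds

# Minimal sketch: assumes 'viola' resolves through the TFDS catalog.
ds = tfds.load('viola', split='train')

for episode in ds.take(1):
    # 'steps' is a nested tf.data.Dataset of per-timestep dicts.
    for step in episode['steps']:
        obs = step['observation']
        action = step['action']
        print(obs['natural_language_instruction'].numpy().decode('utf-8'))
        print('world_vector:', action['world_vector'].numpy())      # (3,) translation delta
        print('rotation_delta:', action['rotation_delta'].numpy())  # (3,) rotation delta
        break  # only the first step
```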
  • Feature documentation:

Feature | Class | Shape | Dtype | Description
:------ | :---- | :---- | :---- | :----------
 | FeaturesDict | | |
steps | Dataset | | |
steps/action | FeaturesDict | | |
steps/action/gripper_closedness_action | Tensor | | float32 |
steps/action/rotation_delta | Tensor | (3,) | float32 |
steps/action/terminate_episode | Tensor | | float32 |
steps/action/world_vector | Tensor | (3,) | float32 |
steps/is_first | Tensor | | bool |
steps/is_last | Tensor | | bool |
steps/is_terminal | Tensor | | bool |
steps/observation | FeaturesDict | | |
steps/observation/agentview_rgb | Image | (224, 224, 3) | uint8 | RGB captured by the workspace camera
steps/observation/ee_states | Tensor | (16,) | float32 | Pose of the end effector, specified as a homogeneous matrix.
steps/observation/eye_in_hand_rgb | Image | (224, 224, 3) | uint8 | RGB captured by the in-hand camera
steps/observation/gripper_states | Tensor | (1,) | float32 | Gripper width of the Franka Panda gripper; a value of 0 means the gripper is fully closed.
steps/observation/joint_states | Tensor | (7,) | float32 | Joint values
steps/observation/natural_language_embedding | Tensor | (512,) | float32 |
steps/observation/natural_language_instruction | Tensor | | string |
steps/reward | Scalar | | float32 |
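
Because `ee_states` stores the end-effector pose as a flattened 4x4 homogeneous matrix, a small helper can recover the rotation and translation parts. The sketch below assumes row-major flattening, which the catalog does not specify, so verify the ordering against real data; `ee_pose_from_state` is a hypothetical helper name.

```python
import numpy as np

def ee_pose_from_state(ee_states: np.ndarray):
    """Split the 16-dim ee_states vector into rotation and translation.

    Assumption: the 4x4 homogeneous matrix is flattened row-major; the
    catalog does not document the ordering, so check it on real data.
    """
    T = ee_states.reshape(4, 4)
    rotation = T[:3, :3]     # 3x3 rotation matrix
    translation = T[:3, 3]   # xyz position of the end effector
    return rotation, translation
```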
  • Citation:
@inproceedings{zhu2022viola,
  title={VIOLA: Imitation Learning for Vision-Based Manipulation with Object Proposal Priors},
  author={Zhu, Yifeng and Joshi, Abhishek and Stone, Peter and Zhu, Yuke},
  booktitle={6th Annual Conference on Robot Learning (CoRL)},
  year={2022}
}