d4rl_adroit_relocate


  • Description:

D4RL is an open-source benchmark for offline reinforcement learning. It provides standardized environments and datasets for training and benchmarking algorithms.

The datasets follow the RLDS format to represent steps and episodes.
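
In the RLDS format, each example is one episode whose `steps` field is itself a sequence of timesteps carrying `is_first`/`is_last`/`is_terminal` flags (the configs below can be loaded by name with `tfds.load`). The following is a minimal sketch, using NumPy and toy data in place of real episodes, of how one such episode flattens into SARS-style transitions; `episode_to_transitions` is a hypothetical helper, not part of the dataset API:

```python
import numpy as np

def episode_to_transitions(steps):
    """Convert an RLDS-style episode (dict of per-step arrays) into
    (obs, action, reward, next_obs, done) tuples.

    Step t holds the observation before the action and the reward it
    produced, so next_obs comes from step t+1; the final step
    (is_last=True) only contributes its observation.
    """
    n = len(steps["is_last"])
    transitions = []
    for t in range(n - 1):
        transitions.append((
            steps["observation"][t],
            steps["action"][t],
            steps["reward"][t],
            steps["observation"][t + 1],
            bool(steps["is_terminal"][t + 1]),
        ))
    return transitions

# Toy 3-step episode (real observations are 39-dim, actions 30-dim).
episode = {
    "observation": np.arange(6, dtype=np.float32).reshape(3, 2),
    "action": np.zeros((3, 1), np.float32),
    "reward": np.array([0.0, 1.0, 0.0], np.float32),
    "is_first": np.array([True, False, False]),
    "is_last": np.array([False, False, True]),
    "is_terminal": np.array([False, False, False]),  # episode ended by time limit
}
transitions = episode_to_transitions(episode)
print(len(transitions))  # 2 transitions from a 3-step episode
```

Note the `is_last`/`is_terminal` distinction: the toy episode above ends (`is_last`) without reaching a terminal state, which matters when bootstrapping values at episode boundaries.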

  • Citation:

@misc{fu2020d4rl,
    title={D4RL: Datasets for Deep Data-Driven Reinforcement Learning},
    author={Justin Fu and Aviral Kumar and Ofir Nachum and George Tucker and Sergey Levine},
    year={2020},
    eprint={2004.07219},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

d4rl_adroit_relocate/v0-human (default config)

  • Download size: 4.87 MiB

  • Dataset size: 5.48 MiB

  • Auto-cached (documentation): Yes

  • Splits:

Split Examples
'train' 60
  • Feature structure:
FeaturesDict({
    'steps': Dataset({
        'action': Tensor(shape=(30,), dtype=tf.float32),
        'discount': tf.float32,
        'infos': FeaturesDict({
            'qpos': Tensor(shape=(36,), dtype=tf.float32),
            'qvel': Tensor(shape=(36,), dtype=tf.float32),
        }),
        'is_first': tf.bool,
        'is_last': tf.bool,
        'is_terminal': tf.bool,
        'observation': Tensor(shape=(39,), dtype=tf.float32),
        'reward': tf.float32,
    }),
})
  • Feature documentation:
Feature            Class         Shape  Dtype
FeaturesDict
steps              Dataset
steps/action       Tensor        (30,)  tf.float32
steps/discount     Tensor               tf.float32
steps/infos        FeaturesDict
steps/infos/qpos   Tensor        (36,)  tf.float32
steps/infos/qvel   Tensor        (36,)  tf.float32
steps/is_first     Tensor               tf.bool
steps/is_last      Tensor               tf.bool
steps/is_terminal  Tensor               tf.bool
steps/observation  Tensor        (39,)  tf.float32
steps/reward       Tensor               tf.float32

d4rl_adroit_relocate/v0-cloned

  • Download size: 647.11 MiB

  • Dataset size: 550.50 MiB

  • Auto-cached (documentation): No

  • Splits:

Split Examples
'train' 5,519
  • Feature structure:
FeaturesDict({
    'steps': Dataset({
        'action': Tensor(shape=(30,), dtype=tf.float32),
        'discount': tf.float64,
        'infos': FeaturesDict({
            'qpos': Tensor(shape=(36,), dtype=tf.float64),
            'qvel': Tensor(shape=(36,), dtype=tf.float64),
        }),
        'is_first': tf.bool,
        'is_last': tf.bool,
        'is_terminal': tf.bool,
        'observation': Tensor(shape=(39,), dtype=tf.float64),
        'reward': tf.float64,
    }),
})
  • Feature documentation:
Feature            Class         Shape  Dtype
FeaturesDict
steps              Dataset
steps/action       Tensor        (30,)  tf.float32
steps/discount     Tensor               tf.float64
steps/infos        FeaturesDict
steps/infos/qpos   Tensor        (36,)  tf.float64
steps/infos/qvel   Tensor        (36,)  tf.float64
steps/is_first     Tensor               tf.bool
steps/is_last      Tensor               tf.bool
steps/is_terminal  Tensor               tf.bool
steps/observation  Tensor        (39,)  tf.float64
steps/reward       Tensor               tf.float64
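
Note that v0-cloned stores `discount`, `infos`, `observation`, and `reward` as tf.float64, unlike the tf.float32 used by the other v0 configs. Below is a small sketch (NumPy, with a flat dict standing in for the real nested step; `to_float32` is a hypothetical helper) of down-casting so episodes from different configs can be batched together:

```python
import numpy as np

def to_float32(step):
    """Down-cast any float64 fields of a step to float32, leaving
    bool and float32 fields (such as 'action') untouched."""
    out = {}
    for key, value in step.items():
        arr = np.asarray(value)
        out[key] = arr.astype(np.float32) if arr.dtype == np.float64 else arr
    return out

step = {
    "action": np.zeros(30, np.float32),
    "observation": np.zeros(39, np.float64),
    "reward": np.float64(1.0),
    "is_last": False,
}
cast = to_float32(step)
print(cast["observation"].dtype)  # float32
```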

d4rl_adroit_relocate/v0-expert

  • Download size: 581.53 MiB

  • Dataset size: 778.97 MiB

  • Auto-cached (documentation): No

  • Splits:

Split Examples
'train' 5,000
  • Feature structure:
FeaturesDict({
    'steps': Dataset({
        'action': Tensor(shape=(30,), dtype=tf.float32),
        'discount': tf.float32,
        'infos': FeaturesDict({
            'action_logstd': Tensor(shape=(30,), dtype=tf.float32),
            'action_mean': Tensor(shape=(30,), dtype=tf.float32),
            'qpos': Tensor(shape=(36,), dtype=tf.float32),
            'qvel': Tensor(shape=(36,), dtype=tf.float32),
        }),
        'is_first': tf.bool,
        'is_last': tf.bool,
        'is_terminal': tf.bool,
        'observation': Tensor(shape=(39,), dtype=tf.float32),
        'reward': tf.float32,
    }),
})
  • Feature documentation:
Feature                    Class         Shape  Dtype
FeaturesDict
steps                      Dataset
steps/action               Tensor        (30,)  tf.float32
steps/discount             Tensor               tf.float32
steps/infos                FeaturesDict
steps/infos/action_logstd  Tensor        (30,)  tf.float32
steps/infos/action_mean    Tensor        (30,)  tf.float32
steps/infos/qpos           Tensor        (36,)  tf.float32
steps/infos/qvel           Tensor        (36,)  tf.float32
steps/is_first             Tensor               tf.bool
steps/is_last              Tensor               tf.bool
steps/is_terminal          Tensor               tf.bool
steps/observation          Tensor        (39,)  tf.float32
steps/reward               Tensor               tf.float32

d4rl_adroit_relocate/v1-human

  • Download size: 5.92 MiB

  • Dataset size: 6.94 MiB

  • Auto-cached (documentation): Yes

  • Splits:

Split Examples
'train' 25
  • Feature structure:
FeaturesDict({
    'steps': Dataset({
        'action': Tensor(shape=(30,), dtype=tf.float32),
        'discount': tf.float32,
        'infos': FeaturesDict({
            'hand_qpos': Tensor(shape=(30,), dtype=tf.float32),
            'obj_pos': Tensor(shape=(3,), dtype=tf.float32),
            'palm_pos': Tensor(shape=(3,), dtype=tf.float32),
            'qpos': Tensor(shape=(36,), dtype=tf.float32),
            'qvel': Tensor(shape=(36,), dtype=tf.float32),
            'target_pos': Tensor(shape=(3,), dtype=tf.float32),
        }),
        'is_first': tf.bool,
        'is_last': tf.bool,
        'is_terminal': tf.bool,
        'observation': Tensor(shape=(39,), dtype=tf.float32),
        'reward': tf.float32,
    }),
})
  • Feature documentation:
Feature                 Class         Shape  Dtype
FeaturesDict
steps                   Dataset
steps/action            Tensor        (30,)  tf.float32
steps/discount          Tensor               tf.float32
steps/infos             FeaturesDict
steps/infos/hand_qpos   Tensor        (30,)  tf.float32
steps/infos/obj_pos     Tensor        (3,)   tf.float32
steps/infos/palm_pos    Tensor        (3,)   tf.float32
steps/infos/qpos        Tensor        (36,)  tf.float32
steps/infos/qvel        Tensor        (36,)  tf.float32
steps/infos/target_pos  Tensor        (3,)   tf.float32
steps/is_first          Tensor               tf.bool
steps/is_last           Tensor               tf.bool
steps/is_terminal       Tensor               tf.bool
steps/observation       Tensor        (39,)  tf.float32
steps/reward            Tensor               tf.float32
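
The v1 `infos` fields expose the quantities the relocate task is scored on: hand joint positions plus object, palm, and target positions. Below is a sketch (NumPy, toy coordinates; `relocate_distances` is a hypothetical helper, not the environment's actual reward function) of the two distances a shaped relocate reward is typically built from:

```python
import numpy as np

def relocate_distances(infos):
    """Palm-to-object distance (reaching/grasping) and object-to-target
    distance (placing), computed from one step's infos fields."""
    reach = float(np.linalg.norm(infos["palm_pos"] - infos["obj_pos"]))
    place = float(np.linalg.norm(infos["obj_pos"] - infos["target_pos"]))
    return reach, place

infos = {
    "obj_pos": np.array([0.0, 0.0, 0.0], np.float32),
    "palm_pos": np.array([1.0, 0.0, 0.0], np.float32),
    "target_pos": np.array([0.0, 3.0, 4.0], np.float32),
}
reach, place = relocate_distances(infos)
print(reach, place)  # 1.0 5.0
```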

d4rl_adroit_relocate/v1-cloned

  • Download size: 554.39 MiB

  • Dataset size: 1.86 GiB

  • Auto-cached (documentation): No

  • Splits:

Split Examples
'train' 3,758
  • Feature structure:
FeaturesDict({
    'algorithm': tf.string,
    'policy': FeaturesDict({
        'fc0': FeaturesDict({
            'bias': Tensor(shape=(256,), dtype=tf.float32),
            'weight': Tensor(shape=(39, 256), dtype=tf.float32),
        }),
        'fc1': FeaturesDict({
            'bias': Tensor(shape=(256,), dtype=tf.float32),
            'weight': Tensor(shape=(256, 256), dtype=tf.float32),
        }),
        'last_fc': FeaturesDict({
            'bias': Tensor(shape=(30,), dtype=tf.float32),
            'weight': Tensor(shape=(256, 30), dtype=tf.float32),
        }),
        'nonlinearity': tf.string,
        'output_distribution': tf.string,
    }),
    'steps': Dataset({
        'action': Tensor(shape=(30,), dtype=tf.float32),
        'discount': tf.float32,
        'infos': FeaturesDict({
            'hand_qpos': Tensor(shape=(30,), dtype=tf.float32),
            'obj_pos': Tensor(shape=(3,), dtype=tf.float32),
            'palm_pos': Tensor(shape=(3,), dtype=tf.float32),
            'qpos': Tensor(shape=(36,), dtype=tf.float32),
            'qvel': Tensor(shape=(36,), dtype=tf.float32),
            'target_pos': Tensor(shape=(3,), dtype=tf.float32),
        }),
        'is_first': tf.bool,
        'is_last': tf.bool,
        'is_terminal': tf.bool,
        'observation': Tensor(shape=(39,), dtype=tf.float32),
        'reward': tf.float32,
    }),
})
  • Feature documentation:
Feature                     Class         Shape       Dtype
FeaturesDict
algorithm                   Tensor                    tf.string
policy                      FeaturesDict
policy/fc0                  FeaturesDict
policy/fc0/bias             Tensor        (256,)      tf.float32
policy/fc0/weight           Tensor        (39, 256)   tf.float32
policy/fc1                  FeaturesDict
policy/fc1/bias             Tensor        (256,)      tf.float32
policy/fc1/weight           Tensor        (256, 256)  tf.float32
policy/last_fc              FeaturesDict
policy/last_fc/bias         Tensor        (30,)       tf.float32
policy/last_fc/weight       Tensor        (256, 30)   tf.float32
policy/nonlinearity         Tensor                    tf.string
policy/output_distribution  Tensor                    tf.string
steps                       Dataset
steps/action                Tensor        (30,)       tf.float32
steps/discount              Tensor                    tf.float32
steps/infos                 FeaturesDict
steps/infos/hand_qpos       Tensor        (30,)       tf.float32
steps/infos/obj_pos         Tensor        (3,)        tf.float32
steps/infos/palm_pos        Tensor        (3,)        tf.float32
steps/infos/qpos            Tensor        (36,)       tf.float32
steps/infos/qvel            Tensor        (36,)       tf.float32
steps/infos/target_pos      Tensor        (3,)        tf.float32
steps/is_first              Tensor                    tf.bool
steps/is_last               Tensor                    tf.bool
steps/is_terminal           Tensor                    tf.bool
steps/observation           Tensor        (39,)       tf.float32
steps/reward                Tensor                    tf.float32
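
v1-cloned also records the behavior-cloned policy's weights alongside the trajectories. Given the shapes above (`policy/fc0/weight` is (39, 256), so inputs multiply on the left), a forward pass can be sketched as follows. The random weights stand in for a real record, and tanh is an assumption: the actual activation should be read from `policy/nonlinearity`.

```python
import numpy as np

def run_cloned_policy(policy, obs, nonlinearity=np.tanh):
    """Forward pass through the stored 39 -> 256 -> 256 -> 30 MLP,
    following the (in_features, out_features) weight layout of the
    v1-cloned feature spec."""
    h = nonlinearity(obs @ policy["fc0"]["weight"] + policy["fc0"]["bias"])
    h = nonlinearity(h @ policy["fc1"]["weight"] + policy["fc1"]["bias"])
    return h @ policy["last_fc"]["weight"] + policy["last_fc"]["bias"]

# Random stand-in weights with the documented shapes.
rng = np.random.default_rng(0)
policy = {
    "fc0": {"weight": rng.normal(size=(39, 256)).astype(np.float32) * 0.1,
            "bias": np.zeros(256, np.float32)},
    "fc1": {"weight": rng.normal(size=(256, 256)).astype(np.float32) * 0.1,
            "bias": np.zeros(256, np.float32)},
    "last_fc": {"weight": rng.normal(size=(256, 30)).astype(np.float32) * 0.1,
                "bias": np.zeros(30, np.float32)},
}
action = run_cloned_policy(policy, np.zeros(39, np.float32))
print(action.shape)  # (30,)
```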

d4rl_adroit_relocate/v1-expert

  • Download size: 682.47 MiB

  • Dataset size: 1012.49 MiB

  • Auto-cached (documentation): No

  • Splits:

Split Examples
'train' 5,000
  • Feature structure:
FeaturesDict({
    'algorithm': tf.string,
    'policy': FeaturesDict({
        'fc0': FeaturesDict({
            'bias': Tensor(shape=(32,), dtype=tf.float32),
            'weight': Tensor(shape=(32, 39), dtype=tf.float32),
        }),
        'fc1': FeaturesDict({
            'bias': Tensor(shape=(32,), dtype=tf.float32),
            'weight': Tensor(shape=(32, 32), dtype=tf.float32),
        }),
        'last_fc': FeaturesDict({
            'bias': Tensor(shape=(30,), dtype=tf.float32),
            'weight': Tensor(shape=(30, 32), dtype=tf.float32),
        }),
        'last_fc_log_std': FeaturesDict({
            'bias': Tensor(shape=(30,), dtype=tf.float32),
            'weight': Tensor(shape=(30, 32), dtype=tf.float32),
        }),
        'nonlinearity': tf.string,
        'output_distribution': tf.string,
    }),
    'steps': Dataset({
        'action': Tensor(shape=(30,), dtype=tf.float32),
        'discount': tf.float32,
        'infos': FeaturesDict({
            'action_log_std': Tensor(shape=(30,), dtype=tf.float32),
            'action_mean': Tensor(shape=(30,), dtype=tf.float32),
            'hand_qpos': Tensor(shape=(30,), dtype=tf.float32),
            'obj_pos': Tensor(shape=(3,), dtype=tf.float32),
            'palm_pos': Tensor(shape=(3,), dtype=tf.float32),
            'qpos': Tensor(shape=(36,), dtype=tf.float32),
            'qvel': Tensor(shape=(36,), dtype=tf.float32),
            'target_pos': Tensor(shape=(3,), dtype=tf.float32),
        }),
        'is_first': tf.bool,
        'is_last': tf.bool,
        'is_terminal': tf.bool,
        'observation': Tensor(shape=(39,), dtype=tf.float32),
        'reward': tf.float32,
    }),
})
  • Feature documentation:
Feature                        Class         Shape     Dtype
FeaturesDict
algorithm                      Tensor                  tf.string
policy                         FeaturesDict
policy/fc0                     FeaturesDict
policy/fc0/bias                Tensor        (32,)     tf.float32
policy/fc0/weight              Tensor        (32, 39)  tf.float32
policy/fc1                     FeaturesDict
policy/fc1/bias                Tensor        (32,)     tf.float32
policy/fc1/weight              Tensor        (32, 32)  tf.float32
policy/last_fc                 FeaturesDict
policy/last_fc/bias            Tensor        (30,)     tf.float32
policy/last_fc/weight          Tensor        (30, 32)  tf.float32
policy/last_fc_log_std         FeaturesDict
policy/last_fc_log_std/bias    Tensor        (30,)     tf.float32
policy/last_fc_log_std/weight  Tensor        (30, 32)  tf.float32
policy/nonlinearity            Tensor                  tf.string
policy/output_distribution     Tensor                  tf.string
steps                          Dataset
steps/action                   Tensor        (30,)     tf.float32
steps/discount                 Tensor                  tf.float32
steps/infos                    FeaturesDict
steps/infos/action_log_std     Tensor        (30,)     tf.float32
steps/infos/action_mean        Tensor        (30,)     tf.float32
steps/infos/hand_qpos          Tensor        (30,)     tf.float32
steps/infos/obj_pos            Tensor        (3,)      tf.float32
steps/infos/palm_pos           Tensor        (3,)      tf.float32
steps/infos/qpos               Tensor        (36,)     tf.float32
steps/infos/qvel               Tensor        (36,)     tf.float32
steps/infos/target_pos         Tensor        (3,)      tf.float32
steps/is_first                 Tensor                  tf.bool
steps/is_last                  Tensor                  tf.bool
steps/is_terminal              Tensor                  tf.bool
steps/observation              Tensor        (39,)     tf.float32
steps/reward                   Tensor                  tf.float32
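
v1-expert additionally stores the expert's Gaussian head (`last_fc` for the mean, `last_fc_log_std` for the log standard deviation; note these weights are (30, 32), the transposed convention from v1-cloned) and, per step, the resulting `action_mean` and `action_log_std`. Below is a sketch (NumPy, toy values; `sample_action` is a hypothetical helper) of how an action relates to those two fields under a diagonal Gaussian:

```python
import numpy as np

def sample_action(mean, log_std, rng):
    """Draw one action from the diagonal Gaussian described by a step's
    infos/action_mean and infos/action_log_std."""
    return mean + np.exp(log_std) * rng.standard_normal(mean.shape)

rng = np.random.default_rng(0)
mean = np.zeros(30, np.float32)
log_std = np.full(30, -1.0, np.float32)  # per-dimension std = exp(-1) ~ 0.37
action = sample_action(mean, log_std, rng)
print(action.shape)  # (30,)
```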