d4rl_adroit_pen

  • Description:

D4RL is an open-source benchmark for offline reinforcement learning. It provides standardized environments and datasets for training and benchmarking algorithms.

The datasets follow the RLDS format to represent steps and episodes.
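
Since the data follows the RLDS step/episode layout, it can be consumed directly with `tensorflow_datasets`. The snippet below is a minimal loading sketch, assuming `tensorflow_datasets` is installed and the dataset has been downloaded and prepared; the config name is the default one documented below, and the field names match the feature structures on this page.

import tensorflow_datasets as tfds

# Load the default config; each example is one episode in RLDS layout.
ds = tfds.load('d4rl_adroit_pen/v0-human', split='train')
for episode in ds.take(1):
    # 'steps' is a nested tf.data.Dataset of per-step dictionaries.
    for step in episode['steps'].take(3):
        print(step['observation'].shape, float(step['reward']))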

  • Citation:

@misc{fu2020d4rl,
    title={D4RL: Datasets for Deep Data-Driven Reinforcement Learning},
    author={Justin Fu and Aviral Kumar and Ofir Nachum and George Tucker and Sergey Levine},
    year={2020},
    eprint={2004.07219},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

d4rl_adroit_pen/v0-human (default config)

  • Splits:

Split | Examples
:---- | -------:
'train' | 50
  • Feature structure:
FeaturesDict({
    'steps': Dataset({
        'action': Tensor(shape=(24,), dtype=float32),
        'discount': float32,
        'infos': FeaturesDict({
            'qpos': Tensor(shape=(30,), dtype=float32),
            'qvel': Tensor(shape=(30,), dtype=float32),
        }),
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'observation': Tensor(shape=(45,), dtype=float32),
        'reward': float32,
    }),
})
  • Feature documentation:
Feature | Class | Shape | Dtype | Description
:------ | :---- | :---- | :---- | :----------
 | FeaturesDict | | |
steps | Dataset | | |
steps/action | Tensor | (24,) | float32 |
steps/discount | Tensor | | float32 |
steps/infos | FeaturesDict | | |
steps/infos/qpos | Tensor | (30,) | float32 |
steps/infos/qvel | Tensor | (30,) | float32 |
steps/is_first | Tensor | | bool |
steps/is_last | Tensor | | bool |
steps/is_terminal | Tensor | | bool |
steps/observation | Tensor | (45,) | float32 |
steps/reward | Tensor | | float32 |
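
Because every episode stores its steps as a nested dataset, a common pattern is to flatten episodes into one stream of steps before further processing. A small sketch under that assumption, with field names taken from the structure above:

import tensorflow_datasets as tfds

episodes = tfds.load('d4rl_adroit_pen/v0-human', split='train')
# flat_map concatenates the per-episode 'steps' datasets into one step stream.
steps = episodes.flat_map(lambda episode: episode['steps'])
for step in steps.take(2):
    qpos = step['infos']['qpos']  # (30,) float32, per the table above
    qvel = step['infos']['qvel']  # (30,) float32
    print(bool(step['is_first']), qpos.shape, qvel.shape)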

d4rl_adroit_pen/v0-cloned

  • Splits:

Split | Examples
:---- | -------:
'train' | 5,023
  • Feature structure:
FeaturesDict({
    'steps': Dataset({
        'action': Tensor(shape=(24,), dtype=float32),
        'discount': float64,
        'infos': FeaturesDict({
            'qpos': Tensor(shape=(30,), dtype=float64),
            'qvel': Tensor(shape=(30,), dtype=float64),
        }),
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'observation': Tensor(shape=(45,), dtype=float64),
        'reward': float64,
    }),
})
  • Feature documentation:
Feature | Class | Shape | Dtype | Description
:------ | :---- | :---- | :---- | :----------
 | FeaturesDict | | |
steps | Dataset | | |
steps/action | Tensor | (24,) | float32 |
steps/discount | Tensor | | float64 |
steps/infos | FeaturesDict | | |
steps/infos/qpos | Tensor | (30,) | float64 |
steps/infos/qvel | Tensor | (30,) | float64 |
steps/is_first | Tensor | | bool |
steps/is_last | Tensor | | bool |
steps/is_terminal | Tensor | | bool |
steps/observation | Tensor | (45,) | float64 |
steps/reward | Tensor | | float64 |
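
Note that v0-cloned stores observation, reward, discount, qpos and qvel as float64, unlike the other v0 configs. If a float32 pipeline is preferred, one option is to cast on the fly; a sketch assuming that choice:

import tensorflow as tf
import tensorflow_datasets as tfds

def to_float32(step):
    # Cast only the float64 fields listed above; the action is already float32
    # and the is_* flags stay bool.
    step = dict(step)
    for key in ('observation', 'reward', 'discount'):
        step[key] = tf.cast(step[key], tf.float32)
    step['infos'] = {k: tf.cast(v, tf.float32) for k, v in step['infos'].items()}
    return step

episodes = tfds.load('d4rl_adroit_pen/v0-cloned', split='train')
steps = episodes.flat_map(lambda e: e['steps']).map(to_float32)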

d4rl_adroit_pen/v0-expert

  • Splits:

Split | Examples
:---- | -------:
'train' | 5,000
  • Feature structure:
FeaturesDict({
    'steps': Dataset({
        'action': Tensor(shape=(24,), dtype=float32),
        'discount': float32,
        'infos': FeaturesDict({
            'action_logstd': Tensor(shape=(24,), dtype=float32),
            'action_mean': Tensor(shape=(24,), dtype=float32),
            'qpos': Tensor(shape=(30,), dtype=float32),
            'qvel': Tensor(shape=(30,), dtype=float32),
        }),
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'observation': Tensor(shape=(45,), dtype=float32),
        'reward': float32,
    }),
})
  • Feature documentation:
Feature | Class | Shape | Dtype | Description
:------ | :---- | :---- | :---- | :----------
 | FeaturesDict | | |
steps | Dataset | | |
steps/action | Tensor | (24,) | float32 |
steps/discount | Tensor | | float32 |
steps/infos | FeaturesDict | | |
steps/infos/action_logstd | Tensor | (24,) | float32 |
steps/infos/action_mean | Tensor | (24,) | float32 |
steps/infos/qpos | Tensor | (30,) | float32 |
steps/infos/qvel | Tensor | (30,) | float32 |
steps/is_first | Tensor | | bool |
steps/is_last | Tensor | | bool |
steps/is_terminal | Tensor | | bool |
steps/observation | Tensor | (45,) | float32 |
steps/reward | Tensor | | float32 |
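
The expert config additionally logs the behaviour policy's per-step Gaussian parameters (action_mean, action_logstd). Assuming the usual diagonal-Gaussian convention for these fields, the log-likelihood of a stored action under that policy can be recomputed; a small NumPy sketch:

import numpy as np

def diag_gaussian_log_prob(action, mean, log_std):
    # log N(action | mean, diag(exp(log_std))^2), summed over the 24 action dims.
    var = np.exp(2.0 * log_std)
    return -0.5 * np.sum((action - mean) ** 2 / var + 2.0 * log_std + np.log(2.0 * np.pi))

# In practice action, mean and log_std would be the (24,) vectors from
# steps/action, steps/infos/action_mean and steps/infos/action_logstd;
# random placeholders are used here.
rng = np.random.default_rng(0)
print(diag_gaussian_log_prob(rng.normal(size=24), np.zeros(24), np.full(24, -1.0)))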

d4rl_adroit_pen/v1-human

  • Splits:

Split | Examples
:---- | -------:
'train' | 25
  • Feature structure:
FeaturesDict({
    'steps': Dataset({
        'action': Tensor(shape=(24,), dtype=float32),
        'discount': float32,
        'infos': FeaturesDict({
            'desired_orien': Tensor(shape=(4,), dtype=float32),
            'qpos': Tensor(shape=(30,), dtype=float32),
            'qvel': Tensor(shape=(30,), dtype=float32),
        }),
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'observation': Tensor(shape=(45,), dtype=float32),
        'reward': float32,
    }),
})
  • Feature documentation:
Feature | Class | Shape | Dtype | Description
:------ | :---- | :---- | :---- | :----------
 | FeaturesDict | | |
steps | Dataset | | |
steps/action | Tensor | (24,) | float32 |
steps/discount | Tensor | | float32 |
steps/infos | FeaturesDict | | |
steps/infos/desired_orien | Tensor | (4,) | float32 |
steps/infos/qpos | Tensor | (30,) | float32 |
steps/infos/qvel | Tensor | (30,) | float32 |
steps/is_first | Tensor | | bool |
steps/is_last | Tensor | | bool |
steps/is_terminal | Tensor | | bool |
steps/observation | Tensor | (45,) | float32 |
steps/reward | Tensor | | float32 |
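
With the per-step reward and the nested steps dataset, per-episode returns can be accumulated with a tf.data reduce. A minimal sketch for this config (undiscounted return, which is an assumption of the example, not something the catalog prescribes):

import tensorflow as tf
import tensorflow_datasets as tfds

episodes = tfds.load('d4rl_adroit_pen/v1-human', split='train')
for episode in episodes.take(2):
    # Sum the scalar float32 rewards over the episode's nested steps.
    episode_return = episode['steps'].reduce(
        tf.constant(0.0, tf.float32),
        lambda total, step: total + step['reward'])
    print(float(episode_return))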

d4rl_adroit_pen/v1-cloned

  • Splits:

Split | Examples
:---- | -------:
'train' | 3,755
  • Feature structure:
FeaturesDict({
    'algorithm': string,
    'policy': FeaturesDict({
        'fc0': FeaturesDict({
            'bias': Tensor(shape=(256,), dtype=float32),
            'weight': Tensor(shape=(45, 256), dtype=float32),
        }),
        'fc1': FeaturesDict({
            'bias': Tensor(shape=(256,), dtype=float32),
            'weight': Tensor(shape=(256, 256), dtype=float32),
        }),
        'last_fc': FeaturesDict({
            'bias': Tensor(shape=(24,), dtype=float32),
            'weight': Tensor(shape=(256, 24), dtype=float32),
        }),
        'nonlinearity': string,
        'output_distribution': string,
    }),
    'steps': Dataset({
        'action': Tensor(shape=(24,), dtype=float32),
        'discount': float32,
        'infos': FeaturesDict({
            'desired_orien': Tensor(shape=(4,), dtype=float32),
            'qpos': Tensor(shape=(30,), dtype=float32),
            'qvel': Tensor(shape=(30,), dtype=float32),
        }),
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'observation': Tensor(shape=(45,), dtype=float32),
        'reward': float32,
    }),
})
  • Feature documentation:
Feature | Class | Shape | Dtype | Description
:------ | :---- | :---- | :---- | :----------
 | FeaturesDict | | |
algorithm | Tensor | | string |
policy | FeaturesDict | | |
policy/fc0 | FeaturesDict | | |
policy/fc0/bias | Tensor | (256,) | float32 |
policy/fc0/weight | Tensor | (45, 256) | float32 |
policy/fc1 | FeaturesDict | | |
policy/fc1/bias | Tensor | (256,) | float32 |
policy/fc1/weight | Tensor | (256, 256) | float32 |
policy/last_fc | FeaturesDict | | |
policy/last_fc/bias | Tensor | (24,) | float32 |
policy/last_fc/weight | Tensor | (256, 24) | float32 |
policy/nonlinearity | Tensor | | string |
policy/output_distribution | Tensor | | string |
steps | Dataset | | |
steps/action | Tensor | (24,) | float32 |
steps/discount | Tensor | | float32 |
steps/infos | FeaturesDict | | |
steps/infos/desired_orien | Tensor | (4,) | float32 |
steps/infos/qpos | Tensor | (30,) | float32 |
steps/infos/qvel | Tensor | (30,) | float32 |
steps/is_first | Tensor | | bool |
steps/is_last | Tensor | | bool |
steps/is_terminal | Tensor | | bool |
steps/observation | Tensor | (45,) | float32 |
steps/reward | Tensor | | float32 |
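
v1-cloned ships the weights of the cloning policy alongside the data (fc0, fc1, last_fc, plus the nonlinearity and output distribution as strings). The sketch below rebuilds that MLP in NumPy; it assumes the weight matrices are stored (in, out), so a (45,) observation maps to a (24,) output via x @ W + b, and that 'nonlinearity' names an elementwise activation such as tanh or relu — conventions worth verifying against the stored strings before relying on the result.

import numpy as np

def policy_forward(policy, obs, nonlinearity=np.tanh):
    # Two hidden layers and a linear head, mirroring fc0 -> fc1 -> last_fc above.
    h = nonlinearity(obs @ policy['fc0']['weight'] + policy['fc0']['bias'])
    h = nonlinearity(h @ policy['fc1']['weight'] + policy['fc1']['bias'])
    return h @ policy['last_fc']['weight'] + policy['last_fc']['bias']

# Shapes from the table above; random placeholders stand in for the stored
# weights (in practice, take them from a tfds.as_numpy(...) example).
rng = np.random.default_rng(0)
policy = {
    'fc0': {'weight': rng.normal(size=(45, 256)), 'bias': np.zeros(256)},
    'fc1': {'weight': rng.normal(size=(256, 256)), 'bias': np.zeros(256)},
    'last_fc': {'weight': rng.normal(size=(256, 24)), 'bias': np.zeros(24)},
}
print(policy_forward(policy, rng.normal(size=45)).shape)  # (24,)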

d4rl_adroit_pen/v1-expert

  • Download size: 249.90 MiB

  • Dataset size: 548.47 MiB

  • Auto-cached (documentation): No

  • Splits:

Split | Examples
:---- | -------:
'train' | 5,000
  • Feature structure:
FeaturesDict({
    'algorithm': string,
    'policy': FeaturesDict({
        'fc0': FeaturesDict({
            'bias': Tensor(shape=(64,), dtype=float32),
            'weight': Tensor(shape=(64, 45), dtype=float32),
        }),
        'fc1': FeaturesDict({
            'bias': Tensor(shape=(64,), dtype=float32),
            'weight': Tensor(shape=(64, 64), dtype=float32),
        }),
        'last_fc': FeaturesDict({
            'bias': Tensor(shape=(24,), dtype=float32),
            'weight': Tensor(shape=(24, 64), dtype=float32),
        }),
        'last_fc_log_std': FeaturesDict({
            'bias': Tensor(shape=(24,), dtype=float32),
            'weight': Tensor(shape=(24, 64), dtype=float32),
        }),
        'nonlinearity': string,
        'output_distribution': string,
    }),
    'steps': Dataset({
        'action': Tensor(shape=(24,), dtype=float32),
        'discount': float32,
        'infos': FeaturesDict({
            'action_log_std': Tensor(shape=(24,), dtype=float32),
            'action_mean': Tensor(shape=(24,), dtype=float32),
            'desired_orien': Tensor(shape=(4,), dtype=float32),
            'qpos': Tensor(shape=(30,), dtype=float32),
            'qvel': Tensor(shape=(30,), dtype=float32),
        }),
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'observation': Tensor(shape=(45,), dtype=float32),
        'reward': float32,
    }),
})
  • Feature documentation:
Feature | Class | Shape | Dtype | Description
:------ | :---- | :---- | :---- | :----------
 | FeaturesDict | | |
algorithm | Tensor | | string |
policy | FeaturesDict | | |
policy/fc0 | FeaturesDict | | |
policy/fc0/bias | Tensor | (64,) | float32 |
policy/fc0/weight | Tensor | (64, 45) | float32 |
policy/fc1 | FeaturesDict | | |
policy/fc1/bias | Tensor | (64,) | float32 |
policy/fc1/weight | Tensor | (64, 64) | float32 |
policy/last_fc | FeaturesDict | | |
policy/last_fc/bias | Tensor | (24,) | float32 |
policy/last_fc/weight | Tensor | (24, 64) | float32 |
policy/last_fc_log_std | FeaturesDict | | |
policy/last_fc_log_std/bias | Tensor | (24,) | float32 |
policy/last_fc_log_std/weight | Tensor | (24, 64) | float32 |
policy/nonlinearity | Tensor | | string |
policy/output_distribution | Tensor | | string |
steps | Dataset | | |
steps/action | Tensor | (24,) | float32 |
steps/discount | Tensor | | float32 |
steps/infos | FeaturesDict | | |
steps/infos/action_log_std | Tensor | (24,) | float32 |
steps/infos/action_mean | Tensor | (24,) | float32 |
steps/infos/desired_orien | Tensor | (4,) | float32 |
steps/infos/qpos | Tensor | (30,) | float32 |
steps/infos/qvel | Tensor | (30,) | float32 |
steps/is_first | Tensor | | bool |
steps/is_last | Tensor | | bool |
steps/is_terminal | Tensor | | bool |
steps/observation | Tensor | (45,) | float32 |
steps/reward | Tensor | | float32 |
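
For training on the expert data, a typical first step is to flatten the episodes into batched (observation, action) pairs. The pipeline below is one way to do that with tf.data; the batch size and shuffle buffer are arbitrary choices for the example, not part of the dataset.

import tensorflow as tf
import tensorflow_datasets as tfds

episodes = tfds.load('d4rl_adroit_pen/v1-expert', split='train')
pairs = (
    episodes
    .flat_map(lambda e: e['steps'])  # one flat stream of steps
    .map(lambda step: (step['observation'], step['action']),
         num_parallel_calls=tf.data.AUTOTUNE)
    .shuffle(10_000)
    .batch(256)
    .prefetch(tf.data.AUTOTUNE)
)
for obs, act in pairs.take(1):
    print(obs.shape, act.shape)  # (256, 45) (256, 24)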