d4rl_mujoco_halfcheetah

  • Description:

D4RL is an open-source benchmark for offline reinforcement learning. It provides standardized environments and datasets for training and benchmarking algorithms.

The datasets follow the RLDS format to represent steps and episodes.
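
For example, the dataset can be read with the tensorflow_datasets API; each record is one episode whose 'steps' field is a nested dataset of transitions. A minimal sketch (the config name and take() counts are arbitrary choices):

import tensorflow_datasets as tfds

# Load the default config; any other config listed below works the same way.
ds = tfds.load('d4rl_mujoco_halfcheetah/v0-expert', split='train')

for episode in ds.take(1):
    # 'steps' is itself a tf.data.Dataset of per-timestep features.
    for step in episode['steps'].take(3):
        print(step['observation'].shape,   # (17,)
              step['action'].shape,        # (6,)
              step['reward'].numpy(),
              step['is_terminal'].numpy())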

@misc{fu2020d4rl,
    title={D4RL: Datasets for Deep Data-Driven Reinforcement Learning},
    author={Justin Fu and Aviral Kumar and Ofir Nachum and George Tucker and Sergey Levine},
    year={2020},
    eprint={2004.07219},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

d4rl_mujoco_halfcheetah/v0-expert (default config)

  • Download size: 83.44 MiB

  • Dataset size: 98.43 MiB

  • Auto-cached (documentation): Yes

  • Splits:

Split   | Examples
:------ | -------:
'train' | 1,002
  • Feature structure:
FeaturesDict({
    'steps': Dataset({
        'action': Tensor(shape=(6,), dtype=float32),
        'discount': float32,
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'observation': Tensor(shape=(17,), dtype=float32),
        'reward': float32,
    }),
})
  • Feature documentation:
Feature           | Class        | Shape | Dtype   | Description
:---------------- | :----------- | :---- | :------ | :----------
                  | FeaturesDict |       |         |
steps             | Dataset      |       |         |
steps/action      | Tensor       | (6,)  | float32 |
steps/discount    | Tensor       |       | float32 |
steps/is_first    | Tensor       |       | bool    |
steps/is_last     | Tensor       |       | bool    |
steps/is_terminal | Tensor       |       | bool    |
steps/observation | Tensor       | (17,) | float32 |
steps/reward      | Tensor       |       | float32 |
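
For training, the episode-level dataset is typically flattened into individual steps with standard tf.data operations; a sketch, assuming the default config (shuffle buffer and batch size are arbitrary choices):

import tensorflow_datasets as tfds

ds = tfds.load('d4rl_mujoco_halfcheetah/v0-expert', split='train')

# Flatten episodes into a stream of single steps, then shuffle and batch
# them for an offline RL learner.
steps = ds.flat_map(lambda episode: episode['steps'])
batches = steps.shuffle(10_000).batch(256)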

d4rl_mujoco_halfcheetah/v0-medium

  • Download size: 82.92 MiB

  • Dataset size: 98.43 MiB

  • Auto-cached (documentation): Yes

  • Splits:

Split   | Examples
:------ | -------:
'train' | 1,002
  • Feature structure:
FeaturesDict({
    'steps': Dataset({
        'action': Tensor(shape=(6,), dtype=float32),
        'discount': float32,
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'observation': Tensor(shape=(17,), dtype=float32),
        'reward': float32,
    }),
})
  • Feature documentation:
Feature           | Class        | Shape | Dtype   | Description
:---------------- | :----------- | :---- | :------ | :----------
                  | FeaturesDict |       |         |
steps             | Dataset      |       |         |
steps/action      | Tensor       | (6,)  | float32 |
steps/discount    | Tensor       |       | float32 |
steps/is_first    | Tensor       |       | bool    |
steps/is_last     | Tensor       |       | bool    |
steps/is_terminal | Tensor       |       | bool    |
steps/observation | Tensor       | (17,) | float32 |
steps/reward      | Tensor       |       | float32 |

d4rl_mujoco_halfcheetah/v0-medium-expert

  • Download size: 166.36 MiB

  • Dataset size: 196.86 MiB

  • Auto-cached (documentation): Only when shuffle_files=False (train)

  • Splits:

Split   | Examples
:------ | -------:
'train' | 2,004
  • Feature structure:
FeaturesDict({
    'steps': Dataset({
        'action': Tensor(shape=(6,), dtype=float32),
        'discount': float32,
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'observation': Tensor(shape=(17,), dtype=float32),
        'reward': float32,
    }),
})
  • Feature documentation:
Feature           | Class        | Shape | Dtype   | Description
:---------------- | :----------- | :---- | :------ | :----------
                  | FeaturesDict |       |         |
steps             | Dataset      |       |         |
steps/action      | Tensor       | (6,)  | float32 |
steps/discount    | Tensor       |       | float32 |
steps/is_first    | Tensor       |       | bool    |
steps/is_last     | Tensor       |       | bool    |
steps/is_terminal | Tensor       |       | bool    |
steps/observation | Tensor       | (17,) | float32 |
steps/reward      | Tensor       |       | float32 |

d4rl_mujoco_halfcheetah/v0-mixed

  • Download size: 8.60 MiB

  • Dataset size: 9.94 MiB

  • Auto-cached (documentation): Yes

  • Splits:

Split   | Examples
:------ | -------:
'train' | 101
  • Feature structure:
FeaturesDict({
    'steps': Dataset({
        'action': Tensor(shape=(6,), dtype=float32),
        'discount': float32,
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'observation': Tensor(shape=(17,), dtype=float32),
        'reward': float32,
    }),
})
  • Feature documentation:
Feature           | Class        | Shape | Dtype   | Description
:---------------- | :----------- | :---- | :------ | :----------
                  | FeaturesDict |       |         |
steps             | Dataset      |       |         |
steps/action      | Tensor       | (6,)  | float32 |
steps/discount    | Tensor       |       | float32 |
steps/is_first    | Tensor       |       | bool    |
steps/is_last     | Tensor       |       | bool    |
steps/is_terminal | Tensor       |       | bool    |
steps/observation | Tensor       | (17,) | float32 |
steps/reward      | Tensor       |       | float32 |

d4rl_mujoco_halfcheetah/v0-random

  • Download size: 84.79 MiB

  • Dataset size: 98.43 MiB

  • Auto-cached (documentation): Yes

  • Splits:

Split   | Examples
:------ | -------:
'train' | 1,002
  • Feature structure:
FeaturesDict({
    'steps': Dataset({
        'action': Tensor(shape=(6,), dtype=float32),
        'discount': float32,
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'observation': Tensor(shape=(17,), dtype=float32),
        'reward': float32,
    }),
})
  • Feature documentation:
Feature           | Class        | Shape | Dtype   | Description
:---------------- | :----------- | :---- | :------ | :----------
                  | FeaturesDict |       |         |
steps             | Dataset      |       |         |
steps/action      | Tensor       | (6,)  | float32 |
steps/discount    | Tensor       |       | float32 |
steps/is_first    | Tensor       |       | bool    |
steps/is_last     | Tensor       |       | bool    |
steps/is_terminal | Tensor       |       | bool    |
steps/observation | Tensor       | (17,) | float32 |
steps/reward      | Tensor       |       | float32 |

d4rl_mujoco_halfcheetah/v1-expert

  • Download size: 146.94 MiB

  • Dataset size: 451.88 MiB

  • Auto-cached (documentation): No

  • Splits:

Split   | Examples
:------ | -------:
'train' | 1,000
  • Feature structure:
FeaturesDict({
    'algorithm': string,
    'iteration': int32,
    'policy': FeaturesDict({
        'fc0': FeaturesDict({
            'bias': Tensor(shape=(256,), dtype=float32),
            'weight': Tensor(shape=(256, 17), dtype=float32),
        }),
        'fc1': FeaturesDict({
            'bias': Tensor(shape=(256,), dtype=float32),
            'weight': Tensor(shape=(256, 256), dtype=float32),
        }),
        'last_fc': FeaturesDict({
            'bias': Tensor(shape=(6,), dtype=float32),
            'weight': Tensor(shape=(6, 256), dtype=float32),
        }),
        'last_fc_log_std': FeaturesDict({
            'bias': Tensor(shape=(6,), dtype=float32),
            'weight': Tensor(shape=(6, 256), dtype=float32),
        }),
        'nonlinearity': string,
        'output_distribution': string,
    }),
    'steps': Dataset({
        'action': Tensor(shape=(6,), dtype=float32),
        'discount': float32,
        'infos': FeaturesDict({
            'action_log_probs': float32,
            'qpos': Tensor(shape=(9,), dtype=float32),
            'qvel': Tensor(shape=(9,), dtype=float32),
        }),
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'observation': Tensor(shape=(17,), dtype=float32),
        'reward': float32,
    }),
})
  • Feature documentation:
Feature                       | Class        | Shape      | Dtype   | Description
:---------------------------- | :----------- | :--------- | :------ | :----------
                              | FeaturesDict |            |         |
algorithm                     | Tensor       |            | string  |
iteration                     | Tensor       |            | int32   |
policy                        | FeaturesDict |            |         |
policy/fc0                    | FeaturesDict |            |         |
policy/fc0/bias               | Tensor       | (256,)     | float32 |
policy/fc0/weight             | Tensor       | (256, 17)  | float32 |
policy/fc1                    | FeaturesDict |            |         |
policy/fc1/bias               | Tensor       | (256,)     | float32 |
policy/fc1/weight             | Tensor       | (256, 256) | float32 |
policy/last_fc                | FeaturesDict |            |         |
policy/last_fc/bias           | Tensor       | (6,)       | float32 |
policy/last_fc/weight         | Tensor       | (6, 256)   | float32 |
policy/last_fc_log_std        | FeaturesDict |            |         |
policy/last_fc_log_std/bias   | Tensor       | (6,)       | float32 |
policy/last_fc_log_std/weight | Tensor       | (6, 256)   | float32 |
policy/nonlinearity           | Tensor       |            | string  |
policy/output_distribution    | Tensor       |            | string  |
steps                         | Dataset      |            |         |
steps/action                  | Tensor       | (6,)       | float32 |
steps/discount                | Tensor       |            | float32 |
steps/infos                   | FeaturesDict |            |         |
steps/infos/action_log_probs  | Tensor       |            | float32 |
steps/infos/qpos              | Tensor       | (9,)       | float32 |
steps/infos/qvel              | Tensor       | (9,)       | float32 |
steps/is_first                | Tensor       |            | bool    |
steps/is_last                 | Tensor       |            | bool    |
steps/is_terminal             | Tensor       |            | bool    |
steps/observation             | Tensor       | (17,)      | float32 |
steps/reward                  | Tensor       |            | float32 |
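
This config also stores the weights of the policy that generated the data (the 'policy' feature). As a rough illustration only, those tensors could be turned back into an action function as below; the ReLU hidden activation, the tanh squashing, and the (out_features, in_features) weight layout are assumptions inferred from the shapes and the nonlinearity/output_distribution fields, not something this page guarantees:

import numpy as np

def policy_mean_action(policy, observation):
    # policy: the nested 'policy' dict from one episode, as NumPy arrays.
    # observation: a single 17-dimensional observation.
    h = np.maximum(policy['fc0']['weight'] @ observation + policy['fc0']['bias'], 0.0)
    h = np.maximum(policy['fc1']['weight'] @ h + policy['fc1']['bias'], 0.0)
    mean = policy['last_fc']['weight'] @ h + policy['last_fc']['bias']
    # last_fc_log_std would parameterize the stochastic policy's spread;
    # for a deterministic rollout only the squashed mean is used here.
    return np.tanh(mean)  # 6-dimensional action in [-1, 1]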

d4rl_mujoco_halfcheetah/v1-medium

  • Download size: 146.65 MiB

  • Dataset size: 451.88 MiB

  • Auto-cached (documentation): No

  • Splits:

Split   | Examples
:------ | -------:
'train' | 1,000
  • Feature structure:
FeaturesDict({
    'algorithm': string,
    'iteration': int32,
    'policy': FeaturesDict({
        'fc0': FeaturesDict({
            'bias': Tensor(shape=(256,), dtype=float32),
            'weight': Tensor(shape=(256, 17), dtype=float32),
        }),
        'fc1': FeaturesDict({
            'bias': Tensor(shape=(256,), dtype=float32),
            'weight': Tensor(shape=(256, 256), dtype=float32),
        }),
        'last_fc': FeaturesDict({
            'bias': Tensor(shape=(6,), dtype=float32),
            'weight': Tensor(shape=(6, 256), dtype=float32),
        }),
        'last_fc_log_std': FeaturesDict({
            'bias': Tensor(shape=(6,), dtype=float32),
            'weight': Tensor(shape=(6, 256), dtype=float32),
        }),
        'nonlinearity': string,
        'output_distribution': string,
    }),
    'steps': Dataset({
        'action': Tensor(shape=(6,), dtype=float32),
        'discount': float32,
        'infos': FeaturesDict({
            'action_log_probs': float32,
            'qpos': Tensor(shape=(9,), dtype=float32),
            'qvel': Tensor(shape=(9,), dtype=float32),
        }),
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'observation': Tensor(shape=(17,), dtype=float32),
        'reward': float32,
    }),
})
  • Feature documentation:
Feature                       | Class        | Shape      | Dtype   | Description
:---------------------------- | :----------- | :--------- | :------ | :----------
                              | FeaturesDict |            |         |
algorithm                     | Tensor       |            | string  |
iteration                     | Tensor       |            | int32   |
policy                        | FeaturesDict |            |         |
policy/fc0                    | FeaturesDict |            |         |
policy/fc0/bias               | Tensor       | (256,)     | float32 |
policy/fc0/weight             | Tensor       | (256, 17)  | float32 |
policy/fc1                    | FeaturesDict |            |         |
policy/fc1/bias               | Tensor       | (256,)     | float32 |
policy/fc1/weight             | Tensor       | (256, 256) | float32 |
policy/last_fc                | FeaturesDict |            |         |
policy/last_fc/bias           | Tensor       | (6,)       | float32 |
policy/last_fc/weight         | Tensor       | (6, 256)   | float32 |
policy/last_fc_log_std        | FeaturesDict |            |         |
policy/last_fc_log_std/bias   | Tensor       | (6,)       | float32 |
policy/last_fc_log_std/weight | Tensor       | (6, 256)   | float32 |
policy/nonlinearity           | Tensor       |            | string  |
policy/output_distribution    | Tensor       |            | string  |
steps                         | Dataset      |            |         |
steps/action                  | Tensor       | (6,)       | float32 |
steps/discount                | Tensor       |            | float32 |
steps/infos                   | FeaturesDict |            |         |
steps/infos/action_log_probs  | Tensor       |            | float32 |
steps/infos/qpos              | Tensor       | (9,)       | float32 |
steps/infos/qvel              | Tensor       | (9,)       | float32 |
steps/is_first                | Tensor       |            | bool    |
steps/is_last                 | Tensor       |            | bool    |
steps/is_terminal             | Tensor       |            | bool    |
steps/observation             | Tensor       | (17,)      | float32 |
steps/reward                  | Tensor       |            | float32 |

d4rl_mujoco_halfcheetah/v1-medium-expert

  • Download size: 293.00 MiB

  • Dataset size: 342.37 MiB

  • Auto-cached (documentation): No

  • Splits:

Split   | Examples
:------ | -------:
'train' | 2,000
  • Feature structure:
FeaturesDict({
    'steps': Dataset({
        'action': Tensor(shape=(6,), dtype=float32),
        'discount': float32,
        'infos': FeaturesDict({
            'action_log_probs': float32,
            'qpos': Tensor(shape=(9,), dtype=float32),
            'qvel': Tensor(shape=(9,), dtype=float32),
        }),
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'observation': Tensor(shape=(17,), dtype=float32),
        'reward': float32,
    }),
})
  • Feature documentation:
Feature                      | Class        | Shape | Dtype   | Description
:--------------------------- | :----------- | :---- | :------ | :----------
                             | FeaturesDict |       |         |
steps                        | Dataset      |       |         |
steps/action                 | Tensor       | (6,)  | float32 |
steps/discount               | Tensor       |       | float32 |
steps/infos                  | FeaturesDict |       |         |
steps/infos/action_log_probs | Tensor       |       | float32 |
steps/infos/qpos             | Tensor       | (9,)  | float32 |
steps/infos/qvel             | Tensor       | (9,)  | float32 |
steps/is_first               | Tensor       |       | bool    |
steps/is_last                | Tensor       |       | bool    |
steps/is_terminal            | Tensor       |       | bool    |
steps/observation            | Tensor       | (17,) | float32 |
steps/reward                 | Tensor       |       | float32 |

d4rl_mujoco_halfcheetah/v1-medium-replay

  • Download size: 57.68 MiB

  • Dataset size: 34.59 MiB

  • Auto-cached (documentation): Yes

  • Splits:

Split   | Examples
:------ | -------:
'train' | 202
  • Feature structure:
FeaturesDict({
    'algorithm': string,
    'iteration': int32,
    'steps': Dataset({
        'action': Tensor(shape=(6,), dtype=float64),
        'discount': float64,
        'infos': FeaturesDict({
            'action_log_probs': float64,
            'qpos': Tensor(shape=(9,), dtype=float64),
            'qvel': Tensor(shape=(9,), dtype=float64),
        }),
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'observation': Tensor(shape=(17,), dtype=float64),
        'reward': float64,
    }),
})
  • Feature documentation:
Feature                      | Class        | Shape | Dtype   | Description
:--------------------------- | :----------- | :---- | :------ | :----------
                             | FeaturesDict |       |         |
algorithm                    | Tensor       |       | string  |
iteration                    | Tensor       |       | int32   |
steps                        | Dataset      |       |         |
steps/action                 | Tensor       | (6,)  | float64 |
steps/discount               | Tensor       |       | float64 |
steps/infos                  | FeaturesDict |       |         |
steps/infos/action_log_probs | Tensor       |       | float64 |
steps/infos/qpos             | Tensor       | (9,)  | float64 |
steps/infos/qvel             | Tensor       | (9,)  | float64 |
steps/is_first               | Tensor       |       | bool    |
steps/is_last                | Tensor       |       | bool    |
steps/is_terminal            | Tensor       |       | bool    |
steps/observation            | Tensor       | (17,) | float64 |
steps/reward                 | Tensor       |       | float64 |
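
Note that this config stores its steps as float64 (as does v1-full-replay), whereas most other configs use float32. When mixing configs in one input pipeline, the wider dtypes can be cast down; a sketch (the cast_to_float32 helper name is illustrative):

import tensorflow as tf
import tensorflow_datasets as tfds

ds = tfds.load('d4rl_mujoco_halfcheetah/v1-medium-replay', split='train')
steps = ds.flat_map(lambda episode: episode['steps'])

def cast_to_float32(step):
    # Recursively cast every float64 tensor (including those under 'infos').
    return tf.nest.map_structure(
        lambda t: tf.cast(t, tf.float32) if t.dtype == tf.float64 else t, step)

steps = steps.map(cast_to_float32)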

d4rl_mujoco_halfcheetah/v1-full-replay

  • Download size: 285.01 MiB

  • Dataset size: 171.22 MiB

  • Auto-cached (documentation): Only when shuffle_files=False (train)

  • Splits:

Split   | Examples
:------ | -------:
'train' | 1,000
  • Feature structure:
FeaturesDict({
    'algorithm': string,
    'iteration': int32,
    'steps': Dataset({
        'action': Tensor(shape=(6,), dtype=float64),
        'discount': float64,
        'infos': FeaturesDict({
            'action_log_probs': float64,
            'qpos': Tensor(shape=(9,), dtype=float64),
            'qvel': Tensor(shape=(9,), dtype=float64),
        }),
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'observation': Tensor(shape=(17,), dtype=float64),
        'reward': float64,
    }),
})
  • Feature documentation:
Feature                      | Class        | Shape | Dtype   | Description
:--------------------------- | :----------- | :---- | :------ | :----------
                             | FeaturesDict |       |         |
algorithm                    | Tensor       |       | string  |
iteration                    | Tensor       |       | int32   |
steps                        | Dataset      |       |         |
steps/action                 | Tensor       | (6,)  | float64 |
steps/discount               | Tensor       |       | float64 |
steps/infos                  | FeaturesDict |       |         |
steps/infos/action_log_probs | Tensor       |       | float64 |
steps/infos/qpos             | Tensor       | (9,)  | float64 |
steps/infos/qvel             | Tensor       | (9,)  | float64 |
steps/is_first               | Tensor       |       | bool    |
steps/is_last                | Tensor       |       | bool    |
steps/is_terminal            | Tensor       |       | bool    |
steps/observation            | Tensor       | (17,) | float64 |
steps/reward                 | Tensor       |       | float64 |

d4rl_mujoco_halfcheetah/v1-random

  • Download size: 145.19 MiB

  • Dataset size: 171.18 MiB

  • Auto-cached (documentation): Only when shuffle_files=False (train)

  • Splits:

Split   | Examples
:------ | -------:
'train' | 1,000
  • Feature structure:
FeaturesDict({
    'steps': Dataset({
        'action': Tensor(shape=(6,), dtype=float32),
        'discount': float32,
        'infos': FeaturesDict({
            'action_log_probs': float32,
            'qpos': Tensor(shape=(9,), dtype=float32),
            'qvel': Tensor(shape=(9,), dtype=float32),
        }),
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'observation': Tensor(shape=(17,), dtype=float32),
        'reward': float32,
    }),
})
  • Feature documentation:
Feature                      | Class        | Shape | Dtype   | Description
:--------------------------- | :----------- | :---- | :------ | :----------
                             | FeaturesDict |       |         |
steps                        | Dataset      |       |         |
steps/action                 | Tensor       | (6,)  | float32 |
steps/discount               | Tensor       |       | float32 |
steps/infos                  | FeaturesDict |       |         |
steps/infos/action_log_probs | Tensor       |       | float32 |
steps/infos/qpos             | Tensor       | (9,)  | float32 |
steps/infos/qvel             | Tensor       | (9,)  | float32 |
steps/is_first               | Tensor       |       | bool    |
steps/is_last                | Tensor       |       | bool    |
steps/is_terminal            | Tensor       |       | bool    |
steps/observation            | Tensor       | (17,) | float32 |
steps/reward                 | Tensor       |       | float32 |

d4rl_mujoco_halfcheetah/v2-expert

  • Download size: 226.46 MiB

  • Dataset size: 451.88 MiB

  • Auto-cached (documentation): No

  • Splits:

Split   | Examples
:------ | -------:
'train' | 1,000
  • Feature structure:
FeaturesDict({
    'algorithm': string,
    'iteration': int32,
    'policy': FeaturesDict({
        'fc0': FeaturesDict({
            'bias': Tensor(shape=(256,), dtype=float32),
            'weight': Tensor(shape=(256, 17), dtype=float32),
        }),
        'fc1': FeaturesDict({
            'bias': Tensor(shape=(256,), dtype=float32),
            'weight': Tensor(shape=(256, 256), dtype=float32),
        }),
        'last_fc': FeaturesDict({
            'bias': Tensor(shape=(6,), dtype=float32),
            'weight': Tensor(shape=(6, 256), dtype=float32),
        }),
        'last_fc_log_std': FeaturesDict({
            'bias': Tensor(shape=(6,), dtype=float32),
            'weight': Tensor(shape=(6, 256), dtype=float32),
        }),
        'nonlinearity': string,
        'output_distribution': string,
    }),
    'steps': Dataset({
        'action': Tensor(shape=(6,), dtype=float32),
        'discount': float32,
        'infos': FeaturesDict({
            'action_log_probs': float64,
            'qpos': Tensor(shape=(9,), dtype=float64),
            'qvel': Tensor(shape=(9,), dtype=float64),
        }),
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'observation': Tensor(shape=(17,), dtype=float32),
        'reward': float32,
    }),
})
  • Feature documentation:
Feature                       | Class        | Shape      | Dtype   | Description
:---------------------------- | :----------- | :--------- | :------ | :----------
                              | FeaturesDict |            |         |
algorithm                     | Tensor       |            | string  |
iteration                     | Tensor       |            | int32   |
policy                        | FeaturesDict |            |         |
policy/fc0                    | FeaturesDict |            |         |
policy/fc0/bias               | Tensor       | (256,)     | float32 |
policy/fc0/weight             | Tensor       | (256, 17)  | float32 |
policy/fc1                    | FeaturesDict |            |         |
policy/fc1/bias               | Tensor       | (256,)     | float32 |
policy/fc1/weight             | Tensor       | (256, 256) | float32 |
policy/last_fc                | FeaturesDict |            |         |
policy/last_fc/bias           | Tensor       | (6,)       | float32 |
policy/last_fc/weight         | Tensor       | (6, 256)   | float32 |
policy/last_fc_log_std        | FeaturesDict |            |         |
policy/last_fc_log_std/bias   | Tensor       | (6,)       | float32 |
policy/last_fc_log_std/weight | Tensor       | (6, 256)   | float32 |
policy/nonlinearity           | Tensor       |            | string  |
policy/output_distribution    | Tensor       |            | string  |
steps                         | Dataset      |            |         |
steps/action                  | Tensor       | (6,)       | float32 |
steps/discount                | Tensor       |            | float32 |
steps/infos                   | FeaturesDict |            |         |
steps/infos/action_log_probs  | Tensor       |            | float64 |
steps/infos/qpos              | Tensor       | (9,)       | float64 |
steps/infos/qvel              | Tensor       | (9,)       | float64 |
steps/is_first                | Tensor       |            | bool    |
steps/is_last                 | Tensor       |            | bool    |
steps/is_terminal             | Tensor       |            | bool    |
steps/observation             | Tensor       | (17,)      | float32 |
steps/reward                  | Tensor       |            | float32 |

d4rl_mujoco_halfcheetah/v2-full-replay

  • Download size: 277.88 MiB

  • Dataset size: 171.22 MiB

  • Auto-cached (documentation): Only when shuffle_files=False (train)

  • Splits:

Split   | Examples
:------ | -------:
'train' | 1,000
  • Feature structure:
FeaturesDict({
    'algorithm': string,
    'iteration': int32,
    'steps': Dataset({
        'action': Tensor(shape=(6,), dtype=float32),
        'discount': float32,
        'infos': FeaturesDict({
            'action_log_probs': float64,
            'qpos': Tensor(shape=(9,), dtype=float64),
            'qvel': Tensor(shape=(9,), dtype=float64),
        }),
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'observation': Tensor(shape=(17,), dtype=float32),
        'reward': float32,
    }),
})
  • Feature documentation:
Feature                      | Class        | Shape | Dtype   | Description
:--------------------------- | :----------- | :---- | :------ | :----------
                             | FeaturesDict |       |         |
algorithm                    | Tensor       |       | string  |
iteration                    | Tensor       |       | int32   |
steps                        | Dataset      |       |         |
steps/action                 | Tensor       | (6,)  | float32 |
steps/discount               | Tensor       |       | float32 |
steps/infos                  | FeaturesDict |       |         |
steps/infos/action_log_probs | Tensor       |       | float64 |
steps/infos/qpos             | Tensor       | (9,)  | float64 |
steps/infos/qvel             | Tensor       | (9,)  | float64 |
steps/is_first               | Tensor       |       | bool    |
steps/is_last                | Tensor       |       | bool    |
steps/is_terminal            | Tensor       |       | bool    |
steps/observation            | Tensor       | (17,) | float32 |
steps/reward                 | Tensor       |       | float32 |

d4rl_mujoco_halfcheetah/v2-medium

  • Download size: 226.71 MiB

  • Dataset size: 451.88 MiB

  • Auto-cached (documentation): No

  • Splits:

Split   | Examples
:------ | -------:
'train' | 1,000
  • Feature structure:
FeaturesDict({
    'algorithm': string,
    'iteration': int32,
    'policy': FeaturesDict({
        'fc0': FeaturesDict({
            'bias': Tensor(shape=(256,), dtype=float32),
            'weight': Tensor(shape=(256, 17), dtype=float32),
        }),
        'fc1': FeaturesDict({
            'bias': Tensor(shape=(256,), dtype=float32),
            'weight': Tensor(shape=(256, 256), dtype=float32),
        }),
        'last_fc': FeaturesDict({
            'bias': Tensor(shape=(6,), dtype=float32),
            'weight': Tensor(shape=(6, 256), dtype=float32),
        }),
        'last_fc_log_std': FeaturesDict({
            'bias': Tensor(shape=(6,), dtype=float32),
            'weight': Tensor(shape=(6, 256), dtype=float32),
        }),
        'nonlinearity': string,
        'output_distribution': string,
    }),
    'steps': Dataset({
        'action': Tensor(shape=(6,), dtype=float32),
        'discount': float32,
        'infos': FeaturesDict({
            'action_log_probs': float64,
            'qpos': Tensor(shape=(9,), dtype=float64),
            'qvel': Tensor(shape=(9,), dtype=float64),
        }),
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'observation': Tensor(shape=(17,), dtype=float32),
        'reward': float32,
    }),
})
  • Feature documentation:
Feature                       | Class        | Shape      | Dtype   | Description
:---------------------------- | :----------- | :--------- | :------ | :----------
                              | FeaturesDict |            |         |
algorithm                     | Tensor       |            | string  |
iteration                     | Tensor       |            | int32   |
policy                        | FeaturesDict |            |         |
policy/fc0                    | FeaturesDict |            |         |
policy/fc0/bias               | Tensor       | (256,)     | float32 |
policy/fc0/weight             | Tensor       | (256, 17)  | float32 |
policy/fc1                    | FeaturesDict |            |         |
policy/fc1/bias               | Tensor       | (256,)     | float32 |
policy/fc1/weight             | Tensor       | (256, 256) | float32 |
policy/last_fc                | FeaturesDict |            |         |
policy/last_fc/bias           | Tensor       | (6,)       | float32 |
policy/last_fc/weight         | Tensor       | (6, 256)   | float32 |
policy/last_fc_log_std        | FeaturesDict |            |         |
policy/last_fc_log_std/bias   | Tensor       | (6,)       | float32 |
policy/last_fc_log_std/weight | Tensor       | (6, 256)   | float32 |
policy/nonlinearity           | Tensor       |            | string  |
policy/output_distribution    | Tensor       |            | string  |
steps                         | Dataset      |            |         |
steps/action                  | Tensor       | (6,)       | float32 |
steps/discount                | Tensor       |            | float32 |
steps/infos                   | FeaturesDict |            |         |
steps/infos/action_log_probs  | Tensor       |            | float64 |
steps/infos/qpos              | Tensor       | (9,)       | float64 |
steps/infos/qvel              | Tensor       | (9,)       | float64 |
steps/is_first                | Tensor       |            | bool    |
steps/is_last                 | Tensor       |            | bool    |
steps/is_terminal             | Tensor       |            | bool    |
steps/observation             | Tensor       | (17,)      | float32 |
steps/reward                  | Tensor       |            | float32 |

d4rl_mujoco_halfcheetah/v2-medium-expert

  • Download size: 452.58 MiB

  • Dataset size: 342.37 MiB

  • Auto-cached (documentation): No

  • Splits:

Split   | Examples
:------ | -------:
'train' | 2,000
  • Feature structure:
FeaturesDict({
    'steps': Dataset({
        'action': Tensor(shape=(6,), dtype=float32),
        'discount': float32,
        'infos': FeaturesDict({
            'action_log_probs': float64,
            'qpos': Tensor(shape=(9,), dtype=float64),
            'qvel': Tensor(shape=(9,), dtype=float64),
        }),
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'observation': Tensor(shape=(17,), dtype=float32),
        'reward': float32,
    }),
})
  • Feature documentation:
Feature                      | Class        | Shape | Dtype   | Description
:--------------------------- | :----------- | :---- | :------ | :----------
                             | FeaturesDict |       |         |
steps                        | Dataset      |       |         |
steps/action                 | Tensor       | (6,)  | float32 |
steps/discount               | Tensor       |       | float32 |
steps/infos                  | FeaturesDict |       |         |
steps/infos/action_log_probs | Tensor       |       | float64 |
steps/infos/qpos             | Tensor       | (9,)  | float64 |
steps/infos/qvel             | Tensor       | (9,)  | float64 |
steps/is_first               | Tensor       |       | bool    |
steps/is_last                | Tensor       |       | bool    |
steps/is_terminal            | Tensor       |       | bool    |
steps/observation            | Tensor       | (17,) | float32 |
steps/reward                 | Tensor       |       | float32 |

d4rl_mujoco_halfcheetah/v2-medium-replay

  • Download size: 56.69 MiB

  • Dataset size: 34.59 MiB

  • Auto-cached (documentation): Yes

  • Splits:

Split   | Examples
:------ | -------:
'train' | 202
  • Feature structure:
FeaturesDict({
    'algorithm': string,
    'iteration': int32,
    'steps': Dataset({
        'action': Tensor(shape=(6,), dtype=float32),
        'discount': float32,
        'infos': FeaturesDict({
            'action_log_probs': float64,
            'qpos': Tensor(shape=(9,), dtype=float64),
            'qvel': Tensor(shape=(9,), dtype=float64),
        }),
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'observation': Tensor(shape=(17,), dtype=float32),
        'reward': float32,
    }),
})
  • Feature documentation:
Feature                      | Class        | Shape | Dtype   | Description
:--------------------------- | :----------- | :---- | :------ | :----------
                             | FeaturesDict |       |         |
algorithm                    | Tensor       |       | string  |
iteration                    | Tensor       |       | int32   |
steps                        | Dataset      |       |         |
steps/action                 | Tensor       | (6,)  | float32 |
steps/discount               | Tensor       |       | float32 |
steps/infos                  | FeaturesDict |       |         |
steps/infos/action_log_probs | Tensor       |       | float64 |
steps/infos/qpos             | Tensor       | (9,)  | float64 |
steps/infos/qvel             | Tensor       | (9,)  | float64 |
steps/is_first               | Tensor       |       | bool    |
steps/is_last                | Tensor       |       | bool    |
steps/is_terminal            | Tensor       |       | bool    |
steps/observation            | Tensor       | (17,) | float32 |
steps/reward                 | Tensor       |       | float32 |

d4rl_mujoco_halfcheetah/v2-random

  • Download size: 226.34 MiB

  • Dataset size: 171.18 MiB

  • Auto-cached (documentation): Only when shuffle_files=False (train)

  • Splits:

Split   | Examples
:------ | -------:
'train' | 1,000
  • Feature structure:
FeaturesDict({
    'steps': Dataset({
        'action': Tensor(shape=(6,), dtype=float32),
        'discount': float32,
        'infos': FeaturesDict({
            'action_log_probs': float64,
            'qpos': Tensor(shape=(9,), dtype=float64),
            'qvel': Tensor(shape=(9,), dtype=float64),
        }),
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'observation': Tensor(shape=(17,), dtype=float32),
        'reward': float32,
    }),
})
  • Feature documentation:
Feature                      | Class        | Shape | Dtype   | Description
:--------------------------- | :----------- | :---- | :------ | :----------
                             | FeaturesDict |       |         |
steps                        | Dataset      |       |         |
steps/action                 | Tensor       | (6,)  | float32 |
steps/discount               | Tensor       |       | float32 |
steps/infos                  | FeaturesDict |       |         |
steps/infos/action_log_probs | Tensor       |       | float64 |
steps/infos/qpos             | Tensor       | (9,)  | float64 |
steps/infos/qvel             | Tensor       | (9,)  | float64 |
steps/is_first               | Tensor       |       | bool    |
steps/is_last                | Tensor       |       | bool    |
steps/is_terminal            | Tensor       |       | bool    |
steps/observation            | Tensor       | (17,) | float32 |
steps/reward                 | Tensor       |       | float32 |