
d4rl_adroit_hammer

  • Description:

D4RL is an open-source benchmark for offline reinforcement learning. It provides standardized environments and datasets for training and benchmarking algorithms.

  • Citation:

@misc{fu2020d4rl,
    title={D4RL: Datasets for Deep Data-Driven Reinforcement Learning},
    author={Justin Fu and Aviral Kumar and Ofir Nachum and George Tucker and Sergey Levine},
    year={2020},
    eprint={2004.07219},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

d4rl_adroit_hammer/v0-human (default config)

  • Download size: 5.33 MiB

  • Dataset size: 6.10 MiB

  • Auto-cached (documentation): Yes

  • Splits:

Split      Examples
'train'    70

  • Features:
FeaturesDict({
    'steps': Dataset({
        'action': Tensor(shape=(26,), dtype=tf.float32),
        'discount': tf.float32,
        'infos': FeaturesDict({
            'qpos': Tensor(shape=(33,), dtype=tf.float32),
            'qvel': Tensor(shape=(33,), dtype=tf.float32),
        }),
        'is_first': tf.bool,
        'is_last': tf.bool,
        'is_terminal': tf.bool,
        'observation': Tensor(shape=(46,), dtype=tf.float32),
        'reward': tf.float32,
    }),
})
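
A minimal loading sketch, assuming tensorflow_datasets is installed. Each element of the split is one episode, and the nested 'steps' field decodes to its own tf.data.Dataset of transitions:

import tensorflow_datasets as tfds

# Load the default config (v0-human) and peek at one episode.
ds = tfds.load('d4rl_adroit_hammer/v0-human', split='train')
for episode in ds.take(1):
    # 'steps' is itself a variable-length tf.data.Dataset of transitions.
    for step in episode['steps'].take(3):
        print(step['observation'].shape, float(step['reward']))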

d4rl_adroit_hammer/v0-cloned

  • Download size: 644.69 MiB

  • Dataset size: 538.97 MiB

  • Auto-cached (documentation): No

  • Splits:

Split      Examples
'train'    5,594

  • Features:
FeaturesDict({
    'steps': Dataset({
        'action': Tensor(shape=(26,), dtype=tf.float32),
        'discount': tf.float64,
        'infos': FeaturesDict({
            'qpos': Tensor(shape=(33,), dtype=tf.float64),
            'qvel': Tensor(shape=(33,), dtype=tf.float64),
        }),
        'is_first': tf.bool,
        'is_last': tf.bool,
        'is_terminal': tf.bool,
        'observation': Tensor(shape=(46,), dtype=tf.float64),
        'reward': tf.float64,
    }),
})
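
Unlike the other v0 configs, v0-cloned stores several per-step fields as tf.float64. A small sketch (field names taken from the schema above) casting them down to float32 before further processing:

import tensorflow as tf
import tensorflow_datasets as tfds

ds = tfds.load('d4rl_adroit_hammer/v0-cloned', split='train')

def to_float32(step):
    # Cast the float64 fields listed in the schema above down to float32.
    return {**step,
            'observation': tf.cast(step['observation'], tf.float32),
            'reward': tf.cast(step['reward'], tf.float32),
            'discount': tf.cast(step['discount'], tf.float32)}

for episode in ds.take(1):
    steps32 = episode['steps'].map(to_float32)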

d4rl_adroit_hammer/v0-expert

  • Download size: 529.91 MiB

  • Dataset size: 737.00 MiB

  • Auto-cached (documentation): No

  • Splits:

Split      Examples
'train'    5,000

  • Features:
FeaturesDict({
    'steps': Dataset({
        'action': Tensor(shape=(26,), dtype=tf.float32),
        'discount': tf.float32,
        'infos': FeaturesDict({
            'action_logstd': Tensor(shape=(26,), dtype=tf.float32),
            'action_mean': Tensor(shape=(26,), dtype=tf.float32),
            'qpos': Tensor(shape=(33,), dtype=tf.float32),
            'qvel': Tensor(shape=(33,), dtype=tf.float32),
        }),
        'is_first': tf.bool,
        'is_last': tf.bool,
        'is_terminal': tf.bool,
        'observation': Tensor(shape=(46,), dtype=tf.float32),
        'reward': tf.float32,
    }),
})
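
The expert config logs the behavior policy's per-step Gaussian parameters ('action_mean', 'action_logstd'). A sketch computing the log-likelihood of the logged action, assuming these fields parameterize a diagonal Gaussian (an assumption about how the data was generated, not stated in the schema):

import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds

ds = tfds.load('d4rl_adroit_hammer/v0-expert', split='train')
episode = next(iter(ds))
step = next(iter(episode['steps']))

mean = step['infos']['action_mean']
log_std = step['infos']['action_logstd']
# Diagonal-Gaussian log-density of the logged action (assumed policy form).
z = (step['action'] - mean) / tf.exp(log_std)
log_prob = tf.reduce_sum(-0.5 * tf.square(z) - log_std - 0.5 * np.log(2.0 * np.pi))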

d4rl_adroit_hammer/v1-human

  • Download size: 5.35 MiB

  • Dataset size: 6.34 MiB

  • Auto-cached (documentation): Yes

  • Splits:

Split      Examples
'train'    25

  • Features:
FeaturesDict({
    'steps': Dataset({
        'action': Tensor(shape=(26,), dtype=tf.float32),
        'discount': tf.float32,
        'infos': FeaturesDict({
            'board_pos': Tensor(shape=(3,), dtype=tf.float32),
            'qpos': Tensor(shape=(33,), dtype=tf.float32),
            'qvel': Tensor(shape=(33,), dtype=tf.float32),
            'target_pos': Tensor(shape=(3,), dtype=tf.float32),
        }),
        'is_first': tf.bool,
        'is_last': tf.bool,
        'is_terminal': tf.bool,
        'observation': Tensor(shape=(46,), dtype=tf.float32),
        'reward': tf.float32,
    }),
})

d4rl_adroit_hammer/v1-cloned

  • Download size: 425.93 MiB

  • Dataset size: 1.68 GiB

  • Auto-cached (documentation): No

  • Splits:

Split      Examples
'train'    3,606

  • Features:
FeaturesDict({
    'algorithm': tf.string,
    'policy': FeaturesDict({
        'fc0': FeaturesDict({
            'bias': Tensor(shape=(256,), dtype=tf.float32),
            'weight': Tensor(shape=(46, 256), dtype=tf.float32),
        }),
        'fc1': FeaturesDict({
            'bias': Tensor(shape=(256,), dtype=tf.float32),
            'weight': Tensor(shape=(256, 256), dtype=tf.float32),
        }),
        'last_fc': FeaturesDict({
            'bias': Tensor(shape=(26,), dtype=tf.float32),
            'weight': Tensor(shape=(256, 26), dtype=tf.float32),
        }),
        'nonlinearity': tf.string,
        'output_distribution': tf.string,
    }),
    'steps': Dataset({
        'action': Tensor(shape=(26,), dtype=tf.float32),
        'discount': tf.float32,
        'infos': FeaturesDict({
            'board_pos': Tensor(shape=(3,), dtype=tf.float32),
            'qpos': Tensor(shape=(33,), dtype=tf.float32),
            'qvel': Tensor(shape=(33,), dtype=tf.float32),
            'target_pos': Tensor(shape=(3,), dtype=tf.float32),
        }),
        'is_first': tf.bool,
        'is_last': tf.bool,
        'is_terminal': tf.bool,
        'observation': Tensor(shape=(46,), dtype=tf.float32),
        'reward': tf.float32,
    }),
})

d4rl_adroit_hammer/v1-expert

  • Download size: 531.24 MiB

  • Dataset size: 843.54 MiB

  • Auto-cached (documentation): No

  • Splits:

Split      Examples
'train'    5,000

  • Features:
FeaturesDict({
    'algorithm': tf.string,
    'policy': FeaturesDict({
        'fc0': FeaturesDict({
            'bias': Tensor(shape=(32,), dtype=tf.float32),
            'weight': Tensor(shape=(32, 46), dtype=tf.float32),
        }),
        'fc1': FeaturesDict({
            'bias': Tensor(shape=(32,), dtype=tf.float32),
            'weight': Tensor(shape=(32, 32), dtype=tf.float32),
        }),
        'last_fc': FeaturesDict({
            'bias': Tensor(shape=(26,), dtype=tf.float32),
            'weight': Tensor(shape=(26, 32), dtype=tf.float32),
        }),
        'last_fc_log_std': FeaturesDict({
            'bias': Tensor(shape=(26,), dtype=tf.float32),
            'weight': Tensor(shape=(26, 32), dtype=tf.float32),
        }),
        'nonlinearity': tf.string,
        'output_distribution': tf.string,
    }),
    'steps': Dataset({
        'action': Tensor(shape=(26,), dtype=tf.float32),
        'discount': tf.float32,
        'infos': FeaturesDict({
            'action_log_std': Tensor(shape=(26,), dtype=tf.float32),
            'action_mean': Tensor(shape=(26,), dtype=tf.float32),
            'board_pos': Tensor(shape=(3,), dtype=tf.float32),
            'qpos': Tensor(shape=(33,), dtype=tf.float32),
            'qvel': Tensor(shape=(33,), dtype=tf.float32),
            'target_pos': Tensor(shape=(3,), dtype=tf.float32),
        }),
        'is_first': tf.bool,
        'is_last': tf.bool,
        'is_terminal': tf.bool,
        'observation': Tensor(shape=(46,), dtype=tf.float32),
        'reward': tf.float32,
    }),
})
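
The v1-cloned and v1-expert configs also store the weights of the policy that generated each episode at the episode level. A minimal forward-pass sketch for v1-expert, assuming a tanh hidden activation (the actual activation is recorded in the stored 'nonlinearity' string) and the weight layout shown above, where each weight matrix has one row per output unit:

import tensorflow as tf
import tensorflow_datasets as tfds

ds = tfds.load('d4rl_adroit_hammer/v1-expert', split='train')
episode = next(iter(ds))
p = episode['policy']

def action_mean(obs):
    # Two hidden layers then a linear head, per the layout above.
    # tanh is an assumption; check p['nonlinearity'] for the actual activation.
    h = tf.tanh(tf.linalg.matvec(p['fc0']['weight'], obs) + p['fc0']['bias'])
    h = tf.tanh(tf.linalg.matvec(p['fc1']['weight'], h) + p['fc1']['bias'])
    return tf.linalg.matvec(p['last_fc']['weight'], h) + p['last_fc']['bias']

step = next(iter(episode['steps']))
print(action_mean(step['observation']))

Note that v1-cloned stores its weights in the transposed layout (e.g. 'fc0' weight has shape (46, 256)), so the same sketch would need the matrices transposed for that config.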