
Drivers


Introduction

A common pattern in reinforcement learning is to execute a policy in an environment for a specified number of steps or episodes. This happens, for example, during data collection, evaluation, and when generating a video of the agent.

While this loop is relatively straightforward to write in Python, it is much more complex to write and debug in TensorFlow because it involves tf.while loops, tf.cond, and tf.control_dependencies. Therefore we abstract this notion of a run loop into a class called driver, and provide well-tested implementations in both Python and TensorFlow.

Additionally, the data encountered by the driver at each step is saved in a named tuple called Trajectory and broadcast to a set of observers such as replay buffers and metrics. This data includes the observation from the environment, the action recommended by the policy, the reward obtained, the types of the current and the next step, and so on.
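
Observers are simply callables that accept a Trajectory. As a minimal sketch (EpisodeCounter is a hypothetical example, not part of the TF-Agents API), a custom observer that counts completed episodes could look like this:

import numpy as np

# Hypothetical observer: any callable that accepts a Trajectory works.
class EpisodeCounter(object):

  def __init__(self):
    self.count = 0

  def __call__(self, traj):
    # traj.is_last() marks the transition that ends an episode.
    self.count += np.sum(traj.is_last())

A driver invokes every observer in its list with each Trajectory it generates.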

Setup

If you have not installed tf-agents or gym yet, run:

pip install -q --pre tf-agents[reverb]
pip install -q gym
WARNING: You are using pip version 20.1.1; however, version 20.2 is available.
You should consider upgrading via the '/tmpfs/src/tf_docs_env/bin/python -m pip install --upgrade pip' command.
WARNING: You are using pip version 20.1.1; however, version 20.2 is available.
You should consider upgrading via the '/tmpfs/src/tf_docs_env/bin/python -m pip install --upgrade pip' command.

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import tensorflow as tf


from tf_agents.environments import suite_gym
from tf_agents.environments import tf_py_environment
from tf_agents.policies import random_py_policy
from tf_agents.policies import random_tf_policy
from tf_agents.metrics import py_metrics
from tf_agents.metrics import tf_metrics
from tf_agents.drivers import py_driver
from tf_agents.drivers import dynamic_episode_driver

tf.compat.v1.enable_v2_behavior()

Python Drivers

The PyDriver class takes a Python environment, a Python policy, and a list of observers to update at each step. The main method is run(), which steps the environment using actions from the policy until at least one of the following termination criteria is met: the number of steps reaches max_steps, or the number of episodes reaches max_episodes.

The implementation is roughly as follows:

import numpy as np

from tf_agents.trajectories import trajectory


class PyDriver(object):

  def __init__(self, env, policy, observers, max_steps=1, max_episodes=1):
    self._env = env
    self._policy = policy
    self._observers = observers or []
    self._max_steps = max_steps or np.inf
    self._max_episodes = max_episodes or np.inf

  def run(self, time_step, policy_state=()):
    num_steps = 0
    num_episodes = 0
    while num_steps < self._max_steps and num_episodes < self._max_episodes:

      # Compute an action using the policy for the given time_step
      action_step = self._policy.action(time_step, policy_state)

      # Apply the action to the environment and get the next step
      next_time_step = self._env.step(action_step.action)

      # Package information into a trajectory
      traj = trajectory.Trajectory(
         time_step.step_type,
         time_step.observation,
         action_step.action,
         action_step.info,
         next_time_step.step_type,
         next_time_step.reward,
         next_time_step.discount)

      for observer in self._observers:
        observer(traj)

      # Update statistics to check termination
      num_episodes += np.sum(traj.is_last())
      num_steps += np.sum(~traj.is_boundary())

      time_step = next_time_step
      policy_state = action_step.state

    return time_step, policy_state

Now let us look at an example of running a random policy on the CartPole environment, saving the results to a replay buffer and computing some metrics.

env = suite_gym.load('CartPole-v0')
policy = random_py_policy.RandomPyPolicy(time_step_spec=env.time_step_spec(), 
                                         action_spec=env.action_spec())
replay_buffer = []
metric = py_metrics.AverageReturnMetric()
observers = [replay_buffer.append, metric]
driver = py_driver.PyDriver(
    env, policy, observers, max_steps=20, max_episodes=1)

initial_time_step = env.reset()
final_time_step, _ = driver.run(initial_time_step)

print('Replay Buffer:')
for traj in replay_buffer:
  print(traj)

print('Average Return: ', metric.result())
Replay Buffer:
Trajectory(step_type=array(0, dtype=int32), observation=array([-0.02872566, -0.04710842, -0.00544945, -0.0319142 ], dtype=float32), action=array(1), policy_info=(), next_step_type=array(1, dtype=int32), reward=array(1., dtype=float32), discount=array(1., dtype=float32))
Trajectory(step_type=array(1, dtype=int32), observation=array([-0.02966782,  0.14809126, -0.00608774, -0.3263115 ], dtype=float32), action=array(0), policy_info=(), next_step_type=array(1, dtype=int32), reward=array(1., dtype=float32), discount=array(1., dtype=float32))
Trajectory(step_type=array(1, dtype=int32), observation=array([-0.026706  , -0.0469435 , -0.01261397, -0.03555458], dtype=float32), action=array(0), policy_info=(), next_step_type=array(1, dtype=int32), reward=array(1., dtype=float32), discount=array(1., dtype=float32))
Trajectory(step_type=array(1, dtype=int32), observation=array([-0.02764487, -0.24188231, -0.01332506,  0.25312197], dtype=float32), action=array(1), policy_info=(), next_step_type=array(1, dtype=int32), reward=array(1., dtype=float32), discount=array(1., dtype=float32))
Trajectory(step_type=array(1, dtype=int32), observation=array([-0.03248252, -0.04657265, -0.00826262, -0.04373396], dtype=float32), action=array(1), policy_info=(), next_step_type=array(1, dtype=int32), reward=array(1., dtype=float32), discount=array(1., dtype=float32))
Trajectory(step_type=array(1, dtype=int32), observation=array([-0.03341397,  0.1486668 , -0.0091373 , -0.33901232], dtype=float32), action=array(1), policy_info=(), next_step_type=array(1, dtype=int32), reward=array(1., dtype=float32), discount=array(1., dtype=float32))
Trajectory(step_type=array(1, dtype=int32), observation=array([-0.03044063,  0.34391758, -0.01591755, -0.6345626 ], dtype=float32), action=array(1), policy_info=(), next_step_type=array(1, dtype=int32), reward=array(1., dtype=float32), discount=array(1., dtype=float32))
Trajectory(step_type=array(1, dtype=int32), observation=array([-0.02356228,  0.5392579 , -0.0286088 , -0.9322155 ], dtype=float32), action=array(1), policy_info=(), next_step_type=array(1, dtype=int32), reward=array(1., dtype=float32), discount=array(1., dtype=float32))
Trajectory(step_type=array(1, dtype=int32), observation=array([-0.01277712,  0.73475397, -0.04725311, -1.2337494 ], dtype=float32), action=array(1), policy_info=(), next_step_type=array(1, dtype=int32), reward=array(1., dtype=float32), discount=array(1., dtype=float32))
Trajectory(step_type=array(1, dtype=int32), observation=array([ 0.00191796,  0.9304505 , -0.0719281 , -1.5408539 ], dtype=float32), action=array(0), policy_info=(), next_step_type=array(1, dtype=int32), reward=array(1., dtype=float32), discount=array(1., dtype=float32))
Trajectory(step_type=array(1, dtype=int32), observation=array([ 0.02052697,  0.73626345, -0.10274518, -1.271455  ], dtype=float32), action=array(0), policy_info=(), next_step_type=array(1, dtype=int32), reward=array(1., dtype=float32), discount=array(1., dtype=float32))
Trajectory(step_type=array(1, dtype=int32), observation=array([ 0.03525224,  0.542592  , -0.12817428, -1.0126338 ], dtype=float32), action=array(1), policy_info=(), next_step_type=array(1, dtype=int32), reward=array(1., dtype=float32), discount=array(1., dtype=float32))
Trajectory(step_type=array(1, dtype=int32), observation=array([ 0.04610407,  0.7391692 , -0.14842695, -1.342661  ], dtype=float32), action=array(0), policy_info=(), next_step_type=array(1, dtype=int32), reward=array(1., dtype=float32), discount=array(1., dtype=float32))
Trajectory(step_type=array(1, dtype=int32), observation=array([ 0.06088746,  0.5461935 , -0.17528017, -1.0998576 ], dtype=float32), action=array(1), policy_info=(), next_step_type=array(1, dtype=int32), reward=array(1., dtype=float32), discount=array(1., dtype=float32))
Trajectory(step_type=array(1, dtype=int32), observation=array([ 0.07181133,  0.743134  , -0.19727732, -1.4420109 ], dtype=float32), action=array(0), policy_info=(), next_step_type=array(2, dtype=int32), reward=array(1., dtype=float32), discount=array(0., dtype=float32))
Average Return:  15.0
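
Like the TF drivers below, PyDriver.run() returns the last time step and policy state, so collection can continue from where the previous run stopped. A minimal sketch, reusing the driver, buffer, and metric from above:

# Continue collecting from where the previous run terminated.
final_time_step, _ = driver.run(final_time_step)

print('Replay Buffer size: ', len(replay_buffer))
print('Average Return: ', metric.result())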

TensorFlow Drivers

We also have drivers in TensorFlow, which are functionally similar to Python drivers but use TF environments, TF policies, TF observers, and so on. We currently have two TensorFlow drivers: DynamicStepDriver, which terminates after a given number of (valid) environment steps, and DynamicEpisodeDriver, which terminates after a given number of episodes. Let us look at an example of the DynamicEpisodeDriver in action.

env = suite_gym.load('CartPole-v0')
tf_env = tf_py_environment.TFPyEnvironment(env)

tf_policy = random_tf_policy.RandomTFPolicy(action_spec=tf_env.action_spec(),
                                            time_step_spec=tf_env.time_step_spec())


num_episodes = tf_metrics.NumberOfEpisodes()
env_steps = tf_metrics.EnvironmentSteps()
observers = [num_episodes, env_steps]
driver = dynamic_episode_driver.DynamicEpisodeDriver(
    tf_env, tf_policy, observers, num_episodes=2)

# Initial driver.run will reset the environment and initialize the policy.
final_time_step, policy_state = driver.run()

print('final_time_step', final_time_step)
print('Number of Steps: ', env_steps.result().numpy())
print('Number of Episodes: ', num_episodes.result().numpy())
final_time_step TimeStep(step_type=<tf.Tensor: shape=(1,), dtype=int32, numpy=array([0], dtype=int32)>, reward=<tf.Tensor: shape=(1,), dtype=float32, numpy=array([0.], dtype=float32)>, discount=<tf.Tensor: shape=(1,), dtype=float32, numpy=array([1.], dtype=float32)>, observation=<tf.Tensor: shape=(1, 4), dtype=float32, numpy=
array([[-0.02959541, -0.02209792,  0.01046155,  0.01981805]],
      dtype=float32)>)
Number of Steps:  34
Number of Episodes:  2

# Continue running from previous state
final_time_step, _ = driver.run(final_time_step, policy_state)

print('final_time_step', final_time_step)
print('Number of Steps: ', env_steps.result().numpy())
print('Number of Episodes: ', num_episodes.result().numpy())
final_time_step TimeStep(step_type=<tf.Tensor: shape=(1,), dtype=int32, numpy=array([0], dtype=int32)>, reward=<tf.Tensor: shape=(1,), dtype=float32, numpy=array([0.], dtype=float32)>, discount=<tf.Tensor: shape=(1,), dtype=float32, numpy=array([1.], dtype=float32)>, observation=<tf.Tensor: shape=(1, 4), dtype=float32, numpy=
array([[-0.00525741,  0.00929421, -0.03492192,  0.0023589 ]],
      dtype=float32)>)
Number of Steps:  91
Number of Episodes:  4
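
DynamicStepDriver follows the same pattern but counts (valid) environment steps instead of episodes. A minimal sketch, reusing tf_env and tf_policy from above:

from tf_agents.drivers import dynamic_step_driver

env_steps = tf_metrics.EnvironmentSteps()
step_driver = dynamic_step_driver.DynamicStepDriver(
    tf_env, tf_policy, observers=[env_steps], num_steps=10)

final_time_step, policy_state = step_driver.run()
print('Number of Steps: ', env_steps.result().numpy())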