#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

Tutorial on Multi-Armed Bandits in TF-Agents

Setup

If you haven't installed the following dependencies, run:

!pip install tf-agents

Imports

import abc
import numpy as np
import tensorflow as tf

from tf_agents.agents import tf_agent
from tf_agents.drivers import driver
from tf_agents.environments import py_environment
from tf_agents.environments import tf_environment
from tf_agents.environments import tf_py_environment
from tf_agents.policies import tf_policy
from tf_agents.specs import array_spec
from tf_agents.specs import tensor_spec
from tf_agents.trajectories import time_step as ts
from tf_agents.trajectories import trajectory
from tf_agents.trajectories import policy_step

nest = tf.nest

Introduction

The Multi-Armed Bandit problem (MAB) is a special case of Reinforcement Learning: an agent collects rewards in an environment by taking some actions after observing some state of the environment. The main difference between general RL and MAB is that in MAB, we assume that the action taken by the agent does not influence the next state of the environment. Therefore, agents do not model state transitions, credit rewards to past actions, or "plan ahead" to get to reward-rich states.

As in other fields of RL, the goal of an MAB agent is also to find a policy that collects as much reward as possible. It would be a mistake, however, to always try to exploit the action that promises the highest reward, because then there is a chance that we miss out on better actions if we do not explore enough. This is the main problem to be solved in MAB, commonly called the exploration-exploitation dilemma.
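
To make the dilemma concrete, here is a minimal, self-contained epsilon-greedy sketch (not part of the original notebook and independent of TF-Agents): with probability epsilon the agent explores a random arm, otherwise it exploits the arm with the highest estimated mean reward. The arm means and epsilon value below are arbitrary illustrations.

import numpy as np

def epsilon_greedy_bandit(true_means, epsilon=0.1, num_steps=1000, seed=0):
  """Toy illustration of the exploration-exploitation trade-off."""
  rng = np.random.default_rng(seed)
  num_arms = len(true_means)
  counts = np.zeros(num_arms)      # number of pulls per arm
  estimates = np.zeros(num_arms)   # running estimate of each arm's mean reward
  total_reward = 0.0
  for _ in range(num_steps):
    if rng.random() < epsilon:
      arm = int(rng.integers(num_arms))   # explore: pick a random arm
    else:
      arm = int(np.argmax(estimates))     # exploit: pick the best arm so far
    reward = rng.normal(true_means[arm], 1.0)
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]
    total_reward += reward
  return estimates, total_reward

# With epsilon=0 the agent can lock onto a suboptimal arm it happened to try first.
print(epsilon_greedy_bandit([0.1, 0.5, 0.9], epsilon=0.1))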

Bandit environments, policies, and agents for MAB can be found in subdirectories of tf_agents/bandits.

Environments

In TF-Agents, the environment class serves the role of giving information on the current state (this is called the observation or context), receiving an action as input, performing a state transition, and outputting a reward. This class also takes care of resetting when an episode ends, so that a new episode can start. This is realized by calling a reset function when a state is labelled as the "last" one of the episode.
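
As a rough sketch (not from the original notebook), the generic interaction loop with such an environment looks like the helper below; env stands for any concrete py_environment.PyEnvironment, and the fixed action is only a placeholder for what a policy would choose.

def run_one_episode(env, action=0):
  """Sketch of the generic PyEnvironment interaction loop."""
  time_step = env.reset()           # starts a new episode, returns the first observation
  total_reward = 0.0
  while not time_step.is_last():
    # A real policy would pick the action based on time_step.observation.
    time_step = env.step(action)    # applies the action, returns reward and next observation
    total_reward += time_step.reward
  return total_reward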

For more details, see the TF-Agents Environments tutorial.

As mentioned above, MAB differs from general RL in that actions do not influence the next observation. Another difference is that in Bandits there are no "episodes": every time step starts with a new observation, independently of previous time steps.

To make sure observations are independent and to abstract away the concept of RL episodes, we introduce subclasses of PyEnvironment and TFEnvironment: BanditPyEnvironment and BanditTFEnvironment. These classes expose two private member functions that remain to be implemented by the user:

@abc.abstractmethod
def _observe(self):

@abc.abstractmethod
def _apply_action(self, action):

The _observe function returns an observation. Then, the policy chooses an action based on this observation. The _apply_action receives that action as an input and returns the corresponding reward. These private member functions are called by the reset and step functions, respectively.

class BanditPyEnvironment(py_environment.PyEnvironment):

  def __init__(self, observation_spec, action_spec):
    self._observation_spec = observation_spec
    self._action_spec = action_spec
    super(BanditPyEnvironment, self).__init__()

  # Helper functions.
  def action_spec(self):
    return self._action_spec

  def observation_spec(self):
    return self._observation_spec

  def _empty_observation(self):
    return tf.nest.map_structure(lambda x: np.zeros(x.shape, x.dtype),
                                 self.observation_spec())

  # These two functions below should not be overridden by subclasses.
  def _reset(self):
    """Returns a time step containing an observation."""
    return ts.restart(self._observe(), batch_size=self.batch_size)

  def _step(self, action):
    """Returns a time step containing the reward for the action taken."""
    reward = self._apply_action(action)
    return ts.termination(self._observe(), reward)

  # These two functions below are to be implemented in subclasses.
  @abc.abstractmethod
  def _observe(self):
    """Returns an observation."""

  @abc.abstractmethod
  def _apply_action(self, action):
    """Applies `action` to the Environment and returns the corresponding reward.
    """

The above interim abstract class implements the _reset and _step functions of PyEnvironment and exposes the abstract functions _observe and _apply_action to be implemented by subclasses.

A Simple Example Environment Class

The following class gives a very simple environment for which the observation is a random integer between -2 and 2, there are 3 possible actions (0, 1, 2), and the reward is the product of the action and the observation.

class SimplePyEnvironment(BanditPyEnvironment):

  def __init__(self):
    action_spec = array_spec.BoundedArraySpec(
        shape=(), dtype=np.int32, minimum=0, maximum=2, name='action')
    observation_spec = array_spec.BoundedArraySpec(
        shape=(1,), dtype=np.int32, minimum=-2, maximum=2, name='observation')
    super(SimplePyEnvironment, self).__init__(observation_spec, action_spec)

  def _observe(self):
    self._observation = np.random.randint(-2, 3, (1,), dtype='int32')
    return self._observation

  def _apply_action(self, action):
    return action * self._observation

Now we can use this environment to get an observation and receive a reward for our action.

environment = SimplePyEnvironment()
observation = environment.reset().observation
print("observation: %d" % observation)

action = 2 #@param
print("action: %d" % action)

reward = environment.step(action).reward
print("reward: %f" % reward)

TF Environments

A bandit environment can be defined by subclassing BanditTFEnvironment, or, similarly to RL environments, one can define a BanditPyEnvironment and wrap it with TFPyEnvironment. For the sake of simplicity, we go with the latter option in this tutorial.

tf_environment = tf_py_environment.TFPyEnvironment(environment)

Policies

A policy in a bandit problem works the same way as in an RL problem: it provides an action (or a distribution of actions), given an observation as input.

For more details, see the TF-Agents Policy tutorial.

As with environments, there are two ways to construct a policy: one can create a PyPolicy and wrap it with TFPyPolicy, or directly create a TFPolicy. Here we elect to go with the direct method.

Since this example is quite simple, we can define the optimal policy manually. The action only depends on the sign of the observation: 0 when it is negative and 2 when it is positive.

class SignPolicy(tf_policy.TFPolicy):
  def __init__(self):
    observation_spec = tensor_spec.BoundedTensorSpec(
        shape=(1,), dtype=tf.int32, minimum=-2, maximum=2)
    time_step_spec = ts.time_step_spec(observation_spec)

    action_spec = tensor_spec.BoundedTensorSpec(
        shape=(), dtype=tf.int32, minimum=0, maximum=2)

    super(SignPolicy, self).__init__(time_step_spec=time_step_spec,
                                     action_spec=action_spec)

  def _distribution(self, time_step):
    pass

  def _variables(self):
    return ()

  def _action(self, time_step, policy_state, seed):
    observation_sign = tf.cast(tf.sign(time_step.observation[0]), dtype=tf.int32)
    action = observation_sign + 1
    return policy_step.PolicyStep(action, policy_state)

Now we can request an observation from the environment, call the policy to choose an action, and the environment will then output the reward:

sign_policy = SignPolicy()

current_time_step = tf_environment.reset()
print('Observation:')
print(current_time_step.observation)

action = sign_policy.action(current_time_step).action
print('Action:')
print(action)

reward = tf_environment.step(action).reward
print('Reward:')
print(reward)

The way bandit environments are implemented ensures that every time we take a step, we not only receive the reward for the action we took, but also the next observation.

step = tf_environment.reset()
action = 1
next_step = tf_environment.step(action)
reward = next_step.reward
next_observation = next_step.observation
print("Reward: ")
print(reward)
print("Next observation:")
print(next_observation)

Agents

Now that we have bandit environments and bandit policies, it is time to also define bandit agents, which take care of changing the policy based on training samples.

The API for bandit agents does not differ from that of RL agents: the agent just needs to implement the _initialize and _train methods and define a policy and a collect_policy; a minimal skeleton is sketched below.
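
To make the shape of that API concrete, here is a minimal skeleton (a sketch only, relying on the imports above; MyBanditAgent and its no-op _train are placeholders, and the SignAgent defined later in this tutorial is the concrete example):

class MyBanditAgent(tf_agent.TFAgent):
  """Skeleton of a bandit agent; illustrative only."""

  def __init__(self, time_step_spec, action_spec, policy):
    super(MyBanditAgent, self).__init__(
        time_step_spec=time_step_spec,
        action_spec=action_spec,
        policy=policy,            # policy used for evaluation
        collect_policy=policy,    # policy used to collect training data
        train_sequence_length=None)

  def _initialize(self):
    # Initialize any variables owned by the agent and its policy.
    return tf.compat.v1.variables_initializer(self.variables)

  def _train(self, experience, weights=None):
    # A real agent would update its policy's variables from `experience` here.
    return tf_agent.LossInfo(loss=(), extra=())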

A More Complicated Environment

Before we write our bandit agent, we need an environment that is a bit harder to figure out. To spice things up just a little, the next environment will either always give reward = observation * action or always give reward = -observation * action. This is decided when the environment is initialized.

class TwoWayPyEnvironment(BanditPyEnvironment):

  def __init__(self):
    action_spec = array_spec.BoundedArraySpec(
        shape=(), dtype=np.int32, minimum=0, maximum=2, name='action')
    observation_spec = array_spec.BoundedArraySpec(
        shape=(1,), dtype=np.int32, minimum=-2, maximum=2, name='observation')

    # Flipping the sign with probability 1/2.
    self._reward_sign = 2 * np.random.randint(2) - 1
    print("reward sign:")
    print(self._reward_sign)

    super(TwoWayPyEnvironment, self).__init__(observation_spec, action_spec)

  def _observe(self):
    self._observation = np.random.randint(-2, 3, (1,), dtype='int32')
    return self._observation

  def _apply_action(self, action):
    return self._reward_sign * action * self._observation[0]

two_way_tf_environment = tf_py_environment.TFPyEnvironment(TwoWayPyEnvironment())

A More Complicated Policy

A more complicated environment calls for a more complicated policy. We need a policy that detects the behavior of the underlying environment. There are three situations that the policy needs to handle:

  1. The agent has not detected yet which version of the environment is running.

  2. The agent has detected that the original version of the environment is running.

  3. The agent has detected that the flipped version of the environment is running.

We define a tf_variable named _situation to encode this information as a value in [0, 2], and then make the policy behave accordingly.

class TwoWaySignPolicy(tf_policy.TFPolicy):
  def __init__(self, situation):
    observation_spec = tensor_spec.BoundedTensorSpec(
        shape=(1,), dtype=tf.int32, minimum=-2, maximum=2)
    action_spec = tensor_spec.BoundedTensorSpec(
        shape=(), dtype=tf.int32, minimum=0, maximum=2)
    time_step_spec = ts.time_step_spec(observation_spec)
    self._situation = situation
    super(TwoWaySignPolicy, self).__init__(time_step_spec=time_step_spec,
                                           action_spec=action_spec)

  def _distribution(self, time_step):
    pass

  def _variables(self):
    return [self._situation]

  def _action(self, time_step, policy_state, seed):
    sign = tf.cast(tf.sign(time_step.observation[0, 0]), dtype=tf.int32)

    def case_unknown_fn():
      # Choose 1 so that we get information on the sign.
      return tf.constant(1, shape=(1,))

    # Choose 0 or 2, depending on the situation and the sign of the observation.
    def case_normal_fn():
      return tf.constant(sign + 1, shape=(1,))
    def case_flipped_fn():
      return tf.constant(1 - sign, shape=(1,))

    cases = [(tf.equal(self._situation, 0), case_unknown_fn),
             (tf.equal(self._situation, 1), case_normal_fn),
             (tf.equal(self._situation, 2), case_flipped_fn)]
    action = tf.case(cases, exclusive=True)
    return policy_step.PolicyStep(action, policy_state)

Agents

Now it's time to define the agent that detects the sign of the environment and sets the policy appropriately.

class SignAgent(tf_agent.TFAgent):
  def __init__(self):
    self._situation = tf.Variable(0, dtype=tf.int32)
    policy = TwoWaySignPolicy(self._situation)
    time_step_spec = policy.time_step_spec
    action_spec = policy.action_spec
    super(SignAgent, self).__init__(time_step_spec=time_step_spec,
                                    action_spec=action_spec,
                                    policy=policy,
                                    collect_policy=policy,
                                    train_sequence_length=None)

  def _initialize(self):
    return tf.compat.v1.variables_initializer(self.variables)

  def _train(self, experience, weights=None):
    observation = experience.observation
    action = experience.action
    reward = experience.reward

    # We only need to change the value of the situation variable if it is
    # unknown (0) right now, and we can infer the situation only if the
    # observation is not 0.
    needs_action = tf.logical_and(tf.equal(self._situation, 0),
                                  tf.not_equal(reward, 0))

    def new_situation_fn():
      """This returns either 1 or 2, depending on the signs."""
      return (3 - tf.sign(tf.cast(observation[0, 0, 0], dtype=tf.int32) *
                          tf.cast(action[0, 0], dtype=tf.int32) *
                          tf.cast(reward[0, 0], dtype=tf.int32))) / 2

    new_situation = tf.cond(needs_action,
                            new_situation_fn,
                            lambda: self._situation)
    new_situation = tf.cast(new_situation, tf.int32)
    tf.compat.v1.assign(self._situation, new_situation)
    return tf_agent.LossInfo((), ())

sign_agent = SignAgent()

In the above code, the agent defines the policy, and the variable situation is shared by the agent and the policy.

Also, the experience parameter of the _train function is a trajectory:

Trajectories

In TF-Agents, trajectories are named tuples that contain samples from previous steps taken. These samples are then used by the agent to train and update the policy. In RL, trajectories must contain information about the current state, the next state, and whether the current episode has ended. Since in the Bandit world we do not need these things, we set up a helper function to create a trajectory:

# We need to add another dimension here because the agent expects the
# trajectory of shape [batch_size, time, ...], but in this tutorial we assume
# that both batch size and time are 1. Hence all the expand_dims.
def trajectory_for_bandit(initial_step, action_step, final_step):
  return trajectory.Trajectory(observation=tf.expand_dims(initial_step.observation, 0),
                               action=tf.expand_dims(action_step.action, 0),
                               policy_info=action_step.info,
                               reward=tf.expand_dims(final_step.reward, 0),
                               discount=tf.expand_dims(final_step.discount, 0),
                               step_type=tf.expand_dims(initial_step.step_type, 0),
                               next_step_type=tf.expand_dims(final_step.step_type, 0))

Training the Agent

Now all the pieces are in place to train our bandit agent.

step = two_way_tf_environment.reset()
for _ in range(10):
  action_step = sign_agent.collect_policy.action(step)
  next_step = two_way_tf_environment.step(action_step.action)
  experience = trajectory_for_bandit(step, action_step, next_step)
  print(experience)
  sign_agent.train(experience)
  step = next_step

From the output one can see that after the second step (unless the observation was 0 in the first step), the policy chooses the action in the right way, and therefore the reward collected is always non-negative.
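
As a quick check (not in the original notebook, and assuming the cells above have been run), we can inspect the shared situation variable through the policy's variables() accessor; after a successful run it should hold 1 (original environment) or 2 (flipped environment):

# 0 = unknown, 1 = original environment, 2 = flipped environment.
print(sign_agent.policy.variables())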

A Real Contextual Bandit Example

In the rest of this tutorial, we use pre-implemented environments and agents from the TF-Agents Bandits library.

# Imports for example.
from tf_agents.bandits.agents import lin_ucb_agent
from tf_agents.bandits.environments import stationary_stochastic_py_environment as sspe
from tf_agents.bandits.metrics import tf_metrics
from tf_agents.drivers import dynamic_step_driver
from tf_agents.replay_buffers import tf_uniform_replay_buffer

import matplotlib.pyplot as plt

Stationary Stochastic Environment with Linear Payoff Functions

The environment used in this example is the StationaryStochasticPyEnvironment. This environment takes as parameter a (usually noisy) function for giving observations (context), and for every arm takes an (also noisy) function that computes the reward based on the given observation. In our example, we sample the context uniformly from a d-dimensional cube, and the reward functions are linear functions of the context, plus some Gaussian noise.

batch_size = 2 # @param
arm0_param = [-3, 0, 1, -2] # @param
arm1_param = [1, -2, 3, 0] # @param
arm2_param = [0, 0, 1, 1] # @param

def context_sampling_fn(batch_size):
  """Contexts from [-10, 10]^4."""
  def _context_sampling_fn():
    return np.random.randint(-10, 10, [batch_size, 4]).astype(np.float32)
  return _context_sampling_fn

class LinearNormalReward(object):
  """A class that acts as linear reward function when called."""
  def __init__(self, theta, sigma):
    self.theta = theta
    self.sigma = sigma
  def __call__(self, x):
    mu = np.dot(x, self.theta)
    return np.random.normal(mu, self.sigma)

arm0_reward_fn = LinearNormalReward(arm0_param, 1)
arm1_reward_fn = LinearNormalReward(arm1_param, 1)
arm2_reward_fn = LinearNormalReward(arm2_param, 1)

environment = tf_py_environment.TFPyEnvironment(
    sspe.StationaryStochasticPyEnvironment(
        context_sampling_fn(batch_size),
        [arm0_reward_fn, arm1_reward_fn, arm2_reward_fn],
        batch_size=batch_size))

The LinUCB Agent

The agent below implements the LinUCB algorithm.

observation_spec = tensor_spec.TensorSpec([4], tf.float32)
time_step_spec = ts.time_step_spec(observation_spec)
action_spec = tensor_spec.BoundedTensorSpec(
    dtype=tf.int32, shape=(), minimum=0, maximum=2)

agent = lin_ucb_agent.LinearUCBAgent(time_step_spec=time_step_spec,
                                     action_spec=action_spec)

Regret Metric

The most important metric for bandits is regret, calculated as the difference between the reward collected by the agent and the expected reward of an oracle policy that has access to the reward functions of the environment. The RegretMetric therefore needs a baseline_reward_fn function that calculates the best achievable expected reward given an observation. For our example, we need to take the maximum of the no-noise equivalents of the reward functions that we already defined for the environment.

def compute_optimal_reward(observation):
  expected_reward_for_arms = [
      tf.linalg.matvec(observation, tf.cast(arm0_param, dtype=tf.float32)),
      tf.linalg.matvec(observation, tf.cast(arm1_param, dtype=tf.float32)),
      tf.linalg.matvec(observation, tf.cast(arm2_param, dtype=tf.float32))]
  optimal_action_reward = tf.reduce_max(expected_reward_for_arms, axis=0)
  return optimal_action_reward

regret_metric = tf_metrics.RegretMetric(compute_optimal_reward)

Training

Now we put together all the components introduced above: the environment, the policy, and the agent. We run the policy on the environment and output training data with the help of a driver, and train the agent on that data.

Note that there are two parameters that together specify the number of steps taken. num_iterations specifies how many times we run the trainer loop, while the driver takes steps_per_loop steps per iteration. The main reason behind keeping both of these parameters is that some operations are done per iteration, while some are done by the driver in every step. For example, the agent's train function is only called once per iteration. The trade-off here is that if we train more often, our policy is "fresher"; on the other hand, training in bigger batches might be more time-efficient.

num_iterations = 90 # @param
steps_per_loop = 1 # @param

replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
    data_spec=agent.policy.trajectory_spec,
    batch_size=batch_size,
    max_length=steps_per_loop)

observers = [replay_buffer.add_batch, regret_metric]

driver = dynamic_step_driver.DynamicStepDriver(
    env=environment,
    policy=agent.collect_policy,
    num_steps=steps_per_loop * batch_size,
    observers=observers)

regret_values = []

for _ in range(num_iterations):
  driver.run()
  loss_info = agent.train(replay_buffer.gather_all())
  replay_buffer.clear()
  regret_values.append(regret_metric.result())

plt.plot(regret_values)
plt.ylabel('Average Regret')
plt.xlabel('Number of Iterations')

After running the last code snippet, the resulting plot (hopefully) shows that the average regret goes down as the agent is trained and the policy gets better at figuring out the right action, given the observation.
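
If you also want a numeric summary next to the plot (not in the original notebook, and assuming the training cell above has been run), averaging the last few regret values gives a rough sense of where the curve settles:

# Average regret over the last 10 iterations; should be close to the plot's tail.
print(np.mean([float(r) for r in regret_values[-10:]]))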

Next Steps

To see more working examples, please check the bandits/agents/examples directory, which has ready-to-run examples for different agents and environments.

The TF-Agents library is also capable of handling Multi-Armed Bandits with per-arm features. To that end, we refer the reader to the per-arm bandit tutorial.