# Reinforcement Learning Notes - 06~07 - Temporal-Difference Learning

## Temporal-Difference Learning in Brief

The Monte Carlo update:
$V(S_t) \gets V(S_t) + \alpha \delta_t, \quad \delta_t = G_t - V(S_t)$
where $\delta_t$ is the Monte Carlo error and $\alpha$ is the learning step size.

The TD(0) update:
$V(S_t) \gets V(S_t) + \alpha \delta_t, \quad \delta_t = R_{t+1} + \gamma V(S_{t+1}) - V(S_t)$
where $\delta_t$ is the TD error, $\alpha$ is the learning step size, and $\gamma$ is the reward discount rate.
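As a minimal numeric sketch of the difference (all numbers below are invented for illustration): the Monte Carlo error needs the complete return $G_t$, so the update must wait until the episode ends, while the TD error only needs the next transition.

```python
alpha, gamma = 0.1, 0.9          # step size and discount (illustrative values)

V = {"s": 0.5, "s_next": 0.8}    # current value estimates
R = 1.0                          # reward observed on the transition s -> s_next
G = 2.0                          # complete Monte Carlo return from s (known only at episode end)

mc_error = G - V["s"]                         # Monte Carlo error: needs the full return
td_error = R + gamma * V["s_next"] - V["s"]   # TD error: bootstraps from V(s_next)

V_mc = V["s"] + alpha * mc_error   # 0.5 + 0.1 * 1.50 = 0.650
V_td = V["s"] + alpha * td_error   # 0.5 + 0.1 * 1.22 = 0.622
```

The TD update is available online, after every step, which is the practical advantage the chapter emphasizes.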

## Temporal-Difference Learning Methods

• TD learning of the policy's state values $v_{\pi}$ (one-step / n-step)
• On-policy TD learning of the policy's action values $q_{\pi}$: Sarsa (one-step / n-step)
• Off-policy TD learning of the policy's action values $q_{\pi}$: Q-learning (one-step)
• Double Q-learning (one-step)
• Off-policy TD learning of the policy's action values $q_{\pi}$ (with importance sampling): Sarsa (n-step)
• Off-policy TD learning of the policy's action values $q_{\pi}$ (without importance sampling): Tree Backup Algorithm (n-step)
• Off-policy TD learning of the policy's action values $q_{\pi}$: $Q(\sigma)$ (n-step)

## TD Learning of the State Values $v_{\pi}$

• Flowchart
• Algorithm description

Input: the policy $\pi$ to be evaluated
Initialize $V(s)$ arbitrarily $\forall s \in \mathcal{S}^+$
Repeat (for each episode):
Initialize $S$
Repeat (for each step of episode):
$A \gets$ action given by $\pi$ for $S$
Take action $A$, observe $R, S'$
$V(S) \gets V(S) + \alpha [R + \gamma V(S') - V(S)]$
$S \gets S'$
Until $S$ is terminal
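A runnable sketch of this algorithm (the environment and all parameter values are my own toy choices, not from the book): TD(0) evaluation of the equiprobable policy on a 5-state random walk with terminals at both ends and reward +1 only on the right exit, whose true values are $1/6, \dots, 5/6$.

```python
import random

def td0_random_walk(episodes=200, alpha=0.1, gamma=1.0, seed=0):
    """Tabular TD(0) evaluation of the equiprobable policy on a 5-state random walk."""
    rng = random.Random(seed)
    V = {s: 0.5 for s in range(1, 6)}   # arbitrary initial estimates
    V[0] = V[6] = 0.0                   # terminal states
    for _ in range(episodes):
        s = 3                           # every episode starts in the center state
        while s not in (0, 6):
            s2 = s + rng.choice((-1, 1))       # random policy: step left or right
            r = 1.0 if s2 == 6 else 0.0        # +1 only on the right exit
            V[s] += alpha * (r + gamma * V[s2] - V[s])   # TD(0) update
            s = s2
    return V

V = td0_random_walk()
```

With more episodes and a decaying step size the estimates converge to the true values $1/6, \dots, 5/6$.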

• Flowchart
• Algorithm description

Input: the policy $\pi$ to be evaluated
Initialize $V(s)$ arbitrarily $\forall s \in \mathcal{S}$
Parameters: step size $\alpha \in (0, 1]$, a positive integer $n$
All store and access operations (for $S_t$ and $R_t$) can take their index mod $n$

Repeat (for each episode):
Initialize and store $S_0 \ne terminal$
$T \gets \infty$
For $t = 0, 1, 2, \cdots$:
If $t < T$, then:
Take an action according to $\pi(\cdot \mid S_t)$
Observe and store the next reward as $R_{t+1}$ and the next state as $S_{t+1}$
If $S_{t+1}$ is terminal, then $T \gets t+1$
$\tau \gets t - n + 1$ ($\tau$ is the time whose state's estimate is being updated)
If $\tau \ge 0$:
$G \gets \sum_{i = \tau + 1}^{\min(\tau + n, T)} \gamma^{i-\tau-1} R_i$
If $\tau + n < T$, then: $G \gets G + \gamma^{n} V(S_{\tau + n}) \qquad \qquad (G_{\tau}^{(n)})$
$V(S_{\tau}) \gets V(S_{\tau}) + \alpha [G - V(S_{\tau})]$
Until $\tau = T - 1$
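The return built in the inner loop above can be isolated as a small helper (a sketch of mine; the dict-based storage and names are assumptions):

```python
def n_step_return(rewards, V, states, tau, n, T, gamma):
    """G = sum_{i=tau+1}^{min(tau+n, T)} gamma^(i-tau-1) * R_i,
    plus gamma^n * V(S_{tau+n}) when the episode has not yet ended.
    rewards[i] holds R_i and states[i] holds S_i, indexed by time."""
    G = sum(gamma ** (i - tau - 1) * rewards[i]
            for i in range(tau + 1, min(tau + n, T) + 1))
    if tau + n < T:                  # bootstrap only if S_{tau+n} is not terminal
        G += gamma ** n * V[states[tau + n]]
    return G

# toy check: R_1 = 1, R_2 = 2, V(S_2) = 10, tau = 0, n = 2, episode still running
G = n_step_return(rewards={1: 1.0, 2: 2.0}, V={"s2": 10.0}, states={2: "s2"},
                  tau=0, n=2, T=100, gamma=0.5)
# G = 1 + 0.5 * 2 + 0.25 * 10 = 4.5
```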

## On-Policy TD Learning of the Action Values $q_{\pi}$: Sarsa

• Flowchart
• Algorithm description

Initialize $Q(s, a), \forall s \in \mathcal{S}, a \in \mathcal{A}(s)$ arbitrarily, and $Q(terminal, \cdot) = 0$
Repeat (for each episode):
Initialize $S$
Choose $A$ from $S$ using policy derived from $Q$ (e.g. $\epsilon$-greedy)
Repeat (for each step of episode):
Take action $A$, observe $R, S'$
Choose $A'$ from $S'$ using policy derived from $Q$ (e.g. $\epsilon$-greedy)
$Q(S, A) \gets Q(S, A) + \alpha [R + \gamma Q(S', A') - Q(S, A)]$
$S \gets S'; A \gets A'$
Until $S$ is terminal
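A runnable sketch of one-step Sarsa (the corridor environment, parameter values, and names are my own illustrative choices): a 5-cell corridor where every step costs -1 and the right end is terminal.

```python
import random

def sarsa_corridor(n_states=5, episodes=300, alpha=0.5, gamma=1.0, eps=0.1, seed=1):
    """One-step Sarsa on a corridor: cells 0..n_states-1, start at 0,
    terminal at the right end, reward -1 per step."""
    rng = random.Random(seed)
    goal = n_states - 1
    Q = {(s, a): 0.0 for s in range(n_states) for a in (-1, +1)}

    def eps_greedy(s):
        if rng.random() < eps:
            return rng.choice((-1, +1))
        return max((-1, +1), key=lambda a: Q[(s, a)])

    for _ in range(episodes):
        s = 0
        a = eps_greedy(s)
        while s != goal:
            s2 = min(max(s + a, 0), goal)   # deterministic move, clipped at the left wall
            r = -1.0
            if s2 == goal:                  # Q(terminal, .) = 0
                Q[(s, a)] += alpha * (r - Q[(s, a)])
                break
            a2 = eps_greedy(s2)             # the action actually taken next -> on-policy
            Q[(s, a)] += alpha * (r + gamma * Q[(s2, a2)] - Q[(s, a)])   # Sarsa update
            s, a = s2, a2
    return Q

Q = sarsa_corridor()
```

Because the bootstrap term uses the action actually chosen by the $\epsilon$-greedy behavior, Sarsa learns the value of the exploring policy itself.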

• Flowchart
• Algorithm description

Initialize $Q(s, a)$ arbitrarily $\forall s \in \mathcal{S}, \forall a \in \mathcal{A}$
Initialize $\pi$ to be $\epsilon$-greedy with respect to Q, or to a fixed given policy
Parameters: step size $\alpha \in (0, 1]$,
small $\epsilon > 0$
a positive integer $n$
All store and access operations (for $S_t$ and $R_t$) can take their index mod $n$

Repeat (for each episode):
Initialize and store $S_0 \ne terminal$
Select and store an action $A_0 \sim \pi(\cdot \mid S_0)$
$T \gets \infty$
For $t = 0, 1, 2, \cdots$:
If $t < T$, then:
Take action $A_t$
Observe and store the next reward as $R_{t+1}$ and the next state as $S_{t+1}$
If $S_{t+1}$ is terminal, then:
$T \gets t+1$
Else:
Select and store an action $A_{t+1} \sim \pi(\cdot \mid S_{t+1})$
$\tau \gets t - n + 1$ ($\tau$ is the time whose state's estimate is being updated)
If $\tau \ge 0$:
$G \gets \sum_{i = \tau + 1}^{\min(\tau + n, T)} \gamma^{i-\tau-1} R_i$
If $\tau + n < T$, then: $G \gets G + \gamma^{n} Q(S_{\tau + n}, A_{\tau + n}) \qquad \qquad (G_{\tau}^{(n)})$
$Q(S_{\tau}, A_{\tau}) \gets Q(S_{\tau}, A_{\tau}) + \alpha [G - Q(S_{\tau}, A_{\tau})]$
If $\pi$ is being learned, then ensure that $\pi(\cdot \mid S_{\tau})$ is $\epsilon$-greedy with respect to $Q$
Until $\tau = T - 1$

## Off-Policy TD Learning of the Action Values $q_{\pi}$: Q-learning

Q-learning (Watkins, 1989) was a breakthrough algorithm. It performs off-policy learning with the update
$Q(S_t, A_t) \gets Q(S_t, A_t) + \alpha [R_{t+1} + \gamma \max_a Q(S_{t+1}, a) - Q(S_t, A_t)]$

• Algorithm description

Initialize $Q(s, a), \forall s \in \mathcal{S}, a \in \mathcal{A}(s)$ arbitrarily, and $Q(terminal, \cdot) = 0$
Repeat (for each episode):
Initialize $S$
Choose $A$ from $S$ using policy derived from $Q$ (e.g. $\epsilon$-greedy)
Repeat (for each step of episode):
Take action $A$, observe $R, S'$
$Q(S, A) \gets Q(S, A) + \alpha [R + \gamma \max_a Q(S', a) - Q(S, A)]$
$S \gets S'$
Until $S$ is terminal
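A matching sketch of Q-learning on the same kind of corridor used for the Sarsa example (environment and parameters are again my own illustrative choices). Note that the behavior policy is $\epsilon$-greedy while the update bootstraps from the greedy max.

```python
import random

def q_learning_corridor(n_states=5, episodes=300, alpha=0.5, gamma=1.0, eps=0.1, seed=2):
    """One-step Q-learning on a corridor: reward -1 per step, terminal at the
    right end, so the optimal values are Q*(s, right) = -(steps to goal)."""
    rng = random.Random(seed)
    goal = n_states - 1
    Q = {(s, a): 0.0 for s in range(n_states) for a in (-1, +1)}
    for _ in range(episodes):
        s = 0
        while s != goal:
            # behavior policy: eps-greedy (exploration)
            if rng.random() < eps:
                a = rng.choice((-1, +1))
            else:
                a = max((-1, +1), key=lambda x: Q[(s, x)])
            s2 = min(max(s + a, 0), goal)
            r = -1.0
            # target policy: greedy max over the next state's actions
            best_next = 0.0 if s2 == goal else max(Q[(s2, -1)], Q[(s2, +1)])
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])   # Q-learning update
            s = s2
    return Q

Q = q_learning_corridor()
```

Because the target uses $\max_a Q(S', a)$ rather than the action actually taken next, the learned values approach the optimal ones regardless of the exploration.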

• Q-learning uses a max, which introduces a maximization bias problem.
For details, see Example 6.7 in the book.
Double Q-learning eliminates this problem.
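The bias is easy to see numerically (a toy simulation of mine, with made-up sample sizes): for several actions whose true values are all 0, the max over noisy sample means is systematically positive.

```python
import random

# All actions have true value 0, but the max over noisy sample means is
# systematically positive -- the maximization bias introduced by the max operator.
rng = random.Random(0)
n_actions, n_samples, n_trials = 5, 10, 2000

bias = 0.0
for _ in range(n_trials):
    # sample mean of 10 rewards ~ N(0, 1) for each of the 5 actions
    means = [sum(rng.gauss(0.0, 1.0) for _ in range(n_samples)) / n_samples
             for _ in range(n_actions)]
    bias += max(means)          # the "greedy" value estimate
bias /= n_trials                # clearly positive, although every true value is 0
```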

## Double Q-learning

Initialize $Q_1(s, a)$ and $Q_2(s, a), \forall s \in \mathcal{S}, a \in \mathcal{A}(s)$ arbitrarily
Initialize $Q_1(terminal, \cdot) = Q_2(terminal, \cdot) = 0$
Repeat (for each episode):
Initialize $S$
Repeat (for each step of episode):
Choose $A$ from $S$ using policy derived from $Q_1$ and $Q_2$ (e.g. $\epsilon$-greedy)
Take action $A$, observe $R, S'$
With 0.5 probability:
$Q_1(S, A) \gets Q_1(S, A) + \alpha [R + \gamma Q_2(S', \underset{a}{argmax} \ Q_1(S', a)) - Q_1(S, A)]$
Else:
$Q_2(S, A) \gets Q_2(S, A) + \alpha [R + \gamma Q_1(S', \underset{a}{argmax} \ Q_2(S', a)) - Q_2(S, A)]$
$S \gets S'$
Until $S$ is terminal
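The branch above can be sketched as a single update function (a sketch of mine; names and the toy transition are assumptions): one table selects the argmax action, the other evaluates it, which decouples selection from evaluation.

```python
import random

def double_q_update(Q1, Q2, s, a, r, s2, actions, alpha, gamma, rng):
    """One Double Q-learning step: with probability 0.5 update Q1 using Q2's
    evaluation of Q1's argmax action, otherwise the symmetric update of Q2."""
    if rng.random() < 0.5:
        a_star = max(actions, key=lambda x: Q1[(s2, x)])   # select with Q1
        Q1[(s, a)] += alpha * (r + gamma * Q2[(s2, a_star)] - Q1[(s, a)])  # evaluate with Q2
    else:
        a_star = max(actions, key=lambda x: Q2[(s2, x)])   # select with Q2
        Q2[(s, a)] += alpha * (r + gamma * Q1[(s2, a_star)] - Q2[(s, a)])  # evaluate with Q1

# toy usage: two actions, one observed transition s -> s2 with reward 1
rng = random.Random(0)
Q1 = {("s", 0): 0.0, ("s", 1): 0.0, ("s2", 0): 1.0, ("s2", 1): 0.0}
Q2 = {k: 0.0 for k in Q1}
double_q_update(Q1, Q2, "s", 0, r=1.0, s2="s2", actions=(0, 1),
                alpha=0.5, gamma=0.9, rng=rng)
```

Exactly one of the two tables changes per step, so each estimator is trained on (roughly) half the data.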

## Off-Policy TD Learning of the Action Values $q_{\pi}$ (with importance sampling): Sarsa

$\rho$ - the importance sampling ratio:
$\rho \gets \prod_{i = \tau + 1}^{\min(\tau + n - 1, T - 1)} \frac{\pi(A_i \mid S_i)}{\mu(A_i \mid S_i)} \qquad \qquad (\rho_{\tau+1}^{\tau+n-1})$

• Algorithm description

Input: behavior policy $\mu$ such that $\mu(a \mid s) > 0, \forall s \in \mathcal{S}, a \in \mathcal{A}$
Initialize $Q(s, a)$ arbitrarily $\forall s \in \mathcal{S}, \forall a \in \mathcal{A}$
Initialize $\pi$ to be $\epsilon$-greedy with respect to Q, or to a fixed given policy
Parameters: step size $\alpha \in (0, 1]$,
small $\epsilon > 0$
a positive integer $n$
All store and access operations (for $S_t$ and $R_t$) can take their index mod $n$

Repeat (for each episode):
Initialize and store $S_0 \ne terminal$
Select and store an action $A_0 \sim \mu(\cdot \mid S_0)$
$T \gets \infty$
For $t = 0, 1, 2, \cdots$:
If $t < T$, then:
Take action $A_t$
Observe and store the next reward as $R_{t+1}$ and the next state as $S_{t+1}$
If $S_{t+1}$ is terminal, then:
$T \gets t+1$
Else:
Select and store an action $A_{t+1} \sim \mu(\cdot \mid S_{t+1})$
$\tau \gets t - n + 1$ ($\tau$ is the time whose state's estimate is being updated)
If $\tau \ge 0$:
$\rho \gets \prod_{i = \tau + 1}^{\min(\tau + n - 1, T - 1)} \frac{\pi(A_i \mid S_i)}{\mu(A_i \mid S_i)} \qquad \qquad (\rho_{\tau+1}^{\tau+n-1})$
$G \gets \sum_{i = \tau + 1}^{\min(\tau + n, T)} \gamma^{i-\tau-1} R_i$
If $\tau + n < T$, then: $G \gets G + \gamma^{n} Q(S_{\tau + n}, A_{\tau + n}) \qquad \qquad (G_{\tau}^{(n)})$
$Q(S_{\tau}, A_{\tau}) \gets Q(S_{\tau}, A_{\tau}) + \alpha \rho [G - Q(S_{\tau}, A_{\tau})]$
If $\pi$ is being learned, then ensure that $\pi(\cdot \mid S_{\tau})$ is $\epsilon$-greedy with respect to $Q$
Until $\tau = T - 1$
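The truncated ratio can be computed exactly as in the formula (the helper and the dict-based storage layout are my own assumptions):

```python
def importance_ratio(pi, mu, actions, states, tau, n, T):
    """rho = product over i = tau+1 .. min(tau+n-1, T-1) of pi(A_i|S_i) / mu(A_i|S_i).
    pi and mu map (action, state) pairs to probabilities; actions[i] and
    states[i] are the stored A_i and S_i, indexed by time."""
    rho = 1.0
    for i in range(tau + 1, min(tau + n - 1, T - 1) + 1):
        rho *= pi[(actions[i], states[i])] / mu[(actions[i], states[i])]
    return rho

# toy check: two steps where pi/mu are 1.0/0.5 and 0.5/0.5
pi = {("a1", "x"): 1.0, ("a2", "y"): 0.5}
mu = {("a1", "x"): 0.5, ("a2", "y"): 0.5}
rho = importance_ratio(pi, mu, actions={1: "a1", 2: "a2"}, states={1: "x", 2: "y"},
                       tau=0, n=3, T=100)
# rho = (1.0 / 0.5) * (0.5 / 0.5) = 2.0
```

A ratio above 1 up-weights the update because the target policy $\pi$ would have taken the observed actions more often than the behavior policy $\mu$ did.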

• Flowchart
• Algorithm description
Omitted.

## Off-Policy TD Learning of the Action Values $q_{\pi}$ (without importance sampling): Tree Backup Algorithm

The idea of the Tree Backup Algorithm is to back up the expected action value at every step.

• Flowchart
• Algorithm description

Initialize $Q(s, a)$ arbitrarily $\forall s \in \mathcal{S}, \forall a \in \mathcal{A}$
Initialize $\pi$ to be $\epsilon$-greedy with respect to Q, or to a fixed given policy
Parameters: step size $\alpha \in (0, 1]$,
small $\epsilon > 0$
a positive integer $n$
All store and access operations (for $S_t$ and $R_t$) can take their index mod $n$

Repeat (for each episode):
Initialize and store $S_0 \ne terminal$
Select and store an action $A_0 \sim \pi(\cdot \mid S_0)$
$Q_0 \gets Q(S_0, A_0)$
$T \gets \infty$
For $t = 0, 1, 2, \cdots$:
If $t < T$, then:
Take action $A_t$
Observe and store the next reward as $R_{t+1}$ and the next state as $S_{t+1}$
If $S_{t+1}$ is terminal, then:
$T \gets t+1$
$\delta_t \gets R_{t+1} - Q_t$
Else:
$\delta_t \gets R_{t+1} + \gamma \sum_a \pi(a \mid S_{t+1}) Q(S_{t+1}, a) - Q_t$
Select arbitrarily and store an action as $A_{t+1}$
$Q_{t+1} \gets Q(S_{t+1}, A_{t+1})$
$\pi_{t+1} \gets \pi(A_{t+1} \mid S_{t+1})$
$\tau \gets t - n + 1$ ($\tau$ is the time whose state's estimate is being updated)
If $\tau \ge 0$:
$E \gets 1$
$G \gets Q_{\tau}$
For $k = \tau, \dots, \min(\tau + n - 1, T - 1)$:
$G \gets G + E \delta_k$
$E \gets \gamma E \pi_{k+1}$
$Q(S_{\tau}, A_{\tau}) \gets Q(S_{\tau}, A_{\tau}) + \alpha [G - Q(S_{\tau}, A_{\tau})]$
If $\pi$ is being learned, then ensure that $\pi(a \mid S_{\tau})$ is $\epsilon$-greedy with respect to $Q(S_{\tau}, \cdot)$
Until $\tau = T - 1$
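The inner accumulation of $G$ can be isolated as a helper (my sketch; the dict-based storage is an assumption):

```python
def tree_backup_return(Q_tau, deltas, pis, tau, n, T, gamma):
    """G = Q_tau + sum_k E_k * delta_k, where the weight E is multiplied by
    gamma * pi(A_{k+1}|S_{k+1}) after each step.
    deltas[k] holds delta_k and pis[k] holds pi(A_k|S_k)."""
    G, E = Q_tau, 1.0
    for k in range(tau, min(tau + n - 1, T - 1) + 1):
        G += E * deltas[k]
        E *= gamma * pis[k + 1]
    return G

# toy check with tau = 0, n = 2, gamma = 1
G = tree_backup_return(Q_tau=1.0, deltas={0: 0.5, 1: -0.2}, pis={1: 0.5, 2: 0.3},
                       tau=0, n=2, T=100, gamma=1.0)
# G = 1.0 + 1.0 * 0.5 + (1.0 * 0.5) * (-0.2) = 1.4
```

Because each correction $\delta_k$ is weighted by the target policy's probabilities rather than by an importance ratio, no behavior-policy probabilities appear at all.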

## Off-Policy TD Learning of the Action Values $q_{\pi}$: $Q(\sigma)$

$Q(\sigma)$ unifies the Sarsa (with importance sampling), Expected Sarsa, and Tree Backup algorithms, taking importance sampling into account.
With $\sigma = 1$, it becomes the importance-sampling Sarsa algorithm.
With $\sigma = 0$, it becomes the Tree Backup algorithm, which backs up expected action values.

• Flowchart
• Algorithm description

Input: behavior policy $\mu$ such that $\mu(a \mid s) > 0, \forall s \in \mathcal{S}, a \in \mathcal{A}$
Initialize $Q(s, a)$ arbitrarily $\forall s \in \mathcal{S}, \forall a \in \mathcal{A}$
Initialize $\pi$ to be $\epsilon$-greedy with respect to Q, or to a fixed given policy
Parameters: step size $\alpha \in (0, 1]$,
small $\epsilon > 0$
a positive integer $n$
All store and access operations (for $S_t$ and $R_t$) can take their index mod $n$

Repeat (for each episode):
Initialize and store $S_0 \ne terminal$
Select and store an action $A_0 \sim \mu(\cdot \mid S_0)$
$Q_0 \gets Q(S_0, A_0)$
$T \gets \infty$
For $t = 0, 1, 2, \cdots$:
If $t < T$, then:
Take action $A_t$
Observe and store the next reward as $R_{t+1}$ and the next state as $S_{t+1}$
If $S_{t+1}$ is terminal, then:
$T \gets t+1$
$\delta_t \gets R_{t+1} - Q_t$
Else:
Select and store an action $A_{t+1} \sim \mu(\cdot \mid S_{t+1})$
Select and store $\sigma_{t+1}$
$Q_{t+1} \gets Q(S_{t+1}, A_{t+1})$
$\delta_t \gets R_{t+1} + \gamma \sigma_{t+1} Q_{t+1} + \gamma (1 - \sigma_{t+1}) \sum_a \pi(a \mid S_{t+1}) Q(S_{t+1}, a) - Q_t$
$\pi_{t+1} \gets \pi(A_{t+1} \mid S_{t+1})$
$\rho_{t+1} \gets \frac{\pi(A_{t+1} \mid S_{t+1})}{\mu(A_{t+1} \mid S_{t+1})}$
$\tau \gets t - n + 1$ ($\tau$ is the time whose state's estimate is being updated)
If $\tau \ge 0$:
$\rho \gets 1$
$E \gets 1$
$G \gets Q_{\tau}$
For $k = \tau, \dots, \min(\tau + n - 1, T - 1)$:
$G \gets G + E \delta_k$
$E \gets \gamma E [(1 - \sigma_{k+1}) \pi_{k+1} + \sigma_{k+1}]$
$\rho \gets \rho (1 - \sigma_k + \sigma_k \rho_k)$
$Q(S_{\tau}, A_{\tau}) \gets Q(S_{\tau}, A_{\tau}) + \alpha \rho [G - Q(S_{\tau}, A_{\tau})]$
If $\pi$ is being learned, then ensure that $\pi(a \mid S_{\tau})$ is $\epsilon$-greedy with respect to $Q(S_{\tau}, \cdot)$
Until $\tau = T - 1$
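A one-line helper (mine; the function name is an assumption) makes explicit how $\sigma$ interpolates the factor applied to $E$ in the inner loop:

```python
def trace_multiplier(gamma, pi_next, sigma_next):
    # the factor applied to E each step: gamma * [(1 - sigma_{k+1}) * pi_{k+1} + sigma_{k+1}]
    return gamma * ((1 - sigma_next) * pi_next + sigma_next)

e_sarsa = trace_multiplier(0.9, 0.5, 1.0)  # sigma = 1: gamma alone, full sampling (Sarsa-like)
e_tree = trace_multiplier(0.9, 0.5, 0.0)   # sigma = 0: gamma * pi, pure expectation (Tree Backup)
```

Intermediate $\sigma$ values blend per-step sampling with per-step expectation, and $\sigma$ may even be chosen state by state.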


posted @ 2017-03-09 15:23 SNYang