
Introduction

I have not been working on reinforcement learning for a while, and it seems that I could not remember what on-policy and off-policy mean in reinforcement learning and what the difference between these two is. Most of the explanations online bluff too much, and I don’t think they answer the questions directly.


I also have to apologize that I have taken several good images from Sutton’s latest book “Reinforcement Learning: An Introduction”, 2nd Edition.

Algorithms

Let us first review Q-Learning, a typical off-policy algorithm, and SARSA, a typical on-policy algorithm.

Q-Learning

[Q-Learning algorithm pseudocode, from Sutton’s book]
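
Below is a minimal tabular Q-Learning sketch in Python. The environment interface (num_states, num_actions, reset(), and step() returning (next_state, reward, done)) and the hyperparameters are my own assumptions for illustration; this is not the book’s code.

```python
import numpy as np

def epsilon_greedy(Q, state, epsilon, rng):
    """Behavior policy: random action with probability epsilon,
    otherwise the greedy action under the current Q estimates."""
    if rng.random() < epsilon:
        return int(rng.integers(Q.shape[1]))
    return int(np.argmax(Q[state]))

def q_learning(env, num_episodes, alpha=0.5, gamma=1.0, epsilon=0.1, seed=0):
    # env is assumed to expose num_states, num_actions, reset(), and
    # step(action) -> (next_state, reward, done); purely illustrative.
    rng = np.random.default_rng(seed)
    Q = np.zeros((env.num_states, env.num_actions))
    for _ in range(num_episodes):
        state = env.reset()
        done = False
        while not done:
            action = epsilon_greedy(Q, state, epsilon, rng)  # behave
            next_state, reward, done = env.step(action)
            # The update target uses the greedy (max) action in the next
            # state, regardless of what the behavior policy does next.
            td_target = reward + gamma * np.max(Q[next_state]) * (not done)
            Q[state, action] += alpha * (td_target - Q[state, action])
            state = next_state
    return Q
```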

SARSA

[SARSA algorithm pseudocode, from Sutton’s book]
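
And here is a matching SARSA sketch under the same assumptions, reusing the epsilon_greedy helper from the Q-Learning sketch above. The only structural change is that the next action is chosen by the same $\epsilon$-greedy policy before the update and is then actually executed.

```python
def sarsa(env, num_episodes, alpha=0.5, gamma=1.0, epsilon=0.1, seed=0):
    # Same illustrative environment interface as the Q-Learning sketch above.
    rng = np.random.default_rng(seed)
    Q = np.zeros((env.num_states, env.num_actions))
    for _ in range(num_episodes):
        state = env.reset()
        action = epsilon_greedy(Q, state, epsilon, rng)
        done = False
        while not done:
            next_state, reward, done = env.step(action)
            # The next action is sampled from the same epsilon-greedy policy
            # that the agent will actually follow in the next step.
            next_action = epsilon_greedy(Q, next_state, epsilon, rng)
            td_target = reward + gamma * Q[next_state, next_action] * (not done)
            Q[state, action] += alpha * (td_target - Q[state, action])
            state, action = next_state, next_action
    return Q
```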

Off-Policy VS On-Policy

We can see that the major difference is that Q-Learning uses

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \left[ R_{t+1} + \gamma \max_{a} Q(S_{t+1}, a) - Q(S_t, A_t) \right]$$

to update the Q value, while SARSA uses

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \left[ R_{t+1} + \gamma Q(S_{t+1}, A_{t+1}) - Q(S_t, A_t) \right]$$

to update the Q value, where $A_{t+1}$ is the action actually selected by the behavior policy in state $S_{t+1}$.


Let us introduce two concepts first: update policy and behavior policy.


The update policy is how the agent learns the optimal policy (the policy assumed in the update target), and the behavior policy is how the agent actually behaves (how it selects actions in the environment).


In Q-Learning, the agent learns the optimal policy using the absolutely greedy policy in the update, but behaves using another policy such as the $\epsilon$-greedy policy. Because the update policy is different from the behavior policy, Q-Learning is off-policy.


In SARSA, the agent learns the optimal policy and behaves using the same policy, such as the $\epsilon$-greedy policy. Because the update policy is the same as the behavior policy, SARSA is on-policy.
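
To make the distinction concrete, here is a tiny numerical illustration; the Q table and the transition below are made up purely for illustration.

```python
import numpy as np

# A made-up Q table with 2 states and 2 actions, purely for illustration.
Q = np.array([[0.0, 1.0],
              [2.0, 0.5]])
reward, gamma = -1.0, 1.0
next_state = 1
next_action = 1  # suppose the epsilon-greedy behavior policy picked action 1

# Q-Learning (off-policy): the target uses the greedy action in next_state.
q_learning_target = reward + gamma * np.max(Q[next_state])         # -1 + 2.0 = 1.0

# SARSA (on-policy): the target uses the action the behavior policy took.
sarsa_target = reward + gamma * Q[next_state, next_action]         # -1 + 0.5 = -0.5
```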

Cliff Walking Game

I took the Cliff Walking game from Sutton’s book. It seems that Q-Learning with $\epsilon$-greedy actions learns the values for the optimal policy, but the agent sometimes falls off the cliff due to the $\epsilon$-greedy exploration. SARSA with $\epsilon$-greedy actions learns the values for a less optimal but safer policy.

[The Cliff Walking gridworld, from Sutton’s book]

To me, it seems that Q-Learning with $\epsilon$-greedy actions will be less stable (less likely to converge) during learning in some environments, since its overall reward per episode is lower and fluctuates more, but it is more likely to learn an optimal policy. SARSA, on the other hand, will be more stable during learning but is less likely to learn an optimal policy.

[Sum of rewards per episode for Q-Learning and SARSA on Cliff Walking, from Sutton’s book]

According to Sutton, “if $\epsilon$ were gradually reduced, then both methods would asymptotically converge to the optimal policy.”
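
If you would like to reproduce the experiment yourself, here is a rough sketch using Gymnasium’s CliffWalking-v0 environment (my choice of environment implementation, not the book’s code). Swapping np.max(Q[next_state]) in the target for Q[next_state, next_action], with next_action chosen $\epsilon$-greedily as in the SARSA sketch above, turns it into SARSA.

```python
import gymnasium as gym
import numpy as np

def q_learning_cliff(num_episodes=500, alpha=0.5, gamma=1.0, epsilon=0.1, seed=0):
    """Tabular Q-Learning on CliffWalking-v0, tracking the sum of rewards
    per episode so the learning curves can be plotted."""
    env = gym.make("CliffWalking-v0")
    rng = np.random.default_rng(seed)
    Q = np.zeros((env.observation_space.n, env.action_space.n))
    episode_returns = []
    for _ in range(num_episodes):
        state, _ = env.reset()
        done, episode_return = False, 0.0
        while not done:
            # epsilon-greedy behavior policy
            if rng.random() < epsilon:
                action = int(rng.integers(env.action_space.n))
            else:
                action = int(np.argmax(Q[state]))
            next_state, reward, terminated, truncated, _ = env.step(action)
            done = terminated or truncated
            # Off-policy target; for SARSA use Q[next_state, next_action] instead.
            td_target = reward + gamma * np.max(Q[next_state]) * (not terminated)
            Q[state, action] += alpha * (td_target - Q[state, action])
            state = next_state
            episode_return += reward
        episode_returns.append(episode_return)
    return Q, episode_returns
```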

Final Remarks

I actually have not seen any formal definition of on-policy and off-policy in authoritative publications (who invented these terms?). What if I modified the Q-Learning algorithm to also behave using the absolutely greedy policy? Would this new algorithm then be on-policy?