Pick your favorite single-player computer game, for example, WeChat 'Jump a jump'. How would you model the problem of learning to play this game (without needing to know the physics of the game) as a reinforcement learning problem? Define the state space, action space, transition model, reward function, and goal of the underlying MDP.
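One possible formulation (a sketch under assumptions, since the game's real dynamics are unknown): the state is the distance from the piece to the next platform, the action is a discretized press duration, the reward is +1 per successful landing, and the episode ends on a miss. The environment class and its made-up physics below are purely illustrative:

```python
import random

# Hedged sketch of "Jump a jump" as an MDP. All names and the transition
# dynamics here are illustrative assumptions, not the game's real physics.
#   state  : distance from the piece to the next platform's center
#   action : press duration in milliseconds (discretized)
#   reward : +1 for landing; missing ends the episode with reward 0

class JumpEnv:
    ACTIONS = [200, 400, 600, 800]  # candidate press durations (ms)

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.state = self.rng.uniform(0.5, 2.0)  # distance to next platform

    def step(self, action_ms):
        # Assumed transition: jump length grows with press time plus noise;
        # landing within 0.25 of the platform center counts as a success.
        jump_len = action_ms / 400.0 + self.rng.gauss(0, 0.1)
        landed = abs(jump_len - self.state) < 0.25
        if landed:
            self.state = self.rng.uniform(0.5, 2.0)  # next platform appears
            return self.state, 1.0, False   # (next state, reward, done)
        return self.state, 0.0, True

env = JumpEnv()
s, r, done = env.step(400)
```

The goal of the underlying MDP is then to maximize the expected (discounted) number of consecutive successful jumps.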

Markov Decision Process (MDP)

Reinforcement learning

Markov decision process

    A Markov decision process relies on the Markov assumption: the probability of the next state s_(i+1) depends only on the current state s_i and the action a_i, not on any earlier states or actions.
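The assumption can be made concrete with a toy transition table (the states and probabilities below are made up for illustration): the model is keyed only by the pair (state, action), so history never enters the picture.

```python
import random

# Minimal illustration of the Markov assumption: the transition model is a
# table keyed only by (state, action) -- earlier states never appear.
P = {
    ("s0", "a0"): [("s0", 0.3), ("s1", 0.7)],
    ("s0", "a1"): [("s1", 1.0)],
    ("s1", "a0"): [("s0", 1.0)],
    ("s1", "a1"): [("s1", 1.0)],
}

def next_state(s, a, rng=random.Random(0)):
    # P(s' | s, a) is the same regardless of how we arrived at s
    states, probs = zip(*P[(s, a)])
    return rng.choices(states, weights=probs)[0]
```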

A good strategy for an agent is to always choose the action that maximizes the (discounted) future reward Q(s, a).
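Written out, the discounted future reward and the optimal action-value function satisfy:

```latex
R_t = r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + \dots = \sum_{k=0}^{\infty} \gamma^k r_{t+k},
\qquad
Q^*(s, a) = \mathbb{E}\!\left[\, r + \gamma \max_{a'} Q^*(s', a') \,\right]
```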

An improved two-dimensional state space based on Simple_Q_table Q-learning
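A minimal tabular Q-learning sketch over a 2-D grid state space is shown below. The post's Simple_Q_table code itself is not reproduced here, so the grid, hyperparameters, and reward are assumptions chosen for illustration:

```python
import random
from collections import defaultdict

# Tabular Q-learning on a 4x4 grid (illustrative sketch).
alpha, gamma, eps = 0.5, 0.9, 0.1     # step size, discount, exploration rate
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]
GOAL = (3, 3)
Q = defaultdict(float)                # Q[(state, action)] -> estimated return
rng = random.Random(0)

def step(s, a):
    # move on the grid, clamped to the 4x4 board; reward 1 at the goal
    ns = (min(max(s[0] + a[0], 0), 3), min(max(s[1] + a[1], 0), 3))
    return ns, (1.0 if ns == GOAL else 0.0), ns == GOAL

for _ in range(500):                  # training episodes
    s = (0, 0)
    for _ in range(200):              # step cap per episode
        if rng.random() < eps:        # epsilon-greedy action choice
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda b: Q[(s, b)])
        ns, r, done = step(s, a)
        # update toward the TD target r + gamma * max_a' Q(ns, a')
        target = r + gamma * max(Q[(ns, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = ns
        if done:
            break
```

The table grows with the state space, which is exactly why the next section swaps it for a function approximator.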

Environment + reinforcement + improving the efficiency of the training loop

    

Results


Deep Q-Network(DQN)

Deep Q-Network Algorithm:

Initialize replay memory D to size N

Initialize action-value function Q with random weights θ

for episode = 1, M do

    Initialize state s_1

    for t = 1, T do

        With probability ϵ select random action a_t

        otherwise select a_t = argmax_a Q(s_t, a; θ)

        Execute action a_t in emulator and observe r_t and s_(t+1)

        Store transition (s_t,a_t,r_t,s_(t+1)) in D

        Sample a random minibatch of transitions (s_j, a_j, r_j, s_(j+1)) from D

        Set y_j :=

            r_j for terminal s_(j+1)

            r_j + γ * max_(a') Q(s_(j+1), a'; θ) for non-terminal s_(j+1)

        Perform a gradient descent step on (y_j - Q(s_j, a_j; θ))^2 with respect to θ

    end for

end for
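The loop above can be sketched in plain Python. To keep the example dependency-free, the "network" here is a small weight table theta (equivalently, a linear Q over one-hot states) on a toy 1-D chain; the environment and hyperparameters are assumptions, but the replay memory and TD-target mechanics follow the pseudocode:

```python
import random
from collections import deque

# Dependency-free sketch of the DQN training loop on a 4-state chain.
N_STATES, ACTIONS = 4, (0, 1)        # states 0..3; actions: 0=left, 1=right
gamma, lr, eps = 0.9, 0.1, 0.3
theta = [[0.0, 0.0] for _ in range(N_STATES)]   # stand-in for network weights
D = deque(maxlen=100)                # replay memory D of size N
rng = random.Random(0)

def q(s, a):
    return theta[s][a]

def env_step(s, a):                  # reaching the right end pays reward 1
    ns = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    return ns, (1.0 if ns == N_STATES - 1 else 0.0), ns == N_STATES - 1

for episode in range(300):
    s = 0
    for t in range(30):
        # with probability eps select a random action, otherwise greedy
        if rng.random() < eps:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda b: q(s, b))
        ns, r, done = env_step(s, a)
        D.append((s, a, r, ns, done))            # store transition in D
        # sample a minibatch; gradient step on (y_j - Q(s_j, a_j))^2
        for sj, aj, rj, nsj, dj in rng.sample(list(D), min(len(D), 4)):
            y = rj if dj else rj + gamma * max(q(nsj, b) for b in ACTIONS)
            theta[sj][aj] += lr * (y - q(sj, aj))
        s = ns
        if done:
            break
```

After training, the learned values should prefer moving right toward the rewarded end of the chain.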

Network Architecture
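As a concrete reference point (the post does not spell out its own network, so this follows the convolutional stack from the original DQN papers by Mnih et al.), the output-size arithmetic for 84×84×4 stacked-frame inputs works out as follows, using the valid-convolution rule out = (in - kernel) // stride + 1:

```python
# Conv output-size arithmetic for the classic DQN architecture (assumed
# here as a reference; the post's own network may differ).

def conv_out(size, kernel, stride):
    # valid (no-padding) convolution output size
    return (size - kernel) // stride + 1

layers = [          # (kernel, stride, filters)
    (8, 4, 32),
    (4, 2, 64),
    (3, 1, 64),
]

size, shapes = 84, []          # 84x84x4 stacked-frame input
for k, s, f in layers:
    size = conv_out(size, k, s)
    shapes.append((size, size, f))

flat = size * size * 64        # flattened input to the 512-unit FC layer
```

This yields feature maps of 20×20×32, 9×9×64, and 7×7×64, so the fully connected layer sees 3136 inputs, followed by an output unit per action.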


Results:




