Introduction

  To classify spiking reinforcement learning, we first need to understand the categories of both SNN learning algorithms and reinforcement learning itself.

A Summary of Spiking Reinforcement Learning (Continuously Updated)
    




Learning in Spiking Neural Networks by Reinforcement of Stochastic Synaptic Transmission
A reinforcement learning algorithm for spiking neural networks
Reinforcement Learning Through Modulation of Spike-Timing-Dependent Synaptic Plasticity
Solving the Distal Reward Problem through Linkage of STDP and Dopamine Signaling
A Spiking Neural Network Model of an Actor-Critic Learning Agent
Reinforcement learning in populations of spiking neurons
Spike-Based Reinforcement Learning in Continuous State and Action Space: When Policy Gradient Methods Fail
Functional Requirements for Reward-Modulated Spike-Timing-Dependent Plasticity
Reinforcement Learning Using a Continuous Time Actor-Critic Framework with Spiking Neurons
A neural reinforcement learning model for tasks with unknown time delays
A Spiking Neural Model for Stable Reinforcement of Synapses Based on Multiple Distal Rewards
Biologically Inspired SNN for Robot Control
Reinforcement learning in cortical networks
Computation by Time
A Spiking Neural Network Model of Model-Free Reinforcement Learning with High-Dimensional Sensory Input and Perceptual Ambiguity
Neuromodulated Spike-Timing-Dependent Plasticity, and Theory of Three-Factor Learning Rules
Navigating Mobile Robots to Target in Near Shortest Time using Reinforcement Learning with Spiking Neural Networks
A Survey of Robotics Control Based on Learning-Inspired Spiking Neural Networks
End to End Learning of Spiking Neural Network based on R-STDP for a Lane Keeping Vehicle
Improved robustness of reinforcement learning policies upon conversion to spiking neuronal network platforms applied to Atari Breakout game
Reinforcement Learning in Spiking Neural Networks with Stochastic and Deterministic Synapses
Embodied Synaptic Plasticity with Online Reinforcement learning
Reinforcement co-Learning of Deep and Spiking Neural Networks for Energy-Efficient Mapless Navigation with Neuromorphic Hardware
Deep Reinforcement Learning with Population-Coded Spiking Neural Network for Continuous Control
Training spiking neural networks for reinforcement learning
A solution to the learning dilemma for recurrent networks of spiking neurons
Reinforcement Learning with Feedback-modulated TD-STDP
Strategy and Benchmark for Converting Deep Q-Networks to Event-Driven Spiking Neural Networks
Population-coding and Dynamic-neurons improved Spiking Actor Network for Reinforcement Learning
Combining STDP and binary networks for reinforcement learning from images and sparse rewards
On-chip trainable hardware-based deep Q-networks approximating a backpropagation algorithm
Multi-timescale biological learning algorithms train spiking neuronal network motor control
A Dual-Memory Architecture for Reinforcement Learning on Neuromorphic Platforms
BindsNET学习系列 —— LearningRule
BindsNET学习系列 —— Nodes

A non-exhaustive, but useful taxonomy of algorithms in modern RL.

Image source: OpenAI Spinning Up (https://spinningup.openai.com/en/latest/spinningup/rl_intro2.html#citations-below)

 

Reinforcement learning algorithms:

Reference: Part 2: Kinds of RL Algorithms — Spinning Up documentation (openai.com)

 

Model-Free vs Model-Based RL

  One of the most important branching points in an RL algorithm is the question of whether the agent has access to (or learns) a model of the environment. By a model of the environment, we mean a function that predicts state transitions and rewards.

  The main upside to having a model is that it allows the agent to plan by thinking ahead, seeing what would happen for a range of possible choices, and explicitly deciding between its options. The agent can then distill the results of planning ahead into a learned policy. A particularly famous example of this approach is AlphaZero. When it works, it can yield a substantial improvement in sample efficiency over methods that do not have a model.

  The main downside is that a ground-truth model of the environment is usually not available to the agent. If an agent wants to use a model in this case, it has to learn the model purely from experience, which creates several challenges. The biggest challenge is that bias in the model can be exploited by the agent, resulting in an agent that performs well with respect to the learned model but behaves sub-optimally (or terribly) in the real environment. Model learning is fundamentally hard, so even a huge effort, willing to invest lots of time and compute, can fail to pay off.

  Algorithms that use a model are called model-based methods, and those that do not are called model-free. While model-free methods forgo the potential gains in sample efficiency from using a model, they tend to be easier to implement and tune. As of now, model-free methods are more popular and have been more extensively developed and tested than model-based methods.

 

What to Learn

  Another critical branching point in an RL algorithm is the question of what to learn. The usual list of candidates includes:

  • policies, either stochastic or deterministic,
  • action-value functions (Q-functions),
  • value functions,
  • and/or environment models.

 

What to Learn in Model-Free RL

  This article is mainly concerned with model-free RL. There are two main approaches to representing and training agents with model-free RL:

Policy Optimization. Methods in this family represent a policy explicitly as πθ(a|s). They optimize the parameters θ either directly by gradient ascent on the performance objective J(πθ), or indirectly by maximizing local approximations of J(πθ). This optimization is almost always performed on-policy, which means that each update only uses data collected while acting according to the most recent version of the policy. Policy optimization also usually involves learning an approximator VΦ(s) for the on-policy value function Vπ(s), which is used in figuring out how to update the policy.

  A couple of examples of policy optimization methods are:

  • A2C/A3C, which performs gradient ascent to directly maximize performance,
  • and PPO, whose updates maximize performance indirectly, by maximizing a surrogate objective function that gives a conservative estimate of how much J(πθ) will change as a result of the update.
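The on-policy gradient-ascent update shared by this family can be sketched as a minimal REINFORCE-style step on a tabular softmax policy (a toy illustration with assumed sizes and names, not any specific library's API):

```python
import numpy as np

n_states, n_actions = 4, 2
theta = np.zeros((n_states, n_actions))  # policy parameters

def pi(s):
    """Softmax policy pi_theta(a|s)."""
    z = np.exp(theta[s] - theta[s].max())
    return z / z.sum()

def reinforce_step(trajectory, lr=0.1, gamma=0.99):
    """Gradient ascent on J(pi_theta) from on-policy returns.

    trajectory: list of (state, action, reward) collected while acting
    with the CURRENT policy -- on-policy data only.
    """
    G = 0.0
    for s, a, r in reversed(trajectory):  # accumulate returns backwards
        G = r + gamma * G
        probs = pi(s)
        grad_log = -probs        # d log pi(a|s) / d theta[s]
        grad_log[a] += 1.0
        theta[s] += lr * G * grad_log

# one toy trajectory: action 1 in state 0 earned a reward
reinforce_step([(0, 1, 1.0)])
assert pi(0)[1] > 0.5  # the policy now prefers the rewarded action
```

This is the "directly optimize for what you want" property discussed below: the update pushes probability mass toward actions with high return, with no intermediate value target required.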

Q-Learning. Methods in this family learn an approximator Qθ(s, a) for the optimal action-value function, Q*(s, a). Typically they use an objective function based on the Bellman equation. This optimization is almost always performed off-policy, which means that each update can use data collected at any point during training, regardless of how the agent was choosing to explore the environment when the data was obtained. The corresponding policy is obtained via the connection between Q* and π*: the actions taken by the Q-learning agent are given by

a(s) = argmax_a Qθ(s, a)

  Examples of Q-learning methods include:

  • DQN, a classic which substantially launched the field of deep RL,
  • and C51, a variant that learns a distribution over return whose expectation is Q*.
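The Bellman-based objective behind this family can be sketched as a minimal tabular Q-learning update (sizes and names are illustrative assumptions):

```python
import numpy as np

n_states, n_actions, gamma, lr = 3, 2, 0.9, 0.5
Q = np.zeros((n_states, n_actions))

def q_update(s, a, r, s_next, done):
    """One Q-learning step toward the Bellman target.

    The transition (s, a, r, s_next) may come from ANY point of
    training (e.g. a replay buffer) -- hence off-policy.
    """
    target = r if done else r + gamma * Q[s_next].max()
    Q[s, a] += lr * (target - Q[s, a])

q_update(0, 1, 1.0, 2, done=False)   # Q[0, 1] moves halfway toward 1.0
greedy_action = int(Q[0].argmax())   # induced policy: a(s) = argmax_a Q(s, a)
```

The last line is exactly the Q*-to-π* connection stated above: the policy is never represented separately; it is read off the learned Q-function.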

Trade-offs Between Policy Optimization and Q-Learning. The primary strength of policy optimization methods is that they are principled, in the sense that you directly optimize for the thing you want. This tends to make them stable and reliable. By contrast, Q-learning methods only indirectly optimize for agent performance, by training Qθ to satisfy a self-consistency equation. There are many failure modes for this kind of learning, so it tends to be less stable. [1] But Q-learning methods gain the advantage of being substantially more sample efficient when they do work, because they can reuse data more effectively than policy optimization techniques.

Interpolating Between Policy Optimization and Q-Learning. Serendipitously, policy optimization and Q-learning are not incompatible (and under some circumstances, it turns out, equivalent), and there exist a range of algorithms that live in between the two extremes. Algorithms on this spectrum are able to carefully trade off between the strengths and weaknesses of either side. Examples include:

  • DDPG, an algorithm which concurrently learns a deterministic policy and a Q-function, using each to improve the other,
  • TD3: while DDPG can achieve great performance sometimes, it is frequently brittle with respect to hyperparameters and other kinds of tuning. A common failure mode of DDPG is that the learned Q-function begins to dramatically overestimate Q-values, which then leads the policy to break, because it exploits the errors in the Q-function. Twin Delayed DDPG (TD3) is an algorithm that addresses this issue by introducing three critical tricks, yielding performance substantially better than baseline DDPG:
    • Trick One: Clipped Double-Q Learning. TD3 learns two Q-functions instead of one (hence "twin"), and uses the smaller of the two Q-values to form the target in the Bellman error loss function.
    • Trick Two: "Delayed" Policy Updates. TD3 updates the policy (and target networks) less frequently than the Q-functions. The paper recommends one policy update for every two Q-function updates.
    • Trick Three: Target Policy Smoothing. TD3 adds noise to the target action, making it harder for the policy to exploit Q-function errors by smoothing out Q along changes in action.
  • SAC, a variant which uses stochastic policies, entropy regularization, and a few other tricks to stabilize learning and score higher than DDPG on standard benchmarks.
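Tricks One and Three meet in the computation of the TD3 Bellman target; a numpy sketch (all function and parameter names here are hypothetical placeholders, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.99

def td3_target(r, s_next, q1, q2, target_policy,
               noise_std=0.2, noise_clip=0.5, act_limit=1.0):
    """TD3 Bellman target: target policy smoothing + clipped double-Q.

    q1, q2: the two target Q-functions ("twin");
    target_policy: the target actor.
    """
    # Trick Three: add clipped noise to the target action
    eps = np.clip(rng.normal(0.0, noise_std), -noise_clip, noise_clip)
    a_next = np.clip(target_policy(s_next) + eps, -act_limit, act_limit)
    # Trick One: use the smaller of the two Q estimates
    q_min = min(q1(s_next, a_next), q2(s_next, a_next))
    return r + gamma * q_min
    # Trick Two (delayed updates) lives in the training loop instead:
    # the actor/target nets are updated once per two critic updates.

# toy check with constant target functions
y = td3_target(1.0, None, lambda s, a: 2.0, lambda s, a: 3.0,
               lambda s: np.array(0.0))
assert abs(y - (1.0 + 0.99 * 2.0)) < 1e-9
```

Taking the minimum over the twin critics is what counters the overestimation failure mode described above: an optimistic error in one critic is vetoed by the other.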

 

[1] For more on how and why Q-learning methods can fail, see 1) this classic paper by Tsitsiklis and van Roy, 2) the review by Szepesvari (see section 4.3.2), and 3) chapter 11 of Sutton and Barto, especially section 11.3 (on the "deadly triad" of function approximation, bootstrapping, and off-policy data, which together cause instability in value-learning algorithms).

 


Image source: Hu Yifan, Li Guoqi, Wu Yujie, et al. A survey of research progress on spiking neural networks [J]. Control and Decision, 2021, 36(1): 1-26.

 

SNN learning algorithms:

  • Backpropagation-based algorithms
    • Clock-driven
    • Event-driven
  • Synaptic-plasticity-based algorithms (the Hebbian rule and its variants, STDP, R-STDP, and other synaptic plasticity rules)
  • ANN-to-SNN conversion

PS: the synaptic-plasticity-based algorithms covered later in this article generally belong to the family of three-factor learning rules.
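A three-factor rule combines two local factors (pre- and postsynaptic activity, accumulated in an eligibility trace) with a third, global factor such as reward. A minimal R-STDP-style sketch of one synapse (time constants and amplitudes are illustrative assumptions):

```python
tau, a_plus, a_minus, lr = 20.0, 1.0, -1.0, 0.1
w, e = 0.5, 0.0           # synaptic weight, eligibility trace
x_pre, x_post = 0.0, 0.0  # low-pass filtered pre/post spike traces

def step(pre_spike, post_spike, reward, dt=1.0):
    """One timestep: local STDP feeds the trace, reward gates the update."""
    global w, e, x_pre, x_post
    x_pre += -dt * x_pre / tau + pre_spike
    x_post += -dt * x_post / tau + post_spike
    # local Hebbian term: pre-before-post potentiates, reverse depresses
    stdp = a_plus * x_pre * post_spike + a_minus * x_post * pre_spike
    e += -dt * e / tau + stdp     # factor 1+2: eligibility trace
    w += lr * reward * e          # factor 3: global reward signal

step(pre_spike=1, post_spike=0, reward=0.0)  # pre fires, trace builds
step(pre_spike=0, post_spike=1, reward=1.0)  # post fires + reward -> w grows
assert w > 0.5
```

The key property is that the Hebbian coincidence is stored in the decaying trace `e`, so a reward arriving later (the distal reward problem) can still credit the synapses that caused it.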

 

Spiking Reinforcement Learning

  Based on the classifications of RL algorithms and SNN learning algorithms in the previous two sections, we can now classify the existing spiking reinforcement learning papers.

  One point deserves special mention. Since the actor-critic architecture consists of an actor network and a critic network (introducing the critic reduces the variance of the policy gradient), some works implement the actor network as an SNN and the critic network as an ANN. The deployed agent then only needs the actor network at test time, fully exploiting the energy efficiency of SNNs for domains such as robot control.

  Likewise, in current DQN variants (e.g., Rainbow), the use of an advantage function splits the model into two parts: an advantage network and a value network. The advantage function improves learning efficiency and stability; empirically, it also helps reduce variance, and excessive variance is a major obstacle to stable learning. How these component networks are implemented is another point that requires attention.

  At present, existing spiking reinforcement learning algorithms fall into four broad categories (Chen et al., 2022):

  • Reward-based Learning by Three-factor Learning Rules;
  • ANN-to-SNN Conversion for RL;
  • Co-learning of Hybrid Frameworks;
  • Direct Spiking Learning using Spike-based BP: decoding spike trains into value functions / action probabilities in various ways.

 

Related Concepts

  • MDP: Markov decision process;
  • POMDP: partially observable Markov decision process;
  • GPOMDP: an algorithm that generates biased estimates of the gradient of the average reward in general POMDPs controlled by parameterized stochastic policies;
  • OLPOMDP: an online variant of the GPOMDP algorithm;

 

Related Papers

1. Learning in Spiking Neural Networks by Reinforcement of Stochastic Synaptic Transmission (Neuron 2003)

  • RL algorithm: reward-maximization rule;
  • SNN learning algorithm: hedonistic synapses, IF neuron model;

 

2. A reinforcement learning algorithm for spiking neural networks (SYNASC 2005)

  • RL algorithm: continuous-time reinforcement learning rule, the OLPOMDP reinforcement learning algorithm;
  • SNN learning algorithm: spike-timing-dependent plasticity (STDP) with a global reward signal, stochastic IF neuron model;

 

3. Reinforcement Learning Through Modulation of Spike-Timing-Dependent Synaptic Plasticity (Neural Computation, 2007)

  • RL algorithm: continuous-time reinforcement learning rule, the OLPOMDP reinforcement learning algorithm;
  • SNN learning algorithm: reward-modulated STDP, i.e. modulated STDP with eligibility traces (MSTDPET) and a simplified rule without eligibility traces (MSTDP); spike response model (SRM) with escape noise;

 

4. Solving the Distal Reward Problem through Linkage of STDP and Dopamine Signaling (BMC Neuroscience, 2007)

  • RL algorithm: TD reinforcement learning;
  • SNN learning algorithm: DA-STDP;

 

5. A Spiking Neural Network Model of an Actor-Critic Learning Agent (Neural Computation, 2009)

  • RL algorithm: A2C (state-critic plasticity & state-actor plasticity), continuous-time update rules, actor-critic TD learning;
  • SNN learning algorithm: reward-based differential Hebbian learning rule (depending on the reward and on neuronal firing rates), current-based LIF neurons;

 

6. Reinforcement learning in populations of spiking neurons (Nature Neuroscience, 2009)

  • RL algorithm: eligibility-trace-driven weight update rule, population reinforcement & online learning;
  • SNN learning algorithm: three-factor learning rule, escape-noise neurons;

PS: escape-noise neurons are LIF neurons with a stochastically fluctuating spike threshold; plasticity is driven by the global feedback signal and a locally computed quantity at each synapse, called the eligibility trace.

 

7. Spike-Based Reinforcement Learning in Continuous State and Action Space: When Policy Gradient Methods Fail (PLoS Computational Biology, 2009)

  • RL algorithm: reinforcement learning with continuous states and actions, a transition between the standard policy-gradient rule and a naive Hebbian rule;
  • SNN learning algorithm: three-factor learning rule, LIF neurons;

Summary: This paper combines a global reward signal with two local factors available at the synapse, uses a network architecture that combines feedforward input with lateral connections, and selects actions from the population vector of action-unit firing rates, thereby implementing a reinforcement learning synaptic update rule in a population of spiking neurons.

 

8. Functional Requirements for Reward-Modulated Spike-Timing-Dependent Plasticity (The Journal of Neuroscience, 2010)

  • RL algorithm: critic (learning with a reward prediction error);
  • SNN learning algorithm: R-STDP & R-max learning rules, simplified spike response model (SRM0) neurons with an exponential escape rate;

 

9. Reinforcement Learning Using a Continuous Time Actor-Critic Framework with Spiking Neurons (PLoS Computational Biology, 2013)

  • RL algorithm: A2C (spiking-neuron critic & spiking-neuron actor), continuous TD learning;
  • SNN learning algorithm: three-factor learning rule, simplified spike response model (SRM0);

Summary: By extending continuous TD learning to spiking neurons in an actor-critic network that operates in continuous time with continuous state and action representations, this paper addresses how to implement RL in its natural, non-discrete setting, and how a reward prediction error (RPE) can be computed by neurons. In simulation, the architecture solves a Morris-water-maze-like navigation task, as well as the classic control problems Acrobot and CartPole, within a number of trials consistent with reported animal performance.

PS: this SNN-RL method is known as the TD-LTP learning rule; states are encoded as place-cell firing rates; the critic and actor outputs follow a population code; to ensure an unambiguous action choice, the actor uses an N-winner-take-all lateral connectivity scheme.

 

10. A neural reinforcement learning model for tasks with unknown time delays (CogSci 2013)

  • RL algorithm: Q-learning, TD reinforcement learning;
  • SNN learning algorithm: error-modulated neural learning rule (pseudo-Hebbian form, a three-factor learning rule), LIF neurons;

PS: the pseudo-Hebbian form involves a learning rate κ, presynaptic activity si(x), a postsynaptic factor αjej, and an error E. Notably, the postsynaptic term is not the firing of the receiving neuron but the subthreshold current that drives that firing (hence "pseudo"-Hebbian). In other words, the same current that drives a neuron's spiking activity is used to update the connection weights. Consistent with this rule, experimental work has suggested that postsynaptic potentials are not necessary for plasticity.

 

11. A Spiking Neural Model for Stable Reinforcement of Synapses Based on Multiple Distal Rewards (Neural Computation, 2013)

  • RL algorithm: a critic-like algorithm;
  • SNN learning algorithm: R-STDP with attenuating reward gating, augmented with short-term plasticity (STP);

 

12. Biologically Inspired SNN for Robot Control (IEEE Transactions on Cybernetics, 2013)

  • RL algorithm: TD learning rule;
  • SNN learning algorithm: self-organizing SNN, LIF neuron model;

PS: wall-following task on a Pioneer 3 robot (sonar, laser, and motors);

 

13. Reinforcement learning in cortical networks (Encyclopedia of Computational Neuroscience, 2014)

  • Policy gradient methods (hedonistic synapses, spike reinforcement, node perturbation, population reinforcement, online learning & phenomenological R-STDP models);
  • TD learning (assuming the underlying decision process is a Markov model);

 

14. Computation by Time (Neural Processing Letters, 2015)

  • Reinforcement learning (R-STDP, policy gradient methods & TD learning);

 

15. A Spiking Neural Network Model of Model-Free Reinforcement Learning with High-Dimensional Sensory Input and Perceptual Ambiguity (PLoS ONE, 2015)

  • RL algorithm: free-energy-based reinforcement learning (FERL);
  • SNN learning algorithm: pseudo free energy with mean firing rates (aFE) & mean instantaneous pseudo free energy (iFE), LIF neurons;

PS: restricted Boltzmann machine (RBM), partially observable RL, NEST simulator;

 

16. Neuromodulated Spike-Timing-Dependent Plasticity, and Theory of Three-Factor Learning Rules (Frontiers in Neural Circuits, 2016)

  • SNN learning algorithm: three-factor learning rules;

 

17. Navigating Mobile Robots to Target in Near Shortest Time using Reinforcement Learning with Spiking Neural Networks (IJCNN 2017)

  • RL algorithm: Q-learning;
  • SNN learning algorithm: dopamine-modulated Hebbian STDP with eligibility traces (a three-factor learning rule);

PS: wall-following, obstacle-avoiding navigation task;

 

18. A Survey of Robotics Control Based on Learning-Inspired Spiking Neural Networks (Front. Neurorobot., 2018)

  • Hebbian-based learning (unsupervised learning, supervised learning, classical conditioning, operant conditioning, reward-modulated training);
  • Reinforcement learning (temporal difference, model-based & others);

 

19. End to End Learning of Spiking Neural Network based on R-STDP for a Lane Keeping Vehicle (ICRA 2018)

  • RL algorithm: reward-maximization rule;
  • SNN learning algorithm: R-STDP, LIF neurons;

PS: lane-keeping task on a Pioneer robot equipped with a dynamic vision sensor (DVS); DVS events serve as input and motor commands as output;

 

20. Improved robustness of reinforcement learning policies upon conversion to spiking neuronal network platforms applied to Atari Breakout game (Neural Networks, 25 November 2019)

  • RL algorithm: DQN;
  • SNN learning algorithm: ANN-SNN conversion, stochastic LIF neurons;

Summary: This paper extends ANN-SNN conversion to the domain of deep Q-learning, provides a proof of principle for converting standard NNs to SNNs, and demonstrates that SNNs improve robustness to occlusion in the input image.

PS: the study explores several methods for finding optimal scaling parameters, including particle swarm optimization (PSO); among the optimization methods studied, PSO yielded the best performance; spiking neurons are simulated with BindsNET, an open-source PyTorch-based library; the stochastic LIF neuron extends the LIF neuron (when the membrane potential is below threshold, the neuron may still fire with a probability proportional to the membrane potential, i.e. escape noise); the task is the Atari Breakout game.

 

21. Reinforcement Learning in Spiking Neural Networks with Stochastic and Deterministic Synapses (Neural Computation, 2019)

  • RL algorithm: stochastic-deterministic coordination (SDC) spiking reinforcement learning model (the SDC model);
  • SNN learning algorithm: hedonistic rule / semi-RSTDP rule;

PS: plasticity of the stochastic synapses is realized by modulating the release probability of synaptic neurotransmitter with a global reward; plasticity of the deterministic synapses is realized by a variant of the R-STDP rule (Florian, 2007; Fremaux & Gerstner, 2015). The authors name it the semi-RSTDP rule, as it modifies weights according to only half of the STDP window (the half in which presynaptic spikes precede postsynaptic spikes).

 

22. Embodied Synaptic Plasticity with Online Reinforcement learning (Front. Neurorobot., 2019)

  • RL algorithm: maximizing a reward signal with the SPORE learning rule;
  • SNN learning algorithm: a reward-based learning rule built on synaptic sampling (relies on eligibility traces; a three-factor learning rule);

PS: the authors present this framework to evaluate Synaptic Plasticity with Online REinforcement learning (SPORE). SPORE incorporates a policy-sampling method that models the growth of dendritic spines in response to dopamine influx, learns online from exact spike timings, and is implemented entirely with spiking neurons. The underlying reward-based online learning rule for spiking networks is called synaptic sampling; SPORE implements this rule using the NEST neural simulator. SPORE is optimized for closed-loop applications and forms an online policy-gradient method.

 

23. Reinforcement co-Learning of Deep and Spiking Neural Networks for Energy-Efficient Mapless Navigation with Neuromorphic Hardware (2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS))

  • RL algorithm: spiking DDPG (deep critic network & spiking actor network);
  • SNN learning algorithm: clock-driven, LIF neurons with two internal state variables (current and voltage);

Summary: This paper proposes a neuromorphic approach that combines the energy efficiency of SNNs with the optimality of DRL, and benchmarks it on learning control policies for mapless navigation. The hybrid framework, spiking deterministic policy gradient (SDDPG), consists of a spiking actor network (SAN) and a deep critic network, and the two networks are trained jointly with gradient descent.

PS: states are encoded as Poisson spike trains; the tasks are goal-directed navigation in the Gazebo simulator and in real-world settings.

 

24. Deep Reinforcement Learning with Population-Coded Spiking Neural Network for Continuous Control (CoRL 2020)

  • RL algorithm: TD3/DDPG/SAC/PPO-PopSAN (deep critic network & population-coded spiking actor network);
  • SNN learning algorithm: clock-driven, current-based LIF model;

Summary: This paper proposes a population-coded spiking actor network (PopSAN), trained together with a deep critic network using deep reinforcement learning (DRL). The population-coding scheme, found throughout brain networks, greatly enhances the representational capacity of the network, and hybrid learning combines the training advantages of deep networks with the energy-efficient inference of spiking networks, substantially improving the overall efficiency of neuromorphic controllers. Experiments show that the hybrid RL method can substitute for deep learning where both energy efficiency and robustness matter.

PS: each dimension of the observation and action spaces is encoded as the activity of an individual input or output population of spiking neurons. The encoder module converts continuous observations into spikes in the input populations, and the decoder module decodes the output population activity into real-valued actions; each neuron in the encoder module has a Gaussian receptive field (μ, σ), where both μ and σ are task-specific trainable parameters; the decoder module works in two stages: first, the firing rate fr is computed every T timesteps, then the action a is returned as a weighted sum of the computed fr (the receptive fields of the output populations are formed by their connection weights, which are learned during training); the task is the MuJoCo simulation environments (OpenAI Gym).
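The encode/decode pipeline described above can be sketched as follows: each population neuron responds to a scalar observation through its own Gaussian receptive field, the resulting activities drive Bernoulli spikes, and the decoder turns firing rates into an action. Shapes, constants, and names are assumptions for illustration, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

def population_encode(obs, n_neurons=10, t_steps=5, lo=-1.0, hi=1.0):
    """Encode one scalar observation into spikes of a neuron population.

    Each neuron i has a Gaussian receptive field with mean mu_i tiled
    over [lo, hi]; in PopSAN both mu and sigma would be trainable.
    """
    mu = np.linspace(lo, hi, n_neurons)
    sigma = (hi - lo) / n_neurons
    act = np.exp(-0.5 * ((obs - mu) / sigma) ** 2)  # stimulation strength in [0, 1]
    # Bernoulli spikes with probability equal to the activity
    return (rng.random((t_steps, n_neurons)) < act).astype(int)

spikes = population_encode(0.3)
assert spikes.shape == (5, 10)

# decoder stage 1: firing rate over the T timesteps
fr = spikes.mean(axis=0)
# decoder stage 2: action = weighted sum of rates (weights learned in training)
w = rng.normal(size=10)
action = float(fr @ w)
```

Neurons whose receptive-field center lies near the observed value fire most, so the observation is represented by a graded activity bump rather than a single rate, which is what gives the population code its extra representational capacity.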

 

25. Training spiking neural networks for reinforcement learning (arXiv 2020)

  • RL algorithm: advantage A2C (critic & spiking actor), policy gradient coagent network (PGCN);
  • SNN learning algorithm: a Hebbian/anti-Hebbian-like learning rule with a memoryless Ising model; an STDP-like learning rule with LIF neurons; event-driven, generalized linear model (GLM);

PS: because the weight updates are also gated by the global TD error, the first two learning rules still belong to the family of three-factor learning rules. The paper additionally introduces an event-driven algorithm: each spiking neuron is assumed to sample its action from a firing policy, forming a stochastic node in the network. Using the reparameterization trick, the randomness of sampling is modeled as an input to the model rather than attributed to the model parameters, making all parameters continuously differentiable and enabling backpropagation. However, this training method is not implemented in the released code, lacks experimental analysis, and is only briefly described, so it awaits a later version of the paper.

 

26. A solution to the learning dilemma for recurrent networks of spiking neurons (Nature Communications, 2020)

  • RL algorithm: actor-critic;
  • SNN learning algorithm: e-prop, LSNN;

Summary: This paper combines local eligibility traces with top-down learning signals to achieve biologically plausible online learning by gradient descent, including deep reinforcement learning, in recurrently connected networks of spiking neurons. The method, called e-prop, approaches the performance of backpropagation through time (BPTT).

 

27. Reinforcement Learning with Feedback-modulated TD-STDP (arXiv 2020)

  • RL algorithm: actor-critic;
  • SNN learning algorithm: feedback-modulated TD-STDP learning rule;

 

28. Strategy and Benchmark for Converting Deep Q-Networks to Event-Driven Spiking Neural Networks (AAAI 2021)

  • RL algorithm: DQN;
  • SNN learning algorithm: ANN-SNN conversion, IF neurons;

Summary: This paper proposes a robust representation of firing rates to reduce the error of deep Q-networks during ANN-SNN conversion, achieving comparable scores on 17 top-performing Atari games.

PS: BindsNET; although the title says event-driven, the method actually used is still standard ANN-SNN conversion (ReLU activations / IF neurons); the task is the Atari game environment.

 

29. Population-coding and Dynamic-neurons improved Spiking Actor Network for Reinforcement Learning (Neural Networks, 2021)

  • RL algorithm: TD3-PDSAN (deep critic network & population-coded spiking actor network);
  • SNN learning algorithm: clock-driven, dynamic neurons with first- or higher-order membrane-potential dynamics;

Summary: This paper proposes the population-coding and dynamic-neurons improved spiking actor network (PDSAN) for effective state representation at two different scales: input coding and neuronal coding. PDSAN is trained with the TD3 algorithm together with a deep critic network (TD3-PDSAN), achieving better performance than state-of-the-art models on four MuJoCo benchmark tasks.

PS: the task is the MuJoCo simulation environments (OpenAI Gym).

 

30. Combining STDP and binary networks for reinforcement learning from images and sparse rewards (Neural Networks, 2021)

  • RL algorithm: reward-maximization rule;
  • SNN learning algorithm: R-STDP;

 

31. On-chip trainable hardware-based deep Q-networks approximating a backpropagation algorithm (Neural Computing and Applications, 2021)

  • RL algorithm: DQN;
  • SNN learning algorithm: hardware-implemented LTP/LTD;

 

32. Multi-timescale biological learning algorithms train spiking neuronal network motor control (bioRxiv, November 20, 2021)

  • SNN learning algorithm: spike-timing-dependent reinforcement learning (STDP-RL, a three-factor learning rule), evolutionary (EVOL) learning algorithm;

Summary: This study first trains SNNs separately, each as a single model, to solve the CartPole reinforcement learning (RL) control problem, using spike-timing-dependent reinforcement learning (STDP-RL) and an evolutionary (EVOL) learning algorithm. It then develops an interleaved algorithm, inspired by biological evolution, that combines EVOL and STDP-RL learning in sequence.

 

33. A Dual-Memory Architecture for Reinforcement Learning on Neuromorphic Platforms (Neuromorph. Comput., 2021)

Summary: Reinforcement learning (RL) represents the way biological systems natively learn. Rather than being trained on large labeled datasets before deployment, humans and animals learn continuously from experience, updating their policies as data is collected. This calls for learning in situ, rather than relying on the slow and costly upload of new data to a central location, where new information is folded into a previously trained model before a new model is downloaded to the agent. Toward these goals, this paper describes a high-level system for performing RL tasks, inspired by several principles of biological computation, in particular complementary learning systems theory, which posits that learning new memories in the brain depends on interactions between cortical and hippocampal networks. The paper shows that this "dual-memory learner" (DML) can approach optimal solutions to RL problems. The DML architecture is then implemented in a spiking fashion and executed on Intel's Loihi processor, where it is demonstrated to solve the classic multi-armed bandit problem as well as more advanced tasks such as navigating a maze and playing the card game blackjack.

 

34. Human-Level Control through Directly-Trained Deep Spiking Q-Networks (arXiv 2022)

  • RL algorithm: DQN;
  • SNN learning algorithm: spike-based backpropagation (surrogate gradients); a trained MLP serves as the decoder, taking firing rates as input and producing the value function as output;

 

35. Deep Reinforcement Learning with Spiking Q-learning (arXiv 2022)

  • RL algorithm: DQN;
  • SNN learning algorithm: spike-based backpropagation (surrogate gradients); the spiking neurons in the last layer do not fire, and the maximum membrane voltage within the simulation time T represents the value function;

 

Related Open-Source Code

1. SpikingJelly: An open-source deep learning framework for Spiking Neural Network (SNN) based on PyTorch

  • RL algorithms: DQN, Policy Gradient, Actor-Critic;
  • SNN learning algorithm: clock-driven;

PS: the spiking neurons in the last layer do not fire; the membrane voltage at the end of the simulation time T represents the continuous value.

 

2. BindsNET: A Machine Learning-Oriented Spiking Neural Networks Library in Python (Blog)

  • SNN learning algorithms: PostPre, WeightDependentPostPre, Hebbian learning rules, MSTDP, MSTDPET, Rmax / ANN-SNN conversion;
  • Spiking neurons: McCullochPitts (the counterpart of ordinary ANN neurons), IFNodes, LIFNodes, CurrentLIFNodes, AdaptiveLIFNodes, DiehlAndCookNodes, IzhikevichNodes, SRM0Nodes;
  • Game environment: Atari (Breakout);

PS: see BindsNET Breakout for the specific learning algorithms.

 

3. Norse: A library to do deep learning with spiking neural networks

  • RL algorithms: DQN, Policy Gradient, Actor-Critic;
  • SNN learning algorithm: clock-driven;

PS: the spiking neurons in the last layer do not fire; the maximum membrane voltage within the simulation time T represents the continuous value; policy-based algorithms;

 

Decoding Schemes


  • The spiking neurons in the last layer do not fire; the membrane voltage at the end of the simulation time T represents the value function.
  • The spiking neurons in the last layer do not fire; the maximum membrane voltage within the simulation time T represents the value function.
  • A trained MLP serves as the decoder.
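The first two schemes can be sketched with a non-firing leaky-integrator output layer; the same simulation yields both the final and the maximum membrane voltage (constants and names here are illustrative assumptions):

```python
import numpy as np

def decode_values(input_current, tau=2.0):
    """Non-spiking output layer: integrate input current, never reset.

    input_current: array of shape (T, n_outputs), the synaptic input to
    the output neurons at each of T simulation steps.
    Returns (final membrane voltage, max membrane voltage over T).
    """
    v = np.zeros(input_current.shape[1])
    v_max = np.full_like(v, -np.inf)
    for i_t in input_current:        # clock-driven simulation loop
        v += (i_t - v) / tau         # leaky integration, no threshold/reset
        v_max = np.maximum(v_max, v)
    return v, v_max

# toy input for 2 output neurons over T = 3 steps
I = np.array([[1.0, 0.0], [1.0, 2.0], [0.0, 0.0]])
v_last, v_peak = decode_values(I)
# scheme 1: values = v_last;  scheme 2: values = v_peak
assert v_peak[1] >= v_last[1]
```

Because the output neurons never cross a threshold, their membrane voltage stays continuous-valued, which is what lets it stand in directly for a Q-value or state value without a rate-decoding step.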
