This post covers DARLA: Improving Zero-Shot Transfer in Reinforcement Learning, published at ICML 2017, which tackles transferring reinforcement learning agents across different data distributions.

As part of our lab's paper-discussion requirement, I turned this paper into a slide deck to share with everyone.

[Slide deck: Paper reading — DARLA: Improving Zero-Shot Transfer in Reinforcement Learning. The slide images were not preserved in this export.]
To summarize, the key point of this paper is learning a latent representation — a β-VAE trained with a denoising autoencoder (DAE) reconstruction loss — that maps raw real-world observations to latent states, on which the agent then acts. That said, DARLA is not zero-shot in the strict sense: the simulated environment is extremely similar to the real one, so the agent is essentially training on the target environment even though it nominally trains in simulation. Still, the underlying idea is appealing — train in a simulated environment and deploy successfully to the real one.
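To make the "observation → latent state" step concrete, here is a minimal sketch of a denoising autoencoder in plain NumPy. Everything here is illustrative and not from the paper: the toy 16-dimensional "observations" stand in for raw pixels, the linear encoder/decoder stand in for DARLA's convolutional β-VAE, and all dimensions and hyperparameters are made up. The point is only the mechanism: corrupt the input, reconstruct the clean version, and hand the bottleneck code to the policy instead of the raw observation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "observations": 16-dim states generated from 2 underlying factors
# (hypothetical stand-in for rendered frames from the source environment).
def make_states(n, d_obs=16, d_factors=2):
    W = rng.normal(size=(d_factors, d_obs))
    z = rng.normal(size=(n, d_factors))
    return z @ W

class TinyDAE:
    """Linear denoising autoencoder: noisy input -> latent z -> clean target."""

    def __init__(self, d_obs=16, d_z=2, lr=0.01):
        self.We = rng.normal(scale=0.1, size=(d_obs, d_z))  # encoder
        self.Wd = rng.normal(scale=0.1, size=(d_z, d_obs))  # decoder
        self.lr = lr

    def encode(self, s):
        # This latent code is what the RL policy would consume.
        return s @ self.We

    def step(self, s, noise=0.3):
        s_noisy = s + noise * rng.normal(size=s.shape)  # corrupt input
        z = s_noisy @ self.We
        s_hat = z @ self.Wd
        err = s_hat - s                # reconstruct the *clean* observation
        n = s.shape[0]
        # Gradient descent on squared reconstruction error.
        gWd = z.T @ err / n
        gWe = s_noisy.T @ (err @ self.Wd.T) / n
        self.Wd -= self.lr * gWd
        self.We -= self.lr * gWe
        return float((err ** 2).mean())

states = make_states(256)
dae = TinyDAE()
losses = [dae.step(states) for _ in range(500)]

latent = dae.encode(states)
print(losses[0] > losses[-1])  # reconstruction error drops during training
print(latent.shape)            # (256, 2): the agent's latent state space
```

In DARLA the analogous vision module is trained first and then frozen, so the policy learned on top of the latent space transfers to the target domain as long as the encoder maps both domains to similar latents.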
