[1] Karpathy's blog post: Breaking Linear Classifiers on ImageNet

http://karpathy.github.io/2015/03/30/breaking-convnets/

 

[2] The ICLR 2014 paper by Christian Szegedy et al. that first proposed adversarial examples: Intriguing properties of neural networks

Downloaded locally as paper No. 3.

 

[3] Ian Goodfellow's paper explaining adversarial examples: Explaining and Harnessing Adversarial Examples

Downloaded locally as paper No. 5.
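The core technique in [3] is the Fast Gradient Sign Method (FGSM): perturb the input by one step of size ε in the direction of the sign of the loss gradient, x_adv = x + ε · sign(∇_x J(x, y)). Below is a minimal sketch, assuming PyTorch (the paper itself is framework-agnostic); `model`, `loss_fn`, and the ε value are placeholders, not anything prescribed by the paper.

```python
# Minimal FGSM sketch (PyTorch assumed; model, loss_fn, epsilon are placeholders).
import torch

def fgsm_attack(model, loss_fn, x, y, epsilon=0.03):
    """One signed-gradient step: x_adv = x + epsilon * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)   # loss w.r.t. the true label y
    loss.backward()               # populates x.grad
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range
```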

 

[4] A recent paper from Bengio's group showing that even images captured naturally by a camera exhibit this property: Adversarial Examples in the Physical World

Downloaded locally as paper No. 4.

 

[5] The CVPR 2015 paper by Anh Nguyen et al. that first proposed fooling examples: Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images
https://arxiv.org/pdf/1412.1897.pdf

Downloaded locally as paper No. 18.
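The fooling examples in [5] are images unrecognizable to humans that a network nonetheless classifies with high confidence; the paper generates them with evolutionary algorithms and with gradient ascent. A hedged gradient-ascent sketch (PyTorch assumed; the step size, iteration count, and using the raw logit as a confidence proxy are my assumptions, not the paper's exact setup):

```python
# Hedged sketch of a gradient-ascent "fooling image" in the spirit of [5].
import torch

def fooling_image(model, target_class, shape=(1, 3, 224, 224), steps=200, lr=0.1):
    x = torch.rand(shape, requires_grad=True)   # start from random noise
    opt = torch.optim.SGD([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Maximize the target-class logit (a rough proxy for confidence).
        loss = -model(x)[0, target_class]
        loss.backward()
        opt.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)                  # stay in the image range
    return x.detach()
```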

 

[6]Delving into Transferable Adversarial Examples and Black-box Attacks

Downloaded locally as paper No. 17.

Study notes on adversarial example transferability and black-box attacks: https://blog.csdn.net/qq_35414569/article/details/82383788
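The transferability studied in [6] enables a simple black-box attack: craft adversarial examples on a local surrogate model you can differentiate through, then feed them to the target model, which you can only query. A sketch under those assumptions, reusing the `fgsm_attack` defined above (`surrogate`, `target`, and `loader` are placeholders):

```python
# Hedged transfer-attack sketch in the spirit of [6]; reuses fgsm_attack above.
import torch

def transfer_success_rate(surrogate, target, loss_fn, loader, epsilon=0.03):
    fooled, total = 0, 0
    for x, y in loader:
        # White-box step on the surrogate (gradients available).
        x_adv = fgsm_attack(surrogate, loss_fn, x, y, epsilon)
        # Black-box step: forward queries to the target only.
        with torch.no_grad():
            pred = target(x_adv).argmax(dim=1)
        fooled += (pred != y).sum().item()
        total += y.numel()
    return fooled / total  # fraction of adversarial examples that transfer
```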

 
