Summary of Papers on Adversarial Examples

[1] Karpathy's blog post: Breaking Linear Classifiers on ImageNet

http://karpathy.github.io/2015/03/30/breaking-convnets/


[2] The paper by Christian Szegedy et al. (ICLR 2014) that first proposed adversarial examples: Intriguing properties of neural networks

Downloaded locally as paper #3.


[3] Ian Goodfellow's paper explaining adversarial examples: Explaining and Harnessing Adversarial Examples

Downloaded locally as paper #5.
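
The core method in [3] is the fast gradient sign method (FGSM): perturb the input in the direction of the sign of the loss gradient, x' = x + ε·sign(∇ₓJ(θ, x, y)). A minimal sketch of the idea (PyTorch and the function name are my own choices, not from the paper; `model` stands for any differentiable classifier):

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """One-step FGSM from [3]: x' = x + epsilon * sign(grad_x J(theta, x, y))."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)   # J(theta, x, y)
    loss.backward()                           # populates x_adv.grad
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()         # keep pixels in [0, 1]
```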


[4] A recent paper from Bengio's group showing that even images captured naturally with a camera exhibit this property: Adversarial Examples in the Physical World

Downloaded locally as paper #4.
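
Besides the physical-world experiments, this paper introduces the basic iterative method (BIM): FGSM applied repeatedly in small steps, clipping after each step so the result stays within an ε-ball of the original image. A minimal sketch under the same assumptions as the FGSM example above (the step size and step count are illustrative defaults, not values from the paper):

```python
import torch
import torch.nn.functional as F

def bim_attack(model, x, y, epsilon, alpha=1 / 255, num_steps=10):
    """Iterative FGSM from [4], clipped to an epsilon-ball around x."""
    x_adv = x.clone().detach()
    for _ in range(num_steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # project back into the epsilon-ball and the valid pixel range
        x_adv = torch.max(torch.min(x_adv, x + epsilon), x - epsilon).clamp(0, 1)
    return x_adv
```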


[5] The paper by Anh Nguyen et al. (CVPR 2015) that first proposed fooling examples: Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images
https://arxiv.org/pdf/1412.1897.pdf

Downloaded locally as paper #18.


[6] Delving into Transferable Adversarial Examples and Black-box Attacks

Downloaded locally as paper #17.

Study notes on adversarial example transferability and black-box attacks: https://blog.csdn.net/qq_35414569/article/details/82383788
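
The key finding in [6] is transferability: adversarial examples crafted against a local surrogate model often fool a different, unseen model, which is what makes black-box attacks practical. A hypothetical usage sketch reusing the `fgsm_attack` helper above (`surrogate_model`, `target_model`, and the 8/255 budget are placeholders, not details from the paper):

```python
# Craft adversarial examples on a white-box surrogate, then measure
# how often they also fool a separate black-box target model.
x_adv = fgsm_attack(surrogate_model, x, y, epsilon=8 / 255)
with torch.no_grad():
    target_preds = target_model(x_adv).argmax(dim=1)
transfer_rate = (target_preds != y).float().mean().item()
```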

