Abstract: Zhang H, Yu Y, Jiao J, et al. Theoretically Principled Trade-off between Robustness and Accuracy[J]. arXiv: Learning, 2019. @article{zhang2019theoreti Read more
posted @ 2020-03-12 14:24 馒头and花卷 Views(1539) Comments(0) Recommended(0)
Abstract: Madry A, Makelov A, Schmidt L, et al. Towards Deep Learning Models Resistant to Adversarial Attacks[J]. arXiv: Machine Learning, 2017. @article{madry Read more
posted @ 2020-03-04 20:08 馒头and花卷 Views(944) Comments(0) Recommended(0)
Abstract: Goodfellow I, Shlens J, Szegedy C, et al. Explaining and Harnessing Adversarial Examples[J]. arXiv: Machine Learning, 2014. @article{goodfellow2014exp Read more
posted @ 2020-03-04 19:35 馒头and花卷 Views(497) Comments(0) Recommended(0)
Abstract: Papernot N, McDaniel P, Goodfellow I, et al. Practical Black-Box Attacks against Machine Learning[C]. computer and communications security, 2017: 506- Read more
posted @ 2020-03-04 19:32 馒头and花卷 Views(395) Comments(0) Recommended(1)
Abstract: A Deep Neural Network's Loss Surface Contains Every Low-dimensional Pattern. Overview: the authors give a theoretical analysis of the loss surface, proving that a sufficiently large neural network can approximate every low-dimensional loss pattern. Related work: loss l Read more
posted @ 2020-02-25 22:14 馒头and花卷 Views(257) Comments(0) Recommended(0)
Abstract: Skorokhodov I, Burtsev M. Loss Landscape Sightseeing with Multi-Point Optimization[J]. arXiv: Learning, 2019. @article{skorokhodov2019loss, title={Lo Read more
posted @ 2020-02-25 22:09 馒头and花卷 Views(314) Comments(0) Recommended(0)
Abstract: Lu Z, Pu H, Wang F, et al. The expressive power of neural networks: a view from the width[C]. neural information processing systems, 2017: 6232-6240. Read more
posted @ 2020-02-24 13:51 馒头and花卷 Views(448) Comments(0) Recommended(0)
Abstract: Accelerating Deep Learning by Focusing on the Biggest Losers. Overview: the idea is simple. When training a network, each sample incurs a loss $\mathcal{L}(f(x_i), y_i)$; training usually proceeds in batches, so a batch contributes $\sum_i \mathcal{L}(f(x_i), y_i)$ Read more
posted @ 2020-02-16 21:24 馒头and花卷 Views(383) Comments(0) Recommended(0)
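The selection rule behind "focusing on the biggest losers" can be sketched in a few lines of PyTorch: compute per-sample losses, keep only the k largest, and backpropagate through those alone. This is a minimal illustration under assumed placeholders (`model`, `optimizer`, `k`), not the paper's exact procedure:

```python
import torch
import torch.nn.functional as F

def train_step_biggest_losers(model, optimizer, x, y, k):
    """One training step that backprops only the k highest-loss samples."""
    optimizer.zero_grad()
    logits = model(x)
    # reduction='none' keeps one loss value per sample instead of the batch mean
    losses = F.cross_entropy(logits, y, reduction='none')
    # keep only the k "biggest losers" of this batch
    topk_losses, _ = torch.topk(losses, k)
    topk_losses.mean().backward()
    optimizer.step()
```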
Abstract: Katharopoulos A, Fleuret F. Not All Samples Are Created Equal: Deep Learning with Importance Sampling[J]. arXiv: Learning, 2018. @article{katharopoulo Read more
posted @ 2020-02-16 20:42 馒头and花卷 Views(740) Comments(2) Recommended(0)
Abstract: Rosasco L, De Vito E, Caponnetto A, et al. Are loss functions all the same?[J]. Neural Computation, 2004, 16(5): 1063-1076. @article{rosasco2004are, ti Read more
posted @ 2020-02-13 23:13 馒头and花卷 Views(311) Comments(0) Recommended(0)
Abstract: Szegedy C, Liu W, Jia Y, et al. Going deeper with convolutions[C]. computer vision and pattern recognition, 2015: 1-9. @article{szegedy2015going, titl Read more
posted @ 2020-01-13 21:06 馒头and花卷 Views(475) Comments(0) Recommended(0)
Abstract: He K, Zhang X, Ren S, et al. Deep Residual Learning for Image Recognition[C]. computer vision and pattern recognition, 2016: 770-778. @article{he2016d Read more
posted @ 2020-01-11 23:35 馒头and花卷 Views(316) Comments(0) Recommended(0)
Abstract: Mirza M, Osindero S. Conditional Generative Adversarial Nets[J]. arXiv: Learning, 2014. @article{mirza2014conditional, title={Conditional Gen Read more
posted @ 2020-01-08 18:59 馒头and花卷 Views(896) Comments(0) Recommended(1)
Abstract: Neal R M. MCMC Using Hamiltonian Dynamics[J]. arXiv: Computation, 2011: 139-188. @article{neal2011mcmc, title={MCMC Using Hamiltonian Dynam Read more
posted @ 2020-01-05 14:50 馒头and花卷 Views(1140) Comments(0) Recommended(0)
Abstract: 茆诗松, 汤银才, 《贝叶斯统计》 (Bayesian Statistics), 中国统计出版社, 2012.9. This book has quite a few errors, so my notes below probably contain many errors as well. python def posterior_random_walk(x, y, beta, mu, sigma2): theta = lambda x: np Read more
posted @ 2020-01-02 14:38 馒头and花卷 Views(1409) Comments(0) Recommended(0)
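The preview above shows the opening of a `posterior_random_walk` routine that the listing cuts off. A minimal random-walk Metropolis sampler along those lines might look as follows; the `log_post` argument and the proposal scale are illustrative reconstructions, not the post's original code:

```python
import numpy as np

def random_walk_metropolis(log_post, theta0, step, n_samples, seed=None):
    """Sample from a posterior, given its log density, via random-walk Metropolis."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    samples = []
    for _ in range(n_samples):
        # Gaussian random-walk proposal around the current state
        proposal = theta + step * rng.standard_normal(theta.shape)
        # accept with probability min(1, post(proposal) / post(theta))
        if np.log(rng.uniform()) < log_post(proposal) - log_post(theta):
            theta = proposal
        samples.append(theta.copy())
    return np.array(samples)
```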
Abstract: Safran I, Shamir O. Spurious Local Minima are Common in Two-Layer ReLU Neural Networks[J]. arXiv: Learning, 2017. @article{safran2017spurious, title={ Read more
posted @ 2019-12-13 22:49 馒头and花卷 Views(282) Comments(0) Recommended(0)
Abstract: Cho Y, Saul L K. Kernel Methods for Deep Learning[C]. neural information processing systems, 2009: 342-350. @article{cho2009kernel, title={Ker Read more
posted @ 2019-12-12 22:34 馒头and花卷 Views(481) Comments(0) Recommended(0)
Abstract: Nguyen Q C, Hein M. Optimization Landscape and Expressivity of Deep CNNs[J]. arXiv: Learning, 2017. BibTex @article{nguyen2017optimization, title={Opt Read more
posted @ 2019-11-22 12:50 馒头and花卷 Views(532) Comments(0) Recommended(0)
Abstract: A simple implementation of the BP algorithm. First create a parent class Fun, which mainly defines: forward: the forward pass, to be overridden by subclasses; Momentum: a gradient-descent update rule; step: the method that updates the weights; zero_grad: clears the recorded gradients; load: loads stored weights. Linear, the fully connected layer: the point to note about the fully connected layer is $ Read more
posted @ 2019-10-27 15:37 馒头and花卷 Views(575) Comments(0) Recommended(0)
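A skeleton of the `Fun` parent class described above; the method names follow the post, but the bodies are illustrative guesses under those names, not the original implementation:

```python
import numpy as np

class Fun:
    """Base class for layers in a simple BP (backprop) implementation."""
    def __init__(self):
        self.weights = None   # layer parameters
        self.grad = None      # gradient recorded during the backward pass
        self.velocity = 0.0   # momentum buffer

    def forward(self, x):
        # forward pass; subclasses must override this
        raise NotImplementedError

    def Momentum(self, lr=0.1, momentum=0.9):
        # momentum gradient-descent step computed from the recorded gradient
        self.velocity = momentum * self.velocity - lr * self.grad
        return self.velocity

    def step(self, lr=0.1, momentum=0.9):
        # update the weights with the momentum step
        if self.grad is not None:
            self.weights += self.Momentum(lr, momentum)

    def zero_grad(self):
        # clear the recorded gradients
        self.grad = None

    def load(self, weights):
        # load saved weights
        self.weights = weights
```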
Abstract: Arora S, Cohen N, Hazan E, et al. On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization[J]. arXiv: Learning, 2018. Intro: I really like Read more
posted @ 2019-10-18 22:10 馒头and花卷 Views(604) Comments(0) Recommended(0)