Summary: Tuning process. In the figure below, the priority order for which hyperparameters to tune is red > yellow > purple; the rest are rarely tuned. Covers how to choose hyperparameters: sample them at random, and during random sampling use a coarse-to-fine search to progressively narrow the ranges. Some parameters can be sampled on a linear…
posted @ 2018-03-06 20:44 mashuai_191
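The random, coarse-to-fine sampling described above can be sketched as follows. This is a minimal illustration, not the course's code; the exponent ranges and function name are assumptions, using a learning rate sampled on a log scale (`alpha = 10**r` with `r` uniform) as the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_learning_rate(low_exp=-4, high_exp=0, rng=rng):
    """Sample alpha on a log scale: alpha = 10**r with r ~ U[low_exp, high_exp).

    A uniform draw over the exponent spends equal effort on each decade
    (1e-4..1e-3, 1e-3..1e-2, ...), unlike a uniform draw over alpha itself.
    """
    r = rng.uniform(low_exp, high_exp)
    return 10.0 ** r

# Coarse pass over the whole range, then a fine pass once the coarse
# results suggest good values live near 1e-2 (hypothetical outcome).
coarse = [sample_learning_rate() for _ in range(5)]
fine = [sample_learning_rate(-2.5, -1.5) for _ in range(5)]
```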
Summary: Disclaimer: all content comes from Coursera, recorded here as personal study notes. Please do not copy-paste the assignments. Optimization Methods. Until now, you've always used Gradient Descent to update the parameters and…
posted @ 2018-03-05 23:29 mashuai_191
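The gradient descent update the summary refers to can be sketched in a few lines. A minimal sketch, not the assignment's code; the dictionary layout and function name are illustrative:

```python
import numpy as np

def gradient_descent_step(params, grads, learning_rate):
    """One gradient descent update: theta := theta - alpha * dtheta."""
    return {k: params[k] - learning_rate * grads[k] for k in params}

# Toy problem: minimize f(w) = (w - 3)^2, whose gradient is 2*(w - 3).
params = {"w": np.array(0.0)}
for _ in range(100):
    grads = {"w": 2.0 * (params["w"] - 3.0)}
    params = gradient_descent_step(params, grads, learning_rate=0.1)
```

After 100 steps the iterate has converged to the minimizer w = 3.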
Summary: Gradient descent: Batch Gradient Descent, Mini-batch Gradient Descent, Stochastic Gradient Descent. There are many algorithms that improve on gradient descent; before covering them, one first needs to understand Exponentia…
posted @ 2018-03-01 18:31 mashuai_191
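The exponentially weighted averages that the faster optimizers (momentum, RMSprop, Adam) build on can be sketched as follows, including the bias correction that fixes the low startup values. A minimal sketch; the function name is illustrative:

```python
def ewa(values, beta=0.9, bias_correction=True):
    """Exponentially weighted average: v_t = beta * v_{t-1} + (1 - beta) * x_t.

    With bias correction, each v_t is divided by (1 - beta**t) so the
    early averages are not biased toward the zero initialization.
    """
    v = 0.0
    out = []
    for t, x in enumerate(values, start=1):
        v = beta * v + (1 - beta) * x
        out.append(v / (1 - beta ** t) if bias_correction else v)
    return out
```

On a constant input the corrected average recovers the true value from the first step, while the uncorrected one starts near zero and warms up.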
Summary: Disclaimer: all content comes from Coursera, recorded here as personal study notes. Gradient Checking. Welcome to the final assignment for this week! In this assignment you will learn to implement and u…
posted @ 2018-02-28 23:05 mashuai_191
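The core of gradient checking is comparing an analytic gradient against a two-sided finite difference. A minimal sketch under the usual setup (a scalar-valued cost over a parameter vector); the helper name is an assumption:

```python
import numpy as np

def numerical_gradient(f, theta, eps=1e-7):
    """Two-sided difference per component: (f(t+eps) - f(t-eps)) / (2*eps)."""
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        plus, minus = theta.copy(), theta.copy()
        plus[i] += eps
        minus[i] -= eps
        grad[i] = (f(plus) - f(minus)) / (2 * eps)
    return grad

# Check an analytic gradient: for f(theta) = sum(theta**2), df = 2*theta.
theta = np.array([1.0, -2.0, 0.5])
num = numerical_gradient(lambda t: np.sum(t ** 2), theta)
ana = 2 * theta

# Relative difference; a tiny value means the analytic gradient is correct.
diff = np.linalg.norm(num - ana) / (np.linalg.norm(num) + np.linalg.norm(ana))
```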
Summary: Disclaimer: all content comes from Coursera, recorded here as personal study notes. Initialization. Welcome to the first assignment of "Improving Deep Neural Networks". Training your neural network requ…
posted @ 2018-02-27 22:50 mashuai_191
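One initialization scheme that assignment covers is He initialization for ReLU networks. A minimal sketch, assuming the usual layer-dimensions list and parameter-dictionary layout from the course; the function name is illustrative:

```python
import numpy as np

def initialize_he(layer_dims, rng=None):
    """He initialization: W[l] ~ N(0, 2 / n_{l-1}), b[l] = 0.

    Scaling the weights by sqrt(2 / fan_in) keeps the variance of ReLU
    activations roughly constant across layers, avoiding vanishing or
    exploding signals at the start of training.
    """
    rng = rng or np.random.default_rng(0)
    params = {}
    for l in range(1, len(layer_dims)):
        params[f"W{l}"] = (rng.standard_normal((layer_dims[l], layer_dims[l - 1]))
                           * np.sqrt(2.0 / layer_dims[l - 1]))
        params[f"b{l}"] = np.zeros((layer_dims[l], 1))
    return params
```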
Summary: Disclaimer: all content comes from Coursera, recorded here as personal study notes. Regularization. Welcome to the second assignment of this week. Deep Learning models have so much flexibility and capac…
posted @ 2018-02-27 22:37 mashuai_191
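The L2-regularized cost used in that assignment can be sketched as the unregularized cross-entropy cost plus a penalty on the squared Frobenius norms of all weight matrices. A minimal sketch; the function name and argument layout are assumptions:

```python
import numpy as np

def l2_regularized_cost(cross_entropy_cost, weights, lambd, m):
    """J_regularized = J + (lambda / (2m)) * sum_l ||W[l]||_F^2.

    `weights` is a list of weight matrices, `m` the number of examples.
    Biases are conventionally left out of the penalty.
    """
    penalty = sum(np.sum(np.square(W)) for W in weights)
    return cross_entropy_cost + (lambd / (2 * m)) * penalty
```

For example, with one 2x2 all-ones weight matrix, lambda = 0.1 and m = 2, the penalty term is (0.1 / 4) * 4 = 0.1 on top of the base cost.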
Summary: Italics: _word_; bold: **word**; headers; links; images; blockquotes; lists. Ref: https://www.markdowntutorial.com/ https://www.markdownguide.org/basic-syntax…
posted @ 2018-02-26 17:39 mashuai_191
Summary: Train/Dev/Test set; Bias/Variance; Regularization. Several regularization methods are covered. L2 regularization: Frobenius norm. The figure above mentions the concept of weight decay. Weight Decay: A re…
posted @ 2018-02-25 18:05 mashuai_191
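The weight-decay view of L2 regularization mentioned in the summary follows from substituting the regularized gradient into the update rule. A minimal sketch, with the algebra in the comment; the function name is illustrative:

```python
import numpy as np

# With L2 regularization, dW gains the extra term (lambda/m) * W, so
#   W := W - alpha * (dW + (lambda/m) * W)
#      = (1 - alpha * lambda / m) * W - alpha * dW.
# Each step first shrinks W by a constant factor slightly below 1,
# which is why L2 regularization is also called "weight decay".
def update_with_weight_decay(W, dW, alpha, lambd, m):
    return (1 - alpha * lambd / m) * W - alpha * dW
```

With a zero gradient (dW = 0), the weights still decay geometrically toward zero, making the shrinkage effect easy to see in isolation.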