Post category: MachineLearning

Abstract: I had been searching on Google for a long time for ways to visualize high-dimensional decision boundaries, but it seems not many people have done…
posted @ 2020-03-17 10:11 Junfei_Wang Views(286) Comments(0) Recommend(0)
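Since the boundary lives in more than two dimensions, here is a minimal sketch of one common workaround (my illustration, not necessarily the post's solution): project the data to 2-D with PCA, lift a 2-D grid back to the original space, classify it there, and draw the resulting regions. The dataset and classifier (iris, SVC) are arbitrary stand-ins.

```python
# A minimal sketch, assuming scikit-learn and matplotlib; iris/SVC are
# arbitrary stand-ins, not the post's data or model.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)            # 4-D feature space
clf = SVC().fit(X, y)                        # classifier trained in the full space
pca = PCA(n_components=2).fit(X)
X2 = pca.transform(X)                        # 2-D projection for plotting

# Build a 2-D grid, lift each grid point back to 4-D, and classify it there.
xx, yy = np.meshgrid(np.linspace(X2[:, 0].min(), X2[:, 0].max(), 200),
                     np.linspace(X2[:, 1].min(), X2[:, 1].max(), 200))
grid_4d = pca.inverse_transform(np.c_[xx.ravel(), yy.ravel()])
zz = clf.predict(grid_4d).reshape(xx.shape)

plt.contourf(xx, yy, zz, alpha=0.3)          # approximate decision regions
plt.scatter(X2[:, 0], X2[:, 1], c=y)
plt.show()
```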
Abstract: GANs are unsupervised learning: there is only X, no label y. The network tries to learn the data distribution of X, and the goal is to generate new synthetic…
posted @ 2020-02-21 22:23 Junfei_Wang Views(338) Comments(0) Recommend(0)
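For reference, the standard GAN minimax objective (Goodfellow et al.) that this description corresponds to, where the generator G maps noise z to samples and the discriminator D tries to tell real data from generated data:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z}\!\left[\log\bigl(1 - D(G(z))\bigr)\right]
```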
Abstract: Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature…
posted @ 2020-02-12 22:48 Junfei_Wang Views(247) Comments(0) Recommend(0)
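The excerpt appears to quote the linearity argument of Goodfellow et al.'s "Explaining and Harnessing Adversarial Examples"; the one-step attack derived from that argument, the fast gradient sign method, perturbs the input along the sign of the loss gradient:

```latex
x_{\mathrm{adv}} = x + \epsilon \cdot \operatorname{sign}\!\bigl(\nabla_x J(\theta, x, y)\bigr)
```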
Abstract: Analogy: Many insects have a larval form that is optimized for extracting energy and nutrients from the environment and a completely different adult form…
posted @ 2020-01-27 12:32 Junfei_Wang Views(168) Comments(0) Recommend(0)
Abstract: Taylor Approximation. According to the Taylor series, we can estimate the value of a function f when the variable x changes by a small Δx. If we…
posted @ 2020-01-17 10:15 Junfei_Wang Views(277) Comments(0) Recommend(0)
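The approximation the summary describes, written out to second order:

```latex
f(x + \Delta x) \approx f(x) + f'(x)\,\Delta x + \tfrac{1}{2}\,f''(x)\,\Delta x^{2}
```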
Abstract: Problem with Large Weights. Large weights in a neural network are a sign of overfitting: a network with large weights has very likely learned the statistical…
posted @ 2020-01-05 08:11 Junfei_Wang Views(304) Comments(0) Recommend(0)
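One common way to discourage large weights is an L2 penalty per layer. A minimal Keras sketch, where the architecture and the penalty strength 1e-4 are illustrative values, not the post's:

```python
# A minimal sketch, assuming TensorFlow/Keras; layer sizes and the
# coefficient 1e-4 are placeholders.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu",
                          kernel_regularizer=tf.keras.regularizers.l2(1e-4)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
# The l2 regularizer adds lambda * sum(w**2) for the layer's kernel to the
# training loss, pushing optimization toward smaller weights.
model.compile(optimizer="adam", loss="binary_crossentropy")
```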
Abstract: Paper [1]: a white-box neural network attack, where adversaries have full access to the model. Gradient Descent is used to go back and update the input so that…
posted @ 2019-11-06 00:11 Junfei_Wang Views(682) Comments(0) Recommend(0)
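A minimal sketch of the idea described for Paper [1]: freeze the model's weights and run gradient descent on the input itself until the output moves toward a chosen target. `model` and `target` are hypothetical placeholders, and real attacks also constrain the perturbation size.

```python
import tensorflow as tf

def whitebox_attack(model, x, target, steps=50, lr=0.01):
    """Gradient-descend on the input (not the weights) toward `target`."""
    x_adv = tf.Variable(x)                           # the input is the variable
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()  # model outputs probs
    for _ in range(steps):
        with tf.GradientTape() as tape:
            loss = loss_fn(target, model(x_adv))     # loss w.r.t. the target label
        grad = tape.gradient(loss, x_adv)
        x_adv.assign_sub(lr * grad)                  # step the input downhill
        x_adv.assign(tf.clip_by_value(x_adv, 0.0, 1.0))  # stay a valid image
    return x_adv
```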
Abstract: In this post, we compare two types of machine learning models, the generative model and the discriminative model, whose underlying ideas are…
posted @ 2019-08-22 10:54 Junfei_Wang Views(594) Comments(0) Recommend(0)
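The contrast in one line: a generative model learns the joint distribution and predicts through Bayes' rule, while a discriminative model learns the conditional directly:

```latex
\text{generative: } P(X, Y) = P(X \mid Y)\,P(Y), \quad
\hat{y} = \arg\max_{y} P(X \mid y)\,P(y);
\qquad
\text{discriminative: } \hat{y} = \arg\max_{y} P(y \mid X).
```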
Abstract: People commonly put a lot of effort into hyperparameter tuning and training when using TensorFlow & Deep Learning. A realistic problem for TF is how to…
posted @ 2019-06-29 23:58 Junfei_Wang Views(307) Comments(0) Recommend(0)
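A minimal save-and-restore sketch in Keras, one common way to reuse a trained model without retraining; the tiny model, random data, and "my_model.h5" path are placeholders, not the post's code:

```python
import numpy as np
import tensorflow as tf

# A trivial trained model stands in for the real one.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")
model.fit(np.random.rand(32, 4), np.random.rand(32, 1), epochs=1, verbose=0)

model.save("my_model.h5")                             # architecture + weights
restored = tf.keras.models.load_model("my_model.h5")  # no retraining needed
print(restored.predict(np.random.rand(2, 4)))
```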
Abstract: We are now trying to deploy our Deep Learning model onto Google Cloud. It is required to use Google Cloud Functions to trigger the Deep Learning predictions…
posted @ 2019-06-04 23:44 Junfei_Wang Views(433) Comments(0) Recommend(0)
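A sketch of what an HTTP-triggered Cloud Function serving a Keras model can look like. Everything here (the `predict` entry point, the /tmp path, the request schema) is a hypothetical placeholder rather than the post's code; the load-once global is the usual trick to avoid reloading the model on every invocation.

```python
import tensorflow as tf

model = None  # cached across invocations of a warm function instance

def predict(request):
    """HTTP Cloud Function entry point (hypothetical name and schema)."""
    global model
    if model is None:
        # e.g. a model file previously copied from Cloud Storage to /tmp
        model = tf.keras.models.load_model("/tmp/model.h5")
    data = request.get_json()                 # expects {"instances": [[...], ...]}
    preds = model.predict(data["instances"])
    return {"predictions": preds.tolist()}
```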
Abstract: 1. Problem and Loss Function. Linear Regression is a supervised learning algorithm with input matrix X and output label Y. We train a system to make hypotheses…
posted @ 2018-12-03 11:33 Junfei_Wang Views(296) Comments(0) Recommend(0)
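The standard setup the summary begins with: a linear hypothesis and the mean-squared-error cost over m training examples,

```latex
h_\theta(x) = \theta^{\mathsf{T}} x, \qquad
J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \bigl( h_\theta(x^{(i)}) - y^{(i)} \bigr)^{2}
```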
Abstract: Boosting Ensemble: In machine learning, besides Bagging, the more commonly used ensemble model is Boosting. Unlike Bagging, the individual models in Boosting are trained sequentially: each later model tries to correct the errors in the earlier models' predictions. The two figures below show their similarities and differences…
posted @ 2018-09-02 09:05 Junfei_Wang Views(397) Comments(0) Recommend(0)
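A minimal boosting example with scikit-learn's AdaBoost, whose default weak learner is a depth-1 decision stump; each new stump concentrates on the examples the previous ones misclassified. The synthetic dataset is a stand-in, not the post's data.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

X, y = make_classification(n_samples=500, random_state=0)
# 100 stumps trained sequentially, each reweighting the previous errors.
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.score(X, y))
```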
Abstract: Bootstrap Method: In statistics, the bootstrap draws subsets from the original data, computes a statistic on each subset, and finally aggregates those statistics. For example, to find the average height of a country's population, it is impossible to measure everyone, but one can recruit 1000 volunteers in each of 10 provinces, compute each group's mean, and finally average across the provinces. Bagging (Bootstrap Aggregating)…
posted @ 2018-08-29 20:34 Junfei_Wang Views(329) Comments(0) Recommend(0)
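A numpy sketch of the bootstrap as described: resample with replacement, compute the statistic per resample, then aggregate. The height numbers are made up to mirror the example.

```python
import numpy as np

rng = np.random.default_rng(0)
heights = rng.normal(170, 8, size=1000)      # stand-in for measured heights (cm)

# One mean per bootstrap resample (sampling with replacement), then aggregate.
means = [rng.choice(heights, size=heights.size, replace=True).mean()
         for _ in range(500)]
print(np.mean(means), np.std(means))         # point estimate and its spread
```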
Abstract: Before starting this kNN post, a quick example. When the cup in your hand suddenly slips and falls from a meter up toward the floor, most people panic and think: oh no, it's going to shatter! That instinctive reaction illustrates exactly how kNN works: nobody has ever seen that particular cup break before, but everyone has seen many other cups shatter, as well as many dropped cups that survived. So we know…
posted @ 2018-08-26 23:04 Junfei_Wang Views(1090) Comments(0) Recommend(0)
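A bare-bones kNN classifier matching the analogy: a new case gets the majority label of the k most similar cases seen before. A sketch, not the post's implementation.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=5):
    X_train, y_train = np.asarray(X_train), np.asarray(y_train)
    dists = np.linalg.norm(X_train - x, axis=1)       # similarity = distance
    nearest = np.argsort(dists)[:k]                   # the k closest examples
    return Counter(y_train[nearest]).most_common(1)[0][0]  # majority vote
```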
Abstract: I have often seen people say that when choosing an Optimizer you should just default to Adam. Such advice is rather awkward: anyone with a bit of scientific spirit would want to ask why and understand the whole picture, which is one reason I started this Optimizer series. Momentum and RMSProp were introduced earlier; Adam is simply the combination of the two, plus bias correction…
posted @ 2018-07-13 20:24 Junfei_Wang Views(777) Comments(0) Recommend(0)
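The Adam update, showing exactly the combination the summary describes: Momentum's first moment, RMSProp's second moment, and bias correction of both,

```latex
m_t = \beta_1 m_{t-1} + (1-\beta_1)\,g_t
\qquad\text{(Momentum: first moment)}\\
v_t = \beta_2 v_{t-1} + (1-\beta_2)\,g_t^{2}
\qquad\text{(RMSProp: second moment)}\\
\hat{m}_t = \frac{m_t}{1-\beta_1^{\,t}}, \quad
\hat{v}_t = \frac{v_t}{1-\beta_2^{\,t}}
\qquad\text{(bias correction)}\\
\theta_t = \theta_{t-1} - \frac{\alpha\,\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}
```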
Abstract: AdaGrad (Adaptive Gradient Algorithm) is another derivative of standard Gradient Descent, whose update rule is $\theta \leftarrow \theta - \alpha\,\nabla_\theta J(\theta)$. There the learning rate α is shared by every feature of the Cost Function, yet a single α can hardly be appropriate for every…
posted @ 2018-07-11 15:52 Junfei_Wang Views(1174) Comments(0) Recommend(0)
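AdaGrad's per-parameter rule, which replaces the single global α with α scaled by each parameter's accumulated squared gradients:

```latex
G_t = \sum_{\tau=1}^{t} g_\tau^{2} \ \text{(accumulated per parameter)}, \qquad
\theta_{t+1} = \theta_t - \frac{\alpha}{\sqrt{G_t + \epsilon}}\; g_t
```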
Abstract: In Batch Gradient Descent, Mini-batch Gradient Descent, and Stochastic Gradient Descent (SGD), every optimization step is independent of the ones before it. At the start of each iteration, the algorithm computes the gradient of the updated Cost Function and uses that gradient…
posted @ 2018-07-09 20:15 Junfei_Wang Views(693) Comments(0) Recommend(0)
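The classical Momentum update that this series introduces to break that independence: each step keeps a decaying memory of the previous gradients,

```latex
v_t = \gamma\, v_{t-1} + \alpha\, \nabla_\theta J(\theta), \qquad
\theta \leftarrow \theta - v_t
```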
Abstract: The problem with all-zero initialization: in Linear Regression, parameters are commonly initialized to all zeros, because during Gradient Descent each parameter is updated independently along its own input dimension, with the update rule $\theta_j \leftarrow \theta_j - \alpha\,\frac{1}{m}\sum_{i=1}^{m}\bigl(h_\theta(x^{(i)})-y^{(i)}\bigr)x_j^{(i)}$. In a Neural Network (Deep Learning), however, when we initialize all parameters to zero…
posted @ 2018-07-04 18:39 Junfei_Wang Views(363) Comments(0) Recommend(0)
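A toy numpy demonstration of the symmetry problem the summary points to: with all-zero weights, every hidden unit computes the identical output and would receive identical gradients, so the units can never differentiate. The shapes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
W = np.zeros((4, 3))            # all-zero hidden layer: 4 inputs, 3 units
x = rng.normal(size=(1, 4))
h = np.tanh(x @ W)              # every unit outputs the same value (tanh(0) = 0)
print(h)                        # [[0. 0. 0.]] -- the three units are identical
```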
Abstract: L2 Regularization is one of the remedies for the Variance (overfitting) problem; in the Neural Network field there are also Dropout, L1 Regularization, and others. Whichever method is used, the core idea is to make the model simpler, balancing a perfect fit of the training set against…
posted @ 2018-06-30 18:47 Junfei_Wang Views(198) Comments(0) Recommend(0)
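The L2-regularized cost, where the penalty term pushes the model toward smaller weights and hence a simpler hypothesis:

```latex
J_{\mathrm{reg}}(\theta) = J(\theta) + \frac{\lambda}{2m} \sum_{j} \theta_j^{2}
```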