ying_vincent
June 30, 2013
Machine Learning No.11: Recommender System
Summary: 1. Content-based: problem formulation; content-based recommendations 2. The collaborative filtering algorithm Read more
posted @ 2013-06-30 09:35 ying_vincent Views(193) Comments(0) Recommended(0)
Machine Learning No.10: Anomaly detection
Summary: 1. Algorithm 2. Evaluating an anomaly detection system 3. Anomaly detection vs. supervised learning 4. Choosing what features to use: pick features xi whose histogram hist(xi) looks Gaussian, or transform xi (e.g. log(xi + c)) so that hist(xi) becomes Gaussian; if an anomalous example's feature… Read more
posted @ 2013-06-30 04:58 ying_vincent Views(393) Comments(0) Recommended(0)
June 29, 2013
Machine Learning No.9: Dimensionality reduction
Summary: 1. The principal component analysis algorithm; data preprocessing 2. Choosing the number of principal components 3. Reconstruction from the compressed representation 4. Applications of PCA: compression (reducing the memory/disk needed to store data), speeding up learning algorithms, and visualization. A bad use of PCA: to pre… Read more
posted @ 2013-06-29 12:31 ying_vincent Views(192) Comments(0) Recommended(0)
Machine Learning No.8: Clustering
Summary: 1. The K-means algorithm 2. The K-means optimization objective 3. Random initialization Read more
posted @ 2013-06-29 11:03 ying_vincent Views(223) Comments(0) Recommended(0)
Machine Learning No.7: Support Vector Machines
Summary: 1. SVM hypothesis 2. Large margin classification 3. Kernels and similarity: if x ≈ l^(1), f1 ≈ 1; if x is far from l^(1), f1 ≈ 0 4. SVM with kernels 5. SVM parameters 6. Multi-class classification 7. Logistic regression vs. SVMs Read more
posted @ 2013-06-29 10:10 ying_vincent Views(183) Comments(0) Recommended(0)
June 27, 2013
Machine Learning No.5: Neural networks
Summary: 1. Motivation: when the number of features is very large, the earlier algorithms are not a good way to learn complex nonlinear hypotheses. 2. Representation: the "activation" of unit i in layer j; the matrix of weights controlling the function mapping from layer j to layer j+1. 3. Example: we have the neural expression… Read more
posted @ 2013-06-27 11:00 ying_vincent Views(266) Comments(0) Recommended(0)
Machine Learning No.4: Regularization
Summary: 1. Underfitting = high bias; overfitting = high variance. 2. Addressing overfitting: (1) reduce the number of features: manually select which features to keep, or use a model selection algorithm (disadvantage: throws away some useful information); (2) regularization: keep all the features, but reduce the magnitude/valu… Read more
posted @ 2013-06-27 07:45 ying_vincent Views(217) Comments(0) Recommended(0)
June 26, 2013
Machine Learning No.3: Logistic Regression
Summary: 1. Decision boundary: when θᵀx ≥ 0, g(z) ≥ 0.5 and we predict y = 1; when θᵀx < 0, we predict y = 0; so the hypothesis is: … 2. Cost function: to fit the parameters θ: …; to make a prediction given a new x: output … 3. Gradient descent: Repeat { … } (simultaneously update all θj) Read more
posted @ 2013-06-26 10:41 ying_vincent Views(136) Comments(0) Recommended(0)
June 25, 2013
Algorithm: Sieve of Eratosthenes
Summary: A method for finding all primes up to n. 2 is prime, so every multiple 2·i is composite; likewise 3 is prime, so every 3·i is composite. The code (the original fragment left n unset and omitted vector's element type; fixed here):
    int n = 100;                       // upper bound (example value)
    vector<bool> prime(n + 1, true);
    prime[0] = prime[1] = false;
    for (int i = 2; (long long)i * i <= n; ++i)
        if (prime[i])
            for (int j = i * i; j <= n; j += i)
                prime[j] = false;
Read more
posted @ 2013-06-25 12:05 ying_vincent Views(143) Comments(0) Recommended(0)
Machine Learning No.2: Linear Regression with Multiple Variables
Summary: 1. Notation: n = number of features; x^(i) = the input (features) of the i-th training example; x_j^(i) = the value of feature j in the i-th training example. 2. Hypothesis: … 3. Cost function: … 4. Gradient descent: Repeat { … }; substituting the cost function: Repeat { … } (simultaneously update θj for j = 0, …, n) 5. Mean normalization: replace … Read more
posted @ 2013-06-25 08:55 ying_vincent Views(222) Comments(0) Recommended(0)