Blog Category - algorithm
Abstract: http://ufldl.stanford.edu/tutorial/supervised/OptimizationStochasticGradientDescent/
        Read more
                
Abstract: https://stats.stackexchange.com/questions/164876/tradeoff-batch-size-vs-number-of-iterations-to-train-a-neural-network It has been observed in practice …
        Read more
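The batch-size tradeoff the linked answer discusses can be sketched with a toy minibatch SGD loop (the 1-D least-squares problem, data, and function names below are illustrative assumptions, not from the post): smaller batches take many noisy steps per epoch, larger batches take fewer, smoother ones, and on a clean problem both reach the same optimum.

```python
import random

# Toy problem (illustrative): fit y = w*x to noise-free data generated
# with w = 3, so every batch size should converge to w = 3.
DATA = [(x, 3.0 * x) for x in (0.5, 1.0, 1.5, 2.0, 2.5, 3.0)]

def sgd(batch_size, lr=0.02, epochs=200, seed=0):
    """Minibatch SGD on mean squared error for a single weight w."""
    rng = random.Random(seed)
    data = list(DATA)
    w = 0.0
    for _ in range(epochs):
        rng.shuffle(data)
        for i in range(0, len(data), batch_size):
            batch = data[i:i + batch_size]
            # Gradient of the minibatch mean of (w*x - y)^2 w.r.t. w.
            grad = sum(2.0 * (w * x - y) * x for x, y in batch) / len(batch)
            w -= lr * grad
    return w

# batch_size=1 takes six noisy steps per epoch; batch_size=6 takes one
# smooth full-batch step; both land on the same optimum here.
w_small, w_full = sgd(1), sgd(6)
```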
                
Abstract: https://rdipietro.github.io/friendly-intro-to-cross-entropy-loss/ [Turning inputs into outputs: a probability distribution] When we develop a model for probabilistic classification, we aim to …
        Read more
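The post's theme, turning raw scores into a probability distribution and scoring it with cross-entropy, can be sketched in a few lines (the toy logits are illustrative, not from the post):

```python
import math

def softmax(logits):
    """Turn raw scores into a probability distribution over classes."""
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(probs, true_index):
    """Negative log-probability the model assigns to the correct class."""
    return -math.log(probs[true_index])

probs = softmax([2.0, 1.0, 0.1])         # toy logits for a 3-class problem
loss_right = cross_entropy(probs, 0)     # correct class has the top logit
loss_wrong = cross_entropy(probs, 2)     # loss grows for unlikely classes
```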
                
Abstract: https://en.wikipedia.org/wiki/Claude_Shannon In 1948, the promised memorandum appeared as "A Mathematical Theory of Communication," an article in two …
        Read more
                
Abstract: http://cs231n.github.io/linear-classify/
        Read more
                
Abstract: The hinge loss is a convex function, so many of the usual convex optimizers used in machine learning can work with it. It is not differentiable, but has …
        Read more
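The property the excerpt points at, convex but non-differentiable, with a usable subgradient, is easy to see in code (a minimal 1-D sketch; the function names are illustrative):

```python
def hinge(score, y):
    """Hinge loss for a raw score and a label y in {-1, +1}."""
    return max(0.0, 1.0 - y * score)

def hinge_subgrad(x, y, w):
    """A valid subgradient of hinge(w*x, y) with respect to w (1-D model)."""
    if y * w * x < 1.0:
        return -y * x      # margin violated: slope of 1 - y*w*x
    return 0.0             # margin satisfied: flat region, 0 is valid

# The kink at margin 1 is exactly where the loss is not differentiable,
# yet the subgradient above lets subgradient descent optimize it anyway.
```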
                
Abstract: http://www1.inf.tu-dresden.de/~ds24/lehre/ml_ws_2013/ml_11_hinge.pdf Two extremes: • Big 𝐶 → the loss is more important → better recognition rate but …
        Read more
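The two extremes from the slides can be reproduced on a toy 1-D soft-margin objective (the data, the grid search, and all names are illustrative assumptions, not from the slides): a big C makes the hinge loss dominate, pushing the weight to fit the training points hard; a small C keeps the weight small and tolerates some loss.

```python
# Toy 1-D soft-margin SVM objective: 0.5*w^2 + C * total hinge loss.
DATA = [(2.0, 1), (-2.0, -1), (0.1, -1)]   # last point is noisy/mislabeled

def svm_objective(w, C):
    reg = 0.5 * w * w
    loss = sum(max(0.0, 1.0 - y * w * x) for x, y in DATA)
    return reg + C * loss

def best_w(C):
    """Crude grid search over w, enough to show the C tradeoff."""
    grid = [i / 100.0 for i in range(-300, 301)]
    return min(grid, key=lambda w: svm_objective(w, C))

# Big C -> larger |w| chasing the training loss (including the noisy
# point); small C -> heavier regularization and a much smaller weight.
w_big_c, w_small_c = best_w(100.0), best_w(0.01)
```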
                
Abstract: C:\Python36\python.exe D:/pymine/clean/chained_located/chained_located_dynamic_input.py '-69,-47,,,-72,-40,-37,-96,-36,-97,-67,-67,-43,,-100,-70,-54,-62,-92,-98,,-33,-77,-17,-17,,-98,-76...
        Read more
                
Abstract: http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture9.pdf The deeper model performs worse, but it’s not caused by overfitting!
        Read more
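The slide's observation, that extra depth can degrade even training performance, is the motivation for residual connections: if each added layer computes x + F(x), setting F to zero makes it an identity, so a deeper net can always at least match a shallower one. A scalar caricature of that argument (purely illustrative, not from the slides):

```python
def plain_layer(x, w):
    return w * x            # must learn w == 1 just to pass x through

def residual_layer(x, w):
    return x + w * x        # w == 0 already gives the identity map

# Stacking residual layers with zero weights leaves the input unchanged,
# so added depth cannot destroy what the shallower net already computed.
x = 5.0
for w in (0.0, 0.0, 0.0):
    x = residual_layer(x, w)
```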
                
Abstract: http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture4.pdf
        Read more
                
Abstract: http://cs231n.github.io/linear-classify/ http://cs231n.github.io/assets/svmvssoftmax.png
        Read more
                
Abstract: http://karpathy.github.io/2014/09/02/what-i-learned-from-competing-against-a-convnet-on-imagenet/
        Read more
                
Abstract: https://www.tensorflow.org/tutorials/image_recognition
        Read more
                
Abstract: http://neuralnetworksanddeeplearning.com/chap1.html Up to now, we've been discussing neural networks where the output from one layer is used as input …
        Read more
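The architecture the excerpt describes, each layer's output feeding the next, is only a few lines of code (the weights, sizes, and names are illustrative):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    """One feedforward layer: each neuron squashes its weighted input sum."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# A 2-2-1 network: the hidden layer's output is the input to the output layer.
hidden = layer([1.0, 0.5], [[0.4, -0.6], [0.3, 0.8]], [0.0, -0.1])
output = layer(hidden, [[1.2, -0.7]], [0.2])
```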
                
Abstract: http://neuralnetworksanddeeplearning.com/chap1.html Sigmoid neurons are similar to perceptrons, but modified so that small changes in their weights …
        Read more
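The contrast the excerpt draws can be demonstrated directly: near the threshold, a tiny change in the weighted input flips a perceptron's output outright, but only nudges a sigmoid neuron's output (a minimal sketch):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def perceptron(z):
    return 1 if z > 0 else 0

# Move the weighted input z across the threshold by a tiny amount.
flip = perceptron(0.001) - perceptron(-0.001)   # jumps a full step, 0 -> 1
nudge = sigmoid(0.001) - sigmoid(-0.001)        # changes only slightly
```

This smoothness is what makes gradient-based learning of weights and biases possible.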
                
Abstract: @Matthew [salute] In recommender systems, neither of the two traditional algorithms, user-based filtering or item-based filtering, can cold-start when the quantity and quality of early historical data are insufficient. [One ANN-based remedy] The first time Zhang San clicks a car ad, ANN training drives the car class to the highest probability at the output layer, and the runner-up classes become the ad categories to serve next …
        Read more
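A hypothetical sketch of the idea above (the category names, the `update_on_click` helper, and the single-step learning rate are all illustrative assumptions, not a real ad system): one cross-entropy gradient step after the first click raises the clicked category's softmax probability, and sorting the scores yields the order in which to serve the next ads.

```python
import math

CATEGORIES = ["car", "phone", "travel", "food"]   # hypothetical ad classes

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def update_on_click(scores, clicked, lr=1.0):
    """One cross-entropy gradient step toward the clicked category."""
    probs = softmax(scores)
    # Raise the clicked class, lower the others in proportion to their
    # current probability (gradient of -log p[clicked] w.r.t. the scores).
    return [s + lr * ((1.0 if i == clicked else 0.0) - p)
            for i, (s, p) in enumerate(zip(scores, probs))]

scores = [0.0, 0.0, 0.0, 0.0]                     # cold start: no history
scores = update_on_click(scores, CATEGORIES.index("car"))
ranked = sorted(range(len(scores)), key=lambda i: -scores[i])
```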
                
 
                    
                
Zhejiang Public Network Security Filing No. 33010602011771