Three learning modes of gradient descent in neural networks

The three variants below differ only in how much data the gradient is computed on for each parameter update: the full training set (batch), a single training example (stochastic), or a small batch of examples (mini-batch).

# Batch gradient descent: compute the gradient of the loss over the entire
# training set, then take a single parameter step per epoch.
for i in range(nb_epochs):
    params_grad = evaluate_gradient(loss_function, data, params)
    params = params - learning_rate * params_grad
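
The snippet above is only a skeleton: data, params, loss_function, evaluate_gradient, learning_rate and nb_epochs are all left undefined. Below is a minimal runnable sketch of one way to fill them in, assuming a least-squares linear-regression loss and a generic finite-difference gradient; the concrete definitions (X, true_w, y, the finite-difference step) are illustrative assumptions, not part of the original snippet.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # 200 examples, 3 features (illustrative)
true_w = np.array([1.5, -2.0, 0.5])            # ground-truth weights (illustrative)
y = X @ true_w + 0.1 * rng.normal(size=200)    # noisy linear targets

data = list(zip(X, y))                         # one (features, target) pair per example

def loss_function(examples, params):
    """Mean squared error of a linear model on the given examples."""
    xs = np.array([x for x, _ in examples])
    ys = np.array([t for _, t in examples])
    return np.mean((xs @ params - ys) ** 2)

def evaluate_gradient(loss_function, examples, params, eps=1e-6):
    """Central-difference estimate of the gradient of the loss w.r.t. params."""
    grad = np.zeros_like(params)
    for j in range(len(params)):
        step = np.zeros_like(params)
        step[j] = eps
        grad[j] = (loss_function(examples, params + step)
                   - loss_function(examples, params - step)) / (2 * eps)
    return grad

params = np.zeros(3)
learning_rate, nb_epochs = 0.1, 200

# Batch gradient descent: one parameter update per pass over the full set.
for i in range(nb_epochs):
    params_grad = evaluate_gradient(loss_function, data, params)
    params = params - learning_rate * params_grad

print(params)   # close to true_w after training

The same loss_function and evaluate_gradient also work in the stochastic and mini-batch loops below, provided a single example is wrapped in a one-element list, e.g. evaluate_gradient(loss_function, [example], params).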


# Stochastic gradient descent: shuffle the data each epoch, then update the
# parameters after every single training example.
for i in range(nb_epochs):
    np.random.shuffle(data)
    for example in data:
        params_grad = evaluate_gradient(loss_function, example, params)
        params = params - learning_rate * params_grad


# Mini-batch gradient descent: shuffle the data each epoch, then update the
# parameters on small batches of examples (here batch_size=50).
for i in range(nb_epochs):
    np.random.shuffle(data)
    for batch in get_batches(data, batch_size=50):
        params_grad = evaluate_gradient(loss_function, batch, params)
        params = params - learning_rate * params_grad
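
The get_batches helper is likewise left undefined in the original snippet. A minimal sketch, assuming data is an in-memory list (or NumPy array) of examples: yield consecutive slices of at most batch_size items.

def get_batches(data, batch_size):
    """Yield successive batches of at most batch_size examples."""
    for start in range(0, len(data), batch_size):
        yield data[start:start + batch_size]

# Quick check: 7 examples with batch_size=3 -> batch sizes 3, 3, 1.
print([len(batch) for batch in get_batches(list(range(7)), batch_size=3)])

Seen through get_batches, the three variants differ only in how much data goes into each update: batch_size=len(data) recovers batch gradient descent, batch_size=1 behaves like stochastic gradient descent, and intermediate sizes (commonly around 50 to 256) give mini-batch gradient descent.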

 
