A Detailed Explanation of the Gradient Descent Algorithm

https://blog.csdn.net/qq_41800366/article/details/86583789

Following that article, and based on my own understanding of it, I tried applying gradient descent in practice.
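
For reference, the script below fits a linear model y ≈ w·x + b by minimizing half the mean squared error. The derivative that loss_grad computes is the partial with respect to b; the partial with respect to w (last line) is the textbook update that the grad/x trick in the loop stands in for:

L(w, b) = \frac{1}{2n} \sum_{i=1}^{n} (w x_i + b - y_i)^2

\frac{\partial L}{\partial b} = \frac{1}{n} \sum_{i=1}^{n} (w x_i + b - y_i), \qquad
\frac{\partial L}{\partial w} = \frac{1}{n} \sum_{i=1}^{n} (w x_i + b - y_i) \, x_i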

#coding:utf-8
import numpy as np

w = 2  # initial guess for the slope
b = 2  # initial guess for the intercept
lr = 0.01  # learning rate
notInc = 0  # consecutive steps in which the loss did not decrease
MAX_NOT_INC = 10  # stop once the loss stalls for this many steps

x = np.array([1, 3, 10, 100, -1, -10], dtype=np.float32)
y = np.array([2, 7, 19, 190, -2, -20], dtype=np.float32)


n = len(x)
_2n = 2*n
def model(x):
    return w*x + b

def loss(x, y):
    # half mean squared error: sum((model(x) - y)^2) / (2n)
    return np.sum(np.square(model(x) - y)) / _2n

# derivative of the loss with respect to b (the mean residual)
def loss_grad(x, y):
    return np.sum(model(x) - y) / n

# print(loss(x, y))
step = 0
lastL = np.inf  # loss from the previous step
while True:
    grad = loss_grad(x, y)
    # print(grad)
    
    # my heuristic for w: scale the mean residual by 1/x for each sample
    # (this is why no feature may be 0); the textbook step would use
    # np.mean((model(x) - y) * x) instead
    w -= lr * np.mean(grad / x)
    b -= lr * grad  # standard gradient step for b
    l = loss(x, y)

    if l < lastL:
        notInc = 0  # the loss improved; reset the stall counter
    else:
        notInc += 1
    
    # print(w, b)
    if step % 100 == 0:
        print(l)
    step += 1

    if notInc >= MAX_NOT_INC:  # the loss has stalled for 10 steps: stop
        break
    lastL = l

print(w, b, loss(x, y), model(10000), model(-20000))

The training result is fairly satisfactory, but there is one catch: no feature value may be 0, because the w update divides by x, and a zero would raise a division-by-zero error.
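
For comparison, here is a minimal sketch (my own, not from the original article) that uses the textbook gradient for w instead of the grad/x heuristic, so a zero-valued feature is no longer a problem. fit_linear, the smaller learning rate, and the fixed step count are choices I made for the sketch:

import numpy as np

def fit_linear(x, y, lr=0.0005, steps=20000):
    # plain gradient descent on L = sum((w*x + b - y)^2) / (2n)
    w, b = 0.0, 0.0
    n = len(x)
    for _ in range(steps):
        residual = w * x + b - y            # per-sample error
        w -= lr * np.sum(residual * x) / n  # dL/dw = mean(residual * x)
        b -= lr * np.sum(residual) / n      # dL/db = mean(residual)
    return w, b

x = np.array([1, 3, 10, 100, -1, -10], dtype=np.float32)
y = np.array([2, 7, 19, 190, -2, -20], dtype=np.float32)
print(fit_linear(x, y))

The learning rate has to be much smaller here because the true w gradient scales with x², which the x = 100 sample makes large; the grad/x version sidesteps that at the cost of forbidding zero features.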

posted @ 2022-06-03 16:43  Please Call me 小强