PyTorch Deep Learning Practice (3) --- Gradient Descent

Notes on the Bilibili course 《PyTorch深度学习实践》 (PyTorch Deep Learning Practice): gradient descent.

Gradient descent on the cost averaged over the whole dataset may get stuck in a local optimum.
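For the linear model ŷ = x · w used throughout this post, the averaged cost, its derivative with respect to w, and the update rule (exactly what the cost() and gradient() functions below compute) are:

$$\mathrm{cost}(w) = \frac{1}{N}\sum_{n=1}^{N}(x_n \cdot w - y_n)^2$$

$$\frac{\partial\,\mathrm{cost}}{\partial w} = \frac{1}{N}\sum_{n=1}^{N} 2\,x_n\,(x_n \cdot w - y_n)$$

$$w \leftarrow w - \alpha\,\frac{\partial\,\mathrm{cost}}{\partial w} \qquad (\text{learning rate } \alpha = 0.01 \text{ below})$$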

import numpy as np

x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]

w = 1.0  # initial guess for the weight

def forward(x):
    # linear model: y_hat = x * w
    return x * w

def cost(xs, ys):
    # mean squared error averaged over all training samples
    cost = 0
    for x, y in zip(xs, ys):
        y_pred = forward(x)
        cost += (y_pred - y) ** 2
    return cost / len(xs)

def gradient(xs, ys):
    # d(cost)/dw, also averaged over all training samples
    grad = 0
    for x, y in zip(xs, ys):
        grad += 2 * x * (x * w - y)
    return grad / len(xs)

for epoch in range(100):
    cost_val = cost(x_data, y_data)
    grad_val = gradient(x_data, y_data)
    w -= 0.01 * grad_val  # learning rate 0.01, one update per epoch
    print('epoch:', epoch, 'w=', w, 'loss=', cost_val)
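As a quick sanity check (a minimal sketch; the test input 4.0 is an arbitrary value not in the original post), the trained model can be used for prediction, since w converges to roughly 2.0 on this data:

print('predict (after training): x=4.0, y_hat =', forward(4.0))  # should be close to 8.0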

Stochastic Gradient Descent (SGD)

Here the loss is computed for a single sample, and gradient() likewise computes the gradient of that single sample's loss; updating w sample by sample introduces randomness that helps avoid getting stuck in a local optimum.
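For a single sample (x_n, y_n), the loss and its gradient reduce to the following, which is what loss() and gradient() compute in the code below:

$$\mathrm{loss}_n(w) = (x_n \cdot w - y_n)^2$$

$$\frac{\partial\,\mathrm{loss}_n}{\partial w} = 2\,x_n\,(x_n \cdot w - y_n)$$

$$w \leftarrow w - \alpha \cdot 2\,x_n\,(x_n \cdot w - y_n) \qquad (\text{one update per sample, } \alpha = 0.05 \text{ below})$$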

import numpy as np

x_data = [1.0, 2.0, 3.0, 4.0]
y_data = [2.0, 4.0, 6.0, 8.0]

w = 1.0  # initial guess for the weight

def forward(x):
    # linear model: y_hat = x * w
    return x * w

def loss(x, y):
    # squared error of a single sample
    y_pred = forward(x)
    return (y_pred - y) ** 2

def gradient(x, y):
    # d(loss)/dw for a single sample
    return 2 * x * (x * w - y)

for epoch in range(100):
    for x, y in zip(x_data, y_data):
        grad = gradient(x, y)
        w -= 0.05 * grad  # learning rate 0.05, one update per sample
        l = loss(x, y)
        print(x, y, grad)
    print('epoch:', epoch, 'loss:', l, 'w:', w)
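Strictly speaking, the loop above visits the samples in the same fixed order every epoch; stochastic gradient descent usually shuffles the sample order each epoch. A minimal variant (a sketch, not from the original post, assuming forward/loss/gradient and x_data/y_data from the code above are already defined):

import random

w = 1.0  # reset the weight before retraining
for epoch in range(100):
    samples = list(zip(x_data, y_data))
    random.shuffle(samples)           # visit samples in a random order each epoch
    for x, y in samples:
        w -= 0.05 * gradient(x, y)    # one update per sample
    print('epoch:', epoch, 'loss:', loss(x, y), 'w:', w)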

 
