Some notes on why GBDT fits the negative gradient of the loss function

Take the Taylor expansion of \(L(y_i,f(x_i))\) at \(f(x_i)=f_{m-1}(x_i)\) to first order (the remainder is dropped, hence the approximation):

\[L(y_i,f(x_i))\approx L(y_i,f_{m-1}(x_i))+\left. \frac{\partial L(y_i,f(x_i))}{\partial f(x_i)} \right|_{f(x_i)=f_{m-1}(x_i)}\cdot (f(x_i)-f_{m-1}(x_i)) \]

Substituting \(f(x_i) = f_m(x_i) = f_{m-1}(x_i)+T_m(x_i;\theta _m)\) into the expansion and rearranging gives

\[L(y_i,f_m(x_i))-L(y_i,f_{m-1}(x_i))\approx \left. \frac{\partial L(y_i,f(x_i))}{\partial f(x_i)} \right|_{f(x_i)=f_{m-1}(x_i)}\cdot T_m(x_i;\theta _m) \]

The left-hand side must be negative (the strong learner obtained in each round must achieve a smaller loss than the previous round's, otherwise the optimization is pointless), so we let \(T_m(x_i;\theta _m)\) fit \(-\left. \frac{\partial L(y_i,f(x_i))}{\partial f(x_i)} \right|_{f(x_i)=f_{m-1}(x_i)}\); when the fit is good, the right-hand side is approximately the negative of a squared gradient, and hence negative.
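As a concrete instance (my own illustrative example, not from the original post): with squared loss the negative gradient is simply the residual, so each round's tree fits the residuals of the current model:

\[L(y_i,f(x_i))=\frac{1}{2}\left(y_i-f(x_i)\right)^2 \quad\Longrightarrow\quad -\left. \frac{\partial L(y_i,f(x_i))}{\partial f(x_i)} \right|_{f(x_i)=f_{m-1}(x_i)} = y_i - f_{m-1}(x_i) \]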
A common point of confusion: \(f(x_i)\) is a variable, representing the prediction on the \(i\)-th sample \(x_i\) of the strong learner ultimately being sought, while \(f_{m-1}(x_i)\) and \(f_m(x_i)\) are constants, namely the predictions on sample \(x_i\) of the strong learners obtained in rounds \(m-1\) and \(m\).
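The derivation above can be sketched in code. The following is a minimal illustration (my own, not the post's code; the names `fit_stump` and `gbdt_fit`, the choice of squared loss, and the depth-1 weak learners are all assumptions made for the sketch) showing that repeatedly fitting the negative gradient does drive the loss down round by round:

```python
import numpy as np

def fit_stump(x, g):
    """Fit a depth-1 regression tree (stump) to targets g by exhaustive
    split search, minimizing squared error within each leaf."""
    best_err, best_split = np.inf, None
    left_val = right_val = g.mean()
    for s in np.unique(x)[:-1]:          # exclude max so the right leaf is nonempty
        left, right = g[x <= s], g[x > s]
        err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if err < best_err:
            best_err, best_split = err, s
            left_val, right_val = left.mean(), right.mean()
    if best_split is None:               # no valid split: fall back to a constant
        return lambda z: np.full_like(z, left_val)
    s, lv, rv = best_split, left_val, right_val
    return lambda z: np.where(z <= s, lv, rv)

def gbdt_fit(x, y, n_rounds=30, lr=0.5):
    """Gradient boosting for squared loss L = (y - f)^2 / 2: each round
    fits a weak learner T_m to the negative gradient y - f_{m-1}."""
    f = np.full_like(y, y.mean())        # f_0: constant model
    losses = []
    for _ in range(n_rounds):
        neg_grad = y - f                 # -dL/df evaluated at f_{m-1}
        tree = fit_stump(x, neg_grad)    # T_m fits the negative gradient
        f = f + lr * tree(x)             # f_m = f_{m-1} + lr * T_m
        losses.append(0.5 * ((y - f) ** 2).mean())
    return f, losses
```

Because each stump's leaf values are the residual means within the leaf, every update can only lower (never raise) the squared loss for a learning rate in \((0, 2)\), which is the code-level counterpart of "the left-hand side must be negative".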

posted @ 2022-05-23 11:39  lcxxx