Least Squares Explained, Part 5: A Worked Example of Nonlinear Least Squares

1. Introduction

The previous article, "Least Squares Explained, Part 4: Nonlinear Least Squares", introduced the basic definition of nonlinear least-squares problems, the overall solution strategy, and the core Gauss-Newton algorithm, which linearizes the problem locally and solves it as a sequence of linear least-squares subproblems. Since nonlinear least squares is considerably more involved than the linear case, this article works through a concrete example, fitting the curve $y = \exp(a x^2 + b x + c)$, to deepen the understanding.

2. The Jacobian Matrix

The most tedious part is computing the Jacobian matrix. Let the parameter vector be:

$$\boldsymbol{\theta} = \begin{bmatrix} a \\ b \\ c \end{bmatrix}$$

The model function is:

$$f(x; \boldsymbol{\theta}) = \exp(a x^2 + b x + c)$$

Introduce the intermediate variable:

$$u = a x^2 + b x + c \quad \Rightarrow \quad f = e^u$$

By definition, the Jacobian is the matrix formed by the partial derivatives of the model output with respect to the parameters:

$$\mathbf{J}_{i,j} = \frac{\partial f(x_i; \boldsymbol{\theta})}{\partial \theta_j}$$

Each row is the gradient of the model with respect to the parameters at the $i$-th sample. The analytic form of each partial derivative is:

  1. Partial derivative with respect to $a$:

    $$\frac{\partial f}{\partial a} = \frac{\partial}{\partial a} e^{a x^2 + b x + c} = e^{a x^2 + b x + c} \cdot \frac{\partial}{\partial a}(a x^2 + b x + c) = e^{u} \cdot x^2 = f(x; \boldsymbol{\theta}) \cdot x^2$$

  2. Partial derivative with respect to $b$:

    $$\frac{\partial f}{\partial b} = e^{u} \cdot \frac{\partial}{\partial b}(a x^2 + b x + c) = e^{u} \cdot x = f(x; \boldsymbol{\theta}) \cdot x$$

  3. Partial derivative with respect to $c$:

    $$\frac{\partial f}{\partial c} = e^{u} \cdot \frac{\partial}{\partial c}(a x^2 + b x + c) = e^{u} \cdot 1 = f(x; \boldsymbol{\theta})$$

Therefore, for $N$ data points $x_1, x_2, \dots, x_N$, the Jacobian matrix is:

$$\mathbf{J} = \begin{bmatrix} \frac{\partial f(x_1)}{\partial a} & \frac{\partial f(x_1)}{\partial b} & \frac{\partial f(x_1)}{\partial c} \\ \frac{\partial f(x_2)}{\partial a} & \frac{\partial f(x_2)}{\partial b} & \frac{\partial f(x_2)}{\partial c} \\ \vdots & \vdots & \vdots \\ \frac{\partial f(x_N)}{\partial a} & \frac{\partial f(x_N)}{\partial b} & \frac{\partial f(x_N)}{\partial c} \end{bmatrix} = \begin{bmatrix} f(x_1) \cdot x_1^2 & f(x_1) \cdot x_1 & f(x_1) \\ f(x_2) \cdot x_2^2 & f(x_2) \cdot x_2 & f(x_2) \\ \vdots & \vdots & \vdots \\ f(x_N) \cdot x_N^2 & f(x_N) \cdot x_N & f(x_N) \end{bmatrix}$$

or, written out explicitly:

$$\boxed{ \mathbf{J} = \begin{bmatrix} e^{a x_1^2 + b x_1 + c} \cdot x_1^2 & e^{a x_1^2 + b x_1 + c} \cdot x_1 & e^{a x_1^2 + b x_1 + c} \\ e^{a x_2^2 + b x_2 + c} \cdot x_2^2 & e^{a x_2^2 + b x_2 + c} \cdot x_2 & e^{a x_2^2 + b x_2 + c} \\ \vdots & \vdots & \vdots \\ e^{a x_N^2 + b x_N + c} \cdot x_N^2 & e^{a x_N^2 + b x_N + c} \cdot x_N & e^{a x_N^2 + b x_N + c} \end{bmatrix} }$$
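
Before turning to the full program, the derivation can be sanity-checked numerically. The following minimal sketch (an addition to this post, not part of the original) compares the analytic row $(f x^2,\; f x,\; f)$ against central finite differences; it reuses the `model()` function defined in the listing of the next section:

```cpp
#include <Eigen/Dense>
#include <cmath>
#include <iostream>
using namespace std;
using namespace Eigen;

double model(double x, const Vector3d& theta);  // defined in the listing below

// Compare the analytic Jacobian row (f*x^2, f*x, f) with central differences
void checkJacobianRow(double x, const Vector3d& theta) {
  double f = model(x, theta);
  Vector3d analytic(f * x * x, f * x, f);  // the row derived above
  const double h = 1e-6;                   // finite-difference step
  for (int j = 0; j < 3; ++j) {
    Vector3d tp = theta, tm = theta;
    tp(j) += h;
    tm(j) -= h;
    double numeric = (model(x, tp) - model(x, tm)) / (2.0 * h);
    cout << "df/dtheta_" << j << ": analytic=" << analytic(j)
         << ", numeric=" << numeric << endl;
  }
}
```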

3. Example

In practice, nonlinear least-squares problems are usually solved with an off-the-shelf library (such as Ceres Solver). The goal here, however, is to understand the solution process itself, especially the Gauss-Newton method, so this example is implemented with the basic matrix library Eigen:

```cpp
#include <Eigen/Dense>
#include <cmath>
#include <iostream>
#include <random>
#include <vector>

using namespace std;
using namespace Eigen;

// Model function: y = exp(a*x^2 + b*x + c)
double model(double x, const Vector3d& theta) {
  double a = theta(0);
  double b = theta(1);
  double c = theta(2);
  double exponent = a * x * x + b * x + c;
  // Guard against overflow
  if (exponent > 300) return exp(300);
  if (exponent < -300) return exp(-300);
  return exp(exponent);
}

// Compute the Jacobian matrix (analytic derivatives)
MatrixXd computeJacobian(const vector<double>& x_data,
                         const vector<double>& y_data, const Vector3d& theta) {
  int N = x_data.size();
  MatrixXd J(N, 3);  // one row per point; columns: ∂f/∂a, ∂f/∂b, ∂f/∂c
  for (int i = 0; i < N; ++i) {
    double x = x_data[i];
    double f = model(x, theta);  // current prediction
    // Analytic derivatives via the chain rule
    double df_de = f;  // d(exp(u))/du = exp(u)
    double de_da = x * x;
    double de_db = x;
    double de_dc = 1.0;
    J(i, 0) = df_de * de_da;  // ∂f/∂a
    J(i, 1) = df_de * de_db;  // ∂f/∂b
    J(i, 2) = df_de * de_dc;  // ∂f/∂c
  }
  return J;
}

// Compute the residual vector r_i = y_i - f(x_i; theta)
VectorXd computeResiduals(const vector<double>& x_data,
                          const vector<double>& y_data, const Vector3d& theta) {
  int N = x_data.size();
  VectorXd residuals(N);
  for (int i = 0; i < N; ++i) {
    double pred = model(x_data[i], theta);
    residuals(i) = y_data[i] - pred;
  }
  return residuals;
}

int main() {
  // ========================
  // 1. Set the true parameters
  // ========================
  Vector3d true_params;
  true_params << 0.05, -0.4, 1.0;  // a, b, c
  double a_true = true_params(0), b_true = true_params(1),
         c_true = true_params(2);
  cout << "True parameters: a=" << a_true << ", b=" << b_true
       << ", c=" << c_true << endl;

  // ========================
  // 2. Generate observations (with Gaussian noise)
  // ========================
  int N = 50;
  vector<double> x_data(N), y_data(N);
  random_device rd;
  mt19937 gen(rd());
  uniform_real_distribution<double> x_dis(-5.0, 5.0);  // x in [-5, 5]
  normal_distribution<double> noise_gen(0.0, 0.1);     // noise with std dev 0.1
  for (int i = 0; i < N; ++i) {
    x_data[i] = x_dis(gen);
    double y_true = model(x_data[i], true_params);
    y_data[i] = y_true + noise_gen(gen);  // add noise
  }

  // ========================
  // 3. Initialize the parameters (a blind guess)
  // ========================
  Vector3d theta = Vector3d::Zero();  // initial guess: a=0, b=0, c=0
  cout << "Initial guess: a=" << theta(0) << ", b=" << theta(1)
       << ", c=" << theta(2) << endl;

  // ========================
  // 4. Gauss-Newton iteration
  // ========================
  int max_iter = 50;
  double tol = 1e-6;
  double prev_cost = 0.0;
  cout << "\nStarting Gauss-Newton iteration...\n";
  for (int iter = 0; iter < max_iter; ++iter) {
    // Compute the residual vector r
    VectorXd r = computeResiduals(x_data, y_data, theta);
    // Compute the cost ||r||^2
    double cost = r.squaredNorm();
    cout << "Iteration " << iter << ": sum of squared residuals = " << cost
         << endl;
    // Convergence test
    if (iter > 0 && abs(cost - prev_cost) < tol) {
      cout << "Converged!" << endl;
      break;
    }
    prev_cost = cost;
    // Compute the Jacobian matrix
    MatrixXd J = computeJacobian(x_data, y_data, theta);
    // Solve the Gauss-Newton equation (J^T J) Δ = J^T r; QR on J solves the
    // equivalent linear least-squares problem min ||JΔ - r||^2 and is more
    // numerically stable than forming the normal equations
    VectorXd delta = J.colPivHouseholderQr().solve(r);
    // Optional: cap the step size to guard against divergence
    // if (delta.norm() > 1.0) delta /= delta.norm();
    // Update the parameters: theta = theta + delta
    theta += delta;
    // Check that the parameters are still finite (overflow guard)
    if (!theta.allFinite()) {
      cout << "Warning: parameters diverged! Stopping." << endl;
      break;
    }
  }

  // ========================
  // 5. Report the results
  // ========================
  cout << "\n--- Fitting finished ---" << endl;
  cout << "Estimated parameters: a=" << theta(0) << ", b=" << theta(1)
       << ", c=" << theta(2) << endl;
  cout << "True parameters: a=" << a_true << ", b=" << b_true
       << ", c=" << c_true << endl;
  // Final residuals
  VectorXd final_r = computeResiduals(x_data, y_data, theta);
  cout << "Final sum of squared residuals: " << final_r.squaredNorm() << endl;

  // ========================
  // 6. (Optional) Parameter covariance and standard errors
  // ========================
  MatrixXd J_final = computeJacobian(x_data, y_data, theta);
  int n = N, p = 3;
  double sigma_squared = final_r.squaredNorm() / (n - p);  // noise variance estimate
  MatrixXd cov_theta =
      sigma_squared * (J_final.transpose() * J_final).inverse();
  Vector3d std_error;
  std_error << sqrt(cov_theta(0, 0)), sqrt(cov_theta(1, 1)),
      sqrt(cov_theta(2, 2));
  cout << "\nParameter standard errors (approximate):" << endl;
  cout << "a: ±" << std_error(0) << endl;
  cout << "b: ±" << std_error(1) << endl;
  cout << "c: ±" << std_error(2) << endl;
  return 0;
}
```

The output of one run:

```
True parameters: a=0.05, b=-0.4, c=1
Initial guess: a=0, b=0, c=0
Starting Gauss-Newton iteration...
Iteration 0: sum of squared residuals = 15595.7
Iteration 1: sum of squared residuals = 9.83388e+37
Iteration 2: sum of squared residuals = 1.33087e+37
Iteration 3: sum of squared residuals = 1.80114e+36
Iteration 4: sum of squared residuals = 2.43757e+35
Iteration 5: sum of squared residuals = 3.2989e+34
Iteration 6: sum of squared residuals = 4.46457e+33
Iteration 7: sum of squared residuals = 6.04214e+32
Iteration 8: sum of squared residuals = 8.17715e+31
Iteration 9: sum of squared residuals = 1.10666e+31
Iteration 10: sum of squared residuals = 1.4977e+30
Iteration 11: sum of squared residuals = 2.02691e+29
Iteration 12: sum of squared residuals = 2.74313e+28
Iteration 13: sum of squared residuals = 3.71242e+27
Iteration 14: sum of squared residuals = 5.02421e+26
Iteration 15: sum of squared residuals = 6.79954e+25
Iteration 16: sum of squared residuals = 9.20217e+24
Iteration 17: sum of squared residuals = 1.24538e+24
Iteration 18: sum of squared residuals = 1.68544e+23
Iteration 19: sum of squared residuals = 2.28099e+22
Iteration 20: sum of squared residuals = 3.08698e+21
Iteration 21: sum of squared residuals = 4.17778e+20
Iteration 22: sum of squared residuals = 5.65401e+19
Iteration 23: sum of squared residuals = 7.65187e+18
Iteration 24: sum of squared residuals = 1.03557e+18
Iteration 25: sum of squared residuals = 1.40149e+17
Iteration 26: sum of squared residuals = 1.89671e+16
Iteration 27: sum of squared residuals = 2.56691e+15
Iteration 28: sum of squared residuals = 3.47393e+14
Iteration 29: sum of squared residuals = 4.70143e+13
Iteration 30: sum of squared residuals = 6.36261e+12
Iteration 31: sum of squared residuals = 8.61052e+11
Iteration 32: sum of squared residuals = 1.16541e+11
Iteration 33: sum of squared residuals = 1.57677e+10
Iteration 34: sum of squared residuals = 2.13228e+09
Iteration 35: sum of squared residuals = 2.87975e+08
Iteration 36: sum of squared residuals = 3.87593e+07
Iteration 37: sum of squared residuals = 5.17234e+06
Iteration 38: sum of squared residuals = 677744
Iteration 39: sum of squared residuals = 86534.7
Iteration 40: sum of squared residuals = 10405.1
Iteration 41: sum of squared residuals = 543.309
Iteration 42: sum of squared residuals = 4.93922
Iteration 43: sum of squared residuals = 0.579241
Iteration 44: sum of squared residuals = 0.577034
Iteration 45: sum of squared residuals = 0.577034
Converged!
--- Fitting finished ---
Estimated parameters: a=0.0498221, b=-0.400562, c=1.00071
True parameters: a=0.05, b=-0.4, c=1
Final sum of squared residuals: 0.577034
Parameter standard errors (approximate):
a: ±0.00041604
b: ±0.00273643
c: ±0.00568421
```

A few points deserve attention:

  1. The iterative refinement that Gauss-Newton performs is implemented in code as an ordinary while/for loop, so the choice of initial value and of stopping condition matters a great deal; otherwise the loop can diverge or oscillate and never settle on a reasonable solution. The run above shows how close this example comes: the very first step overshoots to a cost of about 9.8e37, and the iteration only recovers after some forty steps:

    ```cpp
    for (int iter = 0; iter < max_iter; ++iter) {
      // ...
      // Convergence test
      if (iter > 0 && abs(cost - prev_cost) < tol) {
        cout << "Converged!" << endl;
        break;
      }
      // ...
      // Update the parameters: theta = theta + delta
      theta += delta;
    }
    ```
  2. Choosing a good initial value is not easy: it takes prior domain knowledge about the problem at hand, or the solution of an approximate linear least-squares problem can serve as the starting point. The initial guess (0, 0, 0) used here was picked without much deliberation since this is only a demonstration; in real applications the choice has to be made case by case. A sketch of the linearization approach follows.
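
    A minimal sketch of that linear-least-squares initialization (an addition to the original post, reusing the headers and using-declarations of the main listing; the helper name is hypothetical): taking logarithms turns the model into $\ln y = a x^2 + b x + c$, which is linear in $(a, b, c)$. Only points with $y > 0$ are usable, and the logarithm distorts the noise, so the result is merely a rough starting value for Gauss-Newton:

    ```cpp
    // Hypothetical helper: initial guess via log-linearization,
    // ln y = a*x^2 + b*x + c
    Vector3d initialGuessByLogLinearization(const vector<double>& x_data,
                                            const vector<double>& y_data) {
      vector<double> xs, logy;
      for (size_t i = 0; i < x_data.size(); ++i) {
        if (y_data[i] > 0) {  // the logarithm requires positive observations
          xs.push_back(x_data[i]);
          logy.push_back(log(y_data[i]));
        }
      }
      int M = static_cast<int>(xs.size());
      MatrixXd A(M, 3);
      VectorXd z(M);
      for (int i = 0; i < M; ++i) {
        A(i, 0) = xs[i] * xs[i];  // coefficient of a
        A(i, 1) = xs[i];          // coefficient of b
        A(i, 2) = 1.0;            // coefficient of c
        z(i) = logy[i];
      }
      // Ordinary linear least squares, solved by QR as in the main program
      return A.colPivHouseholderQr().solve(z);
    }
    ```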

  3. Three stopping conditions are common, combined in the sketch below:

    1. The parameter change is small:

    $$\boxed{\| \Delta \theta_k \| < \epsilon_1}$$

    2. The objective $S(\theta) = \| \mathbf{r}(\theta) \|^2$ barely changes:

    $$\boxed{| S(\theta_{k+1}) - S(\theta_k) | < \epsilon_2}$$

    3. The gradient is small enough:

    $$\boxed{\| J_k^T \mathbf{r}_k \| < \epsilon_3}$$

    The gradient of the objective is $\nabla S(\theta) = 2 J(\theta)^T \mathbf{r}(\theta)$, and $J^T \mathbf{r}$ is exactly the right-hand side of the normal equations.
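
    A minimal sketch combining all three tests inside the loop (the tolerances `eps1`, `eps2`, `eps3` are hypothetical values added here for illustration; the program above uses only the cost-change test):

    ```cpp
    // Inside the Gauss-Newton loop, once r, J, cost, prev_cost and delta exist:
    const double eps1 = 1e-8, eps2 = 1e-10, eps3 = 1e-8;
    bool small_step     = delta.norm() < eps1;                // ||Δθ|| small
    bool small_decrease = abs(cost - prev_cost) < eps2;       // |ΔS| small
    bool small_gradient = (J.transpose() * r).norm() < eps3;  // ||J^T r|| small
    if (small_step || small_decrease || small_gradient) break;
    ```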

  4. The crux is the computation of the Jacobian matrix:

    ```cpp
    // Compute the Jacobian matrix (analytic derivatives)
    MatrixXd computeJacobian(const vector<double>& x_data,
                             const vector<double>& y_data, const Vector3d& theta) {
      int N = x_data.size();
      MatrixXd J(N, 3);  // one row per point; columns: ∂f/∂a, ∂f/∂b, ∂f/∂c
      for (int i = 0; i < N; ++i) {
        double x = x_data[i];
        double f = model(x, theta);  // current prediction
        // Analytic derivatives via the chain rule
        double df_de = f;  // d(exp(u))/du = exp(u)
        double de_da = x * x;
        double de_db = x;
        double de_dc = 1.0;
        J(i, 0) = df_de * de_da;  // ∂f/∂a
        J(i, 1) = df_de * de_db;  // ∂f/∂b
        J(i, 2) = df_de * de_dc;  // ∂f/∂c
      }
      return J;
    }
    ```

    This code can be compared term by term with the Jacobian matrix derived in Section 2.

  5. With randomly generated data this program does not always converge: an ill-conditioned Jacobian can yield a wrong update direction, and a poor initial guess can make the iteration diverge. Gauss-Newton is above all a method that is easy to understand in theory; for more engineering-grade practice the Levenberg-Marquardt algorithm is the usual choice, sketched below.
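
    A minimal Levenberg-Marquardt-style sketch (an addition to the original post; real implementations such as Ceres Solver are far more elaborate, and `lambda` with its update factors is a hypothetical choice): the Gauss-Newton step is replaced by the damped normal equations $(J^T J + \lambda I)\,\Delta = J^T \mathbf{r}$, and $\lambda$ is adapted according to whether the step reduced the cost:

    ```cpp
    double lambda = 1e-3;  // damping parameter
    for (int iter = 0; iter < max_iter; ++iter) {
      VectorXd r = computeResiduals(x_data, y_data, theta);
      double cost = r.squaredNorm();
      MatrixXd J = computeJacobian(x_data, y_data, theta);
      Matrix3d JtJ = J.transpose() * J;
      Vector3d Jtr = J.transpose() * r;
      // Damped normal equations: (J^T J + lambda*I) delta = J^T r
      Vector3d delta = (JtJ + lambda * Matrix3d::Identity()).ldlt().solve(Jtr);
      double new_cost =
          computeResiduals(x_data, y_data, theta + delta).squaredNorm();
      if (new_cost < cost) {
        theta += delta;  // accept the step, trust the linearization more
        lambda *= 0.5;
      } else {
        lambda *= 2.0;   // reject the step, behave more like gradient descent
      }
      if (delta.norm() < 1e-10) break;  // converged
    }
    ```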
