Computing the F and H matrices and selecting the best model in Python

 

When solving for the homography matrix (H) or the essential matrix (E) in visual localization, threshold and confidence are two key parameters that directly affect the performance of the RANSAC (Random Sample Consensus) algorithm. Below I explain their units and roles in detail, and give recommendations for the case where the matched points are fairly accurate.

1. threshold

Role:

  • Geometric error threshold: the cutoff that decides whether a matched pair is an inlier or an outlier
  • For the homography H: usually the reprojection error (in pixels)
  • For the essential matrix E: usually the point-to-epipolar-line distance (in pixels)

Unit:

  • Pixels, measured in image coordinates
  • For example: if a point's reprojection error after mapping through H exceeds threshold, it is classified as an outlier (a sketch of this error follows below)

Physical meaning:

  • The upper bound on matching error the algorithm will tolerate
  • Smaller threshold → stricter geometric-consistency requirement
  • Larger threshold → more permissive acceptance criterion
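
As a rough illustration (this helper is not part of OpenCV; the names are made up), the quantity that threshold is compared against when fitting H is essentially this per-point pixel error:

import numpy as np

def homography_reproj_error(H, pt1, pt2):
    """Pixel distance between pt2 and pt1 mapped through H (illustrative)."""
    p = H @ np.array([pt1[0], pt1[1], 1.0])   # apply H in homogeneous coordinates
    p = p[:2] / p[2]                          # back to pixel coordinates
    return np.linalg.norm(p - np.asarray(pt2, dtype=float))

# A match is kept as an inlier only if this error stays below threshold:
# is_inlier = homography_reproj_error(H, pt1, pt2) < threshold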

 

2. confidence

Role:

  • Algorithm confidence level: the probability RANSAC aims for that its final inlier set is correct
  • For example: confidence=0.99 means a 99% chance that the inlier set found is correct and consistent

Unit:

  • A percentage, or a decimal between 0 and 1
  • A probability; it has no physical unit

Physical meaning:

  • Higher confidence → RANSAC needs more iterations to guarantee that probability of finding the correct model (see the sketch below)
  • Directly affects runtime: the higher the confidence, the more iterations
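
How confidence drives the iteration count follows from the standard RANSAC bound: with inlier ratio w and minimal sample size s (4 points for H, 5 for E with the five-point algorithm), drawing at least one all-inlier sample with probability p requires about N = log(1-p) / log(1-w^s) iterations. A minimal sketch:

import math

def ransac_iterations(p: float, w: float, s: int) -> int:
    """Iterations needed to draw one all-inlier minimal sample
    with probability p, given inlier ratio w and sample size s."""
    return math.ceil(math.log(1.0 - p) / math.log(1.0 - w ** s))

print(ransac_iterations(0.99, 0.7, 4))    # H, clean matches: 17 iterations
print(ransac_iterations(0.999, 0.5, 5))   # E, noisier matches: 218 iterations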

3. Parameter selection when the matches are accurate

When match quality is high (e.g., SIFT matches filtered with the ratio test / Lowe's threshold):

Choosing threshold:

For the homography H (common in visual localization):

Recommended range: 0.5 - 3.0 pixels
Typical value: 1.0 - 2.0 pixels

 
Rationale:
  • The reprojection error of high-quality matches usually stays within 1-2 pixels
  • Too small a threshold (e.g., 0.5) may reject good matches due to numerical precision
  • Too large a threshold (> 3.0) may let mismatches slip in

For the essential matrix E:

Recommended range: 0.5 - 2.0 pixels (point-to-epipolar-line distance)
Typical value: 1.0 pixel

Choosing confidence:

Recommended range: 0.95 - 0.999
Typical value: 0.99 (balances accuracy and speed)

Rationale:
  • With high-quality matches, the inlier ratio is usually high (> 70%)
  • A confidence of 0.99 is reliable enough in most cases
  • 0.999 significantly increases runtime for only a marginal gain in accuracy (a call sketch with these values follows below)
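
With these typical values, the corresponding OpenCV calls look as follows (a sketch; pts1/pts2 are assumed to be Nx2 float32 arrays of matched pixel coordinates and K a 3x3 intrinsics matrix):

import cv2

E, mask_E = cv2.findEssentialMat(
    pts1, pts2, K, method=cv2.RANSAC,
    prob=0.99,         # confidence
    threshold=1.0)     # point-to-epipolar-line distance, in pixels

H, mask_H = cv2.findHomography(
    pts1, pts2, method=cv2.RANSAC,
    ransacReprojThreshold=1.5,   # reprojection error, in pixels
    confidence=0.99)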

UAV / vehicle localization:

  • threshold: 1.5-2.5 pixels (to allow for motion blur)
  • confidence: 0.99
  • Rationale: platform vibration can shift matches slightly

AR/VR indoor localization:

  • threshold: 1.0-1.5 pixels
  • confidence: 0.995
  • Rationale: static scenes with high matching accuracy

Large-scale urban localization:

  • threshold: 2.0-3.0 pixels
  • confidence: 0.99
  • Rationale: large viewpoint changes make slight projective distortion acceptable (a preset table follows below)
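
These presets are easy to keep in a small lookup table (the names below are illustrative, not from any library); the full demo script that follows then puts threshold and confidence to work end to end:

SCENARIO_PARAMS = {
    "uav_vehicle":       {"threshold": 2.0, "confidence": 0.99},
    "ar_vr_indoor":      {"threshold": 1.2, "confidence": 0.995},
    "urban_large_scale": {"threshold": 2.5, "confidence": 0.99},
}

params = SCENARIO_PARAMS["uav_vehicle"]
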
import cv2
import numpy as np
from scipy.spatial.transform import Rotation as R
from typing import Tuple, List, Dict, Optional, Union
import time

# When the original image is resized, the intrinsics must be scaled to match (if the resize is uniform).
# MASt3R resizes first and then crops, so in that case map the matched points back to the original resolution before computing.
#    def _scale_intrinsics(self):
#         sx = self.working_size[0] / self.original_size[0]
#         sy = self.working_size[1] / self.original_size[1]
        
#         K = self.K_orig.copy()
#         K[0, 0] *= sx
#         K[1, 1] *= sy
#         K[0, 2] *= sx
#         K[1, 2] *= sy
#         return K



'''

Q1: How high should the confidence be?
A:
Real-time applications: 0.9-0.95
General applications: 0.99
High-precision applications: 0.999-0.9999

Q2: Is a higher confidence always better?
A: No. An excessively high confidence causes:
a sharp increase in the number of iterations
a large increase in runtime
possible convergence to a local optimum

Key principles:
    1. Clean data (high inlier ratio) → a lower confidence is acceptable
    2. Noisy data (low inlier ratio) → a higher confidence is needed
    3. If the inlier ratio falls below 30%, RANSAC may fail altogether

'''

class InitStatus:
    SUCCESS = "initialization succeeded"
    LOW_PARALLAX = "parallax too small"
    DEGENERATE = "F/H degenerate, cannot be distinguished"
    CHEIRALITY_FAIL = "positive-depth (cheirality) check failed"
    INSUFFICIENT_POINTS = "not enough matched points"

# Common threshold presets
THRESHOLD_VALUES = {
    "very_accurate": 0.1,    # very high precision, for exact measurement
    "accurate": 0.5,         # standard precision, for most cases
    "default": 1.0,          # OpenCV default
    "robust": 3.0,           # more robust, for noisy data
    "very_robust": 5.0,      # very robust, for low-quality images
    "loose": 10.0,           # loose threshold, for fast computation
}


class RelativePoseEstimator:
    """
    Relative pose estimator - corrected version
    """
    
    def __init__(self, 
                 K: np.ndarray,
                 use_orb: bool = True,
                 feature_threshold: int = 1000,
                 ransac_threshold: float = THRESHOLD_VALUES["accurate"],
                 verbose: bool = True):
        """
        Initialize the relative pose estimator
        """
        self.K = K
        self.use_orb = use_orb
        self.feature_threshold = feature_threshold
        self.ransac_threshold = ransac_threshold
        self.verbose = verbose
        self.K_inv = np.linalg.inv(K)
        
        # Feature detector
        if use_orb:
            self.detector = cv2.ORB_create(nfeatures=feature_threshold)
            self.matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        else:
            self.detector = cv2.SIFT_create(nfeatures=feature_threshold)
            self.matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
        
        self.log("Relative pose estimator initialized")
    
    def log(self, msg: str):
        """Verbose logging"""
        if self.verbose:
            print(msg)
    
    def extract_features(self, img1: np.ndarray, img2: np.ndarray) -> Tuple[List, List, List]:
        """
        Extract and match feature points from two images
        """
        if len(img1.shape) == 3:
            img1_gray = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
            img2_gray = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
        else:
            img1_gray = img1
            img2_gray = img2
        
        kp1, des1 = self.detector.detectAndCompute(img1_gray, None)
        kp2, des2 = self.detector.detectAndCompute(img2_gray, None)
        
        if des1 is None or des2 is None or len(des1) < 10 or len(des2) < 10:
            raise ValueError("not enough feature points")
        
        matches = self.matcher.match(des1, des2)
        matches = sorted(matches, key=lambda x: x.distance)
        
        self.log(f"Features extracted: image 1: {len(kp1)}, image 2: {len(kp2)}, matches: {len(matches)}")
        
        return kp1, kp2, matches
    
    def matches_to_points(self, kp1: List, kp2: List, matches: List) -> Tuple[np.ndarray, np.ndarray]:
        """
        Convert matches to point coordinate arrays
        """
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
        return pts1, pts2
    
    def compute_essential_matrix(self, pts1: np.ndarray, pts2: np.ndarray) -> Tuple[Optional[np.ndarray], Optional[np.ndarray], int]:
        """
        Estimate the essential matrix
        """
        if len(pts1) < 8:
            self.log(f"Estimating E needs at least 5 points (8 required here for stability); only {len(pts1)} available")
            return None, None, 0
        
        # Guard against OpenCV version/interface differences
        try:
            E, mask_E = cv2.findEssentialMat(
                pts1, pts2, self.K,
                method=cv2.RANSAC,
                prob=0.999,
                threshold=self.ransac_threshold
            )
        except Exception as e:
            self.log(f"Essential matrix estimation failed: {e}")
            return None, None, 0
        
        if E is None or mask_E is None:
            self.log("Essential matrix estimation failed")
            return None, None, 0
        
        mask_E = mask_E.ravel().astype(bool)
        num_inliers_E = np.sum(mask_E)
        
        self.log(f"E estimated, inliers: {num_inliers_E}/{len(pts1)}")
        
        return E, mask_E, num_inliers_E
    
    def compute_homography_matrix(self, pts1: np.ndarray, pts2: np.ndarray) -> Tuple[Optional[np.ndarray], Optional[np.ndarray], int]:
        """
        Estimate the homography matrix
        """
        if len(pts1) < 4:
            self.log(f"Estimating H needs at least 4 points; only {len(pts1)} available")
            return None, None, 0
        
        H, mask_H = cv2.findHomography(
            pts1, pts2,
            method=cv2.RANSAC,
            ransacReprojThreshold=self.ransac_threshold
        )
        
        if H is None or mask_H is None:
            self.log("Homography estimation failed")
            return None, None, 0
        
        mask_H = mask_H.ravel().astype(bool)
        num_inliers_H = np.sum(mask_H)
        
        self.log(f"H estimated, inliers: {num_inliers_H}/{len(pts1)}")
        
        return H, mask_H, num_inliers_H
    
    def decompose_essential_matrix(self, E: np.ndarray, pts1: np.ndarray, pts2: np.ndarray, mask: np.ndarray) -> Tuple[Optional[np.ndarray], Optional[np.ndarray], float]:
        """
        Recover pose from the essential matrix - corrected version
        """
        if E is None or len(pts1) < 5:
            return None, None, 0.0
        
        pts1_masked = pts1[mask]
        pts2_masked = pts2[mask]
        
        # Handle recoverPose interface differences across OpenCV versions
        try:
            # Standard interface: returns (retval, R, t, mask)
            num_inliers, R_E, t_E, mask_pose = cv2.recoverPose(
                E, pts1_masked, pts2_masked, self.K
            )
        except Exception as e1:
            self.log(f"recoverPose standard interface failed: {e1}")
            try:
                # With distanceThresh, recoverPose also returns the triangulated
                # points as a fifth value, so unpack five results here
                num_inliers, R_E, t_E, mask_pose, _ = cv2.recoverPose(
                    E, pts1_masked, pts2_masked, self.K,
                    distanceThresh=50
                )
            except Exception as e2:
                self.log(f"recoverPose fallback also failed: {e2}")
                return None, None, 0.0
        
        if mask_pose is None:
            num_positive_depth = 0
        else:
            num_positive_depth = np.sum(mask_pose)
        
        # Score this solution
        score_E = self.compute_essential_score(E, R_E, t_E, pts1_masked, pts2_masked)
        
        self.log(f"E decomposition: positive-depth points: {num_positive_depth}/{len(pts1_masked)}")
        self.log(f"E-matrix score: {score_E:.4f}")
        
        return R_E, t_E, score_E
    
    def decompose_homography_matrix(self, H: np.ndarray, pts1: np.ndarray, pts2: np.ndarray, mask: np.ndarray) -> Tuple[Optional[np.ndarray], Optional[np.ndarray], float]:
        """
        Recover pose from the homography matrix
        """
        if H is None or len(pts1) < 5:
            return None, None, 0.0
        
        pts1_masked = pts1[mask]
        pts2_masked = pts2[mask]
        
        try:
            # decomposeHomographyMat takes only H and K and returns every
            # candidate (R, t, n) solution
            num_solutions, rotations, translations, normals = cv2.decomposeHomographyMat(
                H, self.K
            )
        except Exception as e:
            self.log(f"Failed to decompose H: {e}")
            return None, None, 0.0
        
        if num_solutions == 0:
            return None, None, 0.0
        
        # Select the best solution
        '''
        The plane normal has two possible signs,
        and the depths have two possible signs,
        so mathematically all 4 solutions satisfy the homography constraint.
        '''
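        # Disambiguation used below (a heuristic of this implementation, not
        # a standard reference test): score each candidate by cheirality
        # (positive triangulated depths), homography reprojection error, and
        # how front-facing the recovered plane normal is, and keep the best.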
        best_idx = -1
        best_score = -1.0
        best_R = None
        best_t = None
        
        for i in range(num_solutions):
            R_i = rotations[i]
            t_i = translations[i]
            n_i = normals[i]
            
            score_i = self.compute_homography_score(R_i, t_i, n_i, pts1_masked, pts2_masked)
            
            if score_i > best_score:
                best_score = score_i
                best_idx = i
                best_R = R_i
                best_t = t_i
        
        self.log(f"H decomposition: {num_solutions} solutions found, selected solution {best_idx+1}")
        self.log(f"H-matrix score: {best_score:.4f}")
        
        return best_R, best_t, best_score
    
    def compute_essential_score(self, E: np.ndarray, R: np.ndarray, t: np.ndarray, 
                               pts1: np.ndarray, pts2: np.ndarray) -> float:
        """
        Score an E-matrix solution
        """
        if len(pts1) < 5:
            return 0.0
        
        n_points = len(pts1)
        
        # 1. Epipolar constraint error
        try:
            pts1_norm = cv2.undistortPoints(pts1.reshape(-1, 1, 2), self.K, None).reshape(-1, 2)
            pts2_norm = cv2.undistortPoints(pts2.reshape(-1, 1, 2), self.K, None).reshape(-1, 2)
        except Exception:
            # If undistortPoints fails, fall back to approximate normalization
            pts1_norm = (pts1 - self.K[0:2, 2]) / np.diag(self.K)[0:2]
            pts2_norm = (pts2 - self.K[0:2, 2]) / np.diag(self.K)[0:2]
        
        epipolar_errors = []
        for i in range(n_points):
            x1 = np.array([pts1_norm[i, 0], pts1_norm[i, 1], 1])
            x2 = np.array([pts2_norm[i, 0], pts2_norm[i, 1], 1])
            error = abs(np.dot(x2, np.dot(E, x1)))
            epipolar_errors.append(error)
        
        mean_epipolar_error = np.mean(epipolar_errors) if epipolar_errors else 1.0
        epipolar_score = 1.0 / (mean_epipolar_error + 1e-6)
        
        # 2. Triangulation and reprojection error
        P1 = self.K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = self.K @ np.hstack([R, t])
        
        try:
            points_4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
            points_3d = points_4d[:3] / points_4d[3]
            
            # Reprojection errors in both views
            pts1_reproj = P1 @ np.vstack([points_3d, np.ones((1, n_points))])
            pts2_reproj = P2 @ np.vstack([points_3d, np.ones((1, n_points))])
            
            pts1_reproj = pts1_reproj[:2] / pts1_reproj[2]
            pts2_reproj = pts2_reproj[:2] / pts2_reproj[2]
            
            reproj_error1 = np.mean(np.linalg.norm(pts1 - pts1_reproj.T, axis=1))
            reproj_error2 = np.mean(np.linalg.norm(pts2 - pts2_reproj.T, axis=1))
            mean_reproj_error = (reproj_error1 + reproj_error2) / 2
        except Exception:
            mean_reproj_error = 10.0  # assume a large error if triangulation fails
        
        reproj_score = 1.0 / (mean_reproj_error + 1e-6)
        
        # 3. Positive-depth (cheirality) check
        try:
            depths1 = points_3d[2, :]
            points_3d_cam2 = R @ points_3d + t
            depths2 = points_3d_cam2[2, :]
            positive_ratio = np.sum((depths1 > 0) & (depths2 > 0)) / n_points
        except Exception:
            positive_ratio = 0.5  # fallback value
        
        # Combined score
        total_score = (
            positive_ratio * 0.4 +
            epipolar_score * 0.3 +
            reproj_score * 0.3
        )
        
        return total_score
    
    def compute_homography_score(self, R: np.ndarray, t: np.ndarray, n: np.ndarray,
                                pts1: np.ndarray, pts2: np.ndarray, d_prior: float = 5.0) -> float:
        """
        Score an H-matrix solution
        """
        n_points = len(pts1)
        
        # 1. Positive-depth check
        P1 = self.K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = self.K @ np.hstack([R, t])
        
        try:
            points_4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
            points_3d = points_4d[:3] / points_4d[3]
            
            depths1 = points_3d[2, :]
            points_3d_cam2 = R @ points_3d + t
            depths2 = points_3d_cam2[2, :]
            
            positive_mask = (depths1 > 0) & (depths2 > 0)
            positive_ratio = np.sum(positive_mask) / n_points
        except Exception:
            positive_ratio = 0.5
        
        # 2. Reprojection error (through the homography)
        try:
            H_estimated = self.compute_homography_from_pose(R, t, n, d_prior)
            
            pts1_homo = np.hstack([pts1, np.ones((n_points, 1))])
            pts2_proj_homo = (H_estimated @ pts1_homo.T).T
            pts2_proj = pts2_proj_homo[:, :2] / pts2_proj_homo[:, 2:3]
            
            reproj_errors = np.linalg.norm(pts2 - pts2_proj, axis=1)
            mean_reproj_error = np.mean(reproj_errors)
            reproj_score = 1.0 / (mean_reproj_error + 1e-6)
        except Exception:
            reproj_score = 0.0
        
        # 3. Plausibility of the plane normal
        try:
            camera_forward = np.array([0, 0, 1])
            n_dot = np.abs(np.dot(n.flatten(), camera_forward))
        except Exception:
            n_dot = 0.5
        
        # Combined score
        total_score = (
            positive_ratio * 0.4 +
            reproj_score * 0.4 +
            n_dot * 0.2
        )
        
        return total_score
    
    def compute_homography_from_pose(self, R: np.ndarray, t: np.ndarray, n: np.ndarray, d: float = 1.0) -> np.ndarray:
        """Compute a homography from pose: H = K (R - t n^T / d) K^(-1)"""
        H = self.K @ (R - t @ n.T / d) @ self.K_inv
        return H / H[2, 2]
    
    def select_best_method(self, 
                          R_E: Optional[np.ndarray], t_E: Optional[np.ndarray], score_E: float,
                          R_H: Optional[np.ndarray], t_H: Optional[np.ndarray], score_H: float,
                          num_inliers_E: int, num_inliers_H: int) -> Tuple[str, Optional[np.ndarray], Optional[np.ndarray]]:
        """
        Select the better method
        """
        self.log("\n" + "="*60)
        self.log("Method selection")
        self.log("="*60)
        
        self.log(f"E-matrix score: {score_E:.4f}, inliers: {num_inliers_E}")
        self.log(f"H-matrix score: {score_H:.4f}, inliers: {num_inliers_H}")
        
        E_valid = R_E is not None and t_E is not None and score_E > 0
        H_valid = R_H is not None and t_H is not None and score_H > 0
        
        if not E_valid and not H_valid:
            self.log("Neither method is valid")
            return 'none', None, None
        
        if not E_valid:
            self.log("Selecting the H-matrix method")
            return 'H', R_H, t_H
        
        if not H_valid:
            self.log("Selecting the E-matrix method")
            return 'E', R_E, t_E
        
        # Both methods are valid; choose by score
        if score_E > score_H:
            self.log(f"Selecting the E-matrix method (higher score: {score_E:.4f} > {score_H:.4f})")
            return 'E', R_E, t_E
        else:
            self.log(f"Selecting the H-matrix method (higher score: {score_H:.4f} > {score_E:.4f})")
            return 'H', R_H, t_H
    
    def estimate_from_points(self, pts1: np.ndarray, pts2: np.ndarray) -> Tuple[Optional[np.ndarray], Optional[np.ndarray], Dict]:
        """
        Estimate relative pose from matched points
        """
        self.log("\n" + "="*60)
        self.log("Estimating relative pose from matched points")
        self.log("="*60)
        
        start_time = time.time()
        
        # Convert to numpy arrays
        pts1 = np.array(pts1, dtype=np.float32)
        pts2 = np.array(pts2, dtype=np.float32)
        
        # Ensure the arrays have shape (N, 2)
        if pts1.ndim == 1:
            pts1 = pts1.reshape(-1, 2)
        if pts2.ndim == 1:
            pts2 = pts2.reshape(-1, 2)


        if len(pts1) < 8:
            self.log(f"Not enough points: {len(pts1)} < 8")
            return None, None, {'error': 'insufficient_points'}
        
        # Estimate E
        E, mask_E, num_inliers_E = self.compute_essential_matrix(pts1, pts2)
        
        # Estimate H
        H, mask_H, num_inliers_H = self.compute_homography_matrix(pts1, pts2)
        
        # Decompose E
        R_E, t_E, score_E = None, None, 0.0
        if E is not None and mask_E is not None:
            R_E, t_E, score_E = self.decompose_essential_matrix(E, pts1, pts2, mask_E)
        
        # Decompose H
        R_H, t_H, score_H = None, None, 0.0
        if H is not None and mask_H is not None:
            R_H, t_H, score_H = self.decompose_homography_matrix(H, pts1, pts2, mask_H)
        
        # Select the better method
        method, R_best, t_best = self.select_best_method(
            R_E, t_E, score_E, R_H, t_H, score_H,
            num_inliers_E, num_inliers_H
        )
        
        processing_time = time.time() - start_time
        
        # Build the result info
        info = {
            'method': method,
            'score_E': float(score_E),
            'score_H': float(score_H),
            'num_inliers_E': int(num_inliers_E),
            'num_inliers_H': int(num_inliers_H),
            'total_points': len(pts1),
            'processing_time': processing_time,
            'R_E': R_E.tolist() if R_E is not None else None,
            't_E': t_E.tolist() if t_E is not None else None,
            'R_H': R_H.tolist() if R_H is not None else None,
            't_H': t_H.tolist() if t_H is not None else None
        }
        
        if R_best is not None and t_best is not None:
            try:
                euler_angles = R.from_matrix(R_best).as_euler('xyz', degrees=True)
                info['euler_angles'] = euler_angles.tolist()
                info['translation_norm'] = float(np.linalg.norm(t_best))
                
                self.log(f"\nFinal result:")
                self.log(f"Selected method: {method}")
                self.log(f"Euler angles (deg): [{euler_angles[0]:.2f}, {euler_angles[1]:.2f}, {euler_angles[2]:.2f}]")
                self.log(f"Translation vector: {t_best.flatten()}")
                self.log(f"Processing time: {processing_time:.3f} s")
            except Exception:
                info['euler_angles'] = None
        
        return R_best, t_best, info
    
    def estimate_from_images(self, img1: np.ndarray, img2: np.ndarray) -> Tuple[Optional[np.ndarray], Optional[np.ndarray], Dict]:
        """
        Estimate relative pose from two images
        """
        self.log("\n" + "="*60)
        self.log("Estimating relative pose from images")
        self.log("="*60)
        
        try:
            kp1, kp2, matches = self.extract_features(img1, img2)
            pts1, pts2 = self.matches_to_points(kp1, kp2, matches)
            
            R_best, t_best, info = self.estimate_from_points(pts1, pts2)
            
            info['num_keypoints1'] = len(kp1)
            info['num_keypoints2'] = len(kp2)
            info['num_matches'] = len(matches)
            
            return R_best, t_best, info
            
        except Exception as e:
            self.log(f"Error while processing the images: {e}")
            return None, None, {'error': str(e)}
    
    def estimate_pose(self, 
                     input1: Union[np.ndarray, Tuple[np.ndarray, np.ndarray]],
                     input2: Optional[np.ndarray] = None) -> Tuple[Optional[np.ndarray], Optional[np.ndarray], Dict]:
        """
        Generic estimation entry point
        """
        if input2 is not None:
            return self.estimate_from_images(input1, input2)
        elif isinstance(input1, tuple) and len(input1) == 2:
            pts1, pts2 = input1
            return self.estimate_from_points(pts1, pts2)
        else:
            raise ValueError("invalid input format")
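
# An alternative selection rule (not what select_best_method above does):
# ORB-SLAM computes symmetric transfer error scores S_H and S_F for the
# homography and the fundamental matrix and picks H when
# R_H = S_H / (S_H + S_F) > 0.45. A minimal sketch of just that ratio test,
# assuming the two scores have already been computed:
def orbslam_style_choice(score_H: float, score_F: float, ratio: float = 0.45) -> str:
    """Return 'H' for planar / low-parallax scenes, else 'F' (heuristic)."""
    r_h = score_H / (score_H + score_F + 1e-12)   # guard against division by zero
    return 'H' if r_h > ratio else 'F'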

# Simplified test function
def simple_test():
    """Simple test on synthetic points"""
    
    print("="*60)
    print("Relative pose estimator test - simplified")
    print("="*60)
    
    # Camera intrinsics
    K = np.array([[800, 0, 320],
                  [0, 800, 240],
                  [0, 0, 1]], dtype=np.float32)
    
    # Create the estimator
    estimator = RelativePoseEstimator(K, verbose=True)
    
    # Generate test data
    np.random.seed(42)
    n_points = 50
    
    # Planar scene points (z = 0 plane)
    X_plane = np.random.rand(n_points, 2) * 4 - 2
    X_plane = np.hstack([X_plane, np.zeros((n_points, 1))])
    
    # Camera poses
    d = 5.0
    R1 = np.eye(3)
    t1 = np.array([0, 0, d])
    
    # Small rotation and translation
    angle = 0.1
    axis = np.array([0, 0, 1])  # rotation about the Z axis
    R2 = R.from_rotvec(axis * angle).as_matrix()
    t2 = np.array([0.2, 0.1, 0])
    
    # Project the 3D points into both views
    P1 = K @ np.hstack([R1, t1.reshape(3, 1)])
    P2 = K @ np.hstack([R2, (t1 + t2).reshape(3, 1)])
    
    X_homo = np.hstack([X_plane, np.ones((n_points, 1))])
    pts1_homo = (P1 @ X_homo.T).T
    pts2_homo = (P2 @ X_homo.T).T
    
    pts1 = pts1_homo[:, :2] / pts1_homo[:, 2:3]
    pts2 = pts2_homo[:, :2] / pts2_homo[:, 2:3]
    
    # Add noise
    noise_scale = 0.5
    pts1 += np.random.randn(*pts1.shape) * noise_scale
    pts2 += np.random.randn(*pts2.shape) * noise_scale
    
    print(f"Generated test data: {n_points} points")
    print(f"Ground-truth rotation:\n{R2}")
    print(f"Ground-truth translation: {t2}")
    
    # Estimate the pose
    R_est, t_est, info = estimator.estimate_from_points(pts1, pts2)
    
    if R_est is not None and t_est is not None:
        # Compute errors
        R_error = np.linalg.norm(R_est - R2)
        
        t_est_norm = t_est / np.linalg.norm(t_est)
        t2_norm = t2 / np.linalg.norm(t2)
        t_dir_error = np.arccos(np.clip(np.dot(t_est_norm.flatten(), t2_norm), -1.0, 1.0))
        
        print(f"\nError analysis:")
        print(f"Rotation error: {R_error:.6f}")
        print(f"Translation direction error: {np.degrees(t_dir_error):.2f}°")
        print(f"Selected method: {info['method']}")
    
    return estimator, R_est, t_est, info



def test_with_real_images():
    """Test with real images (if available)"""
    
    print("="*60)
    print("Test with real images")
    print("="*60)
    
    # Real images could be loaded here, e.g.:
    # img1 = cv2.imread('image1.jpg')
    # img2 = cv2.imread('image2.jpg')
    
    # No real images are available here, so create synthetic ones
    height, width = 480, 640
    img1 = np.random.randint(0, 255, (height, width, 3), dtype=np.uint8)
    img2 = np.random.randint(0, 255, (height, width, 3), dtype=np.uint8)
    
    # Draw a few synthetic features
    cv2.circle(img1, (320, 240), 10, (255, 0, 0), -1)
    cv2.circle(img2, (330, 245), 10, (255, 0, 0), -1)
    
    cv2.circle(img1, (200, 200), 8, (0, 255, 0), -1)
    cv2.circle(img2, (210, 205), 8, (0, 255, 0), -1)
    
    cv2.circle(img1, (400, 300), 6, (0, 0, 255), -1)
    cv2.circle(img2, (410, 305), 6, (0, 0, 255), -1)
    
    # Camera intrinsics
    K = np.array([[500, 0, width/2],
                  [0, 500, height/2],
                  [0, 0, 1]])
    
    # Create the estimator
    estimator = RelativePoseEstimator(K, verbose=True)
    
    # Estimate the pose
    R, t, info = estimator.estimate_from_images(img1, img2)
    
    if R is not None:
        print(f"\nEstimation succeeded!")
        print(f"Method used: {info['method']}")
        print(f"Inlier counts: E={info['num_inliers_E']}, H={info['num_inliers_H']}")
    
    return estimator, R, t, info

# Quick test
if __name__ == "__main__":

    # Run the simplified test
    try:
        print("Testing with point sets")
        print("="*60)

        estimator, R_est, t_est, info = simple_test()
        
        print("\n" + "="*60)
        print("Testing with (synthetic) real images")
        print("="*60)
        
        # Test with the synthetic images
        estimator2, R_est2, t_est2, info2 = test_with_real_images()
        
    except Exception as e:
        print(f"Error during testing: {e}")
        import traceback
        traceback.print_exc()
    
    print("\n" + "="*60)
    print("Testing complete!")
    print("="*60)

  
