Comprehensive Practice in Multi-Source Heterogeneous Data Collection and Fusion
| Course this project belongs to | 2025 Data Collection and Fusion Technology |
|---|---|
| Team name & project summary | Team name: 好运来. Project need: an intelligent exercise-assistance application for user-uploaded workout videos (centered on the pull-up), addressing the subjectivity and delayed feedback of traditional movement assessment by providing objective motion analysis and improvement advice. Project goal: analyze and score the movements in uploaded workout videos, give personalized improvement suggestions, and provide a complete user progress record and feedback system to help users train scientifically. Technical route: a front/back-end separated architecture based on Vue3 + Python + openGauss; the front end uses Vue3 for the user interface and visualization, the back end uses Python with MediaPipe for the pose-analysis algorithms, and the database uses OceanBase to store user data and workout records, forming a pull-up motion-analysis system |
| Team member IDs | 102302148 (谢文杰), 102302149 (赖翊煊), 102302150 (蔡骏), 102302151 (薛雨晨), 102302108 (赵雅萱), 102302111 (海米沙), 102302139 (尚子骐), 022304105 (叶骋恺) |
| Project goal | From uploaded workout videos, apply human pose estimation (dual-view collaborative analysis: the front view checks grip symmetry and body stability, the side view checks range of motion and torso angle) to automatically detect body keypoints, segment movement cycles, flag compensation violations, and generate quantitative scores, visual reports, and personalized improvement advice; also build a user progress-record and feedback system to store user data and workout records. The end product is a low-cost, high-accuracy automated assessment tool for personal training, physical education, and similar scenarios, helping users train scientifically, avoid injury, and improve results |
| References | [1] Zhou P., Cao J. J., Zhang X. Y., et al. Learning to Score Figure Skating Sport Videos. IEEE Transactions on Circuits and Systems for Video Technology, 2019. arXiv:1802.02774. [2] Toshev A., Szegedy C. DeepPose: Human Pose Estimation via Deep Neural Networks. CVPR 2014. |
| Gitee link (code is consolidated; members' code is not published separately) | Front-end and back-end code: https://gitee.com/wsxxs233/SoftWare |
1. Project Background
With the spread of nationwide fitness and gym culture, bodyweight exercises such as the pull-up have become an important benchmark of basic strength and overall fitness thanks to their convenience and effectiveness, and are widely used in school fitness tests, military training, and everyday workouts. Traditional movement assessment, however, relies heavily on a coach's visual observation and subjective experience, and suffers from inconsistent standards, delayed feedback, and poor quantifiability. Without professional guidance, trainees often fail to notice subtle flaws in their movement patterns, such as kipping, swinging, or insufficient range of motion; this not only hurts training results but can lead to injury over time. Turning artificial intelligence and computer vision into an "AI coach" within everyone's reach, delivering objective, immediate, and precise feedback on technique, has become a pressing need for raising the level of scientific training.
2. Project Overview
This project develops a computer-vision-based system for intelligent pull-up motion analysis and assessment. From a video uploaded by the trainee, the system applies human pose estimation to automatically detect and track body keypoints. To handle the complexity of the pull-up, we built a dual-view collaborative analysis framework: the front view focuses on grip symmetry, body stability, and left-right balance, verifying the basic form; the side view evaluates range of motion, torso angle, and force-generation pattern, judging movement amplitude and efficiency. With multi-dimensional quantitative metrics, the system automatically segments movement cycles, flags compensation violations, and produces intuitive visual reports with improvement advice. The goal is a low-cost, high-accuracy automated assessment tool that gives individual trainees, physical-education programs, and professional institutions a data-driven aid for scientific training.
3. Division of Labor
蔡骏: front-end features for the user interface.
赵雅萱: administrator system.
薛雨晨: deployment to the server, and writing and revising the front-end/back-end interfaces.
海米沙: prototyping in 墨刀 (Modao), recording and reporting market-research results and requirements analysis, designing the project logo and product name, software testing.
谢文杰: front-view scoring criteria; building the knowledge base.
赖翊煊: side-view scoring criteria; connecting the AI via API.
叶骋恺: database design and creation.
尚子骐: crawling relevant videos; software testing.
4. Individual Contribution
4.1 Initialization Phase
(1) Tool initialization
Load the MediaPipe pose model and configure its parameters (detection confidence, tracking confidence, etc.) for subsequent keypoint detection.
(2) Keypoint definition
Pre-define the body keypoints to track (shoulders, elbows, wrists, hips, knees, ankles, etc.) and their connections (e.g. the wrist-elbow-shoulder chain for the upper limbs, the hip-knee-ankle chain for the lower limbs), providing the basis for skeleton drawing and motion analysis.
(3) Filter preparation
Initialize the LandmarkSmoother with its smoothing method, smoothing factor, and other parameters, preparing for frame-level keypoint filtering.
```python
# Imports used by the snippets in this post
import cv2
import numpy as np
import pandas as pd
import mediapipe as mp
from scipy import signal
from tqdm import tqdm


class AdvancedPullUpBenchmark:
    def __init__(self, smooth_method='double_exponential', smooth_factor=0.7):
        """Initialize the analyzer, including the smoothing parameters."""
        # MediaPipe initialization
        self.mp_pose = mp.solutions.pose
        self.mp_drawing = mp.solutions.drawing_utils
        try:
            # Try the full parameter set first
            self.pose = self.mp_pose.Pose(
                static_image_mode=False,
                model_complexity=2,
                smooth_landmarks=True,
                enable_segmentation=False,
                min_detection_confidence=0.8,
                min_tracking_confidence=0.8
            )
        except TypeError as e:
            # Fall back to a minimal parameter set if the installed
            # MediaPipe version does not accept all arguments
            print(f"Falling back to minimal parameters: {e}")
            self.pose = self.mp_pose.Pose(
                static_image_mode=False,
                model_complexity=1,
                smooth_landmarks=True,
                min_detection_confidence=0.8,
                min_tracking_confidence=0.8
            )
        # Landmark indices to track
        self.LANDMARK_INDICES = {
            'LEFT_SHOULDER': 11, 'RIGHT_SHOULDER': 12,
            'LEFT_ELBOW': 13, 'RIGHT_ELBOW': 14,
            'LEFT_WRIST': 15, 'RIGHT_WRIST': 16,
            'LEFT_HIP': 23, 'RIGHT_HIP': 24,
            'LEFT_KNEE': 25, 'RIGHT_KNEE': 26
        }
        self.BENCHMARK_POINTS = [0, 25, 50, 75, 100]
        # Smoother initialization
        self.landmark_smoother = self.LandmarkSmoother(
            smooth_method=smooth_method,
            smoothing_factor=smooth_factor,
            filter_window=5
        )
```
4.2 Video Processing and Keypoint Extraction
(1) Real-time keypoint filtering
To mitigate the effects of occlusion and noise in the video, the LandmarkSmoother class implements double-exponential smoothing and moving-average filtering, which reduce keypoint jitter and filter out random noise while preserving the motion trend.
```python
# ===== Filtering step 2: LandmarkSmoother (an inner class of AdvancedPullUpBenchmark) =====
class LandmarkSmoother:
    """Real-time, frame-level smoothing of MediaPipe landmarks."""

    def __init__(self, smooth_method='double_exponential',
                 smoothing_factor=0.7,
                 filter_window=5):
        self.method = smooth_method               # 'double_exponential' or 'moving_average'
        self.smoothing_factor = smoothing_factor  # double-exponential smoothing factor
        self.window_size = filter_window          # moving-average window size
        # History buffers
        self.history = []
        self.smoothed_history = []

    def smooth_frame(self, landmarks):
        """Smooth a single frame's landmarks (core filtering logic)."""
        if landmarks is None:
            return None
        # Extract the current frame's landmark coordinates
        current_points = self._extract_points(landmarks)
        self.history.append(current_points)
        # Apply the configured smoothing method
        if self.method == 'double_exponential':
            smoothed = self._double_exponential_smoothing(current_points)
        elif self.method == 'moving_average':
            smoothed = self._moving_average_smoothing(current_points)
        else:
            smoothed = current_points
        self.smoothed_history.append(smoothed)
        return self._create_landmarks(smoothed)

    def _double_exponential_smoothing(self, current_points):
        """Double exponential smoothing: damps jitter while keeping motion trends."""
        if len(self.smoothed_history) == 0:
            return current_points
        last_smoothed = self.smoothed_history[-1]
        smoothed = {}
        for i, (x, y, z, v) in current_points.items():
            if i in last_smoothed:
                # Position smoothing
                s_x = self.smoothing_factor * x + (1 - self.smoothing_factor) * last_smoothed[i][0]
                s_y = self.smoothing_factor * y + (1 - self.smoothing_factor) * last_smoothed[i][1]
                s_z = self.smoothing_factor * z + (1 - self.smoothing_factor) * last_smoothed[i][2]
                # Trend smoothing (tracks velocity changes)
                if len(self.smoothed_history) > 1:
                    prev_smoothed = self.smoothed_history[-2]
                    if i in prev_smoothed:  # guard against a landmark missing from history
                        s_x += self.smoothing_factor * (last_smoothed[i][0] - prev_smoothed[i][0])
                        s_y += self.smoothing_factor * (last_smoothed[i][1] - prev_smoothed[i][1])
                        s_z += self.smoothing_factor * (last_smoothed[i][2] - prev_smoothed[i][2])
            else:
                s_x, s_y, s_z = x, y, z
            # Bug fix: store the smoothed z (the original stored the raw z here)
            smoothed[i] = (s_x, s_y, s_z, v)
        return smoothed

    def _moving_average_smoothing(self, current_points):
        """Moving average: averages over a history window to filter random noise."""
        if len(self.history) < 2:
            return current_points
        # Take the most recent window_size frames
        window = self.history[-self.window_size:]
        smoothed = {}
        for i in current_points.keys():
            # Collect visible (visibility > 0.5) points inside the window
            points_in_window = [frame[i] for frame in window
                                if i in frame and len(frame[i]) >= 4 and frame[i][3] > 0.5]
            valid_frames = len(points_in_window)
            # Use the average as the filtered result
            if valid_frames > 0:
                avg_x = sum(p[0] for p in points_in_window) / valid_frames
                avg_y = sum(p[1] for p in points_in_window) / valid_frames
                avg_z = sum(p[2] for p in points_in_window) / valid_frames
                smoothed[i] = (avg_x, avg_y, avg_z, current_points[i][3])
            else:
                smoothed[i] = current_points[i]
        return smoothed

    def _extract_points(self, landmarks):
        """Extract landmark coordinates into a dict (prep for filtering)."""
        points = {}
        for idx, landmark in enumerate(landmarks.landmark):
            points[idx] = (landmark.x, landmark.y, landmark.z, landmark.visibility)
        return points

    def _create_landmarks(self, points_dict):
        """Convert filtered coordinates back into a landmark-list-like object."""
        class SimpleLandmark:
            def __init__(self, x, y, z, visibility):
                self.x = x
                self.y = y
                self.z = z
                self.visibility = visibility

        class SimpleLandmarkList:
            def __init__(self):
                self.landmark = []

        landmark_list = SimpleLandmarkList()
        for i in sorted(points_dict.keys()):
            if len(points_dict[i]) >= 4:
                x, y, z, visibility = points_dict[i]
                landmark_list.landmark.append(SimpleLandmark(x, y, z, visibility))
        return landmark_list

    def reset(self):
        """Clear history before processing a new video."""
        self.history = []
        self.smoothed_history = []
```
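To make the update rule concrete, here is a minimal standalone sketch of the same double-exponential (position + trend) update on a 1-D series; the function name and the toy data are illustrative, not part of the project code:

```python
def double_exp_smooth(series, alpha=0.7):
    """Holt-style double exponential smoothing: the same position + trend
    update that LandmarkSmoother applies per landmark coordinate."""
    smoothed = []
    for x in series:
        if not smoothed:                      # first sample: nothing to smooth against
            smoothed.append(x)
            continue
        s = alpha * x + (1 - alpha) * smoothed[-1]      # position term
        if len(smoothed) > 1:                 # trend term needs two history points
            s += alpha * (smoothed[-1] - smoothed[-2])
        smoothed.append(s)
    return smoothed

# Alternating jitter between 0 and 1 is pulled toward a steadier trace
out = double_exp_smooth([0.0, 1.0, 0.0, 1.0, 0.0, 1.0], alpha=0.7)
print(out)
```

With alpha = 0.7 the first few outputs settle around 0.7 instead of swinging over the full 0-1 range, which is exactly the jitter-damping effect wanted for landmark coordinates.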
(2) Frame-by-frame processing and keypoint extraction (invoking the filters)
Open the input video, read its basic properties, and initialize a video writer; then process the video frame by frame, extract keypoints, draw the custom skeleton, and optionally produce an annotated visualization video.
```python
def extract_comprehensive_landmarks(self, video_path, output_video_path=None, enable_smoothing=True):
    """Extract landmark data per frame and optionally write an annotated video."""
    cap = cv2.VideoCapture(video_path)
    if not cap.isOpened():
        return None
    fps = cap.get(cv2.CAP_PROP_FPS)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    total_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    # Video writer
    out = None
    if output_video_path:
        fourcc = cv2.VideoWriter_fourcc(*'mp4v')
        out = cv2.VideoWriter(output_video_path, fourcc, fps, (width, height))
        print(f"Writing visualization video: {output_video_path}")
    # Custom skeleton connections
    TORSO_CONNECTIONS = [
        (15, 13), (16, 14), (13, 11), (14, 12),
        (11, 12), (11, 23), (12, 24), (23, 24),
        (23, 25), (24, 26), (25, 27), (26, 28)
    ]
    landmarks_data = []
    # Filtering step 3: reset the smoother so a previous video's history cannot leak in
    if enable_smoothing:
        self.landmark_smoother.reset()
    with tqdm(total=total_frames, desc="Extracting landmarks") as pbar:
        for frame_count in range(total_frames):
            success, frame = cap.read()
            if not success:
                break
            display_frame = frame.copy()
            frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            results = self.pose.process(frame_rgb)
            frame_data = {
                'frame': frame_count,
                'timestamp': frame_count / fps
            }
            if results.pose_landmarks:
                # Filtering step 4: frame-level smoothing of the raw landmarks
                landmarks = (self.landmark_smoother.smooth_frame(results.pose_landmarks)
                             if enable_smoothing else results.pose_landmarks)
                self._draw_custom_skeleton(display_frame, landmarks, TORSO_CONNECTIONS, width, height)
                frame_data.update(self._calculate_grip_metrics(landmarks))
                frame_data.update(self._calculate_height_metrics(landmarks))
                frame_data.update(self._calculate_torso_angle(landmarks))
            else:
                # No detection: mark the metrics as missing
                frame_data.update(self._get_nan_metrics())
            # Write the (annotated or raw) frame to the output video
            if out:
                out.write(display_frame)
            landmarks_data.append(frame_data)
            pbar.update(1)
    cap.release()
    if out:
        out.release()
        print(f"Visualization video saved: {output_video_path}")
    return pd.DataFrame(landmarks_data)
```
(3) Feature metrics and skeleton drawing
Compute grip-width, height, and torso-angle metrics to support the later cycle detection and motion analysis, and draw the skeleton overlay.
For the keypoints extracted from each frame, the following features are computed:
- Height metrics: vertical (Y) coordinates of the shoulders, wrists, etc., used to judge whether the chin clears the bar at the top.
- Angle metrics (front view): the angle between the torso (shoulder-hip line) and the vertical, used to assess upper-body stability.
- Grip metrics: the ratio of wrist spacing to shoulder width, used to assess whether the grip width is reasonable.

```python
def _calculate_grip_metrics(self, landmarks):
    """Grip-width metrics."""
    metrics = {}
    try:
        # Normalized (0-1) image coordinates of wrists and shoulders
        left_wrist = np.array([landmarks.landmark[15].x, landmarks.landmark[15].y])
        right_wrist = np.array([landmarks.landmark[16].x, landmarks.landmark[16].y])
        left_shoulder = np.array([landmarks.landmark[11].x, landmarks.landmark[11].y])
        right_shoulder = np.array([landmarks.landmark[12].x, landmarks.landmark[12].y])
        wrist_distance = np.linalg.norm(left_wrist - right_wrist)
        shoulder_distance = np.linalg.norm(left_shoulder - right_shoulder)
        metrics['GRIP_WIDTH'] = wrist_distance
        metrics['SHOULDER_WIDTH'] = shoulder_distance
        metrics['GRIP_RATIO'] = wrist_distance / shoulder_distance if shoulder_distance > 0 else np.nan
    except Exception:
        metrics['GRIP_WIDTH'] = np.nan
        metrics['SHOULDER_WIDTH'] = np.nan
        metrics['GRIP_RATIO'] = np.nan
    return metrics

def _calculate_height_metrics(self, landmarks):
    """Height metrics (normalized Y; larger values are lower in the frame)."""
    metrics = {}
    try:
        left_wrist = landmarks.landmark[15]
        right_wrist = landmarks.landmark[16]
        left_elbow = landmarks.landmark[13]
        right_elbow = landmarks.landmark[14]
        left_shoulder_y = landmarks.landmark[11].y
        right_shoulder_y = landmarks.landmark[12].y
        metrics['LEFT_WRIST_X'] = left_wrist.x
        metrics['LEFT_WRIST_Y'] = left_wrist.y
        metrics['RIGHT_WRIST_X'] = right_wrist.x
        metrics['RIGHT_WRIST_Y'] = right_wrist.y
        metrics['LEFT_SHOULDER_Y'] = left_shoulder_y
        metrics['RIGHT_SHOULDER_Y'] = right_shoulder_y
        metrics['AVG_WRIST_HEIGHT'] = (left_wrist.y + right_wrist.y) / 2
        metrics['AVG_SHOULDER_HEIGHT'] = (left_shoulder_y + right_shoulder_y) / 2
        metrics['MIN_SHOULDER_HEIGHT'] = min(left_shoulder_y, right_shoulder_y)
        metrics['LEFT_ELBOW_X'] = left_elbow.x
        metrics['LEFT_ELBOW_Y'] = left_elbow.y
        metrics['RIGHT_ELBOW_X'] = right_elbow.x
        metrics['RIGHT_ELBOW_Y'] = right_elbow.y
    except Exception:
        metrics.update({key: np.nan for key in [
            'LEFT_WRIST_X', 'LEFT_WRIST_Y',
            'RIGHT_WRIST_X', 'RIGHT_WRIST_Y',
            'LEFT_SHOULDER_Y', 'RIGHT_SHOULDER_Y',
            'AVG_WRIST_HEIGHT', 'AVG_SHOULDER_HEIGHT', 'MIN_SHOULDER_HEIGHT',
            'LEFT_ELBOW_X', 'LEFT_ELBOW_Y', 'RIGHT_ELBOW_X', 'RIGHT_ELBOW_Y'
        ]})
    return metrics

def _calculate_torso_angle(self, landmarks):
    """Torso angle: shoulder-center to hip-center line against the vertical."""
    metrics = {}
    try:
        # Shoulder center
        left_shoulder = np.array([landmarks.landmark[11].x, landmarks.landmark[11].y])
        right_shoulder = np.array([landmarks.landmark[12].x, landmarks.landmark[12].y])
        shoulder_center = (left_shoulder + right_shoulder) / 2
        # Hip center
        left_hip = np.array([landmarks.landmark[23].x, landmarks.landmark[23].y])
        right_hip = np.array([landmarks.landmark[24].x, landmarks.landmark[24].y])
        hip_center = (left_hip + right_hip) / 2
        # Torso vector and its angle to the vertical
        dx = shoulder_center[0] - hip_center[0]
        dy = shoulder_center[1] - hip_center[1]
        angle = np.degrees(np.arctan2(dx, dy))
        metrics['TORSO_ANGLE'] = angle
        metrics['TORSO_ANGLE_ABS'] = abs(angle)
    except Exception:
        metrics['TORSO_ANGLE'] = np.nan
        metrics['TORSO_ANGLE_ABS'] = np.nan
    return metrics

def _draw_custom_skeleton(self, frame, landmarks, connections, width, height):
    """Draw a custom skeleton overlay."""
    # Connection lines
    for start_idx, end_idx in connections:
        start_landmark = landmarks.landmark[start_idx]
        end_landmark = landmarks.landmark[end_idx]
        if start_landmark.visibility > 0.3 and end_landmark.visibility > 0.3:
            start_point = (int(start_landmark.x * width), int(start_landmark.y * height))
            end_point = (int(end_landmark.x * width), int(end_landmark.y * height))
            cv2.line(frame, start_point, end_point, (0, 255, 255), 2)
    # Keypoints
    connected_points = {idx for connection in connections for idx in connection}
    for point_idx in connected_points:
        landmark = landmarks.landmark[point_idx]
        if landmark.visibility > 0.3:
            x = int(landmark.x * width)
            y = int(landmark.y * height)
            cv2.circle(frame, (x, y), 5, (0, 255, 0), -1)
```
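As a quick numeric check of the grip-ratio and torso-angle formulas, here is a sketch with made-up normalized coordinates (the values are illustrative only). Note that because image Y grows downward, an upright torso comes out of `arctan2` at ±180 degrees, and lean shows up as a deviation from 180:

```python
import numpy as np

# Hypothetical normalized (0-1) coordinates for one front-view frame
left_wrist,    right_wrist    = np.array([0.30, 0.20]), np.array([0.70, 0.20])
left_shoulder, right_shoulder = np.array([0.40, 0.40]), np.array([0.60, 0.40])
left_hip,      right_hip      = np.array([0.45, 0.70]), np.array([0.55, 0.70])

# Grip ratio: wrist spacing relative to shoulder width (here: a wide grip)
grip_ratio = (np.linalg.norm(left_wrist - right_wrist)
              / np.linalg.norm(left_shoulder - right_shoulder))

# Torso angle: shoulder-center -> hip-center vector against the vertical.
# Upright torso: dx == 0, dy < 0 (shoulders above hips in image coords)
shoulder_c = (left_shoulder + right_shoulder) / 2
hip_c = (left_hip + right_hip) / 2
dx, dy = shoulder_c[0] - hip_c[0], shoulder_c[1] - hip_c[1]
torso_angle = float(np.degrees(np.arctan2(dx, dy)))
print(round(float(grip_ratio), 3), round(abs(torso_angle), 1))
```

For this frame the grip ratio is 2.0 (wrists twice shoulder width apart) and the torso is perfectly upright (|angle| = 180 degrees under this convention).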
4.3 Pull-Up Cycle Detection
(1) Signal preprocessing
- Use shoulder height (the Y value) as the core signal, and fill missing data by linear interpolation so the signal stays continuous and gaps cannot break cycle detection.
- Apply a Savitzky-Golay filter to the interpolated signal to smooth out high-frequency noise while preserving the periodic trend of the pull-ups, improving the accuracy of peak/valley detection.
(2) Cycle identification
- Run a peak-detection algorithm on the shoulder-height signal to find "peaks" (arms extended, body at its lowest) and "valleys" (chin over the bar, body at its highest).
- Define one complete pull-up cycle as a peak-valley-peak sequence: starting from arms extended, through chin over the bar, and back to arms extended.
(3) Cycle validation
Filter out invalid cycles (e.g. too short or too long in duration, or with insufficient amplitude) to keep the detection results meaningful.
(4) Handling boundary values
Because peaks and valleys are found as local extrema, an extremum at the very first frame of the video cannot be detected directly. We therefore add a custom check that compares the first frame of the signal with the following frames to decide whether it is itself an extreme point (a boundary peak, i.e. the clip starts from a dead hang).

```python
def detect_rep_cycles_by_shoulder_height(self, df):
    """Detect pull-up cycles from shoulder height (with signal preprocessing)."""
    print("Detecting pull-up cycles from shoulder height...")
    # Use shoulder height as the primary signal
    shoulder_heights = df['MIN_SHOULDER_HEIGHT'].values
    # Filtering step 5: linear interpolation over missing values, then Savitzky-Golay
    shoulder_series = pd.Series(shoulder_heights)
    shoulder_interp = shoulder_series.interpolate(method='linear', limit_direction='both')
    if len(shoulder_interp) < 20:
        print("Too little data to detect cycles")
        return []
    # Savitzky-Golay filter: removes high-frequency noise, keeps the rep trend
    window_size = min(11, len(shoulder_interp) // 10 * 2 + 1)
    if window_size < 3:
        window_size = 3
    try:
        smoothed = signal.savgol_filter(shoulder_interp, window_length=window_size, polyorder=2)
    except Exception as e:
        print(f"Smoothing failed: {e}")
        smoothed = shoulder_interp.values
    # Find cycles
    rep_cycles = self._find_cycles_by_shoulder_height(smoothed)
    print(f"Detected {len(rep_cycles)} pull-up cycles")
    return rep_cycles

def _find_cycles_by_shoulder_height(self, shoulder_heights):
    """Find rep cycles in the shoulder-height signal."""
    rep_cycles = []
    try:
        min_distance = max(15, len(shoulder_heights) // 20)
        # Valleys: shoulders highest in frame (chin over the bar)
        valleys, _ = signal.find_peaks(-shoulder_heights, distance=min_distance, prominence=0.02)
        # Peaks: shoulders lowest in frame (arms extended)
        peaks, _ = signal.find_peaks(shoulder_heights, distance=min_distance, prominence=0.02)
        peaks = self._add_boundary_peaks(shoulder_heights, peaks, min_distance)
        print(f"Shoulder-height detection: {len(peaks)} peaks (arms extended), "
              f"{len(valleys)} valleys (chin over bar)")
        # Build cycles: peak -> valley -> peak
        if len(peaks) >= 2 and len(valleys) >= 1:
            for i in range(len(peaks) - 1):
                start_peak = peaks[i]
                end_peak = peaks[i + 1]
                # Look for a valley between the two peaks
                valleys_between = [v for v in valleys if start_peak < v < end_peak]
                if valleys_between:
                    valley = valleys_between[0]
                    if self._validate_rep_cycle(shoulder_heights, start_peak, valley, end_peak):
                        rep_cycles.append({
                            'start_frame': int(start_peak),
                            'bottom_frame': int(valley),
                            'end_frame': int(end_peak),
                            'duration': int(end_peak - start_peak),
                            'amplitude': float(shoulder_heights[start_peak] - shoulder_heights[valley])
                        })
    except Exception as e:
        print(f"Shoulder-height cycle detection error: {e}")
    return rep_cycles

def _add_boundary_peaks(self, signal_data, detected_peaks, min_distance):
    """Add a boundary peak at frame 0 when the clip starts at a dead hang."""
    peaks = list(detected_peaks)
    # Check the start boundary (first frame)
    if len(signal_data) > 0:
        search_range = min(min_distance, len(signal_data) // 4)
        if search_range > 0:
            first_value = signal_data[0]
            subsequent_values = signal_data[1:search_range]
            # Condition 1: the first frame exceeds the following frames
            if len(subsequent_values) > 0 and first_value > np.max(subsequent_values):
                # Condition 2: the first frame exceeds the 60th percentile overall
                if first_value > np.percentile(signal_data, 60):
                    peaks.insert(0, 0)
    return np.array(sorted(peaks))

def _validate_rep_cycle(self, signal_data, start, bottom, end):
    """Validate a candidate cycle."""
    try:
        if end <= start or bottom <= start or end <= bottom:
            return False
        duration = end - start
        amplitude = signal_data[start] - signal_data[bottom]
        # Lenient thresholds
        if duration < 10 or duration > 200 or amplitude < 0.02:
            return False
        return True
    except Exception:
        return False
```
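The peak-valley-peak construction can be sanity-checked on a synthetic shoulder-height trace. This sketch (assuming scipy is installed; the signal shape is made up) simulates three reps and also shows why the boundary handling in step (4) matters: the clip starts and ends at a dead hang, but `find_peaks` only reports interior extrema, so without boundary peaks only one full cycle gets assembled:

```python
import numpy as np
from scipy import signal

# Synthetic shoulder-height trace (normalized image Y): three simulated reps.
# Y is large at the dead hang (shoulders low in frame) and dips as the chin
# clears the bar, so the clip starts and ends on a "peak".
t = np.linspace(0, 3 * 2 * np.pi, 300)
shoulder_y = 0.5 + 0.1 * np.cos(t)

# Same scheme as above: valleys = chin over bar, peaks = arms extended
valleys, _ = signal.find_peaks(-shoulder_y, distance=15, prominence=0.02)
peaks, _ = signal.find_peaks(shoulder_y, distance=15, prominence=0.02)

# Assemble peak -> valley -> peak cycles
reps = [(p1, v, p2)
        for p1, p2 in zip(peaks, peaks[1:])
        for v in valleys if p1 < v < p2]
print(len(peaks), len(valleys), len(reps))
```

Only two interior peaks are found (the starting and ending dead-hang frames are missed), so three simulated reps yield just one assembled cycle, which is precisely the gap `_add_boundary_peaks` is designed to close.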
4.4 Motion Analysis and Result Output
(1) Per-cycle analysis
For each valid cycle, compute statistics of the key metrics:
- Angle metrics: maximum / minimum / mean angle and standard deviation (stability and sway of the torso and lower limbs).
- Grip metrics (front view only): mean / maximum / minimum grip ratio (grip-width consistency).
- Height difference (front view only): vertical distance between shoulders and wrists at the top (how standard the movement is).
```python
def create_biomechanical_benchmark(self, df, rep_cycles):
    """Build the biomechanical benchmark from detected cycles."""
    if not rep_cycles:
        print("No cycles detected; creating empty benchmark")
        return self._create_empty_benchmark()
    # Analyze each cycle
    cycle_analyses = {}
    for i, cycle in enumerate(rep_cycles):
        cycle_name = f"cycle_{i + 1}"
        cycle_analysis = self._analyze_single_cycle(df, cycle, cycle_name)
        if cycle_analysis:
            cycle_analyses[cycle_name] = cycle_analysis
    if not cycle_analyses:
        return self._create_empty_benchmark()
    # Assemble the benchmark result
    return {
        'analysis_summary': {
            'total_cycles': len(cycle_analyses),
            'total_frames': len(df),
            'analysis_timestamp': pd.Timestamp.now().isoformat(),
            'status': 'success'
        },
        'cycles': cycle_analyses
    }

def _analyze_single_cycle(self, df, cycle, cycle_name):
    """Analyze a single rep cycle."""
    try:
        start, bottom, end = cycle['start_frame'], cycle['bottom_frame'], cycle['end_frame']
        if end >= len(df):
            return None
        cycle_data = df.iloc[start:end].copy()
        # Grip statistics
        grip_ratios = cycle_data['GRIP_RATIO'].dropna()
        grip_stats = {
            'grip_ratio_mean': float(np.mean(grip_ratios)) if len(grip_ratios) > 0 else np.nan,
            'grip_ratio_max': float(np.max(grip_ratios)) if len(grip_ratios) > 0 else np.nan,
            'grip_ratio_min': float(np.min(grip_ratios)) if len(grip_ratios) > 0 else np.nan,
            'grip_ratio_std': float(np.std(grip_ratios)) if len(grip_ratios) > 0 else np.nan
        }
        # Torso-angle statistics
        torso_angles = cycle_data['TORSO_ANGLE_ABS'].dropna()
        torso_stats = {
            'torso_angle_max': float(np.max(torso_angles)) if len(torso_angles) > 0 else np.nan,
            'torso_angle_min': float(np.min(torso_angles)) if len(torso_angles) > 0 else np.nan,
            'torso_angle_mean': float(np.mean(torso_angles)) if len(torso_angles) > 0 else np.nan,
            'torso_angle_std': float(np.std(torso_angles)) if len(torso_angles) > 0 else np.nan
        }
        # Shoulder-center vs wrist-center height difference at the top
        peak_height_diff = self._calculate_peak_height_difference(cycle_data, bottom)
        # Wrist-elbow angle at the top
        wrist_elbow_angle = self._calculate_wrist_elbow_angle_at_peak(cycle_data, bottom)
        return {
            'cycle_info': {
                'start_frame': int(start),
                'bottom_frame': int(bottom),
                'end_frame': int(end),
                'duration_frames': int(end - start),
                'amplitude': float(cycle['amplitude'])
            },
            'grip_metrics': grip_stats,
            'torso_metrics': torso_stats,
            'peak_height_difference': peak_height_diff,
            'wrist_elbow_angle': wrist_elbow_angle
        }
    except Exception as e:
        print(f"Error analyzing cycle {cycle_name}: {e}")
        return None

def _calculate_peak_height_difference(self, cycle_data, bottom_frame):
    """Height difference between shoulder center and wrist center at the top."""
    nan_result = {
        'height_difference': np.nan,
        'shoulder_height': np.nan,
        'wrist_height': np.nan,
        'frame': int(bottom_frame)
    }
    try:
        cycle_start = cycle_data.index[0]
        relative_bottom = bottom_frame - cycle_start
        if 0 <= relative_bottom < len(cycle_data):
            bottom_data = cycle_data.iloc[relative_bottom]
            shoulder_center_y = bottom_data.get('AVG_SHOULDER_HEIGHT', np.nan)
            wrist_center_y = bottom_data.get('AVG_WRIST_HEIGHT', np.nan)
            if not np.isnan(shoulder_center_y) and not np.isnan(wrist_center_y):
                return {
                    'height_difference': float(shoulder_center_y - wrist_center_y),
                    'shoulder_height': float(shoulder_center_y),
                    'wrist_height': float(wrist_center_y),
                    'frame': int(bottom_frame)
                }
        return nan_result
    except Exception as e:
        print(f"Error computing height difference: {e}")
        return nan_result

def _calculate_wrist_elbow_angle_at_peak(self, cycle_data, bottom_frame):
    """Angle between the wrist-to-elbow vector and the vertical at the top."""
    nan_result = {
        'left_wrist_elbow_angle': np.nan,
        'right_wrist_elbow_angle': np.nan,
        'avg_wrist_elbow_angle': np.nan,
        'frame': int(bottom_frame)
    }

    def angle_from_vertical(wx, wy, ex, ey):
        # Angle (degrees) between the wrist->elbow vector and straight up
        if any(np.isnan([wx, wy, ex, ey])):
            return np.nan
        vec = np.array([ex - wx, ey - wy])
        norm = np.linalg.norm(vec)
        if norm == 0:
            return np.nan
        vertical = np.array([0, -1])  # image y grows downward
        cos_angle = np.clip(np.dot(vec, vertical) / norm, -1.0, 1.0)
        return float(np.degrees(np.arccos(cos_angle)))

    try:
        cycle_start = cycle_data.index[0]
        relative_bottom = bottom_frame - cycle_start
        if not (0 <= relative_bottom < len(cycle_data)):
            return nan_result
        bottom_data = cycle_data.iloc[relative_bottom]
        left_angle = angle_from_vertical(
            bottom_data.get('LEFT_WRIST_X'), bottom_data.get('LEFT_WRIST_Y'),
            bottom_data.get('LEFT_ELBOW_X'), bottom_data.get('LEFT_ELBOW_Y'))
        right_angle = angle_from_vertical(
            bottom_data.get('RIGHT_WRIST_X'), bottom_data.get('RIGHT_WRIST_Y'),
            bottom_data.get('RIGHT_ELBOW_X'), bottom_data.get('RIGHT_ELBOW_Y'))
        return {
            'left_wrist_elbow_angle': left_angle,
            'right_wrist_elbow_angle': right_angle,
            'avg_wrist_elbow_angle': (float(np.nanmean([left_angle, right_angle]))
                                      if not all(np.isnan([left_angle, right_angle])) else np.nan),
            'frame': int(bottom_frame)
        }
    except Exception as e:
        print(f"Error computing wrist-elbow angle: {e}")
        return nan_result

def _get_nan_metrics(self):
    """Return a dict of NaN metrics (used when a frame has no landmarks)."""
    return {
        'GRIP_WIDTH': np.nan, 'SHOULDER_WIDTH': np.nan, 'GRIP_RATIO': np.nan,
        'LEFT_WRIST_X': np.nan, 'LEFT_WRIST_Y': np.nan,
        'RIGHT_WRIST_X': np.nan, 'RIGHT_WRIST_Y': np.nan,
        'LEFT_SHOULDER_Y': np.nan, 'RIGHT_SHOULDER_Y': np.nan,
        'AVG_WRIST_HEIGHT': np.nan, 'AVG_SHOULDER_HEIGHT': np.nan, 'MIN_SHOULDER_HEIGHT': np.nan,
        'TORSO_ANGLE': np.nan, 'TORSO_ANGLE_ABS': np.nan,
        'LEFT_ELBOW_X': np.nan, 'LEFT_ELBOW_Y': np.nan, 'RIGHT_ELBOW_X': np.nan, 'RIGHT_ELBOW_Y': np.nan
    }

def _create_empty_benchmark(self):
    """Return an empty benchmark when no cycles were detected."""
    return {
        'analysis_summary': {
            'total_cycles': 0,
            'total_frames': 0,
            'analysis_timestamp': pd.Timestamp.now().isoformat(),
            'status': 'no_cycles_detected'
        },
        'cycles': {}
    }
```
(2) Post-processing filtering
Apply a Butterworth low-pass or Savitzky-Golay filter to the batch data to further remove noise and improve data quality, giving the analysis a more reliable data source.
```python
# ============== Filtering step 6: post-processing filters (batch refinement) ==============
def post_process_filtering(self, df, method='butterworth', order=4, cutoff_freq=0.1):
    """Post-processing filter: finer batch smoothing after extraction."""
    df_smoothed = df.copy()
    # Columns to smooth
    coordinate_columns = [
        'LEFT_WRIST_X', 'LEFT_WRIST_Y',
        'RIGHT_WRIST_X', 'RIGHT_WRIST_Y',
        'LEFT_ELBOW_X', 'LEFT_ELBOW_Y',
        'RIGHT_ELBOW_X', 'RIGHT_ELBOW_Y',
        'LEFT_SHOULDER_Y', 'RIGHT_SHOULDER_Y',
        'GRIP_WIDTH', 'SHOULDER_WIDTH',
        'GRIP_RATIO', 'TORSO_ANGLE'
    ]
    fps = 30  # assumed frame rate; adjust to the actual video
    for col in coordinate_columns:
        if col not in df.columns:
            continue
        series = df[col].copy()
        # Skip columns without enough valid data
        if series.isna().all() or len(series) < 10:
            continue
        # Interpolate missing values
        series_filled = series.interpolate(method='linear', limit_direction='both')
        series_filled = series_filled.ffill().bfill()
        if method == 'butterworth':
            # Butterworth low-pass: removes high-frequency noise, keeps the rep signal
            nyquist = fps / 2
            normal_cutoff = cutoff_freq / nyquist
            if normal_cutoff < 1.0:
                b, a = signal.butter(order, normal_cutoff, btype='low')
                # Zero-phase filtering (avoids phase shift)
                filtered = signal.filtfilt(b, a, series_filled)
                # Range clamp: keep values within a sane envelope
                if 'ANGLE' in col or 'GRIP' in col:
                    filtered = np.clip(filtered, series_filled.min() * 0.5, series_filled.max() * 1.5)
                df_smoothed[col] = filtered
        elif method == 'savgol':
            # Savitzky-Golay: polynomial smoothing over the whole series
            window_length = min(11, len(series_filled) // 3 * 2 + 1)
            if 5 <= window_length <= len(series_filled):
                polyorder = min(3, window_length - 1)
                try:
                    filtered = signal.savgol_filter(
                        series_filled,
                        window_length=window_length,
                        polyorder=polyorder
                    )
                    df_smoothed[col] = filtered
                except Exception:
                    df_smoothed[col] = series_filled
            else:
                df_smoothed[col] = series_filled
    return df_smoothed
```
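As a standalone check of the zero-phase Butterworth step, here is a sketch on synthetic data (the 0.4 Hz "rep" frequency and noise level are made up for illustration); filtering should bring the noisy trace measurably closer to the clean signal:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fps = 30                                            # assumed frame rate, as above
t = np.arange(300) / fps
clean = 0.5 + 0.1 * np.sin(2 * np.pi * 0.4 * t)     # slow, rep-like motion (0.4 Hz)
noisy = clean + rng.normal(0.0, 0.02, t.size)       # per-frame landmark jitter

# 4th-order low-pass Butterworth at 1 Hz (normalized by Nyquist), zero-phase
b, a = signal.butter(4, 1.0 / (fps / 2), btype='low')
filtered = signal.filtfilt(b, a, noisy)

# Filtering should move the trace closer to the clean signal
err_before = float(np.mean((noisy - clean) ** 2))
err_after = float(np.mean((filtered - clean) ** 2))
print(err_after < err_before)
```

Because `filtfilt` runs the filter forward and backward, the smoothed curve stays aligned in time with the raw one, which matters when the peak/valley frame indices are reused against the original data.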
(3) Result output
Save the analysis results as a JSON file and print a summary, for later inspection and for building the knowledge base.
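A minimal sketch of this export step (the file name, helper name, and demo dict are illustrative assumptions, not the project's actual output schema). NaN values are mapped to null so the file stays standard JSON, since `json.dump` would otherwise emit the non-standard literal `NaN`:

```python
import json
import numpy as np

def save_benchmark(benchmark, path='benchmark_result.json'):
    """Write the analysis result to JSON, mapping NaN to null so the
    file remains valid JSON for downstream consumers."""
    def _clean(obj):
        if isinstance(obj, dict):
            return {k: _clean(v) for k, v in obj.items()}
        if isinstance(obj, list):
            return [_clean(v) for v in obj]
        if isinstance(obj, float) and np.isnan(obj):
            return None
        return obj
    with open(path, 'w', encoding='utf-8') as f:
        json.dump(_clean(benchmark), f, ensure_ascii=False, indent=2)
    return path

# Tiny demo benchmark in the shape produced above (values made up)
demo = {
    'analysis_summary': {'total_cycles': 1, 'status': 'success'},
    'cycles': {'cycle_1': {'grip_metrics': {'grip_ratio_mean': float('nan')}}}
}
out_path = save_benchmark(demo)
with open(out_path, encoding='utf-8') as f:
    reloaded = json.load(f)
print(reloaded['cycles']['cycle_1']['grip_metrics']['grip_ratio_mean'])
```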
5. Reflections
Developing the pull-up motion-analysis algorithm taught me a great deal. At the start, I followed the design document and broke the work into four major modules (initialization, keypoint extraction, and so on), clarifying the goal of each stage and laying the groundwork for implementation.
Improving accuracy came down to details: to handle keypoint jitter and missing data I designed a multi-level filtering pipeline, from frame-level real-time smoothing to batch post-processing filters, which noticeably improved data quality and detection accuracy.
Data thinking ran through the whole process, from extraction and processing to structured output, turning raw, unordered data into a valuable assessment report. I also learned to deal with compatibility issues and logic bugs, sharpening my problem-solving skills in practice.
This experience showed me that a good algorithm must be accurate and efficient, but also practical and easy to use. Going forward, I will keep improving the algorithm's robustness and explore deeper integration with AI so the technology can better serve motion assessment.
