Getting Started with Web Workers
// src/main.ts
const worker = new Worker(new URL('./example.worker.ts', import.meta.url), {
  type: 'module',
});

worker.onmessage = (event: MessageEvent) => {
  console.log('Main thread received message:', event.data);
  console.log('end', Date.now());
};

console.log('start', Date.now());
worker.postMessage({ number: 1_000_000 });
// src/example.worker.ts
self.onmessage = (event) => {
  console.log('Worker received message:', event.data);
  const result = heavyComputation(event.data.number);
  self.postMessage({ result });
};

function heavyComputation(n: number): number {
  let sum = 0;
  for (let i = 0; i < n; i++) {
    sum += i;
  }
  return sum;
}
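If the worker is only needed for a single computation, it can be released once the result arrives; terminate() and the onerror handler are both part of the standard Worker API. A minimal sketch:

// src/main.ts (variant) — error handling and cleanup
worker.onerror = (err: ErrorEvent) => {
  console.error('Worker error:', err.message);
};
worker.onmessage = (event: MessageEvent) => {
  console.log('result:', event.data.result);
  worker.terminate(); // frees the worker thread; no further messages will arrive
};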
Getting Started with MediaPipe
The Pose class is designed to run on the main thread.
When it runs inside a Web Worker, MediaPipe's automatic asset-loading logic fails to detect the environment correctly and ends up using importScripts() to load non-JS assets (.tflite, .wasm), which throws an error ❌
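For reference, this is roughly how the legacy @mediapipe/pose solution is used on the main thread (a sketch; the locateFile URL and the options shown are placeholders, not part of the original notes):

// main thread only — legacy @mediapipe/pose (sketch)
import { Pose } from '@mediapipe/pose';

const pose = new Pose({
  // tells MediaPipe where to fetch its .wasm/.tflite assets from
  locateFile: (file) => `https://cdn.jsdelivr.net/npm/@mediapipe/pose/${file}`,
});
pose.setOptions({ modelComplexity: 1, minDetectionConfidence: 0.5 });
pose.onResults((results) => console.log(results.poseLandmarks));
await pose.send({ image: document.getElementById('video') as HTMLVideoElement });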
Using MediaPipe in a Web Worker
Adapting the vision_bundle.js file
Copy vision_bundle.mjs and vision_bundle.mjs.map from node_modules/@mediapipe/tasks-vision/ into your project, then change the trailing export statement
export{Ia as DrawingUtils,Za as FaceDetector,uc as FaceLandmarker,lc as FaceStylizer,Uo as FilesetResolver,mc as GestureRecognizer,_c as HandLandmarker,Ac as HolisticLandmarker,bc as ImageClassifier,kc as ImageEmbedder,Rc as ImageSegmenter,Sc as ImageSegmenterResult,Vc as InteractiveSegmenter,Fc as InteractiveSegmenterResult,Ga as MPImage,Ea as MPMask,Xc as ObjectDetector,Kc as PoseLandmarker,Zo as TaskRunner,Ja as VisionTaskRunner};
into the following, and rename the file to vision_bundle.js (an ES export statement is a syntax error in a classic script loaded via importScripts(), whereas top-level var declarations become globals on the worker scope):
// export{Ia as DrawingUtils,Za as FaceDetector,uc as FaceLandmarker,lc as FaceStylizer,Uo as FilesetResolver,mc as GestureRecognizer,_c as HandLandmarker,Ac as HolisticLandmarker,bc as ImageClassifier,kc as ImageEmbedder,Rc as ImageSegmenter,Sc as ImageSegmenterResult,Vc as InteractiveSegmenter,Fc as InteractiveSegmenterResult,Ga as MPImage,Ea as MPMask,Xc as ObjectDetector,Kc as PoseLandmarker,Zo as TaskRunner,Ja as VisionTaskRunner};
var FaceLandmarker = uc;
var FilesetResolver = Uo;
var HolisticLandmarker = Ac;
var PoseLandmarker = Kc; // needed by the pose worker below
//# sourceMappingURL=vision_bundle_mjs.js.map
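A quick way to confirm the adaptation worked: in a classic worker, top-level var declarations from a file loaded via importScripts() become properties of the worker's global scope, so the classes should be reachable directly. A sketch:

// inside the worker, right after loading the adapted bundle
self.importScripts("vision_bundle.js");      // would throw a SyntaxError if the file still ended with `export { ... }`
console.log(typeof self.FilesetResolver);    // "function"
console.log(typeof self.PoseLandmarker);     // "function"
console.log(typeof self.HolisticLandmarker); // "function"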
Configuring the Worker
/// <reference lib="webworker" />
self.importScripts("vision_bundle.js");
let poseLandmarker = null;
let PoseConfig = {
  baseOptions: {
    modelAssetPath: "https://storage.googleapis.com/mediapipe-models/pose_landmarker/pose_landmarker_lite/float16/1/pose_landmarker_lite.task",
    delegate: "CPU"
  },
  runningMode: "IMAGE",
  numPoses: 1
};
async function initPoseLandmarker() {
  const filesetResolver = await FilesetResolver.forVisionTasks(
    "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision@0.10.12/wasm"
  );
  poseLandmarker = await PoseLandmarker.createFromOptions(filesetResolver, PoseConfig);
}

onmessage = async (e) => {
  if (poseLandmarker == null) {
    const start = Date.now();
    await initPoseLandmarker();
    const end = Date.now();
    console.log("init", end - start);
  }
  if (poseLandmarker !== null && e.data) {
    const start = Date.now();
    let results = await poseLandmarker.detect(e.data);
    const end = Date.now();
    console.log("proc", end - start);
    postMessage(results);
  }
};
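On the main-thread side, the image has to be converted into something that can cross the postMessage boundary; detect() should accept an ImageBitmap, which is also transferable. A sketch, assuming the worker above is saved as pose.worker.js (that file name is made up here):

// src/main.ts — driving the pose worker (sketch)
const poseWorker = new Worker('pose.worker.js'); // classic worker, so importScripts() inside it works

poseWorker.onmessage = (event: MessageEvent) => {
  console.log('pose results:', event.data);
};

const img = document.getElementById('image') as HTMLImageElement;
const bitmap = await createImageBitmap(img); // DOM elements themselves cannot be posted to a worker
poseWorker.postMessage(bitmap, [bitmap]);    // transfer the bitmap instead of copying it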
Additional Notes
About the Holistic Landmarker
The sample code given for the Holistic Landmarker at https://www.npmjs.com/package/@mediapipe/tasks-vision:
const vision = await FilesetResolver.forVisionTasks(
  "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision/wasm"
);
const holisticLandmarker = await HolisticLandmarker.createFromModelPath(vision,
  "https://storage.googleapis.com/mediapipe-models/holistic_landmarker/holistic_landmarker/float16/1/hand_landmark.task"
);
const image = document.getElementById("image") as HTMLImageElement;
const landmarks = holisticLandmarker.detect(image);
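Since the adapted vision_bundle.js also re-exports HolisticLandmarker, the same worker pattern should carry over; a sketch under that assumption (the model URL is copied verbatim from the npm sample above and may need to point at the actual holistic_landmarker .task file instead):

// inside the classic worker, after importScripts("vision_bundle.js") — sketch
let holisticLandmarker = null;

async function initHolisticLandmarker() {
  const vision = await FilesetResolver.forVisionTasks(
    "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision@0.10.12/wasm"
  );
  holisticLandmarker = await HolisticLandmarker.createFromModelPath(
    vision,
    "https://storage.googleapis.com/mediapipe-models/holistic_landmarker/holistic_landmarker/float16/1/hand_landmark.task"
  );
}

onmessage = async (e) => {
  if (holisticLandmarker === null) await initHolisticLandmarker();
  if (e.data) postMessage(holisticLandmarker.detect(e.data));
};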
References
https://www.npmjs.com/package/@mediapipe/tasks-vision
https://github.com/google-ai-edge/mediapipe/issues/2574
https://github.com/Wei-1/vision-worker-test/tree/main
Model Zoo
https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/models.md