Deep Learning Denoising Course: Summary

Hello everyone. This is the final lecture of the course, which lays out future research and improvement directions. Thank you!

Join the QQ group to get related materials and discuss with the group owner: 106047770

This series of articles is a write-up of the online course; the corresponding article is published after each lecture.

The articles in this course series can be viewed in the collection:
Deep Learning Denoising Course Series Article Collection

Future Research and Improvement Directions

1. Wait for the WebNN Polyfill to support the WebGPU backend

2. Use WebGPU GPUBuffers or GPUTextures as the network's inputs and outputs

References:
https://www.w3.org/TR/webnn/#programming-model-device-selection

https://www.w3.org/TR/webnn/#api-ml

https://www.w3.org/TR/webnn/#api-mlcontext-webgpu-interop

https://www.w3.org/TR/webnn/#api-mlcommandencoder

Create tensor from GPUBuffer
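Based on the WebGPU-interop sections of the WebNN spec draft linked above, the flow might look like the sketch below. This is a hedged sketch, not working code: the spec is still evolving, the `MLCommandEncoder` interface and the `input`/`output` operand names are taken as assumptions from the draft, and the `graph`, `inputBuffer`, and `outputBuffer` arguments are hypothetical.

```javascript
// Hedged sketch of WebNN/WebGPU interop per the spec draft (api-mlcontext-
// webgpu-interop and api-mlcommandencoder sections). Treat API names as
// assumptions from the draft, not a stable, shipped API.
async function runNetworkOnGpuBuffers(graph, inputBuffer, outputBuffer) {
  // Request a WebGPU device, then create an MLContext bound to it so the
  // network can consume and produce GPU buffers without CPU round-trips.
  const adapter = await navigator.gpu.requestAdapter();
  const device = await adapter.requestDevice();
  const context = await navigator.ml.createContext(device);

  // Record graph initialization and execution into a WebGPU-interoperable
  // command encoder; "input"/"output" are hypothetical operand names.
  const encoder = context.createCommandEncoder();
  encoder.initializeGraph(graph);
  encoder.dispatch(graph, { input: inputBuffer }, { output: outputBuffer });

  // Submit alongside other WebGPU work on the same device queue.
  device.queue.submit([encoder.finish()]);
}
```

Keeping the whole pipeline on the GPU queue this way is exactly why item 2 matters: the denoiser's input/output tensors never need to be copied back to JavaScript between frames.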

3. Integrate with a path tracer

4. Accumulate spp across multiple frames (temporal accumulation)

For training, add temporally accumulated spp data (see the WSPK dataset)

For inference, use temporally accumulated scene data as the input

References:

  • the relevant description from the WSPK paper:

Besides, temporally accumulating consecutive 1-spp frames can effectively improve the temporal stability and increase the effective spp of each frame. We employ a temporal accumulation preprocessing step before sending the noisy inputs to the denoising pipeline just like [SKW∗17, KIM∗19, MZV∗20]. We first reproject the previous frame to the current frame with the motion vector and then judge their geometry consistency by world position and shading normal feature buffers. Current frame pixels that passed the consistency test are blended with their corresponding pixels in the previous frame, while the failed pixels remain original 1 spp.

  • the related WSPK implementation
  • the related BMFR implementation
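The reproject/test/blend steps the WSPK excerpt describes can be sketched as follows. This is a minimal illustration, not the WSPK code: it assumes a hypothetical flat single-channel layout (real buffers are RGB radiance plus feature AOVs), scalar stand-ins for world position and shading normal, and illustrative thresholds and blend weight.

```javascript
// Minimal sketch of the temporal accumulation preprocessing step:
// reproject the previous frame with the motion vector, test geometry
// consistency on world position and shading normal, then blend passing
// pixels with history and keep failing pixels at their raw 1 spp.
// Data layout, thresholds, and alpha are illustrative assumptions.
function temporalAccumulate(params) {
  const {
    width, height,
    currColor, prevColor,      // 1-spp radiance, current / previous frame
    motionX, motionY,          // per-pixel motion vectors (current -> previous)
    currPos, prevPos,          // world positions (scalar stand-ins here)
    currNormal, prevNormal,    // shading normals (scalar stand-ins here)
    alpha = 0.2,               // blend weight for the current frame
    posThreshold = 0.1,
    normalThreshold = 0.1,
  } = params;
  const out = new Float32Array(width * height);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const i = y * width + x;
      // Reproject: where was this pixel in the previous frame?
      const px = Math.round(x + motionX[i]);
      const py = Math.round(y + motionY[i]);
      if (px < 0 || px >= width || py < 0 || py >= height) {
        out[i] = currColor[i]; // reprojected off-screen: keep raw 1 spp
        continue;
      }
      const j = py * width + px;
      // Geometry consistency test on world position and shading normal.
      const consistent =
        Math.abs(currPos[i] - prevPos[j]) < posThreshold &&
        Math.abs(currNormal[i] - prevNormal[j]) < normalThreshold;
      // Blend with history if consistent; otherwise keep the 1-spp pixel.
      out[i] = consistent
        ? alpha * currColor[i] + (1 - alpha) * prevColor[j]
        : currColor[i];
    }
  }
  return out;
}
```

For training data (item 4 above), running this step over consecutive noisy frames is what produces the accumulated-spp inputs; at inference time the same function feeds the network its temporally accumulated scene data.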

For ghosting artifacts caused by motion vectors, see the improvements in the paper: Temporally Reliable Motion Vectors for Real-time Ray Tracing

posted @ 2023-06-12 17:22  杨元超