DailyPaper-2025-9-26

MMR1: Enhancing Multimodal Reasoning with Variance-Aware Sampling and Open Resources

https://arxiv.org/pdf/2509.21268

Variance-Aware Sampling and large-scale CoT data improve multimodal reasoning models by stabilizing RL fine-tuning and enhancing performance on benchmarks.

The VPS score they define is quite naive and intuitive; I really don't see where the novelty is. They then add random sampling on top of VAS sampling, which is not a new idea either. To be fair, the later sections do verify in detail that, intuitive and human-prior-driven as it is, the approach works.

Training is a cold start followed by VAS, which is also fairly generic; a rough sketch of the sampling idea is given below.
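To make the intuition concrete, here is a minimal Python sketch of variance-weighted prompt sampling mixed with a uniform-random slice. The function and parameter names (`sample_batch`, `random_frac`) are my own illustration, not the paper's exact VPS definition; rewards are assumed to be 0/1 verifier outcomes.

```python
import random

def pass_rate(rewards):
    """Fraction of correct rollouts for one prompt (rewards are 0/1)."""
    return sum(rewards) / len(rewards)

def variance_score(rewards):
    """Bernoulli variance p*(1-p): peaks at p=0.5, vanishes when the
    prompt is always solved or never solved."""
    p = pass_rate(rewards)
    return p * (1.0 - p)

def sample_batch(prompt_rewards, batch_size, random_frac=0.2):
    """Variance-weighted sampling mixed with a uniform-random slice.

    prompt_rewards: dict mapping prompt -> list of 0/1 rollout rewards
    random_frac: fraction of the batch drawn uniformly (hypothetical knob)
    """
    prompts = list(prompt_rewards)
    # small epsilon keeps zero-variance prompts selectable
    weights = [variance_score(prompt_rewards[p]) + 1e-6 for p in prompts]

    n_random = int(batch_size * random_frac)
    n_weighted = batch_size - n_random

    weighted = random.choices(prompts, weights=weights, k=n_weighted)
    uniform = random.choices(prompts, k=n_random)
    return weighted + uniform
```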

The results look decent, and the checkpoints and datasets are fully open-sourced.

5/10

Seedream 4.0: Toward Next-generation Multimodal Image Generation

https://arxiv.org/abs/2509.20427

Seedream 4.0 is a high-performance multimodal image generation system that integrates text-to-image synthesis, image editing, and multi-image composition using a diffusion transformer and VAE, achieving state-of-the-art results with efficient training and inference.

New work from ByteDance. Overall it is very engineering-heavy, and I am not that familiar with training CV models, but the results look genuinely good, and it is SOTA once again.

7+/10

CE-GPPO: Controlling Entropy via Gradient-Preserving Clipping Policy Optimization in Reinforcement Learning

https://arxiv.org/abs/2509.20712

A novel reinforcement learning algorithm, CE-GPPO, reintroduces gradients from clipped tokens to improve the exploration-exploitation balance in training large language models.

PPO clips the contribution of overly large or overly small policy ratios in the objective; this work reintroduces, during the backward pass, hyperparameter-scaled gradient contributions from tokens outside the clipping range. I genuinely had not thought of this before, and it is a nice angle of attack. A rough sketch of the idea follows below.
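Below is a hedged PyTorch sketch of one way to realize the idea, using a stop-gradient term so clipped tokens contribute zero to the loss value but a down-scaled gradient in the backward pass. The coefficients `beta_low` / `beta_high` and the exact form are my assumptions, not the paper's actual objective.

```python
import torch

def ce_gppo_loss(log_probs, old_log_probs, advantages,
                 eps=0.2, beta_low=0.1, beta_high=0.1):
    """Sketch of a PPO-style loss that keeps a scaled gradient for
    clipped tokens instead of zeroing it out.

    beta_low / beta_high are hypothetical scaling coefficients for
    tokens clipped below 1-eps or above 1+eps.
    """
    ratio = torch.exp(log_probs - old_log_probs)

    # standard clipped surrogate (little to no gradient outside the trust region)
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
    surrogate = torch.min(ratio * advantages, clipped * advantages)

    # reintroduce a down-scaled gradient for tokens that PPO clips:
    # (ratio - ratio.detach()) is zero in value but carries the gradient
    # of `ratio`, so the loss value is unchanged while clipped tokens
    # still push the policy, scaled by beta.
    below = (ratio < 1.0 - eps).float()
    above = (ratio > 1.0 + eps).float()
    grad_term = (ratio - ratio.detach()) * advantages
    surrogate = surrogate + (beta_low * below + beta_high * above) * grad_term

    return -surrogate.mean()
```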

The downside is that these scaling hyperparameters are set manually; although the paper runs ablations over different values, it would be nicer if they were chosen automatically.

6+/10

SciReasoner: Laying the Scientific Reasoning Ground Across Disciplines

https://arxiv.org/abs/2509.21320

A scientific reasoning foundation model pre-trained on diverse scientific data supports multiple tasks and enhances cross-domain generalization and fidelity through specialized training techniques.

Work from Shanghai AI Lab; it can be seen as an extension and practical instantiation of one small branch of their 2507.17512. In short, mixing knowledge from related domains brings performance gains.

Together with 2507.17512, I would give it 8-/10.

VCRL: Variance-based Curriculum Reinforcement Learning for Large Language Models

https://arxiv.org/abs/2509.19803

A curriculum reinforcement learning framework dynamically adjusts training sample difficulty based on reward variance, improving LLM performance on mathematical reasoning tasks.

The domain is close to an idea I have been thinking about: this work essentially makes curriculum RL dynamic. Looking at it alongside HuggingFace's SmolLM and Shanghai AI Lab's 2507.17512, I feel there is a lot worth writing about in this direction.

The paper argues that if a problem is too easy or too hard for the model, its expected score is roughly 1 or 0, whereas a high score variance indicates the problem suits the model's current training stage. Based on this variance it maintains a memory bank from which training samples are drawn dynamically; a rough sketch follows below.
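Here is a minimal sketch of the variance-based memory bank idea, assuming 0/1 rollout rewards so the per-prompt variance is the Bernoulli p(1-p). The class name, capacity, and eviction rule are illustrative assumptions rather than the paper's exact design.

```python
import random

class VarianceMemoryBank:
    """Sketch of a memory bank that keeps the prompts whose rollout-reward
    variance is highest, i.e. prompts at a suitable difficulty for the
    current model."""

    def __init__(self, capacity=512):
        self.capacity = capacity
        self.bank = {}  # prompt -> latest reward variance

    @staticmethod
    def reward_variance(rewards):
        # 0/1 rewards -> Bernoulli variance p*(1-p), maximal at p=0.5
        p = sum(rewards) / len(rewards)
        return p * (1.0 - p)

    def update(self, prompt, rewards):
        """Record the latest rollout variance for a prompt, evicting the
        lowest-variance (too easy / too hard) prompt when full."""
        self.bank[prompt] = self.reward_variance(rewards)
        if len(self.bank) > self.capacity:
            worst = min(self.bank, key=self.bank.get)
            del self.bank[worst]

    def sample(self, k):
        """Draw k prompts, favoring high-variance ones."""
        if not self.bank:
            return []
        prompts = list(self.bank)
        weights = [self.bank[p] + 1e-6 for p in prompts]
        return random.choices(prompts, weights=weights, k=k)
```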

7+/10

posted @ 2025-09-26 13:23  LiBoyi