Summary: Contents — KIMI K1.5: SCALING REINFORCEMENT LEARNING WITH LLMS · TL;DR · Method · RL Prompt Set Construction · Long-CoT Supervised Fine-Tuning · RL Algorithm · Length Penalty · Sampling Strategy · Vision Data · Long2short CoT Model… Read more
posted @ 2025-07-21 20:37 fariver Views(137) Comments(0) Recommended(0)
Summary: Contents — DAPO: An Open-Source LLM Reinforcement Learning System at Scale · TL;DR · Background · Method · Clip-Higher · Dynamic Sampling · Overlong Reward Shaping · Experiment · Summary and Reflections… Read more
posted @ 2025-07-20 18:58 fariver Views(63) Comments(0) Recommended(0)
Summary: Contents — QWENLONG-L1: Towards Long-Context Large Reasoning Models with Reinforcement Learning · TL;DR · Motivation · suboptimal training efficiency · unstable optimization… Read more
posted @ 2025-07-20 15:07 fariver Views(26) Comments(0) Recommended(0)
Summary: Contents — Training language models to follow instructions with human feedback · TL;DR · Method · Dataset · Model · Supervised fine-tuning · Reward modeling (RM) · Reinforcement Lea… Read more
posted @ 2025-07-17 21:58 fariver Views(89) Comments(0) Recommended(0)
Summary: Contents — R1-Omni: Explainable Omni-Multimodal Emotion Recognition with Reinforcement Learning · TL;DR · Method · Verifiable Reward · RLVR · Experiment · Summary and Reflections · Related Links… Read more
posted @ 2025-07-15 21:28 fariver Views(48) Comments(0) Recommended(0)
Summary: Contents — DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning · TL;DR · Method · Experiment · Summary and Reflections · Related Links… Read more
posted @ 2025-07-15 20:28 fariver Views(44) Comments(0) Recommended(0)
Summary: Contents — DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models · TL;DR · Method · Data Collection · DeepSeekMath-Base 7B Training and Evaluation · Reinforcement… Read more
posted @ 2025-07-11 20:08 fariver Views(116) Comments(0) Recommended(0)
Summary: Contents — Reinforcement Learning Tutorial · Course Content · Basic Concepts · Policy Gradient - Evolution of Approaches · Version0 · Version1 · Version2 · Version3 · Version3.5 · Version4 · Policy Gradient - On-policy vs Off-policy · On… Read more
posted @ 2025-07-05 14:17 fariver Views(107) Comments(0) Recommended(0)
Summary: Distributed communication primitives — Broadcast: copies the data on one XPU card to all other XPU cards. Scatter: slices the data on one XPU card and distributes the slices to all other XPU cards. Reduce: receives the data from all other XPU cards, combines it with some operation (Sum/Mean/Max), and leaves the result on a single XPU card. Gather: receives the data from all other XPU car… Read more
posted @ 2025-07-02 20:21 fariver Views(21) Comments(0) Recommended(0)
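The four primitives above can be sketched as a single-process simulation in plain Python: "cards" are lists indexed by rank, and the function names mirror the primitives rather than any real XPU API (all names here are illustrative, not a framework's interface).

```python
# Single-process simulation of Broadcast / Scatter / Reduce / Gather.
# Each element of `cards` stands for one XPU card's local data buffer.

def broadcast(cards, root):
    """Copy the root card's data onto every card."""
    return [list(cards[root]) for _ in cards]

def scatter(cards, root):
    """Slice the root card's data and hand one chunk to each card."""
    data, n = cards[root], len(cards)
    chunk = len(data) // n
    return [data[i * chunk:(i + 1) * chunk] for i in range(n)]

def reduce_(cards, root, op=sum):
    """Combine all cards' data element-wise with `op`; result lands on root only."""
    reduced = [op(vals) for vals in zip(*cards)]
    return [reduced if i == root else None for i in range(len(cards))]

def gather(cards, root):
    """Collect every card's data onto the root card."""
    gathered = [x for card in cards for x in card]
    return [gathered if i == root else None for i in range(len(cards))]
```

For example, with `cards = [[1, 2], [3, 4], [5, 6]]`, `reduce_(cards, 0)` places the element-wise sum `[9, 12]` on card 0, while `gather(cards, 2)` places `[1, 2, 3, 4, 5, 6]` on card 2.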
Summary: Background — Large language model (LLM) parameter counts have exceeded one trillion, and a single training run involves ExaFLOP-scale floating-point computation. A single GPU tops out at 80 GB of memory (A100) with a 312 TFLOPS peak, so the memory wall and the communication wall are the core bottlenecks of distributed training at the thousand-/ten-thousand-card scale. Prerequisites: 1. DDP training process — Data sharding: the global batch is split into sub-bat… Read more
posted @ 2025-07-02 20:19 fariver Views(165) Comments(0) Recommended(0)
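The DDP step outlined above (global batch sharded into sub-batches, local gradients averaged across replicas so every card applies the identical update) can be sketched as follows; this is a minimal single-process illustration on a toy model y = w·x, and all names are assumptions, not any real framework's API.

```python
# Minimal DDP sketch: shard the global batch, compute per-card gradients,
# all-reduce with Mean, and apply the same SGD update on every replica.

def local_grad(w, batch):
    """Gradient of mean squared error for y = w * x on one sub-batch."""
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

def ddp_step(w, global_batch, n_cards, lr=0.1):
    # Data sharding: split the global batch into equal sub-batches.
    size = len(global_batch) // n_cards
    shards = [global_batch[i * size:(i + 1) * size] for i in range(n_cards)]
    # Each card computes a gradient on its own shard.
    grads = [local_grad(w, shard) for shard in shards]
    # All-reduce (Mean): every card ends up with the same averaged gradient.
    g = sum(grads) / n_cards
    return w - lr * g  # identical update on every replica

# Data generated from y = 3x, so w should converge toward 3.
batch = [(x, 3 * x) for x in [1.0, 2.0, 3.0, 4.0]]
w = 0.0
for _ in range(50):
    w = ddp_step(w, batch, n_cards=2)
```

Because the averaged gradient equals the gradient over the full global batch, the sharded update is mathematically identical to single-card training on the whole batch; only the computation is parallelized.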