Wei Ping

@_weiping

Followers: 2K · Following: 604 · Media: 14 · Statuses: 299

distinguished research scientist @nvidia | post-training, RL, multimodal | generative models for audio. Views are my own.

San Francisco, CA
Joined June 2020
@_weiping
Wei Ping
6 months
Introducing AceReason-Nemotron: Advancing math and code reasoning through reinforcement learning (RL)
We propose conducting RL on math-only prompts first, then on code-only prompts. Our key findings include:
- Math-only RL significantly boosts both math and code benchmarks!
- …
2
24
157
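For readers who want the shape of this two-stage recipe, here is a minimal, hypothetical sketch of RL on math-only prompts followed by RL on code-only prompts. The `Policy` class, the reward stand-ins, and the step counts are illustrative assumptions, not the released AceReason-Nemotron training code:

```python
import random
from typing import Callable

class Policy:
    """Stand-in for an LLM policy, e.g. an R1-distilled SFT checkpoint."""
    def generate(self, prompt: str) -> str:
        return f"<sampled response to: {prompt}>"

    def update(self, prompt: str, response: str, reward: float) -> None:
        pass  # a real loop would take a PPO/GRPO-style gradient step here

def rl_stage(policy: Policy, prompts: list[str],
             reward_fn: Callable[[str, str], float], steps: int) -> Policy:
    """One RL stage: sample a prompt, roll out, score with a verifiable
    reward, and update the policy."""
    for _ in range(steps):
        prompt = random.choice(prompts)
        response = policy.generate(prompt)
        policy.update(prompt, response, reward_fn(prompt, response))
    return policy

# Verifiable rewards: exact answer match for math, unit tests for code.
math_reward = lambda p, r: 1.0  # stand-in: would check the final answer
code_reward = lambda p, r: 1.0  # stand-in: would run hidden test cases

policy = Policy()                                               # SFT start
policy = rl_stage(policy, ["a math prompt"], math_reward, 100)  # stage 1
policy = rl_stage(policy, ["a code prompt"], code_reward, 100)  # stage 2
```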
@RafaelValleArt
Rafael Valle
4 months
🤯 Audio Flamingo 3 is out already... and that's before Audio Flamingo 2 makes its debut at ICML on Wednesday, July 16 at 4:30 p.m.! These benchmark results are insane! https://t.co/6VMONn6AEB
2
16
54
@lucas110550
Zhuolin Yang
5 months
Our released evaluation toolkit can reproduce our AceReason-Nemotron models' numbers (see below):
AceReason-Nemotron-1.0-7B:
LiveCodeBench (Avg@8):
* [05/23-05/24]: 72.0; [06/24-01/25]: 54.2
* release set v5: 51.2; release set v6: 44.4
AIME (Avg@64):
* AIME'24: 68.6; AIME'25: …
huggingface.co
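The Avg@k numbers above are the pass rate averaged over k sampled generations per problem, then averaged across problems. A minimal sketch of that computation, assuming per-sample pass/fail flags are already available (the toy data below is made up, not the toolkit's actual output):

```python
from statistics import mean

def avg_at_k(per_problem_correct: list[list[bool]]) -> float:
    """per_problem_correct[i] holds the pass/fail flags for the k samples
    drawn for problem i; returns mean accuracy as a percentage."""
    return 100.0 * mean(mean(flags) for flags in per_problem_correct)

# Example: 3 problems, k = 4 samples each.
results = [
    [True, True, False, True],     # solved in 3 of 4 samples
    [False, False, False, False],  # never solved
    [True, True, True, True],      # always solved
]
print(f"Avg@4 = {avg_at_k(results):.1f}")  # -> Avg@4 = 58.3
```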
@ychenNLP
Yang Chen
5 months
The first thing we did was to make sure the eval setup is correct! We spent a lot of time making sure our eval can accurately reproduce the DeepSeek-R1 numbers on AIME and LiveCodeBench. It's IMPOSSIBLE to track RL progress without a good eval setup (e.g., we see AIME up …
0
4
9
@ychenNLP
Yang Chen
5 months
📢We conduct a systematic study to demystify the synergy between SFT and RL for reasoning models. The result? We trained a 7B model, AceReason-Nemotron-1.1, significantly improved over version 1.0 on math and coding benchmarks.
✅AIME2025 (math): 53.6% -> 64.8%
✅LiveCodeBench …
6
45
205
@zihan_johan_liu
Zihan (Johan) Liu
5 months
With a stronger SFT backbone, AceReason-Nemotron-1.1-7B significantly outperforms its predecessor and sets record-high performance among Qwen2.5-7B-based reasoning models.
📄Report: https://t.co/yzYeGqWoTr
🤗Model: https://t.co/VRtprrPxZJ
📚SFT Data: …
huggingface.co
@_weiping
Wei Ping
5 months
Introducing AceReason-Nemotron 1.1
Our previous release, AceReason-Nemotron-1.0, introduced a stage-wise RL recipe that was applied sequentially to math-only and code-only prompts, demonstrating both high efficiency and strong effectiveness. Here, we systematically investigate …
1
8
25
@MohammadShoeybi
Mohammad Shoeybi
5 months
Check out our detailed study on advancing math and code reasoning using SFT and RL.
@_weiping
Wei Ping
5 months
Introducing AceReason-Nemotron 1.1
Our previous release, AceReason-Nemotron-1.0, introduced a stage-wise RL recipe that was applied sequentially to math-only and code-only prompts, demonstrating both high efficiency and strong effectiveness. Here, we systematically investigate …
1
3
12
@GavinNewsom
Gavin Newsom
5 months
If they can handcuff a U.S. Senator for asking a question, imagine what they will do to you.
53K
61K
339K
@mli0603
Max Li 李赵硕
5 months
Cosmos-Reason1 has exciting updates 💡 Now it understands physical reality — judging videos as real or fake! Check out the resources👇
Paper: https://t.co/TcqqvrhqAD
Huggingface: https://t.co/hOLno2IyhW
Code: https://t.co/UUg90bmcGW
Project page: https://t.co/Dr6ZqnKM8o
(1/n)
2
32
101
@kuchaev
Oleksii Kuchaiev
5 months
New reasoning Nemotron-H models are now publicly available. These models are based on a hybrid architecture! 47B and 8B, in BF16 and FP8.
Blogpost: https://t.co/mcGOeS9RfV
Weights:
huggingface.co
@rendu_a
Adi Renduchintala
5 months
Transformers still dominate the LLM scene, but we show that higher-throughput alternatives exist that are just as strong! Grateful to have a part in the Nemotron-H Reasoning effort. 🙏 Technical report will be out soon, stay tuned!
1
25
123
@_weiping
Wei Ping
5 months
Pass@1024 results of our RL model (AceReason-Nemotron-7B) and its starting SFT model (DeepSeek-R1-Distill-Qwen-7B) on LiveCodeBench-v6, which features a large answer space and high-quality test cases, making problems difficult to solve through 'guessing', even with extensive sampling.
@_weiping
Wei Ping
6 months
Introducing AceReason-Nemotron: Advancing math and code reasoning through reinforcement learning (RL)
We propose conducting RL on math-only prompts first, then on code-only prompts. Our key findings include:
- Math-only RL significantly boosts both math and code benchmarks!
- …
2
9
56
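Pass@k, as reported above, is typically computed with the unbiased estimator from Chen et al. (2021, "Evaluating Large Language Models Trained on Code"): from n samples per problem of which c pass, estimate the probability that at least one of k samples is correct. A short sketch (the counts in the usage example are made up):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of P(at least one of k samples is correct),
    given n samples per problem of which c passed."""
    if n - c < k:
        return 1.0  # fewer than k failures: every size-k draw contains a pass
    return 1.0 - comb(n - c, k) / comb(n, k)

# With n = 1024 samples of which only 3 are correct, pass@1024 is 1.0 by
# construction, while pass@8 estimated from the same samples stays small:
print(round(pass_at_k(1024, 3, 8), 4))  # ~0.0233
print(pass_at_k(1024, 3, 1024))         # 1.0
```

A large k like 1024 is what makes the "hard to solve by guessing" claim testable: if the benchmark were guessable, pass@k would climb toward 1.0 as k grows even for a weak model.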
@_weiping
Wei Ping
6 months
👍👍
@deepseek_ai
DeepSeek
6 months
🚀 DeepSeek-R1-0528 is here!
🔹 Improved benchmark performance
🔹 Enhanced front-end capabilities
🔹 Reduced hallucinations
🔹 Supports JSON output & function calling
✅ Try it now: https://t.co/IMbTch8Pii
🔌 No change to API usage — docs here: https://t.co/Qf97ASptDD
🔗 …
0
0
1
@_akhaliq
AK
6 months
Nvidia just dropped AceReason-Nemotron on Hugging Face: Advancing Math and Code Reasoning through Reinforcement Learning
4
34
198
@zihan_johan_liu
Zihan (Johan) Liu
6 months
Check out our AceReason-Nemotron-14B. 🤗 https://t.co/cMDwcHQPDS We start with RL training using math-only prompts, then continue with code-only prompts, which further enhances coding performance while maintaining math capability.
huggingface.co
@_weiping
Wei Ping
6 months
Introducing AceReason-Nemotron: Advancing math and code reasoning through reinforcement learning (RL)
We propose conducting RL on math-only prompts first, then on code-only prompts. Our key findings include:
- Math-only RL significantly boosts both math and code benchmarks!
- …
0
2
13
@StringChaos
Naman Jain
6 months
Lots of good analysis in here!
@_weiping
Wei Ping
6 months
Introducing AceReason-Nemotron: Advancing math and code reasoning through reinforcement learning (RL)
We propose conducting RL on math-only prompts first, then on code-only prompts. Our key findings include:
- Math-only RL significantly boosts both math and code benchmarks!
- …
0
3
13
@ychenNLP
Yang Chen
6 months
With just math-RL, AceReason-Nemotron-14B surpasses DeepCoder-14B on LiveCodeBench v5. We then did code-RL and found that training became so much easier.
@_weiping
Wei Ping
6 months
Introducing AceReason-Nemotron: Advancing math and code reasoning through reinforcement learning (RL)
We propose conducting RL on math-only prompts first, then on code-only prompts. Our key findings include:
- Math-only RL significantly boosts both math and code benchmarks!
- …
0
10
50
@NVIDIAAIDev
NVIDIA AI Developer
6 months
📣 Introducing AceReason-Nemotron: Advancing Math and Code Reasoning through Reinforcement Learning (RL)
Starting from the SFT model DeepSeek-R1-Distill-Qwen-14B, our AceReason-Nemotron-14B achieves substantial improvements in pass@1 accuracy on key benchmarks through RL: AIME …
7
38
138
@kuchaev
Oleksii Kuchaiev
7 months
Llama-Nemotron-v1 technical report is now available on arxiv https://t.co/OwFdIZnYlH
3
65
347
@Alibaba_Qwen
Qwen
7 months
Introducing Qwen3! We release Qwen3, our latest open-weight large language models, including 2 MoE models and 6 dense models ranging from 0.6B to 235B. Our flagship model, Qwen3-235B-A22B, achieves competitive results in benchmark evaluations of coding, math, general …
351
2K
8K
@zihan_johan_liu
Zihan (Johan) Liu
7 months
Introducing AceMath-RL-Nemotron-7B, a math reasoning model trained entirely through reinforcement learning from DeepSeek-R1-Distilled-Qwen-7B. It achieves AIME24: 69.0%, AIME25: 53.6%, and GPQA: 52.1%. Interestingly, this math-focused RL training also improves the coding …
huggingface.co
@_weiping
Wei Ping
7 months
Introducing AceMath-RL-Nemotron-7B, an open math model trained with reinforcement learning from the SFT-only checkpoint DeepSeek-R1-Distilled-Qwen-7B. It achieves:
- AIME24: 69.0 (+13.5 gain by RL)
- AIME25: 53.6 (+14.4)
- LiveCodeBench: 44.4 (surprisingly, +6.8 gain after …
0
4
11