
Wei Liu
@WeiLiu99
Followers
561
Following
2K
Media
18
Statuses
557
#NLProc | Ph.D. Student @hkust @hkustnlp | Prev. @AlibabaGroup @ShanghaiTechUni
Joined February 2018
“What is the answer of 1 + 1?” Large Reasoning Models (LRMs) may generate 1500+ tokens just to answer this trivial question. Too much thinking 🤯 Can LRMs be both Faster AND Stronger? Yes. Introducing LASER💥: Learn to Reason Efficiently with Adaptive Length-based Reward Shaping
2
33
140
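A minimal sketch of what length-based reward shaping can look like in code, to make the idea above concrete. This is not the LASER paper's actual reward: the length budget, the linear bonus, and the function name below are illustrative assumptions only.

```python
# Illustrative sketch of length-based reward shaping for efficient reasoning.
# NOT the exact LASER formulation; target_len, length_weight, and the linear
# bonus are assumptions chosen for illustration.

def shaped_reward(is_correct: bool, num_tokens: int,
                  target_len: int = 1024, length_weight: float = 0.5) -> float:
    """Reward correct answers and, among correct answers, shorter traces.

    Args:
        is_correct: whether the final answer is correct.
        num_tokens: length of the generated reasoning trace in tokens.
        target_len: hypothetical length budget (assumption).
        length_weight: strength of the length bonus (assumption).
    """
    if not is_correct:
        return 0.0  # no length bonus for wrong answers
    # Linear bonus that shrinks as the trace approaches the budget.
    length_bonus = max(0.0, 1.0 - num_tokens / target_len)
    return 1.0 + length_weight * length_bonus


# A 200-token correct answer scores higher than a 1500-token one.
print(shaped_reward(True, 200))   # ~1.40
print(shaped_reward(True, 1500))  # 1.00 (budget exceeded, no bonus)
print(shaped_reward(False, 200))  # 0.00
```

The point of this kind of shaping is that correctness still dominates the reward, while the length term only breaks ties among correct answers, discouraging 1500-token traces for trivial questions.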
RT @terryyuezhuo: Training Agents without Runtime? Yes, and it works well on Offensive Cybersecurity! Introducing Cyber-Zero, the first ap….
0
7
0
RT @fengyao1909: Failing on 𝐥𝐚𝐫𝐠𝐞-𝐬𝐜𝐚𝐥𝐞 𝐑𝐋 with VeRL? ⚠️ Mixing inference backend (𝐯𝐋𝐋𝐌/𝐒𝐆𝐋𝐚𝐧𝐠) with training backends (𝐅𝐒𝐃𝐏/𝐌𝐞𝐠𝐚𝐭𝐫𝐨𝐧) 𝐬𝐞𝐜….
0
92
0
RT @anneouyang: KernelBench v0.1 is out, featuring: - A guideline on analyzing the validity of results and ruling out physically impossible….
0
31
0
RT @ChujieZheng: Proud to introduce Group Sequence Policy Optimization (GSPO), our stable, efficient, and performant RL algorithm that powe….
0
245
0
RT @yuntiandeng: Today I learned a student of mine from China gave up waiting for his Canadian visa after over a year without updates: 1.….
0
23
0
RT @sivil_taram: 🚀 Just one week after SWE-Perf launched (the first repository-level benchmark for realistic code performance optimization)….
0
3
0
RT @sivil_taram: Wrapped up a SWE-Perf website redesign using Qwen3-Coder on AnyCoder (. The process was incredibly….
0
14
0
RT @sivil_taram: 🔥 LLMs can fix bugs, but can they make your code faster? We put them to the test on real-world repositories, and the resul….
0
17
0
RT @_zhihuixie: 🚀 Thrilled to announce Dream-Coder 7B — the most powerful open diffusion code LLM to date.
0
34
0
RT @yuntiandeng: Can we build an operating system entirely powered by neural networks? Introducing NeuralOS: towards a generative OS that….
0
39
0
RT @lockonlvange: 👇this nice guy❤️will help us present CodeI/O ( at Oral session 6A Applications in Agents and Codi….
[Link preview: arxiv.org — “Reasoning is a fundamental capability of Large Language Models. While prior research predominantly focuses on enhancing narrow skills like math or code generation, improving performance on many...”]
0
3
0
RT @yihengxu_: Attending #ICML2025 🇨🇦 this week! Will be presenting Aguvis ( on July 15 at 11am, and joining Comp….
0
5
0
RT @james_y_zou: 📢New conference where AI is the primary author and reviewer! Current venues don't allow AI-writte….
0
126
0
RT @sivil_taram: 🚀 Check out our recent work Afterburner: Reinforcement Learning demonstrating super powerful self-improving code efficienc….
0
4
0
RT @FaZhou_998: MegaMath has been accepted to @COLM_conf 2025🥳 Hoping you find our data useful!
0
10
0
RT @sansa19739319: 🤖 Can diffusion models write code competitively? Excited to share our latest 7B coding diffusion LLM!! 💻 With DiffuCoder,….
0
113
0