
Peixuan Han (韩沛煊)
@peixuanhakhan
Followers: 80 · Following: 17 · Media: 11 · Statuses: 39
1st-year Ph.D. student at UIUC @IllinoisCS · Amazon 2025 Summer Intern · LLM researcher
Urbana
Joined September 2024
RT @AlexiGlad: How can we unlock generalized reasoning? ⚡️ Introducing Energy-Based Transformers (EBTs), an approach that out-scales (feed-…
Super excited to begin my Applied Scientist Internship at @amazon, my first industry internship. I'm looking forward to conducting interesting and insightful research on efficient reasoning in LLMs!
RT @xiusi_chen: Can LLMs make rational decisions like human experts? 📖 Introducing DecisionFlow: Advancing Large Language Model as Principl…
RT @JiaxunZhang6: ⚠️ Rogue AI scientists? 🛡️ SafeScientist rejects unsafe prompts for ethical discoveries. Check out the paper ➡️ ( https://t…
RT @qiancheng1231: 📢 New Paper Drop: From Solving to Modeling! LLMs can solve math problems, but can they model the real world? 🌍 📄 arXiv…
RT @xwzliuzijia: 💥 Time-R1 is here! Can a 3B LLM truly grasp time? 🤔 YES! Excited to share our new work, Time-R1: Towards Comprehensive Te…
RT @ExplainMiracles: We introduce Gradient Variance Minimization (GVM)-RAFT, a principled dynamic sampling strategy that minimizes gradient…
RT @xiusi_chen: 🚀 Can we cast reward modeling as a reasoning task? 📖 Introducing our new paper: RM-R1: Reward Modeling as Reasoning. 📑 Pa…
RT @haofeiyu44: 🧪 Want an AI-generated paper draft in just 1 minute? Or dreaming of building auto-research apps but frustrated with setups?…
github.com: A lightweight framework for building research agents designed for developers - ulab-uiuc/tiny-scientist
RT @Alibaba_Qwen: Introducing Qwen3! We release and open-weight Qwen3, our latest large language models, including 2 MoE models and 6 den…
RT @BowenJin13: 🚀 Excited to announce that our paper 𝐒𝐞𝐚𝐫𝐜𝐡-𝐑𝟏 is now live! 📄 We introduce an RL framework (an extension of 𝐃𝐞𝐞𝐩𝐬𝐞𝐞𝐤-𝐑𝟏)…
RT @sundarpichai: Gemma 3 is here! Our new open models are incredibly efficient - the largest 27B model runs on just one H100 GPU. You'd ne…