
Charlie Ruan
@charlie_ruan
Followers 613 · Following 611 · Media 22 · Statuses 136
CS PhD Student @UCBerkeley @BerkeleySky | prev @CSDatCMU, @CornellCIS
Joined August 2014
RT @NovaSkyAI: The SkyRL roadmap is live! Our focus is on building the easiest-to-use high-performance RL framework for agents. We'd lo….
RT @NovaSkyAI: 🔎 SkyRL + Search-R1. Training a multi-turn search agent doesn’t have to be complicated. With SkyRL, reproducing the SearchR….
RT @WentaoGuo7: 🦆🚀QuACK🦆🚀: new SOL mem-bound kernel library without a single line of CUDA C++ all straight in Python thanks to CuTe-DSL. On….
RT @tyler_griggs_: We did a deep-dive on the (many) open source RL frameworks out there, and tried to distill their core design philosophie….
Started working on RL and contributing to SkyRL recently. Check out our new RL framework that prioritizes both modularity and performance! E.g., try disaggregating training and generation on heterogeneous HW with just a config change🔥
✨Release: We upgraded SkyRL into a highly-modular, performant RL framework for training LLMs. We prioritized modularity—easily prototype new algorithms, environments, and training logic with minimal overhead. 🧵👇 Blog: Code:
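The "config change" mentioned above might look roughly like the sketch below. This is a hypothetical illustration only: the key names (`placement`, `colocate_all`, and the per-component node/GPU fields) are assumptions for the sake of the example, not SkyRL's actual config schema.

```yaml
# Hypothetical sketch of disaggregating training and generation onto
# separate (possibly heterogeneous) hardware via config alone.
# All key names are illustrative assumptions, not SkyRL's real schema.
trainer:
  placement:
    colocate_all: false        # disaggregate: trainer and generator run separately
    num_nodes: 1
    num_gpus_per_node: 8       # e.g. a training-oriented GPU node

generator:
  placement:
    num_nodes: 1
    num_gpus_per_node: 8       # e.g. a different GPU type tuned for inference
```

The appeal of this style is that switching between colocated and disaggregated setups is a config flip rather than a code change, which is what the tweet is highlighting.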
RT @NovaSkyAI: ✨Release: We upgraded SkyRL into a highly-modular, performant RL framework for training LLMs. We prioritized modularity—easi….
RT @JiaZhihao: 📢Exciting updates from #MLSys2025! All session recordings are now available and free to watch at We….
RT @chrisdonahuey: Excited to announce 🎵Magenta RealTime, the first open weights music generation model capable of real-time audio generati….
RT @JiaZhihao: One of the best ways to reduce LLM latency is by fusing all computation and communication into a single GPU megakernel. But….
RT @uccl_proj: 1/N 📢 Introducing UCCL (Ultra & Unified CCL), an efficient collective communication library for ML training and inference, o….
RT @ye_combinator: We’re thrilled that FlashInfer won a Best Paper Award at MLSys 2025! 🎉 This wouldn’t have been possible without the comm….
RT @zicokolter: Thanks @NVIDIADC for the DGX B200 machine for the CMU Catalyst group! I'm perhaps already a bit too enthralled by it in th….
RT @JiaZhihao: Thank you to @NVIDIA for gifting our Catalyst Research Group the latest NVIDIA DGX B200! The B200 platform will greatly acce….
RT @tqchenml: Really thrilled to receive #NVIDIADGX B200 from @nvidia . Looking forward to cooking with the beast. Together with an amazin….
RT @Tim_Dettmers: Happy to announce that I joined the CMU Catalyst with three of my incoming students. Our research will bring the best m….
RT @SCSatCMU: Huge thank you to @NVIDIADC for gifting a brand new #NVIDIADGX B200 to CMU’s Catalyst Research Group! This AI supercomputing….
RT @yi_xin_dong: XGrammar is accepted to MLSys 2025 🎉🎉🎉 It is a widely adopted library for structured generation with LLMs—output clean JSON….
RT @tqchenml: Happy to share our latest work at @ASPLOSConf 2025! LLMs are dynamic, both in sequence and batches. Relax brings an ML compil….
RT @ChromiumDev: Build private web apps with WebLLM. Google Developer Expert, @christianliebel walks you through adding WebLLM to a to-do….
RT @jtpio: What if we could use AI models like Llama 3.2 or Mistral 7B in the browser with JupyterLite? 🤯. Still at a very early stage of c….