
Chenguang Wang
@ChenguangWang
Followers: 179 · Following: 30 · Media: 8 · Statuses: 49
Assistant Professor in CSE at WashU @WUSTL. WashU NLP Group https://t.co/TppY5J4Bkz. NLP, Machine Learning, Security. Previously @UCBerkeley @PKU1898 @UofIllinois
St. Louis, USA
Joined August 2011
Introducing rLLM: a flexible framework for post-training language agents via RL. It's also the engine behind DeepSWE, a fully open-sourced, state-of-the-art coding agent. GitHub: rLLM: DeepSWE:
pretty-radio-b75.notion.site
*, †: Major Contributors
Introducing DeepSWE: our fully open-sourced, SOTA software engineering agent trained purely with RL on top of Qwen3-32B. DeepSWE achieves 59% on SWEBench-Verified with test-time scaling (and 42.2% Pass@1), topping the SWEBench leaderboard for open-weight models.
RT @dawnsongtweets: 1/ AI agents are reaching a breakthrough moment in cybersecurity. In our latest work: CyberGym: AI agents discov…
RT @kylepmont: Excited to share our work at #ICLR2025! JudgeBench tests the reliability of LLM-based judges with a focus on objective co…
RT @ppppnnnni: Huge thanks to our amazing collaborators: @t_jianhong @ppppnnnni @NRCrispino ZihaoYu @bemikelive @BelizGunel @ruoxijia @Xi…
Huge thanks to my amazing students @WUSTL and collaborators @UBC @GoogleDeepMind @virginia_tech @ucdavis @SonyAI_global @UCBerkeley!
Really excited about #mlan: we generalize language instruction tuning across vision and language modalities in multimodal LLMs!
Excited to announce our latest paper: MLAN: Language-Based Instruction Tuning Improves Zero-Shot Generalization of Multimodal Large Language Models!
RT @sijun_tan: Introducing JudgeBench, the ultimate benchmark designed to push LLM-based judges to their limits! Why do we need a new…
Really excited to share that the WashU @WUSTL NLP group is now recruiting postdocs #postdoc! Join us to do exciting research on the next big thing in foundation models, ML, security, and more! We are also recruiting PhDs #PhD. Happy to chat more! Details:
Our #icml2024 work #agentinstruct is again covered by the @WUSTL Record! This time, it is the top story! Thanks to my wonderful students and collaborators @NRCrispino Kyle Montgomery @dawnsongtweets. [Paper] [News]
Our recent research was covered by the @WUSTL Record. Join us at the WashU NLP Group to do exciting research!
source.washu.edu
Large language models can divide and conquer your tasks! Excited about our #ACL2024 work on helping LLMs learn how to break down complex tasks into smaller instances of the same task and solve them recursively! [GitHub]
RT @yuan_ye_2000123: (1/4) We are really excited to introduce our new paper: Measuring Social Norms of Large Language Models. This paper ha…
Take a look at my recent interview with MIT Technology Review @techreview China, talking about our recent research on #STEM and more. [Originally in Chinese] [Google Translate]
The #STEM benchmark is now released! Try it out to understand the fundamentals of your foundation models! [#ICLR2024 Paper] [Leaderboard] [Dataset] [GitHub]
github.com
Code for ICLR 2024 paper: Measuring Vision-Language STEM Skills of Neural Models - stemdataset/STEM
Really excited about our ICLR 2024 work on testing the basic STEM (science, technology, engineering, math) skills of large foundation models (e.g., GPT-3.5)! They took real-world K-12 exams and performed well below the level of millions of elementary students.
Today @gonglinyuan (PhD @UCBerkeley) talked about training LLMs to understand programming languages @WUSTL, hosted by the NLP group. Fantastic work!
Very excited to receive the Google Research Scholar Award @Google @GoogleAI! We are going to ground LLMs!
research.google
Great to have Jason Eisner @adveisner (Professor @JohnsHopkins, Director @SemMachines) present his amazing work on "neural semantic parsing" @WUSTL, hosted by WashU NLP. Will there be LLM semantics?
Today William Cohen @professorwcohen (Professor @CarnegieMellon, Principal Scientist @Google) shared his fantastic research on "Language Models that Retrieve". Prof. Cohen is one of the few pioneers in the field to learn from!