Chenguang Wang

@ChenguangWang

Followers
179
Following
30
Media
8
Statuses
49

Assistant Professor in CSE at WashU @WUSTL. WashU NLP Group https://t.co/TppY5J4Bkz. NLP, Machine Learning, Security. Previously @UCBerkeley @PKU1898 @UofIllinois

St. Louis, USA
Joined August 2011
@ChenguangWang
Chenguang Wang
19 days
🚀 Introducing rLLM: a flexible framework for post-training language agents via RL. It's also the engine behind DeepSWE, a fully open-sourced, state-of-the-art coding agent. 🔗 GitHub: 📘 rLLM: 📘 DeepSWE:
pretty-radio-b75.notion.site
*,†: Major Contributors
@Agentica_
Agentica Project
19 days
🚀 Introducing DeepSWE 🤖: our fully open-sourced, SOTA software engineering agent trained purely with RL on top of Qwen3-32B. DeepSWE achieves 59% on SWEBench-Verified with test-time scaling (and 42.2% Pass@1), topping the SWEBench leaderboard for open-weight models. 💪DeepSWE
Tweet media one
0
4
23
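For readers unfamiliar with the Pass@1 number quoted above: pass@k for coding agents is commonly estimated with the standard unbiased estimator popularized by the HumanEval benchmark. A minimal Python sketch of that generic formula only (not DeepSWE's own evaluation harness):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples,
    drawn without replacement from n generated rollouts of which c
    passed the tests, is correct."""
    if n - c < k:
        return 1.0  # every possible draw of k contains a passing rollout
    return 1.0 - comb(n - c, k) / comb(n, k)

# Toy example: 16 rollouts per task, 7 of them pass -> pass@1 ≈ 0.44
print(pass_at_k(n=16, c=7, k=1))
```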
@ChenguangWang
Chenguang Wang
1 month
RT @dawnsongtweets: 1/ 🔥 AI agents are reaching a breakthrough moment in cybersecurity. In our latest work: 🔓 CyberGym: AI agents discov…
0
141
0
@ChenguangWang
Chenguang Wang
3 months
RT @kylepmont: Excited to share our work at #ICLR2025! JudgeBench ⚖️ tests the reliability of LLM-based judges with a focus on objective co…
0
1
0
@ChenguangWang
Chenguang Wang
8 months
RT @ppppnnnni: 🌟 Huge thanks to our amazing collaborators: @t_jianhong @ppppnnnni @NRCrispino ZihaoYu @bemikelive @BelizGunel @ruoxijia @Xi…
0
1
0
@ChenguangWang
Chenguang Wang
8 months
Huge thanks to my amazing students @WUSTL and collaborators @UBC @GoogleDeepMind @virginia_tech @ucdavis @SonyAI_global @UCBerkeley!
0
0
0
@ChenguangWang
Chenguang Wang
8 months
🔥Really excited about #mlan! We generalize language instruction tuning across vision and language modalities in multimodal LLMs!
@ppppnnnni
Zhuohao Ni
8 months
🚀 Excited to announce our latest paper: MLAN: Language-Based Instruction Tuning Improves Zero-Shot Generalization of Multimodal Large Language Models!
Tweet media one
1
1
1
@ChenguangWang
Chenguang Wang
9 months
RT @sijun_tan: Introducing JudgeBench – the ultimate benchmark designed to push LLM-based judges to their limits! 🚀 ❓Why do we need a new…
0
10
0
@ChenguangWang
Chenguang Wang
10 months
🔥Really excited to share that the WashU @WUSTL NLP group is now recruiting postdocs #postdoc!!! Join us to do exciting research on the next big things in foundation models, ML, security, and more! We are also recruiting PhDs #PhD. Happy to chat more! Details:
@ChenguangWang
Chenguang Wang
10 months
🔥Our #icml2024 work #agentinstruct is again covered by the @WUSTL Record! This time, it is the top story! Thanks to my wonderful students and collaborators @NRCrispino Kyle Montgomery @dawnsongtweets. 📜paper 🗞️news
Tweet media one
0
2
4
@ChenguangWang
Chenguang Wang
10 months
🔥Excited about our @IEEESSP work "Preference Poisoning Attacks on Reward Model Learning"!
Tweet media one
Tweet media two
0
0
0
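For context on what a preference-poisoning adversary targets: reward models for RLHF are typically fit to pairwise human preference labels with a Bradley-Terry style objective, so flipping a fraction of those labels steers the learned reward. A minimal PyTorch sketch of that standard objective only, not the paper's attack or experimental setup:

```python
import torch
import torch.nn.functional as F

def reward_model_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry pairwise loss: -log sigmoid(r(chosen) - r(rejected)).
    A poisoned pair simply swaps which response is labeled 'chosen'."""
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Toy scalar rewards for a batch of three comparison pairs.
r_chosen = torch.tensor([1.2, 0.3, 0.8])
r_rejected = torch.tensor([0.4, 0.5, -0.1])
print(reward_model_loss(r_chosen, r_rejected))
```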
@ChenguangWang
Chenguang Wang
10 months
🔥Our recent research was covered by the @WUSTL Record. 💡Join us at the WashU NLP Group to do exciting research!
Tweet media one
source.washu.edu
0
0
0
@ChenguangWang
Chenguang Wang
11 months
🔥Large language models can divide and conquer your tasks! Excited about our #ACL2024 work on helping LLMs learn how to break down complex tasks into small instances of the same tasks and solve them recursively! 💻[Github]
Tweet media one
0
1
5
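A rough sketch of the recursive divide-and-conquer idea described in the tweet above: split a task into smaller instances of the same task, solve the leaves directly, and merge the partial answers. The `llm` stub and all prompts below are hypothetical placeholders, not the paper's actual prompts or code:

```python
def llm(prompt: str) -> str:
    """Stub standing in for a real LLM call; replace with an API client."""
    return "yes" if prompt.startswith("Is this task atomic") else f"[answer to: {prompt}]"

def solve(task: str, depth: int = 0, max_depth: int = 3) -> str:
    """Recursively decompose a task into smaller instances of the same task."""
    # Base case: the task is simple enough, or we hit the recursion limit.
    if depth >= max_depth or "yes" in llm(f"Is this task atomic? {task}").lower():
        return llm(f"Solve directly: {task}")
    # Ask the model for sub-tasks of the same kind, then solve each recursively.
    subtasks = llm(f"Split into sub-tasks, one per line: {task}").splitlines()
    partials = [solve(t, depth + 1, max_depth) for t in subtasks]
    # Merge the partial answers back into an answer for the original task.
    return llm(f"Combine the answers {partials} into one answer for: {task}")

print(solve("Summarize a 300-page report"))
```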
@ChenguangWang
Chenguang Wang
1 year
RT @yuan_ye_2000123: (1/4) We are really excited to introduce our new paper: Measuring Social Norms of Large Language Models. This paper ha…
0
1
0
@ChenguangWang
Chenguang Wang
1 year
Take a look at my recent interview with MIT Technology Review @techreview China talking about our recent research on #STEM and other topics 😃. 🎙️[Originally in Chinese] [Google Translate]
@ChenguangWang
Chenguang Wang
1 year
🔥#STEM benchmark is now released! Try it out to understand the fundamentals of your foundation models! 📃[#ICLR2024 Paper] 🏆[Leaderboard] 🤗[Dataset] 💻[Github]
0
0
0
@ChenguangWang
Chenguang Wang
1 year
🔥#STEM benchmark is now released! Try it out to understand the fundamentals of your foundation models! 📃[#ICLR2024 Paper] 🏆[Leaderboard] 🤗[Dataset] 💻[Github]
Tweet media one
github.com
Code for ICLR 2024 paper: Measuring Vision-Language STEM Skills of Neural Models - stemdataset/STEM
@ChenguangWang
Chenguang Wang
1 year
Really excited about our ICLR 2024 work on testing the basic STEM (science, technology, engineering, maths) skills of large foundation models (e.g., GPT-3.5)! They took real-world K-12 exams and performed well below the level of millions of elementary students.
Tweet media one
Tweet media two
Tweet media three
0
0
1
@ChenguangWang
Chenguang Wang
1 year
Today @gonglinyuan (PhD @UCBerkeley) talked about training LLMs to understand programming languages @WUSTL, hosted by the NLP group. Fantastic work! 📃📃📃
0
0
0
@ChenguangWang
Chenguang Wang
1 year
😃Very excited to receive the Google Research Scholar Award @Google @GoogleAI! We are going to ground LLMs!
research.google
0
0
6
@ChenguangWang
Chenguang Wang
1 year
Great to have @adveisner Jason Eisner (Professor @JohnsHopkins, Director @SemMachines) present his amazing work on "neural semantic parsing" @WUSTL, hosted by WashU NLP. Will there be LLM semantics? 🤔 📃📃
0
0
1
@ChenguangWang
Chenguang Wang
1 year
WashU NLP today hosted a talk on "Advanced #RAG". Jialu Liu (Senior Staff Engineer @Google) shared deep technical insights on RAG and its future with long-context LLMs! 📃
0
0
0
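For readers new to the acronym in the talk above, RAG (retrieval-augmented generation) boils down to retrieving relevant passages and conditioning the generator on them. A generic, hedged sketch with toy `embed` and `generate` stand-ins for real models (not the speaker's system):

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy bag-of-characters embedding; in practice use a sentence-embedding model."""
    v = np.zeros(256)
    for ch in text.lower():
        v[ord(ch) % 256] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by cosine similarity to the query embedding."""
    q = embed(query)
    return sorted(corpus, key=lambda d: float(embed(d) @ q), reverse=True)[:k]

def rag_answer(query: str, corpus: list[str], generate) -> str:
    """Retrieve top passages, then let the generator answer grounded in them."""
    context = "\n".join(retrieve(query, corpus))
    return generate(f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")

docs = ["RAG pairs a retriever with a generator.", "Long-context LLMs can read whole documents."]
print(rag_answer("What does RAG pair together?", docs, generate=lambda prompt: prompt[-120:]))
```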
@ChenguangWang
Chenguang Wang
1 year
Today William Cohen @professorwcohen (Professor @CarnegieMellon, Principal Scientist @Google) shared his fantastic research on "Language Models that Retrieve". Prof. Cohen is one of the few pioneers in the field to learn from! 📃
0
0
0