Ruqi Zhang Profile
Ruqi Zhang

@ruqi_zhang

Followers: 979 · Following: 252 · Media: 24 · Statuses: 111

Assistant Professor @PurdueCS | PhD @Cornell | Probabilistic machine learning, Trustworthy AI, Monte Carlo sampling

Joined October 2021
@ruqi_zhang
Ruqi Zhang
2 days
A hybrid diffusion can outperform pure discrete (masked) diffusion! We introduce CANDI: - Combine discrete structure with continuous joint updates - Achieves strong low-NFE generation - Enables simple classifier guidance How does it work? Continuous diffusion on text wasn’t
@PatrickPyn35903
Patrick Pynadath
2 days
Continuous diffusion dominates images but fails on discrete data—despite learning continuous gradients that should enable coordinated updates. "CANDI: Hybrid Discrete-Continuous Diffusion Models" explains why, and how hybrid diffusion fixes it! (1/8)
0
5
46
@lblaoke
Bolian Li
7 days
Can we accelerate test-time alignment? YES! 📃paper: Reward-Shifted Speculative Sampling Is An Efficient Test-Time Weak-to-Strong Aligner 🔗arXiv: https://t.co/hzDG2l9KZG 📌EMNLP 2025
1
3
6
@ruqi_zhang
Ruqi Zhang
16 days
Thanks for having me and for putting together such a great event! Looking forward to the next one!
@nikitasaxena02
Nikita Saxena (she/her)
17 days
And a massive thank you to our mentors who led discussions on: ⚖️ Responsible AI: @GabrielSaadia, @adinamwilliams 🧘‍♀️ Career-Life Balance: Julia Kreutzer, Mor Geva Pipek 🏢 Industry Careers: @OlgaNLP, @gspandana, @BahareFatemi 📚 Keeping Pace w/ AI: @swetaagrawal20, @ruqi_zhang
0
0
2
@ruqi_zhang
Ruqi Zhang
18 days
Excited to give a talk on Oct 14 about Gradient-Based Discrete Sampling! How can we bring the power of Langevin dynamics to discrete spaces? I’ll discuss algorithms like Discrete Langevin and its extensions for multimodal distributions and combinatorial optimization, with
@OnlineMCSeminar
Monte Carlo Seminar
21 days
🎙️ Monte Carlo Seminar — Tue, Oct 14, 2025 Speaker: Ruqi Zhang (Purdue University) Title: Gradient-Based Discrete Sampling: Algorithms and Applications Time: 8:30 AM PT / 11:30 AM ET / 4:30 PM London / 5:30 PM Paris Zoom:
0
6
25
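For readers curious what "gradient-based discrete sampling" looks like in practice, below is a minimal, illustrative NumPy sketch of a Discrete-Langevin-style proposal with a Metropolis-Hastings correction. This is an assumption-laden toy (binary variables, a user-supplied log-density f and its relaxed gradient grad_f, a step size alpha), not the exact algorithm from the talk or the papers.

```python
# Illustrative sketch only: a gradient-informed coordinatewise proposal for binary
# variables, in the spirit of Discrete Langevin sampling, with an MH correction.
# f, grad_f, and alpha are placeholders supplied by the user.
import numpy as np

def discrete_langevin_step(x, f, grad_f, alpha=0.1, rng=None):
    """One MH step with a gradient-informed proposal over x in {0,1}^d.

    x       : current binary state, shape (d,)
    f       : log unnormalized target, f(x) -> float
    grad_f  : gradient of a continuous relaxation of f at x, shape (d,)
    alpha   : step size controlling how far proposals move from x
    """
    if rng is None:
        rng = np.random.default_rng()

    def proposal_probs(x_cur, grad):
        # Score both candidate values {0, 1} for every coordinate.
        cand = np.stack([np.zeros_like(x_cur), np.ones_like(x_cur)])  # (2, d)
        diff = cand - x_cur                                           # (2, d)
        logits = 0.5 * grad * diff - diff ** 2 / (2.0 * alpha)        # (2, d)
        p = np.exp(logits - logits.max(axis=0))
        return p / p.sum(axis=0)                                      # (2, d)

    # Forward proposal: flip each coordinate independently.
    p_fwd = proposal_probs(x, grad_f(x))
    x_new = (rng.random(x.shape) < p_fwd[1]).astype(x.dtype)

    # Reverse proposal probabilities for the MH acceptance ratio.
    p_rev = proposal_probs(x_new, grad_f(x_new))
    log_q_fwd = np.sum(np.log(np.where(x_new == 1, p_fwd[1], p_fwd[0]) + 1e-12))
    log_q_rev = np.sum(np.log(np.where(x == 1, p_rev[1], p_rev[0]) + 1e-12))

    log_accept = f(x_new) - f(x) + log_q_rev - log_q_fwd
    return x_new if np.log(rng.random()) < log_accept else x
```

The key idea the sketch tries to convey is that the relaxed gradient steers which coordinates are likely to flip, so proposals coordinate across dimensions instead of flipping bits uniformly at random.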
@ruqi_zhang
Ruqi Zhang
24 days
We’re presenting three papers at #COLM2025! I’ll be here Oct 7–10. Please stop by our poster and DM me if you want to chat. I’ll also be at Mentorship Roundtables at WiML. See you there!
0
3
22
@YiDingywhy
Yi Ding
1 month
Sherlock is accepted to NeurIPS 2025! See you in San Diego
@YiDingywhy
Yi Ding
5 months
🕵️Introducing Sherlock, a self-correction and self-improvement training framework: - Analyze self-correction behavior of reasoning VLMs - Integrate self-correction and reasoning ability into VLMs using < 20% annotated data compared to reasoning baselines 👇 https://t.co/f7viAksAhQ
1
1
1
@ruqi_zhang
Ruqi Zhang
3 months
Excited to be speaking at the #IJCAI2025 Workshop! Hope to see you there!
@pulkit_verma
Pulkit Verma
3 months
The program for the #IJCAI2025 Workshop on User-Aligned Assessment of Adaptive AI Systems is now available. We have a fantastic lineup of invited speakers and talks. Link: https://t.co/AhhwUfjqMO @XujieSi @ruqi_zhang @sidsrivast @HazemTorfah
0
0
7
@ruqi_zhang
Ruqi Zhang
3 months
Purdue IPAI is hiring postdocs in AI! If you're interested in statistical machine learning or trustworthy AI, and would like to work with me, please get in touch! Applications are due by Sept 1, 2025. https://t.co/Mf9KMnVukj
purdue.edu
Purdue connects emerging leaders to world-class experts in physical artificial intelligence and applied fields through the IPAI Postdoctoral Fellows Program. Applications Due: October 15…
0
2
8
@ruqi_zhang
Ruqi Zhang
4 months
Proud advisor moment: Pascal Jutras-Dubé gave a talk at MMLS to hundreds! Great work on making samplers work in just one step! Paper: https://t.co/JNrT9KPWH6
0
0
14
@ruqi_zhang
Ruqi Zhang
5 months
Excited to share our latest work on self-correcting reasoning in Vision-Language Models! - Improve reasoning with minimal annotated data - Lots of insights + strong results Kudos to @YiDingywhy for leading this amazing work!
@YiDingywhy
Yi Ding
5 months
🕵️Introducing Sherlock, a self-correction and self-improvement training framework: - Analyze self-correction behavior of reasoning VLMs - Integrate self-correction and reasoning ability into VLMs using < 20% annotated data compared to reasoning baselines 👇 https://t.co/f7viAksAhQ
0
5
32
@pulkit_verma
Pulkit Verma
6 months
The deadline for #IJCAI2025 Workshop on User-Aligned Assessment of Adaptive AI Systems is just 5 days away. If you are working on any aspect of assessment, regulation, compliance, etc., of AI systems, please check it out. More details here: https://t.co/AhhwUfjqMO
0
3
13
@PatrickPyn35903
Patrick Pynadath
6 months
Excited to present this at today’s poster session! Quick update: the poster number is 592. The Whova event listing seems to be outdated, but the ICLR website has the correct info. Check out the project page if you want to read more! Link: https://t.co/Qc8nRMllxP Time: 10am-12pm, poster number 592
@ruqi_zhang
Ruqi Zhang
6 months
DAB is a controlled decoding algorithm using gradient-based discrete sampling. It achieves better fluency and constraint satisfaction—all with much less computational cost.
0
2
6
@ruqi_zhang
Ruqi Zhang
6 months
ETA: https://t.co/uSOjVA8Nao DAB: https://t.co/ApscxYVqOJ Gradient GA: https://t.co/5Pz6g2BWcw Single-step diffusion sampler:
0
0
1
@ruqi_zhang
Ruqi Zhang
6 months
DAB is a controlled decoding algorithm using gradient-based discrete sampling. It achieves better fluency and constraint satisfaction—all with much less computational cost.
1
1
1
@ruqi_zhang
Ruqi Zhang
6 months
ETA is an inference-time alignment approach that improves safety without compromising the capabilities of VLMs.
1
0
0
@ruqi_zhang
Ruqi Zhang
6 months
I won’t be attending #ICLR2025 this year, but my amazing students will be presenting several exciting works: 1️⃣ Inference-time safety in VLMs 2️⃣ Controlled decoding via discrete sampling 3️⃣ Gradient genetic algorithms for drug discovery 4️⃣ Single-step diffusion samplers Catch
1
4
32
@YiDingywhy
Yi Ding
6 months
I will attend #ICLR2025 this week and share our work on VLM safety, ETA, #530 (Hall 3 + Hall 2B) on April 24th from 10:00 AM to 12:30 PM. Feel free to stop by to chat and discuss! See you in Singapore🤩
0
2
9
@ruqi_zhang
Ruqi Zhang
8 months
Excited to see our chapter out! A concise and accessible introduction to Bayesian computation in deep neural networks and deep generative models. Great for statisticians curious about diving in!
@liyzhen2
yingzhen
8 months
A little chapter that we (@ruqi_zhang and awesome students and yours truly) wrote a while ago to give a brief intro of this nice field to statisticians 😊
0
3
23
@timrudner
Tim G. J. Rudner
9 months
We have extended the #AABI workshop and proceedings deadlines! *New deadlines:* Workshop Track: February 14, AoE Proceedings Track: February 14, AoE https://t.co/qO0bnyRQZg #ProbML #AABI #ICLR
approximateinference.org
@timrudner
Tim G. J. Rudner
9 months
Submit your work to the 7th Symposium on Advances in Approximate Bayesian Inference! #AABI This year, #AABI will be co-located with #ICLR2025! Workshop Track: February 7, AoE Proceedings Track: February 7, AoE Fast Track: February 18 / March 14, AoE https://t.co/0DvnnTQCAw
0
5
11