 
            
Ruqi Zhang
@ruqi_zhang
Followers: 979 · Following: 252 · Media: 24 · Statuses: 111
Assistant Professor @PurdueCS | PhD @Cornell | Probabilistic machine learning, Trustworthy AI, Monte Carlo sampling
Joined October 2021
            
            
A hybrid diffusion can outperform pure discrete (masked) diffusion! We introduce CANDI:
- Combines discrete structure with continuous joint updates
- Achieves strong low-NFE generation
- Enables simple classifier guidance
How does it work? Continuous diffusion on text wasn't
Continuous diffusion dominates images but fails on discrete data—despite learning continuous gradients that should enable coordinated updates. "CANDI: Hybrid Discrete-Continuous Diffusion Models" explains why, and how hybrid diffusion fixes it! (1/8)
            
                
              
Can we accelerate test-time alignment? YES!
📃 Paper: Reward-Shifted Speculative Sampling Is An Efficient Test-Time Weak-to-Strong Aligner
🔗 arXiv: https://t.co/hzDG2l9KZG
📌 EMNLP 2025
          
                
              
             Thanks for having me and for putting together such a great event! Looking forward to the next one! 
And a massive thank you to our mentors who led discussions on:
⚖️ Responsible AI: @GabrielSaadia, @adinamwilliams
🧘‍♀️ Career-Life Balance: Julia Kreutzer, Mor Geva Pipek
🏢 Industry Careers: @OlgaNLP, @gspandana, @BahareFatemi
📚 Keeping Pace w/ AI: @swetaagrawal20, @ruqi_zhang
            
          
                
              
             Excited to give a talk on Oct 14 about Gradient-Based Discrete Sampling! How can we bring the power of Langevin dynamics to discrete spaces? I’ll discuss algorithms like Discrete Langevin and its extensions for multimodal distributions and combinatorial optimization, with 
🎙️ Monte Carlo Seminar — Tue, Oct 14, 2025
Speaker: Ruqi Zhang (Purdue University)
Title: Gradient-Based Discrete Sampling: Algorithms and Applications
Time: 8:30 AM PT / 11:30 AM ET / 4:30 PM London / 5:30 PM Paris
Zoom:
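The talk above is about gradient-based proposals for discrete spaces. As a rough, hedged illustration of the general idea (not the speaker's code), here is a minimal Gibbs-with-Gradients-style sampler: the gradient of a continuous relaxation of the log-probability scores every single-bit flip, a flip is proposed from those scores, and a Metropolis-Hastings step corrects the approximation. The quadratic toy target and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: an Ising-like quadratic model on {0,1}^d (illustrative only).
d = 8
W = rng.normal(size=(d, d))
W = (W + W.T) / 2.0
np.fill_diagonal(W, 0.0)
b = rng.normal(size=d)

def log_p(x):
    # Unnormalized log-probability of the quadratic model.
    return 0.5 * x @ W @ x + b @ x

def grad(x):
    # Gradient of the continuous extension of log_p.
    return W @ x + b

def flip_proposal(x):
    # Score every single-bit flip by its first-order change in log_p,
    # and turn the scores into a categorical proposal over coordinates.
    delta = (1.0 - 2.0 * x) * grad(x)
    logits = delta / 2.0
    q = np.exp(logits - logits.max())
    return q / q.sum()

def gwg_step(x):
    # One Gibbs-with-Gradients-style step with MH correction.
    q = flip_proposal(x)
    i = rng.choice(d, p=q)
    y = x.copy()
    y[i] = 1.0 - y[i]
    q_rev = flip_proposal(y)
    log_accept = log_p(y) - log_p(x) + np.log(q_rev[i]) - np.log(q[i])
    return y if np.log(rng.random()) < log_accept else x

x = (rng.random(d) < 0.5).astype(float)
for _ in range(200):
    x = gwg_step(x)
```

The gradient lets the sampler rank all d possible flips at the cost of one gradient evaluation, instead of scoring each flip with a separate function call as plain Gibbs would.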
          
                
              
             We’re presenting three papers at #COLM2025! I’ll be here Oct 7–10. Please stop by our poster and DM me if you want to chat. I’ll also be at Mentorship Roundtables at WiML. See you there! 
          
                
              
Sherlock is accepted at NeurIPS 2025! See you in San Diego!
🕵️ Introducing Sherlock, a self-correction and self-improvement training framework:
- Analyzes the self-correction behavior of reasoning VLMs
- Integrates self-correction and reasoning ability into VLMs using < 20% of the annotated data of reasoning baselines
👇 https://t.co/f7viAksAhQ
            
          
                
              
             Excited to be speaking at the #IJCAI2025 Workshop! Hope to see you there! 
           The program for the #IJCAI2025 Workshop on User-Aligned Assessment of Adaptive AI Systems is now available. We have a fantastic lineup of invited speakers and talks. Link:  https://t.co/AhhwUfjqMO 
            
            
                
              
             Purdue IPAI is hiring postdocs in AI! If you're interested in statistical machine learning or trustworthy AI, and would like to work with me, please get in touch! Applications are due by Sept 1, 2025.  https://t.co/Mf9KMnVukj 
          
          
            
            purdue.edu
              Purdue connects emerging leaders to world-class experts in physical artificial intelligence and applied fields through the IPAI Postdoctoral Fellows Program. Applications Due: October 15…
            
                
              
             Proud advisor moment: Pascal Jutras-Dubé gave a talk at MMLS to hundreds! Great work on making samplers work in just one step! Paper:  https://t.co/JNrT9KPWH6 
          
          
                
              
Excited to share our latest work on self-correcting reasoning in Vision-Language Models!
- Improves reasoning with minimal annotated data
- Lots of insights + strong results
Kudos to @YiDingywhy for leading this amazing work!
🕵️ Introducing Sherlock, a self-correction and self-improvement training framework:
- Analyzes the self-correction behavior of reasoning VLMs
- Integrates self-correction and reasoning ability into VLMs using < 20% of the annotated data of reasoning baselines
👇 https://t.co/f7viAksAhQ
            
          
                
              
              
             The deadline for #IJCAI2025 Workshop on User-Aligned Assessment of Adaptive AI Systems is just 5 days away. If you are working on any aspect of assessment, regulation, compliance, etc., of AI systems, please check it out. More details here:  https://t.co/AhhwUfjqMO 
          
          
                
              
Excited to present this at today’s poster session! Quick update: the poster number is 592. The Whova app seems to be outdated, but the ICLR website has the correct info. Check out the project page if you want to read more! Link: https://t.co/Qc8nRMllxP Time: 10am–12pm, poster number 592
           DAB is a controlled decoding algorithm using gradient-based discrete sampling. It achieves better fluency and constraint satisfaction—all with much less computational cost. 
            
                
              
ETA: https://t.co/uSOjVA8Nao
DAB: https://t.co/ApscxYVqOJ
Gradient GA: https://t.co/5Pz6g2BWcw
Single-step diffusion sampler:
          
                
              
             DAB is a controlled decoding algorithm using gradient-based discrete sampling. It achieves better fluency and constraint satisfaction—all with much less computational cost. 
          
                
              
             ETA is an inference-time alignment approach that improves safety without compromising the capabilities of VLMs. 
          
                
              
I won’t be attending #ICLR2025 this year, but my amazing students will be presenting several exciting works:
1️⃣ Inference-time safety in VLMs
2️⃣ Controlled decoding via discrete sampling
3️⃣ Gradient genetic algorithms for drug discovery
4️⃣ Single-step diffusion samplers
Catch
          
                
              
Excited to see our chapter out! A concise and accessible introduction to Bayesian computation in deep neural networks and deep generative models. Great for statisticians curious about diving in!
           A little chapter that we (@ruqi_zhang and awesome students and yours truly) wrote a while ago to give a brief intro of this nice field to statisticians 😊 
          
                
              
We have extended the #AABI workshop and proceedings deadlines!
*New deadlines:*
Workshop Track: February 14, AoE
Proceedings Track: February 14, AoE
https://t.co/qO0bnyRQZg
#ProbML #AABI #ICLR
          
          
            
            approximateinference.org
             Submit your work to the 7th Symposium on Advances in Approximate Bayesian Inference! #AABI This year, #AABI will be co-located with #ICLR2025! Workshop Track: February 7, AoE Proceedings Track: February 7, AoE Fast Track: February 18 / March 14, AoE  https://t.co/0DvnnTQCAw 
            
          
                
              
             
               
             
               
               
             
               
              