Guy Van den Broeck

@guyvdb

Followers
4K
Following
5K
Media
0
Statuses
41

Professor of Computer Science and Samueli Fellow at UCLA @UCLAComSci; Scientist at @RelationalAI; working on Artificial Intelligence

Los Angeles, CA
Joined April 2008
@itisalex3
Alex Chen
11 days
What happens when we compress the KV cache of prompts with multiple instructions? 🤔 Existing compression methods can lead to some instructions being ignored. 🙀 We propose simple changes to KV cache eviction that fix this problem, and highlight other pitfalls to be aware of. 💯
2
2
15
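The instruction-aware eviction idea can be illustrated with a toy score-based policy. Everything below (function name, scoring rule) is a hypothetical sketch, not the paper's method: it simply pins instruction tokens so they can never be evicted, then fills the remaining cache budget with the most-attended other tokens.

```python
import numpy as np

def evict_kv(attn_scores, instruction_mask, budget):
    """Toy KV-cache eviction: keep at most `budget` positions, but never
    evict positions flagged as instruction tokens.

    attn_scores: (seq_len,) cumulative attention each cached token received.
    instruction_mask: (seq_len,) bool, True where a token is an instruction.
    Returns the sorted indices of positions to keep."""
    keep = set(int(i) for i in np.flatnonzero(instruction_mask))  # pinned
    # Fill the remaining budget with the most-attended non-pinned tokens.
    for idx in np.argsort(-attn_scores):
        if len(keep) >= budget:
            break
        keep.add(int(idx))
    return sorted(keep)

scores = np.array([0.9, 0.1, 0.05, 0.8, 0.02, 0.7])
instr  = np.array([False, True, True, False, False, False])
print(evict_kv(scores, instr, budget=4))  # pinned 1, 2 plus top-scoring 0, 3
```

A plain score-only policy would evict positions 1 and 2 here (their attention scores are low), which is exactly how an instruction can silently drop out of a compressed cache.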
@tjingrant
Tian Jin
5 days
Plan autoregressively, denoise in parallel!
0
2
5
@ellieyhc
Ellie Cheng
5 days
Diffusion 🀝 Autoregressive Fast high-quality generation
0
2
2
@danielmisrael
Daniel Israel
5 days
"An hour of planning can save you 10 hours of doing." ✨📝 Planned Diffusion 📝 ✨ makes a plan before parallel dLLM generation. Planned Diffusion runs 1.2–1.8× faster than autoregressive and an order of magnitude faster than diffusion, while staying within 0.9–5% of AR quality.
7
45
308
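The plan-then-denoise control flow in the tweet above can be sketched in a few lines. `plan` and `denoise_span` below are hypothetical stand-ins for the actual models, illustrating only the shape of the idea (one cheap sequential planning pass, then independent spans expanded in parallel), not the Planned Diffusion implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def plan(prompt):
    # Stand-in for autoregressive planning: a short sequential pass
    # that emits an outline of spans to generate.
    return [f"{prompt}: part {i}" for i in range(3)]

def denoise_span(span_spec):
    # Stand-in for a diffusion LM refining one span; a real denoiser
    # would run several refinement steps over masked tokens.
    return span_spec.upper()

def planned_diffusion(prompt):
    spans = plan(prompt)                  # sequential, but short
    with ThreadPoolExecutor() as pool:    # spans expand in parallel
        return list(pool.map(denoise_span, spans))

print(planned_diffusion("essay"))
```

The speedup claim rests on this split: the slow autoregressive part only produces the short plan, while the bulk of the tokens are generated concurrently.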
@danielmisrael
Daniel Israel
1 month
πŸ”¦Adaptive Parallel Decoding (APD) has been accepted as a spotlight paper at @NeurIPSConf ! I thank my collaborators, reviewers, and program organizers for this honor. A thread for those interested 🧡 (1/n)
11
22
168
@nesyconf
NeSy 2025
2 months
@e_giunchiglia @guyvdb How can Reverend Bayes help us incorporate constraints? With NeSy of course 👀 With applications in non-toxic LLM generation and safe AI driving! @guyvdb
1
1
6
@nesyconf
NeSy 2025
2 months
@e_giunchiglia Now, @guyvdb is giving the opening keynote arguing why symbolic AI is still relevant in the age of LLMs... With the help of Shrek!
1
5
12
@nesyconf
NeSy 2025
2 months
@e_giunchiglia @guyvdb Behind all of these very nice methods is one central trick... Circuits! ➕✖️ These are tractable generative neural networks 😍
1
2
6
@oliviawpy2023
Olivia Wang
2 months
Watch out @PyTorch πŸ‘€πŸ‘€ You got competition here. Awesome work and talk by @guyvdb
1
1
2
@nesyconf
NeSy 2025
2 months
It is almost time to welcome you all in Santa Cruz! πŸ¦• We will start with an exciting and timely keynote by @guyvdb on "Symbolic Reasoning in the Age of Large Language Models" πŸ‘€
1
7
39
@yidouweng
Gwen Yidou-Weng
3 months
Wish your LM could plan—not just guess the next word? TRACE lets the LM see all endings before each move: global control at inference time, tractable lookahead via an HMM LM-proxy, and a linear classifier per constraint. Outperforms RL, DPO, and FUDGE—at just +20% decoding overhead over the base LM. #ICML2025
10
1
7
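The Bayesian lens mentioned in the next tweet can be illustrated with a single toy decoding step: reweight the base LM's next-token distribution by each candidate's estimated probability of eventually satisfying the constraint, then renormalize. In TRACE that lookahead term comes from an HMM proxy of the LM; here it is just a given array, so this is a sketch of the Bayes rule, not the method itself.

```python
import numpy as np

def controlled_step(p_lm, p_constraint_given_token):
    """One toy constrained-decoding step via Bayes' rule:
    P(token | constraint) ∝ P(token) * P(constraint | token chosen)."""
    posterior = p_lm * p_constraint_given_token  # Bayes numerator
    return posterior / posterior.sum()           # renormalize

p_lm  = np.array([0.5, 0.3, 0.2])  # base LM next-token distribution
p_sat = np.array([0.1, 0.9, 0.5])  # lookahead: P(constraint holds | token)
print(controlled_step(p_lm, p_sat))
```

Note how the middle token, only second-most likely under the base LM, becomes the top choice once the lookahead says it is far more likely to lead to a constraint-satisfying ending.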
@abeirami
Ahmad Beirami
3 months
Had the pleasure of learning about TRACE by Gwen Yidou-Weng, Benjie Wang, and @guyvdb at ICML! It views alignment/controlled decoding through a Bayesian lens and derives a simple, principled, and effective new method. I highly recommend reading this paper!
1
12
96
@nesyconf
NeSy 2025
4 months
🚨 First Call for Participation – NeSy 2025 πŸ“ Sept 8–10 | Santa Cruz, CA Join the longest-running conference on neurosymbolic AI! Our keynote speakers: @guyvdb , @tkipf , @dlmcguinness , @GaryMarcus More info πŸ‘‡
1
10
19
@IJCAIconf
IJCAIconf
5 months
Announcing the 2025 IJCAI Computers and Thought Award winner ✨Aditya Grover @adityagrover_, @InceptionAILabs @UCLA. Dr. Grover is honored for uniting deep generative models, representation learning & RL to advance scientific reasoning. Congratulations! https://t.co/Z3xESFizpi
3
13
72
@iScienceLuvr
Tanishq Mathew Abraham, Ph.D.
5 months
Accelerating Diffusion LLMs via Adaptive Parallel Decoding "We therefore introduce adaptive parallel decoding (APD), a novel method that dynamically adjusts the number of tokens sampled in parallel." "Notably, Dream with ADP surpasses the speed of autoregressive Qwen 7B and
7
17
167
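The "dynamically adjusts the number of tokens sampled in parallel" idea quoted above can be sketched as an accept/reject loop: propose several tokens at once, then accept the longest prefix whose per-token confidence stays above a threshold. The threshold rule below is a toy stand-in, not APD's actual adaptive criterion.

```python
def accept_prefix(confidences, threshold=0.8):
    """Toy confidence-gated parallel decoding: given confidences for k
    tokens proposed in parallel, accept the longest prefix that stays
    above `threshold`; the rest are resampled on the next step."""
    n = 0
    for c in confidences:
        if c < threshold:
            break
        n += 1
    return max(n, 1)  # always accept at least one token to make progress

print(accept_prefix([0.95, 0.9, 0.6, 0.99]))  # -> 2
```

When the model is confident (e.g. formulaic spans), many tokens are accepted per step and decoding approaches parallel speed; when it is uncertain, the loop degrades gracefully toward one token per step, which is ordinary autoregressive decoding.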