Guy Van den Broeck

@guyvdb

Followers: 4K · Following: 5K · Media: 1 · Statuses: 43

Professor of Computer Science and Samueli Fellow at UCLA @UCLAComSci; Scientist at @RelationalAI; working on Artificial Intelligence

Los Angeles, CA
Joined April 2008
@nesyconf
NeSy 2025
2 months
Recordings of the NeSy 2025 keynotes are now available! 🎥 Check out insightful talks from @guyvdb, @tkipf, and @dlmcguinness on our new YouTube channel. Topics include using symbolic reasoning for LLMs, and object-centric representations https://t.co/iIHdKTr432
youtube.com
The NeSy conference studies the integration of deep learning and symbolic AI, combining neural network-based statistical machine learning with knowledge representation and reasoning from symbolic...
0
5
12
@guyvdb
Guy Van den Broeck
2 months
I gave a keynote at @nesyconf on "Symbolic Reasoning in the Age of Large Language Models" Check out the recording if you are curious about neurosymbolic generative AI: https://t.co/VUUJD4vdYB
0
8
39
@itisalex3
Alex Chen
2 months
What happens when we compress the KV cache of prompts with multiple instructions? 🤔 Existing compression methods can lead to some instructions being ignored. 🙀 We propose simple changes to KV cache eviction that fix this problem, along with other pitfalls to be aware of. 💯
2
2
16
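The fix the thread alludes to can be pictured with a small sketch. The snippet below is a hypothetical illustration, not the authors' method: it shows one simple change to score-based KV cache eviction (reserving part of the keep-budget for every instruction segment) so that no instruction's tokens are evicted wholesale. The function name, scoring scheme, and budget-splitting rule are all assumptions.

```python
# Hypothetical sketch: segment-aware KV cache eviction.
# Instead of ranking all tokens globally (which can evict an entire
# low-attention instruction), reserve a share of the budget per segment.
import numpy as np

def evict_kv(attn_scores, segment_ids, budget):
    """Pick `budget` token indices to KEEP in the KV cache.

    attn_scores : (seq_len,) accumulated attention each token received
    segment_ids : (seq_len,) which instruction segment each token belongs to
    budget      : total number of KV entries to keep
    """
    segments = np.unique(segment_ids)
    per_seg = budget // len(segments)  # equal share per instruction
    keep = []
    for seg in segments:
        idx = np.where(segment_ids == seg)[0]
        # Keep the top-scoring tokens within this segment.
        keep.extend(idx[np.argsort(attn_scores[idx])[-per_seg:]].tolist())
    # Spend any leftover budget on the globally highest-scoring remainder.
    kept = set(keep)
    rest = [i for i in np.argsort(attn_scores)[::-1] if i not in kept]
    keep.extend(rest[: budget - len(keep)])
    return sorted(keep)

# Toy usage: 2 instructions of 5 tokens each, keep 6 entries total.
scores = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.1, 0.1, 0.2, 0.1, 0.1])
segs   = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(evict_kv(scores, segs, budget=6))  # keeps tokens from BOTH segments
```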
@tjingrant
Tian Jin
2 months
Plan autoregressively, denoise in parallel!
@danielmisrael
Daniel Israel
2 months
"An hour of planning can save you 10 hours of doing." βœ¨πŸ“ Planned Diffusion πŸ“ ✨ makes a plan before parallel dLLM generation. Planned Diffusion runs 1.2-1.8Γ— faster than autoregressive and an order of magnitude faster than diffusion, while staying within 0.9–5% AR quality.
0
2
5
@ellieyhc
Ellie Cheng
2 months
Diffusion 🀝 Autoregressive Fast high-quality generation
@danielmisrael
Daniel Israel
2 months
"An hour of planning can save you 10 hours of doing." βœ¨πŸ“ Planned Diffusion πŸ“ ✨ makes a plan before parallel dLLM generation. Planned Diffusion runs 1.2-1.8Γ— faster than autoregressive and an order of magnitude faster than diffusion, while staying within 0.9–5% AR quality.
0
2
2
@danielmisrael
Daniel Israel
2 months
"An hour of planning can save you 10 hours of doing." βœ¨πŸ“ Planned Diffusion πŸ“ ✨ makes a plan before parallel dLLM generation. Planned Diffusion runs 1.2-1.8Γ— faster than autoregressive and an order of magnitude faster than diffusion, while staying within 0.9–5% AR quality.
7
47
318
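A rough sketch of the plan-then-denoise idea, under stated assumptions: `ar_model` and `dllm` below are hypothetical stand-ins for an autoregressive planner and a diffusion LM with a batch denoising API; the paper's actual planning format and interfaces may differ.

```python
# Hypothetical sketch: plan autoregressively, denoise in parallel.

def planned_diffusion(prompt, ar_model, dllm, steps=8):
    # 1. Plan autoregressively: a cheap AR pass emits an outline that
    #    splits the answer into short spans (e.g., one per section).
    plan = ar_model.generate(prompt + "\nOutline the answer as short spans:")
    spans = [s.strip() for s in plan.split("\n") if s.strip()]

    # 2. Denoise in parallel: the diffusion LM fills in every span
    #    simultaneously, since the plan made the spans (approximately)
    #    conditionally independent given the outline.
    drafts = dllm.denoise_batch(
        [f"{prompt}\nPlan: {plan}\nWrite span: {s}" for s in spans],
        num_steps=steps,  # far fewer sequential steps than AR decoding
    )

    # 3. Stitch the spans back together in plan order.
    return "\n".join(drafts)
```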
@danielmisrael
Daniel Israel
3 months
πŸ”¦Adaptive Parallel Decoding (APD) has been accepted as a spotlight paper at @NeurIPSConf ! I thank my collaborators, reviewers, and program organizers for this honor. A thread for those interested 🧡 (1/n)
12
23
176
@nesyconf
NeSy 2025
3 months
@e_giunchiglia @guyvdb How can Reverend Bayes help us incorporate constraints? With NeSy, of course 👀 With applications in non-toxic LLM generation and safe AI driving! @guyvdb
1
1
6
@nesyconf
NeSy 2025
3 months
@e_giunchiglia Now, @guyvdb is giving the opening keynote arguing why symbolic AI is still relevant in the age of LLMs... With the help of Shrek!
1
5
12
@nesyconf
NeSy 2025
3 months
@e_giunchiglia @guyvdb Behind all of these very nice methods is one central trick... Circuits! ➕✖️ These are tractable generative neural networks 😍
1
2
6
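To make the ➕✖️ remark concrete, here is a minimal, self-contained sketch of a probabilistic circuit: weighted sum nodes mix distributions and product nodes factorize over disjoint variables, so a single bottom-up pass computes an exact likelihood. This is a generic textbook construction, not code from the talk.

```python
# Minimal probabilistic circuit: sum (mixture) and product (factorization)
# nodes over Bernoulli leaves. Tractability = one bottom-up evaluation pass.

class Leaf:                      # P(X_i = x_i) for one binary variable
    def __init__(self, var, p_true):
        self.var, self.p = var, p_true
    def value(self, x):
        return self.p if x[self.var] else 1.0 - self.p

class Product:                   # multiplies children over disjoint scopes
    def __init__(self, *children):
        self.children = children
    def value(self, x):
        out = 1.0
        for c in self.children:
            out *= c.value(x)
        return out

class Sum:                       # convex mixture of children (same scope)
    def __init__(self, weights, children):
        self.weights, self.children = weights, children
    def value(self, x):
        return sum(w * c.value(x) for w, c in zip(self.weights, self.children))

# P(A, B) as a mixture of two fully factorized components.
circuit = Sum([0.3, 0.7], [
    Product(Leaf("A", 0.9), Leaf("B", 0.2)),
    Product(Leaf("A", 0.1), Leaf("B", 0.8)),
])
print(circuit.value({"A": True, "B": False}))  # exact likelihood in one pass
```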
@oliviawpy2023
Olivia Wang
3 months
Watch out @PyTorch πŸ‘€πŸ‘€ You got competition here. Awesome work and talk by @guyvdb
1
1
2
@nesyconf
NeSy 2025
3 months
It is almost time to welcome you all to Santa Cruz! 🦕 We will start with an exciting and timely keynote by @guyvdb on "Symbolic Reasoning in the Age of Large Language Models" 👀
1
7
39
@yidouweng
Gwen Yidou-Weng
5 months
Wish your LM could plan, not just guess the next word? TRACE lets the LM see all endings before each move.
– Global control at inference time
– Tractable lookahead via an HMM LM-proxy
– A linear classifier per constraint
Outperforms RL, DPO, and FUDGE at just +20% decoding overhead over the base LM. #ICML2025
10
1
7
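The lookahead trick can be pictured as follows. This is an illustrative reconstruction in the spirit of TRACE, not the authors' implementation: the base LM's next-token distribution is reweighted, Bayes-style, by an HMM proxy's estimate that the constraint will eventually be satisfied. The `hmm` object and the `sat_prob_per_state` classifier outputs are assumptions.

```python
# Hypothetical sketch: Bayesian constrained decoding with HMM lookahead.
# Posterior over tokens ∝ LM prior × P(constraint will hold | prefix, token),
# where the lookahead term is computed cheaply from an HMM proxy of the LM.
import numpy as np

def constrained_step(lm_probs, hmm_state, hmm, sat_prob_per_state):
    """One decoding step.

    lm_probs           : (V,) base LM next-token distribution
    hmm_state          : (H,) current HMM belief over hidden states
    hmm.emission       : (H, V) P(token | hidden state)
    hmm.transition     : (H, H) P(next state | state)
    sat_prob_per_state : (H,) classifier estimate that the constraint
                         will hold if decoding continues from each state
    """
    V = lm_probs.shape[0]
    scores = np.empty(V)
    for v in range(V):
        # Belief update if token v were emitted next (HMM forward step).
        belief = (hmm_state * hmm.emission[:, v]) @ hmm.transition
        z = belief.sum()
        # Expected future constraint satisfaction under that belief.
        lookahead = (belief / z) @ sat_prob_per_state if z > 0 else 0.0
        # Bayes: combine LM prior with the lookahead likelihood.
        scores[v] = lm_probs[v] * lookahead
    return scores / scores.sum()
```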
@abeirami
Ahmad Beirami
5 months
Had the pleasure of learning about TRACE by Gwen Yidou-Weng, Benjie Wang, and @guyvdb at ICML! It views alignment/controlled decoding through a Bayesian lens and derives a simple, principled, and effective new method. I highly recommend reading this paper!
1
12
95
@nesyconf
NeSy 2025
5 months
🚨 First Call for Participation – NeSy 2025 πŸ“ Sept 8–10 | Santa Cruz, CA Join the longest-running conference on neurosymbolic AI! Our keynote speakers: @guyvdb , @tkipf , @dlmcguinness , @GaryMarcus More info πŸ‘‡
1
10
19
@IJCAIconf
IJCAIconf
7 months
Announcing the 2025 IJCAI Computers and Thought Award winner ✨Aditya Grover @adityagrover_, @InceptionAILabs @UCLA. Dr. Grover is honored for uniting deep generative models, representation learning & RL to advance scientific reasoning. Congratulations! https://t.co/Z3xESFizpi
3
13
71
@iScienceLuvr
Tanishq Mathew Abraham, Ph.D.
7 months
Accelerating Diffusion LLMs via Adaptive Parallel Decoding "We therefore introduce adaptive parallel decoding (APD), a novel method that dynamically adjusts the number of tokens sampled in parallel." "Notably, Dream with APD surpasses the speed of autoregressive Qwen 7B and…
7
18
166
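The core idea, dynamically adjusting how many tokens are sampled in parallel, might look roughly like this. A hypothetical sketch only: `dllm.marginals` is an assumed API returning per-position NumPy marginals, and APD's actual acceptance rule (which mixes in an auxiliary autoregressive model) is more sophisticated than the simple confidence threshold used here.

```python
# Hypothetical sketch: adaptive parallel decoding with a diffusion LM.
# Propose k tokens at once, accept the longest confident prefix, and
# grow or shrink the parallel window based on how much was accepted.

def adaptive_parallel_decode(dllm, prompt_ids, max_len=256, tau=0.9):
    out, k = list(prompt_ids), 4           # start with a window of 4 tokens
    while len(out) < max_len:
        probs = dllm.marginals(out, k)     # (k, V) next-k marginal dists
        toks = probs.argmax(axis=-1)       # greedy proposal per position
        conf = probs.max(axis=-1)          # confidence per position
        # Accept the longest confident prefix of the k proposals.
        n = 0
        while n < k and conf[n] >= tau:
            n += 1
        n = max(n, 1)                      # always make progress
        out.extend(toks[:n].tolist())
        # Adapt the window: widen after full acceptance, narrow otherwise.
        k = min(k * 2, 16) if n == k else max(n, 1)
    return out
```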