Victor Veitch 🔸

@victorveitch

Followers: 4K · Following: 17K · Media: 29 · Statuses: 2K

AI | University of Chicago / Google DeepMind

Chicago, IL
Joined May 2013
@victorveitch
Victor Veitch 🔸
6 months
Semantics in language is naturally hierarchical, but attempts to interpret LLMs often ignore this. Turns out: baking semantic hierarchy into sparse autoencoders can give big jumps in interpretability and efficiency. Thread + bonus musings on the value of SAEs:
11
64
304
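For readers who want the shape of the idea in code, here is a minimal sketch (my illustration, not the thread's actual method): a sparse autoencoder whose latents are split into coarse "parent" features and fine-grained "child" features, with a child allowed to fire only when its parent does. The two-level gating scheme below is an assumed stand-in for "baking semantic hierarchy into the SAE."

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalSAE(nn.Module):
    """Toy sparse autoencoder with a hypothetical two-level latent hierarchy.

    Each of `n_parents` coarse features owns `n_children` fine-grained
    features; child activations are gated by their parent's activation,
    one simple way to express "a child concept only fires if its parent does".
    """

    def __init__(self, d_model: int, n_parents: int, n_children: int):
        super().__init__()
        self.n_parents, self.n_children = n_parents, n_children
        n_latents = n_parents * (1 + n_children)
        self.encoder = nn.Linear(d_model, n_latents)
        self.decoder = nn.Linear(n_latents, d_model, bias=False)

    def forward(self, x: torch.Tensor):
        z = F.relu(self.encoder(x))                          # [batch, n_latents]
        parents, children = torch.split(
            z, [self.n_parents, self.n_parents * self.n_children], dim=-1)
        # Child latent i*n_children + j belongs to parent i by convention.
        gate = (parents > 0).float().repeat_interleave(self.n_children, dim=-1)
        children = children * gate                           # child off if parent off
        z = torch.cat([parents, children], dim=-1)
        x_hat = self.decoder(z)
        sparsity = z.abs().sum(dim=-1).mean()                # L1 penalty term
        return x_hat, sparsity
```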
@PeterMoskos
Peter Moskos
3 days
I'll just keep repeating this fact because it's un-fucking-believable: the 130 Chicago Police officers assigned to the Chicago Transit Authority (CTA) are a smaller force than the Mayor's police security detail.
@PaulVallas
Paul Vallas
3 days
Is the lighting of a woman on fire the wake-up call the CTA needs to build a transit safety unit with enough Chicago cops to ensure public transportation is safe? The current 130 CPD officers assigned to the CTA are no more than the Mayor's security detail. The combined money
49
646
3K
@DSI_UChicago
Data Science Institute
2 months
Job opportunities alert. Associate Professor of Data Science: https://t.co/K2jS5EiVJw. Assistant Professor of Data Science: https://t.co/i9tUXFhYm8. EOE/Vet/Disability.
0
2
1
@chrome1996
Chenghao Yang
1 month
Where is exploration most impactful in LLM reasoning? The initial tokens! They shape a sequence's entire semantic direction, making early exploration crucial. Our new work, Exploratory Annealed Decoding (EAD), is built on this insight. By starting with high temperature and
4
19
93
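The mechanism the tweet describes, sketched under assumptions: sample the first tokens at high temperature to explore semantic directions, then anneal the temperature down as generation proceeds. The linear schedule and the HuggingFace-style model/tokenizer interface below are illustrative choices, not the EAD paper's implementation.

```python
import torch

def exploratory_annealed_decode(model, tokenizer, prompt, max_new_tokens=64,
                                temp_start=1.5, temp_end=0.7, anneal_steps=20):
    """Token-by-token sampling where temperature decays over positions.

    Assumes a HuggingFace-style causal LM; the linear annealing schedule is
    an illustrative choice, not necessarily the one used in the EAD paper.
    """
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    for step in range(max_new_tokens):
        # Linearly anneal temperature over the first `anneal_steps` tokens.
        frac = min(step / anneal_steps, 1.0)
        temp = temp_start + frac * (temp_end - temp_start)
        logits = model(ids).logits[:, -1, :] / temp
        next_id = torch.multinomial(torch.softmax(logits, dim=-1), num_samples=1)
        ids = torch.cat([ids, next_id], dim=-1)
        if next_id.item() == tokenizer.eos_token_id:
            break
    return tokenizer.decode(ids[0], skip_special_tokens=True)
```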
@vbingliu
Bing Liu
2 months
New @Scale_AI paper! The culprit behind reward hacking? We trace it to misspecification in the high-reward tail. Our fix: rubric-based rewards to tell “excellent” responses apart from “great” ones. The result: less hacking, stronger post-training! https://t.co/D6aJkZ8zZE
4
39
179
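A rough sketch of what a rubric-based reward could look like, under my assumption (not the paper's spec) that each rubric criterion is judged separately by an LLM judge and the scores are combined by a weighted sum; the criteria, weights, and `judge` callable below are placeholders.

```python
from dataclasses import dataclass

@dataclass
class RubricItem:
    name: str
    description: str
    weight: float

# Illustrative placeholder rubric; the paper's actual criteria will differ.
RUBRIC = [
    RubricItem("correctness", "All factual claims are accurate.", 0.5),
    RubricItem("completeness", "Covers every part of the request.", 0.3),
    RubricItem("clarity", "Well organized and easy to follow.", 0.2),
]

def rubric_reward(judge, prompt: str, response: str) -> float:
    """Score a response criterion-by-criterion, then take a weighted sum.

    `judge` is assumed to be a callable returning a float in [0, 1] for one
    criterion. Scoring criteria separately is meant to separate "excellent"
    responses from merely "great" ones at the top of the reward scale.
    """
    total = 0.0
    for item in RUBRIC:
        score = judge(
            f"Prompt: {prompt}\nResponse: {response}\n"
            f"Criterion ({item.name}): {item.description}\n"
            "Return a score between 0 and 1."
        )
        total += item.weight * score
    return total
```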
@thinkymachines
Thinking Machines
2 months
Efficient training of neural networks is difficult. Our second Connectionism post introduces Modular Manifolds, a theoretical step toward more stable and performant training by co-designing neural net optimizers with manifold constraints on weight matrices.
118
463
3K
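As a generic illustration of manifold-constrained training (not the Modular Manifolds construction itself): take an ordinary gradient step, then project each weight matrix back onto a manifold, here the Stiefel manifold of matrices with orthonormal columns.

```python
import torch

@torch.no_grad()
def project_to_stiefel(W: torch.Tensor) -> torch.Tensor:
    """Project a weight matrix onto the Stiefel manifold (orthonormal columns)
    by taking the orthogonal polar factor from its SVD."""
    U, _, Vh = torch.linalg.svd(W, full_matrices=False)
    return U @ Vh

@torch.no_grad()
def manifold_sgd_step(params, lr=1e-2):
    """Plain SGD followed by projection back onto the manifold.

    This project-after-step scheme is a simplified stand-in for manifold-
    constrained training in general, not the optimizer from the post.
    """
    for W in params:
        if W.grad is None:
            continue
        W -= lr * W.grad
        if W.ndim == 2:                      # only constrain weight matrices
            W.copy_(project_to_stiefel(W))
```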
@JonHaidt
Jonathan Haidt
2 months
Cancel culture is terrible. I have opposed it publicly for more than a decade. It is even more chilling, and more clearly a violation of the First Amendment, when it is the government doing the intimidation.
@CatoInstitute
Cato Institute
2 months
“It is difficult to imagine a more ominous and more horrendous violation of the basic principle of free speech.” Cato’s legal expert @ConLawWarrior comments on ABC’s suspension of Jimmy Kimmel’s show after FCC pressure.
496
293
2K
@JustinGrimmer
Justin Grimmer
2 months
Using actually comparable questions over time from @PRL_Tweets there has been no change in support for political violence. Even if that weren't true, @ProfessorPape has no evidence (or research design) to connect any changes in opinion to the horrendous acts of violence (in
@ProfessorPape
Robert A. Pape
2 months
On CNN Smerconish, explore why America is heading deeper into the era of violent populism:
7
60
319
@KelseyTuoc
Kelsey Piper
2 months
@CharlesFLehman I want to answer this, because the dark abundance stuff leaves me cold even though I agree with some of the policy prescriptions. Our prisons are terrible, terrible places where there is immense and unnecessary human suffering. In prison people are subject to random violence
39
70
793
@eigen_moomin
eigen moomin
3 months
regular baptisms aren't idempotent so the church had to create a composition-safe monadic baptism
89
535
7K
@MartinVGould
Martin Gould
3 months
The neglectedness of addressing factory farming is pretty surprising, when you consider the scale of the issue and the good that can be achieved
@dwarkesh_sp
Dwarkesh Patel
4 months
Honestly the thing that motivated me to do this episode was learning that there's less than $200M/year of smart philanthropy on factory farming - GLOBALLY. Just to explain how fucking crazy that is: 1. It's insane how cheap the interventions that will spare BILLIONS of animals
2
13
123
@dwarkesh_sp
Dwarkesh Patel
4 months
Before I interviewed @Lewis_Bollard, I had assumed that factory farming was on its way out (especially given new tech like cultivated meat around the corner). Unfortunately this is far from inevitable: factory farms are already incredibly efficient machines for making meat (the
60
97
1K
@dwarkesh_sp
Dwarkesh Patel
4 months
Just $1 can help avert 10 years of farmed animal suffering. I decided to give $250,000 as a donation match to @farmkind_giving after learning about the outsized opportunities to help. FarmKind directs your contributions to the most effective charities in this area. Please
@dwarkesh_sp
Dwarkesh Patel
4 months
New episode w @Lewis_Bollard - a deep dive on the surprising economics of the meat industry. 0:00:00 – The astonishing efficiency of factory farming 0:07:18 – It was a mistake making this about diet 0:09:54 – Tech that’s sparing 100s of millions of animals/year 0:16:16 –
80
166
1K
@2prime_PKU
Yiping Lu
4 months
Anyone knows adam?
267
448
5K
@unireps
UniReps
5 months
🎥 The recording of the third ELLISxUniReps Speaker Series session with @victorveitch and @luigigres is now available at: https://t.co/rrLcaiBRQ6. Next appointment: 31st July 2025 – 16:00 CEST on Zoom with 🔵Keynote: @Pseudomanifold (University of Fribourg) 🔴@FlorentinGuth
@unireps
UniReps
5 months
🎉 Get ready for our 3rd @ELLISforEurope × UniReps Speaker Series session! 🔥 🗓️ When: 8th July, 2025 – 16:00 CEST 📍 Where: https://t.co/iHc93nIiTw 🎙️ Speakers: Keynote by @victorveitch & Flash Talk by @luigigres 👉 Stay updated by joining our Google group:
0
12
22
@jxbz
Jeremy Bernstein
4 months
Laker and I are presenting this work in an hour at ICML poster E-2103. It’s on a theoretical framework and language (modula) for optimizers that are fast (like Shampoo) and scalable (like muP). You can think of modula as Muon extended to general layer types and network topologies
3
21
200
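A sketch of the Muon-style building block that modula generalizes: orthogonalize each weight matrix's momentum before applying the update. The cubic Newton-Schulz iteration below is the textbook version; Muon itself uses a tuned higher-order polynomial, and the step/momentum constants here are illustrative, so treat this as a sketch rather than the algorithm.

```python
import torch

def newton_schulz_orthogonalize(G: torch.Tensor, steps: int = 5) -> torch.Tensor:
    """Approximately replace G by its nearest orthogonal factor (U V^T of its SVD).

    Classical cubic Newton-Schulz iteration; valid once the input is scaled so
    its singular values lie in (0, 1].
    """
    X = G / (G.norm() + 1e-7)            # scale so singular values are <= 1
    for _ in range(steps):
        X = 1.5 * X - 0.5 * X @ X.T @ X  # push singular values toward 1
    return X

@torch.no_grad()
def muon_like_step(W: torch.Tensor, momentum: torch.Tensor, grad: torch.Tensor,
                   lr: float = 0.02, beta: float = 0.95) -> None:
    """One Muon-flavored update: accumulate momentum, orthogonalize, step."""
    momentum.mul_(beta).add_(grad)
    W.sub_(lr * newton_schulz_orthogonalize(momentum))
```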
@victorveitch
Victor Veitch 🔸
4 months
This is excellent news
@AndrewDesiderio
Andrew Desiderio
4 months
The PEPFAR cut is being removed from the rescissions package, per Sen. Schmitt. Reduces the size of the overall cut by $400 million.
0
0
4
@abeirami
Ahmad Beirami
4 months
Happening now. Come tilt your loss at poster w-907 w/ @gh_aminian and @litian0331
@abeirami
Ahmad Beirami
5 months
[Tue Jul 15] @gh_aminian & @litian0331 present theoretical results on generalization & robustness of tilted empirical risk minimization (TERM). We had previously proposed TERM as a simple technique to explore fairness and robustness in ML applications. https://t.co/xaRS21AlLE
2
4
32
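For reference, the tilted objective replaces the average loss with a log-sum-exp "tilt" of the per-example losses: positive tilt emphasizes the hardest examples, negative tilt suppresses outliers. A minimal sketch:

```python
import math
import torch

def tilted_loss(per_example_losses: torch.Tensor, t: float) -> torch.Tensor:
    """Tilted empirical risk: (1/t) * log( mean( exp(t * loss_i) ) ).

    t > 0 up-weights the hardest examples (tail robustness / fairness);
    t < 0 down-weights outliers; t -> 0 recovers the ordinary average loss.
    """
    if abs(t) < 1e-8:
        return per_example_losses.mean()
    n = per_example_losses.numel()
    # Computed via logsumexp for numerical stability.
    return (torch.logsumexp(t * per_example_losses, dim=0) - math.log(n)) / t
```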
@victorveitch
Victor Veitch 🔸
4 months
Come chat about how to use counterfactual text generation to do precision model evaluation :)
@davidpreber
David Reber
4 months
Excited to present our work on LLM-assisted explainability at #ICML2025! 🖼️ Poster: Wednesday, 11:00am–1:30pm (#E-2902) 📄 https://t.co/8YVJksPjqi w/ @seanrson @toddknife @ggarbacea @victorveitch If you're using LLMs to generate counterfactual pairs, rewrite twice—not once!
0
0
7
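One plausible reading of "rewrite twice" (stated as my assumption, not the paper's recipe): pass both the original and the counterfactual through the rewriting LLM, so the rewriter's stylistic fingerprint appears on both sides of the pair and cancels out of the evaluation. The helper name and prompts below are hypothetical.

```python
def make_counterfactual_pair(llm, text: str, attribute: str, target_value: str):
    """Build a (control, counterfactual) pair where BOTH sides are LLM rewrites.

    Rewriting the original once with "change nothing" and once with the
    attribute flipped means any stylistic artifact of the rewriter shows up
    on both sides, so the downstream comparison isolates the attribute itself.
    This is one plausible reading of "rewrite twice", offered as an assumption.
    """
    control = llm(
        f"Rewrite the following text without changing its meaning:\n{text}"
    )
    counterfactual = llm(
        f"Rewrite the following text so that its {attribute} is {target_value}, "
        f"changing nothing else:\n{text}"
    )
    return control, counterfactual
```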
@victorveitch
Victor Veitch 🔸
4 months
I'll be at ICML starting Wednesday am. Reach out if you'd like to chat :)
0
0
12
@KartikAhuja1
Kartik Ahuja
4 months
This work delivers on both theory and practice—offering the sharpest provable compositionality guarantees I know of, alongside state‑of‑the‑art performance on tough compositional distribution‑shift benchmarks.
@divyat09
Divyat Mahajan
4 months
Presenting CRM at #ICML2025 📌 Wednesday, 16th July, 11 am 📍East Exhibition Hall A-B (E-2101) Let's chat about distribution shifts! Been deep into causality & invariance-based perspectives, and recently exploring robust LLM pretraining architectures.
0
4
22