Sanyam Kapoor

@psiyumm

Followers 580 · Following 5K · Media 112 · Statuses 2K

I do normal science.

New York, NY
Joined September 2009
@micahgoldblum
Micah Goldblum
1 year
If we want to use LLMs for decision making, we need to know how confident they are about their predictions. LLMs don’t output meaningful probabilities off-the-shelf, so here’s how to do it 🧵 Paper: https://t.co/1F1B5XhgQO Thanks @psiyumm and @gruver_nate for leading the charge!
2 replies · 20 reposts · 115 likes
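The thread above is about extracting meaningful confidences from LLMs. As a minimal, hypothetical sketch (not necessarily the paper's method), one standard post-hoc fix is temperature scaling of the output logits:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def temperature_scale(logits, T):
    """Post-hoc recalibration: divide logits by a temperature T.
    T > 1 softens overconfident predictions; T = 1 is a no-op."""
    return softmax(logits / T)

logits = np.array([[4.0, 1.0, 0.0]])
raw = temperature_scale(logits, 1.0)     # original confidences
cooled = temperature_scale(logits, 2.0)  # less peaked distribution
```

In practice T would be fit on a held-out set to minimize negative log-likelihood; the function names here are illustrative.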
@psiyumm
Sanyam Kapoor
2 years
Most likely functions and most likely parameters that describe the data may differ. How much does this matter? Read on to learn more in our new #NeurIPS2023 paper!
@timrudner
Tim G. J. Rudner
2 years
When training machine learning models, should we learn most likely parameters—or most likely functions? We investigate this question in our #NeurIPS2023 paper and made some fascinating observations!🚀 Paper: https://t.co/rOv0Aeqbe6 w/ @ShikaiQiu @psiyumm @andrewgwils 🧵1/10
0 replies · 2 reposts · 10 likes
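A one-equation summary of the distinction these tweets draw, under the usual Bayesian setup (notation assumed here, not taken from the tweet):

```latex
% Parameter-space MAP vs. function-space MAP for a model f_\theta with prior p(\theta):
\hat{\theta}_{\mathrm{MAP}} = \arg\max_{\theta} \; p(\mathcal{D} \mid \theta)\, p(\theta),
\qquad
\hat{f}_{\mathrm{MAP}} = \arg\max_{f} \; p(\mathcal{D} \mid f)\, p(f).
% The two need not coincide: pushing p(\theta) forward through \theta \mapsto f_\theta
% introduces a change-of-variables (Jacobian) term, so the most likely parameters
% need not induce the most likely function.
```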
@Pavel_Izmailov
Pavel Izmailov
2 years
📢 I am recruiting Ph.D. students for my new lab at @nyuniversity! Please apply if you want to work on understanding deep learning and large models, and do a Ph.D. in the most exciting city on Earth. Details on my website: https://t.co/0F1fRAL2Pe. Please spread the word!
30 replies · 178 reposts · 869 likes
@andrewgwils
Andrew Gordon Wilson
2 years
LLMs aren't just next-word predictors, they are also compelling zero-shot time series forecasters! Our new NeurIPS paper: https://t.co/dBNDlrTNNp w/ @gruver_nate, @m_finzi, @ShikaiQiu 1/7
16 replies · 101 reposts · 522 likes
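The zero-shot forecasting idea rests on serializing a numeric series as text so a language model can simply continue it. A simplified sketch loosely in that spirit (the exact tokenization in the paper differs; this version assumes nonnegative values and fixed precision):

```python
def encode_series(xs, prec=2):
    """Render a numeric series as digit-separated text for a language model.
    Each value is fixed to `prec` decimals, the point dropped, digits spaced."""
    return " , ".join(" ".join(f"{x:.{prec}f}".replace(".", "")) for x in xs)

def decode_series(text, prec=2):
    """Invert encode_series: re-insert the decimal point prec digits from the end."""
    out = []
    for chunk in text.split(" , "):
        digits = chunk.replace(" ", "")
        out.append(float(digits[:-prec] + "." + digits[-prec:]))
    return out
```

With this encoding, "1 2 3 , 4 5 6 0" round-trips back to [1.23, 45.6], and a forecast is obtained by sampling a continuation of the encoded string and decoding it.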
@andrewgwils
Andrew Gordon Wilson
2 years
We're ecstatic to officially announce our new library, CoLA! CoLA is a framework for large-scale linear algebra in machine learning and beyond, supporting PyTorch and JAX. repo: https://t.co/UlNPbA8S8U paper: https://t.co/uDwdNkCf96 w/ amazing @m_finzi, Andres Potap, Geoff Pleiss
9 replies · 146 reposts · 673 likes
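The core trick in structure-aware linear algebra libraries of this kind is to represent a matrix only through its action (a matvec) and use iterative solvers, so an n × n solve never materializes n² entries. A toy illustration of that idea (this is not CoLA's actual API; class and function names are made up):

```python
import numpy as np

class DiagonalOperator:
    """Matrix-free linear operator: stores O(n) entries instead of n x n."""
    def __init__(self, diag):
        self.diag = diag
    def matvec(self, v):
        return self.diag * v

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A using only matvecs."""
    x = np.zeros_like(b)
    r = b - A.matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A.matvec(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

Dispatching to structure-specific rules like this (diagonal, low-rank, Kronecker, ...) is what makes large-scale solves and log-determinants tractable.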
@psiyumm
Sanyam Kapoor
3 years
🚨 Come join us at our poster “On Uncertainty, Tempering, and Data Augmentation in Bayesian Classification” at #NeurIPS2022 today (Dec 1) w/ Wesley, @Pavel_Izmailov @andrewgwils 11am-1pm Hall J #715 🚨 https://t.co/aFCtyk8nRl; (Paper: https://t.co/JI5Jshu6Df)
@Pavel_Izmailov
Pavel Izmailov
4 years
We explore how to represent aleatoric (irreducible) uncertainty in Bayesian classification, with profound implications for performance, data augmentation, and cold posteriors in BDL. https://t.co/Khv3F764By w/@snymkpr, W. Maddox, @andrewgwils 🧵 1/16
1 reply · 11 reposts · 45 likes
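The "cold posteriors" mentioned here refer to raising the likelihood to a power 1/T in the posterior. A minimal illustrative objective, with hypothetical names and not the paper's exact formulation:

```python
import numpy as np

def log_softmax(z):
    """Stable log-softmax for a single logit vector."""
    z = z - z.max()
    return z - np.log(np.exp(z).sum())

def tempered_objective(logits, label, theta, T=1.0, prior_var=1.0):
    """Negative log of a tempered posterior p(theta|D) ~ p(D|theta)^(1/T) p(theta).
    T < 1 gives a 'cold' posterior: the likelihood is up-weighted vs. the prior."""
    nll = -log_softmax(logits)[label]
    neg_log_prior = 0.5 * np.sum(theta ** 2) / prior_var
    return nll / T + neg_log_prior
```

The thread's point is that how aleatoric uncertainty is modeled (and how augmentation inflates data counts) interacts with this temperature, rather than tempering being a fix in itself.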
@LotfiSanae
Sanae Lotfi
3 years
I'm so proud that our paper on the marginal likelihood won the Outstanding Paper Award at #ICML2022!!! Congratulations to my amazing co-authors @Pavel_Izmailov, @g_benton_, @micahgoldblum, @andrewgwils 🎉 Talk on Thursday, 2:10 pm, room 310 Poster 828 on Thursday, 6-8 pm, hall E
@andrewgwils
Andrew Gordon Wilson
3 years
I'm happy that this paper will appear as a long oral at #ICML2022! It's the culmination of more than a decade of thinking about when the marginal likelihood does and doesn't make sense for model selection and hyper learning, and why. It was also a great collaborative effort.
13 replies · 30 reposts · 312 likes
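For concreteness, the marginal likelihood being discussed is computable in closed form for Bayesian linear regression, a textbook identity (not code from the paper):

```python
import numpy as np

def log_marginal_likelihood(X, y, alpha, sigma2):
    """log p(y | X) for y = X w + eps with w ~ N(0, alpha I), eps ~ N(0, sigma2 I).
    Integrating out w gives y ~ N(0, alpha X X^T + sigma2 I)."""
    n = len(y)
    K = alpha * X @ X.T + sigma2 * np.eye(n)
    sign, logdet = np.linalg.slogdet(K)
    quad = y @ np.linalg.solve(K, y)
    return -0.5 * (quad + logdet + n * np.log(2 * np.pi))
```

The paper's "pitfalls" concern exactly when maximizing a quantity like this is, and is not, a sound criterion for model selection and hyperparameter learning.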
@andrewgwils
Andrew Gordon Wilson
3 years
Pre-Train Your Loss: Easy Bayesian Transfer Learning with Informative Priors. https://t.co/cglYGiLNeM w/@ziv_ravid, @micahgoldblum, @HosseinSouri8, @snymkpr, @Eiri1114, @ylecun 1/6
4 replies · 69 reposts · 340 likes
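The idea of an informative prior for transfer can be sketched as a Gaussian prior centered at the pre-trained weights rather than at zero. This is a deliberately simplified isotropic version (the paper builds richer priors; names here are hypothetical):

```python
import numpy as np

def transfer_map_objective(task_nll, theta, theta_pretrained, prior_var):
    """MAP objective with a Gaussian prior centered at pre-trained weights,
    replacing the usual zero-centered weight decay. Isotropic simplification."""
    neg_log_prior = 0.5 * np.sum((theta - theta_pretrained) ** 2) / prior_var
    return task_nll + neg_log_prior
```

Ordinary weight decay corresponds to `theta_pretrained = 0`; centering the prior at the source solution is what makes the pre-training informative for the downstream task.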
@psiyumm
Sanyam Kapoor
4 years
How do we compare between hypotheses that are entirely consistent with the observations? See what @LotfiSanae has to say! 📈
@MoroccoAI
MoroccoAI
4 years
Our next #MoroccoAI webinar will be taking place this Wednesday, the 27th of April! A webinar on 'The Promises and Pitfalls of the marginal likelihood', with Sanae LOTFI. Please take a minute to RSVP to receive event Zoom link, https://t.co/hKLcdTTR8d... #MoroccoAI #AI #morocco
0 replies · 1 repost · 7 likes
@polkirichenko
Polina Kirichenko
4 years
Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations. ERM learns multiple features that can be reweighted for SOTA on spurious correlations, reducing texture bias on ImageNet, & more! w/ @Pavel_Izmailov and @andrewgwils https://t.co/Z4oWb9HH71 1/11
13 replies · 72 reposts · 508 likes
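The recipe in this tweet is: keep the ERM feature extractor frozen and refit only the final linear layer on a small reweighting set (e.g. group-balanced data). A minimal sketch with a plain logistic-regression head (all names are illustrative, not the authors' code):

```python
import numpy as np

def retrain_last_layer(feats, labels, lr=0.5, steps=500):
    """Refit only a logistic-regression head on frozen features,
    leaving the feature extractor untouched."""
    w = np.zeros(feats.shape[1])
    b = 0.0
    n = len(labels)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # predicted probabilities
        g = p - labels                               # logistic-loss gradient signal
        w -= lr * feats.T @ g / n
        b -= lr * g.mean()
    return w, b
```

The point of the paper is that the frozen features already contain the core (non-spurious) signal, so this cheap reweighting of the head is enough to recover robustness.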
@SudarshanMukund
Mukund Sudarshan
4 years
New ICLR 2022 paper w/ @neiljethani @ianccovert @suinleelab and Rajesh Ranganath! Our ML interpretability method, FastSHAP, significantly speeds up Shapley value estimation by amortizing SHAP/KernelSHAP computations across a training dataset. [📜: https://t.co/IXFiYrKFES]
@neiljethani
Neil Jethani
4 years
Many people in XAI prefer SHAP, but SHAP can be very slow in practice. Our new ICLR 2022 paper addresses this problem by introducing FastSHAP, a new method to estimate Shapley values in a single forward pass using a learned explainer model https://t.co/mlVCf5Jiru 🧵⬇️
1 reply · 4 reposts · 5 likes
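To see why SHAP is slow and why amortization helps, here is the exact Shapley computation FastSHAP avoids: enumerating all 2^d coalitions, feasible only for tiny d (a standard textbook formula, not the paper's code):

```python
import math
import numpy as np
from itertools import combinations

def shapley_values(value_fn, d):
    """Exact Shapley values by enumerating every coalition of d players.
    Cost is exponential in d, which is what amortized explainers avoid."""
    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for r in range(d):
            for S in combinations(others, r):
                # Shapley kernel weight for a coalition of size r
                w = math.factorial(r) * math.factorial(d - r - 1) / math.factorial(d)
                phi[i] += w * (value_fn(set(S) | {i}) - value_fn(set(S)))
    return phi
```

FastSHAP instead trains an explainer network once so that attributions come from a single forward pass, rather than re-running this enumeration (or its sampled KernelSHAP approximation) per input.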
@gruver_nate
Nate Gruver
4 years
Contrary to expectations, energy conservation and symplecticity are not primarily responsible for the good performance of Hamiltonian neural networks! Our #ICLR2022 paper: https://t.co/pKRTuHxSMK with @m_finzi, @samscub, and @andrewgwils. 1/7
2 replies · 9 reposts · 34 likes
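For context on what a Hamiltonian neural network computes: given a (learned) scalar H(q, p), the dynamics follow Hamilton's equations. An illustrative sketch treating H as a black box and differentiating by central differences (the paper's models use autodiff; names are made up):

```python
import numpy as np

def hamiltonian_vector_field(H, q, p, eps=1e-5):
    """Hamilton's equations: dq/dt = dH/dp, dp/dt = -dH/dq,
    with derivatives of a black-box scalar H via central differences."""
    dHdq = (H(q + eps, p) - H(q - eps, p)) / (2 * eps)
    dHdp = (H(q, p + eps) - H(q, p - eps)) / (2 * eps)
    return dHdp, -dHdq
```

The tweet's finding is that the resulting energy conservation and symplectic structure are not the main source of these models' accuracy, contrary to the usual explanation.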
@Pavel_Izmailov
Pavel Izmailov
4 years
Very excited to give a talk at AABI tomorrow (Feb 1st) at 5PM GMT / 12PM ET! I will be talking about our recent work on HMC for Bayesian neural networks, cold posteriors, priors, approximate inference and BNNs under distribution shift. Please join!
@liyzhen2
yingzhen
4 years
Join us to discuss the latest advances in approximate inference and probabilistic models at AABI 2022 on Feb 1-2! Webinar registration: https://t.co/iyPtxxktKT We have an amazing line-up of speakers, panelists and papers👍 @vincefort @Tkaraletsos @s_mandt @ruqi_zhang
2 replies · 7 reposts · 41 likes
@gravity_levity
Brian Skinner
4 years
1/2 A Russian mathematician is hired by a math department in the US, and is assigned to teach Calculus 1. On the day before her first lecture, she asks a colleague: "what am I supposed to teach in this class?" The colleague says, "well, it's standard first-semester calculus...
10 replies · 103 reposts · 688 likes
@andrewgwils
Andrew Gordon Wilson
4 years
I heard a rumour there is this amazing Approximate Inference in Bayesian Deep Learning competition at #NeurIPS2021 tomorrow, starting at 1 pm ET. From what I understand, the winners will be revealing their solutions, and the link to join is https://t.co/dOviUr9Izo. 🤫
0 replies · 42 reposts · 187 likes
@psiyumm
Sanyam Kapoor
4 years
"It takes a long time to learn to live - by the time you learn your time is gone." - Paul R. Halmos
0 replies · 0 reposts · 0 likes
@g_benton_
Greg
4 years
We’re happy to share our new #NeurIPS2021 paper introducing Residual Pathway Priors (RPPs), which convert hard architectural constraints into soft inductive biases! https://t.co/bLM90QZIh6 Joint work with @m_finzi, @andrewgwils. 1/9
2 replies · 30 reposts · 139 likes