
Sanyam Kapoor
@psiyumm
Followers: 585 · Following: 5K · Media: 112 · Statuses: 2K
I do normal science.
New York, NY
Joined September 2009
RT @micahgoldblum: If we want to use LLMs for decision making, we need to know how confident they are about their predictions. LLMs don’t…
0 replies · 20 retweets · 0 likes
RT @ben_athi: Find us at burning man 2024 🔥. @gruver_nate @LotfiSanae @psiyumm @samuel_stanton_ @polkirichenko @Pavel_Izmailov @KuangYilun…
0 replies · 8 retweets · 0 likes
Most likely functions and most likely parameters that describe the data may differ. How much does this matter? Read on to learn more in our new #NeurIPS2023 paper!
When training machine learning models, should we learn the most likely parameters, or the most likely functions? We investigate this question in our #NeurIPS2023 paper and make some fascinating observations! 🚀 Paper: w/ @ShikaiQiu @psiyumm @andrewgwils. 🧵 1/10
0 replies · 2 retweets · 10 likes
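For readers skimming the thread, here is a minimal sketch of the distinction the two tweets above point at (notation is mine, not taken from the paper). A parameter-space MAP estimate maximizes the posterior over parameters, while a function-space MAP maximizes the pushforward posterior over the functions those parameters induce; because the change of variables introduces a Jacobian-style correction, the two maximizers can disagree:

\hat{\theta}_{\mathrm{MAP}} = \arg\max_{\theta}\; p(\theta \mid \mathcal{D}),
\qquad
\hat{f}_{\mathrm{MAP}} = \arg\max_{f}\; p(f \mid \mathcal{D}),
\quad\text{where}\quad
p(f \mid \mathcal{D}) = \int p(\theta \mid \mathcal{D})\, \delta\!\left(f - g_{\theta}\right)\, d\theta ,

with g_{\theta} denoting the function computed by a network with parameters \theta.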
RT @Pavel_Izmailov: 📢 I am recruiting Ph.D. students for my new lab at @nyuniversity! Please apply, if you want to work on understanding d…
0 replies · 177 retweets · 0 likes
RT @andrewgwils: LLMs aren't just next-word predictors, they are also compelling zero-shot time series forecasters! Our new NeurIPS paper:…
0 replies · 101 retweets · 0 likes
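The retweet above refers to treating forecasting as next-token prediction by serializing a numeric series as text. The snippet below is a hypothetical illustration of that serialization idea, with helper names of my own (encode_series, decode_value); it is not the code released with the paper.

# Hypothetical sketch: serialize a numeric series as a digit string so a
# language model can continue it, then decode the continuation back to numbers.

def encode_series(values, decimals=1):
    # Render each value to fixed precision, space-separate its digits so a
    # tokenizer sees them individually, and separate time steps with ' , '.
    tokens = []
    for v in values:
        digits = f"{abs(v):.{decimals}f}".replace(".", "")
        sign = "-" if v < 0 else ""
        tokens.append(sign + " ".join(digits))
    return " , ".join(tokens)

def decode_value(token, decimals=1):
    # Invert the encoding for a single time step.
    return float(token.replace(" ", "")) / (10 ** decimals)

prompt = encode_series([12.3, 13.1, 12.8, 14.0])
print(prompt)  # "1 2 3 , 1 3 1 , 1 2 8 , 1 4 0"
# A forecaster would send this prompt to an LLM and decode the generated
# continuation, one time step at a time, with decode_value.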
RT @andrewgwils: We're ecstatic to officially announce our new library, CoLA! CoLA is a framework for large-scale linear algebra in machine…
0 replies · 148 retweets · 0 likes
🚨 Come join us at our poster “On Uncertainty, Tempering, and Data Augmentation in Bayesian Classification” at #NeurIPS2022 today (Dec 1) w/ Wesley, @Pavel_Izmailov @andrewgwils 11am-1pm Hall J #715 🚨 (Paper:
We explore how to represent aleatoric (irreducible) uncertainty in Bayesian classification, with profound implications for performance, data augmentation, and cold posteriors in BDL. w/ @snymkpr, W. Maddox, @andrewgwils. 🧵 1/16
1 reply · 11 retweets · 45 likes
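For context on the cold-posterior discussion in this thread: a tempered posterior raises the likelihood to a power 1/T, and temperatures T < 1 ("cold") sharpen it. The formula below is one standard form of likelihood tempering, stated for reference rather than quoted from the paper; the paper's argument concerns how the modeling of aleatoric (label) noise, for example under data augmentation, interacts with this choice of T:

p_T(\theta \mid \mathcal{D}) \;\propto\; \left[ \prod_{i=1}^{n} p(y_i \mid x_i, \theta) \right]^{1/T} p(\theta).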
RT @LotfiSanae: I'm so proud that our paper on the marginal likelihood won the Outstanding Paper Award at #ICML2022!!! Congratulations to m…
0 replies · 30 retweets · 0 likes
RT @andrewgwils: Pre-Train Your Loss: Easy Bayesian Transfer Learning with Informative Priors. w/ @ziv_ravid, @mica…
0 replies · 70 retweets · 0 likes
How do we compare hypotheses that are entirely consistent with the observations? See what @LotfiSanae has to say! 📈
Our next #MoroccoAI webinar will be taking place this Wednesday, the 27th of April! A webinar on 'The Promises and Pitfalls of the Marginal Likelihood', with Sanae Lotfi. Please take a minute to RSVP to receive the event Zoom link. #MoroccoAI #AI #morocco
0 replies · 1 retweet · 7 likes
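The question in the tweet above is the classic Bayesian model-comparison setting: when several hypotheses fit the observed data equally well, the marginal likelihood (evidence) scores how well each one predicts the data on average over its prior, and the ratio of evidences gives the Bayes factor. Standard definitions, included here only as a reference for the webinar topic:

p(\mathcal{D} \mid \mathcal{M}) = \int p(\mathcal{D} \mid \theta, \mathcal{M})\, p(\theta \mid \mathcal{M})\, d\theta,
\qquad
\frac{p(\mathcal{M}_1 \mid \mathcal{D})}{p(\mathcal{M}_2 \mid \mathcal{D})}
= \frac{p(\mathcal{D} \mid \mathcal{M}_1)}{p(\mathcal{D} \mid \mathcal{M}_2)} \cdot \frac{p(\mathcal{M}_1)}{p(\mathcal{M}_2)}.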
RT @polkirichenko: Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations. ERM learns multiple features that can be r…
0 replies · 71 retweets · 0 likes
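The retweet above describes retraining only the last layer on held-out data where spurious groups are balanced. The sketch below illustrates that recipe under my own assumptions (precomputed backbone features, known group labels, scikit-learn for the linear head); it is not the authors' released code.

import numpy as np
from sklearn.linear_model import LogisticRegression

def balanced_indices(groups, seed=0):
    # Subsample every (class, spurious-attribute) group to the size of the smallest.
    rng = np.random.default_rng(seed)
    groups = np.asarray(groups)
    n = min(np.sum(groups == g) for g in np.unique(groups))
    picks = [rng.choice(np.where(groups == g)[0], size=n, replace=False)
             for g in np.unique(groups)]
    return np.concatenate(picks)

def retrain_last_layer(features, labels, groups):
    # features: (N, d) activations from the frozen backbone on a held-out split;
    # labels: (N,) class labels; groups: (N,) group ids (class x spurious attribute).
    idx = balanced_indices(groups)
    head = LogisticRegression(max_iter=1000)
    head.fit(np.asarray(features)[idx], np.asarray(labels)[idx])
    return head  # apply head.predict to frozen features of new inputs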
RT @SudarshanMukund: New ICLR 2022 paper w/ @neiljethani @ianccovert @suinleelab and Rajesh Ranganath! Our ML interpretability method, Fast…
0 replies · 4 retweets · 0 likes
RT @Pavel_Izmailov: We explore how to represent aleatoric (irreducible) uncertainty in Bayesian classification, with profound implications…
0 replies · 48 retweets · 0 likes
RT @gruver_nate: Contrary to expectations, energy conservation and symplecticity are not primarily responsible for the good performance of…
0 replies · 9 retweets · 0 likes
RT @Pavel_Izmailov: Very excited to give a talk at AABI tomorrow (Feb 1st) at 5PM GMT / 12PM ET! I will be talking about our recent work o…
0 replies · 7 retweets · 0 likes
RT @gravity_levity: 1/2. A Russian mathematician is hired by a math department in the US, and is assigned to teach Calculus 1. On the day be…
0 replies · 104 retweets · 0 likes
RT @andrewgwils: I heard a rumour there is this amazing Approximate Inference in Bayesian Deep Learning competition at #NeurIPS2021 tomorro…
0 replies · 40 retweets · 0 likes
RT @g_benton_: We're happy to share our new #NeurIPS2021 paper introducing Residual Pathway Priors (RPPs), which convert hard architectural…
0 replies · 30 retweets · 0 likes
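The final retweet is cut off, but the idea it names (Residual Pathway Priors) is to pair a constrained pathway with an unconstrained residual pathway and place a much tighter prior on the residual, so structure is preferred but not hard-wired. Below is a toy sketch of that idea under my own simplifications (a shared scalar stands in for the constrained/equivariant pathway, and the priors are expressed as L2 penalties); it is not the paper's implementation.

import torch
import torch.nn as nn

class ResidualPathwayLinear(nn.Module):
    # Toy residual-pathway layer: constrained pathway + free residual pathway,
    # with a far tighter Gaussian prior (stronger L2 penalty) on the residual.
    def __init__(self, dim, scale_constrained=1.0, scale_residual=1e-2):
        super().__init__()
        self.shared = nn.Parameter(torch.zeros(1))        # stand-in for a structured layer
        self.residual = nn.Linear(dim, dim, bias=False)   # unconstrained escape hatch
        self.scale_constrained = scale_constrained
        self.scale_residual = scale_residual

    def forward(self, x):
        return self.shared * x + self.residual(x)

    def prior_penalty(self):
        # Gaussian priors with different scales become differently weighted L2 terms;
        # the small residual scale discourages use of the free pathway.
        return (self.shared.pow(2).sum() / self.scale_constrained ** 2
                + self.residual.weight.pow(2).sum() / self.scale_residual ** 2)

Adding prior_penalty() to the training loss means that when the data truly respect the constraint, the residual weights stay near zero.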