
Anshuk Uppal (@sigmabayesian)
Followers: 272 · Following: 11K · Media: 11 · Statuses: 431
Intern @MSFTResearch. PhD student @DTUtweet. Probabilistic ML 🧠 diffusion and sampling 🧠. Previously intern @SonyAI_global, visitor @NYU_Courant.
London, England · Joined March 2012
RT @YuanqiD: Lucky to be part of this incredible piece with a summary of progress on many hot AI for Science areas!
RT @fedebergamin: In an hour, François and I are presenting at ICML our paper on crystalline material generation using diffusion models, wh…
RT @RickyTQChen: This new work generalizes the recent Adjoint Sampling approach from Stochastic Control to Schrödinger Bridges, enabling me…
RT @polynoamial: I'm fortunate to be able to devote my career to researching AI and building reasoning models like o3 for the world to use…
RT @FlorentinGuth: What is the probability of an image? What do the highest and lowest probability images look like? Do natural images lie…
RT @roydanroy: We REALLY REALLY need a "Findings" for NeurIPS, ICLR, and ICML. 25,000 submissions at this year's NeurIPS represents extreme…
RT @YizhouLiu0: Superposition means that models represent more features than dimensions they have, which is true for LLMs since there are t…
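The superposition claim in the retweet above is easy to check numerically. Below is a minimal sketch (my own illustration, not code from @YizhouLiu0's work): pack many more unit-norm "feature" directions than dimensions into a vector space, then observe that reading out the one active feature gives a strong signal while interference from the other features stays small, roughly on the order of 1/√d.

```python
# Minimal superposition sketch (illustrative only): 512 features stored
# in a 64-dimensional space as nearly-orthogonal unit directions.
import numpy as np

rng = np.random.default_rng(0)
d, n = 64, 512                                  # n features >> d dimensions
W = rng.normal(size=(n, d))
W /= np.linalg.norm(W, axis=1, keepdims=True)   # unit-norm feature directions

x = W[7]                                        # activate feature 7 alone
readout = W @ x                                 # project onto every feature
print(readout[7])                               # 1.0: active feature recovered
print(np.abs(np.delete(readout, 7)).mean())     # small cross-talk, ~1/sqrt(d)
```

The point is the tweet's: as long as features activate sparsely, a model can tolerate the small cross-talk and effectively represent far more features than it has dimensions.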
RT @MolSS_Group: We’re thrilled to announce the launch of the MolSS Reading Group! 🚀 🔬 MolSS = Machine Learning for Molecular Simulations a…
RT @SuryaGanguli: Many recent posts on free energy. Here is a summary from my class “Statistical mechanics of learning and computation” on…
RT @LevyAntoine: This is flying a bit under the radar. But in terms of damage to America’s innovation and knowledge supremacy, the chilli…
RT @yisongyue: One of my PhD students got their visa revoked. I know of other cases amongst my AI colleagues. This is not what investing…
RT @jesfrellsen: 🚨 As a NeurIPS 2025 Competitions Chair with @TaoQin and @kunkzhang, I want to highlight that the competition proposal dea…
RT @sirbayes: I'm happy to announce that v2 of my RL tutorial is now online. I added a new chapter on multi-agent RL, and improved the sect…
arxiv.org
This manuscript gives a big-picture, up-to-date overview of the field of (deep) reinforcement learning and sequential decision making, covering value-based methods, policy-gradient methods, ...
RT @HarshjitSethi: Magical launch event today by @SarvamAI! The company launched voice agents, open source models, Sarvam 2B, the first LLM…
RT @CalcCon: strongly recommended "Statistical physics, Bayesian inference and neural information processing"
arxiv.org
Lecture notes from the course given by Professor Sara A. Solla at the Les Houches summer school on "Statistical physics of Machine Learning". The notes discuss neural information processing...
RT @StatMLPapers: Theoretical Benefit and Limitation of Diffusion Language Model
arxiv.org
Diffusion language models have emerged as a promising approach for text generation. One would naturally expect this method to be an efficient replacement for autoregressive models since multiple...
RT @SarvamAI: We are very excited to launch Sarvam Fellows, our initiative to train the next generation of AI researchers. Through this pro…
Inference-time scaling can unlock so much performance!! It's so cool that with just two particles it's possible to outperform costly gradient-based fine-tuning 🤯. If you like SMC, don't miss this one!
Got a diffusion model? What if there were a way to:
- Get SOTA text-to-image prompt fidelity, with no extra training!
- Steer continuous and discrete (e.g. text) diffusions
- Beat larger models using less compute
- Outperform fine-tuning
- And keep your stats friends happy!?
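Since the tweet above leans on Sequential Monte Carlo, here is a toy sketch of what particle-based, inference-time steering of a diffusion sampler looks like. Everything named here is an assumption for illustration: `denoise_step` and `reward` are hypothetical stand-ins for a pretrained reverse-diffusion step and a prompt-fidelity score, and the reweight/resample loop is the generic bootstrap-filter pattern, not the quoted paper's exact method.

```python
# Toy SMC steering sketch (illustrative, not the paper's algorithm):
# run particles through a denoising loop, reweight them by a reward,
# and resample so compute concentrates on high-reward trajectories.
import numpy as np

rng = np.random.default_rng(0)

def denoise_step(x, t):
    # hypothetical stand-in for one reverse-diffusion step
    return 0.9 * x + 0.1 * rng.normal(size=x.shape)

def reward(x):
    # hypothetical stand-in for a prompt-fidelity / preference score
    return -np.sum((x - 1.0) ** 2, axis=-1)

n_particles, dim, n_steps = 2, 16, 50           # two particles, as in the tweet
x = rng.normal(size=(n_particles, dim))         # start from pure noise

for t in range(n_steps):
    x = denoise_step(x, t)
    logw = reward(x)                            # log-weights from the reward
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(n_particles, size=n_particles, p=w)
    x = x[idx]                                  # resample: clone good particles

best = x[np.argmax(reward(x))]                  # highest-reward sample
```

No gradients of the model are needed, which is what makes this kind of steering an inference-time alternative to fine-tuning.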