Majdi Hassan Profile
Majdi Hassan (@majdi_has)

Followers: 188 · Following: 1K · Media: 9 · Statuses: 73

PhD student @ Mila | ML for chemistry and biophysics

Joined April 2022
Majdi Hassan (@majdi_has) · 2 months
(1/n)🚨You can train a model solving DFT for any geometry almost without training data!🚨 Introducing Self-Refining Training for Amortized Density Functional Theory — a variational framework for learning a DFT solver that predicts the ground-state solutions for different
3 · 39 · 156
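To make the claim concrete, here is a minimal, hypothetical sketch of what "training a DFT solver almost without training data" can look like: a network maps a geometry to a vector of solution coefficients and is trained by minimizing a variational energy directly, so no precomputed ground-state labels are needed. The energy below is a toy quadratic surrogate, not a real DFT functional, and every name here (AmortizedSolver, toy_energy, the sizes) is an illustrative assumption rather than the paper's code.

```python
# Hedged sketch: amortized variational training with no labelled data.
# The "energy" is a toy stand-in for a real DFT functional.
import torch
import torch.nn as nn

N_ATOMS, N_COEFFS = 8, 16  # toy problem sizes (assumptions)

class AmortizedSolver(nn.Module):
    """Maps flattened atomic coordinates to solution coefficients."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 * N_ATOMS, 128), nn.SiLU(),
            nn.Linear(128, 128), nn.SiLU(),
            nn.Linear(128, N_COEFFS),
        )

    def forward(self, geometry):           # geometry: (batch, 3 * N_ATOMS)
        return self.net(geometry)          # coefficients: (batch, N_COEFFS)

def toy_energy(coeffs, geometry):
    """Stand-in for a variational energy E[phi; geometry]; lower is better."""
    # A geometry-dependent quadratic bowl whose minimum varies smoothly with the input.
    target = torch.tanh(geometry[:, :N_COEFFS])
    return ((coeffs - target) ** 2).sum(dim=-1).mean()

solver = AmortizedSolver()
opt = torch.optim.Adam(solver.parameters(), lr=1e-3)

for step in range(1000):
    geom = torch.randn(64, 3 * N_ATOMS)        # sample geometries on the fly
    energy = toy_energy(solver(geom), geom)    # variational objective, no labels
    opt.zero_grad(); energy.backward(); opt.step()
```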
Majdi Hassan (@majdi_has) · 9 days
RT @leoeleoleo1: Wrote up some notes providing an introduction to discrete diffusion models, going into the theory of time-inhomogeneous CT…
0 · 22 · 0
Majdi Hassan (@majdi_has) · 20 days
RT @martoskreto: AI4Mat is back for NeurIPS! time to crystallize those ideas and make a solid-state submission by august 22, 2025 💪. new th…
0 · 18 · 0
Majdi Hassan (@majdi_has) · 22 days
RT @lavoiems: 🧵 Everyone is chasing new diffusion models—but what about the representations they model from? We introduce Discrete Latent C…
0 · 45 · 0
Majdi Hassan (@majdi_has) · 26 days
RT @danyalrehman17: Wrapping up #ICML2025 on a high note — thrilled (and pleasantly surprised!) to win the Best Paper Award at @genbio_work…
0 · 11 · 0
Majdi Hassan (@majdi_has) · 28 days
RT @AlexanderTong7: Come check out SBG happening now! W-115 11-1:30 with @charliebtan @bose_joey Chen Lin @leonklein26 @mmbronstein http…
0 · 18 · 0
Majdi Hassan (@majdi_has) · 28 days
RT @martoskreto: we’re not kfc but come watch us cook with our feynman-kac correctors, 4:30 pm today (july 16) at @icmlconf poster session…
0 · 15 · 0
Majdi Hassan (@majdi_has) · 1 month
RT @bose_joey: 👋 I'm at #ICML2025 this week, presenting several papers throughout the week with my awesome collaborators! Please do reach…
0 · 7 · 0
Majdi Hassan (@majdi_has) · 1 month
RT @nmboffi: 🧵generative models are sweet, but navigating existing repositories can be overwhelming, particularly when starting a new resea…
0 · 21 · 0
Majdi Hassan (@majdi_has) · 1 month
RT @k_neklyudov: 1/ Where do Probabilistic Models, Sampling, Deep Learning, and Natural Sciences meet? 🤔 The workshop we’re organizing at #…
Link card: FPI Workshop (fpineurips.framer.website)
0 · 39 · 0
Majdi Hassan (@majdi_has) · 2 months
RT @k_neklyudov: (1/n) Sampling from the Boltzmann density better than Molecular Dynamics (MD)? It is possible with PITA 🫓 Progressive Infe…
0 · 56 · 0
Majdi Hassan (@majdi_has) · 2 months
RT @vdbergrianne: 🚀 After two+ years of intense research, we’re thrilled to introduce Skala — a scalable deep learning density functional t…
0 · 61 · 0
Majdi Hassan (@majdi_has) · 2 months
RT @AlexanderTong7: Check out FKCs! A principled flexible approach for diffusion sampling. I was surprised how well it scaled to high dimen…
0 · 5 · 0
Majdi Hassan (@majdi_has) · 2 months
RT @k_neklyudov: Why do we keep sampling from the same distribution the model was trained on? We rethink this old paradigm by introducing…
0 · 26 · 0
Majdi Hassan (@majdi_has) · 2 months
RT @martoskreto: 🧵(1/6) Delighted to share our @icmlconf 2025 spotlight paper: the Feynman-Kac Correctors (FKCs) in Diffusion. Picture this…
0 · 41 · 0
Majdi Hassan (@majdi_has) · 2 months
RT @JiajunHe614: [1/9]🚀Excited to share our new work, RNE! A plug-and-play framework for everything about diffusion model density and contr…
0 · 19 · 0
Majdi Hassan (@majdi_has) · 2 months
RT @Luke22R: 🚀 Our method, Poutine, was the best-performing entry in the 2025 Waymo Vision-based End-to-End Driving Challenge at #CVPR2025!…
0 · 10 · 0
Majdi Hassan (@majdi_has) · 2 months
RT @acceleration_c: We're spotlighting #WomenInSTEM and their inspiring journeys! Meet @martoskreto, Computer Science PhD student @UofT. V…
0 · 9 · 0
Majdi Hassan (@majdi_has) · 2 months
RT @emilianopp_: Excited that our paper "Addressing Concept Mislabeling in Concept Bottleneck Models Through Preference Optimization" was a…
0 · 22 · 0
Majdi Hassan (@majdi_has) · 2 months
(8/n)⚡ Runtime efficiency. Self-refining training reduces total runtime up to 4 times compared to the baseline and up to 2 times compared to the fully-supervised approach!!! Less need for large pre-generated datasets — training and sampling happen in parallel.
[image attached]
1 · 0 · 7
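The "training and sampling happen in parallel" point above can be pictured with the following hypothetical loop: the solver's current predictions on freshly sampled geometries are refined by a few gradient steps on a variational energy, the refined solutions are pushed into a replay buffer, and the solver is then fit to that growing buffer, so no large pre-generated dataset is required. Everything here (the toy energy, the refine routine, the buffer and batch sizes) is an assumed illustration, not the authors' implementation.

```python
# Hedged sketch of a self-refining loop: generate, refine, store, train.
import collections
import torch
import torch.nn as nn

DIM = 16  # toy dimensionality (assumption)
solver = nn.Sequential(nn.Linear(DIM, 64), nn.SiLU(), nn.Linear(64, DIM))
opt = torch.optim.Adam(solver.parameters(), lr=1e-3)
buffer = collections.deque(maxlen=4096)  # replay buffer of (geometry, refined solution)

def toy_energy(coeffs, geom):
    """Toy stand-in for a variational energy; lower is better."""
    return ((coeffs - torch.tanh(geom)) ** 2).sum(dim=-1)

def refine(coeffs, geom, n_steps=5, lr=0.1):
    """Improve the solver's guess with a few gradient steps on the energy."""
    coeffs = coeffs.detach().clone().requires_grad_(True)
    for _ in range(n_steps):
        (g,) = torch.autograd.grad(toy_energy(coeffs, geom).sum(), coeffs)
        coeffs = (coeffs - lr * g).detach().requires_grad_(True)
    return coeffs.detach()

for step in range(2000):
    # "Sampling": draw new geometries and self-refine the current predictions.
    geom = torch.randn(32, DIM)
    refined = refine(solver(geom), geom)
    buffer.extend(zip(geom, refined))          # dataset grows while training runs

    # "Training": fit the solver to its own refined outputs plus the energy term.
    batch = [buffer[int(i)] for i in torch.randint(len(buffer), (32,))]
    g_b = torch.stack([b[0] for b in batch])
    c_b = torch.stack([b[1] for b in batch])
    pred = solver(g_b)
    loss = ((pred - c_b) ** 2).mean() + toy_energy(pred, g_b).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```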