Prasanth Ganesan
@prash030
Followers: 323 · Following: 2K · Media: 24 · Statuses: 568
Scientist at @StanfordMed. Previously AI research fellow at NIH @nlm_lhc. Forbes 30 under 30. Signal processing and Machine learning. Views are my own.
California, USA
Joined June 2011
#AHA25 was the best! Got to present our work in @S_NarayanMD lab with Kelly Brennan, @Sabya_Bando @prash030 using large language models to detect VT recurrence in clinical notes and enable prediction of outcomes, towards precision pharmacotherapy in VT.
2 replies · 2 reposts · 6 likes
See here for the associated publication: Deep learning–based continuous QT monitoring (3DRECON QT) reconstructs 12-lead ECG data from a single-lead monitor to predict QT/QTc. https://t.co/IjSsjSXC8d
@davidouyang
@prash030
@kbrenn711
ahajournals.org
Background: Drug-induced QT prolongation after successful inpatient loading of class III antiarrhythmics may occur during routine outpatient care. Insertable cardiac monitors (ICMs) offer continuous...
0 replies · 4 reposts · 7 likes
#AHA25: Deep learning–based continuous QT monitoring (3DRECON QT) reconstructs 12-lead ECG data from a single-lead monitor to predict QT/QTc. It detects QT prolo... https://t.co/1daUjvE2ep
@Wanginnovate @mvperez92 @AlexanderPerino @davidouyang @prash030 @kbrenn711 @ajrogers_md
0 replies · 12 reposts · 20 likes
#ep_peeps #StanfordEP25 FRIDAY OCT 24TH 8A-4P PT Register to discuss latest #AF #VT #innovations via #AI & case #efficiency with @EPrystowskyonEP Isabelle Diesenhofer @DrBradleyKnight @james_y_zou @netta_doc @TinaBaykaner @Wanginnovate In person/stream https://t.co/X1ZZ3pXziS
0 replies · 2 reposts · 3 likes
Check out our state-of-the-art open-weights MedGemma multimodal model for making sense of longitudinal EHR data as well as medical text and medical imaging data in various modalities (radiology, dermatology, pathology, ophthalmology, etc.). See the blog post linked below! ⬇️
Introducing new models for research & development of health applications: MedGemma 27B Multimodal, for complex multimodal & longitudinal EHR interpretation, and MedSigLIP, a lightweight image & text encoder for classification, search, & related tasks. → https://t.co/I318jVmsYD
18 replies · 90 reposts · 634 likes
Flow Matching (FM) is one of the hottest ideas in generative AI - and it’s everywhere at #ICML2025. But what is it? And why is it so elegant? 🤔 This thread is an animated, intuitive intro into (Variational) Flow Matching - no dense math required. Let's dive in! 🧵👇
110 replies · 258 reposts · 2K likes
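The thread itself isn't reproduced in this feed, but the core idea it advertises can be stated in a few lines: conditional flow matching trains a network to regress the velocity of a path from noise to data. A minimal NumPy sketch, assuming the common straight-line (linear interpolation) path; function names and the toy data are illustrative, not taken from the thread:

```python
import numpy as np

rng = np.random.default_rng(0)

def fm_training_pairs(x0, x1, t):
    """Conditional flow matching with a linear path:
    x_t = (1 - t) * x0 + t * x1, so the regression target for the
    vector field at (x_t, t) is the constant velocity x1 - x0."""
    xt = (1 - t)[:, None] * x0 + t[:, None] * x1
    target = x1 - x0
    return xt, target

# Toy noise -> data pairs; a network v_theta(x_t, t) would be trained
# to minimize the MSE between its output and `target`.
x0 = rng.normal(size=(4, 2))            # samples from the prior
x1 = rng.normal(loc=3.0, size=(4, 2))   # samples from the "data"
t = rng.uniform(size=4)
xt, target = fm_training_pairs(x0, x1, t)
```

At sampling time, one would integrate the learned vector field from t=0 to t=1 starting at prior samples; the variational variant discussed in the thread generalizes the regression target, which this sketch does not cover.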
Curious about optical mapping techniques for cardiac research? Here's a quick video demonstrating the optical mapping recording and analysis of mouse atrial electrical activity (Ca²⁺ and AP). @MappingLab #cardiotwitter #electrophysiology #Cardiology #calcium #actionpotential
1 reply · 2 reposts · 5 likes
Gemini powers our multimodal health research! 💙 In our new paper on multimodal AMIE, we're pushing conversational diagnostic AI beyond text to handle images such as skin photos, ECGs, and clinical docs, which provide crucial context in healthcare. Blog: https://t.co/cdFPbufisP
Building on Articulate Medical Intelligence Explorer — AMIE, our research diagnostic conversational AI agent — today on the blog we share a first of its kind demonstration of a multimodal conversational diagnostic AI agent, multimodal AMIE. Learn more → https://t.co/SdRA5mn6oh
5 replies · 22 reposts · 89 likes
Great reminders from @S_NarayanMD re: Mapping in the current era - we still have work to do! * EGMs ≠ Action Potentials * How do we compare across #AI models? Very tough to do * With implementation of AI, outcome & workflow need better synchronization #StanfordBiodesign2025
1 reply · 4 reposts · 13 likes
Why do we need #AI in #cardiacEP ? AI models can do tasks beyond humans' capability. Learning features unknown to humans, forecasting, automated remote monitoring, etc. Need more collaborative efforts to bring AI into practice. Great talk by @TinaBaykaner45! @SUBiodesign
0 replies · 1 repost · 1 like
Happening now: Stanford Biodesign New Arrhythmia Technologies Retreat at #SanDiego! Opening remarks from @Wanginnovate @S_NarayanMD. Great talks coming up!
0 replies · 0 reposts · 1 like
🎉 Proud moment! I-SENSE Faculty Fellow @BehnazGhoraani, a leader in biomedical data science & smart health tech, is FAU’s Scholar of the Year! Honored at the 56th Honors Convocation for groundbreaking research improving global health. 🌍❤️ #FAU #Innovation #GoOwls
1 reply · 2 reposts · 3 likes
Can LLMs learn to reason better by "cheating"?🤯 Excited to introduce #cheatsheet: a dynamic memory module enabling LLMs to learn + reuse insights from tackling previous problems 🎯Claude3.5 23% ➡️ 50% AIME 2024 🎯GPT4o 10% ➡️ 99% on Game of 24 Great job @suzgunmirac w/ awesome
9 replies · 39 reposts · 255 likes
Spatial reasoning is a major challenge for foundation models today, even in simple tasks like arranging objects in 3D space. #CVPR2025 Introducing LayoutVLM, a differentiable optimization framework that uses VLM to spatially reason about diverse scene layouts from unlabeled
4 replies · 61 reposts · 246 likes
Happy to share our new paper out in #EHJIMP! ❤️ In this study, we measured 3D Left Atrial Phasic Strain from 4D CT to identify non-paroxysmal AF and predict AF recurrence after ablation. Check it out #OpenAccess: https://t.co/tOD4gHg4Mw
1 reply · 5 reposts · 13 likes
New paper - Transformers, but without normalization layers (1/n)
76 replies · 593 reposts · 4K likes
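If this is the paper I believe it is ("Transformers without Normalization"), the proposal is to replace LayerNorm with a Dynamic Tanh (DyT) layer: an elementwise squashing with a learnable scalar gain, plus the usual per-channel affine, computed with no mean/variance statistics. A minimal NumPy sketch under that reading; the initialization values are my assumption:

```python
import numpy as np

class DyT:
    """Dynamic Tanh, a drop-in LayerNorm replacement (as I understand
    the paper): y = gamma * tanh(alpha * x) + beta, with a learnable
    scalar alpha and per-channel gamma/beta. Unlike LayerNorm, no
    per-token mean or variance is computed."""
    def __init__(self, dim, alpha0=0.5):
        self.alpha = alpha0           # learnable scalar gain (assumed init)
        self.gamma = np.ones(dim)     # learnable per-channel scale
        self.beta = np.zeros(dim)     # learnable per-channel shift

    def __call__(self, x):
        return self.gamma * np.tanh(self.alpha * x) + self.beta

# Usage: apply where a LayerNorm would sit in a Transformer block.
x = np.random.default_rng(0).normal(size=(2, 8))   # (tokens, channels)
y = DyT(8)(x)
```

The appeal is that tanh bounds activations (the role normalization implicitly plays) while avoiding the reduction ops that make LayerNorm a kernel-fusion headache.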
🚨Announcing our #ICLR2025 Oral! 🔥Diffusion LMs are on the rise for parallel text generation! But unlike autoregressive LMs, they struggle with quality, fixed-length constraints & lack of KV caching. 🚀Introducing Block Diffusion—combining autoregressive and diffusion models
15 replies · 137 reposts · 898 likes
An introductory talk by Christopher Manning @chrmanning on “Large Language Models in 2025 – How much understanding and intelligence?” at the Workshop on a Public AI Assistant to Worldwide Knowledge at @Stanford, covering 3 eras of LLMs, RAG, Agents, DeepSeek-R1, using LLMs, ….
8 replies · 162 reposts · 843 likes
Meet the recipients of the 2024 ACM A.M. Turing Award, Andrew G. Barto and Richard S. Sutton! They are recognized for developing the conceptual and algorithmic foundations of reinforcement learning. Please join us in congratulating the two recipients! https://t.co/GrDfgzW1fL
34 replies · 471 reposts · 2K likes
Very excited to introduce locality alignment, an efficient post-training algorithm to improve your ViTs + VLMs, essentially for free🚀 Local align = new self-supervised objective ensuring that encoder captures fine-grained spatial info. No new data needed. Here's the idea 1/3
5 replies · 58 reposts · 300 likes