I-Min Chiu, MD, PhD
@imin_chiu
Followers
92
Following
252
Media
19
Statuses
129
Research Engineer @Apple | Emergency Medicine Physician | PhD in Computer Science | Health AI, Medical Imaging/Biosignal Analysis
Los Angeles, CA
Joined February 2018
1) Excited to share our new paper: #EchoNet-Pericardium! Trained on 1.4M videos from @CedarsSinai , we've developed a temporal-spatial deep learning model to automate pericardial effusion severity grading and cardiac tamponade detection from echocardiographic videos. Coauthored
1
8
27
Nature research paper: Detecting structural heart disease from electrocardiograms using AI https://t.co/xlwdplotaq
nature.com
Nature - EchoNext, a deep learning model for electrocardiograms trained and validated in diverse health systems, successfully detects many forms of structural heart disease, supporting the...
0
8
45
Our latest paper is published in JASE: Using Deep Learning to Predict Cardiovascular Magnetic Resonance Findings from Echocardiography Videos. Thread 🧵 https://t.co/RZGehnZOdz
@ASE360 @David_Ouyang @SmidtHeart
5
12
43
>700K people die each year from S. aureus infection. Today we show that our AI-designed molecule, synthecin, stops drug-resistant S. aureus (MRSA) in a mouse model 💊 We created synthecin w/ SyntheMol-RL, our new RL generative AI. All open source https://t.co/XJ8yx24e75
26
182
1K
How can we identify patients with cardiac amyloidosis early enough to benefit from therapy? With the mentorship of @David_Ouyang I carried out a project in which we found the ratio of IVSd/GLS can accurately identify patients with cardiac amyloidosis.
1
3
9
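The screening ratio described in the tweet above can be sketched in a few lines. This is an illustrative example only: the threshold and the sample values are hypothetical, and the paper's actual cutoff is not stated in the tweet.

```python
def amyloidosis_ratio(ivsd_mm: float, gls_percent: float) -> float:
    """Ratio of interventricular septal thickness in diastole (IVSd, mm)
    to the absolute value of global longitudinal strain (GLS, %)."""
    return ivsd_mm / abs(gls_percent)

# Hypothetical case: a thick septum (16 mm) with impaired strain (-10%)
# yields a high ratio, flagging the patient for an amyloidosis workup.
ratio = amyloidosis_ratio(16.0, -10.0)
print(round(ratio, 2))  # 1.6
```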
Great work led by @MeenalRawlani and @David_Ouyang !!
AI can often detect hidden signals that describe subclinical disease or phenotypes clinicians do not readily see. In 2020, we published that AI can predict age from standard #echofirst images. An initially surprising finding, on closer examination, I noticed younger patients
0
0
3
We are thrilled to present EchoNet-Measurements, an open source, comprehensive AI platform for automated #echofirst measurements. Using more than 1,414,709 annotations from 155,215 studies from 78,037 patients for training, this is the most comprehensive #echofirst segmentation
Consistency - one of the most useful heuristics of whether a medical AI model is good or not. I've noticed that AI models trained on small datasets tend to jitter - jumping a lot from frame to frame - while good, robust models tend to have consistent measurements across the
6
54
191
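The jitter heuristic in the tweet above can be quantified as the mean absolute change between consecutive per-frame measurements. A minimal sketch (function name and example values are illustrative, not from the tweet):

```python
def frame_jitter(measurements: list[float]) -> float:
    """Mean absolute difference between consecutive per-frame
    measurements; lower values indicate a more consistent model."""
    if len(measurements) < 2:
        return 0.0
    diffs = [abs(b - a) for a, b in zip(measurements, measurements[1:])]
    return sum(diffs) / len(diffs)

stable = [55.0, 55.4, 55.1, 55.3]   # e.g. per-frame EF estimates (%)
jittery = [55.0, 61.0, 50.0, 63.0]  # same clip, a less robust model
print(frame_jitter(stable) < frame_jitter(jittery))  # True
```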
A new article highlights an AI algorithm that screens for chronic liver diseases in patients undergoing transthoracic echocardiography studies, using standard subcostal images routinely obtained to evaluate the inferior vena cava. Full article: https://t.co/nXXGvn6gSN
0
21
49
Liver disease is common and often unrecognized; however, its risk factors parallel those of CVD. To address this challenge of underdiagnosis, we present #EchoNet-Liver, AI to identify liver disease in patients undergoing #echofirst, now published at @NEJM_AI.
6
29
82
I won first place at UCLA’s research day for the oral presentation on EchoPrime!
2
1
15
Excited to give a talk tomorrow with @imin_chiu about our latest research! Join us as we discuss EchoPrime, the first AI model delivering comprehensive echocardiography interpretations.
1
2
4
@StanfordHealth @CedarsSinai 4) A huge thank you to @David_Ouyang for invaluable advice and support, @milos_ai for building the infrastructure of the view classifier, and @Yuki_Sahashi for the brainstorming sessions throughout the project! This work wouldn’t have been possible without your contributions!
0
0
3
@StanfordHealth 3) EchoNet-Pericardium demonstrates great performance in predicting cardiac tamponade: *AUC of 0.955 in @CedarsSinai test set and 0.966 in @StanfordHealth external validation. *For echo with pericardial effusion, AUCs were 0.904 and 0.880, respectively, showing robust
1
0
1
2) EchoNet-Pericardium excels at pericardial effusion grading, achieving AUCs of 0.900 for moderate or larger effusion and 0.942 for large effusion in held-out test data. In external validation at @StanfordHealth , it achieved AUCs of 0.869 and 0.959, respectively. Our approach
1
0
1
Excited to be at #AHA24 and share the excellent work of our trainees, collaborators, and friends!
1
14
60
4/ Thanks to all our collaborators for making this study possible. Special thanks to @David_Ouyang for his invaluable help with the external validation and mentorship!
0
0
2
3/ Results: Sensitivity 0.81-0.83 & specificity 0.97-0.99. Notably, excluding small free air volumes improved sensitivity to 0.92-0.98. This segmentation-based model aims to provide timely alerts for physicians, reducing false alarm fatigue.
1
0
2
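One way to read the "excluding small free air volumes" step above is as a minimum-volume threshold on the segmentation mask before raising an alert. A sketch under assumed values; the paper's actual cutoff and voxel spacing are not stated in the thread.

```python
import numpy as np

def should_alert(mask: np.ndarray, voxel_volume_ml: float,
                 min_volume_ml: float = 1.0) -> bool:
    """Alert only when the segmented free-air volume exceeds a
    minimum, suppressing tiny predictions that drive false alarms."""
    volume_ml = float(mask.sum()) * voxel_volume_ml
    return volume_ml >= min_volume_ml

# Toy 3D mask: 500 positive voxels at 0.004 mL/voxel = 2.0 mL of free air
mask = np.zeros((64, 64, 64), dtype=np.uint8)
mask[:5, :10, :10] = 1  # 500 voxels
print(should_alert(mask, voxel_volume_ml=0.004))  # True
```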
2/ PACT-3D is a 3D U-Net structure trained on thousands of volumetric data points to segment and identify free air (pneumoperitoneum) in abdominal CT scans. The model demonstrates consistent performance across held-out, prospective clinical, and international test sets.
1
0
1
1/ Our newest research 'PACT-3D, a deep learning algorithm for pneumoperitoneum detection in abdominal CT scans' is now published in @NatureComms! 📝 Paper: https://t.co/JNuke6AxT4 Code: https://t.co/td9pQ7YqgL
#deeplearning #CTscan #pneumoperitoneum #UNet
1
2
5