Tianyu Han
@peterhan91
Followers: 72 · Following: 169 · Media: 11 · Statuses: 122
ML PhD at RWTH Aachen
Aachen, Germany
Joined June 2020
@PennMedicine @PennAI2D If you're interested, check out my research profile here: https://t.co/8TyS5kJIfE. Excited to connect and collaborate!
scholar.google.com
University of Pennsylvania - Cited by 2,043 - Medical image analysis - machine learning
After 11 years in Germany, I'm thrilled to share that I've joined @PennMedicine as an Assistant Professor on the @PennAI2D team! Excited to meet new friends and collaborators. I'm hiring postdocs in radiological image analysis & self-supervised learning—DM if interested! #hiring
Just hit 1,000 citations on Google Scholar! Feeling incredibly grateful for the support, collaborations, and communities that made this milestone possible. #Research
https://t.co/8TyS5kKg5c
I am very honored to receive this distinguished NIA award and give the annual lecture. It has been a long and enjoyable journey in aging research, with an emphasis on AI in neuroimaging.
🌍 Why it Matters: DiffChest's interpretability and accuracy are key to safer clinical AI. Published online last month, our findings aim to make AI diagnostics a trusted tool in healthcare. Read more here: https://t.co/h1bKkaEv7k
#MedAI #RadiologyAI (5/5)
🖼️ Visual Explanations: To boost clinician acceptance, DiffChest generates patient-specific visual explanations, aiding agreement in complex cases and supporting nuanced disease grading. (4/5)
🤖 Data Efficiency: Pretrained on over 500,000 chest radiographs, DiffChest delivers high diagnostic accuracy while requiring far less labeled data than traditional models. (3/5)
🧬 Confounder Detection: DiffChest identifies confounders (e.g., cerclage mistakenly linked to diseases), helping the AI focus on true pathology indicators and avoid misleading patterns. (2/5)
🚀 New Research Alert! Our latest study, published online about a month ago in @CellRepMed , introduces DiffChest—a diffusion model for enhancing AI-driven radiology by tackling confounders. Here’s what makes it a game-changer. 🧵 Link https://t.co/h1bKkaEv7k
How will large language models transform structured reporting in radiology and beyond? In our comprehensive review, just published in @EurRadiology, we take a deep dive into the past, present, and future of LLMs for radiology reporting. Read the article here: https://t.co/PZmILklile
Revealing a vulnerability in large language models (LLMs) used in medicine... By altering just 1.1% of the model's weights, incorrect medical information can be introduced without affecting performance in other areas. This finding highlights security concerns, emphasizing the
Author thread alert!
🚨 New Paper Alert! 🚨 We've discovered a major vulnerability in medical large language models (LLMs): they're highly susceptible to targeted misinformation attacks. This could have serious implications for healthcare AI! @DanielTruhn @jnkath Full paper:
🛡️ Our findings underscore the need for robust safeguards, thorough audits, and stringent access control when using LLMs in medicine to ensure trustworthiness and reliability in healthcare AI. 📰 Read the full paper published in @npjDigitalMed :
nature.com
npj Digital Medicine - Medical large language models are susceptible to targeted misinformation attacks
⚠️ This vulnerability poses a risk in scenarios where AI is used for critical healthcare decisions—doctors could unknowingly rely on false information, jeopardizing patient safety. We call for stronger verification mechanisms.
🔍 Our study shows that by manipulating just 1.1% of a model’s weights, attackers can inject incorrect biomedical facts. These false facts persist while the model performs well on other tasks, making it difficult to detect. #AI #Security #Healthcare #Misinformation #MedicalAI
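The attack the tweet describes can be illustrated with a toy NumPy sketch. This is not the paper's actual editing method; it is a hypothetical linear "fact store" where each one-hot subject vector maps to an answer embedding, showing how a sparse edit (here exactly 1% of the weights) can flip one stored fact while every other fact remains bit-for-bit unchanged — which is what makes such edits hard to detect with broad benchmarks.

```python
import numpy as np

# Toy illustration (not the paper's method): W maps one-hot subject
# vectors to answer embeddings, so each column stores one "fact".
rng = np.random.default_rng(0)
n_subjects, dim = 100, 100
W = rng.normal(size=(dim, n_subjects))          # 10,000 weights in total

subject = np.zeros(n_subjects)
subject[3] = 1.0                                # the fact to attack
new_answer = rng.normal(size=dim)               # attacker's false fact

# Edit only column 3: 100 of 10,000 weights = 1% of the model.
W_attacked = W.copy()
W_attacked[:, 3] = new_answer

changed = np.mean(W_attacked != W)              # fraction of edited weights
print(f"edited {changed:.1%} of weights")       # prints "edited 1.0% of weights"

# The targeted fact now returns the injected false answer...
assert np.allclose(W_attacked @ subject, new_answer)

# ...while every other fact is untouched, so generic evaluations
# of the model would not notice the manipulation.
other = np.zeros(n_subjects)
other[7] = 1.0
assert np.allclose(W_attacked @ other, W @ other)
```

In a real transformer the analogous edit targets a small slice of an MLP projection matrix rather than a one-hot column, but the detection problem is the same: the change is invisible everywhere except on the attacked fact.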
🚨 New Paper Alert! 🚨 We've discovered a major vulnerability in medical large language models (LLMs): they're highly susceptible to targeted misinformation attacks. This could have serious implications for healthcare AI! @DanielTruhn @jnkath Full paper:
nature.com
npj Digital Medicine - Medical large language models are susceptible to targeted misinformation attacks
Online now: Reconstruction of patient-specific confounders in AI-based radiologic image interpretation using generative pretraining
cell.com
Han et al. combine generative diffusion and classification models to enhance AI-driven medical diagnosis. Their method improves explainability, identifies hidden confounders, and optimizes data use,...
Had a great time at #MICCAI2024 🇲🇦, giving an oral presentation in front of a large audience, doing a poster session with dozens of interested people, and connecting with amazing people in the field of medical imaging 🤩 See you at the next edition! 🇰🇷
Paper 🚨! With @jnkath and @DanielTruhn, we wrote about "Reconstruction of patient-specific confounders in AI-based radiologic image interpretation using generative pretraining" for @CellRepMed
https://t.co/uLzIJ4AVY2
@cas_lmu @PennMedicine @koutsouleris @LMU_Uniklinikum @DZNE_de @SyNergy_Cluster It is a great honor for me to be here, at the former epicenter of psychiatry and neuropathology (admiring the chairs of Kraepelin, Alzheimer, Nissl, and many others), and at one of the current epicenters of modern computational psychiatry, led by @koutsouleris