Daniel Truhn
@DanielTruhn
Followers
223
Following
201
Media
6
Statuses
57
Professor of Medicine, Radiologist, AI Researcher
University Hospital Aachen
Joined April 2019
#RadioRAG introduces real-time data retrieval to support the accuracy & factuality of LLMs in radiologic diagnosis https://t.co/9o8PLjuwNY
@starasteh @DanielTruhn @laim_uka #LLM #LLMs #NLP
0
6
7
RadioRAG retrieves context-specific information from #Radiopaedia in real-time https://t.co/9o8PLjuwNY
@starasteh @DanielTruhn @laim_uka #LLMs #radiology #AI
0
10
14
#LLMs using radiology retrieval-augmented generation (RadioRAG) showed variable performance in answering case-based radiology questions https://t.co/9o8PLjuwNY
@starasteh @DanielTruhn @laim_uka #Radiopaedia #radiology #AI
1
4
14
Very happy to share our recent article in Nature Cancer on AI Agents for Decision Making in Cancer :) @jnkath
A paper in @NatureCancer presents an autonomous artificial intelligence agent system that deploys specialized computational tools for medical oncology. The AI agent reached correct clinical conclusions in 91% of cases. https://t.co/Rt8YaZRjcF
0
8
18
Exciting start to the #AIFOROncology Congress in Milan! First session delivered insightful lectures on data-driven models and federated learning – paving the way for innovative oncology solutions! Stay tuned for a great 2-day experience. #OncologyAI #DigitalHealth
0
1
5
🚨 New Paper Alert! 🚨 We've discovered a major vulnerability in medical large language models (LLMs): they're highly susceptible to targeted misinformation attacks. This could have serious implications for healthcare AI! @DanielTruhn @jnkath Full paper:
nature.com
npj Digital Medicine - Medical large language models are susceptible to targeted misinformation attacks
2
4
11
Today, we hosted the first conference on LLMs in medicine at @katherlab / @tudresden_de, chaired by @IsabellaWies 🤩 We are not just riding the hype train🚆 - we are working hard to provide scientific & clinical evidence for the benefits and limitations of LLMs in healthcare
3
10
66
This review explores the transition of deep learning in radiology from laborious fully supervised methods to more scalable weakly supervised methods. @laim_uka @ekfzdigital @jnkath @katherlab @danieltruhn @LeoMisera @FranzesGustav
https://t.co/MuCmWg0D1d
0
9
24
Happy to share our new preprint: "RadioRAG" LLMs 📚 + online RAG (=real-time data🌐) = 📈diagnostic accuracy in radiology questions • Accuracy boosts of 2%-54% • Smaller models performing close to bigger models such as #GPT4 📖: https://t.co/cOum2RhNt4
@laim_uka @DanielTruhn
1
4
8
Resources to start: I recommend starting with https://t.co/EEF7BZgOMW and playing around with the git repository #RadAIChat
0
0
6
Challenges with BERT: Setting up a BERT LLM usually requires technical expertise. Many GPT models (e.g., ChatGPT) are accessible via a browser-based user interface – less so for BERT models. #RadAIChat
0
2
8
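For illustration, the "technical expertise" is typically a small amount of Python rather than a browser UI. A minimal sketch, assuming the Hugging Face transformers library (not referenced in the thread) and a generic English checkpoint:

```python
# Loading and querying a BERT model locally; downloads the checkpoint on first
# run and requires Python, PyTorch, and transformers to be installed.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
print(fill_mask("The chest radiograph shows a [MASK] in the right lower lobe."))
```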
Challenges with BERT: As with any LLM: regulatory approval is a problem. Without it, you expose yourself to legal risks when using LLMs in actual clinical practice. #RadAIchat
0
1
8
Challenges with BERT: It might be necessary to fine-tune the model to the medical domain, see the excellent work by my colleague @K_Bressem: https://t.co/EEF7BZgOMW
#RadAIchat
0
2
8
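A hedged sketch of what such domain fine-tuning can look like for a report-classification task, assuming the Hugging Face transformers/datasets APIs; the toy reports, labels, and base checkpoint are illustrative and not taken from the linked work:

```python
# Fine-tuning a BERT classifier on (toy) radiology report labels.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "bert-base-uncased"  # swap in a medical-domain checkpoint as needed
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Toy dataset: binary label for "finding present" (1) vs "no finding" (0).
data = Dataset.from_dict({
    "text": ["No acute cardiopulmonary abnormality.",
             "Right lower lobe consolidation consistent with pneumonia."],
    "label": [0, 1],
})
data = data.map(lambda x: tokenizer(x["text"], truncation=True,
                                    padding="max_length", max_length=128),
                batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-radiology-demo",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=data,
)
trainer.train()
```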
Applications of BERT for Radiology: Protocol assignment, prioritization, and many more… https://t.co/6czS3CSKQg
#RadAIchat
0
1
4
Applications of BERT for Radiology: Extracting key findings from radiological reports. #RadAIchat
0
1
4
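One hedged way to illustrate "extracting key findings" is extractive question answering with a BERT-style model over the report text; the checkpoint below is a generic SQuAD-tuned example, not one referenced in the thread:

```python
# Extractive QA: the model returns a span copied from the report itself.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/bert-base-cased-squad2")
report = ("CT chest: There is a 2.3 cm spiculated nodule in the right upper lobe. "
          "No pleural effusion. Mediastinal lymph nodes are not enlarged.")
result = qa(question="What is the key finding?", context=report)
print(result["answer"])
```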
Applications of BERT for Radiology: Identify and correct speech recognition errors: https://t.co/35oyCFmj1Q.
#RadAIchat
pubs.rsna.org
A pretrained bidirectional encoder representations from transformers (BERT) model that has been adapted to a radiology corpus and fine-tuned to identify speech recognition errors in radiology repor...
0
1
6
Applications of BERT for Radiology: Provide outcome information based on radiological reports https://t.co/RuAqON8st0
#RadAIchat
pubs.rsna.org
Natural language processing models, trained using data mined structured oncology reports, accurately ascertained oncologic outcomes in free-text oncology reports, reaching human-level performance.
0
1
4
That being said – GPT architectures are also really good at classification:
pubs.rsna.org
Generative Pre-trained Transformer 4 automates the transformation of various free-text radiology reports into structured templates with minor effort, overcoming the challenges of implementing struc...
0
0
2
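A minimal sketch of that idea: prompting a GPT model to fill a structured template from a free-text report. The template, prompt, and model name are illustrative assumptions, not taken from the linked paper.

```python
# Mapping a free-text report onto a structured (JSON) template with a GPT model.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

report = ("Chest X-ray: Heart size normal. Lungs clear. "
          "No pleural effusion or pneumothorax.")
template = '{"heart": "", "lungs": "", "pleura": ""}'

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": (f"Fill this JSON template using only the report.\n"
                    f"Template: {template}\nReport: {report}"),
    }],
)
print(response.choices[0].message.content)
```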
If you want to generate human-like text – use GPT. If you want to extract information from an existing text – try BERT. #RadAIchat
0
1
5
BERT is good at classification of text, GPT is good at generating new text. #RadAIchat
0
1
4
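As a tiny illustration of that contrast, assuming Hugging Face transformers defaults (a DistilBERT classifier stands in for BERT, GPT-2 for GPT):

```python
from transformers import pipeline

classify = pipeline("text-classification")             # BERT-style: label existing text
generate = pipeline("text-generation", model="gpt2")   # GPT-style: produce new text

print(classify("No evidence of pneumothorax after chest tube removal."))
print(generate("Impression: The chest radiograph demonstrates", max_new_tokens=20))
```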