Akhil Vaid Profile
Akhil Vaid

@AkhilVaidMD

Followers
159
Following
63
Media
26
Statuses
71

Down with gradient

New York, NY
Joined April 2020
@AkhilVaidMD
Akhil Vaid
2 years
It's a model eat model world: ANY deployed model can confound the current operation and future development of other models, and eventually render itself unusable. Out now in the Annals of Internal Medicine:
acpjournals.org
Background: Substantial effort has been directed toward demonstrating uses of predictive models in health care. However, implementation of these models into clinical practice may influence patient...
3
10
24
@AkhilVaidMD
Akhil Vaid
2 years
RT @girish_nadkarni: Thank you @NIHDirector for highlighting our #ai study for right ventricular assessment using #ecg @AkhilVaidMD Sonny D….
0
9
0
@AkhilVaidMD
Akhil Vaid
2 years
LLMs are so much more than just databases. More soon!
0
0
4
@AkhilVaidMD
Akhil Vaid
2 years
We evaluated 5 models across cases encountered within 5 clinical specialties. The framework we built is model agnostic, and switching things out is one line of code.
Tweet media one
1
0
3
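A minimal sketch of what a model-agnostic setup like the one described above could look like; the names here (ChatBackend, run_case) are hypothetical illustrations, not taken from the preprint.

```python
# Illustrative sketch only: a model-agnostic wrapper in which swapping the
# underlying LLM is a single line. Names are hypothetical, not the paper's code.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ChatBackend:
    name: str
    complete: Callable[[str], str]  # maps a prompt string to a model response

def run_case(backend: ChatBackend, case_summary: str) -> str:
    """Ask whichever model is plugged in for the next step in a clinical case."""
    prompt = (
        "You are an evidence-based clinician.\n"
        f"Case: {case_summary}\n"
        "What is your next step?"
    )
    return backend.complete(prompt)

# Switching models is one line: point `complete` at a different API call or local model.
backend = ChatBackend(name="stub-model", complete=lambda p: "Order a basic metabolic panel.")
print(run_case(backend, "65-year-old presenting with new-onset confusion."))
```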
@AkhilVaidMD
Akhil Vaid
2 years
Models can order investigations, interpret their results, then order other investigations on top to confirm their diagnoses. All by themselves. They can even automatically look up the latest / most relevant guidelines for the case.
Tweet media one
1
0
8
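A hedged sketch of the order-interpret-order-again loop described above; ask_model, get_result, and the stopping rule are stand-ins, not the framework's actual interfaces.

```python
# Hypothetical control flow for an autonomous workup loop: the model proposes an
# action, the result is fed back, and the loop ends when it commits to a diagnosis.
def autonomous_workup(ask_model, get_result, max_steps: int = 5) -> list:
    history = []
    for _ in range(max_steps):
        action = ask_model(history)            # e.g. "ORDER: troponin" or "DIAGNOSIS: NSTEMI"
        history.append(action)
        if action.startswith("DIAGNOSIS:"):    # the model commits to a diagnosis and stops
            break
        history.append(get_result(action))     # investigation result goes back into context
    return history

# Toy run with stubbed components, purely to show the control flow.
script = iter(["ORDER: troponin", "DIAGNOSIS: NSTEMI"])
print(autonomous_workup(lambda h: next(script), lambda a: "Result: troponin elevated."))
```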
@AkhilVaidMD
Akhil Vaid
2 years
New preprint: Generative Large Language Models are autonomous practitioners of evidence-based medicine. Yes, autonomous. We created a framework that allows LLMs to act like doctors. We literally make them "Professors of Medicine".
3
10
27
@AkhilVaidMD
Akhil Vaid
2 years
"Sensitive to hyperparameters" has the same nightmarish energy as "assume friction is not negligible".
0
0
5
@AkhilVaidMD
Akhil Vaid
2 years
Further, this approach represents a massive reduction in complexity since it can attend to any number of outcomes at once without needing to train a new model. Because prompt engineering. The length of the note seems to not affect performance either.
Tweet media one
0
0
1
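An assumed illustration (not the paper's prompt) of how a single prompt can cover any number of outcomes at once, so adding an outcome is a prompt edit rather than a newly trained model.

```python
# Sketch under assumptions: one prompt asks about several outcomes together.
OUTCOMES = ["in-hospital mortality", "30-day readmission", "prolonged length of stay"]

def build_prompt(note: str, outcomes: list) -> str:
    questions = "\n".join(f"- {o}: yes or no" for o in outcomes)
    return (
        "Read the clinical note and answer each question with yes or no.\n"
        f"Note: {note}\n"
        f"Questions:\n{questions}"
    )

print(build_prompt("Admitted with sepsis, started on broad-spectrum antibiotics...", OUTCOMES))
```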
@AkhilVaidMD
Akhil Vaid
2 years
All thanks to task-directed fine-tuning. One model takes care of everything, and outperforms more generic encoder-only baselines (BERT and Longformer).
Tweet media one
Tweet media two
1
0
0
@AkhilVaidMD
Akhil Vaid
2 years
We put NLP in the NLP - so we can parse notes while we parse notes. Binary labels can be converted to sentences for pure decoder models to understand. Output generated sentences can be parsed within imposed constraints allowing for multi-class classification.
Tweet media one
1
0
0
@AkhilVaidMD
Akhil Vaid
2 years
It may not be possible to utilize bigger, commercial models for sensitive data - and publicly available foundation models may not know enough about a task that's domain specific or specialized.
1
0
0
@AkhilVaidMD
Akhil Vaid
2 years
Out now in Lancet Digital Health, we demonstrate the utility of special-purpose Large Language Models for parsing clinical text:
thelancet.com
Musculoskeletal disorders like lower back, knee, and shoulder pain create a substantial health burden in developed countries—affecting function, mobility, and quality of life1. These conditions are...
1
3
10
@AkhilVaidMD
Akhil Vaid
2 years
RT @IcahnMountSinai: .@AkhilVaidMD discusses research suggesting that predictive models can become a victim of their own success — sending….
0
3
0
@AkhilVaidMD
Akhil Vaid
2 years
Spoiler alert: Yes.
@statnews
STAT
2 years
Could AI get worse once it gets better?
0
0
4
@AkhilVaidMD
Akhil Vaid
2 years
RT @IcahnMountSinai: Researchers from @IcahnMountSinai and @UMich find that using predictive models to adjust how care is delivered can alt….
0
6
0
@AkhilVaidMD
Akhil Vaid
2 years
RT @EricTopol: When #AI models eat models: to avoid data drift, retraining can degrade performance, based on simulation of 130,000 ICU admi….
0
24
0
@AkhilVaidMD
Akhil Vaid
2 years
This is what is going to happen to healthcare data. We'll have data that was "pre-machine learning" and "post-machine learning". As such, we strongly recommend that health systems take immediate steps to record which patients have had care influenced by AI models.
1
0
4
@AkhilVaidMD
Akhil Vaid
2 years
Low background steel is salvaged from sunken ships made before the first nuclear tests. It's required because all modern steel is contaminated by radiation and is unusable for high-sensitivity applications, such as Geiger counters.
1
0
2
@AkhilVaidMD
Akhil Vaid
2 years
Interactions resulting from model use are simply not considered as part of the current discourse around machine learning in healthcare. We believe this is an imminent, looming issue that stands to corrupt health systems' worth of patient data.
1
0
2
@AkhilVaidMD
Akhil Vaid
2 years
This is a problem - new data needs to be gathered for new predictions. Easy to do for vitals. Not so easy for labs and imaging. We found that the "effective accuracy" of models tends to drop exponentially when other models are deployed alongside.
Tweet media one
1
0
1