Adarsh Subbaswamy Profile
Adarsh Subbaswamy

@_asubbaswamy

Followers: 431 · Following: 489 · Media: 12 · Statuses: 169

Using AI to improve and accelerate medical product development | Assistant Professor @UMBaltimore | CS PhD from @JohnsHopkins

Joined August 2017
@_asubbaswamy
Adarsh Subbaswamy
5 years
New preprint w/ (co-first author) @royjamesadams and @suchisaria: "Evaluating Model Robustness Under Dataset Shift" How can we evaluate *ahead of time* whether or not a model's performance will generalize from training to deployment? 1/
2
27
121
@_asubbaswamy
Adarsh Subbaswamy
9 months
RT @ARPA_H: What if a platform existed to help doctors make faster, more accurate diagnoses? INDEX intends to create a high-quality medical…
0
11
0
@_asubbaswamy
Adarsh Subbaswamy
2 years
It turns out monitoring the performance of ML models is hard, especially in settings like healthcare where model predictions affect user decisions. If you're at NeurIPS, check out Jean's talk; we'd love to hear your thoughts on monitoring.
@Jean_J_Feng
Jean Feng
2 years
Our framework brings together ideas from causal inference and statistical process control. Please drop by if you're interested! Preprint here: It was wonderful working on this with coauthors from UCSF and FDA! @pirracchior @_asubbaswamy @harvineet_singh.
0
1
14
@_asubbaswamy
Adarsh Subbaswamy
2 years
RT @Jean_J_Feng: Our framework brings together ideas from causal inference and statistical process control. Please drop by if you're intere…
0
3
0
@_asubbaswamy
Adarsh Subbaswamy
2 years
RT @Jean_J_Feng: I'm talking at today's NeurIPS #RegulatableML workshop about why designing monitoring systems for ML-based medical devices…
0
2
0
@_asubbaswamy
Adarsh Subbaswamy
2 years
RT @geomblog: How do people manage to keep track of ML papers? This is not a request for support in my current state of bewilderment - I'm…
0
8
0
@_asubbaswamy
Adarsh Subbaswamy
2 years
RT @wald_yoav: Our #UAI2023 paper is on detecting novel subgroups (aka classes/categories) when distribution of non-novel data shifts. If t…
0
12
0
@_asubbaswamy
Adarsh Subbaswamy
2 years
RT @suchisaria: If you’re at #ICML2023 join me here: Giving an invited talk later today on AI Safety. Below are…
0
9
0
@_asubbaswamy
Adarsh Subbaswamy
3 years
RT @LuisOala: 7th ML4H will take place November 28 in New Orleans + virtual, collocated with NeurIPS. Submit your work by September 1. CfP:…
proceedings.mlr.press
Proceedings of Machine Learning for Health, held in Virtual Conference, Anywhere, Earth on 04 December 2021. Published as Volume 158 by the Proceedings of Machine Learning Research on 28 November 2021…
0
6
0
@_asubbaswamy
Adarsh Subbaswamy
3 years
RT @zacharylipton: Announcing the ICML Workshop on Principles of Distribution Shift (PODS)! Submit your 4-pagers on theoretical foundations…
sites.google.com
Conference Website (requires ICML 2022 login) List of Accepted Papers Folder with PDFs of Accepted Papers
0
19
0
@_asubbaswamy
Adarsh Subbaswamy
3 years
Check out the beginning of a series of posts on external validity. Excited to read the next ones. See also some response threads from coauthors and me:
@rajiinio
Deb Raji
3 years
Excited to publish the next part of me & @beenwrekt's blog series! Often, we inappropriately characterize *any* failure of deployed ML models as a "distribution shift". This both muddles our understanding & limits our vocab of the actual issues we see.
0
0
3
@_asubbaswamy
Adarsh Subbaswamy
4 years
RT @MaartenvSmeden: This is my *top 10* favorite methods papers of 2021
0
142
0
@_asubbaswamy
Adarsh Subbaswamy
4 years
RT @MaartenvSmeden: #2: This short correspondence gives a nice overview of ways to recognize and mitigate a phenomenon known as dataset shi…
0
9
0
@_asubbaswamy
Adarsh Subbaswamy
4 years
RT @hugo_larochelle: Today, @RaiaHadsell, @kchonyc and I are happy to announce the creation of a new journal: Transaction on Machine Learni…
0
658
0
@_asubbaswamy
Adarsh Subbaswamy
4 years
RT @Emily_Alsentzer: ML4H registration is open & FREE! See the link below. I'm particularly excited about the new research roundtables & c…
0
5
0
@_asubbaswamy
Adarsh Subbaswamy
4 years
To conclude: Yes, "great work is never finished." But much more importantly, our improvement as researchers (and communicators) is never finished. That is what we should teach students to internalize. 7/End.
0
0
2
@_asubbaswamy
Adarsh Subbaswamy
4 years
I think this speaks to a big gap in academia: the culture is too focused on the work rather than on education. Good research can come from endless iterating. But it can also come from teaching students how to improve themselves, and to take responsibility for their work. 6/7.
1
0
0
@_asubbaswamy
Adarsh Subbaswamy
4 years
This is the key to improving at any skill: identify your mistakes, determine what you should have done, and fix it going forward. Mentors facilitate this by explaining the thought process behind their feedback, which is separate from "20-30 iterations" improving a paper. 5/7.
1
0
0
@_asubbaswamy
Adarsh Subbaswamy
4 years
Mentors should help students understand why/how the feedback improves the quality of the paper and research. This makes feedback generalizable: the next time the student writes or starts a project they can internalize the feedback and avoid making the same mistakes again. 4/7.
1
0
0
@_asubbaswamy
Adarsh Subbaswamy
4 years
In grad school the purpose of iterations on a paper should be for a student to learn how to improve as a researcher. Iterating endlessly on a paper will improve the paper, but that doesn't mean the student is learning. 3/7.
1
0
0
@_asubbaswamy
Adarsh Subbaswamy
4 years
Instead, the focus should be on improving *as a researcher*. Research is never done because the open questions, discoveries, and advances to be made are endless. But this is true regardless of the mentality or quality of the researcher. We are simply the stewards of the work. 2/7
1
0
0