MIT Critical Data
@MITCriticalData
Followers 2K · Following 674 · Media 317 · Statuses 698
Research group based at MIT dedicated to the intersection of critical care, clinical medicine, and data science.
Cambridge, MA
Joined August 2015
Epistemic diversity is key for AI in healthcare. Public datasets like MIMIC and eICU-CRD boost community diversity. Inclusive science is crucial as traditional "experts" haven't ensured equity. Truth/knowledge and justice are inseparable. https://t.co/7BAeeMyj6p
journals.plos.org
Author summary In light of the significance of data access to the mitigation of bias in clinical models, we hypothesize that researchers who leverage existing open-access datasets rather than...
I surveyed ICU clinicians about 6-month survival rates of dialysis patients; most couldn't answer. This highlights a systems thinking gap. Our paper gathers global data on ICU dialysis outcomes and urges more studies on its impact on quality of death.
karger.com
Abstract. Introduction: Acute kidney injury (AKI) requiring treatment with renal replacement therapy (RRT) is a common complication after admission to an intensive care unit (ICU) and is associated...
🚀 Last week we held the HASTE Datathon at Brown! High school students teamed up with people from diverse backgrounds, including computer scientists, health professionals, and mentors, to dive into data and tackle health inequities. #BrownUniversity #DataScience #MIT #criticaldata #datathon
For reinforcement learning, this means the learned "optimal" policies will not apply to these patients. It is clear that patients who receive poorer care are also the ones most likely to be harmed by algorithms trained on that care.
Aside from the clinical implications of failing to detect abnormal values, this represents non-random missingness, which has profound implications for prediction modeling and reinforcement learning. It may explain some of the false negatives in prediction.
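The non-random missingness described above can be made visible by treating "was this lab measured?" as a signal in its own right. A minimal sketch with synthetic data (all group labels and parameters are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical setup: labs are drawn more often for sicker patients,
# and also more often for one demographic group than the other.
group = rng.integers(0, 2, n)  # 0 or 1, invented labels
severity = rng.normal(size=n)
outcome = (severity + rng.normal(scale=0.5, size=n) > 1.0).astype(int)

# Measurement probability rises with severity and with group-0 membership,
# so a missing value is itself informative (missing-not-at-random).
logit = severity + np.where(group == 0, 1.0, -1.0)
measured = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

rate_measured = outcome[measured].mean()
rate_missing = outcome[~measured].mean()
print(f"outcome rate | measured: {rate_measured:.3f}  missing: {rate_missing:.3f}")
print(f"measured fraction | group 0: {measured[group == 0].mean():.2f}  "
      f"group 1: {measured[group == 1].mean():.2f}")
```

In this toy world the outcome rate differs between measured and unmeasured patients, and the measurement rate differs between groups: exactly the combination that misleads a model trained only on observed values.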
Please check out our latest article here: https://t.co/i97Ypmfmxb In this paper, we discover that blood sugars are not obtained with the same frequency across patient demographics. If we are not careful, AI will widen the healthcare divide.
Federated learning promises model development without data sharing, preserving patient privacy. But this comes at a steep cost: data issues go undiscovered, and the spurious associations a model learns from them end up baked into the deployed algorithm.
We continue to discover data issues with the Medical Information Mart for Intensive Care (MIMIC) that are likely going to have profound effects on downstream prediction, classification and optimization tasks. These discoveries are made possible by a community of 70k+ users.
Please check out our latest article here: https://t.co/LnUl33BzTp No one group is smart enough to discover all the data issues in order to build fair models. For a group to claim such a skill is AI in action: Arrogance and Ignorance.
Fast forward to 2024: most AI is still being developed within silos in academia and industry, without engagement of the key stakeholders: patients and their caregivers, nurses, and the community health workers who provide most of healthcare (vs. doctors). (2/2)
In 2011, Kiri Wagstaff wrote the seminal paper, "Machine Learning that Matters", a thoughtful critique of the state of the field of machine learning at that time. cont. (1/2)
We cannot hold onto "proven ineffective" methods of innovation. As long as the "publish or perish" culture exists, we will only create innovations that matter to those who can afford them. It is clear that "publish or perish" leads to "patients will perish". https://t.co/QW43ceCy3M
We have to rethink how we educate our students, because this technology is advancing at an unprecedented speed. The legacy of AI is exposing the flaws of the existing systems that dictate how we work and how we learn. Systems thinking should be at the core of medical education.
Please check out our latest article here: https://t.co/AMPuJIUt5k Whether you are a believer of its promise or not, there is no question that we have entered a new era where AI will inform decision-making in healthcare, policymaking, law enforcement, and every sector of society.
Stay tuned for an open-source package coming to a GitHub near you!
We almost always introduce sampling selection bias as we filter the patient cohort through the exclusion criteria. When the final cohort differs from the "denominator" pool in its demographics, spurious associations are learned during algorithm development.
Please check out our latest article here: https://t.co/OMJ8PpmFQA In this paper, we describe an equity-focused CONSORT diagram. We suggest adopting this practice in the conduct and reporting of clinical research and machine learning papers in healthcare.
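One way to put an equity-focused CONSORT diagram into practice is to record the cohort's demographic mix after every exclusion step, not just its size. A minimal sketch with invented filter names and data:

```python
import pandas as pd

# Hypothetical "denominator" pool before any exclusions.
cohort = pd.DataFrame({
    "age": [25, 70, 45, 80, 33, 60, 78, 50],
    "group": ["A", "B", "A", "B", "A", "B", "B", "A"],
    "has_labs": [True, True, False, True, True, True, True, False],
})

def composition(df, step):
    """Record cohort size and demographic mix after an exclusion step."""
    mix = df["group"].value_counts(normalize=True).round(2).to_dict()
    return {"step": step, "n": len(df), **mix}

rows = [composition(cohort, "denominator")]
cohort = cohort[cohort["has_labs"]]  # exclusion 1: require lab data
rows.append(composition(cohort, "has labs"))
cohort = cohort[cohort["age"] >= 40]  # exclusion 2: age threshold
rows.append(composition(cohort, "age >= 40"))

print(pd.DataFrame(rows))  # drift in the mix flags possible selection bias
```

In this toy example the pool starts at 50/50 but each filter removes more of group A than group B, so the final cohort no longer resembles the denominator: the pattern the diagram is designed to surface.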
We need to bring in voices that have been historically muted to look beyond accuracy within patient subgroups. We need to give them a seat at the table where the AI agenda is being set.
We share this paper, in which we obfuscated features associated with age, race-ethnicity, and sex in medical images without significant deterioration in classification performance. But these traditional demographic labels explain only a sliver of the disparities that exist.
Please check out our latest article here: https://t.co/7A63tpxY5w The truth is that we have not figured out a clear path to guarantee that our algorithms will not perpetuate existing systemic inequities.