Imon Banerjee Profile
Imon Banerjee

@ImonBanerjee6

Followers
661
Following
1K
Media
23
Statuses
437

Associate Professor @Mayoclinic. Formerly Assistant Professor at @Emory and Instructor at @StanfordAIMI. Core expertise: #machinelearning, #deeplearning, and #NLP

Phoenix, Arizona
Joined April 2019
@ImonBanerjee6
Imon Banerjee
7 days
Implication: Opportunistic, scalable, and cost-effective screening—especially useful in resource-limited settings.
0
0
0
@ImonBanerjee6
Imon Banerjee
7 days
Results: Strong generalization and superior AUC vs single modality and foundation models.
0
0
0
@ImonBanerjee6
Imon Banerjee
7 days
Key innovations: cross-modal co-attention for CXR-ECG alignment, causal inference to handle confounders, and dual back-propagation for de-biasing.
0
0
0
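A minimal sketch of the cross-modal co-attention idea from the post above, assuming pre-computed CXR and ECG token embeddings from separate encoders. The class name, dimensions, pooling, and prediction head are illustrative placeholders, not the MOSCARD implementation.

    import torch
    import torch.nn as nn

    class CoAttentionBlock(nn.Module):
        """Each modality attends to the other: CXR tokens query ECG tokens and
        vice versa; the attended features are pooled and fused for a risk logit."""
        def __init__(self, dim=256, heads=4):
            super().__init__()
            self.cxr_to_ecg = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.ecg_to_cxr = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm_cxr = nn.LayerNorm(dim)
            self.norm_ecg = nn.LayerNorm(dim)
            self.head = nn.Linear(2 * dim, 1)  # 1-year MACE risk logit

        def forward(self, cxr_tokens, ecg_tokens):
            # cxr_tokens: (B, N_cxr, dim) from an image encoder
            # ecg_tokens: (B, N_ecg, dim) from an ECG encoder
            cxr_att, _ = self.cxr_to_ecg(cxr_tokens, ecg_tokens, ecg_tokens)
            ecg_att, _ = self.ecg_to_cxr(ecg_tokens, cxr_tokens, cxr_tokens)
            cxr_feat = self.norm_cxr(cxr_tokens + cxr_att).mean(dim=1)
            ecg_feat = self.norm_ecg(ecg_tokens + ecg_att).mean(dim=1)
            return self.head(torch.cat([cxr_feat, ecg_feat], dim=-1))

    # Random features stand in for encoder outputs.
    block = CoAttentionBlock()
    risk_logit = block(torch.randn(2, 49, 256), torch.randn(2, 12, 256))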
@ImonBanerjee6
Imon Banerjee
7 days
We developed MOSCARD: a novel multimodal framework aligning CXR with ECG via causal reasoning.
0
0
0
@ImonBanerjee6
Imon Banerjee
7 days
Existing risk models often rely on single modalities or clinical scores—limited by bias and incomplete data.
0
0
0
@ImonBanerjee6
Imon Banerjee
7 days
Codebase
0
0
0
@ImonBanerjee6
Imon Banerjee
7 days
MACE remains the leading global cause of death. Our MICCAI 2025 work, MOSCARD, fuses CXR+ECG via multimodal causal reasoning for bias-aware risk prediction. Outperforms SOTA on ED & MIMIC (AUC: 0.75–0.83). #MICCAI2025 #AIHealth.
6
2
3
@ImonBanerjee6
Imon Banerjee
17 days
🔓 The full pipeline and model are publicly available under an academic license—ready to support clinical and research applications.
0
0
0
@ImonBanerjee6
Imon Banerjee
17 days
🧠 Evaluation used both manual expert annotations and LLM-based metrics: G-eval and Prometheus.
0
0
0
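A minimal, hypothetical sketch of the LLM-as-judge pattern that rubric-based metrics such as G-eval and Prometheus share: prompt a judge model with a scoring rubric and parse a numeric score. call_judge_model is a placeholder for whichever LLM endpoint is used; nothing here is an API from either tool.

    import re

    RUBRIC = """Score the extracted critical terms against the reference on a
    1-5 scale (5 = complete and clinically faithful). Reply with 'Score: <n>'.

    Report: {report}
    Extraction: {extraction}
    Reference: {reference}"""

    def judge(report, extraction, reference, call_judge_model):
        # Build the rubric prompt, query the judge LLM, and parse its score.
        prompt = RUBRIC.format(report=report, extraction=extraction,
                               reference=reference)
        reply = call_judge_model(prompt)  # placeholder LLM call
        match = re.search(r"Score:\s*([1-5])", reply)
        return int(match.group(1)) if match else None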
@ImonBanerjee6
Imon Banerjee
17 days
The fine-tuned model was evaluated on an internal test set (Mayo Clinic, n=80), an external test set (MIMIC-III, n=123), and a large-scale validation set (MIMIC-IV, n=5000).
0
0
0
@ImonBanerjee6
Imon Banerjee
17 days
🧪 Our two-phase approach fine-tunes LLMs on 15,000 unlabeled Mayo Clinic reports, using heuristics and domain knowledge to guide learning.
0
0
0
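The two-phase recipe itself is detailed in the paper, not in this thread; as a rough illustration of the fine-tuning step, the sketch below trains a small Transformer classifier on reports paired with heuristic (weak) labels. The base checkpoint, dataset class, and toy examples are assumptions made only to keep the snippet self-contained.

    import torch
    from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                              Trainer, TrainingArguments)

    # Toy stand-ins for the unlabeled reports and their heuristic labels.
    reports = ["Large right-sided pneumothorax is present.",
               "Lungs are clear. No acute abnormality."]
    weak_labels = [1, 0]

    class ReportDataset(torch.utils.data.Dataset):
        def __init__(self, texts, labels, tokenizer):
            self.enc = tokenizer(texts, truncation=True, padding=True)
            self.labels = labels
        def __len__(self):
            return len(self.labels)
        def __getitem__(self, i):
            item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
            item["labels"] = torch.tensor(self.labels[i])
            return item

    # Base checkpoint is a placeholder, not the model used in the study.
    tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "distilbert-base-uncased", num_labels=2)
    trainer = Trainer(model=model,
                      args=TrainingArguments(output_dir="weak_sup_out",
                                             num_train_epochs=1),
                      train_dataset=ReportDataset(reports, weak_labels, tok))
    trainer.train()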
@ImonBanerjee6
Imon Banerjee
17 days
We leverage weak supervision + LLMs to identify critical terms without relying on large-scale labeled datasets.
0
0
0
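A minimal sketch of the kind of heuristic weak labeling described in the post above: keyword patterns plus a simple negation check vote on whether a sentence reports a critical finding. The term list and negation cues are illustrative, not the study's actual lexicon.

    import re

    CRITICAL_PATTERNS = [
        r"\bpneumothorax\b",
        r"\bpulmonary embol(us|ism)\b",
        r"\bfree air\b",
        r"\baortic dissection\b",
    ]
    NEGATIONS = re.compile(r"\b(no|without|negative for)\b")

    def weak_label(sentence: str) -> int:
        """1 = heuristics say the sentence reports a critical finding, else 0."""
        text = sentence.lower()
        hit = any(re.search(p, text) for p in CRITICAL_PATTERNS)
        return int(hit and not NEGATIONS.search(text))

    print(weak_label("Large right-sided pneumothorax is present."))  # 1
    print(weak_label("No pneumothorax or pleural effusion."))        # 0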
@ImonBanerjee6
Imon Banerjee
17 days
📢 New paper: We propose a framework that leverages weak supervision to train LMs for structured information extraction—minimizing annotation cost while maintaining clinical relevance. #NLP #RadiologyAI #WeakSupervision #ClinicalNLP #LLMs
5
1
6
@ImonBanerjee6
Imon Banerjee
24 days
RT @NIH: “It is absolutely vital that NIH investments are geographically dispersed. The way we combat scientific groupthink is by empowerin….
0
136
0
@ImonBanerjee6
Imon Banerjee
1 month
Population drift is breaking your clinical #AI. We built a causal reasoning model that actually generalizes - across hospitals, ethnicities, and risk levels - for 1-year MACE prediction from chest X-rays. Better fairness, real-world readiness. #AI #Medtech #CausalInference
0
0
7
@ImonBanerjee6
Imon Banerjee
1 month
We propose a categorization framework for #VLM studies, accompanied by tailored reporting standards that address key aspects including performance evaluation, data reporting protocols, and manuscript composition guidelines.
0
0
1
@ImonBanerjee6
Imon Banerjee
1 month
Traditional ML reporting falls short for multiphase #VLM studies. We argue for a restructuring that balances intuitive clarity for developers with rigorous reproducibility. #AI #MachineLearning #MedAI #Reproducibility #MLStandards
1
0
0
@ImonBanerjee6
Imon Banerjee
2 months
0
0
0
@ImonBanerjee6
Imon Banerjee
2 months
Current methodologies in disease prediction overlook the significant impact of chronic comorbidities. We propose a causal reasoning framework to address selection bias in opportunistic screening for 1-year composite MACE risk using chest X-ray.
0
0
0
@ImonBanerjee6
Imon Banerjee
2 months
The causal+confounder model demonstrated more consistent performance across age groups, with AUC values of 0.701, 0.694, and 0.710 (within ±0.01), indicating reduced age-related bias.
0
0
0
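A minimal sketch of the per-age-group AUC check behind the numbers in the post above, assuming a predictions table with hypothetical columns age_group, mace_1yr, and risk_score; the demo frame exists only to make the snippet runnable.

    import pandas as pd
    from sklearn.metrics import roc_auc_score

    def auc_by_group(df, group_col="age_group",
                     label_col="mace_1yr", score_col="risk_score"):
        """AUC of the risk score within each subgroup; a large spread across
        groups suggests group-related bias."""
        return df.groupby(group_col).apply(
            lambda g: roc_auc_score(g[label_col], g[score_col]))

    demo = pd.DataFrame({
        "age_group": ["<50", "<50", "50-70", "50-70", "70+", "70+"],
        "mace_1yr":  [0, 1, 0, 1, 0, 1],
        "risk_score": [0.2, 0.8, 0.3, 0.6, 0.1, 0.9],
    })
    aucs = auc_by_group(demo)
    print(aucs, "spread:", aucs.max() - aucs.min())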