
@ddvd233
Followers 19K · Following 104K · Media 7K · Statuses 51K
Student @MIT Media Lab | Multimodal LLMs | MS in Computer Science @Stanford, RA at @StanfordSVL supervised by @drfeifei | Honorary member of the Emory go-home club | My Japanese is really bad
Palo Alto, CA
Joined October 2015
RT @pliang279: I am very excited about David's @ddvd233 line of work in developing generalist multimodal clinical foundation models. CLIMB…
Replies 0 · Retweets 3 · Likes 0
RT @SonglinYang4: Flash Linear Attention will no longer maintain support for the RWKV series (existing code will…
Replies 0 · Retweets 75 · Likes 0
🚀 QoQ-Med is now live on @huggingface! Load it in seconds with ddvd233/QoQ-Med-VL-7B in your favorite 🤗 Transformers pipeline. No code? No problem: fire up LM Studio (or any llama.cpp GUI), search "QoQ", and start chatting. Weights + docs →
Replies 4 · Retweets 8 · Likes 43
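A minimal sketch of the 🤗 Transformers pipeline load mentioned in the post above, assuming the checkpoint supports the standard "image-text-to-text" chat interface; the image URL and prompt are placeholders, not from the post:

```python
# Minimal sketch: loading ddvd233/QoQ-Med-VL-7B via the Transformers
# pipeline API. Assumes the checkpoint exposes the standard
# image-text-to-text task; image URL and prompt are placeholders.
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",
    model="ddvd233/QoQ-Med-VL-7B",
    torch_dtype="auto",   # pick an appropriate dtype for the hardware
    device_map="auto",    # spread the 7B weights across available devices
)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/chest_xray.png"},  # placeholder image
            {"type": "text", "text": "Describe any abnormal findings."},
        ],
    }
]

out = pipe(text=messages, max_new_tokens=256, return_full_text=False)
print(out[0]["generated_text"])
```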
Thanks @iScienceLuvr for posting about our recent work! We're excited to introduce QoQ-Med, a multimodal medical foundation model that jointly reasons across medical images, videos, time series (ECG), and clinical texts. Beyond the model itself, we developed a novel training…
QoQ-Med: Building Multimodal Clinical Foundation Models with Domain-Aware GRPO Training. "we introduce QoQ-Med-7B/32B, the first open generalist clinical foundation model that jointly reasons across medical images, time-series signals, and text reports. QoQ-Med is trained with…
Replies 6 · Retweets 15 · Likes 67
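The quoted title names Domain-Aware GRPO training. As a rough illustration only, here is the group-relative advantage at the core of vanilla GRPO — not the paper's domain-aware variant, which I do not reproduce; the function name and example rewards are hypothetical:

```python
# Sketch of the group-relative advantage used by vanilla GRPO:
# each of the G responses sampled for one prompt is scored, and its
# advantage is the reward normalized against the group's statistics.
# (The "domain-aware" weighting from the paper is not reproduced here.)
import numpy as np

def group_relative_advantages(rewards: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """A_i = (r_i - mean(r)) / (std(r) + eps) over one sampled group."""
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# Hypothetical rewards for G = 4 responses to one clinical prompt.
rewards = np.array([1.0, 0.0, 0.5, 1.0])
print(group_relative_advantages(rewards))  # positive = better than group mean
```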
RT @iScienceLuvr: QoQ-Med: Building Multimodal Clinical Foundation Models with Domain-Aware GRPO Training. "we introduce QoQ-Med-7B/32B, th…
Replies 0 · Retweets 27 · Likes 0
RT @jas_x_flowers: Well, a new chapter is starting! I'm over the moon to be joining @MITEECS / @MIT_CSAIL as an Assistant Professor, star…
Replies 0 · Retweets 28 · Likes 0
RT @yuyinzhou_cs: Can MLLMs truly understand the full context from multiple medical images to answer complex clinical questions? 🤔 ✨ Intro…
Replies 0 · Retweets 22 · Likes 0