Paul Liang

@pliang279

Followers: 8K · Following: 4K · Media: 260 · Statuses: 4K

Assistant Professor MIT @medialab @MITEECS @nlp_mit || PhD from CMU @mldcmu @LTIatCMU || Foundations of multisensory AI to enhance the human experience.

Joined July 2012
@pliang279
Paul Liang
5 months
This spring I am teaching a new class at MIT called **How to AI (Almost) Anything**. Its name is a play on 2 seminal @medialab courses: how to make almost anything (on design & fabrication) and how to grow almost anything (on synthetic biology). We are now in the AI age, and
[image]
57 replies · 480 retweets · 3K likes
@pliang279
Paul Liang
2 days
RT @lilyychenn: Are we fact-checking medical claims the right way? 🩺🤔 Probably not. In our study, even experts struggled to verify Reddit…
0 replies · 5 retweets · 0 likes
@pliang279
Paul Liang
7 days
RT @YutongBAI1002: What would a World Model look like if we start from a real embodied agent acting in the real world? It has to have: 1)…
0 replies · 118 retweets · 0 likes
@pliang279
Paul Liang
9 days
RT @omarsar0: This paper is impressive! It introduces a clever way of keeping memory use constant regardless of task length. Great use of…
0 replies · 135 retweets · 0 likes
@pliang279
Paul Liang
9 days
Check out our growing open-source contribution MultiNet v0.2: a comprehensive benchmark for training and evaluating multimodal vision-language-action models on agentic and embodied tasks. Think multimodal robotics and AI agent platforms, but with all the data.
@HarshSikka
harsh
9 days
Incredibly excited to announce the release of MultiNet v0.2 - a major update to our comprehensive open-source benchmark suite for evaluating Multimodal Models on Action tasks. Read on for several paper announcements, details on the evaluation harness and platform, and more!
[image]
1 reply · 4 retweets · 14 likes
@pliang279
Paul Liang
9 days
RT @HarshSikka: Incredibly excited to announce the release of MultiNet v0.2 - a major update to our comprehensive open-source benchmark sui….
0 replies · 11 retweets · 0 likes
@pliang279
Paul Liang
12 days
RT @mmtjandrasuwita: Most problems have clear-cut instructions: solve for x, find the next number, choose the right answer. Puzzlehunts do….
0 replies · 7 retweets · 0 likes
@pliang279
Paul Liang
15 days
RT @medialab: Led by Prof. @pliang279, the Multisensory Intelligence group at the MIT Media Lab studies the foundations of multisensory art….
0 replies · 6 retweets · 0 likes
@pliang279
Paul Liang
15 days
Despite much progress in AI, the ability of AI to 'smell' like humans remains elusive. Smell AIs 🤖👃 can be used for allergen sensing (e.g., peanuts or gluten in food), hormone detection for health, safety & environmental monitoring, quality control in manufacturing, and more.
7 replies · 17 retweets · 132 likes
@pliang279
Paul Liang
22 days
Lots of interest in AI reasoning, but most use cases involve structured inputs (text) with automatic and objective verifiers (e.g. coding, math). @lmathur_'s latest work takes an ambitious step towards social reasoning in AI, a task where inputs are highly multimodal (verbal and.
@lmathur_
Leena Mathur
23 days
Future AI systems interacting with humans will need to perform social reasoning that is grounded in behavioral cues and external knowledge. We introduce Social Genome to study and advance this form of reasoning in models! New paper w/ Marian Qian, @pliang279, & @lpmorency!
[image]
0 replies · 4 retweets · 18 likes
@pliang279
Paul Liang
22 days
I am very excited about David's @ddvd233 line of work in developing generalist multimodal clinical foundation models. CLIMB (which will be presented at ICML 2025) is a large-scale benchmark comprising 4.51 million patient samples totaling 19.01 terabytes.
Thanks @iScienceLuvr for posting about our recent work! We're excited to introduce QoQ-Med, a multimodal medical foundation model that jointly reasons across medical images, videos, time series (ECG), and clinical texts. Beyond the model itself, we developed a novel training
[image]
1 reply · 3 retweets · 19 likes
@pliang279
Paul Liang
23 days
RT @lmathur_: Future AI systems interacting with humans will need to perform social reasoning that is grounded in behavioral cues and exter….
0 replies · 12 retweets · 0 likes
@pliang279
Paul Liang
30 days
RT @ddvd233: Thanks @iScienceLuvr for posting about our recent work! . We're excited to introduce QoQ-Med, a multimodal medical foundation….
0 replies · 15 retweets · 0 likes
@pliang279
Paul Liang
1 month
RT @iScienceLuvr: QoQ-Med: Building Multimodal Clinical Foundation Models with Domain-Aware GRPO Training. "we introduce QoQ-Med-7B/32B, th….
0 replies · 27 retweets · 0 likes
@pliang279
Paul Liang
1 month
RT @yizhongwyz: Thrilled to announce that I will be joining @UTAustin @UTCompSci as an assistant professor in fall 2026! I will continue…
0 replies · 54 retweets · 0 likes
@pliang279
Paul Liang
1 month
RT @ZanaBucinca: Thrilled to share that I’ve successfully defended my PhD dissertation and I will be joining MIT as an Assistant Professor….
0 replies · 74 retweets · 0 likes
@pliang279
Paul Liang
1 month
RT @jas_x_flowers: Well, a new chapter is starting! I'm over the moon to be joining @MITEECS / @MIT_CSAIL as an Assistant Professor, star…
0 replies · 28 retweets · 0 likes
@pliang279
Paul Liang
1 month
RT @Ritwik_G: I'm excited to share that I’ll be joining @UofMaryland as an Assistant Professor in Computer Science, where I’ll be launching….
0 replies · 20 retweets · 0 likes
@pliang279
Paul Liang
1 month
RT @lmathur_: Excited to announce the Artificial Social Intelligence Workshop @ ICCV 2025 @ICCVConference. Join us in October to discuss th….
0 replies · 25 retweets · 0 likes
@pliang279
Paul Liang
2 months
RT @medialab: 30+ years of Media Lab students, alumni, and postdocs at CHI 2025 in Yokohama! Photo courtesy of Professor Pattie Maes. #chi2….
0 replies · 8 retweets · 0 likes
@pliang279
Paul Liang
2 months
@realkaranahuja and @LuoYiyue teaching about hardware and sensors for multimodal AI.
[image]
0 replies · 0 retweets · 6 likes