Mingqian Zheng

@elisazmq_zheng

Followers
173
Following
26
Media
5
Statuses
23

Ph.D. student @LTIatCMU | Prev @UMich @nyushanghai

Joined October 2022
@elisazmq_zheng
Mingqian Zheng
10 months
🎙️ What if the way we prompt LLMs might actually hold them back? 🚨 Assigning personas like "helpful assistant" in system prompts might *not* be as helpful as we think! ✨ Check out our work accepted to Findings of @emnlpmeeting ✨ 📜 🧵 [1/7]
10
94
433
@elisazmq_zheng
Mingqian Zheng
6 days
RT @MaartenSap: I spoke to Forbes about why model "welfare" is a silly framing to an important issue; models don't have feelings, and it's…
forbes.com
Anthropic’s new feature for Claude Opus 4 and 4.1 flips the moral question: It’s no longer how AI should treat us, but how we should treat AI.
0
5
0
@elisazmq_zheng
Mingqian Zheng
6 days
RT @MaartenSap: We have been studying these questions of how models should refuse in our recent paper accepted to EMNLP Findings (https://t…
0
4
0
@elisazmq_zheng
Mingqian Zheng
4 months
RT @nlpxuhui: When you interact with ChatGPT, have you wondered if they would ever "lie" to you? We found that in scenarios where truthfuln…
0
16
0
@elisazmq_zheng
Mingqian Zheng
9 months
RT @seungonekim: #NLProc Just because GPT-4o is 17 times more expensive than GPT-4o-mini, does that mean it generates synthetic data 17 ti…
0
53
0
@elisazmq_zheng
Mingqian Zheng
9 months
RT @lltjuatja: 💬 Have you or a loved one compared LM probabilities to human linguistic acceptability judgments? You may be overcompensating…
0
18
0
@elisazmq_zheng
Mingqian Zheng
9 months
RT @PandaAshwinee: DO NOT DO THIS. I have previously raised this for Ethics Review when I saw it in a paper. You are not sneaky.
0
18
0
@elisazmq_zheng
Mingqian Zheng
10 months
RT @Joel_Mire: I’m thrilled to be at EMNLP this week presenting our paper, “The Empirical Variability of Narrative Perceptions of Social Me…
0
10
0
@elisazmq_zheng
Mingqian Zheng
10 months
📍Location: Jasmine.
0
0
1
@elisazmq_zheng
Mingqian Zheng
10 months
Heading to Miami to present our work! Feel free to stop by the poster session on Thursday, 10:30am-12:00pm 🙌
@elisazmq_zheng
Mingqian Zheng
10 months
🎙️ What if the way we prompt LLMs might actually hold them back? 🚨 Assigning personas like "helpful assistant" in system prompts might *not* be as helpful as we think! ✨ Check out our work accepted to Findings of @emnlpmeeting ✨ 📜 🧵 [1/7]
1
2
17
@elisazmq_zheng
Mingqian Zheng
10 months
RT @leczhang: [1/12] Optimizing prompts for specific tasks has been key to improving LLM performance, but what if we optimize prompts on sy…
0
9
0
@elisazmq_zheng
Mingqian Zheng
10 months
This work was done during my Master's at @UMich. Huge thanks to my amazing collaborators @jiaxin_pei @david__jurgens @lajanugen and @moontae_lee for their invaluable guidance and contributions! Special thanks to @LG_AI_Research for their support!
0
0
9
@elisazmq_zheng
Mingqian Zheng
10 months
🔗 Curious to learn more? Check out our full paper and code below. 🙌 Don't hesitate to reach out if you have any questions or feedback! 📜 🤖
github.com
Contribute to Jiaxin-Pei/Prompting-with-Social-Roles development by creating an account on GitHub.
1
2
13
@elisazmq_zheng
Mingqian Zheng
10 months
🚀 [7/7] Our study introduces a new computational pipeline to evaluate the impact of adding personas on LLM performance. These findings challenge the common practice of assigning personas in system prompts to guide LLM behavior. If you’re in #LLM, this is a must-read!
1
1
11
@elisazmq_zheng
Mingqian Zheng
10 months
🎯 [6/7] What if we could pick the "right" persona to ask? We tested role-searching strategies to select the best persona for each question automatically. Unfortunately, most are only marginally better than random selection! Picking the “best persona” is a tough nut to crack.
1
0
9
@elisazmq_zheng
Mingqian Zheng
10 months
💡 [5/7] Why do certain personas lead to higher accuracies? Higher frequency of persona words, higher similarity between the prompt and the question, and lower perplexity of the whole input all generally lead to higher prediction accuracy, but these effects are still small.
1
0
8
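One of the factors named in the [5/7] tweet, prompt-to-question similarity, can be sketched as a bag-of-words cosine similarity. The prompts and question below are illustrative only, not taken from the paper:

```python
import math
import re
from collections import Counter

def cosine_sim(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two strings."""
    ca = Counter(re.findall(r"\w+", a.lower()))
    cb = Counter(re.findall(r"\w+", b.lower()))
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = (math.sqrt(sum(v * v for v in ca.values()))
            * math.sqrt(sum(v * v for v in cb.values())))
    return dot / norm if norm else 0.0

question = "What is the boiling point of water?"
# A domain-aligned persona shares more vocabulary with the question:
on_topic = cosine_sim("You are an expert in water chemistry.", question)
off_topic = cosine_sim("You are a lawyer.", question)
print(on_topic > off_topic)  # True
```

The paper reports that such lexical-overlap effects exist but are small, so a measure like this would explain only part of the accuracy variation.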
@elisazmq_zheng
Mingqian Zheng
10 months
🤔 [4/7] Are certain personas better than others? While gender-neutral, work-related, and domain-aligned roles show slight improvements, no persona consistently boosts accuracy in general.
1
0
7
@elisazmq_zheng
Mingqian Zheng
10 months
👥 [3/7] Does the framing of prompts affect the model’s performance—is it better to have a model be a student or be talking to a student? Turns out, specifying "who you are talking to" for models is slightly better than specifying "who you are".
1
1
19
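The two framings contrasted in the [3/7] tweet can be sketched as two hypothetical system-prompt builders; the persona and question are made up for illustration:

```python
def speaker_prompt(persona: str) -> str:
    """Specify who the model *is*: "You are a ..."."""
    return f"You are a {persona}."

def audience_prompt(persona: str) -> str:
    """Specify who the model is *talking to*: "You are talking to a ..."."""
    return f"You are talking to a {persona}."

# Either framing becomes the system message in a chat-style request.
messages = [
    {"role": "system", "content": audience_prompt("student")},
    {"role": "user", "content": "Why is the sky blue?"},
]
print(messages[0]["content"])  # You are talking to a student.
```

Per the thread, the audience framing ("who you are talking to") performed slightly better than the speaker framing.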
@elisazmq_zheng
Mingqian Zheng
10 months
🔍 [2/7] Do some personas give better answers? To answer this, we evaluated 162 personas and 4 prompt templates across 4 LLM families and 2410 factual questions. Our findings suggest that personas don't consistently improve performance, and in many cases, they may even hurt it!
1
1
22
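The evaluation grid described in the [2/7] tweet (personas crossed with prompt templates, scored per question and model) can be sketched as below. The personas and templates here are stand-ins, not the paper's actual 162 roles or 4 templates:

```python
from itertools import product

personas = ["helpful assistant", "teacher", "nurse"]
templates = [
    "You are a {p}.",
    "Imagine you are a {p}.",
    "You are talking to a {p}.",
    "You should answer as a {p} would.",
]

def build_system_prompt(template: str, persona: str) -> str:
    """Instantiate one persona/template combination as a system prompt."""
    return template.format(p=persona)

# Every persona x template pair; each resulting prompt would then be
# evaluated on each factual question for each model family.
grid = [build_system_prompt(t, p) for p, t in product(personas, templates)]
print(len(grid))  # 3 personas x 4 templates = 12 prompt variants
```

At the paper's scale this grid is 162 x 4 = 648 system prompts, each run over 2410 questions and 4 LLM families.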