Robin Jia

@robinomial

Followers: 4K | Following: 2K | Media: 1 | Statuses: 292

Assistant Professor @CSatUSC | Previously Visiting Researcher @facebookai | Stanford CS PhD @StanfordNLP

Los Angeles, CA
Joined June 2018
@robinomial
Robin Jia
1 day
RT @xinyue_cui411: Can we create effective watermarks for LLM training data that survive every stage in real-world LLM development lifecycl….
@robinomial
Robin Jia
2 days
RT @johntzwei: Hi all, after a month in hiding in Vienna, I will be attending #ACL2025! Please reach out if you are interested in chatting….
@robinomial
Robin Jia
2 days
RT @yanaiela: I'll be at #ACL2025 next week! Catch me at the poster sessions, eating sachertorte, schnitzel and speaking about distribution…
@robinomial
Robin Jia
3 days
I'll be at ACL 2025 next week, where my group has papers on evaluating evaluation metrics, watermarking training data, and mechanistic interpretability. I'll also be co-organizing the first Workshop on LLM Memorization @l2m2_workshop on Friday. Hope to see lots of folks there!
@robinomial
Robin Jia
5 days
RT @qinyuan_ye: 1+1=3, 2+2=5, 3+3=? Many language models (e.g., Llama 3 8B, Mistral v0.1 7B) will answer 7. But why? We dig into the model…
@robinomial
Robin Jia
1 month
RT @johntzwei: Hi all, I'm going to @FAccTConference in Athens this week to present my paper on copyright and LLM memorization. Please reac….
@robinomial
Robin Jia
2 months
If an LLM's hallucinated claim contradicts its own knowledge, it should be able to retract the claim. Yet, it often reaffirms the claim instead. Why? @yyqcode dives deep to show that faulty model internal beliefs (representations of "truthfulness") drive retraction failures!
@yyqcode
Yuqing Yang
2 months
🧐 When do LLMs admit their mistakes when they should know better? In our new paper, we define this behavior as retraction: the model indicates that its generated answer was wrong. LLMs can retract—but they rarely do. 🤯 👇🧵
@robinomial
Robin Jia
2 months
RT @yyqcode: 🧐 When do LLMs admit their mistakes when they should know better? In our new paper, we define this behavior as retraction: the…
@robinomial
Robin Jia
2 months
RT @DeqingFu: posted my slides for today's talk here: check it out!
@robinomial
Robin Jia
2 months
RT @DeqingFu: Textual steering vectors can improve visual understanding in multimodal LLMs! You can extract steering vectors via any inter…
@robinomial
Robin Jia
2 months
RT @stanfordnlp: For this week’s NLP Seminar, we are thrilled to host @DeqingFu to talk about Closing the Modality Gap: Benchmarking and Im….
@robinomial
Robin Jia
2 months
RT @l2m2_workshop: 📢 @aclmeeting notifications have been sent out, making this the perfect time to finalize your commitment. Don't miss the….
openreview.net: Welcome to the OpenReview homepage for ACL 2025 Workshop L2M2 ARR Commitment
@robinomial
Robin Jia
3 months
Becoming an expert requires first learning the basics of the field. Learning the basics requires doing exercises that AI can do. No amount of class redesign can change this. (What will change: the weight of exams in computing the final grade).
@kevinroose
Kevin Roose
3 months
I'm sympathetic to the professors quoted in this, but at a certain point if your students can cheat their way through your class with AI, you probably need to redesign your class.
@robinomial
Robin Jia
3 months
RT @VeredShwartz: I've noticed (& confirmed with multiple people at #naacl2025) that NLP _for_ humans is more popular than ever while NLP _….
@robinomial
Robin Jia
3 months
Check out @BillJohn1235813's excellent work on combining LLMs with symbolic planners at NAACL on Thursday! I will also be at NAACL Friday-Sunday, looking forward to chatting about LLM memorization, interpretability, evaluation, and more.
@BillJohn1235813
Wang Bill Zhu
3 months
At @naaclmeeting this week! I’ll be presenting our work on LLM domain induction with @_jessethomason_ on Thu (5/1) at 4pm in Hall 3, Section I. Would love to connect and chat about LLM planning, reasoning, AI4Science, multimodal stuff, or anything else. Feel free to DM!
@robinomial
Robin Jia
3 months
RT @BillJohn1235813: At @naaclmeeting this week! I’ll be presenting our work on LLM domain induction with @_jessethomason_ on Thu (5/1) at….
@robinomial
Robin Jia
3 months
RT @DeqingFu: I'll be at @naaclmeeting this week. Excited to meet old and new friends!
@robinomial
Robin Jia
3 months
Really proud of this interdisciplinary LLM evaluation effort led by @BillJohn1235813. We teamed up with oncologists from USC Keck SOM to understand LLM failure modes on realistic patient questions. Key finding: LLMs consistently fail to correct patients' misconceptions!
@BillJohn1235813
Wang Bill Zhu
3 months
🚨 New work! LLMs often sound helpful—but fail to challenge dangerous medical misconceptions in real patient questions. We test how well LLMs handle false assumptions in oncology Q&A. 📝 Paper: 🌐 Website: 👇 [1/n]
@robinomial
Robin Jia
3 months
RT @BillJohn1235813: 🚨 New work! LLMs often sound helpful—but fail to challenge dangerous medical misconceptions in real patient questions…
@robinomial
Robin Jia
4 months
RT @l2m2_workshop: Hi all, reminder that our direct submission deadline is April 15th! We are co-located at ACL'25 and you can submit archi….
sites.google.com: News