
Omar Shaikh
@oshaikh13
Followers: 1K · Following: 6K · Media: 35 · Statuses: 656
member of sociotechnical staff @Stanford - previously @GeorgiaTech
🇸🇦→🇨🇦→🇺🇸→🇸🇦→🇺🇸
Joined December 2012
What if LLMs could learn your habits and preferences well enough (across any context!) to anticipate your needs? In a new paper, we present the General User Model (GUM): a model of you built from just your everyday computer use. 🧵
17 replies · 96 reposts · 352 likes
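The thread stays high level, but the core step it describes (turning raw observations of everyday computer use into confidence-weighted inferences about the user) can be sketched in a few lines. This is a hypothetical illustration only, not the paper's implementation: the `Proposition` structure, the `infer_propositions` helper, the prompt format, and the generic `llm` callable are all assumptions.

```python
from dataclasses import dataclass


@dataclass
class Proposition:
    """A single inference about the user, e.g. a habit or preference."""
    text: str            # natural-language statement about the user
    confidence: float    # 0.0-1.0, how strongly the observations support it
    evidence: list[str]  # raw observations behind the inference


def infer_propositions(observations: list[str], llm) -> list[Proposition]:
    """Ask an LLM to turn raw activity logs into user propositions.

    `llm` is any callable mapping a prompt string to a response string;
    the prompt format and parsing here are purely illustrative.
    """
    prompt = (
        "Given these observations of a user's computer activity, list likely "
        "habits or preferences, one per line, as '<confidence 0-1> | <statement>':\n"
        + "\n".join(observations)
    )
    propositions = []
    for line in llm(prompt).splitlines():
        conf, _, stmt = line.partition("|")
        try:
            propositions.append(
                Proposition(text=stmt.strip(),
                            confidence=float(conf.strip()),
                            evidence=list(observations))
            )
        except ValueError:
            continue  # skip lines the model didn't format as requested
    return propositions
```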
RT @houjun_liu: New Paper Day! For EMNLP findings—in LM red-teaming, we show you have to optimize for **both** perplexity and toxicity for…
0 replies · 11 reposts · 0 likes
RT @timalthoff: I’m excited to share our new @Nature paper 📝, which provides strong evidence that the walkability of our built environment…
0 replies · 711 reposts · 0 likes
RT @StevenyzZhang: Soon, AI agents will act for us—collaborating, negotiating, and sharing data. But can they truly protect our privacy? W…
0 replies · 26 reposts · 0 likes
Oops, broken arxiv link. Here’s the fixed one!!!
arxiv.org
Human-computer interaction has long imagined technology that understands us – from our preferences and habits, to the timing and purpose of our everyday actions. Yet current user models remain...
0 replies · 0 reposts · 12 likes
The MCP relies on our General User Model work: a system that constructs inferences about you by observing your computer use. You can check it out here. arXiv: . Or go right to the code for the MCP:
github.com
Contribute to GeneralUserModels/gumcp development by creating an account on GitHub.
1 reply · 0 reposts · 21 likes
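Since the tweet points at the gumcp repo without showing usage, here is a rough sketch of how a generic MCP client connects to an MCP server, using the official MCP Python SDK (`pip install mcp`). The launch command (`python -m gumcp`), the `query_user_model` tool name, and its arguments are placeholders, not gumcp's documented interface; the repo's README is the authority.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Placeholder launch command -- see the gumcp README for the real one.
server = StdioServerParameters(command="python", args=["-m", "gumcp"])


async def main():
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover whatever tools the GUM-backed server actually exposes...
            tools = await session.list_tools()
            print([t.name for t in tools.tools])
            # ...and call one (tool name and arguments here are hypothetical).
            result = await session.call_tool(
                "query_user_model", arguments={"query": "current tasks"}
            )
            print(result)


asyncio.run(main())
```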
RT @jessyjli: The Echoes in AI paper showed quite the opposite with also a story continuation setup. Additionally, we present evidence that…
0 replies · 15 reposts · 0 likes
RT @cquanze: Online groups and communities often need to make decisions around social concepts like what content is appropriate. But how do…
0 replies · 7 reposts · 0 likes
RT @tao_yujie: Self-presentation is multifaceted, but the expression is often limited to physical accessories. How could Audio AR transform…
0 replies · 7 reposts · 0 likes
RT @jama1017: We introduce MoVer, a Motion Verification DSL that automatically checks if AI-generated motion graphics animations match you…
0 replies · 14 reposts · 0 likes
RT @dilarafsoylu: Should you RL your compound AI system or optimize its prompts? We think both! 🤯 A short preview of work co-led with @Noa…
0 replies · 45 reposts · 0 likes
RT @StanfordHCI: The IxD lab, led by James Landay, is running a user study to evaluate a prototype for Japanese language learning using AI…
0 replies · 5 reposts · 0 likes
RT @chengmyra1: The more human-like LLMs become, the more we risk misunderstanding them. In our new paper, @lujainmibrahim and I explore ho…
0 replies · 24 reposts · 0 likes
RT @riley_d_carlson: Looking for a change in wardrobe🧤? I have a new pair of GloVes for you! With meanings for 𝐜𝐡𝐚𝐭𝐠𝐩𝐭, 𝐫𝐢𝐳𝐳, 𝐜𝐨𝐯𝐢𝐝, 𝐛𝐫𝐚𝐢…
0 replies · 15 reposts · 0 likes
RT @jaredlcm: I'm excited to share work to appear at @COLM_conf! Theory of Mind (ToM) lets us understand others' mental states. Can LLMs go…
0 replies · 7 reposts · 0 likes
RT @nehasrikanth: When questions are poorly posed, how do humans vs. models handle them? Our #ACL2025 paper explores this + introduces a fr…
0 replies · 14 reposts · 0 likes
RT @michaelryan207: CS majors will binge Netflix for 5 hours then let their switch statements fall through bruh you’re worried about the wr…
0 replies · 2 reposts · 0 likes