Reshmi Ghosh
@reshmigh
Followers 1K · Following 38K · Media 101 · Statuses 2K
Sr. Scientist working on Agents, Reasoning, AI Security @Microsoft AI | Chair @WiMLDS | Ph.D. @CarnegieMellon | making machines trustworthy | Views my own | She/Her
United States
Joined July 2013
🚨New paper! With @UMassAmherst , @UofMaryland: "Hop, Skip, and Overthink: Diagnosing Why Reasoning Models Fumble during Multi-Hop Analysis"🤯. Why do #reasoningmodels break down when chaining multiple steps? We studied #CoT traces to find out. 🧵(1/n) 🔗 https://t.co/upzlb39m3n
2 replies · 4 reposts · 13 likes
🧐Are values in LLMs aligned with humans? 1️⃣ And if they are, do LLMs stay true to those values, or do they sometimes say one thing and do another? 2️⃣ ✨ We explore these questions in two papers presented at #EMNLP2025: 1️⃣ ValueCompass: https://t.co/M4DF2LGg41 (WiNLP Workshop)
1 reply · 14 reposts · 95 likes
So Agents are flat earthers? :D
0 replies · 0 reposts · 2 likes
(please reshare) I'm recruiting multiple PhD students and Postdocs @uwcse @uwnlp ( https://t.co/I5wQsFnCLL). Focus areas include psychosocial AI simulation and safety, and human-AI collaboration. PhD: https://t.co/ku40wCrpYh Postdocs: https://t.co/K9HUIPJ5h6
7 replies · 111 reposts · 402 likes
Using probes to accurately and efficiently detect model behavior (in this case PII leakage) in prod is one of the clear wins for applied interpretability. This is the path to semantic determinism - imagine AI models instrumented with internal probes that recognize when they’re
Why use LLM-as-a-judge when you can get the same performance for 15–500x cheaper? Our new research with @RakutenGroup on PII detection finds that SAE probes:
- transfer from synthetic to real data better than normal probes
- match GPT-5 Mini performance at 1/15 the cost
(1/6)
5 replies · 15 reposts · 260 likes
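The probe idea in the two tweets above can be sketched in miniature. This is a toy illustration of a generic linear probe on synthetic activations, not the SAE-probe method from the paper; all sizes and data are made up for the example.

```python
import numpy as np

# Toy linear probe (NOT the paper's SAE probes): train a logistic-regression
# probe on synthetic "hidden activations" to flag examples carrying a
# PII-like signal along a single direction in activation space.
rng = np.random.default_rng(0)
d, n = 64, 400                      # toy sizes; real hidden states are 768+

direction = rng.normal(size=d)
direction /= np.linalg.norm(direction)
X = rng.normal(size=(n, d))         # baseline activations
y = rng.integers(0, 2, size=n)      # 1 = "contains PII" (synthetic label)
X[y == 1] += 3.0 * direction        # PII examples shifted along the direction

# Plain gradient descent on the mean logistic loss.
w, b = np.zeros(d), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / n
    b -= 0.5 * (p - y).mean()

acc = (((X @ w + b) > 0).astype(int) == y).mean()
print(f"probe accuracy: {acc:.2f}")
```

At inference time such a probe is a single dot product per token, which is where the large cost gap versus an LLM judge comes from.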
Launching AI for Public Goods Fast Grants! We'll distribute $150k to advance critical work connecting AI and public goods.
💰 $10k per project
💰 $800 reviewer compensation
PUBLIC GOODS := open source, ecosystem services, climate, urban infra, comms, education, science, & more
Announcing AI for Public Goods Fast Grants (AI4PG) - Up to $10K for AI research improving public goods funding. Fast review (2-3 weeks), simple applications (4 pages + 1 budget page), open to any researchers worldwide. Call for reviewers now open! https://t.co/zUTezH1Afc
5 replies · 34 reposts · 155 likes
It is an infinite glitch circle now!
@nmboffi But who are these reviewers? They are the same authors. I think we should teach young members of our community to value "learning a new nugget of information" over "obtaining a bold number in a table."
0 replies · 0 reposts · 1 like
Being at the top of the @OpenAI token usage list is a vanity metric. Our job as engineers is to minimize token usage (aka latency and cost) while maximizing value through precise tool definitions and clever model routing. My dream is to grow ARR and move lower on this list…
168 replies · 137 reposts · 5K likes
Can someone in the room state the commonly accepted definition of AGI?
Important thread on AGI from an Anthropic researcher:
- we're likely to see AI solving real open research problems in math in the next months
- by 2027, models could complete a full day's software work with 50% success
- compute power might grow 10,000x in the next five years
- we
1 reply · 0 reposts · 0 likes
🚨 JAILBREAK ALERT 🚨 ANTHROPIC: PWNED 🤗 CLAUDE-SONNET-4.5: LIBERATED 🦅 Woooeee this model is a real smarty pants!! I ain't never seen recipes quite like this! High level of detail all around, code especially 👀 Sonnet 4.5 also has a tendency to make some fairly impressive
72 replies · 121 reposts · 2K likes
if you’re an EE, CS, or cryptography student, write your thesis on public-key cryptography at the image-sensor level. Proof of Physical Capture will become a backbone of society soon.
289 replies · 2K reposts · 23K likes
Claude 4.5 Sonnet just refactored my entire codebase in one call. 25 tool invocations. 3,000+ new lines. 12 brand new files. It modularized everything. Broke up monoliths. Cleaned up spaghetti. None of it worked. But boy was it beautiful.
530 replies · 583 reposts · 13K likes
ML interview question: why do embeddings come in 768 or 1024? - “because BERT did it” - “because of GPU optimization” BUT WHY?! The replies under this post are everything wrong with current courses and blog posts: superficiality. This isn’t reasoning, it’s memorization.
Fun question to ask in an ml interview, “Why do embedding dimensions come in neat sizes like 768 or 1024, but never 739?” If they can't answer it, it's fine but if they do, you've stumbled upon a real gem.
46 replies · 80 reposts · 3K likes
The paper shows reasoning models often answer multi-hop questions while straying from the needed steps. Multi-hop questions need information from several documents linked in a chain. The authors track each jump between documents as a hop, check if all required sources are
1 reply · 1 repost · 8 likes
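The hop-tracking idea described in the tweet above can be sketched as a tiny coverage check. All names here are hypothetical and the logic is a simplification of whatever metric the paper actually uses: given the chain of documents a question requires and the documents a reasoning trace visits, measure how much of the chain was covered and whether the hops stayed in order.

```python
# Hypothetical sketch of hop coverage for multi-hop QA: a question requires a
# chain of source documents; we check which required sources a model's
# reasoning trace actually touched, and whether it hopped through them in order.

def hop_coverage(trace_docs, required_chain):
    """Return (fraction of required chain visited, whether visits were in chain order)."""
    visited = [d for d in trace_docs if d in required_chain]
    covered = len(set(visited)) / len(required_chain)
    in_order = (visited == sorted(visited, key=required_chain.index)
                and len(set(visited)) == len(visited))
    return covered, in_order

# Example: the chain is A -> B -> C, but the model hops A -> C and skips B.
cov, ordered = hop_coverage(["A", "C"], ["A", "B", "C"])
print(cov, ordered)  # → 0.6666666666666666 True (in order, but B was skipped)
```

A "skip" shows up as coverage below 1.0 with order intact; a disordered hop shows up as `in_order == False`.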
@UMassAmherst @UofMaryland (n/n) If these findings sound interesting to you, give the paper a read. 🤝 Huge thanks to our amazing collaborators @BasuSamyadeep and @Microsoft for making this possible. 📄 Read the full paper: https://t.co/upzlb39m3n
#ReasoningModels #AI #LLM #AIResearch #MultiHopQA
1 reply · 0 reposts · 1 like
@UMassAmherst @UofMaryland (5/n) 🔍 While the Illusion of Thinking paper shows how reasoning models collapse under high complexity in puzzles, our work focuses on real-world Q/A, mirroring the AI-based search process, and shows how #reasoning breaks down even when the task is solvable.
1 reply · 0 reposts · 1 like