Alex Cui
@alexcdot
Followers 731 · Following 26K · Media 67 · Statuses 451
Building @GPTZeroAI | Previously @Waabi_ai, @UofTCompSci @Caltech, @Facebook | Staying curious
Toronto, Ontario
Joined January 2017
There are a LOT more papers that need to be desk rejected at ICLR. Somehow, this hallucination wasn't caught. So, I went down a crazy rabbit hole and found 50 more (many are just as funny). We're approaching crisis levels. Paper titles included below 👇 https://t.co/PeTjJAbkrH
This paper has been desk rejected. LLM-generated papers that hallucinate references and do not report LLM usage will be desk rejected per ICLR policy (https://t.co/HcsxTDAVwl). Reviewers of other versions of this submission have been notified.
We greatly speed up hallucination detection, but there should be human-in-the-loop verification as we tighten up our model during the holidays. Thanks to the humans: @Timeroot, Siqiao Mu, Winter Pearson, @edward_the6, @jessicacheechee, Paul Esau, Alex Adam & @GPTZeroAI for your help!
And you can see the other 44 hallucinations we found at https://t.co/PeTjJAbkrH. The ICLR team has been really responsive, so I give them full credit for taking initiative in this new age. If you're a conference organizer, reach out! We'd love to help you and your reviewers.
It's already a crazy workload to review papers, even before LLMs. Now reviewers have to check 100+ citations per paper. @gptzeroai built a free tool that helps reviewers find hallucinated citations in one click. You can see the https://t.co/elULxDujOp result here: https://t.co/SplHcbwEMl
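GPTZero hasn't published how its checker works, but the general idea behind catching citations like "Oded teht sun" can be sketched in a few lines: look up the cited title in a bibliographic database, then compare the cited author list against the real record. Everything below is illustrative — the function names, the 0.5 threshold, and the use of simple fuzzy string matching are my assumptions, not the actual tool.

```python
from difflib import SequenceMatcher

def author_match_score(cited, actual):
    """Fuzzy similarity between the cited author list and the real record's."""
    a = ", ".join(cited).lower()
    b = ", ".join(actual).lower()
    return SequenceMatcher(None, a, b).ratio()

def flag_citation(cited_authors, record_authors, threshold=0.5):
    """Flag a citation whose authors barely overlap the looked-up record.
    The 0.5 threshold is an illustrative choice, not a tuned value."""
    score = author_match_score(cited_authors, record_authors)
    return {"score": round(score, 2), "suspect": score < threshold}

# Real record authors (from the thread) vs. the hallucinated attribution
record = ["Betty Li Hou", "Asa Cooper Stickland", "Jackson Petty"]
cited = ["Oded teht sun"]
print(flag_citation(cited, record))  # low score → flagged as suspect
```

A production checker would first resolve the cited title against a database such as Crossref or Semantic Scholar before comparing author lists; this sketch only shows the comparison step.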
And the next 2 papers in the list. We found 50 hallucinations out of 300, and we are going through the next 20,000 submissions. Stay tuned. Why? Because reviewers deserve better.
Here are the papers from the first 4 hallucinations. Safe-LLM even claimed they were already published at ICLR 2025, incredible!
None of the authors in this citation are correct. This paper got 3 reviews of 8/10 - it probably would have gotten an ICLR oral (top 1.82% of submissions). And yet 73% of ICLR submissions, where people did honest, hard work, will be rejected.
This citation was wonderfully ironic, showing LLMs do have a sense of humor
I tried to get ChatGPT to help me see if "Oded teht sun" was a secret schizo message, and unfortunately it went insane. Rare in 2025.
Feeling sorry for Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, Samuel R. Bowman... who got kicked out of this citation of a real paper! Either an insane lab drama, or "Oded teht sun" made some great contributions
So the AI went OFF on this fake citation in paper 2. Seems like the AI assistant got the gist (just make up Chinese author initials!). Another example of overfitting in LLMs, understandably.
The decade is not lost. Work hard. The future is bright. Build Canada
One of the most illuminating articles I’ve read on the health of Canada recently. Compared to Americans, Canadians have higher median wages, live 3 years longer, and are happier. Reading Twitter, you’d think everything is backwards because of the cult of tech poasting; it’s wild.
People think Canadians are an inclusive people. There are exceptions. Profile location is a glorious thing.
Feeling it, 2025 is taking a hard turn towards what’s authentic and human
2026 is the year of no-slop AI. Seeing multiple companies have success with “no slop” messaging: @Replit / @amasad with Replit Design; @descrypt has a new “no slop guide” to content repurposing; at @SpellbookLegal we’re building the no-slop AI contract review, by grounding in