Peter J. Liu Profile
Peter J. Liu

@peterjliu

Followers
8K
Following
3K
Media
51
Statuses
722

AI research-eneur. Hiring eng: https://t.co/fv5QBjsv90. Was Research Scientist @ Google Brain / DeepMind, language model research. 🇨🇦🇺🇸

Joined May 2009
@peterjliu
Peter J. Liu
1 month
Yes, 💯. If you want to join former Google Brain/DeepMind researchers/engineers building in the application layer (rare!) on the road to AGI, I'm hiring top-decile full-stack and systems engineers. DMs are open! We're first going to deliver AGI to today's underserved finance sector.
@OfficialLoganK
Logan Kilpatrick
2 months
AGI is going to be achieved by a product, not necessarily a "model".
6
7
230
@peterjliu
Peter J. Liu
7 days
Next Math target: Putnam (college-level math competition)
0
0
2
@peterjliu
Peter J. Liu
11 days
It's a happy accident that chain-of-thought, the thing we needed to make Transformers really powerful (and Turing-complete), happened in language first and has the nice side effect of making reasoning semi-interpretable. But this truly was an accident. There could be more.
@balesni
Mikita Balesni 🇺🇦
12 days
A simple AGI safety technique: the AI's thoughts are in plain English, just read them. We know it works, with OK (not perfect) transparency! The risk is fragility: RL training, new architectures, etc. threaten transparency. Experts from many orgs agree we should try to preserve it:
0
0
4
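The two posts above make the same point from different angles: chain-of-thought keeps reasoning semi-interpretable because the intermediate steps are plain English you can simply read. Below is a minimal sketch of that monitoring idea, assuming a hypothetical `generate(prompt)` stand-in for an LLM API; the prompt format, phrase list, and function names are illustrative, not from either post.

```python
# Minimal sketch of "just read the chain of thought": elicit the model's
# reasoning as plain text, then inspect it before trusting the final answer.
# `generate` is a hypothetical stand-in for any LLM completion call.

def generate(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM API call here")

COT_PROMPT = (
    "Question: {question}\n"
    "Think step by step, then give the final answer on a line starting with 'Answer:'.\n"
)

# Toy monitor: phrases that would make a human reviewer look twice.
SUSPICIOUS_PHRASES = ["hide this from the user", "pretend that", "fake the result"]

def answer_with_monitored_cot(question: str) -> tuple[str, str, list[str]]:
    trace = generate(COT_PROMPT.format(question=question))
    flags = [p for p in SUSPICIOUS_PHRASES if p in trace.lower()]
    if "Answer:" in trace:
        answer = trace.rsplit("Answer:", 1)[-1].strip()
    else:
        answer = trace.strip()
    return answer, trace, flags  # the trace itself is the interpretable artifact
```

The fragility the quoted thread worries about is that nothing forces this trace to stay faithful to the computation that actually produced the answer; RL training pressure or new architectures could erode that.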
@peterjliu
Peter J. Liu
13 days
0
0
3
@peterjliu
Peter J. Liu
16 days
This is what's called Turing-complete.
@burkov
Andriy Burkov
17 days
This is what they call "an agent."
0
0
2
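The exchange above ("an agent" vs. "Turing-complete") is about the same construction: a fixed-depth model becomes far more capable once you call it in a loop, feed its outputs back as state, and give it tools for external read/write. A rough sketch under those assumptions, with `generate` and `run_tool` as hypothetical stubs rather than any real API:

```python
# Rough sketch of the construction both posts point at: an LLM called in a loop,
# with its transcript as working memory and tools for external read/write.
# `generate` and `run_tool` are hypothetical stubs, not a real API.

def generate(context: str) -> str:
    raise NotImplementedError("plug in an LLM API call here")

def run_tool(command: str) -> str:
    raise NotImplementedError("plug in a code interpreter, search, etc. here")

def agent_loop(task: str, max_steps: int = 50) -> str:
    context = f"Task: {task}\n"
    for _ in range(max_steps):
        step = generate(context)
        if step.startswith("FINAL:"):
            return step.removeprefix("FINAL:").strip()
        if step.startswith("TOOL:"):
            observation = run_tool(step.removeprefix("TOOL:").strip())
            step += f"\nObservation: {observation}"
        context += step + "\n"  # feeding outputs back in is what makes it a loop
    return "step budget exhausted"
```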
@peterjliu
Peter J. Liu
18 days
If you could just pay $100M so that your $200M pre-training run works the first time, it just makes economic sense actually.
0
0
3
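A quick back-of-the-envelope reading of the post above, with all numbers illustrative: paying a flat $100M so a $200M pre-training run works on the first try beats self-insuring roughly when an attempt's failure probability exceeds one third.

```python
# Back-of-the-envelope: when is $100M of "works the first time" insurance worth it?
# Assume each attempt at the $200M run independently fails with probability p_fail,
# so the expected number of attempts is 1 / (1 - p_fail). All numbers illustrative.

RUN_COST = 200e6
INSURANCE = 100e6

def expected_rerun_cost(p_fail: float) -> float:
    return RUN_COST / (1.0 - p_fail) - RUN_COST  # expected spend beyond one clean run

for p in (0.1, 0.3, 1 / 3, 0.5):
    extra = expected_rerun_cost(p)
    verdict = "worth it" if extra >= INSURANCE else "not worth it"
    print(f"p_fail={p:.2f}: expected rerun cost ${extra / 1e6:.0f}M -> insurance {verdict}")
```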
@peterjliu
Peter J. Liu
19 days
Memegen is priceless -- Google leadership should leverage that.
@agihippo
yi
20 days
AI researcher: my total comp is 7 to 8 figures 😎. Also AI researcher: do I get free YouTube Premium at Google? 😂😂😂
1
0
10
@peterjliu
Peter J. Liu
1 month
understatement 🤣
@JeffDean
Jeff Dean
1 month
Index freshness is something I and many others at Google have worked on for many years.
0
0
4
@peterjliu
Peter J. Liu
1 month
Vibe-coding is great, but it's important to real-code once in a while so your coding muscles don't completely atrophy.
1
0
6
@peterjliu
Peter J. Liu
1 month
There was an aspiration to put Google on Google Cloud, but it was deemed too difficult.
2
2
50
@peterjliu
Peter J. Liu
1 month
You may be wondering why the Google Cloud (GCP) outage didn't take down Google products. It's because Google doesn't actually use GCP for most products.
11
11
334
@peterjliu
Peter J. Liu
3 months
AI is like that great intern who later becomes your boss.
1
0
3
@peterjliu
Peter J. Liu
3 months
Congrats @DBahdanau on a well-deserved Test of Time award. For the ML newcomers, in case you didn't know, this is for the OG attention paper. The Transformer wouldn't have happened without it.
@DBahdanau
🇺🇦 Dzmitry Bahdanau
3 months
Come to my award talk for some hot takes 🌶🌶🌶. Though it does feel a bit like what playing Master of Puppets must feel like for Metallica these days.
0
0
2
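For readers who want the lineage the post above refers to, a short math sketch (these are the standard published formulations, summarized from memory rather than from the thread): Bahdanau-style additive attention for RNN seq2seq, and the scaled dot-product attention the Transformer later built on.

```latex
% Bahdanau-style additive attention (RNN seq2seq): score the previous decoder
% state s_{i-1} against each encoder state h_j, normalize, take a weighted sum.
\begin{aligned}
e_{ij} &= v_a^{\top}\tanh\!\left(W_a s_{i-1} + U_a h_j\right), \qquad
\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k}\exp(e_{ik})}, \qquad
c_i = \sum_{j}\alpha_{ij} h_j \\[4pt]
% The Transformer's scaled dot-product attention generalizes the same idea,
% with queries, keys, and values all computed from the sequence itself.
\mathrm{Attention}(Q,K,V) &= \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V
\end{aligned}
```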
@peterjliu
Peter J. Liu
3 months
While OpenAI is not as open as before, you have to admit that 3 of their 4 accepted papers at #ICLR2025 (counting papers with more than 2 OpenAI authors) being oral presentations (orals are about 6% of papers) is impressive, and what they are publishing is high quality.
0
0
9
@peterjliu
Peter J. Liu
3 months
FYI it's a lot of tokens, tens of millions -- more than long-context models can handle, and this is a lot more accurate than standard RAG.
1
0
3
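A rough scale check on the numbers in the post above, using the common ~4 characters per token heuristic (the per-paper size is an assumption for illustration): a corpus the size of the ICLR example further down already runs to tens of millions of tokens, an order of magnitude beyond even a 1M-token context window.

```python
# Rough scale check: why tens of millions of tokens exceed any single context window.
# Uses the common ~4 characters-per-token heuristic; per-paper size is an assumption.

CHARS_PER_TOKEN = 4
num_papers = 3000          # roughly the ICLR 2025 accepted-paper count mentioned below
chars_per_paper = 40_000   # ~a dense 10-15 page paper as plain text (illustrative)

total_tokens = num_papers * chars_per_paper / CHARS_PER_TOKEN
print(f"~{total_tokens / 1e6:.0f}M tokens in the corpus")        # ~30M
print(f"~{total_tokens / 1e6:.0f}x a 1M-token context window")   # ~30x
```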
@peterjliu
Peter J. Liu
3 months
Try it now at @radpod_ai.
1
0
2
@peterjliu
Peter J. Liu
3 months
One of the top machine learning conferences, #ICLR2025, is this week. But there are 3000+ accepted papers, which is a lot to sift through. Use RadPod to chat with them all and quickly home in on your interests. Example queries: "find papers with more than one OpenAI-affiliated
1
3
8
@peterjliu
Peter J. Liu
4 months
@yaozhaoai and I have been researching how to get LLM reasoning agents to work reliably, especially with large amounts of unstructured document context. It is relatively under-studied but is critical to unlocking AI for a lot of high-value knowledge work (finance, legal,
0
0
5
@peterjliu
Peter J. Liu
4 months
But clearly there's a lot more that could be found out if people could use a tool like RadPod to dig into this further with their own questions. Check it out to explore some example questions/responses and sign up to try your own on the 2300+ JFK files!
1
0
4