
Sriram Krishnan
@sriramk
Followers
300K
Following
128K
Media
11
Statuses
459
bitter-lesson-pilled. official: @skrishnan47
Joined December 2006
An important thing to track: market share of American AI vs. Chinese AI, right down to tokens/month inferenced and the models + hardware + stack used.
Marc Andreessen on the AI race: 20 years from now, the world is going to be running on either Chinese AI or American AI. AI will be the control layer to everything, and it will teach your kids.
14
35
322
things I personally would love from LLMs/frontier models:
- be able to have my personal data (email/docs/messages) in context at all times
- learn from previous prompts from me and others (see earlier post from @dwarkesh_sp and @karpathy)
- notice/suggest "agentification" of…
49
32
640
Really good post from @dwarkesh_sp on continuous learning in LLMs. Also see @karpathy's response to this.
dwarkesh.com
Continual learning is a huge bottleneck
7
28
387
Think @dwarkesh_sp is onto something with “AI management” and seeing future Silicon Valley organizations being much better run. Already seeing some small moves toward this.
dwarkesh.com
Everyone is sleeping on the *collective* advantages AIs will have, which have nothing to do with raw IQ - they can be copied, distilled, merged, scaled, and evolved in ways humans simply can't.
9
29
340
A lot of people are pointing to @tobi’s AI memo, and one easily overlooked part is that *learning to use AI well is a skill*. I’ve learned a ton watching people use these tools, especially those who have gone all in on using them seriously. Deep knowledge of how to use all…
28
57
834
Now have an official account - @skrishnan47. Do follow for all policy work on AI. Will try and talk more about everything from pro wrestling to gaming here!
26
9
325
find myself wanting this every day. feels like a fundamental limit still of what you can squeeze into context. some good responses in the thread below.
What is currently the best solution for turning a large collection of personal notes and chats into an LLM-interrogable dataset? It seems like most off-the-shelf options can't keep context at scale.
24
17
235
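The question in the quoted tweet is usually answered with some form of retrieval: chunk the notes, index them, pull only the most relevant chunks into the model's context per query. A minimal sketch of that pattern is below; real systems replace the bag-of-words cosine scorer with embedding vectors and a vector store, and every name and parameter here (chunk size, `retrieve`, the sample notes) is illustrative, not a specific product or the author's setup.

```python
# Sketch of chunk-and-retrieve over personal notes, using a
# bag-of-words cosine score as a stand-in for embeddings.
import math
import re
from collections import Counter

def chunk(text, max_words=50):
    """Split a note into fixed-size word windows."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

def vectorize(text):
    # Lowercased word counts; an embedding model would go here instead.
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(chunks, query, k=2):
    """Return the k chunks most similar to the query."""
    qv = vectorize(query)
    return sorted(chunks,
                  key=lambda c: cosine(vectorize(c), qv),
                  reverse=True)[:k]

# Hypothetical notes standing in for a real email/docs/messages corpus.
notes = [
    "Meeting notes: we decided to ship the billing feature next sprint.",
    "Recipe ideas: tomato soup, garlic bread, roasted vegetables.",
]
index = [c for n in notes for c in chunk(n)]
top = retrieve(index, "when are we shipping billing?", k=1)
print(top[0])  # the billing meeting note, not the recipes
```

The retrieved chunks, not the whole corpus, are what get placed in the prompt, which is how these systems sidestep the context-size limit the tweet complains about.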