Jason Zhang

@minisounds

Followers 86 · Following 116 · Media 2 · Statuses 41

ml research @lambdaAPI / cs @stanford / lead speaker series @stanfordaiclub

Palo Alto
Joined May 2022
@minisounds
Jason Zhang
7 days
Lambda's ML Research Team is hiring interns right now! Would highly recommend applying, the team here is incredibly talented (just went 6/6 at NeurIPS this year) and has a culture of genuine passion, humility, and curiosity that is infectious. DM if you have any questions about
jobs.ashbyhq.com
Lambda, The Superintelligence Cloud, builds Gigawatt-scale AI Factories for Training and Inference. Lambda’s mission is to make compute as ubiquitous as electricity and give every person access to...
1
0
3
@stanfordaiclub
Stanford AI Club
9 days
This Tuesday, @rauchg (CEO, Vercel) will be joining the speaker series. Apply for limited seating: https://t.co/YW3ferJntC
0
3
6
@minisounds
Jason Zhang
24 days
Great talk from Jason! More of these to come!
@stanfordaiclub
Stanford AI Club
24 days
Was awesome to have @_jasonwei join us for the speaker series yesterday! Here's his talk on the 3 Key Ideas to Understand in AI in 2025:
0
0
2
@minisounds
Jason Zhang
1 month
@stanfordaiclub
Stanford AI Club
1 month
Our next speaker series will feature @_jasonwei from @meta super-intelligence labs! Apply here for limited seating: https://t.co/YJGoZMcfVG
0
0
2
@minisounds
Jason Zhang
1 month
let’s go!!
@stanfordaiclub
Stanford AI Club
1 month
For our next speaker series event, @ishanmkh from @rox_ai will be joining us this Wednesday! Apply to join through the QR code for limited seating.
0
0
0
@stanfordaiclub
Stanford AI Club
1 month
We had a great conversation with @paraga yesterday! Thanks to everyone for coming and stay tuned for more exciting Speaker Series this year!
0
2
4
@minisounds
Jason Zhang
1 month
Starting this year’s @stanfordaiclub speaker series off strong with Parag Agrawal, current CEO of Parallel Web Systems and former CEO of Twitter! Apply for limited seating via the QR Code attached or through https://t.co/l7ADojMbLZ! Lots of exciting events and speakers coming
@BhathalTanvir0
Tanvir Bhathal
1 month
@paraga is coming to Stanford this Thursday! Limited seats, sign up soon: https://t.co/LG6pgkGF2f
0
0
4
@minisounds
Jason Zhang
2 months
Super excited to be helping organize this; sign up & win up to $20k in lambda compute credits :)
@LambdaAPI
Lambda
2 months
AI that sees, hears, and reasons: superintelligence starts here. #LambdaResearch invites all researchers, engineers and AI enthusiasts to participate in the Grand Challenge on Multimodal Superintelligence. Join us and receive up to $20,000 compute credit per team to build the
0
0
2
@LambdaAPI
Lambda
3 months
Exciting news from #LambdaResearch and collaborators 🎉 Their paper "DepR: Depth-guided Single-view Reconstruction with Instance-level Diffusion" has been accepted into @ICCVConference, one of the most prestigious conferences in the field of computer vision. It tackles the
3
5
13
@minisounds
Jason Zhang
3 months
clarity of thought is the ultimate moat
1
0
3
@minisounds
Jason Zhang
4 months
super fun to play around with, go check it out :)
@LambdaAPI
Lambda
4 months
We're thrilled to introduce NeuralOS, Lambda’s first neural-native operating system: https://t.co/xHCfUxJ7RJ The OS, the apps, the network, the logic... all dreamed in the mind of a model. This is where words become apps. And thoughts become systems. - Create your own space
0
0
0
@mihirp98
Mihir Prabhudesai
4 months
🚨 The era of infinite internet data is ending, So we ask: 👉 What’s the right generative modelling objective when data—not compute—is the bottleneck? TL;DR: ▶️Compute-constrained? Train Autoregressive models ▶️Data-constrained? Train Diffusion models Get ready for 🤿 1/n
127
198
1K
@aryaman2020
Aryaman Arora
9 months
new paper! 🫡 we introduce 🪓AxBench, a scalable benchmark that evaluates interpretability techniques on two axes: concept detection and model steering. we find that: 🥇prompting and finetuning are still best 🥈supervised interp methods are effective 😮SAEs lag behind
9
71
419
@minisounds
Jason Zhang
10 months
A special thanks to @ZhengxuanZenWu for advising throughout this project!
0
0
1
@minisounds
Jason Zhang
10 months
[5/5]: Lastly, we experiment with compositional steering using SAE activation weights (Ex: Does injecting -1 * happy + sad into Gemma-2-2b steer towards sad sentiment more consistently?) as a proof of concept and find moderate signals of success in consistency (~15-20% bump) in
1
0
0
@minisounds
Jason Zhang
10 months
[4/5]: Even more surprisingly, we find that the 20 most geometrically similar feature pairs in the entire feature space range from very loosely to completely semantically unrelated. For example, while GemmaScope-2-2b's features related to "brightness" and "darkness" have a
1
0
0
@minisounds
Jason Zhang
10 months
[3/5]: But we find SAEs don't mirror these phenomena. We examine 20 semantically related feature pairs (Ex: hot-cold, happy-sad) from the Encoder and Decoder weights of GemmaScope-2-2b and find their respective cosine similarities closely mirror random baselines.
1
0
0
@minisounds
Jason Zhang
10 months
[2/5]: One of the founding principles within language modeling is that semantically related concepts are all geometrically related and similar to each other. (Ex: "King 👨👑 - Man👨+ Woman 👩= Queen👩👑") This logic governs the way we train our embedding models, conceptualize
1
0
0
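The geometric claim this thread tests can be illustrated with a minimal, self-contained sketch (synthetic vectors only; the names `hot`/`cold` and the dimension 512 are stand-ins, not actual GemmaScope-2-2b features): semantically opposed directions should have strongly negative cosine similarity, while random pairs sit near zero. The thread's finding is that SAE feature pairs look like the random case.

```python
# Toy sketch (not the thread's actual code) of the cosine-similarity
# comparison described above: an antonym-style pair of directions
# should be strongly anti-aligned, while random pairs hover near 0.
# All vectors here are synthetic; "hot"/"cold" are stand-ins, not
# real SAE features.
import math
import random

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

random.seed(0)
d = 512  # arbitrary illustrative dimension

# Idealized antonym pair: "cold" is roughly the negation of "hot".
base = [random.gauss(0, 1) for _ in range(d)]
hot = [b + random.gauss(0, 0.1) for b in base]
cold = [-b + random.gauss(0, 0.1) for b in base]
antonym_sim = cosine(hot, cold)

# Random baseline: unrelated Gaussian vectors are near-orthogonal
# in high dimensions.
rand_sims = [
    cosine([random.gauss(0, 1) for _ in range(d)],
           [random.gauss(0, 1) for _ in range(d)])
    for _ in range(20)
]
baseline = sum(rand_sims) / len(rand_sims)

print(f"antonym pair cosine: {antonym_sim:.3f}")  # strongly negative
print(f"random baseline mean: {baseline:.3f}")    # near zero
```

In this idealized setting the antonym pair lands near -1 and the baseline near 0; the thread reports that real SAE feature pairs land near the baseline instead.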