Abhay Gupta
@gupta__abhay
Followers
415
Following
4K
Media
29
Statuses
1K
Scaling and efficiency lead @DbrxMosaicAI | Previously @CerebrasSystems @CMU_Robotics | Making GPUs and agents go brrrr !!
San Francisco, CA
Joined November 2014
Every time I see some SGLang- or vLLM-only feature being talked about
0
0
2
Is the @iclr_conf review deadline really Oct 31? 5 papers in 15 days, that’s just begging for bad reviews 🤦‍♂️🤦‍♂️
0
0
1
Raj Ammanabrolu (@rajammanabrolu) on the safety-capability spectrum when training agents from feedback -- move away from scalar rewards to multi-faceted rewards that can act as safety-capability levers -- CLoud: generate a reward based on the critique of the output. The critique
The IVADO workshop on Agent Capabilities and Safety is happening now at HEC Montreal, Downtown (Oct 3–6) https://t.co/MEL4JAzLRn
#LLMAgents
0
2
6
Today, @ekindogus and I are excited to introduce @periodiclabs. Our goal is to create an AI scientist. Science works by conjecturing how the world might be, running experiments, and learning from the results. Intelligence is necessary, but not sufficient. New knowledge is
430
452
4K
what's happened to the @PyTorch docs? They were so pleasant to read and now .... :sigh:
0
0
0
We're releasing the DASLab GGUF Quantization Toolkit! 🚀 First open-source toolkit bringing GPTQ + EvoPress to @ggerganov's GGUF format, enabling heterogeneous quantization based on importance. Result: Better models at the same file size. [1/5]
4
50
268
Dario: Claude will take your job, but it will feel ashamed.
Elon: Look at this anime girl. She says the N word and is almost naked.
Zuck: ✨Superintelligence✨ will help people watch more instagram reels.
Demis: Gemini recently calculated more precisely the motion of the
143
597
11K
We’re super excited about our recent performance and strong momentum. We’re looking forward to accelerating our AI strategy — expanding Agent Bricks, launching the new Lakebase category, and fueling global growth.
1
5
46
Big milestone for Databricks! We just shared our strongest results ever: 📈
- Surpassing $4 billion revenue run-rate, growing >50% year over year.
- Exceeding $1 billion revenue run-rate for our AI products.
We are also closing our Series K funding, raising $1 billion of
6
30
161
What if you could reliably monitor, evaluate, and control your AI’s behavior with a single, adaptable tool – no deep expertise required? Databricks’ new Prompt-Guided Reward Model brings together reward modeling and judging to do just that. PGRM is your AI’s quality control
3
4
18
This thread captures what’s at the heart of empiricism and converting futuristic ideas to commodities everyone can benefit from!!! Scaling good science and infra is the only path to the AI-integrated future we want for ourselves.
1/ Some pundits are predicting that the AI bubble will burst. I doubt it. But more ideas or compute won't unlock an "intelligence explosion." The biggest bottleneck AI research faces is the pace and quality of experimentation.
0
1
4
There’s probably better sushi in Los Altos! Also an under-explored area for sure.
@nadavitall La Bodeguita del Medio is the best Cuban food I’ve ever had. Daigo is probably top-3 sushi. Zola has the best steak frites. Protege is probably one of the best fine dining experiences without the bullshit.
0
0
0
Agents are the future! We’re adding an amazing team to build it with us.
We're excited to share that @TectonAI will soon join Databricks, providing enterprises with fast, reliable, real-time data for deploying AI agents. Tecton’s technology helps enterprises leverage their mission-critical data to power AI agents for critical use cases. Bringing
0
0
0
In all respects, number still go up !!!
Databricks just signed a Series K term sheet at >$100B valuation to scale two flagship products:
🔥 Lakebase — serverless Postgres with true compute/storage separation
🧠 Agent Bricks — agentic framework with built-in reasoning guardrails for enterprise data
0
0
3
Not that I have a favorite recent project, but... 🧵
LLM judges are the popular way to evaluate generative models. But they have drawbacks. They're:
* Generative, so slow and expensive.
* Nondeterministic.
* Uncalibrated. They don't know how uncertain they are.
Meet PGRM!
Ever wonder what it'd look like if an LLM Judge and a Reward Model had a baby? So did we, which is why we created PGRM -- the Prompt-Guided Reward Model. TLDR: You get the instructability of an LLM judge + the calibration of an RM in a single speedy package (1/n)
4
15
77