Aditya Desai Profile
Aditya Desai

@Apd10Desai

Followers
29
Following
21
Media
0
Statuses
11

Post Doc, UC Berkeley

Berkeley, CA
Joined August 2022
@xalg_ai
xAlg-ai
6 days
Excited to share our new research: vAttention - Verified Sparse Attention. Sparse attention with provable quality guarantees for LLMs. Full paper: https://t.co/pvOSEI8E7J GitHub: xAlg-ai/sparse-attention-hub 🧵 A thread 👇
arxiv.org
State-of-the-art sparse attention methods for reducing decoding latency fall into two main categories: approximate top-$k$ (and its extension, top-$p$) and recently introduced sampling-based...
1
3
6
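The abstract above distinguishes approximate top-k methods from sampling-based ones. As a point of reference, here is a minimal NumPy sketch of plain top-k sparse attention for a single query; it illustrates the general technique only, not the vAttention method itself:

```python
import numpy as np

def topk_sparse_attention(q, K, V, k):
    # Keep only the k keys with the largest scores; softmax over that subset.
    scores = K @ q / np.sqrt(q.shape[0])            # (n,)
    keep = np.argpartition(scores, -k)[-k:]         # indices of the top-k scores
    w = np.exp(scores[keep] - scores[keep].max())   # numerically stable softmax
    w /= w.sum()
    return w @ V[keep]                              # (d,)

rng = np.random.default_rng(0)
n, d = 128, 16
q, K, V = rng.normal(size=d), rng.normal(size=(n, d)), rng.normal(size=(n, d))
approx = topk_sparse_attention(q, K, V, k=16)

# Dense attention over all n keys, for comparison.
s = K @ q / np.sqrt(d)
w = np.exp(s - s.max())
w /= w.sum()
dense = w @ V
print(np.abs(approx - dense).max())   # approximation error vs. dense attention
```

With k = n this reduces exactly to dense attention; the quality question the paper addresses is how much attention mass the dropped keys carried.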
@Apd10Desai
Aditya Desai
1 month
Very similar to my experience with coding agents so far.
@karpathy
Andrej Karpathy
2 months
Continuing the journey of optimal LLM-assisted coding experience. In particular, I find that instead of narrowing in on a perfect one thing my usage is increasingly diversifying across a few workflows that I "stitch up" the pros/cons of: Personally the bread & butter (~75%?) of
0
0
0
@LakshyAAAgrawal
Lakshya A Agrawal
2 months
Very excited to share that GEPA is now live on @DSPyOSS as dspy.GEPA! This is an early code release. We’re looking forward to community feedback, especially about any practical challenges in switching optimizers.
9
39
321
@profjoeyg
Joey Gonzalez
2 months
I am really excited to announce the release of LEANN. Most vector stores more than double the size of the data they index. With LEANN, we only increase storage costs by a few percent. We made it easy to use LEANN to index your email and document folders and apply RAG without
@YichuanM
Yichuan Wang
2 months
1/N 🚀 Launching LEANN — the tiniest vector index on Earth! Fast, accurate, and 100% private RAG on your MacBook. 0% internet. 97% smaller. Semantic search on everything. Your personal Jarvis, ready to dive into your emails, chats, and more. 🔗 Code: https://t.co/QwkYx1t0oa 📄
0
5
33
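The storage savings come from not materializing every embedding. A toy Python sketch of the underlying recompute-at-query-time idea, assuming a simple hashed bag-of-words `embed` function as a hypothetical stand-in for a real embedding model (this illustrates the general idea only, not LEANN's actual index):

```python
import numpy as np

def embed(text):
    # Hypothetical embedding: hashed bag of words, L2-normalized; a stand-in
    # for the real model an index like this would recompute with.
    v = np.zeros(64)
    for word in text.lower().split():
        v[hash(word) % 64] += 1.0
    return v / (np.linalg.norm(v) or 1.0)

class RecomputeIndex:
    # Store only the documents and a neighbor graph; embeddings are discarded
    # after graph construction and recomputed on demand during search.
    def __init__(self, docs, embed_fn, n_neighbors=3):
        self.docs, self.embed = docs, embed_fn
        E = np.stack([embed_fn(d) for d in docs])
        sims = E @ E.T
        self.graph = [list(np.argsort(-sims[i])[1:n_neighbors + 1])
                      for i in range(len(docs))]
        # E is dropped here: only the raw docs and the graph are kept.

    def search(self, query, start=0, steps=10):
        qv = self.embed(query)
        cache = {}                  # per-query cache of recomputed embeddings
        def sim(i):
            if i not in cache:
                cache[i] = self.embed(self.docs[i])   # recompute on demand
            return float(cache[i] @ qv)
        best = start
        for _ in range(steps):      # greedy walk toward the query
            nxt = max(self.graph[best], key=sim)
            if sim(nxt) <= sim(best):
                break
            best = nxt
        return self.docs[best]

docs = [
    "the cat sat on the mat",
    "dogs bark at strangers",
    "stocks rose on earnings news",
    "rainfall was heavy in april",
]
idx = RecomputeIndex(docs, embed)
print(idx.search("stocks rose on earnings news"))
```

The trade-off is extra embedding computation at query time in exchange for storing only text plus graph edges.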
@matei_zaharia
Matei Zaharia
2 months
Really excited about ALHF, new work from our research team that lets users give natural language feedback to agents and optimizes them for it. It sort of upends the traditional supervision paradigm where you get a scalar reward, and it makes AI more customizable for non-experts.
2
31
223
@Apd10Desai
Aditya Desai
2 months
Congratulations @Anshumali_. Very happy to read this news!
@RiceCompSci
Rice Computer Science
2 months
Congrats to Rice CS' @Anshumali_ Shrivastava, who has been promoted to full professor. Shrivastava is well on his way to revolutionizing how LLMs & other deep learning models are trained & stored, using new algorithms to make AI scalable & more accessible. https://t.co/8VpFk371gp
1
0
3
@matei_zaharia
Matei Zaharia
3 months
AI progress is hardest in tasks where feedback is slow, so I'm super excited about this work to learn faster using reflective prompt updates! In just 1 rollout, a reflective optimizer can make a natural language update to a prompt that greatly boosts perf, unlike RL on weights.
@LakshyAAAgrawal
Lakshya A Agrawal
3 months
How does prompt optimization compare to RL algos like GRPO? GRPO needs 1000s of rollouts, but humans can learn from a few trials—by reflecting on what worked & what didn't. Meet GEPA: a reflective prompt optimizer that can outperform GRPO by up to 20% with 35x fewer rollouts!🧵
1
15
124
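The contrast with GRPO drawn above is that the optimizer consumes textual feedback rather than only a scalar reward. A toy Python sketch of a reflective prompt-update loop, with hypothetical `rollout` and `reflect` stubs standing in for real LLM calls (this is the loop shape the tweet describes, not the GEPA algorithm itself):

```python
RUBRIC = ["be concise", "cite sources", "answer in JSON"]

def rollout(prompt):
    # Toy stand-in for running the program and judging the result: returns a
    # scalar score plus natural-language feedback (the signal GRPO discards).
    missing = [r for r in RUBRIC if r not in prompt]
    score = 1 - len(missing) / len(RUBRIC)
    feedback = "Add the instruction: " + missing[0] if missing else "Looks good."
    return score, feedback

def reflect(prompt, feedback):
    # Toy stand-in for the reflective step: a real optimizer would ask an
    # LLM to rewrite the prompt in light of the feedback.
    prefix = "Add the instruction: "
    if feedback.startswith(prefix):
        return prompt + " " + feedback[len(prefix):] + "."
    return prompt

prompt = "You are a helpful assistant."
for _ in range(len(RUBRIC)):          # one prompt update per rollout
    score, feedback = rollout(prompt)
    if score == 1:
        break
    prompt = reflect(prompt, feedback)

print(prompt)
```

Each rollout here yields a targeted edit, which is why a reflective optimizer can converge in far fewer rollouts than a method that only sees a reward number.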
@RunLLM
RunLLM
3 months
🎬 AI Hot Take 🔥 "What makes a bad AI product is a product that's too focused on AI." - UC Berkeley Professor @profjoeyg 👀 Watch the full mini-documentary here! https://t.co/96sMjz5hrs
0
4
6
@yuxin_tang_
Yuxin Tang 🦉
3 months
📣📣📣Excited to meet @Anshumali_ at ICML 2025! I want to promote their work on parameter-efficient fine-tuning (PEFT) for LLMs using sketches. This method offers great results with fewer parameters. @ZhaozhuoX @Tianyi_zha @Apd10Desai
0
2
4
@luisgschroeder
Luis Gaspar Schroeder
4 months
What if we can guarantee the correctness of LLM responses? Turns out—we can! We built vCache, the first semantic cache with error guarantees. For the first time, you can use semantic caching while knowing exactly how often the system is allowed to make a mistake.
6
4
6
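A minimal Python sketch of a semantic cache, assuming a toy hashed bag-of-words embedding as a hypothetical stand-in for a real model. Note that vCache's contribution is calibrating per-entry similarity thresholds to meet a user-specified error rate; this sketch uses one fixed global threshold, which carries no such guarantee:

```python
import numpy as np

class SemanticCache:
    # Reuse a stored response when a new query's embedding is close enough
    # to a cached one. A fixed global threshold, unlike vCache's calibrated
    # per-entry thresholds, bounds nothing about the error rate.
    def __init__(self, embed, threshold=0.8):
        self.embed = embed          # function: str -> unit-norm vector
        self.threshold = threshold
        self.entries = []           # list of (embedding, response) pairs

    def put(self, query, response):
        self.entries.append((self.embed(query), response))

    def get(self, query):
        v = self.embed(query)
        best, best_sim = None, -1.0
        for e, resp in self.entries:
            sim = float(e @ v)      # cosine similarity (unit vectors)
            if sim > best_sim:
                best, best_sim = resp, sim
        return best if best_sim >= self.threshold else None

def embed(text):
    # Hypothetical embedding: hashed bag of words, L2-normalized.
    v = np.zeros(64)
    for word in text.lower().split():
        v[hash(word) % 64] += 1.0
    return v / (np.linalg.norm(v) or 1.0)

cache = SemanticCache(embed, threshold=0.8)
cache.put("what is the capital of France", "Paris")
print(cache.get("what is the capital of France?"))   # near-duplicate: cache hit
```

On a hit the cached response is returned without calling the LLM; the open question such a cache must answer, and the one vCache formalizes, is how often that reuse is allowed to be wrong.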
@LiorOnAI
Lior Alexander
2 years
GPT-Engineer just hit 12,000 stars on GitHub. It's an AI agent that can write an entire codebase with a prompt and learn how you want your code to look. ▸ Asks clarifying questions ▸ Generates technical spec ▸ Writes all necessary code ▸ Easy to add your own reasoning
68
329
2K