
Deepak Bhandari
@DeepDiveonAI
Followers: 81 · Following: 3K · Media: 23 · Statuses: 511
Deep dives into AI tools, agents and what's next in tech. Follow for curated insights, honest takes and reviews.
India
Joined March 2018
RT @conor_ai: If Soham ever applies to YC he's going to have a god-tier answer for this question
Cursor is the most exciting software tool out there, and it will disrupt coding efficiency at a scale nobody can fathom right now. This is a $100bn company right there. Remember this tweet ✅.
Cursor is now on your phone and on the web. Spin up dozens of agents and review them later in your editor.
The best shortcut to building functional, aesthetic websites.
🚀 Thrilled to release the full redesign of the Butternut AI website 🚀 We’ve redesigned the website from first principles to make it faster, simpler and more focused. What’s new:
✅ A modern, intuitive UI
✅ Cleaner fonts, smoother navigation
✅ Dark mode
✅ Live chat
✅
Some founders never rest even after building multi-billion-dollar companies!
If you get an email from Mark Zuckerberg, do not assume that it is fake. He has taken over recruitment for the superintelligence lab and is reaching out to hundreds of prospects personally. If you respond, the next step is an invitation to dinner.
I agree! Human ingenuity, wisdom and expertise are critical to building scalable, robust tech.
GitHub CEO Thomas Dohmke had a clear message for startup founders at VivaTech in Paris: a startup built solely with AI coding assistants "doesn't have much value." AI tools are great for launching quickly, but scaling a product and attracting serious investors still requires
A real game-changing update would be Claude shipping a 100M-token context window. Beyond hobby projects, this is a genuine limitation of AI coding agents. Engineers on my team have lost confidence in these tools' ability to understand full-codebase context.
AI coding agents hit a wall when codebases get massive. Even with 2M token context windows, a 10M line codebase needs 100M tokens. The real bottleneck isn't just ingesting code - it's getting models to actually pay attention to all that context effectively.
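A rough check on the math in that quoted tweet, assuming an average of about 10 tokens per line of code (an assumed figure, not one stated in the tweet): 10M lines × ~10 tokens/line ≈ 100M tokens, roughly 50× a 2M-token context window.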