David Garnitz
@DGarnitz
Followers 160 · Following 9K · Media 10 · Statuses 337
Secure production deployment of agents with access to real-world data is one of the most interesting problem spaces in AI right now. With @yapify_ai we wanted to use AI to QA our code, but couldn't 100% guarantee it wouldn't see an email if it had browser control, so we didn't
Why securing AI is harder than anyone expected, and the approaching AI security crisis, with @SanderSchulhoff. Sander is a leading researcher in the field of adversarial robustness, which is the art and science of getting AI systems to do things they shouldn't do, through
the more features LLMs provide, the more knobs we have to figure out when and how to turn to get the right result. This was already a major issue with RAG 2 years ago and is fast becoming a problem again. You can't exhaustively test everything
a question I am mulling around in my head: how will improvements in AI translate to us having a better understanding of our own biomechanics
Yap your Flows!!
@WisprFlow is acquiring @yapify_ai to accelerate building the Voice Operating System! After nerding out with @tankots about building the real-world Jarvis, it was obvious he shared the same vision @DGarnitz and I had when we started Yapify. Voice is becoming the primary way
Pretty cool AI use case! Accidentally ordered the wrong size ski goggles for Black Friday. Simple enough problem to delegate to AI. Instant response, which is great as a customer. I wonder who powers this
A little-explored consequence of LLMs: DIY projects have become 1-2 orders of magnitude easier. What happens to the economy when people can fix a sink that previously required a plumber, or cook a dish that was only possible at a restaurant before?
trying to do micro-meditations while the LLM runs, 3-10 deep focused breaths, rather than multitask
if you're using LLMs to write emails or texts for you, try generating one with Claude then editing it with ChatGPT. The mixture of these two personas can lead to nice results
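A minimal sketch of what that two-step draft-then-edit workflow could look like via the Anthropic and OpenAI Python SDKs instead of the chat apps; the model names, prompts, and the draft_then_edit helper are illustrative assumptions, not anything from the original post.

```python
# Hypothetical sketch: draft an email with Claude, then run an editing pass with a GPT model.
from anthropic import Anthropic
from openai import OpenAI

def draft_then_edit(request: str) -> str:
    # Step 1: ask Claude for a first draft of the email.
    claude = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    draft = claude.messages.create(
        model="claude-sonnet-4-20250514",  # assumed model choice
        max_tokens=500,
        messages=[{"role": "user", "content": f"Write a short, friendly email: {request}"}],
    ).content[0].text

    # Step 2: hand the draft to a GPT model for tone/concision edits.
    gpt = OpenAI()  # reads OPENAI_API_KEY from the environment
    edited = gpt.chat.completions.create(
        model="gpt-4o",  # assumed model choice
        messages=[
            {"role": "system", "content": "Edit the email for tone and concision; keep the meaning."},
            {"role": "user", "content": draft},
        ],
    )
    return edited.choices[0].message.content

print(draft_then_edit("asking a colleague to reschedule Thursday's meeting"))
```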
while there is bubble-like hype in AI, the enthusiasm around optimized model inference/serving is legit. You can get so much performance out of LLM apps by pumping tokens through LLMs, even small models. As self-hosted OSS models become faster and cheaper, this will unlock a lot
fascinating to see X full of people who have never built an agent raving about AI-native Slack. I am highly skeptical this will work with the current generation of LLMs because **predicting intentionality in messages is extremely hard** even with tons of integrations & context
just a reminder to my fellow builders working on AI that a nice chunk of the populace has no clue about the benefits of what we are doing
Before starting a company, I had never even heard of Notion. 3 years later it's a critical tool for managing my personal life. One great thing about being surrounded by early-stage people is how quick they are to adopt tools
Startups suffer heavily from the narrative fallacy. This refers to our tendency to create coherent stories to explain events, especially success or failure, even when a large part of the outcome is driven by randomness, timing, or other uncontrollable factors.
Still waiting for Apple to come out with a best-in-class on-device LLM for my MacBook. A quantized 30B model or an 8B MoE with SoTA performance would be a game changer for both app development & personal productivity