
David Clark
@TheDaveClark
1K Followers · 812 Following · 94 Media · 593 Statuses
3x founder, 1x exit, ex-Amazon. Follow to learn how to build better automations with AI. Building Joy, VC-backed AI tools for real estate.
Joined March 2011
90% of founders waste years building products nobody wants. They all skip the ONE question that matters most. Answering it is how I got strangers on Reddit to pay me for an 'AI' startup before even having a product. The "reverse launch" method (get paid first, build second): 🧵
Anyone else noticing Gemini has been REALLY good recently? It makes sense this would eventually happen: Google search > Bing search (OpenAI). LLM quality = data + compute.
Why do founders resist "small" pivots? Because admitting you were wrong about one thing feels like admitting you're wrong about everything. But markets don't care about your ego. They care about solutions to real problems. Pride is expensive in startups.
Bookmark this post so you can revisit in the future. If this framework helped you, follow @thedaveclark for more AI automation and startup insights. I share strategies from building with AI at Amazon and my VC-backed startup. RT the first tweet to help others master AI
The counterintuitive truth: Powerful AI isn't about complex prompts. It's about simple, precise instructions. Master this, and you'll be in the top 1% of AI builders.
Quick implementation summary:
1. Break your large task into micro-prompts
2. Write PACT (Persona, Action, Context, Template) prompts for each decision
3. Be painfully explicit
4. Write in the affirmative
5. Eliminate logical conflicts
6. Turn down the temperature (usually)
7. Force output structure with constraints
8. Freeze your model version & test every change
Framework Rule #8: Freeze & Test
Most models auto-update by default:
- Freeze your model version
- Build regression tests
- Test EVERY prompt change
A "minor" version release can break your workflow. Treat prompts like production code. Source control + unit tests on changes.
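A minimal sketch of what "freeze & test" can look like, assuming the OpenAI Python SDK and pytest; the pinned snapshot name, prompt, and golden cases are illustrative, not from the thread:

```python
# Sketch: pin a dated model snapshot and regression-test prompt changes.
from openai import OpenAI

client = OpenAI()
PINNED_MODEL = "gpt-4o-2024-08-06"  # dated snapshot, not a floating alias

def classify_ticket(text: str) -> str:
    resp = client.chat.completions.create(
        model=PINNED_MODEL,  # frozen version: no silent auto-updates
        temperature=0,       # as deterministic as possible
        messages=[
            {"role": "system", "content": "Classify the ticket as one of: billing, bug, account."},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content.strip().lower()

# Golden cases: run on EVERY prompt or model change (e.g. via pytest).
GOLDEN_CASES = [
    ("I was charged twice this month", "billing"),
    ("The app crashes when I upload a photo", "bug"),
]

def test_golden_cases():
    for text, expected in GOLDEN_CASES:
        assert classify_ticket(text) == expected
```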
Framework Rule #7: Output Constraints
Force structure with:
- JSON schemas
- Numbered lists
- Character limits
- Required fields
Constraints paradoxically improve quality. The LLM knows exactly what success looks like.
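A minimal sketch of constrained output plus validation, using Python's standard json module; the schema, field names, and prompt wording are illustrative:

```python
import json

# Illustrative prompt: pins the exact JSON shape the model must return.
PROMPT = (
    "Summarize the support ticket below. Respond with ONLY valid JSON "
    "matching this template:\n"
    '{"category": "billing|bug|account", "urgency": "low|medium|high", '
    '"summary": "max 30 words"}\n\n'
    "Ticket: <ticket text here>"
)

REQUIRED_FIELDS = {"category", "urgency", "summary"}

def parse_response(raw: str) -> dict:
    """Reject any model output that doesn't match the required shape."""
    data = json.loads(raw)  # raises ValueError if the output isn't JSON
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    return data
```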
Framework Rule #6: Turn Down the Temperature
Temperature controls randomness. For consistency:
- Business logic / critical decisions: 0-0.2
- Some variation (email templates, product descriptions): 0.3-0.6
- Creative tasks: 0.7+
Lower temp = more predictable outputs.
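One way to make the rule of thumb concrete is to keep temperature a per-task setting instead of a global default. A tiny sketch; the task names are illustrative, the ranges are the ones above:

```python
# Temperature as a deliberate per-task config (ranges from the tweet above).
TEMPERATURE_BY_TASK = {
    "classify_ticket": 0.0,    # business logic / critical decisions: 0-0.2
    "route_ticket": 0.1,
    "draft_email": 0.4,        # templated text with some variation: 0.3-0.6
    "brainstorm_names": 0.8,   # creative tasks: 0.7+
}

def temperature_for(task: str) -> float:
    # Default cold: unknown tasks get the most predictable setting.
    return TEMPERATURE_BY_TASK.get(task, 0.0)
```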
Framework Rule #5: Eliminate Logical Conflicts
Your prompt can't contradict itself. Example:
❌ "Be extremely detailed but keep it under 50 words"
❌ "Use formal language but sound casual and friendly"
✅ Fix: "Use 50 words. Include price, timeline, and next steps."
Framework Rule #4: Write in the Affirmative
Tell the LLM what TO do, not what NOT to do. LLMs follow instructions better than restrictions.
❌ "Don't be verbose or use technical jargon"
✅ "Use simple words. Write sentences under 15 words."
Positive instructions = consistency.
Framework Rule #3: Be Painfully Explicit
LLMs don't infer. They predict. Every ambiguity = inconsistency. Example:
❌ "Summarize this briefly"
✅ "Summarize in exactly 3 bullet points, each 15 words maximum"
Framework Rule #2: Make a PACT
P - Persona: Give your LLM a specific role
A - Action: Define ONE clear action
C - Context: Provide relevant background
T - Template: Structure the output format
This primes the model to better complete the task at hand.
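A minimal sketch of a PACT prompt builder; the helper name and example values are illustrative, not an official template:

```python
def pact_prompt(persona: str, action: str, context: str, template: str) -> str:
    """Assemble a prompt from the four PACT parts."""
    return (
        f"You are {persona}.\n"               # P - Persona
        f"Your task: {action}.\n"             # A - Action (exactly one)
        f"Background:\n{context}\n"           # C - Context
        f"Format your answer as: {template}"  # T - Template
    )

prompt = pact_prompt(
    persona="a senior customer-support triage agent",
    action="classify the ticket below as billing, bug, or account",
    context="Ticket: 'I was charged twice this month.'",
    template="one word: billing, bug, or account",
)
```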
Breaking a large task into micro-tasks = cumulatively higher accuracy. At the end of the day, LLMs are trying to predict the next token. The more a prompt tries to do, the more likely it is to fail. The goal is consistent behavior from your LLM.
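A back-of-the-envelope illustration of why this can hold even though chained steps compound errors; the probabilities are made up for the example:

```python
# Illustrative numbers only: one overloaded prompt that nails the whole
# job 60% of the time vs. four focused micro-prompts at 95% each.
monolithic = 0.60
chained = 0.95 ** 4        # errors still compound across the chain
print(round(chained, 2))   # 0.81 -> still well above the overloaded prompt
```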
Framework Rule #1: Ask Less of Your LLM
❌ Bad: "Take this customer support ticket, classify, prioritize, route, and write a response"
✅ Good: Break the prompt into micro-tasks:
Prompt 1: Classify the ticket type
Prompt 2: Triage
Prompt 3: Route to team
Prompt 4: Draft response
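A minimal sketch of that micro-task chain, assuming the OpenAI Python SDK; the model name, team names, and prompt wording are illustrative:

```python
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """One small LLM call per micro-task (pinned model, temperature 0)."""
    resp = client.chat.completions.create(
        model="gpt-4o-2024-08-06",
        temperature=0,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()

def handle_ticket(ticket: str) -> str:
    # Each call makes exactly one decision, per Rule #1.
    category = ask(f"Classify this ticket as exactly one of: billing, bug, account.\n\nTicket: {ticket}")
    priority = ask(f"Triage this {category} ticket as exactly one of: low, medium, high.\n\nTicket: {ticket}")
    team = ask(f"Route a {priority}-priority {category} ticket to exactly one team: payments, engineering, success.")
    return ask(f"Draft a short, polite reply from the {team} team to this ticket:\n\n{ticket}")
```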
Complex prompts = inconsistent results. Simple prompts = reliable outputs. Think of it this way:
❌ "Handle this appropriately"
✅ "If angry customer, apologize. If confused, clarify. If happy, thank."
The secret? Break everything down AND add examples.
Additionally, Einstein's rule for physics applies almost perfectly to AI prompts: "Everything should be made as simple as possible, but not simpler." Strip away complexity, but keep what's essential. That's the entire framework.
First, the counterintuitive truth: Prompt failures usually come from putting TOO MUCH in your prompt. Like business writing: the best memos are written at a 5th-grade level. Clear. Simple. Zero ambiguity.
Most AI prompts break in production. Not because they're bad prompts. Because they ask too much. After 1000+ hours building AI systems, here's what actually works (& what Einstein had to say about prompting LLMs)
The 80/20 rule of AI programming: AI handles 80% of tasks brilliantly (hello world, simple scripts, boilerplate). The other 20% (complex business logic, production debugging, system architecture) requires deep expertise AI doesn't have. That 20% is where the money is.
The "demo effect" in AI programming: Perfect demonstrations in controlled environments create false confidence about real-world capability. It's like assuming someone who can cook a perfect dish following a recipe can run a restaurant kitchen during dinner rush. Similar