chetan conikee Profile
chetan conikee

@conikeec

Followers
2K
Following
26K
Media
271
Statuses
10K

Building Something Exciting. Ex-Founder of Qwiet AI (https://t.co/fw6wD90jF9) - Acquired Harness

Joined May 2008
@conikeec
chetan conikee
5 years
[A follow up post] : SUNBURST SolarWinds Breach: Crime Scene Forensics #SolarWinds #SUNBURST
0
2
12
@conikeec
chetan conikee
25 days
So the skill to cultivate isn't "can I prompt AI to build faster?" It's "can I see the underlying structure that will let me build 10 things from 1 foundation?" That pattern recognition, that taste, that architectural vision—still human superpowers. Build your LEGOs wisely. 🧱
0
0
0
@conikeec
chetan conikee
25 days
The companies winning long-term aren't the ones with the most code. They're the ones whose humans spotted the right abstractions early—then rode those building blocks across every seasonal trend, market shift, and product evolution.
1
0
0
@conikeec
chetan conikee
25 days
Agentic AI excels at execution within abstractions. Humans excel at choosing which abstractions to build. The engineer who sees that "notifications" and "activity feeds" and "messaging" are all the same abstraction? That's 10x leverage.
1
0
0
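The "same abstraction" claim above can be sketched in code. Below is a minimal, hypothetical Rust example (all names are illustrative, not from any real codebase) showing notifications, feeds, and messaging as thin packaging over one event-stream abstraction:

```rust
use std::collections::HashMap;

#[derive(Debug, Clone, PartialEq)]
struct Event {
    actor: String,
    verb: String,
    object: String,
}

/// The one abstraction: a per-user, append-only event stream.
trait EventStream {
    fn publish(&mut self, user: &str, event: Event);
    fn read(&self, user: &str) -> Vec<Event>;
}

#[derive(Default)]
struct InMemoryStream {
    streams: HashMap<String, Vec<Event>>,
}

impl EventStream for InMemoryStream {
    fn publish(&mut self, user: &str, event: Event) {
        self.streams.entry(user.to_string()).or_default().push(event);
    }
    fn read(&self, user: &str) -> Vec<Event> {
        self.streams.get(user).cloned().unwrap_or_default()
    }
}

// "Notifications" and "messaging" are just packaging over the same stream.
fn notify(s: &mut impl EventStream, user: &str, text: &str) {
    s.publish(user, Event {
        actor: "system".into(),
        verb: "notified".into(),
        object: text.into(),
    });
}

fn send_message(s: &mut impl EventStream, from: &str, to: &str, text: &str) {
    s.publish(to, Event {
        actor: from.into(),
        verb: "messaged".into(),
        object: text.into(),
    });
}
```

An "activity feed" would be a third thin wrapper over the same `publish`/`read` pair; the core never changes.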
@conikeec
chetan conikee
25 days
That foresight requires:
- Pattern recognition across domains
- Taste about what's essential vs. incidental
- Intuition about where markets/tech will evolve
- Judgment about what complexity to hide vs. expose
This is still deeply human territory.
1
0
0
@conikeec
chetan conikee
25 days
AI agents can generate code at lightning speed. They can implement patterns you describe. But here's what they can't do yet: recognize which abstractions will pay dividends across 10 different future use cases you haven't imagined.
1
0
0
@conikeec
chetan conikee
25 days
This is bricolage thinking—the art of working with what you have and recombining it creatively. Fashion brands do this every season. Same silhouettes, new colorways. Disney does it across franchises. Good engineers do it with code. The packaging changes. The core compounds.
1
0
0
@conikeec
chetan conikee
25 days
Here's the kicker: once you nail the abstraction, the only real cost is packaging. Same authentication system → powers B2C, B2B, enterprise. Same recommendation engine → surfaces products, content, connections. Different seasonality. Different markets. Same foundation.
1
0
0
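The "same engine, different packaging" idea above is easy to sketch: one generic ranking core, reused by surface-specific wrappers. This is an illustrative Rust sketch (the names and scoring are invented for the example):

```rust
/// Core abstraction: rank candidates by a relevance score, keep the top k.
fn recommend<T: Clone>(
    candidates: &[T],
    score: impl Fn(&T) -> f64,
    top_k: usize,
) -> Vec<T> {
    let mut scored: Vec<(f64, &T)> =
        candidates.iter().map(|c| (score(c), c)).collect();
    // Highest score first (scores assumed non-NaN for this sketch).
    scored.sort_by(|a, b| b.0.partial_cmp(&a.0).unwrap());
    scored.into_iter().take(top_k).map(|(_, c)| c.clone()).collect()
}

// Packaging layer: products and connections are the same engine with
// different inputs. Content, B2B, enterprise would be more one-liners.
fn top_products<'a>(items: &[(&'a str, f64)], k: usize) -> Vec<(&'a str, f64)> {
    recommend(items, |i| i.1, k)
}

fn top_connections<'a>(people: &[(&'a str, f64)], k: usize) -> Vec<(&'a str, f64)> {
    recommend(people, |p| p.1, k)
}
```

Each new surface pays only the packaging cost (a wrapper and a scoring input); the sorted-ranking foundation is written once.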
@conikeec
chetan conikee
25 days
React components. Unix pipes. REST APIs. Database schemas. Design systems. These aren't just "code." They're compression algorithms for human thought—taking complex problems and collapsing them into reusable, recombinant building blocks.
1
0
0
@conikeec
chetan conikee
25 days
Think of abstractions like LEGO bricks. Once you've designed the right pieces, you can build infinite variations without reinventing the wheel. The Spotify interface? Just "playlists + shuffle + queue" recombined endlessly across podcasts, audiobooks, and social features.
1
0
0
@conikeec
chetan conikee
25 days
The best engineers don't write more code. They write better abstractions. And here's why that matters more than ever: good abstractions are compounding assets that pay dividends forever. Thread on why human intelligence still owns abstraction in the age of AI 🧵
1
0
0
@conikeec
chetan conikee
26 days
Looking forward
@bgurley
Bill Gurley
26 days
I have written my first book! A passion project of almost 10 years, Runnin' Down a Dream aims to give people both the motivation & the methods for thriving in a career they actually love. Put a lot of heart and soul into this - hope you ❤️ it. Pre-order:
0
0
0
@TrungTPhan
Trung Phan
1 month
an incredible visual of how money is moving around the AI ecosystem
53
262
1K
@conikeec
chetan conikee
1 month
Blog Post:
1
0
0
@conikeec
chetan conikee
1 month
Full pattern docs: https://t.co/hMpnJjK7Kx Working code: https://t.co/POmUFwzIld Builds on the GEPA implementation from PR #19. Pretty cool to see how the judge's feedback drives prompt evolution.
github.com
A DSPy rewrite to (not port) Rust (conikeec/DSRs).
1
0
2
@conikeec
chetan conikee
1 month
Cost consideration: this doubles your LM calls (task + judge per evaluation), so budget controls are important. The example uses max_lm_calls to cap it. You can also do hybrid - explicit checks for deterministic stuff, judge for subjective quality analysis.
1
0
0
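The budget arithmetic above (two LM calls per evaluation, capped by `max_lm_calls`) can be sketched as a small counter. This is an illustrative stand-in, not the DSRs API; only the `max_lm_calls` name comes from the tweet:

```rust
/// Tracks LM-call spend when each evaluation costs two calls
/// (one task call + one judge call).
struct Budget {
    max_lm_calls: u32,
    used: u32,
}

impl Budget {
    fn new(max_lm_calls: u32) -> Self {
        Self { max_lm_calls, used: 0 }
    }

    /// Try to reserve one task call + one judge call for an evaluation.
    /// Returns false once the cap would be exceeded.
    fn try_evaluate(&mut self) -> bool {
        if self.used + 2 <= self.max_lm_calls {
            self.used += 2;
            true
        } else {
            false
        }
    }
}
```

A hybrid setup (deterministic checks first, judge only for subjective quality) would simply charge 1 instead of 2 for the cheap path.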
@conikeec
chetan conikee
1 month
In the math example, baseline score was 0.14. After GEPA optimization using judge feedback, it jumped to 0.28 (2x improvement). The evolved prompt became way more explicit about tracking quantities, showing all steps, and checking work - all learned from the judge's analysis of
1
0
0
@conikeec
chetan conikee
1 month
The judge evaluates both correctness AND reasoning quality. This catches issues like:
- Right answer with flawed logic (lucky guess)
- Wrong answer but valid approach (partial credit)
- Skipped steps in reasoning
- Conceptual misunderstandings
Scoring: 1.0 = both correct 0.7 =
1
0
0
@conikeec
chetan conikee
1 month
Why this matters: GEPA needs rich feedback to work well, but writing explicit feedback rules for complex tasks is tedious. With an LLM judge, you get detailed analysis automatically. Example judge output: "Student correctly identified multiplication needed. Calculation accurate.
1
0
0
@conikeec
chetan conikee
1 month
The setup is pretty straightforward - you have three LLMs working together:
- Task LM: generates answer + reasoning
- Judge LM: analyzes the quality
- GEPA Reflection: uses judge feedback to improve the prompt
Each one specializes in its role.
1
2
1
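The three-role loop described above can be sketched with closures standing in for the real LM calls. All names here are illustrative, not the DSRs API; the shape is just task -> judge -> reflect:

```rust
struct TaskOutput {
    answer: String,
    reasoning: String,
}

struct Judgment {
    score: f64,
    feedback: String, // rich textual feedback, not just a scalar
}

/// One optimization step: the task LM answers, the judge LM critiques,
/// and the reflection step folds the critique into an improved prompt.
fn optimize_step(
    prompt: &str,
    task_lm: impl Fn(&str) -> TaskOutput,
    judge_lm: impl Fn(&TaskOutput) -> Judgment,
    reflect: impl Fn(&str, &Judgment) -> String,
) -> (String, f64) {
    let output = task_lm(prompt);
    let judgment = judge_lm(&output);
    let new_prompt = reflect(prompt, &judgment);
    (new_prompt, judgment.score)
}
```

The key design point is that `reflect` consumes the judge's *text*, not just its score, which is what lets the evolved prompt encode specific lessons ("show all steps", "track quantities") rather than blind mutations.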
@conikeec
chetan conikee
1 month
Using LLM-as-Judge with GEPA for automatic feedback generation Just added a pattern to DSRs that lets you use an LLM judge to automatically generate rich textual feedback for prompt optimization instead of writing manual rules. cc @LakshyAAAgrawal @zaph0id
@conikeec
chetan conikee
1 month
Just implemented GEPA (the reflective prompt optimizer from https://t.co/gRIzxz4C2G) in Rust for DSRs. Key difference from COPRO/MIPROv2: uses rich textual feedback + per-example Pareto frontier instead of just scalar scores. Keeps diverse candidates around instead of converging
2
9
92