Andrew
@dremnik
Followers
124
Following
2K
Media
34
Statuses
476
building the runtime for software 3.0 | designer + engineer.
SF
Joined October 2025
ideal startup structure in the next 6 months: 1-2 founders, 1 eng, 1 marketing, 1 ops. that's it. the rest will be overhead. your margin is their opportunity.
2
0
3
just shipped x-cli: a CLI for posting to X from your terminal. compose, post, reply, quote, and attach media from files. P.S. THIS IS NOT AN ENDORSEMENT OF SLOP. just making your life easier. https://t.co/LQj7LL7YG5 (posted from x-cli)
github.com
CLI for the X API. Contribute to dremnik/x-cli development by creating an account on GitHub.
0
0
1
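the repo's actual interface isn't shown in this post, so purely as a sketch of the underlying call a tool like this wraps (the X API v2 create-post endpoint; the token handling below is an assumption, not x-cli's real code), in Python:

# illustrative sketch, not x-cli's real interface: create a post via the
# X API v2 POST /2/tweets endpoint. assumes an OAuth 2.0 user-context
# access token is available in X_ACCESS_TOKEN (app-only tokens can't post).
import os
import sys
import requests

def create_post(text: str) -> dict:
    resp = requests.post(
        "https://api.x.com/2/tweets",
        headers={"Authorization": f"Bearer {os.environ['X_ACCESS_TOKEN']}"},
        json={"text": text},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # usage: python post.py "hello from the terminal"
    print(create_post(" ".join(sys.argv[1:])))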
a little-known fact about @zeddotdev is that it has realtime collaboration built in with CRDTs. they are perfectly positioned to be the IDE of the future with realtime collaboration, and i sincerely hope they can execute on this with speed. github is obsolete. https://t.co/bTx1M9DJpp
0
0
2
if you're not literally having claude write your timetable in realtime each day, what are you really doing?
1
0
2
i can't stop thinking about this point from a recent article i wrote. AI will not democratize ability. it will exacerbate the power laws that already dominate creative output. > power laws occur everywhere in the world. and yet they are hard to understand because our brains
0
0
0
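to make the power-law claim concrete, a toy illustration (the numbers are synthetic, not from the article): sample creative output from a heavy-tailed distribution and look at how much of the total the top 1% account for.

# toy illustration of a power law: output sampled from a Pareto distribution.
# alpha is an arbitrary illustrative choice; smaller alpha = heavier tail.
import random

random.seed(0)
alpha = 1.5
outputs = sorted((random.paretovariate(alpha) for _ in range(100_000)), reverse=True)

top_share = sum(outputs[: len(outputs) // 100]) / sum(outputs)
print(f"top 1% of creators produce ~{top_share:.0%} of total output")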
being deep on this question recently, i can say with confidence that people have yet to realize the rate of information processing that the next generation of companies will be capable of.
0
0
2
my own view on this is that slack is replaced by a native workspace where humans + AI collaborate in realtime. pace of work is dramatically faster now. what does that mean? slack just isn’t gonna cut it with the increased coordination overhead. teams are going to shrink.
i'm extremely bullish on slack-native ai agents. devin sits in our slack like an IC1 engineer. you ping it ("make an end-to-end test", "shrink this button from 32px to 24px") and everyone sees its thought process as it works. it's coding in public, but for your own codebase.
1
0
3
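a minimal sketch of the "coding in public" pattern described above: an agent streams its working notes into a Slack thread via the standard chat.postMessage Web API. the channel, scopes, and notes here are placeholders, and this is not Devin's actual integration.

# minimal sketch: post an agent's working notes into a Slack thread so the
# whole team can watch its thought process. assumes SLACK_BOT_TOKEN is set
# and the bot has the chat:write scope; channel and notes are placeholders.
import os
import requests

API = "https://slack.com/api/chat.postMessage"
HEADERS = {"Authorization": f"Bearer {os.environ['SLACK_BOT_TOKEN']}"}

def post(channel: str, text: str, thread_ts: str | None = None) -> str:
    payload = {"channel": channel, "text": text}
    if thread_ts:
        payload["thread_ts"] = thread_ts
    data = requests.post(API, headers=HEADERS, json=payload, timeout=30).json()
    return data["ts"]  # message timestamp; doubles as the thread id

root = post("#eng", "picking up: shrink primary button from 32px to 24px")
for note in ["reading Button.tsx", "updating the size token", "opening a PR"]:
    post("#eng", note, thread_ts=root)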
if i could run opus 4.6 through @elevenlabsio even for a few minutes... i can't describe how powerful this would be.
2
1
1
every system is a set of nested feedback loops running at different frequencies. the brain: reflexes → attention → sleep/learning. a company: support pings → weekly sprints → quarterly planning. when these cycles are in harmony, the system is aligned. each layer does its
1
0
1
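a minimal sketch of the nested-loops idea from the post above: three loops over shared state, each ticking at a different frequency. the periods and names are illustrative, not a claim about any real system.

# illustrative only: fast loop reacts, medium loop reprioritizes, slow loop
# revises the plan. periods are in ticks, standing in for a real clock.
def fast_loop(state):    # "reflexes" / support pings: handle the incoming event
    state["handled"] += 1

def medium_loop(state):  # "attention" / weekly sprint: adjust priority from recent signal
    state["priority"] = "bugs" if state["handled"] > 10 else "features"
    state["handled"] = 0

def slow_loop(state):    # "sleep-learning" / quarterly planning: revise the plan itself
    state["plan"] = f"double down on {state['priority']}"

state = {"handled": 0, "priority": "features", "plan": ""}
for tick in range(1, 61):
    fast_loop(state)              # every tick
    if tick % 10 == 0:
        medium_loop(state)        # every 10th tick
    if tick % 30 == 0:
        slow_loop(state)          # every 30th tick
print(state["plan"])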
at this point, i’ve accepted the proposition that if there is regularity in the data, it will be learned. and with these models being deployed with ever greater levels of autonomy, they are collecting the necessary data in real time.. so the only question you have to ask
0
0
2
d de des desi desig design design design i design is design is design is t design is th design is the design is the design is the n design is the ne design is the new design is the new design is the new b design is the new bo design is the new bot design is the new bott
0
0
1
curious how you guys are working with coding agents in your IDE. what is your setup? personally i've started just keeping a https://t.co/uQQ2Thm9dj in a .tasks/ so claude can also see + edit that list with me as we make progress.
0
0
2
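for reference, the kind of shared checklist this setup describes. the real filename sits behind the shortened link; .tasks/todo.md and the items below are only placeholders:

.tasks/todo.md
- [x] wire up media upload in x-cli
- [ ] add e2e test for the reply flow
- [ ] shrink primary button from 32px to 24px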
and now we start to see where design is going to become the bottleneck and claude code stops being a useful form factor for this kind of work. more on this to come...
Claude Code now supports agent teams (in research preview) Instead of a single agent working through a task sequentially, a lead agent can delegate to multiple teammates that work in parallel to research, debug, and build while coordinating with each other. Try it out today by
0
0
2
there are showmen, and there are deep practitioners. there are masters, and there are dilettantes who speak to the crowd. @bcantrill is the “musician’s musician” of engineering. his depth of knowledge is remarkable, and his passion is inspiring: https://t.co/76i9H15rMC
0
0
2
@dremnik built an arxiv-scraper that finds papers relevant to itself using kernl, the exciting next-gen agent sdk. We traced it with @lmnrai and found 11% token waste / cut 10 seconds per run. Here's how:
laminar.sh
Tracing a kernl research agent to remove redundant doc fetches and cut runtime and tokens.
0
1
1
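the linked writeup isn't reproduced here, but as a sketch of the kind of fix "remove redundant doc fetches" implies (the helper below is a placeholder, not kernl's or Laminar's API): cache document fetches within a run so repeated lookups don't re-download and re-tokenize the same paper.

# sketch only: memoize per-run document fetches so a repeated arXiv lookup
# is served from cache instead of burning latency and tokens again.
# fetch_abstract and the arXiv id are placeholders, not part of kernl or lmnr.
from functools import lru_cache
import urllib.request

@lru_cache(maxsize=256)
def fetch_abstract(arxiv_id: str) -> str:
    url = f"https://export.arxiv.org/abs/{arxiv_id}"
    with urllib.request.urlopen(url, timeout=30) as resp:
        return resp.read().decode("utf-8", errors="replace")

first = fetch_abstract("2401.00001")   # network call
again = fetch_abstract("2401.00001")   # cache hit: no extra time or tokens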