
Ryan Singer
@rjs
50K Followers · 1K Following · 148 Media · 985 Statuses
20+ years building product. Author of Shape Up. Former Head of Strategy at 37signals/Basecamp. [email protected]
Portugal
Joined November 2007
Yes. This thread applies to collaborative problem solving in general, from buying decisions to design work to product development. Work backwards from the gap in the current world to first answer “who cares?” and then fill in with the new thing.
The desiccated "Theorem, Lemma, Proof, Corollary, …" presentational style is staggeringly counterproductive, if one's objective is actually communicating the underlying mathematical intuitions and thought processes behind a result. In reality, the process is more like… (1/4)
Love these visualizations of running different async processes with @EffectTS_. By @kitlangton.
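A minimal sketch of the kind of composition those visualizations depict, using Effect's `Effect.all` concurrency option; the `task` helper is invented for illustration:

```ts
import { Effect } from "effect"

// A task sleeps for `millis`, then yields its label.
const task = (label: string, millis: number) =>
  Effect.sleep(millis).pipe(Effect.as(label))

// Sequential: total time ≈ 100 + 200 + 300 ms.
const sequential = Effect.all([task("a", 100), task("b", 200), task("c", 300)])

// Concurrent: total time ≈ 300 ms (the longest task).
const concurrent = Effect.all([task("a", 100), task("b", 200), task("c", 300)], {
  concurrency: "unbounded",
})

Effect.runPromise(concurrent).then(console.log) // ["a", "b", "c"]
```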
Many “either/or” or “should/shouldn’t” arguments dissolve when properly parameterized.
@CodeProMax @levelsio What “planning” means and whether it makes any sense to do depends on the number of people involved and the amount of resources at risk. There are lots of solo / indie-hacking style cases where you just don’t need it. Or where your downside is capped such that you don’t mind.
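As a toy illustration (names and thresholds are hypothetical, not from the thread), the either/or question “should we plan?” becomes a function of the parameters the argument was implicitly about:

```ts
type Context = {
  people: number          // how many people have to coordinate
  resourcesAtRisk: number // cost of a failed bet, e.g. in person-weeks
}

// "Should we plan?" dissolves into "how much planning does this context need?"
function planningDepth({ people, resourcesAtRisk }: Context): "none" | "light" | "formal" {
  if (people <= 1 && resourcesAtRisk < 2) return "none"   // solo, capped downside
  if (people <= 4 && resourcesAtRisk < 12) return "light" // a pitch and a deadline
  return "formal"                                         // shaping, betting, scoping
}

console.log(planningDepth({ people: 1, resourcesAtRisk: 1 }))   // "none"
console.log(planningDepth({ people: 10, resourcesAtRisk: 50 })) // "formal"
```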
It’s easy to imagine a near future where ChatGPT Voice Mode is more like “show and tell mode” and it live-generates visuals that correspond to whatever you’re talking about. That would be a new medium. For example, when I ask about fixed points in RG, it would spontaneously show…
Just used ChatGPT voice mode to ask questions about the renormalization group for 15+ minutes. Incredible how good this has gotten. Extremely useful when trying to get bearings on a new topic or something that’s fuzzy. Funny how “prompting” in voice mode is just… asking good questions.
This approach works at different scales, e.g. at the project level, above the level of implementation concepts, above the level of tasks: What is the current way today? What about that presents a problem? → What will be different after? How will we know it's working?
A well-defined task isn't a boolean (checked, unchecked); it's a vector. Unknown → known. Known → implemented. Implemented → verified. Etc. It's like a diff in the future: from this to that. It answers the questions: How will we know that this was done? Where will it take us?
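A sketch of that framing (type and state names are illustrative): the task carries a position along the progression instead of a single checked/unchecked bit:

```ts
type TaskState = "unknown" | "known" | "implemented" | "verified"

// Each transition answers "how will we know this was done?"
const next: Record<TaskState, TaskState | null> = {
  unknown: "known",        // the problem is understood
  known: "implemented",    // the change exists
  implemented: "verified", // we confirmed it works
  verified: null,          // done
}

interface Task {
  description: string // the diff: "from this to that"
  state: TaskState
}

function advance(task: Task): Task {
  const state = next[task.state]
  return state === null ? task : { ...task, state }
}
```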
RT @ryanflorence: It's funny how thinking about building for LLMs to understand your abstractions better makes you do what you should have…
This is true for humans too. Recruiting, mentoring, shaping, etc. are all context alignment.
The most important factor for AI Agents is to get them the context necessary to execute the task successfully. No matter how powerful AI models get, context will always be king. Data, workflows, tools, domain knowledge, and tuned instructions will all be critical.
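For concreteness, a hypothetical sketch of context assembly (every name here is invented): gather the data, tools, and domain knowledge before the model ever sees the task:

```ts
interface AgentContext {
  task: string
  domainKnowledge: string[] // docs, conventions, prior decisions
  tools: string[]           // what the agent may call
  instructions: string      // tuned guidance for this workflow
}

// Fold the assembled context into a single prompt for the agent.
function buildPrompt(ctx: AgentContext): string {
  return [
    `## Instructions\n${ctx.instructions}`,
    `## Domain knowledge\n${ctx.domainKnowledge.join("\n")}`,
    `## Available tools\n${ctx.tools.join(", ")}`,
    `## Task\n${ctx.task}`,
  ].join("\n\n")
}
```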
Btw, if you applied Shape Up and it led to tech debt, that is the #1 sign that you weren't actually shaping. Shaping means figuring out what you can deliver *at quality* and *in the time* you have. Doing that requires making trade-offs. If you don't make trade-offs between biz…
The dynamics we're seeing with managing coding agents (more concrete prompting, tighter verification loops) are the same dynamics we've already had with all but the most senior programmers. Take this loop from @karpathy's talk for example. This is the QA/Review loop. It's always…
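A sketch of that loop, under stated assumptions: `generate` and `verify` are stand-ins for the model call and for tests/lints/review, and the retry policy is invented for illustration:

```ts
async function agentLoop(
  prompt: string,
  generate: (p: string) => Promise<string>,
  verify: (code: string) => Promise<{ ok: boolean; feedback: string }>,
  maxAttempts = 3,
): Promise<string> {
  let attempt = prompt
  for (let i = 0; i < maxAttempts; i++) {
    const code = await generate(attempt)  // concrete prompt in
    const result = await verify(code)     // tight verification loop
    if (result.ok) return code            // accepted by QA/review
    // Feed the failure back, just as a reviewer would.
    attempt = `${prompt}\n\nPrevious attempt failed:\n${result.feedback}`
  }
  throw new Error("verification failed after max attempts")
}
```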