Laurynas Keturakis
@_laurynas
628 Followers · 3K Following · 162 Media · 4K Statuses
🇱🇹 Post-Soviet millennial, liberal arts grad and technical mutt calling tools at @fiberplane
eu-central-1
Joined August 2011
hot version control take: a jj-native forge should dispense with the ‘jj push’ semantics - that’s a relic from git. Change IDs are stable and unique, and can be immediately synced across all connected hubs or clients
attempted to indoctrinate a few of my colleagues into the obviously better version control system before the Christmas side project season
still trying something like this out but i found jj's (Jujutsu's) model particularly suitable for it. You can get Claude Code to pre-generate empty commits (jj new -m "<change_1>" && jj new -m "<change_2>") and then have it assign the actual code changes to those commits
v1 of my "reimplement this PR using an ideal commit history" command, actually works quite well. "What commits would I have made if I had perfect information about the desired end state?"
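The pre-generated-empty-commits trick above can be sketched as a small planning helper. This is a hypothetical illustration, not a real tool: it only builds the `jj new -m` command strings mentioned in the tweet, ready to hand to an agent or a subprocess.

```python
import shlex


def plan_empty_commits(messages: list[str]) -> list[str]:
    """Build one `jj new -m <msg>` invocation per planned change.

    An agent can then fill each empty commit in afterwards
    (e.g. by squashing hunks into it by change id)."""
    return [shlex.join(["jj", "new", "-m", msg]) for msg in messages]


commands = plan_empty_commits([
    "refactor: extract config loader",
    "feat: add dry-run flag",
])
print(" && ".join(commands))
# jj new -m 'refactor: extract config loader' && jj new -m 'feat: add dry-run flag'
```

`shlex.join` quotes each message safely, so commit messages with spaces or quotes survive the shell round-trip.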
probably no way for @atuinsh to track this but I wonder what the graph would look like if it tracked interactive shell use time (claude code would obviously top the charts)
it's atuin wrapped time! story of my 2025 shell is that I adopted jj halfway through the year (you can see lg - lazygit - decline) and mostly switched projects from pnpm (shorthand: pn) to bun
this post was specifically prompted by the behavior where flags must ALWAYS come before subcommands, when the normal practice is to allow for interleaving
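The same restriction is easy to reproduce in another ecosystem: Python's argparse has the analogous behavior with subparsers (effect/cli itself is TypeScript; this is just an illustrative parallel).

```python
import argparse

# A flag defined on the top-level parser must appear BEFORE the subcommand.
parser = argparse.ArgumentParser(prog="mycli")
parser.add_argument("--verbose", action="store_true")
sub = parser.add_subparsers(dest="command")
sub.add_parser("deploy")

# Flag before the subcommand: parses fine.
ns = parser.parse_args(["--verbose", "deploy"])
assert ns.verbose and ns.command == "deploy"

# Flag after the subcommand: rejected, because the subparser owns
# everything that follows "deploy" and knows nothing about --verbose.
try:
    parser.parse_args(["deploy", "--verbose"])
except SystemExit:
    print("interleaving rejected")
```

Allowing interleaving means either redefining shared flags on every subcommand or doing a custom pre-pass over argv, which is why many CLI frameworks just forbid it.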
I've slowly been Effect-pilled over the last few weeks, but I think effect/cli is overly opinionated in not quite the right way
wild how far you can get with markdown files with front-matter, unix-y tools, and some scripts
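A minimal sketch of the front-matter half of that setup, assuming flat `key: value` front-matter delimited by `---` lines (real documents may want a proper YAML parser):

```python
def split_front_matter(text: str) -> tuple[dict, str]:
    """Split a markdown document into (front-matter dict, body)."""
    meta: dict[str, str] = {}
    if not text.startswith("---\n"):
        return meta, text
    header, _, body = text[4:].partition("\n---\n")
    for line in header.splitlines():
        key, _, value = line.partition(":")
        if key.strip():
            meta[key.strip()] = value.strip()
    return meta, body


doc = "---\ntitle: Notes\ntags: jj, shell\n---\n# Hello\n"
meta, body = split_front_matter(doc)
assert meta["title"] == "Notes"
assert body.startswith("# Hello")
```

Pipe the body through any unix-y tool from there; the metadata stays greppable and scriptable.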
when invoking a shell command, claude code sets the following env vars: CLAUDECODE=1, GIT_EDITOR=true (to prevent interactive git editors), CLAUDE_CODE_ENTRYPOINT=cli. would love if they passed in more metadata like the session id
claude code showing % of context used, and codex showing % of context left is pure context rot for my context-switching brain
reworked `ragrep` (fully local semantic search that you can use with Claude Code) over the weekend:
- more accurate (using a reranker)
- faster when used in server/client mode (loads the models only once)
- reindexes incrementally
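The server/client split pays off because model loading dominates latency. A sketch of the design choice only, with hypothetical names that are not ragrep's actual API:

```python
class SearchServer:
    """Pay the model-load cost once at startup, then answer many queries."""

    def __init__(self, load_model):
        self._model = load_model()  # loaded exactly once

    def query(self, text: str):
        return self._model(text)


loads = 0

def fake_loader():
    # Stand-in for loading an embedder + reranker from disk.
    global loads
    loads += 1
    return lambda q: q.upper()


server = SearchServer(fake_loader)
for q in ("alpha", "beta", "gamma"):
    server.query(q)
assert loads == 1  # three queries, one model load
```

A thin client then just sends queries over a socket or pipe, which is what makes repeated invocations from a coding agent cheap.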
i made a fully local version prototype of this a while back (using @mixedbreadai's own oss models!) should probably dust it off and update with live indexing and better reranking..
github.com/laulauland/ragrep: Fast semantic search for your code.
we just made Claude Code
- use 53% fewer tokens
- respond 48% faster
- give 3.2x better responses
just by giving it a better grep
First one was a blast! Demos included:
🧑🏼🎨 AI building & testing @FlorisWeers
🌚 Darker mode emails @MatteoGauthier_
🔮 Visualizing AI logic @CompuIves
🍬 Icon studio @robertvklinken
⚙️ MCP optimization w/ GEPA @_laurynas
👥 AI collab iOS apps @sirian_m
🥽 VR + Rust scripting @rikarends
Augmented Tools Club in Amsterdam. Makers only. A space to demo real work in progress, exchange ideas, and get feedback in a friendly circle of fellow builders.
*grabs @stevekrouse soap box* So with “Code Mode” for MCP… I’ve been thinking less about the tech, and more about the UX of it all. If LLMs are now writing code to use APIs, what does that actually look and feel like for the end-users? If “the model writes code in a sandbox,”
*gets up on soap box* With the announcement of this new "code mode" from Anthropic and Cloudflare, I've gotta rant about LLMs, MCP, and tool-calling for a second. Let's all remember where this started: LLMs were bad at writing JSON, so OpenAI asked us to write good JSON schemas
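For readers who missed that era, the "good JSON schemas" in question look roughly like this: a tool definition whose parameters are a JSON Schema the model fills in. Illustrative shape only, based on OpenAI-style function calling; the tool name and fields here are made up.

```python
# A function-calling tool definition: the schema constrains the JSON
# arguments the model is allowed to emit when it "calls" the tool.
create_issue_tool = {
    "name": "create_issue",
    "description": "Create an issue in the tracker",
    "parameters": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "priority": {
                "type": "string",
                "enum": ["low", "medium", "high"],
            },
        },
        "required": ["title"],
    },
}
```

Code mode flips this around: instead of emitting constrained JSON per call, the model writes ordinary code against an API inside a sandbox.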
I did some garage science and talked about programmatically optimizing tools at @AITinkerers today. This was the most fun bit - building a visualizer for GEPA lineage
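The lineage part is just a tree: GEPA evolves a population of prompt candidates where each candidate records its parent, so the run can be rendered as a family tree. A hypothetical sketch (made-up candidate data, not the actual visualizer):

```python
from collections import defaultdict

candidates = [
    {"id": "c0", "parent": None, "score": 0.41},
    {"id": "c1", "parent": "c0", "score": 0.55},
    {"id": "c2", "parent": "c0", "score": 0.48},
    {"id": "c3", "parent": "c1", "score": 0.63},
]

# Index children by parent id to recover the lineage tree.
children = defaultdict(list)
for c in candidates:
    children[c["parent"]].append(c)


def render(parent_id=None, depth=0):
    """Indent each candidate under its parent, annotated with its score."""
    lines = []
    for c in children.get(parent_id, []):
        lines.append("  " * depth + f"{c['id']} ({c['score']:.2f})")
        lines.extend(render(c["id"], depth + 1))
    return lines


print("\n".join(render()))
# c0 (0.41)
#   c1 (0.55)
#     c3 (0.63)
#   c2 (0.48)
```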
The @linear team built a good MCP. Instead of wrapping every API endpoint, they designed task-oriented tools. Simpler parameters, explicit mappings, and thoughtful validation. The result: fewer tokens, higher signal, and agent workflows that actually work. Read the breakdown
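A hypothetical contrast to make the design point concrete (this is not Linear's actual code): instead of one MCP tool per REST endpoint, expose one task-oriented tool with a few friendly parameters, explicit mappings to API values, and validation that fails loudly.

```python
# Friendly priority names mapped explicitly to the API's numeric values.
PRIORITIES = {"urgent": 1, "high": 2, "medium": 3, "low": 4}


def create_issue(title: str, team: str, priority: str = "medium") -> dict:
    """One tool = one user task, with simple parameters.

    Bad input raises immediately instead of burning agent turns on
    malformed API calls."""
    if priority not in PRIORITIES:
        raise ValueError(f"priority must be one of {sorted(PRIORITIES)}")
    return {"title": title, "team": team, "priority": PRIORITIES[priority]}


issue = create_issue("Fix login bug", team="ENG", priority="high")
assert issue["priority"] == 2
```

The agent sees one small schema instead of a dozen endpoint wrappers, which is where the token savings and higher signal come from.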