Protty
@kingprotty
System optimizer + concurrency geek.
Joined August 2019 · 3K followers · 3K following · 93 media · 1K statuses
2025 was the Year of the @TigerBeetleDB, with 50+ project folders and 100+ artworks!!🤯🥳 I missed spending time on personal projects, but I had some great moments that are directly and indirectly connected to these beetles, and I’m more than ready for 2026! BTHF! #artvsartist
Excited to share that TigerBeetle and @synadia have pledged $512,000 to the @ziglang Software Foundation. Zig changed my life, making TigerBeetle possible. It's been an adventure these past 5 years, and thrilled to pay it forward, together with my friend @derekcollison.
Excited to announce that TigerBeetle and @synadia have pledged $512,000 to the Zig Software Foundation. https://t.co/rcweri9HpD
Blog: Batched Critical Sections, a better alternative to mutexes https://t.co/fd37OsRSu7
kprotty.me
Critical sections don’t have to be scheduler-bottlenecked.
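The post's pitch (critical sections don't have to be scheduler-bottlenecked) can be sketched as a combining-style lock: threads publish their operations, and whichever thread manages to take the combiner lock applies the whole pending batch in one critical section, so blocked threads don't each pay a scheduler round-trip. A minimal Python sketch of that idea; `BatchedCounter` and all its names are illustrative, not taken from the blog:

```python
import threading
from queue import SimpleQueue

class BatchedCounter:
    """Threads enqueue operations; whoever holds the combiner lock
    drains and applies the whole batch in one critical section."""
    def __init__(self):
        self.value = 0
        self._ops = SimpleQueue()
        self._combiner = threading.Lock()

    def add(self, n):
        done = threading.Event()
        self._ops.put((n, done))
        # Try to become the combiner; if another thread already is,
        # it will apply our operation for us and set our event.
        while not done.is_set():
            if self._combiner.acquire(timeout=0.01):
                try:
                    while not self._ops.empty():
                        amount, ev = self._ops.get()
                        self.value += amount  # the actual critical section
                        ev.set()
                finally:
                    self._combiner.release()

c = BatchedCounter()
threads = [threading.Thread(target=c.add, args=(1,)) for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(c.value)  # 8
```

Each operation is dequeued exactly once, and an enqueuer keeps retrying to become the combiner until its own event is set, so no operation is lost even if a combiner exits right as a new one is enqueued.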
Writing fast code under optimizing compilers feels similar to *prompting: 1) developing an intuitive understanding of how the model maps your patterns to outputs, 2) knowing what a "good" output looks like so you can correct toward it. (* any deterministic ML inference with text input)
Blog: You can build any blocking sync primitive using any other one. And it's not just a neat realization. It actually works https://t.co/im7EglrJDG
kprotty.me
You can use any thread synchronization primitive to build any other one. Here’s how:
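A classic instance of the claim in the linked post is building a one-shot event out of nothing but a mutex: start with the lock held, let `set()` release it, and have each waiter acquire and immediately re-release so the signal propagates to every waiter. A hedged Python sketch (it relies on CPython's `threading.Lock` permitting release from a different thread, which plain `Lock`, unlike `RLock`, allows); the class name is mine, not from the article:

```python
import threading

class EventFromLock:
    """A one-shot event built only from a mutex: the lock starts held,
    set() releases it, and wait() acquires then re-releases so every
    current and future waiter gets through."""
    def __init__(self):
        self._lock = threading.Lock()
        self._lock.acquire()   # start "unset": waiters block here

    def set(self):
        self._lock.release()   # wake the first waiter

    def wait(self):
        self._lock.acquire()   # blocks until set() (or a prior waiter)
        self._lock.release()   # pass the signal along to the next waiter

ev = EventFromLock()
results = []
def waiter(i):
    ev.wait()
    results.append(i)

threads = [threading.Thread(target=waiter, args=(i,)) for i in range(4)]
for t in threads: t.start()
ev.set()
for t in threads: t.join()
print(sorted(results))  # [0, 1, 2, 3]
```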
This applies both across CPU cores and across the network. It's just exacerbated and more obvious in the latter, and sometimes applied without optimization in the former.
Re: channels, don't send to an actor task/thread. Instead, have the first sender on empty BECOME the receiver. On fan-out, prefer work-stealing or, at worst, duplication. On fan-in, bump a counter & wake on the last instead of multiple back-and-forth joins/acks. Avoid communication latency
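The fan-in advice above ("bump a counter & wake on last") can be sketched like this in Python: each finishing worker decrements a shared counter, and only the last one performs a wake, so the waiter does a single wait instead of N joins/acks. `FanIn` and its method names are illustrative, not the author's API:

```python
import threading

class FanIn:
    """Fan-in without per-task joins/acks: workers bump a counter,
    and only the last one to finish wakes the waiter."""
    def __init__(self, n):
        self._remaining = n
        self._lock = threading.Lock()
        self._done = threading.Event()

    def task_finished(self):
        with self._lock:
            self._remaining -= 1
            last = self._remaining == 0
        if last:
            self._done.set()   # one wake instead of N acks

    def wait_all(self):
        self._done.wait()

total = []
fan = FanIn(4)
def worker(i):
    total.append(i * i)   # list.append is thread-safe under the GIL
    fan.task_finished()

for i in range(4):
    threading.Thread(target=worker, args=(i,)).start()
fan.wait_all()
print(sum(total))  # 14
```

Every `append` happens before its `task_finished()`, and the event is set only after the last decrement, so all results are visible when `wait_all()` returns.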
Re: dataflow, don't use concurrent hashmaps. Instead, send changes to a task that updates the map, or better yet accumulate them at the end. For concurrent reads, do them batched/SIMD serially before dispatching to multithreaded processing. It's a work-graph problem, not a shared-memory problem
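A minimal sketch of the "send changes to a task that updates the map" pattern from the tweet above, assuming Python, with a sentinel value to stop the owner thread; all names are illustrative:

```python
import threading
from queue import SimpleQueue

# Instead of a lock-guarded shared dict, route all writes through a
# queue to one owner thread; the map is read only after it drains.
updates = SimpleQueue()
counts = {}

def owner():
    while True:
        key = updates.get()
        if key is None:        # sentinel: producers are done
            return
        counts[key] = counts.get(key, 0) + 1

owner_thread = threading.Thread(target=owner)
owner_thread.start()

def producer(keys):
    for k in keys:
        updates.put(k)

producers = [threading.Thread(target=producer, args=(["a", "b"],))
             for _ in range(3)]
for t in producers: t.start()
for t in producers: t.join()
updates.put(None)              # all producers finished: stop the owner
owner_thread.join()
print(counts)  # {'a': 3, 'b': 3}
```

Only one thread ever touches `counts`, so there is no locking on the map itself; contention is confined to the queue, which is the channel to optimize.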
Mutexes, Condvars, and Rwlocks are a scam. They introduce latency bottlenecks/dependencies from the OS scheduler. Instead, design data flow first, then optimize its channels second:
Probably the smoothest conference I've attended so far. Was also great meeting old and new faces alike. Props to TigerBeetle again for setting it all up. #systemsdistributed
I agree. Been experiencing this a lot recently: following through layers of "helper" functions and abstractions only to realize the code could've avoided intermediate allocations, or would just be a few LOC once inlined.
When you split a function to N different small functions, the reader also suffers multiple "instruction cache" misses (similar to CPU when executing it). They need to jump around the code base to continue reading. Big linear functions are fine. Code should read like a book.
I was missing that anyone with the signer's public key + the hash of the message can also generate the shared secret (without the signer's private key). Ignore this idea.
Why don't we implement cryptographic signatures as just a key exchange, with the signer's private key on one side and the hash of the message as the other party's private key? The signature would be the shared secret. Seems too simple to work and is likely broken somehow. What am I missing?