Jacob Hands (@jachands)
building things at cloudflare // https://t.co/kNlKHcxDHd // views are my own
United States · Joined December 2012
1K Followers · 33K Following · 1K Media · 11K Statuses
I used GPT-5 Deep Research to find an npm package for something - it told me I shouldn't use the old package that has 4M downloads/week, and instead should use an obviously vibe-coded package with 1 dl/week that didn't work 🙃
1 · 0 · 2
Love seeing faster build start times 😍
Did you know Workers Builds are built on top of Containers? We're able to build the best product for developers when we build with that product. After some hard work behind the scenes, the time it takes to start your build should be even faster — by 3.3x in some cases.
0 · 0 · 8
Tracing in Workers is huge - I've been waiting for this for years!! Makes it so much easier to debug and understand what's happening in my Workers
👏 Today is the day 👏 Workers ✨automatic✨ tracing is now in open beta!
✅ Enable in seconds – no code changes required
🔎 View and query trace data directly in the Cloudflare dashboard
📦 Export traces (and logs!) to any external destination with an OTel endpoint
0 · 1 · 18
@PlanetScale @CloudflareDev That does line up with when I saw a big spike of DB connection errors
0 · 0 · 0
@PlanetScale @CloudflareDev huh, I wonder what made CPU max out for a bit. Maybe some cache misses? Or was there CPU contention on the host VM? (does that happen on AWS?)
1 · 0 · 0
@PlanetScale @CloudflareDev Set max_db_connections under PGBouncer — I'll see if I stop getting errors, or if I'm misunderstanding how this works
1 · 0 · 0
@PlanetScale @CloudflareDev A somewhat bad default I've noticed: @PlanetScale defaults PGBouncer to unlimited backend connections, so it keeps filling up the connection limit of the DB each time I raise it, resulting in query errors saying I'm out of connections.
3 · 0 · 0
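For anyone hitting the same thing: the cap in question is PgBouncer's `max_db_connections`, whose default of 0 means unlimited server-side connections — matching the behavior described above. A minimal config sketch with illustrative values (database name, host, and limits are placeholders; on PlanetScale this file is managed for you):

```ini
; pgbouncer.ini — values are illustrative
[databases]
; Clients connect to "app" through the pooler rather than
; straight to Postgres.
app = host=127.0.0.1 port=5432 dbname=app

[pgbouncer]
pool_mode = transaction
; Per-database cap on backend (server-side) connections.
; Default is 0 = unlimited, which lets the pooler fill the
; database's max_connections no matter how high you raise it.
max_db_connections = 80
```

Keeping `max_db_connections` below the database's own `max_connections` leaves headroom for admin sessions and stops the "out of connections" query errors from the thread.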
Btw these stats are queried directly from @AxiomFM - it's amazing how easy it is to just send everything to Axiom and then query it in Grafana. It's handling 12k events/sec across different projects without missing a beat
Testing @PlanetScale with @CloudflareDev Workers and enabling Hyperdrive caching feels like actual magic - P99 read latency reduced by 97% 🤯
0 · 2 · 9
@PlanetScale @CloudflareDev Wow, @PlanetScale has some really good tools for investigating which queries are being used a lot so I can debug what's taking the most time
1 · 0 · 8
@PlanetScale @CloudflareDev This is a PS-20 cluster btw - though these are very cheap queries on a small table so I'm not surprised that we got over 300 queries/sec before maxing out CPU.
1 · 0 · 2
@PlanetScale @CloudflareDev This is cool - you can see where I added the second set of queries using the second Hyperdrive connection - the load went right back down when I enabled caching:
1 · 0 · 2
@PlanetScale @CloudflareDev Without Hyperdrive caching, reads vary a decent amount depending on location:
2 · 0 · 5
Testing @PlanetScale with @CloudflareDev Workers and enabling Hyperdrive caching feels like actual magic - P99 read latency reduced by 97% 🤯
7 · 11 · 243