The Code Alchemist

@thedumb_p

Followers
113
Following
136
Media
101
Statuses
473

YouTuber & Lead Engineer | Sharing hands-on videos on Java, Spring, Microservices & more | System Design & Architecture Enthusiast | Anime & Mystery/Thriller 🎬

India
Joined March 2023
@thedumb_p
The Code Alchemist
2 months
Running Spring Boot + Postgres on local Kubernetes doesn’t have to be hard. Here’s a full walkthrough, clean and minimal. Watch it here 👉 https://t.co/eRkApHUDqO #Java #SpringBoot #Kubernetes #CloudNative #DevLife
0
0
0
@thedumb_p
The Code Alchemist
3 months
Want to learn Spring Boot + Kubernetes? Start here 👉 https://t.co/ER95eAUq7N #SpringBoot #Kubernetes #Java #CloudNative
0
0
2
@thedumb_p
The Code Alchemist
3 months
If you’re serious about mastering concurrency in Java, these two books are essential. They don’t just give answers; they reshape how you think: 1. Concurrent Programming in Java – Doug Lea 2. Java Concurrency in Practice – Brian Goetz
0
0
1
@thedumb_p
The Code Alchemist
3 months
💳 Payments look scary… but as a dev, it’s mostly just a few API calls. I built a working checkout with #springboot + #Stripe → PaymentIntent, Checkout, webhooks — all in one simple flow. Video here 🎥: https://t.co/qpEIy5feKv #Coding #programming
0
0
1
@thedumb_p
The Code Alchemist
3 months
Partitioning strategies: 1) Range-based: e.g., user IDs 1–1000, 1001–2000. Ideal for range queries, prone to hotspots. 2) Hash-based: hash(key) % N. Uniform, avoids hotspots, but disrupts range queries. 3) Directory/Lookup-based: a mapping service directs data location. 4) List-based: each partition holds keys from an explicit list of values (e.g., country codes).
0
0
0
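The range- and hash-based strategies above fit in a few lines of Java. A minimal sketch; class and method names are illustrative, not from the videos:

```java
import java.util.Map;
import java.util.TreeMap;

public class Sharding {
    // Hash-based: uniform spread, no hotspots, but a range scan now hits every shard.
    static int hashPartition(String key, int numShards) {
        // floorMod keeps the result non-negative even if hashCode() is negative
        return Math.floorMod(key.hashCode(), numShards);
    }

    // Range-based: ideal for range queries, prone to hotspots on sequential keys.
    // The TreeMap maps each range's lower bound to a shard id.
    static int rangePartition(long userId, TreeMap<Long, Integer> ranges) {
        return ranges.floorEntry(userId).getValue();
    }

    public static void main(String[] args) {
        System.out.println(hashPartition("user-1001", 4));

        TreeMap<Long, Integer> ranges = new TreeMap<>(Map.of(
                1L, 0,       // IDs 1–1000    -> shard 0
                1001L, 1,    // IDs 1001–2000 -> shard 1
                2001L, 2));  // IDs 2001+     -> shard 2
        System.out.println(rangePartition(1500L, ranges)); // 1
    }
}
```

Note the trade-off baked into `hashPartition`: changing `numShards` remaps almost every key, which is why real systems often use consistent hashing instead of plain modulo.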
@thedumb_p
The Code Alchemist
3 months
Trade-offs: 1) Routing: System must identify data partition. 2) Queries: Range queries efficient (range sharding) or poor (hash sharding). 3) Hotspots: Poor partitioning causes skewed load. 4) Rebalancing: Moving partitions when adding/removing nodes.
1
0
0
@thedumb_p
The Code Alchemist
3 months
Why: 1) Capacity scaling: A single machine cannot hold or process all data. 2) Performance scaling: Distribute reads/writes across nodes for parallelism. 3) Fault isolation: One shard/node failure does not affect the entire dataset.
1
0
0
@thedumb_p
The Code Alchemist
3 months
Q10: What is partitioning (or sharding), and why is it used? 👇 Definition: Dividing a dataset into smaller, disjoint chunks (partitions/shards) and distributing them across multiple nodes. #SystemDesign #Interview #Coding #CodingJourney
1
0
0
@thedumb_p
The Code Alchemist
3 months
2. Apply Lag (Replay Lag): A replica may apply logs slowly due to CPU/IO bottlenecks. Track the difference between the leader’s current offset and the replica’s applied offset.
0
0
0
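Both lag metrics in this thread reduce to simple offset arithmetic. A hypothetical sketch in Java; the offset values are made up for illustration:

```java
public class ReplicaLag {
    // Offsets are log sequence numbers (LSNs) in the leader's replication log.

    // Commit/replication lag: how far behind the follower's *acknowledged* offset is.
    static long replicationLag(long leaderOffset, long followerAckedOffset) {
        return leaderOffset - followerAckedOffset;
    }

    // Apply (replay) lag: how far behind the replica's *applied* offset is.
    static long applyLag(long leaderOffset, long replicaAppliedOffset) {
        return leaderOffset - replicaAppliedOffset;
    }

    public static void main(String[] args) {
        long leader = 10_500, acked = 10_400, applied = 10_250;
        System.out.println("replication lag = " + replicationLag(leader, acked)); // 100
        System.out.println("apply lag       = " + applyLag(leader, applied));     // 250
        // Alert when either lag crosses a threshold (entries, bytes, or seconds).
    }
}
```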
@thedumb_p
The Code Alchemist
3 months
1. Commit/Replication Lag: Each write has a log sequence number or offset. The leader tracks the offset each follower acknowledges.
1
0
0
@thedumb_p
The Code Alchemist
3 months
Q8: In a leader–follower setup, what metric would you track to detect if a replica is lagging behind the leader? 👇 #SystemDesign #INTERVIEW #programminghelp #Coding
1
0
0
@thedumb_p
The Code Alchemist
4 months
4) Async replicas serve stale data, causing clients to see anomalies (dirty reads, lost updates). 5) More replicas increase storage, network, and operational overhead; replication wastes resources if the workload doesn’t require it.
0
0
0
@thedumb_p
The Code Alchemist
4 months
2) In synchronous setups, if replicas are slow or unavailable, the leader stalls, making the system appear "down" despite intact data. 3) The leader consumes excessive CPU/network resources pushing logs, reducing throughput.
1
0
0
@thedumb_p
The Code Alchemist
4 months
Q9: When can replication harm a system? 👇 1) Misconfiguration: assuming data is safe while replication is async or poorly tuned → leader crash = data loss #SystemDesign #interview #systemsdesign
1
0
0
@thedumb_p
The Code Alchemist
4 months
💳 Ever wondered what happens when you hit “Pay Now”? Issuers, acquirers, auth, capture, voids, refunds, chargebacks… sounds scary, but it’s actually simple. I broke it down in my new video https://t.co/GcUkItPtpw #PaymentProcessing #Coding
0
0
0
@thedumb_p
The Code Alchemist
4 months
Replica unavailability: writes fail to reach all replicas, or reads return stale data. Mitigation: fail requests requiring strong consistency; use hinted handoff (temporarily store writes at another replica); use anti-entropy mechanisms like Merkle trees for later reconciliation.
0
0
0
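The Merkle-tree anti-entropy idea mentioned above: hash each row, pair hashes up a tree, and compare roots — equal roots mean the replicas agree, and a mismatch is narrowed down by walking subtrees instead of shipping the whole dataset. A minimal sketch (class name and row format are illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class AntiEntropy {
    static byte[] merkleRoot(List<String> rows) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        // Leaves: hash each row.
        List<byte[]> level = rows.stream()
                .map(r -> sha.digest(r.getBytes(StandardCharsets.UTF_8)))
                .toList();
        // Internal nodes: hash each pair of children until one root remains.
        while (level.size() > 1) {
            List<byte[]> next = new ArrayList<>();
            for (int i = 0; i < level.size(); i += 2) {
                sha.update(level.get(i));
                if (i + 1 < level.size()) sha.update(level.get(i + 1));
                next.add(sha.digest());
            }
            level = next;
        }
        return level.get(0);
    }

    public static void main(String[] args) throws Exception {
        List<String> replicaA = List.of("k1=v1", "k2=v2", "k3=v3", "k4=v4");
        List<String> replicaB = List.of("k1=v1", "k2=STALE", "k3=v3", "k4=v4");
        System.out.println(Arrays.equals(merkleRoot(replicaA), merkleRoot(replicaA))); // true
        System.out.println(Arrays.equals(merkleRoot(replicaA), merkleRoot(replicaB))); // false
    }
}
```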
@thedumb_p
The Code Alchemist
4 months
Unclear write acknowledgment (when is a write "successful"?): the client assumes the write succeeded, but replicas haven't applied it (data loss risk). Mitigation: use quorum rules (R+W > N) or synchronous/semi-synchronous replication for stronger guarantees.
1
0
0
@thedumb_p
The Code Alchemist
4 months
Q7: Name two common failures in replication 👇 #SystemDesign #Interview #interviewtips #Byte
1
0
0
@thedumb_p
The Code Alchemist
4 months
Reads/Writes: Writes succeed after W replicas acknowledge. Reads succeed after R replicas respond, reconciling values if needed. Consistency: Tunable—adjusting R and W trades availability for consistency.
0
0
0
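The R + W > N rule above can be stated as one line of code: if the read quorum and write quorum together exceed the replica count, every read set must overlap every write set, so at least one replica answering a read holds the latest committed write. A minimal sketch:

```java
public class Quorum {
    // N = total replicas, W = acks needed for a write, R = responses needed for a read.
    // R + W > N guarantees read/write quorum overlap (by pigeonhole).
    static boolean overlapsGuaranteed(int n, int r, int w) {
        return r + w > n;
    }

    public static void main(String[] args) {
        int n = 3;
        System.out.println(overlapsGuaranteed(n, 2, 2)); // true:  balanced reads/writes
        System.out.println(overlapsGuaranteed(n, 1, 3)); // true:  fast reads, slow writes
        System.out.println(overlapsGuaranteed(n, 1, 1)); // false: stale reads possible
    }
}
```

This is the "tunable" part: raising R or W buys consistency at the cost of latency and availability, and lowering them does the reverse.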
@thedumb_p
The Code Alchemist
4 months
Flow: Any replica can accept reads or writes. The system uses quorum rules (R + W > N) for consistency. Replication: The coordinator node forwards writes to all relevant replicas (based on partitioning/sharding). Even if not a replica owner, it can hand off requests.
1
0
0