
Giovanni D'Andrea
@jontec8
Followers
353
Following
7K
Media
644
Statuses
4K
CTO @ Knowise - sometimes i touch prod - sometimes i shitpost - [email protected] - https://t.co/tqeinHk1p7 👀
Co-Founder & CTO 👉🏻
Joined May 2022
Rust is pure love and hate, but every time I see the size of the /target folder, I start thinking maybe node_modules wasn’t that bad after all 🫠
0
0
1
Reading helps you understand how things work. 📖 Building helps you understand why. Combine both and you’ll learn faster than any course could teach you. You’ll be unstoppable 💀
2
0
6
We literally hit 2.64M requests and almost 15k unique visitors in 30 days with zero marketing... thank you all, this is unreal 😭
1
0
6
Just rewatching Silicon Valley and I’d forgotten about those kinds of atrocities
0
0
2
Long-term memory in LLMs is a double-edged sword. It can make agents feel more human, or more confused. The next breakthrough might not be a bigger model. It might be one that knows what to forget.
0
0
0
How to prevent it (ideas) 💡 Maybe the solution isn’t more memory, but cleaner memory.
> Periodic summarization
> Context validation
> External memory stores (vector DBs)
> Hierarchical context refresh
1
0
0
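A minimal sketch of the "periodic summarization" idea from the tweet above, in Python. Everything here is an assumption for illustration: the token budget, the chars-per-token heuristic, and the `call_llm` stub stand in for whatever model client you actually use.

```python
# Sketch: fold older turns into a short summary once the context passes a budget.
MAX_CONTEXT_TOKENS = 8_000   # assumed budget, tune per model
KEEP_RECENT_TURNS = 10       # the latest turns stay verbatim


def call_llm(prompt: str) -> str:
    # Placeholder: swap in your real chat-completion call here.
    return "condensed summary of the earlier conversation"


def estimate_tokens(messages) -> int:
    # Rough heuristic (~4 chars per token); a real tokenizer would be more accurate.
    return sum(len(m["content"]) for m in messages) // 4


def compact_context(messages):
    """Replace old turns with a summary; keep the recent turns untouched."""
    if estimate_tokens(messages) < MAX_CONTEXT_TOKENS:
        return messages
    old, recent = messages[:-KEEP_RECENT_TURNS], messages[-KEEP_RECENT_TURNS:]
    summary = call_llm(
        "Summarize this conversation, keeping only facts and decisions "
        "that are still relevant:\n"
        + "\n".join(f"{m['role']}: {m['content']}" for m in old)
    )
    return [{"role": "system", "content": f"Earlier context (summarized): {summary}"}] + recent
```

The same shape works for the other ideas in the list: validate the summary against the recent turns before trusting it, or push the summarized turns into an external vector store instead of discarding them.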
The effects: The model becomes less precise. It forgets what mattered, overweights random bits, and starts rewriting its own truth. You can think of it as cognitive decay for machines.
1
0
0
Why it happens: The model doesn’t “remember” like a human. It just processes a giant sequence of tokens. As the context grows (100k+ tokens), focus blurs, dependencies weaken, and contradictions start to slip through unnoticed. That’s context auto-poisoning.
1
0
0
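Rough arithmetic behind the "100k+ tokens" point: if an agent keeps appending its own outputs to the prompt, the window fills within a few hundred steps. The step size and limits below are assumptions for illustration, not measurements.

```python
# Assumed numbers: ~500 tokens appended per agent step, a 100k-token window.
TOKENS_PER_STEP = 500
CONTEXT_LIMIT = 100_000

context = 2_000  # assumed initial prompt size
for step in range(1, 1_000):
    context += TOKENS_PER_STEP  # every output is fed back into the next prompt
    if context > CONTEXT_LIMIT:
        print(f"window full after {step} steps ({context:,} tokens)")
        break
```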
The concept: When an LLM reads too much of its own history, it starts to inherit its past mistakes. Each new generation of output slightly distorts the previous one, and over time, that distortion compounds. It’s like semantic noise building up in the system.
1
0
0
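The feedback loop described above, reduced to its skeleton: each generation is appended to the history the next call reads, so any error in step N becomes input "truth" for step N+1. `call_llm` is a hypothetical stub, not a real API.

```python
def call_llm(prompt: str) -> str:
    # Placeholder: imagine a real model call here. Any slight error in its
    # output is about to become part of the next prompt.
    return f"answer derived from {len(prompt)} chars of prior context"


history = ["Initial task description"]

for step in range(5):
    prompt = "\n".join(history)   # the model rereads everything it already said
    output = call_llm(prompt)
    history.append(output)        # ...mistakes included; distortion compounds
```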
Large context windows are powerful. Until they start poisoning themselves. As LLMs loop over their own outputs, semantics drift. Contradictions grow. Accuracy fades. Self-poisoning might be the next big challenge in long-term memory design. The real question is: how do we prevent it?
3
1
7
let’s be honest… who actually uses the Notepads tab in Cursor??
1
0
8
📘 Came across this book while visiting a few startup offices in San Francisco. It offers a clear and accessible take on MCPs, Agents, RAG and LLMs, plus the framework developed by the Mastra folks. Definitely worth a read (and it’s free).
0
0
10
Future languages:
- python
- rust
- golang
- wasm
Learn at least one before it’s too late
0
0
5