Florian Clanet
@FlolightC
2K Followers · 23K Following · 611 Media · 6K Statuses
Prototype Architect @awscloud | Cloud and AI/ML / GenAI enthusiast | Passionate about stage lighting | Opinions here my own | he/him. Follow me to grow together!
Toulouse, France
Joined April 2013
Hey, nice to meet you, I'm Florian! Coding enthusiast building cool things in the Cloud. Curiosity is my main sin and I'm trying to share it with you! I also talk about cybersecurity and lighting design, and try to find out the purpose of life. https://t.co/e8t28f1FUu
florianclanet.medium.com
Read writing from Florian Clanet on Medium. Prototype Architect for Amazon Web Services | Cloud enthusiast | Lighting design passionate https://twitter.com/FlolightC
4 · 2 · 48
Holy shit... this might be the next big paradigm shift in AI. Tencent + Tsinghua just dropped a paper called Continuous Autoregressive Language Models (CALM) and it basically kills the "next-token" paradigm every LLM is built on. Instead of predicting one token at a time,
332 · 1K · 7K
You will hate yourself for not doing this: 1. Pick a topic to learn 2. Find 3 roadmaps 3. Commit >2 hrs a day 4. Learn it from various resources 5. Take notes of what you study 6. Save notes on GitHub 7. Summarize them in 1 paragraph 8. Share notes and summary on X 9. Get so
16 · 88 · 982
AMD is using Cline as their coding agent for local models. After testing 20+ models, they found what actually works: > 32GB RAM: Qwen3-Coder 30B (4-bit) > 64GB RAM: Qwen3-Coder 30B (8-bit) > 128GB+ RAM: GLM-4.5-Air 10-minute setup with @lmstudio + Cline, linked below
Your vibes. Your code. Get started with completely local vibe coding using @cline and @lmstudio and the AMD Ryzen AI Max+ series processors. https://t.co/wgStKqz2Yn
16 · 58 · 544
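The setup above boils down to pointing any OpenAI-compatible client at LM Studio's local server. A minimal sketch, assuming LM Studio's default endpoint (http://localhost:1234/v1) and that a Qwen3-Coder build is loaded under the model name shown (both are assumptions, not from the tweet):

```python
import json
import urllib.request

# LM Studio's default OpenAI-compatible endpoint (adjust if you changed
# the server port in LM Studio's settings).
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_request(model: str, prompt: str) -> dict:
    # Standard OpenAI-style chat payload; `model` must match the name of
    # the model actually loaded in LM Studio.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }

def ask_local(model: str, prompt: str) -> str:
    # Sends the request to the local server; no API key is required.
    req = urllib.request.Request(
        LMSTUDIO_URL,
        data=json.dumps(build_request(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Example payload only (the model name "qwen3-coder-30b" is hypothetical):
print(build_request("qwen3-coder-30b", "Write a binary search in Python."))
```

Because the server speaks the OpenAI wire format, the same sketch works with any agent (Cline included) that accepts a custom base URL.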
The paper shows small models reason better when traces match their instincts. The key finding is that low-probability tokens overload small models, so filtering at 1% keeps guidance while preserving correct answers. Copying raw teacher traces drops accuracy by 20.5%. Tailored
1 · 3 · 15
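The filtering idea described above reads as a simple probability threshold over the teacher's reasoning trace. A toy sketch — the 1% cutoff comes from the tweet, but the probability source and the token values are made up for illustration:

```python
def filter_trace(trace, student_prob, threshold=0.01):
    # Keep only teacher-trace tokens the small (student) model itself
    # assigns at least `threshold` probability; rare tokens that would
    # overload the student are dropped while the guidance is preserved.
    return [tok for tok in trace if student_prob(tok) >= threshold]

# Hypothetical per-token probabilities under the student model.
probs = {"therefore": 0.40, "ergo": 0.002, "so": 0.30}
kept = filter_trace(["therefore", "ergo", "so"], lambda t: probs.get(t, 0.0))
print(kept)  # -> ['therefore', 'so']
```

"ergo" falls below the 1% cutoff and is dropped; the trace that remains matches what the student would plausibly generate itself.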
Best coding model in the world is here! https://t.co/ITTxpwjc3C
anthropic.com
Claude Sonnet 4.5 is the best coding model in the world, strongest model for building complex agents, and best model at using computers.
0 · 0 · 3
This is an interesting one: a real use case that helps people navigate the property investment landscape. https://t.co/MfgHe7tMgJ
aws.amazon.com
In this post, we explore how we built a multi-agent conversational AI system using Amazon Bedrock that delivers knowledge-grounded property investment advice. We explore the agent architecture, model...
0 · 0 · 0
I spent today digging through GitHub and collected 20+ hot open-source AI repositories. Forget unpaid internships, 600 job applications, and 200 cold outreach messages. Pick a repo, start with small contributions, and build a practical, solid profile. Don't run after opportunities,
12 · 42 · 325
Blog post: The AI Memory Wars: Why One System Crushed the Competition (And It's Not OpenAI). A nice benchmark of OpenAI Memory / LangMem / MemGPT / Mem0: one system comes out with 26% better accuracy and 91% faster performance. A must-read for anyone building long-term AI agents! https://t.co/dWLHZh2TcB
0 · 0 · 0
Scale your multi-modal AI agents using Amazon S3 Vectors + Strands Agent: from local to cloud with persistent memory. #AWS #AI #Python #CloudComputing
https://t.co/xqKDUTLHr0
dev.to
0 · 5 · 4
Tip of the day: use "<your-command> | pbcopy" to copy the output of a terminal command directly to your clipboard.
1 · 0 · 3
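For context, pbcopy ships with macOS only; the pipe pattern itself works with any filter. A minimal sketch (the aws command and the xclip alternative are illustrations, not from the tip):

```shell
# macOS: copy command output straight to the clipboard, e.g.
#   aws sts get-caller-identity | pbcopy
# Linux users can get the same effect with `xclip -selection clipboard`.
# The pipe pattern is portable; captured into a variable here so the
# sketch runs on any POSIX shell:
out=$(printf 'instance-id\n' | tr 'a-z' 'A-Z')
echo "$out"
```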
... 4/4 There is an interesting part about why reinforcement learning from human feedback cannot fully eliminate hallucinations, as it addresses symptoms rather than the root cause of conflicting internal representations within the model. Paper link:
0 · 0 · 0
... 3/4 - Fundamental design of LLMs as statistical prediction systems makes hallucinations an intrinsic feature rather than a fixable bug
1 · 0 · 0
... 2/4 - LLMs predict patterns rather than truths - Hallucinations occur because LLMs encounter conflicting patterns in training data and must choose between them
1 · 0 · 0
In case you haven't heard about it yet, have a look at the hallucination paper from the OpenAI team. In my opinion, it puts on paper some obvious thoughts you have when working day to day with LLMs, but part of it is genuinely interesting - 1/4
1 · 0 · 0
Interesting research from @damienhci team: Supporting Story Writing with Visual Elements. A novel approach to enhance narratives with integrated visuals. Generates and manages visual content alongside text, making storytelling more engaging and accessible. https://t.co/e4nXhNbdpU
0 · 0 · 0
Announcing the AI Futures Project: an AGI forecasting and governance organization led by @DKokotajlo. Our first project is AI 2027: a scenario forecast of the development and effects of superintelligence. We've also developed an AGI tabletop exercise.
9 · 30 · 135
I really liked the workflow when working with Kiro. I wonder if we could give Cline some custom instructions to behave the same way...
0 · 0 · 0
I just added the "Bedrock Token Counter" tool to my aws-useful repository. If you work with Claude or other LLMs on AWS, this Python tool counts tokens using the official Bedrock API - no separate keys needed. https://t.co/Yxn3Xs6yug
0 · 0 · 0
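Not the repository's actual code, but one way to count input tokens with nothing beyond standard Bedrock access is to read the usage block that the Converse API returns with every response. A hedged sketch — the model id, the 1-token probe, and the function name are assumptions:

```python
def count_input_tokens(model_id: str, text: str, client=None) -> int:
    # boto3 is imported lazily so the sketch reads without it installed;
    # a pre-built client can also be injected (handy for testing).
    if client is None:
        import boto3
        client = boto3.client("bedrock-runtime")
    # Converse returns a `usage` block with exact input/output token
    # counts; capping maxTokens at 1 keeps this probe call cheap. Note
    # that this does invoke the model, unlike a dedicated count-tokens
    # endpoint.
    resp = client.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": text}]}],
        inferenceConfig={"maxTokens": 1},
    )
    return resp["usage"]["inputTokens"]
```

The same AWS credentials used for any other Bedrock call suffice, which matches the "no separate keys needed" point above.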