Laurent Bindschaedler

@lbindschaedler

Followers
355
Following
150
Media
21
Statuses
314

Faculty @ MPI-SWS working on systems for Big Data and Machine Learning. Formerly Postdoc @ MIT CSAIL and PhD @ EPFL. Blockchain astronaut. Newbie entrepreneur.

Geneva, Switzerland
Joined August 2012
@lbindschaedler
Laurent Bindschaedler
22 days
Perfect example of what happens when you fire the PhDs in your leadership team… #GPT5
1
1
28
@lbindschaedler
Laurent Bindschaedler
29 days
Few people are talking about this: AI is completely changing how we build scalable software. Instead of throwing away whole microservices and rewriting from scratch, we now have agents endlessly optimizing them. Great devs knew when to start fresh. What kind of mess will AI make?
0
0
1
@lbindschaedler
Laurent Bindschaedler
2 months
🚀 Our latest collaboration with AU just came out! 👉 Transparent client-side caching for LLMs. CacheSaver massively cuts your costs and carbon emissions without impacting performance. Check it out and keep notifications on: more game-changing stuff on the way! 😱🔔
@aroraakhilcs
Akhil Arora
2 months
2️⃣ Cache Saver: A Modular Framework for Efficient, Affordable, and Reproducible LLM Inference. 🎓 Paper: 💻 Code: 📅 Fri 18 Jul - MAS Wkshp. - 📍 West Meeting Room 109-110. 📅 Fri 19 Jul - ESFOMO Wkshp. - 📍 East Exhibition Hall A. 🧵 4/n
0
1
6
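The idea of transparent client-side caching for LLM calls can be pictured with a small sketch. All names here are hypothetical illustrations, not the CacheSaver API: the wrapper hashes (model, prompt), serves repeated identical requests from a local store, and only forwards misses to the provider — which is where the cost and carbon savings come from.

```python
import hashlib

class LLMCache:
    """Toy sketch of a transparent client-side cache for LLM calls.

    Keyed on a hash of (model, prompt) so repeated identical requests
    are answered locally instead of hitting the provider again.
    """

    def __init__(self, llm_call):
        self._llm_call = llm_call  # underlying provider call (e.g. an API client)
        self._store = {}
        self.hits = 0
        self.misses = 0

    def _key(self, model, prompt):
        return hashlib.sha256(f"{model}\x00{prompt}".encode()).hexdigest()

    def complete(self, model, prompt):
        key = self._key(model, prompt)
        if key in self._store:          # cache hit: no network call
            self.hits += 1
            return self._store[key]
        self.misses += 1                # cache miss: forward to the provider
        result = self._llm_call(model, prompt)
        self._store[key] = result
        return result

# Usage: wrap a mock provider and issue the same request twice;
# the second call never reaches the provider.
calls = []
def fake_llm(model, prompt):
    calls.append(prompt)
    return f"summary of: {prompt}"

cache = LLMCache(fake_llm)
a = cache.complete("small-model", "summarize this review")
b = cache.complete("small-model", "summarize this review")
```

Because the cache sits entirely on the client, it is transparent to the application: callers use `complete()` exactly as they would use the raw provider call.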
@lbindschaedler
Laurent Bindschaedler
2 months
πŸ›°οΈ Excited to announce that we are open-sourcing SkyPulse, a Satellite Data Augmentation platform we built over the past couple of years!. SkyPulse is not just a research prototype. It is a production-ready system. πŸ”—
0
0
5
@lbindschaedler
Laurent Bindschaedler
3 months
RT @lbindschaedler: @illyism So they are basically leveraging delta encoding of QR codes for compression and that is the main innovation he…
0
1
0
@lbindschaedler
Laurent Bindschaedler
5 months
7/ 📄 Read the full paper: A huge thanks to my coauthor and intern, Excel Chukwu, for his incredible contributions! 🙌 Let's build the future of stateful AI systems together. 🚀 #AI #LLMs #MachineLearning #EuroMLSys #EuroSys
0
0
1
@lbindschaedler
Laurent Bindschaedler
5 months
6/ This framework unlocks adaptive, context-aware AI systems for applications like:
- Personalized assistants
- Organizational knowledge bases
- Task automation
1
0
1
@lbindschaedler
Laurent Bindschaedler
5 months
5/ How It Works:
1
0
1
@lbindschaedler
Laurent Bindschaedler
5 months
4/ Key Highlights:
- Stateful LLMs: Retain and update hierarchical knowledge, like LSM trees
- Hybrid Approach: Combines RAG for memory with LoRA for fast updates
- Better Performance: Improves state retention, query accuracy, and efficiency compared to traditional methods
1
0
1
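The LSM-tree analogy above can be made concrete with a toy sketch (an illustration of the idea, not the paper's implementation): new facts land in a small, fast level; when it fills up, it is merged ("compacted") into a larger, older level, and retrieval scans levels newest-first so recent updates shadow stale ones.

```python
class HierarchicalMemory:
    """Toy LSM-tree-style state store for an LLM (illustrative only).

    levels[0] holds the newest facts; overflow is compacted into
    levels[1]. Lookups scan newest-first, so a fresh update wins
    over an older value for the same key.
    """

    def __init__(self, level0_capacity=2):
        self.levels = [{}]              # levels[0] is the newest level
        self.capacity = level0_capacity

    def update(self, key, value):
        self.levels[0][key] = value
        if len(self.levels[0]) > self.capacity:
            self._compact()

    def _compact(self):
        if len(self.levels) == 1:
            self.levels.append({})
        # merge so that newer (level 0) entries overwrite older ones
        self.levels[1] = {**self.levels[1], **self.levels[0]}
        self.levels[0] = {}

    def retrieve(self, key):
        for level in self.levels:       # newest-first scan
            if key in level:
                return level[key]
        return None

mem = HierarchicalMemory(level0_capacity=2)
mem.update("user.city", "Geneva")
mem.update("user.lang", "fr")
mem.update("user.city", "Saarbruecken")  # in-place update of a recent fact
mem.update("user.role", "faculty")       # overflow triggers compaction
```

In the paper's setting, one can think of the fast level as cheap LoRA-style updates and the compacted levels as the long-term RAG store.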
@lbindschaedler
Laurent Bindschaedler
5 months
3/ In this work, led by my amazing intern Excel Chukwu, we propose a novel framework that transforms LLMs into stateful systems by combining:
- Retrieval-Augmented Generation (RAG) for long-term memory
- Low-Rank Adaptation (LoRA) for lightweight, efficient updates
1
0
1
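The RAG half of this combination is easy to sketch. The snippet below is a deliberately minimal stand-in (word-overlap scoring instead of embeddings, a pass-through `llm` callable): stored user facts are ranked against the query and prepended to the prompt, so a stateless model sees long-term context at inference time.

```python
def retrieve(memory, query, k=1):
    """Rank stored facts by word overlap with the query (a toy
    stand-in for embedding similarity) and return the top-k."""
    q = set(query.lower().split())
    scored = sorted(
        memory,
        key=lambda doc: len(q & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer(llm, memory, query):
    """RAG-style call: prepend retrieved context so the stateless
    model sees user-specific facts without any weight updates."""
    context = " ".join(retrieve(memory, query))
    return llm(f"context: {context}\nquestion: {query}")

memory = [
    "the user prefers short answers",
    "the user's project deadline is in March",
]
# A pass-through lambda stands in for a real model call.
out = answer(lambda p: p, memory, "when is the project deadline?")
```

LoRA then covers the complementary case: facts that should move into the model itself via small, cheap weight deltas rather than staying in the retrieval store.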
@lbindschaedler
Laurent Bindschaedler
5 months
2/ Large Language Models (LLMs) are powerful but stateless, meaning they can't retain user-specific context, task history, or evolving information. This limits their ability to provide personalized, adaptive responses in dynamic scenarios.
1
0
1
@lbindschaedler
Laurent Bindschaedler
5 months
1/ 🚀 Excited to share our latest research, "May the Memory Be With You: Efficient and Infinitely Updatable State for Large Language Models," presented at #EuroMLSys 2025! 🎉 We tackle the challenge of making LLMs stateful for personalized and adaptive interactions. Curious? 👇
1
0
3
@lbindschaedler
Laurent Bindschaedler
5 months
8/ Huge thanks to my amazing intern, Bardia, for leading this work! 🙌 Let's unlock more LLM use cases for analytics together. 🚀 #AI #LLMs #Databases #DOLAP2025
0
0
0
@lbindschaedler
Laurent Bindschaedler
5 months
7/ Code & datasets: (coming soon!) 🖥️
1
0
0
@lbindschaedler
Laurent Bindschaedler
5 months
6/ Curious? Read the full paper here: 📄
1
0
0
@lbindschaedler
Laurent Bindschaedler
5 months
5/ Key results:
- Models up to 76% smaller (via quantization, sparsification, pruning)
- Up to 3.31× faster throughput 🚀
- Identical or better accuracy
1
0
0
@lbindschaedler
Laurent Bindschaedler
5 months
4/ We tested IOLM-DB on tasks like:
- Summarization
- Error correction
- Semantic joins
- …
1
0
0
@lbindschaedler
Laurent Bindschaedler
5 months
3/ Enter IOLM-DB: our prototype system that makes this practical by generating lightweight models tailored to each query. ⚡
1
0
0
@lbindschaedler
Laurent Bindschaedler
5 months
2/ Imagine running this query:

SELECT product_id, user_id,
       prompt('summarize in 5 words: ' || review) AS review_summary
FROM product_reviews;

Cool, right? But it's crazy expensive at scale. Millions of rows = millions of LLM calls. 😱
1
0
0
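A quick back-of-the-envelope calculation shows why per-row LLM calls dominate such a query. The latency and price numbers below are illustrative assumptions, not measurements from the paper:

```python
# Cost sketch: one hosted-LLM call per row of the table.
rows = 1_000_000
ms_per_llm_call = 200      # assumed latency of one hosted-LLM call
usd_per_call = 0.0005      # assumed price per call

# Serial wall-clock time and total API cost for the whole query.
serial_hours = rows * ms_per_llm_call / 1000 / 3600
total_cost = rows * usd_per_call

print(f"serial time: {serial_hours:.1f} h, cost: ${total_cost:,.0f}")
```

Even with heavy parallelism the dollar cost is unchanged, which is the motivation for replacing the hosted model with a small, cheap, query-specific one.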
@lbindschaedler
Laurent Bindschaedler
5 months
1/ 🎉 Excited to present our paper, "The Case for Instance-Optimized LLMs in OLAP Databases", at #DOLAP (EDBT/ICDT) 2025! We tackle the challenge of scaling LLM-enhanced database queries. How? Let's dive in! 👇🧵
1
1
4