Jure Leskovec
@jure
44K Followers · 1K Following · 223 Media · 1K Statuses
Professor of #computerscience @Stanford; Co-founder at https://t.co/hhm1j5wP0f #machinelearning #graphs.
Stanford, CA
Joined August 2007
It’s time to bring the Foundation Model era to structured enterprise data. Thanks to the @mlopscommunity for having me. Full episode here:
home.mlops.community
Today’s foundation models excel at text and images—but they miss the relationships that define how the world works. In every enterprise, value emerges from connections: customers to products,...
By learning directly on the "raw" database structure (multiple tables at once), we can:
- Eliminate the time-sink of feature engineering.
- Capture signal that flat tables miss.
- Fix the "time travel" issues plaguing production ML. 4/5
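The "time travel" fix mentioned in the thread can be illustrated with a minimal sketch. The table rows, column names, and `spend_before` helper below are all hypothetical, purely for illustration: the point is that every feature must be aggregated strictly from events before the prediction timestamp, or future information leaks into training.

```python
from datetime import datetime

# Hypothetical orders table; each row references a customer.
orders = [
    {"customer_id": 1, "amount": 30.0, "ts": datetime(2024, 1, 5)},
    {"customer_id": 1, "amount": 50.0, "ts": datetime(2024, 2, 20)},
    {"customer_id": 1, "amount": 10.0, "ts": datetime(2024, 3, 1)},
]

def spend_before(customer_id, cutoff):
    """Aggregate only events strictly before the prediction time.

    Including rows at or after `cutoff` would leak future information
    into the training example -- the "time travel" bug.
    """
    return sum(o["amount"] for o in orders
               if o["customer_id"] == customer_id and o["ts"] < cutoff)

# Label observed on March 1; features may only see earlier history.
print(spend_before(1, datetime(2024, 3, 1)))  # 80.0
```

Hand-built pipelines get this wrong easily because each engineered feature needs its own cutoff logic; a model that learns directly on the timestamped tables can enforce the cutoff once, centrally.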
This is the gap Relational Foundation Models (RFMs) fill. Unlike LLMs, which rely on general "common sense," RFMs use Graph Neural Networks to reason over the complex web of interactions (user -> click -> product -> supplier) inside your data warehouse. 3/5
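The database-as-graph view in the thread can be sketched in a few lines. The tables and IDs below are invented toy data, not any real schema: every row becomes a node, and every foreign-key reference becomes an edge, so the user -> click -> product -> supplier chain becomes an ordinary graph path a GNN can traverse.

```python
# Toy warehouse: each row is a node; each foreign key is an edge.
users = [{"id": "u1"}, {"id": "u2"}]
products = [{"id": "p1", "supplier_id": "s1"}]
suppliers = [{"id": "s1"}]
clicks = [{"id": "c1", "user_id": "u1", "product_id": "p1"}]

def build_graph():
    nodes = {r["id"] for table in (users, products, suppliers, clicks)
             for r in table}
    edges = []
    for c in clicks:           # click rows link users to products
        edges.append((c["user_id"], c["id"]))
        edges.append((c["id"], c["product_id"]))
    for p in products:         # foreign key: product -> supplier
        edges.append((p["id"], p["supplier_id"]))
    return nodes, edges

nodes, edges = build_graph()
# u1 -> c1 -> p1 -> s1 is now a plain multi-hop path in the graph.
```

In practice a heterogeneous-graph library (e.g. PyTorch Geometric's `HeteroData`) would carry per-table node types and features, but the foreign-key-to-edge mapping is the same idea.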
The history of AI is a move toward learning on raw data. Computer Vision moved from filters to raw pixels. NLP moved from parsing to raw tokens. Yet, predictive modeling is stuck in the past: manually joining tables and hand-engineering features for months. 2/5
We seem to have forgotten that structured data is the blueprint of the business. While everyone is focused on LLMs and documents, the "ground truth" of your enterprise still lives in relational databases. I joined the @mlopscommunity Podcast to discuss why this matters. 🧵
I break down the architecture, the shift from manual ML pipelines to Foundation Models, and how industry leaders like Netflix are approaching this. Read the full analysis here:
towardsdatascience.com
LLMs are a seamless way to find value in your unstructured data, but the truth is, there is so much more value hidden within your structured data. This post explores what LLMs are (and aren’t)...
We don't need better prompts; we need a different architecture. Enter Relational Foundation Models (RFMs). By treating databases as graphs (nodes & edges), RFMs can learn patterns across tables without manual feature engineering. It’s the "GPT moment" for structured data.
Business data isn't just text; it's relational. It’s a complex graph of customers, transactions, and inventory. LLMs predict the next token. They don't "reason" over SQL joins or verify calculations. When accuracy matters (fraud detection, supply chain), hallucinations are a
MIT research suggests 95% of GenAI pilots are failing to deliver ROI. Why? Because we are forcing LLMs to do jobs they weren't designed for. LLMs mastered language. But they don't understand the structured, relational data that businesses actually run on. My new piece in
Incredibly proud of my student @_rishabhranjan_ and our collaboration with @SAP on this exciting work! 🚀 We’re bringing the power of Transformers beyond sequences—into the world of relational data that underpins enterprise applications. A great example of how foundational
Transformers are great for sequences, but most business-critical predictions (e.g. product sales, customer churn, ad CTR, in-hospital mortality) rely on highly-structured relational data where signal is scattered across rows, columns, linked tables and time. Excited to finally
Great reflections on how GNNs continue to thrive in specialized domains like relational data, even as the field broadens into geometric and transformer-based approaches. Exciting to see both academic and industry momentum — especially from teams like Kumo pushing the frontier of
@mttrdmnd I personally never identified with the label “Geometric Deep Learning”, but graph neural nets (GNNs) are still going strong for certain application domains (like relational databases). Plenty of people and industry labs still working on that (incl. startups like Kumo). As for
The Stanford Graph Learning Workshop 2025 videos are now live! 🎥 Watch all talks! Links below 👇 This year’s themes: 🧠 Agents 🔗 Relational Foundation Models ⚡ Fast LLM Inference Explore the frontiers of AI & data science with top researchers and innovators.
Want a job in AI? Don’t just study it — build with it. Launch projects. Join hackathons. Stay curious. At @Kumo_ai_team + @Stanford, we look for people who experiment, learn fast, and communicate well. 🧠 “There’s no playbook for AI — we’re writing it now.” @BusinessInsider
businessinsider.com
"There's no playbook for AI," says Stanford professor Jure Leskovec. That's why it's important to launch your own projects and develop these skills.
Biomni-R0-32B-Preview is now open-weight on Hugging Face! Biomni-R0-32B is a biomedical AI model trained by the Biomni team. It beats GPT-5 and Claude Sonnet 4. To ground progress, we’re also releasing Biomni-Eval1 — 443 data points across 10 tasks for benchmarking agents on
🚀 It’s time! The Stanford Graph Learning Workshop 2025 is kicking off NOW! Join us for a full day of cutting-edge research, dynamic talks, and community connection — all centered around this year’s themes: 🤖 Agents – autonomous systems reshaping human–AI interaction 🔗
AI’s revolution is incomplete. LLMs shine on unstructured text, yet stumble on what powers enterprises: relational data (tables, joins, keys, time, constraints). Tokens ≠ tables. I break down the gap—and what comes next—in my @AIconference keynote: https://t.co/ZP6tBPsvX6
Love seeing creative uses of https://t.co/JQmhNIjY9b! Philippe Dagher shows how to forecast 21-day SKU demand — no retraining, no feature factory — just data → graph → forecast. Built on https://t.co/cUKKS70k36’s relational foundation model, purpose-built for structured data.
Only a few days left until the Stanford Graph Learning Workshop on Oct 14. Check out the full schedule of talks on Agents, RFMs, and LLM Inference. Time is running out to register! See the agenda & join us: https://t.co/fOsbBIqGR5
Scaling GNNs/Graph Transformers isn’t about layers—it’s about neighborhoods. Our blog breaks down how @Kumo_ai_team online sampling, temporal correctness, and feature streaming keep GPUs busy and make 6+ hop training on billion-node graphs real. Must-read! https://t.co/gjdShTAEpN
🚀 Biomni v0.0.7 is live! New features: 📜 PDF export for agent chats 🦠 SOTA cell type transfer algorithms 🔬 LazySlide pathology support 🔍 Claude web search 🌟 Bioimaging pipeline tools 🧬 Gene conversion & ESM embeddings 🪴 Glycoengineering capabilities Plus improved commercial
github.com
Biomni: a general-purpose biomedical AI agent. Contribute to snap-stanford/Biomni development by creating an account on GitHub.