Jure Leskovec Profile
Jure Leskovec

@jure

Followers: 44K · Following: 1K · Media: 223 · Statuses: 1K

Professor of #computerscience @Stanford; Co-founder at https://t.co/hhm1j5wP0f #machinelearning #graphs.

Stanford, CA
Joined August 2007
@jure
Jure Leskovec
4 days
By learning directly on the "raw" database structure (multiple tables at once), we can:
- Eliminate the time-sink of feature engineering.
- Capture signal that flat tables miss.
- Fix the "time travel" issues plaguing production ML. 4/5
1 · 0 · 1
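The "time travel" (temporal leakage) issue in the tweet above can be sketched in plain Python: a training example built at a cutoff time may only use events strictly before that cutoff, while the label comes from the window after it. All names and data here are toy assumptions, not any real pipeline.

```python
from datetime import datetime, timedelta

# Toy event log: (user_id, event_type, timestamp); hypothetical data.
events = [
    ("u1", "click", datetime(2024, 1, 3)),
    ("u1", "purchase", datetime(2024, 1, 10)),
    ("u1", "click", datetime(2024, 1, 12)),
]

def make_example(user, cutoff, horizon):
    """Features come only from events strictly before `cutoff`;
    the label comes from [cutoff, cutoff + horizon).
    Letting any event at or after the cutoff leak into the
    features is the "time travel" bug."""
    past = [e for e in events if e[0] == user and e[2] < cutoff]
    future = [e for e in events
              if e[0] == user and cutoff <= e[2] < cutoff + horizon]
    features = {"n_past_clicks": sum(1 for e in past if e[1] == "click")}
    label = any(e[1] == "purchase" for e in future)
    return features, label

feats, label = make_example("u1", datetime(2024, 1, 5), timedelta(days=14))
# feats counts only the Jan 3 click; the Jan 10 purchase supplies the label.
```

Learning directly on the timestamped database makes this cutoff discipline part of the model's input, instead of something each feature pipeline must re-implement.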
@jure
Jure Leskovec
4 days
This is the gap Relational Foundation Models (RFMs) fill. Unlike LLMs, which rely on general "common sense," RFMs use Graph Neural Networks to reason over the complex web of interactions (user -> click -> product -> supplier) inside your data warehouse. 3/5
1 · 0 · 1
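A minimal sketch of the database-as-graph idea in the tweet above: rows become typed nodes, foreign keys become edges, and a user reaches a supplier in two hops (user -> click -> product -> supplier). The tables and IDs here are toy assumptions.

```python
from collections import defaultdict

# Toy warehouse: rows become typed nodes, foreign keys become edges.
products = {"p1": {"supplier_id": "s1"}}
clicks = [("u1", "p1"), ("u2", "p1")]  # (user_id, product_id) pairs

edges = []
for user_id, product_id in clicks:
    edges.append((("user", user_id), ("product", product_id)))
for product_id, row in products.items():
    edges.append((("product", product_id), ("supplier", row["supplier_id"])))

# Undirected adjacency over the typed graph
adj = defaultdict(set)
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

# Two hops from u1 reach the supplier: user -> product -> supplier
hop1 = adj[("user", "u1")]
hop2 = set().union(*(adj[n] for n in hop1)) - {("user", "u1")}
```

A GNN runs message passing over exactly this kind of typed adjacency, which is why signal scattered across several tables becomes reachable without manual joins.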
@jure
Jure Leskovec
4 days
The history of AI is a move toward learning on raw data. Computer Vision moved from filters to raw pixels. NLP moved from parsing to raw tokens. Yet, predictive modeling is stuck in the past: manually joining tables and hand-engineering features for months. 2/5
1 · 0 · 0
@jure
Jure Leskovec
4 days
We seem to have forgotten that structured data is the blueprint of the business. While everyone is focused on LLMs and documents, the "ground truth" of your enterprise still lives in relational databases. I joined the @mlopscommunity Podcast to discuss why this matters. 🧵
1 · 5 · 29
@jure
Jure Leskovec
11 days
I break down the architecture, the shift from manual ML pipelines to Foundation Models, and how industry leaders like Netflix are approaching this. Read the full analysis here:
towardsdatascience.com
LLMs are a seamless way to find value in your unstructured data, but the truth is, there is so much more value hidden within your structured data. This post explores what LLMs are (and aren’t)...
1 · 4 · 31
@jure
Jure Leskovec
11 days
We don't need better prompts; we need a different architecture. Enter Relational Foundation Models (RFMs). By treating databases as graphs (nodes & edges), RFMs can learn patterns across tables without manual feature engineering. It’s the "GPT moment" for structured data.
1 · 3 · 37
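The core GNN operation that replaces hand-engineered join-and-aggregate features can be sketched as one round of mean-neighbor message passing. Features and node IDs below are toy assumptions, purely illustrative.

```python
# Toy node features and neighbor lists (hypothetical IDs).
features = {"u1": [1.0, 0.0], "u2": [0.0, 1.0], "p1": [0.5, 0.5]}
neighbors = {"u1": ["p1"], "u2": ["p1"], "p1": ["u1", "u2"]}

def aggregate(node):
    """Mean of neighbor feature vectors: one message-passing round.
    Stacking such rounds lets signal flow across linked tables."""
    nbrs = neighbors[node]
    dim = len(features[node])
    return [sum(features[n][d] for n in nbrs) / len(nbrs) for d in range(dim)]

updated = {node: aggregate(node) for node in features}
# p1 averages its two users' vectors; each user inherits p1's vector.
```

Where a feature pipeline would hard-code "average of joined rows" per table pair, the learned aggregation applies uniformly across all edge types.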
@jure
Jure Leskovec
11 days
Business data isn't just text; it's relational. It’s a complex graph of customers, transactions, and inventory. LLMs predict the next token. They don't "reason" over SQL joins or verify calculations. When accuracy matters (fraud detection, supply chain), hallucinations are a
1 · 8 · 42
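The kind of computation a next-token predictor cannot verify is an exact join-plus-aggregate. A self-contained sqlite3 sketch (toy schema and data, purely illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE transactions (customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO transactions VALUES (1, 10.0), (1, 15.0), (2, 7.5);
""")
# Exact per-customer totals from a join + aggregate: the answer is either
# right or wrong, never "approximately plausible" like sampled tokens.
cur.execute("""
    SELECT c.name, SUM(t.amount)
    FROM customers c
    JOIN transactions t ON t.customer_id = c.id
    GROUP BY c.id
""")
totals = dict(cur.fetchall())
```

In fraud detection or supply chain settings, a model that hallucinates a total like this is worse than no model at all, which is the accuracy point above.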
@jure
Jure Leskovec
11 days
MIT research suggests 95% of GenAI pilots are failing to deliver ROI. Why? Because we are forcing LLMs to do jobs they weren't designed for. LLMs mastered language. But they don't understand the structured, relational data that businesses actually run on. My new piece in
28 · 90 · 396
@jure
Jure Leskovec
1 month
Incredibly proud of my student @_rishabhranjan_ and our collaboration with @SAP on this exciting work! 🚀 We’re bringing the power of Transformers beyond sequences—into the world of relational data that underpins enterprise applications. A great example of how foundational
@_rishabhranjan_
rishabh ranjan
1 month
Transformers are great for sequences, but most business-critical predictions (e.g. product sales, customer churn, ad CTR, in-hospital mortality) rely on highly-structured relational data where signal is scattered across rows, columns, linked tables and time. Excited to finally
1 · 16 · 104
@jure
Jure Leskovec
1 month
Great reflections on how GNNs continue to thrive in specialized domains like relational data, even as the field broadens into geometric and transformer-based approaches. Exciting to see both academic and industry momentum — especially from teams like Kumo pushing the frontier of
@tkipf
Thomas Kipf
1 month
@mttrdmnd I personally never identified with the label “Geometric Deep Learning”, but graph neural nets (GNNs) are still going strong for certain application domains (like relational databases). Plenty of people and industry labs still working on that (incl. startups like Kumo). As for
1 · 8 · 93
@jure
Jure Leskovec
1 month
The Stanford Graph Learning Workshop 2025 videos are now live! 🎥 Watch all talks! Links below 👇
This year’s themes:
🧠 Agents
🔗 Relational Foundation Models
⚡ Fast LLM Inference
Explore the frontiers of AI & data science with top researchers and innovators.
4 · 91 · 387
@jure
Jure Leskovec
1 month
Want a job in AI? Don’t just study it — build with it. Launch projects. Join hackathons. Stay curious. At @Kumo_ai_team + @Stanford, we look for people who experiment, learn fast, and communicate well. 🧠 “There’s no playbook for AI — we’re writing it now.” @BusinessInsider
businessinsider.com
"There's no playbook for AI," says Stanford professor Jure Leskovec. That's why it's important to launch your own projects and develop these skills.
3 · 13 · 151
@jure
Jure Leskovec
2 months
Biomni-R0-32B-Preview is now open-weight on Hugging Face! Biomni-R0-32B is a biomedical AI model trained by the Biomni team. It beats GPT-5 and Claude Sonnet 4. To ground progress, we’re also releasing Biomni-Eval1 — 443 data points across 10 tasks for benchmarking agents on
1 · 12 · 42
@jure
Jure Leskovec
2 months
🚀 It’s time! The Stanford Graph Learning Workshop 2025 is kicking off NOW! Join us for a full day of cutting-edge research, dynamic talks, and community connection — all centered around this year’s themes:
🤖 Agents – autonomous systems reshaping human–AI interaction
🔗
1 · 22 · 143
@jure
Jure Leskovec
2 months
AI’s revolution is incomplete. LLMs shine on unstructured text, yet stumble on what powers enterprises: relational data (tables, joins, keys, time, constraints). Tokens ≠ tables. I break down the gap—and what comes next—in my @AIconference keynote: https://t.co/ZP6tBPsvX6
0 · 7 · 74
@jure
Jure Leskovec
2 months
Love seeing creative uses of https://t.co/JQmhNIjY9b! Philippe Dagher shows how to forecast 21-day SKU demand — no retraining, no feature factory — just data → graph → forecast. Built on https://t.co/cUKKS70k36’s relational foundation model, purpose-built for structured data.
0 · 3 · 24
@jure
Jure Leskovec
2 months
Only a few days left until the Stanford Graph Learning Workshop on Oct 14. Check out the full schedule of talks on Agents, RFMs, and LLM Inference. Time is running out to register! See the agenda & join us: https://t.co/fOsbBIqGR5
1 · 11 · 27
@jure
Jure Leskovec
2 months
Scaling GNNs/Graph Transformers isn’t about layers—it’s about neighborhoods. Our blog breaks down how @Kumo_ai_team’s online sampling, temporal correctness, and feature streaming keep GPUs busy and make 6+ hop training on billion-node graphs real. Must-read! https://t.co/gjdShTAEpN
0 · 4 · 24
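The "neighborhoods, not layers" point above can be sketched as capped-fanout neighbor sampling: per-example compute is bounded by the fanout product, not by true node degrees. This is a generic sketch on a synthetic graph, not Kumo's actual implementation.

```python
import random

random.seed(0)

# Synthetic two-level graph: every node has 100 children (hypothetical IDs).
adj = {"root": [f"root-{i}" for i in range(100)]}
for node in list(adj["root"]):
    adj[node] = [f"{node}-{i}" for i in range(100)]

def sample_neighborhood(seed, fanouts):
    """Draw at most fanouts[k] neighbors per frontier node at hop k,
    so an example's cost is bounded by the fanout product even when
    real node degrees run into the millions."""
    frontier, hops = [seed], []
    for fanout in fanouts:
        nxt = []
        for node in frontier:
            nbrs = adj.get(node, [])
            nxt.extend(random.sample(nbrs, min(fanout, len(nbrs))))
        hops.append(nxt)
        frontier = nxt
    return hops

hops = sample_neighborhood("root", fanouts=[10, 5])
# 10 first-hop nodes, then 5 neighbors for each of them: 50 second-hop nodes.
```

With fixed fanouts the sampled subgraph size is predictable per batch, which is what keeps GPUs saturated instead of stalling on a few high-degree nodes.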
@ProjectBiomni
Biomni
2 months
🚀 Biomni v0.0.7 is live! New features:
📜 PDF export for agent chats
🦠 SOTA cell type transfer algorithms
🔬 LazySlide pathology support
🔍 Claude web search
🌟 Bioimaging pipeline tools
🧬 Gene conversion & ESM embeddings
🪴 Glycoengineering capabilities
Plus improved commercial
github.com
Biomni: a general-purpose biomedical AI agent. Contribute to snap-stanford/Biomni development by creating an account on GitHub.
0 · 9 · 57