Junchen Jiang

@JunchenJiang

Followers: 397 · Following: 88 · Media: 4 · Statuses: 85

CS Prof @ UChicago https://t.co/U01oOWGnip (Fast distributed LLM inference) https://t.co/hoetjwXKIt (Best KV cache layer)

Chicago, IL
Joined September 2012
Junchen Jiang (@JunchenJiang) · 1 day ago
Go LMCache 🚀
EmbeddedLLM (@EmbeddedLLM) · 2 days ago
@lmcache in @vllm_project Singapore meetup!
Junchen Jiang (@JunchenJiang) · 13 days ago
RT @lmcache: 8 KV-Cache Systems You Can't Afford to Miss in 2025. By 2025, KV-cache has evolved from a "nice-to-have" optimization into a c…
Junchen Jiang (@JunchenJiang) · 15 days ago
RT @zhzHNN: Interviewing 100 Bay Area Startups has always been my dream, and today I'm starting the journey. 🚀 Big thanks to @lmcache f…
Junchen Jiang (@JunchenJiang) · 16 days ago
RT @TerryTangYuan: Excited to share that I'll be speaking at Cloud Native K8s AI Day, in addition to @KubeCon_! Dan Sun and I will be del…
colocatedeventsna2025.sched.com – CNCF-hosted Co-located Events North America 2025
Junchen Jiang (@JunchenJiang) · 20 days ago
RT @lmcache: CacheGen lets you store KV caches on disk or AWS S3 and load them way faster than recomputing! Mode…
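The tweet above describes persisting a prompt's KV cache and reloading it later instead of re-running prefill. Below is a minimal sketch of that round-trip, assuming Hugging Face transformers; the model name "gpt2" and the /tmp path are illustrative stand-ins, and CacheGen itself additionally compresses the cache into a compact bitstream, which a plain torch.save round-trip does not.

# Sketch: persist a prefix's KV cache, then reload it instead of
# recomputing prefill. Illustrative only; not CacheGen's actual codec.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # hypothetical stand-in; any causal LM works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

prompt = "A long shared document prefix ..."
ids = tok(prompt, return_tensors="pt").input_ids

# Prefill once and persist the per-layer key/value tensors.
with torch.no_grad():
    out = model(ids, use_cache=True)
torch.save(out.past_key_values, "/tmp/prefix_kv.pt")  # or push to S3

# Later, possibly on another worker: reload instead of recomputing.
# weights_only=False because the cache object is not a bare tensor.
past = torch.load("/tmp/prefix_kv.pt", weights_only=False)
suffix_ids = tok(" What does it say?", return_tensors="pt").input_ids
with torch.no_grad():
    out2 = model(suffix_ids, past_key_values=past, use_cache=True)

Loading saved tensors from local disk or object storage is typically much cheaper than re-running prefill over a long prefix, which is the speedup the tweet is pointing at.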
Junchen Jiang (@JunchenJiang) · 23 days ago
RT @bentomlai: 🤔 What is KV cache offloading and why does it matter for LLM inference? #LLMs use the KV cache to accelerate inference spee…
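As background for the offloading question above: the KV cache is what keeps decoding from re-projecting keys and values for every past token at every step. A toy single-head attention loop, with made-up dimensions and random weights, shows the difference:

# Toy decode loop: without a KV cache, step t re-projects K/V for all
# t past tokens (O(T^2) projections over a generation); with a cache,
# each step projects only the newest token (O(T)).
import numpy as np

d = 64
rng = np.random.default_rng(0)
Wk, Wv, Wq = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))

def attend(q, K, V):
    w = np.exp(q @ K.T / np.sqrt(d))  # unnormalized attention scores
    return (w / w.sum()) @ V

xs = rng.standard_normal((128, d))  # token embeddings, one per step

# No cache: step t recomputes K/V for all t past tokens.
for t in range(1, len(xs)):
    K, V = xs[:t] @ Wk, xs[:t] @ Wv
    _ = attend(xs[t] @ Wq, K, V)

# With cache: append one new K/V row per step and reuse the rest.
K = np.empty((0, d)); V = np.empty((0, d))
for t in range(len(xs) - 1):
    K = np.vstack([K, xs[t] @ Wk]); V = np.vstack([V, xs[t] @ Wv])
    _ = attend(xs[t + 1] @ Wq, K, V)

Offloading enters the picture because those cached K/V rows grow linearly with context length and can outgrow GPU memory, at which point they are moved to CPU RAM or disk and paged back in on demand.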
Junchen Jiang (@JunchenJiang) · 24 days ago
RT @lmcache: LMCache supports gpt-oss (20B/120B) on Day 1! TTFT 1.20s → 0.39s (-67.5%), finish time 15.70s → 7.73s (-50.7%) compared to Va…
Junchen Jiang (@JunchenJiang) · 26 days ago
RT @lmcache: 🚀 Big news from LMCache Lab! 📝 3 papers accepted at SOSP '25 & NSDI '26, pushing the frontier of LLM-inference efficiency:…
Junchen Jiang (@JunchenJiang) · 1 month ago
RT @NadavTimor: KV cache go brrr with @JunchenJiang's @lmcache! Join us tomorrow to learn more about next-gen long-context LLM inference: h…
Junchen Jiang (@JunchenJiang) · 1 month ago
RT @zhzHNN: @hidecloud Hi, we are organizing a meetup in Bay Area to discuss context engineering with @JunchenJiang and @lmcache. Are you…
Junchen Jiang (@JunchenJiang) · 1 month ago
RT @zhzHNN: 25 Must-Know Projects for AI/LLM Serving – From 2017 to Now.
Junchen Jiang (@JunchenJiang) · 1 month ago
RT @siddhantrayyy: With RAG and agents becoming ubiquitous in LLM systems, tuning quality and performance JOINTLY is essential to achieve t…
Junchen Jiang (@JunchenJiang) · 1 month ago
RT @astrogu_: Excited to share our latest work METIS at #SOSP2025. This one's special as it's my first full CS project from start to finis…
Junchen Jiang (@JunchenJiang) · 2 months ago
RT @lmcache: The gang 🫡
Junchen Jiang (@JunchenJiang) · 2 months ago
RT @this_will_echo: 🤯 Believe it or not, even when an LLM generates just ONE SINGLE word, it can still be powerful! Say in recommendation:…
Junchen Jiang (@JunchenJiang) · 2 months ago
RT @lmcache: 🚨 LMCache now turbocharges multimodal models in vLLM! By caching image-token KV pairs, repeated images now get ~100% cache hi…
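The mechanism in the multimodal tweet above (reusing the KV entries of image tokens when the same image reappears) reduces, at its core, to content-addressed lookup. A minimal sketch, where encode_image_kv is a hypothetical stand-in for the real vision-encoder-plus-prefill step inside vLLM/LMCache:

# Sketch: key the KV store by a hash of the image bytes so a repeated
# image is a guaranteed hit. Only the lookup pattern is the point here.
import hashlib

kv_store: dict[str, bytes] = {}

def image_key(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()

def encode_image_kv(image_bytes: bytes) -> bytes:
    # Hypothetical placeholder for the expensive encode/prefill step.
    return b"kv-for-" + image_key(image_bytes)[:8].encode()

def get_image_kv(image_bytes: bytes) -> bytes:
    k = image_key(image_bytes)
    if k not in kv_store:              # miss: pay the encode cost once
        kv_store[k] = encode_image_kv(image_bytes)
    return kv_store[k]                 # repeated image: cheap cache hit

img = b"\x89PNG...same bytes sent twice"
assert get_image_kv(img) is get_image_kv(img)  # second call hits cache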
Junchen Jiang (@JunchenJiang) · 2 months ago
RT @lmcache: LMCache reaches 2,000+ stars on GitHub! 🌟 A huge thank you to our open-source community: your support is fueling next-gen eff…
Junchen Jiang (@JunchenJiang) · 2 months ago
RT @astrogu_: 🥳🥳🥳
Junchen Jiang (@JunchenJiang) · 2 months ago
RT @GitHubGPT: 📛 LMCache. 🧠 LMCache, an LLM engine, boosts performance by minimizing TTFT and enhancing throughput via effective KV cache ma…
github.com – Supercharge Your LLM with the Fastest KV Cache Layer - LMCache/LMCache
Junchen Jiang (@JunchenJiang) · 2 months ago
RT @lmcache: Our very own @JunchenJiang gave a talk about large-scale efficient inference at Open Source Summit 2025 yesterday with Yue Zh…