Yu Zhang
@yzhang_cs
Followers: 1K · Following: 5K · Media: 8 · Statuses: 730
@Kimi_Moonshot; PhD Student @ Soochow University; working on efficient methods for LLMs; disciple of parallel programming; INTP
Joined February 2023
We're also shipping fla-core in lock-step with flash-linear-attention: a minimal, forever-in-sync companion pkg that carries nothing except triton+torch. Need only the fused Norm, CausalConv, and linear-attention kernels, without worrying about transformers? fla-core is enough. https://t.co/uspgYtZ4t0
pypi.org
Core operations for flash-linear-attention
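For context, a rough sketch of what using the companion package might look like. The package name comes from the pypi card above, but the import paths and kernel signatures below are assumptions mirrored from the flash-linear-attention repo, not checked against fla-core's docs.

# Hedged usage sketch; module paths and call signatures are assumptions.
# pip install fla-core   # pulls in only torch + triton, per the announcement
import torch
from fla.modules import RMSNorm            # fused norm kernel (assumed path)
from fla.ops.gla import chunk_gla          # chunked gated linear-attention kernel (assumed path)

B, T, H, D = 2, 1024, 4, 64
q = torch.randn(B, T, H, D, device="cuda", dtype=torch.bfloat16)
k, v = torch.randn_like(q), torch.randn_like(q)
g = torch.randn_like(q).sigmoid().log()    # log forget gates
o, _ = chunk_gla(q, k, v, g)               # Triton kernel; no transformers dependency needed
o = RMSNorm(D).to(device="cuda", dtype=torch.bfloat16)(o)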
Excited to see Gated DeltaNet being adopted in the @Alibaba_Qwen series! It has also previously demonstrated strong effectiveness in @nvidia's Jet-Nemotron.
1
8
65
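For reference, the gated delta rule at the heart of Gated DeltaNet, written in LaTeX from the published description (notation may differ slightly from the paper's):

S_t = S_{t-1}\,\alpha_t\left(I - \beta_t k_t k_t^{\top}\right) + \beta_t v_t k_t^{\top}, \qquad o_t = S_t q_t

Here \alpha_t \in (0,1) is a data-dependent forget gate and \beta_t a writing strength; \alpha_t \equiv 1 recovers the plain delta rule.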
If you are interested in Kimi K2 Thinking, you can check out this interview with Yang Zhilin, founder of Kimi (with Chinese and English bilingual subtitles):
3
6
51
(1/16) Introducing PaTH: a RoPE-free contextualized position encoding scheme, built for stronger state tracking, better extrapolation, and hardware-efficient training. PaTH outperforms RoPE across short and long language modeling benchmarks https://t.co/nJItUuYKWZ
arxiv.org
The attention mechanism is a core primitive in modern large language models (LLMs) and AI more broadly. Since attention by itself is permutation-invariant, position encoding is essential for...
9
88
551
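A hedged reading of what "contextualized" means here, based only on the abstract snippet above (the exact parameterization is in the paper): RoPE applies a fixed transform that depends only on the distance i - j, whereas a PaTH-style scheme makes the transform between positions data-dependent, e.g. a cumulative product of identity-plus-rank-one (Householder-like) matrices over the intervening tokens:

\text{RoPE: } a_{ij} = q_i^{\top} R_{i-j}\, k_j
\qquad
\text{PaTH-style: } a_{ij} = q_i^{\top} \Big( \prod_{t=j+1}^{i} \big(I - \beta_t w_t w_t^{\top}\big) \Big) k_j

Because each factor is identity plus a rank-one term, the cumulative products admit blockwise computation, which is presumably what enables the hardware-efficient training mentioned above.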
We've just finished some work on reducing Muon's sensitivity to the learning rate, and exploring a lot of design choices. If you want to see how we did this, follow me... 1/x (Work led by the amazing @CrichaelMawshaw)
5
22
178
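For readers who haven't seen Muon: its core step replaces the raw momentum matrix with an approximately orthogonalized version via a few Newton-Schulz iterations. The minimal sketch below uses the quintic coefficients from the widely circulated reference implementation; it illustrates the baseline optimizer only, not the learning-rate-sensitivity work referenced in the thread.

import torch

def newton_schulz_orthogonalize(G: torch.Tensor, steps: int = 5) -> torch.Tensor:
    # Approximately maps G to the nearest semi-orthogonal matrix (the U V^T of its SVD).
    a, b, c = 3.4445, -4.7750, 2.0315          # quintic iteration coefficients
    X = G.to(torch.bfloat16)
    X = X / (X.norm() + 1e-7)                  # bring the spectral norm below ~1
    transposed = G.size(0) > G.size(1)
    if transposed:
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * A @ A) @ X
    if transposed:
        X = X.T
    return X.to(G.dtype)

# Muon-style step for a 2D weight W with momentum buffer M and gradient grad:
#   M = beta * M + grad
#   W -= lr * newton_schulz_orthogonalize(M)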
π "Quantization is not a compromise β it's the next paradigm." After K2-Thinking's release, many developers have been curious about its native INT4 quantization format. εε°δΌ, infra engineer at @Kimi_Moonshot and Zhihu contributor, shares an insider's view on why this choice
13
90
531
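To ground the INT4 discussion, here is a generic symmetric, per-group weight-only INT4 quantize/dequantize sketch. It illustrates the format class only; the group size and scheme are assumptions, not Kimi K2-Thinking's actual recipe.

import torch

def quantize_int4(w: torch.Tensor, group_size: int = 32):
    # Symmetric per-group INT4: integer codes in [-8, 7], one float scale per group.
    groups = w.reshape(-1, group_size)
    scale = groups.abs().amax(dim=1, keepdim=True) / 7.0
    codes = torch.clamp(torch.round(groups / scale), -8, 7).to(torch.int8)  # 4-bit values stored in int8
    return codes, scale

def dequantize_int4(codes: torch.Tensor, scale: torch.Tensor, shape):
    return (codes.float() * scale).reshape(shape)

w = torch.randn(4096, 4096)
codes, scale = quantize_int4(w)
w_hat = dequantize_int4(codes, scale, w.shape)
print((w - w_hat).abs().mean())   # mean absolute quantization error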
[ASCII art] Just dropped down to say: Don't Push To Production On Friday
106
337
3K
> All benchmark results are reported under INT4 precision. Do you understand what a flex this was. They go toe to toe with GPT-5 on the heaviest, longest-range tasks, with hundreds of tool calls. ALL IN INT4. «Convert to fp8 if you need» Frontier lab.
Hello, Kimi K2 Thinking! The Open-Source Thinking Agent Model is here. • SOTA on HLE (44.9%) and BrowseComp (60.2%) • Executes up to 200-300 sequential tool calls without human interference • Excels in reasoning, agentic search, and coding • 256K context window Built
21
84
763
MoonshotAI has released Kimi K2 Thinking, a new reasoning variant of Kimi K2 that achieves #1 in the Tau2 Bench Telecom agentic benchmark and is potentially the new leading open weights model. Kimi K2 Thinking is one of the largest open weights models ever, at 1T total parameters
81
286
2K
Leaving Meta and PyTorch: I'm stepping down from PyTorch and leaving Meta on November 17th. tl;dr: didn't want to be doing PyTorch forever, and it seemed like the perfect time to transition right after I got back from a long leave and the project built itself around me. Eleven years
496
554
11K
Day-0 support for Kimi K2 Thinking on SGLang. The new open-source thinking-agent model pushes reasoning, coding, and multi-step tool use to new heights. Proud to collaborate with @Kimi_Moonshot to make it run seamlessly: python -m sglang.launch_server \ --model-path
0
6
26
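The launch command in the tweet is cut off after --model-path. For illustration only, a plausible full invocation might look like the following; the model path and extra flags are assumptions, not the tweet's actual arguments.

# Hypothetical invocation (flags and model path assumed, not from the tweet):
python -m sglang.launch_server \
  --model-path moonshotai/Kimi-K2-Thinking \
  --tp 8 \
  --trust-remote-code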
Hello, Kimi K2 Thinking! The Open-Source Thinking Agent Model is here. • SOTA on HLE (44.9%) and BrowseComp (60.2%) • Executes up to 200-300 sequential tool calls without human interference • Excels in reasoning, agentic search, and coding • 256K context window Built
555
1K
9K
Expectation of the maximum of Gaussian random variables https://t.co/UuPsm7LH4w
1
4
54
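For reference, the standard result this title usually refers to: for n centered Gaussian variables X_1, ..., X_n, each with variance at most \sigma^2 (independence not required for the upper bound),

\mathbb{E}\Big[\max_{1 \le i \le n} X_i\Big] \;\le\; \sigma \sqrt{2 \ln n},

and for i.i.d. variables this is asymptotically tight: \mathbb{E}[\max_i X_i] \sim \sigma \sqrt{2 \ln n} as n \to \infty.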
I just love Hugging Face. Their new 200+ page Training Playbook covers everything: training frameworks, model architecture, data curation, pre/mid/post-training, eval, how GPUs work, latest research, and ablations. Packed with practical wisdom. I read it like a novel.
15
63
587
Kimi K2 Thinking benchmarks are here and it's competitive with (and in some cases beats!) GPT-5
8
10
278
Hybrid models like Qwen3-Next, Nemotron Nano 2, and Granite 4.0 are now fully supported in vLLM! Check out our latest blog from the vLLM team at IBM to learn how the vLLM community has elevated hybrid models from experimental hacks in V0 to first-class citizens in V1.
1
35
139
Love to see it - ongoing community effort makes deploying recurrent models (Mamba, DeltaNet, other linear-attention hybrids) easier than ever, letting them realize their inference-throughput wins.
3
14
97
Hybrid Models as First-Class Citizens in vLLM
1
6
144
Collaborator and friend Dan Alistarh talks at ETH about using the new NVFP4 and MXFP4 block formats for inference. Some setups go from "terrible" accuracy to acceptable by using micro-rotations to smooth outliers within blocks. https://t.co/4samDQeuGj Great collaboration and cool stuff
1
1
24
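A hedged sketch of the "micro rotation" idea: rotate values with a random orthogonal transform before block quantization so that outliers are spread across the block and the shared per-block scale wastes fewer levels. This is a generic rotation-before-block-quantization illustration (with a toy uniform quantizer), not the actual NVFP4/MXFP4 recipe from the talk.

import torch

def random_rotation(dim: int) -> torch.Tensor:
    # Random orthogonal matrix via QR; real implementations would more likely use
    # a fast Hadamard transform (assumption made for simplicity here).
    q, _ = torch.linalg.qr(torch.randn(dim, dim))
    return q

def toy_block_quant(x: torch.Tensor, block: int = 16, levels: int = 8) -> torch.Tensor:
    # Stand-in for a 4-bit block format: one scale per block, values snapped to a
    # uniform grid with `levels - 1` steps per sign (this is NOT real FP4/MXFP4/NVFP4).
    xb = x.reshape(-1, block)
    scale = xb.abs().amax(dim=1, keepdim=True) / (levels - 1)
    return torch.round(xb / scale) * scale

x = torch.randn(1024, 64)
x[:, 0] *= 50                                   # inject an outlier channel
R = random_rotation(x.shape[-1])

err_plain = (toy_block_quant(x).reshape_as(x) - x).pow(2).mean()
x_rot = x @ R                                   # rotate, quantize, rotate back
err_rot = (toy_block_quant(x_rot).reshape_as(x) @ R.T - x).pow(2).mean()
print(err_plain.item(), err_rot.item())         # the rotated variant usually has lower error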