Nan Jiang

@nanjiangwill

Followers: 72 · Following: 111 · Media: 5 · Statuses: 31

research at UChicagoCS / Amazon AGI SF Lab

San Francisco, CA
Joined January 2018
@nanjiangwill
Nan Jiang
3 months
RT @SonglinYang4: 📢 (1/16) Introducing PaTH 🛣️ — a RoPE-free contextualized position encoding scheme, built for stronger state tracking, be…
arxiv.org
The attention mechanism is a core primitive in modern large language models (LLMs) and AI more broadly. Since attention by itself is permutation-invariant, position encoding is essential for...
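As a concrete illustration of the permutation-invariance point in that abstract, here is a minimal NumPy sketch (my own, not from the paper): with no position encoding, permuting the input tokens merely permutes the attention outputs.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(X, Wq, Wk, Wv):
    # Single-head self-attention with no position encoding.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[-1]))
    return A @ V

rng = np.random.default_rng(0)
n, d = 5, 8
X = rng.normal(size=(n, d))
Wq, Wk, Wv = [rng.normal(size=(d, d)) for _ in range(3)]

perm = rng.permutation(n)
# Permuting the input tokens just permutes the outputs: without position
# encoding, no token "knows" where it sits in the sequence.
assert np.allclose(attention(X, Wq, Wk, Wv)[perm],
                   attention(X[perm], Wq, Wk, Wv))
```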
@nanjiangwill
Nan Jiang
3 months
RT @lmsysorg: Awesome collaboration between our SGLang team @lmsysorg, @verl_project, LinkedIn, and UCLA AGI Lab. Thanks so much for the…
@nanjiangwill
Nan Jiang
3 months
RT @wzhao_nlp: Some personal news: I'll join @UMassAmherst CS as an assistant professor in fall 2026. Until then, I'll postdoc at @Meta nyc…
@nanjiangwill
Nan Jiang
4 months
amazing Jason, amazing Nexad, please check this out!
@onjas_6
Jason Hu
4 months
Let’s be real—ads have annoyed me for years. Pop-ups, spam, etc… while the world is moving towards AGI, the ad world felt stuck in the past. So I decided to flip the script. Today, I’m proud to share: Nexad has raised a $6M seed round, led by @a16z SR04, @Prosus_Ventures,
@nanjiangwill
Nan Jiang
5 months
RT @wzhao_nlp: Coding agents can debug their own outputs, but what if none of the fixes are correct? We overcome sparse rewards by making t…
@nanjiangwill
Nan Jiang
11 months
RT @srush_nlp: I teach a class where students code up an ML library from scratch in Python. Wenting showed me that a Claude Agent (with int…
@nanjiangwill
Nan Jiang
11 months
So, can agents now build a package from scratch? Test them on Commit0! This was an amazing and fun project this summer! Huge thanks to Wenting and to everyone in the lab for their support and guidance! 🚀👏
@wzhao_nlp
Wenting Zhao
11 months
Introducing the commit0 interactive environment for coding agents. Challenge: generate Python libraries from scratch. Commit0 is designed with interactivity, dependencies, and specifications as first-class considerations. We include a benchmark with 50+ challenging libraries.
@nanjiangwill
Nan Jiang
1 year
RT @xiuyu_l: Handling long context in LLMs is expensive, but can we cut the cost by learning them offline for a specific set/genre of docum…
@nanjiangwill
Nan Jiang
1 year
RT @onjas_buidl: 🚀 Introducing RouterBench, the first comprehensive benchmark for evaluating LLM routers! 🎉 A collaboration between @withma…
@nanjiangwill
Nan Jiang
2 years
RT @wkvong: 1/ Today in Science, we train a neural net from scratch through the eyes and ears of one child. The model learns to map words t…
@nanjiangwill
Nan Jiang
2 years
We're excited to contribute to the exploration of alternative architectures and emergent capabilities!! 🎉🎉🎉 Huge congrats and many thanks to Ivan Lee and Prof. Taylor Berg-Kirkpatrick @BergKirkpatrick. 🧵[9/9]
@nanjiangwill
Nan Jiang
2 years
Section 3.1: A Simple Few-Shot Natural Language Task. 1) Stronger models tend to perform worse when they cannot rely on label semantics. 2) Most architectures fail in the flipped-label setting, with Hyena the strongest among models that are not pre-trained. 🧵[8/9]
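For readers unfamiliar with the flipped setting, here is a hypothetical sketch of how such prompts are built; the prompt format and label names are illustrative assumptions, not the paper's exact prompts.

```python
# In the "flipped" few-shot setting, in-context labels are inverted, so a
# model that actually reads the demonstrations (rather than leaning on
# label semantics from pretraining) should follow the flipped mapping.
FLIP = {"positive": "negative", "negative": "positive"}

def flipped_prompt(examples, query):
    # Build a few-shot prompt whose demonstration labels are all flipped.
    lines = [f"Input: {text}\nLabel: {FLIP[label]}" for text, label in examples]
    lines.append(f"Input: {query}\nLabel:")
    return "\n\n".join(lines)

print(flipped_prompt([("great movie", "positive"), ("boring plot", "negative")],
                     "a wonderful film"))
```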
@nanjiangwill
Nan Jiang
2 years
Section 3: ICL in the Real World: Language Modeling and Commonsense Reasoning. 1) Most architectures exhibit an abrupt improvement in ICL score. 2) Models in the same family behave similarly. 3) We use these models, trained from scratch, to perform a generation task; see Section 3.1. 🧵[7/9]
@nanjiangwill
Nan Jiang
2 years
Section 2: Effects of Training Data Distribution on Omniglot. 1) ICL does not emerge when models are trained on purely non-bursty examples. 2) Some architectures (Llama 2, GPT-2, Hyena, H3, RWKV) are predisposed towards ICL, while others are predisposed towards memorization. 🧵[6/9]
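A rough sketch of what bursty vs. non-bursty training sequences look like, in the spirit of Chan et al. (2022); the class counts and sequence layout here are simplifying assumptions, not the paper's exact recipe.

```python
import numpy as np

def make_context(bursty, n_classes=1600, seq_len=8, rng=None):
    # Bursty: the query's class recurs several times in context, so
    # in-context lookup pays off. Non-bursty: every context class is
    # unique, so only in-weights memorization helps. (Simplified sketch.)
    rng = rng or np.random.default_rng()
    query = int(rng.integers(n_classes))
    if bursty:
        others = rng.choice(n_classes, size=seq_len // 2, replace=False)
        context = np.concatenate([[query] * (seq_len // 2), others])
    else:
        context = rng.choice(n_classes, size=seq_len, replace=False)
    rng.shuffle(context)
    return context, query
```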
@nanjiangwill
Nan Jiang
2 years
Section 1: ICL on Associative Recall, Linear Regression, and Multiclass Classification. 1) All architectures can perform ICL. 2) Many architectures are poor at extrapolation, while some attention alternatives are good at it. 3) Many models are comparable to the transformer. 🧵[5/9]
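For concreteness, a minimal sketch of one such synthetic episode, in the style of Garg et al. (2022)'s in-context linear regression; the sizes and sampling here are assumptions, not the paper's exact setup.

```python
import numpy as np

def linear_regression_episode(n_demos=16, dim=4, rng=None):
    # A hidden weight vector w defines the task; the prompt is a sequence
    # of (x, y) demonstrations plus a query x, and the model must infer
    # y = w . x for the query purely from context.
    rng = rng or np.random.default_rng()
    w = rng.normal(size=dim)
    xs = rng.normal(size=(n_demos + 1, dim))
    ys = xs @ w
    demos = list(zip(xs[:-1], ys[:-1]))   # in-context demonstrations
    return demos, xs[-1], ys[-1]          # query input, target output
```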
@nanjiangwill
Nan Jiang
2 years
We discover that: 1) all architectures can perform ICL under certain conditions; 2) emerging attention alternatives with sub-quadratic time and memory complexity are more robust in-context learners than the transformer. 🧵[4/9]
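To see why sub-quadratic alternatives are possible at all, here is a minimal kernelized linear-attention sketch (Katharopoulos et al., 2020) as a stand-in; the architectures in the paper (Hyena, H3, RWKV, etc.) differ in their mechanisms but share the goal of avoiding the full n×n attention matrix.

```python
import numpy as np

def linear_attention(Q, K, V):
    # With a positive feature map phi, softmax(Q K^T) V is replaced by
    # phi(Q) (phi(K)^T V), which costs O(n * d^2) time with an O(d^2)
    # running summary instead of the O(n^2 * d) attention matrix.
    phi = lambda x: np.maximum(x, 0.0) + 1e-6   # simple positive feature map
    Qf, Kf = phi(Q), phi(K)
    KV = Kf.T @ V                               # (d, d): size independent of n
    Z = Qf @ Kf.sum(axis=0)[:, None]            # per-token normalizer, (n, 1)
    return (Qf @ KV) / Z
```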
@nanjiangwill
Nan Jiang
2 years
We evaluate 12 architectures from 4 families across a suite of synthetic ICL tasks. 🧵[3/9]
@nanjiangwill
Nan Jiang
2 years
Background: While transformers demonstrate impressive in-context learning (ICL) capabilities, they face challenges such as quadratic time complexity and increased memory demands. So, could other architectures effectively perform ICL? 🧵[2/9]
@nanjiangwill
Nan Jiang
2 years
❓ Are attention-based models needed for in-context learning (ICL)? 🤔 Can emerging architectures perform ICL? 🎉 Check out our #ICLR2024 paper "Exploring the Relationship Between Model Architecture and In-Context Learning Ability" 🎉 #LLM. Paper: 🧵[1/9]
arxiv.org
What is the relationship between model architecture and the ability to perform in-context learning? In this empirical study, we take the first steps toward answering this question. We evaluate...
@nanjiangwill
Nan Jiang
2 years
RT @marcusjmin: 🚨 #GPT4 doesn't understand the code/specification written by itself!? 🚨 🥳 Check out our #ICLR2024 paper "Beyond Accuracy:…