
Wenting Zhao (@wzhao_nlp)
3K Followers · 336 Following · 35 Media · 416 Statuses
RT @ericzelikman: i've been thinking lately about how future ai systems will interact with us and how we can make systems that care about p….
0 replies · 13 reposts · 0 likes
This is so cool! We should also have a human player in there.
What a show! The Kaggle Game Arena AI Chess Exhibition Tournament is complete, and the winner is o3 🏆! A huge thank you to everyone who tuned in and to our amazing partners @MagnusCarlsen, @GMHikaru, @GothamChess and @DavidHowellGM for the fantastic commentary and analysis on…
3 replies · 0 reposts · 4 likes
RT @justintchiu: haven't made a new blog post in over a year, so here's a new one: it's short.
justintchiu.com
RL is better than SFT
0 replies · 22 reposts · 0 likes
RT @yorambac: AI Research Agents are becoming proficient at machine learning tasks, but how can we help them search the space of candidate….
0 replies · 68 reposts · 0 likes
RT @michahu8: 📢 today's scaling laws often don't work for predicting downstream task performance. For some pretraining setups, smooth and p….
0 replies · 37 reposts · 0 likes
RT @ori_press: Do language models have algorithmic creativity?. To find out, we built AlgoTune, a benchmark challenging agents to optimize….
0 replies · 59 reposts · 0 likes
RT @_jasonwei: We don’t have AI that self-improves yet, and when we do it will be a game-changer. With more wisdom now compared to the GPT-4 day….
0 replies · 168 reposts · 0 likes
Congrats to the team! They built my dream benchmark.
Recently, there has been a lot of talk of LLM agents automating ML research itself. If Llama 5 can create Llama 6, then surely the singularity is just around the corner. How can we get a pulse check on whether current LLMs are capable of driving this kind of total…
0 replies · 0 reposts · 11 likes
RT @NovaSkyAI: ✨Release: We upgraded SkyRL into a highly-modular, performant RL framework for training LLMs. We prioritized modularity—easi….
0 replies · 44 reposts · 0 likes
Dang, truly impressed by how an academic lab just figured out a lot of mysteries in mid-training to close the RL gap between Llama and Qwen:
* the length scheduler plays a key role in stabilizing RL (see the sketch below)
* there is some dark magic in the prompt template?
* the data interaction stuff is really…
What Makes a Base Language Model Suitable for RL? Rumors in the community say RL (i.e., RLVR) on LLMs is full of “mysteries”:
(1) Is the magic only happening on Qwen + Math?
(2) Does the "aha moment" only spark during math reasoning?
(3) Is evaluation hiding some tricky traps?
3 replies · 16 reposts · 196 likes
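For readers unfamiliar with the length-scheduler idea mentioned above, here is a minimal sketch, assuming a simple linear ramp of the maximum response length over training steps. The schedule shape, names, and numbers are hypothetical illustrations, not the cited paper's actual recipe:

```python
# Minimal sketch of a response-length scheduler for RLVR training.
# All names and numbers are hypothetical; the cited work's actual
# schedule may differ.

def max_response_length(step: int,
                        warmup_steps: int = 500,
                        start_len: int = 1024,
                        final_len: int = 8192) -> int:
    """Linearly ramp the generation length cap from start_len to final_len.

    Capping response length early keeps rollouts short and rewards dense,
    which is one plausible way a length schedule stabilizes RL training.
    """
    if step >= warmup_steps:
        return final_len
    frac = step / warmup_steps
    return int(start_len + frac * (final_len - start_len))

# Usage inside a (hypothetical) RL training loop:
# rollout = policy.generate(prompts, max_new_tokens=max_response_length(step))
for step in (0, 250, 500, 1000):
    print(step, max_response_length(step))  # 1024, 4608, 8192, 8192
```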
It's time to think about code generation beyond functional correctness. Refactoring multiple libraries requires designing APIs that support past and future use cases, which is challenging even for human engineers. Can't wait for LLMs to unify PyTorch, TensorFlow, and JAX 😬.
Are code agents good at software design, i.e., building general and reusable code? We present Librarian, a new refactoring method, and MiniCode, a verifiable refactoring benchmark that requires agents to design libraries that jointly minimize code from multiple repos 🧵
1 reply · 4 reposts · 48 likes
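As a rough illustration of what "jointly minimizing code from multiple repos" could mean as a verifiable objective: a minimal sketch that scores a refactor by the total code remaining across the repos plus a shared library, gated on tests passing. The scoring rule and all names here are assumptions for illustration, not MiniCode's actual metric:

```python
# Hypothetical sketch of a joint code-minimization score in the spirit of
# a verifiable refactoring benchmark; not MiniCode's actual metric.
from pathlib import Path

def loc(root: str) -> int:
    """Count non-blank lines of Python source under a directory."""
    return sum(
        1
        for f in Path(root).rglob("*.py")
        for line in f.read_text(encoding="utf-8", errors="ignore").splitlines()
        if line.strip()
    )

def joint_minimization_score(repos: list[str], shared_lib: str,
                             tests_pass: bool) -> float:
    """Smaller is better: total code that remains after refactoring.

    A refactor only counts if behavior is preserved (tests pass); the
    shared library's size is charged once, so extracting common code
    into it pays off only when it shrinks the overall total.
    """
    if not tests_pass:
        return float("inf")  # invalid refactor
    return sum(loc(r) for r in repos) + loc(shared_lib)
```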
The more I dive into LM training, the more I feel pretraining is just starting. Some questions I’m particularly interested in:
* what data unlocks what capabilities?
* do we train on capabilities sequentially or in parallel?
* how many synthetic examples is a human example worth?
Mildly obsessed with what the "highest grade" pretraining data stream looks like for LLM training, if 100% of the focus was on quality, putting aside any quantity considerations. Guessing something like textbook content, in markdown? Or possibly samples from a really giant model?
8 replies · 27 reposts · 334 likes
That’s the vision of commit0: there has been nearly zero improvement on this benchmark in the past few months. I don’t think this problem is solvable in 24 months….
github.com
Commit0: Library Generation from Scratch (commit-0/commit0)
cursor is a $100M business that will be worth $0 in 24 months. not because they built wrong - they built perfectly. but they built a sail for a race that's about to end. when AI just writes entire codebases, even the best IDE becomes irrelevant.
1 reply · 1 repost · 19 likes
RT @AlexGDimakis: There are still posts about 'new papers showing AI models cannot reason'. There are unfortunately problems in how these….
0 replies · 19 reposts · 0 likes
RT @gneubig: Where does one language model outperform the other? We examine this from first principles, performing unsupervised discovery….
0 replies · 11 reposts · 0 likes