
Junlin Wang
@JunlinWang3
Followers: 229 · Following: 304 · Media: 13 · Statuses: 41
PhD Student @duke_nlp. Interning at @togethercompute. Inference-time scaling, multi-agent systems
Joined December 2013
RT @togethercompute: Most AI benchmarks test the past. But real intelligence is about predicting the future. Introducing FutureBench — a…
Work done during my internship at Together AI is being presented at #icml25. Come and check it out! We propose a new model alignment pipeline that harnesses collective intelligence from open-source LLMs!
RT @NehzUx: LLMs are getting more powerful, but they still struggle with super long documents. A common trick is "Divide and Conquer" - cho…
Curious why backtracking works so well? Dive into our latest study that unpacks the mechanisms behind its success.
🚀 Excited to share our new paper: How Much Backtracking is Enough? 🤔 How many backtracks should your LLM learn to reason better? Turns out: the harder the task, the more backtracking you need!
RT @james_y_zou: Our new #icml2025 paper w/@togethercompute shows how to use synthetic data from Mixture-of-Agents to boost LM fine-tuning…
RT @togethercompute: 🚀 Introducing Mixture-of-Agents Alignment (MoAA), a new method to "distill" the collective intelligence of open-source…
Non-reasoning models augmented by inference-time methods cannot match reasoning models. Find out more in our preprint 👇 📄 Awesome collaboration with @ShangZhu18 @JonSaadFalcon @ben_athi @QingyangWu1 @JueWANG26088228 @Leon75421958 @ce_zhang @bhuwandhingra
Excited to share work from my @togethercompute internship—a deep dive into inference-time scaling methods 🧠. We rigorously evaluated verifier-free inference-time scaling methods across both reasoning and non-reasoning LLMs. Some key findings: 🔑 Even with huge rollout budgets,
RT @iScienceLuvr: Think Deep, Think Fast: Investigating Efficiency of Verifier-free Inference-time-scaling Methods. "This work conducts a c…
RT @LindaHe49140661: Excited to share our work on scaling LLMs to handle million-token contexts! Training models for ultra-long sequences i…
RT @AnnieFeng6: #ICLR2025 Oral. LLMs often struggle with reliable and consistent decisions under uncertainty 😵💫 — largely because they can…
RT @togethercompute: Introducing Open Deep Research! A fully open-source Deep Research tool that: • writes comprehensive reports • does mu…
RT @james_y_zou: Happy to share that Mixture of Agents (MoA) is selected as an #ICLR2025 spotlight! ⚡️ @togethercompute. Our updated paper ha…
github.com: Together Mixture-Of-Agents (MoA) – 65.1% on AlpacaEval with OSS models (togethercomputer/MoA)
RT @bhuwandhingra: **New paper from @RoyXie_** Do LLMs know when they have read enough to answer a question? We show how language models…
RT @ClementDelangue: Our science team has started working on fully reproducing and open-sourcing R1, including training data, training scrip…
To be presented @ICLR2025!!! Super excited for my first ICLR conference 😆.
TogetherAI presents "Mixture-of-Agents Enhances Large Language Model Capabilities". Achieves SotA performance on AlpacaEval 2.0, MT-Bench and FLASK, surpassing GPT-4o.