Lossfunk (@lossfunk)
12K Followers · 432 Following · 92 Media · 367 Statuses
For independent minds exploring foundational questions
🇮🇳 · Joined January 2025
@inceptmyth @paraschopra @Muennighoff @_jasonwei 7/ Our full paper has tons of additional results and ablations. Go check it out! Blog post: https://t.co/K0Wo61DgcK Full paper: https://t.co/Z3yCVITR8r Feedback welcome!
6/ This work was led by Aman Sharma (@inceptmyth) under the supervision of Paras Chopra (@paraschopra)! Pinging those who’ll find our work interesting:
• @Muennighoff (s1: Simple test-time scaling, 2025)
• @_jasonwei (Self-Consistency Improves Chain of Thought Reasoning in Language Models)
5/ Our intuition for why sequential scaling works so well. It's mostly mechanisms parallel scaling can’t touch:
• Iterative Error Correction: Models flag and patch mistakes in real time.
• Progressive Context Buildup: Insights compound, turning shallow takes into profound…
4/ You know the fascinating part? Sequential scaling also yields higher diversity for creativity (so it’s not just a reasoning boost). In an ablation on creative tasks like joke generation, sequential methods demonstrated improved quality and diversity through iterative…
3/ We also found sequential's advantages hold across model families (GPT-OSS, Qwen3, Kimi-K2) and scales (20B to trillion params). Our main result is below, showing accuracies across all tested models, benchmarks, and chain counts. Notice:
• Sequential consistently outperforms…
2/ Entropy-based voting has been used as a way to assess confidence. What’s novel about our work:
• We leverage Shannon entropy from token-level logprobs for principled uncertainty quantification in sequential chains.
• Our inverse-entropy weighted (IEW) voting performs as…
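A minimal sketch of what inverse-entropy weighted voting could look like, based only on the description in this tweet: weight each chain's final answer by the inverse of its mean token-level Shannon entropy, so confident (low-entropy) chains get larger votes. The helper names (`mean_token_entropy`, `inverse_entropy_vote`), the per-token top-k logprob format, and the `eps` smoothing term are illustrative assumptions, not the paper's exact formulation.

```python
import math
from collections import defaultdict

def mean_token_entropy(token_logprob_dists):
    # Average Shannon entropy over a chain's generated tokens.
    # token_logprob_dists: list of {token: logprob} dicts, e.g. the
    # top-k logprobs an API returns for each generated token.
    entropies = [
        -sum(math.exp(lp) * lp for lp in dist.values())
        for dist in token_logprob_dists
    ]
    return sum(entropies) / max(len(entropies), 1)

def inverse_entropy_vote(chains, eps=1e-6):
    # chains: list of (final_answer, token_logprob_dists) pairs.
    # A chain whose token distributions were sharply peaked gets
    # proportionally more say than one that was guessing.
    scores = defaultdict(float)
    for answer, dists in chains:
        scores[answer] += 1.0 / (mean_token_entropy(dists) + eps)
    return max(scores, key=scores.get)
```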
1/ Our method is really simple and can be seen in the image below. TLDR:
• Sequential scaling where chains explicitly build upon previous attempts consistently outperforms the dominant parallel self-consistency paradigm in 95.6% of configurations, with gains in accuracy up to…
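For readers who want the shape of the method in code rather than the image: a minimal sketch of sequential chaining under my reading of the tweet above, where each chain sees the previous attempt and is asked to critique and refine it. The `ask_llm` helper and the refinement prompt wording are hypothetical stand-ins, not the paper's actual prompts.

```python
def sequential_chains(ask_llm, question, n_chains=4):
    # ask_llm(prompt) -> answer string is an assumed model-call helper.
    attempts = []
    prompt = question
    for _ in range(n_chains):
        answer = ask_llm(prompt)
        attempts.append(answer)
        # Each subsequent chain explicitly builds on the previous
        # attempt instead of starting from scratch.
        prompt = (
            f"{question}\n\nPrevious attempt:\n{answer}\n\n"
            "Point out any mistakes in the attempt above, "
            "then give an improved answer."
        )
    return attempts  # vote over these, e.g. with inverse-entropy weights
```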
📢 Releasing our latest paper, selected for the @NeurIPSConf workshop on efficient reasoning! We show that the optimal test-time scaling method is iterative refinement through sequential steps. 👉 Our method beats majority voting by parallel chains in 95% of configurations, with…
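For contrast, here is the parallel self-consistency baseline the thread compares against: sample several independent chains and majority-vote their final answers. This is a generic sketch of that standard technique (again using the hypothetical `ask_llm` helper), not code from the paper.

```python
from collections import Counter

def parallel_majority_vote(ask_llm, question, n_chains=4):
    # Independent samples: no chain sees any other chain's output.
    answers = [ask_llm(question) for _ in range(n_chains)]
    # Return the most common final answer.
    return Counter(answers).most_common(1)[0][0]
```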
Look ma, what we're cooking! 🧑‍🍳
It has been roughly 6 months of pursuing research at @lossfunk! Check out the kinds of questions we're exploring 👇
Apply here: https://t.co/4UU852gsn8 Program details: https://t.co/IsX830gu8d If you've already applied, know that we're yet to review applications. We'll reach out in November if you're a good fit for the program.
It's a wrap for Batch 6! Next batch starts Jan 2026 - we're taking a winter break. Applications open 👇
LossFunk Demo Day was 🔥! @paraschopra + LossFunk innovators crushed 6-week builds. Thread of stars in bullets #LossFunk
just presented my work on ai × olfaction at the @lossfunk demo day as part of their residency program. grateful to be part of such an inspiring community. shoutout to batch 6!
Yes, this is how we party at @lossfunk :)
I was at a party in blr and this guy pulled out a chess board while everyone else was going mad at the dance floor. what do you even call this?
one thing off the bucket list for 2025 - p̶r̶e̶s̶e̶n̶t̶i̶n̶g̶ ̶a̶t̶ ̶a̶ ̶t̶e̶c̶h̶ ̶c̶o̶n̶f̶e̶r̶e̶n̶c̶e̶ @lossfunk great event!
@r_sindhoora girl represent 👩‍🔬 dogs can sense 🫁 cancer, @r_sindoora going into the scientific parts of it! 🐕👃🐽 what was i doing in 12th grade 🥲
A copilot for building biomolecules: @try_litefold, built with live users within five months of starting at @lossfunk. 🤯🧬👩‍🔬 Goosebumps, because @medsee_ai started with a similar strategy in a parallel space!
Presented @dictationdaddy at @lossfunk demo day. Thanks to @paraschopra for building such an amazing community. Thanks to @whyloop_ and @saibhardwaj for making it possible to have such a smooth demo day. Cheers to all the fellow builders of batch 6. It was fun working with…
Just gave a talk on "how you can make llms talk in tulu" and graduated from @lossfunk's Batch 6 😁 Joining lossfunk as a research intern soon :D