Linus Pin-Jie Lin Profile
Linus Pin-Jie Lin

@linusdd44804

Followers: 73
Following: 709
Media: 18
Statuses: 123

PhD @VT_CS, Master's @LstSaar. Interested in efficient model development & modular LMs

Saarbrücken
Joined April 2019
@linusdd44804
Linus Pin-Jie Lin
2 years
✨Personal update - I am glad to share that, after graduating from @LstSaar this spring, I will be pursuing a PhD at @VT_CS this fall, advised by @tuvllms. Very excited about what’s to come. 😎
2
1
12
@linusdd44804
Linus Pin-Jie Lin
9 days
Drop by today if you’re around!
@tuvllms
Tu Vu
9 days
I am not at EMNLP this year, but my student @linusdd44804 will be presenting our paper on efficient model development through fine-tuning transfer. The presentation is tomorrow 2-3:30 pm, A109 (session 15). Please come talk to him!
0
0
0
@linusdd44804
Linus Pin-Jie Lin
9 days
I’ll be presenting our fine-tuning transfer paper tomorrow! TLDR: Alignment tuning effects can be captured as transferable model diff vectors — no need to fine-tune from scratch for every new base model version.
Come find me:
🕑 14:00–15:30
📍 A109 (Session 15)
#EMNLP2025
@tuvllms
Tu Vu
3 months
Excited to share that our paper on efficient model development has been accepted to #EMNLP2025 Main conference @emnlpmeeting. Congratulations to my students @linusdd44804 and @Sub_RBala on their first PhD paper! 🎉
0
0
5
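The diff-vector idea summarized in the tweet above can be illustrated in a few lines of PyTorch. This is only a minimal sketch, assuming all checkpoints share the same architecture and parameter names; the state dicts and helper names (diff_vector, apply_diff) are hypothetical placeholders, not the paper's actual code or checkpoints.

```python
# Sketch: capture alignment tuning as a parameter-wise delta ("diff vector")
# and reuse it on a newer base model version instead of re-tuning from scratch.
import torch

def diff_vector(tuned: dict, base: dict) -> dict:
    """Parameter-wise delta capturing what tuning changed."""
    return {name: tuned[name] - base[name] for name in base}

def apply_diff(base: dict, delta: dict, scale: float = 1.0) -> dict:
    """Transfer the delta onto another base model version (optionally scaled)."""
    return {name: base[name] + scale * delta[name] for name in base}

# Toy state dicts standing in for real checkpoints of the same architecture.
base_v1  = {"w": torch.zeros(4), "b": torch.zeros(2)}
tuned_v1 = {"w": torch.ones(4),  "b": torch.full((2,), 0.5)}
base_v2  = {"w": torch.full((4,), 0.1), "b": torch.zeros(2)}

delta = diff_vector(tuned_v1, base_v1)        # what tuning changed on v1
tuned_v2_approx = apply_diff(base_v2, delta)  # reuse it on the new base v2
print(tuned_v2_approx["w"])                   # tensor([1.1000, 1.1000, 1.1000, 1.1000])
```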
@thinkymachines
Thinking Machines
2 months
LoRA makes fine-tuning more accessible, but it's unclear how it compares to full fine-tuning. We find that the performance often matches closely---more often than you might expect. In our latest Connectionism post, we share our experimental results and recommendations for LoRA.
82
564
3K
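For context, a minimal LoRA layer looks roughly like the sketch below: the pretrained weight stays frozen and only two small low-rank factors are trained, which is what makes it cheaper than full fine-tuning. This is a generic illustration, not the setup used in the Connectionism post; the class name and hyperparameters are made up for the example.

```python
# Sketch of a LoRA-adapted linear layer: W is frozen, only A and B train.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features: int, out_features: int, rank: int = 4, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)   # frozen pretrained weight
        self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, rank))  # zero init: no change at start
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled low-rank update.
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)

layer = LoRALinear(64, 64, rank=4)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # only the A/B factors: 4*64 + 64*4 = 512 parameters
```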
@thinhphp_vt
Thinh
3 months
DeepSeek achieved a strong result on SEAL-0, a challenging benchmark for reasoning with conflicting search results. 🎊
@deepseek_ai
DeepSeek
3 months
Tools & Agents Upgrades 🧰
📈 Better results on SWE / Terminal-Bench
🔍 Stronger multi-step reasoning for complex search tasks
⚡️ Big gains in thinking efficiency
3/5
0
1
5
@linusdd44804
Linus Pin-Jie Lin
3 months
🎉🎉
@tuvllms
Tu Vu
3 months
Excited to share that our paper on efficient model development has been accepted to #EMNLP2025 Main conference @emnlpmeeting. Congratulations to my students @linusdd44804 and @Sub_RBala on their first PhD paper! 🎉
0
0
1
@thinhphp_vt
Thinh
4 months
We just evaluated Grok 4 on our SEAL-0 dataset 👍Try it: https://t.co/g5JIhB1EoI
0
2
14
@TsendeeMTS
Tsendsuren
5 months
This work got accepted at Transactions on Machine Learning Research (TMLR). Congratulations to @prateeky2806 and my co-authors. Also, thank you to the reviewers and editors for their time.
@prateeky2806
Prateek Yadav
1 year
Ever wondered if model merging works at scale? Maybe the benefits wear off for bigger models? Maybe you considered using model merging for post-training of your large model but not sure if it generalizes well? cc: @GoogleAI @GoogleDeepMind @uncnlp 🧵👇 Excited to announce my
0
4
13
@prateeky2806
Prateek Yadav
1 year
Ever wondered if model merging works at scale? Maybe the benefits wear off for bigger models? Maybe you considered using model merging for post-training of your large model but not sure if it generalizes well? cc: @GoogleAI @GoogleDeepMind @uncnlp 🧵👇 Excited to announce my
6
87
393
@tuvllms
Tu Vu
5 months
Excited to share that our paper on model merging at scale has been accepted to Transactions on Machine Learning Research (TMLR). Huge congrats to my intern @prateeky2806 and our awesome co-authors @_JLai, @alexandraxron, @manaalfar, @mohitban47, and @TsendeeMTS 🎉!!
@prateeky2806
Prateek Yadav
1 year
Ever wondered if model merging works at scale? Maybe the benefits wear off for bigger models? Maybe you considered using model merging for post-training of your large model but not sure if it generalizes well? cc: @GoogleAI @GoogleDeepMind @uncnlp 🧵👇 Excited to announce my
2
21
90
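The simplest form of the model merging discussed in this thread is a weighted average of parameters across checkpoints that share an architecture. The sketch below only illustrates that idea ("model soup"-style averaging); it is not the specific merging method evaluated in the paper, and the toy state dicts are hypothetical.

```python
# Sketch: merge several fine-tuned checkpoints by weighted parameter averaging.
import torch

def merge(models, weights=None):
    """Weighted average of state dicts with identical parameter names/shapes."""
    if weights is None:
        weights = [1.0 / len(models)] * len(models)
    return {
        name: sum(w * m[name] for w, m in zip(weights, models))
        for name in models[0]
    }

# Toy checkpoints from two specialized fine-tunes of the same base model.
math_expert = {"w": torch.ones(3)}
code_expert = {"w": torch.full((3,), 3.0)}
print(merge([math_expert, code_expert])["w"])  # tensor([2., 2., 2.])
```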
@rohanpaul_ai
Rohan Paul
5 months
More thinking power at test-time doesn't fix noisy-search problems—SealQA proves it. AI's reasoning capabilities fall flat when web search turns messy, and SealQA quantifies that. SealQA introduces an exceptionally challenging benchmark for search-augmented language models,
3
3
9
@tuvllms
Tu Vu
6 months
✨ New paper ✨
🚨 Scaling test-time compute can lead to inverse or flattened scaling!!
We introduce SealQA, a new challenge benchmark w/ questions that trigger conflicting, ambiguous, or unhelpful web search results.
Key takeaways:
➡️ Frontier LLMs struggle on Seal-0 (SealQA’s
4
41
146
@sivareddyg
Siva Reddy
8 months
Introducing the DeepSeek-R1 Thoughtology -- the most comprehensive study of R1 reasoning chains/thoughts ✨. Probably everything you need to know about R1 thoughts. If we missed something, please let us know.
@saraveramarjano
Sara Vera Marjanović
8 months
Models like DeepSeek-R1 🐋 mark a fundamental shift in how LLMs approach complex problems. In our preprint on R1 Thoughtology, we study R1’s reasoning chains across a variety of tasks; investigating its capabilities, limitations, and behaviour. 🔗: https://t.co/Cyy18kYQ45
0
24
83
@EranMalach
Eran Malach
7 months
How does RL improve performance on math reasoning? Studying RL from pretrained models is hard, as behavior depends on choice of base model. 🚨 In our new work, we train models *from scratch* to study the effect of the data mix on the behavior of RL. https://t.co/XtToYfkFiP
3
37
144
@tuvllms
Tu Vu
8 months
📢 Research internship @Google📢 I am looking for a PhD student researcher to work with me and my colleagues on advanced reasoning and/or RAG factuality this summer @Google Mountain View, CA. We will focus on open-source models and benchmarks, and aim to publish our findings.
3
38
343
@maksym_andr
Maksym Andriushchenko
8 months
prompt engineering -> thought engineering :-) https://t.co/2asYXIQqzd
3
13
76
@tuvllms
Tu Vu
8 months
🚨 New paper 🚨 Excited to share my first paper w/ my PhD students!! We find that advanced LLM capabilities conferred by instruction or alignment tuning (e.g., SFT, RLHF, DPO, GRPO) can be encoded into model diff vectors (à la task vectors) and transferred across model
14
94
445
@TsendeeMTS
Tsendsuren
8 months
Almost 7 years ago, Tu Vu and I wrote our first paper together, one of a few. It is fantastic to see the first paper by Tu’s students this time. Congratulations, and looking forward to many more great works from Tu’s group!
@tuvllms
Tu Vu
8 months
🚨 New paper 🚨 Excited to share my first paper w/ my PhD students!! We find that advanced LLM capabilities conferred by instruction or alignment tuning (e.g., SFT, RLHF, DPO, GRPO) can be encoded into model diff vectors (à la task vectors) and transferred across model
1
2
15
@linusdd44804
Linus Pin-Jie Lin
8 months
My first PhD paper is out 😆 took 7 months and lots of back-and-forth. Learned so much from Tu — sharp thinking, real feedback, and always pushing the idea further. Also, shoutout to my collaborators and the folks at @VT_CS!
@tuvllms
Tu Vu
8 months
Our paper is now available on arXiv:
0
0
4
@tuvllms
Tu Vu
8 months
Our paper is now available on arXiv:
arxiv.org
Modern LLMs struggle with efficient updates, as each new pretrained model version requires repeating expensive alignment processes. This challenge also applies to domain- or language-specific...
@tuvllms
Tu Vu
8 months
🚨 New paper 🚨 Excited to share my first paper w/ my PhD students!! We find that advanced LLM capabilities conferred by instruction or alignment tuning (e.g., SFT, RLHF, DPO, GRPO) can be encoded into model diff vectors (à la task vectors) and transferred across model
0
3
12