AdapterHub Profile
AdapterHub

@AdapterHub

Followers: 1K · Following: 205 · Media: 26 · Statuses: 168

A central repository for pre-trained adapter modules in transformers! Active maintainers: @clifapt @h_sterz @LeonEnglaender @timo_imhof @PfeiffJo

Joined May 2020
@AdapterHub
AdapterHub
2 years
πŸŽ‰ Exciting news! The new Adapters library for modular and parameter-efficient transfer learning is out! πŸ€– Now simplified & disentangled from @huggingface: pip install adapters + pip install transformers πŸ“„ https://t.co/YUxmvjAf72 πŸ‘Ύ https://t.co/GTekd4MEFS #EMNLP2023 πŸ§΅πŸ‘‡
7
102
462
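For readers who want to try the standalone library, here is a minimal sketch of the basic workflow; the checkpoint and adapter names are illustrative and not taken from the tweet.

```python
# Minimal sketch: adding and training a bottleneck adapter with the standalone
# `adapters` package on top of a regular Hugging Face transformers model.
import adapters
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

# Attach adapter support to the plain transformers model.
adapters.init(model)

# Add and activate a new bottleneck adapter; only its weights will be trained.
model.add_adapter("my_task", config=adapters.SeqBnConfig())
model.train_adapter("my_task")
model.set_active_adapters("my_task")
```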
@AdapterHub
AdapterHub
7 months
As always, a huge thanks to our community for the awesome PRs that helped shape this release! πŸŽ‰ Read all about v1.2 on our blog: https://t.co/BwySYdB7Lt πŸ’» Explore the code, try it out & star our repo ⭐: https://t.co/GTekd4MEFS (5/5)
github.com · adapter-hub/adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning
0
0
3
@AdapterHub
AdapterHub
7 months
Also new since v1.0: βœ… Added AdapterPlus βœ… Gradient Checkpointing support for memory efficiency βœ… Push & load complex adapter compositions (Stack, Fuse, etc.) directly via the Hugging Face Hub! These additions make Adapters even more powerful & usable. (4/5)
1
0
1
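A rough sketch of how two of these additions fit together in code; AdapterPlusConfig is an assumed class name based on the release notes, and the model and adapter names are illustrative.

```python
# Sketch: AdapterPlus plus gradient checkpointing for memory-efficient training.
import adapters
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("roberta-base")
adapters.init(model)

# AdapterPlusConfig is assumed to be the config class for the AdapterPlus method.
model.add_adapter("plus_adapter", config=adapters.AdapterPlusConfig())
model.train_adapter("plus_adapter")

# Standard Hugging Face gradient checkpointing to trade compute for memory.
model.gradient_checkpointing_enable()
```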
@AdapterHub
AdapterHub
7 months
We added 2 powerful new Adapter methods: 1️⃣ MTL-LoRA: Extends LoRA to multi-task learning, enabling efficient parameter sharing & combination across tasks (Docs: https://t.co/YF1Dp0Tmg3) 2️⃣ VeRA: LoRA variant with shared weights (Docs: https://t.co/H3a3UOuQOn) (3/5)
1
0
1
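A sketch of selecting the two new methods when adding an adapter; the config class names (VeraConfig, MTLLoRAConfig) are assumptions based on the linked docs rather than text from the tweet.

```python
# Sketch: adding VeRA and MTL-LoRA adapters to a model (class names assumed).
import adapters
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")
adapters.init(model)

# VeRA: LoRA variant that shares frozen weights across layers.
model.add_adapter("vera_adapter", config=adapters.VeraConfig())

# MTL-LoRA: LoRA extended for multi-task learning with shared components.
model.add_adapter("mtl_adapter", config=adapters.MTLLoRAConfig())
```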
@AdapterHub
AdapterHub
7 months
This flexibility comes from our new Plugin Interface. It allows seamless integration into any model architecture. As Adapters stays up-to-date with the latest @huggingface transformers version, you can use adapters with any model they support! (2/5) Docs:
1
0
1
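A rough sketch of the plugin-interface idea: you describe where a custom model's submodules live so the adapter methods can hook into them. Field names, values, and the placeholder model id below are assumptions based on the docs, not an exact copy.

```python
# Sketch: registering adapter support for a custom architecture via the
# plugin interface (field names and the model id are assumptions).
import adapters
from adapters import AdapterModelInterface, LoRAConfig
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("my-org/my-custom-model")  # placeholder id

interface = AdapterModelInterface(
    adapter_methods=["lora", "reft"],
    model_embeddings="embed_tokens",
    model_layers="layers",
    layer_self_attn="self_attn",
    layer_cross_attn=None,
    attn_k_proj="k_proj",
    attn_q_proj="q_proj",
    attn_v_proj="v_proj",
    attn_o_proj="o_proj",
    layer_intermediate_proj="mlp.up_proj",
    layer_output_proj="mlp.down_proj",
)

adapters.init(model, interface=interface)
model.add_adapter("custom_lora", config=LoRAConfig())
```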
@AdapterHub
AdapterHub
7 months
πŸš€Adapters v1.2 is out!πŸš€ We've made Adapters incredibly flexible: Add adapter support to ANY Transformer architecture with minimal code! We used this to add 8 new models out-of-the-box, incl. ModernBERT, Gemma3 & Qwen3! Explore this +2 new adapter methods in this threadπŸ‘‡(1/5)
1
3
23
@PfeiffJo
Jonas Pfeiffer
9 months
I am hiring a Student Researcher for our Modularity team at the Google DeepMind office in ZurichπŸ‡¨πŸ‡­ Please fill out the interest form if you would like to work with us! The role would start mid/end 2025 and would be in-person in Zurich with 80-100% at GDM https://t.co/Vfypj91KHy
docs.google.com · We are excited to offer an opportunity for students to work with our research team at the GDM Zurich office in 2025. Please provide the following information to express your interest.
3
57
296
@UKPLab
UKP Lab
1 year
πŸŽ‰M2QA has been accepted to #EMNLP Findings!πŸŽ‰ M2QA is a new multilingual and multidomain QA dataset. We show that current transfer methods are insufficient and that language & domain transfer aren't independent! πŸ“„ Paper: https://t.co/A23KymqS0b πŸ‘‡πŸ‘‡πŸ‘‡ https://t.co/yHn5KWrCMQ
@LeonEnglaender
Leon EnglΓ€nder
1 year
πŸ“’ New preprint πŸŽ‰ We introduce "M2QA: Multi-domain Multilingual Question Answering", a benchmark for evaluating joint language and domain transfer. We present 5 key findings - one of them: Current transfer methods are insufficient, even for LLMs! πŸ“œ https://t.co/PI2AitnxIp πŸ§΅πŸ‘‡
0
3
15
@jinghan23
Jinghan Zhang
1 year
Thank you @AdapterHub for implementing our #NeurIPS method ( https://t.co/hW3Sn4IAVF) in your latest update! πŸŽ‰ Great to see our work being applied for practical advancements. Check out their work! #MachineLearning #AdapterMerging #ModelMerging
arxiv.org · As an efficient alternative to conventional full finetuning, parameter-efficient finetuning (PEFT) is becoming the prevailing method to adapt pretrained language models. In PEFT, a lightweight...
@AdapterHub
AdapterHub
1 year
πŸŽ‰Adapters 1.0 is here!πŸš€ Our open-source library for modular and parameter-efficient fine-tuning got a major upgrade! v1.0 is packed with new features (ReFT, Adapter Merging, QLoRA, ...), new models & improvements! Blog: https://t.co/Evp8kQG1je Highlights in the thread! πŸ§΅πŸ‘‡
0
2
11
@AdapterHub
AdapterHub
1 year
πŸ‘ Huge thanks to all contributors and our amazing community! Adapters is an open-source project, and we're excited to see what you build with it and how you use it for your research. If you have questions or ideas, join the discussion on GitHub! https://t.co/GTekd4MEFS
github.com · adapter-hub/adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning
0
0
5
@AdapterHub
AdapterHub
1 year
πŸŽ™οΈ New Models Alert! Adapters now supports: - Whisper: Our first audio model! - Mistral - MT5 - PLBart With Whisper, we bring speech recognition capabilities to our library!πŸ”Š Notebook:
github.com · adapter-hub/adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning
1
0
5
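A minimal sketch of attaching an adapter to one of the newly supported models via the AutoAdapterModel class; the checkpoint and adapter names are illustrative.

```python
# Sketch: adapter-based fine-tuning of Whisper for speech recognition.
from adapters import AutoAdapterModel

model = AutoAdapterModel.from_pretrained("openai/whisper-small")
model.add_adapter("asr_adapter")    # uses the default bottleneck adapter config
model.train_adapter("asr_adapter")  # freeze the base model, train only the adapter
```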
@AdapterHub
AdapterHub
1 year
✨ ReFT ReFT is an all-new adapter method that now integrates with all models supported by Adapters: https://t.co/QwaG3F4rEM
@clifapt
Clifton Poth
1 year
Efficient Fine-Tuning with ReFT has been merged into the Adapters library today and is now available for all models supported by our library ⬇️
1
0
4
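A sketch of using the ReFT integration; LoReftConfig is the ReFT variant the library appears to expose, but treat the exact class name and its defaults as assumptions and check the linked docs.

```python
# Sketch: adding a ReFT module (LoReFT variant) to a causal LM.
import adapters
from adapters import LoReftConfig
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")
adapters.init(model)

model.add_adapter("reft_adapter", config=LoReftConfig())
model.train_adapter("reft_adapter")
```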
@AdapterHub
AdapterHub
1 year
The 3 supported LoRA merging methods are: β‘  Method from #NeurIPS paper by @jinghan23 et al. β‘‘ Linear merging (@Google paper by @alexandraxron et al.) β‘’ New SVD-based method Detailed explanations here in the docs: https://t.co/1wIkVO3UZZ Notebook:
github.com · adapter-hub/adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning
1
0
6
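A sketch of merging two trained LoRA adapters into a new one by averaging; the combine_strategy string and the weighting shown are assumptions about the option names, and the adapter names are illustrative. The linked docs list the exact values for the three methods.

```python
# Sketch: creating a merged LoRA adapter as a weighted combination of two others.
import adapters
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")
adapters.init(model)

model.add_adapter("task_a", config=adapters.LoRAConfig())
model.add_adapter("task_b", config=adapters.LoRAConfig())
# ... train task_a and task_b separately ...

model.average_adapter(
    "merged",
    ["task_a", "task_b"],
    weights=[0.6, 0.4],
    combine_strategy="linear",  # assumed option name; an SVD-based strategy also exists
)
model.set_active_adapters("merged")
```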
@AdapterHub
AdapterHub
1 year
πŸ”€ Adapter Merging Adapter Merging allows you to combine trained adapters without additional fine-tuning! It is perfect for domain, language, and task transfer. We now support 3 different ways to merge LoRA adapters.
1
0
4
@AdapterHub
AdapterHub
1 year
πŸŽ‰Adapters 1.0 is here!πŸš€ Our open-source library for modular and parameter-efficient fine-tuning got a major upgrade! v1.0 is packed with new features (ReFT, Adapter Merging, QLoRA, ...), new models & improvements! Blog: https://t.co/Evp8kQG1je Highlights in the thread! πŸ§΅πŸ‘‡
2
7
45
@AdapterHub
AdapterHub
1 year
πŸ“’ New preprint πŸŽ‰ We - the AdapterHub team - present the M2QA benchmark to evaluate joint domain and language transfer! πŸ”¬ Key highlight: We show that adapter-based methods on small language models can reach the performance of Llama 3 on M2QA! πŸš€ πŸ‘‡
@LeonEnglaender
Leon EnglΓ€nder
1 year
πŸ“’ New preprint πŸŽ‰ We introduce "M2QA: Multi-domain Multilingual Question Answering", a benchmark for evaluating joint language and domain transfer. We present 5 key findings - one of them: Current transfer methods are insufficient, even for LLMs! πŸ“œ https://t.co/PI2AitnxIp πŸ§΅πŸ‘‡
0
2
8
@LeonEnglaender
Leon EnglΓ€nder
1 year
πŸ“’ New preprint πŸŽ‰ We introduce "M2QA: Multi-domain Multilingual Question Answering", a benchmark for evaluating joint language and domain transfer. We present 5 key findings - one of them: Current transfer methods are insufficient, even for LLMs! πŸ“œ https://t.co/PI2AitnxIp πŸ§΅πŸ‘‡
2
2
14
@clifapt
Clifton Poth
1 year
Efficient Fine-Tuning with ReFT has been merged into the Adapters library today and is now available for all models supported by our library ⬇️
@aryaman2020
Aryaman Arora
2 years
New paper! 🫑 We introduce Representation Finetuning (ReFT), a framework for powerful, efficient, and interpretable finetuning of LMs by learning interventions on representations. We match/surpass PEFTs on commonsense, math, instruct-tuning, and NLU with 10–50Γ— fewer parameters.
2
17
57
@AdapterHub
AdapterHub
1 year
Thanks to @osanseviero for helping with the updates from the HF side!
0
0
4
@AdapterHub
AdapterHub
1 year
Read up how to upload your own adapters to HuggingFace Hub: https://t.co/1U4IhAiqyp
1
0
4
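A sketch of what publishing a trained adapter to the Hugging Face Hub looks like; the push_adapter_to_hub call reflects my reading of the docs and should be checked against the linked guide, and the repo and adapter names are illustrative.

```python
# Sketch: uploading a trained adapter to the Hugging Face Hub.
from adapters import AutoAdapterModel

model = AutoAdapterModel.from_pretrained("roberta-base")
model.add_adapter("my_task")
# ... train the adapter ...

model.push_adapter_to_hub(
    "my-username/roberta-base-my_task",  # assumed repo id format
    "my_task",
)
```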