AdapterHub
@AdapterHub
Followers: 1K · Following: 205 · Media: 26 · Statuses: 168
A central repository for pre-trained adapter modules in transformers! Active maintainers: @clifapt @h_sterz @LeonEnglaender @timo_imhof @PfeiffJo
Joined May 2020
Exciting news! The new Adapters library for modular and parameter-efficient transfer learning is out! Now simplified & disentangled from @huggingface: pip install adapters (alongside pip install transformers). https://t.co/YUxmvjAf72 https://t.co/GTekd4MEFS #EMNLP2023 (thread)
7 · 102 · 462
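For readers new to the library, here is a minimal sketch of what the install note above leads to; the checkpoint, adapter name, and head are illustrative choices, not prescribed by the announcement.

# Minimal quickstart sketch for the Adapters library (pip install adapters transformers).
# Checkpoint and adapter/head names are illustrative.
from adapters import AutoAdapterModel

model = AutoAdapterModel.from_pretrained("roberta-base")   # any supported checkpoint
model.add_adapter("my_task", config="seq_bn")              # add a bottleneck adapter
model.add_classification_head("my_task", num_labels=2)     # task head tied to the adapter
model.train_adapter("my_task")                             # freeze the base model, train only the adapter

After training, model.save_adapter("./my_task_adapter", "my_task") stores just the adapter weights instead of the full model.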
As always, a huge thanks to our community for the awesome PRs that helped shape this release! Read all about v1.2 on our blog: https://t.co/BwySYdB7Lt Explore the code, try it out & star our repo: https://t.co/GTekd4MEFS (5/5)
github.com: adapter-hub/adapters - A Unified Library for Parameter-Efficient and Modular Transfer Learning
0 · 0 · 3
Also new since v1.0:
- Added AdapterPlus
- Gradient checkpointing support for memory efficiency
- Push & load complex adapter compositions (Stack, Fuse, etc.) directly via the Hugging Face Hub! (see the sketch below)
These additions make Adapters even more powerful & usable. (4/5)
1 · 0 · 1
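As a rough illustration of the adapter compositions mentioned above, here is a sketch using the library's Stack composition block; the checkpoint and adapter names are made up for the example, and the Hub push/load part of the feature is not shown.

# Sketch: stacking a (hypothetical) language adapter under a task adapter.
from adapters import AutoAdapterModel
from adapters.composition import Stack

model = AutoAdapterModel.from_pretrained("xlm-roberta-base")
model.add_adapter("lang_en", config="seq_bn")   # illustrative names; trained separately in practice
model.add_adapter("task_qa", config="seq_bn")

# Activate both adapters as a stack: the output of "lang_en" flows into "task_qa".
model.set_active_adapters(Stack("lang_en", "task_qa"))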
We added 2 powerful new Adapter methods: 1) MTL-LoRA: extends LoRA to multi-task learning, enabling efficient parameter sharing & combination across tasks (Docs: https://t.co/YF1Dp0Tmg3); 2) VeRA: a LoRA variant with shared weights (Docs: https://t.co/H3a3UOuQOn). (3/5)
1 · 0 · 1
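A sketch of how LoRA-family methods are typically selected when adding an adapter; only LoRAConfig is exercised live here, since the exact VeRA/MTL-LoRA config class names should be checked against the linked docs.

# Sketch of choosing a LoRA-style method when adding an adapter.
# LoRAConfig is an existing config class; the VeRA/MTL-LoRA names in the comment
# below are assumptions to be verified in the linked docs.
from adapters import AutoAdapterModel, LoRAConfig

model = AutoAdapterModel.from_pretrained("roberta-base")
model.add_adapter("lora_task", config=LoRAConfig(r=8, alpha=16))
model.train_adapter("lora_task")

# For the new methods, the release suggests dedicated configs, e.g. (assumed names):
#   from adapters import VeraConfig, MTLLoRAConfig
#   model.add_adapter("vera_task", config=VeraConfig())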
This flexibility comes from our new Plugin Interface. It allows seamless integration into any model architecture. As Adapters stays up-to-date with the latest @huggingface transformers version, you can use adapters with any model they support! (2/5) Docs:
1 · 0 · 1
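The second half of that point, using adapters with any model that transformers supports, looks roughly like the sketch below; the checkpoint and adapter name are illustrative, and the plugin-interface API itself is covered by the Docs link above.

# Sketch: enable the adapter API on a model loaded via plain transformers.
import adapters
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
adapters.init(model)                       # attach adapter support to this model instance

model.add_adapter("demo", config="lora")   # arbitrary adapter name, LoRA method
model.train_adapter("demo")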
Adapters v1.2 is out! We've made Adapters incredibly flexible: add adapter support to ANY Transformer architecture with minimal code! We used this to add 8 new models out-of-the-box, incl. ModernBERT, Gemma3 & Qwen3! Explore this + 2 new adapter methods in this thread. (1/5)
1 · 3 · 23
I am hiring a Student Researcher for our Modularity team at the Google DeepMind office in Zurich. Please fill out the interest form if you would like to work with us! The role would start mid/end 2025 and would be in-person in Zurich with 80-100% at GDM https://t.co/Vfypj91KHy
docs.google.com: We are excited to offer an opportunity for students to work with our research team at the GDM Zurich office in 2025. Please provide the following information to express your interest.
3 · 57 · 296
A new update of the Adapters library is out! Check out all the novelties, changes & fixes here: https://t.co/muMqhP0XzA
github.com: This version is built for Hugging Face Transformers v4.47.x. New: Add AdapterPlus adapters (@julian-fong via #746, #775): AdapterPlus (Steitz & Roth, 2024) is a new bottleneck adapter variant op...
0 · 4 · 5
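A sketch of trying the AdapterPlus variant from these release notes; the AdapterPlusConfig name is taken from the release text and should be double-checked against the docs, and the checkpoint is an arbitrary example.

# Sketch: add an AdapterPlus-style bottleneck adapter (config name assumed from the release notes).
from adapters import AutoAdapterModel, AdapterPlusConfig

model = AutoAdapterModel.from_pretrained("bert-base-uncased")
model.add_adapter("plus_demo", config=AdapterPlusConfig())
model.train_adapter("plus_demo")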
M2QA has been accepted to #EMNLP Findings! M2QA is a new multilingual and multidomain QA dataset. We show that current transfer methods are insufficient and that language & domain transfer aren't independent! Paper: https://t.co/A23KymqS0b https://t.co/yHn5KWrCMQ
New preprint: We introduce "M2QA: Multi-domain Multilingual Question Answering", a benchmark for evaluating joint language and domain transfer. We present 5 key findings - one of them: Current transfer methods are insufficient, even for LLMs! https://t.co/PI2AitnxIp
0 · 3 · 15
Thank you @AdapterHub for implementing our #NeurIPS method (https://t.co/hW3Sn4IAVF) in your latest update! Great to see our work being applied for practical advancements. Check out their work! #MachineLearning #AdapterMerging #ModelMerging
arxiv.org: As an efficient alternative to conventional full finetuning, parameter-efficient finetuning (PEFT) is becoming the prevailing method to adapt pretrained language models. In PEFT, a lightweight...
Adapters 1.0 is here! Our open-source library for modular and parameter-efficient fine-tuning got a major upgrade! v1.0 is packed with new features (ReFT, Adapter Merging, QLoRA, ...), new models & improvements! Blog: https://t.co/Evp8kQG1je Highlights in the thread!
0 · 2 · 11
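Of the v1.0 features listed above, QLoRA is the most setup-heavy; below is a sketch of the usual pattern (4-bit base model via bitsandbytes plus a trainable LoRA adapter). The model name, hyperparameters, and hardware assumptions (a CUDA GPU with bitsandbytes installed) are illustrative, not prescribed by the release.

# Sketch of QLoRA-style training: quantized base model + trainable LoRA adapter.
import torch
import adapters
from adapters import LoRAConfig
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",             # illustrative checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)

adapters.init(model)                          # enable the adapter API on this model
model.add_adapter("qlora_demo", config=LoRAConfig(r=16, alpha=32))
model.train_adapter("qlora_demo")             # only the LoRA weights remain trainable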
Huge thanks to all contributors and our amazing community! Adapters is an open-source project, and we're excited to see what you build with it and how you use it for your research. If you have questions or ideas, join the discussion on GitHub! https://t.co/GTekd4MEFS
github.com: adapter-hub/adapters - A Unified Library for Parameter-Efficient and Modular Transfer Learning
0 · 0 · 5
New Models Alert! Adapters now supports:
- Whisper: our first audio model!
- Mistral
- MT5
- PLBart
With Whisper, we bring speech recognition capabilities to our library! Notebook:
github.com: adapter-hub/adapters - A Unified Library for Parameter-Efficient and Modular Transfer Learning
1 · 0 · 5
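A minimal sketch of what Whisper support looks like in practice; the checkpoint, adapter name, and config choice are illustrative, and the linked notebook is the reference for a full training setup.

# Sketch: attach a bottleneck adapter to Whisper for speech tasks.
from adapters import AutoAdapterModel

model = AutoAdapterModel.from_pretrained("openai/whisper-small")
model.add_adapter("asr_demo", config="seq_bn")
model.train_adapter("asr_demo")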
ReFT: ReFT is an all-new adapter method that now integrates with all models supported by Adapters: https://t.co/QwaG3F4rEM
Efficient Fine-Tuning with ReFT has been merged into the Adapters library today and is now available for all models supported by our library.
1 · 0 · 4
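A sketch of using the ReFT integration; LoReftConfig is one of the ReFT configs mentioned in the v1.0 notes, but the exact class name and defaults should be verified in the docs, and the checkpoint and adapter name are arbitrary.

# Sketch: add a ReFT module (config class name assumed from the release notes).
from adapters import AutoAdapterModel, LoReftConfig

model = AutoAdapterModel.from_pretrained("roberta-base")
model.add_adapter("reft_demo", config=LoReftConfig())
model.train_adapter("reft_demo")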
The 3 supported LoRA merging methods are: 1) the method from the #NeurIPS paper by @jinghan23 et al.; 2) linear merging (@Google paper by @alexandraxron et al.); 3) a new SVD-based method. Detailed explanations here in the docs: https://t.co/1wIkVO3UZZ Notebook:
github.com: adapter-hub/adapters - A Unified Library for Parameter-Efficient and Modular Transfer Learning
1 · 0 · 6
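A sketch of how LoRA adapters are merged via the library's adapter-averaging API; the adapter names and mixing weights are illustrative, and the combine_strategy string shown is only one option, so the docs linked above are the reference for the exact values.

# Sketch: merge two trained LoRA adapters into a new adapter without further fine-tuning.
from adapters import AutoAdapterModel, LoRAConfig

model = AutoAdapterModel.from_pretrained("roberta-base")
model.add_adapter("task_a", config=LoRAConfig())   # in practice these would be trained or loaded
model.add_adapter("task_b", config=LoRAConfig())

model.average_adapter(
    "merged_task",                 # name of the new, merged adapter
    ["task_a", "task_b"],          # adapters to combine
    weights=[0.6, 0.4],            # illustrative mixing weights
    combine_strategy="linear",     # one of the supported LoRA merging modes
)
model.set_active_adapters("merged_task")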
Adapter Merging: Adapter Merging allows you to combine trained adapters without additional fine-tuning! It is perfect for domain, language, and task transfer. We now support 3 different ways to merge LoRA adapters.
1 · 0 · 4
Adapters 1.0 is here! Our open-source library for modular and parameter-efficient fine-tuning got a major upgrade! v1.0 is packed with new features (ReFT, Adapter Merging, QLoRA, ...), new models & improvements! Blog: https://t.co/Evp8kQG1je Highlights in the thread!
2 · 7 · 45
New preprint: We - the AdapterHub team - present the M2QA benchmark to evaluate joint domain and language transfer! Key highlight: We show that adapter-based methods on small language models can reach the performance of Llama 3 on M2QA!
New preprint: We introduce "M2QA: Multi-domain Multilingual Question Answering", a benchmark for evaluating joint language and domain transfer. We present 5 key findings - one of them: Current transfer methods are insufficient, even for LLMs! https://t.co/PI2AitnxIp
0 · 2 · 8
New preprint: We introduce "M2QA: Multi-domain Multilingual Question Answering", a benchmark for evaluating joint language and domain transfer. We present 5 key findings - one of them: Current transfer methods are insufficient, even for LLMs! https://t.co/PI2AitnxIp
2 · 2 · 14
Efficient Fine-Tuning with ReFT has been merged into the Adapters library today and is now available for all models supported by our library.
New paper! We introduce Representation Finetuning (ReFT), a framework for powerful, efficient, and interpretable finetuning of LMs by learning interventions on representations. We match/surpass PEFTs on commonsense, math, instruct-tuning, and NLU with 10-50x fewer parameters.
2 · 17 · 57
Thanks to @osanseviero for helping with the updates from the HF side!
0 · 0 · 4
Read up on how to upload your own adapters to the Hugging Face Hub: https://t.co/1U4IhAiqyp
1 · 0 · 4
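A sketch of the upload step described in that guide; the repo and adapter names are placeholders, and it assumes you are already authenticated with the Hub (e.g. via huggingface-cli login).

# Sketch: push a trained adapter to the Hugging Face Hub.
from adapters import AutoAdapterModel

model = AutoAdapterModel.from_pretrained("roberta-base")
model.add_adapter("my_adapter", config="seq_bn")
# ... train the adapter here ...

model.push_adapter_to_hub(
    "my-username/my-adapter-demo",   # target Hub repo (placeholder)
    "my_adapter",                    # local adapter name to upload
)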