
BigScience Research Workshop
@BigscienceW
Followers: 14K
Following: 813
Media: 112
Statuses: 353
A research workshop on large language models gathering 1000+ researchers around the world. Follow the training of the 176B multilingual model live @BigScienceLLM
Joined April 2021
RT @jeffboudier: 4 years ago we were on the brink of AI becoming proprietary and centralized, when OpenAI kept GPT3 closed and VCs started….
🌸❤️
Packing for a weekend I found this. It is hard to believe that @BigScienceLLM really happened. The first time I heard of the idea my take was "this is going to be fun, but not going to work". Kudos to @Thom_Wolf for the vision.
RT @oiioxford: DPhil candidate @cailean_osborne shares reflections on the @OpenSourceOrg co-design process to define #opensourceAI and reco….
RT @StasBekman: The Universal Checkpointing paper is out! If you remember the @BigscienceW BLOOM-176B training, Tu….
RT @osanseviero: The top 15 most-liked organizations on @huggingface. 1. @StabilityAI 20k likes. 2. @AIatMeta 20k. 3. @runwayml 11k. 4. CompVi…
RT @YJernite: I respect the caution, but also need to stress that efforts that pursue transparency as an operational value in service of ac….
RT @SashaMTL: Never thought I'd see the day I'd have a publication in JMLR 🥹. So happy that the BLOOM carbon footprint paper has finally fo…
RT @mmitchell_ai: If you wanted to see the fun panel/Q&A we did with Londoners on AI, you can check out the recording here! My preso at the…
RT @BigCodeProject: Introducing: 💫StarCoder. StarCoder is a 15B LLM for code with 8k context and trained only on permissive data in 80+ pro….
RT @BigCodeProject: Join us tomorrow, Wednesday 22nd (6:30 PM - 8:00PM CET) at the @mozillafestival Science Fair to learn more about our wo….
RT @GiadaPistilli: As you already know, I am very proud of the collective work that enabled the development of @BigscienceW's ethical chart….
RT @arankomatsuzaki: The BigScience ROOTS Corpus: A 1.6TB Composite Multilingual Dataset. Documents the data creation and curation efforts….
RT @annargrs: Worried about benchmark data contamination? Studying LLM memorization or attribution? @BigscienceW BLOOM 🌸 now has exact & fu…
RT @yong_zhengxin: (Repost for corrected Arxiv) 🧐 What's the best way to quickly adapt large multilingual language models to new languages?…
RT @m_ryabinin: Petals, a system for easy decentralized inference and adaptation of 100B+ LLMs, is now online! 🌸 Generate text with BLOOM-1…
RT @ClementDelangue: The Bloom paper is out. Looks like it's doing worse than current GPT3 API in zero-shot generation tasks in English but….
RT @SashaMTL: The @BigscienceW carbon footprint paper is live!! 🎉 Check it out to see how we calculated BLOOM's carbon footprint, covering…