
Edouard Grave
@EXGRV
3K Followers · 236 Following · 8 Media · 128 Statuses
large language models @kyutai_labs
Paris, France
Joined October 2012
Today, we release our 🇫🇷 to 🇬🇧 simultaneous speech-to-speech translation system, called Hibiki. It runs on-device, and the model, inference code, and tech report are all available. It is built on the same audio LLM as Moshi, showing that model's versatility.
Meet Hibiki, our simultaneous speech-to-speech translation model, currently supporting 🇫🇷➡️🇬🇧. Hibiki produces spoken and text translations of the input speech in real-time, while preserving the speaker’s voice and optimally adapting its pace based on the semantic content of the…
Excited to release a preview of Helium-1, our 2B LLM targeting edge and mobile devices. 🚀 More to come in the future: training code, support for more languages, data pipeline, tech report & more…
Meet Helium-1 preview, our 2B multi-lingual LLM, targeting edge and mobile devices, released under a CC-BY license. Start building with it today!
RT @honualx: Looking forward to discussing open research at @kyutai_labs. If you want to work on large-scale multimodal LLMs, come and talk to…
✈️ I will be attending #NeurIPS2023: let me know if you want to chat about the future of LLMs, and how to democratize them. 🌐 We are also hiring members of technical staff and interns @kyutai_labs. Happy to talk about the lab and our mission.
Kyutai has landed! Super excited to build this new research lab. Pure focus on research. As open as it gets.
Announcing Kyutai: a non-profit AI lab dedicated to open science. Thanks to Xavier Niel (@GroupeIliad), Rodolphe Saadé (@cmacgm) and Eric Schmidt (@SchmidtFutures), we are starting with almost 300M€ of philanthropic support. Meet the team ⬇️
Super excited by the release of LLaMA, a series of large language models, from 7B to 65B parameters. 🎉 By training longer, LLaMA obtains GPT-3-level performance with a 13B model, which can run on a single GPU. Excited to see what the research community will do with these models.
Today we release LLaMA, 4 foundation models ranging from 7B to 65B parameters. LLaMA-13B outperforms OPT and GPT-3 175B on most benchmarks. LLaMA-65B is competitive with Chinchilla 70B and PaLM 540B. The weights for all models are open and available at 1/n
Introducing PEER, a new language model that makes text generation and editing more collaborative and controllable. It adds a human in the loop by following instructions and providing explanations. Work led by @timo_schick. Paper:
🎉 New paper 🎉 We introduce PEER, a language model trained to incrementally write texts & collaborate w/ humans in a more natural way. It can write drafts, add suggestions, follow instructions, perform edits, correct itself & provide explanations. Link:
Joint work with the following great team: @gizacard @PSH_Lewis @MariaLomeli_ @lucas_hosseini @Fabio_Petroni @timo_schick Jane Dwivedi-Yu @armandjoulin @riedelcastro
Very excited to introduce Atlas, a new retrieval-augmented language model which is competitive with much larger models on few-shot tasks such as question answering and fact checking. Work led by @gizacard and @PSH_Lewis. Paper:
🚨 We’ve been working on better retrieval-augmented models & are thrilled to present Atlas, led by @gizacard @EXGRV & myself 🚨 Atlas is an end2end pretrained "RAG"-like model, beats models 50x its size on few-shot QA, and sets numerous SotA on knowledge-intensive NLP
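The retrieval-augmented recipe behind models like Atlas can be sketched, in heavily simplified form, as: embed the query, retrieve the most similar passages, then condition generation on them. Below is a toy illustration only; the bag-of-words scorer and prompt format are stand-ins, not Atlas's actual dense retriever or reader.

```python
# Toy sketch of retrieval-augmented generation: score passages against the
# query, keep the top-k, and build a prompt that conditions a reader on them.
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Stand-in embedding: a bag-of-words count vector."""
    return Counter(w.strip(".,?!") for w in text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, passages: list[str], k: int = 2) -> list[str]:
    q = embed(query)
    return sorted(passages, key=lambda p: cosine(q, embed(p)), reverse=True)[:k]

passages = [
    "The Eiffel Tower is in Paris.",
    "Atlas is a retrieval augmented language model.",
    "Chinchilla is a 70B parameter model.",
]
top = retrieve("what is Atlas the language model", passages, k=1)
prompt = "Context: " + " ".join(top) + "\nQuestion: what is Atlas?\nAnswer:"
```

In the real system, `embed` is a learned dense retriever and the prompt is consumed by a pretrained reader; the end-to-end training of both components is what the paper contributes.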
New release of our Contriever project! It includes multi-lingual models which can perform cross-lingual retrieval (e.g., retrieve English documents to answer a question in Swahili), the code to (pre-)train your own retrievers, and an updated version of the paper with new results.
Code for Contriever is now available! Code: Paper: Additionally, we trained mContriever, a state-of-the-art multilingual neural retriever, by applying a similar contrastive learning method.
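The contrastive learning method mentioned above can be sketched as an InfoNCE-style objective: each query is pulled toward its own positive passage while the other positives in the batch act as negatives. A minimal numerical illustration with toy 2-d vectors (not real model embeddings, and not Contriever's exact training code):

```python
# Minimal InfoNCE-style contrastive loss over a batch of (query, positive)
# embedding pairs; every other positive in the batch serves as a negative.
from math import exp, log

def info_nce(queries, positives, tau=0.05):
    """Mean cross-entropy of matching each query to its own positive."""
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))
    losses = []
    for i, q in enumerate(queries):
        scores = [dot(q, p) / tau for p in positives]
        m = max(scores)  # subtract the max for numerical stability
        log_z = m + log(sum(exp(s - m) for s in scores))
        losses.append(log_z - scores[i])  # -log softmax of the true pair
    return sum(losses) / len(losses)

# Toy embeddings where each query is already closest to its own positive,
# so the loss is near zero.
queries = [[1.0, 0.0], [0.0, 1.0]]
positives = [[0.9, 0.1], [0.1, 0.9]]
loss = info_nce(queries, positives)
```

Minimizing this loss pushes matched query/passage embeddings together and mismatched ones apart, which is what lets a single encoder serve as a retriever, including across languages in the mContriever case.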