
Carlos Lassance
@cadurosar
Followers
484
Following
270
Media
20
Statuses
256
MTS @ Cohere, constantly trying to make Information Retrieval work better, while making mistakes along the way.
Grenoble
Joined March 2018
After 6 months, can finally tell everyone what we have been working on. Looking forward to seeing what people will be able to build with this :)
Introducing Embed 4: our latest state-of-the-art multimodal embedding model that enables enterprises to securely add powerful search and retrieval capabilities to their agentic AI applications!
8
5
80
Excited to share that Provence is accepted to #ICLR2025! Provence is a method for training an efficient & high-performing context pruner for #RAG, either standalone or combined with a reranker https://t.co/Q7TRYP0CPt w/ @thibault_formal @VNikoulina @sclincha @naverlabseurope
1
5
23
Today, we're launching early access for North! Our all-in-one secure AI workspace platform combines LLMs, search, and agents into an intuitive interface that effortlessly integrates AI into your daily work to achieve peak productivity.
15
101
553
Launch of Cohere Rerank 3.5 - Boost your Search! What is new: - Large gains in multilingual retrieval - Reasoning capabilities - Strong gains in finance, eCommerce, project management - New platforms: AWS Bedrock & Pinecone
4
19
123
Aya-Expanse, the strongest open weights multilingual LLM, was just released by @CohereForAI It beats Llama 70B multilingual, while being half the size and twice the speed.
4
40
240
Your search can see now. We're excited to release fully multimodal embeddings for folks to start building with!
15
72
437
Do not miss the application deadline for #ALPS2025 on October 15! https://t.co/j1NTFUxvNw ALPS is an Advanced Language Processing School, held in the French Alps, with wonderful speakers, inspiring discussions around NLP, and outdoor activities such as skiing and hiking!
Deadline for applications is Oct 15th, but that would be for the 2025 edition @mgalle #ALPS2025 Event: March 30th-April 4th 2025 Where: Aussois (French Alps) Applications & info: https://t.co/Lcn5s4LEHV
1
6
17
I will present our study on Multilingual Retrieval-augmented generation, tomorrow at #ACL2024NLP workshop on Knowledgeable LLMs, 16:00 (poster hall)! Come to discuss a multilingual extension of our BERGEN library: https://t.co/5uyOaRfEal
https://t.co/FpBfP09x7P
#NLProc
2
4
34
Looking for a team lead to join our search team at @cohere, working with @Nils_Reimers and many other kind & smart people. Feel free to message me if you want to know more! Here is the posting:
1
13
63
Announcing the private beta of our newest foundation embedding model, Cohere Compass: designed specifically for multi-aspect data like emails, invoices, CVs, and support tickets to offer superior enterprise search capabilities. Sign up to try it out! https://t.co/CO6OIrnpDg
cohere.com
Cohere announces North, an all-in-one secure AI workspace platform that empowers employees to significantly improve the quality and speed of their work.
4
50
222
World's First Binary Vector Database - Happy to announce the world's first Binary Vector Database (for educational purposes). 32x less memory, 40x faster search. Github: https://t.co/4jNgl0fROq
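The 32x memory figure comes from sign-binarizing float32 embeddings (1 bit per dimension instead of 32) and the speedup from searching with Hamming distance on packed bits. A minimal NumPy sketch of the idea; the function names are my own illustration, not from the linked repo:

```python
import numpy as np

def binarize(emb):
    # Sign-binarize float embeddings and pack 8 dimensions per byte:
    # a 64-dim float32 vector (256 bytes) becomes 8 bytes -> 32x smaller.
    return np.packbits(np.asarray(emb) > 0, axis=-1)

def hamming_search(query_bits, db_bits, k=10):
    # Hamming distance = popcount of XOR; smaller distance = more similar.
    dist = np.unpackbits(db_bits ^ query_bits, axis=-1).sum(axis=-1)
    return np.argsort(dist)[:k]
```

In practice the binary shortlist is often rescored with the original float vectors, trading a little extra compute for recovered quality.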
8
62
402
People are asking me what to expect from "Faster Learned Sparse Retrieval with Block-Max Pruning" with @ntonellotto and Torsten Suel: 10x faster safe retrieval compared to MaxScore, and you can check the approximate-retrieval trade-offs yourself on naver/splade-cocondenser-ensembledistil
0
4
10
Hope to see you later today in the poster session, and feel free to send a DM if you want to chat about this! Note that this was done while I was still at @naverlabseurope and it is a collaboration with @sclincha @ntonellotto and @HerveDejean.
0
1
1
We promise this can be used conveniently, but showcase results in PISA. It is actually also possible in Elasticsearch: the Elasticsearch team proposed a similar technique, though without saturation or document pruning ( https://t.co/uFsMKhS23W ), concurrently with the ECIR deadline.
1
0
4
In making SPLADE more efficient, one normally has to sacrifice either convenience (still using the same tool, like Elasticsearch) or out-of-domain effectiveness. Not the case for Two-Step: of the 30 datasets we tested, 22 showed no statistically significant drop in effectiveness, while being as efficient as BM25.
1
0
4
Today I'm presenting Two-Step SPLADE at #ECIR2024 findings! We show how to run SPLADE with BM25-like latency and SPLADE-v3 effectiveness TLDR: Prune vectors, retrieve(top-k), rescore with unpruned ones. Paper: https://t.co/rd9ezd5IeF Github: https://t.co/8qVgdBX2al
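The prune-retrieve-rescore TLDR above can be sketched as follows. This is a toy dense-matrix illustration of the idea, not the actual SPLADE/PISA implementation, and all names are my own:

```python
import numpy as np

def two_step_search(query_vec, pruned_index, full_index, k=10, shortlist=100):
    # Step 1: cheap retrieval over the pruned index to get a candidate shortlist.
    scores = pruned_index @ query_vec
    candidates = np.argpartition(-scores, shortlist)[:shortlist]
    # Step 2: rescore only the shortlist with the original unpruned vectors.
    exact = full_index[candidates] @ query_vec
    order = np.argsort(-exact)[:k]
    return candidates[order], exact[order]
```

The first step is fast because pruned vectors have far fewer nonzero terms; the second step restores ranking quality by rescoring a small candidate set exactly.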
5
8
61
Cohere Embed V3 - int8 & Binary Support! I'm excited to launch our native support for int8 & binary embeddings for Cohere Embed V3. They slash your vector DB cost 4x - 32x while keeping 95% - 100% of the search quality. https://t.co/uJBg6nyPvf
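The 4x figure for int8 follows from storing one byte per dimension instead of four. A hedged sketch of symmetric per-dimension int8 quantization; this is my own illustration of the general technique, not Cohere's implementation:

```python
import numpy as np

def quantize_int8(emb):
    # Per-dimension symmetric scaling so values fit in [-127, 127]:
    # one int8 per dimension instead of one float32 -> 4x smaller.
    emb = np.asarray(emb, dtype=np.float32)
    scale = np.abs(emb).max(axis=0) / 127.0
    scale[scale == 0] = 1.0  # avoid division by zero on constant dims
    return np.round(emb / scale).astype(np.int8), scale

def dequantize(quantized, scale):
    # Approximate reconstruction for rescoring in float space.
    return quantized.astype(np.float32) * scale
```

Dot products can then be computed directly on the int8 codes and corrected by the scales, which is where most of the cost saving comes from.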
17
80
450
Just as splade-v3 comes out ( https://t.co/j1CeQIO4iL ), splade++ reaches 1M monthly downloads again, pretty cool seeing this happen again! Always curious to know what people are doing with it, and congrats to the @naverlabseurope team @thibault_formal @sclincha HerveDejean
0
0
17
SOTA Wikipedia Embeddings in 300+ Languages! What could you build if your RAG has access to Wikipedia in all 300+ languages? Available for anyone to use, using our state-of-the-art multilingual embedding model: https://t.co/PCaAyboxsq
14
109
549
Welcome @CohereForAI Command-R! The top trending model among over 500k open-access models! https://t.co/F4ee7TAulJ
8
62
374