davidrau Profile
davidrau

@davidmrau

Followers
42
Following
49
Media
0
Statuses
8

Member of Technical Staff @ Cohere

Amsterdam
Joined December 2020
@davidmrau
davidrau
7 months
Incredibly proud to be building the best embedding models on the planet, alongside so many brilliant colleagues.
@aidangomez
Aidan Gomez
7 months
I’m excited to share @Cohere’s newest model, Embed 4! Embed 4 is the optimal search engine for secure enterprise AI assistants and agents.
0
0
12
@nadiinchi
Nadia Chirkova
1 year
If you are at #EMNLP2024 and interested in RAG, come to discuss our BERGEN library! @VNikoulina will present the BERGEN poster tomorrow, Nov 13, 16:00-17:30, location: Jasmine. Repo: https://t.co/5uyOaRf6kN Paper: https://t.co/3prXmD9s4C @naverlabseurope #NLProc #RAG
1
4
17
@davidmrau
davidrau
1 year
Happy to see our work on Context Embeddings for efficient answer generation in RAG being featured among other great works. @dylan_wangs @HerveDejean @sclincha
@omarsar0
elvis
1 year
After reading 100s of AI papers this week, it's clear how useful small language models will be, and how important it is to efficiently enhance reasoning and understanding in LLMs. If you are looking for some weekend reads, here are a few notable AI papers I read this week: -
0
0
5
@sclincha
Stéphane Clinchant
1 year
What’s a good baseline for RAG? 🤔 The literature shows consistent differences in experimental setups, retrievers, datasets, and metrics. So, we built the BERGEN library https://t.co/9srOoFQNQ5 to enhance reproducibility and identify strong baselines: 🧵 @naverlabseurope
github.com
Benchmarking library for RAG. Contribute to naver/bergen development by creating an account on GitHub.
1
7
21
@omarsar0
elvis
1 year
Context Embeddings for Efficient Answer Generation in RAG Proposes an effective context compression method to reduce long context and speed up generation time in RAG systems. The long contexts are compressed into a small number of context embeddings which allow different
3
86
331
@_reachsumit
Sumit
1 year
Context Embeddings for Efficient Answer Generation in RAG Speeds up generation time while improving answer quality by compressing multiple contexts into a small number of embeddings, offering flexible compression rates. 📝 https://t.co/ETcrjOnIgx
0
21
91
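The compression idea described in the two tweets above can be sketched as follows. This is a toy stand-in, not the paper's actual method: it uses simple mean-pooling over contiguous segments, whereas the paper learns the compression; the function name and shapes are hypothetical.

```python
import numpy as np

def compress_context(token_embeddings: np.ndarray, k: int) -> np.ndarray:
    """Compress a (seq_len, dim) passage into k context embeddings by
    mean-pooling over k contiguous segments. A toy illustration of
    reducing a long retrieved context to a handful of embeddings; the
    compression rate is seq_len / k."""
    seq_len, dim = token_embeddings.shape
    # Split token positions into k roughly equal contiguous segments.
    segments = np.array_split(np.arange(seq_len), k)
    # One pooled embedding per segment.
    return np.stack([token_embeddings[idx].mean(axis=0) for idx in segments])

# Example: a 128-token passage with 32-dim embeddings, compressed 16x.
rng = np.random.default_rng(0)
passage = rng.normal(size=(128, 32))
compressed = compress_context(passage, k=8)
print(compressed.shape)  # (8, 32)
```

Feeding the generator 8 embeddings instead of 128 tokens is what shortens the decoder's input and speeds up answer generation; varying `k` gives the "flexible compression rates" the second tweet mentions.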
@tonylittlewine
Antonis Minas Krasakis
2 years
Excited to share that our participation in @trec_ikat 2023 ranked 1st! We submitted Retrieve-then-Generate and Generate-then-Retrieve runs, combining #LLMs & #search for conversational search. Joint work with @z_abbasiantaeb @davidmrau @ChuanMg @srahmanidashti @maliannejadi
0
4
44
@irlab_amsterdam
The IRLab at the University of Amsterdam
3 years
📢 SEA December: A Year in IR in Amsterdam 🎇 With 8 speakers from the IR Amsterdam ecosystem: 3⃣ @davidmrau (U. Amsterdam) - The Role of Complex NLP in Transformers for Text Ranking 📄 4⃣ @bobvanluijt (@SeMI_Tech) - About @weaviate_io. 1/2
1
3
10