rodolphe_jenatton
@RJenatton
Followers: 339 · Following: 393 · Media: 2 · Statuses: 186
I am so proud to share that we have raised an additional $41M round to fuel our ambitions at @bioptimus_ai!
🎉 Excited to share that we've just hit a $76M funding milestone for @bioptimus_ai, with a fresh $41M round to build the first multiscale foundation model of biology:
Bravo Francis! A definite must-read.
The first few months at @bioptimus_ai have been a blast! Check out our first released model.
@bioptimus_ai releases H-optimus-0, the largest #opensource AI foundation model for histopathology! - code: https://t.co/1XkNgx0FUC - press release: https://t.co/uba4cB5gpv Enjoy! Congrats Charlie Saillard @RJenatton @FelipeLlinares @ZeldaMariet @DavidCahane @ericdurand
We are building a fantastic team at @bioptimus_ai and we are hiring talent for a variety of roles. Check out 👇
bioptimus.com
We are building a team of creative minds. Together, we aspire to redefine the landscape of biology with AI, unlocking its potential for everyone.
Wondering if AI can learn the language of life? Come join the @bioptimus_ai crew to joyfully change the world and shape the future of biology and medicine with AI foundation models! Check out https://t.co/Uga5Gnk3Uz for roles in engineering, data science, product, operations...
Great to see @bioptimus_ai mentioned by @EmmanuelMacron in the context of the dynamic French AI ecosystem!
Mistral, LightOn, Shift Technology, Alan, Bioptimus, Google: more and more of them are choosing France as the place to innovate in artificial intelligence. A source of pride. By investing, we are making France a country at the cutting edge of AI. An AI says so too!
In case you missed it: at @bioptimus_ai we are looking for the best talent (ML/biology/large-scale infrastructure) to join our fantastic technical team @ZeldaMariet @FelipeLlinares @jeanphi_vert 👉
bioptimus.com
We are building a team of creative minds. Together, we aspire to redefine the landscape of biology with AI, unlocking its potential for everyone.
Do you want to work with some of the best minds to transform biology? Then we want to hear from you! To find out how you can be a part of Bioptimus, visit https://t.co/yx6D7lJkIk (3/3)
bioptimus.com
We are building a team of creative minds. Together, we aspire to redefine the landscape of biology with AI, unlocking its potential for everyone.
With a successful $35M seed funding round, we've assembled a world-class team of scientists to revolutionize #biology with AI. (2/3)
I'm proud to announce the launch of @bioptimus_ai, with a mission to build the first #foundationmodel for biology. (1/3)
Pure science fiction 😳
ELOHIM PRANDI!!! 🔥🔥🔥 The last-chance shot ends up in the back of the Swedish net! Les Bleus equalize at the very last second! What madness! #FRASUE #BleuetFier @FRAHandball Watch live on TF1+ ➡️ https://t.co/Zzt73ywe75
Very excited to see the dynamism (and the quality!) of the French and Parisian AI ecosystem.
Our founding team covers many AI fields: vision with Patrick Pérez and Hervé Jégou (@hjegou), LLMs with Edouard Grave (@EXGRV), audio with Neil Zeghidour (@neilzegh) and Alexandre Défossez (@honualx), and infrastructure with Laurent Mazaré (@lmazare).
Introducing Soft MoE! Sparse MoEs are a popular method for increasing the model size without increasing its cost, but they come with several issues. Soft MoEs avoid them and significantly outperform ViT and different Sparse MoEs on image classification. https://t.co/ozX9qPBe96
arxiv.org
Sparse mixture of expert architectures (MoEs) scale model capacity without significant increases in training or inference costs. Despite their success, MoEs suffer from a number of issues:...
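As a rough illustration of the soft-routing idea described above, here is a minimal NumPy sketch of a Soft MoE layer: instead of routing each token to a single expert, every token is mixed into a small set of expert "slots" with softmax weights (dispatch), the experts process the slots, and each token then takes a softmax-weighted combination of the slot outputs (combine). The shapes, the linear "experts", and all parameter names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def soft_moe_layer(X, phi, expert_weights):
    """Soft-MoE sketch: every token contributes softly to every slot.

    X:              (n_tokens, d)  input token embeddings
    phi:            (d, n_slots)   learnable slot parameters
    expert_weights: list of (d, d) arrays, one per expert; slots are split
                    evenly across experts (toy linear experts for brevity).
    """
    logits = X @ phi                        # (n_tokens, n_slots)
    dispatch = softmax(logits, axis=0)      # per-slot mixture over tokens
    combine = softmax(logits, axis=1)       # per-token mixture over slots

    slots_in = dispatch.T @ X               # (n_slots, d) soft slot inputs
    per_expert = slots_in.shape[0] // len(expert_weights)
    slots_out = np.concatenate([
        slots_in[i * per_expert:(i + 1) * per_expert] @ W
        for i, W in enumerate(expert_weights)
    ])                                      # each expert processes its own slots
    return combine @ slots_out              # (n_tokens, d) soft combination

# Tiny usage example with random data (hypothetical sizes).
rng = np.random.default_rng(0)
X = rng.normal(size=(16, 32))                                   # 16 tokens, width 32
phi = rng.normal(size=(32, 8))                                  # 8 slots
experts = [rng.normal(size=(32, 32)) * 0.1 for _ in range(4)]   # 4 experts, 2 slots each
print(soft_moe_layer(X, phi, experts).shape)                    # (16, 32)
```

Because every token contributes to every slot with a continuous weight, the layer stays fully differentiable and sidesteps the discrete routing and load-balancing issues the tweet alludes to.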
If you are interested in how to best exploit pretrained models within the context of contrastive learning, go and check out our recent work led by @janundnik during a great @GoogleAI internship! (Full list of collaborators in the thread 👇)
👀 Looking for the best use of pre-trained classifiers in contrastive learning? 🏝Check out my @GoogleAI internship project at the ES-FoMo workshop @icmlconf in Hawaii next week! 🔥 With Three Towers, the image tower benefits from both contrastive learning and pre-training!
Having side information, even if it is only available at training time, can help deal with label noise. We study this phenomenon and give practical methods to exploit that information. Check out our ICML paper, led by @gortizji during his great @GoogleAI internship!
Label noise is a ubiquitous problem in machine learning! 💥 Our ICML work 🌴: “When does privileged information explain away label noise?” answers how meta-data can help us solve this issue 🤔 Come to our poster on Wed and check it out! 🏄 📄: https://t.co/RsKiZf3Bdk 🧵1/5
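As a generic illustration of the training-time/test-time asymmetry described above (not the paper's method), here is a small NumPy sketch: an unreliable annotator's id is available as a privileged feature during training, a logistic regression is trained with and without it, and at test time the privileged column is marginalized out because it no longer exists. The toy data, variable names, and the marginalization choice are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(features, labels, steps=3000, lr=0.5):
    """Plain logistic regression trained by gradient descent (with intercept)."""
    feats = np.hstack([features, np.ones((len(features), 1))])
    w = np.zeros(feats.shape[1])
    for _ in range(steps):
        p = sigmoid(feats @ w)
        w -= lr * feats.T @ (p - labels) / len(labels)
    return w

def predict(features, w):
    feats = np.hstack([features, np.ones((len(features), 1))])
    return sigmoid(feats @ w)

# Toy data: x carries the signal; an unreliable annotator sometimes labels 1
# regardless of the input. The annotator id is the privileged feature: known
# for training examples, unavailable at test time.
n, d = 4000, 5
x = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
clean_y = (x @ w_true > 0).astype(float)
annotator = rng.integers(0, 2, size=n).astype(float)
noisy = (annotator == 1) & (rng.random(n) < 0.4)
y = np.where(noisy, 1.0, clean_y)

# Baseline: ignore the privileged information.
w_plain = train_logreg(x, y)

# Privileged model: append the annotator id as a training-time-only input.
w_priv = train_logreg(np.hstack([x, annotator[:, None]]), y)

# Test time: the annotator id does not exist, so marginalize it out by
# averaging predictions over its two training-time values.
x_test = rng.normal(size=(2000, d))
y_test = (x_test @ w_true > 0).astype(float)
p_plain = predict(x_test, w_plain)
p_priv = 0.5 * (predict(np.hstack([x_test, np.zeros((2000, 1))]), w_priv)
                + predict(np.hstack([x_test, np.ones((2000, 1))]), w_priv))

# Compare both models on clean test labels.
print("plain     :", ((p_plain > 0.5) == y_test).mean())
print("privileged:", ((p_priv > 0.5) == y_test).mean())
```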
How best to take advantage of pretrained models for contrastive learning? Our approach is simple, flexible and robust. Joint work with fantastic colleagues at @GoogleAI. Special shout-out to @janundnik, who led the project during his student researcher program 👏
New Preprint: 🔥Three Towers: Flexible Contrastive Learning with Pretrained Image Models🔥 We improve the contrastive learning of vision-language models by incorporating knowledge from pretrained image classifiers. 📄 https://t.co/sACCbj7Php 🧵[1/3]
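To make the "third tower" idea concrete, here is a minimal NumPy sketch of what a 3T-style objective could look like: the usual image-text contrastive (CLIP-style) loss, plus auxiliary contrastive terms that align the trainable towers with embeddings from a frozen, pretrained image classifier. The auxiliary weight and the exact choice of auxiliary pairs are assumptions for illustration only; see the paper for the actual formulation.

```python
import numpy as np

def l2_normalize(z, axis=-1):
    return z / np.linalg.norm(z, axis=axis, keepdims=True)

def clip_loss(a, b, temperature=0.07):
    """Symmetric InfoNCE loss between two batches of embeddings matched by index."""
    logits = (l2_normalize(a) @ l2_normalize(b).T) / temperature   # (batch, batch)
    labels = np.arange(len(a))
    def xent(lg):
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[np.arange(len(lg)), labels].mean()
    return 0.5 * (xent(logits) + xent(logits.T))

def three_towers_loss(img_emb, txt_emb, frozen_emb, aux_weight=1.0):
    """Sketch of a 3T-style objective: the main image-text contrastive loss plus
    auxiliary terms aligning both trainable towers with a frozen pretrained tower."""
    main = clip_loss(img_emb, txt_emb)
    aux = clip_loss(img_emb, frozen_emb) + clip_loss(txt_emb, frozen_emb)
    return main + aux_weight * aux

# Toy usage with random embeddings standing in for the three towers.
rng = np.random.default_rng(0)
batch, dim = 8, 64
img_emb = rng.normal(size=(batch, dim))      # trainable image tower output
txt_emb = rng.normal(size=(batch, dim))      # trainable text tower output
frozen_emb = rng.normal(size=(batch, dim))   # frozen pretrained classifier features (projected)
print(three_towers_loss(img_emb, txt_emb, frozen_emb))
```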
New paper from my time as a student researcher at Google :)
Three Towers: Flexible Contrastive Learning with Pretrained Image Models introduces Three Towers (3T), a flexible method to improve the contrastive learning of vision-language models by incorporating pretrained image classifiers. While contrastive models are usually trained from…
Excited & proud to share our work on Scaling Vision Transformers to 22B params, i.e. the largest vision model to date 🚀! https://t.co/VYIKEQuWwc ViT-22B achieves excellent transfer on dense recognition tasks, i.e. semantic segmentation & depth prediction, with a *frozen* backbone ❄️
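The "frozen backbone" transfer mentioned here follows the usual recipe of keeping the pretrained encoder fixed and training only a lightweight head on its features. Below is a minimal, self-contained NumPy sketch of that recipe, with a fixed random projection standing in for the frozen backbone and a ridge-regression linear probe as the head; the sizes, toy data, and the probe itself are illustrative assumptions, not the actual ViT-22B setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in: the "frozen backbone" would be the pretrained ViT applied without
# gradient updates; here a fixed random projection plays that role.
d_in, d_feat, n_classes = 128, 256, 10
W_frozen = rng.normal(size=(d_in, d_feat)) / np.sqrt(d_in)

def frozen_backbone(x):
    # No parameters are updated here; the backbone only provides features.
    return np.maximum(x @ W_frozen, 0.0)

# Toy labelled data for the downstream task.
n = 1000
x = rng.normal(size=(n, d_in))
y = rng.integers(0, n_classes, size=n)
Y = np.eye(n_classes)[y]                      # one-hot targets

# Train only the head: a ridge-regression linear probe on frozen features.
F = frozen_backbone(x)
lam = 1e-2
head = np.linalg.solve(F.T @ F + lam * np.eye(d_feat), F.T @ Y)

# Inference: the backbone stays frozen, only the learned head is applied.
pred = frozen_backbone(x) @ head
print("train accuracy of the probe:", (pred.argmax(1) == y).mean())
```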