Piotr Bojanowski Profile
Piotr Bojanowski

@p_bojanowski

Followers
2K
Following
640
Media
1
Statuses
64

Research Scientist at Facebook AI Research. Interested in Machine Learning and Computer Vision.

Joined April 2020
@p_bojanowski
Piotr Bojanowski
14 days
🔥Our team is looking for a PhD-level intern🔥 We are looking for deep learning / machine learning PhD students! If you are interested, you can apply here: https://t.co/sWeTrzn6fY Don't hesitate to reach out!
metacareers.com
Meta's mission is to build the future of human connection and the technology that makes it possible.
6
37
281
@p_bojanowski
Piotr Bojanowski
1 month
Congrats on this amazing paper @sainingxie, really love these results! I wonder if further improvements in representation learning can push this paradigm.
@sainingxie
Saining Xie
1 month
three years ago, DiT replaced the legacy unet with a transformer-based denoising backbone. we knew the bulky VAEs would be the next to go -- we just waited until we could do it right. today, we introduce Representation Autoencoders (RAE). >> Retire VAEs. Use RAEs. 👇(1/n)
1
1
35
@skalskip92
SkalskiP
1 month
segmenting concrete cracks is a difficult task for vision models: thin, long segments. RF-DETR-Seg reaches near-perfect accuracy after only one epoch of training. that DINOv3 backbone is pretty crazy. notebook: https://t.co/Qd5SPUqSKx
6
7
62
@skalskip92
SkalskiP
1 month
we just released the RF-DETR segmentation preview. RF-DETR is 3x faster and more accurate than the largest YOLO11 when evaluated on the COCO segmentation benchmark. we plan to launch the full family of models by the end of October ↓ fine-tuning notebook below repo: https://t.co/6tF4mhWSs8
42
171
2K
@p_bojanowski
Piotr Bojanowski
3 months
Very interesting work by @JRaugel et al., from our friends in the Brain & AI team at FAIR, about the alignment between DINOv3 features and brain activations. I have always been fascinated by these similarities...
@JeanRemiKing
Jean-Rémi King
3 months
Can AI help understand how the brain learns to see the world? Our latest study, led by @JRaugel from FAIR at @AIatMeta and @ENS_ULM, is now out! 📄 https://t.co/y2Y3GP3bI5 🧵 A thread:
0
2
30
@jianyuan_wang
Jianyuan
3 months
It’s been a real pleasure playing with DINOv3 and training a new VGGT with it. 🚀 I believe its potential goes far beyond what current benchmarks can reveal. During training, you can feel the “character” of this model: smart, delicate, and surprisingly adaptable. It’s not just
@MichaelRamamon
Michaël Ramamonjisoa
3 months
DINOv3 is out! Super proud of our team's contribution to the computer vision community. Check out this great summary by @BaldassarreFe to dig deep into our features and how we got there! Now let's focus on applications enabled by DINOv3👇
3
13
236
@BensenHsu
BensenHsu
3 months
@AIatMeta Breakdown of the paper behind it: Imagine you want to teach a computer to understand pictures, like telling a cat from a dog, or finding all the roads in a map. Usually, you have to show the computer lots and lots of pictures and tell it what's in each one. This is like a
2
6
60
@WorldResources
World Resources Institute
3 months
@AIatMeta 🛰️🌱This is an exciting advance for monitoring the world's ecosystems from space! We were thrilled to collaborate with Meta to create DINOv3 and are now using it to measure and count the world's trees. Check it out➡️
wri.org
While AI is often criticized for its negative environmental impacts, a new model could help unlock more finance for restoring degraded landscapes.
0
1
13
@p_bojanowski
Piotr Bojanowski
3 months
I am happy to share the work of our team: the outcome of a collaborative effort by a joyful group of skilled and determined scientists and engineers! Congrats to the team on this amazing milestone!
@AIatMeta
AI at Meta
3 months
Introducing DINOv3: a state-of-the-art computer vision model trained with self-supervised learning (SSL) that produces powerful, high-resolution image features. For the first time, a single frozen vision backbone outperforms specialized solutions on multiple long-standing dense prediction tasks.
5
24
221
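The "single frozen backbone" paradigm from the announcement above can be sketched in a few lines. Everything here is a stand-in: the random projection plays the role of a pretrained DINOv3 ViT, and the linear head is the only part that would ever be trained. The actual model loading and feature extraction APIs are not shown in this feed, so none of this code is the real DINOv3 interface.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "frozen backbone": a fixed random projection from raw patches to
# features. In practice this would be a pretrained DINOv3 ViT; the point is
# only that its weights never change while the task head is trained.
D_PATCH, D_FEAT, N_CLASSES = 16 * 16 * 3, 64, 10
W_backbone = rng.standard_normal((D_PATCH, D_FEAT)) / np.sqrt(D_PATCH)  # frozen

def extract_features(image):
    """Split a (224, 224, 3) image into 16x16 patches and embed each one."""
    patches = image.reshape(14, 16, 14, 16, 3).transpose(0, 2, 1, 3, 4)
    patches = patches.reshape(14 * 14, D_PATCH)  # (196, 768)
    return patches @ W_backbone                  # (196, 64), never updated

# Only this lightweight per-patch head is trained, e.g. for a dense
# prediction task like segmentation (one class logit vector per patch).
W_head = rng.standard_normal((D_FEAT, N_CLASSES)) * 0.01

feats = extract_features(rng.standard_normal((224, 224, 3)))
logits = feats @ W_head
print(logits.shape)  # (196, 10): per-patch class logits
```

Because the backbone is frozen, features can be computed once and cached, and many task heads can share the same backbone, which is what makes one model competitive across several dense tasks.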
@BaldassarreFe
Federico Baldassarre
3 months
Say hello to DINOv3 🦖🦖🦖 A major release that raises the bar of self-supervised vision foundation models. With stunning high-resolution dense features, it’s a game-changer for vision tasks! We scaled model size and training data, but here's what makes it special 👇
41
273
2K
@p_bojanowski
Piotr Bojanowski
4 months
Why does Meta open-source its models? I talked about it with @kawecki_maciej looking at Dino, our computer vision model with applications in forest mapping, medical research, agriculture and more. Open-source boosts AI access, transparency, and safety. https://t.co/Kl7VObH1Tj
0
11
65
@ThKouz
Thodoris Kouzelis
7 months
1/n Introducing ReDi (Representation Diffusion): a new generative approach that leverages a diffusion model to jointly capture – Low-level image details (via VAE latents) – High-level semantic features (via DINOv2)🧵
3
66
399
@AIatMeta
AI at Meta
8 months
AI is helping researchers identify therapies for cancer patients. @orakldotbio trained our DINOv2 model on organoid images to more accurately predict patient responses in clinical settings. This approach outperformed specialized models and is helping accelerate their research.
40
164
670
@TimDarcet
TimDarcet
9 months
Want strong SSL, but not the complexity of DINOv2? CAPI: Cluster and Predict Latent Patches for Improved Masked Image Modeling.
22
110
608
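The recipe named in the CAPI title above can be illustrated with a toy: cluster a teacher's patch features, then score a student on predicting the cluster of each masked patch. The teacher features, centroids, and student logits below are all random stand-ins (this is not the paper's actual training loop), but the target construction and masked cross-entropy follow the idea the title describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "cluster and predict latent patches": targets for masked patches are
# cluster assignments of teacher patch features; the student predicts them.
N_PATCHES, D, K = 196, 32, 16
teacher_feats = rng.standard_normal((N_PATCHES, D))
centroids = rng.standard_normal((K, D))

# Cluster assignment: nearest centroid per patch (one k-means "E-step").
dists = np.linalg.norm(teacher_feats[:, None, :] - centroids[None, :, :], axis=-1)
targets = dists.argmin(axis=1)  # (196,) integer cluster ids

mask = rng.random(N_PATCHES) < 0.65               # mask ~65% of patches
student_logits = rng.standard_normal((N_PATCHES, K))  # stand-in predictions

# Cross-entropy on masked patches only, as in masked image modeling.
logp = student_logits - np.log(np.exp(student_logits).sum(-1, keepdims=True))
loss = -logp[mask, targets[mask]].mean()
print(float(loss) > 0)
```

Discrete cluster targets make the objective a plain classification loss, which is part of why such a recipe can be simpler than a full DINOv2-style pipeline.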
@p_bojanowski
Piotr Bojanowski
9 months
🔥 The DINO team is looking for a PostDoc! 🔥 If you are about to graduate and want to be part of what’s next for SSL, don’t hesitate to reach out! Link to job offer:
1
27
154
@p_bojanowski
Piotr Bojanowski
10 months
Another amazing use of DINOv2 out there. It is both humbling and heartwarming to see such hard problems being tackled, in part, by leveraging your work!
@AIatMeta
AI at Meta
10 months
Using the open source DINOv2 model, a medtech company founded by two pediatric cardiologists developed tools to help clinicians identify or rule out signs of congenital heart defects in children faster and more accurately.
0
3
52
@p_bojanowski
Piotr Bojanowski
10 months
Very cool work from @GaoyueZhou from NYU showing that you can get zero-shot goal reaching behavior by training a world model on top of frozen DINOv2!
@GaoyueZhou
Gaoyue Zhou
10 months
Can we extend the power of world models beyond just online model-based learning? Absolutely! We believe the true potential of world models lies in enabling agents to reason at test time. Introducing DINO-WM: World Models on Pre-trained Visual Features for Zero-shot Planning.
1
3
11
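The DINO-WM idea quoted above, which is planning at test time with a world model built on frozen visual features, can be sketched with a toy latent dynamics model. Everything here is a stand-in: the latent space plays the role of frozen DINOv2 features, and the dynamics are a known linear system rather than a learned network, so the zero-shot goal-reaching loop can run end to end.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # dimension of the frozen encoder's latent space (stand-in for DINOv2 features)

# Stand-in world model: predicts the next latent from (current latent, action).
# In DINO-WM this would be a network trained on top of frozen features.
A = np.eye(D) * 0.9
B = rng.standard_normal((D, 2)) * 0.5

def world_model(z, a):
    return A @ z + B @ a  # predicted next latent

def plan_one_step(z, z_goal, candidates):
    """Zero-shot planning: pick the action whose predicted latent lands closest to the goal."""
    costs = [np.linalg.norm(world_model(z, a) - z_goal) for a in candidates]
    return candidates[int(np.argmin(costs))]

z0 = rng.standard_normal(D)      # current observation, encoded by the frozen encoder
z_goal = rng.standard_normal(D)  # goal image, encoded the same way
actions = [rng.standard_normal(2) for _ in range(32)]
best = plan_one_step(z0, z_goal, actions)
```

The key design choice mirrored here is that goals are specified as encoded images, so reaching a new goal requires no retraining, only search against the learned dynamics in latent space.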
@matt_is_nice
Matt Schwartz
10 months
🚀 The Future of AI in Healthcare... Language models and generative AI have attracted all the buzz, but I think 2025 is the year we see transformational breakthroughs around computer vision in healthcare. At Virgo, we've been exploring how DINOv2 has the potential to deliver
4
4
11
@AIatMeta
AI at Meta
10 months
The team at Inarix is using open source AI models from Meta FAIR to turn smartphones into pocket laboratories for farmers. Building a foundational model on top of DINOv2, the platform enables farmers to assess crop value in real time ➡️ https://t.co/oniHKuooZJ
24
77
333
@TmlrOrg
Transactions on Machine Learning Research
11 months
Outstanding Finalist 2: “DINOv2: Learning Robust Visual Features without Supervision," by Maxime Oquab, Timothée Darcet (@TimDarcet), Théo Moutakanni (@TheoMoutakanni) et al. 5/n https://t.co/XBQJk03R8R
@AIatMeta
AI at Meta
3 years
Announced by Mark Zuckerberg this morning — today we're releasing DINOv2, the first method for training computer vision models that uses self-supervised learning to achieve results matching or exceeding industry standards. More on this new work ➡️ https://t.co/h5exzLJsFt
2
7
25