Andrei Bursuc

@abursuc

Followers: 7,075 · Following: 1,102 · Media: 644 · Statuses: 8,484
Pinned Tweet
@abursuc
Andrei Bursuc
7 months
Slides and most videos from @ICCVConference 🎭The Many Faces of Reliability of Deep Learning for Real-World Deployment🌍 tutorial are now up! Feel free to reach out w/ any questions and suggestions #ICCV2023 -📽️ videos: - 💻slides:
Tweet media one
@abursuc
Andrei Bursuc
8 months
Join us this Tuesday for our @ICCVConference tutorial on "The Many Faces of Reliability of Deep Learning for Real-World Deployment" prepared by @SharonYixuanLi , @puneetdokania , @tuan_hung_vu , Dengxin Dai, Patrick Pérez and @abursuc #ICCV2023
Tweet media one
3
16
61
0
29
120
@abursuc
Andrei Bursuc
5 years
A visual exploration of Gaussian Processes: beautiful interactive plots and a brief tutorial to make GPs more approachable
3
122
434
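For reference, sampling functions from a GP prior with an RBF kernel takes only a few lines. A minimal numpy sketch (not code from the linked tutorial):

```python
import numpy as np

def rbf_kernel(xa, xb, lengthscale=1.0):
    # squared-exponential covariance between two sets of 1-D inputs
    return np.exp(-0.5 * (xa[:, None] - xb[None, :]) ** 2 / lengthscale ** 2)

x = np.linspace(-5, 5, 200)
K = rbf_kernel(x, x) + 1e-8 * np.eye(len(x))   # small jitter for numerical stability
prior_samples = np.random.multivariate_normal(np.zeros(len(x)), K, size=3)  # 3 functions drawn from the GP prior
```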
@abursuc
Andrei Bursuc
2 years
Decoder Denoising Pretraining for Semantic Segmentation: A fun and simple idea for pre-training the decoder for semantic segmentation 1/
Tweet media one
4
56
317
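A minimal sketch of the idea as described in the tweet: corrupt the input with noise and train the decoder, on top of a pretrained (typically frozen) encoder, to predict the added noise. Names are hypothetical and the paper's exact objective may differ.

```python
import torch
import torch.nn.functional as F

def decoder_denoising_loss(encoder, decoder, images, sigma=0.2):
    # Add Gaussian noise to the input; the decoder learns to predict that noise
    # while the (pretrained) encoder provides the features.
    noise = torch.randn_like(images) * sigma
    features = encoder(images + noise)
    predicted_noise = decoder(features)
    return F.mse_loss(predicted_noise, noise)
```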
@abursuc
Andrei Bursuc
8 months
An excellent poor man's visual prompt engineering strategy for CLIP: drawing a red circle around an object in an image auto-magically focuses its attention on that region, leading to a specific embedding #ICCV2023
Tweet media one
2
39
279
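A rough sketch of the trick, assuming the open_clip package; the model choice, box coordinates and image path are illustrative, not from the paper:

```python
import torch
import open_clip
from PIL import Image, ImageDraw

model, _, preprocess = open_clip.create_model_and_transforms("ViT-B-32", pretrained="openai")

@torch.no_grad()
def encode_with_red_circle(image: Image.Image, box, width=4):
    # box = (x0, y0, x1, y1) around the object of interest
    image = image.copy()
    ImageDraw.Draw(image).ellipse(box, outline=(255, 0, 0), width=width)
    feat = model.encode_image(preprocess(image).unsqueeze(0))
    return feat / feat.norm(dim=-1, keepdim=True)

# "street.jpg" is a placeholder path
emb = encode_with_red_circle(Image.open("street.jpg").convert("RGB"), box=(120, 80, 260, 220))
```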
@abursuc
Andrei Bursuc
2 years
Our work on learning to perform semantic segmentation without human supervision by driving around cities made it to #eccv2022. More info coming soon.
@_akhaliq
AK
2 years
Drive&Segment: Unsupervised Semantic Segmentation of Urban Scenes via Cross-modal Distillation abs: project page: @Gradio Demo:
1
41
201
2
40
260
@abursuc
Andrei Bursuc
3 years
ICCV: International CVPR Corrected Versions #cvpr2021 #iccv2021
2
14
255
@abursuc
Andrei Bursuc
4 years
An intriguing #iclr2020 paper on self-supervision studied over a single image: starting from one image, the authors generate a 1M-image dataset of crops and augmentations from it 1/
4
44
252
@abursuc
Andrei Bursuc
6 years
Slides for most talks at the Good Citizen of CVPR workshop are up. Lots of useful advice and experience on writing and reviewing papers, doing good research and evaluation, giving talks, and organising your time #CVPR2018
0
103
236
@abursuc
Andrei Bursuc
1 year
The brilliant Little Book of Deep Learning by @francoisfleuret is here! 🤩 Hoping now for an autograph session at a CV/ML venue soon.
Tweet media one
4
20
238
@abursuc
Andrei Bursuc
3 years
Leave Those Nets Alone: Advances in Self-Supervised Learning. Join us this Sunday for our #cvpr2021 tutorial to discover what's cooking these days in the different flavors of self-supervised learning. Recordings and slides will be online right after.
Tweet media one
3
42
203
@abursuc
Andrei Bursuc
4 years
Introducing new #cvpr2020 work with S. Gidaris and team on a new self-supervised task: Learning Representations by Predicting Bags of Visual Words 1/
Tweet media one
4
39
193
@abursuc
Andrei Bursuc
2 years
New and not-so-new computer vision geeks on the block rejoice: the 2nd edition of Rick Szeliski's famous book on Computer Vision Algorithms and Applications is up and free to download as PDF
2
40
179
@abursuc
Andrei Bursuc
1 year
Want to improve zero-shot performance of your CLIP model? Easy: just ask GPT-3 how it would recognize those objects and produce word embeddings from its descriptions. Bonus: you can get some explainability by analyzing decisions from embeddings #ICLR2023
Tweet media one
Tweet media two
Tweet media three
Tweet media four
3
28
186
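A hedged sketch of the recipe, assuming open_clip and a couple of made-up per-class descriptors standing in for GPT-3's output:

```python
import torch
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms("ViT-B-32", pretrained="openai")
tokenizer = open_clip.get_tokenizer("ViT-B-32")

# Hypothetical LLM-generated descriptors, a few per class (in the paper these come from GPT-3).
descriptors = {
    "lemur": ["a lemur, which has a long striped tail", "a lemur, which has large round eyes"],
    "tiger": ["a tiger, which has orange fur with black stripes", "a tiger, which has long whiskers"],
}

@torch.no_grad()
def classify(image):
    img = model.encode_image(preprocess(image).unsqueeze(0))
    img = img / img.norm(dim=-1, keepdim=True)
    scores = {}
    for cls, descs in descriptors.items():
        txt = model.encode_text(tokenizer(descs))
        txt = txt / txt.norm(dim=-1, keepdim=True)
        # per-descriptor similarities also tell you *which* description drove the decision
        scores[cls] = (img @ txt.t()).mean().item()
    return max(scores, key=scores.get)
```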
@abursuc
Andrei Bursuc
11 months
So far, @georgiagkioxari 's talk on "Apples and Oranges: research in academia vs. industry" is the funniest and most thought-provoking talk at #CVPR2023
Tweet media one
Tweet media two
Tweet media three
Tweet media four
2
24
159
@abursuc
Andrei Bursuc
3 years
New work spearheaded by S. Gidaris on self-supervised learning: OBoW - Online Bag-of-Visual-Words Generation for Unsupervised Representation Learning Paper: Code: 🧵👇 1/N
Tweet media one
6
31
156
@abursuc
Andrei Bursuc
4 years
I'm a fan of papers proposing new "baselines". The methods are usually simpler and reach decent performance (few points below the usually more complex SoTA methods), while enabling a different view over the problem at hand.
5
14
153
@abursuc
Andrei Bursuc
11 months
CLIP-based research is moving so fast that even authors cannot keep pace with arXiv 🙃 There are currently 3 MaskCLIP papers out there (all published in major conferences): - - -
2
32
155
@abursuc
Andrei Bursuc
11 months
For those tuning from home, @YejinChoinka 's excellent keynote at #CVPR2023 unravels and discusses many of the findings from this work on the limits of Transformers to Compositionality
Tweet media one
2
39
144
@abursuc
Andrei Bursuc
1 year
DINO learned excellent visual representations with impressive generalization. Ever since, many researchers have tried to outmatch (or at least match) that via various self-supervised and/or masked image modelling strategies without succeeding
1
14
144
@abursuc
Andrei Bursuc
4 years
TRADI: Tracking deep neural network weight distributions -- work with G. Franchi. We’re proposing a cheap method for getting ensembles of networks from a single network training 1/
Tweet media one
3
32
132
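A simplified sketch of the idea: track a running mean/variance per weight during training and sample pseudo-ensemble members from the fitted Gaussians. TRADI itself uses a Kalman-filter formulation; this EMA variant is only illustrative.

```python
import copy
import torch

class WeightTracker:
    def __init__(self, model, momentum=0.99):
        self.m = momentum
        self.mean = {n: p.detach().clone() for n, p in model.named_parameters()}
        self.var = {n: torch.zeros_like(p) for n, p in model.named_parameters()}

    @torch.no_grad()
    def update(self, model):
        # exponential moving estimates of each parameter's mean and variance
        for n, p in model.named_parameters():
            delta = p.detach() - self.mean[n]
            self.mean[n] += (1 - self.m) * delta
            self.var[n] = self.m * self.var[n] + (1 - self.m) * delta ** 2

    @torch.no_grad()
    def sample_member(self, model):
        # draw one pseudo-ensemble member from the per-weight Gaussians
        member = copy.deepcopy(model)
        for n, p in member.named_parameters():
            p.copy_(self.mean[n] + self.var[n].sqrt() * torch.randn_like(p))
        return member
```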
@abursuc
Andrei Bursuc
2 years
#ICLR2023 submissions are now visible or it's that time of the year when you realize that most of your #CVPR2023 ideas are already scooped 🙃
2
25
137
@abursuc
Andrei Bursuc
4 years
Got a submission rejected from ICML, then NeurIPS. We've improved it further and sent it to ICLR. I'm happy that the paper is even better and clearer now, but, boy, this game can be annoying. Did I miss any news lately? :)
1
0
129
@abursuc
Andrei Bursuc
1 year
There were so many cool works on multi-camera bird's-eye-view perception recently. If you want to catch up or are just starting in the area, this figure by @AdamWHarley gives an effective summary of the main approaches. Capture from the awesome Simple-BEV work:
Tweet media one
0
27
118
@abursuc
Andrei Bursuc
3 years
No accepted papers to announce, but I'm happy to see that as a reviewer I contributed to the improvement and acceptance of a paper #NeurIPS2021
2
3
121
@abursuc
Andrei Bursuc
7 years
Nice writeup by Chelsea Finn on recent meta-learning and few-shot learning techniques
2
39
119
@abursuc
Andrei Bursuc
6 years
Detectron is FAIR's new platform for state-of-the-art object detection algorithms in caffe2. It includes Mask R-CNN and a rich model zoo. All under Apache2 license.
0
56
113
@abursuc
Andrei Bursuc
6 years
Nice trick for initializing the last layer of a CNN for fine-tuning or for adding an extra class after training: add an L2-norm layer and use the CNN output from the new class sample(s) as weights for the new neuron. Results are good from the first steps.
Tweet media one
1
45
112
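A minimal PyTorch sketch of this imprinting trick, assuming a feature backbone and a bias-free cosine-style classifier with L2-normalized weights (all names hypothetical):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def add_class_by_imprinting(classifier: torch.nn.Linear, backbone, new_class_images):
    # Embed the new-class sample(s), average and L2-normalize them, and use the
    # result as the weight row of a freshly added output neuron.
    feats = F.normalize(backbone(new_class_images), dim=-1)        # (N, D)
    new_weight = F.normalize(feats.mean(dim=0), dim=-1)            # (D,)

    expanded = torch.nn.Linear(classifier.in_features, classifier.out_features + 1, bias=False)
    expanded.weight.data[:-1] = classifier.weight.data             # keep old class weights
    expanded.weight.data[-1] = new_weight                          # imprint the new class
    return expanded
```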
@abursuc
Andrei Bursuc
11 months
The outstanding #CVPR2023 keynote talk by @YejinChoinka on "2050: An AI odyssey: dark matter of intelligence" is up
@abursuc
Andrei Bursuc
11 months
For those tuning from home, @YejinChoinka 's excellent keynote at #CVPR2023 unravels and discusses many of the findings from this work on the limits of Transformers to Compositionality
Tweet media one
2
39
144
1
26
113
@abursuc
Andrei Bursuc
5 years
New #ICCV2019 work led by S. Gidaris on boosting few-shot learning methods with self-supervision. Few-shot learning and self-supervised learning address different facets of the same problem: how to train a model with little or no labeled data 1/
Tweet media one
2
26
107
@abursuc
Andrei Bursuc
6 years
tempoGAN: nice use of adversarial training for super-resolution of temporally consistent fluid flows using a Volumetric-GAN
Tweet media one
Tweet media two
1
40
99
@abursuc
Andrei Bursuc
1 year
FlexiViT: One Model for All Patch Sizes by @giffmana et al. may have passed unnoticed in December. Randomizing the patch size at training makes a good ViT across a range of patch sizes + you can tune the patch size at runtime according to hardware and/or KPIs
Tweet media one
Tweet media two
Tweet media three
Tweet media four
3
21
110
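A toy sketch of the runtime patch-size tuning, using plain bilinear resizing of the patch-embedding kernel; the paper uses a pseudo-inverse resize, so this is only illustrative:

```python
import torch
import torch.nn.functional as F

def resize_patch_embed(weight: torch.Tensor, new_patch: int) -> torch.Tensor:
    # weight: (D, C, P, P) patch-embedding conv kernel -> (D, C, new_patch, new_patch)
    return F.interpolate(weight, size=(new_patch, new_patch), mode="bilinear", align_corners=False)

# At train time one would sample a patch size per step, resize the kernel (and the
# positional embeddings) accordingly, and patchify the image with that size.
```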
@abursuc
Andrei Bursuc
5 years
New dataset INTERACTION: trajectories of traffic participants in interactive scenarios (e.g. roundabouts, intersections, lane merging on highways) from the US, Germany and China. It's useful for intention and behavior prediction, and for studying interactions between drivers.
Tweet media one
2
43
102
@abursuc
Andrei Bursuc
1 year
Slides and videos from our #eccv2022 tutorial "Self-Supervision on Wheels: Advances in Self-Supervised Learning from Autonomous Driving Data" are now available:
@abursuc
Andrei Bursuc
2 years
Join us online this Monday for our #eccv2022 tutorial "Self-Supervision on Wheels: Advances in Self-Supervised Learning from Autonomous Driving Data"
Tweet media one
3
17
82
2
28
105
@abursuc
Andrei Bursuc
3 years
"to arXiv or not (during review)" dilemma for less known labs: if you do it, you risk reviewers seeing it & hesitating due to lower fame, if you don't, you risk concurrent work from bigger lab on related idea or new SotA getting public & reducing your chances at resubmission 🤷‍♂️
4
8
104
@abursuc
Andrei Bursuc
3 years
From this view, @ykilcher is currently the most selective venue in CV+ML with 1-2 selected papers per week out of a pool of many 100s arXiv papers every week. And he was saying that ICML acceptance rate was low 🙃
@karpathy
Andrej Karpathy
3 years
There’s a few other prestigious venues like @ykilcher YouTube, paperswithcode, @ak92501 et al tweet streams etc :) but yes. I rather like the emerging hybrid model where the new cheap low latency async distributed consensus layer coexists with the legacy “Layer 1 chain” (pubs)
16
34
471
3
3
100
@abursuc
Andrei Bursuc
4 years
For a week now I haven't been leaving the house without my DIY mask. I had initially bought into the "masks are not helpful" argument, which led to a lack of action on my side. Thanks to @jeremyphoward and @math_rachel for convincing me otherwise #masks4all
Tweet media one
2
14
98
@abursuc
Andrei Bursuc
10 months
The Grand Slam of the relentless computer vision researcher: submit a paper to CVPR -> ICCV/ECCV -> WACV/3DV hoping it will eventually get in that year.
3
6
103
@abursuc
Andrei Bursuc
3 years
After a period of relatively slow progress on core ResNet backbones, we're now seeing a mini Cambrian explosion of architectures: ViT, NFNet, RepVGG, LambdaNet, CaiT, RedNet, BotNet, HaloNet, MLP-Mixer, etc. Luckily the tireless @wightmanr keeps track of them all 🙏
@CSProfKGD
Kosta Derpanis
3 years
Repeat after me, another day, another MLP architecture
Tweet media one
Tweet media two
10
24
210
3
16
100
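Presumably this refers to the timm library; a tiny usage sketch, assuming timm is installed:

```python
import timm

# list the MLP-Mixer style architectures tracked by timm, then instantiate a ViT with pretrained weights
print(timm.list_models("*mixer*"))
model = timm.create_model("vit_base_patch16_224", pretrained=True)
```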
@abursuc
Andrei Bursuc
6 years
Kudos to #cvpr2019 Program Chairs for preparing a super-handy reviewer tutorial, including a summary of the decision process (which is quite opaque to many reviewers), some annotated good and bad reviews, and a few tips
0
43
97
@abursuc
Andrei Bursuc
5 years
Fun and insightful paper and post from Uber studying the Lottery Ticket Hypothesis. The signs of the weights count the most for the init after pruning, and one can find a masking of the weights with good performance using random weights - the Supermasks.
0
23
98
@abursuc
Andrei Bursuc
7 months
An excellent survey on image-based object localization w/o human supervision. It's my go-to to catch up on the many new works in the area, here clearly organized and analyzed. Well crafted, it can also serve as a template for future surveys. Nice work by @oriane_simeoni & team👇
Tweet media one
Tweet media two
@oriane_simeoni
Oriane Siméoni
7 months
📢[Survey 📚] Object localization in images with zero manual annotation?🤩 ➡️ We propose a survey discussing recent works exploiting ❄️self-supervised ViTs (incl. ICCV’23 & NeurIPS’23 works💫) w/ @EloiZablocki @SpyrosGidaris @gillespuy & P. Pérez. 📄
2
17
91
1
17
96
@abursuc
Andrei Bursuc
7 years
How to Train a GAN - #iccv2017 version by @soumithchintala et al.
0
32
92
@abursuc
Andrei Bursuc
4 years
In the past years we've seen some successful transfers of ideas from NLP to CV: Bag-of-Visual Words, skip-connections in LSTMs to ResNets, ViT from Transformers. Do you have some good examples of idea transfer from CV to NLP?
10
9
93
@abursuc
Andrei Bursuc
3 years
0/ Discover LOST, our unsupervised object discovery method based on self-supervised transformers. While simple, quick to install and very efficient, LOST improves SoTA by several points. #bmvc2021 Paper: Code:
3
21
91
@abursuc
Andrei Bursuc
6 years
New release of 's advanced DL course. Content is great w/ incredible amount of very recent stuff. I find myself recommending this course all the time to bootstrap people into DL and good coding practices for it. Congrats @math_rachel @jeremyphoward
@jeremyphoward
Jeremy Howard
6 years
Launching Cutting Edge Deep Learning for Coders: 2018 edition I know a lot of folks have been waiting for this - hope it meets your expectations!
24
328
924
1
22
88
@abursuc
Andrei Bursuc
1 year
How Much More Data Do I Need? This is a nice #cvpr2022 study and solution: do multiple acquisition and training rounds to solidify the estimates (regression, statistical laws) by sweeping over multiple target KPIs. Lots of results from several datasets
Tweet media one
Tweet media two
1
23
91
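A hedged sketch of the general recipe (not the paper's estimator): fit a saturating power law to a few (dataset size, score) points from the acquisition rounds and invert it for a target KPI. Numbers are made up.

```python
import numpy as np
from scipy.optimize import curve_fit

# Observed (dataset size, validation score) pairs from a few training rounds -- made-up numbers.
sizes = np.array([1e3, 2e3, 4e3, 8e3, 16e3])
scores = np.array([0.52, 0.61, 0.68, 0.73, 0.77])

def power_law(n, a, b, c):
    return a - b * n ** (-c)          # saturating learning curve

(a, b, c), _ = curve_fit(power_law, sizes, scores, p0=(0.85, 2.6, 0.3), maxfev=10000)

target = 0.85                          # desired KPI
needed = (b / (a - target)) ** (1 / c) if a > target else float("inf")
print(f"estimated images needed for {target:.0%}: {needed:.0f}")
```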
@abursuc
Andrei Bursuc
6 years
In spite of the drama around the #NIPS2018 sell-out, the organisers did well to release only 2.5k spots and reserve the others (up to 6-8k) for authors of accepted papers, top reviewers and workshop papers. There was a hiccup though with the delayed announcement of accepted papers
4
28
80
@abursuc
Andrei Bursuc
6 years
Using CNNs for explicit memorization of training data, such that one can infer if an image was used for training, find which dataset was used for training, and tell whether a validation image leaked during training. Great work by @alexsablay @hjegou et al.
0
25
86
@abursuc
Andrei Bursuc
6 months
CVPR deadline week Stress level: 8/10 Home-alone father of 3 minions handling evening and morning routine, meals and taking kids to/back from school on time in different parts of town and then catch train to office Stress level: ... Ahahahahaha! 🥲
5
1
82
@abursuc
Andrei Bursuc
2 years
Join us online this Monday for our #eccv2022 tutorial "Self-Supervision on Wheels: Advances in Self-Supervised Learning from Autonomous Driving Data"
Tweet media one
3
17
82
@abursuc
Andrei Bursuc
7 years
The video tutorial on Capsule Networks from @aureliengeron is so good that even Hinton praises it
0
35
81
@abursuc
Andrei Bursuc
2 years
Ithaca365: a new dataset for AD: 40 recordings (~7k frames) of a 15 km route under varying conditions: weather, time of day, traffic #CVPR2022
Tweet media one
2
15
78
@abursuc
Andrei Bursuc
8 years
New dataset of high quality 3D scans of ~10k real objects
1
37
70
@abursuc
Andrei Bursuc
11 months
What an incredible conference venue in Vancouver for #CVPR2023
Tweet media one
Tweet media two
Tweet media three
Tweet media four
1
6
79
@abursuc
Andrei Bursuc
2 years
An open letter from Russian scientists and science journalists against the war with Ukraine (google translate capture 👇): "The responsibility for unleashing a new war in Europe lies entirely with Russia. There is no rational justification for this war."
Tweet media one
0
12
76
@abursuc
Andrei Bursuc
3 years
Happy to announce relatively recent work with @GianniFranchi10 on efficient BNNs with fewer assumptions on weights, amenable to complex CV architectures (DeepLabV3+ w/ ResNet50) and tasks (semantic segmentation) Paper: Code: 🧵1/
Tweet media one
2
19
79
@abursuc
Andrei Bursuc
7 months
Unsupervised 3D perception (object detection) w/ 2D vision-language distillation #ICCV2023 tl;dr: generate amodal 3D boxes and tracklets (for static and moving objects) + distill VLM features from images to point clouds. Works well for closed & open set
Tweet media one
Tweet media two
Tweet media three
0
16
78
@abursuc
Andrei Bursuc
10 months
The release of Llama 2 from @MetaAI, with open weights and free for both research and commercial use, is definitely a seismic moment with ripples in academia, commercial applications, and startups for LLMs or ML in general. I was not expecting such a release this soon.
@_akhaliq
AK
10 months
Meta releases Llama 2: Open Foundation and Fine-Tuned Chat Models paper: blog: develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion
Tweet media one
35
575
2K
2
13
78
@abursuc
Andrei Bursuc
4 years
@fchollet For that time-span I think trees are the most common project. Tech still can't accelerate it much, and you know from the beginning that it's later generations that will benefit from it.
Tweet media one
Tweet media two
0
3
66
@abursuc
Andrei Bursuc
3 years
An inspiring talk for all advisors from Kilian Weinberger at the @MLRetrospective workshop #neurips2020. Using 3 of his recent papers as examples, he argues for the importance of focusing on the insight gained and not just settling for beating SoTA enough to get you a paper 1/
Tweet media one
Tweet media two
1
17
72
@abursuc
Andrei Bursuc
3 years
I've just received a copy of @TacoCohen 's super thesis and I'm looking forward to diving in. Such manuscripts are really handy for kickstarting you in a field and I'm sure I'll recommend it to students and collaborators in the future. Kudos!
Tweet media one
1
4
75
@abursuc
Andrei Bursuc
7 months
Battle of Backbones (BoB): Besides the awesome name, this paper is highly insightful, comparing lots and lots of pretrained models across a variety of computer vision downstream tasks (my favorite way to analyze and understand models, in addition to the many metrics).
Tweet media one
@micahgoldblum
Micah Goldblum
7 months
🚨Excited to announce a large-scale comparison of pretrained vision backbones including SSL, vision-language models, and CNNs vs ViTs across diverse downstream tasks ranging from classification to detection to OOD generalization and more! NeurIPS 2023🚨🧵
6
93
415
1
9
78
@abursuc
Andrei Bursuc
5 years
MAML(s) for the masses: a new pytorch library for implementing existing and developing new meta-learning algos. The source paper features a pedagogical description of inner-loop meta-learning algos
@egrefen
Edward Grefenstette
5 years
Happy to announce our paper on Generalized Inner Loop Meta Learning, aka Gimli (), with @brandondamos , @denisyarats , Phu Mon Htut, Artem Molchanov, Franziska Meier, @douwekiela , @kchonyc , and @soumithchintala . THREAD [1/6]
Tweet media one
5
84
282
0
14
75
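The library itself isn't named in the tweet, so here is a generic plain-PyTorch sketch of a differentiable inner loop (requires torch.func, PyTorch ≥ 2.0); function and argument names are illustrative:

```python
import torch
from torch.func import functional_call

def inner_loop_adapt(model, loss_fn, support_x, support_y, lr=0.01, steps=1):
    # Take a few gradient steps on the support set and return adapted parameters as a dict.
    # create_graph=True keeps the update differentiable so an outer loop can backprop through it.
    params = dict(model.named_parameters())
    for _ in range(steps):
        preds = functional_call(model, params, (support_x,))
        grads = torch.autograd.grad(loss_fn(preds, support_y), tuple(params.values()), create_graph=True)
        params = {name: p - lr * g for (name, p), g in zip(params.items(), grads)}
    return params
```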
@abursuc
Andrei Bursuc
2 years
@deliprao 2nd author of BoWNet here (1st author is on paternity leave). I reply right away as it seems this tweet is getting more impact than the original paper itself 🙃 1/
1
3
76
@abursuc
Andrei Bursuc
1 year
OpenOOD: Benchmarking Generalized Out-of-Distribution Detection by @JingkangY @SharonYixuanLi @DanHendrycks et al. #NeurIPS2022 What a crazy effort to implement and compare 35 OOD methods across 9 benchmarks! paper: code:
Tweet media one
Tweet media two
1
14
73
@abursuc
Andrei Bursuc
4 years
Student registration fees for ML conferences are a good bargain this year: ICML $25, COLT $30, AISTATS €40. While virtual conferences are less exciting than in-person ones, they can put you on track to watch talks, read papers and talk with authors (uncrowded poster sessions)
2
13
71
@abursuc
Andrei Bursuc
1 year
We're organizing a workshop on Uncertainty Quantification for Computer Vision @ICCVConference with a fantastic line-up of speakers, talks from contributed papers and a competition on uncertainty quantification for autonomous driving #iccv2023 Please spread the news.
Tweet media one
@GianniFranchi10
Gianni Franchi
1 year
We are happy to announce the #ICCV2023 #UNCV2023 Workshop: Workshop on Uncertainty Quantification for Computer Vision (). We welcome full papers and extended abstracts. Join also our ICCV competition for Uncertainty quantification
0
6
21
0
18
72
@abursuc
Andrei Bursuc
6 years
#pytorch code for "Dynamic few-shot visual learning without forgetting" by Gidaris and Komodakis at #cvpr2018 : The authors share even configs and learning rate schedules for experiments in paper.
0
37
67
@abursuc
Andrei Bursuc
4 years
This new augmented visualization system for fencing games is mesmerizing. Great work by @kcimc @rhizomatiks and collaborators
3
13
65
@abursuc
Andrei Bursuc
8 years
In other news, ~700 / 2500 submissions at #NIPS2016 were on Deep Learning or Neural Networks
Tweet media one
Tweet media two
2
42
59
@abursuc
Andrei Bursuc
7 months
Happy to see that Paris is becoming a hotspot for AI folks from academia, big tech or startups with a particular vision for openness and collaboration. Kudos to @ylecun for his relentless quest over the years to advertise Paris as an excellent place for that.
@ylecun
Yann LeCun
7 months
Open source AI is the way to go! Proud to see @huggingface , @scaleway , & @meta joining to launch an AI startup accelerator at Station F. This will help concretize our common vision of an open and collaborative AI ecosystem. More from TechCrunch:
36
192
1K
0
6
68
@abursuc
Andrei Bursuc
7 years
Old-school GANs from early 2000s using humans as discriminator? #iclr2017
Tweet media one
2
21
62
@abursuc
Andrei Bursuc
5 years
Great blog-post by @lilianweng on recent progress in meta-learning (aka learning to learn fast, low-shot learning) covering the terminology and main families of approaches
0
23
66
@abursuc
Andrei Bursuc
5 months
Thrilled w/ CLIP-DINOiser by @mkwysoczanska et al. to compute dense CLIP features. CLIP's limitation to global-only features has annoyed many vision folks. MaskCLIP (simple & fast) can get dense features, but noisy ones. We extend MaskCLIP w/ a few nudges from DINO localization priors 👇
@mkwysoczanska
Monika Wysoczańska
5 months
🚨Happy to release on arXiv CLIP-DINOiser: Teaching CLIP a few DINO tricks🦖🎓 We obtain dense CLIP features in 1 forward pass w/o feature alteration and w/ almost no computational extra cost to facilitate open-vocabulary semantic segmentation 🧶 🖥️: [1/N]
4
37
217
1
15
66
@abursuc
Andrei Bursuc
4 years
This dataset does not exist: training models from generated images -- new work with V. Besnier and team Recent GANs, e.g. BigGAN, output increasingly high-quality images. What if we trained classifiers over fake images only, an initial aim of GANs? 1/
Tweet media one
1
20
64
@abursuc
Andrei Bursuc
9 years
CelebA: a large-scale dataset of celebrity faces with attributes annotations - 10k celebs, 200k imgs, 40 attr/img http://t.co/IhqX1X3hea
0
39
57
@abursuc
Andrei Bursuc
7 months
So @ajmooch 's NFNets (Normalizer-Free Nets) are back in the spotlight w/ this mythbusting comparison of scaling properties against ViTs. NFNets can scale well to JFT-4B level w/ predictable learning rate behavior. Whether ConvNet or ViT, at scale, there's no BatchNorm.
@_akhaliq
AK
7 months
ConvNets Match Vision Transformers at Scale paper page: Many researchers believe that ConvNets perform well on small or moderately sized datasets, but are not competitive with Vision Transformers when given access to datasets on the web-scale. We
Tweet media one
20
178
881
2
17
63
@abursuc
Andrei Bursuc
3 years
I dare a challenge for #cvpr2021 twitter: for each paper of yours that you advertise on twitter, please share 1-3 interesting papers from other teams. Authors will appreciate it and the community will be stronger. Here are some champions: @ducha_aiki @CSProfKGD @artsiom_s
3
4
60
@abursuc
Andrei Bursuc
6 years
Oh boy! Faster R-CNN and Mask R-CNN code for PyTorch 1.0 from Facebook folks. It's 2x faster than Detectron. Welcome to a new level of fun for future deep learning courses
1
10
59
@abursuc
Andrei Bursuc
1 year
Fast Language-Image Pre-training via Masking (FLIP) It turns out that randomly masking out image patches with a high mask ratio and encoding only the visible patches significantly improves the scalability and accuracy of CLIP (heavily used in so many tasks)
Tweet media one
Tweet media two
2
14
61
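A minimal sketch of the masking step (MAE-style random patch dropping; not the paper's code):

```python
import torch

def keep_random_patches(patch_tokens: torch.Tensor, mask_ratio: float = 0.75):
    # patch_tokens: (B, N, D) patch embeddings; keep a random subset of patches per image.
    # Only the visible tokens are passed through the image encoder.
    B, N, D = patch_tokens.shape
    n_keep = max(1, int(N * (1.0 - mask_ratio)))
    noise = torch.rand(B, N, device=patch_tokens.device)
    keep_idx = noise.argsort(dim=1)[:, :n_keep]               # indices of visible patches
    visible = torch.gather(patch_tokens, 1, keep_idx.unsqueeze(-1).expand(-1, -1, D))
    return visible, keep_idx
```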
@abursuc
Andrei Bursuc
5 years
Two new posts from Vincent Vanhoucke on managing research teams. They're quite insightful for non-manager researchers too, as they cover typical time-wasting practices and advice for guiding research projects
1
20
58
@abursuc
Andrei Bursuc
1 year
Fun paper towards more practical vision-based (multi-camera) perception for autonomous driving. Most benchmarks focus on accuracy, often ignoring latency. Here, the authors evaluate the online performance of SoTA methods, accounting for the impact of inference delay:
Tweet media one
4
6
60
@abursuc
Andrei Bursuc
8 months
Join us this Tuesday for our @ICCVConference tutorial on "The Many Faces of Reliability of Deep Learning for Real-World Deployment" prepared by @SharonYixuanLi , @puneetdokania , @tuan_hung_vu , Dengxin Dai, Patrick Pérez and @abursuc #ICCV2023
Tweet media one
3
16
61
@abursuc
Andrei Bursuc
3 years
View from backstage #cvpr2021
Tweet media one
@abursuc
Andrei Bursuc
3 years
Leave Those Nets Alone: Advances in Self-Supervised Learning. Join us this Sunday for our #cvpr2021 tutorial to discover what's cooking these days in the different flavors of self-supervised learning. Recordings and slides will be online right after.
Tweet media one
3
42
203
2
3
56
@abursuc
Andrei Bursuc
2 years
#eccv2022 had 1488 accepted papers but only 683 were presented during the in-person sessions. The remaining 805 papers are virtual and their videos are available on the @eccvconf online platform. Make sure to take a look at them.
0
5
55
@abursuc
Andrei Bursuc
4 years
Join our #cvpr2020 tutorial "Towards Annotation-Efficient Learning: Few-Shot, Self-Supervised, and Incremental Learning Approaches" with @inthebrownbag , @relja_work , S. Gidaris and me. We will be live on YouTube on June 19th 08:30 PDT / 17:30 CET
1
13
55
@abursuc
Andrei Bursuc
8 years
List of #cvpr2016 papers with code made available online
0
33
49
@abursuc
Andrei Bursuc
4 years
This is among my favorite papers at #eccv2020 so far. A simple, elegant and effective approach. PyTorch code for the loss is up
@Andrew__Brown__
Andrew Brown
4 years
Come join us as #ECCV2020 for our Q&A sessions at 14:00 Tuesday and 00:00 Wednesday, UK Time! We will discuss "Smooth-AP: Smoothing the Path Towards Large-Scale Image Retrieval" with myself, @WeidiXie , @VickyKalogeiton and Andrew Zisserman #VGGatECCV2020 @Oxford_VGG
1
15
64
2
13
56
@abursuc
Andrei Bursuc
8 months
The satisfaction of seeing a paper that you have defended hard to get accepted, now surrounded by lots of interested people at the poster #ICCV2023
0
1
56
@abursuc
Andrei Bursuc
5 years
#ECCV2020 PCs are taking such good care of reviewers this year: limiting number of papers to 5, more outstanding reviewer awards including free registration, reserving conference places for reviewers that met deadlines and gave reviews of acceptable quality.
0
14
53
@abursuc
Andrei Bursuc
3 years
Currently working on a PhD thesis proposal and I find the whole process so rewarding: brainstorming for directions, measuring progress, state and trends in an area, taking time to reflect on interesting avenues to take in the next few years as well as their potential applications
2
1
52
@abursuc
Andrei Bursuc
11 months
A crescendo talk on self-supervised pretraining and leveraging foundation models for visual navigation (ViNG, VIKING, LM-NAV) culminating with ViNT (Visual Navigation Transformer): with diffusion to generate subgoal candidates and prompt tuning for new actions #E2EAD #CVPR2023
Tweet media one
Tweet media two
Tweet media three
Tweet media four
@svlevine
Sergey Levine
11 months
I'll be giving a talk at the #CVPR2023 workshop on End-to-End Autonomous driving at 2:05 pm (in 5 minutes...): Will cover some of our very recent (unreleased!) work on building general-purpose pretrained Transformer backbones for robotic navigation.
3
16
123
0
11
49
@abursuc
Andrei Bursuc
2 years
This might not be obvious from the West, but surely clear in East Europe: Ukrainians are currently fighting for our freedom too, not just theirs.
0
8
47