Simon Vandenhende Profile
Simon Vandenhende

@svandenh1

Followers
119
Following
236
Media
7
Statuses
45

Research engineer @ Meta AI. Previously, KU Leuven.

San Francisco, California, USA
Joined March 2019
@svandenh1
Simon Vandenhende
3 years
RT @popular_ML: The most popular Arxiv link yesterday:
0
6
0
@svandenh1
Simon Vandenhende
3 years
RT @_akhaliq: Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training.
0
31
0
@svandenh1
Simon Vandenhende
3 years
RT @deeptigp: Please consider submitting to our workshop and attending the talks of our amazing list of speakers!
0
1
0
@svandenh1
Simon Vandenhende
3 years
RT @deeptigp: We are looking for more reviewers for the Dataset and Benchmarks track at NeurIPS'22. If you are interested in becoming a rev….
docs.google.com
Please use this form to recommend reviewers for the NeurIPS 2022 Datasets and Benchmarks Track, or to nominate yourself. Ideally, qualified reviewers should have a level of experience equivalent to...
0
1
0
@svandenh1
Simon Vandenhende
4 years
Excited to be hosting the DeepMTL workshop on multi-task learning at @ICCV_2021 tomorrow. We have an excellent group of speakers ready for you: @zamir_ar, I. Kokkinos, @judyfhoffman, @RaquelUrtasun, R. Caruana, and A. Rabinovich.
sites.google.com
0
7
37
@svandenh1
Simon Vandenhende
4 years
That's enough self-promotion for today. Work done together with @WGansbeke, @stam_g, and Luc Van Gool.
0
0
0
@svandenh1
Simon Vandenhende
4 years
[5/5] Training MoCo with a multi-crop strategy improves properties like object localization.
1
0
0
@svandenh1
Simon Vandenhende
4 years
[4/5] Adopting a multi-crop strategy makes it possible to learn spatially structured representations, which can be used directly for semantic segment retrieval and video instance segmentation without finetuning.
1
0
0
@svandenh1
Simon Vandenhende
4 years
[3/5] The learned representations can be further improved by learning additional invariances.
1
0
0
@svandenh1
Simon Vandenhende
4 years
[2/5] MoCo does not suffer from using non-overlapping views on scene-centric datasets. First, non-overlapping views do not occur when using the augmentation strategy from SimCLR. Second, even when lowering the overlap between crops, the performance remains stable.
1
0
0
@svandenh1
Simon Vandenhende
4 years
[1/5] An existing approach like MoCo can handle object-centric versus scene-centric, uniform versus long-tailed, and general versus domain-specific datasets well.
1
0
0
@svandenh1
Simon Vandenhende
4 years
We uploaded a new paper on contrastive self-supervised learning. The paper examines the influence of dataset biases, uncovers several interesting qualities of the learned representations, and shows how to realize further gains.
1
3
9
@svandenh1
Simon Vandenhende
4 years
RT @WGansbeke: We study how biases in the dataset affect contrastive pretraining and explore additional invariances. What if we use non-cur….
0
2
0
@svandenh1
Simon Vandenhende
4 years
RT @SumanSa51694405: Glad to share two of our papers got accepted at CVPR2021: - Learning to Relate Depth and Semantics for Unsupervised Do….
0
1
0
@svandenh1
Simon Vandenhende
4 years
Happy to announce we are organizing the 1st 'DeepMTL' workshop on multi-task learning at @ICCV_2021. Check out our call for papers and invited speakers! Website:
0
0
9
@svandenh1
Simon Vandenhende
5 years
RT @ak92501: Unsupervised Semantic Segmentation by Contrasting Object Mask Proposals.
0
30
0
@svandenh1
Simon Vandenhende
5 years
We have updated our survey on multi-task learning for dense prediction tasks. The paper features an extensive literature review, intuitive comparisons, thorough experiments, etc. Code can be found here 🥳:
github.com
PyTorch implementation of multi-task learning architectures, incl. MTI-Net (ECCV2020). - GitHub - SimonVandenhende/Multi-Task-Learning-PyTorch: PyTorch implementation of multi-task learning archit...
1
6
13
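As a rough illustration of the hard-parameter-sharing baselines that multi-task surveys for dense prediction typically cover, here is a minimal shared-encoder model with one small head per task. The layer sizes, task names, and channel counts are placeholder assumptions, not architectures from the survey.

```python
# Minimal hard-parameter-sharing sketch: one shared backbone,
# one 1x1-conv head per dense prediction task. Illustrative only.
import torch
import torch.nn as nn


class SharedEncoderMTL(nn.Module):
    def __init__(self, task_channels=None):
        super().__init__()
        # Placeholder tasks: 21-class segmentation and 1-channel depth.
        task_channels = task_channels or {"semseg": 21, "depth": 1}
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.heads = nn.ModuleDict(
            {task: nn.Conv2d(32, c, 1) for task, c in task_channels.items()}
        )

    def forward(self, x):
        features = self.backbone(x)  # shared computation
        return {task: head(features) for task, head in self.heads.items()}


x = torch.randn(2, 3, 64, 64)
out = SharedEncoderMTL()(x)
print({task: tuple(v.shape) for task, v in out.items()})
```

Everything up to the heads is shared, so per-task cost is just one extra 1x1 convolution; more elaborate decoder- or branching-based designs refine exactly this split.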
@svandenh1
Simon Vandenhende
5 years
RT @ShuyuLin_n: Branched multi-task networks automatically decide which layers in a network to share among a few related tasks (https:/….
0
3
0
@svandenh1
Simon Vandenhende
5 years
RT @AntonObukhov1: Our paper “T-Basis: a compact representation for neural networks” is live at ICML! We learn compressed neural networks b….
0
6
0