Timothy Hospedales

@tmh31

Followers
816
Following
81
Media
4
Statuses
144

Professor @ University of Edinburgh. Head of Samsung AI Research Centre, Cambridge.

Edinburgh, Scotland
Joined August 2009
@radamar
Radamés Ajna
2 years
Here's the demo "Enhance This"! It's a surreal image magnifier that creates a high-res version by imagining new details, using the SDXL base model. Thanks to @RuoyiDu's DemoFusion research. It takes about a minute to generate a 2048x2048 image. https://t.co/t36Dt4S2ll
15
124
631
@RuoyiDu
Ruoyi Du
2 years
💰DemoFusion: High-resolution generation using only SDXL and an RTX 3090 GPU! ... is now available in 🧨diffusers as a community pipeline! Check it out: https://t.co/Yr4xdjoKT4 Project Page: https://t.co/Ivvwds9jfv #generativeAI #ImageGeneration #diffusionmodels
4
20
88
@tmh31
Timothy Hospedales
2 years
Excited to have been part of DemoFusion, bringing UHD generation to SDXL on your desktop with no training! With @RuoyiDu @yizhe_song @DL_Chang Project: https://t.co/ygOnvdSJtN, paper: https://t.co/OR77mOvHjD #GenerativeAI
4
7
27
@tmh31
Timothy Hospedales
2 years
Interested in practical uncertainty quantification? Our new Bayesian NN library from Samsung AI Cambridge scales to large ViTs! One line of code wraps any architecture without modifying your model definition! https://t.co/xtTVn49UE9 https://t.co/7fraYSoEQC @minyoungkim21
arxiv.org
We release a new Bayesian neural network library for PyTorch for large-scale deep networks. Our library implements mainstream approximate Bayesian inference algorithms: variational inference,...
1
8
51
@EdinburghVision
Edinburgh Vision
2 years
Meta Omnium is a multi-task few-shot learning benchmark to evaluate generalization across CV tasks. Work by @OBohdal @EricTian1102, @yongshuozong, @chavhan_ruchika, @dali_academic, @henrygouk, @tmh31, will be presented on the afternoon of 20/6. Project page: https://t.co/rxpmndy7MF
0
5
20
@tmh31
Timothy Hospedales
2 years
Excited to give our @CVPR tutorial on few-shot learning today! Together with @ztwztq2006. Room East 5, starting 9AM PDT! #CVPR2023
0
1
13
@HegganCalum
Calum Heggan
3 years
🎉 New Work: https://t.co/hyORsHx2NC 🎉 Excited to announce that our new work "MT-SLVR: Multi-Task Self-Supervised Learning for Transformation In(Variant) Representations" is accepted at #INTERSPEECH2023 and is now public. Links and overview below ⬇️. @tmh31 @myprojection @BudgettSam
1
3
3
@RamanDutt4
Raman Dutt
3 years
🚨Parameter Efficient Fine-Tuning has been well researched for NLP, vision, and cross-modal tasks. So why should MedAI be left behind? Presenting the first evaluation on PEFT for medical AI - https://t.co/tYIh7DLNN7 16 PEFT methods, 5 datasets including a text-to-image task 🔥
3
11
40
@yongshuozong
Yongshuo Zong
3 years
Check out our latest survey on Self-Supervised Multimodal Learning!👇 Great to have advice from @oisinmacaodha and @tmh31. Paper:
1
3
22
@henrygouk
Henry Gouk
3 years
Hello, everyone! We will be organizing an online workshop at ICLR 2023 aimed at one question: What do we need for successful domain generalization? The workshop will include invited talks from David Lopez-Paz, @AmosStorkey, @tommasi_tatiana, and @ylqzd2011 1/2
2
9
36
@tmh31
Timothy Hospedales
3 years
My lab at Samsung AI Research Cambridge is #hiring for research scientist and ML research engineer positions. Skilled in meta-learning, neuro-symbolic methods, foundation models, vision and language, robot learning, or on-device learning? Apply online https://t.co/ELTrrMFUKi #MachineLearning
1
28
97
@tmh31
Timothy Hospedales
3 years
A take-home message is that model selection strategy seems to be the only reliable way to affect fairness so far. Surprising? Good/Bad? At least it's easy to implement.
0
0
0
@tmh31
Timothy Hospedales
3 years
Wondering about the state of the art in algorithmic fairness and bias in AI? @yongshuozong's benchmark suite evaluates algorithms comprehensively across medical AI tasks. Bias is pervasive and fairness is hard to find. Paper & Code: https://t.co/rkRZXc4vQ9
1
5
13
@tmh31
Timothy Hospedales
3 years
What happens when few-shot meta-learning meets foundation models? Check our paper with @shelling343 at CVPR'22 New Orleans today. https://t.co/8IrW2Ce3zK
2
3
33
@mgb_infers
Michael Burke
3 years
I'll be presenting some work on vision-based keypoint discovery and system identification at #l4dc2022 this Friday. https://t.co/rW3nHpSci0 - Work led by @migJaques with @masenov1 @tmh31
@mgb_infers
Michael Burke
3 years
I'll be in Palo Alto next week for #l4DC2022 if anyone is keen to meet up.
1
1
13
@HegganCalum
Calum Heggan
4 years
Is few-shot meta-learning successful for general modalities? Find out how well your favourite meta-learner does in few-shot audio tasks in our new benchmark "MetaAudio"! https://t.co/a1VuWqPzod
1
18
24
@ldeecke
Lucas Deecke
4 years
Presenting our paper on a new transfer learning setting called “latent domain learning” at #ICLR2022’s poster session 5 tomorrow (10:30am BST). https://t.co/iQG8Y7fv5q — joint work with @tmh31 and Hakan Bilen, hope to see you there!
0
3
6
@ducha_aiki
Dmytro Mishkin 🇺🇦
4 years
Why Do Self-Supervised Models Transfer? Investigating the Impact of Invariance on Downstream Tasks @sillinuss, @henrygouk, @tmh31 tl;dr: color augmentation helps self-supervised MoCo v2 pose estimation. https://t.co/uxD7BYcpjA 1/
2
5
22
@ruiruiliii
Rui Li
4 years
This Friday, I will present our paper 'A Channel Coding Benchmark for Meta-Learning' at the #NeurIPS2021 Benchmark and Dataset Track. Check out the preprint at: https://t.co/1SXXRppTsK Credits to: Ondrej Bohdal, Hyeji Kim, Rajesh Mishra, Da Li, Nicholas Lane, and Timothy Hospedales
3
3
15
@OBohdal
Ondrej Bohdal
4 years
Would you like to learn how to make meta-learning more scalable? We’ll be presenting EvoGrad at Poster Session 1 at #NeurIPS today - starting from 16:30 UTC. Joint work with Yongxin Yang and Timothy Hospedales.
0
3
16