Timothy Hospedales
@tmh31
Followers
816
Following
81
Media
4
Statuses
144
Professor @ University of Edinburgh. Head of Samsung AI Research Centre, Cambridge.
Edinburgh, Scotland
Joined August 2009
Here's the demo "Enhance This"! It's a surreal image magnifier that creates a high-res version by imagining new details, using the SDXL base model. Thanks to @RuoyiDu's DemoFusion research. It takes about a minute to generate a 2024x2024 image. https://t.co/t36Dt4S2ll
15
124
631
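The progressive upscaling idea behind DemoFusion can be sketched in plain Python: generate at the model's native resolution first, then re-denoise at successively larger resolutions. This is an illustrative schedule only, not the authors' code; the function name and default resolutions are assumptions.

```python
import math

def demofusion_schedule(base=1024, target=2048):
    """Sketch of DemoFusion-style progressive upscaling: start at the
    model's native side length, then continue at integer multiples of it
    until the target side length is covered."""
    stages = math.ceil(target / base)
    return [base * k for k in range(1, stages + 1)]

print(demofusion_schedule())  # [1024, 2048] for a 2x upscale
```

Each later stage re-denoises an upsampled version of the previous result, which is why generation time grows with the target resolution.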
💰DemoFusion: High-resolution generation using only SDXL and an RTX 3090 GPU! ... is now available in 🧨diffusers as a community pipeline! Check it out: https://t.co/Yr4xdjoKT4 Project Page: https://t.co/Ivvwds9jfv
#generativeAI #ImageGeneration #diffusionmodels
4
20
88
Excited to have been part of DemoFusion, bringing UHD generation to SDXL on your desktop with no training! With @RuoyiDu @yizhe_song @DL_Chang Project: https://t.co/ygOnvdSJtN, paper: https://t.co/OR77mOvHjD
#GenerativeAI
4
7
27
Interested in practical uncertainty quantification? Our new Bayesian NN library from Samsung AI Cambridge scales to large ViTs! One line of code wraps any architecture without modifying your model definition! https://t.co/xtTVn49UE9
https://t.co/7fraYSoEQC
@minyoungkim21
arxiv.org
We release a new Bayesian neural network library for PyTorch for large-scale deep networks. Our library implements mainstream approximate Bayesian inference algorithms: variational inference,...
1
8
51
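The "one line of code wraps any architecture" claim refers to turning a trained deterministic network into an approximately Bayesian one. As a toy illustration of the underlying idea (Monte Carlo averaging of predictions under a Gaussian posterior over the weights), here is a hypothetical sketch; the names and the naive isotropic posterior are assumptions, not the library's actual API.

```python
import numpy as np

def bayesianize(predict, params, sigma=0.05, n_samples=10, seed=0):
    """Wrap a deterministic predictor so each call averages predictions
    over weight samples drawn from a Gaussian centred on the trained
    parameters, returning a mean prediction and an uncertainty estimate."""
    rng = np.random.default_rng(seed)

    def bayes_predict(x):
        outs = [predict(x, params + sigma * rng.standard_normal(params.shape))
                for _ in range(n_samples)]
        return np.mean(outs, axis=0), np.std(outs, axis=0)

    return bayes_predict

# Toy linear "network": y = w @ x, trained weights w = [1, 2]
predict = lambda x, w: w @ x
wrapped = bayesianize(predict, params=np.array([1.0, 2.0]))
mean, std = wrapped(np.array([1.0, 1.0]))  # mean near 3.0, std > 0
```

Real libraries of this kind learn the posterior (e.g. via variational inference) rather than fixing `sigma`, but the wrap-and-sample pattern is the same.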
Meta Omnium is a multi-task few-shot learning benchmark to evaluate generalization across CV tasks. Work by @OBohdal @EricTian1102, @yongshuozong, @chavhan_ruchika, @dali_academic, @henrygouk, @tmh31, will be presented on 20/6 afternoon. Project page: https://t.co/rxpmndy7MF
0
5
20
Excited to give our @CVPR tutorial on few-shot learning today! Together with @ztwztq2006. Room East 5, starting 9AM PDT! #CVPR2023
0
1
13
🎉 New Work: https://t.co/hyORsHx2NC 🎉 Excited to announce that our new work "MT-SLVR: Multi-Task Self-Supervised Learning for Transformation In(Variant) Representations" is accepted at #INTERSPEECH2023 and is now public. Links and overview below ⬇️. @tmh31 @myprojection @BudgettSam
1
3
3
🚨Parameter Efficient Fine-Tuning has been well researched for NLP, vision, and cross-modal tasks. So why should MedAI be left behind? Presenting the first evaluation on PEFT for medical AI - https://t.co/tYIh7DLNN7 16 PEFT methods, 5 datasets including a text-to-image task 🔥
3
11
40
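As a reminder of what one such PEFT method does, here is the LoRA update written out in NumPy: the pretrained weight matrix is frozen and only a low-rank correction is trained. This is a sketch of the maths, not any particular library's API; all names and sizes are illustrative.

```python
import numpy as np

def lora_update(W, A, B, alpha=16):
    """Effective weight under LoRA: frozen W plus a trainable low-rank
    update B @ A, scaled by alpha / rank."""
    r = A.shape[0]
    return W + (alpha / r) * (B @ A)

rng = np.random.default_rng(0)
d, r = 8, 2
W = rng.standard_normal((d, d))   # frozen pretrained weights
A = rng.standard_normal((r, d))   # trainable down-projection
B = np.zeros((d, r))              # trainable up-projection, zero-initialised
W_eff = lora_update(W, A, B)      # at init B = 0, so W_eff equals W
```

Because only `A` and `B` are trained, the trainable parameter count is 2rd instead of d², which is what makes such methods attractive for medical AI with small datasets.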
Check out our latest survey on Self-Supervised Multimodal Learning!👇 Great to have the guidance of @oisinmacaodha and @tmh31. Paper:
1
3
22
Hello, everyone! We will be organizing an online workshop at ICLR 2023 aimed at one question: What do we need for successful domain generalization? The workshop will include invited talks from David Lopez-Paz, @AmosStorkey, @tommasi_tatiana, and @ylqzd2011 1/2
2
9
36
My Lab at Samsung AI Research Cambridge is #hiring for research scientist and ML research engineer positions. Skilled in meta-learning, neuro-symbolic, foundation models, vision and language, robot learning, on-device learning? Apply online https://t.co/ELTrrMFUKi
#MachineLearning
1
28
97
A take-home message is that the model selection strategy seems to be the only reliable way to affect fairness so far. Surprising? Good/Bad? At least it's easy to implement.
0
0
0
Wondering about the state of the art in algorithmic fairness and bias in AI? @yongshuozong's benchmark suite evaluates algorithms comprehensively across medical AI tasks. Bias is pervasive and fairness is hard to find. Paper & Code: https://t.co/rkRZXc4vQ9
1
5
13
What happens when few-shot meta-learning meets foundation models? Check out our paper with @shelling343 at CVPR'22 New Orleans today. https://t.co/8IrW2Ce3zK
2
3
33
I'll be presenting some work on vision-based keypoint discovery and system identification at #l4dc2022 this Friday. https://t.co/rW3nHpSci0 - Work led by @migJaques with @masenov1 @tmh31
I'll be in Palo Alto next week for #l4DC2022 if anyone is keen to meet up.
1
1
13
Is few-shot meta-learning successful for general modalities? Find out how well your favourite meta-learner does in few-shot audio tasks in our new benchmark "MetaAudio"! https://t.co/a1VuWqPzod
1
18
24
Presenting our paper on a new transfer learning setting called “latent domain learning” at #ICLR2022’s poster session 5 tomorrow (10:30am BST). https://t.co/iQG8Y7fv5q — joint work with @tmh31 and Hakan Bilen, hope to see you there!
0
3
6
Why Do Self-Supervised Models Transfer? Investigating the Impact of Invariance on Downstream Tasks @sillinuss, @henrygouk, @tmh31 tl;dr: color augmentation helps self-supervised MoCo-v2 pose estimation. https://t.co/uxD7BYcpjA 1/
2
5
22
This Friday, I will present our paper ‘A Channel Coding Benchmark for Meta-Learning’ at the #NeurIPS2021 Benchmark and Dataset Track. Check out the preprint at: https://t.co/1SXXRppTsK Credits to: Ondrej Bohdal, Hyeji Kim, Rajesh Mishra, Da Li, Nicholas Lane, and Timothy Hospedales
3
3
15