Matthieu Terris (@MatthieuTerris)

193 Followers · 871 Following · 11 Media · 74 Statuses

building https://t.co/vq0SKxwF3Z

Paris
Joined February 2011
Matthieu Terris (@MatthieuTerris) · 2 months
To summarize: don't forget to degrade your images before applying a restoration model 😁 For more details, come chat with me in Nashville! Last but not least, this is joint work with Thomas Moreau and @ukmlv 🫡 Paper: linked below. Code: next week!
[Link card: arxiv.org]
Selecting an appropriate prior to compensate for information loss due to the measurement operator is a fundamental challenge in imaging inverse problems. Implicit priors based on denoising neural...
Matthieu Terris (@MatthieuTerris) · 2 months
But a cool feature inherited from this fixed-point approach is the ability to combine restoration models! Our results suggest that combining SR, deblurring, and denoising priors works best.
[Image: results from combining restoration priors]
Matthieu Terris (@MatthieuTerris) · 2 months
This observation echoes recent works that proposed priors other than denoisers, e.g. DPR and ShARP from @YuyangHu_666 et al., but also SNORE from M. Renaud. Yet we obtain a different Tweedie formula, which suggests composing the restoration prior with its associated degradation.
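For reference, the classical Tweedie identity that the tweet alludes to relates an MMSE Gaussian denoiser to the score of the noisy image distribution; the point above is that an analogous but different identity holds for restoration models composed with their degradation (the exact FiRe formula is in the paper and is not reproduced here):

```latex
% Classical Tweedie identity for Gaussian noise:
% if y = x + \sigma n with n ~ N(0, I), and D_\sigma is the MMSE
% denoiser D_\sigma(y) = E[x | y], then the denoiser encodes the
% score of the smoothed distribution p_\sigma:
\nabla_y \log p_\sigma(y) = \frac{D_\sigma(y) - y}{\sigma^2}
```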
Matthieu Terris (@MatthieuTerris) · 2 months
We can thus use this prior just like any denoising prior in an iterative algorithm, e.g. use a super-resolution (SR) model as a prior for deblurring, an inpainting model for SR, etc. Below, I show SRx4 results with various priors: LAMA (inpainting), deblurring, SRx2.
[Image: SRx4 results with LAMA (inpainting), deblurring, and SRx2 priors]
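A minimal sketch of how such an iteration could look, assuming a hypothetical pretrained restoration network `restorer` and the degradation `degrade` it was trained on; this illustrates the plug-in idea in spirit, not the paper's exact FiRe algorithm:

```python
import torch

def fire_style_pnp(y, physics_A, restorer, degrade, n_iters=50, gamma=1.0):
    """Illustrative PnP loop whose prior step composes a restoration
    model with the degradation it was trained to invert. `restorer`
    and `degrade` are hypothetical placeholders (e.g. a pretrained
    deblurring network and the matching blur).

    y         : observed measurements
    physics_A : forward operator A, callable and exposing .adjoint(...)
    """
    x = physics_A.adjoint(y)  # back-projected initialization
    for _ in range(n_iters):
        # Gradient step on the data-fidelity term 0.5 * ||A x - y||^2
        grad = physics_A.adjoint(physics_A(x) - y)
        z = x - gamma * grad
        # Prior step: degrade then restore; clean images are
        # (approximate) fixed points of this composition
        x = restorer(degrade(z))
    return x
```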
Matthieu Terris (@MatthieuTerris) · 2 months
Instead, composing a restoration model with the degradation for which it was trained does yield a fixed point and a Tweedie-like prior! In other words, the restoration model should not be considered independently of the degradation for which it was trained.
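A quick way to sanity-check this fixed-point property numerically, with `deblurrer` and `blur` as hypothetical stand-ins for a pretrained restoration model and its training degradation:

```python
import torch

def fixed_point_gap(x_clean, model, degrade=None):
    """Relative distance of x_clean from being a fixed point of the map.
    With degrade=None the model is applied directly to the clean image
    (the tweet's point: images are NOT fixed points of restoration
    models); with the matching degradation, model(degrade(x)) should
    return approximately x itself."""
    with torch.no_grad():
        z = x_clean if degrade is None else degrade(x_clean)
        out = model(z)
    return (torch.norm(out - x_clean) / torch.norm(x_clean)).item()

# Hypothetical usage with a pretrained deblurring model and its blur:
# fixed_point_gap(x, deblurrer)        # large: R(x) far from x
# fixed_point_gap(x, deblurrer, blur)  # small: R(A(x)) close to x
```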
Matthieu Terris (@MatthieuTerris) · 2 months
🧵 I'll be at CVPR next week presenting our FiRe work 🔥 TL;DR: We go beyond denoising models in PnP with more general restoration (e.g. deblurring) models! The starting observation is that images are not fixed points of restoration models:
Matthieu Terris (@MatthieuTerris) · 4 months
RT @TachellaJulian: I'll be in Singapore this week for #ICLR2025, presenting "UNSURE: self-supervised learning with Unknown Noise level and…
Matthieu Terris (@MatthieuTerris) · 4 months
The code and demo are live! 🔗 Paper: 🔗 Code: 🔗 Demo:
[Link card: huggingface.co]
Matthieu Terris (@MatthieuTerris) · 4 months
And in challenging real-world imaging problems where there's no ground truth and limited samples, we can still fine-tune… ✨ fully unsupervised! ✨ Using recent ideas from equivariant imaging + SURE, we adapt the model to a single noisy image, e.g. on this tough SPAD problem:
[Image: fine-tuning results on a SPAD imaging problem]
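A hedged sketch of what a single-image, self-supervised objective combining SURE and equivariant imaging could look like; everything below (the Gaussian-noise SURE form, the equal loss weighting, the operator interface) is an illustrative assumption, not the paper's exact recipe:

```python
import torch

def ei_sure_finetune_loss(model, y, physics_A, sigma, transform, eps=1e-3):
    """Self-supervised loss on a single noisy measurement y, combining a
    measurement-domain SURE term (unbiased MSE estimate, assuming
    Gaussian noise of std sigma) with an equivariant-imaging term.
    Sketch only; the paper's exact losses and weights may differ.

    physics_A : forward operator, callable with .adjoint(...)
    transform : group action T under which the image distribution is
                assumed invariant (e.g. a random rotation)
    """
    x_net = model(physics_A.adjoint(y))

    # --- SURE term on f(y) = A model(A^T y), for y = A x + sigma * n ---
    residual = physics_A(x_net) - y
    m = y.numel()
    b = torch.randn_like(y)  # Monte-Carlo probe for the divergence
    x_pert = model(physics_A.adjoint(y + eps * b))
    div = (b * (physics_A(x_pert) - physics_A(x_net))).sum() / eps
    loss_sure = residual.pow(2).sum() / m - sigma**2 + (2 * sigma**2 / m) * div

    # --- Equivariant-imaging term: a transformed estimate, re-measured
    # and re-reconstructed, should be consistent with itself ---
    x_t = transform(x_net)
    loss_ei = (model(physics_A.adjoint(physics_A(x_t))) - x_t).pow(2).mean()

    return loss_sure + loss_ei
```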
Matthieu Terris (@MatthieuTerris) · 4 months
Despite not being unrolled, our network shows strong zero-shot generalization — even to unseen operators and noise levels!
[Image: zero-shot generalization results]
Matthieu Terris (@MatthieuTerris) · 4 months
We trained this network on multiple imaging tasks: motion blur, inpainting, MRI, CT, Poisson-Gaussian denoising, super-resolution, and across modalities (1, 2, 3 channels). It learned all tasks jointly, with no task-specific retraining needed.
[Image: overview of the training tasks]
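A sketch of what such joint multitask training could look like, drawing a random task per step; the task names, operator interface, and noise level below are illustrative placeholders, not the paper's training code:

```python
import random
import torch

def multitask_training_steps(model, dataloaders, physics_ops, optimizer,
                             n_steps=1000, sigma=0.01):
    """Illustrative joint training over several inverse problems: each
    step draws a task, simulates y = A x + e with that task's operator,
    and supervises the reconstruction against the ground truth x.
    physics_ops: task name -> forward operator (callable, with .adjoint)
    dataloaders: task name -> loader of clean images for that modality."""
    tasks = list(physics_ops.keys())
    iters = {t: iter(dataloaders[t]) for t in tasks}
    for _ in range(n_steps):
        task = random.choice(tasks)  # e.g. "blur", "inpaint", "mri", "sr"
        try:
            x = next(iters[task])
        except StopIteration:  # restart this task's loader when exhausted
            iters[task] = iter(dataloaders[task])
            x = next(iters[task])
        A = physics_ops[task]
        Ax = A(x)
        y = Ax + sigma * torch.randn_like(Ax)   # noisy measurements
        x_hat = model(A.adjoint(y), task=task)  # shared backbone,
                                                # task-aware conditioning
        loss = (x_hat - x).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```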
Matthieu Terris (@MatthieuTerris) · 4 months
We propose two key architectural updates to make a UNet multitask and generalize: ✅ Inject knowledge of the measurement operator A into inner layers (like conditioning in diffusion models). ✅ Share weights across modalities (grayscale, color, complex), adapting only the input/output heads.
[Image: proposed architecture]
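A hedged sketch of the first idea: modulating an inner conv block with an embedding of the operator, FiLM-style, which is one common way to implement diffusion-like conditioning; the paper's exact mechanism and the `op_embedding` interface are assumptions here:

```python
import torch
import torch.nn as nn

class OperatorConditionedBlock(nn.Module):
    """Conv block whose features are modulated by an embedding of the
    measurement operator A (FiLM-style scale-and-shift, one plausible
    form of the diffusion-like conditioning described above)."""

    def __init__(self, channels: int, op_embed_dim: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        # Maps the operator embedding to per-channel scale and shift
        self.to_scale_shift = nn.Linear(op_embed_dim, 2 * channels)

    def forward(self, x: torch.Tensor, op_embedding: torch.Tensor):
        h = self.conv(x)
        scale, shift = self.to_scale_shift(op_embedding).chunk(2, dim=-1)
        # Broadcast (batch, channels) over the spatial dimensions
        h = h * (1 + scale[..., None, None]) + shift[..., None, None]
        return torch.relu(h)
```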
Matthieu Terris (@MatthieuTerris) · 4 months
⚠️ But trained unrolled networks lose their theoretical properties anyway. So we asked: can we revive the UNet and make it generalize?
Matthieu Terris (@MatthieuTerris) · 4 months
The issue: plain UNets don’t generalize well to new forward operators A. They’re also rarely used in this space because unrolled networks are thought to offer better interpretability.
Matthieu Terris (@MatthieuTerris) · 4 months
Most architectures trained for solving y = Ax + e are tailored to specific operators A. But what if we used one universal UNet backbone? No PnP, no unrolling, no retraining: just good inductive bias.
Matthieu Terris (@MatthieuTerris) · 4 months
🧵 What if a single UNet could solve all inverse problems? In our latest preprint with @HuraultSamuel, Maxime Song, and @TachellaJulian, we build a single multitask UNet for computational imaging and show it generalizes surprisingly well 👇
[Image attached]
Matthieu Terris (@MatthieuTerris) · 4 months
RT @TachellaJulian: 🚢🚢 deepinv v0.3.0 is here, with many new features! 🚢🚢 Our passionate team of contributors keeps shipping more excitin…
Matthieu Terris (@MatthieuTerris) · 5 months
RT @tobias_liaudat: ⚡ Last days to apply for this exciting project to make significant contributions to space-based telescopes like Euclid…
Matthieu Terris (@MatthieuTerris) · 5 months
RT @TachellaJulian: 🚀 Do you want to learn about self-supervised learning for inverse problems? ▶️ Check out the 3-hour tutorial present…
[Link card: youtube.com] Tutorial by Julián Tachella (CNRS, ENS Lyon) & Mike Davies (University of Edinburgh), given at the University of Edinburgh, February 2025: I. "Introduction", II. …
Matthieu Terris (@MatthieuTerris) · 5 months
RT @ukmlv: "Typical priors: 'This image looks natural.' FiRe 🔥 priors: 'This image survives degradation + restoration.' Our CVPR 2025 paper…