Jose Dolz
@josedolz_ets
Followers 312 · Following 136 · Media 42 · Statuses 197
Passionate about medical imaging and computer vision. Associate Professor, ETS Montreal @etsmtl
Montréal, Québec
Joined December 2019
Proud of my country in these extremely hard moments 🇪🇸. All my heart is with my city and all the victims of this tragedy :( 💔
If you are at #ECCV and want to know about calibrating large language-vision model adaptors, come now to our poster (no. 79)
All about Foundation Models today at the last day of summer school #DLMI with Prof @josedolz_ets! #ChatGPT #FoundationModels #AI #LLMs #VLMs
We begin our day with a talk on weakly supervised #deeplearning, constrained losses and semantic segmentation! #DLMI2024
I am hiring two post-docs to work at the intersection of medical imaging and machine learning (modelling the uncertainty of large language-vision models). If you are interested, drop me an email for more information jose.dolz@etsmtl.ca
If you are interested in doing a PhD in deep learning and computer vision at ETS Montreal, drop me an email with your CV and research interests (more info in the attached image)
5/5. Link to the paper: https://t.co/0PiBZ7QIbY Github: https://t.co/pBtEPCkF5N Congrats to all co-authors!! @bing_bingyuan @adrian_galdran @IsmailBenAyed1
4/5. Dice has an intrinsic bias towards specific, extremely imbalanced solutions, whereas CE implicitly encourages the ground-truth region proportions. This explains the wide experimental evidence in medical imaging, where the Dice loss brings improvements for imbalanced segmentation.
3/5. And the second one: a region-size penalty term imposing different biases on the size (or proportion) of the predicted regions. Our information-theoretic analysis uncovers these hidden region-size biases.
2/5. In this work, we provide a theoretical analysis showing that CE and Dice share a deep connection. Both decompose into two components. The first one: a similar ground-truth matching term, which pushes the predicted foreground regions towards the ground truth;
1/5. Do you wonder which is the best loss function to use in your medical segmentation model? It is widely argued within the medical-imaging community that the Dice and CE losses are complementary, which has motivated the use of compound CE-Dice losses (the de facto solution nowadays).
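The CE-vs-Dice behavior described in this thread can be probed numerically. Below is a minimal toy sketch (not the paper's code; all names and the 1-D example are illustrative) comparing pixel-wise binary cross-entropy with the soft Dice loss on a highly imbalanced ground truth:

```python
import numpy as np

def cross_entropy(p, y, eps=1e-7):
    # Pixel-wise binary cross-entropy between predicted foreground
    # probabilities p and binary ground truth y.
    p = np.clip(p, eps, 1 - eps)
    return float(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))

def soft_dice_loss(p, y, eps=1e-7):
    # Soft (differentiable) Dice loss: 1 - Dice coefficient.
    inter = np.sum(p * y)
    return float(1.0 - (2.0 * inter + eps) / (np.sum(p) + np.sum(y) + eps))

# Toy 1-D "image": 10 pixels, only 2 foreground (imbalanced, proportion 0.2).
y = np.array([1., 1., 0., 0., 0., 0., 0., 0., 0., 0.])

# A confident prediction vs. one predicting the ground-truth proportion everywhere.
confident = np.array([.9, .9, .1, .1, .1, .1, .1, .1, .1, .1])
uniform = np.full(10, 0.2)

print("CE:  ", cross_entropy(confident, y), cross_entropy(uniform, y))
print("Dice:", soft_dice_loss(confident, y), soft_dice_loss(uniform, y))
# For the uniform prediction, soft Dice loss = 1 - (2*0.4)/(2+2) = 0.8.
```

Comparing how each loss scores different prediction profiles on such imbalanced examples is one way to observe the distinct region-size biases the thread discusses.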
🚨 One of our latest papers, in which we propose using Denoising Auto-Encoders to model the uncertainty of the predictions in semi-supervised segmentation, has been accepted in MedIA. Congrats @sukeshadiga !! 🎉🎉💪💪 Arxiv: https://t.co/UQW5ckj2w1 Github:
github.com: Anatomically-aware Uncertainty for Semi-supervised Image Segmentation (adigasu/Anatomically-aware_Uncertainty_for_Semi-supervised_Segmentation)
Kudos to all the first authors and co-authors on these papers!! @93Balamuralim @jul_nicol @IsmailBenAyed1 @imtiaz_masud
2) MoP-CLIP: A Mixture of Prompt-Tuned CLIP Models for Domain Incremental Learning. Paper: https://t.co/LOBH2lHJCW
It seems some of my students will go to Hawaii this winter to present their work at @wacv_official 1) Prompting classes: Exploring the Power of Prompt Class Learning in Weakly Supervised Semantic Segmentation. Paper: https://t.co/fs5cUV7tRL Github: https://t.co/lRG9Q7xzAo