Computer Vision and Machine Learning - MPI-INF
@cvml_mpiinf
Followers: 285
Following: 38
Media: 24
Statuses: 79
Computer Vision and Machine Learning Department (D2) at the Max Planck Institute for Informatics
Saarbrücken, Germany
Joined April 2024
Summary: Single image to realistic 3D avatar. We jointly train a multiview diffusion model with a generative 3D reconstruction model. Paper:
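As a rough illustration of the joint-training idea (a toy sketch with made-up module names and shapes, not the authors' architecture or code), the snippet below couples a multi-view denoiser with a reconstruction module that maps the denoised views to a shared 3D latent and re-renders them, updating both with a combined denoising and 3D-consistency loss:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins for the two jointly trained components; layers and shapes are
# illustrative only, the real models operate on images and are far larger.
class MVDenoiser(nn.Module):
    """Predicts denoised multi-view features from noisy views plus conditioning."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim * 2, 128), nn.ReLU(), nn.Linear(128, dim))

    def forward(self, noisy_views, cond):
        return self.net(torch.cat([noisy_views, cond.expand_as(noisy_views)], dim=-1))

class Reconstructor(nn.Module):
    """Maps denoised views to a shared (toy) 3D latent and re-renders views from it."""
    def __init__(self, dim=64):
        super().__init__()
        self.to_3d = nn.Linear(dim, dim)
        self.render = nn.Linear(dim, dim)

    def forward(self, views):
        latent = self.to_3d(views).mean(dim=1, keepdim=True)  # one latent per sample
        return self.render(latent).expand_as(views)           # "re-rendered" views

denoiser, recon = MVDenoiser(), Reconstructor()
opt = torch.optim.AdamW(list(denoiser.parameters()) + list(recon.parameters()), lr=1e-3)

# One joint step: denoising loss on the predicted views plus a 3D-consistency
# loss on the views re-rendered from the reconstruction.
cond = torch.randn(8, 1, 64)                 # conditioning derived from the input image
gt_views = torch.randn(8, 4, 64)             # 4 target views per sample
noisy = gt_views + torch.randn_like(gt_views)
pred_views = denoiser(noisy, cond)
rerendered = recon(pred_views)
loss = F.mse_loss(pred_views, gt_views) + F.mse_loss(rerendered, gt_views)
opt.zero_grad(); loss.backward(); opt.step()
```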
"Human-3Diffusion: Realistic Avatar Creation via Explicit 3D Consistent Diffusion Models" by @yxue_yxue, @XianghuiXie, @_R_Marin_, and @GerardPonsMoll1. Fri 13 Dec 4:30 p.m. PST–7:30 p.m. PST, East Exhibit Hall A-C #1202
We propose a pruning framework, DASH, that incorporates prior domain knowledge to obtain meaningfully sparse neural networks. For the prediction of gene regulatory dynamics, we show that DASH recovers neural structures that provide data-specific insights aligned with biology.
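For illustration only, here is one generic way to combine weight magnitudes with a domain-knowledge prior when scoring connections for pruning; the mixing rule, the `prior` matrix, and the function name are assumptions made for the sketch, not the exact DASH criterion:

```python
import torch

def prune_with_prior(weight, prior, sparsity=0.9, alpha=0.5):
    """Zero out the connections with the lowest combined score of weight
    magnitude and domain-knowledge prior (e.g., prior evidence for a TF-gene
    interaction). The geometric mixing of |w| and the prior is an illustrative
    choice, not the DASH scoring rule."""
    score = weight.abs() ** (1 - alpha) * prior ** alpha
    k = int(sparsity * score.numel())
    threshold = score.flatten().kthvalue(k).values
    mask = (score > threshold).float()
    return weight * mask, mask

# Toy usage: a layer mapping transcription factors to target genes, with a
# hypothetical prior interaction matrix scaled to [0, 1].
w = torch.randn(100, 20)      # genes x transcription factors
prior = torch.rand(100, 20)   # prior evidence for each TF-gene edge
pruned_w, mask = prune_with_prior(w, prior, sparsity=0.9)
print(f"kept {int(mask.sum())} of {mask.numel()} connections")
```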
"Pruning neural network models for gene regulatory dynamics using data and domain knowledge" by @InteHossain93*, @JonasFischerML*, @BurkholzRebekka^, and @johnquackenbush^. Fri 13 Dec 11 a.m. PST–2 p.m. PST, East Exhibit Hall A-C #1100
...B-cos networks at a fraction of the training cost, up to 9x faster for some architectures. We use this to also B-cosify a CLIP model, which, even with limited data, achieves competitive zero-shot performance while remaining interpretable. Paper:
arxiv.org
B-cos Networks have been shown to be effective for obtaining highly human interpretable explanations of model decisions by architecturally enforcing stronger alignment between inputs and weight....
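A minimal sketch of the B-cos transform that B-cosification builds on, assuming the common formulation in which a linear response with unit-norm weights is rescaled by |cos(x, w)|^(B-1) so that large activations require input-weight alignment; the `BcosLinear` class and the weight-copying step below are illustrative, not the paper's exact conversion recipe:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BcosLinear(nn.Module):
    """Linear layer whose response w^T x is rescaled by |cos(x, w)|^(B-1),
    here with B=2, so the layer rewards alignment between input and weights."""
    def __init__(self, in_features, out_features, b=2.0):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(out_features, in_features))
        self.b = b

    def forward(self, x):
        w = F.normalize(self.weight, dim=1)                   # unit-norm weight rows
        lin = x @ w.t()                                       # w^T x
        cos = lin / (x.norm(dim=-1, keepdim=True) + 1e-6)     # cosine, since ||w|| = 1
        return lin * cos.abs() ** (self.b - 1)

# Rough idea of "B-cosifying" an existing trained layer: copy its weights into
# a B-cos layer and fine-tune (the paper's recipe involves further adjustments).
pretrained = nn.Linear(128, 10)
bcos = BcosLinear(128, 10)
with torch.no_grad():
    bcos.weight.copy_(pretrained.weight)
print(bcos(torch.randn(4, 128)).shape)  # torch.Size([4, 10])
```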
B-cos networks provide interpretable and faithful-by-design explanations, but training them from scratch is costly. We study how to “B-cosify” already trained DNNs, and show that we can achieve the same accuracy and interpretability benefits of...
"B-cosification: Transforming Deep Neural Networks to be Inherently Interpretable" by @shrebox*, @sukrutrao*, @MoritzBoehle*, and Bernt Schiele. Thu 12 Dec 11 a.m. PST–2 p.m. PST, East Exhibit Hall A-C #3108
This approach opens new avenues for research and advances in weakly supervised segmentation. Project Page: https://t.co/zyevvhAqLC ArXiv: https://t.co/a0v2EA30Uv GitHub:
github.com
Code and Datasets for the NeurIPS24 Paper "Scribbles for All: Benchmarking Scribble Supervised Segmentation Across Datasets" - wbkit/Scribbles4All
...with significantly reduced annotation effort. Scribbles for All provides pre-generated scribble labels for several popular segmentation datasets and introduces an algorithm capable of automatically generating scribble labels for any dataset with dense annotations.
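As a hedged illustration of how scribble-style labels can be derived from dense masks (the generation algorithm in the paper is more involved than this baseline, and the function name here is made up), one can skeletonize each class region:

```python
import numpy as np
from skimage.morphology import skeletonize

def dense_to_scribbles(dense_mask, ignore_index=255):
    """Turn a dense per-pixel label map into sparse scribble-style labels by
    skeletonizing each class region; pixels off the skeleton are ignored."""
    scribble = np.full_like(dense_mask, ignore_index)
    for cls in np.unique(dense_mask):
        if cls == ignore_index:
            continue
        skeleton = skeletonize(dense_mask == cls)   # thin, 1-pixel-wide curves
        scribble[skeleton] = cls
    return scribble

# Toy usage on a two-class dense annotation.
dense = np.zeros((64, 64), dtype=np.uint8)
dense[16:48, 16:48] = 1
scribbles = dense_to_scribbles(dense)
print((scribbles != 255).mean())  # fraction of pixels carrying a scribble label
```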
We present Scribbles for All, a novel algorithm for generating training data and labels for semantic segmentation using scribble annotations. Scribble labels offer a promising alternative to traditional pixel-wise dense annotations, delivering high-quality segmentation results...
"Scribbles for All: Benchmarking Scribble Supervised Segmentation Across Datasets" by Wolfgang Boettcher, @lukashoyer3, @uenalozan, @janericlenssen, and Bernt Schiele. Wed 11 Dec 4:30 p.m. PST–7:30 p.m. PST, East Exhibit Hall A-C #1702
Hello from Vancouver! Visit us at our posters at #NeurIPS2024 to learn about our latest work! A summary 🧵
Congratulations to our group's senior researcher @janericlenssen and PhD alumnus @yaoyaoliu1 for winning the ECVA PhD Award at #ECCV2024! 🎉🥳 Hearty congratulations also to @songyoupeng and @elliottszwu!
Less than three days to go until the eXCV Workshop at #ECCV2024! Join us on Sunday from 14:00-18:00 in Brown 1 to hear about the state of XAI research from an exciting lineup of speakers! @orussakovsky, @vidal_rene, @sunniesuhyoung, @YGandelsman, @zeynepakata
@eccvconf (1/4)
Current visual foundation models are trained purely on unstructured 2D data, limiting their understanding of the 3D structure of objects and scenes. In this work, we show that fine-tuning on 3D-aware data improves the quality of emerging semantic features.
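A minimal sketch of what 3D-aware fine-tuning could look like, under the assumption that the training signal is a set of feature maps rendered from a 3D-consistent scene representation which the 2D backbone is fine-tuned to regress; the modules and shapes below are toy placeholders, not the paper's pipeline:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in for a pretrained 2D backbone (the real model would be a ViT).
backbone = nn.Conv2d(3, 64, kernel_size=3, padding=1)
opt = torch.optim.AdamW(backbone.parameters(), lr=1e-5)

images = torch.randn(2, 3, 64, 64)        # posed views of one scene
targets_3d = torch.randn(2, 64, 64, 64)   # features rendered from a 3D-consistent representation

# Fine-tune the 2D features toward the multi-view-consistent targets.
pred = backbone(images)
loss = F.mse_loss(pred, targets_3d)
opt.zero_grad(); loss.backward(); opt.step()
```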
“Improving 2D Feature Representations by 3D-aware Fine-tuning” by Yuanwen Yue, Anurag Das, Francis Engelmann, Siyu Tang, and Jan Eric Lenssen.
We show that sparse autoencoders can help extract and name concepts known to CLIP vision encoders in a fully automated manner, and find that the extracted concepts can be used to form performant concept bottleneck models on downstream datasets. Paper:
arxiv.org
Concept Bottleneck Models (CBMs) have recently been proposed to address the 'black-box' problem of deep neural networks, by first mapping images to a human-understandable concept space and then...
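A hedged sketch of the first stage as described in the abstract: train a sparse autoencoder on CLIP image embeddings, treat its latent units as candidate concepts (the naming step, e.g. matching concepts to a text vocabulary, is omitted here), and fit a linear head over the concept activations as the bottleneck classifier. Dimensions, the sparsity penalty, and variable names are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAutoencoder(nn.Module):
    """Overcomplete autoencoder on CLIP image embeddings; each latent unit is
    treated as a candidate concept (512 -> 4096 is an illustrative choice)."""
    def __init__(self, d_embed=512, n_concepts=4096):
        super().__init__()
        self.encoder = nn.Linear(d_embed, n_concepts)
        self.decoder = nn.Linear(n_concepts, d_embed)

    def forward(self, z):
        c = F.relu(self.encoder(z))      # sparse, non-negative concept activations
        return self.decoder(c), c

sae = SparseAutoencoder()
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)

clip_embeds = torch.randn(256, 512)      # stand-in for precomputed CLIP image features
recon, concepts = sae(clip_embeds)
loss = F.mse_loss(recon, clip_embeds) + 1e-3 * concepts.abs().mean()  # reconstruction + L1
opt.zero_grad(); loss.backward(); opt.step()

# Concept bottleneck: a linear classifier over the concept activations, so each
# prediction decomposes over (nameable) concepts.
bottleneck_head = nn.Linear(4096, 10)
logits = bottleneck_head(concepts.detach())
```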
“Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery” by @sukrutrao, @SwetaMahajan1, @MoritzBoehle, and Bernt Schiele. https://t.co/rfZy8X23ge
Happy and excited to have two papers accepted at #ECCV2024! 1. “Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery” with @SwetaMahajan1, @MoritzBoehle, and Bernt Schiele at @cvml_mpiinf. (1/7)