Hello from Vancouver! Visit us at our posters at #NeurIPS2024 to learn about our latest work! A summary 🧵
"Scribbles for All: Benchmarking Scribble Supervised Segmentation Across Datasets" by Wolfgang Boettcher, @lukashoyer3, @uenalozan, @janericlenssen, and Bernt Schiele. Wed 11 Dec 4:30 p.m. PST–7:30 p.m. PST, East Exhibit Hall A-C #1702
We present Scribbles for All, a novel algorithm for generating training data and labels for semantic segmentation using scribble annotations. Scribble labels offer a promising alternative to traditional pixel-wise dense annotations, delivering high-quality segmentation results...
...with significantly reduced annotation effort. Scribbles for All provides pre-generated scribble labels for several popular segmentation datasets and introduces an algorithm capable of automatically generating scribble labels for any dataset with dense annotations.
This approach opens new avenues for research and advances in weakly supervised segmentation. Project Page: https://t.co/zyevvhAqLC ArXiv: https://t.co/a0v2EA30Uv GitHub:
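For readers curious how scribble supervision plugs into a standard training loop: the usual trick is to compute the loss only on the sparse annotated pixels, e.g. a partial cross-entropy. A minimal sketch (our naming, not the actual Scribbles for All training code):

```python
import torch
import torch.nn.functional as F

def partial_cross_entropy(logits, scribble_labels, ignore_index=255):
    """Cross-entropy over scribbled pixels only.

    logits: (N, C, H, W) per-pixel class scores.
    scribble_labels: (N, H, W) holding class ids on scribble pixels and
    ignore_index everywhere else, so unlabeled pixels contribute nothing.
    """
    return F.cross_entropy(logits, scribble_labels, ignore_index=ignore_index)
```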
"B-cosification: Transforming Deep Neural Networks to be Inherently Interpretable" by @shrebox*, @sukrutrao*, @MoritzBoehle*, and Bernt Schiele. Thu 12 Dec 11 a.m. PST–2 p.m. PST, East Exhibit Hall A-C #3108
B-cos networks provide interpretable and faithful-by-design explanations, but training them from scratch is costly. We study how to “B-cosify” already trained DNNs, and show that we can achieve the same accuracy and interpretability benefits of...
...B-cos networks at a fraction of the training cost—up to 9x faster for some architectures. We use this to also B-cosify a CLIP model, which, even with limited data, achieves competitive zero-shot performance while being interpretable. Paper:
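The core B-cos idea, roughly: replace a linear unit's response w·x with one rescaled by the alignment |cos(x, w)|^(B−1), so that explanations follow directly from the weights. A toy sketch of the transform (our naming, simplified relative to the paper):

```python
import torch

def bcos_linear(x, w, b=2.0, eps=1e-6):
    # Scale each linear response by |cos(x, w_i)|^(B-1); at B=1 this
    # reduces to an ordinary linear layer, larger B rewards alignment
    # between the input and the weight vector.
    lin = x @ w.t()                                        # (N, out)
    cos = lin / (x.norm(dim=-1, keepdim=True) * w.norm(dim=-1) + eps)
    return lin * cos.abs().pow(b - 1)
```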
"Pruning neural network models for gene regulatory dynamics using data and domain knowledge" by @InteHossain93*, @JonasFischerML*, @BurkholzRebekka^, and @johnquackenbush^. Fri 13 Dec 11 a.m. PST–2 p.m. PST, East Exhibit Hall A-C #1100
We propose a pruning framework, DASH, that incorporates prior domain knowledge to obtain meaningfully sparse neural networks. For the prediction of gene regulatory dynamics, we show that DASH recovers neural structures that provide data-specific insights aligned with biology.
"Human-3Diffusion: Realistic Avatar Creation via Explicit 3D Consistent Diffusion Models" by @yxue_yxue, @XianghuiXie, @_R_Marin_, and @GerardPonsMoll1. Fri 13 Dec 4:30 p.m. PST–7:30 p.m. PST, East Exhibit Hall A-C #1202