
Alec Helbling
@alec_helbling
Followers: 6K · Following: 8K · Media: 191 · Statuses: 948
ML Interpretability, Diffusion, Visualization. Intern @Apple, PhDing @GeorgiaTech. NSF Fellow. Prev intern @Adobe, @IBM, @NASAJPL.
us-east-1
Joined December 2017
Our work ConceptAttention was accepted to ICML 2025 as a Spotlight Poster (top 2.6% of submissions)! ConceptAttention creates rich saliency maps of text concepts present in generated images and videos. It requires no additional training, only repurposing existing parameters.
4 replies · 107 reposts · 740 likes
committing my authentication token to the GitHub repo so that I don’t accidentally lose it
4 replies · 0 reposts · 17 likes
procrastinating looking deeper into a positive result because you don't want to disprove it
2 replies · 0 reposts · 10 likes
Hot take. We need more ostentatious displays of wealth from tech billionaires. The Bay Area has some of the richest zip codes in the world, but it looks like a suburb of San Antonio. I want to be driving in Sunnyvale and see the Burj Khalifa. Something with gravitas.
3 replies · 0 reposts · 16 likes
Increasingly convinced we are headed towards a world with long-context AI companions that can engage with streams of multimodal data (i.e., text, image, video) spanning an entire human lifetime.
0 replies · 0 reposts · 6 likes
It’s interesting that in certain fields it is considered immodest to name discoveries/methods after yourself, but in other fields (e.g., math) it is super common.
0 replies · 0 reposts · 5 likes
Why is KL Divergence a more commonly used term in the ML literature than the (in my opinion) much more intuitive “relative entropy”?
38 replies · 22 reposts · 760 likes
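For reference, the two names pick out the same quantity; a standard identity (not from the thread) makes the "relative" reading concrete:

```latex
D_{\mathrm{KL}}(P \,\|\, Q)
  = \sum_x p(x)\,\log\frac{p(x)}{q(x)}
  = \underbrace{-\sum_x p(x)\,\log q(x)}_{\text{cross-entropy } H(P,Q)}
  \;-\; \underbrace{\Bigl(-\sum_x p(x)\,\log p(x)\Bigr)}_{\text{entropy } H(P)}
```

Read as H(P,Q) - H(P), it is the expected number of extra bits paid for coding samples from P with a code optimized for Q: entropy measured relative to a reference distribution, which is exactly what the name "relative entropy" says.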
ConceptAttention explores how diffusion transformers learn highly interpretable representations in an emergent manner. These representations can be used to create high-quality saliency maps of text concepts without additional training https://t.co/gk7u8jaGbj
arxiv.org
Do the rich representations of multi-modal diffusion transformers (DiTs) exhibit unique properties that enhance their interpretability? We introduce ConceptAttention, a novel method that leverages...
2 replies · 3 reposts · 23 likes
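A minimal sketch of the saliency-map idea, under loose assumptions: compare each concept token's output embedding against every image-patch embedding from the same transformer layer, then normalize over the spatial grid. All names and shapes here are hypothetical placeholders, not the paper's actual API, and random features stand in for real DiT activations.

```python
import numpy as np

def concept_saliency_maps(patch_feats, concept_feats, h, w):
    """Hypothetical sketch of ConceptAttention-style saliency maps.

    patch_feats:   (num_patches, d) image-patch output embeddings from a
                   diffusion transformer layer (assumed precomputed).
    concept_feats: (num_concepts, d) output embeddings for concept tokens
                   (e.g. "dragon", "sky") run through the same layers.
    Returns (num_concepts, h, w) maps, softmax-normalized over patches.
    """
    # Dot-product similarity between every concept and every patch.
    logits = concept_feats @ patch_feats.T        # (num_concepts, num_patches)
    # Softmax across spatial locations so each concept's map sums to 1.
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    return probs.reshape(-1, h, w)

# Toy usage with random features standing in for real activations.
rng = np.random.default_rng(0)
patches = rng.normal(size=(16 * 16, 64))   # 16x16 patch grid, d=64
concepts = rng.normal(size=(3, 64))        # 3 text concepts
maps = concept_saliency_maps(patches, concepts, 16, 16)
print(maps.shape)  # (3, 16, 16)
```

The paper repurposes the model's existing attention parameters rather than raw dot products; this sketch only mirrors the overall shape of the computation: concepts scored against patches, then reshaped into per-concept heatmaps.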
I gave my first oral presentation at ICML 2025 last week. I was very nervous haha 😱. Our paper ConceptAttention showed that you can repurpose the representations of diffusion transformers to create rich saliency maps of text concepts in generated images.
4 replies · 15 reposts · 242 likes
i wanna pretrain an LLM on a massive synthetic dataset of only lord of the rings text, so that it becomes its entire universe
2 replies · 0 reposts · 8 likes
Excited that our work, Concept Attention, was selected as an Oral Presentation at ICML 2025 and recently won the Best Paper Award at the CVPR Workshop on Visual Concepts. Concept Attention allows you to visualize the presence of text concepts in generated videos and images.
4 replies · 61 reposts · 453 likes