Dr. Karen Ullrich

@karen_ullrich

Followers
4,420
Following
564
Media
19
Statuses
216

Research scientist at FAIR NY + collab w/ Vector Institute. ❤️ Machine Learning + Information Theory. Previously, PhD at UoAmsterdam, intern at DeepMind + MSRC.

she / her
Joined December 2013
@karen_ullrich
Dr. Karen Ullrich
3 years
I finally uploaded my PhD thesis “A coding perspective on deep latent variable models” () on Gscholar 🙃 It’s my ❤️ letter to the minimum description length principle for machine learning (+ pastel gradients 😋).
22
112
1K
@karen_ullrich
Dr. Karen Ullrich
7 months
🚨 Internship opportunity 🚨 Interested in information theory + neural compression (and, more broadly, representation learning, continual learning, and mechanistic interpretability)? Come work at FAIR NY, with a flexible starting date in 2024. DM me + send a CV/link to your website
13
90
537
@karen_ullrich
Dr. Karen Ullrich
4 years
During my @deepmind internship supervised by @deepspiker , I have been working on improving the quality of skype/hangout/zoom calls with generative models. Our paper (w @FabioViola ) 👇 "Neural Communication Systems with Bandwidth-Limited Channel" () [1/3]
6
58
485
@karen_ullrich
Dr. Karen Ullrich
3 years
Dear neural compression enthusiasts, there is a new @PyTorch repo in town. Includes: 🔥neural image and video compression 🔥bits-back coders 🔥GPU entropy coders. We are already working on extensions; feel invited to contribute ❤️❤️❤️
3
72
365
@karen_ullrich
Dr. Karen Ullrich
3 years
🚨Internship application season is open y’all 🚨 If you are interested in information theory, neural compression, and continual learning; let’s learn and explore together @ FAIR New York. I got one internship spot for 2022 with a flexible starting date. DM me.
14
50
304
@karen_ullrich
Dr. Karen Ullrich
5 years
Proud to show our work on differentiable graphical models in the Fourier domain for protein reconstruction and other projection methods: Thanks to my amazing collaborators David Fleet, @marcusabrubaker , @vdbergrianne and @wellingmax . ❤️❤️❤️
1
44
238
@karen_ullrich
Dr. Karen Ullrich
4 years
2015: I was the only girl in @wellingmax 's AMLAB and pretty lonely at Neurips 2019: 👇Look at us now 😍 Thanks #WiML2019
1
4
120
@karen_ullrich
Dr. Karen Ullrich
3 years
🔥🔥🔥NEW PAPER ON ARXIV 🔥🔥🔥👇🏻
@yanndubs
Yann Dubois
3 years
Most data is processed by algorithms, but compressors (eg JPEG) are for human eyes. 🤓Our fix: formalize lossy compression that ensures perfect downstream predictions 🔥1000x gains vs JPEG on ImageNet🔥 w. Ben Bloem-Reddy @karen_ullrich @cjmaddison 1/9
9
140
820
1
3
78
@karen_ullrich
Dr. Karen Ullrich
2 years
📢 Neural Compression Enthusiasts @ #Neurips2022 ; Tue Nov 29th, 3.30 pm , Room 282 inside the Convention Center. Let’s meet, chat, and get inspired! Hope to see ya there ❤️
3
10
77
@karen_ullrich
Dr. Karen Ullrich
3 years
@y0b1byte @BahareFatemi I am sorry to hear about your experience. I would like to add that we should start to understand grad-student depression as a systemic problem, not just an individual one. Consequently, we should also seek systemic change to how grad school works.
1
1
70
@karen_ullrich
Dr. Karen Ullrich
10 months
Conference marathon continues: yesterday #ICML2023 in Hawaii, today #UAI2023 in Pittsburgh. See you at the Neural Compression Tutorial at 4 pm.
0
2
68
@karen_ullrich
Dr. Karen Ullrich
3 years
New paper accepted as long talk at ICML! We improve the compression capabilities of latent variable models. TLDR; 👇🏻 w/ the fantastic @YangjunR @_dsevero @_j_towns @AliMakhzani Arnaud Doucet, Ashish Khisti and all held together by @cjmaddison
@cjmaddison
Chris J. Maddison
3 years
You want to compress data with a latent variable model, but bits-back achieves a suboptimal code length (neg. ELBO). We show how to break this barrier with asympt. optimal coders: Monte Carlo Bits-Back (McBits, ) 1st auths @YangjunR @karen_ullrich @_dsevero
2
30
157
0
9
63
@karen_ullrich
Dr. Karen Ullrich
4 years
2.5y ago I started working on cryo-electron microscopy data. My first paper, w David Fleet and @WellingMax , introduced fast variational approximations to unknown proteins. @a_punjani , also a student of David's, leads a company that is vital in visualizing #COVID19 right now. 🤩👇
1
4
60
@karen_ullrich
Dr. Karen Ullrich
7 years
Eliminating architecture as an HP. Compressing models by a factor of up to 700x. Code for our #nips2017 paper
0
19
59
@karen_ullrich
Dr. Karen Ullrich
10 months
🤩 Tomorrow the #ICML2023 workshops shall begin. 🎙️ Join me at 9 AM: I will discuss absolutely all there is to know about #BitsBackCoding @ the Structured Probabilistic Inference & Generative Modeling workshop. #AI #ML #DataCompression
1
7
58
@karen_ullrich
Dr. Karen Ullrich
3 years
And you thought we couldn't make another bits back paper 😈 This time, we use bits back to strictly remove information🤯Specifically, we turn a sequence into a set by removing permutations 👉🏻 Compress a dataset when all you got is a sequence codec, e.g. an arithmetic coder.
@_dsevero
Daniel Severo
3 years
We've found a way to save bits during compression by forgetting the order between examples in a dataset. No machine learning required! Authors: @_j_towns (eq contr) A. Khisti @AliMakhzani @karen_ullrich 1/6
4
21
114
1
9
53
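The gain described in this thread can be sketched with a simple counting argument: a dataset of n distinct examples has n! equally valid orderings, so forgetting which ordering was used saves up to log2(n!) bits. This is only the back-of-the-envelope bound, not the authors' bits-back codec:

```python
import math

def order_bits_saved(n: int) -> float:
    """Upper bound on bits spent encoding the order of n distinct
    examples: log2(n!), computed stably via the log-gamma function."""
    return math.lgamma(n + 1) / math.log(2)

# Ordering 4 examples costs log2(4!) = log2(24) ≈ 4.58 bits
print(order_bits_saved(4))
```

For large datasets this grows roughly as n log2(n) bits, which is why the savings become substantial at scale.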
@karen_ullrich
Dr. Karen Ullrich
3 years
Check out @wellecks ' podcast, the @thesisreview . He hosts conversations about the development of research ideas through researchers' PhD theses. Great resource for undergrads and early PhD students.
@thesisreview
The Thesis Review Podcast
3 years
Episode 28 of The Thesis Review: Karen Ullrich ( @karen_ullrich ), "A Coding Perspective on Deep Latent Variable Models" We discuss information theory & minimum description length, covering her PhD research on compression and communication.
1
4
21
2
10
53
@karen_ullrich
Dr. Karen Ullrich
5 years
I couldn't find a simple implementation of a binary VAE on github, so I made one: . ❤️ #tensorflow_probability
2
9
53
@karen_ullrich
Dr. Karen Ullrich
2 years
Another one of @3blue1brown ’s educational masterpieces. Great intro to what we understand as information. Will steal it for presentations and lectures for sure ❤️😍💗
1
4
52
@karen_ullrich
Dr. Karen Ullrich
7 years
My slides on sparse Bayesian learning from yesterday's talk at the MILA Deep Learning Summer School.
1
22
47
@karen_ullrich
Dr. Karen Ullrich
2 years
Data/Model Compression enthusiasts, what do we think about a mini-meetup at #Neurips2022 on Monday Nov 28th ? @_dsevero @lucastheis @mentzer_f @YiboYang @s_mandt
6
0
34
@karen_ullrich
Dr. Karen Ullrich
3 years
Jakub explains lossy neural compression 🔥🔥🔥
@jmtomczak
Jakub Tomczak
3 years
After a long time, the 8th blog post is out! This time I discuss neural compression with deep generative modeling. Post 🤓: Code 💻:
4
64
310
0
1
30
@karen_ullrich
Dr. Karen Ullrich
3 years
I will be at the affinity groups poster session (!!) today. Really excited to meet and discuss with everyone there🤩 If you are interested in information theory and coding look for me👇 I can’t wait to meet and discuss your work, potential collaborations and/or a FAIR internship.
1
0
29
@karen_ullrich
Dr. Karen Ullrich
3 years
🔥New implementation 🔥 of our set-compressor available. Use your favorite codec, e.g. PNG, JPEG or flow+ANS and get significant gains 👇🏻
@_dsevero
Daniel Severo
3 years
Best part: this method doesn't compete with your favorite codec, it only improves it! 🔥 7.6% reduction in the number of bits needed to compress BMNIST using the "Bits-back with ANS" neural codec, with only a 10% increase in compute time Craystack code:
0
2
27
1
3
28
@karen_ullrich
Dr. Karen Ullrich
2 years
🌍, I am on a 🇪🇺 tour: I will be giving talks in Leipzig on Sep 7th, Vienna on Sep 9th, and Amsterdam on Sep 16th + 20th. Come by if you are interested in chatting about machine learning and information theory IRL. More info 👇🏻
2
0
27
@karen_ullrich
Dr. Karen Ullrich
3 years
👇🏻We showed not just THAT but HOW known equivalence relationships reduce compression rates by orders of magnitude. 🔥 And got a spotlight at #NeurIPS 🔥 Grateful to have worked w. @yanndubs , Ben Bloem-Reddy + @cjmaddison on this project ❤️
@yanndubs
Yann Dubois
3 years
We released the code for our paper (now spotlight at #NeurIPS2021 🥳): Pretrained compressors are also on torch hub, use the following few lines to compress your image datasets (1000x gains vs JPEG on ImageNet): Colab:
3
14
104
0
1
26
@karen_ullrich
Dr. Karen Ullrich
3 years
AI for the Planet mini conference starting now! Spend 4h to hear about waste management, climate change and biodiversity. #FaceTheClimateEmergency 👉
0
6
25
@karen_ullrich
Dr. Karen Ullrich
3 years
@emidup @sedielem @adam_golinski @notmilad @yeewhye @ArnaudDoucet1 😍 Love the execution of the realisation that in compression we may want to overfit!!
3
0
23
@karen_ullrich
Dr. Karen Ullrich
1 year
Join us in discussing the frontiers of communication at our ICML “Neural Compression workshop”. And stay tuned: call for papers coming soon.
@StephanMandt
Stephan Mandt
1 year
🎉Exciting news! Our "Neural Compression" workshop proposal has been accepted at #ICML 2023! Join us to explore the latest research developments, including perceptual losses and more compute-efficient models! @BerivanISIK , @YiboYang , @_dsevero , @karen_ullrich , @robamler
4
24
99
0
2
23
@karen_ullrich
Dr. Karen Ullrich
3 years
Also, I hid an unrequested but fun little puzzle in the cover page.
3
0
22
@karen_ullrich
Dr. Karen Ullrich
6 years
. @hen_drik and I will be speaking tomorrow at #34c3 12:45 in #SaalAdams on #ArtificialIntelligence and how it influences our public life
0
6
20
@karen_ullrich
Dr. Karen Ullrich
10 months
If you missed it yesterday, I heard the poster will make a reappearance at the Neural Compression workshop today in Room 317A 😉
@compute_ri
Putri van der Linden
10 months
Point clouds are very flexible, but due to their irregular nature, learning on them is much more expensive than grid data. We show how we can process them efficiently without accuracy drops by getting rid of their irregularity. See how at @TAGinDS at #icml2023 !
2
14
129
0
2
20
@karen_ullrich
Dr. Karen Ullrich
4 years
2. We propose a design to model missing information instead of ignoring it. 3. By introducing auxiliary latent variables in the decoder, we can sample more realistic messages. [3/3]
2
0
17
@karen_ullrich
Dr. Karen Ullrich
6 years
We just put the slides for "Beeinflussung durch Künstliche Intelligenz" ("Influence through Artificial Intelligence"), our talk at #34c3, online.
0
5
13
@karen_ullrich
Dr. Karen Ullrich
4 years
After sitting on both sides of the reviewing process, here is a proposal to make it more enjoyable for everyone. [1/4]
2
0
16
@karen_ullrich
Dr. Karen Ullrich
5 years
Using a generative model to analyse a generative model, so clever :)
@DaniloJRezende
Danilo J. Rezende
5 years
Happy to share our work: Shaping Belief States with Generative Environment Models for RL. Thanks Karol Gregor, Frederic Besse, Yan Wu, Hamza Merzic and @avdnoord ! #RL #SelfSupervised #GenerativeWorldModels #BeliefStates
2
57
217
0
2
14
@karen_ullrich
Dr. Karen Ullrich
2 years
"Hard to believe" is an understatement for large language models atm.
@hausman_k
Karol Hausman
2 years
Me: Nobody understands my sense of humor AI: I got you.
17
132
785
2
1
14
@karen_ullrich
Dr. Karen Ullrich
3 years
@maosbot TBH I think this statement is tone deaf. It ignores that we are 👉🏻already massively contributing to providing tools that help marginalization at scale 👉🏻often do not understand the consequences of our tools 👉🏻traditionally do not give space to underrep. groups in this community
2
0
13
@karen_ullrich
Dr. Karen Ullrich
2 years
Huge service to the community by @julberner : 1st open-source implementation of bits-back compression for diffusion models (= SOTA for lossless compression)❤️🔥🔥
@julberner
Julius Berner @ICLR
2 years
📢📢New feature in #NeuralCompression repo: Bits-Back compression for diffusion models! Compress image data 🖼️ using diffusion models at an effective rate close to the (negative) ELBO. See: Some context ⏩[1/4]
1
6
35
0
2
13
@karen_ullrich
Dr. Karen Ullrich
6 years
Excited to be at the #Bayesian #DeepLearning Workshop all day. Also presenting a poster, come by and have a chat #AMLab #NIPS2017
0
3
13
@karen_ullrich
Dr. Karen Ullrich
5 years
Did any of u try ELIMINATING ALL BAD LOCAL MINIMA (by @jaschasd + Kenji Kawaguchi) for real? what application? what is ur experience? theoretical argument or actually useful? how did u tune optimization for a and b?
1
4
12
@karen_ullrich
Dr. Karen Ullrich
3 years
@danijarh @emidup @sedielem @adam_golinski @notmilad @yeewhye @ArnaudDoucet1 Yeah :) From a minimum description length point of view, neural architectures act as universal computer language and the weight vector as code! Very OG Solomonoff interpretation of MDL :) Love it.
0
0
11
@karen_ullrich
Dr. Karen Ullrich
10 months
Would love to chat with folks tonight. Where is everyone going? #ICML2023
1
0
11
@karen_ullrich
Dr. Karen Ullrich
7 years
My invited talk at the CIFAR DL Summer School is now online :) I am on TV, mom!
0
0
9
@karen_ullrich
Dr. Karen Ullrich
4 years
We found 3 modelling choices relevant when the bandwidth of the noisy channel, aka weak wifi, varies. 1. Instead of separating the sub-tasks of compression (source coding) and error correction (channel coding), we propose to model both jointly. [2/3]
1
0
10
@karen_ullrich
Dr. Karen Ullrich
5 years
0
0
9
@karen_ullrich
Dr. Karen Ullrich
2 years
I will be at the #WiMLworkshop mentorship tables now! Come by and talk about neural compression, internships and whatever is on your mind.
0
1
10
@karen_ullrich
Dr. Karen Ullrich
4 years
🤦I somehow messed up both links to my co-authors @DaniloJRezende and @fabiointheuk .
1
0
10
@karen_ullrich
Dr. Karen Ullrich
5 years
I made another tool for bit enthusiasts: Turn your float tensor to binary (and back) according to the IEEE-754 standard (or any custom format). It's differentiable 🤩 but not very efficient 😬. ❤️ #PyTorch
0
0
8
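The tweet's repo isn't linked in this archive, but the underlying idea can be illustrated with Python's standard `struct` module, which exposes the same IEEE-754 binary32 bit pattern. This is only the plain (non-differentiable) baseline, not the tweet's PyTorch implementation:

```python
import struct

def float_to_bits(x: float) -> str:
    """Pack a Python float into IEEE-754 binary32 and return its 32-bit string."""
    (n,) = struct.unpack(">I", struct.pack(">f", x))
    return format(n, "032b")

def bits_to_float(bits: str) -> float:
    """Inverse: reinterpret a 32-bit string as an IEEE-754 binary32 float."""
    (x,) = struct.unpack(">f", struct.pack(">I", int(bits, 2)))
    return x

# 1.0 in binary32: sign 0, exponent 01111111, all-zero mantissa
print(float_to_bits(1.0))  # → 00111111100000000000000000000000
```

Making this mapping differentiable (as the tweet describes) would require replacing the hard bit extraction with a smooth relaxation, which is the interesting part of the linked tool.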
@karen_ullrich
Dr. Karen Ullrich
4 years
All my amazing colleagues Putri van der Linden, @ElisevanderPol , @vdbergrianne , @YugeTen , @LindaPetrini and @sindy_loewe
0
0
9
@karen_ullrich
Dr. Karen Ullrich
7 years
Convolutional seq2seq models for music. Poster from #MILA #DLSS Montreal. Work with my student @eelcovdw .
0
2
8
@karen_ullrich
Dr. Karen Ullrich
3 years
0
0
8
@karen_ullrich
Dr. Karen Ullrich
7 months
Note that you must currently be in a relevant PhD program.
3
0
7
@karen_ullrich
Dr. Karen Ullrich
3 years
Made with ❤️ by @mattmucklm , @_dsevero + Jordan Juravsky
3
0
7
@karen_ullrich
Dr. Karen Ullrich
6 years
0
1
5
@karen_ullrich
Dr. Karen Ullrich
2 years
Amsterdam Sep 16th, 13:00 @ Vrije Universiteit Amsterdam. Hosted by Prof. Jakub Tomczak
2
1
6
@karen_ullrich
Dr. Karen Ullrich
2 years
This is no corporate event: no fancy food, no preparations, it’s just us in a room. Bring your own coffee (BYOC). Maybe I can hook up the stereo with my playlist; that's as fancy as it gets. Also, no registration required.
1
0
6
@karen_ullrich
Dr. Karen Ullrich
2 years
Amsterdam Sep 20th, 16:00 @ University of Amsterdam, Science Park LAB42. Hosted by Prof. Jan-Willem van de Meent (might change, this one is still under construction)
0
0
6
@karen_ullrich
Dr. Karen Ullrich
2 years
@brandondamos @neurips Been there myself, sorry to hear, always sucks.
0
0
6
@karen_ullrich
Dr. Karen Ullrich
7 years
automatic hyperparameter optimization for deep learning
0
3
5
@karen_ullrich
Dr. Karen Ullrich
2 years
Leipzig September 7th, 15:00 @ AI Institute / Leipzig, room A.03.07. Hosted by Prof. Sayan Mukherjee
1
0
4
@karen_ullrich
Dr. Karen Ullrich
7 years
@ElisevanderPol @zinmalu Hihi. How many CS professors do you need to fix ...
0
0
4
@karen_ullrich
Dr. Karen Ullrich
3 years
@zeynepakata @ml4science @uni_tue @MPI_IS Well deserved Zeynep! You got vision (I know great pun 😅)
0
0
4
@karen_ullrich
Dr. Karen Ullrich
2 years
@senya_ashuha @liyzhen2 @dpkingma @latentjasper Pleasure and privilege to have been invited! Congratulations 🔥🎉❤️❤️
1
0
4
@karen_ullrich
Dr. Karen Ullrich
4 years
1. Don’t say “This is bad”, ask “Is this bad?”. 2. Let's not grade a paper before the author rebuttal. [2/4]
1
0
4
@karen_ullrich
Dr. Karen Ullrich
3 years
@george_toderici @mattmucklm @_dsevero It does open the door 🤩 and with that we may well blur the boundaries between probabilistic modelling and arithmetic coding. Very exciting, I think.
0
0
4
@karen_ullrich
Dr. Karen Ullrich
2 years
Vienna September 9th, 10:00 @ Institute of Science and Technology Austria. Hosted by Prof. Christoph Lampert
1
0
4
@karen_ullrich
Dr. Karen Ullrich
6 years
Great (technical) read about reality after AI-generated fake content takes over
@GiorgioPatrini
Giorgio Patrini
6 years
After many exciting discussions with @SimoneLini @mortendahlcs and Hamish Ivey-Law, we wrote on the implications of AI and digital forgery on modern society and discuss plausible technical solutions: And we did human vs. machine exp, e.g. which is real?
3
19
34
0
0
4
@karen_ullrich
Dr. Karen Ullrich
1 year
@AIandMLonly Having a PhD is actually not a requirement. But being in a PhD program is. Hope I did not sound rude, just wanted to save myself from replying to many emails having to clarify that <3
1
0
4
@karen_ullrich
Dr. Karen Ullrich
5 years
Guilty 🙈
@amuellerml
Andreas Mueller (also at mastodon)
5 years
First blog post in 5 years! Don't cite the "No Free Lunch" theorem! cc @betatim @mrocklin @hug_nicolas This was probably the longest of the blog-posts I was thinking about writing, maybe the next one won't take 5 years.
6
66
262
0
0
4
@karen_ullrich
Dr. Karen Ullrich
7 months
Proposal for an AI conscienceless test: instead of just reward hacking, show me your AI demonstrating malicious compliance. How can we test that?
1
0
3
@karen_ullrich
Dr. Karen Ullrich
3 years
@mmbronstein @adjiboussodieng @_joaogui1 Doesn’t that very example show that the issue is not 0 and 1? There are different levels of community (family, municipality, state, etc.) with different levels of solidarity (shared income, taxation, infrastructure, etc.). We can define the rules of this continuum however we see fit.
1
0
3
@karen_ullrich
Dr. Karen Ullrich
3 years
@hen_drik @FIfF_de Best ❤️❤️❤️
0
0
3
@karen_ullrich
Dr. Karen Ullrich
7 years
How to ... Ethical NLP by Katharine Jarmul ( @kjam )
@kjam
katharine jarmul
7 years
Wanna watch my (first ever) keynote? On Ethical Machine Learning? You can! 🎉 Thanks @pydataamsterdam
2
16
36
0
0
3
@karen_ullrich
Dr. Karen Ullrich
7 months
@victor_p91 Thank you, I fixed my settings. There will likely be more open spots at FAIR in 2024.
0
0
3
@karen_ullrich
Dr. Karen Ullrich
2 years
@_dsevero @lucastheis @mentzer_f @YiboYang @s_mandt @mentzer_f @jefrankle @_dsevero @Alii_Ganjj Love that there is so much interest 😍 I sent a room request to the #Neurips2022 organizers and will send an invite link 📅 to the event on Monday. In case Twitter is down by then, follow me at karen_ullrich@sigmoid.social
0
0
2
@karen_ullrich
Dr. Karen Ullrich
3 years
@F_Kaltheuner @karen_ullrich (Postdoc probabilistic AI) @MaxIlse (PhD cand. causal AI) @SarahEskens (Ass. prof. for digital rights + AI)
0
0
2
@karen_ullrich
Dr. Karen Ullrich
4 years
Plus, if you happen to be wrong as a reviewer, you don’t actually have to admit to it ;). After all, you were only asking a question. [4/4]
1
0
2
@karen_ullrich
Dr. Karen Ullrich
1 year
0
0
2
@karen_ullrich
Dr. Karen Ullrich
7 months
@Alii_saays Preferably yes, but we can also arrange a remote option if a visa is an issue (not for all remote locations though). Preference will not influence the hiring process.
1
0
2
@karen_ullrich
Dr. Karen Ullrich
7 months
@cjmaddison Feels like a very big publicity stunt to me. Also, MSFT drawing up a CEO contract for Altman over the course of a weekend... don't know if I believe that.
0
0
2
@karen_ullrich
Dr. Karen Ullrich
3 years
@maosbot I know, otherwise I wouldn’t have responded :) I think it would be more helpful to discuss here instead of deleting anything.
1
0
2
@karen_ullrich
Dr. Karen Ullrich
6 years
@YoSiJo @hen_drik E.g., if I train my algorithm to distinguish cats from dogs and then show it a plate of spaghetti, it will tell me the plate is 99% cat and 1% dog. A problem my research group in Amsterdam is working on intensively. 2/2
0
0
1
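The closed-world failure described in this tweet can be sketched with a plain two-class softmax: because the classifier must split its probability mass between "cat" and "dog", even an out-of-distribution input ends up with a confident-looking prediction. The logits below are hypothetical, not from the group's actual model:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

# A cat-vs-dog classifier has no "neither" option: arbitrary logits
# produced for a spaghetti photo still normalize to a confident split.
print(softmax([4.6, 0.0]))  # ≈ [0.99, 0.01] — "99% cat, 1% dog"
```

Detecting such out-of-distribution inputs (rather than forcing a choice) is the research problem the tweet alludes to.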
@karen_ullrich
Dr. Karen Ullrich
3 years
@mmbronstein @adjiboussodieng @_joaogui1 The EU being a good example of how national borders got dissolved slightly. And IMO it is about time for more global solidarity.
1
0
1
@karen_ullrich
Dr. Karen Ullrich
4 years
0
0
1
@karen_ullrich
Dr. Karen Ullrich
3 years
@DaniloJRezende Aww Thank you, Danilo ❤️
0
0
1
@karen_ullrich
Dr. Karen Ullrich
3 years
@maosbot I work in data compression and I worry a lot about biases! I think we could be better at developing standardized tests for AI instead of presenting a novel method yet again. What do you think can be done by us (concretely)?
1
0
1
@karen_ullrich
Dr. Karen Ullrich
3 years
0
0
1