Yibo Yang Profile
Yibo Yang

@YiboYang

Followers 285 · Following 150 · Media 9 · Statuses 28

Research scientist at Chan Zuckerberg Initiative, working on ML/AI + science.

Joined August 2013
@YiboYang
Yibo Yang
1 month
I had the pleasure of giving a talk and sharing some recent work on diffusion + compression (together with @justuswill and @StephanMandt) at the Learn to Compress workshop at #isit2025. Here are my slides: Thanks again for the invitation!
0 replies · 3 retweets · 20 likes
@YiboYang
Yibo Yang
4 months
RT @StephanMandt: Thrilled to share that my student Justus Will and former student @YiboYang had their work selected as an ICLR 2025 Oral (….
0 replies · 4 retweets · 0 likes
@YiboYang
Yibo Yang
11 months
RT @lazar_atan: 🚀Introducing — Meta Flow Matching (MFM) 🚀. Imagine predicting patient-specific treatment responses for unseen cases or buil….
0 replies · 54 retweets · 0 likes
@YiboYang
Yibo Yang
11 months
📢 Reminder to submit your cool results to our @NeurIPS2024 workshop on ML & Compression!!
Deadline: Sept 30 (AoE)
Submission link:
Fill out this form if you'd like to become a reviewer:
Link card (docs.google.com): Please fill out this form if you are interested in being a reviewer for the NeurIPS 2024 Workshop on Machine Learning and Compression. To accommodate as many potential submissions as possible, the...
@YiboYang
Yibo Yang
1 year
Excited to co-organize another workshop on machine learning and compression, at #NeurIPS2024!
Topics:
📦 ML + data/model compression
⚡ Resource-efficient representations
🧠 Info-theoretic aspects of learning & intelligence
More details to come at
2 replies · 8 retweets · 34 likes
@YiboYang
Yibo Yang
11 months
RT @Official_CLIC: We’re happy to announce that CLIC 2025 will be co-located with Picture Coding Symposium in December 2025. More details t….
0 replies · 6 retweets · 0 likes
@YiboYang
Yibo Yang
2 years
RT @AliMakhzani: I'm excited to be in New Orleans for #NeurIPS2023! Looking forward to catching up with old friends and meeting new folks.….
0 replies · 7 retweets · 0 likes
@YiboYang
Yibo Yang
2 years
RT @LinYorker: For the first time, we (with @f_dangel, @runame_, @k_neklyudov @akristiadi7, Richard E. Turner, @AliMakhzani) propose a spar….
0 replies · 8 retweets · 0 likes
@YiboYang
Yibo Yang
2 years
Excited to be @NeurIPSConf next week and chat about compression, steering LLMs, diffusion and more! Also looking for internship & full-time jobs in 2024. And stop by poster #1905 on Wed Dec 13, 5 - 7 pm to learn about lossy compression + optimal transport!
@YiboYang
Yibo Yang
2 years
Here's the paper:
5-minute talk:
Code:
Poster:
Joint work with the wonderful Stephan Eckstein (ETH Zürich/Tübingen), Marcel Nutz (Columbia), and @s_mandt (UCI) 🎷🎶 (5/5)
Link card (github.com): Official project page for Estimating the Rate-Distortion Function by Wasserstein Gradient Descent - yiboyang/wgd
0 replies · 1 retweet · 9 likes
@YiboYang
Yibo Yang
2 years
We also solve the optimization problems (1) (2) (3) by gradient flow in the space of probability measures (the "ν"s). Our resulting Wasserstein gradient descent algorithm is neural-network-free, and essentially computes the rate-distortion-optimal quantization points. (4/5)
1 reply · 1 retweet · 10 likes
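As a companion to (4/5), here is a minimal particle-based sketch of the idea, assuming the standard rate-distortion Lagrangian F(ν) = E_{x~µ}[−log E_{y~ν} exp(−λρ(x, y))] with squared-error ρ and a uniform-weight particle approximation of ν. The function name wgd_rd, the toy data, and all hyperparameters are illustrative; the authors' actual implementation is the yiboyang/wgd repo linked in (5/5).

import numpy as np

# Sketch of Wasserstein gradient descent (WGD) for rate-distortion estimation:
# parameterize nu as n equal-weight particles and do gradient descent on their
# locations, which follows the Wasserstein gradient flow of F(nu).
def wgd_rd(x, n_particles=16, lam=4.0, lr=0.05, steps=2000, seed=0):
    """x: data samples of shape (m, d); returns particle locations (n, d)."""
    rng = np.random.default_rng(seed)
    m, _ = x.shape
    y = x[rng.choice(m, size=n_particles)]  # init particles at data points
    for _ in range(steps):
        diff = x[:, None, :] - y[None, :, :]             # (m, n, d)
        cost = np.sum(diff ** 2, axis=-1)                # rho(x_i, y_j), (m, n)
        # Soft assignments proportional to exp(-lam * rho), normalized per
        # data point (subtracting the row minimum only for numerical stability).
        w = np.exp(-lam * (cost - cost.min(axis=1, keepdims=True)))
        w /= w.sum(axis=1, keepdims=True)
        # grad_j F = -(2 * lam / m) * sum_i w_ij * (x_i - y_j)
        grad = -2.0 * lam / m * np.einsum('mn,mnd->nd', w, diff)
        y = y - lr * grad
    return y

# Toy usage: particles should settle near the two modes of a Gaussian mixture,
# i.e. the rate-distortion-optimal quantization points mentioned in the tweet.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2.0, 0.5, size=(200, 2)),
                       rng.normal(+2.0, 0.5, size=(200, 2))])
print(wgd_rd(data))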
@YiboYang
Yibo Yang
2 years
These connections allow us to adapt sample complexity results for entropic-OT ("how many samples from µ&ν does it take to get a good estimate of EOT(µ, ν)?") to characterize the sample complexity of estimating the rate-distortion function of information theory! (3/5)
Tweet media one
1 reply · 1 retweet · 5 likes
@YiboYang
Yibo Yang
2 years
Turns out they are equivalent optimization problems! To see this, note the objective in 1) is the negative ELBO, problem 2) can be relaxed to 1) by a property of mutual info, and the objective in 3) is the marginal log-likelihood (the value of NELBO after the E-step of EM). (2/5).
1 reply · 0 retweets · 5 likes
@YiboYang
Yibo Yang
2 years
Say your data follows a distribution µ. Fix a ground cost ρ, e.g., squared error. Consider 1) lossy compression of your data; 2) finding the "closest" distribution to µ under the entropic-OT cost; 3) denoising your data under a Gaussian noise model, via non-parametric MLE. (1/5)
1 reply · 1 retweet · 6 likes
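For readers skimming the reversed thread, here is one way to write problems 1)-3) from (1/5) side by side in the squared-error case. This is a reconstruction from the thread's own description (λ is the Lagrange multiplier on distortion, and σ² = 1/(2λ) is the matching noise variance), not an excerpt from the paper:

% Data distribution \mu; candidate reproduction distribution \nu;
% ground cost \rho(x, y) = \|x - y\|^2.

% 1) Lossy compression: the rate-distortion Lagrangian, whose objective
%    is the negative ELBO referred to in (2/5):
\min_{\nu} \; \mathbb{E}_{x \sim \mu}\left[ -\log \mathbb{E}_{y \sim \nu}\, e^{-\lambda \rho(x, y)} \right]

% 2) Entropic-OT projection: the \nu "closest" to \mu under the
%    entropy-regularized optimal transport cost \mathrm{EOT}_\lambda:
\min_{\nu} \; \mathrm{EOT}_{\lambda}(\mu, \nu)

% 3) Denoising / non-parametric MLE under a Gaussian noise model,
%    with noise variance \sigma^2 = 1/(2\lambda):
\max_{\nu} \; \mathbb{E}_{x \sim \mu}\left[ \log \int \mathcal{N}(x;\, y,\, \sigma^2 I)\, \nu(\mathrm{d}y) \right]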
@YiboYang
Yibo Yang
2 years
What do optimal lossy compression and projection under an entropic-OT cost have in common? Both are equivalent to a denoising/maximum-likelihood estimation problem! We show this in our NeurIPS23 paper and study the fundamental limit of lossy compression using optimal transport🧵
3 replies · 23 retweets · 135 likes
@YiboYang
Yibo Yang
2 years
RT @zicokolter: New work with @andyzou_jiaming on analyzing internal representations of LLMs, to find phenomena that correlate quite well w….
0 replies · 5 retweets · 0 likes
@YiboYang
Yibo Yang
2 years
My latest ICCV paper with @s_mandt does this, pushing the performance of nonlinear transform coding at the lower limit of decoding complexity. Sadly I won't be at ICCV to present it, but please check out our paper and code: (2/2)
1 reply · 0 retweets · 6 likes
@YiboYang
Yibo Yang
2 years
(paper plug alert) Neural image compression methods have handsomely beaten classical ones in compression performance, but they demand much more computation. What happens if we replace the deep conv-net decoder with something much simpler/cheaper, like JPEG? (1/2)
1 reply · 6 retweets · 27 likes
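To make the cheap-decoder idea in (1/2) concrete, here is a toy sketch in which decoding is nothing but dequantization followed by an inverse 8×8 blockwise DCT, i.e. JPEG-like and neural-network-free. The learned encoder and entropy model of the actual paper are replaced here by a stand-in blockwise-DCT encoder, so treat this purely as an illustration of where the decode-side computation goes, not as the paper's method:

import numpy as np
from scipy.fft import dctn, idctn

BLOCK = 8  # JPEG-style 8x8 blocks

def jpeg_like_decode(coeffs, qstep=8.0):
    """The entire decoder: dequantize, then inverse blockwise DCT.
    No neural network anywhere on the receive side."""
    out = np.empty(coeffs.shape)
    h, w = coeffs.shape
    for i in range(0, h, BLOCK):
        for j in range(0, w, BLOCK):
            block = coeffs[i:i + BLOCK, j:j + BLOCK] * qstep
            out[i:i + BLOCK, j:j + BLOCK] = idctn(block, norm='ortho')
    return out

def toy_encode(img, qstep=8.0):
    """Stand-in encoder (blockwise DCT + uniform quantization). In the
    paper's setting this side would be learned, while the decoder above
    stays fixed and cheap."""
    coeffs = np.empty(img.shape)
    h, w = img.shape
    for i in range(0, h, BLOCK):
        for j in range(0, w, BLOCK):
            coeffs[i:i + BLOCK, j:j + BLOCK] = np.round(
                dctn(img[i:i + BLOCK, j:j + BLOCK], norm='ortho') / qstep)
    return coeffs

# Round trip on a random 16x16 "image": coarser qstep => lower rate, higher MSE.
img = np.random.default_rng(0).uniform(0.0, 255.0, size=(16, 16))
rec = jpeg_like_decode(toy_encode(img))
print(float(np.mean((img - rec) ** 2)))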
@YiboYang
Yibo Yang
2 years
RT @neural_compress: Please join our social at Maui Brewing Co. Waikiki at 6pm after the workshop. Everyone, especially compression and inf….
0 replies · 4 retweets · 0 likes