Julia Chae

@juliachae_

Followers 335 · Following 398 · Media 0 · Statuses 54

PhD @MIT_CSAIL; prev @VectorInst, @UofTRobotics

Cambridge, Massachusetts
Joined August 2022
@juliachae_
Julia Chae
11 months
My first first-authored (w/ @shobsund) paper of my PhD is finally out! 🚀 Check out our thread to see how general-purpose representations + personalized synthetic data enable personalized vision representations. 🌐:
@shobsund
Shobhita Sundaram
11 months
Personal vision tasks, like detecting *your mug*, are hard: they're data-scarce and fine-grained. In our new paper, we show you can adapt general-purpose vision models to these tasks from just three photos! 📝: https://t.co/MDsEYLBKvS 💻: https://t.co/z86GGcRpkl (1/n)
💬 5 · 🔁 7 · ❤️ 161
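The recipe in the thread above, roughly: start from a general-purpose vision backbone, synthesize many views of the one personal object, and train a small personalized head contrastively. A minimal sketch of that general idea (the backbone choice and every function below are illustrative placeholders, not the paper's actual code):

import torch
import torch.nn as nn
import torch.nn.functional as F

# Frozen general-purpose backbone; DINOv2 ViT-S/14 is used here purely as
# an illustrative choice of general-purpose representation.
backbone = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14')
for p in backbone.parameters():
    p.requires_grad = False

head = nn.Linear(384, 128)  # small trainable projection on top

def embed(images):
    # images: (B, 3, H, W) with H, W multiples of 14
    with torch.no_grad():
        feats = backbone(images)              # (B, 384) global features
    return F.normalize(head(feats), dim=-1)   # only the head is trained

def personalization_loss(anchor, positive, negatives, tau=0.07):
    # InfoNCE: synthetic views of *your* object are positives,
    # everything else is a negative.
    pos = (anchor * positive).sum(-1, keepdim=True) / tau   # (B, 1)
    neg = anchor @ negatives.T / tau                        # (B, N)
    logits = torch.cat([pos, neg], dim=1)
    target = torch.zeros(len(anchor), dtype=torch.long)     # positive = index 0
    return F.cross_entropy(logits, target)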
@juliachae_
Julia Chae
1 month
Attending ICCV next week? Stop by @__justinkay's poster to hear about how CODA enables efficient active model selection to support real-world deployments!
@__justinkay
Justin Kay
1 month
There are now millions of publicly available AI models – which one is right for you? We introduce CODA (@ICCVConference Highlight!), a method for *active model selection.* CODA selects the best model for your data with any labeling budget – often as few as 25 labeled examples. 1/
💬 0 · 🔁 3 · ❤️ 7
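For readers new to the setting: active model selection asks which few points to label so you can tell candidate models apart. The sketch below uses a naive disagreement heuristic only to make the problem concrete; it is not CODA's actual algorithm, and label_fn stands in for a human labeling oracle:

import numpy as np

def select_model(models, pool_x, label_fn, budget=25):
    # Cache every candidate model's predictions on the unlabeled pool.
    preds = np.stack([m(pool_x) for m in models])     # (n_models, n_points)
    labeled, correct = [], np.zeros(len(models))
    for _ in range(budget):
        # Query the pool point the candidates disagree on most.
        disagreement = np.array([len(np.unique(preds[:, i]))
                                 for i in range(preds.shape[1])])
        disagreement[labeled] = -1                    # never re-query a point
        i = int(disagreement.argmax())
        y = label_fn(i)                               # one oracle label
        labeled.append(i)
        correct += (preds[:, i] == y)                 # score each candidate
    return int(correct.argmax())                      # best empirical model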
@juliachae_
Julia Chae
1 month
Excited to be co-organizing @CV4E_ICCV for the second year at #ICCV2025 in Honolulu! We have an amazing lineup of panelists (+ speakers) this year; don't miss it 👀
@CV4E_ICCV
CV4E Workshop @ ICCV
1 month
Excited to announce our #CV4E panel at @ICCVConference: “When Domains Collide: Ecology and CV in Practice”! 🌍🔍 Leaders across the AI ↔️ Ecology spectrum share how they’re connecting methods, data, and impact in the wild. 🌿
💬 0 · 🔁 5 · ❤️ 13
@phillip_isola
Phillip Isola
1 month
Over the past year, my lab has been working on fleshing out the theory and applications of the Platonic Representation Hypothesis. Today I want to share two new works on this topic:
Eliciting higher alignment: https://t.co/KY4fjNeCBd
Unpaired rep learning: https://t.co/vJTMoyJj5J
1/9
💬 10 · 🔁 119 · ❤️ 696
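One way this line of work quantifies "alignment" between two models is by comparing neighborhood structure across their embedding spaces. A simplified mutual k-nearest-neighbor score (the papers' exact metrics differ in details):

import numpy as np

def knn_indices(feats, k):
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sims = f @ f.T                             # cosine similarities
    np.fill_diagonal(sims, -np.inf)            # exclude self-matches
    return np.argsort(-sims, axis=1)[:, :k]    # (n, k) neighbor ids

def mutual_knn_alignment(feats_a, feats_b, k=10):
    # Fraction of k-NN neighbors the two representation spaces share,
    # averaged over the items both models embed.
    nn_a, nn_b = knn_indices(feats_a, k), knn_indices(feats_b, k)
    overlap = [len(set(a) & set(b)) / k for a, b in zip(nn_a, nn_b)]
    return float(np.mean(overlap))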
@sharut_gupta
Sharut Gupta
1 month
[1/7] Paired multimodal learning shows that training with text can help vision models learn better image representations. But can unpaired data do the same? Our new work shows that the answer is yes! w/ @shobsund @ChenyuW64562111, Stefanie Jegelka and @phillip_isola
💬 11 · 🔁 53 · ❤️ 437
@CV4E_ICCV
CV4E Workshop @ ICCV
5 months
🚨 Good news! The submission deadline for the CV4E workshop at #ICCV2025 has been extended from July 4 to July 7! #CV4E #ICCV
💬 0 · 🔁 1 · ❤️ 2
@sarameghanbeery
Sara Beery
5 months
Call for papers for the second CV for Ecology workshop at ICCV is live!!
@CV4E_ICCV
CV4E Workshop @ ICCV
5 months
We are thrilled to announce that the CV for Ecology Workshop is returning for its second year at #ICCV2025 in Honolulu, Hawaii! If your work combines computer vision and ecology, submit a paper and join us! Deadlines: July 4 (Proceedings) / July 22 (Non-Archival)
💬 0 · 🔁 3 · ❤️ 19
@cv4ecology
CV4Ecology Workshop
6 months
The call for applications has just been released for #CV4Ecology2026!! This three-week intensive program trains ecologists and conservation practitioners to develop their own AI tools for their own data.
When: Jan 12-30, 2026
Where: SCBI @SMConservation
💬 1 · 🔁 10 · ❤️ 21
@ShivamDuggal4
Shivam Duggal
7 months
Drop by our poster at Hall 3 + Hall 2B, #99 at 10 AM SGT! Unfortunately none of us could travel, but our amazing friends @jyo_pari @juliachae_ @shobsund & @mhamilton723 — will be presenting it 🙌 Feel free to reach out with any questions — I’ll be online & cheering them on 💖
@ShivamDuggal4
Shivam Duggal
1 year
Current vision systems use fixed-length representations for all images. In contrast, human intelligence or LLMs (e.g., OpenAI o1) adjust compute budgets based on the input. Since different images demand different processing & memory, how can we enable vision systems to be adaptive? 🧵
💬 0 · 🔁 5 · ❤️ 15
@Sa_9810
Shaden
7 months
Excited to share our ICLR 2025 paper, I-Con, a unifying framework that ties together 23 methods across representation learning, from self-supervised learning to dimensionality reduction and clustering. Website: https://t.co/QD6OciHzmt A thread 🧵 1/n
💬 1 · 🔁 24 · ❤️ 94
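The unifying objective, as the I-Con thread presents it, is an averaged KL divergence between a supervisory neighborhood distribution p(.|i) and a learned one q(.|i); choosing p and q differently recovers the different methods. A toy rendering, with an SNE-style Gaussian neighborhood as one illustrative choice:

import torch
import torch.nn.functional as F

def icon_loss(p, q):
    # p, q: (n, n), each row a conditional distribution over neighbors.
    # Averages KL(p(.|i) || q(.|i)) over rows i.
    return F.kl_div(q.log(), p, reduction='batchmean')

def gaussian_neighborhood(feats, tau=0.5):
    # One possible neighborhood distribution (SNE-style).
    d = torch.cdist(feats, feats) ** 2
    d.fill_diagonal_(float('inf'))     # a point is not its own neighbor
    return F.softmax(-d / tau, dim=1)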
@juliachae_
Julia Chae
7 months
Presenting PRPG at @iclr_conf on Saturday in Singapore! Would love to chat / catch up throughout the week, so feel free to reach out :)
@shobsund
Shobhita Sundaram
11 months
Personal vision tasks, like detecting *your mug*, are hard: they're data-scarce and fine-grained. In our new paper, we show you can adapt general-purpose vision models to these tasks from just three photos! 📝: https://t.co/MDsEYLBKvS 💻: https://t.co/z86GGcRpkl (1/n)
💬 0 · 🔁 1 · ❤️ 30
@__justinkay
Justin Kay
7 months
Adapt object detectors to new data *without labels* with Align and Distill (ALDI), our domain adaptation framework published last week in @TmlrOrg (with a Featured Certification [Spotlight]! @TmlrCert)
💬 1 · 🔁 3 · ❤️ 20
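ALDI itself is an object-detection framework; as a rough illustration of the student-teacher self-distillation pattern such methods build on, here is a classification-flavored sketch (placeholder code, not ALDI's API):

import copy
import torch
import torch.nn.functional as F

def make_teacher(student):
    teacher = copy.deepcopy(student)       # teacher starts as a frozen copy
    for p in teacher.parameters():
        p.requires_grad = False
    return teacher

@torch.no_grad()
def ema_update(teacher, student, m=0.999):
    # Teacher weights track an exponential moving average of the student.
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(m).add_(ps, alpha=1 - m)

def adapt_step(student, teacher, src_x, src_y, tgt_x, opt, conf=0.8):
    loss = F.cross_entropy(student(src_x), src_y)   # supervised source loss
    with torch.no_grad():
        probs = F.softmax(teacher(tgt_x), dim=1)    # pseudo-label the target
    score, pseudo_y = probs.max(dim=1)
    mask = score > conf                             # keep confident labels only
    if mask.any():
        loss = loss + F.cross_entropy(student(tgt_x[mask]), pseudo_y[mask])
    opt.zero_grad(); loss.backward(); opt.step()
    ema_update(teacher, student)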
@EdwardVendrow
Eddie Vendrow
9 months
Very excited to share *GSM8K-Platinum*, a revised version of the GSM8K test set! If you’re using GSM8K, I highly recommend you switch to GSM8K-Platinum! We built it as a drop-in replacement for the GSM8K test set. Check it out:
🔗 huggingface.co
@aleks_madry
Aleksander Madry
9 months
GSM8K has been a cornerstone benchmark for LLMs, but performance seemed stuck around 95%. Why? Turns out, the benchmark itself was noisy. We fixed that, and found that it significantly affects evals. Introducing GSM8K-Platinum! w/@EdwardVendrow @josh_vendrow @sarameghanbeery
💬 1 · 🔁 13 · ❤️ 38
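Since it is billed as a drop-in replacement, swapping it in should look roughly like this; the Hub id "madrylab/gsm8k-platinum" is my assumption from the release, so check the dataset page for the exact id and config:

from datasets import load_dataset

gsm8k = load_dataset("openai/gsm8k", "main", split="test")
# Assumed dataset id; Platinum revises or removes the noisy items.
platinum = load_dataset("madrylab/gsm8k-platinum", "main", split="test")
print(len(gsm8k), len(platinum))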
@ishapuri101
Isha Puri
9 months
Had a great time giving a talk about probabilistic inference scaling and the power of small models at the IBM Research ML Seminar Series - the best talks end with tons of questions, and it was great to see everyone so engaged :)
💬 2 · 🔁 23 · ❤️ 141
@AhmadMustafaAn1
Ahmad Mustafa Anis
9 months
Very excited to host @juliachae_ and @shobsund next week at @CohereForAI to present their research on "Personalized Representation from Personalized Generation". Register for the session for free (Link in the original tweet).
@Cohere_Labs
Cohere Labs
9 months
We're excited to welcome @juliachae_ and @shobsund next week on Wednesday, March 5th for a presentation on "Personalized Representation from Personalized Generation" - be sure to check out this session! Thanks to @AhmadMustafaAn1 for organizing this community event ✨
💬 0 · 🔁 1 · ❤️ 8
@Cohere_Labs
Cohere Labs
9 months
We're excited to welcome @juliachae_ and @shobsund next week on Wednesday, March 5th for a presentation on "Personalized Representation from Personalized Generation" - be sure to check out this session! Thanks to @AhmadMustafaAn1 for organizing this community event ✨
💬 1 · 🔁 3 · ❤️ 15
@ishapuri101
Isha Puri
10 months
[1/x] Can we scale small, open LMs to o1 level? Using classical probabilistic inference methods, YES! Joint @MIT_CSAIL / @RedHat AI Innovation Team work introduces a particle filtering approach to scaling inference w/o any training! Check out https://t.co/Iz8zoVbZPn
💬 2 · 🔁 69 · ❤️ 234
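In outline, the idea is: keep several partial generations ("particles") alive, weight them with a reward model over partial solutions, and resample, so compute concentrates on promising reasoning paths. A toy sketch, with model.extend and reward as hypothetical stand-ins for the sampler and the reward model:

import numpy as np

def particle_filter_decode(model, prompt, reward, n_particles=8, steps=10):
    particles = [prompt] * n_particles
    for _ in range(steps):
        # Propagate: sample one more reasoning step per particle.
        particles = [model.extend(p) for p in particles]
        # Weight: softmax of reward-model scores on partial solutions.
        s = np.array([reward(p) for p in particles])
        w = np.exp(s - s.max())
        w = w / w.sum()
        # Resample particles in proportion to their weights.
        idx = np.random.choice(n_particles, size=n_particles, p=w)
        particles = [particles[i] for i in idx]
    return max(particles, key=reward)    # highest-scoring final sample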
@EdwardVendrow
Eddie Vendrow
9 months
This is a fascinating demo for exploring how o1, Claude, DeepSeek, and other LLMs fail on basic tasks. See for yourself: https://t.co/r1RwYPKeDI
@aleks_madry
Aleksander Madry
9 months
Do current LLMs perform simple tasks (e.g., grade school math) reliably? We know they don't (is 9.9 larger than 9.11?), but why? Turns out that, for one reason, benchmarks are too noisy to pinpoint such lingering failures. w/ @josh_vendrow @EdwardVendrow @sarameghanbeery 1/5
💬 2 · 🔁 5 · ❤️ 9
@EdwardVendrow
Eddie Vendrow
9 months
Excited to share our work on evaluating LLM reliability! Even top LLMs still make mistakes on basic tasks, but this was previously hidden by noisy benchmarks. To fix this, we carefully cleaned 15 popular benchmarks to create "platinum" versions that better measure reliability.
@aleks_madry
Aleksander Madry
9 months
Do current LLMs perform simple tasks (e.g., grade school math) reliably? We know they don't (is 9.9 larger than 9.11?), but why? Turns out that, for one reason, benchmarks are too noisy to pinpoint such lingering failures. w/ @josh_vendrow @EdwardVendrow @sarameghanbeery 1/5
💬 1 · 🔁 8 · ❤️ 23
@MIT_CSAIL
MIT CSAIL
11 months
Could multimodal vision language models (VLMs) help biodiversity researchers retrieve images for their studies? 🤔 MIT CSAIL, @ucl, @inaturalist, @EdinburghUni, & @UMassAmherst researchers designed a performance test to find out. Each VLM’s task: Locate & reorganize the most
💬 2 · 🔁 16 · ❤️ 42