
Julia Chae
@juliachae_
Followers
308
Following
368
Media
0
Statuses
50
phd @MIT_CSAIL; prev @VectorInst, @UofTRobotics
Cambridge, Massachusetts
Joined August 2022
My first first-authored (w/ @shobsund) paper of my PhD is finally out! Check out our thread to see how general-purpose representations + personalized synthetic data enable personalized vision representations.
Personal vision tasks, like detecting *your mug*, are hard; they're data-scarce and fine-grained. In our new paper, we show you can adapt general-purpose vision models to these tasks from just three photos! (1/n)
RT @CV4E_ICCV: Good news! The submission deadline for the CV4E workshop at #ICCV2025 has been extended from July 4 to July 7! #CV4E #ICCV…
RT @sarameghanbeery: Call for papers for the second CV for Ecology workshop at ICCV is live!!
RT @cv4ecology: The call for applications has just been released for #CV4Ecology2026!! This three-week intensive program trains ecologists…
RT @ShivamDuggal4: Drop by our poster at Hall 3 + Hall 2B, #99 at 10 AM SGT! Unfortunately none of us could travel, but our amazing friends…
RT @Sa_9810: Excited to share our ICLR 2025 paper, I-Con, a unifying framework that ties together 23 methods across representation learning…
Presenting PRPG at @iclr_conf on Saturday in Singapore! Would love to chat / catch up throughout the week, so feel free to reach out :)
RT @__justinkay: Adapt object detectors to new data *without labels* with Align and Distill (ALDI), our domain adaptation framework publish…
RT @EdwardVendrow: Very excited to share *GSM8K-Platinum*, a revised version of the GSM8K test set! If you're using GSM8K, I highly recomm…
RT @ishapuri101: had a great time giving a talk about probabilistic inference scaling and the power of small models at the IBM Research ML…
RT @AhmadMustafaAn1: Very excited to host @juliachae_ and @shobsund next week at @CohereForAI to present their research on "Personalized Re…
RT @CohereForAI: We're excited to welcome @juliachae_ and @shobsund next week on Wednesday, March 5th for a presentation on "Personalized…
RT @ishapuri101: [1/x] can we scale small, open LMs to o1 level? Using classical probabilistic inference methods, YES! Joint @MIT_CSAIL / @…
RT @EdwardVendrow: This is a fascinating demo for exploring how o1, Claude, DeepSeek, and other LLMs fail on basic tasks. See for yourself…
RT @EdwardVendrow: Excited to share our work on evaluating LLM reliability! Even top LLMs still make mistakes on basic tasks, but this was…
RT @MIT_CSAIL: Could multimodal vision language models (VLMs) help biodiversity researchers retrieve images for their studies? MIT CSAIL…
RT @EdwardVendrow: How can we empower scientific discovery in millions of nature photos? Introducing INQUIRE: A benchmark testing if AI…
RT @shobsund: What happens when models see the world as humans do? In our #NeurIPS2024 paper we show that aligning to human perceptual pre…
RT @CV4E_ECCV: Thank you everyone for participating in the first ever CV4E @eccvconf poster session! It was so exciting to see this growing…
RT @sarameghanbeery: Amazing turnout for the @CV4E_ECCV poster session!! 28 papers looking at everything from trees to coral to wildlife po…