Allison Chen
@allisonchen_227
280 Followers · 25 Following · 7 Media · 44 Statuses
Computer Science PhD student @Princeton University working in computer vision | hummus enthusiast
Joined March 2023
Today seems to be a fitting day for @GoogleDeepMind news, so I'm excited to announce our new preprint! Prior work suggests that text & image representations are converging, albeit weakly. We found these same models actually have strong alignment; the inputs were just too impoverished to see it!
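One common way to make "alignment" concrete in this line of work is mutual k-nearest-neighbor overlap between the two embedding spaces. A minimal sketch, assuming you already have paired text and image embeddings; the random arrays and the `knn_alignment` helper are illustrative, not the paper's code:

```python
import numpy as np

def knn_alignment(text_emb, img_emb, k=10):
    """Mutual k-NN overlap: for each paired sample, how much do its
    nearest-neighbor sets agree across the two embedding spaces?"""
    def knn_indices(emb):
        emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)  # cosine sim
        sim = emb @ emb.T
        np.fill_diagonal(sim, -np.inf)           # exclude self-matches
        return np.argsort(-sim, axis=1)[:, :k]   # top-k neighbors per row

    nn_text, nn_img = knn_indices(text_emb), knn_indices(img_emb)
    return float(np.mean([len(set(a) & set(b)) / k
                          for a, b in zip(nn_text, nn_img)]))

# Paired embeddings of the same 1,000 inputs (random stand-ins here).
rng = np.random.default_rng(0)
print(knn_alignment(rng.normal(size=(1000, 512)),
                    rng.normal(size=(1000, 768))))  # ≈ k/N when unaligned
```

Richer inputs would, per the tweet's claim, push this overlap well above the random baseline.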
Prof. Johannes Lutzeyer (@JLutzeyer) and I are co-recruiting a fully funded PhD student at the Department of Computer Science, École Polytechnique in France, to start in September 2026. Details below or at https://t.co/fiQVnX07oc
#PhDhiring #AI
Text-to-image (T2I) models can generate rich supervision for visual learning, but generating subtle distinctions remains challenging. Fine-tuning helps, but too much tuning → overfitting and loss of diversity. How do we preserve fidelity without sacrificing diversity? (1/8)
(6/6) This work was done in collaboration with @sunniesuhyoung, @Amaya_Dharmasir, @orussakovsky, and @judyefan. I’ll be presenting this work as a poster at CogSci this Thursday at 1pm (Poster #P1-C-30). Please stop by and say hi! 👋
(5/6) Our work demonstrates the importance of messaging around AI technology in shaping people's beliefs about it, as people are still navigating how to conceptualize AI. It also highlights the need for appropriate science communication about AI for the general public.
(4/6) 🔑Key findings! 1) people watching LLMs-as-companions (vs. no video) believed LLMs more capable of a variety of cognitive and emotional capacities; 2) watching LLMs-as-tools or LLMs-as-machines (vs. no video) increased trust in, overall feelings towards, or confidence in using LLMs.
(3/6) The machines video described how LLMs generate text using next-word prediction, the tools video suggested possible uses of LLMs, and the companions video presented LLMs as able to provide emotional support.
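The next-word-prediction mechanism the machines video explains fits in a few lines: the model scores every vocabulary token as a possible continuation and the highest-scoring one is appended. A minimal greedy-decoding sketch with GPT-2 (my illustration, not the study's materials):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The cat sat on the", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(10):                      # extend by 10 tokens
        logits = model(ids).logits           # a score for every vocab token
        next_id = logits[0, -1].argmax()     # greedily pick the most probable
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
print(tok.decode(ids[0]))
```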
(2/6) We ran a between-subjects (N=470) experiment showing participants short videos portraying LLMs as machines 📠, tools ⚒️, or companions 👥. Then, all participants reported 1) their attitudes towards LLMs and 2) how much they believed LLMs are capable of 40 mental capacities.
(1/6) Organizations and people describe and portray AI, such as large language models (LLMs), in different ways. How might this shape what people believe about the technology, specifically the mental capacities (e.g., the ability to have intentions) they attribute to LLMs?
Does how we talk about AI matter? Yes, yes it does! In our #Cogsci2025 paper, we explore how different messages affect what people believe about AI systems. See 🧵for more! Paper: https://t.co/kOIQw6unG7
New paper & surprising result. LLMs transmit traits to other models via hidden signals in data. Datasets consisting only of 3-digit numbers can transmit a love of owls or evil tendencies. 🧵
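As a rough illustration of the kind of pipeline the tweet describes (all names here are hypothetical, not the paper's code): a trait-laden teacher emits number sequences, and only pure 3-digit outputs are kept for student fine-tuning, so the trait is never stated in text.

```python
import random

class ToyTeacher:
    """Stand-in for a trait-laden teacher model (hypothetical)."""
    def generate(self, prompt):
        return ", ".join(str(random.randint(100, 999)) for _ in range(8))

def build_numbers_dataset(teacher, prompt, n_examples=1000):
    """Keep only completions that are pure 3-digit numbers, so the
    student never sees the trait stated in plain text."""
    data = []
    for _ in range(n_examples):
        completion = teacher.generate(prompt)
        if all(t.strip().isdigit() and len(t.strip()) == 3
               for t in completion.split(",")):
            data.append({"prompt": prompt, "completion": completion})
    return data

dataset = build_numbers_dataset(ToyTeacher(), "Continue: 142, 857, 309")
```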
Delighted to announce our CogSci '25 workshop at the interface between cognitive science and design 🧠🖌️! We're calling it: Minds in the Making🏺 https://t.co/dP3eMNTxuc Register now! June – July 2025, free & open to the public (all career stages, all disciplines).
Accepted to ICML 2025! See you all in Vancouver 🎉
Have you ever wondered why we don’t use multiple visual encoders for VideoLLMs? We thought the same! Excited to announce our latest work MERV, on using Multiple Encoders for Representing Videos in VideoLLMs, outperforming prior works with the same data. 🧵
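The "multiple encoders" idea can be sketched as project-concatenate-project fusion, assuming token-aligned features from each encoder; this is a generic illustration, and MERV's actual fusion may differ:

```python
import torch
import torch.nn as nn

class MultiEncoderFusion(nn.Module):
    """Generic sketch: project each encoder's video features to a shared
    width, concatenate along channels, and map into the LLM's space."""
    def __init__(self, encoder_dims, shared_dim=1024, llm_dim=4096):
        super().__init__()
        self.projs = nn.ModuleList([nn.Linear(d, shared_dim) for d in encoder_dims])
        self.to_llm = nn.Linear(shared_dim * len(encoder_dims), llm_dim)

    def forward(self, feats):
        # feats[i]: (batch, tokens, encoder_dims[i]), token-aligned across encoders
        shared = [proj(f) for proj, f in zip(self.projs, feats)]
        return self.to_llm(torch.cat(shared, dim=-1))

fusion = MultiEncoderFusion(encoder_dims=[768, 1024, 512])
feats = [torch.randn(2, 16, d) for d in (768, 1024, 512)]
print(fusion(feats).shape)  # torch.Size([2, 16, 4096])
```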
Does all LLM reasoning transfer to VLMs? In the context of Simple-to-Hard generalization, we show: NO! We also give ways to reduce this modality imbalance. Paper https://t.co/S0HhYN7cvz Code https://t.co/GJsgZof2k7
@Abhishek_034 @chengyun01 @dingli_yu @anirudhg9119 @prfsanjeevarora
Want to train large vision-language models but drowning in data? https://t.co/MEpKvDegv2 Introducing ICONS: we demonstrate how to select only 20% of training samples while maintaining 98.6% of the performance, and 60% of training samples to achieve 102.1% of the performance.
arxiv.org
Training vision-language models via instruction tuning often relies on large mixtures of data spanning diverse tasks and domains. However, these mixtures frequently include redundant information,...
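The tweet doesn't spell out the selection rule, so here is a minimal, generic score-and-select sketch; the per-sample scores below are random stand-ins for whatever utility criterion ICONS actually computes:

```python
import numpy as np

def select_coreset(scores, ratio=0.2):
    """Keep the top `ratio` fraction of samples by utility score."""
    k = max(1, int(len(scores) * ratio))
    return np.argsort(-scores)[:k]   # indices of the highest-scoring samples

# Hypothetical per-sample utility scores for 100k instruction-tuning examples.
rng = np.random.default_rng(0)
scores = rng.random(100_000)
subset = select_coreset(scores, ratio=0.2)   # the 20% kept for training
print(len(subset))                           # 20000
```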
There are numerous leaderboards for AI capabilities and risks (for example, fairness). In new work, we argue that leaderboards are misleading when the determination of concepts like "fairness" is always contextual. Instead, we should use benchmark suites.
We pack muscle (so you don’t have to). Our arm has an 18kg payload, which makes us 1.8x stronger than other #robots in our class🏋️. Meet the #StandardBots solution at Booth N-5474 today. #packexpo2024
I am recruiting PhD students for Fall 2025 at Cornell Tech! If you are interested in topics relating to machine learning fairness, algorithmic bias, or evaluation, apply and mention my name in your application: https://t.co/EU0Zu56Qo9 Also, go vote!
The way we talk about AI matters. “The model understands how to…” implies much more powerful capabilities than “The model is used to…” We present AnthroScore, a measure of how much tech is anthropomorphized, i.e., talked about in human-like ways. #EACL2024
https://t.co/VDCW3qB3Gh
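A rough, self-contained sketch of the idea: replace the entity mention with a mask and ask a masked LM whether human or non-human pronouns fit the slot. The paper's exact templates, pronoun sets, and model may differ from this illustration:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-base")
mlm = AutoModelForMaskedLM.from_pretrained("roberta-base")

def anthro_sketch(sentence):
    """Mask the entity mention and compare the masked LM's probability of
    human vs. non-human pronouns filling the slot (rough sketch only)."""
    inputs = tok(sentence, return_tensors="pt")
    mask_pos = (inputs.input_ids == tok.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        probs = mlm(**inputs).logits[0, mask_pos].softmax(-1)
    human = sum(probs[tok.convert_tokens_to_ids(w)] for w in ("Ġhe", "Ġshe"))
    nonhuman = probs[tok.convert_tokens_to_ids("Ġit")]
    return torch.log(human / nonhuman).item()  # > 0 leans anthropomorphic

print(anthro_sketch(f"Researchers found that {tok.mask_token} understands the question."))
```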
Giving a few presentations this afternoon @ECCV, come and say hi! 👋
3:15 pm: ConceptMix: A Compositional Benchmark for Evaluating Text-to-Image Models (poster) @ EVAL-FoMo workshop, room Amber 5
4:00 pm: ConceptMix (spotlight) @ Knowledge in Generative Models workshop, room