Yang Li Profile
Yang Li

@yangli169

489 Followers · 180 Following · 15 Media · 94 Statuses

Research Scientist @GoogleDeepMind, working on deep learning and human-computer interaction.

Mountain View, CA
Joined April 2009
@yangli169
Yang Li
5 months
RT @huashen218: 🚀 Are you passionate about #Alignment Research? Exciting news! Join us at the ICLR 2025 Workshop on 👫<>🤖Bidirectional Human….
0
37
0
@yangli169
Yang Li
8 months
RT @ACMUIST: 🏆 Congratulations to @wobbrockjo @thundercarrot @yangli169 on winning the Lasting Impact Award for “Gestures without libraries….
0
4
0
@yangli169
Yang Li
9 months
RT @wobbrockjo: .@thundercarrot THE A D Wilson. Now tagged.
0
1
0
@yangli169
Yang Li
1 year
Congrats to all the co-authors on receiving the CVPR 2024 Best Paper Award for “Rich Human Feedback for Text-to-Image Generation”. This work also paves the way for automating the evaluation of user interfaces, and of visual content in general.
0
5
19
@yangli169
Yang Li
1 year
Our model also exhibits strong properties such as out-of-distribution adaptive computation: given an already trained model, we can use a different number of group tokens to perform inference at test time.
[image]
0
0
0
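A hedged sketch of that property (not the paper's code; the binding step below is a simplified soft-assignment stand-in with illustrative names and shapes): because token-to-group binding has no parameters tied to the number of groups, the same trained binding code can run with different group-token budgets at test time.

```python
import numpy as np

def bind(tokens, groups):
    # soft-assign each input token to a group, then recompute each
    # group token as the assignment-weighted mean of the input tokens
    logits = tokens @ groups.T                        # (N, K) similarities
    a = np.exp(logits - logits.max(axis=-1, keepdims=True))
    a /= a.sum(axis=-1, keepdims=True)                # soft assignments, rows sum to 1
    w = a / (a.sum(axis=0, keepdims=True) + 1e-8)     # normalize per group
    return w.T @ tokens                               # (K, D) updated group tokens

rng = np.random.default_rng(0)
tokens = rng.normal(size=(196, 64))                   # e.g. 14x14 patch embeddings
# the same binding code runs with different group-token counts at test time
small = bind(tokens, rng.normal(size=(4, 64)))        # 4 group tokens
large = bind(tokens, rng.normal(size=(16, 64)))       # 16 group tokens
```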
@yangli169
Yang Li
1 year
Multiple grouping heads are used to capture various grouping possibilities, and each provides a context for feature refinement. The final output contains the refined input tokens, the group tokens, and the group assignments. Our model is competitive with SoTA vision architectures.
[image]
1
0
0
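A minimal sketch of the multi-head idea under simplifying assumptions (illustrative names and shapes, not the paper's implementation): each head keeps its own group tokens, softly binds the input tokens to them, and returns a per-token context; the contexts are then used to refine the input features.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def head_context(tokens, groups):
    """One grouping head: soft-assign tokens to this head's groups and
    return, per token, the mixture of group tokens it is bound to --
    a per-token context used to refine the token features."""
    assign = softmax(tokens @ groups.T)          # (N, K) soft assignments
    return assign @ groups                       # (N, D) context per token

rng = np.random.default_rng(0)
tokens = rng.normal(size=(196, 64))                    # patch embeddings
heads = [rng.normal(size=(8, 64)) for _ in range(4)]   # 4 grouping heads
contexts = [head_context(tokens, g) for g in heads]
# refine input tokens with the averaged multi-head context (residual update)
refined = tokens + np.mean(contexts, axis=0)
```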
@yangli169
Yang Li
1 year
PGT takes in a sequence of patches (or pixels), generates high-dimensional embeddings for all patches, then passes them through a series of grouping layers to refine the embeddings. Each layer performs multiple rounds of binding from input tokens to group tokens.
[image]
1
0
0
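A hedged sketch of that flow, assuming a simplified soft-assignment binding (a stand-in, not the actual PGT layer): each round softly assigns every input token to a group token, then updates each group token from the tokens bound to it.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def binding_round(tokens, groups):
    """One round of binding from input tokens to group tokens:
    softly assign every token to a group, then update each group
    token as the assignment-weighted mean of its tokens."""
    assign = softmax(tokens @ groups.T)               # (N, K), rows sum to 1
    w = assign / (assign.sum(axis=0, keepdims=True) + 1e-8)
    return w.T @ tokens, assign                       # (K, D), (N, K)

rng = np.random.default_rng(0)
tokens = rng.normal(size=(196, 64))   # embeddings for a 14x14 patch grid
groups = rng.normal(size=(8, 64))     # 8 group tokens
for _ in range(3):                    # multiple binding rounds per layer
    groups, assign = binding_round(tokens, groups)
```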
@yangli169
Yang Li
1 year
Our model relies entirely on grouping operations to extract visual features and perform self-supervised representation learning: a series of grouping operations iteratively hypothesize the context for pixels or superpixels to refine their feature representations.
[image]
1
0
0
@yangli169
Yang Li
1 year
I’m delighted to share our new ICLR paper on perceptual group tokenizers (PGT), pursuing a novel architecture for vision encoding.
1
4
8
@yangli169
Yang Li
2 years
Thanks again to the keynote speakers @landay, @ancadianadragan, @merrierm, @QVeraLiao, @nikola_banovic for giving in-person talks and @ChenhaoTan for contributing a recorded talk.
[4 images]
1
2
9
@yangli169
Yang Li
2 years
A big shoutout to all the reviewers for writing thorough reviews for the submissions to the workshop.
1
0
2
@yangli169
Yang Li
2 years
Special thanks to my co-organizers @RanjayKrishna, @HelenasResearch, @bryanhaoenwang, @forrestnz. A big shoutout to @zinosys, @jasonwuishere, @korymath for contributing to the program as ACs and @yuwen_lu_ and Peitong Duan as SVs.
[image]
1
0
27
@yangli169
Yang Li
2 years
The AI&HCI Workshop at #ICML2023 had a great turnout: 5 in-person keynotes and 1 recorded keynote, 60 papers (from 207 authors and 63 institutions), with 2/3 of participants from AI/ML and 1/3 from HCI.
1
6
43
@yangli169
Yang Li
2 years
Keynote: Detecting and Countering Untrustworthy Artificial Intelligence by Nikola Banovic @nikola_banovic
[2 images]
0
0
6
@yangli169
Yang Li
2 years
@merrierm Keynote: Human-Centered AI Transparency: Lessons Learned and Open Questions in the Age of LLMs by Q. Vera Liao
[2 images]
1
0
4
@yangli169
Yang Li
2 years
Keynote: Beyond RLHF: A Human-Centered Approach to AI Development and Evaluation by Meredith Ringel Morris @merrierm
[2 images]
1
0
6
@yangli169
Yang Li
2 years
Keynote: Designing Easy and Useful Human Feedback by Anca Dragan
[image]
1
0
2