Xinyi Chen
@XinyiChen2
Followers
639
Following
504
Media
0
Statuses
61
Joined September 2012
Looking forward to speaking at @PrincetonSML next week about neural architectures inspired by dynamical systems: https://t.co/nJR47lla2T
csml.princeton.edu
Lunch is available beginning at 12 PM. Speaker to begin promptly at 12:30 PM. Abstract: Can we build neural architectures that go beyond Transformers by leveraging principles from dynamical systems? In...
0
3
20
Thank you @max_simchowitz for the shoutout! Making ML more efficient by reasoning about dynamical systems is a really exciting direction, and I look forward to making more progress in this space!
As someone who loves dynamical systems and control, I've been really excited to see @XinyiChen2 and @HazanPrinceton's recent papers making control work for deep learning! Very cool insights, both on the architecture and optimization side. I encourage you to check them out!
0
1
6
Together with @HazanPrinceton, Cong, @Zanette_ai, and Nati, we are organizing a long program on reinforcement learning and control at @IMSI_Institute! Join us for workshops on frontiers of online/offline RL, control, multi-agent RL, and opportunities to present your research.
imsi.institute
1
6
71
🚨 @SCSatCMU PhD applications close tomorrow, December 11, at 3:00pm ET! I’m actively recruiting master's and PhD students interested in the theory and practice of decision making with generative models, especially for robotics, RL and world models! CMU is one of the most…
5
10
63
Very excited about our work on spectral transformers!
All you want to know about spectral transformers in one webpage, papers & code: (& we'll try to keep it updated!) https://t.co/a0ey8ADfK9
0
0
13
Want to learn about the math behind robot learning? I'll be presenting an invited talk on "Provable Guarantees for Generative Behavior Cloning" at 11:55am CEST at the 2024 ICML Workshop on Reinforcement Learning and Control (link in 🧵)
1
15
63
I'll be at @icmlconf next week! Giving a plenary talk at the HiLD workshop and an oral on our recent paper (https://t.co/xQF54RAl1D) at the MHFAIA workshop! Pls reach out to chat if you're also interested in any of these topics! 😊
2
8
50
New work w/@sadhikamalladi, @lilyhzhang, @xinyichen2, @QiuyiRichardZ, Rajesh Ranganath, @kchonyc: Contrary to conventional wisdom, RLHF/DPO does *not* produce policies that mostly assign higher likelihood to preferred responses than to less preferred ones.
4
46
238
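For context on the claim above, the property being tested is whether the trained policy ranks the preferred response above the dispreferred one by likelihood ("ranking accuracy"). Below is a minimal, hypothetical sketch of that check; the model name and the toy preference pairs are placeholders, not from the paper:

```python
# Hedged sketch: fraction of preference pairs where a causal LM policy assigns
# higher total log-likelihood to the preferred response than to the dispreferred one.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder policy; the paper studies RLHF/DPO-trained models
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def response_logprob(prompt: str, response: str) -> float:
    """Sum of log-probabilities of the response tokens, conditioned on the prompt."""
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + response, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits                 # (1, seq_len, vocab)
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full_ids[0, 1:]
    token_lp = log_probs[torch.arange(targets.numel()), targets]
    return token_lp[prompt_len - 1 :].sum().item()      # keep only response-token terms

pairs = [  # toy (prompt, preferred, dispreferred) triples, purely for illustration
    ("Q: What is 2+2?\nA:", " 4", " 5"),
    ("Q: What is the capital of France?\nA:", " Paris", " Rome"),
]
ranked_correctly = sum(
    response_logprob(p, chosen) > response_logprob(p, rejected)
    for p, chosen, rejected in pairs
)
print(f"ranking accuracy: {ranked_correctly / len(pairs):.2f}")
```

The paper's finding, as summarized in the tweet, is that this ranking accuracy is often lower than one might expect for RLHF/DPO-trained policies.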
Open source code for spectral SSM is now available! https://t.co/HihUUvUV2q Thanks to our Google DeepMind Princeton team: @danielsuo @naman33k @XinyiChen2
github.com
Contribute to google-deepmind/spectral_ssm development by creating an account on GitHub.
most exciting paper *ever* from our @GoogleAI lab at @Princeton: @naman33k @danielsuo @XinyiChen2
https://t.co/aSkBZJ6S9t *** Convolutional filters predetermined by the theory, no learning needed! ***
0
8
44
most exciting paper *ever* from our @GoogleAI lab at @Princeton: @naman33k @danielsuo @XinyiChen2
https://t.co/aSkBZJ6S9t *** Convolutional filters predetermined by the theory, no learning needed! ***
arxiv.org
This paper studies sequence modeling for prediction tasks with long range dependencies. We propose a new formulation for state space models (SSMs) based on learning linear dynamical systems with...
13
123
829
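The "predetermined filters" line refers to the spectral filtering idea behind these models: the convolutional filters are the top eigenvectors of a fixed Hankel matrix, so they can be computed once up front, with no filter learning. Here is a minimal illustrative sketch (not the official google-deepmind/spectral_ssm code; the sequence length and number of filters are arbitrary choices):

```python
# Hedged sketch of spectral filtering: fixed filters from a Hankel matrix, then
# causal convolution of the input with those filters. Only the readout on top of
# these features would be learned.
import numpy as np

def spectral_filters(L: int, k: int) -> np.ndarray:
    """Top-k eigenvectors of the Hankel matrix Z[i, j] = 2 / ((i+j)^3 - (i+j)), i, j >= 1."""
    idx = np.arange(1, L + 1)
    s = idx[:, None] + idx[None, :]
    Z = 2.0 / (s**3 - s)
    _, eigvecs = np.linalg.eigh(Z)   # eigenvalues returned in ascending order
    return eigvecs[:, -k:]           # shape (L, k): the fixed filters

def spectral_features(u: np.ndarray, filters: np.ndarray) -> np.ndarray:
    """Causally convolve a scalar input sequence u (length L) with each fixed filter."""
    L, k = filters.shape
    feats = np.zeros((L, k))
    for t in range(L):
        window = u[: t + 1][::-1]            # u_t, u_{t-1}, ..., u_0
        feats[t] = filters[: t + 1].T @ window
    return feats                              # features fed to a small learned readout

# Toy usage: featurize a random length-256 sequence with 16 fixed spectral filters.
u = np.random.randn(256)
phi = spectral_filters(L=256, k=16)
features = spectral_features(u, phi)
print(features.shape)  # (256, 16)
```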
Excited about our first research in AI safety & alignment: a game-theoretic approach to AI safety via debate: https://t.co/TfAA5dqEel This is a collaboration with my student @XinyiChen2, our alumna @_angie_chen from NYU, and Dean Foster:
arxiv.org
We consider regret minimization in repeated games with a very large number of actions. Such games are inherent in the setting of AI Safety via Debate \cite{irving2018ai}, and more generally games...
1
11
83
w. @FeinbergVlad, @jsun105, @_arohan_, @HazanPrinceton: Sketchy (Wed 10:45AM, #1115), check out the linked blog post
See y'all at NeurIPS next week. Presenting Sketchy w @XinyiChen2 Jennifer Sun @_arohan_ @HazanPrinceton. High level blog post: https://t.co/EulFLHtVRu Also, looking for student researchers for OCO🤝Control theory+applied internship! HMU @ NOLA
0
0
2
w. @HazanPrinceton: Online control for meta-optimization (Wed 5PM, #2023) https://t.co/PzBdJI8I6X We’re looking for student researchers to explore more applications of this method! Please reach out if you're interested.
Optimizer tuning can be manual and resource-intensive. Can we learn the best optimizer automatically with guarantees? With @HazanPrinceton, we give new provable methods for learning optimizers using a control approach. Excited about this result! https://t.co/GTpNSdcQlm (1/n)
1
2
18
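As a rough sketch of the control viewpoint in the quoted tweet (notation is illustrative, not taken verbatim from the paper): the optimizer's iterates are treated as the state of a dynamical system whose control input is the update,

```latex
% Illustrative formulation: the iterate x_t is the state, the update u_t is the control.
x_{t+1} = x_t + u_t, \qquad
u_t = -\eta\, \nabla f_t(x_t) \ \text{(gradient descent with step size } \eta\text{) is one fixed choice.}
```

Learning the optimizer then amounts to choosing u_t with an online controller, rather than committing in advance to a hand-tuned rule such as a fixed step size.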
At NeurIPS 2023 until next Sunday! Excited to reconnect with friends, meet new ones, and chat about optimization/online control. I’ll be co-presenting Meta-optimization and Sketchy, details below:
1
1
20
See y'all at NeurIPS next week. Presenting Sketchy w @XinyiChen2 Jennifer Sun @_arohan_ @HazanPrinceton. High level blog post: https://t.co/EulFLHtVRu Also, looking for student researchers for OCO🤝Control theory+applied internship! HMU @ NOLA
vladfeinberg.com
Vlad's Blog
3
8
37
I will be at NeurIPS - 12th, 13th and will be hanging out at the posters, Sketchy: @FeinbergVlad @XinyiChen2 Jennifer Sun @HazanPrinceton
https://t.co/gersHK5u2i SoNew: @Devvrit_Khatri @dvsaisurya @GuptaVineetG Cho-Jui Hsieh, @inderjit_ml
https://t.co/4C2xeqtTml
1
5
33
Happy to share a new blog post w. @XinyiChen2 on meta-optimization, and its relationship to adaptive gradient methods and parameter-free optimization! https://t.co/EoqvlRKyqz
minregret.com
The study of mathematical optimization is a hallmark of the application of the scientific method to almost all engineering fields. With the rise of machine learning and large scale problems, attent...
1
19
68
@HazanPrinceton Our regret bounds in control imply optimization performance vs. the best optimizer from a class of methods, giving guarantees for meta-optimization. Check out the manuscript at https://t.co/5uQ21va3WD for full details! (n/n)
arxiv.org
Selecting the best hyperparameters for a particular optimization instance, such as the learning rate and momentum, is an important but nonconvex problem. As a result, iterative optimization...
1
2
5
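Concretely, "optimization performance vs. the best optimizer from a class" can be read as a meta-regret guarantee. In illustrative notation (mine, not the paper's), with N optimization episodes of horizon T and a benchmark class Π of optimizers:

```latex
% Meta-regret: cumulative cost of the learned optimizer minus that of the best
% fixed optimizer \pi from the class \Pi, in hindsight.
\mathrm{MetaRegret}
  = \sum_{i=1}^{N}\sum_{t=1}^{T} f_{i,t}(x_{i,t})
  \;-\;
  \min_{\pi \in \Pi} \sum_{i=1}^{N}\sum_{t=1}^{T} f_{i,t}\bigl(x_{i,t}^{\pi}\bigr)
```

where x_{i,t} are the method's iterates and x_{i,t}^π are those the fixed optimizer π would have produced; a sublinear bound on this quantity is what yields the guarantee referred to in the tweet above.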