AutoRL Workshop (@AutoRL_Workshop)
Followers: 113 · Following: 13 · Media: 0 · Statuses: 35
The Automated RL Workshop, coming to ICML'24 with a focus on LLMs & In-Context Learning!
Joined March 2024
We have a speaker change: instead of @jparkerholder we'll hear from @MichaelD1729. The focus of the talk is the same, though, so join if you're interested in generating any environments you can imagine!
After a day of presentations, posters and a breakout session, we'll close with a panel discussion. @pcastr @AlexDGoldie @jakeABeck and Doina Precup will tell us their views on the present and future of AutoRL - join us for an exciting finale to @icmlconf 2024!
Last but not least: Pablo Samuel Castro @pcastr is a senior researcher at @GoogleDeepMind known for his musical endeavors and for pushing the limits of the ALE with algorithmic innovations and design decisions. He'll talk about why the ALE is a great benchmark for AutoRL.
Up next: Jack Parker-Holder @jparkerholder works on open-endedness as a research scientist at @GoogleDeepMind and honorary lecturer with @UCL_DARK. His focus is on unlimited training data for open-ended RL: being able to generate interactive tasks controllably and at will.
Our third speaker is Pierluca D'Oro @proceduralia, researcher at @AIatMeta and PhD student at @Mila_Quebec. You likely know his work combining RL with LLMs to create capable AI assistants. We're excited to hear how he envisions the future of LLMs in RL and vice versa!
Speaker number two! Roberta Raileanu @robertarail, research scientist at @AIatMeta, has worked on many aspects of RL, most recently teaching LLMs to make better decisions. She'll discuss generalization for robust and capable agents in practice.
Chelsea Finn @chelseabfinn barely needs an introduction: if you're interested in robotics or meta-learning, you almost certainly know her work. She's an assistant professor at @Stanford, co-founder of @physical_int and an expert in making RL work for real-world robotics tasks.
If you would like complimentary registration, please send your CV and a short statement on why you should receive it to autorlworkshop@ai.uni-hannover.de by 8 July AoE.
"Self-Exploring Language Models: Active Preference Elicitation for Online Alignment" adds optimism to the RLHF objective for better out-of-distribution sampling. By @ShenaoZhang @Yudh9662, Hiteshi Sharma, @yzy_ai @shuohangw @hany_hassan & @zhaoran_wang.
"BOFormer: Learning to Solve Multi-Objective Bayesian Optimization via Non-Markovian RL" combines BO and RL for a powerful MOBO solution. By Yu Heng Hung, Kai-Jie Lin, Yu-Heng Lin, @chienyi_wang & Ping-Chun Hsieh.
"Can Learned Optimization Make Reinforcement Learning Less Difficult?" shows how to learn an optimizer for RL, considering plasticity, exploration and non-stationarity. By @AlexDGoldie @_chris_lu_ @JacksonMattT @shimon8282 & @j_foerst.