Chandar Lab
@ChandarLab
535 Followers · 13 Following · 10 Media · 38 Statuses
Sarath Chandar's research group at @polymtl and @Mila_Quebec. Our research focuses on lifelong learning, DL, RL, and NLP.
Montréal, Québec, Canada
Joined June 2021
Please join us this Monday (Aug 19th) for a two-day symposium highlighting the research done at @ChandarLab over the past year! Schedule: https://t.co/h1G2FPvTZY Registration: https://t.co/2OCZGMMqSI (with remote and in-person options)
Are self-explanations from Large Language Models faithful? We are answering this question at ACL 2024. Where: ACL 2024, A1. When: August 12th, 17:45-18:45. arXiv: 2401.07927.
I am very proud and happy to announce that our MSc graduate Ali Rahimi received the Best Master's Thesis Award from the Canadian AI Association for 2024! Ali's Master's thesis shows that SOTA MBRL methods like Dreamer and MuZero are not adaptive, and it also proposes a fix!
We are very pleased to announce Ali Rahimi Kalahroudi (Université de Montréal) as the recipient of the CAIAC 2024 Best Master's Thesis Award. Ali's thesis was "Towards Adaptive Deep Model-Based Reinforcement Learning." https://t.co/xTbpb3ko7O
Check out one of our lab's latest papers:
🚨Is solving complex tasks still challenging for your RL agent? 👑 Subgoal Distillation: A Method to Improve Small Language Agents Paper: https://t.co/c6G0i49VJ8 w/ @EliasEskin @Cote_Marc @apsarathchandar
Our recent AAAI paper shows that certain attention heads in transformers are responsible for bias and pruning them improves fairness! In collaboration with Goncalo Mordido, @SamiraShabanian , @ioanauoft, and @apsarathchandar Paper 📰: https://t.co/t2OzlFeh68
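For readers curious what head pruning looks like in practice, here is a minimal, hypothetical sketch assuming a HuggingFace BERT model. The `bias_score_per_head` function below is an invented placeholder, not the selection criterion from our paper; only the `prune_heads` call is the standard library API.

```python
# Hypothetical sketch only: score each attention head with some bias metric,
# then remove the most biased ones via the standard HuggingFace prune_heads API.
from collections import defaultdict
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased")

def bias_score_per_head(model):
    """Placeholder: return {(layer, head): bias score}; higher means more biased.
    A real implementation would probe the model on a fairness benchmark."""
    cfg = model.config
    return {(l, h): 0.0
            for l in range(cfg.num_hidden_layers)
            for h in range(cfg.num_attention_heads)}

scores = bias_score_per_head(model)
k = 12  # number of heads to prune (illustrative choice)
worst = sorted(scores, key=scores.get, reverse=True)[:k]

heads_to_prune = defaultdict(list)
for layer, head in worst:
    heads_to_prune[layer].append(head)

model.prune_heads(dict(heads_to_prune))  # drops those heads' parameters in place
```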
🎉 Exciting start to 2024 for our lab! 🚀 Two papers accepted at ICLR, with one ranking in the top 1.2%! Plus, a publication in Digital Discovery journal. We are proud of our team's hard work and innovative research. #ICLR2024 #ResearchExcellence #MachineLearning #crystalDesign
At @ChandarLab, we are happy to announce the launch of our assistance program to provide feedback to members of communities underrepresented in AI who want to apply to high-profile graduate programs. Want feedback? Details: https://t.co/QSWBH7aoZf. Deadline: Nov 15!
Can large language models consolidate world knowledge? The answer turns out to be "NO". I am very excited to present to you our @emnlpmeeting 2023 paper (main track) which studies this important limitation of LLMs. Work led by my amazing PhD student @GabrielePrato!
If you want to learn more about the recent advances in deep learning, reinforcement learning, and NLP that have come out of my lab in the past year, consider attending our lab's annual research symposium on Aug 8 and 9: https://t.co/HgEsGFWSJ5 You can join remotely too!
It's time to mark your calendars! 🗓️ The official schedule for #CoLLAs2023 is now up at https://t.co/5ZuW4EQoWz. Brace yourselves for a thrilling lineup of posters, tutorials, orals, talks, unconferences, and a dinner. See you in Montreal! 🌞🧠 Register at
Introducing an improved adaptive optimizer: Adam with Critical momenta (Adam+CM)! Unlike traditional Adam, it promotes exploration that paves the way to flatter minima and leads to better generalization. Link to our paper: https://t.co/4XipKKutfl Work led by: @pranshumalviya8
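A rough, illustrative PyTorch sketch of the general idea of augmenting Adam with a buffer of past momenta. This is not the Adam+CM algorithm from the paper: the buffer size, the FIFO retention rule, and the plain averaging below are assumptions made purely for illustration; see the linked paper for the actual method.

```python
# Illustrative only: Adam-style optimizer that keeps a small buffer of recent
# momentum vectors and averages them into the update. NOT the paper's Adam+CM.
import torch
from torch.optim import Optimizer

class AdamWithMomentaBuffer(Optimizer):
    def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8, buffer_size=5):
        super().__init__(params, dict(lr=lr, betas=betas, eps=eps, buffer_size=buffer_size))

    @torch.no_grad()
    def step(self, closure=None):
        loss = None
        if closure is not None:
            with torch.enable_grad():
                loss = closure()
        for group in self.param_groups:
            beta1, beta2 = group["betas"]
            for p in group["params"]:
                if p.grad is None:
                    continue
                state = self.state[p]
                if not state:
                    state["step"] = 0
                    state["m"] = torch.zeros_like(p)
                    state["v"] = torch.zeros_like(p)
                    state["buffer"] = []  # recent momentum vectors
                state["step"] += 1
                m, v = state["m"], state["v"]
                m.mul_(beta1).add_(p.grad, alpha=1 - beta1)
                v.mul_(beta2).addcmul_(p.grad, p.grad, value=1 - beta2)

                # Assumed rule: FIFO buffer of recent momenta, averaged with the
                # current momentum to smooth/steer the update.
                buf = state["buffer"]
                buf.append(m.clone())
                if len(buf) > group["buffer_size"]:
                    buf.pop(0)
                m_agg = torch.stack(buf).mean(dim=0)

                bias1 = 1 - beta1 ** state["step"]
                bias2 = 1 - beta2 ** state["step"]
                denom = (v / bias2).sqrt().add_(group["eps"])
                p.addcdiv_(m_agg / bias1, denom, value=-group["lr"])
        return loss
```

Usage is the same as any torch optimizer, e.g. `opt = AdamWithMomentaBuffer(model.parameters(), lr=1e-3)`.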
Want to know more about what is happening at @ChandarLab? Please join our annual research symposium (https://t.co/Uz5HUMBPUC) virtually or in person (Montreal) this August 11! You will hear my students talking about lifelong learning, reinforcement learning, NLP, and DL!
This is one of our efforts to promote research in lifelong learning. We are also organizing a new focused conference on Lifelong Learning (@CoLLAs_Conf) which is happening next month. You can register for the conference here: https://t.co/G7saoSU4nM 4/4
I am very excited to release this primer on lifelong supervised learning: https://t.co/3nSifLinWi. Lifelong learning is one of the most promising learning paradigms to achieve artificial general intelligence. 1/n
We are excited to invite submissions to the Workshop track of CoLLAs 2022! The workshop track has no proceedings and all accepted papers will be presented in a poster session. More details are available at https://t.co/RT9yTeu0lX Submission deadline: May 19, 2022, 11:59 pm (AoE)
We hope you are doing well during these tough times. The deadline for CoLLAs has been extended as follows: Abstract Deadline: March 7th (AOE) Paper Deadline: March 10th (AOE) We look forward to seeing your submission on lifelong learning!
The abstract deadline for CoLLAs is in 5 days, midnight on March 1st (AoE). We look forward to seeing your submissions on lifelong learning! https://t.co/sc7lRwrrsI
Only one week left for the application deadline! In addition to the listed topics (memory augmented networks, learning through language interaction, optimization, lifelong learning, RL), I am also looking for MSc/PhD students to work at the intersection of ML and Drug Discovery.
I have multiple open MSc/PhD positions on memory augmented neural nets, RL, Lifelong Learning, NLP for Fall 2022 at @ChandarLab / @Mila_Quebec /@polymtl! Details: https://t.co/DjjQj2Y2b6 Applications due Dec 1st: https://t.co/L3tizSr0Vb
I am very excited to release the recordings of my Reinforcement Learning lectures! You can watch the first-week lectures here: https://t.co/v6TTdsxKL2. If you want to follow the course, readings, lecture notes, and assignments will be made available at
There are two motivations for interpretability: “scientific understanding” and “trust in AI”. Unfortunately, these are sometimes conflated, which leads to inappropriate judgments of papers. A 🧵 based on our survey, "Post-hoc Interpretability for Neural NLP". https://t.co/SJx4RJy236
Our new survey on post-hoc interpretability methods for NLP is out! This covers 19 specific interpretability methods, cites more than 100 publications, and took 1 year to write. I'm very happy this is now public, do consider sharing. Read https://t.co/03TmDZRsfy. A thread 🧵 1/6