Navid Azizan (@NavidAzizan)
MIT Prof | AI & machine learning, systems & control, optimization | Fmr postdoc @Stanford, PhD @Caltech
Cambridge, MA · Joined June 2018
2K Followers · 277 Following · 21 Media · 101 Statuses
Introducing Instance-Adaptive Inference-Time Scaling! Paper: https://t.co/0mGdkUjMXK Code: https://t.co/uENXKuoL0T
🧠 Inference-time scaling lets LLMs spend more compute to solve harder problems, but not every question needs that! After all, we don't use a whiteboard to solve 1 + 1. So why should an LLM? Introducing Instance-Adaptive Inference-Time Scaling, a smarter way to allocate…
0 replies · 2 reposts · 16 likes
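The idea in the announcement, spending extra compute only on hard instances, can be illustrated with a toy dispatcher. This is my own sketch, not the paper's algorithm; `generate` and `confidence` are hypothetical stand-ins for an LLM call and an uncertainty estimate.

```python
from collections import Counter

def adaptive_solve(question, generate, confidence, threshold=0.9, max_samples=8):
    """Toy instance-adaptive allocation: answer cheap questions with a
    single pass, and escalate to self-consistency voting only when the
    first answer looks uncertain."""
    first = generate(question)
    if confidence(question, first) >= threshold:
        return first  # easy instance (the "1 + 1" case): no extra compute
    # hard instance: spend more compute and majority-vote the samples
    votes = Counter(generate(question) for _ in range(max_samples))
    return votes.most_common(1)[0][0]
```

In a real system, `confidence` might come from token-level log-probabilities or a learned difficulty predictor; the point is only that the compute budget is chosen per instance rather than fixed.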
In collaboration with the @MITIBMLab, thanks to the one and only @HW_HaoWang!
0 replies · 0 reposts · 5 likes
Paper: https://t.co/EHpD7XUMWg Code: github.com/azizanlab/repreli
How to assess a general-purpose AI model's reliability before it's deployed. A new technique from MIT LIDS researchers @NavidAzizan and Young-Jin Park enables users to compare several large models and choose the one that works best for their task. https://t.co/AHEqwkFYkS
2 replies · 3 reposts · 16 likes
Wondering when to trust pre-trained AI models and how to assess their reliability before deployment? Check out our work at #UAI2024! If you're in Barcelona, visit my poster (#368) tomorrow!! Read More: https://t.co/qSW0IH51zj (Paper), https://t.co/SD8ioXjaWM (MIT News).
news.mit.edu
A new technique estimates the reliability of a self-supervised foundation model, like those that power ChatGPT, without the need to know what task that model will be deployed on later.
1 reply · 3 reposts · 4 likes
📢 Still a few days left to apply for our postdoc position: https://t.co/8GYqB9Nz9N Candidates who wish to be considered for the "MIT Postdoctoral Fellowship for Engineering Excellence" may also apply here and list my name: https://t.co/uquuPYdNlv Deadline: Jan 31 @MIT @MIT_SCC
0 replies · 10 reposts · 39 likes
Fri, Dec 15, 17:20-17:40: On the Convergence Rate of Distributed Linear System Solvers. Session: Distributed Control III (Roselle Junior 4711). Boris Velasevic (MIT) https://t.co/6aVJftw3Mw
1 reply · 0 reposts · 3 likes
Fri, Dec 15, 10:20-10:40: Data-Driven Control w. Inherent Lyapunov Stability. Session: Data-Driven Verification & Control of Cyber-Physical Systems (Orchid Main 4202-4303). Youngjae Min (MIT) @youngjaem0
https://t.co/WQDMXYpvli
1 reply · 1 repost · 9 likes
Today, Dec 14, 16:20-16:40: Online Learning for Equilibrium Pricing under Incomplete Information. Session: Learning, Optimization, & Game Theory (Orchid Main 4202-4306). Haoyuan Sun (MIT) https://t.co/Ze7Q4T47Xg
1 reply · 0 reposts · 3 likes
Excited to be @IEEECDC2023 in Singapore with three of my brilliant students presenting their papers today and tomorrow! (See details below) P.s. Yes, we ditched @NeurIPSConf this year, sorry! #IEEECDC2023 #CDC2023
1 reply · 1 repost · 27 likes
A new machine-learning technique can efficiently learn to control a robot, leading to better performance. Using this method, "we're able to naturally create controllers that function much more effectively in the real world," Navid Azizan says. https://t.co/bkSQV8ylLH
3 replies · 27 reposts · 94 likes
If you are at #ICML2023, check out our oral by @spenMrich! Schedule:
Excited to present "Learning Control-Oriented Dynamical Structure from Data" next week at #ICML2023! We enforce factorized structure in learned dynamics models to enable performant nonlinear control. Paper: https://t.co/f79wPtohz9 Code (w/ #JAX): https://t.co/jqorikwxt5
0 replies · 1 repost · 7 likes
When can we trust the output representations of "foundation models"? Turns out one may be able to tell: https://t.co/EHpD7XUMWg Skillfully done by my wonderful student @Young_J_Park @MIT & the amazing @HW_HaoWang of @MITIBMLab See the 🧵 below
So many pre-trained models fueling diverse downstream tasks! When can we confidently trust and leverage these models? 🤔 Check it out! "Representation Reliability and Its Impact on Downstream Tasks" ( https://t.co/kloJzG6JUG)
@HW_HaoWang, @ShervinArdeshir, and @NavidAzizan
1 reply · 8 reposts · 30 likes
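One simple proxy in the spirit of this thread — my own illustration, not the paper's estimator — is to call a point's representation reliable when independently trained encoders agree on who its nearest neighbors are:

```python
import numpy as np
from itertools import combinations

def knn_sets(emb, k):
    """k-nearest-neighbor index sets (excluding self) for each row of emb."""
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return [set(np.argsort(row)[:k]) for row in d]

def neighborhood_consistency(embeddings, k=5):
    """Reliability proxy per data point: average Jaccard overlap of its
    k-NN set across embeddings from independently trained encoders."""
    neigh = [knn_sets(e, k) for e in embeddings]
    n = embeddings[0].shape[0]
    scores = np.zeros(n)
    pairs = list(combinations(range(len(embeddings)), 2))
    for a, b in pairs:
        for i in range(n):
            inter = len(neigh[a][i] & neigh[b][i])
            union = len(neigh[a][i] | neigh[b][i])
            scores[i] += inter / union
    return scores / len(pairs)
```

A score near 1 means the encoders place the point in the same neighborhood regardless of training randomness; a low score flags points whose representations (and hence downstream predictions) are less trustworthy.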
SketchOGD: Memory-Efficient Continual Learning. (arXiv:2305.16424v1 [cs.LG])
0 replies · 1 repost · 7 likes
Professor Navid Azizan has been selected as the 2023 Outstanding UROP Faculty Mentor. Each spring, UROP (Undergraduate Research Opportunities Program) students nominate research mentors who have demonstrated exceptional guidance and teaching in a research setting.
0 replies · 3 reposts · 35 likes
Youngjae will be presenting his work on one-pass learning at 2:50-3:10pm https://t.co/C6lVAY7Srq
Can we learn from sequentially available data without retraining on previous datapoints? We propose ORFit (Orthogonal Recursive Fitting), an algorithm for "one-pass" learning which seeks to fit every new datapoint while minimally changing the predictions on previous data. 1/3
0 replies · 1 repost · 2 likes
If you are at #CDC22 @CSSIEEE, come to the invited session on Recent Advances in Learning and Control in Tulum Ballroom w. @KaiqingZhang @guannanqu & @AdamWierman
#CDC2022
1 reply · 1 repost · 19 likes