Omar Montasser

@montasser_omar

Followers 160 · Following 202 · Media 3 · Statuses 15

Assistant Professor @Yale. Previously, FODSI-Simons Postdoc @UCBerkeley, PhD @TTIC_Connect. Interested in theoretical aspects of machine learning.

New Haven, CT
Joined July 2012
@yaledatascience
Yale Data Science
1 year
We are now accepting applications for our prestigious postdoc program. 3 year appointments. Flexible mentorship. No teaching requirement. $100,000/yr + $10k/yr in travel and research funding. Excellent benefits. We’re looking for the best. Join us at FDS.
1 · 26 · 52
@BarackObama
Barack Obama
2 years
If you’re looking to help people impacted by the floods in Libya, check out these organizations providing relief:
@ObamaFoundation
The Obama Foundation
2 years
🧵Emergency and relief workers are on the ground providing urgent aid in the aftermath of the catastrophic flash floods in northeast Libya. The toll of this natural disaster is unimaginable, and support is desperately needed.
21K · 20K · 148K
@montasser_omar
Omar Montasser
3 years
This will be presented today at #NeurIPS (11am-1pm) Poster #1036. Come through if interested in boosting robustness to adversarial examples!
@montasser_omar
Omar Montasser
4 years
Joint work with Avrim Blum, Greg Shakhnarovich, and @hongyangzh. I hope you enjoy the read!
0 · 0 · 3
@montasser_omar
Omar Montasser
4 years
Our results reveal that the two problems of barely robust learning and strongly robust learning are actually equivalent. An interesting landscape for boosting robustness also emerges, with connections to the classic and pioneering works on boosting accuracy.
1 · 0 · 4
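For context on the classic works referenced above: accuracy boosting in the AdaBoost mold (Freund & Schapire) repeatedly reweights the training examples the current ensemble misclassifies, then takes a weighted majority vote. A minimal numpy sketch of that classic template follows; the stump learner and toy data are illustrative choices, not anything from the paper, and the paper's robustness booster is a different algorithm.

    import numpy as np

    def train_stump(X, y, w):
        # Pick the axis-aligned threshold stump minimizing weighted 0/1 error.
        # Labels y are in {-1, +1}; w is a probability vector over examples.
        best = (np.inf, 0, 0.0, 1)  # (weighted error, feature, threshold, sign)
        for j in range(X.shape[1]):
            for t in np.unique(X[:, j]):
                for s in (1, -1):
                    pred = s * np.where(X[:, j] > t, 1, -1)
                    err = w[pred != y].sum()
                    if err < best[0]:
                        best = (err, j, t, s)
        return best

    def stump_predict(X, j, t, s):
        return s * np.where(X[:, j] > t, 1, -1)

    def adaboost(X, y, rounds=50):
        # Classic AdaBoost: upweight mistakes, then weighted majority vote.
        w = np.full(len(y), 1.0 / len(y))
        ensemble = []
        for _ in range(rounds):
            err, j, t, s = train_stump(X, y, w)
            err = min(max(err, 1e-12), 1 - 1e-12)
            alpha = 0.5 * np.log((1 - err) / err)  # this stump's vote weight
            pred = stump_predict(X, j, t, s)
            w *= np.exp(-alpha * y * pred)  # upweight misclassified examples
            w /= w.sum()
            ensemble.append((alpha, j, t, s))
        return ensemble

    def predict(ensemble, X):
        votes = sum(a * stump_predict(X, j, t, s) for a, j, t, s in ensemble)
        return np.where(votes > 0, 1, -1)

    # Toy usage: a diagonal boundary that no single axis-aligned stump matches.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
    model = adaboost(X, y)
    print("train accuracy:", (predict(model, X) == y).mean())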
@montasser_omar
Omar Montasser
4 years
Our formalized notion of "barely" robust learning requires robustness with respect to a "larger" perturbation set. We show this is *necessary*: weaker relaxations, such as robustness with respect to the actual perturbation set we care about, are *not* sufficient.
1 · 0 · 2
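To pin down the notation the thread alludes to, here is a minimal sketch in common adversarial-robustness notation; the symbols U, U', gamma, and epsilon are a paraphrase of standard usage, not quoted from the paper, and the paper's precise sense of "larger" is more specific than the containment shown here.

    % Robust risk of predictor h on distribution D w.r.t. perturbation set U:
    \mathrm{R}_{\mathcal{U}}(h; \mathcal{D}) \;=\;
        \Pr_{(x,y)\sim \mathcal{D}}\bigl[\, \exists\, z \in \mathcal{U}(x) : h(z) \neq y \,\bigr]

    % "Strongly" robust learning: output h that is robust on all but an
    % epsilon-fraction of the distribution:
    \mathrm{R}_{\mathcal{U}}(h; \mathcal{D}) \;\leq\; \varepsilon

    % "Barely" robust learning: h need only be robust on a gamma-fraction,
    % but with respect to a *larger* perturbation set U':
    \mathrm{R}_{\mathcal{U}'}(h; \mathcal{D}) \;\leq\; 1 - \gamma,
    \qquad \mathcal{U}(x) \subseteq \mathcal{U}'(x) \ \text{for all } x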
@montasser_omar
Omar Montasser
4 years
Motivated by this, we study the theoretical question of boosting "barely" robust learning algorithms, and we prove that it is possible to boost their robustness with a novel boosting algorithm.
1 · 0 · 2
@montasser_omar
Omar Montasser
4 years
Adversarially robust learning has been quite challenging in practice. Current algorithms can learn predictors with low natural error that are robust on only a small fraction of the data distribution (sometimes less than 50%).
1 · 0 · 2
@montasser_omar
Omar Montasser
4 years
Can we boost barely robust learning algorithms to learn predictors with high robust accuracy? I am very excited to share new work putting forward a theory for boosting adversarial robustness: https://t.co/PLioXF04yX. (1/6)
3 · 17 · 89
@montasser_omar
Omar Montasser
11 years
Thank you @KPCBFellows for the wonderful gift! @justinsayarath your tip was spot on, I love gelato :D http://t.co/mMSOuxnxl6
0 · 1 · 4