Adversarial Machine Learning
@adversarial_ML
Followers 181 · Following 44 · Media 0 · Statuses 14
I tweet about #MachineLearning and #MachineLearningSecurity.
Joined April 2018
Just read this paper. Short summary: when thinking of defenses to adversarial examples in ML, think of the threat model carefully. Nice paper. Also won the best paper award at ICML 2018 (@icmlconf ) Congrats to the authors!! https://t.co/iEdYbZI6VL
Adversarial robustness is not free: decrease in natural accuracy may be inevitable. Silver lining: robustness makes gradients semantically meaningful (+ leads to adv. examples w/ GAN-like trajectories) https://t.co/dynU5RQpM7 (@tsiprasd @ShibaniSan @logan_engstrom @alex_m_turner)
Here's an article by @UofT about our new work on adversarial attacks on face detectors that can help you preserve your privacy.
news.engineering.utoronto.ca
New algorithm protects users’ privacy by dynamically disrupting facial recognition tools designed to identify faces in photos
Think BatchNorm helps training by reducing internal covariate shift? Think again. (What BatchNorm *does* seem to do, both empirically and in theory, is smooth out the optimization landscape.) (with @ShibaniSan @tsiprasd @andrew_ilyas) https://t.co/Tlo71NSebi
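The normalization the tweet refers to is the standard BatchNorm transform: normalize each feature over the batch, then apply a learned scale and shift. A minimal numpy sketch (the function name and toy batch are illustrative, not from the paper):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # Normalize each feature over the batch dimension,
    # then apply a learned scale (gamma) and shift (beta).
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# Toy batch: 4 examples, 3 features.
x = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [3., 6., 9.],
              [4., 8., 12.]])
out = batch_norm(x, gamma=np.ones(3), beta=np.zeros(3))
# Each column of `out` now has (approximately) zero mean and unit variance.
```

The paper's point is that the training benefit comes not from this normalization stabilizing input distributions per se, but from its smoothing effect on the loss landscape.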
Excited by this direction of formal investigation for adversarial defences: Adversarial examples from computational constraints, Bubeck et al https://t.co/FKUqwuTyE7
"No pixels are manipulated in this talk. No pandas are harmed..." Great ways to differentiate your talk from the rest of talks on adversarial examples... no more pandas please 😀
I'm speaking at the 1st Deep Learning and Security workshop (co-located with @IEEESSP ) at 1:30 today: https://t.co/AaeTVerKNy I'll discuss research into defenses against adversarial examples, including future directions. Slides and lecture notes here: https://t.co/fCcnDo5tib
This paper shows how to make adversarial examples with GANs. No need for a norm ball constraint. They look unperturbed to a human observer but break a model trained to resist large perturbations. https://t.co/m8W1WpQQwu
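For context on what "norm ball constraint" means here: conventional attacks (e.g. PGD) restrict the perturbation to a small L∞ ball around the original input, which the GAN-based examples in this paper do not need. A minimal sketch of the conventional constrained step, with a fixed placeholder gradient standing in for a real model's loss gradient:

```python
import numpy as np

def project_linf(x_adv, x, eps):
    # Keep the perturbed input inside an L-infinity ball of
    # radius eps around the original input x (the "norm ball").
    return np.clip(x_adv, x - eps, x + eps)

def pgd_step(x_adv, x, grad, alpha=0.01, eps=0.03):
    # One projected gradient step: move in the sign of the loss
    # gradient, then project back onto the eps-ball around x.
    return project_linf(x_adv + alpha * np.sign(grad), x, eps)

# Toy example with a fixed (hypothetical) loss gradient.
x = np.zeros(4)
grad = np.array([1.0, -2.0, 0.5, -0.1])
x_adv = x.copy()
for _ in range(10):
    x_adv = pgd_step(x_adv, x, grad)
# The perturbation saturates at +/- eps; it can never leave the ball.
```

Training a model to resist perturbations inside such a ball says nothing about GAN-generated examples that lie far outside it yet still look natural to a human.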
LaVAN: Localized and Visible Adversarial Noise. A method to generate adversarial noise that is confined to a small, localized patch of the image, without covering any of the image's main objects. https://t.co/QBmMN2hfcL
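The mechanics of a localized attack are simple to sketch: only the pixels inside a small patch are replaced, and everything else is untouched. A toy illustration (the image, patch size, and placement are made up; a real attack would optimize the patch contents against the model):

```python
import numpy as np

def apply_patch(image, patch, top, left):
    # Paste adversarial noise into one small region; every pixel
    # outside the patch is left exactly as it was.
    h, w = patch.shape[:2]
    out = image.copy()
    out[top:top + h, left:left + w] = patch
    return out

rng = np.random.default_rng(0)
img = np.zeros((32, 32, 3))                    # stand-in "clean" image
patch = rng.random((5, 5, 3))                  # hypothetical 5x5 noise patch
adv = apply_patch(img, patch, top=0, left=27)  # corner, away from main objects
```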
Two papers accepted to ICML 2018. Congrats to all my amazing co-authors. Both are on adversarial ML. The arXiv versions of the papers are up, but we will update them soon based on reviewer comments. arXiv versions: https://t.co/VoBqhj0jK9 and https://t.co/gPno2WR0sy
arxiv.org
Motivated by safety-critical applications, test-time attacks on classifiers via adversarial examples have recently received a great deal of attention. However, there is a general lack of...
A detailed post on privacy in machine learning, describing why we need private ML algorithms and how we can leverage the PATE framework to achieve this (by @NicolasPapernot and @goodfellow_ian) https://t.co/qdRiDVV9Qr
#MachineLearning
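At the heart of PATE is a noisy-vote aggregation step: an ensemble of "teacher" models trained on disjoint private data vote on a label, and noise added to the vote counts gives a differential-privacy guarantee. A minimal sketch of that step, assuming 10 classes and made-up teacher votes (see the linked post for the real framework and its privacy analysis):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_aggregate(teacher_votes, num_classes=10, scale=1.0):
    # PATE-style aggregation (sketch): tally each teacher's predicted
    # label, add Laplace noise to the counts for privacy, return argmax.
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    counts += rng.laplace(0.0, scale, size=counts.shape)
    return int(np.argmax(counts))

# 100 hypothetical teachers: 80 vote class 3, 20 vote class 5.
votes = np.array([3] * 80 + [5] * 20)
label = noisy_aggregate(votes)
# With such a wide margin, the noisy vote almost surely returns class 3.
```

A "student" model is then trained only on these noisy labels, so no single private training example can noticeably influence what the student learns.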
Securing Distributed Machine Learning in High Dimensions https://t.co/XslQZc7VqN
#MachineLearningSecurity #AdversarialML
arxiv.org
We consider unreliable distributed learning systems wherein the training data is kept confidential by external workers, and the learner has to interact closely with those workers to train a model....
IBM Ireland just released "The Adversarial Robustness Toolbox: Securing AI Against Adversarial Threats". This library will allow rapid crafting and analysis of attacks and defense methods for machine learning models. https://t.co/rzRRLhUZ8H
#MachineLearningSecurity #AdversarialML
research.ibm.com