Morgane Goibert

@MGoibert

158 Followers · 353 Following · 15 Media · 107 Statuses

PhD student in ML/Deep Learning @CriteoAILab and @IP_Paris_ / @TelecomParis, specialized in Adversarial Robustness problems and related stuff.

Ile-de-France, France
Joined May 2019
@davidstutz92
David Stutz
3 years
To kick off my blog article series on adversarial examples and adversarial training, I just published the first three articles that lay some of the foundations: monitoring training and good clean performance - 2.56% test error on CIFAR10. Start here:
davidstutz.de
Top-tier conferences in machine learning or computer vision generally require state-of-the-art results as baseline to assess novelty and significance of the paper. Unfortunately, getting state-of-t...
1 · 9 · 23
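For readers new to the topic, here is a minimal sketch of a PGD-based adversarial training step in PyTorch, the general technique the article series covers. It is illustrative only, not code from the blog, and the function names are hypothetical.

```python
# Minimal PGD adversarial-training sketch (generic technique, not the blog's code).
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=7):
    """Projected gradient ascent on the loss within an L-inf ball of radius eps."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
        grad, = torch.autograd.grad(loss, delta)
        # Ascend along the gradient sign, then project back onto the eps-ball.
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y):
    """One training step on adversarial examples instead of clean ones."""
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```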
@MGoibert
Morgane Goibert
3 years
Check out Elvis's presentation of our paper at @aistats_conf yesterday 👇
@dohmatobelvis
Elvis Dohmatob
3 years
Here are the slides for "Origins of Low-Dimensional Adversarial Perturbations"
0 · 1 · 4
@CriteoAILab
Criteo AI Lab
3 years
Do you want to know how to generate fluent text from structured data and vice versa by leveraging heterogeneous data sources?👀 Don’t miss the opportunity to discuss it with @songdng today at @aistats_conf🙌 ⏰Auditorium 1 Foyer 6 - 4:30pm - 7pm https://t.co/0bl75NVyO1 #aistats23
1 · 3 · 6
@imad_aouali
Imad Aouali
3 years
🥳🎉 Our paper "Exponential Smoothing for Off-Policy Learning" has been accepted at #ICML2023 @icmlconf for an *Oral Presentation*. Many thanks to my co-authors David Rohde (@CriteoAILab), Victor-Emmanuel Brunel and @Korba_Anna (@Ensaeparis/@CrestUmr). 🤩🤩 See you in Hawaii!
2 · 4 · 25
@MGoibert
Morgane Goibert
3 years
Today at @aistats_conf, @dohmatobelvis will present our paper "Origins of Low-Dimensional Adversarial Perturbations" in Oral session 2😃 Check it here 👉 https://t.co/mSfVL9jaWc The paper studies the conditions under which low-dimensional black box attacks succeed 😎
0 · 4 · 10
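To make the setting concrete, here is a hedged sketch of a black-box attack whose perturbation is confined to a random low-dimensional subspace: the general idea the paper studies, not its actual algorithm. It assumes PyTorch, and `low_dim_black_box_attack` is a hypothetical name.

```python
# Illustrative sketch: random-search attack in a low-dimensional subspace.
import torch
import torch.nn.functional as F

def low_dim_black_box_attack(model, x, y, dim=10, eps=8 / 255, steps=500):
    """Black-box attack: the perturbation is x + U @ z with z in R^dim,
    and only model outputs (no gradients) guide the search."""
    n = x.numel()
    U, _ = torch.linalg.qr(torch.randn(n, dim))   # orthonormal basis, shape (n, dim)
    z = torch.zeros(dim)

    def loss_of(z):
        delta = (U @ z).view_as(x).clamp(-eps, eps)  # keep the perturbation small
        with torch.no_grad():
            logits = model((x + delta).clamp(0, 1).unsqueeze(0))
        return F.cross_entropy(logits, y.view(1))

    best = loss_of(z)
    for _ in range(steps):
        cand = z + 0.01 * torch.randn(dim)        # propose a step in the subspace
        cand_loss = loss_of(cand)
        if cand_loss > best:                      # keep proposals that raise the loss
            z, best = cand, cand_loss
    return (x + (U @ z).view_as(x).clamp(-eps, eps)).clamp(0, 1)
```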
@icmlconf
ICML Conference
3 years
#ICML2023 Workshops have been posted:
0 · 14 · 61
@MGoibert
Morgane Goibert
3 years
If the reviewers of my paper who asked for clarifications/experiments that we indeed provided in the rebuttal (after working quite a lot over a weekend, remember this very good timeline from @icmlconf) could see this...😅
@murat_kocaoglu_
Murat Kocaoglu
3 years
If you are reviewing for ICML, please pay attention to the rebuttals of papers you gave low scores to and engage with the authors. It takes 5 min of your time to correct a misunderstanding that will save someone else's 1-year worth of work. #ICML2023
0 · 0 · 9
@MGoibert
Morgane Goibert
3 years
🥇Results: much better accuracy on underspecified problems 🔥even in cases where labels are completely correlated with spurious features🔥 👉 Such an OOD setting is quite similar to the adversarial one. I wonder if such strategies can be adapted to it 🧐
0 · 0 · 0
@MGoibert
Morgane Goibert
3 years
Main strength 🔥 the "Diversify" stage 👉 heads are forced to be different by a term in the training loss that minimizes mutual information between the predictions of different heads, simple and effective👌 The best head is then selected in the "Disambiguate" stage😊
1 · 0 · 0
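As an illustration of such a diversity term, here is a minimal PyTorch sketch that penalizes the estimated mutual information between the predicted labels of every pair of heads. The paper's exact formulation may differ, and the function names are my own.

```python
# Illustrative multi-head diversity loss via pairwise mutual information.
import torch

def pairwise_mutual_information(p1, p2, eps=1e-8):
    """MI between the predicted labels of two heads.
    p1, p2: (batch, classes) softmax outputs on the same inputs; the joint
    distribution is approximated by averaging outer products over the batch."""
    joint = torch.einsum('bi,bj->ij', p1, p2) / p1.shape[0]   # (C, C)
    m1 = joint.sum(dim=1, keepdim=True)                       # marginal of head 1
    m2 = joint.sum(dim=0, keepdim=True)                       # marginal of head 2
    return (joint * (joint.add(eps).log() - (m1 * m2).add(eps).log())).sum()

def diversify_loss(head_probs):
    """Sum of pairwise MI terms over all heads; adding this to the training
    loss pushes heads toward making statistically independent predictions."""
    loss = 0.0
    for i in range(len(head_probs)):
        for j in range(i + 1, len(head_probs)):
            loss = loss + pairwise_mutual_information(head_probs[i], head_probs[j])
    return loss
```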
@MGoibert
Morgane Goibert
3 years
Morning read [1/3] 💡Training multiple NN heads while enforcing diversity of predictions ensures strong accuracy on out-of-distribution or underspecified problems. 👉 https://t.co/JUW6mu2uT3 from @yoonholeee @HuaxiuYaoML @chelseabfinn at #iclr2023
2 · 2 · 11
@MGoibert
Morgane Goibert
3 years
Very nice way to accompany your paper. Great job! 👌
@randall_balestr
Randall Balestriero
3 years
Awesome website summarizing our latest TMLR paper demonstrating how deep network pruning can be easily explained/visualized and improved simply by formulating it in terms of the DN's spline partition! Paper: https://t.co/30CXYmAWKM Code: https://t.co/heGXelObVh
0 · 0 · 1
@MGoibert
Morgane Goibert
3 years
[3/3] The experiments are done on NNs with large filters, as it is harder to see anything on smaller ones. I'm curious whether some structure can still be extracted (but how?) from smaller filters🧐 Anyway, very interesting to see that structure is once again important for #deeplearning 😊
0 · 0 · 5
@MGoibert
Morgane Goibert
3 years
[2/3] The strength of their initialization is that it's a simple Gaussian one where just the covariance matrix is carefully crafted, meaning it's very easy to implement 🤩 Results 🧐 1⃣Improved accuracy and 2⃣faster convergence 🔥
1 · 0 · 3
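For intuition, a toy sketch of "Gaussian init with a crafted covariance": sample conv filters from N(0, Σ), where Σ correlates nearby filter taps so filters come out spatially smooth. The covariance below is a hypothetical stand-in, not the paper's closed-form expression.

```python
# Toy structured Gaussian initialization for conv filters (hypothetical covariance).
import torch

def smooth_gaussian_filters(out_ch, in_ch, k=3, sigma=1.0):
    """Sample k x k conv filters from N(0, Sigma), where Sigma makes
    nearby taps correlated, yielding spatially smooth filters."""
    coords = torch.stack(torch.meshgrid(
        torch.arange(k), torch.arange(k), indexing='ij'), dim=-1).reshape(-1, 2).float()
    d2 = torch.cdist(coords, coords).pow(2)           # squared tap distances, (k*k, k*k)
    cov = torch.exp(-d2 / (2 * sigma ** 2))           # closer taps -> stronger correlation
    L = torch.linalg.cholesky(cov + 1e-5 * torch.eye(k * k))
    flat = torch.randn(out_ch * in_ch, k * k) @ L.T   # correlated Gaussian samples
    return flat.view(out_ch, in_ch, k, k)

# e.g. conv.weight.data.copy_(smooth_gaussian_filters(64, 3, 3))
```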
@MGoibert
Morgane Goibert
3 years
[Morning read 1/3] 💡Better accounting for the structure of NN conv filters improves performance 👉 https://t.co/7fHmb6f6IG by @ashertrockman et al. @iclr_conf They found that conv filters are highly structured, and designed a new initialization method to improve NN performance
2 · 5 · 48
@ParisNLP
Meetup NLP Paris
3 years
Hello everyone, the next Paris NLP meetup session will take place on Wednesday, March 29th, at Criteo's office, 32 rue Blanche, 75009 Paris, at 7 p.m. Join us to listen to Marina Vinyes and Nils Holzenberger. More info: https://t.co/jiXf5aGoXR
0 · 3 · 4
@dair_ai
DAIR.AI
3 years
ML Papers Explained (3.2K⭐️) These short paper summaries are great to learn about some of the most important methods in ML. We just surpassed 50 paper explanations! https://t.co/mbAnOUszzE
7 · 171 · 762
@MGoibert
Morgane Goibert
3 years
Very interesting 👍💡
@ashertrockman
Asher Trockman
3 years
Do we need to train *all* the weights in NNs?🤔Could we instead just *initialize* some of them very well? We present a closed-form expression for the distribution of filters in ConvMixer/ConvNeXt, allowing good performance even with ❄️frozen❄️ filters. https://t.co/xG3xmIMoYp 1/n
0 · 0 · 6
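A quick sketch of what "frozen filters" can look like in practice, assuming PyTorch: freeze every conv weight after a good initialization and train only the remaining parameters. Illustrative only; `freeze_conv_filters` is a hypothetical helper, not the paper's code.

```python
# Freeze conv filters so only the rest of the network is trained.
import torch
import torch.nn as nn

def freeze_conv_filters(model):
    """Disable gradients on every conv weight; returns the parameters
    that remain trainable (e.g. norm layers, pointwise/linear layers)."""
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            m.weight.requires_grad_(False)
    return [p for p in model.parameters() if p.requires_grad]

# optimizer = torch.optim.AdamW(freeze_conv_filters(model), lr=1e-3)
```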
@MGoibert
Morgane Goibert
3 years
👉 Inspect adversarial robustness neuron-wise or layer-wise 👉 Identify the main general or class-dependent 'routes'/parameters in NNs (e.g. for pruning, distillation, transfer) Topology can really help better understand NNs 🔥🤩 I hope to see more papers like that 👍
0 · 0 · 4
@MGoibert
Morgane Goibert
3 years
Using Betti numbers (a very simple topological statistic) they are able to differentiate randomly initialized neurons from trained ones👌 More: their method can be used to assess the generalization power of entire NNs 🧠 I see several ideas that could be built on that 🧐👇
1 · 0 · 3
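For a flavor of the kind of statistic involved, here is a simple sketch computing β₀ (the number of connected components) of a graph that links neurons whose activations are strongly correlated. This is one illustrative construction, not necessarily the one the paper uses.

```python
# Illustrative beta_0 computation on a neuron activation-correlation graph.
import networkx as nx
import numpy as np

def betti_zero(activations, threshold=0.5):
    """activations: (samples, neurons) array of one layer's activations.
    Connect two neurons when |correlation| of their activations exceeds
    the threshold; beta_0 is the number of connected components."""
    corr = np.corrcoef(activations.T)          # (neurons, neurons)
    n = corr.shape[0]
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if abs(corr[i, j]) > threshold:
                g.add_edge(i, j)
    return nx.number_connected_components(g)
```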