Alexia Jolicoeur-Martineau

@jm_alexia

Followers 10,334 · Following 1,513 · Media 124 · Statuses 6,970

AI Researcher at the Samsung SAIT AI Lab 🐱‍💻

Montréal, Québec
Joined March 2017
Pinned Tweet
@jm_alexia
Alexia Jolicoeur-Martineau
8 months
Fashion repeats itself: Generating and Imputing Tabular Data via Diffusion and Flow-based Gradient-Boosted Trees (XGBoost) 🌲 New work with @FatrasKilian and @TalKachman . 😎 Blog: Website: arXiv:
8
61
270
@jm_alexia
Alexia Jolicoeur-Martineau
5 years
🙀🙀🙀 I just got accepted, I will be starting a PhD at @MILAMontreal next month! 😸 This was my third application, it took two years, but I finally got accepted!
91
31
1K
@jm_alexia
Alexia Jolicoeur-Martineau
6 years
My new paper is out! " The relativistic discriminator: a key element missing from standard GAN" explains how most GANs are missing a key ingredient which makes them so much better and much more stable! #Deeplearning #AI
13
284
949
@jm_alexia
Alexia Jolicoeur-Martineau
3 years
If you want to start experimenting with GANs in PyTorch, I have made the following code available: For GANs on toy data: For GANs + denoising score matching hybrids on toy data: For image generation:
3
152
763
@jm_alexia
Alexia Jolicoeur-Martineau
1 year
I successfully defended my PhD thesis today!
Tweet media one
49
4
749
@jm_alexia
Alexia Jolicoeur-Martineau
6 months
My PhD is finally available online! 😸 Check it out if you are interested in GANs and diffusion models.
14
81
744
@jm_alexia
Alexia Jolicoeur-Martineau
5 years
My new paper is out! We show a framework in which we can both derive #SVMs and gradient penalized #GANs ! We also show how to make better gradient penalties!
@hardmaru
hardmaru
5 years
Connections between Support Vector Machines, Wasserstein distance and gradient-penalty GANs New work by @jm_alexia and @bouzoukipunks 🔥🔥
Tweet media one
1
86
381
9
156
720
@jm_alexia
Alexia Jolicoeur-Martineau
7 months
Don't listen to this person. Ask your favorite companies whether they hire people without a PhD for your preferred position. I asked most of them and they all said no, except for the (temporary) Google residency. That's why I did a PhD, and I'm glad I did.
@sshkhr16
Shashank Shekhar
7 months
As PhD applications season draws closer, I have an alternative suggestion for people starting their careers in artificial intelligence/machine learning: Don't Do A PhD in Machine Learning ❌ (or, at least, not right now) 1/4 🧵
36
55
517
33
37
623
@jm_alexia
Alexia Jolicoeur-Martineau
4 years
My paper "On Relativistic f-divergences" got accepted to #icml2020 ! 🥳 This means that 2 of the 3 papers I wrote alone before getting accepted to the PhD were published! The paper provides a strong theoretical foundation for Relativistic GANs:
12
36
586
@jm_alexia
Alexia Jolicoeur-Martineau
6 months
Are transformers having their GAN moment? 😰
29
27
545
@jm_alexia
Alexia Jolicoeur-Martineau
4 years
Just released a new blog post on the recent approach that will likely supersede GANs in the near future: Score Matching with Langevin Sampling by @YSongStanford . I explain how the approach works and its pros and cons.
8
111
504
@jm_alexia
Alexia Jolicoeur-Martineau
3 years
Summary of my Neurips reviews: "Beautiful solution of an important problem, but not novel and too simple". I guess I should have put some meaningless theorems and complex formulas. Oh well.
18
21
480
@jm_alexia
Alexia Jolicoeur-Martineau
1 year
I just discovered 'torchview', a cool new Python library for visualizing the graph of a neural network in Torch. It seems a lot better than existing libraries: it shows you all the important details without the fluff.
5
64
460
@jm_alexia
Alexia Jolicoeur-Martineau
4 years
New paper on adversarial😠 score matching and an alternative to Langevin Sampling for better generative models! 😸 We show how we can obtain results better than SOTA GANs. 😻 Blog: Paper: Code:
3
71
394
@jm_alexia
Alexia Jolicoeur-Martineau
4 years
Story about a woman being laughed at in a machine learning seminar who is now doing AI research.
9
75
379
@jm_alexia
Alexia Jolicoeur-Martineau
4 years
@osazuwa @BarackObama @AOC What is surprising is that StyleGAN is trained on FFHQ, which is supposed to be a much more diverse face dataset alternative to CelebA (which is extremely biased toward good-looking white people). So either FFHQ is not diverse enough, or StyleGAN/PULSE mode-collapses hard.
9
25
352
@jm_alexia
Alexia Jolicoeur-Martineau
4 years
I'm a bit tired of GANs for image generation, so I'm making a switch to NLP. I will try making good GAN text generators with a great personality. It can't be that hard... 😹🙃
16
12
339
@jm_alexia
Alexia Jolicoeur-Martineau
3 years
I just found this amazing PhD thesis (300+ pages) by Alessandro Barp on statistical divergences, Hamiltonian Monte Carlo, score matching, Stein operators, and related subjects. Look it up if you are interested in the theory of generative models.
3
41
343
@jm_alexia
Alexia Jolicoeur-Martineau
5 years
1/ I have been asked by many people how I managed to be accepted to @MILAMontreal after two years of trying. Here's my story. Last year I reached out to a new prof at MILA and I audited their class.
15
77
338
@jm_alexia
Alexia Jolicoeur-Martineau
4 years
I recommend checking out the tool by @charlesmartin14 . It fits a power-law distribution to the eigenvalues of the weight matrices at each layer of your neural net. Surprisingly, this lets you tell whether a model trained well without looking at the test set.
4
64
339
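The idea behind that tool can be sketched in a few lines of numpy: estimate a power-law exponent from the tail of a weight matrix's eigenvalue spectrum. This is a rough illustration using a Hill-type maximum-likelihood estimator, not the actual weightwatcher implementation; the matrix and quantile cutoff below are made up.

```python
import numpy as np

def powerlaw_alpha(W, tail_quantile=0.5):
    """Hill-type MLE of the power-law exponent of the spectrum of W^T W.
    A rough sketch of the per-layer quantity such tools report -- not
    the actual package's estimator."""
    eigs = np.linalg.svd(W, compute_uv=False) ** 2  # eigenvalues of W^T W
    x_min = np.quantile(eigs, tail_quantile)        # start of the tail
    tail = eigs[eigs >= x_min]
    # MLE for p(x) ~ x^(-alpha) on x >= x_min (Clauset-style estimator).
    return 1.0 + len(tail) / np.sum(np.log(tail / x_min))

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 256))  # stand-in for a trained layer's weights
alpha = powerlaw_alpha(W)
```

In that line of work, heavier spectral tails (smaller exponents) are reported to correlate with better-trained layers.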
@jm_alexia
Alexia Jolicoeur-Martineau
6 months
Many recent LLM transformer architectures (nanoGPT, Mistral) do not use a bias term. Is there any intuition as to why they function so well without one?
19
15
311
@jm_alexia
Alexia Jolicoeur-Martineau
5 months
- Stable Diffusion uses GANs under the hood (adversarial autoencoders) - Knowledge distillation through weights is powerful (PAPA) - Diffusion models can be trained with decision trees (ForestDiffusion) - Weight quantization (e.g., QLoRA) is useless for large-batch training
@jeffbigham
hci.social/@jbigham
5 months
what did you learn this past year?
2
0
3
5
28
310
@jm_alexia
Alexia Jolicoeur-Martineau
1 year
Diffusion models have become overcrowded and more focused on scaling/engineering than algorithms. I'm slowly weaning off from diffusion models, just like I did with GANs. I'm now exploring small subareas of AI that have great potential. I will have cool stuff to show in Spring.
8
12
304
@jm_alexia
Alexia Jolicoeur-Martineau
5 years
There's a whole community on enhancing the resolution of old #videogames mostly using ESRGAN (which uses Relativistic GANs)! 😻 Feels good to know I am having an impact on "real" life 😺😎.
7
55
288
@jm_alexia
Alexia Jolicoeur-Martineau
2 years
I'm glad to announce that I will be joining the Samsung AI Lab (SAIL) as an AI Researcher! 😸 This is a small academic-style lab led by @SimonLacosteJ located right alongside @Mila_Quebec . I will continue working on generative models while expanding toward cool new applications.
14
6
286
@jm_alexia
Alexia Jolicoeur-Martineau
3 years
I just passed my predoctoral comprehensive examination! I'm finally done with everything except the thesis/defense.
21
5
286
@jm_alexia
Alexia Jolicoeur-Martineau
5 years
I'm working on a totally new adversarial approach (not just a modified loss or small trick), and it's working very well right off the bat without tuned hyperparameters. This one is going to make a great publication. I won't be able to make it in time for #NeurIPS though. Stay tuned! 😎
7
15
281
@jm_alexia
Alexia Jolicoeur-Martineau
5 years
Having a fancy diploma ( #PhD ) does not make one a #scientist . Living the life of a scientist (doing experiments, summarizing your results in publications, attending and presenting at conferences, applying for research grants, etc.) makes one a scientist.
9
38
281
@jm_alexia
Alexia Jolicoeur-Martineau
5 years
I'm one of the 10 winners from the Borealis Fellowship! 🙀😸
@BorealisAI
Borealis AI
5 years
Borealis AI is thrilled to announce our 2018-2019 Graduate Fellowship winners! Meet our 10 outstanding finalists and learn more about the research topics they'll be pursuing this year. #AI #Canada
Tweet media one
2
22
95
24
13
277
@jm_alexia
Alexia Jolicoeur-Martineau
6 years
MFW @goodfellow_ian asks me to present my paper to his team at Google and wants to meet with me
6
4
274
@jm_alexia
Alexia Jolicoeur-Martineau
4 years
Tweet media one
5
29
270
@jm_alexia
Alexia Jolicoeur-Martineau
5 years
1. I wrote my first 2 AI papers on my own, one got accepted to @iclr2019 2. I presented my work and attended my first conferences; I took the effort to network and socialize which really improved my confidence and social skills 3. I got accepted to the PhD @MILAMontreal
9
9
271
@jm_alexia
Alexia Jolicoeur-Martineau
5 years
Proof that SVMs are just a trivial case of GAN. 🤠
@jm_alexia
Alexia Jolicoeur-Martineau
5 years
My new paper is out! We show a framework in which we can both derive #SVMs and gradient penalized #GANs ! We also show how to make better gradient penalties!
9
156
720
4
45
268
@jm_alexia
Alexia Jolicoeur-Martineau
3 years
New paper is out! 😻 We show how to generate high-quality data as fast as possible with score-based (diffusion) models! 🏃🏻💨💨 Blog: Paper: Code: Work with @KL_Div @Remi29048827 @TalKachman @bouzoukipunks
7
51
262
@jm_alexia
Alexia Jolicoeur-Martineau
6 years
Homosexuality is a criminal offense in Ethiopia, there's no way I'm ever submitting an abstract to #ICLR next year... I can't risk my life. I guess there will be more racial diversity, at the cost of no more #LGBT 🏳️‍🌈 diversity. Is this really confirmed?
@slashML
/MachineLearning
6 years
ICLR 2020 will be in Addis Ababa, Ethiopia
1
23
68
14
54
258
@jm_alexia
Alexia Jolicoeur-Martineau
4 years
Funny to think that I had my work published in a serious academic conference with this table.
Tweet media one
6
18
251
@jm_alexia
Alexia Jolicoeur-Martineau
5 years
9/ You have to take initiative and show that you will do research anyway and that they are losing by not taking you. I reached out again to the professor and he fast-tracked me into the #PhD program so I could start next month.
7
25
242
@jm_alexia
Alexia Jolicoeur-Martineau
4 years
I was excited, and then I read "2500 V100-days" 😹! This is pure brute-forcing of the problem; can you imagine the environmental impact? If I understand correctly, with 100 V100 GPUs, it would still take 25 days running 24/7 to train the model.
@OpenAI
OpenAI
4 years
We found that just as a large transformer model trained on language can generate coherent text, the same exact model trained on pixel sequences can generate coherent image completions and samples.
Tweet media one
58
745
3K
10
23
234
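The back-of-the-envelope math in the tweet checks out (the cluster size of 100 GPUs is the tweet's own hypothetical):

```python
gpu_days = 2500          # total compute reported ("2500 V100-days")
num_gpus = 100           # hypothetical cluster size from the tweet
wall_clock_days = gpu_days / num_gpus
print(wall_clock_days)   # 25.0
```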
@jm_alexia
Alexia Jolicoeur-Martineau
5 years
My new #paper "On relativistic f-divergences" is out 😺! It has a very in-depth intro to make it easier for those less familiar with #GANs and a shortened one-page #proof so you don't have to read the full proof.
4
51
233
@jm_alexia
Alexia Jolicoeur-Martineau
3 years
@ringo_ring The FBI just showed that they are against open science. It always comes down to protecting the rich corporations whatever the cost is.
4
12
211
@jm_alexia
Alexia Jolicoeur-Martineau
2 years
I'm super excited to share this new joint work with @VikramVoleti and @chrisjpal where we show that a single diffusion model can solve many video tasks through a masking scheme! 🧙‍♂️ Blog: Website: Arxiv:
8
43
224
@jm_alexia
Alexia Jolicoeur-Martineau
5 years
My Relativistic GANs paper was accepted to #ICLR2019 !
12
9
222
@jm_alexia
Alexia Jolicoeur-Martineau
4 years
I'd argue the reverse too. You can outperform in pretty much any scientific field by bringing with you an artistic mindset.
@fchollet
François Chollet
4 years
You can outperform in pretty much any craft by bringing with you a scientific mindset
7
30
237
10
23
215
@jm_alexia
Alexia Jolicoeur-Martineau
5 years
Just wrote 5 pages for my new paper! I finally have something interesting to present! 😺 I show a geometric interpretation of classifiers. WGAN-GP is different from the Wasserstein Distance and I show what it is actually optimizing. I also talk about RaGANs.
6
12
214
@jm_alexia
Alexia Jolicoeur-Martineau
4 years
Here are better ideas: - Finish your homework/school projects - Stay in bed and recuperate from sickness - Smoke some legal Canadian weed - Chill on the sofa watching Housewives of Beverly Hills - Play video games online with friends
8
12
206
@jm_alexia
Alexia Jolicoeur-Martineau
9 months
Reviewer: I can't find any paper doing X, but X is not novel Author: 🤔This is literally the definition of novelty Reviewer: 🤬I lower my score even more, your work is trivial and not novel
8
7
208
@jm_alexia
Alexia Jolicoeur-Martineau
7 months
You can now get the feature importance (SHAP) from the Gradient-Boosted Tree Diffusion/Flow models! As far as I know, this is the only diffusion model that can extract the feature importance for a better understanding of the generation mechanism!
Tweet media one
0
51
206
@jm_alexia
Alexia Jolicoeur-Martineau
4 years
Energy-based models are the future of generative modelling.
7
23
202
@jm_alexia
Alexia Jolicoeur-Martineau
4 years
Good paper on a GAN-VAE hybrid. What I like most is the math in their derivation of the objective function. Every probability distribution is properly defined and there is no major trick or leap-of-faith involved.
2
26
201
@jm_alexia
Alexia Jolicoeur-Martineau
4 years
The RTX 3090 will have 24 GB of memory, even more than the 16 GB Tesla V100s we have at Mila! 🙌👏
7
16
201
@jm_alexia
Alexia Jolicoeur-Martineau
6 years
First publication using Relativistic #GANs ! 🥳 They beat the state-of-the-art in super-resolution using a Standard Relativistic Average GAN! 👍 This is just the beginning! 😎
4
50
200
@jm_alexia
Alexia Jolicoeur-Martineau
8 months
State-of-the-art generative model 🌳 using CPUs and "classic ML" coming up tomorrow
@chrisalbon
Chris Albon
8 months
Recently I heard someone call it “classic machine learning” Like damn bro that hurts
36
16
349
14
8
192
@jm_alexia
Alexia Jolicoeur-Martineau
4 years
A fascinating paper on GANs just came out. It studies the dynamics of the generator (contrary to most papers, which only focus on the discriminator dynamics). They show why non-saturating loss functions cause mode collapse and offer ideas on how to lessen it.
3
30
192
@jm_alexia
Alexia Jolicoeur-Martineau
4 years
I'd argue that whale vocalizations are the closest thing to an alien language on Earth. They may be as smart as us and likely smarter than us. They have cultures. The fact that we can't understand them shows our lack of ability to understand other intelligent species languages.
5
20
186
@jm_alexia
Alexia Jolicoeur-Martineau
4 years
Most modern generative models (VAEs and GANs) produce a generator network which transforms Gaussian samples into fake samples of data. Although it may appear possible to obtain an optimal generator which always generates realistic data, I suspect that it is generally impossible. 1/5
5
24
177
@jm_alexia
Alexia Jolicoeur-Martineau
4 years
This is my experience too. You need a PhD to get a Research Scientist job. You need papers to get into a PhD (which means that it's a minimum-wage job rather than a training). The main thing you learn in a PhD is how to deal with the broken conference peer-review system.
@andreas_madsen
Andreas Madsen
4 years
After getting published in ICLR as an Independent Researcher, I have received nearly 100 messages from others who are looking to do the same. So I wrote a blog post on why I decided to do it and my advice to others.
47
491
2K
4
14
175
@jm_alexia
Alexia Jolicoeur-Martineau
4 years
Really cool new package for PyTorch which allows you to construct neural networks without specifying sizes/dimensions. It seems to make constructing a neural network very easy and requires very few lines. 🤩
2
30
176
@jm_alexia
Alexia Jolicoeur-Martineau
8 months
New paper coming up next week!!!! Deep learning is not all you need 😎
8
6
173
@jm_alexia
Alexia Jolicoeur-Martineau
4 years
Pre-doctoral exam in one hour!!! Let's hope it all goes well.
16
3
173
@jm_alexia
Alexia Jolicoeur-Martineau
5 years
You miss the old times when SVMs were at the top of the food chain? Turns out that gradient-penalized classifiers are a generalization of soft-SVMs. Read my paper to find out how to make your #NeuralNetworks act like SVMs. #ML #AI
@jm_alexia
Alexia Jolicoeur-Martineau
5 years
My new paper is out! We show a framework in which we can both derive #SVMs and gradient penalized #GANs ! We also show how to make better gradient penalties!
9
156
720
5
30
173
@jm_alexia
Alexia Jolicoeur-Martineau
7 months
It's silly how controversial my tweet got. I have always been on the side that you don't need a PhD to be a great researcher, and we shouldn't need one. Yet, not having a PhD is a handicap for most research jobs, salary, promotions, and job mobility, so it's worth getting.
8
5
170
@jm_alexia
Alexia Jolicoeur-Martineau
4 years
Life is meaningless, so we invent meaning. Science and art are ways to get meaning. But rather than doing research for the sake of advancing knowledge, we decided to invent our own reviewing system that turns research into a win/lose game. It's so incredibly silly and pointless.
5
16
172
@jm_alexia
Alexia Jolicoeur-Martineau
4 years
I never expected to say this, but @GaryMarcus was the clear winner of the #AIDebate . 🙌 He advocated for hybrid methods combining deep learning and symbolic AI and injecting stronger priors into AI models.
8
30
168
@jm_alexia
Alexia Jolicoeur-Martineau
3 months
The biggest limitation of 1-2 bit LLMs is that Torch stores Booleans with 8 bits and matrix multiplication is done in FP32 instead of XNOR-popcount (or similar). We need software changes (1-2 bit objects in Torch) and hardware changes (super fast XNOR-popcount on GPUs).
@_akhaliq
AK
3 months
Microsoft presents The Era of 1-bit LLMs All Large Language Models are in 1.58 Bits Recent research, such as BitNet, is paving the way for a new era of 1-bit Large Language Models (LLMs). In this work, we introduce a 1-bit LLM variant, namely BitNet b1.58, in which every single
Tweet media one
53
626
3K
8
21
170
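The XNOR-popcount trick mentioned here replaces a ±1 dot product with bitwise operations on packed words. A pure-Python sketch of the idea (real 1-bit kernels would do this on packed GPU registers, not Python ints):

```python
def pack_bits(v):
    """Pack a ±1 vector into a Python int (bit i set when v[i] == +1)."""
    word = 0
    for i, x in enumerate(v):
        if x == 1:
            word |= 1 << i
    return word

def binary_dot(a_bits, b_bits, n):
    """Dot product of two ±1 vectors of length n without multiplications:
    matches - mismatches = n - 2 * popcount(a XOR b)."""
    return n - 2 * bin(a_bits ^ b_bits).count("1")

a = [1, -1, 1, 1, -1, -1, 1, -1]
b = [1, 1, -1, 1, -1, 1, 1, 1]
dot = binary_dot(pack_bits(a), pack_bits(b), len(a))
```

With weights packed 32 or 64 per word, one XOR plus one popcount replaces that many multiply-accumulates, which is the speedup the tweet argues current software/hardware leaves on the table.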
@jm_alexia
Alexia Jolicoeur-Martineau
4 years
You want GAN stability without having to resort to complex architectures? It seems that my easy-to-implement "causal GAN" leads to even more stability than Relativistic GAN (see the table from my old paper with new results).
Tweet media one
5
20
170
@jm_alexia
Alexia Jolicoeur-Martineau
4 years
If you're wondering how to keep up-to-date with AI research, I recommend following your favorite papers using google scholar alerts. I follow a few GAN papers to be able to catch most GAN papers.
3
15
165
@jm_alexia
Alexia Jolicoeur-Martineau
5 years
Just reached 100 total citations and my single-author Relativistic GAN paper now has more citations than all my other grant-funded publications from a few years ago! 😸
4
6
167
@jm_alexia
Alexia Jolicoeur-Martineau
4 years
NVIDIA basically fixed most of the issues with their architecture. They now apply the gradient penalty only 1/16 of the time, making it much faster, and they replaced progressive growing with a modified MSG-GAN ( @AnimeshKarnewar ). The amount of work done is insane for one single paper.
@_akhaliq
AK
4 years
Analyzing and Improving the Image Quality of StyleGAN pdf: abs: github:
Tweet media one
4
75
277
3
46
166
@jm_alexia
Alexia Jolicoeur-Martineau
4 years
I recommend watching the presentation by Yann LeCun on the subject: . I'm working on an energy-based generative method right now and it's very promising.
0
23
160
@jm_alexia
Alexia Jolicoeur-Martineau
5 years
Frequent users of gradient penalty (WGAN-GP, StyleGAN, etc.), make sure to try out the new Linfinity hinge gradient penalty from for better results. See for how to quickly and easily implement it in #PyTorch .
@jm_alexia
Alexia Jolicoeur-Martineau
5 years
My new paper is out! We show a framework in which we can both derive #SVMs and gradient penalized #GANs ! We also show how to make better gradient penalties!
9
156
720
1
43
160
@jm_alexia
Alexia Jolicoeur-Martineau
4 years
Ouch 🤦‍♀️
@shivamshrirao
Shivam Shrirao
4 years
Tweet media one
4
12
52
6
19
160
@jm_alexia
Alexia Jolicoeur-Martineau
5 years
I would never trust a neural network on something like this! 😹
@feras_dayoub
Feras Dayoub
5 years
Full confidence in your own code.
104
2K
6K
5
36
158
@jm_alexia
Alexia Jolicoeur-Martineau
4 years
Finally, one paper with all accepts and 4 reviews! After so many rejected submissions, this feels good.
8
0
159
@jm_alexia
Alexia Jolicoeur-Martineau
4 years
PairGAN is a new GAN similar to Relativistic GAN which performs well. RaGAN does f(C(x)-E[C(y)]), while PairGAN does f(D(C(x)+E[C(y)])), where D is a small fully-connected network. I was always against adding D to RaGANs, but it seems that +E[C(y)] instead of -E[C(y)] makes it work.
@tim_garipov
Timur Garipov
4 years
First paper written at a new lab is out on arxiv! "The Benefits of Pairwise Discriminators for Adversarial Training" Joint work with @ShangyuanTong and Tommi Jaakkola. Arxiv: Code:
Tweet media one
Tweet media two
2
18
75
3
27
153
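The RaGAN term f(C(x)-E[C(y)]) can be made concrete. A minimal sketch of the non-saturating relativistic average discriminator loss with f = softplus, evaluated on made-up critic outputs (a toy illustration, not the paper's training code):

```python
import math

def softplus(z):
    """f(z) = log(1 + e^z), the non-saturating choice of f."""
    return math.log1p(math.exp(z))

def ragan_d_loss(c_real, c_fake):
    """Relativistic average discriminator loss:
    E_x[f(-(C(x) - E_y[C(y)]))] + E_y[f(C(y) - E_x[C(x)])]."""
    mean_real = sum(c_real) / len(c_real)
    mean_fake = sum(c_fake) / len(c_fake)
    loss_real = sum(softplus(-(cx - mean_fake)) for cx in c_real) / len(c_real)
    loss_fake = sum(softplus(cy - mean_real) for cy in c_fake) / len(c_fake)
    return loss_real + loss_fake

# Critic outputs for a toy batch (made-up numbers):
loss = ragan_d_loss([2.0, 1.5, 2.5], [-1.0, -0.5, -1.5])
```

The loss shrinks as real samples score higher than the average fake (and vice versa), which is the "relativism" the tweet contrasts with PairGAN's D(C(x)+E[C(y)]) variant.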
@jm_alexia
Alexia Jolicoeur-Martineau
4 years
Problem is that everyone talks about causality, but very few study it and those people focus on Pearl's framework of causality which is hard to translate to AI (finding the true causal graph is intractable). Mila doesn't offer any course on causality.
@IntuitMachine
Carlos E. Perez
4 years
Yoshua Bengio lays down his roadmap of where Deep Learning research is heading:
2
48
161
6
25
152
@jm_alexia
Alexia Jolicoeur-Martineau
5 years
My idea from last month didn't scale to images, so I abandoned it. Now, I have a non-adversarial approach that works better than GANs (no mode collapse) on toy datasets. Tomorrow, I'm trying to scale it to images; wish me luck. 😨 This is so nerve-wracking. 😖
6
1
145
@jm_alexia
Alexia Jolicoeur-Martineau
2 years
This new paper speeds up diffusion models by analytically solving the linear part of the reverse ODE to make precise solvers. I find it super cool that they use our adaptive step-size algorithm () to get their best results! 😸
2
27
144
@jm_alexia
Alexia Jolicoeur-Martineau
5 years
Beautiful dataset! Will be fun working with this instead of CIFAR-10 and the boring CELEBA.
Flickr-Faces-HQ (FFHQ) is out now. "... 70,000 high-quality PNG images at 1024×1024 resolution and contains considerable variation in terms of age, ethnicity and image background." The dataset used for the Style-GAN paper.
Tweet media one
5
154
506
1
34
145
@jm_alexia
Alexia Jolicoeur-Martineau
2 years
Wow, this paper is super neat! It shows how to use GANs to make score-based diffusion models super fast! We are close to the optimum, diffusion models now only need 2-4 iterations to work.
@_akhaliq
AK
2 years
Tackling the Generative Learning Trilemma with Denoising Diffusion GANs project page: show that denoising diffusion GANs obtain sample quality and diversity competitive with original diffusion models while being 2000× faster on the CIFAR-10 dataset
2
56
224
0
26
144
@jm_alexia
Alexia Jolicoeur-Martineau
5 months
CNNs (just like Transformers) can work well on any modality! Treat everything (sound, point-cloud, video, time series) as images even if they are not and use large kernels to match or beat Transformers! See the reddit post for more details:
@ge_yixiao
Yixiao Ge
6 months
🔥CNN unifies many modalities and outperforms modality-specific models! 🤩Check out our “UniRepLKNet: A Universal Perception Large-Kernel ConvNet for Audio, Video,Point Cloud, Time-Series and Image Recognition”. paper: code:
Tweet media one
3
13
74
3
25
141
@jm_alexia
Alexia Jolicoeur-Martineau
3 years
New version of the paper is up on Arxiv! Check it out, especially if you haven't read it yet. Maximum-margin classifiers (like SVMs) are approximately equivalent to classifiers with gradient-norm penalties. We now also show empirically that the margin becomes much larger.
@hardmaru
hardmaru
5 years
Connections between Support Vector Machines, Wasserstein distance and gradient-penalty GANs New work by @jm_alexia and @bouzoukipunks 🔥🔥
Tweet media one
1
86
381
0
19
139
@jm_alexia
Alexia Jolicoeur-Martineau
2 years
New paper📜 coming up on Monday! 😸 I haven't been this excited about a paper since Relativistic GAN! It's one hell of a paper jam-packed with cool stuff. You get a single generative model that can solve so many tasks.
4
9
137
@jm_alexia
Alexia Jolicoeur-Martineau
5 years
Can you train a #SupportVectorMachine (SVM) when your classifier is a #NeuralNetwork ? 🤔 Yes, use a hinge loss classifier with an L2-norm gradient penalty, see: #MachineLearning #AI #ArtificialIntelligence #Math
@jm_alexia
Alexia Jolicoeur-Martineau
5 years
My new paper is out! We show a framework in which we can both derive #SVMs and gradient penalized #GANs ! We also show how to make better gradient penalties!
9
156
720
1
29
140
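The connection is easiest to see for a linear critic f(x) = w·x + b: the input gradient is w everywhere, so a squared gradient-norm penalty reduces exactly to the ‖w‖² term of a soft-margin SVM. A small numpy illustration of that linear special case (my own toy sketch, not the paper's general construction):

```python
import numpy as np

def hinge_gp_objective(w, b, X, y, lam):
    """Hinge loss + squared L2 gradient penalty for a LINEAR critic
    f(x) = w @ x + b.  Since grad_x f(x) = w for every x, the penalty
    E_x ||grad_x f(x)||^2 collapses to ||w||^2."""
    margins = np.maximum(0.0, 1.0 - y * (X @ w + b))
    grad_norms_sq = np.full(len(X), np.dot(w, w))  # identical for all x
    return margins.mean() + lam * grad_norms_sq.mean()

def soft_svm_objective(w, b, X, y, lam):
    """Standard soft-margin SVM objective: hinge loss + lam * ||w||^2."""
    margins = np.maximum(0.0, 1.0 - y * (X @ w + b))
    return margins.mean() + lam * np.dot(w, w)

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
y = np.sign(X[:, 0] + 0.1)               # toy ±1 labels
w, b = rng.standard_normal(3), 0.0
same = np.isclose(hinge_gp_objective(w, b, X, y, 0.1),
                  soft_svm_objective(w, b, X, y, 0.1))
```

For nonlinear critics the gradient varies with x, which is where the paper's more general analysis takes over.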
@jm_alexia
Alexia Jolicoeur-Martineau
5 years
Relativistic GAN is now my second most cited paper (at 21 citations)!!! 😸 It will soon have more citations than any of the work I did with huge teams of psychologists, MDs, geneticists which must have required hundred thousands of dollars in funding every year.
1
9
134
@jm_alexia
Alexia Jolicoeur-Martineau
5 months
Massive update for the ForestDiffusion🌲package! We incorporate the XGBoost data iterator, making scaling to massive datasets a breeze! You can now train a Forest-Flow model on your laptop with CPUs on the colossal Higgs dataset (N=11M, d=21 features)!
4
24
135
@jm_alexia
Alexia Jolicoeur-Martineau
5 months
GANs are not dead yet! 🧟‍♀️ By leveraging frozen pretrained classifiers with an R1 gradient-norm penalty in feature space and a relativistic loss, one can get high-quality image generation without data augmentation and using only tiny batch sizes (8).
5
21
134
@jm_alexia
Alexia Jolicoeur-Martineau
4 years
@lacymjohnson In Quebec, Canada, loan repayment only starts when you leave school. The loans have very low rates, and all the interest you pay is TAX DEDUCTIBLE. This means you effectively don't pay interest, since you get it back in your tax return. They only want you to pay back the loan. The US is crazy.
5
7
127
@jm_alexia
Alexia Jolicoeur-Martineau
2 years
📢🚨 Looking for work 📢🚨 I am now looking for a Research Scientist role locally or remotely worldwide 🌎! My research focuses on generative modeling (GANs, score-based diffusion) and my long-term goal is to achieve high-quality video generation.
4
21
133
@jm_alexia
Alexia Jolicoeur-Martineau
3 years
Submitted to Neurips and ArXiv! Finally done, it's time to relax.
5
3
132
@jm_alexia
Alexia Jolicoeur-Martineau
4 years
Trying out an idea I had for training a classifier with causality in mind. My second experiment shows a 25% improvement over my previous best GAN on this dataset by using a "causal" discriminator! Hopefully, the remaining experiments will also show improvements. 🙏
6
4
128
@jm_alexia
Alexia Jolicoeur-Martineau
6 months
Emu is an amazing new video generation model by Meta! 😻 We actually discovered the power of masking in video diffusion models last year (although Meta did not cite us)! Check out our work if you want to better understand the power of masking:
@_akhaliq
AK
6 months
Meta presents Emu Edit: Precise Image Editing via Recognition and Generation Tasks paper: blog: Instruction-based image editing holds immense potential for a variety of applications, as it enables users to perform any editing
9
113
462
1
20
123
@jm_alexia
Alexia Jolicoeur-Martineau
6 years
My suggestion is to create some kind of possibly international AI lab outside of academia with NO HIERARCHY, where anyone can propose a project and/or be a PI (without needing any fancy title). If people are interested in this idea, I’d love to talk more about it. 13/15
11
27
120
@jm_alexia
Alexia Jolicoeur-Martineau
5 years
I might have figured out how to make WGAN better! It wasn't even my intention; I just thought it would be mathematically simpler. The FID went from 30 (WGAN-GP or RaGAN) to 22 with this approach on CIFAR-10 using n_D=1, no spectral norm, no ResNet. The SOTA is 25.5 with SpectralGAN.
6
11
119