Emilien Dupont (@emidup)
Followers 2K · Following 186 · Media 31 · Statuses 93

PhD student in machine learning @oxcsml @UniofOxford 🐳 previously research intern @Apple, computational maths @Stanford, theoretical physics @imperialcollege

California, USA · Joined October 2017
Emilien Dupont (@emidup) · 3 months
We introduce 🌸✨ AlphaEvolve ✨🌸, an evolutionary coding agent using LLMs coupled with automatic evaluators, to tackle open scientific problems 🧑‍🔬 and optimize critical pieces of compute infra ⚙️.
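A minimal sketch of the LLM-plus-evaluator evolutionary loop the tweet describes; llm_propose and evaluate are hypothetical stand-ins, and this is the generic recipe rather than AlphaEvolve's actual implementation:

```python
import random

def evaluate(program: str) -> float:
    """Hypothetical automatic evaluator: run the candidate program in a
    sandbox against the task and return a score (higher is better)."""
    ...
    return 0.0

def llm_propose(parent: str) -> str:
    """Hypothetical stand-in for an LLM call that rewrites the parent
    program into a new candidate."""
    ...
    return parent

def evolve(seed: str, generations: int = 100, pop_size: int = 20) -> str:
    """Generic loop: keep a scored population of programs, pick a strong
    parent, ask the LLM for a variant, retain the best programs."""
    population = [(evaluate(seed), seed)]
    for _ in range(generations):
        sample = random.sample(population, min(3, len(population)))
        parent = max(sample)[1]  # tournament selection on score
        child = llm_propose(parent)
        population.append((evaluate(child), child))
        population = sorted(population, reverse=True)[:pop_size]
    return population[0][1]
```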
Emilien Dupont (@emidup) · 6 months
RT @jeanfrancois287: 📢New Paper on Reward Modelling📢. Ever wondered how to choose the best comparisons when building a preference dataset f….
Emilien Dupont (@emidup) · 1 year
RT @chrysalis_ai: Take a look at our latest work. One of my proudest works so far :).
Emilien Dupont (@emidup) · 1 year
RT @stepjamUK: 🚨Important update from our Robot Learning Lab in London. Following recent news, we’re moving on after a wonderful 2 years….
Emilien Dupont (@emidup) · 1 year
For technical details, please refer to the paper and code. We hope this is a step towards making neural codecs a practical reality ✨.
🔗 github.com/google-deepmind/c3_neural_compression
Emilien Dupont (@emidup) · 1 year
We introduce C3, which significantly improves COOL-CHIC compression performance, approaching the SOTA neural codec (MLIC+) while requiring 200x fewer FLOPs to decode. We also extend C3 to videos 🎥.
Emilien Dupont (@emidup) · 1 year
COOL-CHIC (Ladune et al., 2023) learns a decoder, a latent grid, and an entropy model *per image*. This dramatically improves compression at a very low decoding cost.
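A toy sketch of the per-image fitting described above: a latent grid and a small decoder are optimised jointly for one image. The sizes, the single-resolution grid, and the rate proxy are all simplifications of the real COOL-CHIC, which uses a multi-resolution latent pyramid and an autoregressive entropy model:

```python
import torch
import torch.nn as nn

H, W = 64, 64
image = torch.rand(1, 3, H, W)  # stand-in for the image being compressed

# Everything below is fit to this *single* image.
latents = torch.zeros(1, 8, H, W, requires_grad=True)  # per-image latent grid
decoder = nn.Sequential(  # small synthesis network -> cheap decoding
    nn.Conv2d(8, 16, 1), nn.ReLU(), nn.Conv2d(16, 3, 1),
)

opt = torch.optim.Adam([latents, *decoder.parameters()], lr=1e-2)
lam = 0.01  # rate-distortion trade-off
for _ in range(500):
    opt.zero_grad()
    distortion = ((decoder(latents) - image) ** 2).mean()
    rate = latents.abs().mean()  # crude proxy for the entropy model's bit cost
    (distortion + lam * rate).backward()
    opt.step()
# Decoding only evaluates the tiny decoder over the latent grid.
```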
Emilien Dupont (@emidup) · 1 year
COIN (Dupont et al., 2021) learns a small decoder *per image*, leading to low decoding cost. However, compression performance is weak.
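A minimal PyTorch sketch of the COIN idea: overfit a tiny coordinate network to a single image so that its (quantised) weights become the code. ReLU stands in for COIN's sine activations, and all sizes are illustrative:

```python
import torch
import torch.nn as nn

# Tiny per-image network: pixel coordinate (x, y) -> RGB.
inr = nn.Sequential(
    nn.Linear(2, 32), nn.ReLU(),  # COIN itself uses sine activations
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 3), nn.Sigmoid(),
)

image = torch.rand(64, 64, 3)  # stand-in for the image to compress
ys, xs = torch.meshgrid(
    torch.linspace(0, 1, 64), torch.linspace(0, 1, 64), indexing="ij"
)
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
targets = image.reshape(-1, 3)

opt = torch.optim.Adam(inr.parameters(), lr=1e-3)
for _ in range(1000):  # overfit this one image
    opt.zero_grad()
    loss = ((inr(coords) - targets) ** 2).mean()
    loss.backward()
    opt.step()
# Decoding is one forward pass of a tiny network per pixel: cheap, but the
# rate-distortion trade-off is weak, as the tweet notes.
```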
Emilien Dupont (@emidup) · 1 year
Traditional neural compression models are based on autoencoders trained on datasets of natural images or videos. While these achieve good compression, the decoder is often large as it needs to generalize to arbitrary images, leading to expensive decoding.
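For contrast, a minimal sketch of the dataset-trained autoencoder setup described above; layer widths are illustrative, and a real codec would entropy-code the rounded latents into a bitstream:

```python
import torch
import torch.nn as nn

class AutoencoderCodec(nn.Module):
    """One encoder/decoder pair shared across a whole dataset. The decoder
    must reconstruct arbitrary images, which is what makes it large and
    expensive at decode time."""

    def __init__(self, latent_channels: int = 192):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 128, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(128, latent_channels, 5, stride=2, padding=2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_channels, 128, 5, stride=2,
                               padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 3, 5, stride=2,
                               padding=2, output_padding=1),
        )

    def forward(self, x):
        y = self.encoder(x)
        # Straight-through rounding: quantise for the bitstream while keeping
        # gradients flowing during training.
        y_hat = y + (torch.round(y) - y).detach()
        return self.decoder(y_hat)
```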
Emilien Dupont (@emidup) · 1 year
We build neural codecs from a *single* image or video, achieving compression performance close to SOTA models trained on large datasets, while requiring ~100x fewer FLOPs for decoding ⚡ #CVPR2024.
Emilien Dupont (@emidup) · 2 years
We present #FunSearch in @Nature today - a system combining LLMs with evolutionary search to generate new discoveries in math and computer science! 👩‍🔬🔬✨.
Quoting Google DeepMind (@GoogleDeepMind) · 2 years
Introducing FunSearch in @Nature: a method using large language models to search for new solutions in mathematics & computer science 🔍. It pairs the creativity of an LLM with an automated evaluator to guard against hallucinations and incorrect ideas. 🧵
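Complementing the evolutionary loop sketched earlier in this feed, a hedged sketch of the evaluator-as-guard idea from the quoted tweet: only candidates that actually execute and score are kept. The function name "priority", the scoring rule, and llm_sample are assumptions for illustration, not FunSearch's real interface:

```python
def run_candidate(source: str, test_inputs) -> float | None:
    """Hypothetical automated evaluator: execute the LLM-written program and
    score it, returning None on any failure. This is the guard against
    hallucinated or incorrect code."""
    namespace = {}
    try:
        exec(source, namespace)        # real systems sandbox this step
        fn = namespace["priority"]     # assumed name of the evolved function
        return float(sum(fn(x) for x in test_inputs))  # assumed scoring rule
    except Exception:
        return None

def funsearch_step(population, llm_sample, test_inputs):
    """One step of an LLM + evaluator search (a sketch, not DeepMind's code):
    prompt the LLM with high-scoring programs, keep only candidates the
    evaluator can actually run."""
    best = sorted(population, reverse=True)[:2]
    prompt = "\n\n".join(src for _, src in best)
    candidate = llm_sample(prompt)     # hypothetical LLM call
    score = run_candidate(candidate, test_inputs)
    if score is not None:
        population.append((score, candidate))
    return population
```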
Emilien Dupont (@emidup) · 2 years
RT @jinxu06: We construct neural processes by iteratively transforming a simple stochastic process into an expressive one, similar to flow/….
Emilien Dupont (@emidup) · 2 years
RT @itsbautistam: Introducing Manifold Diffusion Fields (MDF), our new work on learning generative models over fields defined on curved geo….
Emilien Dupont (@emidup) · 2 years
RT @jinxu06: We introduce a new class of stochastic process models, which are constructed by stacking sequences of neural parameterised Mar….
Emilien Dupont (@emidup) · 2 years
RT @schwarzjn_: Very happy to announce that our latest paper on Neural data compression with INRs, Meta Learning & Sparse Subnetwork select….
Emilien Dupont (@emidup) · 2 years
RT @bobby_he: Can deep transformers be trained without skip connections or normalisation layers? Our ICLR 2023 paper shows you how, using….
Emilien Dupont (@emidup) · 2 years
RT @hyunjik11: Previously we had introduced *functa*, a framework for representing data as neural functions (aka neural fields, INRs) and d….