
Isak Falk
@isakfalk_
Followers: 153 · Following: 380 · Media: 1 · Statuses: 101
RT @bsimsek13: 📢 I'm on the faculty job market this year! My research explores the foundations of deep learning and analyzes learning and…
RT @bsimsek13: The landscape complexity analysis is in action at #ICML2024! Results of my thesis predicted a mild overparameterization (OP…
RT @bsimsek13: Excited to share a blog post on our recent work on neural network distillation.
bsimsek.com
It is important to understand how large models represent knowledge to make them efficient and safe. We study a toy model of neural nets that exhibits non-linear dynamics and phase transition…
Thanks to @PilarCossio2 for letting me present MEKRR at CCM / CCB at @FlatironInst yesterday, it was great!
RT @recursecenter: Hi, folks! We're back for a moment to share that we're hiring an Office & Operations Assistant. It's a full-time, onsit…
RT @bsimsek13: I used to solve many geometry problems with trigonometry when I was an IMO contestant. It usually worked quite well. Excitin…
RT @OrdonezApraez: Excited to see interest in our recent work. Please visit our project website after the holidays, as we will release all c…
RT @pie_novelli: Heya, our team from @IITalk got second place in this year's @OpenCatalyst challenge!
Currently presenting MEKRR at poster #107! Come chat about GNNs, mean embeddings and physics!
This was a great experience where we got to combine modern GNNs with kernels and apply it to potential energy prediction. Reach out if you want to have a chat about our paper!
How can we efficiently learn ML potentials for atomistic systems with few and expensive reference data? Here we explored a transfer learning approach via a combination of pre-trained graph neural networks & kernel ridge regression. #NeurIPS2023 #ML4science
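A rough sketch of the transfer-learning recipe described above, going only on the tweet itself: featurize each atomistic configuration with a frozen pre-trained GNN, pool the per-atom features into a single mean embedding per system, and fit kernel ridge regression on those embeddings to predict energies. Every name below is an illustrative stand-in, not the paper's actual code; the random arrays only mimic GNN features and reference energies.

import numpy as np

def mean_embedding(atom_features):
    # Average per-atom feature vectors (n_atoms x d) into one d-dimensional
    # vector per configuration (the "mean embedding").
    return atom_features.mean(axis=0)

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel between rows of X (n x d) and rows of Y (m x d).
    sq = (X ** 2).sum(1)[:, None] + (Y ** 2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def fit_krr(X, y, lam=1e-3, gamma=1.0):
    # Kernel ridge regression: solve (K + lam * n * I) alpha = y.
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def predict_krr(X_train, alpha, X_test, gamma=1.0):
    # Predicted energies for new configurations.
    return rbf_kernel(X_test, X_train, gamma) @ alpha

# Toy usage: random stand-ins for per-atom GNN features and reference energies.
rng = np.random.default_rng(0)
systems = [rng.normal(size=(rng.integers(5, 15), 32)) for _ in range(20)]
X = np.stack([mean_embedding(f) for f in systems])   # (20, 32) system embeddings
y = rng.normal(size=20)                               # reference energies
alpha = fit_krr(X, y)
print(predict_krr(X, alpha, X[:3]))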
RT @bsimsek13: During the last year of my Ph.D., I studied learning in neural networks with few neurons, and more generally in under-parame…
arxiv.org
Any continuous function $f^*$ can be approximated arbitrarily well by a neural network with sufficiently many neurons $k$. We consider the case when $f^*$ itself is a neural network with one...
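As a minimal sketch of the setting the abstract above alludes to, assuming the standard one-hidden-layer teacher-student formulation (the paper's exact notation may differ): the teacher $f^*$ has $n$ neurons, the student $f_\theta$ has $k$ neurons, and the under-parameterized regime is $k < n$.

\[
  f^*(x) = \sum_{i=1}^{n} a_i^* \,\sigma(\langle w_i^*, x \rangle), \qquad
  f_\theta(x) = \sum_{j=1}^{k} a_j \,\sigma(\langle w_j, x \rangle), \qquad
  \mathcal{L}(\theta) = \mathbb{E}_x\!\big[(f_\theta(x) - f^*(x))^2\big].
\]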
My girlfriend just released her PhD thesis into the wild!
After getting much-needed sunlight and Mediterranean vibes after successfully defending my Ph.D., I am ready to share my thesis, entitled "A Theory of Finite-Width Neural Networks: Generalization, Scaling Laws, and the Loss Landscape".
RT @yanndubs: I looked a little into the Gzip OOD results and there seems to be another big problem: train-test overlap. E.g. DengueFilipi…