David Ruhe
@djjruhe
Followers 1K · Following 1K · Media 19 · Statuses 161
Research Scientist @GoogleDeepMind, PhD Student @AmlabUva, @ai4science_lab @UvA_Amsterdam. Previously @MSFTResearch @FlatironInst
Joined December 2012
Together with the amazing @jo_brandstetter and Patrick Forré, we present Clifford Group Equivariant Neural Networks: a new E(n)-equivariant, steerable neural architecture that operates on multivectors and respects rotations, reflections, and more symmetries!
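A minimal sketch (in NumPy, not the paper's code) of the algebraic fact these networks build on: the geometric product of multivectors commutes with rotations, so layers composed from it are equivariant by construction. Shown in the 2D algebra Cl(2,0) with basis [1, e1, e2, e12]; all names are illustrative.

```python
import numpy as np

def geometric_product(a, b):
    """Geometric product of two Cl(2,0) multivectors [s, e1, e2, e12]."""
    return np.array([
        a[0]*b[0] + a[1]*b[1] + a[2]*b[2] - a[3]*b[3],  # scalar part
        a[0]*b[1] + a[1]*b[0] - a[2]*b[3] + a[3]*b[2],  # e1 part
        a[0]*b[2] + a[2]*b[0] + a[1]*b[3] - a[3]*b[1],  # e2 part
        a[0]*b[3] + a[3]*b[0] + a[1]*b[2] - a[2]*b[1],  # e12 part
    ])

def rotate(m, theta):
    """A rotation acts on the vector part; scalar and pseudoscalar stay fixed."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([m[0], c*m[1] - s*m[2], s*m[1] + c*m[2], m[3]])

rng = np.random.default_rng(0)
a, b, theta = rng.normal(size=4), rng.normal(size=4), 0.7
# Equivariance: rotating the inputs and then multiplying equals
# multiplying and then rotating the output.
assert np.allclose(
    geometric_product(rotate(a, theta), rotate(b, theta)),
    rotate(geometric_product(a, b), theta),
)
```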
Clifford Algebra Neural Networks are undeservedly dismissed for being too slow, but they don't have to be! 🚀Introducing **flash-clifford**: a hardware-efficient implementation of Clifford Algebra NNs in Triton, featuring the fastest equivariant primitives that scale.
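To see why a fused kernel helps, here is a hedged sketch of the reference computation such kernels replace: the geometric product as a bilinear map x_k = Σ_ij C[i,j,k] a_i b_j with a mostly-zero structure-constants tensor C. A dense einsum like the one below wastes bandwidth on zeros; that sparsity is what a hand-written Triton kernel can exploit. Names here are illustrative, not flash-clifford's API.

```python
import numpy as np

def cl2_structure_constants():
    """C[i, j, k]: the product e_i e_j contributes sign C[i, j, k] to blade k,
    for Cl(2,0) with basis [1, e1, e2, e12]."""
    C = np.zeros((4, 4, 4))
    table = {  # (i, j) -> (k, sign), read off the multiplication table
        (0, 0): (0, 1), (0, 1): (1, 1), (0, 2): (2, 1), (0, 3): (3, 1),
        (1, 0): (1, 1), (1, 1): (0, 1), (1, 2): (3, 1), (1, 3): (2, 1),
        (2, 0): (2, 1), (2, 1): (3, -1), (2, 2): (0, 1), (2, 3): (1, -1),
        (3, 0): (3, 1), (3, 1): (2, -1), (3, 2): (1, 1), (3, 3): (0, -1),
    }
    for (i, j), (k, s) in table.items():
        C[i, j, k] = s
    return C

C = cl2_structure_constants()        # mostly zeros: 16 nonzeros out of 64
rng = np.random.default_rng(0)
a = rng.normal(size=(1024, 4))       # a batch of multivectors
b = rng.normal(size=(1024, 4))
out = np.einsum('ijk,ni,nj->nk', C, a, b)   # dense reference computation
```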
Really awesome to see the full potential of real-time generative modeling being realized! Helping to enable this has been a driving goal of our research in efficient generative models for a long time!
What if you could not only watch a generated video, but explore it too? 🌐 Genie 3 is our groundbreaking world model that creates interactive, playable environments from a single text prompt. From photorealistic landscapes to fantasy realms, the possibilities are endless. 🧵
I expressed more-or-less the same opinion in my first lecture yesterday :) _Don't_ think of GDL as a mechanism to exploit symmetries of data which has a "spatial" geometry. Think of it as a language to express neural network constraints!
One of my favorite classes was Geometric Methods for Machine Learning with the incredible @mweber_PU! The biggest shift in my mind after taking her class is thinking about DL architecture as a means of preserving certain properties (permutation equivariance, homophily, etc.)
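A toy illustration of the "constraints" view, assuming a DeepSets-style set layer (not an example from the class): the layer is permutation-equivariant by construction, because each element only interacts with an order-independent aggregate.

```python
import numpy as np

def equivariant_layer(X, W_self, W_agg):
    """X: (n, d) set of n elements. Permuting the rows of X permutes the
    output rows identically, because the mean is order-independent."""
    return X @ W_self + X.mean(axis=0, keepdims=True) @ W_agg

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
W1, W2 = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
perm = rng.permutation(5)
# The constraint holds by construction, for any weights:
assert np.allclose(equivariant_layer(X[perm], W1, W2),
                   equivariant_layer(X, W1, W2)[perm])
```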
I’m hiring a postdoc to work with me on exciting projects in generative modelling (AI) and/or uncertainty quantification. You'll be part of a great team, embedded in @AmlabUva and the UvA-Bosch Delta Lab. Apply here: https://t.co/nJ2phFRqIr RT appreciated! #ML #GenAI
"There's a STAResNet waiting in the sky..." ⭐STAResNet⭐, or "on the importance of choosing the right algebra in #GA networks", is out now on ArXiv and live tomorrow at #AGACSE 2024 in Amsterdam! paper: https://t.co/cgSB9mNT14 code: https://t.co/MbDNVyAMq1 more below:
researchgate.net: "We introduce STAResNet, a ResNet architecture in Spacetime Algebra (STA) to solve Maxwell's partial differential equations (PDEs)."
📰 blogpost: https://t.co/9OWML2JPkX 🕹️ google colab: https://t.co/vBxnTWhALp I tried to make the blog post more accessible than the paper and added a lot of supporting visualizations. Please check it out if you are curious about spacetime-equivariant CNNs 🚀
Excited to introduce Clifford-Steerable CNNs: a framework that expands equivariant CNNs to pseudo-Euclidean groups, including the Poincaré group - the group of isometries of spacetime! Joint work w/ @djjruhe, @maurice_weiler, @__alucic, @jo_brandstetter, and Patrick Forré 1/12
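A small sketch of the symmetry these networks target: pseudo-Euclidean isometries. Just as a rotation preserves the Euclidean inner product, a Lorentz boost preserves the Minkowski one; the snippet below (illustrative only, not the paper's code) checks this in 1+1D spacetime.

```python
import numpy as np

eta = np.diag([1.0, -1.0])   # Minkowski metric, signature (+, -)

def boost(rapidity):
    """Lorentz boost: the hyperbolic analogue of a rotation matrix."""
    ch, sh = np.cosh(rapidity), np.sinh(rapidity)
    return np.array([[ch, sh], [sh, ch]])

rng = np.random.default_rng(0)
x, y, L = rng.normal(size=2), rng.normal(size=2), boost(0.9)
# The boost is an isometry of spacetime: <Lx, Ly> = <x, y>.
assert np.isclose((L @ x) @ eta @ (L @ y), x @ eta @ y)
```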
Exciting News from Chatbot Arena! @GoogleDeepMind's new Gemini 1.5 Pro (Experimental 0801) has been tested in Arena for the past week, gathering over 12K community votes. For the first time, Google Gemini has claimed the #1 spot, surpassing GPT-4o/Claude-3.5 with an impressive
Today, we are making an experimental version (0801) of Gemini 1.5 Pro available for early testing and feedback in Google AI Studio and the Gemini API. Try it out and let us know what you think! https://t.co/fBrh6UGcJz
Had a great time presenting our latest paper on hybrid modeling of broad-front bird migration at the @AI_for_Science workshop at #ICML24 We can now make detailed and interpretable forecasts of departure, flight, and landing at the continental scale! 📰➡️ https://t.co/iF7t0rHv1V
Thrilled to receive the outstanding paper award together with @Mangal_Prakash_ for our work on SE(3)-Hyena https://t.co/qAqNTxM3eu !🤘 Go long-convolutions!
Look at these stars 💫⭐️🌟! @a_kzna and @bob_smiley_ at the workshop, sharing how to construct an optimizer if you are a Bayesian!
Come see our poster at @GRaM_org_, Schubert 1-3, today at 16:00-17:00 🚀
🏄♂️Long-convolutional models go equivariant! Check our new work on SE(3)-Hyena for scalable equivariant learning. Can equivariantly process up to 3.5M tokens with global (aka all-to-all) context on a single A10 GPU. To appear at @GRaM_workshop. Paper: https://t.co/HZOwdFxcj8 1/5
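The scalability claim rests on a standard primitive that is easy to sketch: a convolution with a sequence-long filter evaluated in O(N log N) via the FFT, which is what makes global all-to-all context affordable at millions of tokens. The SE(3)-equivariance machinery is omitted here; this is only the long-convolution trick.

```python
import numpy as np

def long_conv(u, k):
    """Causal convolution of a length-n signal u with a length-n filter k,
    computed with FFTs (zero-padded to avoid circular wrap-around)."""
    n = u.shape[-1]
    U = np.fft.rfft(u, n=2 * n)
    K = np.fft.rfft(k, n=2 * n)
    return np.fft.irfft(U * K, n=2 * n)[..., :n]

rng = np.random.default_rng(0)
u, k = rng.normal(size=4096), rng.normal(size=4096)
# Matches the O(N^2) direct convolution at O(N log N) cost.
assert np.allclose(long_conv(u, k), np.convolve(u, k)[:4096])
```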
🐊 LaB-GATr is a transformer neural network designed for large-scale biomedical surface meshes. It is inspired by GATr (pronounced gator 🐊), the Geometric Algebra Transformer, which cleverly represents inputs and states using projective geometric algebra. A much-needed tutorial!
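A toy sketch of the representational idea GATr borrows from projective geometric algebra: embed points homogeneously so that translations, not just rotations, act linearly. Full PGA multivectors are 16-dimensional; the snippet below shows only the homogeneous-point special case and is not the library's code.

```python
import numpy as np

def embed_point(p):
    """Homogeneous embedding of a 3D point (the grade-3 'point' of PGA,
    up to basis bookkeeping)."""
    return np.array([*p, 1.0])

def translation(t):
    """A translation becomes a linear map on the homogeneous embedding."""
    T = np.eye(4)
    T[:3, 3] = t
    return T

p, t = np.array([1.0, 2.0, 3.0]), np.array([0.5, -1.0, 2.0])
assert np.allclose(translation(t) @ embed_point(p), embed_point(p + t))
```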
Hello #GRaM enthusiasts, Sadly, our previous Twitter account has been compromised, and we won't be able to recover it any time soon. Hence, we have moved to our new handle @GRaM_org_. Kindly unfollow @GRaM_workshop and follow us at @GRaM_org_ for more information. @icmlconf
Looking forward to a full week with @aabi_org and @icmlconf starting this Sunday! 🌟I am co-organising AABI on July 21 https://t.co/AGTLzRUCzB ✨Presenting a paper (Neural Diffusion Models), and several workshop contributions during #ICML2024. Reach out if you want to chat!
📢 In our new @UncertaintyInAI paper, we do neural optimal transport with costs defined by a Lagrangian (e.g., for physical knowledge, constraints, and geodesics) Paper: https://t.co/C4d2f3e9Db JAX Code: https://t.co/sDigFva0kd (w/ A. Pooladian, C. Domingo-Enrich, @RickyTQChen)
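A hedged sketch of the cost being described, not the paper's method: the transport cost c(x, y) is the least action of a Lagrangian over paths from x to y, here L(x, v) = ½|v|² + U(x) with an illustrative obstacle potential U, minimized directly on a discretized path with SciPy.

```python
import numpy as np
from scipy.optimize import minimize

def U(x):
    """Illustrative soft obstacle at the origin that paths should avoid."""
    return 10.0 * np.exp(-np.sum(x**2, axis=-1) / 0.1)

def lagrangian_cost(x, y, n=32):
    """c(x, y) = min over discretized paths of sum_t dt * (0.5|v|^2 + U)."""
    dt = 1.0 / n
    def action(z):
        path = np.vstack([x, z.reshape(n - 1, 2), y])   # endpoints fixed
        v = np.diff(path, axis=0) / dt
        return dt * np.sum(0.5 * np.sum(v**2, axis=-1) + U(path[:-1]))
    z0 = np.linspace(x, y, n + 1)[1:-1].ravel()         # straight-line init
    return minimize(action, z0, method='L-BFGS-B').fun

x, y = np.array([-1.0, 0.0]), np.array([1.0, 0.0])
# Exceeds the straight-line kinetic cost of 2.0: the optimal path
# bends around the obstacle instead of paying the potential.
print(lagrangian_cost(x, y))
```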
Coming to devour your 3D anatomies: 🥼LaB-GATr🐊, geometric algebra transformers for large biomedical surface and volume meshes. Will be presented @MICCAI_Society and @GRaM_workshop. Joint work with @ImreBaris, @pimdehaan and @jelmerwolterink. (code) https://t.co/QEcC0486wg
Learning to sample: time-reversibility meets Metropolis-Hastings. Check out our #ICML2024 paper, where we propose an objective that upper-bounds the TV distance between the stationary distribution of a parametric Markov kernel and the distribution you want to sample from! (1/6)
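For readers unfamiliar with the second ingredient, here is a minimal Metropolis-Hastings sketch (the paper's learned, parametric kernel and its TV-bounding objective are not reproduced): the accept/reject step enforces detailed balance, making the target the chain's stationary distribution.

```python
import numpy as np

def log_target(x):
    """Unnormalized log-density: a 1D mixture of two Gaussians."""
    return np.logaddexp(-0.5 * (x - 2.0)**2, -0.5 * (x + 2.0)**2)

def mh_chain(n_steps, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x, xs = 0.0, []
    for _ in range(n_steps):
        y = x + step * rng.normal()        # symmetric random-walk proposal
        # Accept with prob min(1, p(y)/p(x)); this enforces detailed
        # balance, so p is the stationary distribution of the chain.
        if np.log(rng.uniform()) < log_target(y) - log_target(x):
            x = y
        xs.append(x)
    return np.array(xs)

samples = mh_chain(50_000)
print(samples.mean(), samples.std())       # roughly 0 mean, bimodal spread
```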