In this new paper, led by @giovannimarchet, we present a theoretical result that provides guarantees on the concrete representational structure expected to emerge in a learning system
Glad to announce our new preprint with C. Hillar, D. Kragic, and @naturecomputes!
We show via group theory how Fourier features emerge in invariant neural networks -- a step towards a mathematical understanding of representational universality. 🧵1/n
This figure summarizes the landscape of topological neural network architectures on hypergraphs, simplicial, cellular, & combinatorial complexes in a unified graphical notation. Check out our paper and full repository of TNN equations for more ✨
Topological Deep Learning is an immensely powerful and fast-emerging field. Our new literature review is out, and here’s why I’m very excited about it 🧵1/5
Bispectral Neural Networks go to #ICLR2023!
In this work, we present a new neural network architecture capable of learning unknown groups purely from the symmetries implicit in data, with @cashewmake2, Bruno Olshausen, and Christopher Hillar
1/17
🎉 If you're interested in working on understanding visual representations in deep networks through the lens of symmetry & geometry, the @geometric_intel lab is recruiting!
📣 The Geometric Intelligence lab receives a $1.2M NSF grant to work on Lie Group Representation Learning for Vision
Led by lab members Christian Shewmake and Sophia Sanborn, the grant will support research on vision models that incorporate hierarchical learnable symmetries
Our workshop on Symmetry and Geometry in Neural Representations has been accepted to @NeurIPSConf 2022!
We've put together a lineup of incredible speakers and panelists from 🧠 neuroscience, 🤖 geometric deep learning, and 🌐 geometric statistics.
If you find this interesting, check out our ICLR 2023 paper, in which we demonstrate that you can *learn* group-Fourier transforms by learning to be invariant to transformations in data:
Seeking: Creative and ambitious computational neuroscience / ML PhDs interested in building next-generation brain-computer interfaces. My team is hiring research scientists. Reach out at sophias@science.xyz for more info.
About a month ago, I joined to work on fundamental problems in neural coding & build next-generation high-dimensional brain-computer interfaces. The team is amazing and gives new meaning to the term "full-stack"
Looking forward to presenting our work at @NeurIPSConf in December!
Here we use the triple correlation on groups to define a group-invariant layer for group-equivariant networks that improves both accuracy and robustness. Stay tuned for the camera-ready release!
w/ @ninamiolane
On this episode of the @twimlai podcast, I talk about some of my favorite topics - universality, compression, group theory - and how mathematics reveals principles of neural representation that transcend substrate
Today we're joined by Sophia Sanborn (@naturecomputes) from @UCSB to discuss the universality between neural representations and deep neural networks, along with her #ICLR2023 paper on Bispectral Neural Networks.
🎧🎥 Check out the full episode at
🦠🤖 Bio/ML grad students: The bio team is seeking summer interns with a background in ML/computer vision and an interest in biomedical applications. See more details below & DM if interested 👇
The phenomenon of universality is a fascinating one:
Why do certain features consistently emerge across different neural networks (incl. the brain!) trained on different datasets?
In our forthcoming paper, we provide one answer, grounded in group representation theory
Glad to share that our paper 'Harmonics of Learning' has been accepted at the Conference on Learning Theory (COLT 2024)!
This is joint work with C. Hillar, D. Kragic, and S. Sanborn (@naturecomputes)
Meet Sophia (@naturecomputes) from our lab 🤩
Sophia studies fascinating geometric properties of neural representations @neur_reps @ucsbcs
She was just awarded the prestigious PIMS-Simons fellowship for her outstanding research in mathematical sciences! 🏆
📄 Call for papers
🌐 The ICML Workshop on Topology, Algebra, and Geometry in Machine Learning is accepting 8 page papers that incorporate higher mathematics into ML
📅 Deadline: May 8th AOE
👇 More details below
✨ Consider submitting your work!
@icmlconf
#ICML2023
The submission site for TAG-ML at ICML (@icmlconf) is live! Please see the attached call for papers and consider submitting your work! Looking forward to your submissions and another fantastic event with terrific keynotes (to be revealed soon!) 🙂
The brain🧠uses a distributed code - where variables are represented by the activity of a large number of neurons - to store information about the outside world. In this sense, our brains compute *morphisms* (structure-preserving maps) from the physical environment and ... 1/6
We are thrilled to announce that TAG-ML has been accepted for the 2nd year as an @icmlconf workshop! A call for papers will be forthcoming, and we look forward to receiving your submissions and seeing folks in Hawaii in July for ICML 2023!
#TAGML
Our latest pre-print develops a method for automating interpretability research in vision models 🤖👁️
Our method is grounded in human perceptual judgments, but is fully scalable 👥
Preprint alert: Superposition in CNNs and Brains
Context: Recent work on LLMs from @AnthropicAI showed how sparse coding can recover interpretable representations in transformers.
TL;DR: Concurrently, we have been doing similar analyses in vision models and brains.
Curious about how Riemannian geometry and VAEs can be used to understand neural population codes? Check out our latest pre-print, led by Francisco Acosta (@hopfbifurcator), with myself, @kdaoduc, @manusmad, and @ninamiolane
Check out my first (!) first-author preprint, with @naturecomputes, @kdaoduc, @manusmad, and @ninamiolane:
We propose a method for calculating the curvature of neural manifolds, using deep generative models and Riemannian geometry.
1/7
A very interesting and prescient quote from Mikhail Gromov in @doristsao's @neur_reps talk on constructing visual representations through local diffeomorphisms
I am recruiting postdocs for 2024!😃
3 fellowships available:
🌐Geometric and Topological Deep Learning
🧠Foundation Models for Neuroscience
⚕️AI for Women’s Brain Health.
Interested? Apply here!
This is what our campus looks like in the winter 🌴😜
Are there principles of representation learning that transcend model architecture and substrate?
Looking forward to discussing this and more in the new 🔵 @unireps 🔴 workshop at @NeurIPSConf
We're excited to announce the first edition of 🔵🔴 UniReps: the Workshop on Unifying Representations in Neural Models! 🧠
To be held at @NeurIPSConf 2023!
SUBMISSION DEADLINE: 4 October
Check out our Call for Papers, lineup of speakers and schedule at:
@CimesaLjubica I highly recommend Principles of Neural Design by Sterling & Laughlin: a beautiful book that reverse-engineers the structure of neural systems from the physical / environmental constraints faced by organisms
NeurReps is this Saturday, Dec 16, in Ballroom A/B at @NeurIPSConf!
Come out for an exciting program of talks, posters, and discussions at the intersection of deep learning, higher mathematics, and computational neuroscience
Thrilled to announce the first Topological Deep Learning Challenge, hosted at @icmlconf 2023 by @TAGinDS 🎉🍩 Build a topological neural network with the tools of TopoModelX, and get published! Help us spread the word 📣 Challenge website:
🧐 Getting into Topological Deep Learning?
👩💻 Try PyT, a new and comprehensive platform for deep learning on topological domains
🪢 Built on top of PyTorch, PyT libraries standardize diverse models and common operations into a single unifying framework
Check out the proceedings of the 2022 NeurIPS Workshop on Symmetry and Geometry in Neural Representations, now online with PMLR ✨ @NeurIPSConf @neur_reps
The proceedings of the 2022 NeurReps Workshop are now online!
PMLR Volume 197: NeurIPS Workshop on Symmetry and Geometry in Neural Representations features 21 fantastic papers from our contributing authors
View online here:
@NeurIPSConf NeurReps is starting today, Dec 16th, at 8:55 am in Ballrooms A/B!
Join us for a lively day of talks, panel and posters discussing symmetry and geometry in neural representations 🧠🤖
Harmonic analysis
The common musical intervals fall between successive harmonics, whose wavelengths are simple fractions of the fundamental:
Octave - between wavelengths 1 and 1/2
Fifth - between 1/2 and 1/3
Fourth - between 1/3 and 1/4
Discovered by Pythagoras around 500 BC
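In frequency terms, the interval between the harmonics with wavelengths 1/n and 1/(n+1) is the ratio (n+1)/n; a minimal sketch verifying the three classic ratios:

```python
from fractions import Fraction

# Successive harmonics of a fundamental have wavelengths 1, 1/2, 1/3, 1/4, ...
# and frequencies n * f0, so the interval between harmonics n and n+1
# is the frequency ratio (n + 1) / n.
intervals = {n: Fraction(n + 1, n) for n in (1, 2, 3)}

assert intervals[1] == Fraction(2, 1)  # octave
assert intervals[2] == Fraction(3, 2)  # fifth
assert intervals[3] == Fraction(4, 3)  # fourth
```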
We're also happy to share our 'tutorial on generative models', which @marineschimel, @davindi09, and I generated for this workshop:
It consists of three notebooks giving an overview of some of the models discussed in the workshop!
At NeurIPS this week. Come find me at the following places:
- Thu, Poster Session 5, 10:45: #1016 on Invariance in G-CNNs
- Fri, @unireps Workshop, 9:00 - 9:30: Talk on Symmetry & Universality
- Sat, @neur_reps Workshop, 11:30 - 12:00: Moderating the panel / organizing all day ✨
Check out our work featured in @patrickmineault's excellent NeuroAI paper roundup: polysemanticity & mechanistic interpretability in deep vision models & the visual cortex, led by @klindt_david 👁️🧠
I've always questioned: when do you really **need** hypergraph GNNs?
For modelling biological interactions beyond pairs, such as gene expression: HYFA processes gene expression values for patients collected from multiple organs.
Paper:
⏰ Just two weeks to the deadline for the ICML Workshop on Topology, Algebra, and Geometry in ML!
🗓️ Deadline: May 8 AOE
🔗 Click here for submission instructions and more details
🧑🏫 We look forward to seeing your work!
I'm at @NeurIPSConf all week and looking to meet as many new people as possible
If you're interested in the intersection of geometry, deep learning, and neuroscience, let's chat
Shoot me a DM or come by our NeurReps Workshop on Saturday ✨ @neur_reps
We think this provides theoretical backing for the emergence of Fourier features / irreps in the nice recent papers by @NeelNanda5, @bilalchughtai_, et al. in the context of learning modular arithmetic / group composition:
Human behavior is hierarchically structured. But what determines *which* hierarchies people use? In a preprint, we run an experiment where people create programs that correspond to hierarchies, finding that people prefer structures with more reuse.
1/7
Check out ✨awesome-neural-geometry✨ - a curated selection of resources and research for those wishing to dive into this interdisciplinary area.
We link our favorite 📚 math books, 📜 blogposts, 👩🏫online lectures, 📄 papers, and 🧠 more.
We are beginning to look towards
#NeurIPS2023
& we want to incorporate your ideas!
For the 2023 NeurReps Workshop:
🔣 What topics do you want to see?
👩🏫 Who do you want to hear from?
👥 Have ideas for interactive programming?
Leave comments & submit suggestions below 👇
The deadline for NeurReps has been extended to October 4th AOE 🌟
We have two tracks: Extended Abstracts (4 pg, non-archival) and Proceedings (9 pg, published in PMLR)
Last year saw a fantastic batch of submissions. We look forward to seeing this year's creative work! 🧠
📢 Deadline extension
The NeurReps submission deadline has been extended to October 4th AOE.
If you're working in geometric deep learning, topological data analysis, applied geometry, computational neuroscience, or somewhere in the intersection, send us your work!
✈️Amsterdam bound. Thrilled to be participating in the Geometry and Shape Analysis for Neuroscience Minisymposium at
#SIAMCSE23
this Monday. Please reach out if you are attending or in the area. Looking forward to great conversations
Cayley tables contain all information about the structure of a finite group. Our work is the first to show that a group's Cayley table can be learned purely from observing transformations. This is an exciting result for machine learning and computational mathematics alike!
8/17
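For intuition, here's what such a table looks like for a small known group - the cyclic group Z_4, chosen purely as an illustration (not the paper's experimental setup):

```python
import numpy as np

# Cayley table of Z_4: entry (i, j) is the group product i + j (mod 4)
n = 4
cayley = (np.arange(n)[:, None] + np.arange(n)[None, :]) % n

# Every row and every column is a permutation of the elements -
# the Latin-square property that encodes the group's structure
for line in np.vstack([cayley, cayley.T]):
    assert sorted(line) == list(range(n))
```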
In the @neur_reps Slack community, we've been compiling our favorite resources on geometry for neuroscience and deep learning. Check it out on GitHub and contribute yours!
Our call for papers is out!
Seeking your latest work on GDL, geometric statistics, and geometric/topological methods for neuroscience.
Hope to see you in New Orleans 💫
📣 Announcing 📣
💥 The NeurIPS 2022 Workshop on Symmetry and Geometry in Neural Representations💥
(NeurReps 😉)
We're bringing a killer lineup to NeurIPS this year — spanning GDL, applied geometry, and neuroscience.
See our call for papers below 👇
We are looking for reviewers for the 2nd Annual Workshop on Topology, Algebra, and Geometry in Machine Learning (#TAGML) at @icmlconf. If you are willing to help us out, please sign up via the following survey:
The TAG-ML proceedings are now online. Check out this collection of excellent papers at the intersection of topology, algebra, geometry, and machine learning! 🌐
Symmetry has the potential to serve as an organizing principle for theories of neural coding, much as it did for 20th-century physics. Thrilled to be a part of this exciting workshop organized by @simoneazeglio and @ari_dibe.
It's official! @BernsteinNeuro accepted our workshop proposal (with @ari_dibe) 🧠
In "Symmetry, Invariance and Neural Representations" we will explore the intimate relationship between the physical world and neural representations.
Join our speakers in this experience!
Fantastic and fascinating new work from the @atolias lab. I've been wanting to see an experiment run like this since the original Circuits thread by @ch402 et al. 👏
@patrickmineault If you're interested: we applied a similar approach to both deep image models and data from the visual cortex and found some interesting results, in line with the concept of "mixed selectivity" / "polysemanticity"
Our results offer an explanation for the emergence of certain universal features across both artificial and biological learning systems, including the localized Fourier features common to vision models and the visual cortex
An exciting program on geometry and dynamics in the brain, put together by @adelardalan, @tafazolisina, and @timbuschman. Looking forward to discussing these topics with everyone in Mont Tremblant 🍁
Our theoretical investigations were originally motivated by the highly consistent and precise phenomena we observed in Bispectral Neural Networks, which reliably learn the irreps of unknown groups from observations of transformed data
Our goal is to bring together researchers in these fields to illuminate geometric principles for neural representations across both biological and artificial systems -- with special focus on invariant and equivariant representations and neural manifolds.
How can we learn equivariant neural networks, where the group actions are interpretable and completely learnable from data?
Happy to share our preprint with Karim Helwani and Demba Ba: (1/n)
Last year, we started a Slack workspace to build community at the intersection of math, deep learning, + neuroscience. Today we are ~900 members strong 🦾
We have some exciting new plans in the works, including seminars, tutorials, + hackathons
Join us online to take part! 🔗👇
How can #GenerativeAI help scientists discover symmetry from data? Check out our #ICML2023 paper on "Generative Adversarial Symmetry Discovery".
Paper:
Code:
(1/3)
The model makes use of some of my favorite mathematics—harmonic analysis, group representation theory, and an object called the *bispectrum*—to simultaneously learn a group-equivariant and -invariant map
2/17
Despite the simplicity of Bispectral Networks, their mathematical foundations give them power💥
We're excited about their potential as a computational primitive for robust invariant representation learning.
To dive deeper, check out our paper:
15/17
This Friday I'll be playing a live set in Oakland - all new music coded live in TidalCycles + Ableton + Vital, with @d0nxyz on vis. Come out to see 4 algorithmic audiovisual live sets from the @avclubsf crew, 8pm - midnight. DM for location 🌠
The future of BCI goes far beyond traditional control. I'm thrilled to join the team and build out this new paradigm. Reach out if you're interested in getting involved -
Now is an incredible time to work on neurotechnology, leveraging the inherent synergy between the brain's representations of the world & the rich sensory-semantic representations derived by large-scale artificial neural networks
While powerful, Group-Equivariant CNNs require knowing and building in the relevant groups by hand. In this paper, we present a method for *learning* the groups that structure the data by learning to collapse image orbits
5/17
At the request of several authors, we have extended the paper submission deadline for TAG-ML at @icmlconf until May 24th, 2023. Please see the attached flyer for an updated timeline! We look forward to your submissions!
@CimesaLjubica One thing I appreciate greatly is that they start from single-celled organisms that have no neurons at all. An individual cell exhibits an immense amount of intelligence, & I think the field would do well to release our fixation on the brain as the locus of intelligent computation
This simple objective combined with mathematical constraints permits the model to learn the irreducible representations of the group. That is, the model learns to perform a Fourier transform on an unknown group. The result is a learned equivariant layer
6/17
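For the simplest case, the cyclic group, "performing a Fourier transform on the group" means finding the basis that jointly diagonalizes every group action. A minimal numpy sketch of that end state, using the known DFT rather than learned weights:

```python
import numpy as np

N = 8
W = np.fft.fft(np.eye(N))          # DFT matrix: its rows are the irreps of Z_N
S = np.roll(np.eye(N), 1, axis=0)  # cyclic-shift operator (one group action)

# In the Fourier basis, every shift acts by pure phases:
# W S W^{-1} is diagonal, with entries exp(-2*pi*i*k/N)
D = W @ S @ np.linalg.inv(W)
assert np.allclose(D, np.diag(np.diag(D)), atol=1e-9)
```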
The bispectrum, by contrast, is a *complete* invariant. This means that it only removes variations due to group actions on the domain, preserving all signal structure
11/17
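Both properties are easy to check for the translation group, where the bispectrum has a closed form in terms of the DFT (a generic illustration, not the paper's implementation):

```python
import numpy as np

def bispectrum(f):
    # Translation-group bispectrum: B[k1, k2] = F[k1] * F[k2] * conj(F[k1 + k2])
    F = np.fft.fft(f)
    k = np.arange(len(f))
    return F[:, None] * F[None, :] * np.conj(F[(k[:, None] + k[None, :]) % len(f)])

rng = np.random.default_rng(0)
f = rng.normal(size=16)

# Invariant: cyclic shifts (group actions on the domain) leave it unchanged
assert np.allclose(bispectrum(f), bispectrum(np.roll(f, 5)))

# Complete: swapping two entries is not a shift, and the bispectrum detects it
g = f.copy()
g[0], g[1] = g[1], g[0]
assert not np.allclose(bispectrum(f), bispectrum(g))
```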
Reviewers needed for the Topology, Algebra, and Geometry in Pattern Recognition Applications workshop @CVPR! Fill out the Google form below if you are able to help out 👇🙏
🚨 Looking for workshop reviewers 🚨
We are still looking for additional reviewers for TAG-PRA @CVPR!
If you can volunteer to review, please let us know by filling out the short form or by reaching out to us at info@tagds.com ‼️
And be sure to check out our poster at #ICLR2023: Wednesday, May 3rd, 11:30AM to 1:30PM in MH 1-2-3-4, #90
I was not able to make it in person, but @Yubei_Chen has transported the poster across the globe 🗺️🙏
16/17
Thrilled to have contributed to this work! Loihi 2 allows engineers to program custom neuron models, facilitating the design of all sorts of exotic spiking neural networks.
“We are trying to establish a new flexible and versatile, general purpose intelligent computing chip,” says Mike Davies, @Intel’s #Neuromorphic Computing Director. Read about Intel’s neuromorphic research journey and the Loihi-2 chip in @ScienceMagazine.
📢 Last call for papers 📢
NEW DEADLINE: Friday Oct 6 23:59 AoE
We are giving authors until the end of the week to submit their work
Get all the info you need to submit here! 👇
Groups are mathematical objects that describe many of the transformations that show up in natural data, such as shifts, rotations, reflections, and many more
3/17
As a consequence, Bispectral Neural Networks are both invariant and robust. We demonstrate this in experiments in which we attempt to generate model "metamers"—inputs that yield the same model representation but do not look the same
12/17
Fluint Series B has its own language of generative logic for cut and paint paths. You will get to play with that code in live sessions. Here is a simple but beautiful example we made last night ⚫️⚪️⚫️
Watch the full length version (17 minutes) here:
The structure of our model is remarkably simple: It's comprised of a single linear layer that parameterizes a Fourier transform on an unknown group, followed by a bispectral layer that computes invariant coefficients from the first. Note: the weights are random to start
14/17
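A rough numpy sketch of that two-stage structure - a hypothetical simplification for illustration only (the actual model learns complex weights by gradient descent, and the frequency-style indexing of W's rows is an assumption of this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16
# Single linear layer, randomly initialized; training would shape it into
# a Fourier transform on the data's unknown group
W = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))

def forward(x, W):
    z = W @ x  # linear layer: candidate "Fourier" coefficients
    k = np.arange(len(z))
    # bispectral layer: third-order invariant coefficients of z
    # (assumes the rows of W are indexed like frequencies)
    return z[:, None] * z[None, :] * np.conj(z[(k[:, None] + k[None, :]) % len(z)])

out = forward(rng.normal(size=N), W)
assert out.shape == (N, N)
```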
Many operations are invariant. The max of an image is the same if the image is rotated. However, it's *also* the same if the pixels are randomly permuted. Most invariant maps are degenerate in this way—they lose signal structure & throw the baby out with the bathwater 🛁
10/17
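The degeneracy takes two lines of numpy to demonstrate:

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.normal(size=(8, 8))

# The max is invariant to rotation...
assert np.max(np.rot90(img)) == np.max(img)

# ...but also to an arbitrary shuffle of the pixels, which destroys all
# spatial structure: the invariant throws away nearly everything
shuffled = rng.permutation(img.ravel()).reshape(8, 8)
assert np.max(shuffled) == np.max(img)
```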
Looking forward to speaking in the Mathematics of Neuroscience Symposium on the beautiful island of Crete this summer! 🌊☀️🇬🇷 #ICNAAM2022
The call for talk/poster submissions is now open, check it out below 👇