Anand Gopalakrishnan

@agopal42

Followers
301
Following
888
Media
16
Statuses
121

PhD student at The Swiss AI Lab (IDSIA) with @SchmidhuberAI. Previously: Apple MLR, Amazon AWS AI Lab. Same handle on 🦋

Lugano, Switzerland
Joined January 2018
@agopal42
Anand Gopalakrishnan
8 months
Excited to present "Recurrent Complex-Weighted Autoencoders for Unsupervised Object Discovery" at #NeurIPS2024! TL;DR: Our model, SynCx, greatly simplifies the inductive biases and training procedures of current state-of-the-art synchrony models. Thread 👇 1/x
2
41
165
@agopal42
Anand Gopalakrishnan
1 month
RT @robert_csordas: Your language model is wasting half of its layers to just refine probability distributions rather than doing interestin…
0
137
0
@agopal42
Anand Gopalakrishnan
4 months
RT @t_andy_keller: In the physical world, almost all information is transmitted through traveling waves -- why should it be any different i…
0
929
0
@agopal42
Anand Gopalakrishnan
4 months
RT @SchmidhuberAI: Congratulations to @RichardSSutton and Andy Barto on their Turing award!
0
138
0
@agopal42
Anand Gopalakrishnan
4 months
RT @TheOfficialACM: Meet the recipients of the 2024 ACM A.M. Turing Award, Andrew G. Barto and Richard S. Sutton! They are recognized for d…
0
472
0
@agopal42
Anand Gopalakrishnan
4 months
RT @AmiiThinks: BREAKING: Amii Chief Scientific Advisor, Richard S. Sutton, has been awarded the A.M. Turing Award, the highest honour in c…
0
50
0
@agopal42
Anand Gopalakrishnan
4 months
RT @gkreiman: Brains, Minds and Machines Summer Course 2025. Application deadline: Mar 24, 2025. See more informatio…
0
21
0
@agopal42
Anand Gopalakrishnan
7 months
Come visit our poster at East Exhibit Hall A-C #3707 today (Thursday) between 4:30-7:30pm to learn how complex-valued NNs perform perceptual grouping. #NeurIPS2024
@agopal42
Anand Gopalakrishnan
7 months
RT @AggieInCA: Interested in JEPA/visual representation learning for diverse downstream tasks like planning and reasoning? Check out "Enhan…
0
7
0
@agopal42
Anand Gopalakrishnan
7 months
RT @SchmidhuberAI: Please check out a dozen 2024 conference papers with my awesome students, postdocs, and collaborators: 3 papers at NeurI…
0
132
0
@agopal42
Anand Gopalakrishnan
8 months
Let's see how this goes.
0
0
0
@agopal42
Anand Gopalakrishnan
8 months
0
0
0
@agopal42
Anand Gopalakrishnan
8 months
What happened here?! Lol
2
0
2
@agopal42
Anand Gopalakrishnan
8 months
Paper: Code: Joint work with @aleks_stanic @SchmidhuberAI @mc_mozer. Hope to see you all at our poster at #NeurIPS2024! 10/x
0
0
5
@agopal42
Anand Gopalakrishnan
8 months
Phase synchronization towards objects is more robust in SynCx than in the baselines. It successfully separates similarly colored objects, a common failure mode of other synchrony models, which rely on color as a shortcut feature for grouping. 9/x
1
0
0
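The phase-based grouping readout described above can be illustrated with a toy NumPy sketch. Everything here is hypothetical (the phase values, cluster spread, and nearest-center assignment are made-up stand-ins, not the paper's evaluation recipe): pixels whose phases have synchronized to the same value get assigned to the same object.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-pixel output phases after synchronization: pixels of
# each object cluster around a shared phase value (values are made up).
phases = np.concatenate([
    rng.normal(0.5, 0.05, size=50),   # pixels of "object A"
    rng.normal(2.5, 0.05, size=50),   # pixels of "object B"
])

# Read out object labels by assigning each pixel to the nearest phase
# center (a simple stand-in for a clustering step).
centers = np.array([0.5, 2.5])
labels = np.argmin(np.abs(phases[:, None] - centers[None, :]), axis=1)
```

When the phases of two objects collapse onto the same value (e.g. because color is used as a shortcut feature), this readout can no longer separate them, which is the failure mode the tweet refers to.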
@agopal42
Anand Gopalakrishnan
8 months
SynCx outperforms current state-of-the-art unsupervised synchrony-based models on standard multi-object datasets while using 6-23x fewer model parameters than the baselines. 8/x
1
0
1
@agopal42
Anand Gopalakrishnan
8 months
Our model achieves phase synchronization towards objects in a fully unsupervised way, without the additional inductive biases (gating mechanisms), strong supervision (depth masks), or contrastive training used by current state-of-the-art synchrony models. 7/x
1
0
0
@agopal42
Anand Gopalakrishnan
8 months
SynCx processes complex-valued inputs at every layer using complex-valued weights. It is trained to reconstruct the input image at every iteration using the output magnitudes. Output phases are fed back as input to the next step with input magnitudes clamped to the image. 6/x
1
0
1
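The iterative loop described above (reconstruct from magnitudes, feed phases back, clamp input magnitudes to the image) can be sketched in a few lines. This is purely illustrative: the single encoder/decoder weight pair, the dimensions, and the function name are hypothetical stand-ins, not the actual SynCx architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def syncx_sketch(image, n_iters=3, hidden_dim=32):
    """Toy sketch of the iteration in the tweet above (shapes and the
    single complex layer pair are hypothetical, not the real model)."""
    d = image.size
    # Hypothetical complex-valued encoder/decoder weights.
    W_enc = (rng.standard_normal((hidden_dim, d))
             + 1j * rng.standard_normal((hidden_dim, d))) / np.sqrt(d)
    W_dec = (rng.standard_normal((d, hidden_dim))
             + 1j * rng.standard_normal((d, hidden_dim))) / np.sqrt(hidden_dim)

    phases = rng.uniform(-np.pi, np.pi, size=d)  # start from random phases
    for _ in range(n_iters):
        # Input magnitudes are clamped to the image at every iteration.
        z_in = image.ravel() * np.exp(1j * phases)
        z_out = W_dec @ (W_enc @ z_in)            # complex-weighted layers
        reconstruction = np.abs(z_out)            # magnitudes reconstruct the image
        phases = np.angle(z_out)                  # phases are fed back
    return reconstruction, phases

recon, ph = syncx_sketch(rng.random((8, 8)))
```

A real model would train the weights with a reconstruction loss; the sketch only shows the data flow of one forward pass with iterated phase feedback.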
@agopal42
Anand Gopalakrishnan
8 months
Hidden units in such a system must activate based not only on the presence of features (magnitudes) but also on their relative phases. Matrix-vector products between complex-valued weights and complex-valued activations are a natural way to implement such functionality. 5/x
1
0
0
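The point about complex matrix-vector products can be shown with a toy NumPy calculation (a hypothetical two-feature example, not from the paper): two inputs with identical magnitudes produce very different output magnitudes depending on their relative phase.

```python
import numpy as np

# Two features with identical magnitudes but different relative phases.
x_aligned = np.array([np.exp(1j * 0.0), np.exp(1j * 0.0)])
x_opposed = np.array([np.exp(1j * 0.0), np.exp(1j * np.pi)])

# One complex "weight" row that sums both features.
w = np.array([1.0 + 0j, 1.0 + 0j])

# Same input magnitudes, very different output magnitudes:
# aligned phases add constructively, opposed phases cancel.
out_aligned = w @ x_aligned
out_opposed = w @ x_opposed
```

So a single complex-weighted unit is sensitive to phase agreement for free, which is exactly the magnitude-and-phase behavior the tweet describes.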
@agopal42
Anand Gopalakrishnan
8 months
This is a conceptual flaw in current synchrony models, all of which use feedforward convolutional nets, but we can solve it in an iterative fashion. Starting with random phases, hidden units compute phase updates that propagate local constraints toward a stable configuration. 4/x
1
0
0
@agopal42
Anand Gopalakrishnan
8 months
Green and red circles highlight junctions that belong to the same and different objects, respectively. We cannot decide which junctions belong to which object using only local features, as these are indistinguishable between the two cases. 3/x
1
0
1