Bruno Ribeiro

@brunofmr

Followers: 2,040 · Following: 273 · Media: 32 · Statuses: 770

(Currently on sabbatical @ Stanford) Associate Professor of Computer Science, Purdue University; Causal & Invariant Representation Learning

West Lafayette, IN
Joined December 2009
@brunofmr
Bruno Ribeiro
7 months
My lab @PurdueCS will be hiring a PhD student Fall 2024. No need to have top conference papers (not everybody has a good undergraduate/MSc research experience). Compelling application showing depth and eagerness goes a long way. December 20, 2023 deadline
10 replies · 90 retweets · 482 likes
@brunofmr
Bruno Ribeiro
1 year
PhD students: If your advisor is distracted by LLMs to the point of being uninterested in your research, seek support from your research community. Collaborate with other people. And know that some of us are not looking forward to the avalanche of nonsensical LLMs+graph papers
7 replies · 24 retweets · 217 likes
@brunofmr
Bruno Ribeiro
3 years
Today our lab is launching 🚀 which will feature a series of tutorials (text 📜 +videos 📺) relating G-invariances, (graph) representation learning, extrapolation, and causality
0 replies · 32 retweets · 176 likes
@brunofmr
Bruno Ribeiro
1 year
🚀 What are knowledge graphs? Are they just attributed graphs? Or are KGs more? (i.e., new equivariances) Gao, @YangzeZhou , & I postulate KGs have a different equivariance The consequences are astounding🤯, including 100% test accuracy in KGC... 1/🧵
4 replies · 22 retweets · 159 likes
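A rough formalization of the claim, as a reading aid (notation mine, not from the thread): the postulated "double equivariance" asks a KG predictor to commute with relabelings of entities and of relation types jointly, whereas attributed-graph models only assume the former.

```latex
\[
  f(\sigma \cdot \tau \cdot \mathrm{KG}) \;=\; \sigma \cdot \tau \cdot f(\mathrm{KG}),
  \qquad \sigma \in S_{\text{entities}},\;\; \tau \in S_{\text{relations}}
\]
```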
@brunofmr
Bruno Ribeiro
8 months
Excited to be spending my post-tenure sabbatical year (until July 2024) in the Bay Area (at SNAP hosted by @jure ). Spending time with old friends in the area has been a blast. If you're in the Bay Area and interested in graph, OOD, and causal representation learning, reach out!
3 replies · 0 retweets · 118 likes
@brunofmr
Bruno Ribeiro
2 years
ICML will be hybrid. If you are in India or Brazil and need a tourist/business visa to attend in person, remember to set -2 years on your Time Machine before applying so we can welcome you to Baltimore. PSA: US tourist visa lines are still ~2 years long in India and Brazil!
3 replies · 6 retweets · 111 likes
@brunofmr
Bruno Ribeiro
1 month
😅 The view that GNNs are cool but not super-useful seems somewhat prevalent in the Bay Area 🙃 Parts of the graph ML community now see invariances as unnecessary (note: without them the inputs are sequences, not graphs) 🙃 The truth is, GeomDL needs to make a stronger case...🧵
@xbresson
Xavier Bresson
1 month
Indeed, graphs are a scam -- an invention from mathematicians to control people! I can prove it -- it is well-known that data is truly i.i.d. (written in all machine learning textbooks). So there exists no relationship between data points, and graph representation is just an illusion.
6 replies · 8 retweets · 113 likes
4 replies · 6 retweets · 104 likes
@brunofmr
Bruno Ribeiro
4 years
Thanks to the efforts of my students! Looking forward to what comes next.
@PurdueCS
Purdue Computer Science
4 years
Congratulations to @LifeAtPurdue Professor Bruno Ribeiro ( @brunofmr ) on receiving his CAREER Award.
0 replies · 2 retweets · 25 likes
15 replies · 2 retweets · 86 likes
@brunofmr
Bruno Ribeiro
6 months
Something that may surprise the geometric DL community: There are now many folks in graph ML that believe *symmetries are irrelevant for graph learning* Q: Are there papers that *prove* that symmetries are *necessary* for inductive learning on graphs? If not, we must write one
11 replies · 6 retweets · 74 likes
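To make the stakes concrete, a minimal sketch (mine, not from the tweet; toy model): without permutation invariance, a model's output depends on the arbitrary node ordering, i.e., it is effectively reading a sequence rather than a graph.

```python
# Toy demo: the same graph under two node orderings.
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(0, 2, size=(5, 5))
A = np.triu(A, 1)
A = A + A.T                                # random undirected graph
P = np.eye(5)[rng.permutation(5)]          # random permutation matrix
A_perm = P @ A @ P.T                       # same graph, nodes relabeled

w = rng.normal(size=25)
seq_model = lambda adj: float(adj.reshape(-1) @ w)             # order-sensitive: reads a sequence
inv_model = lambda adj: float(np.tanh(adj.sum(axis=1)).sum())  # permutation-invariant readout

print(seq_model(A), seq_model(A_perm))  # generally different outputs
print(inv_model(A), inv_model(A_perm))  # identical: respects the graph symmetry
```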
@brunofmr
Bruno Ribeiro
20 days
What @nvidia is not quite seeing yet is that their prices to academia are putting us under intense pressure to move away from their hardware. And we will start seeing this move in the acknowledgment sections at #ICLR2025
@tsarnick
Tsarathustra
21 days
Fei-Fei Li says Stanford's Natural Language computing lab has only 64 GPUs and academia is "falling off a cliff" relative to industry
98 replies · 206 retweets · 1K likes
2 replies · 7 retweets · 68 likes
@brunofmr
Bruno Ribeiro
3 years
Really awesome tutorial on set representation learning! Has everything one needs to know 🚀! Great job 👏🏼
@FabianFuchsML
Fabian Fuchs
3 years
A year ago I asked: Is there more than Self-Attention and Deep Sets? - and got very insightful answers. 🙏 Now, Ed, Martin and I wrote up our own take on the various neural network architectures for sets. Have a look and tell us what you think! :) ➡️ ☕️
0 replies · 74 retweets · 333 likes
0 replies · 3 retweets · 53 likes
@brunofmr
Bruno Ribeiro
5 years
Entering #ICLR2020 reviews. Witnessing some harsh words by other reviewers, #DontBeMean . Why not assume authors make mistakes, miss prior work, or overstate their claims in good faith? The goal is to reward good papers; we are not the Spanish Inquisition.
3 replies · 6 retweets · 52 likes
@brunofmr
Bruno Ribeiro
2 years
What is the OOD generalization capability of (structural) message-passing GNNs for link prediction? "OOD Link Prediction Generalization Capabilities of Message-Passing GNNs in Larger Test Graphs" @YangzeZhou w/ @GittaKutyniok @brunofmr A 🧵 1/n
2 replies · 15 retweets · 51 likes
@brunofmr
Bruno Ribeiro
20 days
@docmilanfar I am sorry, but industry really needs to rethink this. If Stanford had sold its 8,000 acres to Baldwin Locomotive Works in the 1800s to get its best locomotives for study, we would look at that today with regret. Universities plan for centuries, not the next quarter.
1 reply · 0 retweets · 40 likes
@brunofmr
Bruno Ribeiro
5 years
"Structural representations (GNNs,...) are to positional node embeddings (matrix factorization,...) as distributions are to samples." @balasrini32 new preprint: Implications? They can do the same tasks. A thread: 1/6
1 reply · 6 retweets · 37 likes
@brunofmr
Bruno Ribeiro
4 years
Research news: Want to boost the expressiveness of your favorite GNN architecture? Collective learning can help: a hybrid GNN representation with Hang & @ProfJenNeville GCNs ( @thomaskipf , @wellingmax ) benefit the most (up to +15% in accuracy)!
3 replies · 10 retweets · 38 likes
@brunofmr
Bruno Ribeiro
3 years
Thanks @emaros96 for the invitation! Great questions! Here are the slides of my talk on Position vs Structural graph representations & Counterfactual Graph Representation learning
0 replies · 7 retweets · 38 likes
@brunofmr
Bruno Ribeiro
2 years
Accepted at #NeurIPS2022 . Congrats @YangzeZhou !
@brunofmr
Bruno Ribeiro
2 years
What is the OOD generalization capability of (structural) message-passing GNNs for link prediction? "OOD Link Prediction Generalization Capabilities of Message-Passing GNNs in Larger Test Graphs" @YangzeZhou w/ @GittaKutyniok @brunofmr A 🧵 1/n
2 replies · 15 retweets · 51 likes
4 replies · 0 retweets · 33 likes
@brunofmr
Bruno Ribeiro
4 months
Excited to give a plenary talk at the *GNNs for the Sciences Workshop* this Thursday and Friday 🚀. Algorithmic alignment is *key* for OOD robustness in AI4Science applications. Our #ICLR2024 spotlight will be one of the topics
0 replies · 1 retweet · 34 likes
@brunofmr
Bruno Ribeiro
1 year
ChatGPT is not magic. It is called inference-time guidance. And it is adversarially designed to make you think it can do any task (which is its explicit objective function). Well-engineered ML is really awesome, but it can't do tasks not explicit in its objective function.
1 reply · 2 retweets · 32 likes
@brunofmr
Bruno Ribeiro
2 years
Thanks @AmazonScience for the support! Very excited about this. Counterfactual inference with direct (measurable) impact at the scale of millions of subjects. Work jointly with the amazing Sanjay Rao
@PurdueCS
Purdue Computer Science
2 years
Is it possible to improve video streaming without using paying customers as test subjects? ➡️ @LifeAtPurdue 's Prof. Bruno Ribeiro shares a novel approach of using #ML tools to answer counterfactual questions for streaming services. #PurdueCS #Purdue
1 reply · 2 retweets · 5 likes
1 reply · 5 retweets · 32 likes
@brunofmr
Bruno Ribeiro
4 years
Slides of my invited talk at #SIAMCSC20 on the equivalence between Matrix Factorization and GNNs
0 replies · 8 retweets · 32 likes
@brunofmr
Bruno Ribeiro
1 year
To anyone receiving #ICML2023 scores. My AC batch has pretty low scores all around. I am not sure if ACs having to aggressively chase reviewers this year (all papers were assigned only 3 reviewers) correlates with low scores. Or maybe an influence of the job market tightening
8 replies · 2 retweets · 31 likes
@brunofmr
Bruno Ribeiro
21 days
How can Physics+ML become robust OOD? *Causal structure discovery offers a promising direction* ✨Check out @mouli_sekar 's #ICLR2024 Spotlight✨ Today poster #24 at 4:30pm -
1 reply · 0 retweets · 28 likes
@brunofmr
Bruno Ribeiro
2 years
Same here, @LogConference review quality/timeliness much superior to ICML, NeurIPS, ICLR. Now it seems inevitable that these top 3 ML conferences will eventually become federated conferences like ACM's FCRC, where specialized conference papers (e.g., LOG papers) are presented.
@PetarV_93
Petar Veličković
2 years
Amazed by both the review timeliness and quality on my AC stack for this year's @LogConference ! All papers have 3 reviews, on time, with lots of detail, and no pushes from my side. @NeurIPSConf @icmlconf @iclr_conf take note: offering monetary reviewer prizes can mean a lot!
2 replies · 10 retweets · 96 likes
0 replies · 3 retweets · 28 likes
@brunofmr
Bruno Ribeiro
3 years
Looking forward!
@GclrW
4th GCLR workshop @AAAI2024
3 years
Excited to announce our next speaker @brunofmr from @PurdueCS at the 2nd GCLR workshop at AAAI 2022 @RealAAAI . Join us to listen to Bruno's work. For more details: @ravi_iitm @PhilChodrow @kerstingAIML @Sriraam_UTD @gin_bianconi @rbc_dsai_iitm
0 replies · 7 retweets · 18 likes
0 replies · 2 retweets · 26 likes
@brunofmr
Bruno Ribeiro
3 years
Fun easter egg in Appendix G.1: Q: Do OGBG graph classification tasks really need more expressive GNN representations than WL? 🤔 A: No, WL power is enough
@cottascience
Leonardo Cotta
3 years
Reconstruct to empower! 🚀 Our new work (w/ @chrsmrrs and @brunofmr ) accepted at #NeurIPS2021 shows how graph reconstruction —an exciting field of (algebraic) combinatorics— can build expressive graph representations and empower existing (GNNs) ones! 👉 1/3
4 replies · 11 retweets · 99 likes
3 replies · 2 retweets · 25 likes
@brunofmr
Bruno Ribeiro
4 years
This is an energy-based model (EBM) using GNNs
@cottascience
Leonardo Cotta
4 years
Our new work accepted at NeurIPS (w/ @carloshct , Ananthram, @brunofmr ) introduces unsupervised joint k-node graph representations! 1/3
3 replies · 6 retweets · 43 likes
0 replies · 1 retweet · 24 likes
@brunofmr
Bruno Ribeiro
1 year
I plan to read all papers on my AC batch after rebuttal. I will try to save the interesting ideas from the overall negativity. Try to be positive in your rebuttals. Tell us what makes your work interesting (SOTA is not as interesting as students tend to think) Good luck!
1 reply · 1 retweet · 24 likes
@brunofmr
Bruno Ribeiro
1 year
Congrats @PetarV_93 , @beabevi_ and co-authors! Neural algorithmic reasoning drastically improves with causal graph ML!
@PetarV_93
Petar Veličković
1 year
Two papers accepted #ICML2023 🎉 We use causality to drastically improve neural algorithmic reasoning (as foretold by @beabevi_ @YangzeZhou @brunofmr two ICMLs ago) 🔢 and upsample graphs to slow down message passing and stop over-smoothing in its tracks 🐌 See you in Honolulu 🏝️
1 reply · 6 retweets · 153 likes
1 reply · 1 retweet · 24 likes
@brunofmr
Bruno Ribeiro
11 months
Just in case anyone is wondering 🤔, out-of-distribution tasks are *far* from solved in ML‼️ On graphs there is lots to do. We can't even reliably generalize across graph sizes yet (and mostly only for graphons)!
@LukeGessler
Luke Gessler
11 months
this paper's nuts. for sentence classification on out-of-domain datasets, all neural (Transformer or not) approaches lose to good old kNN on representations generated by.... gzip
134 replies · 901 retweets · 5K likes
1 reply · 1 retweet · 24 likes
@brunofmr
Bruno Ribeiro
5 years
Reviewer logic these days: I don't work on this topic -> I am unaware of the literature on this topic -> Nobody writes papers on this topic -> Reject It defies understanding that anyone would agree to review a paper on a topic they know nothing about (and are unwilling to learn)
1 reply · 0 retweets · 23 likes
@brunofmr
Bruno Ribeiro
3 years
Awesome paper that shows KGE link prediction performance depends heavily on distribution shifts of the test data (i.e., what the task is)
0 replies · 4 retweets · 23 likes
@brunofmr
Bruno Ribeiro
2 years
Really interesting! Reinforces our observations on graph tasks (such as graph classification, link prediction) where OOD with larger graphs will obliterate methods learning spurious associations
@PetarV_93
Petar Veličković
2 years
Out-of-distribution (4x larger inputs), the average F1 score collapses to ~0.5! Hence the models are still overfitting to the specifics of the training data distribution, and not truly learning the algorithm! We have a long way to go before we can truly "solve" the benchmark.
1 reply · 1 retweet · 15 likes
1 reply · 2 retweets · 23 likes
@brunofmr
Bruno Ribeiro
6 years
(updated) Machine Learning in Network Science, @netsci2018 satellite. 5 amazing keynote speakers! Deadline for abstracts: April 30. Info and links @net_science @MartonKarsai @ciro
0 replies · 11 retweets · 21 likes
@brunofmr
Bruno Ribeiro
3 years
My GraphEx talk slides: 1. Graph Representation Learning (GRL) is currently observational (💯 fine) 2. But to go beyond observational tasks, we need Counterfactual GRL 🚀 [1] #ICML2021 3. Invariant Risk Minimization fails in GRL [1] 1/🧵
1 reply · 2 retweets · 22 likes
@brunofmr
Bruno Ribeiro
3 years
Yet another example of why controls are so important. Very interesting read for folks working on graphs and misinformation.
@jonassjuul
Jonas L. Juul
3 years
Does false news spread differently than true news online? For example, does false news spread faster or deeper into the Twitter network? How about videos vs. petitions? New paper out in @PNASnews with @jugander 1/14
3 replies · 157 retweets · 500 likes
1 reply · 3 retweets · 22 likes
@brunofmr
Bruno Ribeiro
1 year
PS: Meta-learning is not magic either. The objectives of the multiple tasks must align.
1 reply · 0 retweets · 21 likes
@brunofmr
Bruno Ribeiro
2 years
It was a privilege to have worked with @cottascience . Follow his next steps because he has really deep insights and bold projects
@cottascience
Leonardo Cotta
2 years
Last Tuesday I defended and ended my PhD journey @PurdueCS ! I'll join @VectorInst as a postdoc fellow in the Fall, working w/ @cjmaddison and others in more aspects of ML ∩ Combinatorics ∩ Invariant theory. +
11 replies · 7 retweets · 134 likes
0 replies · 1 retweet · 21 likes
@brunofmr
Bruno Ribeiro
5 years
@brunofmr
Bruno Ribeiro
5 years
Can optimization alone make GNNs more powerful than the Weisfeiler-Lehman test? The answer is YES! (w/ @RyanLMurphy1 @balasrini32 & Rao)
0 replies · 10 retweets · 22 likes
1 reply · 0 retweets · 21 likes
@brunofmr
Bruno Ribeiro
9 months
Excellent work on causal link prediction by @cottascience and @beabevi_
@cottascience
Leonardo Cotta
9 months
🗞️ Exciting news! This is now published in the proceedings of @royalsociety A (w/ open access). Make sure to check the original thread and drop me a line if you have any questions or feedback 😊
0 replies · 12 retweets · 39 likes
0 replies · 1 retweet · 21 likes
@brunofmr
Bruno Ribeiro
4 years
This is going to be a fun course!
@SINSA2020
SINSA 2020
4 years
. @brunofmr & @jure will teach the #SINSA2020 course in Machine Learning in Networks: @NUnetsi @IUNetSci
0 replies · 5 retweets · 14 likes
0 replies · 1 retweet · 21 likes
@brunofmr
Bruno Ribeiro
1 year
ChatGPT is not magic. Its inference-time guidance is adversarially trained to make you think it can do any task (which is its explicit objective function) Spending time "breaking" chatGPT = Free training data for OpenAI The "breaking" happens once... you will never see it again
2 replies · 1 retweet · 20 likes
@brunofmr
Bruno Ribeiro
2 years
Excellent post on how we can overcome the limitations of GNNs
@TDataScience
Towards Data Science
2 years
Physics-inspired continuous learning models on graphs can overcome the limitations of traditional GNNs, @mmbronstein explains.
0 replies · 17 retweets · 67 likes
0 replies · 2 retweets · 20 likes
@brunofmr
Bruno Ribeiro
2 months
Inspiration for #ICML2024 discussion week. Reviewer 2: "Percolation is just a fancy name for diffusion! Reject." Frisch & Hammersley (1963): "We thank the reviewer for their insightful comment. We will add a paragraph explaining the difference..." Be kind to each other out there
0 replies · 0 retweets · 20 likes
@brunofmr
Bruno Ribeiro
3 months
👇OOD is a fundamental challenge in AI4Science but metalearning+causality may help: @mouli_sekar 's #ICLR24 spotlight shows scenarios where Physics-ML performs poorly OOD. And how a causal-equivalent method (MetaPhysiCA) can help. 1/n
1 reply · 3 retweets · 20 likes
@brunofmr
Bruno Ribeiro
5 years
Slides of my keynote at GrAPL (GABB + GraML) @IPDPS #ipdps2019 in Rio. Thanks @ananth_k , Manoj Kumar, Antonino Tumeo , & Tim Mattson for the invitation! I had a great time.
0 replies · 5 retweets · 18 likes
@brunofmr
Bruno Ribeiro
1 year
After buying a keyboard, matrix-factorization recommenders think you were born to buy keyboards, and will recommend more There are mitigation strategies to “update the factors” but the factorization approach fundamentally gets the causal task wrong
@cottascience
Leonardo Cotta
1 year
⛔️Most (causal and obs) link prediction systems use matrix factorization methods. What's the issue here? Causal assumption: Link formation is driven by an innate set of node factors given at nodes' birth. We need models that can react to interventions: path dependency! 4/10
1 reply · 0 retweets · 4 likes
0 replies · 1 retweet · 17 likes
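A toy numerical illustration of this point (items, factors, and numbers are all invented for the example): with static factors, the dot-product scores keep ranking keyboards first even right after the user has bought one, because nothing in the model reacts to the intervention.

```python
# Toy matrix-factorization scoring: score(user, item) = <user factor, item factor>.
import numpy as np

user = np.array([0.9, 0.1])                      # learned factor: "likes keyboards"
items = {"keyboard A": np.array([1.0, 0.0]),
         "keyboard B": np.array([0.95, 0.05]),
         "monitor":    np.array([0.10, 1.0])}

scores = {name: float(user @ v) for name, v in items.items()}
for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {s:.2f}")   # keyboards still rank first after the purchase
```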
@brunofmr
Bruno Ribeiro
6 months
@PetarV_93 Surprising that folks still don't know the transformer architecture is equivariant. Also surprising: how many people still think symmetries are useful only for reducing sample complexity (in-distribution). An OK use case, but that is not what makes symmetries exciting in ML.
1 reply · 0 retweets · 19 likes
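A quick numerical check of the first claim (a minimal single-head self-attention with no positional encodings; all implementation details are mine): permuting the input tokens permutes the outputs the same way.

```python
# Verify: attention(X[perm]) == attention(X)[perm] (permutation equivariance).
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 8
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention(X):                        # single head, no positional encodings
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    return softmax(Q @ K.T / np.sqrt(d)) @ V

X = rng.normal(size=(n, d))
perm = rng.permutation(n)
print(np.allclose(attention(X[perm]), attention(X)[perm]))  # True
```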
@brunofmr
Bruno Ribeiro
2 years
Can we learn counterfactually-invariant representations with counterfactual data augmentation?🧐 ‼️ Maybe not if the augmentation was done by human annotators (a context-guessing machine) Relevant to NLP efforts by @mouli_sekar & @YangzeZhou @crl_uai
0 replies · 6 retweets · 19 likes
@brunofmr
Bruno Ribeiro
7 months
As an undergrad I studied group theory & Groebner bases. I ended up writing a paper on it (an algorithm to find holomorphic foliations without algebraic solutions), but I might not have had it published in time for grad school applications.
2 replies · 1 retweet · 18 likes
@brunofmr
Bruno Ribeiro
2 years
Awesome post by @mmbronstein explaining how subgraph representations can help improve GNNs! Subgraph-based representations are extremely interesting (with applications even in counterfactual invariance, @beabevi_ @YangzeZhou ) PS: @mmbronstein 's viz game is just another level
@mmbronstein
Michael Bronstein
2 years
New blog post coauthored with @cottascience @ffabffrasca @HaggaiMaron @chrsmrrs Lingxiao Zhao on a new class of "Subgraph GNN" architectures that are more expressive than the WL test
3 replies · 51 retweets · 234 likes
1 reply · 3 retweets · 17 likes
@brunofmr
Bruno Ribeiro
2 years
A friendly reminder from your neighborhood #NeurIPS2022 AC. Please add a clear causal model (with a DAG encompassing all variables) if talking about causal (graph) representation learning 💕
1 reply · 2 retweets · 16 likes
@brunofmr
Bruno Ribeiro
20 days
@docmilanfar It is not a criticism of industry at all. Or envy. It is just a statement of facts. It happens the same way at Purdue or Stanford. We go to the university admin (myself, Fei-Fei) and ask for resources, and they say we can't have it.
1 reply · 1 retweet · 16 likes
@brunofmr
Bruno Ribeiro
4 years
Great to see many good benchmarks! 1. It is disheartening to see GNNs requiring 48GB GPUs for medium-sized graphs. 2. The node classification tasks could be harder: GNNs achieve only up to 35% accuracy on the Friendster dataset
@weihua916
Weihua Hu
4 years
Super excited to share Open Graph Benchmark (OGB)! OGB provides large-scale, diverse graph datasets to catalyze graph ML research. The datasets are easily accessible via OGB Python package with unified evaluation protocols and public leaderboards. Paper:
2 replies · 112 retweets · 409 likes
0 replies · 0 retweets · 13 likes
@brunofmr
Bruno Ribeiro
3 years
Interesting graph quirk: Node attributes are self-edge attributes. But predicting node attributes and predicting edges are (provably) fundamentally different tasks. It has to do with how symmetries work... there always exists a g with g(v) == f(v,v), but there may be no g with g(v)g(u) == f(v,u)
1 reply · 0 retweets · 15 likes
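Spelled out (notation mine): the diagonal of any pairwise function always yields a node function, but the reverse factorization of the off-diagonal may be impossible.

```latex
\[
  g(v) := f(v, v) \ \text{ always exists, but }\
  f(v, u) = \phi\big(g(v),\, g(u)\big) \ \text{ may hold for no node-wise } g \text{ and combiner } \phi.
\]
```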
@brunofmr
Bruno Ribeiro
3 years
@deaneckles Maybe this counts: Herman Rubin once simplified one of Chernoff's proofs in a manuscript with an inequality. Chernoff thought it was "so trivial that I did not trouble to cite his contribution"... this is now known as Chernoff's inequality
1 reply · 0 retweets · 15 likes
@brunofmr
Bruno Ribeiro
4 years
@jure presenting a much-needed initiative for benchmarking graph representation learning methods. More than datasets, it also seeks to standardize data splits.
1 reply · 3 retweets · 14 likes
@brunofmr
Bruno Ribeiro
2 years
I am really excited about this work! 🚀So far the focus has been on learning symmetries ❄️. But in a way, the asymmetries are where the (causal) information is for OOD robustness (Wed noon eastern) #ICLR2022
@mouli_sekar
Chandra Mouli Sekar
2 years
How can we build OOD robust classifiers when test inputs are transformed differently from training inputs? Our paper at #ICLR2022 introduces *asymmetry learning* to solve such OOD tasks (need counterfactual reasoning). Come attend our oral presentation:
0 replies · 6 retweets · 10 likes
0 replies · 2 retweets · 14 likes
@brunofmr
Bruno Ribeiro
7 months
Excellent resource for Brazilian students applying for PhDs in CS. Highly recommended!
@cottascience
Leonardo Cotta
7 months
We hope others will create similar programs as well! We have helped Brazilian students get into U of Toronto, JHU and other places. This really works! Information is the most valuable asset in the process.
1 reply · 2 retweets · 12 likes
0 replies · 2 retweets · 14 likes
@brunofmr
Bruno Ribeiro
6 years
Janossy Pooling: Learnable pooling layers for deep neural networks, w/ @RyanLMurphy1 , Srinivasan, and Rao. Generalizes Deep Sets (Zaheer et al. @rsalakhu ) and others. Using LSTMs as Graph Neural Net aggregators is theoretically sound! @ProfJenNeville @jure
2 replies · 6 retweets · 14 likes
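A minimal sketch of the idea (a toy recurrence stands in for the LSTM; sizes and the number of sampled permutations are invented): Janossy pooling applies an order-sensitive aggregator to orderings of the input set and averages, approximating the permutation-invariant layer by sampling permutations instead of summing over all n! of them.

```python
# Janossy-style pooling: average an order-sensitive model over sampled permutations.
import numpy as np

rng = np.random.default_rng(0)
W, U = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))

def rnn(seq):                 # order-sensitive aggregator (LSTM stand-in)
    h = np.zeros(4)
    for x in seq:
        h = np.tanh(W @ h + U @ x)
    return h

def janossy_pool(elements, num_perms=20):
    outs = [rnn(elements[rng.permutation(len(elements))]) for _ in range(num_perms)]
    return np.mean(outs, axis=0)     # approximately permutation invariant

neighbors = rng.normal(size=(6, 4))  # e.g., a node's neighbor features
print(janossy_pool(neighbors))
```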
@brunofmr
Bruno Ribeiro
4 years
Great resource for students trying to understand a bit more about GNNs
@xbresson
Xavier Bresson
4 years
Sharing my lecture slides on "Recent Developments of Graph Network Architectures" from my deep learning course. It is a review of some exciting works on GNNs published in 2019-2020. #feelthelearn
12 replies · 254 retweets · 1K likes
0 replies · 1 retweet · 13 likes
@brunofmr
Bruno Ribeiro
10 months
This is really cool work! Highly recommend
@HaggaiMaron
Haggai Maron
1 year
(1/10) New paper! A deep architecture for processing (weights of) other neural networks while preserving equivariance to their permutation symmetries. Learning in deep weight spaces has a wide potential: from NeRFs to INRs; from adaptation to pruning 👇
8 replies · 129 retweets · 742 likes
0 replies · 0 retweets · 13 likes
@brunofmr
Bruno Ribeiro
20 days
@docmilanfar At Purdue, the answer I get is student affordability. We have frozen tuition for 13 years now. All endowment interest goes to cover the gap. I am sure Fei-Fei also gets some answer. Her plea is probably also partially directed at Stanford brass.
1 reply · 1 retweet · 13 likes
@brunofmr
Bruno Ribeiro
1 year
This is an awesome resource on temporal graph learning! Highly recommended
@emaros96
Emanuele Rossi
1 year
Check out our new blog post summarising key advances in Temporal Graph Learning over the last 12 months! Written with amazing co-authors @shenyangHuang @michael_galkin
3 replies · 87 retweets · 422 likes
1 reply · 1 retweet · 13 likes
@brunofmr
Bruno Ribeiro
5 years
In need of a harder classification task for your GNN? Also, useful to check the calibration of the GNN #calibrationalsomatters (w/ @leolvt @BJalaian ) #NeurIPS2019
1 reply · 7 retweets · 13 likes
@brunofmr
Bruno Ribeiro
20 days
@docmilanfar Just for scale, since many students are curious: Purdue expenses are 2.2B. Endowment interest is ~0.13B/yr. If we spend on GPUs, we need to make up the gap, i.e., raise tuition. Purdue gets ~390M/yr from the state, which is equivalent to an endowment of 14.4B. Not that far from 36B
3 replies · 1 retweet · 11 likes
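The conversion implied by these numbers, spelled out (the payout rate is my inference from the tweet's own figures, not something it states):

```latex
\[
  \text{endowment equivalent} \;=\; \frac{\$0.39\,\text{B/yr}}{r},
  \qquad
  r \;\approx\; \frac{0.39}{14.4} \;\approx\; 2.7\%\ \text{payout per year}.
\]
```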
@brunofmr
Bruno Ribeiro
7 months
PSA: For faculty thinking of advertising a "no top paper needed" policy: somehow many students take it as "lowering the bar", when it is actually the opposite: max quality(student) s.t. no constraint ≥ max quality(student) s.t. must have top papers
1 reply · 0 retweets · 13 likes
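The inequality in the tweet, written out (notation mine): enlarging the feasible set of applicants can only raise, never lower, the best attainable quality.

```latex
\[
  \max_{s \in S} \, q(s) \;\;\ge\;\; \max_{s \in S \,\cap\, \{\text{has top papers}\}} q(s)
\]
```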
@brunofmr
Bruno Ribeiro
3 years
Great commentary by O'Bray, Horn, @Pseudomanifold , @kmborgwardt on the evaluation of Graph Generative Models (unlike images, we understand graph topology a lot better). There should be a workshop with the ERGM folks like Krista Gile, Handcock, Snijders
2 replies · 2 retweets · 12 likes
@brunofmr
Bruno Ribeiro
1 month
@jure is proposing a promising direction with relbench () but there are also many other unexplored directions...
0 replies · 0 retweets · 12 likes
@brunofmr
Bruno Ribeiro
2 years
Same here for Graph representation learning: 5.5 is top 10%
@autreche
Manuel Gomez-Rodriguez
2 years
To those disappointed with low neurips scores in their submissions, it may be reassuring for the rebuttal period to know that, in my batch as SAC, papers with an average score of 5.5 make it to the top 10%! #neurips2022
3 replies · 10 retweets · 183 likes
2 replies · 2 retweets · 12 likes
@brunofmr
Bruno Ribeiro
1 year
This is definitely one of the most interesting works I have ever been involved in 👇 In 1927, Spearman (inventor of factorization) warned us not to use matrix factorization for (causal) recommendations… how should we do it then? Via 👉Causal Lifting👈 See @cottascience 's thread
@cottascience
Leonardo Cotta
1 year
⚠️ Most modern link prediction tasks are actually causal! ❓ What are the needed causal assumptions and estimators for this task? 💡 Our new work ( w/ @beabevi_ , Nesreem, @brunofmr ) shows that INVARIANCES ARE SUFFICIENT to answer both! 🧵1/10
3 replies · 26 retweets · 118 likes
0 replies · 1 retweet · 12 likes
@brunofmr
Bruno Ribeiro
1 year
For the folks writing follow-up work. Some updates: 1. New task to showcase the fully-inductive nature of the double equivariance link prediction approach 2. Theorem 4.10 was (obviously) showing the reverse relation (thx Jincheng Zhou!). This is now fixed
@brunofmr
Bruno Ribeiro
1 year
🚀 What are knowledge graphs? Are they just attributed graphs? Or are KGs more? (i.e., new equivariances) Gao, @YangzeZhou , & I postulate KGs have a different equivariance The consequences are astounding🤯, including 100% test accuracy in KGC... 1/🧵
4 replies · 22 retweets · 159 likes
1 reply · 0 retweets · 11 likes
@brunofmr
Bruno Ribeiro
7 months
@zhu_zhaocheng @iclr_conf Just leave a public comment when it opens. Make sure to mention the AC in the comment.
1 reply · 0 retweets · 11 likes
@brunofmr
Bruno Ribeiro
7 months
Thanks @michael_galkin for the invitation. This is indeed an exciting new research direction in KG research! There is still much to do in double equivariant architectures.
@michael_galkin
Michael Galkin
7 months
In our new Medium blog post with @XinyuYuan402 @zhu_zhaocheng and special guest @brunofmr we explore - the theory of inductive reasoning - foundation models for KGs - and explain our recent ULTRA in more detail!
1 reply · 14 retweets · 73 likes
0 replies · 0 retweets · 11 likes
@brunofmr
Bruno Ribeiro
1 month
I will be at #ICLR2024 interested in defining new (more challenging) tasks. I worry that the graph ML community's focus on older benchmark datasets is impeding progress, creating an aversion to new tasks that current methods can't perform
1 reply · 0 retweets · 11 likes
@brunofmr
Bruno Ribeiro
8 months
Excited to be spending the next two days at the workshop on Foundations of Fairness, Privacy and Causality in Graphs! 🚀
@elenadata
Elena Zheleva
8 months
An exciting lineup of speakers at the workshop on Foundations of Fairness, Privacy and Causality in Graphs. Looking forward to hearing from @kaltenburger @bksalimi @berkustun @luchengSRAI @gfarnadi @brunofmr @KrishnaPillutla @yangl1u and others!
0 replies · 0 retweets · 22 likes
0 replies · 1 retweet · 11 likes
@brunofmr
Bruno Ribeiro
6 months
@dereklim_lzh @PetarV_93 For me at least, 1. How symmetries allow us to transfer from training to OOD test zero-shot. E.g., adding an extra equivariance to Knowledge Graphs models allows zero-shot domain transfer (to be presented @ GLFrontiers #NeurIPS2023 )
1 reply · 0 retweets · 11 likes
@brunofmr
Bruno Ribeiro
7 years
If at @netsci2017 this Monday, come check out our workshop Machine Learning in Network Science.
@net_science
NetScience
7 years
The final program for the workshop on Machine Learning in Network Science @netsci2017 is now available. Great lineup of speakers!
0 replies · 11 retweets · 25 likes
0 replies · 4 retweets · 11 likes
@brunofmr
Bruno Ribeiro
20 days
And to whoever says "it is expensive to produce": a 6000 ADA (48GB) is $8k while the same chip in a 4090 (24GB) costs $2k. Is NVidia spending $6k for the extra 24GB?
1 reply · 0 retweets · 10 likes
@brunofmr
Bruno Ribeiro
4 years
@eliasbareinboim @tdietterich 1. 100% with @eliasbareinboim . Planck's principle cannot be the only way forward 2. Real progress: Tasks grounded in real scientific progress, not only immediate industry applications 3. Education: Deep Learning courses must teach causality. So students understand limitations:
2 replies · 4 retweets · 10 likes
@brunofmr
Bruno Ribeiro
3 years
Great step by Xia et al. 🚀 formalizing the connection between neural nets that follow an SCM and whether they work for different causal tasks. The key challenge is creating representations that don’t feel like feature engineering… for GNNs we have clues on what it looks like
@eliasbareinboim
Elias Bareinboim
3 years
If you are curious about when neural nets can perform causal inferences, & more fundamentally, how neural & causal models are related, check out: “The Causal-Neural Connection: Expressiveness, Learnability, and Inference”: (with K Xia, K Lee, Y Bengio)
9 replies · 121 retweets · 553 likes
1 reply · 0 retweets · 10 likes
@brunofmr
Bruno Ribeiro
3 years
👇 Must-read. My experience performing proper experiments in ML papers: reviewers will reject the paper, since it looks strange to them and doesn’t match prior results or how everybody does it.
@robtibshirani
rob tibshirani
3 years
With postdoc Stephen Bates and Trevor Hastie, I have just completed a new paper "Cross-validation: what does it estimate and how well does it do it?"
18 replies · 313 retweets · 1K likes
0 replies · 0 retweets · 9 likes
@brunofmr
Bruno Ribeiro
10 months
@ZakJost There is a theory describing the connection between positional and structural (invariant/equivariant representations). The equivariant (invariant) representation arises when you consider the set of all possible eigenvectors of the node.
1 reply · 1 retweet · 10 likes
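A small demonstration of the ambiguity behind this point (graph and details mine): Laplacian eigenvectors are only defined up to sign (and up to basis for repeated eigenvalues), so positional encodings built from one eigenvector choice are not unique; sign-invariant functions of the eigenvectors, such as outer products, remove the ambiguity.

```python
# Laplacian eigenvectors: the sign is arbitrary, but u u^T is sign-invariant.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # small undirected graph
L = np.diag(A.sum(axis=1)) - A              # graph Laplacian
eigvals, U = np.linalg.eigh(L)

u = U[:, 1]                                            # one "positional" encoding
print(np.allclose(L @ u, eigvals[1] * u))              # u is an eigenvector...
print(np.allclose(L @ (-u), eigvals[1] * (-u)))        # ...and so is -u
print(np.allclose(np.outer(u, u), np.outer(-u, -u)))   # outer product: sign-invariant
```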
@brunofmr
Bruno Ribeiro
5 years
My favorite Herb Simon quote. And it has deep implications on how FB and Twitter must use AI... to keep a potential attention cycle going (). Can we design a better attention market for social media? Any interesting recent papers?
@etzioni
Oren Etzioni
5 years
The wealth of information means a poverty of attention. Wrote Herb Simon in 1971. 1971!
10 replies · 73 retweets · 303 likes
1 reply · 0 retweets · 10 likes
@brunofmr
Bruno Ribeiro
1 year
PSA: It is surprising how many junior researchers don't know this. Your (New) Definitions & Theory statements *should* be self-contained. It is not "useless repetition" to reintroduce all relevant variables in every statement. It is how it is supposed to be.
@shortstein
Thomas Steinke
1 year
A theorem statement should be *self-contained*. It shouldn't look like this. Ideally, you should be able to cut a theorem statement out of your paper and show it to an expert in the area and they should be able to understand it without seeing the rest of your paper. 1/
6 replies · 8 retweets · 109 likes
1 reply · 0 retweets · 10 likes
@brunofmr
Bruno Ribeiro
1 year
In an overly simplistic way, think of ChatGPT as a "conditional GAN" (Generative Adversarial Network). You are the classifier it is trying to fool. I imagine it would have the same failure modes as conditional GANs (mode collapse, etc.).
@brunofmr
Bruno Ribeiro
1 year
ChatGPT is not magic. Its inference-time guidance is adversarially trained to make you think it can do any task (which is its explicit objective function) Spending time "breaking" chatGPT = Free training data for OpenAI The "breaking" happens once... you will never see it again
2 replies · 1 retweet · 20 likes
2 replies · 1 retweet · 10 likes
@brunofmr
Bruno Ribeiro
4 years
Cool work by @xbresson et al.! Hybrid (positional + structural) methods to get GNNs to be more expressive. Awesome to see #eigenvectors mixed with GNNs. Making hybrids inductive is a challenge. Why is it hard to make positional embeddings inductive? Too much we don’t know
@xbresson
Xavier Bresson
4 years
At last, we proposed the use of Laplacian eigenvectors as graph positional encodings to overcome the limitation of low structural expressivity in GCNs @brunofmr . Interestingly, Lap eigs are the graph generalizations of Transformer positional encodings. 5/
2 replies · 3 retweets · 17 likes
1 reply · 1 retweet · 10 likes
@brunofmr
Bruno Ribeiro
2 years
Student visa waits are between 2 and 40 calendar days, which is more reasonable
0 replies · 0 retweets · 10 likes
@brunofmr
Bruno Ribeiro
4 months
Thanks @bksalimi for hosting me in this incredible workshop! Really amazing work that you and your group are doing. I had a lot of fun!
@bksalimi
Babak Salimi
4 months
Had the pleasure of hosting Bruno @brunofmr , Harsh @parikh_harsh_ , and Benjie @benjiewang_cs at @HDSIUCSD for a mini workshop on causal inference from relational data. Special thanks to Bruno for his insightful talk and excellent tutorial on invariant theory and graph learning.
1 reply · 1 retweet · 16 likes
0 replies · 0 retweets · 10 likes
@brunofmr
Bruno Ribeiro
5 years
Consider submitting your work. Its previous editions have been a great forum for exchanging ideas.
@net_science
NetScience
5 years
3rd edition of "Machine Learning in Network Science" @2019NetSci ! Submit your abstracts! Information about the invited speakers will come over the next days @ciro @MartonKarsai @brunofmr @chanmageddon
0 replies · 31 retweets · 54 likes
0 replies · 3 retweets · 9 likes
@brunofmr
Bruno Ribeiro
3 years
What if we unlearn G-invariances? 1. NN starts w/ multiple G-invariances as priors 2. Given data, learning makes the NN sensitive to G-invariances inconsistent with the data. Occam's razor: "As G-invariant as possible but not more" @mouli_sekar poster today 12pm ET, 4pm GMT #ICLR2021 1/ 🧵
@mouli_sekar
Chandra Mouli Sekar
3 years
How to build neural networks that extrapolate from a single training environment? Come visit my poster with @brunofmr at #ICLR2021 (today 12pm ET) where we show that (un)learning symmetries could be key! Paper:
0 replies · 4 retweets · 12 likes
1 reply · 2 retweets · 9 likes
@brunofmr
Bruno Ribeiro
3 years
👇 Deadline extended
@GclrW
4th GCLR workshop @AAAI2024
3 years
Call for papers is open for the workshop on Graphs & more Complex structures for Learning and Reasoning at AAAI-22 @RealAAAI Extended submission deadline: Nov 12, 2021 More details at @ravi_iitm @kerstingAIML @Sriraam_UTD @PhilChodrow @gin_bianconi
0 replies · 8 retweets · 11 likes
0 replies · 0 retweets · 9 likes
@brunofmr
Bruno Ribeiro
3 years
Lots of Graph Representation Learning papers @icmlconf . Unfortunately, these are spread throughout the conference (with only one dedicated session). Why not make a few sessions on Invariances in Machine Learning and put the GRL papers there?
0 replies · 0 retweets · 9 likes
@brunofmr
Bruno Ribeiro
1 month
2. 🧐 What is the statistical connection between sequence methods (like transformers), matrix and tensor factorizations, and GNNs? Very few understand
1 reply · 1 retweet · 9 likes
@brunofmr
Bruno Ribeiro
10 months
@ZakJost This means equivariant/invariant representations for eigenvectors must come from set representations. There are a few papers on procedures to get set representations from eigenvectors without having to resample: E.g.:
1 reply · 1 retweet · 9 likes
@brunofmr
Bruno Ribeiro
3 years
Today’s tutorial w/ @mouli_sekar (text 📜 +videos 📺) is a step-by-step guide to G-invariant neural networks, #eigenvectors + invariant subspaces + transformation #groups + Reynolds operator We welcome any feedback
0 replies · 0 retweets · 9 likes
@brunofmr
Bruno Ribeiro
6 months
I will be at the GLFrontiers workshop #NeurIPS2023 if anyone wants to chat about this. I can imagine a graph ML paper rejected in the future because a reviewer says "equivariances are irrelevant for graphs since there are no symmetries on real-world graphs, reject".
0 replies · 2 retweets · 9 likes