New paper alert:
Self-Supervised Learning of Graph Neural Networks: A Unified Review
A comprehensive code library is coming in a few weeks. Stay tuned...
Two ICML papers on GNN explainability and generation got accepted, with code available:
On Explainability of Graph Neural Networks via Subgraph Explorations
GraphDF: A Discrete Flow Model for Molecular Graph Generation
1/2
NeurIPS 2021 Datasets and Benchmarks Track, Round 2 AC recommendation. It is very hard to see any logic here:
"While all reviewers agree on acceptance, this is still a borderline paper, tending to reject, due to the lack of enthusiasm from the reviewers."
DIG: A research-oriented library that includes unified and extensible implementations of algorithms for (1) graph generation, (2) self-supervised learning on graphs, (3) explainability of graph neural networks, and (4) deep learning on 3D graphs.
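A hedged usage sketch, assuming the four areas map onto subpackages as below (the exact import paths and class-level APIs should be verified against the released DIG code):

```python
# A sketch only: the subpackage names below are inferred from the four areas
# listed above; check the released DIG library for the exact API.
from dig import ggraph       # (1) graph generation
from dig import sslgraph     # (2) self-supervised learning on graphs
from dig import xgraph       # (3) explainability of graph neural networks
from dig import threedgraph  # (4) deep learning on 3D graphs
```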
🔬 An in-depth yet intuitive discussion on symmetry, as well as explainability, out-of-distribution generalization, large language models, and uncertainty.
📖 Access categorized lists of resources to enhance learning and education.
🌐 Website: .
Interested in GNN and molecular property prediction? We have been working on a major project with many talented students. The result is a comprehensive ML/DL software package for graphs, sequences, and molecular property prediction, combining new methods with turnkey software.
Congratulations to the 153 individuals elected to the AIMBE College of Fellows Class of 2022! We're so glad to welcome you to the AIMBE community. You can browse the full list of the new cohort, here:
The Department of Computer Science and Engineering at Texas A&M University invites applications for two full-time tenure-track or tenured positions in the area of Data Science.
Highlights:
🌟 A 263-page paper by 63 authors from 14 institutions, including 41 figures and 36 tables.
🔍 A spectrum of scales: subatomic (wavefunctions, electron density), atomic (molecules, proteins, materials, interactions), and macro (fluids, climate, subsurface).
The reason is the exponentially growing receptive field, which squashes an exponentially growing amount of information into fixed-length vectors.
If you want to pass a message over a distance of K, the cost is O(degree^K) messages squashed into a single vector.
(2/n)
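A toy back-of-the-envelope illustration of that growth (plain Python; the degree and distance values are made up purely to show the rate):

```python
# Over-squashing in a nutshell: with node degree d, the number of length-K
# message paths into a node grows like d^K, yet message passing compresses
# all of them into one fixed-length vector.
for d, K in [(3, 4), (3, 8), (5, 8)]:
    n_messages = d ** K  # O(degree^K) messages squashed into a single vector
    print(f"degree={d}, distance K={K}: ~{n_messages:,} messages -> 1 vector")
```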
@NeurIPSConf
@kchonyc
Will reviewer-AC discussions be visible to authors? If yes, will authors be able to respond to those discussions if necessary? So many authors have worked very hard on their rebuttals, but many reviewers did not even click the ACK box.
@jure
@BaiduResearch
@DeepMind
@Synerise
Thank you for organizing this great event. We are excited to be one of the few teams from academia on the winners' list. Our tool is available at , which also includes the code and tools used to achieve #1 on the AI Cures challenge.
NeurIPS: I have a reviewer who raised a few concerns a few days ago. Most of these concerns were caused by misunderstandings. Once we pointed this out in our response, the reviewer lowered their score without a single word of reply. What can we do?
The Department of Computer Science and Engineering at Texas A&M University invites applications for multiple full-time tenured or tenure-track positions. Open rank, open area.
@chaitjo
I think the major challenge of applying GDL to physics and science problems is how to encode physics priors, like symmetries, into the model in a flexible and efficient manner. Of course, the GNN community might have a different focus/challenge.
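To make that concrete, here is a minimal sketch (plain NumPy; the function name and values are illustrative, not from any library) of encoding one such prior, rotation invariance, by feeding a model pairwise distances instead of raw coordinates:

```python
import numpy as np

# Encode a symmetry prior explicitly: pairwise distances are invariant to any
# rotation (and translation), so a model built on them respects the symmetry
# by construction instead of having to learn it from data.
def invariant_features(pos):
    """pos: (N, 3) coordinates -> (N, N) rotation-invariant pairwise distances."""
    diff = pos[:, None, :] - pos[None, :, :]  # (N, N, 3) displacement vectors
    return np.linalg.norm(diff, axis=-1)      # (N, N) pairwise distances

rng = np.random.default_rng(0)
pos = rng.normal(size=(5, 3))
# Random rotation: the Q factor of a Gaussian matrix is orthogonal.
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
assert np.allclose(invariant_features(pos), invariant_features(pos @ R.T))
```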
@ylecun
A small batch size makes learning quick, but it can be noisy. Quick science might be good, but it is too noisy. If I have to make a binary choice between quick and slow science, I will choose slow.
@beenwrekt
One cause is the large number of submissions, which leads to a large number of (and thus potentially noisy) reviewers, ACs, etc. How about limiting the number of submissions under each name?
The DOJ just ended the China Initiative in favor of a "broader approach" to countering threats, including foreign exploitation of US science. I write in
@WIRED
why the fixation on borders and national interest obscures fundamental ethical concerns and incurs a much more profound loss for all:
@ylecun
@ericxing
@YiMaTweets
There is randomness in any review process, and limiting the number of accepted papers may only increase that randomness. I found that many of the criticisms we received during peer review were constructive, and addressing them led to better-quality work.
Nice. Although I am not aware of a formal paper/document on this, I have found some discussions of, and hints at, the relations. I compiled the lecture notes used in my DL/ML class here:
@chaitjo
If there is a known prior, you had better encode it explicitly rather than letting data and optimization figure it out. If optimization worked perfectly, would we need ResNet?
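A minimal sketch of the ResNet point (plain PyTorch, with arbitrary layer sizes; an illustration, not any paper's implementation): a residual block hard-codes the prior that each layer is a small correction to the identity map, instead of asking optimization to rediscover the identity.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """y = x + f(x): the identity prior is encoded explicitly."""
    def __init__(self, dim: int):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.f(x)  # small learned correction to the identity

x = torch.randn(4, 16)
print(ResidualBlock(16)(x).shape)  # torch.Size([4, 16])
```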
@kchonyc
@NeurIPSConf
Is there any reason for excluding authors? The author-reviewer discussions are not mediated by ACs/SACs, and thus many reviewers did not even read the rebuttals. It would be beneficial to allow author participation in the reviewer-AC discussions.
@peter_richtarik
Very true, and the root cause is the large number of submissions. Either group leaders should do their own QC, or conferences should put a limit on the number of submissions each author can make.
@AnimaAnandkumar
@jo_brandstetter
@rejuvyesh
@Mniepert
Not a surprise. We have worked very extensively on all three and have tried to optimize everything we could. One thing we observed was that conclusions can easily be overturned if models are not optimized properly.