Jürgen Schmidhuber

@SchmidhuberAI

152K Followers · 695 Following · 49 Media · 100 Statuses

Invented principles of meta-learning (1987), GANs (1990), Transformers (1991), very deep learning (1991), etc. Our AI is used many billions of times every day.

Joined August 2019
@SchmidhuberAI
Jürgen Schmidhuber
4 months
DeepSeek [1] uses elements of the 2015 reinforcement learning prompt engineer [2] and its 2018 refinement [3], which collapses the RL machine and world model of [2] into a single net through the neural net distillation procedure of 1991 [4]: a distilled chain of thought system.
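To make the "distillation"/"collapsing" idea concrete, here is a minimal sketch of the general procedure, assuming nothing beyond the basic recipe (a student net trained on a teacher net's outputs, so the teacher's behavior is compressed into the student). Toy sizes; not the cited papers' or DeepSeek's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Fixed "teacher": a random two-layer net whose input-output behavior we distill.
W1, W2 = rng.normal(size=(16, 32)), rng.normal(size=(32, 4))
def teacher(x):
    return softmax(np.tanh(x @ W1) @ W2)

# "Student": a single linear layer trained to match the teacher's soft targets.
S = np.zeros((16, 4))
lr = 0.5
for step in range(2000):
    x = rng.normal(size=(64, 16))   # queries to the teacher
    p = teacher(x)                  # teacher's soft targets
    q = softmax(x @ S)              # student's predictions
    # Gradient of cross-entropy H(p, q) w.r.t. the student's logits is (q - p).
    S -= lr * x.T @ (q - p) / len(x)

x = rng.normal(size=(256, 16))
print("mean abs teacher-student gap:", np.abs(teacher(x) - softmax(x @ S)).mean())
```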
@SchmidhuberAI
Jürgen Schmidhuber
8 months
The #NobelPrizeinPhysics2024 for Hopfield & Hinton rewards plagiarism and incorrect attribution in computer science. It's mostly about Amari's "Hopfield network" and the "Boltzmann Machine." 1. The Lenz-Ising recurrent architecture with neuron-like elements was published in
@SchmidhuberAI
Jürgen Schmidhuber
2 years
Thanks @elonmusk for your generous hyperbole! Admittedly, however, I didn’t invent sliced bread, just #GenerativeAI and things like that: And of course my team is standing on the shoulders of giants: Original tweet by @elonmusk:
@SchmidhuberAI
Jürgen Schmidhuber
6 months
The #NobelPrize in Physics 2024 for Hopfield & Hinton turns out to be a Nobel Prize for plagiarism. They republished methodologies developed in #Ukraine and #Japan by Ivakhnenko and Amari in the 1960s & 1970s, as well as other techniques, without citing the original inventors.
@SchmidhuberAI
Jürgen Schmidhuber
1 year
The GOAT of tennis @DjokerNole said: "35 is the new 25." I say: "60 is the new 35." AI research has kept me strong and healthy. AI could work wonders for you, too!
@SchmidhuberAI
Jürgen Schmidhuber
3 years
LeCun's "5 best ideas 2012-22" are mostly from my lab, and older: 1. Self-supervised 1991 RNN stack; 2. ResNet = open-gated 2015 Highway Net; 3&4. Key/value-based fast weights 1991; 5. Transformers with linearized self-attention 1991. (Also GAN 1990.) Details:
@SchmidhuberAI
Jürgen Schmidhuber
5 years
Quarter-century anniversary: 25 years ago we received a message from N(eur)IPS 1995 informing us that our submission on LSTM got rejected. (Don’t worry about rejections. They mean little.) #NeurIPS2020
@SchmidhuberAI
Jürgen Schmidhuber
6 years
In 2020, we will celebrate that many of the basic ideas behind the Deep Learning Revolution were published three decades ago within fewer than 12 months in our "Annus Mirabilis" 1990-1991:
@SchmidhuberAI
Jürgen Schmidhuber
1 year
Q*? 2015: reinforcement learning prompt engineer in Sec. 5.3 of “Learning to Think.” A controller neural network C learns to send prompt sequences into a world model M (e.g., a foundation model) trained on, say, videos of actors. C also learns to
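As a toy illustration of the controller/world-model loop sketched above: a tiny controller C learns by policy gradient (REINFORCE) which prompt tokens to send to a frozen model M so that M's answer earns high reward. Here M and the reward are invented stand-ins (a stub function rather than a foundation model), so this shows only the shape of the idea, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, PROMPT_LEN = 8, 3

def world_model(prompt):            # frozen M: maps a prompt to an "answer"
    return np.sin(np.sum(prompt))   # stub standing in for a real model

def reward(answer):                 # task-specific reward on M's answer (a stub)
    return -abs(answer - 0.9)

# Controller C: an independent categorical distribution per prompt position.
logits = np.zeros((PROMPT_LEN, VOCAB))
lr, baseline = 0.1, 0.0
for step in range(3000):
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    prompt = np.array([rng.choice(VOCAB, p=probs[i]) for i in range(PROMPT_LEN)])
    r = reward(world_model(prompt))
    baseline = 0.9 * baseline + 0.1 * r       # running baseline reduces variance
    for i, tok in enumerate(prompt):
        grad = -probs[i]                      # d log p(tok) / d logits[i]
        grad[tok] += 1.0
        logits[i] += lr * (r - baseline) * grad

best = logits.argmax(axis=1)
print("learned prompt:", best, "reward:", reward(world_model(best)))
```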
@SchmidhuberAI
Jürgen Schmidhuber
6 months
Re: the (true) story of the "attention" operator that introduced the Transformer, by @karpathy. Not quite! The nomenclature has changed, but in 1991, there was already what is now called an unnormalized linear Transformer with "linearized self-attention" [TR5-6]. See (Eq.
@karpathy
Andrej Karpathy
6 months
The (true) story of development and inspiration behind the "attention" operator, the one in "Attention is All you Need" that introduced the Transformer. From personal email correspondence with the author @DBahdanau ~2 years ago, published here and now (with permission) following
@SchmidhuberAI
Jürgen Schmidhuber
3 months
Congratulations to @RichardSSutton and Andy Barto on their Turing award!
@SchmidhuberAI
Jürgen Schmidhuber
2 years
Meta used my 1991 ideas to train LLaMA 2, but made it insinuate that I “have been involved in harmful activities” and have not made “positive contributions to society, such as pioneers in their field.” @Meta & LLaMA promoter @ylecun should correct this ASAP. See
@SchmidhuberAI
Jürgen Schmidhuber
2 years
As 2022 ends: 1/2 century ago, Shun-Ichi Amari published a learning recurrent neural network (1972) much later called the Hopfield network (based on the original, century-old, non-learning Lenz-Ising recurrent network architecture, 1920-25)
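For readers who have not seen this architecture: a minimal sketch of such a recurrent associative memory, assuming the textbook formulation (Hebbian outer-product storage, sign-threshold updates) with toy sizes; not Amari's original notation.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100
patterns = rng.choice([-1, 1], size=(3, N))      # binary memories to store

W = sum(np.outer(p, p) for p in patterns) / N    # Hebbian outer-product weights
np.fill_diagonal(W, 0)                           # no self-connections

state = patterns[0].copy()
flip = rng.choice(N, 20, replace=False)          # corrupt 20 of 100 bits
state[flip] *= -1

for _ in range(5):                               # recurrent sign updates
    state = np.where(W @ state >= 0, 1, -1)

print("bits matching the stored pattern:", (state == patterns[0]).mean())
```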
@SchmidhuberAI
Jürgen Schmidhuber
1 year
So @ylecun: "I've been advocating for deep learning architecture capable of planning since 2016" vs me: "I've been publishing deep learning architectures capable of planning since 1990." I guess in 2016 @ylecun also picked up the torch. (References attached)
@SchmidhuberAI
Jürgen Schmidhuber
2 years
Regarding recent work on more biologically plausible "forward-only" backprop-like methods: in 2021, our VSML net already meta-learned backprop-like learning algorithms running solely in forward-mode - no hardwired derivative calculation!
@SchmidhuberAI
Jürgen Schmidhuber
3 years
Train a weight matrix to encode the backpropagation learning algorithm itself. Run it on the neural net itself. Meta-learn to improve it! Generalizes to datasets outside of the meta-training distribution. v4 2022 with @LouisKirschAI
@SchmidhuberAI
Jürgen Schmidhuber
2 years
Machine learning is the science of credit assignment. My new survey (also under arXiv:2212.11279) credits the pioneers of deep learning and modern AI (supplementing my award-winning 2015 deep learning survey): P.S. Happy Holidays!
@SchmidhuberAI
Jürgen Schmidhuber
2 years
Silly AI regulation hype. One cannot regulate AI research, just like one cannot regulate math. One can regulate applications of AI in finance, cars, healthcare. Such fields already have continually adapting regulatory frameworks in place. Don’t stifle the open-source movement!
@SchmidhuberAI
Jürgen Schmidhuber
3 years
30 years ago: Transformers with linearized self-attention in NECO 1992, equivalent to fast weight programmers (apart from normalization), separating storage and control. Key/value was called FROM/TO. The attention terminology was introduced at ICANN 1993
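The stated equivalence is easy to verify numerically. A minimal sketch: unnormalized linear self-attention produces the same outputs as a fast weight programmer that adds the outer product of value and key to a fast weight matrix at every step, then applies that matrix to the query (toy dimensions, illustrative code only).

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 6, 4
K, V, Q = rng.normal(size=(3, T, d))      # per-step keys, values, queries

# View 1: unnormalized linear self-attention, causally masked.
attn = np.array([sum(V[i] * (K[i] @ Q[t]) for i in range(t + 1))
                 for t in range(T)])

# View 2: fast weight programming, W_t = W_{t-1} + v_t k_t^T, y_t = W_t q_t.
W = np.zeros((d, d))
fast = []
for t in range(T):
    W += np.outer(V[t], K[t])             # "program" the fast weights
    fast.append(W @ Q[t])                 # apply them to the current query
fast = np.array(fast)

print("max difference:", np.abs(attn - fast).max())   # ~1e-15: identical
```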
@SchmidhuberAI
Jürgen Schmidhuber
3 years
25th anniversary of the LSTM at #NeurIPS2021. reVIeWeR 2 - who rejected it from NeurIPS1995 - was thankfully MIA. The subsequent journal publication in Neural Computation has become the most cited neural network paper of the 20th century:
@SchmidhuberAI
Jürgen Schmidhuber
3 years
Lecun (@ylecun)’s 2022 paper on Autonomous Machine Intelligence rehashes but doesn’t cite essential work of 1990-2015. We’ve already published his “main original contributions:” learning subgoals, predictable abstract representations, multiple time scales…
@SchmidhuberAI
Jürgen Schmidhuber
1 year
How 3 Turing awardees republished key methods and ideas whose creators they failed to credit. More than a dozen concrete AI priority disputes under
@SchmidhuberAI
Jürgen Schmidhuber
3 years
Yesterday @nnaisense released EvoTorch, a state-of-the-art evolutionary algorithm library built on @PyTorch, with GPU-acceleration and easy training on huge compute clusters using @raydistributed. (1/2)
@SchmidhuberAI
Jürgen Schmidhuber
1 year
Best paper award for "Mindstorms in Natural Language-Based Societies of Mind" at #NeurIPS2023 WS Ro-FoMo. Up to 129 foundation models collectively solve practical problems by interviewing each other in monarchical or democratic societies
@SchmidhuberAI
Jürgen Schmidhuber
2 years
Unlike diffusion models, Bayesian Flow Networks operate on the parameters of data distributions, rather than on noisy versions of the data itself. I think this paper by Alex Graves et al. will be influential.
@nnaisense
NNAISENSE
2 years
📣 BFNs: A new class of generative models that:
- brings together the strengths of Bayesian inference and deep learning
- trains on continuous, discretized or discrete data with a simple end-to-end loss
- places no restrictions on the network architecture
@SchmidhuberAI
Jürgen Schmidhuber
4 months
It has been said that AI is the new oil, the new electricity, and the new internet. And the once nimble and highly profitable software companies (MSFT, GOOG, ...) became like utilities, investing in nuclear energy, among other things, to run AI data centres. Open Source and the
@SchmidhuberAI
Jürgen Schmidhuber
6 months
Yet another award for plagiarism. Of all the papers that could have won the #NeurIPS2024 Test of Time Award, it had to be the #NeurIPS 2014 paper on "Generative Adversarial Networks" [GAN1]. This is the notorious paper that republished the 1990 principle of Artificial Curiosity
@SchmidhuberAI
Jürgen Schmidhuber
2 years
AI boom v AI doom: since the 1970s, I have told AI doomers that in the end all will be good. E.g., 2012 TEDx talk:  “Don’t think of us versus them: us, the humans, v these future super robots. Think of yourself, and humanity in general, as a small stepping
@SchmidhuberAI
Jürgen Schmidhuber
5 years
Congrats to the awesome Sepp Hochreiter for the well-deserved 2021 IEEE Neural Networks Pioneer Award! It was my great honor to be Sepp's nominator.
@SchmidhuberAI
Jürgen Schmidhuber
1 year
In 2016, at an AI conference in NYC, I explained artificial consciousness, world models, predictive coding, and science as data compression in less than 10 minutes. I happened to be in town, walked in without being announced, and ended up on their panel. It was great fun.
@SchmidhuberAI
Jürgen Schmidhuber
4 years
Kunihiko Fukushima was awarded the 2021 Bower Award for his enormous contributions to deep learning, particularly his highly influential convolutional neural network architecture. My laudation of Kunihiko at the 2021 award ceremony is on YouTube:
@SchmidhuberAI
Jürgen Schmidhuber
4 years
The most cited neural nets all build on our work: LSTM. ResNet (open-gated Highway Net). AlexNet & VGG (like our DanNet). GAN (an instance of our Artificial Curiosity). Linear Transformers (like our Fast Weight Programmers).
@SchmidhuberAI
Jürgen Schmidhuber
3 months
My recent TED talk has just been published
@SchmidhuberAI
Jürgen Schmidhuber
3 months
@geoffreyhinton Hinton should be stripped of his awards for plagiarism and misattribution:
[Quoted tweet: the #NobelPrize in Physics 2024 plagiarism tweet above.]
@SchmidhuberAI
Jürgen Schmidhuber
3 years
Now on YouTube: “Modern Artificial Intelligence 1980s-2021 and Beyond.” My talk at AIJ 2020 (Moscow), also presented at NVIDIA GTC 2021 (US), ML Summit 2021 (Beijing), Big Data and AI (Toronto), IFIC (China), AI Boost (Lithuania), ICONIP 2021 (Jakarta)
@SchmidhuberAI
Jürgen Schmidhuber
2 years
In 2010, we used Jensen Huang's @nvidia GPUs to show that deep feedforward nets can be trained by plain backprop without any unsupervised pretraining. In 2011, our DanNet was the first superhuman CNN. Today, compute is 100+ times cheaper, and NVIDIA 100+ times more valuable.
@SchmidhuberAI
Jürgen Schmidhuber
2 months
What can we learn from history? The FACTS: a novel Structured State-Space Model with a factored, thinking memory [1]. Great for forecasting, video modeling, and autonomous systems; at #ICLR2025. Fast, robust, parallelisable. [1] Li Nanbo, Firas Laakom, Yucheng Xu, Wenyi Wang, J.
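For background only: the generic linear state-space recurrence that structured state-space models build on is h_t = A h_{t-1} + B x_t, y_t = C h_t. The sketch below shows that scan with a stable diagonal A; it is not the FACTS architecture itself, whose factored memory and details are in [1].

```python
import numpy as np

rng = np.random.default_rng(0)
d_state, d_in, T = 8, 3, 20
A = np.diag(rng.uniform(0.5, 0.99, d_state))   # stable diagonal ("structured") A
B = rng.normal(size=(d_state, d_in))
C = rng.normal(size=(d_in, d_state))

h = np.zeros(d_state)
ys = []
for x in rng.normal(size=(T, d_in)):
    h = A @ h + B @ x                          # update the recurrent memory
    ys.append(C @ h)                           # read out
print(np.array(ys).shape)                      # (20, 3)
```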
@SchmidhuberAI
Jürgen Schmidhuber
1 year
Our #GPTSwarm models Large Language Model Agents and swarms thereof as computational graphs reflecting the hierarchical nature of intelligence. Graph optimization automatically improves nodes and edges.
@SchmidhuberAI
Jürgen Schmidhuber
1 month
My first work on metalearning or learning to learn came out in 1987 [1][2]. Back then nobody was interested. Today, compute is 10 million times cheaper, and metalearning is a hot topic 🙂 It’s fitting that my 100th journal publication [100] is about metalearning, too. [100]
@SchmidhuberAI
Jürgen Schmidhuber
5 years
Stop crediting the wrong people for inventions made by others. At least in science, the facts will always win in the end. As long as the facts have not yet won, it is not yet the end. No fancy award can ever change that. #selfcorrectingscience #plagiarism
@SchmidhuberAI
Jürgen Schmidhuber
4 years
26 March 1991: Neural nets learn to program neural nets with fast weights - like today’s Transformer variants. Deep learning through additive weight changes. 2021: New work with Imanol & Kazuki. Also: fast weights for metalearning (1992-) and RL (2005-)
@SchmidhuberAI
Jürgen Schmidhuber
3 months
Do you like RL and math? Our collaboration, IDSIA-KAUST-NNAISENSE, has the most detailed exploration of the convergence and stability of modern RL frameworks like Upside-Down RL, Online Decision Transformers, and Goal-Conditioned Supervised Learning 
@SchmidhuberAI
Jürgen Schmidhuber
5 years
The 2010s: Our Decade of Deep Learning / Outlook on the 2020s (also addressing privacy and data markets).
@SchmidhuberAI
Jürgen Schmidhuber
1 year
2010 foundations of recent $NVDA stock market frenzy: our simple but deep neural net on @nvidia GPUs broke MNIST. Things are changing fast. Just 7 months ago, I tweeted: compute is 100x cheaper, $NVDA 100x more valuable. Today, replace "100" by "250."
@SchmidhuberAI
Jürgen Schmidhuber
1 year
Counter-intuitive aspects of text-to-image diffusion models: only a few steps require cross-attention; most don’t. Skipping the extras gives a great speed-up! Many stars on GitHub :-)
@SchmidhuberAI
Jürgen Schmidhuber
3 years
1/3: “On the binding problem in artificial neural networks” with Klaus Greff and @vansteenkiste_s. An important paper from my lab that is of great relevance to the ongoing debate on symbolic reasoning and compositional generalization in neural networks:
@SchmidhuberAI
Jürgen Schmidhuber
4 years
375th birthday of Leibniz, founder of computer science (just published in FAZ, 17/5/2021): 1st machine with a memory (1673); 1st to perform all arithmetic operations. Principles of binary computers (1679). Algebra of Thought (1686). Calculemus!
@SchmidhuberAI
Jürgen Schmidhuber
4 months
To be clear, I'm very impressed by #DeepSeek's achievement of bringing life to the dreams of the past. Their open source strategy has shown that the most powerful large-scale AI systems can be something for the masses and not just for the privileged few. It's a pleasure to see.
@SchmidhuberAI
Jürgen Schmidhuber
2 months
What if AI could write creative stories & insightful #DeepResearch reports like an expert? Our heterogeneous recursive planning [1] enables this via adaptive subgoals [2] & dynamic execution. Agents dynamically replan & weave retrieval, reasoning, & composition mid-flow. Explore
@SchmidhuberAI
Jürgen Schmidhuber
4 years
I was invited to write a piece about Alan M. Turing. While he made significant contributions to computer science, their importance and impact are often greatly exaggerated - at the expense of the field's pioneers. It's not Turing's fault, though.
@SchmidhuberAI
Jürgen Schmidhuber
2 years
Instead of trying to defend his paper on OpenReview (where he posted it), @ylecun made misleading statements about me in popular science venues. I am debunking his recent allegations in the new Addendum III of my critique
@SchmidhuberAI
Jürgen Schmidhuber
2 years
2023: 20th anniversary of the Gödel Machine, a mathematically optimal, self-referential, meta-learning, universal problem solver making provably optimal self-improvements by rewriting its own computer code 
@SchmidhuberAI
Jürgen Schmidhuber
5 years
GANs are special cases of Artificial Curiosity (1990) and also closely related to Predictability Minimization (1991). Now published in Neural Networks 127:58-66, 2020. #selfcorrectingscience #plagiarism. Open Access: Preprint:
@SchmidhuberAI
Jürgen Schmidhuber
4 months
1995-2025: The Decline of Germany & Japan vs US & China. Can All-Purpose Robots Fuel a Comeback? In 1995, in terms of nominal GDP, a combined Germany and Japan were almost 1:1 economically with a combined USA and China. Only 3 decades later, this ratio is now down to 1:5!
@SchmidhuberAI
Jürgen Schmidhuber
2 years
Re: more biologically plausible "forward-only" deep learning. 1/3 of a century ago, my "neural economy" was local in space and time (backprop isn't). Competing neurons pay "weight substance" to neurons that activate them (Neural Bucket Brigade, 1989)
@SchmidhuberAI
Jürgen Schmidhuber
2 years
30 years ago in a journal: "distilling" a recurrent neural network (RNN) into another RNN. I called it “collapsing” in Neural Computation 4(2):234-242 (1992), Sec. 4. Greatly facilitated deep learning with 20+ virtual layers. The concept has become popular.
@SchmidhuberAI
Jürgen Schmidhuber
6 months
@NobelPrize Sorry to rain on your parade. Sadly, the Nobel Prize in Physics 2024 for Hopfield & Hinton turns out to be a Nobel Prize for plagiarism. They republished methodologies developed in #Ukraine and #Japan by Ivakhnenko and Amari in the 1960s & 1970s, as well as other techniques,
@SchmidhuberAI
Jürgen Schmidhuber
3 years
With Kazuki Irie and @robert_csordas at #ICML2022: any linear layer trained by gradient descent is a key-value/attention memory storing its entire training experience. This dual form helps us visualize how neural nets use training patterns at test time
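The dual form is easy to check numerically: train a zero-initialized linear layer by plain SGD, and its output on any test input equals an unnormalized dot-product attention whose keys are the training inputs and whose values are the (learning-rate-scaled, negated) error signals. A minimal sketch with a toy regression loss:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_in, d_out, lr = 32, 5, 2, 0.01
X = rng.normal(size=(n, d_in))
Y = rng.normal(size=(n, d_out))

W = np.zeros((d_out, d_in))
keys, values = [], []
for x, y in zip(X, Y):                  # plain SGD, one example per step
    g = W @ x - y                       # error signal dL/d(Wx) for 0.5*||Wx - y||^2
    keys.append(x)
    values.append(-lr * g)
    W += np.outer(values[-1], x)        # W <- W - lr * g x^T

x_test = rng.normal(size=d_in)
primal = W @ x_test                                          # the trained layer
dual = sum(v * (k @ x_test) for k, v in zip(keys, values))   # attention form
print("max difference:", np.abs(primal - dual).max())        # ~1e-16
```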
@SchmidhuberAI
Jürgen Schmidhuber
8 months
I am hiring 3 postdocs at #KAUST to develop an Artificial Scientist for discovering novel chemical materials for carbon capture. Join this project with @FaccioAI at the intersection of RL and Material Science. Learn more and apply:
@SchmidhuberAI
Jürgen Schmidhuber
4 years
KAUST (17 full papers at #NeurIPS2021) and its environment are now offering huge resources to advance both fundamental and applied AI research. We are hiring outstanding professors, postdocs, and PhD students:
@SchmidhuberAI
Jürgen Schmidhuber
2 years
KAUST, the university with the highest impact per faculty, has 24 papers at #NeurIPS2022. Visit Booth #415 of the @AI_KAUST Initiative! We are hiring on all levels.
@SchmidhuberAI
Jürgen Schmidhuber
4 years
1/3 century anniversary of thesis on #metalearning (1987). For its cover I drew a robot that bootstraps itself. 1992-: gradient descent-based neural metalearning. 1994-: meta-RL with self-modifying policies. 2003-: optimal Gödel Machine. 2020: new stuff!
@SchmidhuberAI
Jürgen Schmidhuber
2 years
We address the two important things in science: (A) Finding answers to given questions, and (B) Coming up with good questions. Learning one abstract bit at a time through self-invented (thought) experiments encoded as neural networks
@SchmidhuberAI
Jürgen Schmidhuber
6 months
Please check out a dozen 2024 conference papers with my awesome students, postdocs, and collaborators: 3 papers at NeurIPS, 5 at ICML, others at CVPR, ICLR, ICRA: 288. R. Csordas, P. Piekos, K. Irie, J. Schmidhuber. SwitchHead: Accelerating Transformers with Mixture-of-Experts.
@SchmidhuberAI
Jürgen Schmidhuber
6 months
Re: 2024 #NobelPrize Debacle. The President of the #NeurIPS Foundation (overseeing the ongoing #NeurIPS2024 conference) was a student of Hopfield, and a co-author of Hinton (1985) [BM]. He is also known for sending "amicus curiae" ("friend of the court") letters to award.
[Quoted tweet: the #NobelPrize in Physics 2024 plagiarism tweet above.]
@SchmidhuberAI
Jürgen Schmidhuber
4 years
2021: Directing AI Initiative at #KAUST, university with highest impact per faculty. Keeping current affiliations. Hiring on all levels. Great research conditions. Photographed dolphin on a snorkeling trip off the coast of KAUST
@SchmidhuberAI
Jürgen Schmidhuber
4 years
30-year anniversary of #Planning & #ReinforcementLearning with recurrent #WorldModels and #ArtificialCuriosity (1990). Also: high-dimensional reward signals, deterministic policy gradients, #GAN principle, and even simple #Consciousness & #SelfAwareness
@SchmidhuberAI
Jürgen Schmidhuber
4 years
In 2001, I discovered how to make very stable rings from only rectangular LEGO bricks. Natural tilting angles between LEGO pieces define ring diameters. The resulting low-complexity artworks reflect the formal theory of beauty/creativity/curiosity:
@SchmidhuberAI
Jürgen Schmidhuber
4 years
90th anniversary of Kurt Gödel's 1931 paper which laid the foundations of theoretical computer science, identifying fundamental limitations of algorithmic theorem proving, computing, AI, logics, and math itself (just published in FAZ @faznet 16/6/2021)
@SchmidhuberAI
Jürgen Schmidhuber
8 days
AGI? One day, but not yet. The only AI that works well right now is the one behind the screen [12-17]. But passing the Turing Test [9] behind a screen is easy compared to Real AI for real robots in the real world. No current AI-driven robot could be certified as a plumber
@SchmidhuberAI
Jürgen Schmidhuber
3 months
I am hiring postdocs at #KAUST to develop an Artificial Scientist for the discovery of novel chemical materials to save the climate by capturing carbon dioxide. Join this project at the intersection of RL and Material Science:
@SchmidhuberAI
Jürgen Schmidhuber
4 years
10-year anniversary: Deep Reinforcement Learning with Policy Gradients for LSTM. Applications: @DeepMind’s Starcraft player; @OpenAI's dextrous robot hand & Dota player - @BillGates called this a huge milestone in advancing AI #deeplearning
@SchmidhuberAI
Jürgen Schmidhuber
5 years
10-year anniversary of our deep multilayer perceptrons trained by plain gradient descent on GPU, outperforming all previous methods on a famous benchmark. This deep learning revolution quickly spread from Europe to North America and Asia. #deeplearning
@SchmidhuberAI
Jürgen Schmidhuber
5 years
15-year anniversary: first paper with "learn deep" in the title (2005). On deep #ReinforcementLearning & #NeuroEvolution solving problems of depth 1000 and more. 1st author: Faustino Gomez! #deeplearning #deepRL
@SchmidhuberAI
Jürgen Schmidhuber
4 years
3 decades of artificial curiosity & creativity. Our artificial scientists not only answer given questions but also invent new questions
@SchmidhuberAI
Jürgen Schmidhuber
7 months
Some people have lost their titles or jobs due to plagiarism, e.g., Harvard's former president. But after this #NobelPrizeinPhysics2024, how can advisors now continue to tell their students that they should avoid plagiarism at all costs? Of course, it is well known that.
@SchmidhuberAI
Jürgen Schmidhuber
3 months
During the Oxford-style debate at the "Interpreting Europe Conference 2025," I persuaded many professional interpreters to reject the motion: "AI-powered interpretation will never replace human interpretation." Before the debate, the audience was 60-40 in favor of the motion;
@SchmidhuberAI
Jürgen Schmidhuber
4 years
2021: 10-year anniversary of deep CNN revolution through DanNet (2011), named after my outstanding postdoc Dan Ciresan. Won 4 computer vision contests in a row before other CNNs joined the party. 1st superhuman result in 2011. Now everybody is using this
@SchmidhuberAI
Jürgen Schmidhuber
6 months
@NobelPrize Sadly, the #NobelPrize in Physics 2024 for Hopfield & Hinton is a Nobel Prize for plagiarism. They republished methodologies developed in #Ukraine and #Japan by Ivakhnenko and Amari in the 1960s & 1970s, as well as other techniques, without citing the original papers. Even in.
@SchmidhuberAI
Jürgen Schmidhuber
3 months
@RichardSSutton Some background to reinforcement learning in Sec. 17 of the "Annotated History of Modern AI and Deep Learning:"
@SchmidhuberAI
Jürgen Schmidhuber
3 years
@hardmaru This was accepted at ICML 2022. Thanks to Kazuki Irie, Imanol Schlag, and Róbert Csordás!
@SchmidhuberAI
Jürgen Schmidhuber
6 months
@goodfellow_ian As mentioned in Sec. B1 of reference [DLP]: The priority dispute above was picked up by the popular press, e.g., Bloomberg [AV1], after a particularly notable encounter between me and Bengio's student Dr. @goodfellow_ian at a N(eur)IPS conference. He gave a talk on GANs,
@SchmidhuberAI
Jürgen Schmidhuber
6 months
@goodfellow_ian "Self-aggrandizement" says the researcher who claims he invented GANs :-) See references [PLAG1-7] in the original tweet, for example, [PLAG6]: "May it be accidental or intentional, plagiarism is still plagiarism." Unintentional plagiarists must correct their publications.
@SchmidhuberAI
Jürgen Schmidhuber
6 months
@NobelPrize At the risk of beating a dead horse: sadly, the #NobelPrize in Physics 2024 for Hopfield & Hinton is a Nobel Prize for plagiarism. They republished methodologies developed in #Ukraine and #Japan by Ivakhnenko and Amari in the 1960s & 1970s, as well as other techniques, without.
@SchmidhuberAI
Jürgen Schmidhuber
5 months
@goodfellow_ian Again: ad hominem arguments against facts 🙂 See [DLP, Sec. 4] on ad hominem attacks [AH1-3], true to the motto: "If you cannot dispute a fact-based message, attack the messenger himself." "Unlike politics, however, science is immune to ad hominem attacks." "In the hard
@SchmidhuberAI
Jürgen Schmidhuber
6 months
@goodfellow_ian I could spend more time answering in detail, but all the answers are actually in the original tweet and its references
[Quoted tweet: the #NeurIPS2024 Test of Time Award tweet above.]
@SchmidhuberAI
Jürgen Schmidhuber
5 months
@goodfellow_ian Again: ad hominem against facts. See [DLP, Sec. 4]: "conducted ad hominem attacks [AH2-3] against me, true to the motto: 'If you cannot dispute a fact-based message, attack the messenger himself'" ... "unlike politics, however, science is immune to ad hominem attacks — at
@SchmidhuberAI
Jürgen Schmidhuber
2 years
@yannx0130 Sure, see the experiments.