George Giapitzakis
@ggiapitz
72 Followers · 42 Following · 0 Media · 23 Statuses
Master's student in CS @uwaterloo | Onassis Foundation Scholar | Prev: Research Assistant @quantumlah | https://t.co/pGpvW97csM
Waterloo, ON, Canada
Joined July 2024
Lecture on "Learnability of Algorithms". 1. What does shuffling cards have to do with the hardness of learnability of algorithms? 2. Can neural networks be efficiently trained to learn to execute algorithms without error?
I'll be presenting our poster on instruction execution at #NeurIPS2025 today! Swing by poster #4015 between 11:00–2:00 to chat
.@NeurIPSConf (tomorrow) Date/time: Thu, Dec 4, 2025 • 11:00 AM–2:00 PM PST Location: Exhibit Hall C,D,E #4015
Omw to California for #NeurIPS2025 next week! I'll be presenting our work on how neural networks can learn from instructions and execute binary algorithms. Feel free to reach out!
when it rains, it pours! for years, it seemed like the ML community had lost interest in PAC learning automata and formal languages. the topic had seemed "exhausted" -- mainly because essentially any reasonable thing you'd want to do was proven to be computationally hard in some
I am hiring one PhD student. Subject: Reasoning and AI, with a focus on computational learning for long reasoning processes such as automated theorem proving and the learnability of algorithmic tasks. Preferred background: A mathematics student interested in transitioning to
On the Statistical Query Complexity of Learning Semiautomata: a Random Walk Approach Work with @ggiapitz, @EshaanNichani and @jasondeanlee. We prove the first SQ hardness result for learning semiautomata under the uniform distribution over input words and initial states,
Accepted at NeurIPS 2025, thanks! Part II on "Learning to Execute Graph Algorithms Exactly with Graph Neural Networks" is coming soon.
Positional Attention: Expressivity and Learnability of Algorithmic Computation https://t.co/XMguulytYT On Thursday (Poster Session 5 East). Presented by @backdeluca
This paper has been accepted to the 3rd Workshop on High-Dimensional Learning Dynamics (HiLD) at ICML 2025. @backdeluca will present this paper, and the "Positional Attention: Expressivity and Learnability of Algorithmic Computation" paper at the main conference. Be sure to meet
Learning to Add, Multiply, and Execute Algorithmic Instructions Exactly with Neural Networks. A simple two-layer ReLU network (infinite-width) trained with gradient descent can exactly learn to add, multiply, and permute bits. It can even run
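(Illustrative toy, not the paper's construction: the sketch below approximates the setup the tweet above describes with a finite-width two-layer ReLU network trained by full-batch gradient descent on binary addition, rounding outputs to {0, 1} so that "exact" means every bit is correct. All sizes, the learning rate, and helper names such as `make_addition_data` are arbitrary choices of mine, not the paper's infinite-width or logarithmic-data regime.)

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits = 6    # toy input length; the tweet's claim concerns general input sizes
hidden = 256  # finite-width stand-in for the infinite-width analysis

def to_bits(x, width):
    """Little-endian bit vector of a non-negative integer."""
    return np.array([(x >> i) & 1 for i in range(width)], dtype=np.float64)

def make_addition_data(m):
    """Random pairs (a, b) as concatenated bit vectors; the target is the bits of a + b."""
    a = rng.integers(0, 2**n_bits, size=m)
    b = rng.integers(0, 2**n_bits, size=m)
    X = np.stack([np.concatenate([to_bits(u, n_bits), to_bits(v, n_bits)]) for u, v in zip(a, b)])
    Y = np.stack([to_bits(u + v, n_bits + 1) for u, v in zip(a, b)])
    return X, Y

# Two-layer ReLU network ReLU(X W1 + b1) W2, trained with plain gradient descent on squared loss.
W1 = rng.normal(0, 1 / np.sqrt(2 * n_bits), size=(2 * n_bits, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(0, 1 / np.sqrt(hidden), size=(hidden, n_bits + 1))

X_train, Y_train = make_addition_data(500)
lr = 0.02
for _ in range(4000):
    H = np.maximum(X_train @ W1 + b1, 0.0)   # hidden activations
    err = H @ W2 - Y_train                   # gradient of 0.5 * squared error w.r.t. predictions
    gH = (err @ W2.T) * (H > 0)              # backprop through the ReLU
    W2 -= lr * H.T @ err / len(X_train)
    W1 -= lr * X_train.T @ gH / len(X_train)
    b1 -= lr * gH.mean(axis=0)

# "Exact" means every output bit is correct after rounding to {0, 1}.
X_test, Y_test = make_addition_data(500)
pred = np.maximum(X_test @ W1 + b1, 0.0) @ W2
exact = np.all(np.rint(np.clip(pred, 0, 1)) == Y_test, axis=1).mean()
print(f"fraction of unseen sums reproduced bit-for-bit: {exact:.3f}")
```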
LLMs can solve complex tasks that require combining multiple reasoning steps. But when are such capabilities learnable via gradient-based training? In our new COLT 2025 paper, we show that easy-to-hard data is necessary and sufficient! https://t.co/rl7aBrap0W Thread below (1/10)
Can neural networks perform arithmetic and instruction execution without error? We show how this can be achieved using only a logarithmic amount of training data in the input size. However, we require a sufficiently large ensemble of two-layer feedforward models, which can be
Positional Attention is accepted at ICML 2025! Thanks to all co-authors for the hard work (64 pages). If you'd like to read the paper, check the quoted post. It's a comprehensive study of expressivity for parallel algorithms, their in- and out-of-distribution learnability,
Positional Attention: Expressivity and Learnability of Algorithmic Computation (v2) We study the effect of using only fixed positional encodings (referred to as positional attention) in the Transformer architecture for computational tasks. These positional encodings remain the
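(A minimal single-head sketch of the idea as stated in the quoted abstract, not the paper's model: the attention scores are computed only from fixed positional encodings, so the mixing pattern is independent of the input, while the values still come from the input. The dimensions, the random initialisation, and names like `positional_attention_layer` are assumptions of mine.)

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

n, d_model, d_pos = 8, 16, 16   # sequence length, value/model width, positional width

# Fixed positional encodings: independent of the input sequence.
P = rng.normal(size=(n, d_pos))

# Parameters (randomly initialised here purely for illustration).
Wq = rng.normal(size=(d_pos, d_pos)) / np.sqrt(d_pos)
Wk = rng.normal(size=(d_pos, d_pos)) / np.sqrt(d_pos)
Wv = rng.normal(size=(d_model, d_model)) / np.sqrt(d_model)

def positional_attention_layer(X):
    """Scores depend only on P, never on X; the input enters only through the values."""
    scores = (P @ Wq) @ (P @ Wk).T / np.sqrt(d_pos)   # (n, n), input-independent
    A = softmax(scores, axis=-1)
    return A @ (X @ Wv)

X = rng.normal(size=(n, d_model))
print(positional_attention_layer(X).shape)   # (8, 16)

# Because the attention matrix A is fixed, the layer applies the same
# value-routing pattern across positions to every input sequence.
```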
Computational Capability and Efficiency of Neural Networks: A Repository of Papers. I compiled a list of theoretical papers related to the computational capabilities of Transformers, recurrent networks, feedforward networks, and graph neural networks. Link:
Can neural networks learn to copy or permute an input exactly with high probability? We study this basic and fundamental question in "Exact Learning of Permutations for Nonzero Binary Inputs with Logarithmic Training Size and Quadratic Ensemble Complexity" Using the NTK
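(A toy reading of the setup in the tweet above, not the paper's NTK analysis: several independently initialised two-layer ReLU networks are trained on a small set of nonzero binary inputs labelled with a fixed target permutation, and their averaged outputs are rounded to {0, 1} and checked for exact agreement on unseen inputs. The training-set size, ensemble size, and helper names such as `train_one_net` are arbitrary choices here and do not reproduce the paper's logarithmic and quadratic bounds.)

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8                       # input length: nonzero binary vectors of length n
perm = rng.permutation(n)   # fixed target permutation the networks should learn to apply

def sample_inputs(m):
    """Random nonzero binary inputs of length n."""
    X = rng.integers(0, 2, size=(m, n)).astype(np.float64)
    X[X.sum(axis=1) == 0, 0] = 1.0   # rule out the all-zeros input
    return X

def train_one_net(X, Y, hidden=128, lr=0.02, steps=4000):
    """One two-layer ReLU net trained with full-batch gradient descent on squared loss."""
    W1 = rng.normal(0, 1 / np.sqrt(n), size=(n, hidden))
    W2 = rng.normal(0, 1 / np.sqrt(hidden), size=(hidden, n))
    for _ in range(steps):
        H = np.maximum(X @ W1, 0.0)
        err = H @ W2 - Y
        gH = (err @ W2.T) * (H > 0)
        W2 -= lr * H.T @ err / len(X)
        W1 -= lr * X.T @ gH / len(X)
    return W1, W2

# Small training set, several independently initialised networks (the "ensemble").
X_train = sample_inputs(16)
Y_train = X_train[:, perm]
ensemble = [train_one_net(X_train, Y_train) for _ in range(10)]

# Average the ensemble's outputs on unseen inputs and round each coordinate to {0, 1};
# "exact" learning means the full permuted vector is recovered bit-for-bit.
X_test = sample_inputs(500)
avg = np.mean([np.maximum(X_test @ W1, 0.0) @ W2 for W1, W2 in ensemble], axis=0)
exact = np.all(np.rint(np.clip(avg, 0, 1)) == X_test[:, perm], axis=1).mean()
print(f"fraction of unseen inputs permuted exactly: {exact:.3f}")
```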
Positional Attention: Expressivity and Learnability of Algorithmic Computation (v2) We study the effect of using only fixed positional encodings (referred to as positional attention) in the Transformer architecture for computational tasks. These positional encodings remain the
.@shenghao_yang passed his PhD defence today. Shenghao is the second PhD student to graduate from our group. I am very happy for Shenghao and the work that he has done! I would also like to thank the members of the committee: Stephen Vavasis, Yaoliang Yu, Lap Chi Lau and Satish
.@aseemrb (co-supervised with A. Jagannath) passed his PhD defence yesterday. Aseem is the first PhD student to graduate from our group. I am very happy for Aseem and the work that he has done. I would also like to thank the members of the committee, @xbresson, @thegautamkamath,