George Giapitzakis Profile
George Giapitzakis

@ggiapitz

Followers
72
Following
42
Media
0
Statuses
23

Master's student in CS @uwaterloo | Onassis Foundation Scholar | Prev: Research Assistant @quantumlah | https://t.co/pGpvW97csM

Waterloo, ON, Canada
Joined July 2024
@kfountou
Kimon Fountoulakis
4 days
Lecture on "Learnability of Algorithms". 1. What does shuffling cards have to do with the hardness of learnability of algorithms? 2. Can neural networks be efficiently trained to learn to execute algorithms without error?
1
3
21
@backdeluca
Artur
5 days
I'll be presenting our poster on instruction execution at #NeurIPS2025 today! Swing by poster #4015 between 11:00–2:00 to chat
0
2
4
@kfountou
Kimon Fountoulakis
6 days
.@NeurIPSConf (tomorrow) Date/time: Thu, Dec 4, 2025 • 11:00 AM - 2:00 PM PST Location: Exhibit Hall C,D,E #4015
@kfountou
Kimon Fountoulakis
7 months
Can neural networks perform arithmetic and instruction execution without error? We show how this can be achieved using only a logarithmic amount of training data in the input size. However, we require a sufficiently large ensemble of two-layer feedforward models, which can be
2
5
25
@backdeluca
Artur
10 days
Omw to California for #NeurIPS2025 next week! I'll be presenting our work on how neural networks can learn from instructions and execute binary algorithms. Feel free to reach out!
3
7
75
@aryehazan
Aryeh Kontorovich
2 months
when it rains, it pours! for years, it seemed like the ML community had lost interest in PAC learning automata and formal languages. the topic had seemed "exhausted" -- mainly because essentially any reasonable thing you'd want to do was proven to be computationally hard in some
7
12
108
@kfountou
Kimon Fountoulakis
2 months
I am hiring one PhD student. Subject: Reasoning and AI, with a focus on computational learning for long reasoning processes such as automated theorem proving and the learnability of algorithmic tasks. Preferred background: A mathematics student interested in transitioning to
13
97
526
@kfountou
Kimon Fountoulakis
2 months
On the Statistical Query Complexity of Learning Semiautomata: a Random Walk Approach Work with @ggiapitz, @EshaanNichani and @jasondeanlee. We prove the first SQ hardness result for learning semiautomata under the uniform distribution over input words and initial states,
1
11
42
@kfountou
Kimon Fountoulakis
3 months
Accepted at NeurIPS 2025, thanks! Part II on "Learning to Execute Graph Algorithms Exactly with Graph Neural Networks" is coming soon.
@kfountou
Kimon Fountoulakis
7 months
Can neural networks perform arithmetic and instruction execution without error? We show how this can be achieved using only a logarithmic amount of training data in the input size. However, we require a sufficiently large ensemble of two-layer feedforward models, which can be
0
1
5
@PetarV_93
Petar Veličković
5 months
Positional Attention: Expressivity and Learnability of Algorithmic Computation 🧮 https://t.co/XMguulytYT On Thursday (Poster Session 5 East) Presented by @backdeluca
0
2
16
@PetarV_93
Petar Veličković
5 months
i will not be going to @icmlconf #icml2025 this year but my colleagues will be presenting four of our papers throughout the week -- please feel free to stop by for a chat if you're in vancouver! details in thread: expect * chess ♟️ * graphs 🕸️ * softmax 🌡️ * algorithms 🧮
2
6
63
@kfountou
Kimon Fountoulakis
6 months
This paper has been accepted to the 3rd Workshop on High-Dimensional Learning Dynamics (HiLD) at ICML 2025. @backdeluca will present this paper, and the "Positional Attention: Expressivity and Learnability of Algorithmic Computation" paper at the main conference. Be sure to meet
@kfountou
Kimon Fountoulakis
10 months
Can neural networks learn to copy or permute an input exactly with high probability? We study this basic and fundamental question in "Exact Learning of Permutations for Nonzero Binary Inputs with Logarithmic Training Size and Quadratic Ensemble Complexity" Using the NTK
0
2
5
@gm8xx8
๐š๐”ช๐Ÿพ๐šก๐šก๐Ÿพ
6 months
Learning to Add, Multiply, and Execute Algorithmic Instructions Exactly with Neural Networks In the NTK regime, yes. A simple two-layer ReLU network (infinite-width) trained with gradient descent can exactly learn to add, multiply, permute bits. It can even run
0
5
31
@zzZixuanWang
Zixuan Wang
6 months
LLMs can solve complex tasks that require combining multiple reasoning steps. But when are such capabilities learnable via gradient-based training? In our new COLT 2025 paper, we show that easy-to-hard data is necessary and sufficient! https://t.co/rl7aBrap0W 🧵 below (1/10)
3
49
264
@kfountou
Kimon Fountoulakis
7 months
Can neural networks perform arithmetic and instruction execution without error? We show how this can be achieved using only a logarithmic amount of training data in the input size. However, we require a sufficiently large ensemble of two-layer feedforward models, which can be
2
15
101
@kfountou
Kimon Fountoulakis
7 months
Positional Attention is accepted at ICML 2025! Thanks to all co-authors for the hard work (64 pages). If you'd like to read the paper, check the quoted post. That's a comprehensive study on the expressivity for parallel algorithms, their in- and out-of-distribution learnability,
@kfountou
Kimon Fountoulakis
10 months
Positional Attention: Expressivity and Learnability of Algorithmic Computation (v2) We study the effect of using only fixed positional encodings (referred to as positional attention) in the Transformer architecture for computational tasks. These positional encodings remain the
1
10
46
@kfountou
Kimon Fountoulakis
9 months
Computational Capability and Efficiency of Neural Networks: A Repository of Papers I compiled a list of theoretical papers related to the computational capabilities of Transformers, recurrent networks, feedforward networks, and graph neural networks. Link:
6
35
159
@kfountou
Kimon Fountoulakis
10 months
Can neural networks learn to copy or permute an input exactly with high probability? We study this basic and fundamental question in "Exact Learning of Permutations for Nonzero Binary Inputs with Logarithmic Training Size and Quadratic Ensemble Complexity" Using the NTK
0
4
17
@kfountou
Kimon Fountoulakis
10 months
Positional Attention: Expressivity and Learnability of Algorithmic Computation (v2) We study the effect of using only fixed positional encodings (referred to as positional attention) in the Transformer architecture for computational tasks. These positional encodings remain the
3
17
61
@kfountou
Kimon Fountoulakis
11 months
.@shenghao_yang passed his PhD defence today. Shenghao is the second PhD student to graduate from our group. I am very happy for Shenghao and the work that he has done! I would also like to thank the members of the committee: Stephen Vavasis, Yaoliang Yu, Lap Chi Lau and Satish
5
4
70
@kfountou
Kimon Fountoulakis
1 year
.@aseemrb (co-supervised with A. Jagannath) passed his PhD defence yesterday. Aseem is the first PhD student to graduate from our group. I am very happy for Aseem and the work that he has done. I would also like to thank the members of the committee, @xbresson, @thegautamkamath,
1
21
168