Piotr Miłoś
@PiotrRMilos
979 Followers · 5K Following · 52 Media · 689 Statuses
AI @ Google | Researcher in ML | Prof@University of Warsaw | ex-visiting prof@U. Oxford. Member of Ellis Society.
Warsaw
Joined November 2021
Our doctoral student Alicja Ziarko has just received the Google PhD Fellowship, arguably the most prestigious award of its kind, intended for "exceptional graduate students working on innovative research in computer science and related fields" 🤩@ZiarkoAlicja
https://t.co/I0uoL4gKLk
blog.google
Today, we are announcing the recipients of the 2025 Google PhD Fellowship Program.
1
3
20
The competition is growing, but the Computer Science programme at #MIMUW is still on top in the Times Higher Education World University Ranking @THEworldunirank. We placed in the 176–200 band, the best result for a Polish university! https://t.co/o4dZRZW1U0
timeshighereducation.com
Top Computer Science school rankings 2025: Discover the best Computer Science schools in the world with the definitive Times Higher Education Subject Rankings 2025.
0
1
2
Equating AI with chatbots is like equating IT with web search.
5
52
132
A truly major achievement by Polish AI researchers. As many as 10 papers affiliated with the Faculty of Mathematics, Informatics and Mechanics were accepted at the most prestigious machine learning conference, NeurIPS 2025 @NeurIPSConf
4
37
235
This is great news! My faculty has a great track record of producing alumni who turned into top-notch AI researchers. At the same time, it did not have that many AI researchers, ... until recently! Proud of the faculty, and proud of my students who have been contributing
10 accepted papers with our Faculty affiliation at @NeurIPSConf - the most prestigious AI&ML conference! Congrats @hbaniecki, @PrzeBiec, @marek_a_cygan, @PiotrRMilos, @mic_nau, @piotrskowron_uw, @CStanKonrad, @ewa_szczurek, @MiZawalski, @ZiarkoAlicja, S.Płotka, B.Sobieski !!!
1
2
17
One can add that here in Poland we have world-class achievements in the field. The BRO algorithm (Nauman, Ostaszewski et al.) is currently _the best_ algorithm for continuous environments (e.g. robots). https://t.co/dJ3RwGRlRm
Reinforcement learning has recently become quite a hot trend in the world of artificial intelligence. Reinforcement learning is not a new technique, but it now often becomes the centerpiece of marketing and innovation for AI startups, where
0
3
23
Check out our new work on scaling RL via iterative computation. We apply flow-matching to value function learning and it works really well 🔥
🚨🚨New paper on core RL: a way to train value-functions via flow-matching for scaling compute! No text/images, but a flow directly on a scalar Q-value. This unlocks benefits of iterative compute, test-time scaling for value prediction & SOTA results on whatever we tried. 🧵⬇️
0
10
27
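The quoted paper describes training a flow directly on a scalar Q-value rather than on text or images. As a hedged illustration only (the names `flow_matching_target` and `sample_q` are mine, not the paper's, and a real system would regress a neural network onto the velocity target), here is a minimal sketch of linear-interpolant conditional flow matching on a scalar, plus the Euler integration at inference time that provides the "iterative compute" the tweet mentions:

```python
def flow_matching_target(q_value, x0, t):
    """Conditional flow-matching training pair for a scalar Q-value.

    Given a noise sample x0, a target q_value, and a time t in [0, 1],
    the linear interpolant gives the noisy input x_t and the (constant)
    velocity regression target along the straight path from x0 to q.
    """
    x_t = (1.0 - t) * x0 + t * q_value
    v_target = q_value - x0  # velocity of the straight-line path
    return x_t, v_target

def sample_q(velocity_field, x0, n_steps):
    """Euler-integrate a velocity field from noise x0 to a Q estimate.

    More integration steps means more iterative compute spent on a
    single value prediction (test-time scaling).
    """
    x, dt = x0, 1.0 / n_steps
    for k in range(n_steps):
        t = k / n_steps
        x = x + dt * velocity_field(x, t)
    return x

# Sanity check with the exact conditional velocity field for a fixed
# target q: v(x, t) = (q - x) / (1 - t) transports any start point to q.
q = 7.3
exact_v = lambda x, t: (q - x) / (1.0 - t)
```

With the exact field, the Euler integrator lands on `q` regardless of the starting noise; a trained network approximating that field inherits the same behaviour approximately.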
We’ve raised €1.7B to accelerate technological progress with AI! This Series C funding round, led by @ASMLcompany, fuels Mistral AI's scientific research, pushing the frontier of AI to tackle the most critical technological challenges faced by strategic industries.
141
426
4K
Almost all agentic pipelines prompt LLMs to explicitly plan before every action (ReAct), but it turns out this isn't optimal for multi-step RL 🤔 Why? In our new work we highlight a crucial issue with ReAct and show that we should make and follow plans instead🧵
5
40
171
The fight goes on! I encourage everyone to contribute. Łukasz has done a great deal for the AI community in Poland (for example, driving the AI Olympiad). Lots of excellent scientific work. And above all, he is a wonderful father, husband, and person. Thanks to the money raised so far, Łukasz
My good friend is in an ongoing fight with cancer. A great father and husband to his family. An excellent co-author for me and many other ML folks. Please support and share! (link in the comment!)
0
4
22
Can complex reasoning emerge directly from learned representations? In our new work, we study representations that capture both perceptual and temporal structure, enabling agents to reason without explicit planning. https://t.co/gGdnAUixcv
4
110
773
The EU has published a new report on the innovativeness of individual countries. Poland has improved its results somewhat since last year, but we still remain near the bottom of the pack (23rd out of 27), ahead of only Slovakia, Latvia, Bulgaria, and Romania. The report lists our biggest
research-and-innovation.ec.europa.eu
This provides a comparative analysis of innovation performance in EU countries, other European countries, and regional neighbours.
18
96
295
Today, come and see us at the poster session, East Exhibition Hall:
- Joint MoE Scaling Laws (E-2609); tl;dr: MoE can be memory efficient
- Since Faithfulness Fails (E-2101); tl;dr: inferring causal relationships turns out to be surprisingly hard
Let's chat more, my great
0
4
25
The holy grail of AI safety has always been interpretability. But what if reasoning models just handed it to us in a stroke of serendipity? In our new paper, we argue that the AI community should turn this serendipity into a systematic AI safety agenda!🛡️
A simple AGI safety technique: the AI's thoughts are in plain English; just read them. We know it works, with OK (not perfect) transparency! The risk is fragility: RL training, new architectures, etc. threaten transparency. Experts from many orgs agree we should try to preserve it:
6
15
101
On the way to #ICML25 🇨🇦! ✈️ The craziness starts again! 🤪 Hope to meet many old AI friends and make new ones. Let’s have a chat! 🗣️ I am proud to present: - Joint MoE Scaling Laws: MoE Can Be Memory Efficient 🚀 - Since Faithfulness Fails: The Performance Limits of Neural
2
3
18
Excited to be at ICML next week presenting our paper Joint MoE Scaling Laws: Mixture of Experts Can Be Memory Efficient! If you want to talk about scaling laws and MoEs, or you're interested in pretraining at @MistralAI - hit me up.
0
0
5
And please read their threads: https://t.co/qxMrknwdMa
Excited to be going to #ICML2025 next week! I’ll be presenting our paper: "Since Faithfulness Fails: The Performance Limits of Neural Causal Discovery" Here’s a quick breakdown of what we found 🧵👇 #CausalDiscovery #MachineLearning #AIResearch
1
0
5