Jakub Piotr Cłapa Profile
Jakub Piotr Cłapa

@jpclap

Followers 249 · Following 4K · Media 34 · Statuses 2K

I love to learn new things and use them to solve problems. Currently at @hume_ai Previously built WhisperSpeech an Open Source AI TTS model at @Collabora

Lodz, Poland
Joined February 2014
@halvarflake
Halvar Flake
21 days
@lemire As a young man, I had absorbed the idea that theoretical progress precedes practical progress. It took me a good decade to understand that practical progress tends to precede theoretical progress - we often figure out how to do something *before* we understand how it works.
7
9
71
@AIatMeta
AI at Meta
24 days
We’re in San Diego this week for #NeurIPS2025! Stop by the Meta booth (#1223) to meet our team and check out: 🔎 Demos of our latest research including DINOv3 and UMA ⚡ Lightning talks from researchers behind SAM 3, Omnilingual ASR and more (see schedule below) 👓 Hands-on
41
40
463
@alfcnz
Alfredo Canziani
6 years
On Tuesday, in my class, we learnt that all a neural net does is stretch / contract the fabric of space. For example, this 3-layer net (1 hidden layer of 100 positive neurons) gets its 5D logits (2D projections) linearly separable by the classifier hyperplanes (lines).
27
211
1K
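The "stretching space" idea above can be sketched in a few lines. This is a toy illustration, not the class's actual code: the weights below are hypothetical and tiny (3 hidden units instead of 100), but they show the same shape of computation — a 2D point is warped through a nonlinearity into a higher-dimensional space of logits, where tangled classes can become linearly separable.

```python
def relu(v):
    return [max(0.0, x) for x in v]

def linear(W, b, v):
    # Matrix-vector product plus bias: one layer's affine "stretch" of space.
    return [sum(wi * xi for wi, xi in zip(row, v)) + bi
            for row, bi in zip(W, b)]

# Hypothetical fixed weights: 2D input -> 3 hidden units -> 5D logits.
W1 = [[1.0, -1.0], [0.5, 2.0], [-1.5, 0.3]]
b1 = [0.1, -0.2, 0.0]
W2 = [[0.2, -0.4, 1.0],
      [1.1, 0.0, -0.3],
      [-0.7, 0.6, 0.2],
      [0.0, 1.3, -1.1],
      [0.9, -0.5, 0.4]]
b2 = [0.0] * 5

def net(p):
    # Affine map, fold with ReLU, affine map again: warp, crease, warp.
    return linear(W2, b2, relu(linear(W1, b1, p)))

logits = net([0.5, -1.0])   # a 2D point mapped into 5D logit space
```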
@ennucore
Lev Chizhov
1 month
Can ultrasound make you smell things that aren’t there? Turns out, yes! We reliably triggered distinct scents like a campfire burn or a garbage truck by targeting our brains with ultrasound. To our knowledge, this has never been done before, even in animals. This may be a
256
599
4K
@jonathoda
Jonathan Edwards
1 month
Substrates 2025 Proceedings
1
4
10
@patrickc
Patrick Collison
1 month
Came across this summary that pulls together some of the recent findings on the links between commonplace (including ostensibly "harmless") pathogens and long-term harm to human health. The war on infectious diseases is far from over!
108
243
2K
@micahgoldblum
Micah Goldblum
1 month
🚨We converted pretrained LLMs into looped LLMs that can crank up performance by looping for more iterations. Our looped models surpass the performance of the pretrained models we started out with, showing that existing models benefit from increased computational depth. 📜1/9
10
26
153
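The looped-LLM idea — re-applying the same weights for more iterations to buy extra computational depth — can be illustrated with a deliberately simple stand-in. This is not the paper's architecture; the "block" below is just a Newton step for computing a square root, chosen because iterating one fixed computation visibly refines the answer the same way extra loop iterations refine a looped model's output.

```python
def block(x, a=2.0):
    # One "loop" of a fixed computation (a Newton step toward sqrt(a)).
    return 0.5 * (x + a / x)

def looped(x, iters):
    # Re-applying the same block deepens compute without adding parameters.
    for _ in range(iters):
        x = block(x)
    return x

# More iterations of the same block -> a progressively better answer.
```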
@jpclap
Jakub Piotr Cłapa
1 month
Interesting paper, I’ve seen similar results in S2A training for WhisperSpeech and when doing enc-dec ASR. It’s surprising how the optimal width to depth ratio changes when you optimize for fast inference on modern GPUs.
@PandaAshwinee
Ashwinee Panda
1 month
our Gemstones paper on Scaling Laws is accepted at @NeurIPSConf! we release a bunch of models trained up to 2B params with varying width / depth and analyze the impact of scaling hidden dimension vs number of blocks in terms of FLOP-optimal and GPUhr-optimal. 🧵
0
0
0
@Dorialexander
Alexander Doria
1 month
Breaking: we release a fully synthetic generalist dataset for pretraining, SYNTH, and two new SOTA reasoning models exclusively trained on it. Despite having seen only 200 billion tokens, Baguettotron is currently best-in-class in its size range.
81
152
1K
@lcamtuf
lcamtuf
2 months
What's the deal with Euler's identity?
lcamtuf.coredump.cx
Untangling a cursed formula from 1748.
1
1
11
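For readers who don't follow the link: the "cursed formula from 1748" is Euler's identity, which drops out of Euler's formula by evaluating it at \(\theta = \pi\):

```latex
% Euler's formula, then specialize to \theta = \pi:
e^{i\theta} = \cos\theta + i\sin\theta
\quad\Longrightarrow\quad
e^{i\pi} = \cos\pi + i\sin\pi = -1
\quad\Longrightarrow\quad
e^{i\pi} + 1 = 0
```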
@stevekrouse
Steve Krouse
2 months
OpenAI – now worth half a trillion dollars – infamously started as a research lab. Only 14 companies have ever reached a trillion-dollar valuation. Two of these – Apple and Microsoft – were also built on the work of a research lab, Xerox PARC. This is not a coincidence. In 2017, Alan
4
6
36
@jonathoda
Jonathan Edwards
2 months
“the structure has to happen somewhere because that's what the output page looks like and that doing it outside the database isn't working very well.” - Jamie Brandon
scattered-thoughts.net
0
2
10
@jeremyphoward
Jeremy Howard
2 months
It could be argued it makes sense for OpenAI to bet the company on the assumption they'll reach AGI soon, since that's their mission. But for any org for which AGI is not their mission, it's a bad bet: if we reach AGI, you're obsolete. If we don't, you're bankrupt.
1
5
33
@DanHollick
Dan Hollick
2 months
Before we all mute the word 'dithering' I thought I'd explain a little bit about why we needed to dither digital images in the first place. Although it's an aesthetic now, we used to need dithering to trick our eyes into seeing more colors than were actually there. 👇
23
50
738
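The trick the thread describes — trading spatial resolution for apparent color depth — is easy to demonstrate. A minimal sketch of Floyd–Steinberg error diffusion (one classic dithering algorithm, standing in for whatever the thread covers): each pixel is snapped to pure black or white, and the rounding error is pushed onto not-yet-visited neighbours, so the local *average* brightness is preserved even though every pixel is 1-bit.

```python
def floyd_steinberg(img):
    """1-bit Floyd-Steinberg error-diffusion dither.
    img: list of rows of floats in [0, 1]; returns rows of 0/1 ints."""
    h, w = len(img), len(img[0])
    px = [row[:] for row in img]            # working copy we smear error into
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = px[y][x]
            new = 1 if old >= 0.5 else 0    # quantize to black or white
            out[y][x] = new
            err = old - new
            # Diffuse the quantization error onto unvisited neighbours
            # with the classic 7/16, 3/16, 5/16, 1/16 weights.
            if x + 1 < w:
                px[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    px[y + 1][x - 1] += err * 3 / 16
                px[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    px[y + 1][x + 1] += err * 1 / 16
    return out

# A flat 50%-gray patch dithers to a pattern that is ~half white pixels,
# so from a distance the eye still sees mid-gray.
gray = [[0.5] * 16 for _ in range(16)]
dithered = floyd_steinberg(gray)
```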
@mitchellh
Mitchell Hashimoto
2 months
Some early HC employees will probably remember me joking that it was my divine mission to eliminate YAML from the world. I joked I started HC only to kill YAML. Like, back in 2013. And we (as an industry) were so close! Then Kubernetes came out and fucked it all up.
@livingdevops
Akhilesh Mishra
2 months
---
- Kubernetes uses YAML
- Helm uses YAML
- ArgoCD uses YAML
- Ansible uses YAML
- GitHub Actions uses YAML
- GitLab CI uses YAML
- Azure DevOps uses YAML
- Terraform uses YAML
- GCP Cloud Build uses YAML

Get good at YAML
104
95
3K
@hume_ai
Hume AI
2 months
Today, @NianticSpatial released an update to their AR companion, Dot, at Snap's Lens Fest, with new voice capabilities powered by Hume AI. Dot's new interactive dialogue capabilities allow the AI companion to guide users through physical spaces, offering contextual information
7
13
43
@ShamKakade6
Sham Kakade
2 months
1/8 Second Order Optimizers like SOAP and Muon have shown impressive performance on LLM optimization. But are we fully utilizing the potential of second order information? New work: we show that a full second order optimizer is much better than existing optimizers in terms of
26
80
596
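The "full second order" point in the thread can be made concrete on a toy problem. This is not the paper's optimizer — just the textbook Newton update \(\theta \leftarrow \theta - H^{-1}\nabla f\) on an ill-conditioned quadratic, where using the full Hessian (here diagonal, so trivially invertible) reaches the minimum in a single step while first-order methods would zig-zag:

```python
def grad(p):
    x, y = p
    return (2 * x, 20 * y)            # gradient of f(x, y) = x^2 + 10*y^2

def hess_inv_times(g):
    # Hessian of f is diag(2, 20); apply its inverse to the gradient.
    return (g[0] / 2, g[1] / 20)

def newton_step(p):
    # Full second-order update: theta <- theta - H^{-1} grad f(theta).
    gx, gy = hess_inv_times(grad(p))
    return (p[0] - gx, p[1] - gy)

# One Newton step jumps straight to the minimum (0, 0) of the quadratic.
```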
@perigean
Brian Moon
2 months
📣 Now Available: Darwin's People: How Naturalists Explain Our Behavior. Links 👇 I wrote this book because I'm weary of all the pseudo-social-science that courses through social and traditional media, academic journals, and casual conversation. Our explanations of people and
10
8
39
@jeremyphoward
Jeremy Howard
2 months
One thing I do differently/more than basically everyone I've ever worked with is to make every single task & subtask an opportunity to learn something new or make something kinda cool. That's a cool thing about computers. You can do the work, or build a thing to make them do it
4
11
118
@docmilanfar
Peyman Milanfar
3 months
How Kernel Regression is related to the Attention Mechanism - a summary in 10 slides. 0/1
13
170
1K
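The slide deck itself isn't reproduced here, but the standard version of the correspondence is short: the Nadaraya–Watson kernel-regression estimator with a Gaussian kernel is exactly softmax attention where the scores are negative squared distances between the query and the keys. A minimal 1-D sketch:

```python
import math

def nadaraya_watson(q, keys, values, h=1.0):
    """Kernel-regression estimate at query q.
    Softmax over -|q - k|^2 / (2 h^2) is Gaussian-kernel attention:
    the bandwidth h plays the role of the attention temperature."""
    scores = [-(q - k) ** 2 / (2 * h * h) for k in keys]
    m = max(scores)                              # stabilize the softmax
    w = [math.exp(s - m) for s in scores]        # unnormalized attention
    z = sum(w)
    return sum(wi * v for wi, v in zip(w, values)) / z

# Attention-weighted average of the values, weighted by query-key closeness.
estimate = nadaraya_watson(0.5, keys=[0.0, 1.0], values=[0.0, 1.0])
```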