Tristan T
@trirpi
Followers
55
Following
303
Media
3
Statuses
60
As the co-creator of the Kickstarter, I'm really excited about this one! We already reached our goal, but we can make it even bigger!
Created a Kickstarter to make "From Scribble To Readable" a reality! Check it out and back us if you'd like :) Kickstarter: https://t.co/KDkoiR1Kui
0
0
3
Also, shoutout to the SF Public Library, which provides free printing and fax machines
0
0
0
I sent my first fax today; it's surprisingly easy and intuitive. Underrated tech
1
0
0
Spoiler: the threads should be collected in a list ([]). Using "()" as in the picture doesn't create a tuple but a generator expression, so the second loop's `join` does nothing: the `threads` generator is already exhausted after the first loop
0
0
0
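A minimal reconstruction of the bug described above, with a hypothetical worker function:

```python
import threading
import time

def worker(n):
    time.sleep(0.1)
    print(f"worker {n} done")

# Bug: "()" makes this a generator expression, not a tuple.
threads = (threading.Thread(target=worker, args=(i,)) for i in range(4))
for t in threads:
    t.start()   # this loop exhausts the generator...
for t in threads:
    t.join()    # ...so this loop body never runs

# Fix: build a list so the threads can be iterated over twice.
threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()    # now actually waits for every worker
```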
You can run CUDA, on a Mac ARM GPU, in the browser. It sounds ridiculous but it actually works. HipScript chains CUDA to OpenCL, to Vulkan, to Tint (Google's shader translator), to WebGPU in WASM. I got a plasma simulation running in just a few minutes, no NVIDIA GPU!
59
341
3K
you can just print from within a cuda kernel? why did no one tell me
0
0
0
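Device-side printf has been part of CUDA since compute capability 2.0. One way to try it from Python is Numba's CUDA JIT, where print() inside a kernel maps to the same device-side printf (a sketch; assumes an NVIDIA GPU with numba and the CUDA toolkit installed):

```python
from numba import cuda

@cuda.jit
def hello_kernel():
    i = cuda.grid(1)                # absolute thread index
    print("hello from thread", i)   # compiled to device-side printf

hello_kernel[1, 4]()   # launch 1 block of 4 threads
cuda.synchronize()     # wait so the device print buffer is flushed
```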
First we shared mainframes. Then everyone got a PC. Today we rent cloud GPUs. In a few years, everyone will have their own AI device
0
0
0
An example would be an abstract method that takes **kwargs while the override doesn't allow them, as sketched below.
0
0
1
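A minimal sketch of that situation, with hypothetical names; a checker like Pyrefly or mypy flags the override because it no longer accepts the keyword arguments the base signature promises:

```python
from abc import ABC, abstractmethod
from typing import Any

class Step(ABC):
    @abstractmethod
    def run(self, **kwargs: Any) -> None: ...

class ConcreteStep(Step):
    # Flagged: code holding a Step may call run(timeout=5), which this
    # narrower signature rejects, so it isn't a safe substitute (Liskov
    # substitution principle).
    def run(self, retries: int = 0) -> None:
        print(f"{retries=}")
```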
Pyrefly is pretty nice! In most codebases I run it on, it throws tons of warnings that are all valid, even for issues I'd never thought of
1
0
1
How we "guessed" the Pope using network science: inside the cardinal network. A study by me, Beppe Soda and Alessandro Iorio. Article: https://t.co/xQ0fTmpVxb
@Unibocconi
235
2K
12K
Today I learned you can do f"{x=}" in Python and it'll print "x=" and then its value.
0
0
1
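The = specifier (added in Python 3.8) also keeps whitespace around the expression and composes with format specs:

```python
x = 42
print(f"{x=}")        # x=42
print(f"{x + 1 = }")  # x + 1 = 43  (the expression and spacing are preserved)
print(f"{x=:>5}")     # x=   42     (a format spec applies to the value)
```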
Hugging Face being really edgy, only making their tokenizers available in Rust and not C++ 🤨 Not a great choice imo. I like Rust, but no C++ excludes so many embedded systems, which I feel are one of the main use cases of these non-interpreted tokenizers
0
0
2
The PyTorch team has its onboarding docs available online. The section on Codegen and Structured Kernels in particular is quite interesting.
1
0
0
I think flexattention is great, as well as the inductor backend. It's just that the way this is communicated is slightly confusing. It's not torch.compile that supports caching; it's the TorchInductor compile path that supports it
0
0
1
At the PyTorch conference, flexattention was presented as working with torch.compile. When I asked about the different backends, this was met with a slightly confused look and the answer that it only works with the inductor backend
2
0
1
I notice quite regularly that custom backends are not popular/well supported. For example, when the official docs mention torch.compile, you can pretty much assume they mean torch.compile together with the inductor backend
1
0
1
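For readers who haven't used custom backends: torch.compile defaults to TorchInductor, but it accepts any callable as a backend. A minimal sketch (my_backend is a toy illustration that just runs the captured graph eagerly):

```python
import torch

def f(x):
    return torch.sin(x) ** 2 + torch.cos(x) ** 2

# Default path: the TorchInductor backend, where features like
# compile caching are actually implemented.
f_inductor = torch.compile(f)  # backend="inductor" is the default

# Custom path: any callable taking (graph_module, example_inputs)
# and returning a callable can serve as a backend.
def my_backend(gm: torch.fx.GraphModule, example_inputs):
    print(gm.graph)    # inspect the FX graph Dynamo captured
    return gm.forward  # run the captured graph eagerly

f_custom = torch.compile(f, backend=my_backend)

x = torch.randn(8)
print(torch.allclose(f_inductor(x), f_custom(x)))  # True
```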
PyTorch 2.7 just dropped! torch.compile now allows caching. Unfortunately it only works with the default inductor backend. Would've been great to have an API that allows caching in custom backends, but that's not there
1
0
1
Holy cow, since 3.10 Python has had a switch statement. How did I not know? It's called match
0
0
0
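It's actually structural pattern matching, so it goes well beyond a C-style switch: you can match on types, destructure sequences and mappings, and add guards. A small sketch:

```python
def describe(value):
    match value:
        case 0:
            return "zero"
        case int(n) if n < 0:           # class pattern plus a guard
            return f"negative int: {n}"
        case [x, y]:                    # destructures any 2-element sequence
            return f"pair: {x}, {y}"
        case {"name": name}:            # matches mappings with a "name" key
            return f"named: {name}"
        case _:                         # wildcard, like default:
            return "something else"

print(describe(-5))             # negative int: -5
print(describe([1, 2]))         # pair: 1, 2
print(describe({"name": "t"}))  # named: t
```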