
Taelin
@VictorTaelin
Followers: 63K
Following: 45K
Media: 828
Statuses: 16K
Kind / Bend / HVM / INets / λCalculus
São Paulo
Joined March 2011
RELEASE DAY

After almost 10 years of hard work, tireless research, and a deep dive into the kernels of computer science, I finally realized a dream: running a high-level language on GPUs. And I'm giving it to the world!

Bend compiles modern programming features, including: -
451
2K
15K
Note to self: SupGen should have a clean API completely separated from Bend (was already the plan, but still)
0
0
25
ok so for the first time ever, windsurf launches something that I'm actually interested in using, except I can't really use it, because it is tied to other products that I have no interest in

great model, bad launch :/
We're so focused on AGI that we're missing so many opportunities to build task-specific models that are incredibly effective in practice. I asked for something like this a long time ago, and I'm glad someone finally shipped it. Perhaps it is time to write my own Codex
5
1
116
We're so focused on AGI that we're missing so many opportunities to build task-specific models that are incredibly effective in practice. I asked for something like this a long time ago, and I'm glad someone finally shipped it. Perhaps it is time to write my own Codex
We trained a first-of-its-kind family of models: SWE-grep and SWE-grep-mini. Designed for fast agentic search (>2,800 TPS), they surface the right files to your coding agent 20x faster. Now rolling out gradually to Windsurf users via the Fast Context subagent.
34
21
599
ok, people who use OSS models and Codex: is there any model at all that I could train specifically on my stuff, that you'd say would perform as well as gpt-5-high on it? :')

people don't get that these posts are a cry for help

I desperately need a faster gpt-5-high
I'm bottlenecked by Codex. Every day, I have to choose 2 or 3 things to work on, because it will spend a long time on each. My routine is like:
- wake up
- write a prompt
- send it to Codex
- go eat something
- come back, it is still not done
- take a shower
- come back, still
49
2
196
Reposted due to typos
Re-reposted due to typos
Now it works, no mistakes
0
0
12
"How SupGen works under the hoods?" Explanation attempt targeting experienced Lean / Agda / Haskell devs
@vr4300 I can share this! Under the hood, it is just a proof checker on HVM, except that we don't implement unification like traditional verifiers do. Instead, we use a "superposition of all terms" to replace metavars. For example, if you write:

id : ∀A. A → A
id = λA. λx. x

In
2
3
68
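A minimal sketch of the idea in the explanation above, written in Haskell for the Lean / Agda / Haskell crowd it targets. The "superposition of all terms" that replaces a metavar is modeled here as a plain list of candidate terms, and "solving" means keeping the branches that check against the goal type. All names below (candidates, check, solve) are illustrative, not SupGen's actual code.

```haskell
-- Illustrative only: SupGen runs on HVM, where the superposition is a
-- native runtime construct; here it is just a Haskell list.
data Ty = TVar String | Arr Ty Ty deriving (Eq, Show)
data Tm = Var Int | Lam Tm deriving Show

-- every term up to a given size, with n free de Bruijn variables
-- (applications omitted to keep the sketch tiny)
candidates :: Int -> Int -> [Tm]
candidates 0    _ = []
candidates fuel n =
  [ Var i | i <- [0 .. n - 1] ] ++
  [ Lam b | b <- candidates (fuel - 1) (n + 1) ]

-- check a candidate term against a simple type
check :: [Ty] -> Tm -> Ty -> Bool
check ctx (Var i) t         = i < length ctx && ctx !! i == t
check ctx (Lam b) (Arr a r) = check (a : ctx) b r
check _   _       _         = False

-- "solving the metavar": keep the branches of the superposition that
-- satisfy the goal type; for A -> A this finds the identity function
solve :: Ty -> [Tm]
solve goal = [ t | t <- candidates 3 0, check [] t goal ]

main :: IO ()
main = print (solve (Arr (TVar "A") (TVar "A")))  -- [Lam (Var 0)]
```

On HVM the superposition is a first-class runtime construct rather than a list, so the candidate branches can share work instead of being enumerated one by one; the list here is only a stand-in for that.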
I want to launchhhhh

Why are there always so many small things to do

It never ends
22
4
199
Oops, small typo on the screenshots: https://t.co/jSZRN1WR78

I manually translated the HVM4 output to Bend since this feature is not on it yet. Here's the raw HVM4 input / output: https://t.co/dxMn8hLGxE
@VictorTaelin ( `add(x, 0)` is `x` not `0` )
0
0
18
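A tiny Haskell rendering of the corrected equations, purely for illustration (the real code in question is the HVM4 output in the gist above):

```haskell
-- Peano addition, spelling out the corrected base case.
data Nat = Zero | Succ Nat deriving Show

add :: Nat -> Nat -> Nat
add x Zero     = x               -- add(x, 0) = x, not 0
add x (Succ y) = Succ (add x y)  -- add(x, 1+y) = 1 + add(x, y)
```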
Unrelated (and nobody will understand shit, but I just want to make a note).

HVM4 has a new completion: dynamic sups handle superposed labels like...

& (&X{a,b}) {c,d}
-----------------
! C &X = c
! D &X = d
&X{ &a{C₀,D₀} , &b{C₁,D₁} }

HVM3 would just abort in this case!
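A toy Haskell model of that rule, assuming a term language in which a sup's label can itself be a term; the dup here is a naive copy standing in for HVM's duplication nodes, so this mirrors only the shape of the rewrite, not the actual HVM4 semantics.

```haskell
-- Toy model only, not the HVM4 runtime.
data Term
  = Lab Int             -- a concrete label
  | Sup Term Term Term  -- &label{left, right}
  deriving Show

-- naive stand-in for HVM's duplication (the runtime shares via dup
-- nodes instead of copying eagerly)
dup :: Term -> (Term, Term)
dup t = (t, t)

-- the rule above: & (&X{a,b}) {c,d} duplicates c and d with label X,
-- then pushes the label choice inward
reduceSup :: Term -> Term
reduceSup (Sup (Sup x a b) c d) =
  let (c0, c1) = dup c
      (d0, d1) = dup d
  in  Sup x (Sup a c0 d0) (Sup b c1 d1)
reduceSup t = t

main :: IO ()
main = print (reduceSup (Sup (Sup (Lab 0) (Lab 1) (Lab 2)) (Lab 3) (Lab 4)))
-- Sup (Lab 0) (Sup (Lab 1) (Lab 3) (Lab 4)) (Sup (Lab 2) (Lab 3) (Lab 4))
```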
Big breakthrough on SupGen today

Up to this day, SupGen could only discover functions when given their type signatures. So, for a simple example, suppose you wanted to synthesize a multiplication algorithm for Peano Nats. Currently, the best you could do is type: ``` def add(x:
7
0
84
Big breakthrough on SupGen today

Up to this day, SupGen could only discover functions when given their type signatures. So, for a simple example, suppose you wanted to synthesize a multiplication algorithm for Peano Nats. Currently, the best you could do is type: ``` def add(x:
12
17
278
Building HVM2 CUDA kernels traumatized me enough

HVM3 is much more complex, so yeah I'm good

Glad we got the mac mini cluster :)
0
0
37
Update: no, it isn't ):

GPT-5 is now hard-stuck trying to port a function from C to CUDA

No progress at all, despite just having to copy it

I also don't think investing in a GPU runtime for HVM4 is worth my time right now. We need to launch. Perhaps on the next AI leap...
18
6
295
Going smoothly = showing signs this could perhaps work?

Worth noting I assumed lazy-mode GPU evaluation would never work, and that's what motivated me to invest in a cluster of CPUs...
2
0
25
yes these are convenient made up questions that nobody ever asked
8
0
141
"Why don't you use gpt-codex / gpt-5-medium?" I tried but the additional errors cost me more time than I saved by the shorter inference times. "Why don't you work on multiple things in parallel?" Very hard in a single project, but I'm starting to consider that. Hard to me
16
1
246
I'm bottlenecked by Codex. Every day, I have to choose 2 or 3 things to work on, because it will spend a long time on each. My routine is like:
- wake up
- write a prompt
- send it to Codex
- go eat something
- come back, it is still not done
- take a shower
- come back, still
208
74
2K
imagine if aging turns out to be neither damage nor loss of information, but an actual evolved feature of our bodies that we can just turn off
Big anti-aging news

Scientists Identify Core "Aging Genes" Shared Across Species. Silencing Them Extends Lifespan

Researchers publishing in Aging Cell have pinpointed a set of core genes that drive aging across multiple species, from humans to worms. By analyzing 25 large
60
24
553