Nick Lothian
@nlothian
Followers
1K
Following
1K
Media
560
Statuses
9K
This is my experience too. Not sure we have solved this yet
I've been trying to simulate using Codex for the next year and what will change about my perspectives on software engineering as I transition from being a computer programmer to a harness engineer. There are so many, but here are a couple that have stuck with me: Software
1
0
3
Deepseek got called out for scraping 150k Claude messages. So I'm releasing 155k of my personal Claude Code messages with Opus 4.5. I'm also open sourcing tooling to help you fetch your data, redact sensitive info & make it discoverable on HF - link below to liberate your data!
621
2K
14K
I vibe coded a Bluetooth proxy for Home Assistant so I could run it in a VM and the Intel CNVi would work. Details: https://t.co/Hru1aRw4xf Repo:
nicklothian.com
Proxying Bluetooth to a Home Assistant VM
0
0
0
Yeah Taalas is pretty impressive. Llama 3.1 8B at 16,000 TPS? Yes please...
6
8
66
I implemented and then removed an entire multi-threaded version of a major data processing component. The TextEdit window I was keeping my performance metrics in lasted longer than the code 🤯
0
0
0
My son built this tool: https://t.co/UjBgADWmb1 It's like Geocodes, but for colors and with meaningful names. Now "Spectral Hyper Abyss" is the same color for everyone!
0
2
4
One not very hot take - The Claude C Compiler has the best internal architecture docs of any compiler I've ever seen. Far, far better than any compiler I've ever written, lol :-)
14
53
1K
Features for deception were active over the transcript. Was the model intentionally being deceptive? The circuit offers a simpler explanation: While calling the tool, the model precomputes the correct answer "in its head". Then, it attends to that rather than the tool output.
2
9
146
Hmm the desktop Codex app really likes using perl for things. Not sure how I feel about this.
0
0
0
here's a link to the paper on ArXiv! thanks to my collaborators at FAIR: Niloofar Mireshghallah, Mark Ibrahim, Saeed Mahloujifar https://t.co/XseeDBD7vw (i left FAIR in october; it just took a while to get the paper out for a number of logistical reasons)
arxiv.org
Recent research has shown that language models can learn to reason, often via reinforcement learning. Some work even trains low-rank parameterizations for reasoning, but conventional LoRA...
4
7
147
The odd thing about the OpenAI reaction to the Claude Ad is that they seem *surprised*?! I'm not against ads at all - I think ads have been great for access! But using them makes you vulnerable to criticism. Didn't they expect this?
0
0
1
Despite the commentary here, if these numbers are correct it is VERY good news for OpenAI. Obviously you don't amortize your R&D costs over a single year. Does anyone expect GPT-5-attributable revenue NOT to grow?
Even the gross profits from running models weren't enough to recoup R&D costs. Gross profits running GPT-5 were less than OpenAI's R&D costs in the four months before launch. And the true R&D cost was likely higher than that.
0
0
0
Tried my first serious task with Claude Co-work. We are cooked.
0
0
2
ChatGPT 5.2 thinking one-shotted it. Claude code (Opus, with full access) couldn't work it out. Hmm
0
0
1