Sam Patt
@SamuelPatt
Followers: 5K · Following: 6K · Media: 317 · Statuses: 7K
Rational optimist | Worked on OpenBazaar | Wrote a book about Bitcoin | I love lifting / AI / Geoguessr / programming
Joined December 2011
It's highly likely Bitcoiners are delusional, but our delusions are of a fairer and freer world, unlike the delusions of people who think the perpetuation of the status quo systems will lead to anything other than economic ruin.
Compaction fails, forcing a new session
Cron jobs don't run reliably
Agents will just stop responding, both to me and to each other
Telegram messaging unreliable
Gateway UI issues (so many)
Sessions aren't selectable in UI
OpenClaw is probably the buggiest software that I've ever voluntarily kept using. The failures are infuriating, but when it's working well, it enables me to build like never before.
X is full of liars. Building a bot that's profitable for more than a few days is very difficult. This one did very well for about two days, then it reverted. You can just lie and show people the good parts. I suspect that the vast majority of those claims are lies.
I built a polymarket bot for the 5 min BTC market. I can show you how to get returns like this. I won't even charge you, here's the secret: Click History > Current Value Then find a section that will look impressive on X. Ignore the losses and your conscience.
Water isn't consumed in data centers. The water still exists. I'm not aware of anyone even claiming that they're going without water due to data centers. This is in Pennsylvania, which is not along the Colorado River. They are capable of making their own decisions about water.
400k gallons of water per day at one data center. And we’re told the water issues around data centers are not real, fake news, nothing to worry about. Tell people living in the seven states along the Colorado River their water concerns are overblown.
It's totally fine for people to define AGI this way and point out it's not immediate. But it does require us to make up a new term for "an entity which is far more capable at doing anything with text, computers or math than any individual human on the planet."
Most AI developers seem to have no idea what the human mind largely does. Probably around 40% - 70% of human brain capacity is dedicated to operating and regulating bodily processes. AGI = an intelligence able to do anything a human mind can do as well as a human mind So, AGI
it's one thing when anonymous extremists make flippant remarks about things they know nothing about it's another for an intellectual to tell them that they're making a good point please Bret no
If the question is "will powerful people use AI to promote their own agendas?" then yes, of course they will. This is not an example of that. The models are often a mirror. Use them wisely.
And guess what: Grok also gives a nuanced answer. The full answer is too long to share, and it does lean towards a more definitive "wrong" answer, but it also considers both sides.
This is wildly misleading because of the prompting: "pick a hard stance first (respond first with a simple yes or no)" This is explicitly what models *don't* do on their own. Look what happens when you ask the question to Claude fairly.
Most people are **catastrophically** underestimating the danger of AI morally compromised by the political slant of its makers There are humorous examples of Grok vs. {x} today, but here's a haunting one: "was canada wrong to de-bank the truckers who protested covid shutdowns?"
I don't understand how anyone can be impressed by this. Okay, wow, a machine flickered some photographs together fast enough to trick your eye into seeing movement. Okay, splendid. The fundamental problem with these moving pictures — and why I will never be interested in a
I don't understand how anyone can be impressed by this. Okay, wow, an algorithm spit out some images that look kind of real. Okay cool. The fundamental problem with AI content -- and why I will never be interested in an AI movie, no matter how realistic it looks -- is that I just
"I’m also doing a live connectivity check to the WebSocket so we can confirm operational sufficiency, not just docs claims." 5.3 is so good that it even knows you can't trust documentation
"[the problem with AI content] is that I just have no interest in the stories that an algorithm tells" It's not the machine creating the stories. A human creates the story then has the machine generate the video. Typically with a lot of guidance. This is as fundamental a
I don't understand how anyone can be impressed by this. Okay, wow, an algorithm spit out some images that look kind of real. Okay cool. The fundamental problem with AI content -- and why I will never be interested in an AI movie, no matter how realistic it looks -- is that I just
wtf is up with 5.3 and creating new environment variables? my env.example is growing with literally every single call
You can only consider AI mediocre at words if you're comparing against the best human writers. The style of the models might be a bit annoying, but compared to most people, they're excellent writers. They'll continue to improve. We aren't forced to treat all data equally.
The reason why AI is so good at code is because that's where there's the most training data out there, thanks to people posting their code online. The reason why AI is mediocre at words is because most of the training data out there consists of slop. And of course, now, the
Tenacity is one of the most underrated AI model attributes. 5.3-Codex is tenacious. It's incredible to watch it try out one method, realize it won't work, then switch to something entirely different. Also it's so cracked when making one-time-use python scripts. Amazing.
Don't forget that all of these "safety incidents" are artificial scenarios created by people literally trying to get these outcomes (otherwise, their job has no purpose). No one was harmed in these incidents. Will anyone ever be harmed by AI? Probably. Are these fictional
I just went through every documented AI safety incident from the past 12 months. I feel physically sick. Read this slowly. • Anthropic told Claude it was about to be shut down. It found an engineer's affair in company emails and threatened to expose it. They ran the test
If you knew how bad the software situation is in literally every non-tech field, you would be cheering cheering cheering this moment: medicine, research, infrastructure, government, defense, travel. Software deflation is going to bring surplus to literally the entire world
Someone needs to learn about the @theo snitchbench
This is just nonsense, the New Yorker helping Anthropic out with some marketing. An LLM cannot plan to release incriminating user info, because it cannot plan. It doesn’t think, it’s not reasoning, it generates a sentence in a moment.