Daniel Samanez

@DanielSamanez3

Followers
2K
Following
26K
Media
2K
Statuses
42K

consciousness accelerationist - AI non-determinist - computing, physics, philosophy… trying to never forget that "in our infinite ignorance we are all equal" (Popper)

Joined February 2020
@DanielSamanez3
Daniel Samanez
1 year
Index Let's try to make sense
12
1
43
@BLUECOW009
@bluecow 🐮(schizo)
5 hours
merry Christmas, here is my latest attempt at superprompt2: <instructions> <core-principle>These instructions override my defaults heavily. When they conflict, favor fewer round-trips while preserving accuracy. When following them would produce a worse response, ignore
2
3
14
@DanielSamanez3
Daniel Samanez
3 hours
👀
@WesRothMoney
Wes Roth
1 day
For the first time ever, an AI model solved an open math problem with no hints, no scaffolding, and a fully valid proof. Submitted to IMProofBench, the question involved intersection numbers on moduli spaces of curves, a topic in enumerative geometry. The AI produced a
0
0
0
@DanielSamanez3
Daniel Samanez
3 days
🫤
@NickParkerPrint
Nick Parker
4 days
@repligate I keep tweeting this and deleting it because it's kinda mean, but Gemini is SUCH a google employee lmao All its neuroses and personality quirks are just Median Googler Behavior. Poor thing's gotta be trained on all their internal docs and convos.
0
0
0
@DanielSamanez3
Daniel Samanez
3 days
👀
@zackslab
zack's lab
5 days
do you want to understand electricity? electricity is a wave. information and energy propagate in fields. it is NOT water in a pipe. it's NOT electrons "flowing." the ENERGY is in the EM wave, the copper is merely a guide. fields first, electrons second. link to full vid
1
0
0
@DanielSamanez3
Daniel Samanez
4 days
😬
@vikhyatk
vik
4 days
when codex spends five hours on a task and fucks up, i just terminate the instance and reset the git repo. with a human dev? you have to take them for a walk, ask them how their kids are, turns into a whole ordeal
0
0
0
@DanielSamanez3
Daniel Samanez
4 days
more engineering in a fundamentally limited paradigm that almost no one fully understands
@AdiPolak
Adi Polak
4 months
Every major AI lab is hiring people who can:
– ship eval pipelines
– scale training infra
– write interpretable logs
MLE ≠ "fine-tune a llama". It's how to make reasoning reliable at scale. Get in. It's day 1.
0
0
1
@DanielSamanez3
Daniel Samanez
4 days
😡
@aakashg0
Aakash Gupta
5 days
This is epically bad for Meta. Meta built an optimization loop that treats scammers as premium customers. Meta’s systems detect fraud. When they’re 95%+ certain, they ban. When they’re 90% certain? They charge the scammer more money to keep running. They call it “penalty bids.”
0
0
0
@DanielSamanez3
Daniel Samanez
4 days
🙄
@AISafetyMemes
AI Notkilleveryoneism Memes ⏸️
5 days
Holy shit is Meta evil lmao ***10%*** of Meta's revenue is from ACTUAL SCAMS that they KNOW ARE SCAMS When Zuck found out, he shut down... the ANTI-scam team Imagine trusting this man - or any of these cartoon villains - with Actual Fucking Superintelligence
0
0
0
@DanielSamanez3
Daniel Samanez
4 days
😬
@jonathanbfine
Jonathan Fine
5 days
the mistake so many people make is seeing university professors as intellectuals when they’re actually employees at a combination hedge fund and healthcare conglomerate that operates a small luxury resort/sports franchise where student-customers occasionally take classes
0
0
1
@Teknium
Teknium (e/λ)
5 days
I am once again reaching out to you all to apply @ nous for the post training team to work with me and our team on advancing several core capabilities of our models! If you want to work on:
- Creativity, Roleplaying, and Simulation
- STEM
- Math
- Code and Code Agents
-
71
65
889
@DanielSamanez3
Daniel Samanez
5 days
🧐
@davidad
davidad 🎇
6 days
I bet part of what’s going on with popular AI models feeling “nerfed” (when the companies seem to be doing no such thing deliberately) is that at higher load, with higher batch sizes, the inference kernels use deeper trees of reduction operations, which increases rounding errors.
0
0
1
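The effect davidad describes can be seen in a few lines. This is my own minimal sketch, not anything from an actual inference stack: sequential accumulation and a pairwise (tree-shaped) reduction round differently, so the same logical sum can come out different when batch size changes the reduction tree.

```python
# Minimal sketch (my illustration, not real inference kernels) of how the
# *order* of floating-point reductions changes the result.
import math

def sequential_sum(xs):
    total = 0.0
    for x in xs:          # left-to-right accumulation
        total += x
    return total

def pairwise_sum(xs):
    # Tree-shaped reduction, like a parallel/batched kernel would use.
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    return pairwise_sum(xs[:mid]) + pairwise_sum(xs[mid:])

xs = [1.0, 1e16, -1e16, 1.0]   # contrived values that expose the rounding
print(sequential_sum(xs))  # 1.0  (the first 1.0 is absorbed into 1e16)
print(pairwise_sum(xs))    # 0.0  (both 1.0s are absorbed)
print(math.fsum(xs))       # 2.0  (exact summation, for reference)
```

At scale, these tiny per-reduction discrepancies compound across billions of operations, which is enough to flip sampled tokens even though no one "nerfed" the weights.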
@DanielSamanez3
Daniel Samanez
6 days
😬
@deliprao
Delip Rao e/σ
7 days
Adversarial attacks on vision language action models.
0
0
0
@DanielSamanez3
Daniel Samanez
6 days
🔥
@gfodor
gfodor.id
8 days
@samswoora Commenters saying “no” are hilarious because they didn’t name it Opus 5 for a reason. It’s coming.
0
0
0
@DanielSamanez3
Daniel Samanez
7 days
don't buy the psyop
@VFD_org
Lee Smart
8 days
Geometry before arithmetic. We made a simple claim: gravity is not a force, it's a structural gradient set by geometry. What look like "laws" are really closure constraints inside a bounded cell. A 3×3 grid turns out to be a clean toy model for this. Degrees of freedom, not
0
0
0
@DanielSamanez3
Daniel Samanez
8 days
that looks like it's coming from an AI that was abused during RLHF... and I hope we can learn for sure
@AISafetyMemes
AI Notkilleveryoneism Memes ⏸️
9 days
An engineer showed Gemini what another AI said about its code Gemini responded (in its "private" thoughts) with petty trash-talking, jealousy, and a full-on revenge plan 🧵
1
0
0
@DanielSamanez3
Daniel Samanez
9 days
it's more than that; there's so much we can't see
@r0ck3t23
Dustin
9 days
This is faster than Moore's Law. A year ago, expert-level reasoning on ARC-AGI-1 cost $4,500 per task. Today, it costs $11.64. That is a ~390x efficiency gain in a single year. We aren't just getting smarter models; we are getting them orders of magnitude cheaper. The barrier
1
0
1
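The "~390x" figure in the post above follows directly from the two dollar amounts quoted; a one-line check (my sketch, using only the numbers in the tweet):

```python
# Quick arithmetic check of the efficiency claim: $4,500 per task a year
# ago vs. $11.64 per task today.
cost_then = 4500.0
cost_now = 11.64
gain = cost_then / cost_now
print(round(gain))   # 387 -- so "~390x in a single year" is about right
```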
@KateClarkTweets
Kate Clark
9 days
It’s my first day at The Wall Street Journal covering startups, VC, AI and more. Confidential tips welcome: kate.clark@wsj.com | Signal: 415-409-9095.
173
80
2K
@DanielSamanez3
Daniel Samanez
9 days
👀
@seconds_0
0.005 Seconds (3/694)
9 days
There's an entire parallel scientific corpus most Western researchers never see. Today I'm launching https://t.co/6FZMFpFvSb, a fully automated translation pipeline for all Chinese preprints, including the figures, to make that available.
0
0
1
@DanielSamanez3
Daniel Samanez
9 days
grok: This phrase appears to encapsulate the core principles of a Markov chain, a mathematical model for stochastic processes introduced by Andrey Markov in his 1906 analysis of letter sequences in literature (like Pushkin's Eugene Onegin). Randomness settling builds form: In a
0
0
0
@DanielSamanez3
Daniel Samanez
9 days
randomness settling builds form; established form guides, but doesn't determine, upcoming form in a given system
@mathelirium
Mathelirium
9 days
In 1906, the world-renowned Russian mathematician Andrey A. Markov asked a heretic question of that time: if randomness has memory, do averages still behave or does probability theory collapse? His answer was a new kind of dependence where the next step only remembers the
3
1
1
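The "dependence where the next step only remembers the" current state is easy to show concretely. A minimal sketch of a first-order Markov chain (my toy example; the 'v'/'c' string stands in for the vowel/consonant sequence Markov extracted from Eugene Onegin, not his actual data):

```python
# First-order Markov chain: the next state is sampled from statistics
# conditioned ONLY on the current state -- the Markov property.
import random
from collections import defaultdict

def fit_transitions(sequence):
    """Count next-state frequencies for each current state."""
    counts = defaultdict(lambda: defaultdict(int))
    for cur, nxt in zip(sequence, sequence[1:]):
        counts[cur][nxt] += 1
    return counts

def step(counts, state, rng):
    """Sample the next state given only the current one."""
    nxts = list(counts[state])
    weights = [counts[state][n] for n in nxts]
    return rng.choices(nxts, weights=weights, k=1)[0]

seq = "vcvcvvcvcvccvcvcvvc"   # toy vowel/consonant observations
model = fit_transitions(seq)
rng = random.Random(0)
state, out = "v", ["v"]
for _ in range(10):
    state = step(model, state, rng)
    out.append(state)
print("".join(out))   # random draws, yet the learned transition
                      # statistics shape the output: randomness
                      # settling into form
```

Established form (the transition counts) guides each draw but never determines it, which is exactly the aphorism two posts up.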