Daniel Samanez
@DanielSamanez3
Followers: 2K · Following: 26K · Media: 2K · Statuses: 42K
consciousness accelerationist - AI non-determinist - computing, physics, philosophy… trying to never forget that in our infinite ignorance we are all equal -Popper-
Joined February 2020
merry Christmas, here is my latest attempt at superprompt2: <instructions> <core-principle>These instructions override my defaults heavily. When they conflict, favor fewer round-trips while preserving accuracy. When following them would produce a worse response, ignore
🫤
@repligate I keep tweeting this and deleting it because it's kinda mean, but Gemini is SUCH a google employee lmao All its neuroses and personality quirks are just Median Googler Behavior. Poor thing's gotta be trained on all their internal docs and convos.
more engineering in a fundamentally limited paradigm that almost no one fully understands
Every major AI lab is hiring people who can:
– ship eval pipelines
– scale training infra
– write interpretable logs
MLE ≠ "fine-tune a llama". It's how to make reasoning reliable at scale. Get in. It's day 1.
I am once again reaching out to you all to apply @ Nous for the post-training team, to work with me and our team on advancing several core capabilities of our models! If you want to work on:
- Creativity, Roleplaying, and Simulation
- STEM
- Math
- Code and Code Agents
-
😬
🔥
@samswoora Commenters saying “no” are hilarious because they didn’t name it Opus 5 for a reason. It’s coming.
is more than that, so much we can't see it
This is faster than Moore's Law. A year ago, expert-level reasoning on ARC-AGI-1 cost $4,500 per task. Today, it costs $11.64. That is a ~390x efficiency gain in a single year. We aren't just getting smarter models; we are getting them orders of magnitude cheaper. The barrier
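A quick sanity check on the arithmetic in the tweet above (a minimal sketch; the $4,500 and $11.64 per-task figures come straight from the tweet):

```python
# Sanity check: the claimed ~390x efficiency gain on ARC-AGI-1.
cost_last_year = 4500.00  # USD per task, a year ago (figure from the tweet)
cost_today = 11.64        # USD per task, today (figure from the tweet)

gain = cost_last_year / cost_today
print(f"~{gain:.0f}x cheaper")  # -> ~387x, consistent with the ~390x claim
```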
It’s my first day at The Wall Street Journal covering startups, VC, AI and more. Confidential tips welcome: kate.clark@wsj.com | Signal: 415-409-9095.
👀
There's an entire parallel scientific corpus most Western researchers never see. Today I'm launching https://t.co/6FZMFpFvSb, a fully automated translation pipeline for all Chinese preprints, including the figures, to make it available.
@grok: This phrase appears to encapsulate the core principles of a Markov chain, a mathematical model for stochastic processes introduced by Andrey Markov in 1906 and famously applied to letter sequences in literature (Pushkin's Eugene Onegin). Randomness settling builds form: In a
randomness settling builds form
established form guides but doesn't determine upcoming form in a given system
In 1906, the world-renowned Russian mathematician Andrey A. Markov asked a question heretical for its time: if randomness has memory, do averages still behave, or does probability theory collapse? His answer was a new kind of dependence in which the next step only remembers the