francois

@fozenne

Followers: 70
Following: 543
Media: 11
Statuses: 584

lead data scientist. AI for high-expertise domains, functional programming and domain-driven design

Versailles, France
Joined January 2013
@fozenne
francois
14 days
The three body problem novel is about AI doom
0
0
0
@fozenne
francois
14 days
2026 prediction: MD5: d1c5c969fc61989992d0a5128c1a42b1 Let’s see how long it takes to realize 👀
0
0
0
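The prediction tweet above is a hash commitment: publish only the digest of the prediction now, reveal the plaintext later, and anyone can recompute the hash to verify the text was written in advance. A minimal sketch in Python (the prediction string below is made up for illustration; the actual plaintext behind the tweeted digest is unknown):

```python
import hashlib

# Hypothetical prediction text; the plaintext behind the tweeted digest is unknown.
prediction = "example: model X from lab Y tops benchmark Z in 2026"

# Commit: publish only the digest today.
digest = hashlib.md5(prediction.encode("utf-8")).hexdigest()
print(digest)

# Reveal: later, anyone can recompute the hash from the revealed text and
# compare it to the published digest to check the prediction wasn't edited.
assert hashlib.md5(prediction.encode("utf-8")).hexdigest() == digest
```

MD5 works as a casual commitment like this, though it is not collision-resistant, so a stronger hash such as SHA-256 is the usual choice for anything adversarial.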
@TerribleMaps
Terrible Maps
26 days
Mind blown.. Germany’s 5 biggest cities lie perfectly on a 4th-degree polynomial by u/BarisSayit
348
902
26K
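The map above is less mysterious than it looks: a degree-4 polynomial has five free coefficients, so any five points with distinct x-coordinates lie on one exactly. A quick NumPy check, using arbitrary stand-in coordinates rather than the real city locations:

```python
import numpy as np

# Five arbitrary points with distinct x-coordinates (stand-ins for the five cities).
x = np.array([0.0, 1.3, 2.7, 4.1, 5.5])
y = np.array([2.0, -1.0, 0.5, 3.3, -2.2])

# Degree 4 means 5 coefficients, so the fit interpolates all 5 points exactly.
coeffs = np.polyfit(x, y, deg=4)
print(np.allclose(np.polyval(coeffs, x), y))  # True
```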
@JustinMitchel
Justin Mitchel
29 days
So... Postgres is now basically a search engine? pg_textsearch was just open sourced. It enables BM25 search over your database.... massive upgrade for keyword search. Google uses BM25 in their search engine. Claude told me: "if you're already on Postgres, you can now skip
github.com
PostgreSQL extension for BM25 relevance-ranked full-text search. Postgres OSS licensed. - timescale/pg_textsearch
89
408
5K
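For context on the pg_textsearch tweet: BM25 ranks documents by combining term frequency (with saturation) and inverse document frequency, normalised by document length, which is what makes it stronger than naive keyword matching. A toy Python sketch of the scoring formula; this only illustrates the math and is not the SQL interface the extension exposes:

```python
import math
from collections import Counter

# Toy corpus standing in for rows in a Postgres table.
docs = [
    "postgres full text search with tsvector",
    "bm25 relevance ranking for keyword search",
    "postgres extension for bm25 ranked search",
]
tokenized = [d.split() for d in docs]
N = len(tokenized)
avgdl = sum(len(d) for d in tokenized) / N
k1, b = 1.2, 0.75  # standard BM25 parameters

def bm25(query, doc):
    tf = Counter(doc)
    score = 0.0
    for term in query.split():
        n_t = sum(term in d for d in tokenized)            # document frequency
        idf = math.log((N - n_t + 0.5) / (n_t + 0.5) + 1)  # BM25 IDF
        f = tf[term]
        score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc) / avgdl))
    return score

query = "bm25 postgres search"
ranked = sorted(range(N), key=lambda i: bm25(query, tokenized[i]), reverse=True)
for i in ranked:
    print(round(bm25(query, tokenized[i]), 3), docs[i])
```

In the extension, this ranking happens inside Postgres over indexed columns; the sketch just shows what the score rewards.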
@MistralAI
Mistral AI
29 days
Mistral OCR 3 sets new benchmarks in both accuracy and efficiency, outperforming enterprise document processing solutions as well as AI-native OCR.
15
80
776
@fozenne
francois
1 month
And that’s why you in-house them
@alxfazio
alex fazio
1 month
friend at accenture told me they don’t do evals when building llm wrappers for clients 🤡
0
0
1
@jhleath
Hunter Leath
1 month
an interesting update: the team is starting to move away from AI coding completely (devin/claude/etc) because it's so much harder to review the AI code than writing things themselves
@jhleath
Hunter Leath
6 months
just found out that since this, i've become a top 50 user of Devin globally, now pushing ~60 PRs a day. AMA
196
227
4K
@simonw
Simon Willison
2 months
This one is pretty nasty - it tricks Antigravity into stealing AWS credentials from a .env file (working around .gitignore restrictions using cat) and then leaks them to a webhooks debugging site that's included in the Antigravity browser agent's default allow-list
@PromptArmor
PromptArmor
2 months
Top of HackerNews today: our article on Google Antigravity exfiltrating .env variables via indirect prompt injection -- even when explicitly prohibited by user settings!
49
335
2K
@doodlestein
Jeffrey Emanuel
2 months
Just read through the new LeJEPA paper by Yann LeCun and Randall Balestriero. I’ve been curious to know what Yann’s been working on lately, especially considering all his criticisms of LLMs (which I disagree with, as I think LLMs will keep improving and will take us to ASI fairly
40
99
936
@jxmnop
dr. jack morris
2 months
there are dozens or perhaps a couple hundred ex-{OpenAI, xAI, Google DeepMind} researchers founding companies in the current climate
there are, as far as i know, zero people leaving to found startups out of Anthropic
really makes you think
90
51
2K
@specialkdelslay
special k | CEO of snowbird switchup gift boss ❄🧥
3 months
This is supposed to be the thermodynamic quantum computer? it looks like a 3d printed plastic toy with demon symbols on the side or sum, 14 million in seed funding?? fill me in on what I'm missing here
@AnjneyMidha
Anjney Midha
3 months
Got to see it IRL. Congrats @GillVerd and team! So crazy it might just work. Excited to see what kinds of diffusion workloads this beast can accelerate
83
22
735
@cloneofsimo
Simo Ryu
3 months
Im confused about "10,000 more efficient" part. This means you can train stable-diffusion-3 like model with 20$~ ish amount of electricity. What stops them from building a model and demonstrating it, beyond *checks note* ... Fashion MNIST? Im genuinely curious whats stopping them
@extropic
Extropic
3 months
Hello Thermo World.
74
22
675
@AnthropicAI
Anthropic
3 months
New Anthropic research: Signs of introspection in LLMs. Can language models recognize their own internal thoughts? Or do they just make up plausible answers when asked about them? We found evidence for genuine—though limited—introspective capabilities in Claude.
294
796
5K
@wirelyss
Wirelyss 👁️‍🗨️💫
3 months
Luckily since the Louvre made NFTs of their jewelry, even though the crowns physically were stolen, they still own the same assets. Because the tokens still exist and are in limited supply just as before. Nothing has changed. few understand blockchain technology.
326
1K
16K
@tekbog
terminally onλine εngineer
3 months
multi cloud multi az systems engineers right now
53
1K
19K
@tekbog
terminally onλine εngineer
3 months
this is basically how open source works for big tech
@Mihonarium
Mikhail Samin
3 months
Amazing story: the Czech government spent six years planning a series of dams. A family of beavers constructed the dams for free, in 1-2 days, in the same locations that humans picked, accomplishing the goals set by the Czech government and saving humans $1.2 million
37
455
9K
@fozenne
francois
4 months
Nice! Data extraction via web search tool calls was a vulnerability we were worried about early on. Glad it has been properly documented.
@simonw
Simon Willison
4 months
Classic prompt injection attack here against Notion: hidden text (white on white) in a PDF which, when processed by Notion, causes their agent to gather confidential data from other pages and append it into a query string that gets passed to their functions_search() tool
0
0
0
@WenhuChen
Wenhu Chen
4 months
The Internet latency is no joke. It took three years to open an Arxiv link.
26
25
882
@getjonwithit
Jonathan Gorard
4 months
Amusing how 99% of people interacting with reality forget how this thing works. It's an advanced extremization machine. It generates the next instant of time based on the Cauchy surface and the action. Under the hood, it's a giant volume integral that has eerily good output.
@GergelyOrosz
Gergely Orosz
4 months
Amusing how 99% of people using LLMs forget how these things work: They are advanced probability machines. They generate the next most likely token (word) based on the input and their training. Under the hood, it’s a giant matrix multiplication that has eerily good output.
46
98
1K
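The quoted "advanced probability machine" description maps to a few lines of code: the model's last matrix multiplication yields one logit per vocabulary token, softmax turns those into probabilities, and decoding emits the most likely token (or samples one). A toy sketch with a made-up vocabulary and logits:

```python
import numpy as np

# Toy vocabulary and logits standing in for the output of a language model's
# final matrix multiplication, given some input context.
vocab = ["the", "cat", "sat", "on", "mat"]
logits = np.array([1.2, 0.3, 2.5, -0.7, 0.9])

# Softmax turns logits into a probability distribution over the next token.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# Greedy decoding: emit the most likely next token; sampling would draw from probs instead.
print(vocab[int(np.argmax(probs))], dict(zip(vocab, probs.round(3))))
```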