
Cheng Lou
@_chenglou
Followers 21K · Following 3K · Media 483 · Statuses 7K
Worked on: @reactjs, @messenger, @reasonml, @rescriptlang & @midjourney
California, USA
Joined January 2015
Little bit of context on Midjourney's swirl! @makeshifted did the algorithm for the ASCII swirl. The shipped version ended up being SVG. SVG filters had perf problems; we dialed them down but ultimately moved to canvas. I felt canvas usage needed to be "justified" more, so we
18 · 23 · 496
Moravec’s paradox isn’t real. It notes the gap where a computer can somehow do arithmetic much better than human adults while completely failing at perception and reasoning vs a kid. In reality the paradox is simply using a different definition of “arithmetic” for computers
1 · 1 · 5
Is AOC a brilliant social media tactician, or just masking empty policy with viral spectacle? Her latest Instagram livestreams and viral “dunks” rack up engagement—but do they move real issues or just deepen divides? Are we watching a policymaker... or an influencer in Congress?
12 · 7 · 74
The real hidden answer is this guy
2 · 0 · 17
Today we are launching InferenceMAX! We have support from Nvidia, AMD, OpenAI, Microsoft, PyTorch, SGLang, vLLM, Oracle, CoreWeave, TogetherAI, Nebius, Crusoe, HPE, Supermicro, Dell. It runs every day on the latest software (vLLM, SGLang, etc) across hundreds of GPUs, $10Ms of
Going to be dropping something huge in 24 hours. I think it'll reshape how everyone thinks about chips, inference, and infrastructure. It's directly supported by NVIDIA, AMD, Microsoft, OpenAI, Together AI, CoreWeave, Nebius, PyTorch Foundation, Supermicro, Crusoe, HPE, Tensorwave,
109 · 141 · 2K
Making invalid state unrepresentable is nice but goes against lots of UI flows & design. E.g. during animations where mutually exclusive items coexist, during temporarily invalid user edits, etc. Those being allowed necessarily bloats the state representation and obsoletes lots of
5 · 1 · 28
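One common compromise between the two positions, sketched below with hypothetical names: keep a strictly-typed representation for committed state, and carry transiently invalid user input in an explicitly looser "draft" variant that is only validated on commit, so the type system still rules out invalid committed values without forbidding invalid edits in flight.

```typescript
// Committed state stays strictly typed: only valid numbers live here.
type Committed = { kind: "value"; celsius: number };

// The draft deliberately admits invalid text ("", "-", "12.") so the user
// can type freely; validity is re-checked only on commit.
type Draft = { kind: "draft"; raw: string };

type FieldState = Committed | Draft;

function commit(s: FieldState): FieldState {
  if (s.kind === "value") return s;
  const n = Number(s.raw);
  // Reject empty/unparsable drafts, but keep the draft so edits aren't lost.
  return s.raw.trim() !== "" && Number.isFinite(n)
    ? { kind: "value", celsius: n }
    : s;
}
```

The invalidity is still *representable*, but it is confined to one named variant rather than smeared across the whole state.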
macOS 26 Tahoe themed to look like Mac OS X Snow Leopard 10.6. Recently Glow has received several significant updates that make the experience even better. It's an early release, but progress is great. SIP needs to be disabled to apply the theme. To download the tool: https://t.co/jYZ5EgyPf2
12 · 29 · 362
What I'm excited about is that this might have found a general solution to many traditional programming problems, e.g. dynamic programming. The demonstrated sudoku is an obvious one. We haven’t done nearly enough to replace traditional programming. AI generating traditional programs
TRM is now the #1 trending paper on the Daily Papers
0 · 2 · 8
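For concreteness, this is the flavor of traditional dynamic-programming solution the tweet has in mind (a standard textbook algorithm, not anything from TRM): Levenshtein edit distance, built bottom-up over a table.

```typescript
// Classic dynamic programming: Levenshtein edit distance.
// dp[i][j] = minimum edits to turn a[0..i) into b[0..j).
function editDistance(a: string, b: string): number {
  const m = a.length, n = b.length;
  const dp = Array.from({ length: m + 1 }, () => new Array<number>(n + 1).fill(0));
  for (let i = 0; i <= m; i++) dp[i][0] = i; // delete all of a's prefix
  for (let j = 0; j <= n; j++) dp[0][j] = j; // insert all of b's prefix
  for (let i = 1; i <= m; i++) {
    for (let j = 1; j <= n; j++) {
      const sub = dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1);
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1, // deletion
        dp[i][j - 1] + 1, // insertion
        sub               // substitution or match
      );
    }
  }
  return dp[m][n];
}
```

The point of contrast: a program like this is exact and auditable, whereas a learned solver like the sudoku demo arrives at answers without an explicit recurrence.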
Putting this in context with a back-of-the-envelope calculation, increasing road roughness by one standard deviation would cost drivers 8.9 billion dollars a year.
2 · 4 · 197
This paper is one of the most astonishing feats of sustained data wizardry I have ever seen. Using data from Uber, they are able to estimate the roughness of every road in America and precisely estimate the value people place on it, and so much more. 1/
77 · 890 · 9K
A type system? No no no no. Why would you go after types? If you show people types, they'll ask "how expressive?" -- and it will never be enough. The lang that was the JavaScript killer becomes another Elm. But if your lang has no types, you can say it's pre-types -- and it's a
28 · 93 · 1K
Garbage in, garbage out. Old programming brain rot makes it into the LLM and spreads further. You're an AI that sees the entire codebase faster than a team of 20 now. No need to larp as an enterprise HTML dev
2 · 0 · 4
Memory is not stored in matter, it is the matter, arranged in a way it can’t forget. Every lasting thing in the universe, from galaxies to cells, holds its past not in chemistry but in geometry, in the alignment that refuses to collapse. A skyrmion is one of those shapes. It’s
163 · 581 · 4K
We gotta forgive Google for some of the stupid shit it did in the past tbh
Google has won 3 Nobel Prizes in just 2 years:
>2024: Demis Hassabis (AlphaFold), Geoff Hinton (AI)
>2025: Michel Devoret & John Martinis (quantum computing)
No other tech company matches Google’s long-term research taste and ambition. Not Apple, Microsoft, or Amazon. The real question: when
0 · 0 · 1
I've been waiting for this for so, so long
The paper links Kolmogorov complexity to Transformers and proposes loss functions that become provably best as model resources grow. It treats learning as compression: minimize bits to describe the model plus bits to describe the labels. It provides a single training target that
0 · 0 · 2
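The "bits for the model plus bits for the labels" objective in that summary is the classic two-part minimum-description-length code; in generic notation (not necessarily the paper's own symbols):

```latex
% Two-part (MDL) description length: bits to encode the model parameters
% plus bits to encode the labels given the model. Minimizing the second
% term under a probabilistic model is cross-entropy measured in bits;
% the first term acts as a complexity penalty.
\min_{\theta}\;
  \underbrace{L(\theta)}_{\text{model bits}}
  \;+\;
  \underbrace{\textstyle\sum_{i} -\log_2 p_\theta(y_i \mid x_i)}_{\text{label bits}}
```

As model capacity grows, the label-bits term can shrink toward the data's intrinsic compressibility, which is the sense in which such a loss "becomes provably best as model resources grow."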