
Charles Wang (@charleswangb)
Followers: 2K | Following: 4K | Media: 1K | Statuses: 12K
Bio/Medicine/Health AI. Transform life and the world. Complexity—Universality—Regenerativity—Transformation—Progress
Silicon Valley, CA
Joined June 2009
Add c, m, e, or h (computational, machine, emulated, homomorphic) to AI understanding, reasoning, thinking, etc. H is the most appropriate for GenAI, which navigates morphodynamic space. With this core characteristic made explicit, there is no more misattribution of homeodynamics, teleodynamics, or a combination of the two to AI.
Let me repeat: LLMs have NO understanding, reasoning, world modeling, agency, sentience, consciousness…as any biological being does in the real world. Yet this does NOT tarnish their immense usefulness—one of the most powerful things invented so far: <homomorphy ⇌ context>
RT @charleswangb: @bengoertzel Cool. Intelligence should be engineered in ways that its primary mode can never be exclusively centralized; …
If you are wondering where these Ts come from, look at this “tiny market”, $125T, and get a sense of why we need that much compute:
My tweets are not cheap; some are worth 7 trillion dollars each if they are engineered into distributed platforms, marketplaces, and ecosystems to systematically see through illusion and deep into reality, to inexhaustibly cultivate meaning in both life and cosmic connections.
If UTM is a multi-century startup, NVDA at $4T is still an MVP. PMF in value creation impacting people’s daily lives will bring it to $20T. In growth and scaling, $100T. In a mature business phase ready for the “managerial class”, a thrust to the $1000T summit. Still early, very early.
At what point might this become salient: NVDA vs US market cap. NVDA vs world market cap.
RT @charleswangb: Let me repeat: LLMs have NO understanding, reasoning, world modeling, agency, sentience, consciousness…as any biological…
This understanding is nontrivial and should be borne in mind at all times. Right expectations entail proper actions. If not, we will face dilemmas like this, risking boxing up or restricting a novel technology that has enormous upside potential:
I worry that so much discussion of AI risks and alignment overlooks the rather large elephant in the room: creativity and open-endedness. Policy makers and gatekeepers need to understand two competing forces that no one seems to talk about: (1) there is a massive economic…
That’s right, hallucination isn’t in itself a problem when humans take the central role in meaning- and sense-making. It is a BIG problem when an LLM is imposed as a “ministry of truth”; it is not one, as it has NO understanding whatsoever beyond just navigating the homomorphic space.
More and more I use language models for reasoning and opinions, not to learn facts. For example, I ask about arguments for and against a situation, or an opinion with a long-form justification. As a result, I rarely worry about hallucinations anymore.