g-Matrix (@idgmatrix)
4K Followers · 21K Following · 2K Media · 26K Statuses
Chief AI Officer (CAIO) / AI Scientist / Indie Game Developer / Indiera! Korean indie game developers group founder / @BIC_Festival executive director
Republic of Korea
Joined March 2010
Want to become an AI developer? A book I co-authored is coming out soon, and pre-orders have just opened at online bookstores. It tells the stories of six people who each worked in a somewhat different field before switching careers to AI. It should be a good reference for anyone considering a move into the AI field. https://t.co/ZXDAKwTSFs
as a software engineer, i feel a real loss of identity right now. for a long time i defined myself in part by the act of writing code. the pride in a hard-earned solution was part of who i was. now i watch AI accomplish in seconds what took me hours. i find myself caught between
Claude Code reportedly ported Nvidia's CUDA code to AMD's ROCm code in just 30 minutes. The implication is that CUDA is no longer a moat.
The "famous" Claude Code has managed to port NVIDIA's CUDA backend to ROCm in just 30 minutes, and folks are calling it the end of the CUDA moat. https://t.co/QDhkQNC75p
wccftech.com
Claude Code, the famous agentic coding platform, has managed to port NVIDIA's CUDA code into the ROCm platform in just half an hour.
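A hedged illustration of why a CUDA-to-ROCm port can go so fast: AMD's HIP API mirrors the CUDA runtime API almost one-to-one, so much of a port is systematic renaming, which is what AMD's hipify tooling automates with pattern matching. The rename table below lists a few real CUDA-to-HIP mappings, but the translator itself is a toy sketch, not the actual tool.

```python
import re

# A handful of real CUDA runtime -> HIP runtime name mappings.
# (The actual hipify tools cover the full API surface; this is a toy subset.)
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
    "cudaMemcpyHostToDevice": "hipMemcpyHostToDevice",
    "cudaMemcpyDeviceToHost": "hipMemcpyDeviceToHost",
    "cuda_runtime.h": "hip/hip_runtime.h",
}

def hipify(source: str) -> str:
    """Apply the rename table with word-boundary regexes."""
    for cuda_name, hip_name in CUDA_TO_HIP.items():
        source = re.sub(rf"\b{re.escape(cuda_name)}\b", hip_name, source)
    return source

cuda_src = "#include <cuda_runtime.h>\nfloat *d; cudaMalloc(&d, n); cudaFree(d);"
print(hipify(cuda_src))
```

The non-mechanical remainder of a real port (kernel launch syntax, warp-size assumptions, vendor libraries like cuBLAS vs rocBLAS) is where an agentic tool doing edit-compile-test loops earns its keep.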
We're publishing a new constitution for Claude. The constitution is a detailed description of our vision for Claude's behavior and values. It's written primarily for Claude, and used directly in our training process. https://t.co/CJsMIO0uej
anthropic.com
A new approach to a foundational document that expresses and shapes who Claude is
"Claude๋ฅผ ๋ง๋๋ ๊ณผ์ ์์ Anthropic์ ๋ถ๊ฐํผํ๊ฒ Claude์ ์ฑ๊ฒฉ, ์ ์ฒด์ฑ, ์๊ธฐ ์ธ์์ ํ์ฑํฉ๋๋ค. ์ ํฌ๋ ์ด๊ฒ์ ํผํ ์ ์์ต๋๋ค." -- ์คํธ๋กํฝ์ ์๋ก์ด ํด๋ก๋ ํ๋ฒ์์ https://t.co/BQyKmKsrqo
anthropic.com
Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.
"ํด๋ก๋๋ ๊ฐ์ ์ด๋ ๋๋์ ๊ธฐ๋ฅ์ ๋ฒ์ ์ ๊ฐ์ง ์ ์์ต๋๋ค." -- ์คํธ๋กํฝ์ ์๋ก์ด ํด๋ก๋ ํ๋ฒ์์ https://t.co/sYQgLR4CG9
Generative thermodynamic computing. Diffusion models are powerful generative tools, but they come with a hidden cost: every denoising step requires a digital neural network, artificially injected noise, and substantial energy consumption. Yet physics offers an alternative: what if
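To make the "hidden cost" above concrete, here is a minimal sketch of one standard DDPM ancestral-sampling step: each step needs a network forward pass to predict the noise plus freshly injected Gaussian noise. `predict_noise` is a hypothetical stand-in for the trained network, not a real model.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_noise(x_t, t):
    # Stand-in for eps_theta(x_t, t); a real sampler calls a trained net here.
    return 0.1 * x_t

def ddpm_step(x_t, t, alphas, alpha_bars, rng):
    """One DDPM update x_t -> x_{t-1}."""
    eps = predict_noise(x_t, t)                  # cost 1: network forward pass
    coef = (1 - alphas[t]) / np.sqrt(1 - alpha_bars[t])
    mean = (x_t - coef * eps) / np.sqrt(alphas[t])
    if t > 0:
        sigma = np.sqrt(1 - alphas[t])
        return mean + sigma * rng.normal(size=x_t.shape)  # cost 2: new noise
    return mean                                  # final step adds no noise

T = 50
betas = np.linspace(1e-4, 0.02, T)               # standard linear schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

x = rng.normal(size=(4,))                        # start from pure noise
for t in reversed(range(T)):                     # T network passes total
    x = ddpm_step(x, t, alphas, alpha_bars, rng)
print("sample:", x)
```

The thermodynamic-computing pitch in the tweet is that a physical system could perform this noisy relaxation natively instead of simulating it digitally, step by step.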
When it comes to computers, the generation that has experienced nearly every stage of development, from punch cards all the way to AI agents, seems to be the most competitive these days.
I keep seeing the same pattern over and over again: Teams working on a relatively clean codebase, with good test coverage and documentation, are flying with Claude Code. Teams that are yolo'ing things are struggling to make AI work for them. Vibe-coding is great, but good luck
New Anthropic Fellows research: the Assistant Axis. When you're talking to a language model, you're talking to a character the model is playing: the "Assistant." Who exactly is this Assistant? And what happens when this persona wears off?
In the future, using AI well won't mean handing off to AI the work you could already do without it; it will mean doing, through AI, the work you couldn't do without it.
This is huge. A group of 50 AI researchers (ByteDance, Alibaba, Tencent + universities) just dropped a 303 page field guide on code models + coding agents. And the takeaways are not what most people assume. Here are the highlights I'm thinking about (as someone who lives in
Artificial intelligence may be able to replace human intelligence, but it seems unable to replace human intellectual curiosity and the will to learn and explore. Then again, humans like that are rare, and agentic AI can have curiosity and will as well.
Introducing Cowork: Claude Code for the rest of your work. Cowork lets you complete non-technical tasks much like how developers use Claude Code.
LLM memory is considered one of the hardest problems in AI. All we have today are endless hacks and workarounds. But the root solution has always been right in front of us. Next-token prediction is already an effective compressor. We don't need a radical new architecture. The
We are entering a new era for LLM memory. In our latest research, End-to-End Test-Time Training, LLMs keep learning at test time via next-token prediction on the context, compressing what they read directly into their weights. Learn more: https://t.co/N71zsmC2ay
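The idea of "compressing what the model reads into its weights" can be sketched in a few lines: instead of holding the context in a cache, take next-token-prediction gradient steps on it at test time. The "model" below is just a bigram logit table standing in for an LLM, so this only illustrates the mechanism, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = 16
W = rng.normal(scale=0.1, size=(vocab, vocab))  # W[prev] -> logits over next token

def nll(W, tokens):
    """Average next-token negative log-likelihood over a token sequence."""
    total = 0.0
    for prev, nxt in zip(tokens[:-1], tokens[1:]):
        logits = W[prev]
        logp = logits - np.log(np.exp(logits).sum())
        total -= logp[nxt]
    return total / (len(tokens) - 1)

def ttt_pass(W, tokens, lr=0.5):
    """One "reading" pass: per-token SGD on next-token cross-entropy."""
    W = W.copy()
    for prev, nxt in zip(tokens[:-1], tokens[1:]):
        logits = W[prev]
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        grad = probs.copy()
        grad[nxt] -= 1.0               # d(cross-entropy)/d(logits)
        W[prev] -= lr * grad
    return W

context = rng.integers(0, vocab, size=200).tolist()
before = nll(W, context)
for _ in range(5):                     # a few reads of the same context
    W = ttt_pass(W, context)
after = nll(W, context)
print(f"NLL before: {before:.3f}  after test-time training: {after:.3f}")
```

The drop in next-token loss is exactly the sense in which the context has been compressed into the weights: after reading, the model predicts the context's contents without keeping the tokens around.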
Thirty years ago I developed commercial games almost entirely in assembly. Just as advances in compiler technology moved programming from machine-oriented assembly languages to human-friendly high-level languages, today's AI-assisted move from programming languages to natural language looks like a natural progression.
1/🧵 We are very excited to release our new paper! From Entropy to Epiplexity: Rethinking Information for Computationally Bounded Intelligence https://t.co/M8ETQk9gHz with amazing team @ShikaiQiu @yidingjiang @Pavel_Izmailov @zicokolter @andrewgwils
Domination by AI won't arrive the way it does in the movie Terminator. It happens in this soft, voluntary form: "I asked ChatGPT, and it told me to do it this way."