Chester Jungseok Roh
@chester_roh
Followers
4K
Following
1K
Media
47
Statuses
3K
Chester Jungseok Roh / BFACTORY Founder & CEO
Seoul, Korea
Joined December 2010
5 minutes ago, @karpathy just dropped karpathy/jobs! he scraped every job in the US economy (342 occupations from BLS), scored each one's AI exposure 0-10 using an LLM, and visualized it as a treemap. if your whole job happens on a screen you're cooked. average score across
873
2K
11K
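The pipeline described above (score each occupation's AI exposure 0-10, then summarize for a treemap) can be sketched in a few lines. This is a toy illustration with made-up scores, not karpathy's actual code or the real BLS data:

```python
# Hypothetical occupation -> AI-exposure score (0-10), for illustration only.
occupations = {
    "Software Developers": 9.0,   # screen-bound work: high exposure
    "Accountants": 8.5,
    "Registered Nurses": 3.0,     # hands-on work: low exposure
    "Truck Drivers": 2.5,
}

def average_exposure(scores):
    """Mean AI-exposure score across all occupations."""
    return sum(scores.values()) / len(scores)

def screen_bound(scores, threshold=8.0):
    """Occupations whose work happens mostly on a screen (score >= threshold)."""
    return sorted(k for k, v in scores.items() if v >= threshold)

print(average_exposure(occupations))  # 5.75 for this toy data
print(screen_bound(occupations))
```

A real version would pull all 342 BLS occupations, call an LLM for each score, and feed the result to a treemap library; the summary step is the same.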
Australian tech entrepreneur Paul Conyngham explains how he used ChatGPT and AlphaFold to create a custom mRNA vaccine to treat his dog’s cancer tumors, spending $3,000 with no biology background. Unreal.
this is actually insane > be tech guy in australia > adopt cancer riddled rescue dog, months to live > not_going_to_give_you_up.mp4 > pay $3,000 to sequence her tumor DNA > feed it to ChatGPT and AlphaFold > zero background in biology > identify mutated proteins, match them to
219
3K
15K
It's already the era of the biohacker.
1
4
26
.@dylan522p lays out how we know the hard upper bound on how much compute can be produced annually by 2030: around 200 GW/year. That’s a crazy number (there’s about 20 GW of AI deployed in the world right now), but it’s nowhere near enough to satisfy Sam/Elon/Dario/Demis’s
31
58
639
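The gap between the two figures above can be put in back-of-envelope terms. The ramp below is my own toy assumption, not Dylan's model: ~20 GW deployed today, annual production growing to the ~200 GW/year hard cap by 2030:

```python
# Back-of-envelope on the compute figures above (hypothetical ramp, GW/year).
deployed_gw = 20.0  # rough AI compute deployed worldwide today
production = {2026: 40, 2027: 70, 2028: 110, 2029: 150, 2030: 200}  # assumed

for year, added in production.items():
    deployed_gw += added  # cumulative deployed compute, ignoring retirements

print(deployed_gw)  # 590.0 GW cumulative under these assumptions
```

Even under this generous ramp, cumulative compute grows roughly 30x in five years, which is the scale the thread argues still falls short of the labs' ambitions.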
Personally, I prefer Codex over Claude Code. Just a matter of taste.
📘 Codex Best Practices, Korean-version PDF https://t.co/myJoTfwDgE The Codex best-practices post that went up yesterday on OpenAI Developers is really well organized, so I made a Korean version. It covers: 1. A strong first run: context and prompts 2. Plan hard tasks first
4
19
102
Just in time, I now have something to look through at leisure this Sunday.
Nemotron 3 Super is here — 120B total / 12B active, Hybrid SSM Latent MoE, designed for Blackwell. Truly open: permissive license, open data, open training infra. See analysis on @ArtificialAnlys Details in thread 🧵below:
0
0
4
Now it's the Chinese frontier models' turn to debut.
Hunter Alpha → Deepseek V4 Healer Alpha → Kimi K3 A new DeepSeek model has already appeared in the official app. Its context window is ≥1M, and according to earlier leaks, DeepSeek V4 may have ≥1T parameters. Kimi K2.5 has a context window of 262,144, exactly the same as
0
4
10
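The context-window figures quoted above are worth a quick sanity check: 262,144 is exactly 2^18, and a "≥1M" window is conventionally 2^20 = 1,048,576 tokens (the power-of-two reading is my assumption for the 1M figure):

```python
# Sanity check on the context-window sizes mentioned above.
kimi_k25_ctx = 262_144
assert kimi_k25_ctx == 2 ** 18   # 262,144 is exactly 2^18

one_m_ctx = 2 ** 20              # a "1M" window typically means 1,048,576
print(one_m_ctx)
```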
A CS student at MIT finished his final semester with a 4.0 GPA. I found his NotebookLM workflow buried in a Reddit thread at 2am. He deleted it an hour later. Here's exactly what he was doing. He never just uploaded lecture slides and asked for a summary. His first prompt was
62
391
3K
Friction. Every service in today's app ecosystem makes its profit from friction. The players who won out through various contests use their media power to carve out their margin points, in the form of ads or sponsored offers you are forced to see at certain points in a workflow. The direction AI is developing in takes the existing
The most underrated way AI will change society is this: It will destroy patience for friction. Once people get used to systems that can explain, summarize, edit, compare, negotiate, organize, generate, and guide in seconds, they will stop tolerating broken bureaucracies,
1
9
39
Same story as what I said yesterday. Solve everything by recasting it as a search problem that spends compute. == It comes down to whether you can give RL a reward signal. == "Environment scaling," which turns the non-verifiable into the verifiable, is the key. Obvious, but the prerequisites for reaching this stage are truly enormous.
The recipe behind today’s frontier reasoning models is surprisingly similar to AlphaGo: 1) Imitate large amounts of human data 2) Scale inference compute to reason better (back then it was Monte Carlo Tree Search, today it's Chain of Thought) 3) Use RL to go beyond imitation
0
5
33
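The "verifiable reward" idea above reduces to something very small in code. This is my own toy sketch, not any lab's pipeline: a checker that scores an answer 0/1 is exactly the reward signal RL needs, and sampling many candidates against it is search-over-compute:

```python
# Toy illustration of a verifiable reward plus best-of-N search (hypothetical).
def verifier(problem, answer):
    """Reward = 1.0 if the answer matches ground truth, else 0.0."""
    return 1.0 if answer == problem["solution"] else 0.0

def best_of_n(problem, candidates):
    """Spend compute on search: return the first candidate the verifier accepts."""
    for ans in candidates:
        if verifier(problem, ans) == 1.0:
            return ans
    return None  # no candidate verified

problem = {"question": "12 * 12", "solution": 144}
print(best_of_n(problem, [140, 142, 144, 150]))  # 144
```

"Environment scaling" is then the hard part the tweet points at: building verifiers like this for domains where no cheap ground-truth check exists yet.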
Technology always leaves this kind of trajectory in its wake. No need to look far. Substitute influencer = vibe coder and software engineer = journalist, and what happened to the media industry will play out the same way in the software industry. It harms the existing incumbents, but since total wealth grows overall, evolution's
The former CEO of Google just described how one programmer runs an AI agent from 7 PM to 4 AM. He wakes up, eats breakfast, and reviews what got invented overnight. Eric Schmidt says what's happening right now is "mind boggling". Schmidt says the very best programmers have
0
6
35