Volodymyr Kyrylov (@darkproger)

Followers: 3K · Following: 19K · Media: 779 · Statuses: 16K

Technical Staff at OpenAI. AI student from USI/ETH. Donate https://t.co/GDSkWG30ZS

San Francisco, CA
Joined April 2008

Volodymyr Kyrylov (@darkproger) · 2 years
Happy to release Accelerated Scan, a kernel library for first-order parallel associative scans in vanilla @PyTorch, Triton 2.2.0, and CUDA C++. pip install accelerated-scan 🧵
5 replies · 39 reposts · 267 likes
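
For context on what the library computes: a first-order parallel associative scan evaluates the linear recurrence h[t] = a[t] * h[t-1] + b[t] across a sequence, parallelized over the time dimension. The sketch below is a minimal pure-PyTorch reference for that recurrence, not the library's kernels; the commented accelerated_scan.warp import is an assumption about the package layout, so check the project README for the actual API.

```python
import torch

def reference_scan(gates: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
    """Sequential reference for the first-order recurrence
        h[t] = gates[..., t] * h[t-1] + tokens[..., t]
    over the last dimension. Shapes: (batch, dim, seqlen).
    This is the operation the accelerated kernels parallelize."""
    h = torch.zeros_like(tokens[..., 0])
    out = torch.empty_like(tokens)
    for t in range(tokens.shape[-1]):
        h = gates[..., t] * h + tokens[..., t]
        out[..., t] = h
    return out

# Hypothetical usage of the package itself (assumed API, verify against the README):
# from accelerated_scan.warp import scan   # CUDA kernel; the tweet says Triton and
# out = scan(gates, tokens)                # vanilla PyTorch variants also exist

batch, dim, seqlen = 2, 8, 16
gates = torch.rand(batch, dim, seqlen)    # decay factors in [0, 1)
tokens = torch.randn(batch, dim, seqlen)  # inputs
print(reference_scan(gates, tokens).shape)  # torch.Size([2, 8, 16])
```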

Volodymyr Kyrylov (@darkproger) · 10 days
this time is the charm
Quoting OpenAI (@OpenAI) · 10 days
GPT-5.2 Thinking evals
1 reply · 0 reposts · 17 likes

Volodymyr Kyrylov (@darkproger) · 14 days
macapptree is an amazing tool for screen perception in gpt-oss!
Quoting Mariya Hirna šŸ‡ŗšŸ‡¦ (@HirnaMariya) · 14 days
Wild NeurIPS moment: @darkproger from @OpenAI told me he uses our open-source macapptree as his go-to tool for parsing macOS accessibility 🤯 Made my day!
0 replies · 1 repost · 25 likes

Volodymyr Kyrylov (@darkproger) · 16 days
Excited to be talking about gpt-oss today!
Quoting Daria Soboleva āœˆļø NeurIPS (@dmsobol) · 1 month
I am excited to be organizing the 8th scaling workshop at @NeurIPSConf this year! Dec 5-6 | 5-8pm PT | Hard Rock Hotel San Diego. Co-organized by @cerebras, @Mila_Quebec, and @mbzuai. Register:
3 replies · 2 reposts · 23 likes

Aaron Lou (@aaron_lou) · 21 days
The Strategic Explorations team @OpenAI is looking to recruit researchers interested in working on the next frontier of language modeling! Feel free to reach out to me by email. @darkproger and I will also be at NeurIPS to connect and discuss in person.
13 replies · 20 reposts · 322 likes

Volodymyr Kyrylov (@darkproger) · 29 days
it feels so nice to start sweeping once the implementation is not buggy any more
0 replies · 0 reposts · 9 likes

Rohan Pandey (@khoomeik) · 2 months
as of 5 minutes ago, our gpt-oss implementation is merged into torchtitan! thanks to all the work by @jianiw_wang @__tianyu at @pytorch making it clean & scalable for the community ā¤ļø i hope y'all play around with training gpt-oss, it's great for its sparsity & reasoning
Quoting Rohan Pandey (@khoomeik) · 3 months
periodic ā¤ļø open-source! for example, we’ve been collaborating with the @PyTorch team to build the highest-MFU gpt-oss training implementation (includes thinky sinky flexattn). here’s a few SFT runs of gpt-oss-20b & 120b, where i get ~24% MFU for 20b and ~8% for 120b
9 replies · 7 reposts · 193 likes

Volodymyr Kyrylov (@darkproger) · 2 months
To authenticate Codex on a Spark box, do: scp -r .codex vol@spark-abcd.local: (assuming it’s already working on your box with a screen and you are vol).
0 replies · 0 reposts · 6 likes

Shiven Sinha (@shiven_sinha) · 4 months
LLMs are winning IOI golds & crushing code gen, but can they verify correctness? In Feb, our benchmark saw single-digit scores with o3-mini. We re-ran our evals with the latest open models: GPT-OSS gets 21.6% at demonstrating bugs in code! Progress āœ… But verification's still hard.
3 replies · 11 reposts · 80 likes

Volodymyr Kyrylov (@darkproger) · 4 months
Result:
huggingface.co
0 replies · 0 reposts · 3 likes

Volodymyr Kyrylov (@darkproger) · 4 months
gpt-oss-20b with medium effort measures up to Gemini 2.5 Pro on Ukrainian competitive programming. Thanks to anonymous-researcher-ua for running the experiment and developing the benchmark.
1 reply · 3 reposts · 22 likes

vLLM (@vllm_project) · 5 months
šŸ‘€ we care a lot about correctness, ran many evals and stared at many tensors to compare them. numerics of vLLM on hopper should be solid and verified! if you run into any correctness issue on vLLM, we would love to know and debug them!
Quoting Romain Huet (@romainhuet) · 5 months
Heads-up for developers trying gpt-oss: performance and correctness can vary a bit across providers and runtimes right now due to implementation differences. We’re working with inference providers to make sure gpt-oss performs at its best everywhere, and we’d love your feedback!
5 replies · 29 reposts · 322 likes

Lily Liu (@eqhylxx) · 5 months
Yes, I’m very sure vllm is correct — we spent quite a bit of time on that. 🄹
Quoting vLLM (@vllm_project) · 5 months
šŸ‘€ we care a lot about correctness, ran many evals and stared at many tensors to compare them. numerics of vLLM on hopper should be solid and verified! if you run into any correctness issue on vLLM, we would love to know and debug them!
0 replies · 9 reposts · 145 likes

Volodymyr Kyrylov (@darkproger) · 5 months
correctness takes time! Stay patient
Quoting Romain Huet (@romainhuet) · 5 months
Heads-up for developers trying gpt-oss: performance and correctness can vary a bit across providers and runtimes right now due to implementation differences. We’re working with inference providers to make sure gpt-oss performs at its best everywhere, and we’d love your feedback!
0 replies · 1 repost · 13 likes

Yang Song (@DrYangSong) · 5 months
Very excited to see this model released to the open-source community. It's still hard to believe that our latest techniques allow it to be so incredibly powerful yet so remarkably small.
Quoting Volodymyr Kyrylov (@darkproger) · 5 months
super excited to have contributed to gpt-oss. We have put a lot of love into both training the model and making the developer examples, check them out:
3 replies · 2 reposts · 135 likes

Volodymyr Kyrylov (@darkproger) · 5 months
HealthBench is the coolest eval I got to run yet. Reproduce it here:
Quoting Karan Singhal (@thekaransinghal) · 5 months
OpenAI’s new gpt-oss models are our ā€œhealthiestā€ models pound-for-pound. šŸ’„ The 120b model outperforms all our other frontier models on HealthBench (GPT-4o, o1, o4-mini) except o3, which it nearly matches despite being much smaller. Even healthier models to come soon! šŸ‘‡
0 replies · 1 repost · 9 likes

Volodymyr Kyrylov (@darkproger) · 5 months
the model is very smart! after the release we found that the model scores 82.2 on GPQA with tools if we improve answer extraction
Quoting Aidan Clark (@_aidan_clark_) · 5 months
gpt-oss is our new open-weight model family! the bigger one runs on a single GPU, you can run the small one on your laptop. Go install it right now, seriously! Telling your laptop to do something and watching it happen made me feel the AGI like nothing since ChatGPT.
0 replies · 1 repost · 84 likes

Eric Wallace (@Eric_Wallace_) · 5 months
Today we release gpt-oss-120b and gpt-oss-20b—two open-weight LLMs that deliver strong performance and agentic tool use. Before release, we ran a first of its kind safety analysis where we fine-tuned the models to intentionally maximize their bio and cyber capabilities 🧵
109 replies · 354 reposts · 3K likes

Volodymyr Kyrylov (@darkproger) · 5 months
super excited to have contributed to gpt-oss. We have put a lot of love into both training the model and making the developer examples, check them out:
github.com
gpt-oss-120b and gpt-oss-20b are two open-weight language models by OpenAI - openai/gpt-oss
9 replies · 9 reposts · 183 likes