Kevin Stone
@kevinleestone
Followers: 2K · Following: 320 · Media: 4 · Statuses: 38
Research @ OpenAI; previously at FAIR, TRI, and Google, working on LLMs, RL, and robotics.
California, USA
Joined April 2008
This effort should be very interesting as we push the reasoning performance of 🍓 even further. This role is a good fit for someone with a strong engineering background and good ML intuitions.
.@OpenAI is hiring ML engineers for a new multi-agent research team! We view multi-agent as a path to even better AI reasoning. Prior multi-agent experience isn't needed. If you'd like to research this area with @kevinleestone and me fill out this form:
3 replies · 1 repost · 51 likes
Contrary to the point below, I really believe AI is going to dramatically expand our cognitive abilities. Let me share a personal experience to show you what I mean: last night, I spent over four hours, way past my usual bedtime, brainstorming with GPT-4o and especially o1
105 replies · 166 reposts · 1K likes
I couldn’t agree more with this! I just had OpenAI GPT o1 work with me to write a major cancer treatment project, and in less than an hour, it was phenomenal and saved me many days of work! That’s worth a lot of Big Mac meals, though I would strongly advise you NOT to eat those☺️
‘The average price of a Big Mac meal, which includes fries and a drink, is $9.29.’ For the price of two Big Mac meals a month you get access to ridiculously powerful machine intelligence, capable of high-tier programming and PhD-level knowledge. People don’t talk about this absurdity enough.
4 replies · 18 reposts · 187 likes
@emollick Thank you for highlighting this! I think it shows the amazing potential these models have as research assistants. I wish I had had it for that 10-month span; I could have done a lot more actual research!
1 reply · 1 repost · 25 likes
Proud to release o1-preview to the world. Now that we have started to crack the challenge of getting models to “think” we are able to get large improvements on complex tasks by just letting them think harder.
7 replies · 14 reposts · 121 likes
Congrats Llama team! Very impressive results especially at the 70b scale.
0 replies · 0 reposts · 10 likes
To better enable the community to build on our work — and contribute to the responsible development of LLMs — we've published further details about the architecture, training compute, approach to fine-tuning & more for Llama 2 in a new paper. Full paper➡️ https://t.co/GlY2a1wKMk
34 replies · 535 reposts · 2K likes
✈️ Just landed in Hawaii 🌴 to present two cool projects at #ICML2023 🚀 Masked Trajectory Models (w/ @philippswu, @arjunmajum, @kevinleestone, @yixin_lin_ , @IMordatch, @pabbeel) 📚 LfS Revisited (w/ @ncklashansen, @haosu_twitr, @HarryXu12, @xiaolonw et al.) Details in 🧵👇
3 replies · 2 reposts · 25 likes
Thrilled to release Llama 2 today ( https://t.co/lguwcpVPQp), our next-gen open-source LLM. Eager to see how the community will use and extend it. So grateful for the chance to work with such an amazing team and for Meta's resources and support to pull this off.
0 replies · 4 reposts · 22 likes
You'll soon see lots of "Llama just dethroned ChatGPT" or "OpenAI is so done" posts on Twitter. Before your timeline gets flooded, I'll share my notes: ▸ Llama-2 likely costs $20M+ to train. Meta has done an incredible service to the community by releasing the model with a
163 replies · 1K reposts · 5K likes
Huge day indeed for AI and LLMs, congrats to Meta 👏 This is now the most capable LLM available directly as weights to anyone from researchers to companies. The models look quite strong, e.g. Table 4 in the paper: MMLU is a good benchmark to look at; the 70B model is just below GPT-3.5. But
This is huge: Llama-v2 is open source, with a license that authorizes commercial use! This is going to change the landscape of the LLM market. Llama-v2 is available on Microsoft Azure and will be available on AWS, Hugging Face and other providers Pretrained and fine-tuned
61 replies · 497 reposts · 4K likes
LLaMa-2 from @MetaAI is here! Open weights, free for research and commercial use. Pre-trained on 2T tokens. Fine-tuned too (unlike v1). 🔥🔥🔥 Let's gooo... https://t.co/jEAV2dmxOG The paper lists the amazing authors who worked night and day to make this happen. Be sure to thank
26 replies · 178 reposts · 1K likes
Check out some great work from our intern @mzubairirshad. Real-time category-level pose estimation and shape completion from RGB-D. #real2sim #icra2022
Super excited to share my internship work at @ToyotaResearch on category-level 3D object understanding and single-shot real2sim asset creation, accepted at #ICRA2022! https://t.co/QALESlkQMz
@GTrobotics @ICatGT @mlatgt @ieee_ras_icra ⬇️(1/6)
0 replies · 0 reposts · 3 likes
We published more details on the learned stereo system we use on our robots. We have found it more useful than existing active/passive depth sensors, especially on shiny surfaces, which are common in home environments.
7 replies · 41 reposts · 371 likes
ImageNet is the new CIFAR! My students made FFCV ( https://t.co/QWUdL5hRxS), a drop-in data loading library for training models *fast* (e.g., ImageNet in half an hour on 1 GPU, CIFAR in half a minute). FFCV speeds up ~any existing training code (no training tricks needed) (1/3)
29 replies · 372 reposts · 2K likes
Efficient Geometry-aware 3D Generative Adversarial Networks abs: https://t.co/YG9Tu6wqaB project page: https://t.co/7FhPb8jyiA demonstrate sota 3D-aware synthesis with FFHQ and AFHQ Cats, among other experiments
5 replies · 172 reposts · 741 likes
Introducing “Neural Descriptor Fields: SE(3)-Equivariant Object Representations for Manipulation”! https://t.co/QHXW8nKt2z (w/ video!) NDFs are an object representation for robotic manipulation enabling imitation of pick-and-place tasks with pose generalization guarantees (1/n)
8 replies · 101 reposts · 556 likes
Come check out our poster (#15) at 9 am today to learn how we achieve sim2real transfer across our fleet of home robots! #CoRL2021 w/ @ToyotaResearch @berkeley_ai
0 replies · 5 reposts · 29 likes
Excited to talk more about how simple low-cost simulation can transfer to real robots! We will be virtually presenting our poster Tue 11/9 at 5:15p GMT. #sim2real #CoRL2021 Paper: https://t.co/CYZWE7dFr8 Code:
github.com
Code release for our paper, "SimNet: Enabling Robust Unknown Object Manipulation from Pure Synthetic Data via Stereo" - ToyotaResearchInstitute/simnet
Excited to present our work at #CoRL2021 this week. Our paper shows a promising approach to achieve sim2real transfer for perception. w/ @ToyotaResearch @berkeley_ai (1/7) https://t.co/MAbYaXaPrY
1 reply · 3 reposts · 6 likes