Luke Melas-Kyriazi Profile
Luke Melas-Kyriazi

@lukemelas

Followers
1K
Following
263
Media
1
Statuses
34

Building @cursor_ai | Rhodes Scholar, Oxford University PhD (Visual Geometry Group) | Prev. Meta Research

San Francisco, CA
Joined October 2020
@lukemelas
Luke Melas-Kyriazi
8 days
It's a large MoE trained with RL, more details in the blog post!
cursor.com
Built to make you extraordinarily productive, Cursor is the best way to code with AI.
0
1
18
@lukemelas
Luke Melas-Kyriazi
8 days
Very excited to release our first agent model!
@cursor_ai
Cursor
8 days
Introducing Cursor 2.0. Our first coding model and the best way to code with agents.
4
3
33
@TheRohanVarma
Rohan Varma
29 days
people: "Cursor isn't enterprise ready" Jensen:
85
117
2K
@lukemelas
Luke Melas-Kyriazi
3 months
Enjoy GPT-5 and the new CLI! The team pushed hard to launch the CLI today. It's in beta and we'd love to hear your feedback.
@cursor_ai
Cursor
3 months
Cursor is now in your terminal! It’s an early beta. Access all models. Move easily between your CLI and editor.
0
1
18
@lukemelas
Luke Melas-Kyriazi
4 months
Message me if you’re at ICML and want to chat about coding models!
@rajko_rad
Rajko Radovanović
4 months
We ( @lukemelas @_awettig @cursor_ai @a16z ) have ~20 more open spots for a small HH tomorrow evening at ICML. If you are doing strong work on reasoning models, infra, or code generation, please submit an RSVP and we will confirm if we can accommodate! 🔗👇
2
0
42
@lukemelas
Luke Melas-Kyriazi
5 months
Very excited to be launching Cursor 1.0! I don't think the team has ever been this excited about a launch!
@cursor_ai
Cursor
5 months
Cursor 1.0 is out now! Cursor can now review your code, remember its mistakes, and work on dozens of tasks in the background.
9
4
124
@cursor_ai
Cursor
6 months
New Tab model, 1M+ context windows, and a preview of our background agent
121
206
4K
@cursor_ai
Cursor
6 months
Cursor is now free for students. Enjoy!
2K
4K
41K
@amanrsanger
Aman Sanger
8 months
Cursor trained a SOTA embedding model for semantic search. It substantially outperforms the out-of-the-box embeddings and rerankers used by competitors! You can feel the difference when using Agent!
47
48
954
@msfeldstein
Michael Feldstein
9 months
One of the things I've been working on at @cursor_ai is beefing up Cursor Rules. We want Agent to be as powerful as the most knowledgeable person on your team. Here's how we use them at Cursor. 🧵
66
121
2K
@cursor_ai
Cursor
1 year
We are excited to announce that @SupermavenAI is joining Cursor! Together, we will continue to build Cursor into a research and product powerhouse. (1/5)
173
224
3K
@lukemelas
Luke Melas-Kyriazi
1 year
Excited to see what people build with Cursor + o1!
@cursor_ai
Cursor
1 year
OpenAI’s new o1 models are available in Cursor! We’ve found o1 to be excellent at well-specified, reasoning-intense problems. We still recommend sonnet/4o for most tasks. We’re initially rolling out the models with usage-based pricing but will iterate as rate limits increase.
4
1
15
@minchoi
Min Choi
1 year
It's been just about a week since Cursor got massive attention. And people can't stop building with it. 10 wild examples:
96
437
4K
@filippos_kok
Filippos Kokkinos
1 year
🌟 I'm excited to present IM-3D today at #ICML! 🚀 Joint work with @lukemelas, Andrea Vedaldi, and Natalia Neverova. Join us at 1:30 PM, booth 2708! 💡
@_akhaliq
AK
2 years
Meta presents IM-3D Iterative Multiview Diffusion and Reconstruction for High-Quality 3D Generation paper page: https://t.co/daS3wOynQP Most text-to-3D generators build upon off-the-shelf text-to-image models trained on billions of images. They use variants of Score
1
1
6
@seb_ruder
Sebastian Ruder
2 years
True Zero-shot MT Some thoughts on translating to truly unseen languages, Gemini 1.5's results on the MTOB long-context MT dataset, and similarities to L2 language acquisition. https://t.co/6U0DmmozSq
newsletter.ruder.io
Teaching Machines a New Language Like Humans
0
14
71
@emollick
Ethan Mollick
2 years
Fascinating benchmark in the Google Gemini Pro 1.5 report: given the 500+ available pages of reference material on a language with 200 speakers (not available online), the AI is able to translate with close to the ability of humans using the same material. https://t.co/TNNcbncpKL
10
50
240
@lukemelas
Luke Melas-Kyriazi
2 years
It's really impressive to see that, when using the entire grammar book, the model's performance approaches that of the human baseline -- and it's a very strong baseline!
0
0
0
@lukemelas
Luke Melas-Kyriazi
2 years
Thank you for this! This sort of evaluation and adoption is exactly what we were hoping for when we were writing MTOB. We are also very excited to see that the new Gemini models can process the entire grammar book in context!
@JeffDean
Jeff Dean
2 years
I want to draw people's attention to the ultra low resource translation use case for Kalamang highlighted here (and in the tech report at https://t.co/PV4ho60bLl). In context language learning from a single grammar book! This is easy to miss in my longer thread about Gemini 1.5
1
0
0
@JeffDean
Jeff Dean
2 years
Kalamang Translation One of the most exciting examples in the report involves translation of Kalamang. Kalamang is a language spoken by fewer than 200 speakers in western New Guinea in the east of Indonesian Papua ( https://t.co/HEGWvHpTnA). Kalamang has almost no online
13
123
717