Oleksandr Maksymets
@o_maksymets
Followers: 594 · Following: 2K · Media: 26 · Statuses: 333
Researcher working on Llama 🦙 at Meta AI; previously Embodied AI at FAIR. Ph.D. in Computer Science.
San Francisco, CA
Joined April 2009
@_kainoa_ @thepaulmcvay @arjunmajum @abha_gejji @DanielDugas14 @iamsashasax @vinceberges @HenaffMikael @ayushjain1144 @AngCao3 @mkalakrishnan @mido_assran @_krishna_murthy @rpartsey @aravindr93 @icute15, Sergio Arnaud, Ada Martin, Philip Thomas, Nicolas Ballas, Mike Rabbat
🐾 In the wild: deployed on a Boston Dynamics Spot 🤖, 8️⃣/🔟 successful “find & pick the plush toy” trials in a multi-room apartment, no manual resets. Check out our demo! 🎥
📊 Data matters: releasing L3DD: 🗂️ 1,346 scenes, 🏷️ 131,641 language-grounded 3D masks across ScanNet, ScanNet++, and ARKitScenes, 5× the venue coverage of prior sets.
🏆 Results: new SOTA on SR3D + NR3D + ScanRefer: 61.7/49.4% @ 25/50 IoU (prev. best 58.5/52.5). 🚀 Beats GPT-4o & other VLM agents by >20 pts while using only raw sensor point clouds.
💡 Core idea: we pre-train a 3D-JEPA encoder that masks & predicts in latent space, turning lifted CLIP + DINO point-cloud features into contextualized scene representations: no meshes, no proposals, just sensor RGB-D.
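The mask-and-predict-in-latent-space recipe can be sketched in a few lines of numpy. This is a hypothetical toy, not the actual 3D-JEPA: the real model uses transformer encoders over per-point features lifted from RGB-D; here linear maps stand in for the encoders and all shapes are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up shapes: N points, D-dim lifted CLIP+DINO features, K-dim latents.
N, D, K = 256, 32, 16
feats = rng.normal(size=(N, D))        # per-point features from 2D foundation models

W_ctx = rng.normal(size=(D, K)) * 0.1  # context encoder (stand-in for a transformer)
W_tgt = W_ctx.copy()                   # target encoder, updated only by EMA
W_prd = np.eye(K)                      # predictor head

mask = rng.random(N) < 0.5             # mask roughly half the points

# Target latents come from the full, unmasked point cloud via the EMA encoder.
z_tgt = feats @ W_tgt

# The context encoder sees only visible points; masked slots are zeroed out.
ctx_in = np.where(mask[:, None], 0.0, feats)
z_prd = (ctx_in @ W_ctx) @ W_prd

# Loss: predict the latents of the masked points only.
# No pixel or point reconstruction anywhere, exactly the JEPA idea.
loss = np.mean((z_prd[mask] - z_tgt[mask]) ** 2)

# EMA update of the target encoder (momentum 0.99) to prevent collapse.
W_tgt = 0.99 * W_tgt + 0.01 * W_ctx
```

A gradient step on `W_ctx` and `W_prd` (omitted here) would complete one training iteration; only the latent-prediction objective and the EMA target are the point of the sketch.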
At #ICML2025, 16 Jul, 11 AM: we present Meta Locate 3D, a model for accurate object localization in 3D environments. Meta Locate 3D can help robots accurately understand their surroundings and interact more naturally with humans. Demo, model, paper: https://t.co/8ZhV21TDxq
Introducing Meta Locate 3D: a model for accurate object localization in 3D environments. Learn how Meta Locate 3D can help robots accurately understand their surroundings and interact more naturally with humans. You can download the model and dataset, read our research paper,
New work from the Robotics team at @AIatMeta. Want to be able to tell your robot to bring you the keys from the table in the living room? Try out Locate 3D! Interactive demo: https://t.co/aS9WPPmhcF Model, code & dataset: https://t.co/oMWc32VrH9
llama-4-scout-17b-16e-instruct prompt: write a p5.js script that shows a ball bouncing inside a spinning hexagon. The ball should be affected by gravity and friction, and it must bounce off the rotating walls realistically
🚨 o3-mini crushed DeepSeek R1 🚨 "write a Python program that shows a ball bouncing inside a spinning hexagon. The ball should be affected by gravity and friction, and it must bounce off the rotating walls realistically"
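Both prompts hinge on the same physics step: bouncing a ball off a wall that is itself rotating. A minimal Python sketch of that step follows; the function name, coefficients, and 2D setup are my own illustration, not output from any of the models mentioned.

```python
def reflect_off_rotating_wall(pos, vel, normal, omega,
                              restitution=0.9, friction=0.98):
    """Reflect a ball off a wall rotating about the origin at angular speed omega.

    pos, vel, normal are 2D tuples; normal is the unit inward wall normal at the
    contact point. The wall's surface moves at omega x r, so we reflect in the
    wall's rest frame to make the bounce look right while the hexagon spins.
    """
    # Surface velocity of the rotating wall at the contact point (omega x r in 2D).
    wall_v = (-omega * pos[1], omega * pos[0])

    # Ball velocity relative to the moving wall.
    rvx, rvy = vel[0] - wall_v[0], vel[1] - wall_v[1]

    vn = rvx * normal[0] + rvy * normal[1]  # signed normal component
    if vn >= 0:
        return vel                          # already separating: no bounce

    # Split into normal and tangential parts; damp each separately.
    nx, ny = vn * normal[0], vn * normal[1]
    tx, ty = rvx - nx, rvy - ny
    rvx = -restitution * nx + friction * tx
    rvy = -restitution * ny + friction * ty

    # Transform back to the world frame.
    return (rvx + wall_v[0], rvy + wall_v[1])
```

With `omega = 0` and a floor normal of `(0, 1)`, a ball falling straight down at speed 1 rebounds upward at `restitution * 1 = 0.9`; with a spinning hexagon, the `wall_v` term is what lets the moving walls "fling" the ball, which is the part naive solutions usually get wrong. The per-frame integration (gravity, `pos += vel * dt`) and the hexagon's per-edge normals are left to the surrounding sketch.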
👀 Accelerate performance of @AIatMeta Llama 4 Maverick and Llama 4 Scout using our optimizations in #opensource TensorRT-LLM. ⚡ ✅ NVIDIA Blackwell B200 delivers over 42,000 tokens per second on Llama 4 Scout, over 32,000 tokens per second on Llama 4 Maverick. ✅ 3.4X more
BREAKING: Meta's Llama 4 Maverick just hit #2 overall - becoming the 4th org to break 1400+ on Arena!🔥 Highlights: - #1 open model, surpassing DeepSeek - Tied #1 in Hard Prompts, Coding, Math, Creative Writing - Huge leap over Llama 3 405B: 1268 → 1417 - #5 under style control
Today is the start of a new era of natively multimodal AI innovation. Today, we’re introducing the first Llama 4 models: Llama 4 Scout and Llama 4 Maverick — our most advanced models yet and the best in their class for multimodality. Llama 4 Scout • 17B-active-parameter model
Introducing our first set of Llama 4 models! We’ve been hard at work doing a complete re-design of the Llama series. I’m so excited to share it with the world today and mark another major milestone for the Llama herd as we release the *first* open source models in the Llama 4
Llama 4 is a milestone — fast, smart, and open-source. It’s been incredible working on the vision side of this launch. Try it now at https://t.co/ETRvMc2OSR Let’s build the future of AI — together.
llama.com: Discover Llama 4's class-leading AI models, Scout and Maverick. Experience top performance, multimodality, low costs, and unparalleled efficiency.
We’re releasing:
• Llama 4 Scout: 17B params, MoE, native vision, 10M+ context, runs on a single GPU
• Llama 4 Maverick: best multimodal model in its class, beats GPT-4o & Gemini Flash
• Llama 4 Behemoth (preview): already outperforming GPT-4.5 & Claude on STEM
Six months ago, I joined the Llama Multimodal team to work on the vision side of the model. Today, the team is launching Llama 4, redesigned from scratch and natively multimodal. This is a huge step forward for open-source AI.
Best hackathon location ever: USS Hornet Defense Tech Hackathon this weekend in Alameda, CA 🚢⚙️! Innovators and engineers will collaborate to advance cutting-edge technologies, including initiatives with ties to Ukrainian defense systems 🇺🇦. https://t.co/z68EjHfBAV
I’m excited to share a new AI Coding Competition from Meta and Microsoft Research building on Meta’s annual Hacker Cup! The most capable LLMs to date will be challenged to solve these questions. We invite major players in the code generation space to join.
Hacker Cup – one of the preeminent coding competitions started an AI track w/ Meta & Microsoft problems are hardddd – only a handful of engineers reliably solve them – requires deep algorithmic knowledge, reasoning, planning and fast execution – to solve 5 problems in 30