Ethan Reid
@EthanReidMorro
237 Followers · 4K Following · 9 Media · 141 Statuses
ML Researcher @moondreamai
Seattle, WA
Joined December 2022
To find the best Sovereign AI ideas, we invited the builders who are actually defining the stack: @TheAhmadOsman @VoidAsuka @yacinelearning @vikhyatk @NoCommas. We’re also giving you more time to cook so you can enjoy the Thanksgiving weekend 🦃 New deadline: Dec 7. Share your…
45 replies · 65 reposts · 426 likes
Tried “turkey slice” with our new segmentation skill yesterday. It still amazes us how it just works. Grateful for everyone using Moondream. Passing 2M downloads a month means a lot.
9 replies · 7 reposts · 80 likes
When we generate SVG masks autoregressively, the starting point is a major source of ambiguity. Under standard cross-entropy training, this choice becomes muddled and the model struggles to decide where to begin. There are many valid ways to trace a path that yield a visually…
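To make the ambiguity concrete, here is a minimal sketch (not from the thread itself): every cyclic rotation of a closed polygon path renders the exact same mask but tokenizes differently, so cross-entropy against a single reference trace penalizes the other, equally valid starting points.

```python
# Minimal illustration of starting-point ambiguity in autoregressive
# path generation; the square polygon is a made-up example.
polygon = [(10, 10), (90, 10), (90, 90), (10, 90)]

def cyclic_rotations(path):
    """All rotations of a closed path; each traces the identical region."""
    return [path[i:] + path[:i] for i in range(len(path))]

# 4 distinct token sequences, 1 identical mask: cross-entropy against a
# single reference trace assigns full loss to the other 3 valid traces.
for trace in cyclic_rotations(polygon):
    print(trace)
```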
@JulienBlanchon @EthanReidMorro @moondreamai did you try ablating RL vs directly running backprop through a segmentation loss?
0 replies · 0 reposts · 6 likes
we’ve scaled RL for a 100B+ MoE model, achieving SOTA benchmark results for its size. more important than the final model checkpoint is making the frontier infra required to train models like this accessible to everyone. details on the full training recipe, our open source…
Introducing INTELLECT-3: Scaling RL to a 100B+ MoE model on our end-to-end stack
Achieving state-of-the-art performance for its size across math, code and reasoning
Built using the same tools we put in your hands, from environments & evals, RL frameworks, sandboxes & more
20 replies · 32 reposts · 340 likes
Universal Segmentation is here… paper coming very soon.
We’re introducing Segmentation. SVG masks from prompt, points, or box. SOTA on benchmarks. https://t.co/RLVxgM7vL8
2 replies · 1 repost · 28 likes
LMAOOOO this dataset has been the gold standard for a decade
Announcing RefCOCO-M, a refreshed RefCOCO with pixel-accurate masks and the problematic prompts removed. Better data for better evaluation. https://t.co/BqayflYv2v
32 replies · 429 reposts · 11K likes
Shockingly, some pretty wild samples made it into the original RefCOCO; here are a few of the 46 vulgar examples we removed.
1 reply · 0 reposts · 4 likes
The original hand-drawn masks are often coarse and missing parts. We fix this by re-segmenting the RefCOCO validation split with an ensemble of models, discarding 47% of the masks due to unrecoverable quality.
1 reply · 0 reposts · 1 like
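The thread doesn’t spell out the discard criterion, so the following is only a sketch of one plausible implementation: keep a re-segmented mask only when the ensemble members agree pairwise above an IoU threshold.

```python
import numpy as np

def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    """IoU between two boolean masks."""
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum() / union) if union else 0.0

def ensemble_agrees(masks, thresh=0.9):
    """Keep a sample only if every pair of ensemble masks overlaps heavily.
    thresh is a hypothetical cutoff; the actual pipeline may differ."""
    return all(mask_iou(masks[i], masks[j]) >= thresh
               for i in range(len(masks))
               for j in range(i + 1, len(masks)))
```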
RefCOCO is no longer an accurate measure of segmentation quality. Models now produce better masks than the benchmark. We’re releasing RefCOCO-M, a modernization of this classic eval.
6 replies · 2 reposts · 13 likes
Very impressed with @luminal. Really fast stuff guys!
0 replies · 0 reposts · 2 likes
🚀 Moondream 3 Preview is now live on fal!
🧠 9B params (2B active): faster + smarter
🖼️ Real-world vision: drones, robotics, med-imaging, retail
⚙️ 64-expert MoE + 32K context for structured reasoning
🔍 Native pointing, improved OCR, fine-tuning ready
7 replies · 30 reposts · 324 likes
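For anyone wanting to try it from Python, a hedged sketch using fal’s client; the endpoint id and argument keys below are assumptions, so check the model page on fal for the canonical snippet.

```python
# Hypothetical call via the fal Python client (pip install fal-client).
import fal_client

result = fal_client.subscribe(
    "fal-ai/moondream3-preview",  # assumed endpoint id, not confirmed
    arguments={
        "image_url": "https://example.com/photo.jpg",  # assumed key
        "prompt": "Describe this image.",              # assumed key
    },
)
print(result)
```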
moondream3-preview is out on Hugging Face
vision language model with a mixture-of-experts architecture (9B total parameters, 2B active), delivering SOTA visual reasoning while still being efficient and deployment-friendly
vibe coded a quick app for it in anycoder
3 replies · 33 reposts · 209 likes
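A minimal loading sketch with transformers; the repo id and the query() helper are assumed from earlier Moondream releases, so defer to the Hugging Face model card.

```python
# Hedged example: repo id and query() API carried over from prior
# Moondream releases; the preview's actual interface may differ.
from transformers import AutoModelForCausalLM
from PIL import Image

model = AutoModelForCausalLM.from_pretrained(
    "moondream/moondream3-preview",  # assumed repo id
    trust_remote_code=True,          # Moondream ships custom modeling code
    device_map="auto",
)

image = Image.open("photo.jpg")
print(model.query(image, "What is in this image?"))
```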
A year ago, this would have been a long shot. Dense object detection is something that took more than just scaling data/compute to improve... excited to share more about this in the coming weeks!
0 replies · 0 reposts · 5 likes
RL is really sample efficient. We ran a small experiment on Geoguessr. With just 16 images per country, Moondream performs as well as Claude Sonnet. With the full dataset, it beats Sonnet by a decent margin while being orders of magnitude cheaper to run.
23 replies · 35 reposts · 628 likes
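The tweet doesn’t describe the training recipe, but the simplest reward such an RL run could use is exact match on the country label; this toy version is an assumption, not the actual Moondream setup.

```python
# Toy reward for a Geoguessr-style RL fine-tune (illustrative only).
def country_reward(prediction: str, label: str) -> float:
    """1.0 if the predicted country matches the label, else 0.0."""
    return float(prediction.strip().casefold() == label.strip().casefold())

assert country_reward(" France ", "france") == 1.0
assert country_reward("Spain", "France") == 0.0
```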
Follow-up. Say you’re parking in NYC and you wanna know how many parking tickets the street you’re parking on has while you park. Well, guess what: you can get this info directly to your face! We use @nycgov and @moondreamai and @brilliantlabsAR to give a HUD of the number of…
2 replies · 5 reposts · 16 likes
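The @nycgov half of that pipeline is presumably NYC Open Data; a rough sketch of counting violations on a street through the Socrata SODA API, with the dataset id and column names as unverified assumptions.

```python
# Hypothetical query against NYC Open Data (Socrata SODA API); the dataset
# id and column names are guesses, so check data.cityofnewyork.us.
import requests

DATASET = "nc67-uf89"  # assumed: Open Parking and Camera Violations
url = f"https://data.cityofnewyork.us/resource/{DATASET}.json"

resp = requests.get(url, params={
    "$select": "count(summons_number)",            # assumed column
    "$where": "upper(street_name) = 'W 42ND ST'",  # assumed column
}, timeout=10)
resp.raise_for_status()
print(resp.json())  # e.g. [{"count_summons_number": "1234"}]
```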
@mntruell then it’s Intelligence Unbound with @PrimeIntellect: talks from @samsja19 @mackaygg @nehadesaraju @dylan522p @vincentweisser @willccbb @beffjezos (but he said not to publish parts of his talk, so) enjoy!
4 replies · 6 reposts · 33 likes