Ethan Reid

@EthanReidMorro

237 Followers · 4K Following · 9 Media · 141 Statuses

ML Researcher @moondreamai

Seattle, WA
Joined December 2022
@Gradient_HQ
Gradient
5 days
To find the best Sovereign AI ideas, we invited the builders who are actually defining the stack.
@TheAhmadOsman @VoidAsuka @yacinelearning @vikhyatk @NoCommas

We’re also giving you more time to cook so you can enjoy the Thanksgiving weekend 🦃

New deadline: Dec 7

Share your
45
65
426
@moondreamai
moondream
3 days
Tried “turkey slice” with our new segmentation skill yesterday. It still amazes us how it just works. Grateful for everyone using Moondream. Passing 2M downloads a month means a lot.
9
7
80
@EthanReidMorro
Ethan Reid
4 days
When we generate SVG masks autoregressively, the starting point is a major source of ambiguity. Under standard cross-entropy training, this choice becomes muddled and the model struggles to decide where to begin. There are many valid ways to trace a path that yield a visually
@sentientcar
Sentient Car
4 days
@JulienBlanchon @EthanReidMorro @moondreamai @EthanReidMorro did you try ablating RL vs directly running backprop through a segmentation loss?
0
0
6
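The post above describes the starting-point ambiguity in prose; here is a minimal, self-contained sketch of the same point (illustrative code, not Moondream's): a closed polygon rasterizes to exactly the same mask no matter which vertex the trace starts from, yet every rotation of the vertex list is a different token sequence, so token-level cross-entropy rewards only one of many equally valid traces.

```python
# Illustrative sketch (not Moondream code): a closed polygon rasterizes to the
# same mask regardless of which vertex the trace starts from, but each rotation
# of the vertex list is a different target sequence under cross-entropy.
from PIL import Image, ImageDraw

def rasterize(polygon, size=(64, 64)):
    """Fill a closed polygon into a 1-bit mask."""
    img = Image.new("1", size, 0)
    ImageDraw.Draw(img).polygon(polygon, fill=1)
    return list(img.getdata())

square = [(10, 10), (50, 10), (50, 50), (10, 50)]

# Every rotation of the vertex list is a distinct "trace" ...
rotations = [square[i:] + square[:i] for i in range(len(square))]
# ... yet all of them rasterize to exactly the same mask.
masks = [rasterize(r) for r in rotations]
assert all(m == masks[0] for m in masks)
print(f"{len(rotations)} distinct traces, 1 identical mask")
```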
@johannes_hage
Johannes Hagemann
5 days
we've scaled RL for a 100B+ MoE model, achieving SOTA benchmark results for its size

more important than the final model checkpoint is making the frontier infra required to train models like this accessible to everyone

details on the full training recipe, our open source
@PrimeIntellect
Prime Intellect
5 days
Introducing INTELLECT-3: Scaling RL to a 100B+ MoE model on our end-to-end stack

Achieving state-of-the-art performance for its size across math, code and reasoning

Built using the same tools we put in your hands, from environments & evals, RL frameworks, sandboxes & more
20
32
340
@EthanReidMorro
Ethan Reid
5 days
Universal Segmentation is here… paper coming very soon.
@moondreamai
moondream
5 days
We’re introducing Segmentation. SVG masks from prompt, points, or box. SOTA on benchmarks. https://t.co/RLVxgM7vL8
2
1
28
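Since the masks announced above are SVG, comparing them against pixel ground truth on a benchmark presumably involves rasterizing the vector output first. A rough sketch under that assumption (simple polygon paths only; curved paths would need a real SVG renderer):

```python
# Hedged sketch: scoring an SVG polygon mask against a pixel ground truth with
# IoU. Handles only simple "M ... L ... Z" paths; curves need a real renderer.
import re
import numpy as np
from PIL import Image, ImageDraw

def svg_path_to_mask(path, size):
    """Parse straight-line path commands into vertices and fill them."""
    points = [(float(x), float(y))
              for x, y in re.findall(r"(-?\d+\.?\d*)[ ,](-?\d+\.?\d*)", path)]
    img = Image.new("1", size, 0)
    ImageDraw.Draw(img).polygon(points, fill=1)
    return np.array(img, dtype=bool)

def iou(a, b):
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

pred = svg_path_to_mask("M 10,10 L 50,10 L 50,50 L 10,50 Z", (64, 64))
gt = np.zeros((64, 64), dtype=bool)
gt[10:51, 10:51] = True  # hand-made square ground truth
print(f"IoU = {iou(pred, gt):.3f}")
```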
@kvdozer
Dozer🚜
14 days
LMAOOOO this dataset has been the gold standard for a decade
@moondreamai
moondream
14 days
Announcing RefCOCO-M, a refreshed RefCOCO with pixel-accurate masks and the problematic prompts removed. Better data for better evaluation. https://t.co/BqayflYv2v
32
429
11K
@EthanReidMorro
Ethan Reid
14 days
RefCOCO-M is now available on Hugging Face under an MIT license:
huggingface.co
1
0
2
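For reference, loading the release with the Hugging Face `datasets` library would look roughly like this; the exact repo id and split names aren't given in the thread, so the ones below are placeholders:

```python
# Hedged sketch: loading the dataset with Hugging Face `datasets`.
# The repo id below is a placeholder; check the actual release for the real one.
from datasets import load_dataset

ds = load_dataset("moondream/refcoco-m", split="validation")  # placeholder id
print(ds)            # inspect features: image, referring expression, mask, ...
print(ds[0].keys())  # exact field names depend on the release
```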
@EthanReidMorro
Ethan Reid
14 days
Shockingly, some pretty wild samples made it into the original RefCOCO; here are a few of the 46 vulgar examples we removed.
1
0
4
@EthanReidMorro
Ethan Reid
14 days
The original hand-drawn masks are often coarse and missing parts. We fix this by re-segmenting the RefCOCO validation split with an ensemble of models, discarding 47% of the masks due to unrecoverable quality.
1
0
1
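The post doesn't spell out how "unrecoverable quality" was decided; one common way to automate that kind of filter is to keep a sample only when the ensemble's masks agree with each other. A minimal sketch of that idea (not the actual RefCOCO-M pipeline):

```python
# Minimal sketch of one possible quality filter (not the actual RefCOCO-M
# pipeline): keep a sample only if the ensemble's masks agree above an IoU
# threshold, and take the pixelwise majority vote as the refreshed mask.
import numpy as np

def pairwise_iou(masks):
    """Mean IoU over all pairs of ensemble masks."""
    ious = []
    for i in range(len(masks)):
        for j in range(i + 1, len(masks)):
            inter = np.logical_and(masks[i], masks[j]).sum()
            union = np.logical_or(masks[i], masks[j]).sum()
            ious.append(inter / union if union else 1.0)
    return float(np.mean(ious))

def merge_or_discard(masks, min_agreement=0.9):
    """Return a majority-vote mask if the ensemble agrees, else None."""
    if pairwise_iou(masks) < min_agreement:
        return None  # unrecoverable quality: discard the sample
    votes = np.mean([m.astype(np.float32) for m in masks], axis=0)
    return votes > 0.5

# Toy example: near-identical masks pass, a disagreeing set is dropped.
base = np.zeros((32, 32), dtype=bool)
base[8:24, 8:24] = True
noisy = base.copy()
noisy[8, 8] = False
assert merge_or_discard([base, noisy, base]) is not None
assert merge_or_discard([base, ~base, base]) is None
```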
@EthanReidMorro
Ethan Reid
14 days
RefCOCO is no longer an accurate measure of segmentation quality. Models now produce better masks than the benchmark. We’re releasing RefCOCO-M, a modernization of this classic eval.
6
2
13
@EthanReidMorro
Ethan Reid
2 months
Very impressed with @luminal. Really fast stuff guys!
@matthewjgunton
Matthew Gunton
2 months
it's a good model sir
0
0
2
@fal
fal
2 months
🚀 Moondream 3 Preview is now live on fal!
🧠 9B params (2B active): faster + smarter
🖼️ Real-world vision: drones, robotics, med-imaging, retail
⚙️ 64-expert MoE + 32K context for structured reasoning
🔍 Native pointing, improved OCR, fine-tuning ready
7
30
324
@daniel0xFC
@daniel0xFC
2 months
like, c'mon maaan
3
2
20
@_akhaliq
AK
2 months
moondream3-preview is out on Hugging Face

vision language model with a mixture-of-experts architecture (9B total parameters, 2B active) delivering sota visual reasoning while still being efficient and deployment-friendly

vibe coded a quick app for it in anycoder
3
33
209
@EthanReidMorro
Ethan Reid
2 months
A year ago, this would have been a long shot. Dense object detection is something that took more than just scaling data/compute to improve... excited to share more about this in the coming weeks!
@Dorialexander
Alexander Doria
2 months
small models are the frontier now.
0
0
5
@vikhyatk
vik
5 months
RL is really sample efficient. We ran a small experiment on Geoguessr. With just 16 images per country, Moondream performs as well as Claude Sonnet. With the full dataset, it beats Sonnet by a decent margin while being orders of magnitude cheaper to run.
23
35
628
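The post doesn't say how the reward was defined. A common choice for geolocation tasks, and one plausible reading of the Geoguessr setup mentioned above, is a score that decays with the great-circle distance between the guessed and true coordinates; the sketch below is an assumption for illustration, not Moondream's actual reward:

```python
# Hedged sketch of one plausible geolocation reward (an assumption, not
# necessarily the one used for Moondream): score decays exponentially with
# great-circle distance between the guess and the true coordinates.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def reward(guess, truth, scale_km=2000.0):
    """1.0 for a perfect guess, decaying toward 0 as the error grows."""
    d = haversine_km(*guess, *truth)
    return math.exp(-d / scale_km)

# Guessing Paris when the answer is Berlin still earns partial credit.
print(round(reward((48.86, 2.35), (52.52, 13.40)), 3))
```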
@0x1F9ED
AJ
5 months
Follow up. Say you’re parking in NYC and you wanna know how many parking tickets the street you’re parking on has while you park. Well guess what you can get this info directly to your face! We use @nycgov and @moondreamai and @brilliantlabsAR to give a HUD of the number of
@0x1F9ED
AJ
5 months
Hacking the @brilliantlabsAR frames today for make benefit of @nycgov using NYC open data.
2
5
16
@EthanReidMorro
Ethan Reid
6 months
Big things come in small packages: Moondream now runs with 42% less memory.
@moondreamai
moondream
6 months
🚨 New Moondream model just dropped

Our 2B model, now in 4-bit QAT

Open source. Ready today.
0
0
2
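Back-of-the-envelope arithmetic for where a figure like "42% less memory" can come from: the 2B weights shrink roughly 4x going from fp16 to 4-bit, but KV cache, activations, and any tensors kept in higher precision do not, so the end-to-end saving lands well short of 75%. The overhead figure below is an assumption for illustration, not a measurement:

```python
# Illustrative arithmetic only (assumed overheads, not measurements): why 4-bit
# weights don't translate into a full 75% end-to-end memory reduction.
params_b = 2.0                      # ~2B parameters
fp16_weights_gb = params_b * 2.0    # 2 bytes/param   -> ~4.0 GB
int4_weights_gb = params_b * 0.5    # 0.5 bytes/param -> ~1.0 GB
other_gb = 2.0                      # assumed KV cache, activations, fp16-kept tensors

before = fp16_weights_gb + other_gb  # ~6.0 GB
after = int4_weights_gb + other_gb   # ~3.0 GB
print(f"reduction ≈ {100 * (1 - after / before):.0f}%")  # ~50% under these assumptions
```

With a smaller fixed overhead the saving approaches 75%; with a larger one it drops toward the low 40s, which is why the realized reduction depends on context length and which tensors stay in higher precision.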
@swyx
swyx 🔜 NeurIPS + #DevWritersRetreat
7 months
@mntruell then it's Intelligence Unbound with @PrimeIntellect talks from @samsja19 @mackaygg @nehadesaraju @dylan522p @vincentweisser @willccbb @beffjezos (but he said not to publish parts of his talk so) enjoy!
4
6
33