FocoosAI Profile
FocoosAI

@FocoosAI

Followers
11
Following
0
Media
2
Statuses
11

Joined February 2025
@FocoosAI
FocoosAI
3 months
Last booth day for FocoosAI at CVPR! Our CEO and CTO will be happy to introduce you to our platform and code! See you there! P.S. We have amazing t-shirts! Come before they run out!
0
0
3
@FocoosAI
FocoosAI
3 months
🔥🔥
@NaveenManwani17
naveen manwani
3 months
🚨 CVPR 2025 Highlight Paper Alert 🚨
➡️ Paper Title: SAMWISE: Infusing Wisdom in SAM2 for Text-Driven Video Segmentation
🌟 A few pointers from the paper:
🎯 Referring Video Object Segmentation (RVOS) relies on natural language expressions to segment an object in a video clip.
0
0
1
@FocoosAI
FocoosAI
3 months
RT @fcdl94: We released our core training and model code as open source. We’re on a path where computer vision models will require hours, no…
github.com
🚀 Lightning-fast computer vision models. Fine-tune SOTA models with just a few lines of code. Ready for cloud ☁️ and edge 📱 deployment.
0
1
0
@FocoosAI
FocoosAI
3 months
We’ll be in Nashville in less than a week for #CVPR2025! @gabrosi3 will show you whether it is better to show (visual prompting) or tell (open vocabulary) a model what to segment for better performance!
@gabrosi3
Gabriele
3 months
Should you SHOW 🖼️ or TELL 📝 a model what to segment? 🤔
Our new #benchmark compares visual vs textual prompts for semantic segmentation across 14 datasets spanning 7 domains! Check out our findings ⬇️
0
0
1
@FocoosAI
FocoosAI
6 months
How do you reduce your costs when using LLMs? Let us hear your thoughts! To read the full article or subscribe to our (new) Substack:
0
0
1
@FocoosAI
FocoosAI
6 months
Do you really need the biggest model, long prompts, and verbose outputs? 🤷‍♂️ Smart choices = lower costs, higher efficiency, and reduced environmental impact. Let’s do more with less. 💡🔋 #AI #LLMs #Sustainability
1
1
1
@FocoosAI
FocoosAI
6 months
AI’s growth = rising energy footprint 🌍. How can we optimize LLM usage?
✅ Use smaller, efficient models (LLaMA 8B, Phi-4 14B, Qwen 7B)
✅ Minimize input tokens (prompt engineering matters!)
✅ Limit output length (shorter responses = lower cost) - just ask the LLM to keep it short
1
0
0
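The advice in the tweet above can be sketched numerically. This is a back-of-the-envelope cost model, not any provider's real pricing: the per-token prices are made-up placeholders, and the "tokenizer" is a naive whitespace split rather than a real subword tokenizer.

```python
# Rough sketch: prompt length and output cap drive per-request cost.
# Prices are illustrative placeholders; len(prompt.split()) is a crude
# stand-in for a real tokenizer such as BPE.

def estimate_cost(prompt: str, max_output_tokens: int,
                  in_price: float = 1e-6, out_price: float = 3e-6) -> float:
    """Return an estimated dollar cost for one request."""
    input_tokens = len(prompt.split())  # naive token count
    return input_tokens * in_price + max_output_tokens * out_price

verbose = estimate_cost("Please could you kindly provide a very detailed "
                        "explanation of what tokenization is", 500)
concise = estimate_cost("Explain tokenization briefly", 100)
print(verbose > concise)  # shorter prompt + capped output costs less
```

The same trimming also cuts energy use, since fewer tokens means fewer (and cheaper) forward passes.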
@FocoosAI
FocoosAI
6 months
The cost in energy? ⚡
A medium-sized model (65B params) consumes ~3 Joules per token.
- 200 output tokens ≈ powering a 10W LED for a minute 💡
- Millions of tokens/hour = megawatts of energy
- ChatGPT’s ops cost millions per month (excluding training!)
1
0
0
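The LED comparison in the tweet above checks out arithmetically; a quick sketch of the numbers (the ~3 J/token figure is the tweet's rough estimate for a ~65B-parameter model, not a measured value):

```python
# Back-of-the-envelope check: 200 tokens at ~3 J/token vs a 10 W LED.
JOULES_PER_TOKEN = 3.0        # rough figure for a ~65B-parameter model
tokens = 200
energy_j = tokens * JOULES_PER_TOKEN      # total energy for the answer
led_watts = 10.0
led_seconds = energy_j / led_watts        # how long that runs a 10 W LED
print(energy_j, led_seconds)              # 600.0 60.0 -> one minute
```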
@FocoosAI
FocoosAI
6 months
Sounds fast, right? Not quite. Each token requires a full forward pass through the model, meaning:
- Bigger models = more computation
- Longer inputs = quadratic increase in cost
- Longer outputs = more forward passes through the model (aka more computation)
1
0
0
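The scaling claims above can be sketched with a toy cost model (arbitrary units, not a real profiler): self-attention compares every token with every other, so its work grows with the square of the sequence length, and each generated token repeats a forward pass over an ever-longer sequence.

```python
# Toy cost model for the two scaling effects named in the tweet.

def attention_cost(n_tokens: int) -> int:
    # pairwise token-to-token comparisons -> n^2 growth
    return n_tokens * n_tokens

def generation_cost(input_len: int, output_len: int) -> int:
    # one forward pass per generated token, each over a longer sequence
    total = 0
    for step in range(output_len):
        total += attention_cost(input_len + step)
    return total

print(attention_cost(1000) / attention_cost(500))  # 4.0: double the input, ~4x the attention cost
print(generation_cost(100, 100) > generation_cost(100, 50))  # longer outputs cost more
```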
@FocoosAI
FocoosAI
6 months
When you ask an LLM a question, it doesn’t “know” the answer. Instead, it:
1. Breaks the input into smaller pieces (tokens).
2. Computes relationships between the tokens with a Transformer model.
3. Predicts and generates one token at a time.
1
0
0
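The three steps above can be sketched as a toy loop. Everything here is illustrative: the "tokenizer" is a whitespace split and the "model" is a canned lookup table; a real LLM runs a full Transformer forward pass at each step of the loop.

```python
def tokenize(text: str) -> list[str]:
    # step 1: break the input into pieces (real systems use subword tokenizers)
    return text.split()

def toy_model(tokens: list[str]) -> str:
    # stand-in for steps 2+3: a real Transformer scores relationships
    # between all tokens, then predicts the next one; we fake it here.
    canned = {"What": "is", "is": "an", "an": "LLM?", "LLM?": "<eos>"}
    return canned.get(tokens[-1], "<eos>")

def generate(prompt: str, max_tokens: int = 10) -> list[str]:
    tokens = tokenize(prompt)
    for _ in range(max_tokens):        # one "forward pass" per new token
        nxt = toy_model(tokens)
        if nxt == "<eos>":             # stop token ends generation
            break
        tokens.append(nxt)
    return tokens

print(generate("What"))  # ['What', 'is', 'an', 'LLM?']
```

This one-token-at-a-time loop is exactly why longer outputs mean more forward passes, as the rest of the thread explains.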
@FocoosAI
FocoosAI
6 months
The Hidden Cost of LLMs: What Happens Under the Hood? A Thread 🧵. You’ve probably used a genAI assistant today, almost certainly powered by LLMs. But have you ever wondered what it takes to answer you? 🤔 It’s not just computation: it’s energy and money. Let’s break it down.
1
2
1