
OpenBMB (@OpenBMB)
OpenBMB (Open Lab for Big Model Base) aims to build foundation models and systems towards AGI.
Joined February 2022 · 4K Followers · 722 Following · 84 Media · 283 Statuses
🚀 Introducing MiniCPM-V 4.5 8B: pushing the boundary of multimodal AI!
~ SOTA VL capability: surpasses GPT-4o, Gemini 2.0 Pro, and Qwen2.5-VL 72B on OpenCompass!
~ "Eagle Eye" video: 96x visual token compression for high-refresh-rate and long-video understanding.
~ Controllable
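The "96x visual token compression" claim can be made concrete with some back-of-the-envelope arithmetic. The numbers below (frames per group, tokens per frame, tokens per group) are illustrative assumptions chosen so the ratio works out to 96x, not published MiniCPM-V 4.5 internals:

```python
# Illustrative sketch of a 96x visual token compression budget.
# All three constants are assumptions for the arithmetic, not model internals.
FRAMES_PER_GROUP = 6     # assumed: consecutive frames compressed jointly
TOKENS_PER_FRAME = 1024  # assumed: uncompressed visual tokens per frame
TOKENS_PER_GROUP = 64    # assumed: compressed tokens per frame group

# Compression ratio: uncompressed tokens in a group vs. compressed tokens.
ratio = FRAMES_PER_GROUP * TOKENS_PER_FRAME / TOKENS_PER_GROUP
print(ratio)  # 96.0

def compressed_tokens(num_frames: int) -> int:
    """Visual tokens a clip would cost under the assumed scheme."""
    groups = -(-num_frames // FRAMES_PER_GROUP)  # ceiling division
    return groups * TOKENS_PER_GROUP

# A 180-frame clip costs 30 groups * 64 tokens = 1920 tokens,
# instead of 180 * 1024 = 184,320 uncompressed.
print(compressed_tokens(180))  # 1920
```

Under this kind of budget, long or high-refresh-rate videos fit into a context window that would otherwise hold only a few uncompressed frames.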
Thanks @mervenoyann for sharing our work ❤️.
MiniCPM-V 4.5 is very good! 🤗 It comes with hybrid thinking: it decides when to think on its own 😍 It also handles high-res documents with odd aspect ratios and super-long videos efficiently 🙏🏻 See the hybrid results below ⤵️ Model is in the comments!
Huge thanks to @_akhaliq for sharing our work. Everyone is welcome to experience MiniCPM-V 4.5 on the Gradio app and to try out anycoder as well. I heard this was built with just two prompts. 🤩
huggingface.co
vibe coding a MiniCPM-V 4.5 @OpenBMB chat app in anycoder. MiniCPM-V 4.5 achieves an average score of 77.0 on OpenCompass, a comprehensive evaluation of 8 popular benchmarks. With only 8B parameters, it surpasses widely used proprietary models like GPT-4o-latest, Gemini-2.0 Pro,
🔥 Kudos to the FlagOS team for flawlessly adapting our community's MiniCPM-V-4; your dedication accelerates open research and inspires us all. Keep pushing boundaries; we're cheering you on! 🎯 FlagOS is a game-changing open-source stack that unifies heterogeneous AI chips and
Powered by the FlagRelease platform, which is backed by FlagOS (an AI system software stack), MiniCPM-V-4.0 @OpenBMB has been successfully deployed on Nvidia GPUs. It's a critical step forward in realizing rapid, cross-platform model deployment. #FlagOS #ModelDeployment #opensourceai
We're really grateful to @nodeshiftai for contributing the detailed MiniCPM-V 4.0 deployment tutorial. Everyone is welcome to try it out hands-on.
Smaller, Smarter, Faster. Meet MiniCPM-V 4.0. OpenBMB’s latest multimodal AI offers 4.1B parameters yet outperforms larger models like GPT-4.1-mini, delivering state-of-the-art image, multi-image, and video understanding.
thx @lucataco93.
Thank you very much for the adaptation of MiniCPM-V. Everyone is welcome to experience the edge-side VL model offline on @SecretAILabs.
🚀 MiniCPM-V 4.0 is HERE!. Experience next-level image analysis that runs 100% offline on your device. Your photos, your privacy, your control. Huge thanks to @OpenBMB for this incredible breakthrough! 🙏. #SecretAI #MiniCPM #OfflineAI #PrivateAI #VisionAI #LocalLLM
RT @teortaxesTex: Underrated model. OpenBMB remains the only lab with the distinction of «had Stanford bros steal and republish your stuff»….
RT @karminski3: 面壁 (ModelBest) just released MiniCPM-V-4. It's an image/video reasoning model that can interpret the content of images or videos. In the example video, a meme is uploaded and the model is asked where the joke is. The model has 4.1B total parameters, a nice size that local devices can run. Model link:
RT @prithiv_003: @mervenoyann MiniCPM-V4 ~ OpenCompass (Avg- 69.0) 🥶. Merged demo is available here: https://t.co/….
RT @mervenoyann: GPT-4.1-mini level model right in your iPhone 🤯. @OpenBMB released MiniCPM-V4, only 4B model surpassing GPT-4.1-mini in vi….