
TwelveLabs (twelvelabs.io)
@twelve_labs
Followers: 3K · Following: 2K · Media: 713 · Statuses: 2K
🎥 Building multimodal foundation models for video understanding: https://t.co/iKo4kmNv6p 👉 Join our community: https://t.co/1b53LYUCBc
Joined February 2022
In the 85th session of #MultimodalWeekly, we have 3 exciting presentations on multimodal state-space models, multimodal reasoning, and multi-grained video editing.
NYC, we’re coming for you 👀 TwelveLabs is proud to sponsor the Hacking Agents Hackathon, May 30–31, a 24hr AI sprint w/ @langflow_ai, @twilio, @digitalocean & more. 🛠️ Build with agents & video. ⚡ Lightning workshop by James, our Head of Dev Experience. 🎁 Prizes for best
In the 84th session of #MultimodalWeekly, we will explore how @twelve_labs and @superannotate work together to streamline the process of fine-tuning high-performing multimodal models.
Big moment for TwelveLabs at @awscloud Summit in Seoul 🇰🇷 Our CEO Jae Lee and Eng Director Esther Kim shared how we’re rethinking infra for video AI, with models that help orgs like MLSE cut editing time from 10 hrs to 9 mins 🤯. #TwelveLabs #VideoAI #AWSSummit
What if your app knew what was happening inside a video, scene by scene? We’re teaming up with @qdrant_engine to show how multimodal AI + vector search powers next-gen recommendations. 📍 May 22 | San Francisco 🔗 #VideoAI #TwelveLabs
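For context on the pattern the May 22 Qdrant session describes (scene-level video embeddings plus vector search for recommendations), here is a minimal Python sketch. It is an illustration under stated assumptions, not the demo from the event: it assumes scene embeddings are already produced by a multimodal video embedding model (for example the TwelveLabs Embed API), and the collection name, the embed helper arguments, and the 1024-dimension vector size are placeholders. Only the qdrant-client calls are real library API.

```python
# Hypothetical sketch: scene-level video recommendations with vector search.
# Assumes each video scene already has an embedding from a multimodal model;
# only the qdrant-client usage below is real library API.
import random

from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

VECTOR_SIZE = 1024  # assumption: dimensionality of the scene embeddings

client = QdrantClient(":memory:")  # in-memory instance, just for the sketch

# One collection, one point per video scene, cosine similarity between embeddings.
client.create_collection(
    collection_name="video_scenes",
    vectors_config=VectorParams(size=VECTOR_SIZE, distance=Distance.COSINE),
)


def index_scene(scene_id: int, embedding: list[float], video_id: str,
                start_sec: float, end_sec: float) -> None:
    """Upsert one scene embedding with a payload describing where it came from."""
    client.upsert(
        collection_name="video_scenes",
        points=[
            PointStruct(
                id=scene_id,
                vector=embedding,
                payload={"video_id": video_id, "start_sec": start_sec, "end_sec": end_sec},
            )
        ],
    )


def recommend_similar_scenes(query_embedding: list[float], limit: int = 5):
    """Return the indexed scenes whose embeddings are closest to the query."""
    return client.search(
        collection_name="video_scenes",
        query_vector=query_embedding,
        limit=limit,
    )


if __name__ == "__main__":
    # Stand-in embedding; in practice this would come from the embedding model.
    fake_embedding = [random.random() for _ in range(VECTOR_SIZE)]
    index_scene(0, fake_embedding, "video-123", start_sec=0.0, end_sec=12.5)
    hits = recommend_similar_scenes(fake_embedding, limit=1)
    print(hits[0].payload)
```

Cosine distance is the usual choice for comparing embedding vectors; the payload carries the video ID and timestamps so a hit can be mapped back to the exact scene to recommend.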
In the 83rd session of #MultimodalWeekly, we have an exciting presentation with a framework for enhancing video generation at inference time via a teacher model from @dohunlee1234 (@kaist_ai).
RT @qdrant_engine: 🚀 We’re back with another AI Builders event - this time in collaboration with @twelve_labs! Join us in SF next week for…
RT @langflow_ai: We are excited to announce @torcdotdev and @twelve_labs will be joining us in NYC on May 30-31 for our Hackathon! Join u…
In the 81st session of #MultimodalWeekly, we have an exciting presentation on learning generalist agents using multimodal foundation world models from @pietromazzaglia.
The webinar recording with @pietromazzaglia is up! Watch here: He discussed:
- Why training an embodied foundation model is hard
- Connecting and aligning a generative world model and a multimodal foundation model
- Reinforcement learning with model-based rewards
In the 82nd session of #MultimodalWeekly, we have an exciting presentation with a survey on test-time scaling in large language models from @silentspring49 and Qiyuan Zhang.