TwelveLabs (twelvelabs.io)

@twelve_labs

Followers: 3K · Following: 3K · Media: 879 · Statuses: 2K

🎥 Building multimodal foundation models for video understanding: https://t.co/iKo4kmNv6p 👉 Join our community: https://t.co/1b53LYUCBc

San Francisco
Joined February 2022
@twelve_labs
TwelveLabs (twelvelabs.io)
6 hours
📖 Read the full report to explore all 11 task categories and see how video AI is evolving from single answers to working partnerships. https://t.co/G94t5NoYfy What video workflows are you building? We'd love to hear how you're using Pegasus and what patterns you're seeing.
@twelve_labs
TwelveLabs (twelvelabs.io)
6 hours
The conclusion is clear: video AI is judged not by any one step in isolation, but by how well it collaborates across the entire workflow. From narrative generation to compliance checks, from sports analysis to technical review — Pegasus is helping users transform uncertainty
@twelve_labs
TwelveLabs (twelvelabs.io)
6 hours
Structured Extraction — Breaking videos into chapters, highlights, event logs, and comparison tables with precise temporal boundaries Reasoning & Evaluation — From sports analysis to compliance checks, Pegasus acts as an on-demand assistant that captures structure, performance,
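The tweet above doesn't show what a structured-extraction result looks like on the wire, so here is a minimal Python sketch of timestamp-anchored chapters and an event log. Every class and field name is illustrative, not the actual Pegasus output schema.

```python
from dataclasses import dataclass, field

# Hypothetical shapes for a "structured extraction" result: chapters and an
# event log, both anchored to temporal boundaries in seconds.

@dataclass
class Chapter:
    title: str
    start_sec: float   # chapter start, seconds from the beginning of the video
    end_sec: float     # chapter end, exclusive

@dataclass
class Event:
    label: str          # e.g. "logo appears", "goal scored"
    timestamp_sec: float
    evidence: str       # short description of what supports the event

@dataclass
class ExtractionResult:
    video_id: str
    chapters: list[Chapter] = field(default_factory=list)
    events: list[Event] = field(default_factory=list)

    def validate(self) -> None:
        """Check that chapters are ordered and non-overlapping."""
        ordered = sorted(self.chapters, key=lambda c: c.start_sec)
        for prev, nxt in zip(ordered, ordered[1:]):
            if nxt.start_sec < prev.end_sec:
                raise ValueError(f"chapters overlap: {prev.title!r} / {nxt.title!r}")

if __name__ == "__main__":
    result = ExtractionResult(
        video_id="demo-video",
        chapters=[Chapter("Intro", 0.0, 42.5), Chapter("Product demo", 42.5, 310.0)],
        events=[Event("logo appears", 3.2, "brand logo shown full screen")],
    )
    result.validate()
    print(result)
```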
@twelve_labs
TwelveLabs (twelvelabs.io)
6 hours
🎬 Here's what makes the 11 task categories so compelling: Information Retrieval — From simple Q&A to complex event tracking with timestamp-anchored evidence Content Creation — Narrative generation, tutorial scripts, and social media content that transforms raw footage into
@twelve_labs
TwelveLabs (twelvelabs.io)
6 hours
When combined with supporting modules like shot and scene detection, event segmentation, temporal alignment, confidence calibration, and iterative feedback — users can build goal-oriented, reliable, and scalable workflows that go far beyond a single answer. 🔗 To expose these
@twelve_labs
TwelveLabs (twelvelabs.io)
6 hours
💡 At Twelve Labs, we're building for this reality: Marengo provides high-fidelity multimodal embeddings for flexible retrieval and content search across video, audio, image, and text. Pegasus delivers video-to-text reasoning with summaries, descriptions, timestamped event
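As a rough sketch of how that division of labor composes, the snippet below ranks precomputed clip embeddings against a query embedding (the Marengo side) and hands the best-matching span to a video-to-text prompt (the Pegasus side). The `embed_query` and `describe_clip` helpers are placeholders for whatever TwelveLabs SDK or HTTP calls you actually use, not the official client API.

```python
from math import sqrt

def cosine(a: list[float], b: list[float]) -> float:
    """Plain cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def embed_query(text: str) -> list[float]:
    # Stand-in for a Marengo text-embedding call; returns a toy vector so the
    # example runs end to end. Swap in your real embedding client here.
    return [float(len(text) % 7), 1.0, 0.5]

def describe_clip(video_id: str, start: float, end: float, prompt: str) -> str:
    # Stand-in for a Pegasus video-to-text call scoped to one clip span.
    return f"[{video_id} {start:.1f}-{end:.1f}s] answer to: {prompt}"

def answer_over_clips(video_id: str, clip_index, question: str) -> str:
    """clip_index: list of (start_sec, end_sec, embedding) tuples built offline."""
    q = embed_query(question)
    # Retrieval step: rank clip embeddings against the question embedding.
    start, end, _ = max(clip_index, key=lambda clip: cosine(q, clip[2]))
    # Reasoning step: ask the video-to-text model about that span only.
    return describe_clip(video_id, start, end,
                         f"Answer with timestamped evidence: {question}")

if __name__ == "__main__":
    clips = [(0.0, 30.0, [1.0, 0.0, 0.0]), (30.0, 60.0, [0.2, 1.0, 0.4])]
    print(answer_over_clips("demo-video", clips, "When does the speaker mention pricing?"))
```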
@twelve_labs
TwelveLabs (twelvelabs.io)
6 hours
3. Agentic reasoning is increasingly needed at the clip level As requests grow more compound and multi-step, they demand precise frame spans, OCR snippets, speaker segments, and detection logs — requirements that can't be satisfied in a single pass. Addressing these cases calls
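One way to read clip-level agentic reasoning is as a plan-then-execute loop: break a compound request into per-clip sub-tasks, run each, and stop once enough evidence has been gathered. The sketch below shows only that control flow; `run_subtask` and the `enough` stopping rule are hypothetical hooks for whatever retrieval, OCR, or video-to-text calls each step needs.

```python
from dataclasses import dataclass

@dataclass
class SubTask:
    description: str           # e.g. "find the frame span where the contract is shown"
    clip: tuple[float, float]  # (start_sec, end_sec) to operate on

def run_subtask(task: SubTask) -> dict:
    # Hypothetical executor: in practice this would call search, OCR,
    # speaker diarization, or a video-to-text model, depending on the task.
    return {"task": task.description, "clip": task.clip, "evidence": "stub"}

def agentic_answer(subtasks: list[SubTask], enough) -> list[dict]:
    """Run sub-tasks one at a time until `enough(findings)` says to stop.

    `enough` stands in for whatever stopping rule you use (a rubric check,
    a confidence threshold, or another model call).
    """
    findings: list[dict] = []
    for task in subtasks:
        findings.append(run_subtask(task))
        if enough(findings):
            break  # no need to touch every clip once the question is answered
    return findings

if __name__ == "__main__":
    plan = [
        SubTask("locate the on-screen contract text", (12.0, 18.0)),
        SubTask("identify who is speaking while it is shown", (12.0, 18.0)),
    ]
    print(agentic_answer(plan, enough=lambda f: len(f) >= 2))
```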
@twelve_labs
TwelveLabs (twelvelabs.io)
6 hours
2. Hybrid combinations expect editor-ready structure Modern prompts bundle summarization, chapterization, normalized timecodes, key event extraction, and highlight proposals into a single flow. The output needs to be immediately ingestible by editing or content systems —
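Editor-ready output mostly comes down to timecodes an editing system will accept. A minimal sketch, assuming frame-accurate HH:MM:SS:FF timecodes at a fixed frame rate; the output layout is illustrative, not a format Pegasus is documented to emit.

```python
import json

def to_timecode(seconds: float, fps: float = 25.0) -> str:
    """Convert seconds into a frame-accurate HH:MM:SS:FF timecode string."""
    total_frames = round(seconds * fps)
    frames = total_frames % round(fps)
    total_seconds = total_frames // round(fps)
    h, rem = divmod(total_seconds, 3600)
    m, s = divmod(rem, 60)
    return f"{h:02d}:{m:02d}:{s:02d}:{frames:02d}"

def editor_ready(chapters, highlights, fps=25.0):
    """Bundle chapters and highlight proposals with normalized timecodes."""
    return {
        "chapters": [
            {"title": title, "in": to_timecode(a, fps), "out": to_timecode(b, fps)}
            for title, a, b in chapters
        ],
        "highlights": [
            {"in": to_timecode(a, fps), "out": to_timecode(b, fps), "reason": why}
            for a, b, why in highlights
        ],
    }

if __name__ == "__main__":
    doc = editor_ready(
        chapters=[("Cold open", 0.0, 31.4), ("Interview", 31.4, 402.0)],
        highlights=[(95.2, 103.8, "key quote from the guest")],
    )
    print(json.dumps(doc, indent=2))
```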
@twelve_labs
TwelveLabs (twelvelabs.io)
6 hours
1. Instructions are tied to the timeline Users don't just want summaries — they want summaries anchored to frame ranges, with chapter boundaries detected automatically and merge/split rules applied intelligently. Temporal constraints have become central to how people think
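Merge/split rules of the kind described here are straightforward to apply as post-processing: fold chapters that run too short into a neighbour and cut chapters that run too long. A minimal sketch, with thresholds chosen arbitrarily for illustration:

```python
def merge_short_chapters(bounds: list[tuple[float, float]], min_len: float = 20.0):
    """Merge any chapter shorter than `min_len` seconds into the previous one."""
    merged: list[tuple[float, float]] = []
    for start, end in bounds:
        if merged and (end - start) < min_len:
            prev_start, _ = merged[-1]
            merged[-1] = (prev_start, end)  # absorb the short chapter
        else:
            merged.append((start, end))     # first chapter is always kept as-is
    return merged

def split_long_chapters(bounds, max_len: float = 300.0):
    """Split any chapter longer than `max_len` seconds into equal-sized pieces."""
    out = []
    for start, end in bounds:
        length = end - start
        if length <= max_len:
            out.append((start, end))
            continue
        pieces = int(length // max_len) + 1
        step = length / pieces
        out.extend((start + i * step, start + (i + 1) * step) for i in range(pieces))
    return out

if __name__ == "__main__":
    raw = [(0.0, 12.0), (12.0, 340.0), (340.0, 360.0)]
    print(split_long_chapters(merge_short_chapters(raw)))
```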
@twelve_labs
TwelveLabs (twelvelabs.io)
6 hours
But what stood out most wasn't just the diversity of tasks — it was how users combine them into layered, hybrid workflows that merge summarization, extraction, comparison, and transformation, often within a single query. 🎯 Three critical patterns emerged that are shaping the
@twelve_labs
TwelveLabs (twelvelabs.io)
6 hours
After analyzing thousands of real-world prompts, we identified 11 distinct task categories spanning 4 core intents: information retrieval, content creation, structured extraction, and reasoning.
@twelve_labs
TwelveLabs (twelvelabs.io)
6 hours
🧵 We just published our most comprehensive analysis yet of how users work with Pegasus — and the findings reveal something fundamental: video AI is no longer about answering single questions. It's about enabling entire workflows.
@twelve_labs
TwelveLabs (twelvelabs.io)
5 days
Register for the webinar here: https://t.co/H4AlqdvWsq
@twelve_labs
TwelveLabs (twelvelabs.io)
5 days
✅ Umer Qureshi will present Tidier - a trip video summarization application that lets you upload videos from your vacation and gives you a montage of specific things you want from each video. https://t.co/NznxAA7tm4
Link card: github.com (UmerQureshi21/Tidier)
@twelve_labs
TwelveLabs (twelvelabs.io)
5 days
✅ Akash Jain, Matthias Druhl, and Devland Beasley will present PropSage - a video-backed prop insights application that stitches real game videos to prop markets so you can see the evidence and act fast. https://t.co/7qynv9gwEW
@twelve_labs
TwelveLabs (twelvelabs.io)
5 days
✅ Pranav Devarinti, Ryan Zhang, Joseph Huang, and Bryce Pardo will present NewsCap - a new way to fact check videos in real time. https://t.co/7kS2LUN0H6
@twelve_labs
TwelveLabs (twelvelabs.io)
5 days
✅ Bayu Wicaksono, Fachri Najm Noer Kartiman and Mochamad Khaairi will present Qlassroom - a winning project at the recent @qdrant_engine Vector Space hackathon. https://t.co/H8S9pGY3hH
@twelve_labs
TwelveLabs (twelvelabs.io)
5 days
In the 98th session of #MultimodalWeekly, we feature several projects built with the TwelveLabs API at recent developer events.
@twelve_labs
TwelveLabs (twelvelabs.io)
5 days
We’re heading to @NVIDIAGTC in Washington, D.C. next week! ✨ Join us at booth I-25 to see how TwelveLabs is making video as searchable and understandable as text, unlocking new possibilities for media and enterprise applications. 📍 GTC Washington, D.C. 🗓️ Oct 21–24 See you