Ting Liu

@_tingliu

Followers 243 · Following 87 · Media 1 · Statuses 48

Researcher @GoogleDeepMind

Los Angeles, CA
Joined January 2010
@_tingliu
Ting Liu
5 months
And great to see VideoPrism makes the 1000th!
@JeffDean
Jeff Dean
5 months
Check out the 999 open models that Google has released on @huggingface: https://t.co/Fo4Ycn9ARi (Comparative numbers: 387 for Microsoft, 33 for OpenAI, 0 for Anthropic).
Replies 0 · Reposts 0 · Likes 4
@osanseviero
Omar Sanseviero
5 months
Excited to share the release of VideoPrism! 🎥
📏 Generate video embeddings
👀 Useful for classifiers, video retrieval, and localization
🔧 Adaptable for your tasks
Model: https://t.co/B2RPZjgNFL
Paper: https://t.co/Qs2mEdgCTP
GitHub: https://t.co/bvtqM9GMa9
Replies 6 · Reposts 60 · Likes 278
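The "video retrieval" use case mentioned above boils down to nearest-neighbor search over per-video embeddings. A minimal, generic sketch follows; the embeddings here are random stand-ins rather than actual VideoPrism outputs, and the 768-dimensional width is only an assumption (typical for Base-size encoders), not a documented detail from the tweet.

```python
import numpy as np

def cosine_retrieve(query_emb, gallery_embs, top_k=3):
    """Rank gallery videos by cosine similarity to a query embedding."""
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    scores = g @ q
    order = np.argsort(-scores)[:top_k]
    return order, scores[order]

# Toy stand-ins for per-video embeddings; any encoder that emits one
# fixed-width vector per video plugs into the same retrieval loop.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(5, 768))
query = gallery[2] + 0.01 * rng.normal(size=768)  # near-duplicate of video 2

idx, scores = cosine_retrieve(query, gallery)
```

With a real encoder you would replace `gallery` and `query` with embeddings computed from video frames; the ranking logic stays unchanged.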
@GoogleResearch
Google Research
5 months
At 4:00 today, stop by the #CVPR2025 Google booth where Ting Liu will demo a model for video creation by demonstration that can generate physically plausible video that continues naturally given a context scene. Find sample videos at https://t.co/VmfjfuxDgR
Replies 1 · Reposts 6 · Likes 39
@BoqingGo
Boqing Gong
5 months
Excited! VideoPrism-Base/Large are publicly available now: https://t.co/g5BNiA5O05 Check it out if you need a versatile video encoder for video-language or video-native tasks. Feedback appreciated!
github.com
Official repository for "VideoPrism: A Foundational Visual Encoder for Video Understanding" (ICML 2024) - google-deepmind/videoprism
@GoogleAI
Google AI
2 years
Introducing VideoPrism, a single model for general-purpose video understanding that can handle a wide range of tasks, including classification, localization, retrieval, captioning and question answering. Learn how it works at https://t.co/vAVqXo8g4j
Replies 0 · Reposts 6 · Likes 21
@_tingliu
Ting Liu
5 months
After over 15 months, we are excited to finally release VideoPrism! The model comes in two sizes, Base and Large, and the video encoders are available today at https://t.co/imLrPYAnEk. We are also working towards adding more support into the repository, please stay tuned.
github.com
Official repository for "VideoPrism: A Foundational Visual Encoder for Video Understanding" (ICML 2024) - google-deepmind/videoprism
@GoogleAI
Google AI
2 years
Introducing VideoPrism, a single model for general-purpose video understanding that can handle a wide range of tasks, including classification, localization, retrieval, captioning and question answering. Learn how it works at https://t.co/vAVqXo8g4j
Replies 0 · Reposts 1 · Likes 8
@_tingliu
Ting Liu
11 months
Introducing our latest work Video Creation by Demonstration, a novel video creation experience. Paper: https://t.co/YZFCLKj5aM Project: https://t.co/o9inp7qScE Huggingface:
huggingface.co
Replies 0 · Reposts 8 · Likes 34
@garyzhao9012
Long Zhao
1 year
Happy to share our recent work "Epsilon-VAE", an effective autoencoder that turns single-step decoding into a multi-step probabilistic process. Please check our paper for more detailed results! arXiv page:
arxiv.org
In generative modeling, tokenization simplifies complex data into compact, structured representations, creating a more efficient, learnable space. For high-dimensional visual data, it reduces...
Replies 0 · Reposts 4 · Likes 11
@GoogleAI
Google AI
2 years
Introducing Long Zhao, a Senior Research Scientist at Google, who worked to build VideoPrism: A Foundational Visual Encoder for Video Understanding. Read the blog to explore innovations in video understanding tasks and more → https://t.co/MnfeIMAohS
Replies 22 · Reposts 66 · Likes 396
@GoogleAI
Google AI
2 years
Introducing VideoPrism, a single model for general-purpose video understanding that can handle a wide range of tasks, including classification, localization, retrieval, captioning and question answering. Learn how it works at https://t.co/vAVqXo8g4j
Replies 35 · Reposts 214 · Likes 834
@_akhaliq
AK
2 years
Google presents Video Instruction Tuning: Distilling Vision-Language Models on Millions of Videos. Paper page: https://t.co/MsoJ6GMhGq Experiments show that a video-language dual-encoder model contrastively trained on these auto-generated captions is 3.8% better than the …
Replies 0 · Reposts 38 · Likes 181
@jon_barron
Jon Barron
2 years
A bunch of people have requested the slides for my "Scholars & Big Models" CVPR workshop talk. I didn't have a script, but I wrote a rough version of what I probably said at the bottom of each slide. Feedback is welcome!
Replies 3 · Reposts 61 · Likes 397
@zhou_honglu
Honglu Zhou
2 years
📢 Our #SMART101 challenge is now open! 🎉 Join the brightest minds in multimodal reasoning and cognitive models of intelligence to drive AI progress. 🚀 Don't miss out! Challenge closes on Sept. 1. Winning teams will receive prizes! 🏆 https://t.co/asTC5oscJh #VLAR #ICCV2023 #AI
Replies 1 · Reposts 20 · Likes 17
@_akhaliq
AK
2 years
VideoGLUE: Video General Understanding Evaluation of Foundation Models. Paper page: https://t.co/Y97nZAXGm9 We evaluate existing foundation models' video understanding capabilities using a carefully designed experiment protocol consisting of three hallmark tasks (action …
Replies 0 · Reposts 23 · Likes 108
@JenJSun
Jennifer J. Sun
3 years
~1 month left to submit a paper to our workshop on Multi-Agent Behavior #CVPR2023! Come discuss multi-agent behavior, including biological and artificial agents, across wide ranges of spatial and temporal scales 🔬🐭🚶🪰🚗🏀🌍 Hope to see you in June!
@JenJSun
Jennifer J. Sun
3 years
The Multi-Agent Behavior Workshop (MABe) will take place @CVPR 2023 in Vancouver! We are featuring fantastic speakers @SiyuTang3, @georgiagkioxari, @tlandgraf, @Wei_ZHAN_, @sabinehauert, & Ben Sapp; with panel & poster discussions 👇 Call for papers!📢 https://t.co/pWa7OoIPHF
Replies 0 · Reposts 6 · Likes 19
@JeffDean
Jeff Dean
3 years
Bard is now available in the US and UK, w/more countries to come. It’s great to see early @GoogleAI work reflected in it—advances in sequence learning, large neural nets, Transformers, responsible AI techniques, dialog systems & more. You can try it at
gemini.google.com
Meet Gemini, Google’s AI assistant. Get help with writing, planning, brainstorming, and more. Experience the power of generative AI.
@sundarpichai
Sundar Pichai
3 years
We're expanding access to Bard in US + UK with more countries ahead, it's an early experiment that lets you collaborate with generative AI. Hope Bard sparks more creativity and curiosity, and will get better with feedback. Sign up: https://t.co/C1ibWrqTDr https://t.co/N8Dzx1m0fc
Replies 27 · Reposts 118 · Likes 711
@_jasonwei
Jason Wei
3 years
Best AI skillset in 2018: PhD + long publication record in a specific area
Best AI skillset in 2023: strong engineering abilities + adapting quickly to new directions without sunk cost fallacy
Correct me if this is over-generalized, but this is what it seems like to me lately
Replies 57 · Reposts 169 · Likes 2K
@menglin_jia
Menglin Jia
3 years
Attending #ECCV2022? Drop by the poster session 1.A (poster 084) about VPT! 🔥❄️ w/ @CornellCIS, @BelongieLab and @MetaAI !
@BelongieLab
Belongie Lab
3 years
1/3) What is the best way to adapt large pre-trained vision models to downstream tasks in terms of effectiveness and efficiency? Drawing inspiration from the recent advances on Prompting in NLP, we propose a new simple and efficient method: Visual Prompt Tuning (VPT) 👇
Replies 1 · Reposts 5 · Likes 23
@JenJSun
Jennifer J. Sun
3 years
I’m on the job market! I develop AI for scientists, to accelerate discovery from data & domain knowledge. My work tackles challenges from real-world workflows in domains such as neuroscience & healthcare, including annotation efficiency, interpretability & structure discovery.
Replies 4 · Reposts 32 · Likes 182
@BelongieLab
Belongie Lab
3 years
1/3) What is the best way to adapt large pre-trained vision models to downstream tasks in terms of effectiveness and efficiency? Drawing inspiration from the recent advances on Prompting in NLP, we propose a new simple and efficient method: Visual Prompt Tuning (VPT) 👇
Replies 2 · Reposts 6 · Likes 22
@JenJSun
Jennifer J. Sun
3 years
We are excited to release the dataset from the 2022 MABe Challenge! 🐭🪰 Our dataset consists of mouse (9 mil frames) and fly (4 mil frames) social interactions for studying behavioral representation learning! Paper: https://t.co/QV1KynfVkR Challenge: https://t.co/deeqxcf61L
Replies 3 · Reposts 20 · Likes 87