Paul Vicol
@PaulVicol
Followers: 1K · Following: 1K · Media: 66 · Statuses: 101
Research Scientist at @GoogleDeepMind. Working on Gemini reasoning models. PhD from @UofT and @VectorInst.
Toronto
Joined August 2019
🚀Check out the amazing emergent capabilities of Veo 3!
Are video models the path to general visual intelligence? Check out the #ICCV2025 Google booth today at 3pm to see how Veo 3 solves tasks for which it wasn't trained. https://t.co/Qoiea1O42k
0 replies · 0 reposts · 6 likes
🚀 All this and more in our paper!
arXiv: https://t.co/wsUnrKbLkp
Project page: https://t.co/BKdziuhygG
By @thwiedemer, Yuxuan Li, @PaulVicol, @shaneguML, @nmatares, @kswersk, @_beenkim, @priyankjaini, and Robert Geirhos.
2 replies · 17 reposts · 140 likes
Veo 3 understands material properties, including buoyancy, reflections, flammability, and soft body physics.
2 replies · 0 reposts · 17 likes
Veo 3 can understand which objects fit in a backpack, and can categorize objects (putting all the toys in a bucket). Veo 3 can also draw, and understands the effects of gravity and air resistance on falling objects.
1 reply · 0 reposts · 12 likes
Veo 3 can reason, filling in the next element in a sequence of images.
1 reply · 1 repost · 19 likes
🔥Veo 3 has emergent zero-shot learning and reasoning capabilities! This multitalented model can do a huge range of interesting tasks. It understands physical properties, can manipulate objects, and can even reason. Check out more examples in this thread!
Veo is a more general reasoner than you might think. Check out this super cool paper on "Video models are zero-shot learners and reasoners" from my colleagues at @GoogleDeepMind.
4 replies · 23 reposts · 167 likes
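To poke at the zero-shot claim yourself, here is a minimal sketch of handing a Veo model a task purely through its prompt, using the google-genai Python SDK; the model id "veo-3.0-generate-preview", the API key placeholder, and the maze task are illustrative assumptions, not taken from the thread.

    import time
    from google import genai

    client = genai.Client(api_key="YOUR_API_KEY")  # key from Google AI Studio

    # The "task" lives entirely in the prompt; no task-specific training or fine-tuning.
    operation = client.models.generate_videos(
        model="veo-3.0-generate-preview",  # assumed model id
        prompt=(
            "A hand-drawn maze on paper. A red dot traces a valid path from "
            "the entrance to the exit without crossing any walls."
        ),
    )

    # Video generation is a long-running operation, so poll until it finishes.
    while not operation.done:
        time.sleep(10)
        operation = client.operations.get(operation)

    # Download the generated clip and inspect whether the traced path is valid.
    video = operation.response.generated_videos[0]
    client.files.download(file=video.video)
    video.video.save("maze_attempt.mp4")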
The new Gemini 2.0 Flash Thinking model (Gemini's answer to OpenAI's o1, which takes a while to think before responding) is very nice and fast, and now available to try on Google AI Studio 🧑‍🍳👏. The prominent and pleasant surprise here is that, unlike o1, the reasoning traces of the model …
Introducing Gemini 2.0 Flash Thinking, an experimental model that explicitly shows its thoughts. Built on 2.0 Flash’s speed and performance, this model is trained to use thoughts to strengthen its reasoning. And we see promising results when we increase inference time …
131 replies · 432 reposts · 5K likes
We’ve been *thinking* about how to improve model reasoning and explainability. Introducing Gemini 2.0 Flash Thinking, an experimental model trained to think out loud, leading to stronger reasoning performance. Excited to get this first model into the hands of developers to try …
82 replies · 302 reposts · 4K likes
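To try the thinking model from code rather than the AI Studio UI, here is a minimal sketch using the google-generativeai Python SDK; the model id "gemini-2.0-flash-thinking-exp" and the way the thought trace shows up in the response are assumptions that have shifted across experimental releases, so treat the printing loop as illustrative.

    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")  # key from Google AI Studio

    model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp")  # assumed model id

    response = model.generate_content(
        "A bat and a ball cost $1.10 in total. The bat costs $1.00 more than "
        "the ball. How much does the ball cost?"
    )

    # The model is trained to think out loud: early experimental versions return
    # the visible reasoning and the final answer as separate parts of the first
    # candidate, so print each part rather than relying on response.text.
    for part in response.candidates[0].content.parts:
        print(part.text)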
🎉 Thank you to all the participants for contributing to the workshop!
@PaulVicol @rsalakhu @sedielem @kate_saenko_ @MatthiasBethge @vishaal_urao @seo_minjoon @tqchenml @mengyer @lrjconan @NailaMurray @weichiuma @BeidiChen Thanks for organizing the workshop!! Such a fun experience 😀
0 replies · 0 reposts · 4 likes
⏰ Timestamps 2:
@vishaal_urao: Continual Foundation Model Learning (5:06:39)
@seo_minjoon: On Knowledge Adaptability of LMs (5:42:00)
Bing Liu: Continual Learning with LLMs (7:08:17)
@tqchenml: Universal LLM Deployment with ML Compilation (7:48:30)
#NeurIPS2024 #AdaptiveFoundationModels
0 replies · 0 reposts · 5 likes
⏱️ Timestamps for invited speakers in https://t.co/f2BOd8EM00
@rsalakhu: Tree Search for LM Agents (36:20)
@sedielem: Multimodal Iterative Refinement (1:10:45)
@kate_saenko_: Is pre-training the key to successful domain generalization? (1:55:39)
#NeurIPS2024 #AdaptiveFoundationModels
1 reply · 0 reposts · 3 likes
The recording of my #NeurIPS2024 workshop talk on multimodal iterative refinement is now available to everyone who registered. My talk starts at 1:10:45 into the recording. I believe this will be made publicly available eventually, but I'm not sure when exactly!
🌐 Posters: https://t.co/dtcqr1pd28
🎬 Workshop recording: https://t.co/f2BOd8EM00
Our workshop in numbers:
🖇️ 128 Papers · 💬 8 Orals · 🖋️ 564 Authors · ✅ 40 Reviewers · 🔊 7 Invited Speakers · 👕 100 T-Shirts
#NeurIPS2024 #AdaptiveFoundationModels
2 replies · 7 reposts · 59 likes
⚙️ Tong Chen presented “Generative Adapter: Contextualizing Language Models in Parameters with a Single Forward Pass” https://t.co/1MjLBXVxZs
@tomchen0 @hfang90 @nlpaxia @AllenLao @ben_vandurme @LukeZettlemoyer @JianfengGao0217 @kelvinih
#NeurIPS2024 #AdaptiveFoundationModels
1 reply · 3 reposts · 8 likes
📰 Amelia Hui Dai presented “Are LLMs Prescient? A Continuous Evaluation using Daily News as the Oracle” https://t.co/RbL8FFNfbT
Amelia Hui Dai @rteehas @mengyer
#NeurIPS2024 #AdaptiveFoundationModels
1 reply · 0 reposts · 2 likes
👀 Wen-Tse Chen presented "Fine-tuning LLM Agents with Retrospective In-Context Online Learning" https://t.co/wAl0VhFyJC
Wen-Tse Chen @JiayuChen98666 @FahimTajwar10 @_Hao_Zhu Xintong Duan @rsalakhu Jeff Schneider
#NeurIPS2024 #AdaptiveFoundationModels
1 reply · 3 reposts · 17 likes
🏃 Zhepei Wei presented "Fast and Accurate Language Model Decoding via Parallel Token Processing" https://t.co/2pLIONg9hT
@weizhepei @WeiLin__Chen @tianhongzxy @yumeng0818
#NeurIPS2024 #AdaptiveFoundationModels
1 reply · 5 reposts · 11 likes
▶️ Yue Wu presented "Self-Play Preference Optimization for Language Model Alignment" https://t.co/Bp2gx9J4qW
@FrankYueWu1 @EdwardSun0909 @HuizhuoY @Kaixuan_Ji_19 Yiming Yang @QuanquanGu
#NeurIPS2024 #AdaptiveFoundationModels
1 reply · 1 repost · 16 likes
🎾 Jennifer Hsia presented "RAGGED: Towards Informed Design of Retrieval Augmented Generation Systems" https://t.co/e1ytuZeLHD
@jen_hsia, Afreen Shaikh, Zhiruo Wang, @gneubig
#NeurIPS2024 #AdaptiveFoundationModels
1 reply · 0 reposts · 3 likes
🥣 Seungone Kim presented "Personalized Soups" https://t.co/RrR0ndqd79
@jang_yoel @seungonekim @billyuchenlin @yizhongwyz @jmhessel @LukeZettlemoyer @HannaHajishirzi @YejinChoinka @rajammanabrolu
#NeurIPS2024 #AdaptiveFoundationModels
1 reply · 1 repost · 12 likes