
Cordelia Schmid
@CordeliaSchmid
Followers: 2K · Following: 3 · Media: 0 · Statuses: 40
Research Scientist @GoogleAI, Research Director @Inria, PhD in #ComputerScience from @GrenobleINP #MachinePerception #ML
Grenoble, France
Joined October 2018
RT @alirezafathi: Our team at Google DeepMind Foundational Research has an opening for a full-time Research Scientist! Areas of Interest ar….
0 · 28 · 0
RT @zeeshank95: Below 👇 are some examples of complex prompts, the LLM generated composite object priors, and the corresponding image genera….
0 · 2 · 0
RT @IanHuang3D: 🏡Building realistic 3D scenes just got smarter! Introducing our #CVPR2025 work, 🔥FirePlace, a framework that enables Multi….
0 · 97 · 0
RT @alirezafathi: Our team at Google DeepMind Foundational Research is hiring full-time Research Scientists and Research Interns! Multimoda….
0 · 65 · 0
RT @ahmetius: Happy to share our recent preprint! Models like CLIP tend to struggle on fine-grained tasks. We equip these models with the….
0 · 3 · 0
RT @mcaron31: A simple (yet effective 📚) method to improve your favorite contrastive vision-text model: enhance it with knowledge retrieved….
0 · 32 · 0
RT @AntoineYang2: Introducing Vid2Seq, a new visual language model for dense video captioning. To appear at #CVPR2023. Work done @Google w….
0 · 16 · 0
RT @NagraniArsha: (1/N) **HIRING ALERT** Our team at @GoogleAI, led by @CordeliaSchmid, is hiring a full-time Research Scientist, as well a….
0 · 71 · 0
RT @NagraniArsha: New work! AVATAR: Unconstrained Audiovisual Speech Recognition. ASR often fails with noisy/heavily accented audio. Our mo….
0 · 10 · 0
RT @SocNeuro_Tweets: As part of #FENS2022, don't miss the public conference-debate on #cerveau and #IA. @IpNeuro @institutpas….
0 · 12 · 0
RT @bschoelkopf: @ELLISforEurope postdoc position with @CordeliaSchmid to study causal video prediction:
0 · 5 · 0
RT @AntoineYang2: Thrilled to announce that our work « TubeDETR: Spatio-Temporal Video Grounding with Transformers » is accepted at @CVPR #….
0 · 7 · 0
RT @ahmetius: Accepted to #CVPR2022: Learning with Neighbor Consistency for Noisy Labels by myself, @jvlmdr, Anurag Arnab and @CordeliaSchm….
0 · 3 · 0
RT @NagraniArsha: Excited that "End-to-end Generative Pretraining for Multimodal Video Captioning" led by Paul, & w @anuragarnab and @Cord….
0 · 1 · 0
RT @MakarandTapaswi: Check out our @ICCV_2021 paper, Airbert, making strides in Vision-and-Language Navigation through pretraining on a lar….
0 · 3 · 0
RT @inria_paris: #prAIrieDay | Not to be missed ➡️ the conference "The Challenge of AI: large models" 🇬🇧, with @kchonyc, @CordeliaSchmid, @Igor….
0 · 6 · 0
RT @inria_paris: 🔴 #prAIrieDay | "The challenges of #AI: large models" with @kchonyc, @CordeliaSchmid, @IgorCarron & @douglas_eck! #Pari….
0 · 4 · 0
RT @m__dehghani: We released the code for ViViT as well as the checkpoints of its different variants. If you are interested in transformer….
0 · 36 · 0
RT @ImagineEnpc: #3DV2021 Poster Spotlight 4: "Towards unconstrained joint hand-object reconstruction from RGB videos" by @yanahasson, @gulv….
0 · 6 · 0