Chris Dyer
@redpony
Followers
10K
Following
3K
Media
7
Statuses
465
I live in Angel, London. I am a researcher at DeepMind. I play the cello.
London, England
Joined April 2008
Great work, everyone! :)
Outstanding paper: Don't lie to your friends: Learning what you know from collaborative self-play https://t.co/hvY1oaF6Jf
1
1
15
Super proud of what this team is doing! And I can't wait to share more soon.
We're thrilled to announce SignGemma, our most capable model for translating sign language into spoken text. This open model is coming to the Gemma model family later this year, opening up new possibilities for inclusive tech. Share your feedback and interest in early
1
7
61
2️⃣ SignGemma is a sign language understanding model that's coming later this year. It's a massively multilingual model that's best at translating ASL into English text, enabling further development of tech access for Deaf and Hard of Hearing users. Share your feedback and
9
59
497
Terrific opportunity in London with a great team working on multimodal learning and eval.
I am hiring for RS/RE positions! If you are interested in language-flavored multimodal learning, evaluation, or post-training, apply here: https://t.co/xAiVx4KWrK I will also be at #NeurIPS2024, so come say hi! (Please email me to find time to chat)
0
0
4
We maintain strong zero-shot transfer of CLIP and SigLIP when varying model size and data scale, while achieving up to 4x few-shot sample efficiency and up to +16% performance gains! Fun project with @confusezius, @zeynepakata, @dimadamen and @olivierhenaff.
Can you turn your vision-language model from a great zero-shot model into a great-at-any-shot generalist? Turns out you can, and here is how: https://t.co/8c1MwbxLRn Really excited to share our latest work on multimodal pretraining! A short and hopefully informative thread:
1
4
27
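The zero-shot setup this thread builds on can be sketched in miniature: classify an image by comparing its embedding against the text embeddings of candidate labels and picking the most similar one. Below is a minimal illustration in plain Python; the tiny hand-written vectors stand in for what real CLIP/SigLIP encoders would produce, and all names and numbers are invented for the example:

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def zero_shot_classify(image_emb, label_embs):
    # Pick the label whose text embedding is most similar to the image embedding.
    return max(label_embs, key=lambda name: cosine(image_emb, label_embs[name]))

# Toy stand-in embeddings (a real model would compute these from pixels/text).
labels = {"cat": [0.9, 0.1, 0.0], "dog": [0.1, 0.9, 0.0]}
image = [0.8, 0.2, 0.1]
print(zero_shot_classify(image, labels))  # -> cat
```

Few-shot adaptation, as studied in the thread, then adjusts this comparison using a handful of labeled examples rather than text prompts alone.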
Join the Gemini Multilinguality team @GoogleDeepMind. We're looking for researchers passionate about making LLMs helpful for all. Dramatically improve model quality, coverage, and cultural relevance across hundreds of languages. #NLProc #MultilingualAI #i18n #LLMs
4
39
183
We're presenting the first AI to solve International Mathematical Olympiad problems at a silver-medalist level. It combines AlphaProof, a new breakthrough model for formal reasoning, and AlphaGeometry 2, an improved version of our previous system. https://t.co/U0OFXBia8n
290
1K
5K
DEADLINE March 29: prepare and submit your application for EEML 2024, Novi Sad, Serbia https://t.co/AjJoChe6P3. Topics: Basics of ML, Multimodal learning, NLP, Advanced DL architectures, Generative models, AI for Science. Check our stellar speakers! Scholarships available!
1
20
53
You are looking at the oldest plant ever to be regenerated, grown from 32,000-year-old seeds! A Russian team discovered a seed cache of Silene stenophylla, a plant native to Siberia, buried by an Ice Age squirrel, and successfully germinated it https://t.co/ARwp1KnqGU
15
215
905
more cool stuff in this release from the amazing folks at GDM.
Today developers can start building with our first version of Gemini Pro through Google AI Studio at https://t.co/ozfVwuBpSZ. Developers have a free quota and access to a full range of features including function calling, embeddings, semantic retrieval, custom knowledge
0
0
3
The Gemini era is here. Thrilled to launch Gemini 1.0, our most capable & general AI model. Built to be natively multimodal, it can understand many types of info. Efficient & flexible, it comes in 3 sizes, each best-in-class & optimized for different uses https://t.co/VUu1277bC2
395
2K
11K
Introducing COLM (https://t.co/7T42bAAQa4), the Conference on Language Modeling. A new research venue dedicated to the theory, practice, and applications of language models. Submissions: March 15 (it's pronounced "collum")
31
425
2K
Today we are announcing a major breakthrough in the Vesuvius Challenge: we have read the first word from an unopened Herculaneum scroll. The word is "πορφύρας", which means "purple dye" or "cloths of purple." https://t.co/0EDGBX4t4h Congratulations to 21yo computer science
289
3K
15K
@milosstanojevic @LaurentSartran Get started at https://t.co/ksytbLpjYj or read the paper https://t.co/YJEugzWpAT!
arxiv.org
The development of deep learning software libraries enabled significant progress in the field by allowing users to focus on modeling, while letting the library take care of the tedious and...
0
1
2
Building neural models with discrete latent structure is now easy and scalable with SynJax, an ergonomic and powerful open source library for JAX from my team. Great work from @milosstanojevic and @LaurentSartran!
Today we are open-sourcing SynJax which is a JAX library for efficient probabilistic modeling of structured objects (sequences, segmentations, alignments, trees...). It can compute everything you would expect from a probability distribution: argmax, samples, marginals, entropy...
2
8
44
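SynJax's actual API is not reproduced here, but the quantities the announcement lists (argmax, samples, marginals, entropy) are well defined for any distribution over structured objects. As a minimal reference sketch, the snippet below brute-forces them for a tiny linear-chain model over tag sequences; the scoring scheme and all values are illustrative, and a library like SynJax would compute the same quantities with efficient dynamic programs rather than enumeration:

```python
import itertools
import math

def chain_log_score(seq, emit, trans):
    # Unnormalized log-score of a tag sequence under a linear-chain model:
    # per-position emission scores plus pairwise transition scores.
    score = sum(emit[t][y] for t, y in enumerate(seq))
    score += sum(trans[a][b] for a, b in zip(seq, seq[1:]))
    return score

def chain_quantities(n, num_tags, emit, trans):
    # Brute-force the standard distribution quantities by enumerating
    # all num_tags**n sequences (feasible only for tiny n).
    seqs = list(itertools.product(range(num_tags), repeat=n))
    scores = [chain_log_score(s, emit, trans) for s in seqs]
    log_z = math.log(sum(math.exp(v) for v in scores))   # log-partition
    probs = [math.exp(v - log_z) for v in scores]
    best = seqs[max(range(len(seqs)), key=lambda i: scores[i])]  # argmax
    # marginals[t][y] = P(tag at position t is y)
    marginals = [[sum(p for s, p in zip(seqs, probs) if s[t] == y)
                  for y in range(num_tags)] for t in range(n)]
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    return log_z, best, marginals, entropy

# Toy model: 2 positions, 2 tags.
emit = [[1.0, 0.0], [0.0, 1.0]]   # emit[t][y]: score of tag y at position t
trans = [[0.5, 0.0], [0.0, 0.5]]  # trans[a][b]: score of transition a -> b
log_z, best, marginals, entropy = chain_quantities(2, 2, emit, trans)
print(best)  # -> (0, 1)
```

The point of a library like SynJax is that these same quantities remain tractable at scale, via forward/inside algorithms instead of the exponential enumeration above.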
It has officially been 65 years since the Wug Test revealed kids' natural grasp of language rules. Happy wugiversary
361
3K
39K
Best part of having spent time in academia: seeing your former students (and their students) doing great things. Congrats @tsvetshop on the best paper award at #ACL2023NLP. Brilliant, careful, and important work!
2
11
109
Not long to wait until LCO's next event! Hear Bach's St John Passion @stgabrielspimlico at 19.00 on Saturday 15 April, featuring @hesperoschoir, Eliza Lucyna Masewicz, Katie Macdonald, @HecBloggs and Twm Tegid Brunton. Book your tickets here: https://t.co/aUpkKHMsz3
0
2
4
Great work by @LaurentSartran, Adhi Kuncoro, @milosstanojevic, Phil Blunsom! 3/3 Paper: https://t.co/p5SeHvkozs Code:
github.com
Transformer Grammars: Augmenting Transformer Language Models with Syntactic Inductive Biases at Scale, TACL (2022) - google-deepmind/transformer_grammars
0
0
15