Google AI is focused on bringing the benefits of AI to everyone. In conducting and applying our research, we advance the state-of-the-art in many domains.
We’re excited to release the weights of our Time Series Foundation Model (TimesFM) on Hugging Face! To access, visit our Hugging Face and GitHub repositories. Learn more ↓ #TimesFM #TimeSeries #Forecasting #FoundationModels
TimesFM is a forecasting model, pre-trained on a large time-series corpus of 100 billion real-world time-points, that displays impressive zero-shot performance on a variety of public benchmarks across domains and granularities. Learn more →
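The TimesFM paper describes a patched-decoder architecture: the input context is split into fixed-length patches before the transformer sees it. A minimal sketch of that patching step — the patch length of 32 is taken from the paper's description, and the `patch` helper is illustrative, not part of the released API:

```python
import numpy as np

def patch(series: np.ndarray, patch_len: int = 32) -> np.ndarray:
    """Split a 1-D time series into fixed-length patches,
    left-padding with zeros so the length divides evenly."""
    pad = (-len(series)) % patch_len
    padded = np.concatenate([np.zeros(pad), series])
    return padded.reshape(-1, patch_len)

# A 100-point series becomes four 32-point patches (28 zeros of padding).
patches = patch(np.arange(100, dtype=float))
print(patches.shape)  # (4, 32)
```

For the actual checkpoint-loading and forecasting interface, see the repositories linked above.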
Visit the #ICLR2024 Google booth today at 3:15pm, where Zheng Xu will continue the discussion on advances in private training for production on-device language models and present our recent progress on learning with public and private data.
Differential privacy research advances power the private training of Gboard LMs! Join Zheng Xu for the #ICLR2024 EXPO talk, "Advances in private training for production on-device language models," today at 12:45pm to learn more about this major milestone.
Stop by the #ICLR2024 Google booth today at 12:45 PM to hear @_toolazyto_ discuss how to learn from privacy-enhancing aggregate data to achieve robust, scalable performance using the secret ingredient: a belief-propagation setup inspired by parity checks.
With OpenMask3D, you can search 3D scenes directly via free-form text queries. Drop by the #ICLR2024 Google booth today at 9:30am for a demo with @FrancisEngelman on using visual-language models to achieve open-vocabulary 3D instance segmentation.
Michał Januszewski is a Staff Research Scientist at Google Research who’s decoding the secrets of how we think.
Explore how the Google Research Connectomics team developed AI to create detailed brain maps similar to electrical wiring diagrams. →
This year marks the 10th anniversary of Google Research's Connectomics team! In celebration, today with @MCB_Harvard we're publishing in @ScienceMagazine a 1.4-petabyte human brain connectome with 57k cells and 150M synapses →
Language models generate responses by sampling from a vast space of possible outputs. Learn how to measure and express uncertainty in these predictions at the #ICLR2024 Google booth today at 3:15pm with @adamjfisch and @TalSchuster.
Obstetric experts vary in interpreting fetal heart tracings, posing a challenge in care. Visit the #ICLR2024 Google booth at 12:45pm to learn how deep neural networks can potentially enhance early detection of fetal hypoxia & improve maternal/neonatal care.
Language models that predict the next word are a key technology for many applications. Learn how years of research, from the development of federated learning (2017) to formal differential privacy guarantees (2022), now power the private training of Gboard LMs →
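Private training of this kind builds on DP-SGD-style mechanisms: each per-example gradient is clipped to a norm bound, and Gaussian noise calibrated to that bound is added before averaging. A toy sketch of the clip-and-noise idea — the hyperparameters and the `private_mean_update` helper are illustrative, not Gboard's actual pipeline:

```python
import numpy as np

def private_mean_update(grads, clip_norm=1.0, noise_mult=1.1, seed=0):
    """Clip each per-example gradient to at most `clip_norm` in L2 norm,
    sum the clipped gradients, add Gaussian noise scaled to the clip
    norm, then average."""
    rng = np.random.default_rng(seed)
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in grads]
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        scale=noise_mult * clip_norm, size=grads[0].shape)
    return noisy_sum / len(grads)

grads = [np.array([3.0, 4.0]), np.array([0.0, 0.0])]
update = private_mean_update(grads)
```

Clipping bounds any single example's influence on the update, which is what makes the added noise yield a formal differential privacy guarantee.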
Curious about the latest in Cloud AI Research? Join Google researchers Tomas Pfister and Sercan Arik for a live Q&A session today at 9:30 AM at the #ICLR2024 Google booth.
Introducing Long Zhao, a Senior Research Scientist at Google, who helped build VideoPrism: A Foundational Visual Encoder for Video Understanding.
Read the blog to explore innovations in video understanding tasks and more →
SVP for Research Science & Technology James Manyika, Quantum AI COO Charina Chou, and other experts met with @SecBlinken at our SF offices to discuss progress in quantum technology, emphasizing the pivotal role of international collaboration.
At 3:15pm today, the #ICLR2024 Google booth will host @phanein, @BahareFatemi, and @jhalcrow for a talk on finding the correct graph inductive bias for Graph ML and developing strategies to convert graphs into language-like formats for LLMs.
Graphs, structures that describe connections between objects, are everywhere — imagine the tools in a kitchen, parts of a bike, or a group of friends. Learn about our latest work that explores how to encode graphs in a format that an LLM can understand: →
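One simple way to make a graph readable to an LLM is to flatten its node and edge lists into plain sentences. The sketch below shows one such encoding; the exact phrasing is illustrative, and the work referenced above compares several encoding strategies:

```python
def graph_to_text(nodes, edges):
    """Render an undirected graph as sentences an LLM can read.
    One of many possible text encodings of the same structure."""
    lines = [f"G has nodes: {', '.join(nodes)}."]
    for a, b in edges:
        lines.append(f"{a} is connected to {b}.")
    return " ".join(lines)

prompt = graph_to_text(["A", "B", "C"], [("A", "B"), ("B", "C")])
# "G has nodes: A, B, C. A is connected to B. B is connected to C."
```

The resulting string can be prepended to a question ("Is A connected to C?") to probe how well an LLM reasons over the encoded structure.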
Visit the @iclr_conf Google booth today at 12:45 PM to learn how we trained a model to imitate the distribution of a small set of human attacks, and used it for data amplification while adapting a plug-and-play language model to reduce label noise.
Google is a proud sponsor of the #ICLR2024 @WiMLworkshop. Curious about life at Google? Hear from Vahab Mirrokni and Lesly Miculicich at the roundtable discussion today from 12:50PM - 1:25PM CEST, or speak with one of the many other Googlers in attendance!
Stop by the #ICLR2024 Google booth today at 9:30 AM to hear @zlwang_cs, Lesly Miculicich, and @chl260 describe how Chain-of-Table enables step-by-step reasoning from table inputs by simplifying complex tables into more informative and manageable segments.
Interpreting data from tables can be challenging for #LLMs, but our proposed framework, Chain-of-Table, uses chain-of-thought to iteratively work through tasks, reaching a new state of the art on multiple benchmarks. Read more →
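Chain-of-Table's step-by-step idea is to let the model transform the table (e.g. narrow columns, then filter rows) before answering. A toy sketch of such a chain — the operation names and sample table here are hypothetical, for illustration only:

```python
def select_columns(table, cols):
    """Keep only the listed columns (table = list of dict rows)."""
    return [{c: row[c] for c in cols} for row in table]

def select_rows(table, pred):
    """Keep only the rows satisfying the predicate."""
    return [row for row in table if pred(row)]

table = [
    {"city": "Paris", "country": "France", "pop_m": 2.1},
    {"city": "Lyon",  "country": "France", "pop_m": 0.5},
    {"city": "Rome",  "country": "Italy",  "pop_m": 2.8},
]
# Step 1: narrow to the relevant columns; step 2: filter rows.
step1 = select_columns(table, ["city", "country"])
step2 = select_rows(step1, lambda r: r["country"] == "France")
# step2 == [{'city': 'Paris', 'country': 'France'},
#           {'city': 'Lyon',  'country': 'France'}]
```

Each intermediate table is smaller and more informative than the last, so the final answer ("Which French cities are listed?") is read off a much simpler state.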
At #ICLR2024? Stop by the Google booth to learn about the exciting work we’re doing across topics spanning reinforcement learning, LLMs, theory and optimization, societal impact, safety and privacy, and more. See our booth activity schedule at: