Josh Gordon
@random_forests
Followers
31K
Following
3K
Media
206
Statuses
2K
Open source ML @ Google NYC
New York, NY
Joined April 2011
New Kaggle competition. Add reasoning capabilities to a small Gemma model using Tunix (a popular new RL library in JAX). This is really great, thanks to Wei on my team + many others for driving it. Check it out!
📣 Hackathon Launch Alert! Google Tunix Hack - Train a Model to Show Its Work hosted by @googlecloud 🎯 Train Gemma models to show reasoning using Tunix 💰 $100,000 Prize Pool ⏰ Final Submission: Jan 12, 2026 https://t.co/tLRUv9bhvY
0
0
5
New function calling guide with Gemma 3 & KerasHub -- really nice to be able to run locally.
1
3
13
KerasHub now includes HGNetV2! We’re excited to bring the high-efficiency, high-accuracy HGNetV2 image classification backbone into KerasHub’s model family. Model details and quickstart notebook are available on Kaggle: https://t.co/ytUSn7j9xK
#keras #kerashub #HGNet
kaggle.com
HGNetV2: GPU-Efficient, Lightweight CNN for Real-Time, Edge-Focused Image Classification
0
7
21
The JAX team is hosting a dinner / networking event during ICML on Thursday. Join us for an evening of food, drinks, and discussion of all things JAX. @SingularMattrix and other JAX team members will be attending. Please register early as capacity is limited. RSVP:
5
10
151
🎬 Generate videos with the Gemini CLI Add: 🧑💻 GenMedia MCP servers for Imagen, Veo & Chirp 📝 A GEMINI.md file explaining your ✨ creative process And you too can take 🙀 Rusty the Cat on an adventure ⬇️ Full tutorial in the vid ⬇️
1
6
26
Some interesting Gemini CLI use cases and tutorials 🧵⬇️
45
425
4K
We’re building Keras Recommenders at Google, and would love to hear from people working in RecSys to understand what they need. What features matter most to you? DMs are open, feel free to reach out! https://t.co/wWb67TnaSu
keras.io
0
2
10
It was *so good* seeing everyone at the JAX and OpenXLA DevLab this week! Best event in a long time. Let’s do it again!
0
0
2
I am PUMPED to finally share what we’ve been working on: 🖥️ Introducing the Gemini CLI! It can code, sure, but with access to your system shell, files and MCP servers, it can also: 👩🔬 Do research 💽 Organise your MP3s Resolve rebases 🔬 Even strace that weird hung process
31
55
528
Here's more detail on how to load a Hugging Face checkpoint into a KerasHub model. Thanks for the walkthrough, @yufengg , @divyasheess, and @monicadsong ! https://t.co/JbFgdhY40Y
developers.googleblog.com
0
3
9
KerasHub supports loading checkpoints of many model architectures from HuggingFace. So if there is a model checkpoint on HF that is not in the Keras format, you can easily load it in Keras and use it as a regular Keras model on any backend (JAX, TF, or PyTorch).
1
3
5
If you're running JAX and you need to grab a model checkpoint from HuggingFace, KerasHub has you covered. Load, fine-tune, quantize, export for inference.
2
32
102
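The loading path the tweets above describe can be sketched roughly like this (a sketch, not the exact tutorial code — it assumes keras_hub is installed, that the `hf://` preset scheme is used, and that you have access to the Gemma weights on Hugging Face; the function is defined but not run here, since calling it downloads model weights):

```python
import os

# Pick the backend before Keras is imported; "jax" could equally be
# "tensorflow" or "torch" — KerasHub models run on any of the three.
os.environ.setdefault("KERAS_BACKEND", "jax")

def load_hf_checkpoint(preset: str = "hf://google/gemma-2b"):
    """Load a Hugging Face checkpoint into a KerasHub model via the hf:// scheme.

    KerasHub converts supported architectures on the fly, so the result is a
    regular Keras model you can fine-tune, quantize, or export for inference.
    """
    import keras_hub  # deferred import: only needed when actually loading

    return keras_hub.models.CausalLM.from_preset(preset)
```

From there the returned object behaves like any other Keras model (`fit`, `generate`, `save`, etc.).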
You can find performance & scale optimized JAX models in MaxText and MaxDiffusion: * https://t.co/DryC3jqfK7 * https://t.co/alWKlUFHW4 You can also use Keras / JAX to tune many Hugging Face Transformers model checkpoints by loading them into a KerasHub model. It's pretty cool!
github.com
Contribute to AI-Hypercomputer/maxdiffusion development by creating an account on GitHub.
0
14
44
Really excited for the upcoming JAX & OpenXLA DevLab this Monday! This is a small group deep dive on the latest techniques, with breakouts on special interests. We'll record the tutorials for everyone, too. Opportunity: If you're interested in *healthcare research* with JAX /
openxla.org
0
1
4
11/ You can find lots more examples in the quickstarts folder of the cookbook. Also check out the developer docs on https://t.co/j3XWjnn0QF for lots more walkthroughs, and code examples for JS developers. If you have questions or run into bugs, the best place to ask them is on
0
0
1
10/ You can play the audio right in Colab. There's also a neat example that prompts the model to read a discussion between two speakers, like NotebookLM. Add your own prompts and see what the model can do.
1
0
1
9/ Now you're ready to go. From the menu, choose Runtime -> Run all. If everything is working, the notebook will install the SDK, and begin making calls to demonstrate how to generate audio. There are neat examples in there - including telling the model to speak in a spooky
1
0
1
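The calls the notebook makes once it runs look roughly like this (a hedged sketch based on the google-genai SDK, not the cookbook's exact code — the model name, voice name, and helper function are assumptions; the function is defined but not called here because it needs a live API key):

```python
def generate_speech(api_key: str, text: str, out_path: str = "out.pcm") -> None:
    """Ask a Gemini TTS model to speak `text` and write the audio bytes to a file.

    Requires the google-genai package. The model/voice names are illustrative,
    and the bytes come back as raw PCM — wrap them in a WAV header before playing.
    """
    from google import genai
    from google.genai import types

    client = genai.Client(api_key=api_key)
    response = client.models.generate_content(
        model="gemini-2.5-flash-preview-tts",  # assumed TTS-capable model name
        contents=text,
        config=types.GenerateContentConfig(
            response_modalities=["AUDIO"],  # request audio instead of text
            speech_config=types.SpeechConfig(
                voice_config=types.VoiceConfig(
                    prebuilt_voice_config=types.PrebuiltVoiceConfig(
                        voice_name="Kore"  # assumed prebuilt voice
                    )
                )
            ),
        ),
    )
    # The audio arrives inline on the first candidate's first content part.
    audio = response.candidates[0].content.parts[0].inline_data.data
    with open(out_path, "wb") as f:
        f.write(audio)
```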
8/ Add a new secret called "GOOGLE_API_KEY". Paste your key there, and grant the notebook access.
1
0
1
7/ Now let's safely store your API key in Colab. Click the little key icon on the left. Secrets are private (and tied to your Google account, not a specific notebook). You can grant notebooks access to import them like an environment variable.
1
0
1
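Steps 7 and 8 above can be read back from code like this (a minimal sketch: `GOOGLE_API_KEY` is the secret name from the thread, and the environment-variable fallback is an assumption for running the same code outside Colab):

```python
import os

def get_api_key(name: str = "GOOGLE_API_KEY") -> str:
    """Read an API key from Colab Secrets, or from an env var outside Colab."""
    try:
        # Inside Colab: secrets added via the key icon are read with
        # userdata.get(); the notebook must be granted access to the
        # secret first, or this raises an error.
        from google.colab import userdata

        return userdata.get(name)
    except ImportError:
        # Outside Colab there is no google.colab module, so fall back
        # to a plain environment variable of the same name.
        return os.environ[name]
```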