If you think OpenAI is cool, you’re gonna love my latest side project OpenUI. Tired of writing HTML by hand and remembering Tailwind classes? Let OpenUI do it for you:
You can now visualize Transformers training performance with a seamless
@weights_biases
integration. Compare hyperparameters, output metrics, and system stats like GPU utilization across your models!
Step-by-step guide:
Colab:
@MatthewBerman
Appreciate it! Feels really good to have so many people acknowledge my little side project 🥥. I’m working on some fine-tuned models and new features, stay tuned 🥳
Only 1 week away! Excited to invite you to Fully Connected 2024. Join top minds in the field to explore GenAI's real-world impact. See you on April 18th in SF!
So excited to partner with
@yanda
and
@dschol
in our most recent round of funding. Building developer tools for engineers I admire is my dream job; can't wait to see what we can build together!
I’m in Serbia visiting family and geeking out on the language. Soon I head to Germany, called Nemačka here, which translates roughly to “Land of the Mutes” in Proto-Slavic. I’m imagining the original Slavs trying to communicate, giving up, then telling stories about it 👌
Just boarded my first flight since Covid started. My
@Delta
flight has all middle seats open, no crying, free sanitizing wet wipes... It’s like first class for a discounted coach price.
World-class ML requires managing both infrastructure & the models themselves.
Today we’re excited to announce our integration with
@NVIDIAAI
’s Base Command Platform, a hosted AI dev hub that gives enterprises instant access to state-of-the-art infra!
Want to automatically track *all* hyperparams of your model while it's training, without writing *any* configuration code?
Thanks to
@GuggerSylvain
, you can in fastai v2, with the new
@weights_biases
callback :)
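The mechanism behind this kind of zero-config tracking can be sketched in plain Python. This is a hypothetical trainer and callback, not fastai's or W&B's actual API: at the start of training, the callback grabs every hyperparameter the trainer exposes, so nothing has to be listed by hand.

```python
class TrackingCallback:
    """Records every hyperparameter the trainer exposes at fit start."""
    def __init__(self):
        self.logged = {}

    def before_fit(self, trainer):
        # Capture *all* hyperparameters automatically - no config code needed.
        self.logged.update(trainer.hparams)


class Trainer:
    """Minimal stand-in for a training loop that supports callbacks."""
    def __init__(self, callbacks=(), **hparams):
        self.hparams = hparams
        self.callbacks = list(callbacks)

    def fit(self):
        for cb in self.callbacks:
            cb.before_fit(self)
        # ... the actual training loop would run here ...


cb = TrackingCallback()
Trainer(callbacks=[cb], lr=1e-3, epochs=5).fit()
print(cb.logged)  # → {'lr': 0.001, 'epochs': 5}
```

The real fastai integration works the same way in spirit: because the callback hooks into the training loop, it sees the learner's configuration without the user writing any logging code.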
KMNIST Benchmark: a Japanese handwriting recognition competition. $1,000 in compute credits to the contributor with the highest validation accuracy by July 8. Kuzushiji-MNIST is a drop-in replacement for the MNIST dataset (28x28 grayscale, 70,000 images)
Don't miss our webinar this Thursday, when we'll demonstrate the power of
@NVIDIADC
's Base Command Platform x W&B.
Live demo 🔥
We invite you to launch a sweep with co-founder Chris
@vanpelt
into Base Command & monitor the results live in W&B.
📍 RSVP:
@togethercompute
my little Llama 3 8B fine-tuning job has been pending for 18 hours. Is there a status page or a way to see queue depth? The suspense is killing me.
@stormtroper1721
@l2k
@weights_biases
Hey
@stormtroper1721
I wrote most of that code and can help get to the bottom of this. I'll try to make a simple script to reproduce. I opened an issue here: I'll post my findings and ping you if I don't see you on GH.
@stormtroper1721
@l2k
@weights_biases
I just tried to reproduce and have a very simple working script in the GitHub issue: . Can you put together a minimal script that shows the behavior you're seeing and add it to the thread?
@resistredaction
The utilization logic uses the psutil Python library, specifically the cpu_percent function. TL;DR: if your process uses multiple threads, the value can be greater than 100%:
@jjghsjfhehxjs
It’s possible. Today it renders raw HTML into an iframe, and you can later convert that to React. To render React directly in the browser, we would need a bundling step. I’ve thought about it, and Vite can even run in the browser, so it’s possible, just more complicated.
@resistredaction
We don't make any network connections in your training code; those happen in a separate process. Calls to wandb.log write synchronously to disk by default. You can make those writes asynchronous by adding `sync=False` to your wandb.log calls.
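The split described above can be sketched with the standard library. This is a hypothetical `Logger`, not wandb's actual internals: `log()` writes to a local file synchronously by default, while a background worker (standing in for wandb's separate sync process) drains a queue off the hot path when `sync=False`.

```python
import json
import queue
import threading


class Logger:
    """Sketch: synchronous local writes by default, async via a worker."""

    def __init__(self, path):
        self._file = open(path, "a")
        self._queue = queue.Queue()
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def log(self, metrics, sync=True):
        line = json.dumps(metrics)
        if sync:
            # Default path: write to local disk before returning.
            self._file.write(line + "\n")
            self._file.flush()
        else:
            # Hand off to the background worker; returns immediately.
            self._queue.put(line)

    def _drain(self):
        while True:
            line = self._queue.get()
            self._file.write(line + "\n")
            self._file.flush()
```

The key property is that the training loop never blocks on the network either way; only the local disk write is optionally deferred.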