Tim Brown

@_brimtown

617 Followers · 3K Following · 70 Media · 652 Statuses

ai and product eng @datadoghq

New York, NY
Joined September 2011
@_brimtown
Tim Brown
1 day
new post: what happens when you make an llm read 50,000 messages from your college group chat?
1
0
3
@_brimtown
Tim Brown
1 day
that’s the soup-to-nuts of training and deploying your own model. Full post and more links on the blog. more frontend people doing models and more model people doing frontend 🫶 https://t.co/uvzCZca30G
brimtown.com
Using LoRA and in-browser inference to fine-tune your friends
0
0
0
@_brimtown
Tim Brown
1 day
small models are very fun, fine-tuning is powerful, and being able to share an LLM app without having a credit card hooked up to a GPU/model provider is very liberating. there should be many more small open models used for weird things! you can do this too, 10/10 would recommend
2
0
0
@_brimtown
Tim Brown
1 day
Earlier versions of the model were much more prone to crashing in the browser, or were fully fried. If you make the model read the groupchat 10 times it goes a little nutty
1
0
0
@_brimtown
Tim Brown
1 day
the rest of the app is a bog-standard, client-side rendered/vanilla React app on Vercel. There’s no “backend”, and the conversations are persisted in localStorage
1
0
0
@_brimtown
Tim Brown
1 day
WebLLM lets us run language models *in* the browser. custom models need a conversion step first, but Qwen is well supported. 250MB of model weights are downloaded from @huggingface 's HTTP CDN in the browser, just like any other static asset https://t.co/rJCV4g54jE
huggingface.co
1
0
0
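A minimal sketch of what registering an MLC-converted model with WebLLM looks like, based on WebLLM's app-config pattern. The `model_id` and Hugging Face URLs here are placeholders, not the actual repo from the thread, and the engine call is shown in comments since it needs WebGPU and the `@mlc-ai/web-llm` package:

```typescript
// Hypothetical WebLLM app config pointing at MLC-converted weights hosted
// on the Hugging Face CDN as plain static files. URLs/ids are illustrative.
const appConfig = {
  model_list: [
    {
      // Directory of converted weight shards on the CDN (placeholder repo)
      model: "https://huggingface.co/<user>/<repo>",
      model_id: "groupchat-qwen2.5-0.5b-q4",
      // Compiled WebGPU model library produced by the MLC conversion step
      model_lib: "https://huggingface.co/<user>/<repo>/resolve/main/model.wasm",
    },
  ],
};

// In the browser (not runnable outside it):
// import { CreateMLCEngine } from "@mlc-ai/web-llm";
// const engine = await CreateMLCEngine("groupchat-qwen2.5-0.5b-q4", { appConfig });
// const reply = await engine.chat.completions.create({
//   messages: [{ role: "user", content: "what's for dinner" }],
// });
```

Once the weights are cached by the browser, subsequent loads skip the 250MB download entirely.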
@_brimtown
Tim Brown
1 day
training was easy - hosting, weirdly, not so much? there's no good "vercel but for stupid fine-tunes" (@RhysSullivan you should pitch this). I also wanted to spend $0. enter: @tqchenml & co's WebLLM project https://t.co/HqtEMHQ7be
chat.webllm.ai
Chat with AI large language models running natively in your browser
1
0
1
@_brimtown
Tim Brown
1 day
an hour of training and $2 later, I had created possibly the world’s most misaligned LLM. like if chatgpt was a freshman
1
0
0
@_brimtown
Tim Brown
1 day
fine-tuning is SO MUCH EASIER than it sounds! @UnslothAI 's Colab notebooks made it dead simple. If you're a product engineer, React is like, way more complicated than this. you just need a JSON file with 10s-1000s of rows shaped like this
1
0
1
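A hedged sketch of what one of those rows might look like, using the common OpenAI-style chat schema that most fine-tuning stacks (Unsloth included) can consume. The field names and the sample messages are illustrative, not taken from the actual dataset:

```typescript
// Hypothetical shape of one fine-tuning example in chat format.
interface TrainingTurn {
  role: "system" | "user" | "assistant";
  content: string;
}

interface TrainingRow {
  messages: TrainingTurn[];
}

// One row: a chunk of group chat as the prompt, the next message as the
// target the model should learn to produce. Contents are made up.
const row: TrainingRow = {
  messages: [
    { role: "user", content: "mike: anyone down for dinner\nsarah: i could eat" },
    { role: "assistant", content: "tim: ok but not the dining hall again" },
  ],
};

// A training file is just many of these, one JSON object per line (JSONL).
const jsonlLine: string = JSON.stringify(row);
```

The whole data-prep step is basically "export the chat, window it into prompt/response pairs, dump JSONL".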
@_brimtown
Tim Brown
1 day
backstory: I had been looking for an excuse to fine-tune a model, and @johnschulman2 's LoRA blogpost tipped me over the edge. had done some fine-tuning for work before (see @simonw thread) but wanted to get my hands dirty and build a whole app around one
@simonw
Simon Willison
26 days
Getting <500ms response times for a UI that updates as you type seems like a very strong justification for fine-tuning a small, fast custom model
1
0
0
@_brimtown
Tim Brown
1 day
I decided to find out. the result is a LoRA fine-tuned Qwen2.5 0.5B, running directly *in-browser* on your device (even phones!) if you’re feeling brave, you can even chat with it yourself (ios26 required): https://t.co/fHFkV9X71X
infinitegroupchat.com
Join a simulated groupchat powered by an LLM trained on a college groupchat. Runs in real-time using WebLLM and a fine-tuned Qwen model, entirely in browser.
1
0
0
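The "simulated groupchat" structure can be sketched as a loop that repeatedly asks the model for the next message and appends it to the transcript. This is a guess at the app's shape, not its actual code; the generator is stubbed, where the real app would call a WebLLM chat completion:

```typescript
// Generator signature: given the transcript so far, produce the next message.
// In the real app this would wrap an in-browser WebLLM completion call.
type Generate = (transcript: string[]) => string;

// Hypothetical core loop: run the chat forward a fixed number of turns.
function runGroupchat(generate: Generate, seed: string[], turns: number): string[] {
  const transcript = [...seed];
  for (let i = 0; i < turns; i++) {
    transcript.push(generate(transcript));
  }
  return transcript;
}
```

Keeping the loop pure like this (generator injected, transcript in/out) is also what makes a localStorage-persisted, backend-free app straightforward.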
@_brimtown
Tim Brown
21 days
The optimal agent management UX was figured out by Dwarf Fortress decades ago
@max__drake
max drake
21 days
i mean...
0
0
4
@_brimtown
Tim Brown
23 days
Updog is the kind of product where it’s hard to imagine it ever being named anything else. lots of interesting product, ML, and dataviz work went into this one!
@RhysSullivan
Rhys
23 days
holy shit datadog shipped Updog
0
0
13
@RhysSullivan
Rhys
23 days
holy shit datadog shipped Updog
@_brimtown
Tim Brown
23 days
@RhysSullivan Here you go:
73
73
2K
@simonw
Simon Willison
26 days
Getting <500ms response times for a UI that updates as you type seems like a very strong justification for fine-tuning a small, fast custom model
@_brimtown
Tim Brown
26 days
@simonw We built Datadog’s natural language querying features (variant of text->SQL) using a fine-tuned model, replacing prompted OpenAI models. We did this explicitly for latency and cost purposes: the feature actually translates as you type in the UI, which required both <500ms
3
4
97
@samdape
sam
4 months
🪼🪼🪼
28
461
4K
@steveruizok
Steve Ruiz
8 months
Once you start designing UI like this, everything else just seems like an absolute snooze fest.
37
33
1K
@_brimtown
Tim Brown
11 months
favorite new yorker in favorite new york publication - read @voberoi in @HellGateNY !
@HellGateNY
Hell Gate *subscribe today!*
11 months
Software engineer @voberoi's https://t.co/yLrTNT3GNL is making keeping tabs on City Council meetings a little less painful. https://t.co/pLHFkPMEDQ
1
0
2
@_brimtown
Tim Brown
1 year
i love ratgptouille
1
1
8
@_chenglou
Cheng Lou
1 year
I did a small talk on second-order effect, emergent phenomenon and generative UI at @aiDotEngineer last week: https://t.co/UIDMeGs7Ag
3
7
56