
Toran Billups (@toranb)
Decision Making With Feedforward Multilayer Perceptrons
Des Moines, IA · Joined September 2008
2K followers · 3K following · 117 media · 8K statuses
RT @jeremyphoward: ModernBERT is available as a slot-in replacement for any BERT-like model, with both 139M param and 395M param sizes. It….
huggingface.co
This blog post from the team @bitcrowd is an outstanding resource for those who want to leverage SOTA embeddings with bumblebee. Easily the highest value resource I've seen on the subject yet. This post in particular covers the path from zero to Jina v2
bitcrowd.dev
When directly compared with OpenAI's 8K model text-embedding-ada-002, the jina-embeddings-v2 stand out in terms of quality. Their long context length is a game changer. Don't let a missing model...
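The Bumblebee path that post describes can be sketched roughly as follows. This is a minimal sketch, not the post's actual code: the repo id is assumed from the tweet's mention of Jina v2, and Jina's custom architecture may require extra loading options in practice.

```elixir
# Sketch: load a Hugging Face embedding model with Bumblebee and serve it
# via Nx.Serving. The repo id below is an assumption based on the tweet's
# mention of Jina v2; Jina models may need additional options to load.
repo = {:hf, "jinaai/jina-embeddings-v2-base-en"}

{:ok, model_info} = Bumblebee.load_model(repo)
{:ok, tokenizer} = Bumblebee.load_tokenizer(repo)

# Build a text-embedding serving; :l2_norm yields unit-length vectors,
# which makes cosine similarity a plain dot product downstream.
serving =
  Bumblebee.Text.text_embedding(model_info, tokenizer,
    embedding_processor: :l2_norm
  )

# Run one document through the serving to get its embedding tensor.
%{embedding: embedding} = Nx.Serving.run(serving, "long context documents")
```

The long context length the card mentions (8K tokens) is the draw over older embedding models, since whole documents can be embedded without chunking as aggressively.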
RT @garybernhardt: Me at 25: Tests should be 5ish lines! One assert per test!. Me at 40: This test is 56 lines long with 11 asserts. If I b….
RT @_philschmid: Data is all we need! 💎 @Alignment Labs AI just released Buzz, an instruction dataset with 3.13 million rows and a total of….
RT @paraxialio: New talk from @toranb, Adventures with Synthetic Data (lessons learned building a chatbot from my SMS dataset), presenting….
My favorite podcast of 2024! @peterg021 absolutely levels the pod with such a unique blend of business and machine learning from his experience in product. Thanks for sharing in such detail, this content stretched me in a few dimensions 🤯.
Just wrapped up this super enlightening episode of the MLOps Community podcast featuring Peter Guagenti, a total tech guru who's really shaping the AI scene in software development.
RT @charliebholtz: Introducing YouTune — fine tune image models on YouTube videos. > python tune.py <youtube-url>. • downloads video.• sc….
RT @josevalim: Tomorrow marks 13 years since the first commit to the Elixir repo. And today we celebrate by announcing that Elixir is, offi….
Easily the most valuable, grounded and applicable podcast of the year for me. Jump ahead to 39:22 for the recommendation system pro-tips that include upstream tuning around retrieval and ranking. Pure gold from @BEBischof.
🆕 pod: with @BEBischof of @_hex_tech!. On putting AI Magic into Notebooks, and how RAG is actually a recommendation systems problem. Also an @AAAzzam hot take: LLMOps is more like an "iron mine" than a "gold rush"!
Fine tune Mistral 7B with a single RTX 4090 and serve it with Nx! Big thanks to @jon_durbin and @sean_moriarity. Check it out! 👇. #MyElixirStatus
RT @philipbrown: This is a really great intro to machine learning and fine-tuning in @elixirlang by @toranb 🙌 #myel….
RT @chrisalbon: The next $10 billion product isn’t going to come from your mile deep knowledge of ML theory. It is going to come from ML ap….