🎉 Big secret! We’ve been running on @AMD Instinct™ GPUs in production for over a year.
🤝 Thrilled to now partner with AMD to offer GPU-rich enterprise LLMs!
🥳 LLM Superstation – combining Lamini's LLM infrastructure with AMD Instinct.
👉 Learn more:
Excited to announce a HUGE secret with @LisaSu: @LaminiAI has been building LLMs on @AMD GPUs *in production* for over a year!
We’ve made running LLMs on AMD super easy and a highly competitive option through our LLM Superstation, available now at ~10x lower cost than
Training multiple LLMs taking forever? 😤
Costing you a fortune?💸
Enter PEFT! Get ready to multiply!! 🚀
1000 models, just 1 machine! 🤖
3 months of training -> 3 milliseconds ⚡️
Just one API call, load and train with Lamini!
👉
👀
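For readers curious what “1000 models, just 1 machine” looks like in practice, here’s a minimal PEFT sketch using the open-source Hugging Face peft library (illustrative only, not Lamini’s internal stack): a small LoRA adapter wraps a shared base model, so each “new LLM” is just a few megabytes of extra weights.

```python
# Minimal LoRA/PEFT sketch with the Hugging Face `peft` library.
# Illustrative only; Lamini's own training stack is not shown here.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Any causal LM works; Llama-2 is used here purely as an example (it is gated on the Hub).
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

lora_cfg = LoraConfig(
    r=8,                                   # low-rank adapter dimension
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],   # attach adapters to the attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_cfg)      # base weights stay frozen
model.print_trainable_parameters()          # typically well under 1% of the full model
```

Because only the tiny adapter is trained, many adapters can share one frozen base model on a single machine, which is the core trick behind the multiplier above.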
We're live! Lamini makes it easy & developer-friendly to rapidly train custom LLMs! Fine-tune, RLHF, you name it. All with just a few lines of code. Swap out foundation models in a single line. Don’t worry about their different prompts. We'll handle it.
Getting structured output from an LLM can be a pain 🤦♀️ Our type system makes it easy to connect your data to an LLM 🎉 Just like another stage in your data pipeline. Play here 👉
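The exact type system isn’t shown in this post, so here’s a hedged sketch of the idea using pydantic and hypothetical names: a typed record goes in, the LLM sits in the pipeline like any other stage, and a typed, validated record comes out.

```python
# Hypothetical sketch of "the LLM as a typed pipeline stage" (not Lamini's actual API).
from pydantic import BaseModel  # pydantic v2

class SupportTicket(BaseModel):      # input record from your data pipeline
    subject: str
    body: str

class TicketSummary(BaseModel):      # structured output you want back
    category: str
    priority: int
    summary: str

def summarize(llm_json_call, ticket: SupportTicket) -> TicketSummary:
    """One pipeline stage: typed record in, typed record out."""
    raw = llm_json_call(ticket.model_dump())     # any client that returns a JSON dict
    return TicketSummary.model_validate(raw)     # validation enforces the output schema
```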
Just in!!! @LaminiAI Cofounder & CTO @GregoryDiamos (key CUDA contributor) shares how we built an optimized LLM finetuning system on @AMD's ROCm AI stack. Leveraging @AMDInstinct & optimizations for major speedups! 🚀
👉 More in-depth technical details:
📢Exciting news! In a few days, we’ll be releasing “Finetuning LLMs”, co-created by our CEO @realSharonZhou and Andrew Ng.
In this 1-hour course, you’ll learn how to finetune thousands of new LLMs within minutes!
👀A sneak peek
📣Thrilled to release “Finetune LLMs,” co-created by our CEO @realSharonZhou & @AndrewNg!
👉 Enroll for free now!
🥳 Share what you build with us @LaminiAI. We'll showcase the best Lamini llamas (LLMs) with the world!
New short course on Fine-tuning LLMs! Many developers are moving beyond only prompting, to also fine-tuning LLMs - that is, taking a pre-trained model and training it further on your own data, which can deliver superior results inexpensively. In this course, @realSharonZhou, CEO
📢 Exciting news: Introducing custom fine-tuned models with LoRA in your environment!
Goal: Get you training larger models faster
Save: Time and compute
🌟 Plus, we've got you covered with a hosted playground ➡️ @huggingface
Excited to announce: Finetuning for the people!
👉 It’s free, on small LLMs
👉 It’s fast, 10-15 minutes
👉 It’s furious, putting GPUs in a frenzy
GitHub repo:
Blog:
🧵
Simple steps to prepare your data and train an LLM 📚
1️⃣ Define the LLM interface
2️⃣ Find relevant data
3️⃣ Load data into types, then load the types into the LLM
4️⃣ Generate data
5️⃣ Train the LLM
Each step here 👉🏻
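As a rough end-to-end illustration of the five steps above (hypothetical helper names, not the Lamini SDK), the workflow might look like this:

```python
# Hypothetical skeleton for the five steps above; all names are illustrative.
import json
from dataclasses import dataclass

@dataclass
class Question:   # 1. Define the LLM interface as input/output types
    text: str

@dataclass
class Answer:
    text: str

def load_examples(path):
    # 2-3. Find relevant data and load it into the types above.
    with open(path) as f:
        return [(Question(r["q"]), Answer(r["a"])) for r in json.load(f)]

def augment(examples, generate_answer):
    # 4. Generate additional data from the seed examples (generate_answer is any LLM call).
    return examples + [(q, generate_answer(q)) for q, _ in examples]

def train(llm, examples):
    # 5. Train the LLM on the typed pairs (llm is any client exposing a train() method).
    llm.train(data=[{"input": q.text, "output": a.text} for q, a in examples])
```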
📢Excited to share that our API endpoint for model inference is now publicly available!🚀 Effortlessly integrate open-source LLMs into your applications, regardless of the programming language or platform you're working with. 🌐Access our API endpoint 👉
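Since the endpoint URL and request format aren’t given in the post, here’s a hedged example of what language-agnostic inference over HTTP typically looks like; the URL, header, and field names below are placeholders, not the documented API.

```python
# Hypothetical REST call; the real endpoint, auth header, and payload fields may differ.
import requests

resp = requests.post(
    "https://api.example-llm-host.com/v1/completions",   # placeholder URL
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "model": "meta-llama/Llama-2-7b-chat-hf",
        "prompt": "Summarize our Q3 support tickets.",
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json())
```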
@HamelHusain We have a drop-in open-source replacement, including function calling!
We have both a hosted version and a version for you to run on your own hardware (NVIDIA or AMD).
Struggling with creating large datasets? 🤯
Lamini augmenters automatically generate high-quality data from <100 examples! 🥳
Install our Python library, augment your dataset, and make training magic today!!🪄
Get started:
Docs:
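The augmenter API itself isn’t shown here; as a sketch of the underlying idea (hypothetical names throughout), an augmenter takes a small set of seed examples and asks an LLM to produce labeled variations:

```python
# Hypothetical data-augmentation loop; `generate` stands in for any LLM completion call
# that returns a rewritten Q&A dict like {"q": ..., "a": ...}.
def augment_dataset(seed_examples, generate, n_variants=5):
    """Expand <100 seed (question, answer) pairs into a larger training set."""
    augmented = list(seed_examples)
    for question, answer in seed_examples:
        for _ in range(n_variants):
            prompt = (
                "Rewrite this Q&A pair with the same meaning but different wording. "
                "Return JSON with keys q and a.\n"
                f"Q: {question}\nA: {answer}"
            )
            pair = generate(prompt)
            augmented.append((pair["q"], pair["a"]))
    return augmented
```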
Our first 2024 startup cohort is hard at work building LLMs on Lamini 💪 🌶️
We are now accepting applications for our next batch in March. If you are an early-stage startup building LLM applications and need compute, please apply now! 🙌 🥳
Try our finetuning demos! See the magic of Lamini in a few clicks! 😎
🔮 Finetune your custom LLM:
🦙 Llama-2 PEFT:
🦙🦙 Another Llama-2 finetuning:
What other finetuning demos do you want to see? 🤔
To prompt or to fine-tune? 🤔
What are the differences? 💭
Which is the best to improve your LLM? 📈
We’re here to demystify things. 🔍
Plus, a sneak peek into our next big thing 👀
👉
We're #hiring! Seeking software engineers eager to work directly with clients, with a mix of technical skills, entrepreneurial mindset, and product intuition.
If you're an engineer who loves working with customers, this is your dream job!
👉 Apply now
Introducing Lamini Pro! For just $99/mo, you get it ALL:
Llama 2 finetuning, JSON outputs, up to 10k requests, hypertuning, RAG, full SDK access, hosted on Lamini, and more 🤩🚀
Focus on building your own LLMs without worrying about 💸🤑
👉 Subscribe now:
A technical deep dive into how we set up multi-node training on AMD GPUs and speed up LLM training by 1,000x or even 10,000x! Led by our amazing @ayushis4026403
👉
Excited to share how we’re scaling to thousands of GPUs in production!
…with multi-node LLM training, on not just Nvidia but @AMD GPUs
Details 👉
Great blog by our team, led by Ayushi 💅
tl;dr
- Push the limits of training LLMs on enterprise data
Try our LLM SDKs, fresh and delicious, loved by our designer👩🏻🎨
👉
Docs to QA LLM: Chat about your docs!
LLM Classifier: Train a new classifier with just a prompt!
LLM Routing Agent: Using tools with just prompts!
LLM Operator: Build your own operator!
Excited to announce that you can easily specialize LLMs with your data, all inside your @Databricks cluster! We’re officially partnering 🦙+ 🧱= 🚀
✅ Your data, kept private
✅ Your infrastructure
✅ Your LLM
👉
👉
Happy Monday! Are you having fun with our fast, free, and furious finetuning?🚀 We've taken it to the next level - easily manage your training, check progress, see eval results, and test your model in a beautiful interface at 🚄
ChatGPT giving irrelevant answers? 😤
Dream of an LLM that truly understands your data?💡
Lamini’s Domain Adaptation can help you make any LLM an expert in your domain with just 3 lines of code:
1⃣model.load_data(data)
2⃣model.train()
3⃣model.evaluate()
👉
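Filling in the three lines above with hedged surrounding code (the constructor name and data format are assumptions, not the exact SDK), a full run might look like this:

```python
# Hedged sketch around the three calls in the post; constructor and data format are assumed.
data = [
    {"input": "What does error E1234 mean in our firmware?",
     "output": "E1234 indicates a failed sensor calibration; rerun the self-test."},
    # ... more domain-specific input/output pairs
]

model = LLM("meta-llama/Llama-2-7b-chat-hf")    # hypothetical constructor name
model.load_data(data)                            # 1. attach your domain data
model.train()                                    # 2. finetune on it
results = model.evaluate()                       # 3. compare against the base model
print(results)
```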
Woohoo! Next Friday, Nov 10, Lamini's the best & the only @realSharonZhou will be speaking at this year's @AngelList Confidential!
RSVP today to join us for an EXCITING panel discussion about breaking barriers with AI 🤩
👉
Lamini empowers every enterprise and developer to build their own private LLMs easily and quickly, with higher performance than general LLMs! 💪
Sign up now to get more exclusive updates from the Lamini team!🔮
🚨 Tiny errors from LLMs could mean disaster in critical domains.
🥳 Lamini unveils "Photographic Memory" suite to benchmark LLM precision on specialized data across healthcare, finance, and more.
👉
Finetuning your own LLM can solve real problems by reducing hallucinations and preventing data leakage.
Our short course, co-created with @LaminiAI, helps you learn to fine-tune LLMs in a matter of minutes.
Learn more about it:
Unlike software engineering, prompt engineering requires a unique workflow.
In tomorrow’s live workshop, @LaminiAI’s CEO Sharon Zhou will help us demystify prompt engineering for open large language models.
Learn more and register here:
No more headaches writing parsers!🤯
Lamini now guarantees valid JSON output!🥳
Our very own @SakshamConsul shares challenges with parsers & prompting, how we designed our schema generator, and 👀 more spicy technical details🌶️
👉
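The blog has the full design; as a rough sketch of the core idea only (not Lamini’s implementation), a schema-constrained generator never lets the model pick a token that would break the target JSON schema, so the final string is guaranteed to parse. Every callable below is a hypothetical hook.

```python
# Conceptual sketch of schema-constrained decoding; all callables passed in are hypothetical.
import json

def generate_valid_json(next_token_logits, allowed_tokens, decode, schema, max_tokens=256):
    """Only ever emit tokens that keep the output a valid prefix for `schema`."""
    output_ids = []
    for _ in range(max_tokens):
        logits = next_token_logits(output_ids)        # model's scores for the next token
        legal = allowed_tokens(output_ids, schema)    # token ids that don't break the grammar
        if not legal:                                 # schema fully satisfied: stop
            break
        output_ids.append(max(legal, key=lambda t: logits[t]))   # best legal token
    return json.loads(decode(output_ids))             # parses by construction
```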
🧐 Non-fine-tuned LLM vs. Fine-tuned LLM
An untrained LLM has no understanding of the world. It is completely random. The first thing we need to do is pre-training. Then, we get a base LLM (non-fine-tuned). After that, we can fine-tune the base LLM. The figure shows the
The quote about @AMD's ROCm platform having "software parity" with @nvidia's CUDA platform for large language models came, interestingly, from a former Nvidia CUDA software architect who co-founded the startup.
Thrilled to partner with @Nutanix! 🤝
"Together, we make enterprise #LLMs easier by delivering AI-ready infrastructure to help organizations simplify operations, maintain data control, and accelerate #AI adoption."
- @gregorydiamos, Co-Founder, Lamini
Finetune your own LLMs in < 15 mins! 🚀🚀
Pro tip: You can now also share your trained models with others using the "Share" button on the UI to generate a shareable link so others can run inference on your model :)
Happy fine-tuning! 🎉🦙
Thrilled to release an easy, fast way to finetune LLMs.
Now anyone can iterate on what finetuning feels like on a toy example🧸
This is the *path* to turning an LLM into an expert on all your data, privately.
Run it in a few minutes on our Colab:
The secret is out. We are ecstatic to see the curtain lifted on @LaminiAI Superstation, powered by AMD Instinct. It's so easy that we are also a customer! The team can't wait to see what enterprise LLM developers will tune and personalize with their data. 🤝🤩🌟
📢When it comes to model training, garbage in = garbage out. That is why Lamini is thrilled to announce that dataset filters are now available as part of our Python package! 🚀 Here is the link for access 👉
Thank you for the shoutout, @DrStarson! We're glad you enjoyed these courses. Please do let us know if there are any specific topics you want to learn. Stay tuned for more learnings 🦙 🙌 😎
Tens of thousands of students have already enrolled.
Join them! Master finetuning LLMs! 🚀
Enroll now! (free for a limited time 😎)
👉
Beyond the toy: for larger models & production use, we offer paid plans.
But the free version is plenty powerful to run a bunch of experiments and get a feel for finetuning.
Share our free-tier GPUs nicely please ♥️
Give it a spin:
Lamini ditches Nvidia in favour of AMD
@LaminiAI, an AI startup, is using AMD GPUs instead of the more popular Nvidia GPUs to run large language models (LLMs) like Llama-2 for customers
🤯 Finetuned Question Answering 🤯
Made a small POC on @Replit this morning. Finetuning an LLM with Tesla's Q2 2023 earnings report. It's super fast, nimble, and accurate in its responses.
Demo:
A prod-ready version will be shipped in Superagent v0.0.1
You can do the same things as GPT-4 Turbo on every open-source LLM today. @LaminiAI does it all: 🚀
- Structure: Return valid JSON
- Speed: Make multiple function calls at once
- More knowledge: Retrieval built-in, with finetuning
- Longer context: Extend context windows (~128k,
How it works:
- Load your Q&A data
- Call llm.train()
- 💥Your LLM improves on your domain or style! Repeat to debug. AI is iterative!
Training unlocks an LLM's full potential: it’s what the big AI labs like @OpenAI use to get their LLMs to learn about the whole internet!
It's #Snowday! Lamini has integrated with @SnowflakeDB 🦙❄️
Now, you can easily deploy & finetune large language models inside Snowflake 🚀
👉 See a demo:
👀 Read Snowflake's announcement:
@bentossell @LaminiAI - $99/mo for several custom finetunes/LoRAs.
We also have customers doing continued pretraining and pretraining from scratch (more than $99/mo, less than 3M 🙃)
The world of language models is fast-evolving.
Join innovators amid the rise of domain-aware LLMs and hear real-world advice on leveraging language models in commercial applications.
Tap into the conversation w/ @LangChainAI, @LaminiAI, @GenAICollective, @UnstructuredIO & more
Today we’re releasing Code Llama, a large language model built on top of Llama 2, fine-tuned for coding & state-of-the-art for publicly available coding tools.
Keeping with our open approach, Code Llama is publicly-available now for both research & commercial use.
More ⬇️
Guarantee Valid JSON Output with @LaminiAI
----
Why structured JSON output is so hard 🤔
LLMs are largely based on the transformer architecture, which uses an auto-regressive generator. The transformer treats text as a sequence of tokens and generates one token at a time. The LLM
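The excerpt cuts off above, but to make the "one token at a time" point concrete, here is a minimal greedy decoding loop with a small open model (gpt2 is used purely as an example). Nothing in the loop forces braces or quotes to stay balanced, which is exactly why raw generations can produce broken JSON.

```python
# Minimal auto-regressive (greedy) decoding loop; gpt2 is just a small example model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok('{"name": ', return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(20):                                   # one new token per step
        logits = model(ids).logits[:, -1, :]              # scores for the next token only
        next_id = logits.argmax(dim=-1, keepdim=True)     # greedy pick
        ids = torch.cat([ids, next_id], dim=-1)           # nothing here enforces valid JSON

print(tok.decode(ids[0]))
```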
@__tinygrad__ Depends on your definition of quiet; we have some humming away in our office, in addition to our data center.
We're focused on enterprise/startup customers, and are really excited for what you build :D
We have a hosted version that you don't have to be hard of hearing to use.
@_tonygaeta @realSharonZhou @phodaie BTW, all new users get free credits to try Lamini, with all the same features as Pro! If it's not enough or you need any help, we're happy to support you. Let us know: info@lamini.ai
With hands-on guidance from @realSharonZhou, you will:
✅ Master the concepts
✅ Familiarize yourself with best practices
✅ Finetune an LLM using your own data
✅ Do it all on your own infra for privacy (with @LaminiAI)