Pruna AI

@PrunaAI

Followers: 1K · Following: 277 · Media: 170 · Statuses: 307

High-performance AI models that combine speed, quality, and specialization for key use cases. 🌱 Make AI accessible and sustainable for everyone!

Munich & Paris
Joined April 2023
@PrunaAI
Pruna AI
6 hours
See you next Tuesday at @NeurIPSConf. We'll be at kiosk 7! Make sure to stop by, discover our latest research, and pick up a sticker!
12
3
23
@PrunaAI
Pruna AI
1 day
To celebrate Flux 2 running at full speed after our collaboration with @bfl_ml and @replicate, we're launching the Flux 2 x Pruna Contest on X! 🎨 Create an image using any Flux 2 model on BFL or Replicate 🏆 Prizes: 150€ per category +
37
34
113
@PrunaAI
Pruna AI
2 days
Z-model from @AlibabaGroup now runs in less than 2 seconds on @replicate, optimized by @PrunaAI! We've accelerated inference so you can run Z-model faster, cheaper, and greener. And this is just the beginning, even more speed-ups are coming soon! 👉 Try it on Replicate:
9
14
106
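For anyone who wants to try the optimized endpoint, here is a minimal sketch of calling a model on Replicate with its official Python client. The slug "prunaai/z-model" is a hypothetical placeholder, not a confirmed identifier; use the model page on Replicate for the real one.

```python
# Minimal sketch of running a model on Replicate with the official Python client.
# Requires `pip install replicate` and a REPLICATE_API_TOKEN environment variable.
import replicate

output = replicate.run(
    "prunaai/z-model",  # hypothetical slug; replace with the actual identifier from Replicate
    input={"prompt": "a mountain lake at sunrise, photorealistic"},
)
print(output)
```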
@PrunaAI
Pruna AI
2 days
Pruna is built … • with Pretzels & Croissants 🥨 🥐 • in Paris & Munich & in the metaverse 🇩🇪 🇫🇷 👾 So, we moved our offices online to @gather_town and things have been great! Cool things we've seen happen: • more engaging and fun global meetings • great excuses for
4
0
14
@PrunaAI
Pruna AI
3 days
Flux 2 from @bfl_ml is out at maximum inference speed on @replicate with @PrunaAI optimization! - The novelty: Stunning 4MP quality with new state-of-the-art realism and high-precision control with up to 8 reference images. This is a huge milestone for Black Forest Labs! - The
4
12
37
@PrunaAI
Pruna AI
4 days
โฐ ๐—ฆ๐˜๐—ฎ๐—ฟ๐˜๐—ถ๐—ป๐—ด ๐—ฎ๐˜ ๐Ÿฒ:๐Ÿฌ๐Ÿฌ ๐—ฝ๐—บ ๐—–๐—˜๐—ง! Learn from @chrismdesa from Cornell University how simple linear error feedback unifies two key challenges in modern ML: efficient training and efficient inference. In todayโ€™s webinar, Chris will present: โ€ข ๐—š๐—ฟ๐—ฎ๐—•
0
0
4
@PrunaAI
Pruna AI
4 days
🚀 Excited to see you at our AI Efficiency Meetup in Munich on December 10th at 6:30 PM CEST! 📅 Agenda • 6:30 PM: Doors open • 7:00–8:00 PM: Talks featuring, • 8:00–9:00 PM: Hang out with like-minded developers & Food 🍕 👉
0
1
2
@PrunaAI
Pruna AI
7 days
๐Ÿ‹๏ธโ€โ™€๏ธย A proper warmup is important, especially for compiled models, but Pruna can now do without it. torch.compile can accelerate your model even with LoRA. But the warmup and hot swapping can be rather slow. ๐ŸŒย  Our improvements: ๐Ÿ”ง Portable Compilation: Save compiled artefacts
0
0
3
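To make the warmup cost concrete, here is a minimal plain-PyTorch sketch: with torch.compile, the first call pays for compilation and later calls reuse the compiled code. This is generic torch.compile behaviour, not Pruna's portable-compilation feature.

```python
# Plain torch.compile timing sketch: the first call triggers compilation (the
# "warmup"); subsequent calls reuse the compiled code and run much faster.
import time
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(512, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 512),
)
compiled = torch.compile(model)
x = torch.randn(8, 512)

for label in ("first call (compiles)", "second call (cached)"):
    start = time.perf_counter()
    compiled(x)
    print(f"{label}: {time.perf_counter() - start:.3f}s")
```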
@PrunaAI
Pruna AI
8 days
We kicked off the @dotaiconf with a burst of creativity, designing new Pruna AI stickers. Developers, researchers, and ML engineers teamed up to create a merch design that captures what we care about: efficient, scalable, and fun AI. Some
3
0
3
@PrunaAI
Pruna AI
9 days
🎓 Learn from @chrismdesa how simple linear error feedback unifies two key challenges in modern ML: efficient training and efficient inference. In this webinar, on Nov. 24th, Chris will present: • GraB (Gradient Balancing) – a new way to select training
0
1
1
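For context on the "linear error feedback" idea, below is a toy sketch in the spirit of error-feedback compressed training: the part of the update lost to compression is carried over and added back at the next step. This is a generic illustration under that assumption, not Chris De Sa's GraB algorithm or the webinar's material.

```python
# Toy linear error feedback with sign compression (EF-SGD style):
# compress (grad + carried error), apply it, and carry the residual forward.
import torch

def ef_sign_step(param, grad, error, lr=0.05):
    corrected = grad + error                # add back the error carried from last step
    scale = corrected.abs().mean()          # simple magnitude scale for the 1-bit update
    compressed = scale * corrected.sign()   # sign (1-bit) compression of the update
    param -= lr * compressed                # apply the compressed update in place
    return corrected - compressed           # new error buffer (what compression lost)

# Tiny least-squares example: fit w to y = X @ w_true using compressed updates.
torch.manual_seed(0)
X, w_true = torch.randn(128, 16), torch.randn(16)
y = X @ w_true
w, error = torch.zeros(16), torch.zeros(16)
for _ in range(500):
    grad = X.T @ (X @ w - y) / len(y)
    error = ef_sign_step(w, grad, error)
print(f"final loss: {((X @ w - y) ** 2).mean():.4f}")
```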
@Bertrand_Charp
Bertrand Charpentier
11 days
โšก๏ธ AI is slow, expensiveโ€ฆ and bad for the environment! At @dotaiconf, I shared how energy utilization interact with AI models โ€” and how we can make it much more efficient. ๐Ÿ’กWhat drives AI progress? Energy utilization drives AI breakthroughs like transformer architectures,
0
2
5
@allnoteson
Andreas Jansson
14 days
2 seconds for 4MP is wild. Great work from the @PrunaAI team!
2
3
10
@nifleisch
Nils Fleischmann
14 days
While everyone is waiting for Flux 2 from @bfl_ml, we built an ultra-fast 4 MP model based on Flux-Schnell. It only takes 2 seconds per generation. You can try it now on @replicate.
3
4
24
@PrunaAI
Pruna AI
16 days
🎓 Learn from @chrismdesa how simple linear error feedback unifies two key challenges in modern ML: efficient training and efficient inference. In this webinar, on Nov. 24th, Chris will present: • GraB (Gradient Balancing) – a new way to select training
0
1
3
@PrunaAI
Pruna AI
21 days
Do you want to know what we can do for models on your costly H100s?! From March to May, we did some pretty cool things on @replicate • FLUX runs in less than a second (0.5s) • the HiDream model series runs 1.3x to 2.5x faster • Wan 2.2 runs on 8 GPUs instead of 1, making it
0
1
18
@MinetteKaum
Minette Kaunismäki
22 days
Huge thank you to everyone who joined us yesterday to create a new merch design for @PrunaAI! The design will be launched soon, stay tuned 👀 If you're at the @dotaiconf today, come say hi. And don't miss @Bertrand_Charp's talk at 2:50 PM, see you there! 🚀
0
3
2
@PrunaAI
Pruna AI
25 days
We want to congratulate @bria_ai_ on releasing FIBO, their frontier image generation model, which was trained on structured JSON for precise, controllable generation. Some cool things: - 5x faster and powered by our optimisation expertise - JSON prompts? Yes, it gives you more
2
3
12
@PrunaAI
Pruna AI
28 days
🎃 Nothing scarier than slow and costly models. Joking but not joking. Happy Halloween! Pruna is ready for it. Boooo. Spooky!
0
1
6
@PrunaAI
Pruna AI
1 month
Get your laptops ready and put on your optimisation gloves for a hands-on session in Paris on Nov 5 (18:30–21:00) with @dotaiconf! We're running a live session on model optimisation, including quantisation, compilation, LoRA tricks, and more. Get ready to get hands-on! 👩‍💻 Ideal
0
2
5
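For readers curious what such a session might cover, here is a minimal sketch of one standard technique named above: post-training dynamic quantisation of Linear layers in PyTorch. This is generic torch.ao.quantization usage, an assumed example rather than the workshop's actual material.

```python
# Post-training dynamic quantisation sketch: Linear weights are stored in int8 and
# dequantised on the fly, shrinking the model and often speeding up CPU inference.
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(256, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
)

quantized = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 256)
print(quantized(x).shape)  # same call interface as the original model
```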