
Johanna Sommer
@j_m_sommer
Followers: 216 | Following: 688 | Media: 3 | Statuses: 53
ML Research Engineer @ Pruna AI | PhD student @TU_Muenchen
Munich, Germany
Joined July 2018
Say hello to Minette Kaunismäki, the newest member of our Developer Advocacy team! From her work as a Data Scientist to organising AI tinkerers' events in Paris, she's ready to help developers explore, learn, and connect. During her time as a student, she played American
0
2
7
Which video model wins?
• Wan 5B (optimized by @PrunaAI, running on @replicate): just $0.025, 12s to generate
• Lucy 5B (the new @DecartAI model, generated on @FAL): $0.15 (6x more expensive), 12s to generate
4
3
23
Starting a new series: Paper Highlights! I'll share research papers about (efficient) AI I've read, including their code when available.
Paper Highlight #01: "Olica: Efficient Structured Pruning of Large Language Models"
1
2
5
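Since the highlighted paper is about structured pruning, here is a minimal, generic sketch of what structured pruning looks like in plain PyTorch (zeroing whole output rows of a weight matrix by their L2 norm). This is not Olica's method, only the basic operation such papers build on; the layer sizes and pruning ratio are arbitrary.

```python
import torch
from torch import nn
from torch.nn.utils import prune

layer = nn.Linear(256, 128)

# Zero out the 30% of output neurons (rows of `weight`) with the smallest L2 norm.
prune.ln_structured(layer, name="weight", amount=0.3, n=2, dim=0)

# Make the pruning permanent (drops the mask and the stored original weights).
prune.remove(layer, "weight")

rows_kept = (layer.weight.abs().sum(dim=1) > 0).sum().item()
print(f"{rows_kept}/{layer.out_features} output neurons remain non-zero")
```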
We are really happy to collaborate with @replicate to bring you a 3s per generation Qwen-Image-Edit!
The much anticipated image editing model from Qwen is now on Replicate https://t.co/oVMH223VtY Edit images in just 3 seconds, for $0.03 per image. We've worked with Pruna to deliver you the fastest way to use Qwen Image Edit.
2
7
38
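For reference, calling an endpoint like this from Python goes through the standard Replicate client; a minimal sketch is below. The model slug "qwen/qwen-image-edit" and the input field names are assumptions based on Replicate's usual conventions, and a REPLICATE_API_TOKEN must be set in the environment.

```python
import replicate  # pip install replicate; reads REPLICATE_API_TOKEN from the environment

output = replicate.run(
    "qwen/qwen-image-edit",  # assumed model slug, not confirmed in the tweets
    input={
        "image": open("input.png", "rb"),  # the image to edit
        "prompt": "Make the text 3D and floating on a city street",
    },
)
print(output)  # URL(s) of the edited image
```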
Qwen Image Edit. 3 seconds, $0.03. https://t.co/zt7sjYCJbc
@PrunaAI do the impossible. > Make the text 3D and floating on a city street
The much anticipated image editing model from Qwen is now on Replicate https://t.co/oVMH223VtY Edit images in just 3 seconds, for $0.03 per image. We've worked with Pruna to deliver you the fastest way to use Qwen Image Edit.
8
33
303
Frame Arena is a neat little tool we use for comparing and evaluating the output of text-to-video generation models.
Workflow:
• 7 Quality Metrics: SSIM, PSNR, MSE, pHash, Color Histogram, Sharpness + Overall Quality
• Individual
0
2
15
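The metrics listed above are all standard; a rough sketch of computing them for a pair of frames is below. This is not Frame Arena's code, just common open-source implementations (scikit-image, imagehash, OpenCV) of the same quantities.

```python
import cv2
import numpy as np
import imagehash
from PIL import Image
from skimage.metrics import (
    structural_similarity,
    peak_signal_noise_ratio,
    mean_squared_error,
)

def frame_metrics(frame_a: np.ndarray, frame_b: np.ndarray) -> dict:
    """Compare two RGB frames (uint8 HxWx3 arrays) from different model outputs."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_RGB2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_RGB2GRAY)

    # Colour histograms, compared by correlation (1.0 = identical distribution).
    hist_a = cv2.calcHist([frame_a], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
    hist_b = cv2.calcHist([frame_b], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
    cv2.normalize(hist_a, hist_a)
    cv2.normalize(hist_b, hist_b)

    return {
        "ssim": structural_similarity(frame_a, frame_b, channel_axis=-1, data_range=255),
        "psnr": peak_signal_noise_ratio(frame_a, frame_b, data_range=255),
        "mse": mean_squared_error(frame_a, frame_b),
        # Hamming distance between perceptual hashes (0 = visually identical).
        "phash_dist": imagehash.phash(Image.fromarray(frame_a))
        - imagehash.phash(Image.fromarray(frame_b)),
        "hist_correlation": cv2.compareHist(hist_a, hist_b, cv2.HISTCMP_CORREL),
        # Variance of the Laplacian as a simple per-frame sharpness proxy.
        "sharpness_a": cv2.Laplacian(gray_a, cv2.CV_64F).var(),
        "sharpness_b": cv2.Laplacian(gray_b, cv2.CV_64F).var(),
    }
```

Aggregating these per-frame numbers over a clip, plus some overall quality score, gives the kind of side-by-side comparison the workflow describes.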
Qwen-Image is out now on @replicate. It takes 12s to generate state-of-the-art images.
Qwen-Image optimised and sped up by the amazing Pruna folks: https://t.co/XlmIsxIgMw Fast and high quality.
0
3
29
We got this. Who wants a super fast @Alibaba_Qwen Image model?
Meet Qwen-Image: a 20B MMDiT model for next-gen text-to-image generation. Especially strong at creating stunning graphic posters with native text. Now open-source.
Key Highlights:
• SOTA text rendering, rivals GPT-4o in English, best-in-class for Chinese
• In-pixel
1
2
8
i2v is also just $0.06 https://t.co/fNA4vEqwYe
replicate.com
Fast Wan 2.2 image to video model. 480p and 720p video outputs. Fast, cheap image to video model on Replicate.
Here's another fast and very affordable Wan 2.2 endpoint from us and Pruna for you: image-to-video, 480p, $0.012 per second (that's $0.06 per video) https://t.co/JGUNlisEYS
3
2
35
We've worked with Pruna AI again to make Wan 2.2 faster and cheaper. 33 seconds, $0.06 per video; this is the large 14B model at 480p. https://t.co/eE3wWPMwLF Wan more?
15
19
188
Replicate and @PrunaAI join forces to bring Wan2.2 to life! A major upgrade in cinematic quality, smoother movements, and instruction following https://t.co/uPK1x9nboC
3
11
52
The folks at @PrunaAI have shipped a very fast version of HiDream's E1.1 image editing model. It's like Kontext and co. It's open source, and #4 on the image-editing AA leaderboard. https://t.co/oC3veTy4KS ~14 seconds to run, $0.02 per image
5
13
141
We're pleased to work with Pruna to bring you a new and fast image model. It can generate 2 megapixel images in 3.4 seconds on a single H100 https://t.co/JgW12u1vmh This model is based on the original Wan 2.1 video model, which Pruna have compressed, optimised and pruned.
Introducing Wan Image: the fastest endpoint for generating beautiful 2K images! From Wan Video, we built Wan Image, which generates stunning 2K images in just 3.4 seconds on a single H100. Try it on @replicate: https://t.co/K3UGI1mku4 Read our blog for details, examples,
3
17
77
Wan 2.1 might be the best open-source text-to-image model, and everyone is sleeping on it. The one drawback is Wan's slow inference speed, so we applied a series of optimizations to bring it down to just 3s for 2 MP images. You can try it on @replicate: https://t.co/eu2bsQPVcD
2
4
18
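The two Wan posts above both come down to the same workflow: take a diffusers pipeline and compress it with pruna. A minimal sketch with the open-source pruna package is below; the stand-in checkpoint and the specific SmashConfig entries are illustrative assumptions (available algorithms depend on the pruna version), not the actual recipe behind the Wan Image endpoint.

```python
import torch
from diffusers import DiffusionPipeline
from pruna import SmashConfig, smash

# Any diffusers pipeline works; a small SD checkpoint stands in for Wan here.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

smash_config = SmashConfig()
smash_config["compiler"] = "torch_compile"  # illustrative choices; which compilers and
smash_config["cacher"] = "deepcache"        # cachers exist depends on the pruna version

# Apply the configured compression/acceleration steps to the pipeline.
smashed_pipe = smash(model=pipe, smash_config=smash_config)
image = smashed_pipe("a cityscape at dusk, poster style").images[0]
```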
How private is DP-SGD for self-supervised training on sequences? Our #ICML2025 spotlight shows that it can be very private, if you parameterize it right! https://t.co/beIc1gt90L
#icml Joint work w/ M. Dalirrooyfard, J. Guzelkabaagac, A. Schneider, Y. Nevmyvaka, @guennemann 1/6
arxiv.org
Many forms of sensitive data, such as web traffic, mobility data, or hospital occupancy, are inherently sequential. The standard method for training machine learning models while ensuring privacy...
2
14
19
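For readers unfamiliar with DP-SGD: it clips each example's gradient and adds Gaussian noise before the optimizer step. A minimal sketch with Opacus on a toy next-step prediction task is below; it only shows what DP-SGD training looks like, not the paper's privacy analysis for sequence models, and the data, model size, and noise settings are arbitrary.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Toy self-supervised objective: predict the final step of a sequence from the first 8.
seqs = torch.randn(512, 9)
windows, targets = seqs[:, :8], seqs[:, 8:]
loader = DataLoader(TensorDataset(windows, targets), batch_size=64)

model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)

# Wrap model/optimizer/loader so gradients are clipped per sample and noised.
privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.0,  # std of the Gaussian noise relative to the clip norm
    max_grad_norm=1.0,     # per-sample gradient clipping bound
)

loss_fn = nn.MSELoss()
for x, y in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

print(f"epsilon after one epoch: {privacy_engine.get_epsilon(delta=1e-5):.2f}")
```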
In SF next week. Optimizing AI models & handing out croissants to @ycombinator startups haunted by Soham. DM before the croissants vanish @PrunaAI
0
4
6
We made the new open-weights FLUX.1 Kontext [dev] model 5x faster on an H100 out of the box with pruna! Don't believe us? Check it out here:
replicate.com
Fast endpoint for Flux Kontext, optimized with pruna framework
Open-weights @bfl_ml FLUX.1 Kontext [dev] is now open-source! It lets you perform image-to-image generation with state-of-the-art quality :) However, it takes ~14.4 seconds for each generation on one H100. When we learned about this, we were in our offsite to chill together
0
3
6
Open-weights @bfl_ml FLUX.1 Kontext [dev] is now open-source! It lets you perform image-to-image generation with state-of-the-art quality :) However, it takes ~14.4 seconds for each generation on one H100. When we learned about this, we were in our offsite to chill together
5
15
94
Thrilled to announce that we just presented "MAGNet: Motif-Agnostic Generation of Molecules from Scaffolds" at #ICLR2025 @j_m_sommer @Pseudomanifold @fabian_theis @guennemann For those who couldn't make it to our spotlight: https://t.co/lZjDLFCYcW
1
15
59