#stablediffusion
3D animation from u/Healthy_Ad9884 on Reddit using the
@deforum_art
notebook. (Posted with permission)
The smooth rotation around the Y-Axis works so well here to create a coherent scene.
#AIart
Simple Stable Diffusion to Blender:
1. Use Automatic1111's depth map extension () to create image (img2img or txt2img) & depth map.
2. Import to Blender using Depth Map Importer
3. That's it - play around!
#stablediffusion
#aiart
Announcement:
@Adobe
has developed a set of generative AI tools called Firefly.
These include: text to image, text effects, and recolor vectors.
Watch the video for a look at how it works and try it here:
#AdobePartner
#AdobeFirefly
#AIArt
It's so quick and easy to prototype ideas with the help of AI:
1. Generate img with txt2img (
#midjourney
here)
2. Generate depth map using MiDaS
3. Use img and depth map to create displacement map
#aiart
#animation
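Step 3 above can be sketched in a few lines of Python. This is a minimal, hypothetical version (plain nested lists instead of a real image library) that just rescales raw MiDaS-style depth values into the 0-255 range a displacement map expects:

```python
def depth_to_displacement(depth, lo=0, hi=255):
    """Normalise raw depth values into a 0-255 displacement map.

    `depth` is a 2D list of raw depth predictions (e.g. from MiDaS).
    MiDaS outputs inverse relative depth, so nearer points get larger
    values -- which is what a displacement map expects anyway.
    """
    flat = [v for row in depth for v in row]
    d_min, d_max = min(flat), max(flat)
    span = (d_max - d_min) or 1  # avoid division by zero on flat maps
    return [
        [round(lo + (v - d_min) / span * (hi - lo)) for v in row]
        for row in depth
    ]

# A tiny 2x2 "depth map" for illustration:
disp = depth_to_displacement([[0.2, 0.8], [0.5, 1.0]])
print(disp)  # [[0, 191], [96, 255]]
```

In practice you'd run this over the full depth prediction and save it as a grayscale PNG before feeding it to the displacement step.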
Another artist whose style works really well with dioramas is Van Gogh. The vibrant colours combine with the layered effect to create something pretty magical.
Cool use of video inits using
#DeforumDiffusion
/
#stablediffusion
from u/EsdricoXD on Reddit.
prompt: A film still of lalaland, artwork by studio ghibli, makoto shinkai, pixv
sampler: euler ancestral
Steps: 45
scale: 14
strength: 0.55
Coherent and really effective 🔥
"You were the Chosen One! It was said that you would destroy the Sith, not join them!"
Top: batch
#img2img
*Detailed*
Bottom:
#EbSynth
using 12 keyframes *Smooth*
It's a dark scene so I needed to tweak canny thresholds.
#aiart
#stablediffusion
#ControlNet
Another test combining
#stablediffusion
#depth2img
with
#ebsynth
This time a cel-shaded animation from a video of me (Not usually so croaky and full of cold!).
Background masked out. Need to see if better source using DSLR and lighting improves quality of end animation.
#aiart
New feature in
#AUTOMATIC1111
-
#gif2gif
1. Update A1111
2. Enable gif2gif in Extensions
3. Select from script dropdown in img2img tab
4. Example below is using
#pix2pix
checkpoint
5. "Make him Gaston with a blue shirt" + negative prompts
#aiart
#stablediffusion
Linking
#stablediffusion
animation to audio:
1. Generate music with
@mubertapp
2. Use audio keyframe generator to control zoom speed
3. "Borrow" prompt from
4. Generate animation using
@deforum_art
notebook
Effect too subtle?
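Step 2 above can be sketched like this - a hypothetical helper (the name is mine, not the actual keyframe generator) that turns per-frame audio amplitudes into the keyframe string a Deforum zoom schedule accepts:

```python
def zoom_schedule(amplitudes, base=1.0, gain=0.05):
    """Map per-frame audio amplitudes (0..1) to a Deforum-style
    keyframe string such as "0:(1.000), 1:(1.050), ...".
    Louder frames get a faster zoom.
    """
    frames = [f"{i}:({base + gain * a:.3f})" for i, a in enumerate(amplitudes)]
    return ", ".join(frames)

print(zoom_schedule([0.0, 1.0, 0.5]))
# 0:(1.000), 1:(1.050), 2:(1.025)
```

Raising `gain` would make the effect less subtle.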
Big News: The first Open-Source
#txt2video
1.7 billion parameter diffusion model has been released and you can play with it now at HuggingFace:
More examples here and you can git clone from here too (if you have the VRAM).
#aiart
Seeing how rough a drawing
#stablediffusion
#img2img
can make sense of. I used a very basic 2-minute charcoal sketch as init. Looks surprisingly similar to the model (apart from hair). Prompt: A black and white photo of a young woman, studio lighting, realistic, Ilford HP5 400
Pushing
#ebsynth
to show how transformative the results can be with
#stablediffusion
Top left is the driving video and the others have been transformed with a single keyframe in a couple of minutes.
Anyone interested in a short breakdown on how to do it?
#AIart
"The Anomaly of Kepler-61b"
Entirely generated with AI.
Script:
#ChatGPT4
txt2video:
#Gen2
txt2speech:
@elevenlabsio
I'm loving being able to spend a couple of hours building up a story with the help of AI. It feels like a taste of the future and these tools will only get better.…
3D movement around a figure with
@deforum_art
using
#stablediffusion
.
I needed a recognisable face for consistency - with only a description in prompt, it didn't look like the same person at different angles. Still some distortion - need to tweak noise & strength.
#AIart
"People think dreams aren't real just because they aren't made of matter, of particles. Dreams are real. But they are made of viewpoints, of images, of memories and puns and lost hopes." -
@neilhimself
#midjourney
#AIart
#aiartcommunity
@Morbidful
Yeah - Charlize Theron got the Oscar for portraying Aileen Wuornos in Monster but I found the documentary from Nick Broomfield far more interesting.
Does anyone else sometimes feel this is too easy? Prompt: "Close-up photography of a face" - The results from
#Midjourney
are obviously impressive but I don't feel a connection with it.
#V4
#aiart
#generativeAI
@JoshLekach
TBH I don't see what's wrong with this.
It's far worse to be shaming them for (presumably) wearing too much makeup and too few clothes than to be a young person on a night out having fun.
Is that you with your top off in your profile pic? Does it only count for women?
I set this up with
@deforum_art
/
#stablediffusion
to generate overnight. The strength_schedule is mapped to the audio so the fox changes to the beat.
There are a couple of things I'd change but I love the effect🦊🦊🦊
Trimmed to 2mins for Twitter.
Music made with
@mubertapp
I've overlooked
#AnimateDiff
() but am really impressed by what's happening with it recently.
I came across some cool examples:
1. an ocean and lighthouse animation from
@mhgenerate
#aiart
#aianimation
A quick demo showing the difference between depth2img (1st animation) and img2img (2nd animation).
This is the girl with a pearl earring with the prompt "bronze sculpture of a girl" at 10 different noise steps.
Will post x/y plot below.
#stablediffusion
#aiart
As
@sureailabs
mentioned, reusing same seed with
#stablediffusion
is so powerful. You can drill down exact effect of changing:
1. Artists.
2. Phrasing, e.g. "by" to "in style of".
3. Order of modifiers.
4. Punctuation: inverted commas etc.
Time for some deep analysis.
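That analysis can be set up mechanically - a hypothetical sketch of a job list where the seed is pinned and only one prompt variable changes per axis:

```python
from itertools import product

SEED = 1234567890  # fixed seed: the only thing that varies is the prompt

# Illustrative axes -- swap in whichever artists/phrasings you're testing.
artists = ["Van Gogh", "Alphonse Mucha"]
phrasings = ["by {a}", "in style of {a}"]

jobs = [
    {"prompt": f"portrait of a woman, {p.format(a=a)}", "seed": SEED}
    for a, p in product(artists, phrasings)
]
for job in jobs:
    print(job["seed"], job["prompt"])
```

Feeding each job to the same sampler and step count isolates the effect of the wording alone.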
1st Animation test with depth2img. Pretty interesting - seems to be less flickery than standard img2img/vid2vid. Very quick. Each 50 frame animation (768x448) took about 1 minute 52.
I expect this to be really good once we get to grips with it.
#stablediffusion
#AIart
Cool - I've tweaked the audio reactivity with this
@deforum_art
/
#stablediffusion
animation until it's right.
I isolated the drums and tweaked the audio-keyframe function. The cat should only change when the drum beats. Zoom also mapped to audio. 🐱🐱🐱
Music -
@mubertapp
#aiart
Yes, you need to work for it with
#StableDiffusion2
but you can get an incredible level of detail.
This is a raw output at 768x768. No upscaling or facial restoration.
I can imagine this getting insanely good with a bit of tweaking/finetuning.
#AIart
Raw
#StableDiffusion2
output from text prompt in (un)surprisingly few steps by
@masslevel
Really do recommend folk take some time to explore the model - even more versatile than before, with some exciting things in the pipeline.
Great - InstructPix2Pix is working with
#Automatic1111
Just install the "instruct-pix2pix" extension, dl the 22000.ckpt and you're there.
1. Original
2. Make it autumn
3. Make her a ghost
4. Make it surreal
#stablediffusion
#aiart
"Declowning the Joker".
Not perfect but I like where this is heading. I needed to use depth2img as canny and HED picked up the makeup.
A fun experiment.
#aiart
#stablediffusion
#ControlNet
#ebsynth
I'm absolutely blown away by
@runwayml
's
#Gen2
using image input. The movement is so natural.
Using it with
@midjourney
is a winning combination.
If you want your video to stay true to your image, don't use a text prompt. (Thanks to
@Uncanny_Harry
and
@Merzmensch
for the tip!).…
This morning I've been playing around with the
@huggingface
's
#StableDiffusion
Textual Inversion Concepts Library working out how best to train my own concepts.
Would be great to force consistency with image generation to improve storytelling.
#AIart
A few people have been asking how I've been using
#Dreambooth
. I've been using this service:
Really simple interface - $3 to finetune a model which is pretty reasonable.
Can generate images on site and ckpt is available to DL to use elsewhere.
A couple more
#stablediffusion
1.5 comparisons.
Left: No CLIP
Right: With CLIP
All other settings the same.
Generally (there are exceptions) CLIP guided images seem to have fewer anatomy issues and more accurate colours.
#aiart
-Damien Hirst doesn't paint most of his work and hires assistant spot painters.
-Jeff Koons admits to never crafting his own pieces.
-Even Michelangelo had a team of assistants working on the Sistine Chapel.
-
#AIart
is generated using AI.
What makes an artist? Where's the line?
Another example of Composable Diffusion - (
@GanWeaving
was asking).
It's possible to use two styles in the same generated image without them blending into each other. "Photograph of Walter White AND Van Gogh Landscape"
I don't think this is possible through other methods.
Pure AI generation - no init images or video. We're getting closer to coherent animations.
It's interesting to note that prompts that worked well in SD 1.5 seem to work well with
#modelscope
like this Art Deco/Mucha/Leyendecker one
#txt2video
#AIart
I'm really enjoying experimenting with
@KaiberAI
's new
#vid2vid
tool (in beta)
Really simple to use - all it needs is for you to upload a video and apply a style using a prompt.
(Source video from
@pexels
)
#AIart
@TomLikesRobots
Interesting - your outputs look very high quality and artistic, they make me want to try it as well😀 I thought Text2video might be too early still, but now this changes my mind😀👍👍
@ilyasut
Yeah - I guess it's all in the hands of Microsoft now. They have the compute, the weights and it looks like they'll be getting some new staff.
I've been testing
#stablediffusion
v.2 and I've been having fun with AI movie stills.
These tests were with Euler A and I think this model really benefits from using higher steps. Will keep on testing.
"Cinematic still of old man in a Victorian apothecary etc"
#aiart
A building animation using
#stablediffusion
depth2img. I dropped denoise to 50% for this one but I don't think painterly style helps with jitter. Form coherent but style is quite chaotic. Maybe less movement?
Thanks to Florian Delée on Pexels for source video.
#aiart
#aianimation
“I ate his liver with some fava beans and a nice chianti.”
It took a bit of time to match up the mouth and eye movements and needed a few keyframes but I think the videos blend together pretty well.
0.6 Denoise.
#aiart
#stablediffusion
#EbSynth
#controlnet
Another experiment with
#ebsynth
and
#controlnet
#img2img
. 5 keys. I quite like the eerie result.
Interesting to note that the closer you get to high detail photorealism with the keyframes, the more difficult it becomes to blend the videos.
Getting closer!
#aiart
#stablediffusion
@stats_feed
@sama
He stated during hearings that he doesn't hold any shares. I think he's saying he's going to kick off, and there's nothing they can do to stop him.
I've been playing with checkpoints and I love the effect of using a cartoon style checkpoint but using "cartoon" as a negative prompt.
Top: SD 1.5
Middle: Disney
Bottom: Arcane
Right: "Cartoon" neg prompt
#stablediffusion
#aiart
The 1.5 inpainting checkpoint for
#stablediffusion
is almost magic.
Been testing with some old Victorian Ghost Story concepts. Images lacked details in faces etc but because you can inpaint areas at full res with
#automatic1111
it's so easy to correct now.
1.Before
2.After
#aiart
I would love to watch a remake of "Fellowship of the Ring" with
@SnoopDogg
as Saruman (or even Gandalf).
I just need to work out how to transform voices now.
#aiart
#stablediffusion
Columbo: Animated. Test2.
Add
#EbSynth
to the mix and his mouth moves more convincingly. The blends between keys aren't perfect but I like it.
Maybe need to mix
#img2img
for body and
#EbSynth
for subtle details like speech.
Again
#ControlNet
for keyframes
#stablediffusion
#aiart
Basic Overview of EbSynth.
Input=driving video & keyframe(s).
Output=animation.
EbSynth uses pixels from keyframe and motion from video frames.
Bobcat(?) vid I used is sample that comes with EbSynth and here I used a checkerboard key for frame 000.
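For multi-key runs, spacing the keys evenly over the clip is a reasonable default, since EbSynth blends each frame from its nearest keyframes. A minimal sketch (the helper name is hypothetical):

```python
def keyframe_indices(n_frames, n_keys):
    """Pick evenly spaced frame numbers to render as keyframes.

    EbSynth propagates the stylised pixels of each keyframe across the
    surrounding video frames, so spreading keys evenly keeps the blend
    distance between any frame and its nearest key roughly equal.
    """
    if n_keys <= 1:
        return [0]
    step = (n_frames - 1) / (n_keys - 1)
    return [round(i * step) for i in range(n_keys)]

print(keyframe_indices(100, 5))  # [0, 25, 50, 74, 99]
```

Scenes with fast motion or lighting changes usually want extra keys dropped in by hand on top of this.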
Nice work
@DiscoverStabDif
- I spotted your post on Reddit earlier. A pretty coherent 661.60512 Megapixel image. (Not quite Gigapixel yet)
I believe this was created using StableDiffusion and passed through the upscaler several times.
All we need now is a Where's Wally (Waldo in…
Cool. depth2img model is now working with Automatic1111 and on first glance works really well.
Update code and then instructions to get it installed here:
I look forward to using it for vid2vid to see how well it does.
#StableDiffusion2
#aiart
More testing with
#stablediffusion
's
#img2img
A pencil drawing from an old figure drawing session (stiffer than usual?). Some of the features are different but it's fixed some of the faults. It's hard to apply colour that's not there originally. Maybe need to PS it before?
Big personal news.
You may have noticed I haven't been posting much recently and this is due to settling into a new job. I've nearly completed my first month as an AI Artist with
@Metaphysic_ai
working on the upcoming Tom Hanks/Robert Zemeckis movie 🎥"Here". It's a hugely…
"Art outside of the picture frame." An experiment with movement.
There's no such thing as failure - just learning experiences. This isn't what I was going for but I'll take it.
#stablediffusion
@deforum_art
#aiart
Quick style test for a personal project I'm working on.
#metahuman
animation for init video. Dreambooth (for Ghibli style) and Textual Inversion (for likeness - to try to reduce jitter).
#stablediffusion
#AIart
I've been playing with
@runwayml
's
#gen2
today and I'm very impressed at the level of detail and coherence. Really cool.
This video is pure
#txt2video
but I've noticed that using an initial image can create some stunning results. I'll share some more over the weekend.
I haven't…
New feature in
#stablediffusion
is the "-n" command which lets you generate a batch of images. It provides seeds for images so once you've chosen the best you can reuse the seed and tweak the prompt. Here's a Pre-Raphaelite Nicole Kidman.
#AIart
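Conceptually the batch flag works like this - a hypothetical sketch (not the actual script's code) of drawing one reproducible seed per image so any single result can be regenerated later:

```python
import random

def batch_seeds(n, master_seed=None):
    """Draw one seed per image in a batch, reproducibly.

    Recording each image's seed is what lets you pick the best result,
    reuse its seed, and tweak only the prompt.
    """
    rng = random.Random(master_seed)
    return [rng.randrange(2**32) for _ in range(n)]

seeds = batch_seeds(4, master_seed=42)
print(seeds)  # same list every run with the same master seed
```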
"Growing Older"
From 5 to 80 in 13 seconds: an experiment reusing seeds in
#stablediffusion
Prompt: "close up face female portrait XX years old..." with XX iterating through the years.
#AIart
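The prompt series for this can be generated mechanically - a minimal sketch (the template text here is illustrative, not the exact prompt used):

```python
# One prompt per age; everything else (seed, steps, sampler) is
# held constant so only the age words change between frames.
TEMPLATE = "close up face female portrait {age} years old, studio lighting"

prompts = [TEMPLATE.format(age=age) for age in range(5, 81, 5)]
print(len(prompts))   # 16 prompts, ages 5 to 80 in steps of 5
print(prompts[0])
print(prompts[-1])
```

Reusing one seed across the list is what keeps the face recognisably the "same person" as the age changes.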
Hah - note to self. Make sure subject is evenly lit when selecting a video for
#EbSynth
. Other than that the stone sculpture effect is pretty cool.
No such thing as failure, just learning experiences.
#stablediffusion
#depth2img
Aim: Improve likeness of David Attenborough in AI generated video. No Image or Video inputs.
1. Generate
#Gen2
video with prompt: "a nature tv show with David Attenborough being interviewed". Not a bad video but it looks nothing like him.
2. Use
#img2img
with controlnet. Prompt:…
If you haven't used
#midjourney
for a while open it up and give it a shot. V4 was released in Alpha this morning and it's pretty amazing. I used an old Ghibli BOTW prompt and the improvement over v3 is pretty mindblowing.
#AIart
Checklist of things to explore for the weekend:
1.Textual Inversion
2.Depth Mapping
3.
#DeforumDiffusion
V0.3
4. Krita
#stablediffusion
plugin
Am I missing anything?
Last night I was playing around with masking with
#EbSynth
. Rough low-res demo and it needs more keys, but the technique works.
TopLeft - Original
TopRight - Mask using
@runwayml
's depth2video with contrast increased
BottomLeft - No Mask
BottomRight - Mask applied
#stablediffusion
#aiart
Another really interesting tool in
#automatic1111
is composable diffusion.
1. "Scarlett Johansson and Natalie Portman"
2. "Scarlett Johansson AND Natalie Portman"
The AND operator treats each section as a separate prompt and combines them. Cool effects.
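A rough sketch of what the split looks like (not AUTOMATIC1111's actual parser, which also supports per-part weights) - the operator is case-sensitive, so only uppercase AND separates subprompts:

```python
def split_subprompts(prompt):
    """Split a prompt on the uppercase AND operator, the way
    Composable Diffusion treats each part as a separate prompt
    whose noise predictions are combined.

    Lowercase "and" is left alone -- it's just an ordinary word.
    """
    return [part.strip() for part in prompt.split(" AND ")]

print(split_subprompts("Scarlett Johansson AND Natalie Portman"))
# ['Scarlett Johansson', 'Natalie Portman']
print(split_subprompts("Scarlett Johansson and Natalie Portman"))
# one subprompt: the whole sentence
```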
Another experiment with
#stablediffusion
's
#img2img
It even managed to turn my rough charcoal sketch of a goat into a photo. (He did lose his beard)
#aiart
Another ageing animation with
#stablediffusion
but this time I've tweaked the process.
-Changing a word has a more subtle effect in long prompts than short ones.
-Using descriptions such as old, middle aged, toddler usually works better than stating specific ages like 10, 40, 80.
Audio reactive waves created using
@deforum_art
and
#stablediffusion
I think it's too sensitive? Using this function "0.9 - (x^2)" with x as raw volume.
Default strength_schedule is 0.75 so need to keep values closer to that.
Music from
@mubertapp
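The fix suggested above - keeping values nearer the 0.75 default - can be sketched as a clamped version of the same mapping (the clamp thresholds here are my guesses, not tested settings):

```python
def strength_from_volume(x, floor=0.2, ceil=0.95):
    """The 0.9 - x^2 mapping from the post, clamped to a usable range
    around Deforum's default strength of 0.75.

    x is raw volume in 0..1; louder audio -> lower strength -> a
    bigger visual change on the next frame.
    """
    return max(floor, min(ceil, 0.9 - x ** 2))

print(strength_from_volume(0.0))  # 0.9 at silence
print(strength_from_volume(1.0))  # clamped at the floor, 0.2
```

Tightening `floor` and `ceil` toward 0.75 is one way to tame the reactivity without changing the curve itself.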
I've posted a few photo style images created with
#StableDiffusion2
but here are some painterly ones. Yep, old prompts don't really work but it seems pretty versatile.
#AIart
A Star Wars Marketplace
Okay. Pika 1.0 is 🔥- definitely going to have some fun with this. For my 1st video I tried something that wouldn't be in the dataset.
"A cute tardigrade in the style of a 3d animated movie".
#pika
#aivideo
I was asked if depth2img could only change materials. So much more powerful.
Imagine you have a photo and want to change the style and feel without changing form and structure like img2img. Easy with depth2img.
Prompt: "inside a volcano"
(Photo: Kangyu Hu)
#stablediffusion
#aiart
Jumping on the diorama train this morning. I'm committed to another project at the moment but would 100% watch a Giacometti animation in this style. Maybe I'll have time to build a scene out in Blender at some point.
#aiart
#diorama
#midjourney
#V4
Another
#modelscope
#txt2video
style test.
A simple prompt with a strong style can produce consistently interesting animations and can overpower the ShutterStock logo.
Prompt: A painting by Van Gogh
Negative Prompt: text, watermark, copyright, blurry
Steps: 60
CFG: 12…
It's not quite Dreambooth but pretty fun. Using my photo as an image prompt in
#midjourney
V4 here I am as "gta v, cover art, artstation"
Has the vague features (big nose, furrowed brow) but not really me.
#AIart
Midjourney and Pika 1.0 make a pretty great stack for creating ai video.
I'd tried animating this image with other img2vid models before but ran into issues with disappearing heads and extreme morphing. Pika handles it really well.
#pika
#aivideo
#midjourney
Steps to using textual inversion with
#automatic1111
:
1. Train your concept on colab
2. Download your .bin (or anyone else's from )
3. Change filename to conceptX.pt
4. Move to embeddings folder
5. Use conceptX in your prompt
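Steps 3-4 can be scripted. A minimal sketch (the function name and layout are mine; the webui simply loads whatever embeddings it finds in its `embeddings` folder):

```python
import shutil
from pathlib import Path

def install_embedding(bin_path, embeddings_dir, concept_name):
    """Copy a downloaded .bin embedding into the webui embeddings
    folder under a .pt name.

    The concept name doubles as the trigger word used in prompts,
    so pick something that won't collide with normal vocabulary.
    """
    embeddings_dir = Path(embeddings_dir)
    embeddings_dir.mkdir(parents=True, exist_ok=True)
    target = embeddings_dir / f"{concept_name}.pt"
    shutil.copyfile(bin_path, target)
    return target
```

After a restart (or embedding reload), writing conceptX anywhere in a prompt triggers the trained concept.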
Credit to u/LadyQuacklin on Reddit who created the depth map importer with the help of GPT3.
There are also other ways to control 2D -> 3D. I've forked the 3D depth inpainting repo () to control camera movement.
Lol. When you reuse your config from another project and forget to remove the init_image....
#StableDiffusion
#DeforumDiffusion
Not the look I was going for.
Interesting experiment with Dreambooth finetuning and animation with some camera movement. No inits here.
I brought the ckpt file into
@deforum
notebook
Interestingly I had to drop the cfg_scale to 4 as anything higher would overpower the whole scene.
#stablediffusion
#AIart
Interesting.
#stablediffusion
2.1 coming this week. I guess this is what Emad meant when he said that the release rate would speed up now that the legal stuff is out of the way. Good to see.
Experiments with Textual Inversion. I wanted to use a new character and Metahuman allows you to easily create one and render with different environments, angles and expressions.
Very happy with these - currently training Dreambooth to compare.
#stablediffusion
#aiart
Following up on this I tested again with
#stablediffusion
and was able to transition from human to werewolf by reducing the parentheses incrementally. Bit of a jump at last stage but maybe there are other hacks to be found?
The rest of the prompt and seed remain the same.
Weighting Hack for
#stablediffusion
I've been experimenting with punctuation and the more brackets you enclose a word with, the less weight it seems to have.
Prompt: "close up face female portrait, Vampire," with parentheses around vampire.
Anyone tried this?
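A quick way to generate the test series - a hypothetical helper that wraps the target word in an increasing number of parentheses. (Note this reflects the behaviour described here in the original script; some UIs, like AUTOMATIC1111, instead interpret parentheses as increasing attention.)

```python
def bracketed_variants(prompt, word, max_depth=4):
    """Return copies of `prompt` with `word` wrapped in 0..max_depth
    layers of parentheses, for side-by-side comparison at a fixed seed."""
    variants = []
    for depth in range(max_depth + 1):
        wrapped = "(" * depth + word + ")" * depth
        variants.append(prompt.replace(word, wrapped))
    return variants

for v in bracketed_variants("close up face female portrait, Vampire,", "Vampire", 3):
    print(v)
# ..., Vampire, / ..., (Vampire), / ..., ((Vampire)), / ..., (((Vampire))),
```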
"Kandinsky". So many exciting things happening in the
#stablediffusion
ecosystem. This is a very simple 2D animation test made with
#DeforumDiffusion
So quick to generate - each frame took only a few seconds to render at 512x512 on a P100.
Just had an idea. Here's another depth2image animation using the same source and yep, it does seem to work pretty well with a familiar face.
#aiart
#aianimation
#stablediffusion
I've been experimenting with negative prompts as recommended by
@CoffeeVectors
and I'm amazed at the impact it can have.
Same prompt and settings but images on right also use negative prompts.
Let me know if you'd like any more info on set up.
#aiart
#stablediffusion