TomLikesRobots🤖

@TomLikesRobots

Followers: 32,575 · Following: 5,359 · Media: 1,349 · Statuses: 16,303

AI Artist at Metaphysic working with AI and VFX. All views my own. Experienced Web Dev and Artist. Early explorer of Artificial Creativity.

Edinburgh, Scotland
Joined July 2013
Pinned Tweet
@TomLikesRobots
TomLikesRobots🤖
2 months
First play with @pika_labs new sound effects. They seem to match the video well and add so much atmosphere. Really cool feature.
8
7
48
@TomLikesRobots
TomLikesRobots🤖
11 years
Check this out - My job title's Dynamic Integration Ninja! Get yours at: http://t.co/0JbBo3n6Xx via @Weirdopedia_Org
1
102
0
@TomLikesRobots
TomLikesRobots🤖
2 years
A #stablediffusion 3D animation from u/Healthy_Ad9884 on Reddit using the @deforum_art notebook. (Posted with permission) The smooth rotation around the Y-Axis works so well here to create a coherent scene. #AIart
53
415
3K
@TomLikesRobots
TomLikesRobots🤖
1 year
Simple Stable Diffusion to Blender: 1. Use Automatic1111's depth map extension () to create image (img2img or txt2img) & depth map. 2. Import to Blender using Depth Map Importer 3. That's it - play around! #stablediffusion #aiart
Tweet media one
Tweet media two
34
269
2K
@TomLikesRobots
TomLikesRobots🤖
1 year
Announcement: @Adobe has developed a set of generative AI tools called Firefly. These include: text to image, text effects, and recolor vectors. Watch the video for a look at how it works and try it here: #AdobePartner #AdobeFirefly #AIArt
63
204
2K
@TomLikesRobots
TomLikesRobots🤖
1 year
It's so quick and easy to prototype ideas with the help of AI: 1. Generate img with txt2img ( #midjourney here) 2. Generate depth map using MiDaS 3. Use img and depth map to create displacement map #aiart #animation
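Step 2 of this workflow (depth estimation with MiDaS) can be sketched with the public torch.hub entry points. A sketch, not the author's exact script: the hub IDs ("intel-isl/MiDaS", "MiDaS_small") are the published ones, but the image path and the 0-1 normalization for use as a displacement map are assumptions.

```python
# Sketch of the MiDaS depth-map step: load the small MiDaS model via
# torch.hub, run it on an image, and normalize the result to 0-1 so it
# can drive a displacement map. Heavy imports are kept inside the
# function so the pure helper above it works without torch installed.

def normalize_depth(values):
    """Scale a flat list of depth values into the 0-1 range (pure Python)."""
    lo, hi = min(values), max(values)
    span = hi - lo or 1.0  # avoid division by zero on constant input
    return [(v - lo) / span for v in values]

def estimate_depth(image_path: str):
    import torch
    import cv2
    midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
    transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform
    midas.eval()
    img = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        depth = midas(transform(img)).squeeze().cpu().numpy()
    return normalize_depth(depth.flatten().tolist())
```

The normalized values can then be saved as a grayscale image and plugged into a displacement modifier.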
@TomLikesRobots
TomLikesRobots🤖
1 year
Another artist whose style works really well with dioramas is Van Gogh. The vibrant colours combine with the layered effect to create something pretty magical.
Tweet media one
Tweet media two
4
23
186
43
242
1K
@TomLikesRobots
TomLikesRobots🤖
2 years
Cool use of video inits with #DeforumDiffusion / #stablediffusion from u/EsdricoXD on Reddit. Prompt: "A film still of lalaland, artwork by studio ghibli, makoto shinkai, pixiv". Sampler: Euler ancestral. Steps: 45. Scale: 14. Strength: 0.55. Coherent and really effective 🔥
21
229
1K
@TomLikesRobots
TomLikesRobots🤖
7 months
@CultureCrave Lol. It's one ounce of water per pound per day. One glass of water for each pound would kill you.
Tweet media one
14
30
1K
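The correction in this reply checks out with quick arithmetic. A sketch assuming a 150 lb person and an 8 oz glass (illustrative numbers, not from the tweet):

```python
# "One ounce of water per pound per day" vs. the meme's
# "one glass (8 oz) per pound per day", in gallons.

OZ_PER_GALLON = 128  # US fluid ounces per gallon

def daily_water_oz(weight_lb: float, oz_per_lb: float) -> float:
    """Total daily intake in fluid ounces for a given rule of thumb."""
    return weight_lb * oz_per_lb

sane = daily_water_oz(150, 1)  # 150 oz, a bit over a gallon: plausible guidance
meme = daily_water_oz(150, 8)  # 1200 oz, over nine gallons: water-intoxication territory

print(sane / OZ_PER_GALLON)  # ~1.17 gallons
print(meme / OZ_PER_GALLON)  # ~9.4 gallons
```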
@TomLikesRobots
TomLikesRobots🤖
1 year
"You were the Chosen One! It was said that you would destroy the Sith, not join them!" Top: batch #img2img *Detailed* Bottom: #EbSynth using 12 keyframes *Smooth* It's a dark scene so I needed to tweak canny thresholds. #aiart #stablediffusion #ControlNet
43
225
1K
@TomLikesRobots
TomLikesRobots🤖
1 year
Another test combining #stablediffusion #depth2img with #ebsynth This time a cel-shaded animation from a video of me (Not usually so croaky and full of cold!). Background masked out. Need to see if better source using DSLR and lighting improves quality of end animation. #aiart
@TomLikesRobots
TomLikesRobots🤖
1 year
A very quick test using depth guided #img2img and #EbSynth from @scrtwpns Temporal coherence is far better than vid2vid and #depth2img creates a really accurate keyframe. I need to do a deep dive into this. Masking and using AI generated environments? #stablediffusion #aiart
7
24
179
45
198
1K
@TomLikesRobots
TomLikesRobots🤖
1 year
New feature in #AUTOMATIC1111 - #gif2gif 1. Update A1111 2. Enable gif2gif in Extensions 3. Select from script dropdown in img2img tab 4. Example below is using #pix2pix checkpoint 5. "Make him Gaston with a blue shirt" + negative prompts #aiart #stablediffusion
36
179
1K
@TomLikesRobots
TomLikesRobots🤖
2 years
Linking #stablediffusion animation to audio: 1. Generate music with @mubertapp 2. Use audio keyframe generator to control zoom speed 3. "Borrow" prompt from 4. Generate animation using @deforum_art notebook Effect too subtle?
23
149
1K
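Step 2 above (an audio keyframe generator controlling zoom speed) can be sketched as a small function that converts a per-frame volume envelope into Deforum's "frame: (value)" schedule string. The base zoom and scaling factor are assumptions; Deforum accepts this keyframe syntax for its zoom schedule.

```python
# Sketch: map a normalized audio envelope (0-1, one sample per frame)
# to a Deforum-style zoom keyframe string, so louder frames zoom faster.

def zoom_schedule(envelope, base=1.0, scale=0.05):
    """Build 'frame: (value)' keyframes from volume samples."""
    return ", ".join(f"{frame}: ({base + scale * v:.3f})"
                     for frame, v in enumerate(envelope))

print(zoom_schedule([0.0, 0.5, 1.0]))
# -> "0: (1.000), 1: (1.025), 2: (1.050)"
```

The resulting string is pasted straight into the notebook's zoom field.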
@TomLikesRobots
TomLikesRobots🤖
1 year
Big News: The first Open-Source #txt2video 1.7 billion parameter diffusion model has been released and you can play with it now at HuggingFace: More examples here and you can git clone from here too (if you have the VRAM). #aiart
Tweet media one
21
211
1K
@TomLikesRobots
TomLikesRobots🤖
2 years
Seeing how rough a drawing #stablediffusion #img2img can make sense of. I used a very basic 2-minute charcoal sketch as the init. Looks surprisingly similar to the model (apart from the hair). Prompt: "A black and white photo of a young woman, studio lighting, realistic, Ilford HP5 400"
Tweet media one
Tweet media two
32
120
980
@TomLikesRobots
TomLikesRobots🤖
1 year
Pushing #ebsynth to show how transformative the results can be with #stablediffusion Top left is the driving video and the others have been transformed with a single keyframe in a couple of minutes. Anyone interested in a short breakdown on how to do it? #AIart
61
138
960
@TomLikesRobots
TomLikesRobots🤖
1 year
"You did not seriously think that a hobbit could contend with the will of Sauron?" Keyframe generated with #ControlNet #img2img using the canny edge detection model & animated using #Ebsynth #aiart #stablediffusion
43
149
913
@TomLikesRobots
TomLikesRobots🤖
1 year
"The Anomaly of Kepler-61b" Entirely generated with AI. Script: #ChatGPT4 txt2video: #Gen2 txt2speech: @elevenlabsio I'm loving being able to spend a couple of hour building up a story with the help of AI. It feels like a taste of the future and these tools will only get better.…
49
171
854
@TomLikesRobots
TomLikesRobots🤖
2 years
3D movement around a figure with @deforum_art using #stablediffusion . I needed a recognisable face for consistency - with only a description in prompt, it didn't look like the same person at different angles. Still some distortion - need to tweak noise & strength. #AIart
35
84
751
@TomLikesRobots
TomLikesRobots🤖
2 years
"People think dreams aren't real just because they aren't made of matter, of particles. Dreams are real. But they are made of viewpoints, of images, of memories and puns and lost hopes." - @neilhimself #midjourney #AIart #aiartcommunity
Tweet media one
15
123
687
@TomLikesRobots
TomLikesRobots🤖
5 months
@Morbidful Yeah - Charlize Theron got the Oscar for portraying Aileen Wuornos in Monster but I found the documentary from Nick Broomfield far more interesting.
Tweet media one
8
53
645
@TomLikesRobots
TomLikesRobots🤖
1 year
Does anyone else sometimes feel this is too easy? Prompt: "Close-up photography of a face" - The results from #Midjourney are obviously impressive but I don't feel a connection with it. #V4 #aiart #generativeAI
Tweet media one
136
22
637
@TomLikesRobots
TomLikesRobots🤖
5 months
@JoshLekach TBH I don't see what's wrong with this. It's far worse to be shaming them for (presumably) wearing too much makeup and too few clothes than to be a young person on a night out having fun. Is that you with your top off in your profile pic? Does it only count for women?
8
8
635
@TomLikesRobots
TomLikesRobots🤖
2 years
I set this up with @deforum_art / #stablediffusion to generate overnight. The strength_schedule is mapped to the audio so the fox changes to the beat. There are a couple of things I'd change but I love the effect🦊🦊🦊 Trimmed to 2mins for Twitter. Music made with @mubertapp
38
83
564
@TomLikesRobots
TomLikesRobots🤖
7 months
I've overlooked #AnimateDiff () but am really impressed by what's happening with it recently. I came across some cool examples: 1. an ocean and lighthouse animation from @mhgenerate #aiart #aianimation
@mhgenerate
MH
7 months
13
71
508
17
88
571
@TomLikesRobots
TomLikesRobots🤖
1 year
Columbo: Animated - A test. Made with #controlnet - canny edge detection and #AUTOMATIC1111 I think I need to do some comparisons to work out the strengths of different techniques. #aiart #stablediffusion
34
85
513
@TomLikesRobots
TomLikesRobots🤖
2 months
@weirddalle It's just salt on freshly cut muscle. These kinds of vids are posted pretty regularly.
@weirdterrifying
Weird and Terrifying
1 year
Adding salt to freshly cut muscle causes it to spasm.
482
885
5K
14
26
513
@TomLikesRobots
TomLikesRobots🤖
1 year
A quick demo showing the difference between depth2img (1st animation) and img2img (2nd animation). This is the girl with a pearl earring with the prompt "bronze sculpture of a girl" at 10 different noise steps. Will post x/y plot below. #stablediffusion #aiart
17
74
493
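The x/y comparison described here (the same prompt at ten noise/strength steps) can be sketched with diffusers. The strength sweep is pure Python and matches the "10 different noise steps" in the tweet; the pipeline call assumes a CUDA install of diffusers and the public "stabilityai/stable-diffusion-2-depth" checkpoint.

```python
# Sketch of a depth2img strength sweep: one image per denoising strength,
# same prompt throughout, to compare how much of the source survives.

def strength_sweep(n=10):
    """n evenly spaced denoising strengths, e.g. 0.1 .. 1.0 for n=10."""
    return [round((i + 1) / n, 2) for i in range(n)]

def run_sweep(init_image, prompt="bronze sculpture of a girl"):
    # Heavy imports deferred; requires a GPU and model download.
    import torch
    from diffusers import StableDiffusionDepth2ImgPipeline
    pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
    ).to("cuda")
    return [pipe(prompt=prompt, image=init_image, strength=s).images[0]
            for s in strength_sweep()]
```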
@TomLikesRobots
TomLikesRobots🤖
2 years
As @sureailabs mentioned, reusing same seed with #stablediffusion is so powerful. You can drill down exact effect of changing: 1.-Artists. 2.-Phrasing e.g. "by" to "In style of". 3.-Order of modifiers. 4.-Punctuation. Inverted commas etc. Time for some deep analysis.
Tweet media one
Tweet media two
Tweet media three
Tweet media four
23
82
486
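The seed-reuse experiment above can be sketched in diffusers: fixing the generator seed keeps the initial noise identical, so any difference between outputs comes from the prompt change alone. The prompt variants mirror the tweet's four axes (artist, phrasing, modifier order, punctuation); the subject, seed, and checkpoint are assumptions.

```python
# Sketch: same seed, four single-variable prompt changes.

def prompt_variants(subject="a lighthouse at dusk"):
    return [
        f"{subject} by Claude Monet",        # 1. artist
        f"{subject} in the style of Monet",  # 2. phrasing: "by" vs "in style of"
        f"oil painting, {subject}",          # 3. order of modifiers
        f'"{subject}", oil painting',        # 4. punctuation (inverted commas)
    ]

def generate_fixed_seed(prompts, seed=42):
    import torch
    from diffusers import StableDiffusionPipeline
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    # Re-create the generator per image so every prompt starts from the
    # exact same initial noise.
    return [pipe(p, generator=torch.Generator("cuda").manual_seed(seed)).images[0]
            for p in prompts]
```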
@TomLikesRobots
TomLikesRobots🤖
1 year
1st Animation test with depth2img. Pretty interesting - seems to be less flickery than standard img2img/vid2vid. Very quick. Each 50 frame animation (768x448) took about 1 minute 52. I expect this to be really good once we get to grips with it. #stablediffusion #AIart
18
65
465
@TomLikesRobots
TomLikesRobots🤖
2 years
Cool - I've tweaked the audio reactivity with this @deforum_art / #stablediffusion animation until it's right. I isolated the drums and tweaked the audio-keyframe function. The cat should only change when the drum beats. Zoom also mapped to audio. 🐱🐱🐱 Music - @mubertapp #aiart
23
81
405
@TomLikesRobots
TomLikesRobots🤖
1 year
Yes, you need to work for it with #StableDiffusion2 but you can get an incredible level of detail. This is a raw output at 768x768. No upscaling or facial restoration. I can imagine this getting insanely good with a bit of tweaking/finetuning. #AIart
Tweet media one
@EMostaque
Emad
1 year
Raw #StableDiffusion2 output from text prompt in (un)surprisingly few steps by @masslevel Really do recommend folk take some time to explore the model, even more versatile than before with some exciting things in the pipeline..
7
12
107
14
59
404
@TomLikesRobots
TomLikesRobots🤖
1 year
Great - InstructPix2Pix is working with #Automatic1111 Just install the "instruct-pix2pix" extension, dl the 22000.ckpt and you're there. 1. Original 2. Make it autumn 3. Make her a ghost 4. Make it surreal #stablediffusion #aiart
Tweet media one
Tweet media two
Tweet media three
Tweet media four
12
50
386
@TomLikesRobots
TomLikesRobots🤖
1 year
Managed to get the animation hack working for #automatic1111 with #ControlNet () Check "Do not append detectmap to output" in settings. Turn Jim Carrey into Tom Cruise. #AIart #stablediffusion
25
40
370
@TomLikesRobots
TomLikesRobots🤖
1 year
"Declowning the Joker". Not perfect but I like where this is heading. I needed to use depth2img as canny and HED picked up the makeup. A fun experiment. #aiart #stablediffusion #ControlNet #ebsynth
14
51
354
@TomLikesRobots
TomLikesRobots🤖
5 months
@heyyitsjanea @mingkengbomnam It's just a meme that's made to be ambiguous. 4 and 36 can both be correct.
Tweet media one
9
10
355
@TomLikesRobots
TomLikesRobots🤖
9 months
I'm absolutely blown away by @runwayml 's #Gen2 using image input. The movement is so natural. Using it with @midjourney is a winning combination. If you want your video to stay true to your image, don't use a text prompt. (Thanks to @Uncanny_Harry and @Merzmensch for the tip!).…
33
48
347
@TomLikesRobots
TomLikesRobots🤖
2 years
This morning I've been playing around with the @huggingface 's #StableDiffusion Textual Inversion Concepts Library working out how best to train my own concepts. Would be great to force consistency with image generation to improve storytelling. #AIart
Tweet media one
8
48
341
@TomLikesRobots
TomLikesRobots🤖
2 years
A few people have been asking how I've been using #Dreambooth . I've been using this service: Really simple interface - $3 to finetune a model which is pretty reasonable. Can generate images on site and ckpt is available to DL to use elsewhere.
Tweet media one
14
39
331
@TomLikesRobots
TomLikesRobots🤖
2 years
A couple more #stablediffusion 1.5 comparisons. Left: No CLIP Right: With CLIP All other settings the same. Generally (there are exceptions) CLIP guided images seem to have fewer anatomy issues and more accurate colours. #aiart
Tweet media one
Tweet media two
Tweet media three
14
29
311
@TomLikesRobots
TomLikesRobots🤖
1 year
This is more fun than it should be. Check out Stable Diffusion Multiplayer from @huggingface
Tweet media one
7
41
314
@TomLikesRobots
TomLikesRobots🤖
2 years
-Damien Hirst doesn't paint most of his work and hires assistant spot painters. -Jeff Koons admits to never crafting his own pieces. -Even Michelangelo had a team of assistants working on the Sistine Chapel. - #AIart is generated using AI. What makes an artist? Where's the line?
39
60
260
@TomLikesRobots
TomLikesRobots🤖
2 years
Another example of Composable Diffusion - ( @GanWeaving was asking). It's possible to use two styles in the same generated image without them blending into each other. "Photograph of Walter White AND Van Gogh Landscape" I don't think this is possible through other methods.
Tweet media one
18
21
290
@TomLikesRobots
TomLikesRobots🤖
1 year
Pure AI generation - no init images or video. We're getting closer to coherent animations. It's interesting to note that prompts that worked well in SD 1.5 seem to work well with #modelscope like this Art Deco/Mucha/leyendecker one #txt2video #AIart
18
60
285
@TomLikesRobots
TomLikesRobots🤖
1 year
I'm really enjoying experimenting with @KaiberAI 's new #vid2vid tool (in beta) Really simple to use - all it needs is for you to upload a video and apply a style using a prompt. (Source video from @pexels ) #AIart
14
47
286
@TomLikesRobots
TomLikesRobots🤖
1 year
Yes, it's either one way or the other - High Quality and Artistic or Pope Francis stealing spaghetti from Snoop Dogg.
@noodlesli2016
Noodles
1 year
@TomLikesRobots Interesting, your outputs look very high quality and artistic, make me want to try it as well😀 I thought Text2video might be too early still, but now this change my mind😀👍👍
0
0
1
23
52
279
@TomLikesRobots
TomLikesRobots🤖
5 months
@ilyasut Yeah - I guess it's all in the hands of Microsoft now. They have the compute, the weights and it looks like they'll be getting some new staff.
9
7
281
@TomLikesRobots
TomLikesRobots🤖
1 year
I've been testing #stablediffusion v.2 and I've been having fun with AI movie stills. These tests were with Euler A and I think this model really benefits from using higher steps. Will keep on testing. "Cinematic still of old man in a Victorian apothecary etc" #aiart
Tweet media one
Tweet media two
11
32
283
@TomLikesRobots
TomLikesRobots🤖
1 year
A building animation using #stablediffusion depth2img. I dropped denoise to 50% for this one but I don't think painterly style helps with jitter. Form coherent but style is quite chaotic. Maybe less movement? Thanks to Florian Delée on Pexels for source video. #aiart #aianimation
12
29
280
@TomLikesRobots
TomLikesRobots🤖
1 year
“I ate his liver with some fava beans and a nice chianti.” It took a bit of time to match up the mouth and eye movements and needed a few keyframes but I think the videos blend together pretty well. 0.6 Denoise. #aiart #stablediffusion #EbSynth #controlnet
13
27
276
@TomLikesRobots
TomLikesRobots🤖
1 year
"Make it Grass": A #pix2pix demo using #stablediffusion Simple prompts definitely seem to work a lot better. #aiart
14
30
271
@TomLikesRobots
TomLikesRobots🤖
1 year
Another experiment with #ebsynth and #controlnet #img2img . 5 keys. I quite like the eerie result. Interesting to note that the closer you get to high detail photorealism with the keyframes, the more difficult it becomes to blend the videos. Getting closer! #aiart #stablediffusion
13
30
265
@TomLikesRobots
TomLikesRobots🤖
5 months
@stats_feed @sama He stated during hearings that he doesn't hold any shares. I think he's saying he's going to kick off, and there's nothing they can do to stop him.
7
4
265
@TomLikesRobots
TomLikesRobots🤖
1 year
I've been playing with checkpoints and I love the effect of using a cartoon style checkpoint but using "cartoon" as a negative prompt. Top: SD 1.5 Middle: Disney Bottom: Arcane Right: "Cartoon" neg prompt #stablediffusion #aiart
Tweet media one
Tweet media two
11
28
265
@TomLikesRobots
TomLikesRobots🤖
2 years
The 1.5 inpainting checkpoint for #stablediffusion is almost magic. Been testing with some old Victorian Ghost Story concepts. Images lacked details in faces etc but because you can inpaint areas at full res with #automatic1111 it's so easy to correct now. 1.Before 2.After #aiart
Tweet media one
Tweet media two
18
24
253
@TomLikesRobots
TomLikesRobots🤖
1 year
I would love to watch a remake of "Fellowship of the Ring" with @SnoopDogg as Saruman (or even Gandalf). I just need to work out how to transform voices now. #aiart #stablediffusion
@TomLikesRobots
TomLikesRobots🤖
1 year
"You did not seriously think that a hobbit could contend with the will of Sauron?" Keyframe generated with #ControlNet #img2img using the canny edge detection model & animated using #Ebsynth #aiart #stablediffusion
43
149
913
18
34
250
@TomLikesRobots
TomLikesRobots🤖
1 year
Columbo: Animated. Test2. Add #EbSynth to the mix and his mouth moves more convincingly. The blends between keys aren't perfect but I like it. Maybe need to mix #img2img for body and #EbSynth for subtle details like speech. Again #ControlNet for keyframes #stablediffusion #aiart
8
24
244
@TomLikesRobots
TomLikesRobots🤖
1 year
Basic Overview of EbSynth. Input = driving video & keyframe(s). Output = animation. EbSynth uses pixels from the keyframe and motion from the video frames. The bobcat(?) vid I used is the sample that comes with EbSynth, and here I used a checkerboard key for frame 000.
@TomLikesRobots
TomLikesRobots🤖
1 year
Pushing #ebsynth to show how transformative the results can be with #stablediffusion Top left is the driving video and the others have been transformed with a single keyframe in a couple of minutes. Anyone interested in a short breakdown on how to do it? #AIart
61
138
960
9
32
250
@TomLikesRobots
TomLikesRobots🤖
7 months
Nice work @DiscoverStabDif - I spotted your post on Reddit earlier. A pretty coherent 661.60512 Megapixel image. (Not quite Gigapixel yet) I believe this was created using StableDiffusion and passed through the upscaler several times. All we need now is a Where's Wally (Waldo in…
@DiscoverStabDif
Discovering Stable Diffusion
7 months
It worked! here's a 1.2 #gigapixel image made with #stablediffusion ! took about 8 hours with 7 iterations of image-to-image upscaling. #aiartcommunity #AIgenerated #aiart #aiworlds #ai #artificialintelligence #aiartists #chatgpt4
Tweet media one
7
15
139
11
41
250
@TomLikesRobots
TomLikesRobots🤖
1 year
Cool. depth2img model is now working with Automatic1111 and on first glance works really well. Update code and then instructions to get it installed here: I look forward to using it for vid2vid to see how well it does. #StableDiffusion2 #aiart
Tweet media one
Tweet media two
9
27
241
@TomLikesRobots
TomLikesRobots🤖
1 year
I suspect this family is well represented in the Stable Diffusion dataset.
Tweet media one
14
15
236
@TomLikesRobots
TomLikesRobots🤖
2 years
More testing with #stablediffusion 's #img2img A pencil drawing from an old figure drawing session (stiffer than usual?). Some of the features are different but it's fixed some of the faults. It's hard to apply colour that's not there originally. Maybe need to PS it before?
Tweet media one
Tweet media two
Tweet media three
9
17
239
@TomLikesRobots
TomLikesRobots🤖
11 months
Big personal news. You may have noticed I haven't been posting much recently and this is due to settling into a new job. I've nearly completed my first month as an AI Artist with @Metaphysic_ai working on the upcoming Tom Hanks/Robert Zemeckis movie 🎥"Here". It's a hugely…
Tweet media one
59
10
240
@TomLikesRobots
TomLikesRobots🤖
1 year
Lol. MJ V4 can use two separate images as prompts. #aiart #midjourney
Tweet media one
Tweet media two
Tweet media three
9
18
234
@TomLikesRobots
TomLikesRobots🤖
2 years
"Art outside of the picture frame." An experiment with movement. There are no such thing as failures - just learning experiences. This isn't what I was going for but I'll take it. #stabledifussion @deforum_art #aiart
7
24
234
@TomLikesRobots
TomLikesRobots🤖
2 years
Quick style test for a personal project I'm working on. #metahuman animation for init video. Dreambooth (for Ghibli style) and Textual Inversion(for likeness - to try to reduce jitter). #stablediffusion #AIart
14
30
233
@TomLikesRobots
TomLikesRobots🤖
1 year
I've been playing with @runwayml 's #gen2 today and I'm very impressed at the level of detail and coherence. Really cool. This video is pure #txt2video but I've noticed that using an initial image can create some stunning results. I'll share some more over the weekend. I haven't…
14
28
229
@TomLikesRobots
TomLikesRobots🤖
2 years
New feature in #stablediffusion is the "-n" command which lets you generate a batch of images. It provides seeds for images so once you've chosen the best you can reuse the seed and tweak the prompt. Here's a Pre-Raphaelite Nicole Kidman. #AIart
Tweet media one
Tweet media two
9
21
219
@TomLikesRobots
TomLikesRobots🤖
2 years
"Growing Older" From 5 to 80 in 13 seconds: an experiment reusing seeds in #stablediffusion Prompt: "close up face female portrait XX years old..." with XX iterating through the years. #AIart
7
31
223
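The ageing animation amounts to iterating one token through a fixed prompt template while reusing the same seed. A sketch; the template follows the tweet's wording and the 5-year step is an assumption:

```python
# Sketch: build one prompt per frame, changing only the age token.

def age_prompts(start=5, stop=80, step=5):
    template = "close up face female portrait {age} years old, photorealistic"
    return [template.format(age=a) for a in range(start, stop + 1, step)]

prompts = age_prompts()
print(len(prompts))  # 16 frames, 5 -> 80 in steps of 5
print(prompts[0])
```

Each prompt is then rendered with the same seed and the frames assembled into the video.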
@TomLikesRobots
TomLikesRobots🤖
1 year
Less jitter when camera is moving more slowly. #stablediffusion depth2img with Automatic1111. Noise is down at 50% #aiart #aianimation
11
15
222
@TomLikesRobots
TomLikesRobots🤖
1 year
Hah - note to self. Make sure subject is evenly lit when selecting a video for #EbSynth . Other than that the stone sculpture effect is pretty cool. No such thing as failure, just learning experiences. #stablediffusion #depth2img
4
21
217
@TomLikesRobots
TomLikesRobots🤖
11 months
Aim: Improve likeness of David Attenborough in AI generated video. No Image or Video inputs. 1. Generate #Gen2 video with prompt: "a nature tv show with David Attenborough being interviewed". Not a bad video but it looks nothing like him. 2. Use #img2img with controlnet. Prompt:…
7
31
221
@TomLikesRobots
TomLikesRobots🤖
5 months
Video to Video rotoscoping/animating has got so much smoother since AnimateDiff appeared on the scene. This is getting really good.
@InnerRefle11312
Inner-Reflections
5 months
Using the new IPadpater batch unfold settings to rotoscope/animate a high motion fight scene! #hotshotxl #animatediff #aianimation
45
272
1K
8
34
220
@TomLikesRobots
TomLikesRobots🤖
1 year
If you haven't used #midjourney for a while open it up and give it a shot. V4 was released in Alpha this morning and it's pretty amazing. I used an old Ghibli BOTW prompt and the improvement over v3 is pretty mindblowing. #AIart
Tweet media one
@Ted_Underwood
@tedunderwood.me 🦋
1 year
I was pretty happy with the left version of "an intricate treehouse in a Studio Ghibli forest" back in April, but Midjourney V4 is, um, better.
Tweet media one
Tweet media two
8
20
204
9
25
220
@TomLikesRobots
TomLikesRobots🤖
2 years
Checklist of things to explore for the weekend: 1.Textual Inversion 2.Depth Mapping 3. #DeforumDiffusion V0.3 4. Krita #stablediffusion plugin Am I missing anything?
7
26
222
@TomLikesRobots
TomLikesRobots🤖
1 year
Last night I was playing around with masking with #EbSynth. A rough low-res demo that needs more keys, but the technique works. TopLeft - Original TopRight - Mask using @runwayml 's depth2video with contrast increased BottomLeft - No Mask BottomRight - Mask applied #stablediffusion #aiart
9
22
208
@TomLikesRobots
TomLikesRobots🤖
2 years
Another really interesting tool in #automatic1111 is composable diffusion. 1. "Scarlett Johansson and Natalie Portman" 2. "Scarlett Johansson AND Natalie Portman" The AND operator treats each section as a separate prompt and combines them. Cool effects.
Tweet media one
Tweet media two
7
27
207
@TomLikesRobots
TomLikesRobots🤖
2 years
Another experiment with #stablediffusion 's #img2img It even managed to turn my rough charcoal sketch of a goat into a photo. (He did lose his beard) #aiart
Tweet media one
Tweet media two
7
15
211
@TomLikesRobots
TomLikesRobots🤖
2 years
Another ageing animation with #stablediffusion but this time I've tweaked the process. -Changing a word has a more subtle effect in long prompts than short ones. -Using descriptions such as old, middle aged, toddler usually works better than stating specific ages like 10, 40, 80.
@TomLikesRobots
TomLikesRobots🤖
2 years
"Growing Older" From 5 to 80 in 13 seconds: an experiment reusing seeds in #stablediffusion Prompt: "close up face female portrait XX years old..." with XX iterating through the years. #AIart
7
31
223
10
38
204
@TomLikesRobots
TomLikesRobots🤖
2 years
Audio reactive waves created using @deforum_art and #stablediffusion I think it's too sensitive? Using the function "0.9 - (x^2)" with x as raw volume. Default strength_schedule is 0.75 so I need to keep values closer to that. Music from @mubertapp
8
32
192
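The mapping quoted in this tweet can be sketched directly. The 0.9 - x^2 formula is from the tweet; the clamp bounds are an assumption added so values stay near Deforum's 0.75 default, per the tweet's own conclusion:

```python
# Sketch: map raw volume x (0-1) to a Deforum strength_schedule value.
# Lower strength keeps less of the previous frame, so louder audio
# produces a bigger visual change.

def strength_from_volume(x, lo=0.6, hi=0.9):
    """0.9 - x**2, clamped to [lo, hi]."""
    return max(lo, min(hi, 0.9 - x ** 2))

print(strength_from_volume(0.0))  # 0.9 (silence: frame barely changes)
print(strength_from_volume(0.4))  # ~0.74 (moderate beat)
print(strength_from_volume(1.0))  # 0.6 (clamped at the floor)
```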
@TomLikesRobots
TomLikesRobots🤖
1 year
I've posted a few photo style images created with #StableDiffusion2 but here are some painterly ones. Yep, old prompts don't really work but it seems pretty versatile. #AIart A Star Wars Marketplace
Tweet media one
Tweet media two
9
22
191
@TomLikesRobots
TomLikesRobots🤖
1 year
These prompts also work really well with brown skin. Beautiful detail. #stablediffusion2 #aiart
Tweet media one
Tweet media two
@TomLikesRobots
TomLikesRobots🤖
1 year
Yes, you need to work for it with #StableDiffusion2 but you can get an incredible level of detail. This is a raw output at 768x768. No upscaling or facial restoration. I can imagine this getting insanely good with a bit of tweaking/finetuning. #AIart
Tweet media one
14
59
404
7
31
194
@TomLikesRobots
TomLikesRobots🤖
5 months
Okay. Pika 1.0 is 🔥- definitely going to have some fun with this. For my 1st video I tried something that wouldn't be in the dataset. "A cute tardigrade in the style of a 3d animated movie". #pika #aivideo
15
18
193
@TomLikesRobots
TomLikesRobots🤖
1 year
I was asked if depth2img could only change materials. So much more powerful. Imagine you have a photo and want to change the style and feel without changing form and structure like img2img. Easy with depth2img. Prompt: "inside a volcano" (Photo: Kangyu Hu) #stablediffusion #aiart
@TomLikesRobots
TomLikesRobots🤖
1 year
A quick demo showing the difference between depth2img (1st animation) and img2img (2nd animation). This is the girl with a pearl earring with the prompt "bronze sculpture of a girl" at 10 different noise steps. Will post x/y plot below. #stablediffusion #aiart
17
74
493
4
31
191
@TomLikesRobots
TomLikesRobots🤖
1 year
Another artist whose style works really well with dioramas is Van Gogh. The vibrant colours combine with the layered effect to create something pretty magical.
Tweet media one
Tweet media two
@TomLikesRobots
TomLikesRobots🤖
1 year
Jumping on the diorama train this morning. I'm committed to another project at the moment but would 100% watch a Giacometti animation in this style. Maybe I'll have time to build a scene out in Blender at some point. #aiart #diorama #midjourney #V4
Tweet media one
Tweet media two
0
2
56
4
23
186
@TomLikesRobots
TomLikesRobots🤖
1 year
Another #modelscope #txt2video style test. A simple prompt with a strong style can produce consistently interesting animations and can overpower the ShutterStock logo. Prompt: A painting by Van Gogh Negative Prompt: text, watermark, copyright, blurry Steps: 60 CFG: 12…
10
27
182
@TomLikesRobots
TomLikesRobots🤖
1 year
A very quick test using depth guided #img2img and #EbSynth from @scrtwpns Temporal coherence is far better than vid2vid and #depth2img creates a really accurate keyframe. I need to do a deep dive into this. Masking and using AI generated environments? #stablediffusion #aiart
@TomLikesRobots
TomLikesRobots🤖
1 year
1st Animation test with depth2img. Pretty interesting - seems to be less flickery than standard img2img/vid2vid. Very quick. Each 50 frame animation (768x448) took about 1 minute 52. I expect this to be really good once we get to grips with it. #stablediffusion #AIart
18
65
465
7
24
179
@TomLikesRobots
TomLikesRobots🤖
1 year
It's not quite Dreambooth but pretty fun. Using my photo as an image prompt in #midjourney V4 here I am as "gta v, cover art, artstation" Has the vague features (big nose, furrowed brow) but it's not really me. #AIart
Tweet media one
Tweet media two
15
11
177
@TomLikesRobots
TomLikesRobots🤖
5 months
Midjourney and Pika 1.0 make a pretty great stack for creating ai video. I'd tried animating this image with other img2vid models before but ran into issues with disappearing heads and extreme morphing. Pika handles it really well. #pika #aivideo #midjourney
5
24
178
@TomLikesRobots
TomLikesRobots🤖
2 years
Steps to using textual inversion with #automatic1111 : 1. Train your concept on colab 2. Download your .bin (or anyone else's from ) 3. Change filename to conceptX .pt 4. Move to embeddings folder 5. Use conceptX in your prompt
Tweet media one
Tweet media two
Tweet media three
Tweet media four
7
25
176
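Steps 3-4 of this checklist (rename the downloaded .bin and move it into the embeddings folder) can be sketched with pathlib. The folder name and the "learned_embeds.bin" filename are assumptions based on typical A1111/concepts-library layouts; adjust for your install.

```python
# Sketch: install a textual-inversion embedding as <concept>.pt in the
# A1111 embeddings folder. The rename works because both files are
# torch-serialized tensors; only the extension differs.
from pathlib import Path
import shutil

def install_embedding(bin_path, concept_name, embeddings_dir="embeddings"):
    """Copy a learned_embeds.bin into embeddings/<concept_name>.pt."""
    dest_dir = Path(embeddings_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / f"{concept_name}.pt"
    shutil.copy(bin_path, dest)
    return dest

# Usage: install_embedding("learned_embeds.bin", "conceptX"),
# then use "conceptX" inside your prompt.
```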
@TomLikesRobots
TomLikesRobots🤖
1 year
Credit to u/LadyQuacklin on Reddit who created the depth map importer with the help of GPT3. There are also other ways to control 2D -> 3D. I've forked the 3D depth inpainting repo () to control camera movement.
3
24
178
@TomLikesRobots
TomLikesRobots🤖
2 years
Lol. When you reuse your config from another project and forget to remove the init_image.... #StableDiffusion #DeforumDiffusion Not the look I was going for.
9
28
167
@TomLikesRobots
TomLikesRobots🤖
2 years
Interesting experiment with Dreambooth finetuning and animation with some camera movement. No inits here. I brought the ckpt file into @deforum notebook Interestingly I had to drop the cfg_scale to 4 as anything higher would overpower the whole scene. #stablediffusion #AIart
13
16
173
@TomLikesRobots
TomLikesRobots🤖
1 year
Interesting. #stablediffusion 2.1 coming this week. I guess this is what Emad meant when he said that the release rate would speed up now that the legal stuff is out of the way. Good to see.
Tweet media one
6
18
167
@TomLikesRobots
TomLikesRobots🤖
2 years
Experiments with Textual Inversion. I wanted to use a new character and Metahuman allows you to easily create one and render with different environments, angles and expressions. Very happy with these - currently training Dreambooth to compare. #stablediffusion #aiart
Tweet media one
Tweet media two
Tweet media three
Tweet media four
8
23
162
@TomLikesRobots
TomLikesRobots🤖
2 years
Following up on this I tested again with #stablediffusion and was able to transition from human to werewolf by reducing the parentheses incrementally. Bit of a jump at last stage but maybe there are other hacks to be found? The rest of the prompt and seed remain the same.
Tweet media one
@TomLikesRobots
TomLikesRobots🤖
2 years
Weighting Hack for #stablediffusion I've been experimenting with punctuation and the more brackets you enclose a word with, the less weight it seems to have. Prompt: "close up face female portrait, Vampire," with parentheses around vampire. Anyone tried this?
Tweet media one
13
15
164
11
30
165
@TomLikesRobots
TomLikesRobots🤖
2 years
Weighting Hack for #stablediffusion I've been experimenting with punctuation and the more brackets you enclose a word with, the less weight it seems to have. Prompt: "close up face female portrait, Vampire," with parentheses around vampire. Anyone tried this?
Tweet media one
13
15
164
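The bracket-weighting hack in these two tweets can be sketched as a prompt builder: wrap the token in progressively more parentheses and render each variant with the same seed. Note this de-emphasis behavior matches the notebook build the tweets describe; AUTOMATIC1111's own syntax later used (word) to *increase* attention, so the direction depends on your frontend. The prompt template follows the tweet.

```python
# Sketch: generate prompt variants with 0..n pairs of parentheses
# around one word to sweep its effective weight.

def weighted_prompt(word, level, template="close up face female portrait, {w},"):
    """Wrap `word` in `level` pairs of parentheses inside the prompt."""
    return template.format(w="(" * level + word + ")" * level)

for n in range(4):
    print(weighted_prompt("Vampire", n))
# level 0 -> full effect; level 3 -> heavily de-emphasized (in that build)
```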
@TomLikesRobots
TomLikesRobots🤖
2 years
"Kandinsky". So many exciting things happening in the #stablediffusion ecosystem. This is a very simple 2D animation test made with #DeforumDiffusion So quick to generate - each frame took only a few seconds to render at 512x512 on a P100.
6
20
157
@TomLikesRobots
TomLikesRobots🤖
2 years
@neilhimself @NetflixGeeked Yay, so looking forward to this. Reminder set ✅
Tweet media one
1
9
146
@TomLikesRobots
TomLikesRobots🤖
1 year
Just had an idea. Here's another depth2image animation using the same source and yep, it does seem to work pretty well with a familiar face. #aiart #aianimation #stablediffusion
@TomLikesRobots
TomLikesRobots🤖
1 year
1st Animation test with depth2img. Pretty interesting - seems to be less flickery than standard img2img/vid2vid. Very quick. Each 50 frame animation (768x448) took about 1 minute 52. I expect this to be really good once we get to grips with it. #stablediffusion #AIart
18
65
465
10
21
149
@TomLikesRobots
TomLikesRobots🤖
2 years
I've been experimenting with negative prompts as recommended by @CoffeeVectors and I'm amazed at the impact it can have. Same prompt and settings but images on right also use negative prompts. Let me know if you'd like any more info on set up. #aiart #stablediffusion
Tweet media one
Tweet media two
Tweet media three
9
10
154
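The negative-prompt comparison above can be sketched in diffusers: two runs that share every setting and the seed, differing only in the `negative_prompt` argument. The negative terms and checkpoint are illustrative assumptions, not the tweet's exact setup.

```python
# Sketch: identical generations with and without a negative prompt.

NEGATIVE = "blurry, deformed hands, extra fingers, watermark, text"

def paired_kwargs(prompt, steps=30):
    """Two identical generation configs, differing only in negative_prompt."""
    base = {"prompt": prompt, "num_inference_steps": steps}
    return [dict(base), dict(base, negative_prompt=NEGATIVE)]

def generate_pair(prompt, seed=1234):
    import torch
    from diffusers import StableDiffusionPipeline
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    # Same seed for both runs so the negative prompt is the only variable.
    return [pipe(generator=torch.Generator("cuda").manual_seed(seed), **kw).images[0]
            for kw in paired_kwargs(prompt)]
```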