fofr (@fofrAI)

Followers 21,553 · Following 913 · Media 1,866 · Statuses 5,017
Pinned Tweet
@fofrAI
fofr
1 year
A new experiment mixing regional prompting and controlnet to create some smooth non-flickering animations. If folks are interested I'll put up a workflow video on YouTube.
64
178
2K
@fofrAI
fofr
6 months
👀
192
916
6K
@fofrAI
fofr
1 year
I put some famous logos through ControlNet.
[2 images]
108
964
6K
@fofrAI
fofr
1 year
I asked Midjourney v5 to '/describe' some logos, to see how it would create prompts for them, and to see what it would create in response. Starbucks
[image]
108
518
6K
@fofrAI
fofr
1 year
🧵 A big #Midjourney thread on how to write prompts to get good cinematic images. In this thread I’ll build up a single prompt with cinematic elements, and show their effects. Each prompt will use a 16:9 aspect ratio, and to minimise variation I've locked in a seed.
[4 images]
98
439
3K
@fofrAI
fofr
19 days
👀
[image]
62
361
2K
@fofrAI
fofr
2 years
With #stablediffusion img2img, I can help bring my 4yr old’s sketches to life. Baby and daddy ice cream robot monsters having a fun day at the beach. 😍 #AiArtwork
[2 images]
41
270
2K
@fofrAI
fofr
4 months
The live action Game of Thrones Mario Kart adaptation is wild. Winter is coming... so are the spiny shells.
[4 images]
42
170
2K
@fofrAI
fofr
8 months
Controllism > Upscale > Pika Labs > Upscale video
199
158
2K
@fofrAI
fofr
11 months
You can also try colour-vectorizing an image. Thanks for the heads up @KireiStudios. 1. Use Midjourney with a prompt like: "an svg vector image of [subject], white background, logo" 2. upload to 3. get svg, eps, pdf … 🙌😍
[3 images]
20
200
1K
@fofrAI
fofr
1 year
Firefox
[image]
14
29
1K
@fofrAI
fofr
3 months
Midjourney v6 + Stable Video 👌
29
88
1K
@fofrAI
fofr
1 year
[2 images]
9
177
1K
@fofrAI
fofr
1 year
I had to laugh a lot at this one.
[2 images]
15
111
1K
@fofrAI
fofr
8 months
I trained an SDXL fine-tune on really bad early digital photos with poor use of flash and crappy lighting.
[4 images]
50
118
1K
@fofrAI
fofr
1 year
Pepsi, lol
[image]
27
30
1K
@fofrAI
fofr
2 years
The accuracy in the details of #midjourney v4 is superb.
[2 images]
39
106
1K
@fofrAI
fofr
6 months
Stable Video Diffusion (SVD) is now on Replicate: - Make 14 frame or 25 frame videos - Use any image, it'll be resized to make the best video - Control degree of movement Great work putting this together @lucataco93 , and @StabilityAI for the fantastic…
26
141
922
@fofrAI
fofr
1 year
I think I hit a ChatGPT cache
[image]
18
29
846
@fofrAI
fofr
9 months
😍 Woah, CoDeF (temporally consistent video processing) looks really powerful! Consistency is fantastic. Transform a derived canonical image using controlnet then apply to a whole video. Project: Code: Some fave examples in 🧵
19
224
847
@fofrAI
fofr
1 year
I liked this take on the Spotify logo
[image]
6
9
834
@fofrAI
fofr
6 months
Dang, I just can't get over how good this Stable Video Diffusion model is.
39
80
849
@fofrAI
fofr
2 months
face + style = become image I've now pushed "become-image" as a Replicate model and open sourced the code and ComfyUI workflow on Github. All links in the thread below.
[image]
17
99
729
@fofrAI
fofr
11 months
🧵 Announcing Prompter, an open-source tool (and npm package) for generating and sharing prompts: Use it with Midjourney, Stable Diffusion, Gen2, anything. It's great for quickly exploring models. I've shared some of my fave prompts in the thread 👇
[4 images]
28
152
731
@fofrAI
fofr
2 months
👀 face-to-sticker model is now live. Run it on Replicate. It's a ComfyUI workflow too. Details below.
[image]
21
81
662
@fofrAI
fofr
6 months
First text-to-video render of Stable Video Diffusion (SVD) from a Midjourney input image. I'm impressed with - coherent movement - video quality - accuracy with original image Shame the explosions didn't, uh, explode.
24
62
606
@fofrAI
fofr
1 year
Twitter
[3 images]
11
11
583
@fofrAI
fofr
1 year
I asked Bard to make some image prompts for MJv5. Prompts are in the Alt tags. I iterated and asked: - Embellish these with dramatic details, using only keywords - Include the city and year and some cinematic effects
[4 images]
21
66
578
@fofrAI
fofr
1 year
🧵A follow-up thread on how to write prompts to get good animated cinematic images.
[4 images]
19
104
562
@fofrAI
fofr
1 year
Diorama nightlights, take my money.
[2 images]
13
49
555
@fofrAI
fofr
1 month
5.6 million 🤯
[image]
17
36
571
@fofrAI
fofr
1 year
There's now a usable implementation of DragGAN (that cool demo showing the lion opening and closing its mouth) Unofficial implementation: Colab: Original:
@likunchang1998
Kunchang Li
1 year
Our team member (Zeqiang Lai) has reproduced #DragGAN and integrated it into InternGPT! Have a try! 🏄‍♂️ Demo: Code:
14
145
505
8
154
555
@fofrAI
fofr
3 months
@venturetwins So talented
[image]
21
8
557
@fofrAI
fofr
1 year
I fine-tuned an LLM using Midjourney prompts and @replicatehq ’s fine-tune API. The model produces descriptive, keyword heavy prompts out of the box. Try it here: Please share your results! Video guide on fine-tuning LLMs soon.
[2 images]
21
130
536
@fofrAI
fofr
1 year
Mona Lisa at the supermarket checkout #midjourneyV4
[image]
16
91
520
@fofrAI
fofr
1 year
Adidas
[image]
8
5
499
@fofrAI
fofr
8 months
FOMO
4
6
492
@fofrAI
fofr
1 year
Adobe, with attitude.
[image]
4
4
473
@fofrAI
fofr
2 months
face + style – using IPAdapter for style, InstantID for consistency, prompt for steering and depth controlnet for continuity with original. This also uses DreamshaperXL Lightning, so these images are in just 4 steps. Will share details soon.
[image]
@fofrAI
fofr
2 months
face + style
[image]
8
6
163
15
52
489
@fofrAI
fofr
1 year
[2 images]
19
31
476
@fofrAI
fofr
1 year
Two word prompt. I love stumbling on short prompts that do something magical. What do you think the words were?
[image]
61
28
462
@fofrAI
fofr
1 year
Apple
[3 images]
2
7
464
@fofrAI
fofr
6 months
This morning I tried using voice to ask ChatGPT to get some football scores from the web, as now it has all the features baked in. Seemed like it was working, but then I checked the transcript.
[image]
26
23
465
@fofrAI
fofr
8 months
I've put together a new emoji SDXL fine-tune. Try it with this: "A TOK emoji of [subject]" Works great with famous people.
[4 images]
33
57
451
@fofrAI
fofr
1 year
Mario Kart movie adaptations through the years. The 1940s.
[2 images]
16
48
447
@fofrAI
fofr
2 years
I accidentally discovered that the Tron Legacy style trained diffusion model can make some truly epic space gardens. #stablediffusion #AIart
[4 images]
10
53
432
@fofrAI
fofr
13 days
Experimenting with consistent characters at different angles using InstantID, IPAdapters and RealVisXL
[4 images]
15
36
453
@fofrAI
fofr
1 year
This is an exceptional post on how to use Stable Diffusion (via a1111) with ControlNet to do image restoration. (It’s an involved process but has stellar results)
10
48
432
@fofrAI
fofr
5 months
Moments later.
28
74
414
@fofrAI
fofr
1 year
A new Midjourney workflow I’m really enjoying: 1. Take a photo 2. Use /describe 3. Append a subject to the very beginning of the prompt given by /describe (like a robot, or a cityscape)
[4 images]
18
43
392
@fofrAI
fofr
2 months
Turn yourself into an emoji. I've added my emoji lora to the "face-to-many" model on Replicate.
[2 images]
14
41
396
@fofrAI
fofr
1 year
Taco Bell (Requested by @pitdesi )
[2 images]
11
23
387
@fofrAI
fofr
1 year
You can ask ChatGPT to draw you SVGs, and with the API you can instantly render them. It's like a cute lo-fi image generator. Oh and it can do some popular logos too.
13
59
366
@fofrAI
fofr
2 years
I took a photo of a coffee shop interior, and asked for an isometric vector view from #midjourney v4 This is so powerful.
[3 images]
16
43
359
@fofrAI
fofr
1 year
A quick experiment with @Photoshop ’s new generative fill to combine two Midjourney creations. Prompt: "a space port" Scary how fast and easy this was.
13
38
354
@fofrAI
fofr
10 months
🧵 I used image-to-video AI via Pika Labs to bring famous album covers to life. Nirvana – Nevermind
28
61
344
@fofrAI
fofr
1 year
[2 images]
1
22
335
@fofrAI
fofr
5 months
If you've got the memory, you can make HD videos with SVD, using larger images. This example is 1408 × 768 (bigger than 720p) straight out of SVD – no upscaling (but I did add interpolation). Set the number of frames to decode at a time to 1 (decoding_t) to avoid OOM. (1/2)
25
24
324
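The decoding_t tip above is a standard chunking trade-off: decode latent frames a few at a time instead of all at once, so peak memory scales with the chunk size rather than the total frame count. A minimal sketch of the idea, with a toy stand-in for the decoder (not SVD's actual VAE):

```python
def decode_frames(latents, decode, chunk_size=1):
    """Decode latent frames in chunks of `chunk_size`.

    Peak memory is proportional to `chunk_size`, not `len(latents)`,
    which is why decoding one frame at a time avoids OOM on big images.
    """
    frames = []
    for i in range(0, len(latents), chunk_size):
        frames.extend(decode(latents[i:i + chunk_size]))
    return frames

# Toy stand-in for the VAE decoder: "decodes" by doubling each latent.
fake_decode = lambda chunk: [x * 2 for x in chunk]

frames = decode_frames([1, 2, 3, 4, 5], fake_decode, chunk_size=1)
# frames == [2, 4, 6, 8, 10], same result as decoding all at once
```

Smaller chunks cost a little time (more decoder calls) in exchange for a much lower memory peak; the output is identical.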
@fofrAI
fofr
9 months
Here it is💥! A video guide to fine-tuning SDXL using Replicate. - takes ~10 minutes - needs 5 to 30 training images - makes awesome results Video covers: - picking training images - starting training - using model - prompting tips - all training settings Details in 🧵
10
55
312
@fofrAI
fofr
1 year
Trying out typography and elaborate letterforms with #midjourney v4, and getting some luck with "letter ..." prompts. These images spell out my handle, `fofr`.
[4 images]
18
45
306
@fofrAI
fofr
1 month
You can now generate beautifully transparent SDXL images on Replicate, which I'm guessing is using LayerDiffusion: Prompt: "a photo of a glass bottle"
[2 images]
3
38
305
@fofrAI
fofr
1 year
Microsoft
[3 images]
5
3
295
@fofrAI
fofr
1 year
The accuracy of these hands is impressive. Midjourney v5.1
[image]
14
33
298
@fofrAI
fofr
1 year
Now I'm getting the first ChatGPT to generate prompts for Midjourney. A second ChatGPT evaluates the quality of those prompts and asks for changes. I give them a starting subject.
[3 images]
11
22
298
@fofrAI
fofr
1 year
Pizza Hut (Requested by @spenge_blorb )
[2 images]
5
26
292
@fofrAI
fofr
1 year
I asked ChatGPT how someone from the 15th century might explain an SLR camera. Then I put the response into Midjourney. "Tis a large and complex contraption"
[image]
10
24
289
@fofrAI
fofr
1 year
Red Bull (Requested by @MenciaBQF ) This one was really difficult.
[2 images]
2
16
285
@fofrAI
fofr
13 days
I've pushed the video-morpher model to Replicate, based on @ipivDev ’s workflow with an added ability to apply a style to the whole video. - 16:9, 4:3, 3:2, 1:1 aspects - preview in 20s, then upscale & interpolate - pick a checkpoint (3d, realistic, etc) Links below 👇
10
43
295
@fofrAI
fofr
1 year
New ControlNet v1.1 Some things to get excited about in here: - openpose with face and hands - "shuffle", which takes an image and a prompt and essentially creates variations ("The model is trained to reorganize images") - tiling, ie upscale using SD by…
[3 images]
3
50
276
@fofrAI
fofr
6 months
Upscaled to 1080p and interpolated to 24fps, and this AI generation looks like actual video game footage (albeit with time paused). 🤯
23
31
276
@fofrAI
fofr
1 month
This is a neat deepfake detection technique
@thatguybg
brett goldstein
1 month
Intel's FakeCatcher uses a digital version of photoplethysmography (rPPG) to detect heart flow. This method works by detecting the volume changes in blood vessels, analyzing colour variations in the video pixels that correspond to the blood flow across the face.
[image]
10
36
271
8
30
268
@fofrAI
fofr
2 months
Generate transparent images directly with LayerDiffusion. No more background removal tools – it builds transparency into the diffusion process. It's only on A1111 at the moment: I need this in ComfyUI! This is the proper way.
10
36
267
@fofrAI
fofr
1 year
The Avengers
[3 images]
3
2
258
@fofrAI
fofr
1 year
Trying out MJ v5.1, the coherency on this is fantastic, all those background faces. #midjourneyv51
[image]
7
14
249
@fofrAI
fofr
5 months
This looks interesting – like AnimateAnyone and MagicAnimate, but with prompting and face transfer. DreaMoving: reference image + pose sequence + prompt
4
50
252
@fofrAI
fofr
7 months
But can Dalle 3 do this?
[image]
28
17
250
@fofrAI
fofr
1 year
An attempt to turn Escher’s Relativity image into a photo using ControlNet and the canny model.
[2 images]
9
38
241
@fofrAI
fofr
6 months
Bringing back this cursed classic. Mannequin challenge variant. Compare Stable Video Diffusion with Pika Labs. Coherency is 🤯
9
20
239
@fofrAI
fofr
7 months
Prompt: a wide angle photo of a television from a side angle, on the screen a lone news reader is looking at the camera, a CNN banner reports “Breaking news: We are not alone”, there is poor reception, it is a normal living room, a bit messy, day time, there is a window
[image]
16
20
234
@fofrAI
fofr
1 year
A comparison of the latest controlnet depth map pre-processors in Automatic1111 stable-diffusion-webui: 1. Leres 2. Leres++ 3. Midas (original) 4. Zoe Leres++ is 🔥
[4 images]
8
36
236
@fofrAI
fofr
2 months
A small thread of interesting things you can do with my become-image Replicate model: 1. You can use animated inputs to reimagine them as real world people, with all of their exaggerated features
[image]
9
28
237
@fofrAI
fofr
6 months
I'm blown away by the lip-sync quality of the VideoReTalking model. - created an image with my SDXL fine-tune - passed img to @runwayml Gen2 to animate - used video and VideoReTalking model to lip-sync:
@fofrAI
fofr
6 months
I trained SDXL on the slightly weird looking people from Toy Story (1995). Inspired by @skirano ’s really good Toy Story tune. Just more weird.
[2 images]
10
10
127
6
30
235
@fofrAI
fofr
1 year
Midjourney can make some gorgeous gradients. Hello new iPhone wallpapers.
[4 images]
8
13
230
@fofrAI
fofr
5 months
Could you distinguish this #MidjourneyV6 image from a Cars 4 promo shot?
[image]
15
12
229
@fofrAI
fofr
1 year
Premier League, I love how this one came out. Great idea @LowKeyTweep 🦁⚽️
[2 images]
9
15
229
@fofrAI
fofr
10 months
Trying out the Runway Gen2 image to video feature – it's taken a lot of attempts to get a video with good motion. Amazing when it works. Image in thread.
14
22
228
@fofrAI
fofr
4 months
I've been working on a new model on Replicate that lets you run any ComfyUI workflow with an API. It supports all the popular controlnets, base weights, preprocessors, photomaker, animatediff, LCM, upscalers, IPAdapters. Details in 🧵
10
32
225
@fofrAI
fofr
5 months
For anyone looking for an open-source video upscaler, @lucataco93 has added the Real-ESRGAN Video Upscaler to Replicate: Upscale to Full HD, 2k or 4k
5
41
217
@fofrAI
fofr
30 days
10 million. Dang. If I had a dollar for every 😅
[image]
12
4
212
@fofrAI
fofr
1 year
Michelin (Requested by @chris_carlsson )
[2 images]
6
9
210
@fofrAI
fofr
4 months
Having 8x upscaling on Midjourney outputs with the newest @Magnific_AI is epic. It's like magic✨
10
22
212
@fofrAI
fofr
1 year
I'm in love with this world.
[4 images]
@fofrAI
fofr
1 year
Exploration #2
[4 images]
5
7
95
6
18
205
@fofrAI
fofr
6 months
I've optimised my latent consistency model on Replicate to make it 3x faster: text to image, 4 steps at 768x768: - was: 2s - now: 0.6s
6
21
206
@fofrAI
fofr
1 year
These all use the Midjourney describe feature, which attempts to describe the image you give it as a prompt.
[image]
9
5
202
@fofrAI
fofr
4 months
Image to video model "i2vgen-xl" is now on Replicate – make 16:9 videos at 1280x704 resolution: Example (which I also interpolated): It also seems to work well up to 48 frames.
9
25
202
@fofrAI
fofr
2 months
it's a me
[image]
10
12
205
@fofrAI
fofr
7 months
New blog post up – Generate images in one second on your Mac using a latent consistency model. 1 second. Locally. 🤯
7
34
201
@fofrAI
fofr
15 days
I've updated the any-comfyui-workflow Replicate model to support: - KJNodes - Frame interpolation - More AnimateDiff LCM weights You can now run the excellent img2vid morph workflow by @ipivDev – I might turn that one into a dedicated model. Links below 👇
11
26
203
@fofrAI
fofr
1 year
If you're interested in AI art, and doing more complex stuff with things like Stable Diffusion – I have a YouTube channel with some tutorials:
5
9
200
@fofrAI
fofr
1 year
Another controlnet experiment, zooming in on the same canny outline. Less weird, but I wanted to see how stable it could be. It gets odd at the end. I love the bit where she seems to just naturally blink. Same prompt/seed for each frame. Interpolated using @runwayml .
8
18
195