Dmitrii Tochilkin
@cut_pow
3K Followers · 1K Following · 50 Media · 287 Statuses
3D generation/reconstruction research. Ex-research @StabilityAI, @Google, @Yandex
Tbilisi
Joined September 2012
"A Year" AI animation artwork made in Colab using my custom stable 3D animation algorithm on top of the #stablediffusion model. In the thread I share some details about the algo and when I plan to release it, and talk about the joy and future of AI filmmaking 🎶 DakhaBrakha - Vesna
136 · 986 · 6K
Image-to-3D is almost instant now👀 Check it out in our demo
huggingface.co
Today we are releasing TripoSR in collaboration with @tripoAI. TripoSR is a new image-to-3D model capable of creating high quality outputs in less than a second. Learn more here: https://t.co/qfg5MYunOD
0 · 0 · 7
Artists whose names I used explicitly in prompts: Makoto Shinkai, Studio Ghibli, Artgerm, Beeple
3 · 1 · 40
This Anime Does Not Exist 🌚 And several hours ago the intro didn't exist either: praise AI animation! A strong generative image model is all you need (plus my algorithm for 3D animation). Still testing its capabilities: now with less jittering and blur. Would you watch this?
55 · 255 · 2K
Interview with me about AI animation & filmmaking! Steve, thanks for having me🤟
This week I interviewed Dmitrii Tochilkin for my channel and the video is up now. He's an amazing AI Animator pushing the boundaries of the technology and the art form. Who else should I interview? Drop some names please! #aiart #Animation
3 · 0 · 18
“Becoming myself” AI 3D animation with a dreamboothed #stablediffusion. No init image/video
14 · 35 · 237
@RJs_RATDREAMS @runwayml @devdef @megapunk @chris_wizard @EuclideanPlane Oh wow! I've also experimented with Josh Neuman's videos in July using DD warpfusion, but gave up on it. Didn't have @runwayml then, had to combine all AI video pipelines manually. Yours look much more stable and coherent
4 · 12 · 76
3. The last thing: I'm still exploring ways to earn from #AIArt, so if you liked my work and want to support me (so that I can keep working on AI tools), I'd be very grateful for any donation🙃 https://t.co/GdAQzDGrI1
7 · 0 · 69
2.6. Anyway, my faith in forthcoming AI filmmaking grows stronger each day. It will be big, it will be wide, and it will be lovely (very soon)
4 · 2 · 62
2.5. And I'm so happy I finished this video before the change in Colab pricing. It would cost millions now💰
4 · 1 · 55
2.4. One thing that was painful, though, was being an AI camera operator in Colab. It's total humiliation to adjust endless camera parameter numbers just to make a smooth move toward the horse cart. These tools need to grow handy interfaces. I hope UX devs get interested in #AIArt soon
3 · 1 · 76
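Those "endless camera parameter numbers" are typically keyframe schedules in the Disco-style string form `frame:(value)`. As a hypothetical sketch (the parser and the `value_at` helper are my own illustration, not code from the author's colab), per-frame values can be interpolated like this:

```python
import numpy as np

def parse_schedule(spec):
    """Parse a Disco-style keyframe string, e.g. '0:(0.0), 30:(1.5)'."""
    pairs = []
    for part in spec.split(","):
        frame, value = part.split(":")
        pairs.append((int(frame.strip()), float(value.strip().strip("()"))))
    return sorted(pairs)

def value_at(spec, frame):
    """Linearly interpolate the parameter value at a given frame index."""
    keys = parse_schedule(spec)
    frames = [k for k, _ in keys]
    values = [v for _, v in keys]
    return float(np.interp(frame, frames, values))

# A smooth push-in and back-out over 60 frames
translation_x = "0:(0.0), 30:(1.5), 60:(0.0)"
print(value_at(translation_x, 15))  # halfway to the first keyframe: 0.75
```

Tuning a camera move then means hand-editing those numeric strings and re-rendering, which is exactly the workflow the tweet complains about.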
...upon its cross -Done! Umm, there are people behind it... -OK, make them dancing girls and move forward. Of course, many prompts were strict, but I still never know exactly where objects will appear, how they'll look, etc. Openness to the AI's ideas and vision is key in this co-creation
2 · 1 · 38
2.3. I felt like an explorer going hand in hand with AI through its internal worlds. I stopped rendering many times to adjust camera moves, so the process looked like this: -OK, let's walk through an old Slavic village -There's a wooden church -Cool! Fly closer and make the sun shine...
3 · 2 · 51
2.2. Like the old version, the remake is about the cyclical change of seasons and resurrection. It ends where it starts, with the rebirth of nature; the song title 'Vesna' means spring. But this time it was much, much easier for me to express with 3D SD! And more importantly, I felt more joy
1 · 1 · 46
2.1. AI filmmaking. I'm so happy that AI tools are becoming sharper each month for people's self-expression through storytelling. My new video is a good illustration of that: almost a year ago, when I first got interested in AI art, I made a video with the same idea in DD:
3 · 11 · 145
1.7 I plan to open-source the colab next week -- there are a lot of moving pieces, so I still need to figure out the optimal parameter setup. I wanted to finish the video and tidy up the code a week ago, but my cat fell from the 5th floor, so currently I'm more of a nurse than an engineer🥲
6 · 1 · 104
1.6. Limitations: re-projecting point clouds multiple times introduces artifacts, so you need to compensate with a few diffusion steps -> slight wobbling or blurriness even in known parts. In this sense Turbo may provide smoother (and faster) animation. Will try to solve it
2 · 2 · 44
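The blur half of that limitation is easy to reproduce: each bilinear re-projection acts as a low-pass filter, so detail decays over repeated warps even with perfect depth. A toy numpy simulation (my own illustration, not the POISD code) of ten half-pixel resamples:

```python
import numpy as np

rng = np.random.default_rng(0)
frame = rng.random((64, 64))  # stand-in for a rendered frame

def resample_half_pixel(img):
    """Bilinear resample at a 0.5-pixel horizontal shift (wrapping border).

    Each such warp averages neighboring pixels -- a low-pass filter.
    """
    return 0.5 * (img + np.roll(img, 1, axis=1))

def sharpness(img):
    """High-frequency energy: variance of horizontal pixel differences."""
    return np.var(np.diff(img, axis=1))

initial = sharpness(frame)
for _ in range(10):
    frame = resample_half_pixel(frame)
loss = sharpness(frame) / initial  # well below 1: detail lost to warping
```

This is why re-noising and running a few diffusion steps per frame helps (it re-synthesizes the lost detail) at the cost of the slight wobble the tweet mentions, since the model hallucinates slightly different detail each frame.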
1.5. I think this approach may help us all hold out until real generative 3D rendering and text-to-video methods become good and public😀 And it's also a little step toward more stable, realistic, production-level AI animation
3 · 1 · 67
1.4. I haven't seen similar solutions, so I decided to name this approach POISD -- "Pointcloud Occlusion Inpainting with Stable Diffusion". I'm very proud of it: it was a harder engineering task than my earlier inpainting + stable-animation attempt for Disco
Inpainting mode in #DiscoDiffusion! I've finally built parametrised guided inpainting for Disco and applied it to more stable 2D and 3D animations. In the thread I show what's inside https://t.co/1atSmGqALE
3 · 5 · 101
1.3 -"But we already have a 3D algorithm for DD and SD!" -Yes, and it also works on top of MiDaS depth maps. The difference is that it interpolates missing info by 'warping' space, which is fine for artistic or trippy videos but not for realistic animation. Visualization:
3 · 9 · 134
1.2 It relies heavily on the quality of the depth maps and on the assumption that SD has implicit knowledge of the scene geometry in an image. It can therefore plausibly inpaint the missing parts without explicit access to the underlying 3D meshes of the scene
2 · 2 · 69
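In outline: lift the frame to a point cloud with its depth map, move the virtual camera, re-project, and let SD inpaint only the pixels no point lands on. A minimal numpy sketch of that occlusion-mask step under a pinhole camera model (function names like `unproject` are my own, not from the released colab):

```python
import numpy as np

def unproject(depth, f):
    """Lift a depth map into a camera-space point cloud (pinhole model)."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x = (xs - w / 2) * depth / f
    y = (ys - h / 2) * depth / f
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

def coverage_mask(points, f, h, w):
    """Project points into a new view; True wherever some point lands."""
    z = points[:, 2]
    keep = z > 1e-6  # drop points behind the camera
    u = np.round(points[keep, 0] * f / z[keep] + w / 2).astype(int)
    v = np.round(points[keep, 1] * f / z[keep] + h / 2).astype(int)
    mask = np.zeros((h, w), dtype=bool)
    inb = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    mask[v[inb], u[inb]] = True
    return mask

# Flat scene 10 units away; slide the camera 2 units to the right,
# which shifts every point 2 units left in the new camera's frame.
depth = np.full((64, 64), 10.0)
pts = unproject(depth, f=64.0)
pts[:, 0] -= 2.0
covered = coverage_mask(pts, f=64.0, h=64, w=64)
hole_mask = ~covered  # newly revealed pixels: hand these to SD inpainting
```

The warped pixels keep the previous frame's content, so consecutive frames stay geometrically consistent; only the `hole_mask` region is synthesized, which is what distinguishes this from the space-warping interpolation mentioned in 1.3.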