
Anastasis Germanidis (@agermanidis)
Co-Founder & CTO @runwayml
New York, NY · Joined May 2011
4K Followers · 3K Following · 37 Media · 486 Statuses
RT @RunwayMLDevs: Starting today, we're announcing an 84-hour open challenge for the most interesting app built with the Runway API. The wi….
RT @c_valenzuelab: We are beyond the tipping point. AI is becoming the foundational layer upon which most creative work will be built. Rig….
Great work by @graceluo_ @jongranskog training diffusion models to be aligned with VLM feedback in minutes, which can be used to improve commonsense reasoning and enable many kinds of visual prompting.
✨New preprint: Dual-Process Image Generation! We distill *feedback from a VLM* into *feed-forward image generation*, at inference time. The result is flexible control: parameterize tasks as multimodal inputs, visually inspect the images with the VLM, and update the generator.🧵
Expanding the Gen-4 API with generalist image capabilities:
Earlier this month, we released Gen-4 References, our most general and flexible image generation model yet. It became one of our most popular releases ever, with new use cases and workflows being discovered every minute. Today, we’re making it available via the Runway API,
RT @c_valenzuelab: CVPR returns this June. Join us Thursday June 12th for our annual CVPR Friends Dinner. RSVP at the link below. https://t….
RT @TomLikesRobots: I'm having more fun with @runwayml's Gen-4 References than I've had in a while with an AI model. This evening I started….
References has been the biggest demonstration so far for me that if you focus on the problems that you really need to solve, rather than the problems that feel most solvable, deep learning will reward you for it.
Today we are releasing Gen-4 References to all paid plans. Now anyone can generate consistent characters, locations and more. With References, you can use photos, generated images, 3D models or selfies to place yourself or others into any scene you can imagine. More examples
RT @runwayml: We have released early access to Gen-4 References to all Gen:48 participants. References allow you to create consistent world….
A different framing: we're in one continuous era of simulation. The only thing that changes is what's being simulated: from toy worlds, to the world as perceived by humans, to the world beyond human perception.
@dsivakumar The short paper "Welcome to the Era of Experience" was literally just released, like this week. Ultimately it will become a chapter in the book 'Designing an Intelligence', edited by George Konidaris and published by MIT Press.
RT @omerbartal: I’m excited to join Runway! There’s a lot to explore at the edges of media and creativity, and it’s a great time to rethink….
RT @mmalex: weirdly often, people ask me 'do you miss the games industry?' and my answer is always 'i miss the lovely people! and the art-c….
Gen-4 Turbo is an amazing feat of research and engineering. To give a sense of the improvement, in our internal evals its outputs were preferred ~90% of the time compared to those of non-Turbo Gen-3 Alpha.
Today we’re introducing Gen-4 Turbo. The fastest way to generate with our most powerful video model yet. With Gen-4 Turbo it now takes just 30 seconds to generate a 10 second video, making it ideal for rapid iteration and creative exploration. Now rolling out across all plans.
Welcome to the storytelling era of generative models. Incredibly proud of our team for this release. We set a very high standard internally for our next model: building the world's best model for video generation was the baseline; what I'm most excited about is the new paradigm.
Today we're introducing Gen-4, our new series of state-of-the-art AI models for media generation and world consistency. Gen-4 is a significant step forward for fidelity, dynamic motion and controllability in generative media. Gen-4 Image-to-Video is rolling out today to all paid