
Anastasis Germanidis
@agermanidis
Followers: 4K · Following: 3K · Media: 38 · Statuses: 498
Simple ideas, pursued maximally. Co-Founder & CTO @runwayml.
New York, NY
Joined May 2011
RT @ReedAlbergotti: I had a great interview with @runwayml CTO @agermanidis. Wrote an article but it was so interes….
0 · 10 · 0
Game Worlds is now available to everyone in Beta. It represents an entirely new research direction for us toward generative storytelling and world-building:
Today, we're launching the Runway Game Worlds Beta. Over the last few months, we have been working on research and products that are moving us closer toward a future where you will be able to explore any character, story or world in real time. While generating the pixels of
2 · 1 · 38
Models just want to generalize. Over the past few years, we’ve been pushing the frontier of controllability in video, releasing new models and techniques for inpainting, outpainting, segmentation, stylization, keyframing, motion and camera control. Aleph is a single in-context model.
Introducing Runway Aleph, a new way to edit, transform and generate video. Aleph is a state-of-the-art in-context video model, setting a new frontier for multi-task visual generation, with the ability to perform a wide range of edits on an input video such as adding, removing
12 · 32 · 194
RT @runwayml: Introducing Act-Two, our next-generation motion capture model with major improvements in generation quality and support for h….
0 · 361 · 0
RT @RunwayMLDevs: Starting today, we're announcing an 84-hour open challenge for the most interesting app built with the Runway API. The wi….
0 · 16 · 0
RT @c_valenzuelab: We are beyond the tipping point. AI is becoming the foundational layer upon which most creative work will be built. Rig….
0 · 49 · 0
Great work by @graceluo_ @jongranskog training diffusion models to be aligned with VLM feedback in minutes, which can be used to improve commonsense reasoning and enable many kinds of visual prompting.
✨New preprint: Dual-Process Image Generation! We distill *feedback from a VLM* into *feed-forward image generation*, at inference time. The result is flexible control: parameterize tasks as multimodal inputs, visually inspect the images with the VLM, and update the generator.🧵
1 · 4 · 47
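The core idea in the preprint above — score a generated image with a slow "System 2" critic (a VLM), then push that feedback into the fast feed-forward generator at inference time — can be sketched with a toy differentiable critic standing in for the VLM. All names and the setup here are illustrative assumptions, not the paper's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "generator": its parameters directly parameterize a tiny 4x4 image.
theta = rng.normal(size=(4, 4))

# Stand-in "VLM critic": rewards images close to a target pattern.
# A real system would query a vision-language model here instead.
target = np.full((4, 4), 0.5)

def critic_score(image):
    # Higher is better.
    return -np.mean((image - target) ** 2)

def critic_grad(image):
    # Analytic gradient of the toy score w.r.t. the image pixels.
    return -2.0 * (image - target) / image.size

# Inference-time adaptation: distill critic feedback into the generator
# by gradient ascent on the critic's score.
lr = 0.5
scores = []
for step in range(50):
    image = theta                              # feed-forward "generation"
    scores.append(critic_score(image))
    theta = theta + lr * critic_grad(image)    # update from critic feedback

print(f"score before: {scores[0]:.4f}, after: {scores[-1]:.4f}")
```

The point of the sketch is only the control flow: the expensive critic is queried during generation, and its gradient signal updates the cheap generator on the fly.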
Expanding the Gen-4 API with generalist image capabilities:
Earlier this month, we released Gen-4 References, our most general and flexible image generation model yet. It became one of our most popular releases ever, with new use cases and workflows being discovered every minute. Today, we’re making it available via the Runway API,
0 · 1 · 16
RT @c_valenzuelab: CVPR returns this June. Join us Thursday June 12th for our annual CVPR Friends Dinner. RSVP at the link below. https://t….
0 · 9 · 0
RT @TomLikesRobots: I'm having more fun with @runwayml's Gen-4 References than I've had in a while with an AI model. This evening I started….
0 · 5 · 0
References has been the biggest demonstration so far for me that if you focus on the problems that you really need to solve, rather than the problems that feel most solvable, deep learning will reward you for it.
Today we are releasing Gen-4 References to all paid plans. Now anyone can generate consistent characters, locations and more. With References, you can use photos, generated images, 3D models or selfies to place yourself or others into any scene you can imagine. More examples
2 · 5 · 56
RT @runwayml: We have released early access to Gen-4 References to all Gen:48 participants. References allow you to create consistent world….
0 · 72 · 0