Anastasis Germanidis Profile
Anastasis Germanidis

@agermanidis

Followers 4K · Following 3K · Media 38 · Statuses 498

Simple ideas, pursued maximally. Co-Founder & CTO @runwayml.

New York, NY
Joined May 2011
@agermanidis
Anastasis Germanidis
11 months
Towards Universal Simulation.
4
30
130
@agermanidis
Anastasis Germanidis
1 day
RT @ReedAlbergotti: I had a great interview with @runwayml CTO @agermanidis. Wrote an article but it was so interes….
0
10
0
@agermanidis
Anastasis Germanidis
2 days
Game Worlds is now available to everyone in Beta. It represents an entirely new research direction for us towards generative storytelling and world-building:
@runwayml
Runway
2 days
Today, we're launching the Runway Game Worlds Beta. Over the last few months, we have been working on research and products that are moving us closer toward a future where you will be able to explore any character, story or world in real time. While generating the pixels of
2
1
38
@agermanidis
Anastasis Germanidis
22 days
Why I think that advancing simulation is the most important thing to be working on in AI
7
13
97
@agermanidis
Anastasis Germanidis
25 days
I've learned that it takes about a year from initially setting a research direction internally to getting it to finally work. It took a year of investing in scaling laws for video models to releasing Gen-3 Alpha. It took a year of investing in a unified multi-task approach to….
2
8
89
@agermanidis
Anastasis Germanidis
29 days
Models just want to generalize. For the past few years, we’ve been pushing the frontier of controllability in video, releasing new models and techniques for inpainting, outpainting, segmentation, stylization, keyframing, motion and camera control. Aleph is a single in-context model.
@runwayml
Runway
29 days
Introducing Runway Aleph, a new way to edit, transform and generate video. Aleph is a state-of-the-art in-context video model, setting a new frontier for multi-task visual generation, with the ability to perform a wide range of edits on an input video such as adding, removing
12
32
194
@agermanidis
Anastasis Germanidis
1 month
RT @runwayml: Introducing Act-Two, our next-generation motion capture model with major improvements in generation quality and support for h….
0
361
0
@agermanidis
Anastasis Germanidis
2 months
This summer has a similar feeling as that of 2022. Something has clicked with the latest generation of models and there’s suddenly so much low hanging fruit. Back to making new prototypes every weekend.
2
2
43
@agermanidis
Anastasis Germanidis
2 months
RT @RunwayMLDevs: Starting today, we're announcing an 84-hour open challenge for the most interesting app built with the Runway API. The wi….
0
16
0
@agermanidis
Anastasis Germanidis
2 months
RT @c_valenzuelab: We are beyond the tipping point. AI is becoming the foundational layer upon which most creative work will be built. Rig….
0
49
0
@agermanidis
Anastasis Germanidis
2 months
Lucid Dream Test. Imagine the following scenario. You enter a room and you are asked to wear a VR headset that has a camera and supports a passthrough mode (which can display a real-time feed of your surroundings). Once you put on the headset, you find yourself in what appears to….
7
19
71
@agermanidis
Anastasis Germanidis
3 months
Total Pixel Space, which won the Grand Prix at this year's AIFF, is a wonderful video essay and, by the way, one of the clearest descriptions of universal simulation (as search in the space of all possible universes).
4
11
76
@agermanidis
Anastasis Germanidis
3 months
Great work by @graceluo_ @jongranskog training diffusion models to be aligned with VLM feedback in minutes, which can be used to improve commonsense reasoning and enable many kinds of visual prompting.
@graceluo_
Grace Luo
3 months
✨New preprint: Dual-Process Image Generation! We distill *feedback from a VLM* into *feed-forward image generation*, at inference time. The result is flexible control: parameterize tasks as multimodal inputs, visually inspect the images with the VLM, and update the generator.🧵
1
4
47
@agermanidis
Anastasis Germanidis
3 months
Leading the way is more fun than catching up.
6
5
59
@agermanidis
Anastasis Germanidis
3 months
RT @RunwayMLDevs: What would you build with a programmable world simulator?.
0
9
0
@agermanidis
Anastasis Germanidis
3 months
Expanding the Gen-4 API with generalist image capabilities:
@runwayml
Runway
3 months
Earlier this month, we released Gen-4 References, our most general and flexible image generation model yet. It became one of our most popular releases ever, with new use cases and workflows being discovered every minute. Today, we’re making it available via the Runway API,
0
1
16
@agermanidis
Anastasis Germanidis
3 months
As a field, we're just scratching the surface in building multimodal simulators of the universe. This will be a long journey, and the true purpose of deep learning.
3
12
80
@agermanidis
Anastasis Germanidis
4 months
RT @c_valenzuelab: CVPR returns this June. Join us Thursday June 12th for our annual CVPR Friends Dinner. RSVP at the link below. https://t….
0
9
0
@agermanidis
Anastasis Germanidis
4 months
RT @TomLikesRobots: I'm having more fun with @runwayml's Gen-4 References than I've had in a while with an AI model. This evening I started….
0
5
0
@agermanidis
Anastasis Germanidis
4 months
References has been the biggest demonstration so far for me that if you focus on the problems that you really need to solve, rather than the problems that feel most solvable, deep learning will reward you for it.
@runwayml
Runway
4 months
Today we are releasing Gen-4 References to all paid plans. Now anyone can generate consistent characters, locations and more. With References, you can use photos, generated images, 3D models or selfies to place yourself or others into any scene you can imagine. More examples
2
5
56
@agermanidis
Anastasis Germanidis
4 months
RT @runwayml: We have released early access to Gen-4 References to all Gen:48 participants. References allow you to create consistent world….
0
72
0