
Omer Bar Tal
@omerbartal
Followers
3K
Following
5K
Media
33
Statuses
344
Research lead @runwayml, previously @pika_labs @Google @WeizmannScience
New York, NY
Joined March 2016
I’m excited to join Runway! There’s a lot to explore at the edges of media and creativity, and it’s a great time to rethink storytelling, which is core to Runway’s mission. I’ll be moving to NYC as well - if you wanna grab coffee, text me :)
42
6
286
Being able to produce something this cinematic end-to-end on a laptop for the price of an Uber ride is insane.
4
1
44
RT @runwayml: Runway Aleph is a new way to edit, transform and generate video. Its ability to perform a wide range of generalized tasks mea….
0
70
0
RT @runwayml: Runway Aleph can seamlessly alter environments, characters and moods while still maintaining the original motion of your input….
0
69
0
RT @runwayml: Runway Aleph can accurately remove subjects from your scene. Even with complex motion, lighting conditions and passing foregr….
0
74
0
I think one of the most important parts of working on foundation models is conviction. Once you truly believe something should work, it usually does. A great example is text-to-video: the biggest insight (credit to Sora) was simply training video models properly at scale.
I found an internal presentation from 2018 that could've been made yesterday. This was by far the hardest time to build Runway. No one really cared about or believed in what generative models could do for art and media. We were describing a future that didn't make sense to anyone.
2
1
29
RT @runwayml: Runway Aleph can precisely replace, retexture or entirely reimagine specific parts of a video, making it possible to rapidly….
0
121
0
Might look like a dumb comparison, but it’s actually a perfect example of why ChatGPT dominates the consumer use case. Nobody cares about irrelevant benchmarks, and it feels like no one’s even trying to compete.
1
0
8
RT @runwayml: Runway Aleph can precisely control specific parts of a video, including manipulating ambient, atmospheric and directional lig….
0
123
0
Aleph isn’t just multitask, it’s task-agnostic by design. No disentanglement, no hacky pipelines. Just a true generalist. So bullish on our approach 👌.
Something we've believed for a long time is that workflows are infinite if the model learns the right way. Aleph is a single in-context model that can solve many workflows at inference time. A multi-task approach that doesn't require any specialized UI. Workflows can adapt to
1
6
54
RT @c_valenzuelab: Really nice demo of what @runwayml Aleph can do for complex changes in environments while adding accurate dynamic elemen….
0
158
0
RT @iamneubert: 🏛️ Returning The Temple of Dendur to the desert. Videos like this can now be generated in two sentences by using Aleph….
0
35
0
RT @c_valenzuelab: Infinite camera coverage on demand is here. Generating completely new camera angles while retaining the action and motio….
0
21
0
Editing semitransparent effects in video usually requires layering and precise integration. Inpainting alone isn’t enough as content behind the mask shifts. We found Aleph can handle this zero-shot. No task-specific training. Truly a generalist model. Can’t wait to see what else
5
10
124
Bonus 💡: Aleph א is the first letter in the Hebrew alphabet, and indeed -- this is only the beginning!
2
2
20
Excited to introduce *Runway Aleph* -- the first general-purpose visual generation model. We designed a model that can be conditioned on input videos/images/text, and be intuitively used for ANY kind of editing task! Huge shoutout to my insanely talented team!
Introducing Runway Aleph, a new way to edit, transform and generate video. Aleph is a state-of-the-art in-context video model, setting a new frontier for multi-task visual generation, with the ability to perform a wide range of edits on an input video such as adding, removing
10
12
174
Launch season just started 🔥.
Introducing Act-Two, our next-generation motion capture model with major improvements in generation quality and support for head, face, body and hand tracking. Act-Two only requires a driving performance video and reference character. Available now to all our Enterprise
3
1
49