OpenAI just introduced Sora: a new text2video #AI model.
While everyone is just posting mind-blowing examples, I want to talk about the consequences.
A thread 🧵🔽
Not #CGI: observable with the naked eye! The 3D space reacts to my eye's position.
Let's leverage some #AI:
1) Image generated by Stable Diffusion SDXL
2) Depth map by ZoeDepth
3) Now it runs in MetaSpark, which makes it possible to run it as a social augmented-reality filter and
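The core of the depth-map step can be sketched in a few lines (a minimal illustration, not the actual MetaSpark shader; the `strength` value and the [0, 1] depth convention are assumptions): each pixel is shifted horizontally in proportion to its depth and the viewer's eye offset, which is what produces the parallax.

```python
import numpy as np

def parallax_shift(depth, eye_offset_x, strength=0.05):
    """Per-pixel horizontal UV shift for depth-based parallax.

    depth: HxW array in [0, 1], where 0 = far and 1 = near.
    eye_offset_x: normalized horizontal eye offset from screen center.
    Returns the shift to add to each pixel's U texture coordinate.
    """
    # Near pixels (high depth value) move more than far ones,
    # which is exactly what a real 3D scene does under parallax.
    return strength * eye_offset_x * depth

# Three pixels: far, middle, near.
depth = np.array([[0.0, 0.5, 1.0]])
shift = parallax_shift(depth, eye_offset_x=1.0)
```

The nearest pixel gets the full shift, the farthest none, so flat image + depth map reads as a 3D scene.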
7. In conclusion, I want to say that you, my dear readers, are among the ~0.01% of people who have a clear understanding of these technologies, their capabilities, and the future.
Deceiving everyone else is a piece of cake.
Use this knowledge for good while you still can. Tell
Apple fans are starting to return their Vision Pros.
>open the article
>based on a couple of tweets and one Reddit post
>senior reporter
Absolutely degenerate journalism
3. Creating a completely non-existent news topic or "properly" heating up the right one will become even easier.
People trust videos. Whip up videos from various "angles," post them, and bots powered by something like ChatGPT will comment, discuss, and retweet.
We live in a world where
1. Clearly, this isn't the end of the road—generation quality and consistency (the ability to maintain story, style, and details) will get better, and costs will go down.
Videos today are already hard to distinguish from reality, and soon it'll be outright impossible, even with
AI spiral as a 3D parallax illusion. The video is not edited and not AR tracking: you can see the illusion with the naked eye.
It works in real-time by adjusting the 3D scene on screen, based on phone and head position.
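In spirit, the head-tracking loop above boils down to two small steps (a sketch under assumptions — the real effect's smoothing constant and coordinate units are hypothetical): smooth the raw tracking sample so the camera doesn't jitter, then offset the virtual camera by the tracked head offset.

```python
import numpy as np

def smooth_head_position(prev, measured, alpha=0.3):
    """Exponential smoothing of the raw head-tracking sample so the
    virtual camera doesn't jitter frame to frame (alpha is a guess)."""
    prev = np.asarray(prev, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return (1 - alpha) * prev + alpha * measured

def virtual_camera_offset(head_pos, screen_center, scale=1.0):
    # The virtual camera follows the viewer's eye: it is offset from
    # the screen center by the (optionally scaled) head offset, so the
    # on-screen 3D scene shifts the way a real window view would.
    return scale * (np.asarray(head_pos, dtype=float) -
                    np.asarray(screen_center, dtype=float))
```

With `scale > 1` the parallax is exaggerated, which makes the illusion read more strongly on a small phone screen.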
5. As for flesh videographers, directors, and other filmmakers, those who say "AI won't replace you—a person with AI will" are only half right.
This is a transitional period that won't last very long. It's hard to predict, but I give this industry 10, maybe 15 years at most.
A
Real-time and mobile #AR! Just updated it to work on any surface: floor, walls, and ceiling! Also improved how real-world texture is mapped onto rocks. Wanna try?
2. Right now, pics of three-fingered kids and six-legged cats as "war victims" are racking up hundreds of thousands of retweets on X.
People are overlooking even such obvious flaws, not to mention the more sophisticated examples above.
And that's despite X having Community Notes,
New #AR stuff I'm working on: isolated AR portals. There are outer and inner spaces that are entirely isolated, yet we can seamlessly go from one to another.
AI generates decent textures WITHOUT lighting: problem solved.
As an artist, I'll finally be able to texture objects in a way that allows them to be relit 👀
Project links below ⬇️
4. Don't count on OpenAI being a force for good by imposing censorship and preventing the generation of deepfakes and other disinformation.
Well, they might, but they still don't possess unique and irreplaceable knowledge.
Other researchers and developers with similar solutions are
My #AI / #AR experience won 2nd place in the last Snap Hackathon! It uses machine learning to detect objects and turn them into cartoon characters. That's how it's made: 🧵👇
6. The entertainment industry will change—endless, personalized, interactive multiplayer series on any topic will likely emerge within a few years.
Personalized porn, naturally, will be a thing too (I know you thought of it).
Not #CGI: You can see this illusion with the naked eye! 👀
The 3D space reacts to my head movements. Watch until the end to see how the illusion breaks when I stop moving my head and only move the camera.
#AR
Just FYI, I made Minecraft inside an Instagram filter: infinite digging, flowing water, more than 50 types of blocks, and even a save & load feature. Hey
@Minecraft
what do you think?
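The "infinite digging" plus "save & load" combo suggests a sparse voxel store, which can be sketched like this (a hypothetical reconstruction — the actual filter is written against Spark's scripting API, not Python; class and method names are mine):

```python
import json

class VoxelWorld:
    """Sparse voxel grid: only placed blocks are stored, so the world
    is effectively unbounded -- "infinite digging" for free."""

    def __init__(self):
        self.blocks = {}  # (x, y, z) -> block type id

    def place(self, x, y, z, block_type):
        self.blocks[(x, y, z)] = block_type

    def dig(self, x, y, z):
        # Removing a key is all "digging" means in a sparse grid.
        self.blocks.pop((x, y, z), None)

    def save(self):
        # Tuples aren't valid JSON keys, so flatten them to "x,y,z" strings.
        return json.dumps(
            {f"{x},{y},{z}": t for (x, y, z), t in self.blocks.items()}
        )

    @classmethod
    def load(cls, data):
        world = cls()
        for key, t in json.loads(data).items():
            x, y, z = map(int, key.split(","))
            world.blocks[(x, y, z)] = t
        return world
```

A save/load round trip preserves exactly the edited blocks and nothing else, which keeps the persisted state tiny.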
Unity requires a Unity Pro license for visionOS XR support; also, if you try to install the packages anyway, Unity will make sure to remove them on initialization. Trust me, I just tried it 😂
The cool thing is, you have 30 days to try Unity Pro which should give you time to evaluate the
A thread about how I've made this cyber-fashion AR garment with cinema-level VFX for the @lenslist × @SnapAR Fashion Contest. (actually my first clothes design)
Even though it looks cool, no one will actually want this visual garbage persistently floating in front of their eyes. That's why such concepts will never become real.
This is a real location-based AR demo running inside real AR glasses
A glimpse into a future where our lives are more like the games and movies we love
That's how the illusion runs inside MetaSpark: here I manually move the virtual camera (in reality it's connected to the user's eye position).
The main difference from #ARKit is that in Spark I have to manually modify the camera view and projection matrices in shaders.
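Manually building that projection usually means an off-axis (asymmetric) frustum, glFrustum-style. Here is a minimal numpy sketch of the idea (an illustration of the standard technique, not the author's Spark shader; parameter names and units are assumptions): the frustum edges are shifted opposite to the eye offset, then projected onto the near plane.

```python
import numpy as np

def off_axis_projection(eye, half_w, half_h, screen_dist, near, far):
    """Asymmetric-frustum projection for an eye offset from the screen center.

    eye: (x, y) eye offset from the screen center, in screen units.
    half_w, half_h: half-extents of the virtual screen plane.
    screen_dist: distance from the eye to the screen plane.
    """
    ex, ey = eye
    # Shift the frustum edges opposite to the eye offset, then scale
    # them onto the near plane (similar triangles: near / screen_dist).
    s = near / screen_dist
    l, r = (-half_w - ex) * s, (half_w - ex) * s
    b, t = (-half_h - ey) * s, (half_h - ey) * s
    # Standard glFrustum-style matrix built from the asymmetric bounds.
    return np.array([
        [2 * near / (r - l), 0, (r + l) / (r - l), 0],
        [0, 2 * near / (t - b), (t + b) / (t - b), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0],
    ])
```

With the eye centered this collapses to an ordinary symmetric projection; as the eye moves, the nonzero skew terms in column 3 are what make the scene appear fixed behind the screen.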
iPhone 15 + External Display + Controller = new console?
Thanks to USB-C and a new GPU with raytracing and MetalFX Upscaling, the new iPhone will run AAA games such as Death Stranding, Resident Evil 4, and Assassin's Creed Mirage. Thoughts?
8. That's a wrap.
Follow @Enuriru for more expert insights about AI and AR. I don't post bullshit like "how to create an AI influencer in two minutes".
Retweet the original tweet if you like it.
The only reasonable hardware AI assistant is Meta + Ray-Ban smart glasses.
Glasses are a wearable device that you can actually wear for the whole day. Especially with transition lenses (transparent indoors and dark outdoors).
Rabbit is basically a cheap phone without most
My #AR portal implementation merges two isolated 3D scenes with a 100% seamless passing experience. No switching. No transitions.
Portals are double-sided: if there is something in front of or behind the portal, you can see it from the corresponding side.
Multiple portals supported.
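The seamless-passing logic above reduces to a signed-distance side test per frame (a sketch of the general technique, not the actual implementation; function names are mine): the moment the camera's side of the portal plane flips, the two scenes swap roles.

```python
import numpy as np

def portal_side(point, portal_origin, portal_normal):
    """+1 if the point is on the front side of the portal plane,
    -1 if behind, 0 if exactly on it (signed-distance sign test)."""
    d = np.dot(np.asarray(point, dtype=float) -
               np.asarray(portal_origin, dtype=float), portal_normal)
    return int(np.sign(d))

def crossed_portal(prev_pos, cur_pos, portal_origin, portal_normal):
    # The camera passed through the portal when its side flipped
    # between consecutive frames -- the instant the scenes are swapped,
    # so no visible switch or transition is needed.
    a = portal_side(prev_pos, portal_origin, portal_normal)
    b = portal_side(cur_pos, portal_origin, portal_normal)
    return a != 0 and b != 0 and a != b
```

The same side test also handles the double-sided rendering: which scene is drawn through the portal quad depends only on the sign for the current camera position.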
@_akhaliq they've already implemented a simplified version of it in the TikTok app (in closed beta for AR developers). It works in real-time. I made a simple world effect with it:
Made real-world #AR collisions in Meta Spark. Unlike Snap Lens Studio, Spark is incapable of doing point cloud and mesh reconstruction: it can only detect planes. So I check collisions with planes manually. The physics is custom too.
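A manual plane check like the one described is essentially a sphere-vs-plane test with push-out (a minimal sketch of the standard technique, not the actual Spark script; names and the unit-normal assumption are mine):

```python
import numpy as np

def sphere_plane_collision(center, radius, plane_point, plane_normal):
    """Test a sphere against a detected AR plane.

    plane_normal is assumed to be unit length.
    Returns (hit, corrected_center): if the sphere penetrates the
    plane, the center is pushed back out along the plane normal.
    """
    n = np.asarray(plane_normal, dtype=float)
    c = np.asarray(center, dtype=float)
    p = np.asarray(plane_point, dtype=float)
    # Signed distance from the sphere center to the plane.
    dist = np.dot(c - p, n)
    if dist >= radius:
        return False, c  # no contact
    # Penetrating: resolve by moving the sphere out along the normal.
    return True, c + (radius - dist) * n
```

Running this against every detected plane each frame is enough for simple custom physics: objects rest on the floor plane and bounce off wall planes without any mesh reconstruction.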