@1littlecoder
1LittleCoder💻
1 day
🔥 China is cooking - 15 seconds with Native Audio! Alibaba's Wan 2.6 is here! ✅ Video duration from 3 to 15 seconds ✅ Video resolution of 480p, 720p, or 1080p ✅ Intelligent prompt rewriting
@fal
fal
1 day
🚀 Wan 2.6 is now live on fal! • Text-to-Video & Image-to-Video up to 1080p • Up to 15-second generations • Multi-shot video with intelligent scene segmentation • Import your own audio • Reference-to-Video: use 1-3 reference videos for character/object consistency
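For anyone wanting to try the announced parameters via fal's Python client, here is a minimal sketch. The model id `fal-ai/wan-2.6/text-to-video` and the argument names are assumptions based on the features listed above, not confirmed endpoint details; only the duration (3-15 s) and resolution (480p/720p/1080p) ranges come from the announcement.

```python
def build_wan_request(prompt, duration_s=5, resolution="720p"):
    """Build an arguments payload for a (hypothetical) Wan 2.6 fal endpoint.

    Parameter names are guesses; the value ranges match the announced
    3-15 second duration and 480p/720p/1080p resolution options.
    """
    assert 3 <= duration_s <= 15, "Wan 2.6 supports 3 to 15 second clips"
    assert resolution in {"480p", "720p", "1080p"}, "unsupported resolution"
    return {
        "prompt": prompt,
        "duration": duration_s,
        "resolution": resolution,
    }

# With fal's Python client, the (assumed) call would look roughly like:
#   import fal_client
#   result = fal_client.subscribe(
#       "fal-ai/wan-2.6/text-to-video",          # hypothetical model id
#       arguments=build_wan_request("a cat surfing at sunset", 15, "1080p"),
#   )

payload = build_wan_request("a cat surfing at sunset", duration_s=15, resolution="1080p")
print(payload)
```

The network call is commented out on purpose; check fal's model page for the real model id and schema before running it.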
10
25
312

Replies

@1littlecoder
1LittleCoder💻
23 hours
ASMR 👀
@1littlecoder
1LittleCoder💻
23 hours
New ASMR👑 dropped - Wan 2.6 Text-to-Video! Prompt 👇🏽
0
0
2
@StavZilber
Stav Zilbershtein
23 hours
@1littlecoder how's the lipsync
1
0
0
@1littlecoder
1LittleCoder💻
23 hours
@StavZilber wouldn't be the best imho!
1
0
0
@hckinz
Jay Sensei👾
1 day
@1littlecoder @fal is fast 👍🏻 even @Alibaba_Wan has not tweeted about Wan 2.6 yet. BTW, I don't understand the reference-to-video input in this model. You input two reference videos and in the prompt ask for the characters from V1 and V2 dancing in a studio. I mean, why not just input two images and instruct?
1
0
0
@1littlecoder
1LittleCoder💻
24 hours
0
0
0
@peter6759
zdhpeter
1 day
@1littlecoder you are quicker than higgsfield!
1
0
1
@1littlecoder
1LittleCoder💻
1 day
@peter6759 The team 🔥
0
0
1
@iamsikora
SIKORA
19 hours
@1littlecoder Looks stiff and bland.
0
0
0
@galaxyai__
Galaxy.ai
1 day
@1littlecoder Wan brings the visuals, https://t.co/BlDQN5hK75 brings the execution engine behind the scenes 🎬📈
0
0
0
@codewithimanshu
Himanshu Kumar
23 hours
@1littlecoder Video duration increased, now reaching fifteen seconds maximum.
0
0
0
@BeauchampG78318
Gregg Beauchamp
22 hours
@1littlecoder Doesn't look good at all.
0
0
0
@CryptoDeFi2048
Crypto DeFi
22 hours
@1littlecoder it's closed source, it means nothing
0
0
0
@bettercallsalva
Thiago Salvador
9 hours
@1littlecoder kinda feels like we skipped the “slow demo” era and went straight to “music video trailer for the future.” do you think this pace helps real adoption or just burns people out with constant fomo drops?
0
0
0
@bfl_ml
Black Forest Labs
20 hours
FLUX.2 [max] is here. Our highest-quality model to date. * Grounded generation - searches the web for real-time context. * Up to 10 reference images. Products, characters, styles stay consistent. * #2 on @ArtificialAnlys in text-to-image and image editing.
37
112
809
@TencentHunyuan
Tencent HY
6 hours
🚀🚀🚀Introducing HY World 1.5 (WorldPlay)! We have now open-sourced the most systemized, comprehensive real-time world model framework in the industry. In HY World 1.5, we develop WorldPlay, a streaming video diffusion model that enables real-time, interactive world modeling
18
94
546
@0xDevShah
Dev Shah
2 days
This is the DeepSeek moment for Voice AI. Today we’re releasing Chatterbox Turbo — our state-of-the-art MIT licensed voice model that beats ElevenLabs Turbo and Cartesia Sonic 3! We’re finally removing the trade-offs that have held voice AI back. Fast models sound robotic.
135
337
3K
@MetaNewsroom
Meta Newsroom
18 hours
Introducing SAM Audio: the first unified AI model that allows you to isolate and edit sound from complex audio mixtures. This could mean isolating the guitar in a video of your band, filtering out traffic noises, or removing the sound of a dog barking in your podcast, all with
29
247
1K