Alibaba’s advanced AI video suite

Alibaba’s Tongyi Lab just released Wan2.1, an open-source suite of powerful video generation models that outperform SOTA open-source and closed models such as Sora on key benchmarks — while generating videos at 2.5x the speed.

The details:

  • Wan2.1-T2V-14B tops the VBench leaderboard, excelling in areas like complex motion dynamics, real-world physics simulation, and text generation.
  • All models support text-to-video, image-to-video, and video-to-audio, and are the first able to render text in both English and Chinese.
  • Wan’s editing tools include video inpainting and outpainting, multi-image referencing, and the ability to preserve existing structures and characters.
  • The release also includes a lightweight 1.3B version that runs on consumer hardware, generating a 5-second 480p clip on an RTX 4090 in about 4 minutes.
Why it matters: Another day, another wild open-source release out of China. Wan continues the rapid quality gains seen in recent launches like Google’s Veo 2, with telltale AI signs (choppy motion, artifacts, etc.) all but eliminated. Between Qwen and Wan, Alibaba is bringing the open-source heat in 2025.
