Artificial Intelligence

ByteDance’s new AI video generation model, Dreamina Seedance 2.0, comes to CapCut

Published by AIDaily Editorial Team
5 min read
Original source author: Sarah Perez

The new model in CapCut will have built-in protections against generating video from real faces or unauthorized intellectual property.


OpenAI may be dialing back its efforts in the video generation market with the shutdown of its Sora app, but ByteDance on Thursday confirmed that its new audio and video model, Dreamina Seedance 2.0, is now rolling out in its editing platform, CapCut.

ByteDance says the model allows creators to draft, edit, and sync video and audio content by using prompts, images, or reference videos.

The phased rollout will begin with CapCut users in Brazil, Indonesia, Malaysia, Mexico, the Philippines, Thailand, and Vietnam, with more markets added over time.

The news of the launch in CapCut follows a recent report that said the model’s global rollout would be paused while the company worked to address intellectual property issues that drew criticism from Hollywood over alleged copyright infringement. That likely explains the limited number of markets where the model is currently available within CapCut.

In China, the model is available to users of ByteDance’s Jianying app.

The video generation model can work without reference images, even when the creator uses only a few words to describe the scene they have in mind, ByteDance says in its announcement. The model is also good at rendering realistic textures, movement, and lighting across a range of visual perspectives and angles, which the company notes could be used to edit, enhance, or correct creators’ own footage.

Another use case would be allowing creators to test potential ideas based on early concepts or sketches before filming the real video.

In addition, Dreamina Seedance 2.0 can be used for a wide range of content, including cooking recipes, fitness tutorials, business or product overviews, and videos with motion or action-focused content, where AI video models have historically faced challenges, the company explains.

At launch, the model supports clips of up to 15 seconds long across six aspect ratios.

In CapCut, the model will roll out across different areas, including editing features such as AI Video and generation tools like Video Studio. It will also come to ByteDance’s AI generation platform, Dreamina, and its marketing platform, Pippit.

Given its ability to create realistic content, ByteDance says it has added safety restrictions: the model won’t be able to make videos from images or videos that contain real faces, and CapCut will block unauthorized generation of intellectual property. (That said, if the restrictions were fully working, the model would presumably be available in the United States by now; more tweaks are likely still being made.)

The content produced by Dreamina Seedance 2.0 will also include an invisible watermark, which will help identify content made with the model when it’s shared off-platform, ByteDance added. This could aid in things like takedown requests from rights holders in the event that the model lets copyrighted content through.

ByteDance says it will partner with experts and creative communities as the model rolls out, iterating on and improving its capabilities.



Original source:

TechCrunch AI
