A Year of AI-Generated Shorts
In February 2025, YouTube upgraded its Dream Screen feature with Google DeepMind's Veo 2 video model, letting creators generate standalone video clips from a text prompt directly inside the Shorts camera - not just the AI backgrounds the original feature produced, but full clips that can be woven into any Short. The feature launched in the US, Canada, Australia, and New Zealand, with expansion to other regions following throughout 2025. A year later, Dream Screen has quietly become one of the most consequential product changes YouTube has made in years - and the creator community is still debating whether that is a good thing.
The adoption appears genuine, even though YouTube has not released view counts broken down by AI-generated content: Dream Screen usage has grown steadily, particularly among educational and how-to creators who previously relied on expensive stock footage or elaborate production setups. Generated clips carry SynthID watermarks, and YouTube's policies require disclosure of AI-generated content - a requirement that has become increasingly relevant as AI-assisted Shorts have become difficult to distinguish from traditionally produced content.
The Creator Divide
The creator response splits cleanly along production budget lines. Large creators with established production workflows and audiences built on personal authenticity have been resistant - the concern is less about AI threatening their own channels than about raising the noise floor for everyone. Mid-tier educational creators - the segment that arguably delivers the most value per view on YouTube - have embraced the feature enthusiastically. For a science educator explaining cellular respiration or a history channel visualising ancient Rome, AI-generated B-roll removes a production bottleneck that previously required either a significant budget or creative compromise.
The Monetisation and Policy Question
YouTube's current policy allows AI-assisted content in the YouTube Partner Programme provided disclosure requirements are met. That policy has held for a year, but pressure from advertisers concerned about brand safety alongside AI-generated content has not abated. Since July 2025, YouTube has restricted mass-produced, low-value AI content farms - a category of channel that was already problematic before generative AI made content production orders of magnitude cheaper. The more interesting policy question is where the line sits between "AI-assisted" and "AI-generated" for monetisation purposes - a distinction that is becoming increasingly difficult to enforce as AI integration deepens across the creation workflow.