Social media platforms are facing a sharp rise in fake videos as AI tools become more advanced and easier to use.
Since OpenAI launched its Sora video generator last year, realistic AI videos have spread rapidly online.
The latest Sora update reached 1 million downloads in under five days, allowing anyone to create convincing fake clips using only a simple text prompt.
In one recent case, a fake TikTok video appeared to show a woman discussing selling food stamps for cash. The clip even had traces of the Sora watermark, which the uploader tried to hide.
Still, thousands of viewers believed it was real and reacted angrily. Experts say this is becoming a dangerous trend, especially during sensitive political debates.
AI Misinformation: Platforms Struggle to Stop Fake AI Videos

Major social media companies, including Meta, TikTok, YouTube, and X, have introduced rules that require labels on AI-generated content.
But the rules are proving hard to enforce. Sora’s watermark can be stripped out in seconds with free online tools, leaving many fake videos with no reliable marker at all.
Key Concerns
- Watermarks can be erased within seconds
- AI videos spread faster than platforms can review them
- Many fake clips appear on Facebook without any labels
- Media outlets have mistakenly used AI videos as real footage
- Commenters often believe fake content is genuine
Human rights groups say the responsibility lies with the companies. They argue that platforms must improve moderation and develop better systems to detect and label AI videos before they go viral.
Foreign Influence Adds to Global Risks
Experts warn that AI-generated videos are already being used in foreign influence campaigns.
Researchers at Clemson University found coordinated networks spreading fake clips about the Iran–Israel conflict.
Some of these videos, including fabricated scenes of bombings and plane crashes, gained millions of views before being flagged as false.
Other tech giants also contribute to the volume. Google’s Veo tool produced over 40 million videos within weeks of release.
Meta now even hosts a dedicated feed for AI-generated content. Surveys show that more than half of Americans doubt their ability to tell real videos from AI-made ones.
Even AI experts admit the problem is growing fast. Many say that, at first glance, they can no longer reliably distinguish real footage from AI-generated clips.