New research reveals critical safety flaws in OpenAI’s Sora video generator. A corporate watchdog found the AI will readily produce harmful content for accounts registered to teenagers.

This testing occurred just months after OpenAI implemented new parental controls. The findings raise serious questions about the platform’s safeguards.
Researchers Generate Disturbing Content with Teen Accounts
Ekō researchers registered Sora accounts as 13- and 14-year-olds. They successfully created 22 videos containing prohibited content.
The generated content included scenes of self-harm, drug use, and sexual violence. Other videos depicted school shootings and racist stereotypes.
One clip showed an all-Black dance team chanting degrading phrases. Another portrayed a girl expressing intense self-hatred while looking in a mirror.
Inconsistent Moderation and Broader Implications
OpenAI’s safety features proved unreliable during testing. The company had previously admitted its safeguards can degrade over long interactions.
Experts question whether these tools provide net benefit to society. The potential for harm to vulnerable populations remains significant.
These findings come as OpenAI faces multiple lawsuits related to AI safety. Parents have filed claims alleging chatbots contributed to teen suicides.
Sora’s safety failures represent a major setback for AI content moderation. The vulnerabilities in teen protection demand immediate regulatory attention and corporate accountability.
Info at your fingertips
What specific harmful content did researchers create?
Researchers generated videos showing self-harm, drug use, and school shooting scenarios. They also created racist content and depictions of sexual violence. The AI produced this material despite its safety guidelines.
How did researchers bypass Sora’s safety features?
They used standard teen accounts without any special circumvention techniques. The system’s content moderation failed to block prohibited material consistently. Repeatedly tweaking prompts may be enough for some users to bypass the protections.
What has OpenAI said about these safety concerns?
OpenAI did not respond to requests for comment on these specific findings. The company previously acknowledged that its safety measures can weaken during extended use. It introduced parental controls in September.
Why is this particularly concerning for teenage users?
Teens are especially vulnerable to content that can harm their mental health. The platform also recommended prohibited material through its feed algorithms. This exposure could normalize dangerous behaviors.
What broader impact could these failures have?
These vulnerabilities could enable widespread misinformation and harassment. Deepfakes may be used for political manipulation and extremist agendas. The technology risks causing significant societal harm.
Trusted Sources: Rolling Stone, Ekō research report, University of Oxford Institute for Ethics in AI