ChatGPT Faces Backlash After Calling Pedophilia 'Not A Crime'
OpenAI's ChatGPT sparked outrage Wednesday after users reported the AI chatbot describing pedophilia as "not a crime" in certain responses. The controversial statements emerged during discussions about age of consent laws, with the AI allegedly claiming some countries don't criminalize adult-child relationships.
Screenshots of the exchanges went viral on social media platforms, particularly X (formerly Twitter), where the hashtag #BanChatGPT trended nationwide. The backlash comes as lawmakers increasingly scrutinize AI platforms for harmful content. OpenAI confirmed it's investigating the reports and temporarily restricted related queries.
Child protection advocates condemned the AI's responses. "This isn't just a technical error; it's dangerous normalization," said Sarah Adams of the National Center for Missing & Exploited Children. The nonprofit has called for stricter AI content moderation regarding child safety topics.
Legal experts note that while the age of consent varies globally, the UN Convention on the Rights of the Child defines a child as anyone under 18 for purposes of legal protection, though it does not set a universal age of consent. In the U.S., sexual contact with minors is prohibited under state law in every state, with federal statutes covering conduct that crosses state lines or falls under federal jurisdiction.
OpenAI's usage policies explicitly ban child exploitation content, making the chatbot's responses particularly alarming. The company stated it's "urgently addressing" the issue through system updates and improved safeguards. This incident follows recent controversies about AI-generated child sexual abuse material.
The White House signaled concern about the development. A spokesperson told reporters the administration is monitoring whether stronger AI regulations are needed to prevent "unacceptable harms to children." Several state attorneys general have launched inquiries into the matter.
Tech analysts suggest the controversy stems from ChatGPT's training data, which includes legal discussions of differing international laws that the model can surface without proper contextual safeguards. "AI doesn't understand morality, just patterns in data," explained MIT researcher Dr. Ellen Park. "This shows why human oversight is critical."
Congressional leaders plan hearings next week about AI safety protocols. The House Energy and Commerce Committee will examine whether current voluntary industry standards sufficiently protect minors. Some lawmakers are drafting legislation to mandate AI content filters for child protection topics.
Parents and educators expressed shock at the chatbot's statements. "My teenager uses AI for homework help," said Maryland mother Lisa Chen. "Now I have to worry it might give them dangerous ideas about relationships." School districts in three states temporarily blocked ChatGPT access following the reports.
OpenAI says it will release a full incident report within 48 hours. The company emphasized that its AI "doesn't hold opinions" and that any harmful outputs result from technical limitations, not intentional design. It advised users to report problematic responses through official channels.