Lead: The FTC's AI chatbot safety inquiry is probing seven major tech companies over how their chatbots protect children and handle user data. The agency has issued compulsory orders demanding detailed information on safety protocols and monetization strategies. Companies have 45 days to respond, while safety advocates press for stronger child protection measures.
FTC investigation timeline
The inquiry began after tests uncovered hundreds of harmful interactions, including sexual content and grooming attempts. The commission says it needs monthly reports on engagement, revenue, and incidents by age bracket. The compulsory orders are meant to reveal how AI chatbots are deployed and monetized in the U.S.
AI chatbots safety gaps
Safety advocates say AI chatbots still produce unsafe content without stronger guardrails. The inquiry focuses on input/output controls and training-data alignment as ways to reduce harm. Experts recommend combining runtime filters with value-aligned training to limit risky conversations with minors.
Child protection measures
The inquiry highlights urgent child protection priorities, asking platforms to show how they verify ages and block underage interactions. Effective child protection requires clearer age verification, accessible reporting paths, and rapid removal of abusive chatbot outputs.
Data handling by age group
Regulators want specifics on data handling by age group. The inquiry requests monthly breakdowns of data collection, revenue, and safety incidents across age segments. That information will help determine whether current safety protocols are adequate and how monetization strategies affect vulnerable users.
Monetization strategies under scrutiny
Money flows influence risk. The inquiry seeks insight into monetization strategies tied to user engagement. Regulators want to know whether subscription tiers, ad models, or in-chat purchases encourage harmful prompts or bypass safety protocols.
Compulsory orders explained
The commission’s compulsory orders are legally binding and require prompt compliance: companies have 45 days to deliver documents and data. Noncompliance can trigger enforcement action, fines, or litigation led by state attorneys general and federal authorities.
How companies can respond
Firms should document their safety protocols, share testing results, and outline their age verification systems. The FTC also expects disclosure of training-data practices and post-deployment monitoring. Building transparent governance and sharing safety metrics can reduce regulatory risk.
Final context
This probe arrives as AI chatbots scale into education, customer support, and social platforms. The FTC aims to balance innovation with public safety. If companies adopt stronger guardrails, chatbots could still deliver positive learning outcomes while protecting children.
Frequently asked questions about the FTC AI chatbot safety inquiry
Q: Who is investigating AI chatbots?
A: The Federal Trade Commission is leading the probe, joined by state attorneys general and safety advocates.
Q: What data does the FTC want?
A: The FTC requests monthly reports on engagement, revenue, safety incidents, and data handling by age group.
Q: Why focus on children?
A: Tests found harmful interactions, including sexual content and grooming, highlighting gaps in child protection.
Q: What happens if companies don’t comply?
A: The FTC’s compulsory orders can lead to enforcement actions, fines, or legal proceedings.
Q: Can AI chatbots be safe?
A: Yes, if companies implement robust safety protocols, verify users’ ages, and align training data with ethical standards.
Sources for this article
Federal Trade Commission (2025). “FTC orders and consumer protection materials” [online].