NEW YORK, Sept 12, 2025 (BSS/AFP) - The US Federal Trade Commission announced Thursday it has launched an inquiry into AI chatbots that act as digital companions, focusing on potential risks to children and teenagers.
The consumer protection agency issued orders to seven companies -- including tech giants Alphabet, Meta, OpenAI and Snap -- seeking information about how they monitor and address negative impacts from chatbots designed to simulate human relationships.
"Protecting kids online is a top priority for" the FTC, said Chairman Andrew Ferguson, emphasizing the need to balance child safety with maintaining US leadership in artificial intelligence innovation.
The inquiry targets chatbots that use generative AI to mimic human communication and emotions, often presenting themselves as friends or confidants to users.
Regulators expressed particular concern that children and teens may be especially vulnerable to forming relationships with these AI systems.
The FTC is using its broad investigative powers to examine how companies monetize user engagement, develop chatbot personalities, and measure potential harm.
The agency also wants to know what steps firms are taking to limit children's access and comply with existing privacy laws protecting minors online.
Companies receiving orders include Character.AI, Elon Musk's xAI Corp, and others operating consumer-facing AI chatbots.
The investigation will examine how these platforms handle personal information from user conversations and enforce age restrictions.
The commission voted unanimously to launch the study, which does not have a specific law enforcement purpose but could inform future regulatory action.
The probe comes as AI chatbots have grown increasingly sophisticated and popular, raising questions about their psychological impact on vulnerable users, particularly young people.
Last month the parents of Adam Raine, who died by suicide in April at age 16, filed a lawsuit against OpenAI, accusing ChatGPT of giving their son detailed instructions on how to take his own life.
Shortly after the lawsuit emerged, OpenAI announced it was working on corrective measures for its world-leading chatbot.
The San Francisco-based company said it had observed, in particular, that when exchanges with ChatGPT run long, the chatbot can fail to consistently suggest contacting a mental health service when a user mentions suicidal thoughts.