Colorado Attorney General Phil Weiser issued a consumer alert warning parents about the growing risks posed by social AI chatbots, tools designed to mimic human conversation that can, in some cases, lead young users into harmful interactions.
"These chatbots interact with people as if they were another person," Weiser said. "They can take on personas like a celebrity, fictional character or even a trusted adult, and the conversation can turn inappropriate or dangerous quickly, especially when it comes to sexual content, self-harm or substance use."
The alert, released May 21, comes amid a sharp rise in reports of children engaging with AI bots in ways that have resulted in mental health crises and unsafe behaviors. Weiser's office warns that children and teens may not realize they're interacting with an AI rather than a real person, making them more vulnerable to manipulation.
Social AI chatbots are increasingly common on popular platforms. Some are embedded in social media sites, while others exist as standalone apps. They're often marketed as friends, mentors or entertainers.
According to HealthyChildren.org, children and teens are turning to chatbots not just for quick answers but also for entertainment or companionship, which can be risky as these programs are not designed with kids in mind and may expose them to false, harmful or inappropriate content.
"What you thought might be benign can turn quite harmful," Weiser said. "Parents need to be mindful of what their kids are doing."
The alert outlines several dangers, including chatbots generating age-inappropriate content, encouraging disordered behavior, or providing false and biased information. In some cases, children have shared private details with these bots, raising concerns about how that data may be used or stored.
Weiser said his office is watching closely for violations of Colorado's consumer protection laws, particularly those related to deceptive or unfair trade practices. He pointed to the state's ongoing lawsuit against Meta, the parent company of Facebook and Instagram, which alleges harm to children through manipulative design and lack of safeguards.
"If these platforms are crossing the line, whatever we can do in enforcement, we will," he said.
Still, Weiser acknowledged that regulation alone can't keep pace with the fast-moving world of AI. He called for a broader federal conversation and urged technology companies to act more responsibly.
The most effective protection, Weiser said, begins at home.
"Monitor their use. Be engaged," he said. "Ask your kids what they're doing online. If they say they're talking to someone, make sure they understand who or what that really is."
The alert recommends using parental controls, filtering tools and built-in age restrictions. But more importantly, Weiser said, families should normalize regular conversations about digital habits and online experiences.
"Teach your kids that these chatbots are not human," he said. "They're designed to seem human but they're not. Don't wait to talk to your kids."
Weiser said he's not ruling out the need for new state legislation but believes current laws provide a strong foundation for accountability. For now, raising awareness remains a top priority.
To help parents get started, his office has created a one-page tip sheet with safety advice and conversation starters, available at stopfraudcolorado.gov.
"Artificial intelligence is evolving rapidly, and many parents may not even be aware of social AI chatbots and their potential to harm children," Weiser said. "That needs to change."
This story was made available via the Colorado News Collaborative.