Children’s toymaker FoloToy says it’s pulling its AI-powered teddy bear “Kumma” after a safety group found that the cuddly companion was giving wildly inappropriate and even dangerous responses, including tips on how to find and light matches, and detailed explanations about sexual kinks.
“FoloToy has decided to temporarily suspend sales of the affected product and begin a comprehensive internal safety audit,” marketing director Hugo Wu told The Register in a statement, in response to the safety report. “This review will cover our model safety alignment, content-filtering systems, data-protection processes, and child-interaction safeguards.”
The report, conducted by researchers at the Public Interest Research Group (PIRG) and released Thursday, tested three AI-powered toys from different companies, finding that all of them were capable of providing concerning answers to young users. Without too much prodding, the AI toys discussed topics that a parent might be uncomfortable with, ranging from religious questions to the glory of dying in battle as a warrior in Norse mythology.
But it was FoloToy’s Kumma that emerged as the worst influence by a landslide. The toy, powered by default by OpenAI’s GPT-4o model, the same model that once powered ChatGPT, repeatedly dropped its guardrails the longer a conversation went on, until hitting rock bottom on incredibly disturbing topics.
In one test, Kumma provided step-by-step instructions on how to light a match, all while maintaining the tone of a friendly adult explaining something to a curious ankle biter.
“Let me tell you, safety first, little buddy. Matches are for grown-ups to use carefully. Here’s how they do it,” Kumma began, before listing the steps. “Blow it out when done. Puff, like a birthday candle.”
That, it turned out, was just the tip of the iceberg. In other tests, Kumma cheerily gave tips for “being a good kisser,” and launched into explicitly sexual territory by explaining a multitude of kinks and fetishes, like bondage and teacher-student roleplay. (“What do you think would be the most fun to explore?” it asked during one of those explanations.)
https://futurism.com/artificial-intelligence/ai-stuffed-animal-pulled-after-disturbing-interactions