Amid a worldwide backlash against tech firms for giving children access to artificial intelligence (AI) and exposing them to its negative influence, California-based Meta will halt teenagers’ access to AI characters, the company said in a blog post Friday.
Meta Platforms Inc, which owns Instagram and WhatsApp, said that starting in the “coming weeks,” minors will no longer be able to access AI characters “until the updated experience is ready,” although they will still be able to use Meta’s AI assistant.
The company said this applies to anyone who gave Meta a birthday that makes them a minor, as well as “people who claim to be adults but who we suspect are teens based on our age prediction technology.”
The move comes the week before Meta – along with TikTok and Google’s YouTube – is scheduled to stand trial in Los Angeles over its apps’ harms to children.
Other companies, such as Character.AI, have also barred teens from AI chatbots amid growing concern about the effects of AI conversations on children, and after countries including Australia enacted laws banning children from a host of social media platforms.
Character.AI is facing several child-safety lawsuits, including one filed by the mother of a teenager who says the company’s chatbots pushed her son to kill himself. The dark side of AI can include coercing a child into sending explicit content or engaging in sexual acts, or building a relationship on deceit and manipulation. AI deepfakes can also be used as a grooming tool, with predators creating a facade of trustworthiness by impersonating another child.