Snap AI chatbot investigation launched in UK over teen-privacy concerns


Snap is under investigation in the U.K. over privacy risks associated with the company’s generative artificial intelligence chatbot. 

The Information Commissioner’s Office (ICO), the country’s data protection regulator, issued a preliminary enforcement notice Friday citing the risks the chatbot, My AI, may pose to Snapchat users, particularly children aged 13 to 17.

“The provisional findings of our investigation suggest a worrying failure by Snap to adequately identify and assess the privacy risks to children and other users before launching ‘My AI’,” said Information Commissioner John Edwards in the release.

The findings are not yet conclusive, and Snap will have an opportunity to address the provisional concerns before a final decision is made. If the ICO’s provisional findings result in an enforcement notice, Snap may have to stop offering the AI chatbot to U.K. users until it addresses the privacy concerns.

“We are closely reviewing the ICO’s provisional decision. Like the ICO we are committed to protecting the privacy of our users,” a Snap spokesperson told CNBC in an email. “In line with our standard approach to product development, My AI went through a robust legal and privacy review process before being made publicly available.”

The tech company said it will continue working with the ICO to ensure the organization is comfortable with Snap’s risk assessment procedures. The AI chatbot, which runs on OpenAI’s ChatGPT, includes features that alert parents when their children have been using it. Snap says it also has general guidelines its bots must follow to avoid making offensive comments.

The ICO did not provide additional comment, citing the provisional nature of the findings.

The ICO previously published “Guidance on AI and data protection” and followed up in April with a general notice listing questions that developers and users should ask about AI.

Snap’s AI chatbot has faced scrutiny since its debut earlier this year over inappropriate conversations, such as advising a 15-year-old on how to hide the smell of alcohol and marijuana, according to The Washington Post.

Other forms of generative AI have also faced criticism as recently as this week. Bing’s image-creating generative AI has been used by members of the extremist messaging board 4chan to create racist images, 404 Media reported.

The company said in its most recent earnings report that more than 150 million people have used the AI chatbot.
