Social media giants urged to protect children as UK rejects under-16 ban


Photo: Jaromir Chalabala/Getty

UK regulators are calling on social media giants to enforce stricter protections for children on their platforms after lawmakers rejected a blanket ban for under-16s.

Online safety regulators Ofcom and the Information Commissioner’s Office wrote to YouTube, TikTok, Facebook, Instagram and Snapchat on Thursday, urging them to address a wide range of child safety issues, including stricter age verification measures to combat child grooming on their platforms.

It comes after UK lawmakers voted against a proposal to include a social media ban for under-16s in child welfare legislation being debated earlier this month.

The UK government has launched a consultation on children’s social media use to gather the views of parents and young people on whether a social media ban would be effective.

Governments across Europe are weighing stricter rules to limit teenagers’ use of social media after Australia became the first country to implement a sweeping ban for under-16s in December. Spain, France and Denmark are among the countries considering similar measures.


Better age verification technologies

Ofcom said it has given the platforms an April 30 deadline to respond, calling on them to report on what they are doing to keep underage children off their services.

Its demands included better enforcement of minimum age requirements, preventing strangers from contacting children, safer content for teenagers and an end to product testing such as AI on children.

Tech giants are “failing to put children’s safety at the heart of their products” and are falling short of promises to keep children safe online, said Ofcom CEO Melanie Dawes.

“Without proper safeguards, such as effective age checks, children are routinely exposed to risks they did not choose, in services they cannot realistically avoid,” Dawes said.

The ICO published an open letter on Thursday, saying social media platforms would need to use facial age estimation, digital ID or one-time photo matching to get better at age verification.

Many platforms rely on “self-declaration” as the main way of verifying a user’s age, but this is “easily circumvented” and ineffective, according to the regulator.

“This puts children under the age of 13 at risk, allowing their information to be unlawfully collected and used without the protections they deserve,” Paul Arnold, CEO of the ICO, said in the letter.

“With ever-growing public concern, the status quo is not working, and the industry must do more to protect children. You must act now to identify and implement currently viable technologies to prevent underage children from accessing your service,” added Arnold.

Meta complied with Australia’s social media ban, blocking 500,000 accounts believed to belong to under-16s from Instagram, Facebook and Threads in the early days of enforcement. But it called on the Australian government to reconsider, saying the blanket ban could encourage teenagers to evade the law and access social media sites without the necessary safeguards.

Instagram says it will alert parents when their teen repeatedly searches for terms like “suicide” and “self-harm” within a short period of time.

A significant trial against Meta and Alphabet began in January, brought by a young woman and her mother who accuse the companies of building design features that contribute to Instagram and YouTube addiction.

Meta CEO Mark Zuckerberg and Instagram head Adam Mosseri have already testified, with a verdict expected in mid-March. The case could set a precedent for what responsibility social media companies have toward their younger users.

The European Commission launched an investigation into Elon Musk’s X in January over the spread of sexually explicit child content by its AI chatbot Grok. Separately, the ICO fined Reddit £14 million ($18 million) in February for unlawful processing of children’s personal data.

What the tech firms say

In a statement, a Meta spokesperson told CNBC it has implemented some of the measures already outlined by regulators, including using “AI to detect users’ age based on their activity and facial age estimation technology.”

It offers separate teen accounts with built-in safeguards, the spokesperson said. “With teenagers using an average of 40 apps per week, the most effective way to meet our own age assurance approach is to centrally verify age at the app store level,” the spokesperson added.

TikTok has rolled out enhanced detection technologies across Europe since January, supported by expert moderators, to find and remove accounts belonging to anyone under its minimum age of 13.

It uses facial age estimation, credit card verification or government-approved identification to verify a user’s age, the company said.

Snapchat and YouTube did not immediately respond to requests for comment from CNBC.
