UK regulator demands tech firms shield children from harmful content


LONDON: Social media platforms will be hit by fines of up to $22.5 million unless they take action to ensure their algorithms do not direct children towards harmful content, the UK communications regulator Ofcom said Wednesday.
Britain's new online safety law imposes legal responsibilities on platforms to protect children, and Ofcom published a draft code of practice setting out how they should meet them.

"In line with new online safety laws, our proposed codes firmly place the responsibility for keeping children safer on

tech firms

," said Ofcom chief executive Melanie Dawes.
"They will need to tame aggressive algorithms that push harmful content to children in their personalised feeds and introduce

age-checks

so children get an experience that's right for their age," she added.

The report outlines 40 practical measures that Dawes said would deliver a "step-change in online safety for children".
"Once they are in force we won't hesitate to use our full range of enforcement powers to hold platforms to account," she warned.
The new measures are due to come into force next year, with rule-breakers facing fines of up to £18 million ($22.5 million) or 10 percent of their revenue.
Dawes said rogue platforms would be "named and shamed", and could even be banned for use by children.

As well as robust age-checks, the code requires firms to put content moderation systems in place and to ensure harmful content is removed quickly.
Peter Wanless, chief executive of children's charity the NSPCC, called the draft code a "welcome step in the right direction".
"Tech companies will be legally required to make sure their platforms are fundamentally safe by design for children when the final code comes into effect," he added.
