25 December 2024


LONDON – The United Kingdom officially brought its sweeping online safety law into force on Monday, paving the way for tougher oversight of harmful online content and potentially huge fines for tech giants such as Google and TikTok.

Ofcom, the British media and communications watchdog, has published the first edition of a code of practice and guidance for technology companies, setting out what they must do to tackle illegal harms such as terrorism, hate, fraud and child sexual abuse on their platforms.

These measures constitute the first set of duties imposed by the regulator under the Online Safety Act, a comprehensive law that requires technology platforms to do more to combat illegal content online.

The Online Safety Act imposes so-called “duties of care” on these technology companies to ensure they are held accountable for harmful content that is uploaded and spread on their platforms.

Although the Online Safety Act became law in October 2023, it had not yet fully taken effect. Monday's development effectively marks the official entry into force of its safety duties.

Ofcom said technology platforms would have until March 16, 2025, to complete illegal harms risk assessments, effectively giving them three months to bring their platforms into compliance with the rules.

Once this deadline passes, platforms must start implementing measures to mitigate the risk of illegal harm, including better moderation, easier reporting and built-in safety tests, Ofcom said.

Ofcom chief executive Melanie Dawes said in a statement Monday: “We will be monitoring the industry closely to ensure companies adhere to the stringent safety standards set for them under our first rules and guidance, with further requirements to follow quickly in the first half of next year.”

Risk of huge fines and suspension of service

Under the first edition of the codes, reporting and complaint functions should be easier to find and use. For high-risk platforms, companies will be required to use a technology called hash matching to detect and remove child sexual abuse material (CSAM).

Hash-matching tools link known CSAM images from police databases to unique digital fingerprints, known as “hashes,” for each piece of content, helping social media sites' automated filtering systems identify and remove them.
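To illustrate the idea, here is a minimal, hypothetical Python sketch of hash matching. It uses a plain SHA-256 digest, which only matches byte-identical files; production systems typically rely on perceptual hashes (such as PhotoDNA) that survive resizing and re-encoding, and draw their hash lists from police and child-protection databases. All names and values below are illustrative, not part of any real system.

import hashlib

# Hypothetical set of fingerprints of known illegal images.
# Real deployments source these from police databases and usually
# use perceptual hashes rather than the SHA-256 shown here.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def fingerprint(content: bytes) -> str:
    # The hash acts as a compact digital fingerprint of the content.
    return hashlib.sha256(content).hexdigest()

def should_block(upload: bytes) -> bool:
    # Flag the upload if its fingerprint matches a known entry.
    return fingerprint(upload) in KNOWN_HASHES

# An automated filter would run this check on each upload.
if should_block(b"uploaded image bytes"):
    print("Match: withhold the upload and escalate for human review.")
else:
    print("No match: publish as normal.")

Because a cryptographic hash changes completely if even one byte of the file changes, matching against a fixed list like this catches only exact copies; that limitation is why perceptual hashing is the standard choice in practice.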

Ofcom stressed that the codes published Monday were only the first set, and that the regulator would consult on additional codes in spring 2025, including blocking accounts found to have shared CSAM and enabling the use of artificial intelligence to tackle illegal harms.

“Ofcom's illegal content codes are a fundamental change for online safety, meaning that from March, platforms will have to proactively remove terrorist material, abuse involving intimate images of children, and a range of other illegal content, bridging the gap between the laws that protect us in the offline world and the online world,” British Technology Minister Peter Kyle said in a statement on Monday.

Kyle added: “If platforms fail to step up, the regulator has my backing to use its full powers, including issuing fines and asking the courts to block access to sites.”
