You wouldn’t blame Ofcom for feeling intimidated. The world, or at least the part of it that wants to clean up the internet, is watching the online safety bill, and the UK communications regulator must enforce it. Hearings on the bill by a joint committee ended last week, and if you step back and look at what those hearings have thrown up since September, it’s clear that Ofcom has a job on its hands.
Quick primer: the bill covers technology companies that allow users to post their own content or interact with each other. That means big fish like Facebook, Twitter, Instagram, YouTube and Snapchat must comply, but so must commercial porn sites such as OnlyFans. Search engines such as Google are also covered.
The bill places a duty of care on those companies to protect users from harmful content, on pain of heavy fines imposed by Ofcom. The duty of care is divided into three parts: preventing the spread of illegal content and activity, such as child sexual abuse material, terrorist material and hate crime (including racist abuse); ensuring that children are not exposed to harmful or inappropriate content; and, for the biggest players like Facebook, Twitter and YouTube (described as “category 1” services), ensuring that adults are protected from legal but harmful content. That last category of content will be defined by the culture secretary, after consultation with Ofcom, and then scrutinised by parliament before being set out in secondary legislation.
Ofcom’s chief executive, Melanie Dawes, has warned that the regulator could be “inundated” with complaints from social media users, and that it will have to contend with the “absolute legal weight” of big tech companies’ response to the act once it becomes law, which is expected around the end of next year.
The culture secretary, Nadine Dorries, closed out the sessions with an appearance in which she proposed a number of changes to the legislation. But even the earlier sessions had highlighted the complexities of, and loopholes in, the bill. It was supposed to get simpler – after Dorries’s appearance, there is no doubt it will get even bigger.
The committee will publish its report on the bill by 10 December, and Dorries has said she will take its recommendations “really very seriously”. Here are some of the changes we can expect, or at least the issues the committee is likely to address in its report, following the hearings.
A permanent joint committee will oversee the law
Dorries said a permanent committee of MPs and peers – modelled on the Joint Committee on Human Rights – would be set up to provide “ongoing scrutiny” of the landscape governed by the law and of the roles of the secretary of state and Ofcom in enforcing it. The body could also weigh in when the secretary of state exercises secondary powers under the act, such as issuing guidance on how Ofcom should use its powers.
There will be criminal penalties for users and executives
Dorries is certainly gunning for tech executives, telling Facebook’s founder, Mark Zuckerberg, and his head of communications, Nick Clegg, to stay out of the metaverse and focus on the real world. Addressing the wider tech industry, Dorries said: “Remove your harmful algorithms today and you – as individuals – will not be subject to criminal accountability and prosecution.” Prosecution for failing to deal with algorithms that push users towards harmful content is, as it stands, not in the bill at all. The bill currently contains a deferred power, kicking in after about two years, to impose criminal penalties on executives who fail to respond to information requests from Ofcom accurately and in a timely manner. Dorries is now talking about imposing criminal liability within three to six months, for the much broader offence of allowing a platform to direct users towards harmful content. Would that cover only illegal content, such as racist abuse, or less clear-cut areas such as legal but harmful material?
For users, three new criminal offences will be created: sending messages or posts that “communicate a threat of serious harm”; spreading misinformation – “false communications” – intended to cause non-trivial emotional, psychological or physical harm; and sending posts or messages intended to cause harm without reasonable excuse.
Online advertising: in the bill or not?
In his appearance before the committee, the MoneySavingExpert.com founder Martin Lewis urged the government to bring advertising into the bill as an area to be regulated: “Deceptive ads are ruining people’s lives. People are taking their own lives because they are being scammed, and it should be in the bill.” Ofcom’s Dawes floated the idea of regulating advertising alongside the Advertising Standards Authority, and the committee chair, the Conservative MP Damian Collins, pressed witnesses on misleading political ads. But Dorries stated firmly last week that advertising, and deceptive ads in particular, would be too big an addition, saying: “It needs its own bill.” Still, don’t be surprised if the committee pushes for it, or at least makes firm recommendations for tackling advertising in the bill or in separate legislation.
Greater investigative powers for Ofcom
The information commissioner, Elizabeth Denham (Britain’s data regulator), said in her appearance that Ofcom does not have sufficient powers to properly audit tech companies. There was talk in the sessions of access that would let the regulator examine algorithms and demand changes to them. Denham said she is able to “look under the hood” at tech companies as part of the age-appropriate design code, which requires websites and apps to consider the “best interests” of their child users. She said Ofcom’s powers under the bill need to be “enhanced with audit powers for the regulator to be able to look under the hood”.
For now, the bill requires companies to set out how their services expose users to harmful content – and how they will combat that risk. These risk assessments will inform the codes of practice for platforms that Ofcom will enforce, but the feeling on the committee is that the regulator needs more oomph. Dorries’s strong words about algorithms and criminal liability suggest she agrees.
Addressing anonymous abuse
Rio Ferdinand, the former Manchester United and Leeds footballer, spoke scathingly during his September appearance about the failure to deal with anonymous abuse. A blanket ban on anonymous social media accounts and posts is not on the cards, but expect some action: one recent survey found that, among people who had experienced online abuse, 72% said it came from anonymous accounts.
Clean Up The Internet, a campaign group that advocates for greater civility and respect online, called in its evidence to the committee for action on anonymous trolls, and for the bill to require platforms to show they have systems in place to deal with anonymity. It called on social media platforms to give users the option of pre-emptively blocking any interaction with unverified accounts, and to make users’ verification status clearly visible. The group also suggested that anonymous users register their identity with the platforms, which could hold that information against the account – and disclose it to law enforcement if necessary.
If you would like to read the full version of our newsletter, please sign up to receive TechScape in your inbox every Wednesday.