Wed, Aug 24th 2022 03:36pm - Mike Masnick

We have discussed at great length the many problems of the UK's Online Safety Bill, in particular how it will be a disaster for the open internet. Unfortunately, important politicians seem to think that the Online Safety Bill will be a sort of magic wand that will make the "bad stuff" online disappear automatically (it won't). It appears that more people, and prominent ones at that, are now speaking out against the bill. Former UK Supreme Court judge Jonathan Sumption has published a piece in the Spectator, the old-school UK political commentary magazine that is generally seen as quite conservative. Sumption warns that the Online Harms Bill will, itself, be quite harmful:

The real vice of the bill is that its provisions are not limited to material capable of being defined and identified. It creates a new category of speech which is legal but "harmful". The range of material covered is almost infinite, the only limitation being that it must be liable to cause "harm" to some people.

Unfortunately, that is not much of a limitation:

Harm is defined in the bill in circular language of stratospheric vagueness. It means any "physical or psychological harm". As if that were not general enough, "harm" also extends to anything that may increase the likelihood of someone acting in a way that is harmful to themselves, either because they have encountered it on the internet or because someone has told them about it. Many things which are harmless to the overwhelming majority of users may be harmful to sufficiently sensitive, fearful or vulnerable minorities, or may be presented as such by manipulative pressure groups. At a time when even universities are warning adult students against exposure to material such as Chaucer, with his rumbustious references to sex, or historical or literary material dealing with slavery or other forms of cruelty, the harmful propensity of any material whatever is a matter of opinion. It will vary from one internet user to the next.

While I don't necessarily agree with all of his characterization, there is something fundamental in here that I wish so many other people understood: this is all relative. Some people find certain content offensive. There is no objective standard for "harmful" speech, especially when (as with the UK bill) it includes stuff that the law itself admits remains "legal."

As Sumption notes, making these kinds of calls at scale, when no one can even agree what the content is, is bound to be a disaster (and, for what it's worth, he underplays the scale here: he shows how much happens every minute, but it's even crazier when you work out what that means per hour or per day; 500 hours of new video per minute is 30,000 hours per hour, or 720,000 hours per day, far more than anyone could ever monitor):

If the bill is passed in its current form, internet giants will have to identify categories of material which are potentially harmful to adults and provide them with options to cut it out or alert them to its potentially harmful nature. At the last count, 300,000 status updates are uploaded to Facebook every minute, with 500,000 comments left that same minute. YouTube adds 500 hours of videos every minute. Faced with the need to find unidentifiable categories of material liable to inflict unidentifiable categories of harm on unidentifiable categories of people, and threatened with criminal sanctions and enormous regulatory fines (up to 10 per cent of global revenue), the internet giants will inevitably err on the side of caution and take down too much.

He also has a response to those who insist this can all be handled by algorithms:

It can be handled by algorithms if you're happy to accept a huge number of errors, both false positives and false negatives. The problem is aggravated by the inevitable use of what the bill calls "content moderation technology", i.e. algorithms. They are necessarily indiscriminate because they operate by reference to trigger text or images.
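Sumption's point that trigger-text filters are "necessarily indiscriminate" is easy to demonstrate. Here's a toy sketch (my own illustration, with a made-up blocklist; no real platform works this crudely, but the failure mode is the same one behind the classic "Scunthorpe problem"):

```python
# Toy trigger-text filter (illustrative only; BLOCKED_TERMS is a hypothetical list,
# not anything from the bill or any actual moderation system).
BLOCKED_TERMS = ["sex", "kill"]

def flag(post: str) -> bool:
    """Flag a post if any trigger term appears anywhere in its text."""
    text = post.lower()
    return any(term in text for term in BLOCKED_TERMS)

# False positives: harmless posts contain a trigger substring.
print(flag("Middlesex County fair this weekend"))      # True - wrongly flagged
print(flag("I need to kill some time at the airport")) # True - wrongly flagged

# False negative: a trivial misspelling evades the filter entirely.
print(flag("k1ll yourself"))                           # False - real harm slips through
```

Real systems use far more sophisticated classifiers than this, but the underlying tradeoff doesn't go away: at the scale of hundreds of thousands of posts per minute, even a small error rate means enormous numbers of posts wrongly removed or wrongly left up every single day.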