Harvey’s Law Tasks the Criminal Justice System with a Problem It Cannot Solve

MPs have agreed with Katie Price that the current laws regulating social media are “not fit for purpose” after her son Harvey suffered horrendous online abuse. Parliament is now set to take up the issue in April, opening the first major national debate over so-called “trolling”. There is no doubt that internet speech carries very real risks alongside its benefits. In recent years, social media has increased political polarisation and instability, enabled political manipulation by foreign nations, and driven individuals to trauma and, in the most tragic cases, self-harm. The law must evolve to address 21st-century problems. But in the debate ahead over how to regulate the internet, it is crucial to remember that the criminal justice system cannot and should not be the cure-all for abusive online speech.

The Commons Select Committee report “Online abuse and the experience of disabled people” was prompted by Katie Price’s petition campaign over the shocking online abuse directed at her disabled son, Harvey. In addition to verbal abuse, one online “troll” created a video depicting himself having sex with the 14-year-old. Katie Price raised awareness of cyberbullying and harassment using the hashtag #HarveysLaw and ultimately drew 221,914 supporters for her petition to create a specific criminal offence for online abuse.

The prevalence of social media makes internet abuse a major issue affecting millions of people. In the UK alone, there are around 44 million active social media users, with 66% of the population using platforms such as Twitter, Facebook and Instagram. As a result, individuals can now communicate with people they could never have reached before, and have access to technology that distributes images in an instant. If we are upset or happy, we can tell the entire world. If we disagree with a school of thought, we can write a tirade of comments ranging from well reasoned to hostile and menacing.

An emerging frontier in online speech is the ability to create images and videos that look genuine but are entirely the work of their author, mere fantasies. The offensive sexual images created of 14-year-old Harvey Price, for example, were made possible by deepfake technology, an AI-driven practice that uses face-swapping to digitally manipulate images, inserting people into photos or films without their presence or permission. Fighting back against this abusive use of an image can seem impossible. Actress Scarlett Johansson, one of the most prominent victims of deepfake pornography, recently said, “I think it’s a useless pursuit, legally, mostly because the internet is a vast wormhole of darkness that eats itself.”

From the beginning of humankind, people have used the tools available to them in both positive and negative ways. One man’s tool for sharing food is another man’s murder weapon. But that is not to say we have no way to guide the blade: a knife will pierce any surface if enough pressure is applied, so we craft protective cloth to deter the worst tragedies.

As new media amplifies freedom of speech and creativity, the question will be where to draw the line against its potential for harm, and which mechanisms are best suited to the task. While dramatic examples of abuse make headlines, it is important to note that current law already criminalises certain online speech that is “indecent”, “grossly offensive”, “obscene” or of “menacing character”. These categories are highly subjective and flexible, leaving juries (in effect, the community) to decide which kinds of speech deserve the most severe punishment.

Beyond the most serious forms of abuse, setting standards for harmful speech is one of the most delicate tasks in an open society: a job for the scalpel, not the hammer. Ms. Price’s motives are understandable, but her petition for a new online abuse offence and a registry of online trolls offers an ineffective way to combat the most common forms of abuse. In practical terms, a broad new offence would overwhelm an already strained criminal justice system and would be slow to deliver results. It is not realistic to expect an overburdened system bound by complex procedural rules to combat the most rampant forms of online abuse as they occur.

A registry for abusive online speakers is also misguided. As the Select Committee report noted, properly drafted regulations would eliminate the need for a registry, as offenders would already have convictions noted in their records. Furthermore, setting up an intrusive database would create more controversy than justice. There are serious privacy and fairness issues raised by publicly shackling people to their misdeeds forever, and the fact that internet speech is often impulsive and heated makes this balance especially difficult to strike.

Looking at the overall picture, relying on the criminal justice system alone to address online abuse dooms us to failure: the system is not responsive enough to meet the challenge, taking on one “troll” at a time. If we are serious about protecting the most vulnerable individuals and actually improving internet safety, we have to look at the system as a whole and make changes on a wider scale.

One option is to treat internet users as an “audience” and borrow the community-standards concept from telecom regulation. As Sharon White, CEO of Ofcom, has suggested, when responding to user complaints of online abuse, social media companies could be held to “high standards” set “by a clearly articulated set of rules that evolves with public opinion”. Companies that fail to address complaints promptly would face substantial fines, giving them real incentives for self-regulation that do not exist today. This approach acknowledges that the internet has become a public space where people carry out important parts of their lives, and that some basic standards of security and dignity should apply there.

However we move forward, it is important that we look beyond the narrow reach of the criminal law to address online abuse and harassment. Criminal offences should be reserved for the most serious forms of harm, and history shows that we should be exceptionally careful about criminalising expression, however offensive it may be. The Price family’s ordeal and other accounts of appalling abuse should move us to think creatively about solutions that will actually work.

New technology has weaponised language in new ways, and there is an urgent need to embrace new strategies. We need legal reform that reflects technical savvy, a real-world understanding of how social media functions in the lives of its users, and a vision of an online world where free expression is balanced with the right to personal security. This effort must take place within a comprehensive overhaul of cybercrime law more generally, as we recalibrate to the realities of the 21st century.

Contributor: Hannah Alejandro of Katz, Marshall and Banks LLP