Essay: Why Ireland’s Hate Speech Regulation Misses The Point
Ireland's upcoming Hate Speech regulation and the Social Media Business Model
I have spent the last three years researching the ethical issues of social media, supported by the DCU School of Theology and Philosophy Scholarship 2022. Over the past year I have reviewed the literature on social media, working through 2,700 papers on the topic and reading close to 100 of the most pertinent in depth. I usually try to avoid politics like the plague, but the Criminal Justice (Incitement to Violence or Hatred and Hate Offences) Bill 2022 is currently making its way through the Dáil and Seanad, and it is somewhat in my wheelhouse. The bill is designed to update Ireland's 1989 incitement to hatred law, mainly because social media has outdated it. There are increasing calls for regulation of social media, and this is precisely what my research focuses on: providing tailored recommendations for future policies like this one.
Social media regulation is absolutely necessary: the companies themselves are locked into a highly competitive market, and regulating themselves properly would cost them market share and, ultimately, profits. However, this particular regulation misses the point. The bill addresses a symptom rather than the problem and, I predict, will create further problems while failing to address the root cause of the trouble with communication on social media. There are already many complaints about this bill: a vague and circular definition of hate (‘“hatred” means hatred against a person or a group of persons in the State or elsewhere on account of their protected characteristics or any one of those characteristics’), the creation of thought crime, the potential for censorship of dissenting political beliefs, and an overall threat to free speech. My critique in this essay, however, is that the bill does nothing to change the incentives at the bottom of the social media business model, incentives that reward so-called hate speech and digital animosity. The bill criminalises an emotion, while the technology is optimised to create negative emotions like fear, anger and animosity (the only positive emotion that rivals these in the literature is awe). This is not to excuse animosity or so-called hate speech online, but the bigger picture of why this type of communication has become such a problem in the digital sphere must be revealed before a proper solution can be found.
To understand this problem, we must understand the social media business model. Social media companies are for-profit companies operating in a ‘two-sided market’: they give away their product and services to users for free and generate revenue by charging advertisers a fee to influence the behaviour of those users. Under this ‘attention economy’ business model, the more time users spend on site, the more revenue the company generates. The company is therefore incentivised to take up as much user time and attention as possible, both to maximise advertising revenue and to maintain enough users on the network to justify advertisers signing up in the first place (one reason Twitter tolerated so many fake accounts and bots for so long).
This goal of maximising attention has led to an arms race between social media companies, creators and media on the platforms to capture a finite amount of user attention. The arms race incentivises manipulative design techniques that emotionally hijack users and get them addicted. We now have enough data to infer what the black-box content-ranking algorithms, optimised for user engagement and attention, are prioritising, and it is not reliable, well-thought-out information: it is sensational, shocking and radical content. Big surprise? What captures human attention is drama, conflict and story, and social media has turned culture into a reality-TV game show. Virality now drives much of the incentive for uploading and communicating on these platforms by offering a tempting rise in status for little to no effort. So what content goes viral?
According to Rathje et al. and Brady et al., it is content with “high arousal emotions”, “moral language” and, in particular, “out-group animosity” (Brady et al., 2017; Rathje et al., 2021), which sounds pretty close to ‘hate’. Brady et al. report that each moral-emotional word added to a tweet was associated with an average 20% increase in sharing (Brady et al., 2017). Across 2,730,215 posts on Facebook and Twitter, Rathje et al. found that posts about the political out-group were far more likely to be shared than posts about the political in-group: the effect of out-group language on sharing was about 4.8 times as large as that of negative affect language and 6.7 times as large as that of moral-emotional language (Rathje et al., 2021). In other words, the most popular kind of post to make is one that attacks another group of people. The New York Times reported that Facebook’s own research found that posts users rated as ‘bad for the world’ received more engagement, and that overall engagement decreased when these posts were down-ranked (Rathje et al., 2021). In short, the platforms amplify negative emotion and out-group animosity for engagement and profit. As Rathje argues, the companies cannot stop promoting out-group animosity because they would lose engagement and profit if they did. The companies are powerless over the business model they have signed up for: their profit margins are the treadmill on which they run, and to stop or slow down is to fall off and be overtaken by a competitor.
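To get a feel for how quickly a per-word effect like Brady et al.’s compounds, here is a minimal back-of-the-envelope sketch (my own illustration, not a model from the cited papers, and it assumes the reported ~20% association compounds multiplicatively per word, which the papers do not claim as a causal mechanism):

```python
def share_multiplier(n_moral_emotional_words: int, per_word_boost: float = 0.20) -> float:
    """Expected sharing multiplier relative to a neutral post,
    assuming the ~20% per-word association compounds multiplicatively."""
    return (1 + per_word_boost) ** n_moral_emotional_words

# Under this toy assumption, a few emotive words add up fast:
for n in range(6):
    print(f"{n} moral-emotional words -> {share_multiplier(n):.2f}x sharing")
```

On this rough arithmetic, just four moral-emotional words would roughly double a post’s expected sharing, which is one way to see why the ranking incentives described above push communication toward ever more emotive, adversarial language.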
This is exactly the position in which a government should step in: when the companies cannot self-regulate. Instead, this bill stands at the end of the social media funnel playing whack-a-mole with whoever pops out, without asking why there has been such a proliferation of this type of communication in the digital sphere in the first place. It is like having lead contamination in the water supply. Lead poisoning makes people more aggressive, so aggressive attacks would increase, and the government could respond by severely punishing attackers to discourage others. However competent that response might look, it would not lower the number of attacks, because the cause is environmental contamination, not merely individual aggression. You can imagine how punishing individuals could even create more animosity and conflict while the real cause goes undiscovered. Social media is a contaminated environment. Many of its features, such as comment threads, ‘liking’ and ‘sharing’, are designed to encourage engagement but have since been shown to create endless competition and conflict that makes the world worse (for more on the overall effects of social media, see this talk by Dr Jonathan Haidt). The literature is unequivocal on this, so a bill that splits society further into in-groups and out-groups is the opposite of a solution.
In conclusion, the rise of out-group animosity, or hate speech, alongside social media in the 21st century (mainly after 2013) is a feature, not a bug. The bubble of so-called hate we see today exists because we are being manipulated by new, highly emotive and advanced technology that pits people against one another so that they keep coming back for more; that is the business model. The bill is a paper-signing exercise at this point and, with some amendments, will likely pass largely in its current form. My prediction, however, is that it will not solve the problem of ‘out-group animosity’ in the digital sphere, because it does nothing to address the underlying incentives of the business model that have driven the culture war and that endlessly amplify conflict and crisis to capture user attention. At best, this regulation will do little to nothing and merely waste taxpayer money and government resources; at worst, it will generate new problems while failing to solve any old ones.
A better solution would be to ban the attention-economy business model, ban micro-targeted advertising, put restrictions on data collection through further GDPR-style regulation and, finally, limit virality to disincentivise out-group animosity. Social media did not invent hate, but it is amplifying inter-group conflict, and it is doing so by design. Taming social media will not cure hate, but it can stop the massive increase we have seen in the 21st century and start repairing the relational damage done to democracy and the public sphere by this first public encounter with misaligned artificial intelligence. Yes, social media is the first public encounter with large-scale, public-facing artificial intelligence, and it has not gone very well. As we stand on the eve of another encounter, with generative AI taking off, understanding this technology, and most importantly ourselves, becomes more pertinent by the day.
Rathje, S., et al. (2021). "Out-group animosity drives engagement on social media." Proceedings of the National Academy of Sciences 118(26): e2024292118.
Brady, W. J., et al. (2017). "Emotion shapes the diffusion of moralized content in social networks." Proceedings of the National Academy of Sciences 114(28): 7313-7318.
Brady, W. J., et al. (2020). "The MAD model of moral contagion: The role of motivation, attention, and design in the spread of moralized content online." Perspectives on Psychological Science 15(4): 978-1010.