Monitoring racist and xenophobic extremism to counter hate speech online: Ethical dilemmas and methods of a preventive approach

The rise of racism in Europe
In recent years, online racism has grown rapidly and seriously in many European and non-European countries, to the point of becoming a worrying global phenomenon.(1)
One of the most striking examples of this process is the rise of White Supremacist movements online. Their strategy mainly consists in disguising their hidden political agenda and attempting to subvert and destroy civil rights by presenting their standpoints through an overturning of the rhetoric of the civil rights movement.(2)

Undoubtedly, this growing racism, which is rooted in hidden racist expressions, is exploiting “favorable” conditions such as the financial crisis, the increase in social conflicts and the rise of populist issues in politics. In Italy, for example, UNAR, the Italian national anti-discrimination office, documented that complaints about online racism accounted for 30.9% of the overall cases involving the media (UNAR, 2013). Similar situations also occurred in other European countries such as Slovenia, Finland, Hungary and the United Kingdom, as emerged from the research carried out within the framework of the European project LIGHT ON.(3)
Nowadays, racist claims are often hidden under a subtle and sophisticated rhetoric. A huge amount of disguised racist content is currently published on the Internet in the form of occasional bigotry or individual outbursts, whereas it is in fact intended to foster racist attitudes among people and to support the “normalization” of racism. Some scholars have used the terms “common sense racism” or “rational racism” to describe discourse that portrays immigrants, refugees and minority members (as well as homosexuals and disabled people) as undesirable while avoiding the label of “racist”.(4)

Internet and the normalization of racism

The Internet is playing a crucial role in the so-called normalization of racism: racist movements are well aware of the potential of social media for the diffusion of hate speech. They exploit social media “virality” by cloaking the real source(s) of their messages and promoting the sharing of such content, with the final aim of manipulating people’s worries and outrage.(5)
One scholar has argued: “One gets the impression that they have now crossed the weak shelters of censorship and self-censorship that until recent years made it difficult to explicitly pronounce racist discourses in public. Topics such as «anti-social behaviors» of the Roma people, the identification of immigration with crime, the danger of certain «races», the invitation to sink immigrants’ boats – which was once a domain (or prerogative) of the [Northern] League’s(6) rhetoric [in Italy] – are now spoken publicly without any shame, and not only by right-wing speakers”.(7)

Hate speech online is therefore a dangerous weapon that potentially anyone, including violent extremists, can use to promote hate and perpetuate hateful behavior. “Extremists and violent extremists are using the Internet and social media to inspire, radicalise and recruit young people to their cause, whether as passive supporters, active enthusiasts or those willing to become operational”.(8)
Extremist groups were among the earliest users of the Internet or, more generally, of electronic communication networks.(9) As explained by the Crown Prosecution Service (UK), there are a number of offenses that can be considered when dealing with violent extremism, and these include “offenses arising through spoken words, creation of tapes and videos of speeches, internet entries, chanting, banners and written notes and publications”.(10)

An ethical dilemma
The complex and controversial issue of hate speech and its use also by violent extremist groups touches upon an important ethical dilemma: how can we tackle this phenomenon without undermining freedom of speech?

The question has been effectively summarized by Nils Muižnieks, Commissioner for Human Rights at the Council of Europe: hate speech is not about freedom of speech; it is a threat to the rights of others and to public safety, since hate speech and violent action appear to be tightly intertwined. Many recent incidents demonstrate how hate speech can actually be perceived as an authorisation to engage in violence, which is likely to lead to the commission of real-life crimes. That is why it is necessary to deal simultaneously with hate speech and hate crimes.(11)

Populist movements often exploit racist arguments in their discourse; in all cases, allegations of racism, and especially of racist hate speech, must be substantiated by evidence, and every complaint must be proven to be well-founded.

Since hate speech is per se a controversial concept, and broadly accepted definitions are still missing, policy makers and law enforcement agencies must deal with the intrinsic ambiguity and polysemy of such contents. Given that these messages can be interpreted differently by different people, it is all the more essential to ground reporting, assessments and legal prosecution in objective and factual arguments, carefully taking into account all the available information about the source and the context in which messages have been spread. Indeed, the online communication strategy of hatemongers is often grounded in old-style propaganda techniques, in which the authorship, source, or real intention of a publication or broadcast is intentionally cloaked or disguised.(12)

Light On Project: breaking up the vicious cycle
The transnational experience of the LIGHT ON project shows that determining whether a given content is racist can be facilitated by joining experiences from different sectors. Indeed, investigations into a suspected hate speech case, and the related prevention policies, must take into account diverse information about the source and the context in which messages have been spread: “Collecting and analyzing the different expressions of contemporary racism is essential to understand the phenomenon and to design new strategies to contrast it”.(13) Furthermore, looking at this type of online monitoring from a broader perspective, it clearly emerges that a transnational approach is needed, as demonstrated by the findings of the research carried out by national teams in five European countries, using tools such as a visual database and a collaborative glossary, both presented on the LIGHT ON website. These digital tools, which allow users to filter entries by country, target, typology and target group, demonstrate “[…] how European Nazi and Fascist groups are tightly connected, in order to create a wide racist network across Europe. Many racist watchwords and symbols are indeed the same in different countries and Nazi websites often have many shared inbound and outbound links with their correspondents in other countries.”(14) In particular, the LIGHT ON research has found many examples, ranging from so-called “black” propaganda, aimed at deceiving target groups by spreading false material through a disguised source, to “grey” propaganda, in which sources are not identified and contents are at least partially true, although carefully selected to induce certain effects such as persuasion and mobilization.(15)

Some of the main tools developed within the LIGHT ON project (database, glossary, training manual, toolkit, guide on ‘how to spot online racism’, etc.) can provide non-arbitrary evaluation parameters to differentiate what constitutes an actual instigation to hate from what does not, also in view of the ethical dilemma concerning hate speech vs. freedom of speech (further discussed later in this paper) – a dilemma unfortunately at the core of the dramatic events which took place in Paris at the beginning of 2015, one of the worst security crises in France in decades, which began with the massacre at the offices of the satirical magazine Charlie Hebdo.(16) Considering the incidents that happened in the aftermath of the Charlie Hebdo attacks can help to better understand how intimately hate speech and hate crimes are related. In fact, only five days after the murders in Paris, Tell Mama(17) – a UK Muslim advocacy project monitoring, measuring and classifying anti-Muslim attacks – reported at least fifty attacks (bomb blasts, arson and graffiti) against Muslims in France; even London mosques received threats, hate mail and offensive drawings.(18)
This relation should be assessed as the provisional result of a vicious cycle.(19) According to Carey’s ritual model of communication,(20) media consumption (including discriminatory or hateful content) is a complex social process aimed at portraying or confirming our particular views of a conflicting world, dramatically calling for engagement and action. Hence, hate speech and hate crimes are not only tightly intertwined: they reinforce each other. In particular, the ritual model can be adopted to explain how racist speech produces and maintains a social climate of discrimination and hatred, facilitating unequal treatment of groups based on race, gender and sexual orientation, and reinforcing historical asymmetrical relationships among ethnic or minority groups.(21) Although single hate incidents may not produce an immediate and visible harm, their cumulative impact actually produces, maintains, transforms and repairs a reality of rising submission and self-reinforcing hate.(22)

The normalization of racism in mainstream media, social media and political discourse often takes the shape of a wave of sensationalist news after an incident or a crime involving immigrants and ethnic minority members. This kind of media coverage is regularly followed by “common people’s” comments on popular social platforms and/or by inflammatory politicians’ statements intended to depict immigrants and minorities as threats to be countered by every possible means.
Furthermore, such statements and comments contribute to creating a vicious cycle of fear, anger and contempt, often entailing a presumed right to violent self-defence.(23) Similar vicious cycle effects are also found in hateful discrimination against LGBT people,(24) disabled people,(25) and in cyberbullying.(26)

Modern narratives and imagery of racism: notes from a preventive approach
One of the main aims of the LIGHT ON research was to investigate modern verbal and visual manifestations of racism and xenophobia. Very often racist symbols and images are accepted as normal social expressions, but as they convey much meaning, intent and significance in a communicative and immediately recognizable form, they influence personal and collective behaviors, especially when they are shared on the Internet, where these visual expressions can easily engage broad audiences. “These ‘newer’ forms of racism are so embedded in social processes and structures that they are even more difficult to explore and challenge”.(27) These expressions are used as tools to carry racist arguments, raise the level of violence tolerated by society and lead to a dangerous normalization of racism. Indeed, the European Commission against Racism and Intolerance of the Council of Europe has warned that “such public manifestations risk fuelling racism, xenophobia, anti-Semitism and intolerance.”(28) When the tolerated level of violence rises, intolerance becomes widespread and well-rooted in society, creating the basis for perpetrating hate crimes, which are not sporadic outbursts by extremist individuals: they are the result of a cultural process built on prejudices that can lead to violent extremism.(29)

The LIGHT ON project brought together experts working in different fields, who contributed insights on the state of national legislation, political issues on national agendas, policies for prevention, media reporting and the relevant academic literature. They contributed to the data collection phase with their multi-sectoral expertise: victim support groups, for example, stressed how certain contents have the precise aim of harming people, and thus must be distinguished from jokes, irony and satire, and labeled for their strong negative impact. This synergy among different stakeholders highlighted the importance of a victim-centered approach, as well as of cooperation among civil society, researchers, local authorities and law enforcement agencies.

It also stressed the role of national authorities and groups providing legal support in addressing the racist cases identified during monitoring or reported by victims. Indeed, although online monitoring can serve as a deterrent and a tool to prevent – or control – racist content, it proves more effective when linked to the action of support groups or national authorities, in order to show the impact of monitoring and to encourage self-reporting.

Returning to both prevention and action against hate speech: among the most important findings of the LIGHT ON project, the monitoring role of users and their key function in reporting online hate speech and racist propaganda emerge as essential components of the fight against these phenomena, also in view of improving and strengthening the response of law enforcement authorities. Understanding the main reasons why hate speech on the Net often goes unreported is a starting point for drawing up guidelines for reporting and tips for monitoring online materials promoting violent extremism. These reasons include victims’ lack of confidence in the police, concern about revenge attacks or fear of retaliation, acceptance of violence and abuse (“nothing will change anyway!”), fear of having one’s privacy compromised, fear of jeopardising immigration status, cultural and language barriers, and the lack of a victim support system.

In this perspective, the LIGHT ON project developed, within the training manual on Investigating and Reporting Hate Speech Online,(30) a set of general tips for online reporting, with a particular focus on the main social networks as one of the main vehicles for spreading violent extremism and populist propaganda.
One of the first “tips” for users to correctly report an online hate incident is to evaluate the content of the speech and select the best strategy accordingly.(31) The user should consider whether the content is generated in his/her own country and is thus subject to national legislation. However, authors are well aware that this could make their identification easier, and therefore often publish the content on servers located abroad, beyond the reach of national rules. The main suggestion for users in this case is: always keep a backup of the content of the hate speech incident (the LIGHT ON training manual includes a list of concrete steps on how to do this, among other tips).

The main steps for reporting violent extremism on the most used social media (Facebook, Twitter, Wikipedia and YouTube) are also outlined in the LIGHT ON training materials: acquiring this knowledge makes it easier for law enforcement to adopt a victim-centered approach and effectively help victims by pointing them to the right path for reporting online.

Different social networking sites have different policies on the definition of hate speech and different methods to report and/or block content. Moreover, even when online reporting fails, ISPs and social networking companies may have established policies to collaborate more efficiently with national authorities on the regulation and removal of hate speech.

Returning to the importance of recognizing hateful content in order to properly address it, the great dilemma regarding the relation between populist arguments and violent extremism on the Internet, on the one side, and freedom of speech, on the other, cannot be left aside, especially in these dramatic days in Europe.(32)
“Reconciling rights which are at the core of democracy, such as freedom of belief and religion and freedom from discrimination, with the right to freedom of expression represents a significant challenge. When comedy and dark humour are included in the picture, establishing clear boundaries between what constitutes freedom of expression and what falls under the category of hate speech becomes an ever more complex challenge”.(33) But where do we draw the line?
Even if comedy and satire, as forms of expression, are protected by laws on freedom of expression, they also come with duties and responsibilities and, as such, may be subject to restrictions or penalties as prescribed by law. This implies that in democratic societies governments may limit freedom of expression where necessary, but only in so far as such limitations are prescribed by law and proportionate. The test against which such limitations are evaluated is a strict one.

The authors

Andrea Cerase (PhD) is a media and culture sociologist who has worked for many years on discrimination, racism and media portrayals of minorities. He has been a research fellow at La Sapienza University of Rome and has also taught as an adjunct lecturer at the Universities of Florence and Sassari. He also carries out research on risk communication, risk issues, journalism and applied social network analysis.

Elena D’Angelo is a Project Officer within the Emerging Crimes Unit at UNICRI. Her work is mainly focused on applied research and training for law enforcement and legal professionals. Ms. D’Angelo is the author and co-author of several publications on the following topics: counterfeiting, organized crime, data protection and anti-discrimination, including online hate-speech.

Claudia Santoro has been working at Progetti Sociali since 2012, mainly on EU-funded transnational projects on minorities, migrants, integration, racism and anti-discrimination. Progetti Sociali is an Italian social enterprise based in Pescara, working on the planning, implementation and coordination of social projects. More information:

1 Perry, B., and Olsson, P. (2009), “Cyberhate: the globalization of hate”, Information & Communications Technology Law, 18(2), 185-199.

2 Daniels, J. (2009), “Cyber racism: White supremacy online and the new attack on civil rights,” Rowman & Littlefield Publisher.

3 LIGHT ON project, JUST/2012/FRAC/AG/2699, co-financed by Fundamental Rights and Citizenship of the European Commission.

4 Capdevila, R., and Callaghan, J. E. (2008), “It’s not racist. It’s common sense. A critical analysis of political discourse around asylum and immigration in the UK,” Journal of Community and Applied Social Psychology, 18(1), 1-16.; Meddaugh, P. M., and Kay, J. (2009), “Hate Speech or ‘Reasonable Racism? The Other in Stormfront,” Journal of Mass Media Ethics, 24(4), 251-268.

5 Andrisani, P. (2014), “Quando il razzismo nel web diventa ‘virale’”, in Centro Studi e Ricerche IDOS (eds.) Rapporto Unar. Dalle Discriminazioni ai diritti, IDOS, Roma, pp. 249-252.

6 The Lega Nord (Northern League) is a federalist and regionalist political party in Italy, established in 1991 by Umberto Bossi. This party advocates for secession of the North of Italy and its members are very often involved in racist and xenophobic political talk (see Avanza, M. (2010), “The Northern League and its ‘innocuous’ xenophobia,” in Mammone, A., and Veltri, G. A. (Eds.) (2010), “Italy today: The sick man of Europe,” Routledge, Abingdon, UK, 131 – 142).

7 Rivera, A.M (2008), “La normalizzazione del razzismo,” in Naletto G. (ed.), Sicurezza di chi? Come combattere il razzismo, edizioni dell’asino, Roma: 55-61.

8 Institute for Strategic Dialogue (2014), “Policy briefing: Countering the appeal of violent extremism online,” p. 3.

9 Gerstenfeld, P.B., Grant, D.R., Chiang, C.P. (2003), “Hate online: A content analysis of extremist Internet sites,” Analysis of Social Issues and Public Policy, vol. 3, n. 1, pp. 29-44.


11 Muižnieks, N. (2013), “Hate speech is not protected speech,” ENARgy The European Network Against Racism’s webzine, available at

12 Daniels (2009), p. 119.

13 Boileau, A., Del Bianco D., Velea, R. (eds., 2014), “Understanding the perception of racism. Research as a tool against racism,” Light On Project, Gorizia (ISBN 978-88-89825-32-7).

14 Cerase, A. (2014) “Racist symbols and discourses: from Essentialist to Far-right racism”, ENARgy The European Network Against Racism’s webzine, April 2014, available at

15 Jowett, G. S. O’Donnell V., (1992), “Propaganda and Persuasion,” London: Sage; McQuail, D. (2000), “Mass media theory: An introduction,” London: Sage.

16 For a context analysis of the events see for instance:

17 Tell Mama UK – Standing against bigotry and prejudice

18 “France sees more than 50 anti-Muslim incidents after Charlie Hebdo shootings,” The, Jan 12th 2015, available at

19 The adjective “provisional” indicates the possibility of overturning the process through monitoring and community actions.

20 We should not focus only on “information acquisition, though such acquisition occurs, but of dramatic action in which the reader joins a world of contended forces as an observer at play.” In: Carey (1989), “Communication as culture: Essays on media and society,” New York, Routledge, p. 17.

21 Calvert, C. (1997), “Hate speech and its harms: A communication theory perspective,” Journal of Communication, 47(1), pp. 4-19.

22 Carey’s argument in the explanation of his ritual model, in: Carey (1989), p. 22.

23 Scagliotti, L. (2010), Racist violence in Italy, Brussels: ENAR/OSI; Binotto, M., Bruno, M., Lai, V. (2012); Cerase, A. (2013), “Colpevoli per elezione: Gli immigrati nella lente della cronaca nera,” Communicazionepuntodoc, n.7, pp. 69-88; Orrù, P. (2014), “Racist discourse on social networks: A discourse analysis of Facebook posts in Italy,” Rhesis, International Journal of Linguistics, Philology, and Literature, 5(1), pp. 113-133; Cerase, A. (forthcoming), “Il circolo vizioso della rappresentazione mediale,” in Binotto, M., Bruno, M., Lai, V. (eds), Tracciare i confini. L‘immigrazione nei media italiani, FrancoAngeli, Milano.

24 Dunbar, E. (2006), “Race, gender, and sexual orientation in hate crime victimization: Identity politics or identity risk?,” Violence and Victims, 21(3), pp. 323-337; Meyer, D. (2008), “Interpreting and experiencing anti-queer violence: Race, class, and gender differences among LGBT hate crime victims,” Race, Gender & Class, pp. 262-282.

25 Sherry, M. (2012), “Disability hate crimes: Does anyone really hate disabled people?”, Ashgate Publishing, Ltd.

26 Titley, G., Keen, E. and Földi, L. (2014), “Starting points for combating hate speech online,” Council of Europe, October 2014.

27 Bajt, V., (2014). “Contemporary racism across Europe”, Freedom From Fear Magazine, 9, 36-41.

28 ECRI (2013), “Annual Report on ECRI’s Activities,” (1 Jan – 31 Dec, 2012) Council of Europe, Strasbourg.

29 FRA (2012), “Making hate crime visible in the European Union: Acknowledging victims’ rights”, FRA – European Union Agency for Fundamental Rights, Brussels.

30 UNICRI (ed. 2014), “Investigating and reporting online hate speech. Training manual,” Light On Project, Turin.

31 Mnet (2012), “Responding to Online Hate,” Media Awareness Network, Ottawa, Canada.

32 Council of Europe, (2012), “Cyberhate and freedom of expression” (paragraph 3), in Mapping study on projects against online hate speech. DDCP-YD/CHS (2012) 2, Council of Europe, Strasbourg.

33 UNICRI (ed. 2014).