
Open access | Published: 11 November 2021

Dynamics of online hate and misinformation

  • Matteo Cinelli 1 ,
  • Andraž Pelicon 2 , 3 ,
  • Igor Mozetič 2 ,
  • Walter Quattrociocchi 4 ,
  • Petra Kralj Novak 2 &
  • Fabiana Zollo 1  

Scientific Reports, volume 11, Article number: 22083 (2021)


  • Computer science
  • Information technology

Online debates are often characterised by extreme polarisation and heated discussions among users. The presence of hate speech online is becoming increasingly problematic, making the development of appropriate countermeasures necessary. In this work, we perform hate speech detection on a corpus of more than one million comments on YouTube videos through a machine learning model, trained and fine-tuned on a large set of hand-annotated data. Our analysis shows that there is no evidence of the presence of “pure haters”, intended as active users posting exclusively hateful comments. Moreover, consistently with the echo chamber hypothesis, we find that users skewed towards one of the two categories of video channels (questionable, reliable) are more prone to use inappropriate, violent, or hateful language within their opponents’ community. Interestingly, users loyal to reliable sources use, on average, more toxic language than their counterparts. Finally, we find that the overall toxicity of the discussion increases with its length, measured both in terms of the number of comments and time. Our results show that, consistently with Godwin’s law, online debates tend to degenerate towards increasingly toxic exchanges of views.


Introduction

Public debates on social media platforms are often heated and polarised 1, 2, 3. Back in the 90s, Mike Godwin coined the adage known today as Godwin’s law, stating that “As an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches one”. More recently, with the advent of social media, an increasing number of people report exposure to online hate speech 4, leading institutions and online platforms to investigate possible solutions and countermeasures 5. To prevent and counter the spread of hate speech online, for example, the European Commission agreed with Facebook, Microsoft, Twitter, YouTube, Instagram, Snapchat, Dailymotion, Jeuxvideo.com, and TikTok on a “Code of conduct on countering illegal hate speech online” 6. In addition to fuelling the toxicity of the online debate, hate speech may have severe offline consequences. Some researchers have hypothesised a causal link between online hate and offline violence 7, 8, 9. Furthermore, there is empirical evidence that online hate may induce fear of offline repercussions 10. However, detecting and countering hate speech is complicated. There are still ambiguities in the very definition of hate speech, with academics and relevant stakeholders providing their own interpretations 4, including social media companies such as Facebook 11, Twitter 12, and YouTube 13.

We use the term “hate speech” to cover the whole spectrum of language used in online debates, from normal and acceptable speech to extreme speech inciting violence. At the extreme end, violent speech covers all forms of expression which spread, incite, promote or justify racial hatred, xenophobia, antisemitism or other forms of hatred based on intolerance, including intolerance expressed by aggressive nationalism and ethnocentrism, and discrimination and hostility against minorities, migrants and people of immigrant origin 14. Less extreme forms of unacceptable speech include inappropriate language (e.g., profanity) and offensive language (e.g., dehumanisation, offensive remarks), which are not illegal but deteriorate public discourse and can lead to a more radicalised society.

In this work, we analyse a corpus of more than one million comments on Italian YouTube videos related to COVID-19 to unveil the dynamics and trends of online hate. First, we manually annotate a large corpus of YouTube comments for hate speech, and train and fine-tune a deep learning model to detect it accurately. Then, we apply the model to the entire corpus, aiming to characterise the behaviour of users producing hate and to shed light on the (possible) relationship between the consumption of misinformation and the usage of hateful and toxic language. The reason for performing hate speech detection in Italian is two-fold. First, Italy was one of the countries most affected by the COVID-19 pandemic and especially by the early application of non-pharmaceutical interventions (a strict lockdown was imposed on March 9, 2020). Such an event, by forcing people to stay at home, increased internet use and was likely to exacerbate the public debate and foment hate speech against specific targets such as the government and politicians. Second, Italian is a less studied language in comparison to English or German 15 and, to the best of our knowledge, this is the first study to investigate hate speech in Italian on YouTube.

This work advances the current literature at different levels. There is a large body of literature about community-level hate speech 16, 17, 18. However, less is known about the behavioural features of users using hate speech on mainstream social media platforms, with few recent exceptions for Twitter 19, 20, 21 and Gab 18. Furthermore, to our knowledge, the relationship between online hate and misinformation is yet to be explored. In this paper, we study hate speech with respect to a controversial and heated topic, i.e., COVID-19, which has already been analysed in terms of sinophobic attitudes 22. We relax the assumption behind many community-based studies, whereby every post produced within an online community hosting haters is considered hateful 17, 23. Instead, to cope with a classification task that involves more than one million comments, we annotate a high-quality dataset of more than 70,000 YouTube comments, which is used for training and evaluating a deep learning model. The model follows the current state of the art and builds on a wide strand of literature using machine learning 24, 25, 26 and deep learning 27, 28, 29 for automatic hate speech detection via text classification. Moreover, we divide YouTube channels into two categories: questionable, i.e., channels likely to disseminate misinformation, and reliable. This categorisation is in line with previous studies on the spreading of misinformation 30, 31, 32, and builds on a list of misinformation sources provided by the Italian Communications Regulatory Authority (AGCOM).

Our results show that hate speech on YouTube is slightly more prevalent than on other social media platforms 20, 21, 33 and that there are no significant differences between the proportions of hate speech detected in comments on videos from questionable and reliable channels. We also note that hate speech does not show specific temporal patterns, even on questionable channels. Interestingly, we do not find evidence of “pure haters”, intended as active users posting exclusively hateful comments. Still, we note that users skewed towards one of the two categories of video channels (questionable, reliable) are more prone to use toxic language—i.e., inappropriate, violent, or hateful—within their opponents’ community. Interestingly, users skewed towards reliable content use, on average, more toxic language than their counterparts. Finally, we find that the overall toxicity of the discussion increases with its length, measured both in terms of the number of comments and time. In other words, online debates tend to degenerate towards increasingly toxic exchanges of views, in line with Godwin’s law.

Data collection

We collected about 1.3M comments posted by more than 345,000 users on 30,000 videos from 7000 channels on YouTube. According to summary statistics about YouTube by Statista 34, the number of YouTube users in Italy in 2019 was about 24 million (roughly one third of the Italian population). We can assess the representativeness of our dataset by applying the 1% empirical rule, according to which 99% of the participants in an Internet community only view content (the so-called lurkers), while only 1% of the users actively participate in the debate (e.g., interacting with content, posting information, commenting). We can therefore expect that, out of 24 million users on the platform, a population of about 240,000 users usually interacts with the content. Taking into account these estimates, the size of our sample (345,000) seems appropriate, especially considering that we are focusing on a specific topic (COVID-19) and not on the whole content of the platform. These considerations are also consistent with another statistic of our dataset: the videos show an average of 5M daily views (with peaks of 20M).

Using the official YouTube Data API, we performed a keyword search for videos that matched a list of keywords, i.e., {coronavirus, nCov, corona virus, corona-virus, covid, SARS-CoV}. An in-depth search was then performed by crawling the network of related videos as provided by the YouTube algorithm. From the gathered collection, we then kept only the videos that matched our set of keywords in the title or description. Finally, we collected the comments received by these videos. The title and the description of each video, as well as the comments, are in Italian according to Google’s cld3 language detection service. The set of videos covers the time window from 01/12/2019 to 21/04/2020, while the set of comments spans from 15/01/2020 to 15/06/2020.
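For illustration, a minimal sketch of such a collection pipeline is given below. It is not the authors' code: it assumes the official YouTube Data API Python client and Google's cld3 detector, uses a placeholder API key, and omits quota handling, replies and the related-video crawl.

```python
# Minimal sketch of the keyword search and comment collection described above.
# API_KEY is a placeholder; pagination is simplified.
from googleapiclient.discovery import build
import gcld3

API_KEY = "YOUR_API_KEY"  # placeholder
KEYWORDS = ["coronavirus", "nCov", "corona virus", "corona-virus",
            "covid", "SARS-CoV"]

youtube = build("youtube", "v3", developerKey=API_KEY)
detector = gcld3.NNetLanguageIdentifier(min_num_bytes=0, max_num_bytes=1000)

def search_videos(query, max_pages=5):
    """Return ids of Italian-language videos matching the query."""
    video_ids, page_token = [], None
    for _ in range(max_pages):
        resp = youtube.search().list(q=query, part="id,snippet", type="video",
                                     maxResults=50, pageToken=page_token).execute()
        for item in resp.get("items", []):
            text = item["snippet"]["title"] + " " + item["snippet"]["description"]
            # keep only Italian videos, as detected by cld3
            if detector.FindLanguage(text=text).language == "it":
                video_ids.append(item["id"]["videoId"])
        page_token = resp.get("nextPageToken")
        if not page_token:
            break
    return video_ids

def get_comments(video_id):
    """Collect the top-level comments of a video."""
    comments, page_token = [], None
    while True:
        resp = youtube.commentThreads().list(part="snippet", videoId=video_id,
                                             maxResults=100, pageToken=page_token,
                                             textFormat="plainText").execute()
        for item in resp.get("items", []):
            top = item["snippet"]["topLevelComment"]["snippet"]
            comments.append({"text": top["textDisplay"],
                             "published": top["publishedAt"]})
        page_token = resp.get("nextPageToken")
        if not page_token:
            break
    return comments

video_ids = {vid for kw in KEYWORDS for vid in search_videos(kw)}
```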

We assigned a binary label to each YouTube channel to distinguish between two categories: questionable and reliable. A questionable YouTube channel is a channel producing unverified and false content, or directly associated with a news outlet that failed multiple fact checks performed by independent fact-checking agencies. The list of YouTube channels labelled as questionable was provided by the Italian Communications Regulatory Authority (AGCOM). The remainder of the channels were labelled as reliable. Table 1 shows a breakdown of the dataset.

Hate speech model

Our aim is to create a state-of-the-art hate speech model using deep learning methods. We first produce two high-quality, manually annotated datasets for training and evaluating the model. The training set is intentionally selected to contain as much hate speech vocabulary as possible, while the evaluation set is unbiased, to ensure proper model evaluation. We then apply the model to all the collected data and study the relationship between the hate speech phenomenon and misinformation.

Deep learning models based on the Transformer architecture outperform other approaches to automated hate speech detection, as evident from recent shared tasks in the SemEval-2019 evaluation campaign, HatEval 28 and OffensEval 35, as well as OffensEval 2020 29. The central reference for hate speech detection in Italian is the report on the EVALITA 2018 hate speech detection task 36. Furthermore, the authors of 37 modelled the hate speech task using the Italian pre-trained language model AlBERTo, achieving state-of-the-art results on Facebook and Twitter datasets. We trained a new hate speech detection model for Italian following this state-of-the-art approach 37 on our four-class hate speech detection task (see sections “Data selection and annotation” and “Classification” for detailed information).

Data selection and annotation

The comments to be annotated were sampled from the Italian YouTube comments on videos about the COVID-19 pandemic in the period from January 2020 to May 2020. Two sets were annotated: a hate-speech-rich training set with 59,870 comments and an unbiased evaluation set with 10,536 comments.

To obtain a training set rich in hate speech, we scored all the comments with a (basic) hate speech classifier (a machine learning model) that assigns a score between -3 (hateful) and +3 (normal). The basic classifier was trained on a publicly available dataset of Italian hate speech against immigrants 38. Even though this basic model is not very accurate, its performance is better than random, and we used its output to select the training data to be annotated and later used for training our deep learning model. For a realistic evaluation scenario, threads (i.e., all the comments to a video) were kept intact during the annotation procedure, although labels were assigned to individual comments.

The threads (with comments) to be annotated for the training set were selected according to the following criteria: thread length (between 10 and 500 comments in a thread) and hatefulness (at least 5% of hateful comments according to our basic classifier). The application of these criteria resulted in 1168 threads (VideoIds) and 59,870 comments. The evaluation set was selected from May 2020 data as a random (unbiased) sample of 151 threads (VideoIds) with 10,543 comments.
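A minimal sketch of these selection criteria is given below; column names (video_id, basic_score) are hypothetical, and negative basic-classifier scores are treated as hateful, which is an assumption of this sketch.

```python
# Sketch of the thread selection criteria described above: thread length
# between 10 and 500 comments and at least 5% hateful comments according to
# the basic classifier. Column names are hypothetical.
import pandas as pd

def select_training_threads(comments: pd.DataFrame, min_len=10, max_len=500,
                            min_hate_share=0.05) -> pd.DataFrame:
    selected = []
    for video_id, thread in comments.groupby("video_id"):
        if not (min_len <= len(thread) <= max_len):
            continue
        if (thread["basic_score"] < 0).mean() >= min_hate_share:
            selected.append(video_id)
    # threads are kept intact: return every comment of each selected video
    return comments[comments["video_id"].isin(selected)]
```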

Our hate speech annotation schema is adapted from OLID 39 and FRENK 40 . We differentiate between the following speech types:

Acceptable (non-hate speech);

Inappropriate (the comment contains terms that are obscene or vulgar, but the text is not directed to any person or group specifically);

Offensive (the comment includes offensive generalisation, contempt, dehumanisation, or indirect offensive remarks);

Violent (the comment’s author threatens, indulges, desires or calls for (physical) violence against a target; it also includes calling for, denying or glorifying war crimes and crimes against humanity).

The data was split among eight contracted annotators. Each comment was annotated twice by two different annotators. The splitting procedure was optimised to get approximately equal overlap (in the number of comments) between each pair of annotators for each dataset. The annotators were given clear annotation guidelines, a training session and a test on a small set to evaluate their understanding of the task and their commitment before starting the annotation procedure. Furthermore, the annotation progress was closely monitored in terms of the annotator agreement to ensure high data quality.

The annotation results for the training and evaluation sets are summarised in Fig.  1 . The annotator agreement in terms of Krippendorff’s \(Alpha\)   41 and accuracy (i.e., percentage of agreement) on both the training and the evaluation sets is presented in Table  2 . The agreement results indicate that the annotation task is difficult and ambiguous, as the annotators agree on the label in only about 80% of the cases. Since the class distribution is very unbalanced, accuracy is not the most appropriate measure of agreement. \(Alpha\)   is a better measure of agreement as it accounts for the agreement by chance. Our agreement scores in terms of \(Alpha\)   are comparable to those of other high-quality datasets, like 21 , 42 .

Figure 1. The distribution of the four hate speech labels in the manually annotated training (a) and evaluation (b) sets. The training set is intentionally biased to contain more hate speech, while the evaluation set is unbiased.

Classification

A state-of-the-art neural model based on Transformer language models was trained to distinguish between the four hate speech classes. We use a language model based on the BERT architecture 43, which consists of 12 stacked Transformer blocks with 12 attention heads each. We attach a linear layer with a softmax activation function to the output of these layers to serve as the classification layer. As input to the classifier, we take the representation of the special [CLS] token from the last layer of the language model. The whole model is jointly trained on the downstream task of four-class hate speech detection. We used AlBERTo 44, a BERT-based language model pre-trained on a collection of tweets in the Italian language. Following previous work 43, fine-tuning of the neural model was performed end-to-end. We used the Adam optimizer with a learning rate of \(2 \times 10^{-5}\) and learning rate warmup over the first 10% of the training instances. We used a weight decay of 0.01 for regularisation. The model was trained for 3 epochs with batch size 32. We trained the models using the HuggingFace Transformers library 45.
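A minimal fine-tuning sketch with the HuggingFace Transformers Trainer, matching the hyperparameters reported above, could look as follows. This is an illustration, not the authors' code: the AlBERTo checkpoint identifier is a placeholder, and the tokenised train/evaluation datasets are assumed to be prepared beforehand.

```python
# Fine-tuning sketch: learning rate 2e-5, warmup over the first 10% of steps,
# weight decay 0.01, 3 epochs, batch size 32. ALBERTO_CHECKPOINT is a
# placeholder; train/eval datasets are assumed to be tokenised datasets.Dataset
# objects with "input_ids", "attention_mask" and "label" columns.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

ALBERTO_CHECKPOINT = "path/to/alberto-italian"  # placeholder
LABELS = ["acceptable", "inappropriate", "offensive", "violent"]

def fine_tune(train_dataset, eval_dataset):
    tokenizer = AutoTokenizer.from_pretrained(ALBERTO_CHECKPOINT)
    model = AutoModelForSequenceClassification.from_pretrained(
        ALBERTO_CHECKPOINT, num_labels=len(LABELS))
    args = TrainingArguments(
        output_dir="hate-speech-it",
        learning_rate=2e-5,
        warmup_ratio=0.1,               # warmup over the first 10% of steps
        weight_decay=0.01,
        num_train_epochs=3,
        per_device_train_batch_size=32,
    )
    trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
                      train_dataset=train_dataset, eval_dataset=eval_dataset)
    trainer.train()
    return trainer
```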

The tuning of our models was performed by cross validation on the training set, while the final evaluation was performed on the separate, out-of-sample evaluation set. In our setup, each data instance (YouTube comment) is labelled twice, possibly with inconsistent labels. To avoid data leakage between training and testing splits in cross validation, we use 8-fold cross validation where, in each fold, we use all the comments annotated by one annotator as the test set. We report the performance of the trained models using the same measures used for the annotator agreement: Krippendorff’s Alpha-reliability (\(Alpha\)) 41, accuracy (\(Acc\)), and the \(F_{1}\) score for individual classes, on both the training and the evaluation datasets. The validation results are reported in Table 3. The coincidence matrices for the evaluation set, used to compute all the scores of the annotator agreement and the model performance, are reported in Table S8 of SI.
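A sketch of the leave-one-annotator-out folds and of these measures is given below, assuming the annotations sit in a pandas DataFrame with hypothetical annotator and label columns (labels integer-encoded) and using the krippendorff and scikit-learn packages.

```python
# Sketch of the annotator-based folds and the agreement/performance measures
# mentioned above. `annotations` has one row per (comment, annotator) pair.
import pandas as pd
import krippendorff
from sklearn.metrics import accuracy_score, f1_score

def annotator_folds(annotations: pd.DataFrame):
    """Yield (train, test) splits; each test fold holds all the comments
    labelled by one annotator, mimicking the 8-fold setup described above."""
    for annotator in annotations["annotator"].unique():
        test_mask = annotations["annotator"] == annotator
        yield annotations[~test_mask], annotations[test_mask]

def agreement_scores(labels_a, labels_b):
    """Krippendorff's Alpha, accuracy and per-class F1 between two labellings
    of the same comments (annotator vs annotator, or annotator vs model)."""
    alpha = krippendorff.alpha(reliability_data=[labels_a, labels_b],
                               level_of_measurement="nominal")
    return {"alpha": alpha,
            "accuracy": accuracy_score(labels_a, labels_b),
            "f1_per_class": f1_score(labels_a, labels_b, average=None)}
```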

The performance of our model is comparable to the annotator agreement in terms of Krippendorff’s \(Alpha\) and accuracy (\(Acc\)), providing evidence of its high quality. The model matches the annotator agreement both on the training set, in the cross-validation setting, and on the evaluation set. This shows the ability of the model to generalise well to unseen, out-of-sample evaluation data. We observe similar results in terms of \(F_{1}\) scores for individual classes. The only noticeable drop in performance compared to the annotators is on the minority (Violent) class. We attribute this drop to the very low amount of data available for the Violent class compared to the other classes; the performance is nevertheless still reasonable. We therefore apply our hate speech detection model to the set of 1.3M comments and report the findings.
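For illustration, applying the released classifier (see the Data availability section) to new comments could look as follows; the mapping of the model's raw output labels onto the four classes is an assumption made here for readability, not taken from the paper.

```python
# Sketch of batch inference with the published model.
from transformers import pipeline

classifier = pipeline("text-classification", model="IMSyPP/hate_speech_it")

LABEL_MAP = {"LABEL_0": "acceptable", "LABEL_1": "inappropriate",
             "LABEL_2": "offensive", "LABEL_3": "violent"}  # assumed ordering

def classify_comments(texts, batch_size=64):
    predictions = classifier(texts, batch_size=batch_size)
    return [LABEL_MAP.get(p["label"], p["label"]) for p in predictions]

print(classify_comments(["Grazie per il video, molto interessante!"]))
```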

Results and discussion

Relationship between hate speech and misinformation

We start our analysis by examining the distribution of the different speech types on both reliable and questionable YouTube channels. Figure 2 shows the cumulative distribution of comments, total and per type, by channel. The x-axis shows the YouTube channels ranked by their total number of comments, while the y-axis shows the total number of comments in the dataset (both quantities are reported as proportions). We observe that the distribution of comments is Pareto-like; indeed, the first 10% of channels (dotted vertical line) covers about 90% of the total number of comments. Such a 10-to-90 percent relationship is even stronger when comments are analysed according to their types; indeed, the heterogeneity of the distribution decreases going from violent to acceptable comments. It is also worth noting that, as indicated by the secondary y-axis of Fig. 2, the first 10% of channels with most comments also contains about 50% of all the questionable channels in our list, thus indicating a relatively high popularity of these channels. In addition, questionable channels are about 0.25% of the total number of channels that received at least one comment and, despite being such a minority, they cover \(\sim\) 8% of the total number of comments (with the following partitioning: 8% acceptable; 7% inappropriate; 9% offensive; 9% violent) and 1.3% of the total number of videos, thus highlighting a disproportion between their activity and popularity.

Figure 2. Ranking of YouTube channels by number of comments and proportions of comment types per channel.

Figure 3 shows the proportion of comments by label and channel type, and their trend over time. In panel (a) we display the overall proportion of comment types, noting that the majority of comments are acceptable, followed by offensive, inappropriate, and violent types, all relatively stable over time (see panel (b)). It is worth remarking that, although the proportion of hate speech found in the dataset is consistent with—if slightly higher than—previous studies 20, 33, the presence of even a limited number of hateful comments is in direct conflict with the platform’s policy against hate speech. Moreover, we do not observe relevant differences between questionable (panel (c)) and reliable (panel (d)) channels, providing a first piece of evidence in favour of a moderate (if not absent) relationship between online hate and misinformation.

Figure 3. Proportion of the four hate speech labels in the whole dataset (a), over time (b), and for questionable (c) and reliable (d) YouTube channels. Panel (b) displays four dashed lines corresponding to events of paramount relevance for Italy in 2020. The first line is placed on 30/01/2020, when the first two cases of COVID-19 were detected in Italy. The second line is placed on 09/03/2020, when the Prime Minister enforced the first nationwide lockdown. The third line is placed on 10/04/2020, when the Prime Minister announced an extension of the lockdown until May 3. The fourth line is placed on 04/05/2020, when “phase 2” (i.e., the suspension of the full lockdown) began. Interestingly, we note a higher share of Acceptable comments between the second and third lines, that is, during the lockdown, perhaps due to positive messages and encouragement among people. Instead, as a possible consequence of the extension of the lockdown, we note a lower share of Acceptable comments right after the third line.

Now we aim at understanding whether hateful comments display a typical (technically, the average) time of appearance. This kind of information can indeed be crucial for the implementation of timely moderation efforts. More specifically, our goal is to discover whether (1) different speech types have typical delays and (2) any difference exists between comments on videos disseminated by questionable and reliable channels. To this aim, we define the comment delay as the time elapsed between the posting time of the video and that of the comment (in hours). Figure 4 displays the comment delays for the four types of hate speech and for questionable and reliable channels. Looking at panel (a) of Fig. 4, we first note that all comments share approximately the same delay regardless of their type. Indeed, the distributions of the comment delay are roughly log-normal, with a long average delay ranging from 120 h in the case of acceptable comments to 128 h in the case of violent comments (the comment delay is reduced by \(\sim 75\%\) when removing observations in the right tail of the distribution, as shown in Table S1 of SI). Concerning comments on videos published by questionable and reliable channels, we do not find strong differences between the typical delays of speech types within either domain. In the case of questionable channels, comment delays range between 42 and 66 h, while for reliable channels they range between 125 and 136 h (as reported in SI). To summarise, we find a discrepancy in users’ responsiveness to the two types of content, with comments on questionable videos having a much lower typical delay than those on reliable videos. In addition, the ordering of typical delays across speech types differs between reliable and questionable channels. In particular, on questionable channels toxic comments appear first and faster than acceptable ones, following decreasing levels of toxicity (violent \(\rightarrow\) offensive \(\rightarrow\) inappropriate). In other words, violent comments on questionable content display the shortest typical delay, followed by offensive, inappropriate, and acceptable comments. Conversely, on reliable channels the shortest typical delay is observed for acceptable comments, followed by violent, inappropriate, and offensive comments (for details refer to SI).

Figure 4. Distribution of comment delays in the whole dataset (a) and for questionable (b) and reliable (c) YouTube channels. The capital letters on the x-axis represent the different types of comments: acceptable (A); inappropriate (I); offensive (O); violent (V).

Users’ behaviour and misinformation

In line with other social media platforms 30, 46, user activity on YouTube follows a heavy-tailed distribution, i.e., the majority of users post few comments, while a small minority is hyperactive (see Fig. S1 of SI for details). We now want to investigate whether a systematic tendency towards offences and hate can be observed for some (categories of) users. In Fig. 5, each vertex of the square represents one of the four speech types (acceptable—A; inappropriate—I; offensive—O; violent—V). Each dot is a user whose position in the square depends on the fraction of his/her comments in each category. As an example, a user posting only acceptable comments will be located exactly on the vertex A (i.e., at (0,0)), while a user that splits his/her activity evenly between acceptable and inappropriate comments will be located in the middle of the edge connecting the vertices A and I. Similarly, a user posting only violent comments will be located exactly on the vertex V (i.e., at (1,0)). More formally, to reduce the 4-dimensional space deriving from the four labels that fully characterise the activity of each user, we associate with each user j the following coordinates in a 2-dimensional space:

where \(a_j\) , \(i_j\) , \(o_j\) , \(v_j\) are the proportions, respectively, of acceptable, inappropriate, offensive, and violent comments posted by user j over his/her total activity \(c_j\) .

Figure 5. Users’ balance between different comment types. In panel (a) brighter dots indicate a higher density of users, while in panel (b) brighter dots indicate a higher average activity of the users in terms of number of comments. We note that users focused on posting comments labelled as offensive and violent are almost absent in the data.

Although most users leave only or mostly acceptable comments, there are also several users ranging across categories (i.e., located away from the vertices of the square in Fig. 5). Interestingly, there is no evidence of “pure haters”, i.e., active users exclusively using hateful language: users posting exclusively hateful comments are only 0.3% of the total number of users. Indeed, while there are users posting only or mostly violent comments (see Fig. 5a), their overall activity is very low and below five comments (see Fig. 5b). A similar situation is observed for offenders, i.e., active users posting only offensive comments. Although we cannot exclude that moderation efforts put in place by YouTube (if any) might partially impact these results, the absence of pure haters and offenders highlights that hate speech is rarely only an issue of specific categories of users. Rather, it seems that regular users are occasionally triggered by external factors. To rule out possible confounding factors (note that users located in the centre of the square could display a balanced activity between different pairs of comment categories), we repeated the analysis excluding the category I (i.e., inappropriate). The results are provided in SI and confirm what we observe in Fig. 5.

We now aim at unveiling the relationship between users’ commenting patterns and their activity with respect to questionable and reliable channels. Since misinformation is often associated with the diffusion of polarising content that plays on people’s fears and can fuel anger, frustration and hate 47, 48, 49, our intent is to understand whether users more loyal to questionable content are also more prone to use toxic language in their comments. Thus, we define the leaning \(l_j\) of a user j as the fraction of his/her activity spent commenting on videos posted by questionable channels, i.e.,

\(l_j = \frac{\sum _{i = 1}^{c_j} q_i}{c_j},\)

where \(\sum _{i = 1}^{c_j} q_i\) is the number of comments on videos from questionable channels posted by user j and \(c_j\) is the total activity of user j. Similarly, for each user j we compute the fraction of unacceptable comments \({\overline{a}}_j\) as:

\({\overline{a}}_j = 1 - a_j,\)

where \(a_j\) is the fraction of acceptable comments posted by user j.
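A sketch of these user-level quantities, the leaning \(l_j\) and the fraction of unacceptable comments \({\overline{a}}_j\), computed from a comments table with hypothetical user_id, channel_type and label columns, is given below.

```python
# Per-user leaning (share of comments under questionable channels) and
# fraction of unacceptable comments (1 minus the share of acceptable ones).
import pandas as pd

def user_leaning_and_toxicity(comments: pd.DataFrame) -> pd.DataFrame:
    grouped = comments.groupby("user_id")
    leaning = grouped["channel_type"].apply(
        lambda s: (s == "questionable").mean())
    unacceptable = grouped["label"].apply(
        lambda s: (s != "acceptable").mean())
    return pd.DataFrame({"leaning": leaning,
                         "unacceptable_fraction": unacceptable})
```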

In Fig. 6a, we compare users’ leaning \(l_j\) against the fraction of unacceptable comments \({\overline{a}}_j\). As expected, we observe two peaks (of different magnitude) at the extreme values of leaning (\(l_j \sim 0\) and \(l_j \sim 1\)), represented by the brighter squares in the plot. In addition, the joint distribution becomes sparser for higher values of users’ leaning and fraction of unacceptable comments (\(l_j \ge 0.5\) and \({\overline{a}}_j \ge 0.5\)), indicating that a relevant share of users is placed at the two extremes of the distribution (thus being somewhat polarised) and that users producing mostly unacceptable comments are far less common.

In Fig. 6b, we display the proportion of unacceptable comments posted by users whose leaning lies at the two tails of the distribution (i.e., users displaying a remarkable tendency to comment on questionable videos, \(l_j \in [0.75,1)\), and users with a remarkable tendency to comment on reliable videos, \(l_j \in (0,0.25]\)). We find that users skewed towards reliable channels post, on average, a higher proportion of unacceptable comments (\(\sim 23\%\)) than users skewed towards questionable channels (\(\sim 17\%\)). In other words, users who tend to comment on reliable videos are also more prone to use unacceptable/toxic language. Further statistics on the two distributions are reported in SI.

Panel (c) of Fig. 6 provides a comparison between the distributions of unacceptable comments posted by users skewed towards questionable channels (q in the legend) on videos published by either questionable or reliable channels. Panel (d) of Fig. 6 provides a similar representation for users skewed towards reliable channels (r in the legend). We note a strong difference in users’ behaviour: the distribution is roughly unimodal when they comment on videos on the same side as their leaning, and bimodal when they comment on videos on the opposite side. Therefore, users tend to avoid using toxic language when they comment on videos in accordance with their leaning, and to separate into roughly two classes (non-toxic, toxic) when they comment on videos in contrast with their preferences. This finding resonates with evidence of online polarisation and with the presence of peculiar characters of the internet such as trolls and social justice warriors.

Figure 6. Panel (a) displays the relationship between the preference of users for questionable and reliable channels (the user leaning \(l_j\)) and the fraction of unacceptable comments posted by the user (\({\overline{a}}_j\)) as a joint distribution. Panel (b) displays the distribution of unacceptable comments for users displaying a remarkable tendency to comment under videos posted by questionable (\(l_j \in [0.75,1)\)) and reliable (\(l_j \in (0,0.25]\)) channels. Panel (c) displays the distribution of unacceptable comments posted by users leaning towards questionable channels (\(l_j \in [0.75,1)\), indicated as q) under videos of questionable channels (dashed line, q to q in the legend) and under videos of reliable channels (solid line, q to r in the legend). Panel (d) displays the distribution of unacceptable comments posted by users leaning towards reliable channels (\(l_j \in (0,0.25]\), indicated as r) under videos of questionable channels (solid line, r to q in the legend) and under videos of reliable channels (dashed line, r to r in the legend).

Toxicity level of online debates

Finally, we aim at investigating whether online debates degenerate (i.e., increase their average toxicity) when the discussion gets longer, both in terms of number of comments and of time. More generally, we are interested in analysing how commenting dynamics change over time and whether online hate follows dynamics similar to those observed for users’ sentiment 31. Indeed, although violent comments and pure haters are quite rare, their presence could negatively impact the tone of the general debate. Furthermore, we want to understand whether the toxicity of comments tends to follow certain dynamics empirically observed on the internet, such as Godwin’s law. To this purpose, we test whether toxic comments tend to appear more frequently at later stages of the debate.

To compute the toxicity level of a debate around a certain video, we assign each speech type (A,I,O,V) a toxicity value t as follows:

Acceptable: t = 0

Inappropriate: t = 1

Offensive: t = 2

Violent: t = 3

Then, we define the toxicity level \(T\) of a discussion d of n comments as the average of the toxicity values over all the comments of the discussion:

\(T = \frac{1}{n}\sum _{i = 1}^{n} t_i ,\)

where \(t_i\) is the toxicity value of the i-th comment.

To understand how the toxicity level changes with respect to the number of comments and to the comment delay (i.e., the time elapsed between the posting time of the video and that of the comment), we employ linear regression models. Figure 7 shows that a positive relationship exists (i.e., average toxicity is an increasing function of both the number of comments and the comment delay), and that such a relationship cannot be reproduced by linear models obtained with randomised comment labels (regression outcomes and a validation of our results using proportions of unacceptable comments are reported in SI). We apply a similar approach to distinguish between comments on videos from questionable and reliable channels (as shown in SI). Overall, similarly to the general case, we find stronger positive effects in real data than in randomised models, although such effects are significant only for comments under videos posted by reliable channels.

Figure 7. Linear regression models for number of comments and comment delay. On the x-axis of panel (a) the comments are grouped in logarithmic bins, while on the x-axis of panel (b) the comment delays are grouped in linear bins.

Finally, to evaluate the (short-run) effect of violent comments, we study the transitions between subsequent comments in threads appearing under YouTube videos. We analyse threads rather than full lists of comments because YouTube comments are ranked according to several factors (including the number of likes received by the comment, the length of the thread, and the importance of the user who posted it). Therefore, given a certain video, we cannot be sure which comments (and in which order) a user actually sees. Threads do not suffer from this issue, since comments in threads are presented in chronological order. The aim of studying the transitions between comment types is to find specific transition patterns (probabilities) between toxic comments and to understand whether the conversation tends to evolve differently from random models. As an example, a thread with four comments, 1 Acceptable, 2 Offensive and 1 Violent (in this order), can be summarised with the string “AOOV”, which entails three transitions between comment types, namely {AO; OO; OV}. By extending such a process to all threads in our dataset, we can compute the transition probability from one comment type to another using a \(4 \times 4\) transition matrix. In this way, we can evaluate the possible presence of an escalation effect, whereby toxic comments could be immediately followed by increasingly toxic ones. The results are reported in Fig. 8, in which we notice that certain transition probabilities cannot be reproduced by a null model in which the sequences of comments within threads are randomised. In particular, unlike in the empirical data, in randomised instances the transition probability from one violent comment to another is 0, and the probability of passing from violent comments to unacceptable ones (inappropriate, offensive and violent) is always higher in the empirical case than at random. Similar results hold for offensive and inappropriate comments, but not for acceptable ones. This finding confirms the presence of a short-term influence of violent comments that could inflame the debate and escalate into streams of toxicity.

Figure 8. Transition probabilities between different comment types, represented by a \(4 \times 4\) transition matrix in the real (panel a) and in the random case (panel b). Brighter entries of the matrix indicate higher transition probabilities.
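A sketch of the thread-level transition matrix and of the shuffling null model described above is given below, using label sequences such as "AOOV" (one string per thread, over the alphabet AIOV).

```python
# 4x4 transition matrix between consecutive comment types within threads,
# plus a null model that shuffles the order of comments inside each thread.
import numpy as np

TYPES = "AIOV"
IDX = {t: i for i, t in enumerate(TYPES)}

def transition_matrix(threads):
    counts = np.zeros((4, 4))
    for seq in threads:
        for a, b in zip(seq, seq[1:]):
            counts[IDX[a], IDX[b]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # row-normalise, leaving rows with no outgoing transitions at zero
    return np.divide(counts, row_sums,
                     out=np.zeros_like(counts), where=row_sums > 0)

def shuffle_threads(threads, seed=0):
    rng = np.random.default_rng(seed)
    return ["".join(rng.permutation(list(seq))) for seq in threads]

threads = ["AOOV", "AAIA", "VVOA"]  # toy example
real = transition_matrix(threads)
null = transition_matrix(shuffle_threads(threads))
```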

Conclusions

The aim of this work is two-fold: (i) to investigate the behavioural dynamics of online hate speech and (ii) to shed light on its possible relationship with misinformation exposure and consumption. We apply a hate speech deep learning model to a large corpus of more than one million comments on Italian YouTube videos. Our analysis provides a series of important results which can support the development of appropriate solutions to prevent and counter the spread of hate speech online. First, there is no evidence of a strict relationship between the usage of toxic language (including hate speech) and involvement in the misinformation community on YouTube. Second, we do not observe the presence of “pure” haters; instead, it seems that the phenomenon of hate speech involves regular users who are occasionally triggered to use toxic language. Third, users’ polarisation and hate speech seem to be intertwined: indeed, users are more prone to use inappropriate, violent, or hateful language within their opponents’ community (i.e., outside their echo chamber). Finally, we find a positive correlation between the overall toxicity of the discussion and its length, measured both in terms of number of comments and time.

Our results are in line with recent studies about the (increasing) polarisation of online debates and segregation of users 50. Furthermore, they somewhat confirm the intuition behind empirically grounded laws such as Godwin’s law, which can be interpreted, by extension, as a statement regarding the increasing toxicity of online debates. A potential limitation of this work is YouTube’s relentless effort in moderating hate on the platform, which could have prevented us from having complete information about the actual presence of hate speech in public discussions. In spite of this limitation, after re-collecting the whole set of comments at least one year after their posting time, we find that only 32% of violent comments were unavailable due to either moderation or removal by the author (see Table S9 of SI). Another issue could be the presence of channels wrongly labelled as reliable instead of questionable (i.e., false negatives), or the fact that certain questionable sources available on YouTube are not included in the list, especially given the high variety of content available on the platform and the relative ease with which one can open a new channel. Nonetheless, our findings are robust with respect to these aspects (as we show in a dedicated section of SI). Future efforts should extend our work to languages other than Italian, to other social media platforms, and to other topics. For instance, studying hate speech in online political discourse over time could provide important insights into debated phenomena such as affective polarisation 51. Moreover, further research on possible triggers in the language and content of videos is desirable.

Data availability

The datasets generated during the current study for the purposes of training and evaluating the hate speech model are available at the CLARIN repository: http://hdl.handle.net/11356/1450 . The hate speech model is available at the HuggingFace repository: https://huggingface.co/IMSyPP/hate_speech_it .

Adamic, L. A., Glance, N. The political blogosphere and the 2004 us election: Divided they blog. In Proceedings of the 3rd International Workshop on Link Discovery , pp. 36–43 (2005).

Flaxman, S., Goel, S. & Rao, J. M. Filter bubbles, echo chambers, and online news consumption. Public Opin. Q. 80 (S1), 298–320 (2016).


Coe, K., Kenski, K. & Rains, S. A. Online and uncivil? Patterns and determinants of incivility in newspaper website comments. J. Commun. 64 (4), 658–679 (2014).

Siegel, A. A. Online hate speech. Social Media and Democracy , p. 56 (2019).

Gagliardone, I., Gal, D., Alves, T. & Martinez, G. Countering Online Hate Speech (Unesco Publishing, 2015).

European Commission. Code of conduct on countering illegal hate speech online. https://ec.europa.eu/newsroom/just/document.cfm?doc_id=42985 (Accessed: 27.09.2021).

Calvert, C. Hate speech and its harms: A communication theory perspective. J. Commun. 47 (1), 4–19 (1997).

Chan, J., Ghose, A. & Seamans, R. The internet and racial hate crime: Offline spillovers from online access. MIS Q. 40 (2), 381–403 (2016).

Müller, K. & Schwarz, C. Fanning the flames of hate: Social media and hate crime. J. Eur. Econ. Assoc. (2018).

Awan, I. & Zempi, I. We fear for our lives: Offline and online experiences of anti-muslim hostility. Technical report, Birmingham City University (2015).

Facebook. Community standards. https://www.facebook.com/communitystandards/introduction (Accessed: 27.09.2021).

Twitter. Violent organizations policy. https://help.twitter.com/en/rules-and-policies/violent-groups (Accessed: 27.09.2021).

YouTube. Hate speech policy. https://support.google.com/youtube/answer/2801939?hl=en&ref_topic=9282436 (Accessed: 27.09.2021).

Council of Europe. Recommendation no. r (97) 20 of the committee of ministers to member states on “hate speech”. https://go.coe.int/URzjs (Accessed: 27.09.2021).

Fortuna, P. & Nunes, S. A survey on automatic detection of hate speech in text. ACM Comput. Surv. (CSUR) 51 (4), 1–30 (2018).

Kumar, S., Hamilton, W. L., Leskovec, J. & Jurafsky, D. Community interaction and conflict on the web. In Proceedings of the 2018 World Wide Web Conference , pp. 933–943 (2018).

Johnson, N. F. et al. Hidden resilience and adaptive dynamics of the global online hate ecology. Nature 573 (7773), 261–265 (2019).


Mathew, B. et al. Hate begets hate: A temporal study of hate speech. Proc. ACM Hum. Comput. Interact. 4 (CSCW2), 1–24 (2020).

Ribeiro, M., Calais, P., Santos, Y., Almeida, V. & Meira Jr., W. Characterizing and detecting hateful users on twitter. In Proceedings of the International AAAI Conference on Web and Social Media , vol. 12 (2018).

Siegel, A. A. et al. Trumping hate on twitter? Online hate speech in the 2016 us election campaign and its aftermath. Q. J. Polit. Sci. 16 (1), 71–104 (2021).

Evkoski, B., Pelicon, A., Mozetič, I., Ljubešić, N. & Novak, P. K. Retweet communities reveal the main sources of hate speech. arXiv:2105.14898 (2021).

Schild, L., Ling, C., Blackburn, J., Stringhini, G., Zhang, Y. & Zannettou, S. “Go eat a bat, chang!”: An early look on the emergence of sinophobic behavior on web communities in the face of covid-19. arXiv:2004.04046 (2020).

Chandrasekharan, E., Samory, M., Srinivasan, A. & Gilbert, E. The bag of communities: Identifying abusive behavior online with preexisting internet data. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems , pp. 3175–3187 (2017).

Burnap, P. & Williams, M. L. Us and them: Identifying cyber hate on twitter across multiple protected characteristics. EPJ Data Sci. 5, 1–15 (2016).

Del Vigna, F., Cimino, A., Dell’Orletta, F., Petrocchi, M. & Tesconi, M. Hate me, hate me not: Hate speech detection on facebook. In Proceedings of the First Italian Conference on Cybersecurity (ITASEC17) , pp. 86–95 (2017).

Davidson, T., Warmsley, D., Macy, M. & Weber, I. Automated hate speech detection and the problem of offensive language. In Proceedings of the International AAAI Conference on Web and Social Media , vol. 11 (2017).

Badjatiya, P., Gupta, S., Gupta, M. & Varma, V. Deep learning for hate speech detection in tweets. In Proceedings of the 26th International Conference on World Wide Web Companion , pp. 759–760 (2017).

Basile, V., Bosco, C., Fersini, E., Debora, N., Patti, V., Pardo, F. M. R., Rosso, P. & Sanguinetti, M. et al. Semeval-2019 task 5: Multilingual detection of hate speech against immigrants and women in twitter. In 13th International Workshop on Semantic Evaluation , pp. 54–63 (Association for Computational Linguistics, 2019).

Zampieri, M., Nakov, P., Rosenthal, S., Atanasova, P., Karadzhov, G., Mubarak, H., Derczynski, L., Pitenis, Z. & Çöltekin, Ç. Semeval-2020 task 12: Multilingual offensive language identification in social media (offenseval 2020). arXiv:2006.07235 (2020).

Cinelli, M. et al. The covid-19 social media infodemic. Sci. Rep. 10 (1), 1–10 (2020).

Zollo, F. et al. Emotional dynamics in the age of misinformation. PLoS One 10 (09), 1–22 (2015).

Zollo, F. et al. Debunking in a world of tribes. PLoS One 12 (7), e0181821 (2017).

Gagliardone, I., Pohjonen, M., Beyene, Z., Zerai, A., Aynekulu, G., Bekalu, M., Bright, J., Moges, M., Seifu, M. & Stremlau, N. et al. Mechachal: Online debates and elections in Ethiopia—from hate speech to engagement in social media. Available at SSRN 2831369 (2016).

Statista Research Department. Leading social media networks in Italy as of January 2019, ranked by number of active users. https://www.statista.com/statistics/639777/social-media-active-users-italy/ (Accessed: 27.09.2021).

Zampieri, M., Malmasi, S., Nakov, P., Rosenthal, S., Farra, N. & Kumar, R. SemEval-2019 task 6: Identifying and categorizing offensive language in social media (OffensEval). In Proceedings of the 13th International Workshop on Semantic Evaluation , pp. 75–86 (Association for Computational Linguistics, 2019).

Bosco, C., Dell’Orletta, F., Poletto, F., Sanguinetti, M. & Maurizio, T. Overview of the evalita 2018 hate speech detection task. In EVALITA 2018-Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian , vol. 2263, pp. 1–9 (CEUR, 2018).

Polignano, M., Basile, P., De Gemmis, M. & Semeraro, G. Hate speech detection through AlBERTo Italian language understanding model. In NL4AI@ AI* IA (2019).

Sanguinetti, M., Poletto, F., Bosco, C., Patti, V. & Stranisci, M. An Italian Twitter corpus of hate speech against immigrants. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018) (2018).

Zampieri, M., Malmasi, S., Nakov, P., Rosenthal, S., Farra, N. & Kumar, R. Predicting the type and target of offensive posts in social media. In Proceedings of NAACL (2019).

Ljubešić, N., Fišer, D. & Erjavec, T. The FRENK datasets of socially unacceptable discourse in Slovene and English (2019).

Krippendorff, K. Content Analysis. An Introduction to its Methodology , 4th edn. (Sage Publications, 2018).

Mozetič, I., Grčar, M. & Smailović, J. Multilingual Twitter sentiment classification: The role of human annotators. PLoS One 11 (5), e0155036 (2016).

Devlin, J., Chang, M.-W., Lee, K., Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv:1810.04805 (2018).

Polignano, M., Basile, P., De Gemmis, M., Semeraro, G. & Basile, V. AlBERTo: Italian BERT language understanding model for NLP challenging tasks based on tweets. In 6th Italian Conference on Computational Linguistics, CLiC-it 2019 , vol. 2481, pp. 1–6 (CEUR, 2019).

Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., von Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q. & Rush, A. M. Hugging face’s transformers: State-of-the-art natural language processing. arXiv:abs/1910.03771 (2019).

Del Vicario, M. et al. The spreading of misinformation online. Proc. Natl. Acad. Sci. 113 (3), 554–559 (2016).


Del Vicario, M., Quattrociocchi, W., Scala, A. & Zollo, F. Polarization and fake news: Early warning of potential misinformation targets. ACM Trans. Web (TWEB) 13 (2), 1–22 (2019).

Osmundsen, M., Bor, A., Vahlstrup, P. B., Bechmann, A. & Petersen, M. B. Partisan polarization is the primary psychological motivation behind political fake news sharing on twitter. Am. Polit. Sci. Rev. , 1–17 (2020).

Guess, A., Nagler, J., & Tucker, J. Less than you think: Prevalence and predictors of fake news dissemination on facebook. Sc. Adv. 5 (1), eaau4586 (2019).

Cinelli, M., De Francisci Morales, G., Galeazzi, A., Quattrociocchi, W., Starnini, M. The echo chamber effect on social media. Proc. Natl. Acad. Sci. 118 (9) (2021).

Druckman, J. N., Klar, S., Krupnikov, Y., Levendusky, M. & Ryan, J. B. Affective polarization, local contexts and public opinion in America. Nat. Hum. Behav. 5 (1), 28–38 (2021).


Acknowledgements

The authors acknowledge financial support from the Slovenian Research Agency (research core funding no. P2-103), and the European Union’s Rights, Equality and Citizenship Programme under Grant Agreement no. 875263. The authors wish to thank Arnaldo Santoro for his support with the categorisation of misinformation sources.

Author information

Authors and Affiliations

Ca’ Foscari University of Venice, Venice, Italy

Matteo Cinelli & Fabiana Zollo

Jozef Stefan Institute, Ljubljana, Slovenia

Andraž Pelicon, Igor Mozetič & Petra Kralj Novak

Jozef Stefan International Postgraduate School, Ljubljana, Slovenia

Andraž Pelicon

Sapienza University of Rome, Rome, Italy

Walter Quattrociocchi


Contributions

M.C. and F.Z. designed the experiment and supervised the data annotation task; A.P., I.M., and P.K.N. developed the classification model and prepared Fig. 1 . M.C. performed the analysis and prepared Figs. 2 , 3 , 4 , 5 , 6 and 7 . All authors contributed to the interpretation of the results and wrote the manuscript.

Corresponding author

Correspondence to Fabiana Zollo .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Table S1.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Cinelli, M., Pelicon, A., Mozetič, I. et al. Dynamics of online hate and misinformation. Sci Rep 11 , 22083 (2021). https://doi.org/10.1038/s41598-021-01487-w


Received : 28 June 2021

Accepted : 22 October 2021

Published : 11 November 2021

DOI : https://doi.org/10.1038/s41598-021-01487-w





Hate in the Machine: Anti-Black and Anti-Muslim Social Media Posts as Predictors of Offline Racially and Religiously Aggravated Crime


Matthew L Williams, Pete Burnap, Amir Javed, Han Liu, Sefa Ozalp, Hate in the Machine: Anti-Black and Anti-Muslim Social Media Posts as Predictors of Offline Racially and Religiously Aggravated Crime, The British Journal of Criminology , Volume 60, Issue 1, January 2020, Pages 93–117, https://doi.org/10.1093/bjc/azz049


National governments now recognize online hate speech as a pernicious social problem. In the wake of political votes and terror attacks, hate incidents online and offline are known to peak in tandem. This article examines whether an association exists between both forms of hate, independent of ‘trigger’ events. Using Computational Criminology that draws on data science methods, we link police crime, census and Twitter data to establish a temporal and spatial association between online hate speech that targets race and religion, and offline racially and religiously aggravated crimes in London over an eight-month period. The findings renew our understanding of hate crime as a process, rather than as a discrete event, for the digital age.

Hate crimes have risen up the hierarchy of individual and social harms, following the revelation of record high police figures and policy responses from national and devolved governments. The highest number of hate crimes in history was recorded by the police in England and Wales in 2017/18. The 94,098 hate offences represented a 17 per cent increase on the previous year and a 123 per cent increase on 2012/13. Although the Crime Survey for England and Wales has recorded a consistent decrease in total hate crime victimization (combining race, religion, sexual orientation, disability and transgender), estimations for race and religion-based hate crimes in isolation show an increase from a 112,000 annual average (April 13–March 15) to a 117,000 annual average (April 15–March 17) ( ONS, 2017 ). This increase does not take into account the likely rise in hate victimization in the aftermath of the 2017 terror attacks in London and Manchester. Despite improvements in hate crime reporting and recording, the consensus is that a significant ‘dark figure’ remains. There continues a policy and practice need to improve the intelligence about hate crimes, and in particular to better understand the role community tensions and events play in patterns of perpetration. The HMICFRS (2018) inspection on police responses to hate crimes evidenced that forces remain largely ill-prepared to handle the dramatic increases in racially and religiously aggravated offences following events like the United Kingdom-European Union (UK-EU) referendum vote in 2016 and the terror attacks in 2017. Part of the issue is a significant reduction in Police Community Support Officers throughout England, and in particular London ( Greig-Midlane (2014) indicates a circa 50 per cent reduction since 2010). Fewer officers in neighbourhoods gathering information and intelligence on community relations reduces the capacity of forces to pre-empt and mitigate spates of inter-group violence, harassment and criminal damage.

Technology has been heralded as part of the solution by transforming analogue police practices into a set of complementary digital processes that are scalable and deliverable in near real time ( Williams et al. , 2013 ; Chan and Bennett Moses, 2017 ; Williams et al. , 2017a ). In tandem with offline hate crime, online hate speech posted on social media has become a pernicious social problem ( Williams et al. , 2019 ). Thirty years on from the Home Office (1989) publication ‘ The Response to Racial Attacks and Harassment ’ that saw race hate on the streets become priority for six central Whitehall departments, the police, Crown Prosecution Service (CPS) and courts ( Bowling, 1993 ), the government is now making similar moves to tackle online hate speech. The Home Secretary in 2016 established the National Online Hate Crime Hub, a Home Affairs Select Committee in 2017 established an inquiry into hate crime, including online victimization, and a review by the Law Commission was launched by the prime minister to address the inadequacies in legislation relating to online hate. Social media giants, such as Facebook and Twitter, have been questioned by national governments and the European Union over their policies that provided safe harbour to hate speech perpetrators. Previous research shows hate crimes offline and hate speech online are strongly correlated with events of significance, such as terror attacks, political votes and court cases ( Hanes and Machin, 2014 ; Williams and Burnap, 2016 ). It is therefore acceptable to assume that online and offline hate in the immediate wake of such events are highly correlated. However, what is unclear is if a more general pattern of correlation can be found independent of ‘trigger’ events. To test this hypothesis, we collected Twitter and police recorded hate crime data over an eight-month period in London and built a series of statistical models to identify whether a significant association exists. At the time of writing, no published work has shown such an association. Our models establish a general temporal and spatial association between online hate speech targeting race and religion and offline racially and religiously aggravated crimes independent of ‘trigger’ events . Our results have the potential to renew our understanding of hate crime as a process, rather than a discrete event ( Bowling, 1993 ), for the digital age.

Since its inception, the Internet has facilitated the propagation of extreme narratives often manifesting as hate speech targeting minority groups ( Williams, 2006 ; Perry and Olsson, 2009 ; Burnap and Williams, 2015 , 2016 ; Williams and Burnap, 2016 ; Williams et al. , 2019 ). Home Office (2018) data show that 1,605 hate crimes were flagged as online offences between 2017 and 2018, representing 2 per cent of all hate offences. This represents a 40 per cent increase compared to the previous year. Online race hate crime makes up the majority of all online hate offences (52 per cent), followed by sexual orientation (20 per cent), disability (13 per cent), religion (12 per cent) and transgender online hate crime (4 per cent). Crown Prosecution Service data show that in the year April 2017/18, there were 435 prosecutions related to online hate, a 13 per cent increase on the previous year ( CPS, 2018 ). These figures are a significant underestimate. 1 HMICFRS (2018) found that despite the Home Office introducing a requirement for police forces to flag cyber-enabled hate crime offences, uptake on this practice has been patchy and inconsistent, resulting in unreliable data on prevalence.

Hawdon et al. (2017) , using representative samples covering 15- to 30-year-olds in the United States, United Kingdom, Germany and Finland, found on average 43 per cent respondents had encountered hate material online (53 per cent for the United States and 39 per cent for the United Kingdom). Most hate material was encountered on social media, such as Twitter and Facebook. Ofcom (2018b) , also using a representative UK sample, found that near half of UK Internet users reported seeing hateful content online in the past year, with 16- to 34-year-olds most likely to report seeing this content (59 per cent for 16–24s and 62 per cent for 25–34s). Ofcom also found 45 per cent of 12- to 15-year-olds in 2017 reported encountering hateful content online, an increase on the 2016 figure of 34 per cent ( Ofcom, 2018a ; 2018c ).

Administrative and survey data only capture a snapshot of the online hate phenomenon. Data science methods pioneered within Computational Criminology (see Williams and Burnap, 2016 ; Williams et al. , 2017a ) facilitate a real-time view of hate speech perpetration in action, arguably generating a more complete picture. 2 In 2016 and 2017, the Brexit vote and a string of terror attacks were followed by significant and unprecedented increases in online hate speech (see Figures 1 and 2 ). Although the production of hate speech increased dramatically in the wake of all these events, statistical models showed it was least likely to be retweeted in volume and to survive for long periods of time, supporting a ‘half-life’ hypothesis. Where hate speech was retweeted, it emanated from a core group of like-minded individuals who seek out each other’s messages ( Williams and Burnap, 2016 ). Hate speech produced around the Brexit vote in particular was found to be largely driven by a small number of Twitter accounts. Around 50 per cent of anti-Muslim hate speech was produced by only 6 per cent users, many of whom were classified as politically anti-Islam ( Demos, 2017 ).

Fig. 1. UK anti-black and anti-Muslim hate speech on Twitter around the Brexit vote

Fig. 2. Global anti-Muslim hate speech on Twitter during 2017 (gaps relate to breaks in data collection)

The role of popular and politically organized racism in fostering terrestrial climates of intimidation and violence is well documented ( Bowling, 1993 ). The far right, and some popular right-wing politicians, have been pivotal in shifting the ‘Overton window’ of online political discussion further to the extremes ( Lehman, 2014 ), creating spaces where hate speech has become the norm. Early research shows the far right were quick to take to the Internet largely unhindered by law enforcement due to constitutional protections around free speech in the United States. The outcome has been the establishment of extreme spaces that provide a collective virtual identity to previously fragmented hateful individuals. These spaces have helped embolden domestic hate groups in many countries, including the United States, United Kingdom, Germany, the Netherlands, Italy and Sweden ( Perry and Olsson, 2009 ).

In late 2017, social media giants began introducing hate speech policies, bowing under pressure from the German government and the European Commission ( Williams et al. , 2019 ). Up to this point, Facebook, Instagram, YouTube and Twitter were accused of ‘shielding’ far right pages as they generated advertising income due to their high number of followers. The ‘Tommy Robinson’ Facebook page, with 1 million followers, held the same protections as media and government pages, despite having nine violations of the platform’s policy on hate speech, whereas typically only five were tolerated by the content review process ( Hern, 2018 ). The page was eventually removed in March 2019, a year after Twitter removed the account of Stephen Yaxley-Lennon (alias Tommy Robinson) from their platform.

Social media was implicated in the Christchurch, New Zealand extreme right-wing terror attack in March 2019. The terrorist was an avid user of social media, including Facebook and Twitter, but also more subversive platforms, such as 8chan. 8chan was the terrorist's platform of choice when it came to publicizing his live Facebook video of the attack. His message opened by stating he was moving on from 'shit-posting'—using social media to spread hatred of minority groups—to taking the dialogue offline, into action. He labelled his message a 'real life effort post'—the migration of online hate speech to offline hate crime/terrorism (Figure 3). The live Facebook video lasted for 17 minutes, with the first report to the platform being made after the 12th minute. The video was taken down within the hour, but it was too late to stop the widespread sharing. It was re-uploaded more than 2 million times on Facebook, YouTube, Instagram and Twitter and remained easily accessible over 24 hours after the attack. Facebook, Twitter and, in particular, 8chan were flooded with praise and support for the attack. Many of these posts were removed, but those on 8chan remain due to its lack of moderation.

Fig. 3. Christchurch extreme right terror attacker's post on 8chan, broadcasting the live Facebook video

In the days following the terror attack, spikes in hate crimes were recorded across the United Kingdom. In Oxford, swastikas with the words "sub 2 PewDiePie" were graffitied on a school wall. In his video ahead of the massacre, the terrorist had asked viewers to 'subscribe to PewDiePie'. The social media star, who earned $15.5 million in 2018 from his online activities, has become known for his anti-Semitic comments and endorsements of white supremacist conspiracies (Chokshi, 2019). In his uploaded 74-page manifesto, the terrorist also referenced Darren Osborne, the perpetrator of the Finsbury Park Mosque attack in 2017. Osborne is known to have been influenced by social media communications ahead of his attack. His phone and computers showed that he accessed the Twitter account of Stephen Yaxley-Lennon, whom he had only started following two weeks earlier, two days before the attack. The tweet from Robinson read 'Where was the day of rage after the terrorist attacks. All I saw was lighting candles'. A direct Twitter message was also sent to Osborne by Jayda Fransen of Britain First (Rawlinson, 2018). Other lone actor extreme right-wing terrorists, including Pavlo Lapshyn and Anders Breivik, are also known to have self-radicalized via the Internet (Peddell et al., 2016).

Far right and popular right-wing activity on social media, unhindered for decades due to free-speech protections, has shaped the perception of many users regarding what language is acceptable online. Further enabled by the disinhibiting and deindividuating effects of Internet communications, and the ineffectiveness of the criminal justice system to keep up with the pace of technological developments ( Williams, 2006 ), social media abounds with online hate speech. Online controversies, such as Gamergate, the Bank of England Fry/Austen fiasco and the Mark Meechan scandal, among many others, demonstrate how easily users of social media take to antagonistic discourse ( Williams et al. , 2019 ). In recent times, these users have been given further licence by the divisive words of popular right-wing politicians wading into controversial debates, in the hopes of gaining support in elections and leadership contests. The offline consequences of this trend are yet to be fully understood, but it is worth reminding ourselves that those who routinely work with hate offenders agree that although not all people who are exposed to hate material go on to commit hate crimes on the streets, all hate crime criminals are likely to have been exposed to hate material at some stage ( Peddell et al. , 2016 ).

The study relates to conceptual work that examines the role of social media in political polarization ( Sunstein, 2017 ) and the disruption of ‘hierarchies of credibility’ ( Greer and McLaughlin, 2010 ). In the United States, online sources, including social media, now outpace traditional press outlets for news consumption ( Pew Research Centre, 2018 ). The pattern in the United Kingdom is broadly similar, with only TV news (79 per cent) leading over the Internet (64 per cent) for all adults, and the Internet, in particular social media taking first place for those aged 16–24 ( Ofcom, 2018b ). In the research on polarization, the general hypothesis tested is disinformation is amplified in partisan networks of like-minded social media users, where it goes largely unchallenged due to ranking algorithms filtering out any challenging posts. Sunstein (2017) argues that ‘echo chambers’ on social media reflecting increasingly extreme viewpoints are breeding grounds for ‘fake news’, far right and left conspiracy theories and hate speech. However, the evidence on the effect of social media on political polarization is mixed. Boxell et al. (2017) and Debois and Blank (2017) , both using offline survey data, found that social media had limited effect on polarization on respondents. Conversely, Brady et al. (2017) and Bail et al. (2018) , using online and offline data, found strong support for the hypothesis that social media create political echo chambers. Bail et al. found that republicans, and to a lesser extent democrats, were likely to become more entrenched in their original views when exposed to opposing views on Twitter, highlighting the resilience of echo chambers to destabilization. Brady et al. found that emotionally charged (e.g. hate) messages about moral issues (e.g. gay marriage) increased diffusion within echo chambers, but not between them, indicating this as a factor in increasing polarization between liberals and conservatives.

A recently exposed factor that is a likely candidate for increasing polarization around events is the growing use of fake accounts and bots to spread divisive messages. Preliminary evidence shows that these automated Twitter accounts were active in the UK-EU referendum campaign, and most influential on the leave side ( Howard and Kollanyi, 2016 ). Twitter accounts linked to the Russian Internet Research Agency (IRA) were also active in the Brexit debate following the vote. These accounts also spread fake news and promoted xenophobic messages in the aftermath of the 2017 UK terror attacks ( Crest, 2017 ). Accounts at the extreme-end of right-wing echo chambers were routinely targeted by the IRA to gain traction via retweets. Key political and far right figures have also been known to tap into these echo chambers to drum-up support for their campaigns. On Twitter, Donald Trump has referred to Mexican immigrants as ‘criminals and rapists’ and retweeted far right activists after Charlottesville, and Islamophobic tweets from the far right extremist group, Britain First. The leaders of Britain First, and the ex-leader of the English Defence League, all used social media to spread their divisive narrative before they were banned from most platforms between December 2017 and March 2019. These extremist agitators and others like them have used the rhetoric of invasion, threat and otherness in an attempt to increase polarization online, in the hope that it spills into the offline, in the form of votes, financial support and participation in rallies. Research by Hope Not Hate (2019) shows that at the time of the publication of their report, 5 of the 10 far-right social media activists with the biggest online reach in the world were British. The newest recruits to these ideologies (e.g. Generation Identity) are highly technically capable and believe social media to be essential to building a larger following.

Whatever the effect of social media on polarization, and how this may vary by individual-level factors, the role of events, bots and far right agitators, there remains limited experimental research that pertains to the key aim of this article: its impact on the behaviour of the public offline. Preliminary unpublished work suggests a link between online polarizing activity and offline hate crime ( Müller and Shwarz, 2018a , 2018b ). But what remains under-theorized is why social media has salience in this context that overrides the effect of other sources (TV, newspapers, radio) espousing arguably more mainstream viewpoints. Greer and Mclaughlin (2010) have written about the power of social media in the form of citizen journalism, demonstrating how the initially dominant police driven media narrative of ‘protestor violence’ in the reporting of the G20 demonstration was rapidly disrupted by technology-driven alternative narratives of ‘police violence’. They conclude “the citizen journalist provides a valuable additional source of real-time information that may challenge or confirm the institutional version of events” (2010: 1059). Increasingly, far right activists like Stephen Yaxley-Lennon are adopting citizen journalism as a tactic to polarize opinion. Notably, Lennon live-streamed himself on social media outside Leeds Crown Court hearing the Huddersfield grooming trials to hundreds of thousands of online viewers. His version of events was imbued with anti-Islam rhetoric, and the stunt almost derailed the trial. Such tactics take advantage of immediacy, manipulation, partisanship and a lack of accountability rarely found in mainstream media. Such affordances can provide a veil of authenticity and realism to stories, having the power to reframe their original casting by the ‘official’ establishment narrative, further enabled by dramatic delivery of ‘evidence’ of events as they occur. The ‘hacking’ of the information-communications marketplace enabled by social media disrupts the primacy of conventional media, allowing those who produce subversive “fake news” anti-establishment narratives to rise up the ‘hierarchy of credibility’. The impact of this phenomenon is likely considerable knowing over two-thirds of UK adults, and eight in ten 16- to 24-year-olds now use the Internet as their main source of news ( Ofcom, 2018b ).

The hypotheses test if online hate speech on Twitter, an indicator of right-wing polarization, can improve upon the estimations of offline hate crimes that use conventional predictors alone.

H1 : Conventional census regressors associated with hate crime in previous research will emerge as statistically significant.

‘Realistic’ threats are often associated with hate crimes (Stephan and Stephan, 2000 ; Roberts et al., 2013). These relate to resource threats, such as competition over jobs and welfare benefits. Espiritu (2004) shows how US census measures relating to economic context are statistically associated with hate crimes at the state level. In the United Kingdom, Ray et al. (2004) found that a sense of economic threat resulted in unacknowledged shame, which was experienced as rage directed toward the minority group perceived to be responsible for economic hardship. Demographic ecological factors, such as proportion of the population who are black or minority ethnic and age structure, have also been associated with hate crime ( Green, 1998 ; Nandi et al. , 2017 ; Williams and Tregidga, 2014 ; Ray et al. , 2004 ). In addition, educational attainment has been shown to relate to tolerance, even among those explicitly opposed to minority groups ( Bobo and Licari, 1989 ).

H2 : Online hate speech targeting race and religion will be positively associated with police recorded racially and religiously aggravated crimes in London.

Preliminary unpublished work focusing on the United States and Germany has shown that posts from right-wing politicians that target minority groups, deemed as evidence of extreme polarization, are statistically associated with variation in offline hate crimes recorded by the police. Müller and Shwarz (2018a) found an association between Trump's tweets about Islam-related topics and anti-Muslim hate in US state counties. The same authors also found anti-refugee posts on the far-right Alternative für Deutschland's Facebook page predicted offline violent crime against immigrants in Germany (Müller and Shwarz, 2018b). This hypothesis tests for the first time whether these associations are replicated in the United Kingdom's largest metropolitan area.

H3 : Estimation models including the online hate speech regressor will increase the amount of offline hate crime variance explained in panel-models compared to models that include census variables alone.

Williams et al. (2017a) found that tweets mentioning terms related to the concept of ‘broken windows’ were statistically associated with police recorded crime (hate crime was not included) in London boroughs and improved upon the variance explained compared to census regressors alone. This hypothesis tests whether these results hold for the estimation of hate crimes.

The study adopted methods from Computational Criminology (see Williams et al. , 2017a for an overview). Data were linked from administrative, survey and social media sources to build our statistical models. Police recorded racially and religiously aggravated offences data were obtained from the Metropolitan Police Service for an eight-month period between August 2013 and August 2014. UK census variables from 2011 were derived from the Nomis web portal. London-based tweets were collected over the eight-month period using the Twitter streaming Application Programming Interface via the COSMOS software ( Burnap et al. , 2014 ). All sources were linked by month and Lower Layer Super Output Area (LSOA) in preparation for a longitudinal ecological analysis.
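For illustration, a minimal sketch of this kind of month-by-LSOA data linkage in Python (pandas) follows. The file names and column names are hypothetical; the study itself relied on the COSMOS software and the Nomis portal rather than on code of this form.

```python
import pandas as pd

# Hypothetical inputs: one row per recorded offence / per tweet, plus one census row per LSOA.
crimes = pd.read_csv("met_rr_offences.csv", parse_dates=["offence_date"])
tweets = pd.read_csv("london_geocoded_tweets.csv", parse_dates=["created_at"])
census = pd.read_csv("census_2011_lsoa.csv")  # columns: lsoa_code, prop_no_qual, ...

# Aggregate both event streams to month x LSOA counts.
crimes["month"] = crimes["offence_date"].dt.to_period("M")
tweets["month"] = tweets["created_at"].dt.to_period("M")
crime_counts = (crimes.groupby(["lsoa_code", "month"]).size()
                      .rename("hate_crimes").reset_index())
tweet_counts = (tweets.groupby(["lsoa_code", "month"])
                      .agg(tweet_freq=("tweet_id", "size"),
                           hate_tweets=("is_hateful", "sum"))
                      .reset_index())

# Month x LSOA panel with time-invariant census regressors attached.
panel = (crime_counts
         .merge(tweet_counts, on=["lsoa_code", "month"], how="outer")
         .fillna({"hate_crimes": 0, "tweet_freq": 0, "hate_tweets": 0})
         .merge(census, on="lsoa_code", how="left"))
```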

Dependent measures

Police recorded crime.

Police crime data were filtered to ensure that only race hate crimes related to anti-black/west/south Asian offences, and religious hate crimes related to anti-Islam/Muslim offences were included in the measures. In addition to total police recorded racially and religiously aggravated offences ( N = 6,572), data were broken down into three categories: racially and religiously aggravated violence against the person, criminal damage and harassment reflecting Part II of the Crime and Disorder Act 1998.

Independent measures

Social media regressors.

Twitter data were used to derive two measures. Count of Geo-coded Twitter Posts: 21.7 million posts were located within the 4,720 London LSOAs over the study window as raw counts (Overall: mean 575; s.d. 1,566; min 0; max 75,788; Between: s.d. 1,451; min 0; max 53,345; Within: s.d. 589; min –23,108; max 28,178). Racial and Religious Online Hate Speech: the London geo-coded Twitter corpus was classified as 'hateful' or not (Overall: mean 8; s.d. 15.84; min 0; max 522; Between: s.d. 12.57; min 0; max 297; Within: s.d. 9.63; min –120; max 440). Working with computer scientists, a supervised machine learning classifier was built using the Weka tool to distinguish between 'hateful' Twitter posts with a focus on race (in this case anti-black/middle-eastern) and religion (in this case anti-Islam/Muslim), and more general non-'hateful' posts. A gold standard dataset of human-coded annotations was generated to train the machine classifier based on a sample of 2,000 tweets. In relation to each tweet, human coders were tasked with selecting from a ternary set of classes ('yes', 'no', and 'undecided') in response to the following question: 'is this text offensive or antagonistic in terms of race, ethnicity or religion?' Tweets that achieved 75 per cent agreement and above from four human coders were transposed into a machine learning training dataset (undecided tweets were dropped). A Support Vector Machine with Bag of Words feature extraction emerged as the most accurate machine learning model, with a precision of 0.89, a recall of 0.69 and an overall F-measure of 0.771, above the established threshold of 0.70 in the field of information retrieval (van Rijsbergen, 1979). The final hate dataset consisted of 294,361 tweets, representing 1.4 per cent of total geo-coded tweets in the study window (consistent with previous research, see Williams and Burnap, 2016; Williams and Burnap, 2018). Our measure of online hate speech is not designed to correspond directly to online hate acts deemed criminal under UK law. The threshold for criminal hate speech is high, and legislation is complex (see CPS guidance and Williams et al., 2019). Ours is a measure of online inter-group racial and/or religious tension, akin to offline community tensions that are routinely picked up by neighbourhood policing teams. Not all manifestations of such tension are necessarily criminal, but they may be indicative of pending activity that may be criminal. Examples of hate speech tweets in our sample include: 'Told you immigration was a mistake. Send the #Muzzies home!'; 'Integrate or fuck off. No Sharia law. #BurntheQuran'; and 'Someone fucking knifed on my street! #niggersgohome'. 3
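The classifier itself was built in Weka; a rough scikit-learn equivalent of the same recipe (bag-of-words features feeding a linear Support Vector Machine) is sketched below. The gold-standard file and its column names are hypothetical, and the hyperparameters are illustrative rather than those used in the study.

```python
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

# Hypothetical gold standard: tweets labelled 1 (hateful) / 0 (not hateful), keeping
# only tweets where at least 75 per cent of the human coders agreed.
gold = pd.read_csv("gold_standard_tweets.csv")  # columns: text, label

X_train, X_test, y_train, y_test = train_test_split(
    gold["text"], gold["label"], test_size=0.2,
    stratify=gold["label"], random_state=42)

# Bag-of-words features plus a linear SVM, mirroring the paper's Weka set-up.
clf = Pipeline([
    ("bow", CountVectorizer(lowercase=True, ngram_range=(1, 2), min_df=2)),
    ("svm", LinearSVC(C=1.0)),
])
clf.fit(X_train, y_train)

# Precision / recall / F1 on held-out tweets, then label the full corpus.
print(classification_report(y_test, clf.predict(X_test), digits=3))
# corpus["is_hateful"] = clf.predict(corpus["text"])
```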

Census regressors

Four measures were derived from 2011 census data based on the literature that estimated hate crime using ecological factors (e.g. Green, 1998 ; Espiritu, 2004 ). These include proportion of population: (1) with no qualifications, (2) aged 16–24, (3) long-term unemployed, and (4) black and minority ethnic (BAME). 4

Methods of estimation

The estimation process began with a single-level model that collapsed the individual 8 months worth of police hate crime and Twitter data into one time period. Because of the skewed distribution of the data and the presence of over-dispersion, a negative binomial regression model was selected. These non-panel models provide a baseline against which to compare the second phase of modelling. To incorporate the temporal variability of police recorded crime and Twitter data, the second phase of modelling adopted a random- and fixed-effects regression framework. The first step was to test if this framework was an improvement upon the non-panel model that did not take into account time variability. The Breusch–Pagan Lagrange multiplier test revealed random-effects regression was favourable over single-level regression. Random effects modelling allows for the inclusion of time-variant (police and Twitter data) and time-invariant variables (census measures). Both types of variable were grouped into the 4720 LSOA areas that make up London. Using LSOA as the unit of analysis in the models allowed for an ‘ecological’ appraisal of the explanatory power of race and religious hate tweets for estimating police recorded racially and religiously aggravated offences ( Sampson, 2012 ). When the error term of an LSOA is correlated with the variables in the model, selection bias results from time-invariant unobservables, rendering random effects inconsistent. The alternative fixed-effects model that is based on within-borough variation removes such sources of bias by controlling for observed and unobserved ecological factors. Therefore, both random- and fixed-effects estimates are produced for all models. 5 A Poisson model was chosen over negative binomial, as the literature suggests the latter does not produce genuine fixed-effects (FE) estimations. 6 In addition, Poisson random-/fixed-effects (RE/FE) estimation with robust standard errors is recognized as the most reliable option in the presence of over-dispersion ( Wooldridge, 1999 ). There were no issues with multicollinearity in the final models.
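A hedged sketch of this two-stage strategy in Python (statsmodels) follows. It is illustrative only: the variable names are invented (building on the hypothetical `collapsed` and `panel` frames above), it is not the authors' estimation code, and the fixed-effects step is shown in its simplest dummy-variable form, which is computationally heavy with roughly 4,700 LSOAs; dedicated panel-count routines would be preferable in practice.

```python
import statsmodels.formula.api as smf

rhs = ("prop_no_qual + prop_16_24 + prop_unemployed + I(prop_unemployed ** 2) "
       "+ prop_bame + tweet_freq + hate_tweets")

# Baseline (non-panel): counts collapsed over the eight months; negative binomial
# absorbs the over-dispersion, with robust standard errors.
nb = smf.negativebinomial("hate_crimes ~ " + rhs, data=collapsed).fit(cov_type="HC1")
print(nb.summary())

# Panel Poisson with LSOA fixed effects: the simplest route is a pooled Poisson with
# entity dummies and cluster-robust errors; time-invariant census terms drop out.
fe = smf.poisson(
    "hate_crimes ~ hate_tweets + tweet_freq + C(lsoa_code)",
    data=panel).fit(cov_type="cluster", cov_kwds={"groups": panel["lsoa_code"]})
print(fe.params[["hate_tweets", "tweet_freq"]])
```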

Figures 4–7 show scatterplots with a fitted line (95% confidence interval in grey) of the three types of racially and religiously aggravated offences (plus combined) by race and religious hate speech on Twitter over the whole eight-month period. The scatterplots indicated a positive relationship between the variables. Two LSOAs emerged as clear outliers (LSOA E01004736 and E01004763: see Figures 8–9) and required further inspection (not included in the scatterplots). A jackknife resampling method was used to confirm whether these LSOAs (and others) were influential points. This method fits a negative binomial model in 4,720 iterations while suppressing one observation at a time, allowing the effect of each suppression on the model to be identified; in plain terms, it allows us to see how much each LSOA influences the estimations. Inspection of a scatterplot of dfbeta values (the amount that a particular parameter changes when an observation is suppressed) confirmed the above LSOAs as influential points, and in addition E01002444 (Hillingdon, in particular Heathrow Airport) and E01004733 (Westminster). The decision was made to build all models with and without outliers to identify any significant differences. The inclusion of all four outliers did change the magnitude of effects, standard errors and significance levels for some variables, and model fit, so they were removed in the final models.
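The jackknife step can be reproduced in outline as below. This is again a sketch with invented names, and the brute-force version: it refits the model once per LSOA.

```python
import statsmodels.formula.api as smf

formula = ("hate_crimes ~ prop_no_qual + prop_16_24 + prop_unemployed "
           "+ I(prop_unemployed ** 2) + prop_bame + tweet_freq + hate_tweets")
full = smf.negativebinomial(formula, data=collapsed).fit(disp=0)

# dfbeta for the hate-tweet coefficient: how much it moves when one LSOA is dropped.
dfbeta = {}
for lsoa in collapsed["lsoa_code"]:
    refit = smf.negativebinomial(
        formula, data=collapsed[collapsed["lsoa_code"] != lsoa]).fit(disp=0)
    dfbeta[lsoa] = full.params["hate_tweets"] - refit.params["hate_tweets"]

# The largest absolute changes flag influential LSOAs (e.g. the outliers noted above).
influential = sorted(dfbeta, key=lambda k: abs(dfbeta[k]), reverse=True)[:10]
print(influential)
```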

Fig. 4. Hate tweets by R & R aggravated violence against the person

Fig. 5. Hate tweets by R & R aggravated harassment

Fig. 6. Hate tweets by R & R aggravated criminal damage

Fig. 7. Hate tweets by R & R aggravated offences combined

Fig. 8. Outlier LSOA E01004736

Fig. 9. Outlier LSOA E01004763

Table 1 presents results from the negative binomial models for each type of racially and religiously aggravated crime category. These models do not take into account variation over time, so estimates should be considered as representing statistical associations covering the whole eight-month period of data collection, and a baseline against which to compare the panel models presented later. The majority of the census regressors emerge as significantly predictive of all racially and religiously aggravated crimes, broadly confirming previous hate crime research examining similar factors and partly supporting Hypothesis 1. Partly supporting Green (1998) and Nandi et al. (2017), the proportion of the population that is BAME emerged as positively associated with all race and religious hate crimes, with the greatest effect emerging for racially or religiously aggravated violence against the person. Partly confirming work by Bobo and Licari (1989), the models show a positive relationship between the proportion of the population with no qualifications and racially and religiously aggravated violence, criminal damage and total hate crime, but the association only emerged as significant for criminal damage. The proportion of the population aged 16–24 only emerged as significant for criminal damage and total hate crimes, and the relationship was negative, partly contradicting previous work (Ray et al., 2004; Williams and Tregidga, 2014). Like Espiritu (2004) and Ray et al. (2004), the models show that rates of long-term unemployment were positively associated with all race and religious hate crimes. Although this variable had the greatest effect in the models, we found an inverted U-shaped curvilinear relationship (indicated by the significant quadratic term). Figure 10 graphs the relationship, showing that as the proportion of the long-term unemployed population increases, victimization increases up to a turning point of 3.56 per cent, after which it begins to decrease.

Table 1. Negative binomial models (full 8-month period, N = 4,270)

Racially or religiously aggravated violence against the person; racially or religiously aggravated harassment

| Regressor | Violence: Coef. | SE | IRR | Harassment: Coef. | SE | IRR |
|---|---|---|---|---|---|---|
| Prop. no qual | 0.00169 | 0.00236 | 1.00169 | –0.00023 | 0.00250 | 0.99977 |
| Prop. 16–24 | –0.00510 | 0.00371 | 0.99492 | –0.00724 | 0.00376 | 0.99279 |
| Prop. unmplyd | 0.62507*** | 0.05384 | 1.86838 | 0.63071*** | 0.05695 | 1.87894 |
| Prop. unmplydsqr | –0.08655*** | 0.00988 | 0.91709 | –0.08940*** | 0.01068 | 0.91448 |
| Prop. BAME | 0.00992*** | 0.00078 | 1.00997 | 0.00618*** | 0.00087 | 1.00620 |
| Tweet Freq. | 0.00005*** | 0.00001 | 1.00005 | 0.00003** | 0.00001 | 1.00003 |
| Hate Tweets | 0.00436*** | 0.00068 | 1.00437 | 0.00437*** | 0.00062 | 1.00438 |
| Constant | 1.20077 | 0.07082 | 3.32268 | 0.26735 | 0.07136 | 1.30650 |
| Pseudo R² | 0.53 | | | 0.44 | | |

Racially or religiously aggravated criminal damage; racially or religiously aggravated offences combined

| Regressor | Criminal damage: Coef. | SE | IRR | Combined: Coef. | SE | IRR |
|---|---|---|---|---|---|---|
| Prop. no qual | 0.00893*** | 0.00222 | 1.00897 | 0.00372 | 0.00223 | 1.00372 |
| Prop. 16–24 | –0.00891** | 0.00354 | 0.99113 | –0.00692* | 0.00349 | 0.99310 |
| Prop. unmplyd | 0.47102*** | 0.05750 | 1.60162 | 0.58373*** | 0.05095 | 1.79271 |
| Prop. unmplydsqr | –0.06921*** | 0.01101 | 0.93313 | –0.08208*** | 0.00951 | 0.92120 |
| Prop. BAME | 0.00387*** | 0.00078 | 1.00388 | 0.00806*** | 0.00075 | 1.00809 |
| Tweet Freq. | 0.00002* | 0.00001 | 1.00002 | 0.00004*** | 0.00001 | 1.00004 |
| Hate Tweets | 0.00456*** | 0.00056 | 1.00457 | 0.00439*** | 0.00067 | 1.00440 |
| Constant | 0.69218 | 0.06849 | 1.99807 | 1.84826 | 0.06533 | 6.34879 |
| Pseudo R² | 0.39 | | | 0.52 | | |

Notes: Because of the presence of heteroskedasticity, robust standard errors are presented. * p < 0.05; ** p < 0.01; *** p < 0.001. All models significant at the 0.0000 level.

Fig. 10. Plot of curvilinear relationship between long term unemployment and racially and religiously aggravated crime

This finding at first seems counter-intuitive, but a closer inspection of the relationship between the proportion of the population that is long-term unemployed and the proportion of the population that is BAME reveals a possible explanation. LSOAs with very high long-term unemployment and BAME populations overlap. Where this overlap is significant, we find relatively low rates of hate crime. For example, LSOA E01001838 in Hackney, in particular the Frampton Park Estate area has 6.1 per cent long-term unemployment, a 68 per cent BAME population and only 2 hate crimes, and LSOA E01003732 in Redbridge has 5.6 per cent long-term unemployment, a 76 per cent BAME population, and only 2 hate crimes. These counts of hate crime either are below or are only slightly above the mean for London (mean = 1.39, maximum = 390). We know from robust longitudinal analysis by Nandi et al. (2017) that minority groups living in very high majority white areas are significantly more likely to report experiencing racial harassment. This risk decreases in high multicultural areas where there is low support for far right groups, such as London. Simple regression (not shown here) where the BAME population proportion was included as the only regressor does show an inverted U-shape relationship with all hate crimes, with the risk of victimization decreasing when the proportion far outweighs the white population. However, this curve was smoothed out when other regressors were included in the models. This analysis therefore suggests that LSOAs with high rates of long-term unemployment but lower rates of hate crime are likely to be those with high proportions of BAME residents, some of whom will be long-term unemployed themselves but unlikely to be perpetrating hate crimes against the ingroup.

Supporting Hypothesis 2, all negative binomial models show online hate speech targeting race and religion is positively associated with all offline racially and religiously aggravated offences, including total hate crimes, in London over an eight-month period. The magnitude of the effect is relatively even across offence categories. When considering the effect of the Twitter regressors against the census regressors, the unit of change needed in each regressor to affect the outcome must be borne in mind. For example, a percentage change in the BAME population proportion in an LSOA is quite different from a change in the count of hate tweets in the same area. The latter is far more likely to vary to a much greater extent and far more rapidly (see later in this section). The associations identified in these non-panel models indicate a strong link between hateful Twitter posts and offline racially and religiously aggravated crimes in London. Yet it is not possible with these initial models to state the direction of association: we cannot say whether online hate speech precedes rather than follows offline hate crime.

Table 2 presents results from RE/FE Poisson models that incorporate variation over space and time . RE/FE models have been used to indicate causal pathways in previous criminological research; however, we suggest such claims in this article would stretch the data beyond their limits. As we adopt an ecological framework, using LSOAs as our unit of analysis, and not individuals, we cannot state with confidence that area-level factors cause the outcome. There are likely sub-LSOA factors that account for causal pathways, but we were unable to observe these in this study design. Nevertheless, the results of the RE/FE models represent a significant improvement over the negative binomial estimations presented earlier and are suitable for subjecting these earlier findings to a more robust test. Indeed, FE models are the most robust test given they are based solely on within-LSOA variation, allowing for the elimination of potential sources of bias by controlling for observed and unobserved ecological characteristics ( Allison, 2009 ). In contrast, RE models only take into account the factors included as regressors. These models therefore allow us to determine if online hate speech precedes rather than follows offline hate crime.

Table 2. Random and fixed-effects Poisson regression models

(Unmarked rows are random-effects (RE) estimates; rows marked (FE) are fixed-effects estimates. Empty cells indicate values not reported for that model.)

Racially or religiously aggravated violence against the person

| Regressor | Model A: Coef. | SE | IRR | Model B: Coef. | SE | IRR | Model C: Coef. | SE | IRR |
|---|---|---|---|---|---|---|---|---|---|
| Prop. no qual | –0.02371*** | 0.00322 | 0.97657 | –0.02094*** | 0.00308 | 0.97928 | –0.02010*** | 0.00320 | 0.98010 |
| Prop. 16–24 | 0.05212*** | 0.00833 | 1.05350 | 0.04451*** | 0.00728 | 1.04551 | 0.04265*** | 0.00742 | 1.04357 |
| Prop. unmplyd | 0.80908*** | 0.07804 | 2.24584 | 0.79596*** | 0.07534 | 2.21658 | 0.79509*** | 0.07483 | 2.21463 |
| Prop. unmplydsqr | –0.10414*** | 0.01490 | 0.90110 | –0.10288*** | 0.01435 | 0.90224 | –0.10287*** | 0.01425 | 0.90224 |
| Prop. BAME | 0.00328** | 0.00115 | 1.00329 | 0.00397*** | 0.00109 | 1.00398 | 0.00413*** | 0.00110 | 1.00414 |
| Tweet Freq. | | | | | | | 0.00001 | 0.00001 | 1.00001 |
| Hate Tweets | | | | 0.00226*** | 0.00049 | 1.00227 | 0.00134*** | 0.00029 | 1.00134 |
| Constant | –0.59539 | 0.10030 | 0.55135 | –0.58710 | 0.09520 | 0.55594 | –0.58547 | 0.09419 | 0.55685 |
| Tweet Freq. (FE) | | | | | | | 0.00001* | 0.00000 | 1.00001 |
| Hate Tweets (FE) | | | | 0.00113*** | 0.00035 | 1.00113 | –0.00046 | 0.00086 | 0.99954 |
| Prop. BAME × Hate Tweets (FE) | | | | | | | 0.00009* | 0.00002 | 1.00009 |
| Adjusted R² (RE) | 0.0567 | | | 0.3039 | | | 0.3568 | | |

Racially or religiously aggravated criminal damage

| Regressor | Model A: Coef. | SE | IRR | Model B: Coef. | SE | IRR | Model C: Coef. | SE | IRR |
|---|---|---|---|---|---|---|---|---|---|
| Prop. no qual | –0.00841** | 0.00268 | 0.99163 | –0.00543* | 0.00247 | 0.99459 | –0.00409 | 0.00253 | 0.99591 |
| Prop. 16–24 | 0.03228*** | 0.00574 | 1.03281 | 0.02482*** | 0.00473 | 1.02514 | 0.02234*** | 0.00473 | 1.02259 |
| Prop. unmplyd | 0.62621*** | 0.07859 | 1.87051 | 0.60606*** | 0.07492 | 1.83319 | 0.60389*** | 0.07420 | 1.82922 |
| Prop. unmplydsqr | –0.08545*** | 0.01581 | 0.91810 | –0.08326*** | 0.01507 | 0.92011 | –0.08313*** | 0.01491 | 0.92023 |
| Prop. BAME | 0.00010 | 0.00098 | 1.00010 | 0.00015 | 0.00091 | 1.00015 | 0.00021 | 0.00091 | 1.00021 |
| Tweet Freq. | | | | | | | 0.00003** | 0.00001 | 1.00004 |
| Hate Tweets | | | | 0.00353*** | 0.00065 | 1.00353 | 0.00133* | 0.00062 | 1.00133 |
| Constant | –1.20380 | 0.08824 | 0.30005 | –1.20426 | 0.08319 | 0.29991 | | | |
| Tweet Freq. (FE) | | | | | | | 0.00004*** | 0.00001 | 1.00004 |
| Hate Tweets (FE) | | | | 0.00027 | 0.00039 | 1.00027 | –0.00167 | 0.00115 | 0.99833 |
| Prop. BAME × Hate Tweets (FE) | | | | | | | 0.00003* | 0.00003 | 1.00003 |
| Adjusted R² (RE) | 0.0242 | | | 0.1367 | | | 0.1537 | | |

Racially or religiously aggravated harassment

| Regressor | Model A: Coef. | SE | IRR | Model B: Coef. | SE | IRR | Model C: Coef. | SE | IRR |
|---|---|---|---|---|---|---|---|---|---|
| Prop. no qual | –0.02173*** | 0.00306 | 0.97851 | –0.01783*** | 0.00281 | 0.98232 | –0.01663*** | 0.00291 | 0.98351 |
| Prop. 16–24 | 0.04119*** | 0.00681 | 1.04205 | 0.03124*** | 0.00531 | 1.03173 | 0.02900*** | 0.00536 | 1.02943 |
| Prop. unmplyd | 0.80724*** | 0.07615 | 2.24172 | 0.78335*** | 0.07251 | 2.18880 | 0.78353*** | 0.07171 | 2.18918 |
| Prop. unmplydsqr | –0.10780*** | 0.01452 | 0.89781 | –0.10523*** | 0.01378 | 0.90012 | –0.10543*** | 0.01364 | 0.89993 |
| Prop. BAME | 0.00065 | 0.00111 | 1.00065 | 0.00157 | 0.00103 | 1.00157 | 0.00176 | 0.00103 | 1.00176 |
| Tweet Freq. | | | | | | | 0.00003* | 0.00001 | 1.00003 |
| Hate Tweets | | | | 0.00404*** | 0.00074 | 1.00405 | 0.00209*** | 0.00057 | 1.00209 |
| Constant | –1.59019 | 0.09197 | 0.20389 | –1.58503 | 0.08563 | 0.20494 | –1.58863 | 0.08445 | 0.20420 |
| Tweet Freq. (FE) | | | | | | | 0.00004** | 0.00001 | 1.00004 |
| Hate Tweets (FE) | | | | 0.00080* | 0.00037 | 1.00080 | –0.00179 | 0.00142 | 0.99822 |
| Prop. BAME × Hate Tweets (FE) | | | | | | | 0.00008* | 0.00004 | 1.00008 |
| Adjusted R² (RE) | 0.0348 | | | 0.1692 | | | 0.1917 | | |

Racially or religiously aggravated offences combined

| Regressor | Model A: Coef. | SE | IRR | Model B: Coef. | SE | IRR | Model C: Coef. | SE | IRR |
|---|---|---|---|---|---|---|---|---|---|
| Prop. no qual | –0.02009*** | 0.00297 | 0.98011 | –0.01806*** | 0.00285 | 0.98210 | –0.01727*** | 0.00295 | 0.98288 |
| Prop. 16–24 | 0.04632*** | 0.00746 | 1.04741 | 0.04084*** | 0.00672 | 1.04169 | 0.03908*** | 0.00681 | 1.03985 |
| Prop. unmplyd | 0.76556*** | 0.07448 | 2.15019 | 0.75562*** | 0.07247 | 2.12894 | 0.75444*** | 0.07197 | 2.12642 |
| Prop. unmplydsqr | –0.09988*** | 0.01447 | 0.90494 | –0.09892*** | 0.01406 | 0.90582 | –0.09887*** | 0.01396 | 0.90586 |
| Prop. BAME | 0.00196** | 0.00107 | 1.00196 | 0.00245* | 0.00103 | 1.00245 | 0.00260* | 0.00103 | 1.00261 |
| Tweet Freq. | | | | | | | 0.00001 | 0.00001 | 1.00001 |
| Hate Tweets | | | | 0.00172*** | 0.00037 | 1.00172 | 0.00093*** | 0.00026 | 1.00093 |
| Constant | 0.03797 | 0.09198 | 1.03871 | 0.04329 | 0.08835 | | 0.04491 | | 1.04593 |
| Tweet Freq. (FE) | | | | | | | 0.00002** | 0.00001 | 1.00002 |
| Hate Tweets (FE) | | | | 0.00093*** | 0.00028 | 1.00094 | –0.00070 | 0.00071 | 0.99931 |
| Prop. BAME × Hate Tweets (FE) | | | | | | | 0.00004* | 0.00002 | 1.00004 |
| Adjusted R² (RE) | 0.0495 | | | 0.2937 | | | 0.3412 | | |

Notes: Table shows results of separate random and fixed effects models. To determine whether RE or FE is preferred, the Hausman test can be used. However, this has been shown to be inefficient, and we prefer not to rely on it for interpreting our models (see Troeger, 2008). Therefore, both RE and FE results should be considered together. Because of the presence of heteroskedasticity, robust standard errors are presented. Adjusted R² for random-effects models only. * p < 0.05; ** p < 0.01; *** p < 0.001. All models significant at the 0.0000 level.

The RE/FE modelling was conducted in three stages (Models A to C) to address Hypothesis 3—to assess the magnitude of the change in the variance explained in the outcomes when online hate speech is added as a regressor. Model A includes only the census regressors for the RE estimations, and for all hate crime categories, broadly similar patterns of association emerge compared to the non-panel models. The variance explained by the set of census regressors ranges between 2 per cent and 6 per cent. Such low adjusted R-square values are not unusual for time-invariant regressors in panel models ( Allison, 2009 ).

Models B and C were estimated with RE and FE and introduce the Twitter variables of online hate speech and total count of geo-coded tweets. Model B introduces online hate speech alone, and both RE and FE results show positive significant associations with all hate crime categories. The largest effect in the RE models emerges for harassment (IRR 1.004). For every unit increase in online hate speech a corresponding 0.004 per cent unit increase is observed in the dependent. Put in other terms, an increase of 100 hate tweets would correspond to a 0.4 per cent increase, and an increase of 1,000 tweets would correspond to a 4 per cent increase in racially or religiously aggravated harassment in a given month within a given LSOA. Given we know hate speech online increases dramatically in the aftermath of trigger events (Williams and Burnap, 2015), the first example of an increase of 100 hate tweets in an LSOA is not fanciful. The magnitude of the effect with harassment, compared to the other hate offences, is also expected, given hate-related public order offences, that include causing public fear, alarm and distress, also increased most dramatically in the aftermath the ‘trigger’ events alluded to above (accounting for 56 per cent of all hate crimes recorded by police in 2017/18 ( Home Office, 2018) ). The adjusted R-square statistic for Model B shows large increases in the variance explained in the dependents by the inclusion of online hate speech as a regressor, ranging between 13 per cent and 30 per cent. Interpretation of these large increases should be tempered given time-variant regressors can exert a significant effect in panel models ( Allison, 2009 ). Nonetheless, the significant associations in both RE and FE models and the improvement in the variance explained provide strong support for Hypotheses 2 and 3.
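As a reminder of the arithmetic behind statements of this kind (a sketch using a purely hypothetical coefficient, not a value from the tables): for a Poisson or negative binomial coefficient b on a count regressor, a k-unit increase multiplies the expected offence count by exp(k × b).

```python
import math

b = 0.001  # hypothetical per-tweet coefficient, for illustration only
for k in (1, 100, 1000):
    pct = (math.exp(k * b) - 1) * 100
    print(f"{k} extra hate tweets -> {pct:.1f}% change in expected offences")
# 1 -> 0.1%, 100 -> 10.5%, 1000 -> 171.8%
```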

Model C RE and FE estimations control for total counts of geo-coded Tweets, therefore eradicating any variance explained by the hate speech regressor acting as a proxy for population density ( Malleson and Andresen, 2015 ). In all models, the direction of relationship and significance between online hate speech and hate crimes does not change, but the magnitude of the effect does decrease, indicating the regressor was likely also acting, albeit to a small extent, as proxy for population density. The FE models also include an interaction variable between the time-invariant regressor proportion of the population that is BAME and the time-variant regressor online hate speech. The interaction term was significant for all hate crime categories with the strongest effect emerging for racially and religiously aggravated violence against the person. Figure 11 presents a predicted probability plot combining both variables for the outcome of violent hate crime. In an LSOA with a 70 per cent BAME population with 300 hate tweets posted a month, the incidence rate of racially and religiously aggravated violence is predicted to be between 1.75 and 2. However, it must be borne in mind when interpreting these predictions, the skewed distribution of the sample. Just over 70 per cent of LSOAs have a BAME population of 50 per cent or less and 150 or less hate tweets per month, therefore the probability for offences in these areas is between 1 and 1.25 (lower-left dark blue region of the plot). This plot provides predictions based on the model estimates, meaning if in the future populations and hate tweets were to increase toward the upper end of the spectrums, these are the probabilities of observing the racially and religiously aggravated violence in London.
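A sketch of how such a prediction surface can be generated from a fitted count model follows (illustrative names again, with a simplified regressor set; the regressors other than the two of interest are held at their sample means).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Poisson model including the BAME x hate-tweet interaction (simplified regressor set).
model = smf.poisson(
    "hate_crimes ~ prop_bame * hate_tweets + tweet_freq + prop_unemployed",
    data=panel).fit(cov_type="HC1")

# Grid of BAME share (0-100 per cent) and monthly hate-tweet counts (0-500).
grid = pd.DataFrame(
    [(b, h) for b in np.linspace(0, 100, 21) for h in np.linspace(0, 500, 21)],
    columns=["prop_bame", "hate_tweets"])
grid["tweet_freq"] = panel["tweet_freq"].mean()
grid["prop_unemployed"] = panel["prop_unemployed"].mean()
grid["predicted_rate"] = model.predict(grid)

# grid.pivot(index="prop_bame", columns="hate_tweets", values="predicted_rate")
# can then be drawn as a heat map in the style of Figure 11.
```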

Fig. 11. Predicted probability of R & R agg. violence by BAME population proportion and hate tweet count

Our results indicate a consistent positive association between Twitter hate speech targeting race and religion and offline racially and religiously aggravated offences in London. Previous published work indicated an association around events that acted as ‘triggers’ for on and offline hate acts. This study confirms this association is consistent in the presence and absence of events. The models allowed us to provide predictions of the incidence rate of offline offences by proportion of the population that is BAME and the count of online hate tweets. The incidence rate for near three-quarters of LSOAs within London when taking into account these and other factors in the models remains below 1.25. Were the number of hate tweets sent per month to increase dramatically in an area with a high BAME population, our predictions suggest much higher incidence rates. This is noteworthy, given what we know about the impact of ‘trigger’ events and hate speech, and indicates that the role of social media in the process of hate victimization is non-trivial.

Although we were not able to directly test the role of online polarization and far right influence on the prevalence of offline hate crimes, we are confident that our focus on online hate speech acted as a ‘signature’ measure of these two phenomena. Through the various mechanisms outlined in the theoretical work presented in this article, it is plausible to conclude that hate speech posted on social media, an indicator of extreme polarization, influences the frequency of offline hate crimes. However, it is unlikely that online hate speech is directly causal of offline hate crime in isolation. It is more likely the case that social media is only part of the formula, and that local level factors, such as the demographic make-up of neighbourhoods (e.g. black and minority ethnic population proportion, unemployment) and other ecological level factors play key roles, as they always have in estimating hate crime ( Green, 1998 ; Espiritu, 2004 ; Ray et al. , 2004 ). What this study contributes is a data and theory-driven understanding of the relative importance of online hate speech in this formula. If we are to explain hate crime as a process and not a discrete act, with victimization ranging from hate speech through to violent victimization, social media must form part of that understanding ( Bowling, 1993 ; Williams and Tregidga, 2014 ).

Our results provide an opportunity to renew Bowling’s (1993) call to see racism as a continuity of violence, threat and intimidation. We concur that hate crimes must be conceptualized as a process set in geographical, social, historical and political context. We would add that ‘technological’ context is now a key part of this conceptualization. The enduring quality of hate victimization, characterized by repeated or continuous insult, threat, or violence now extends into the online arena and can be linked to its offline manifestation. We argue that hate speech on social media extends ‘climates of unsafety’ experienced by minority groups that transcend individual instances of victimization ( Stanko, 1990 ). Online hate for many minorities is part and parcel of everyday life—as Pearson et al. (1989 : 135) state ‘A black person need never have been the actual victim of a racist attack, but will remain acutely aware that she or he belongs to a group that is threatened in this manner’. This is no less true in the digital age. Social media, through various mechanisms such as unfettered use by the far right, polarization, events, and psychological processes such as deindividuation, has been widely infected with a casual low-level intolerance of the racial Other .

Our study informs the ongoing debate on ‘predictive policing’ using big data and algorithms to find patterns at scale and speed, hitherto unrealizable in law enforcement ( Kaufmann et al. , 2019 ). Much of the criminological literature is critical. The process of pattern identification further embeds existing power dynamics and biases, sharpens the focus on the symptoms and not the causes of criminality, and supports pre-emptive governance by new technological sovereigns ( Chan and Bennett Moses, 2017 ). These valid concerns pertain mainly to predictive policing efforts that apply statistical models to data on crime patterns, offender histories, administrative records and demographic area profiles. These models and data formats tend to produce outcomes that reflect existing patterns and biases because of their historical nature. Our work mitigates some of the existing pitfalls in prediction efforts in three ways: (1) The data used in estimating patterns are not produced by the police, meaning they are immune from inherent biases normally present in the official data generation process; (2) social media data are collected in real-time, reducing the error introduced by ‘old’ data that are no longer reflective of the context; and (3) viewing minority groups as likely victims and not offenders, while not addressing the existing purported bias in ongoing predictive policing efforts, demonstrates how new forms of data and technology can be tailored to achieve alternative outcomes. However, the models reported in this article are not without their flaws, and ahead of their inclusion in real-life applications, we would warn that predictions alone do not necessarily lead to good policing on the streets. As in all statistics, there are degrees of error, and models are only a crude approximation of what might be unfolding on the ground. In particular, algorithmic classification of hate speech is not perfect, and precision, accuracy and recall decays as language shifts over time and space. Therefore, any practical implementation would require a resource-intensive process that ensured algorithms were updated and tested frequently to avoid unacceptable levels of false positives and negatives.

Finally, we consider the methodological implications of this study are as significant as those outlined by Bowling (1993) . Examining the contemporary hate victimization dynamic requires methods that are able to capture both time and space variations in both online and offline data. Increasing sources of data on hate is also important due to continued low rates of reporting. We demonstrated how administrative (police records), survey (census) and new forms of data (Twitter) can be linked to study hate in the digital age. Surveys, interviews and ethnographies should be complemented by these new technological methods of enquiry to enable a more complete examination of the social processes which give rise to contemporary hate crimes. In the digital age, computational criminology, drawing on dynamic data science methods, can be used to study the patterning of online hate speech victimization and associated offline victimization. However, before criminologists and practitioners incorporate social media into their ‘data diets’, awareness of potential forms of bias in these new forms of data is essential. Williams et al . (2017a) identified several sources of bias, including variations in the use of social media (e.g. Twitter being much more popular with younger people). This is particularly pertinent given the recent abandonment of Twitter by many far right users following a clamp-down on hate speech in Europe. A reduction in this type of user may see a corresponding decrease in hate tweets, as they flock to more underground platforms, such as 8chan, 4chan, Gab and Voat, that are currently more difficult to incorporate into research and practical applications. The data used in this study were collected at a time before the social media giants introduced strict hate speech policies. Nonetheless, we would expect hate speech to be displaced, and in time data science solutions will allow us to follow the hate wherever it goes.

The government publication of ‘The Response to Racial Attacks and Harassment’ in 1989 saw a sea-change in the way criminal justice agencies, and eventually the public, viewed hate crime in the United Kingdom (Home Office, 1989). In 2019, the government published its Online Harms White Paper, which seeks to achieve the same for online hate (Cabinet Office, 2019). Over the past decade, online hate victims have failed to convince others that they are undeserving targets of harm sufficiently serious to warrant collective concern, their calls for recognition going unheard for lack of empirical credibility. This research shows that online hate victimization is part of a wider process of harm that can begin on social media and then migrate to the physical world. Qualitative work shows direct individual-level links between online and offline hate victimization (Awan and Zempi, 2017). Our study extends this to the ecological level at the scale of the UK’s largest metropolitan area. Despite this significant advancement, we were unable to examine sub-LSOA factors, meaning the individual-level mechanisms responsible for the link between online and offline hate incidents remain to be established by more forensic and possibly qualitative work. The combination of the data science-driven results of this study and future qualitative work has the potential to address the reduced capacity of the police to gain intelligence on terrestrial community tensions that lead to hate crimes. Such a technological solution may even assist in redressing the bias reportedly present in ‘predictive policing’ efforts, by refocusing the algorithmic lens away from those historically targeted by police and onto those who perpetrate harms against minorities.

This work was supported by the Economic and Social Research Council grant ‘Centre for Cyberhate Research and Policy: Real-Time Scalable Methods & Infrastructure for Modelling the Spread of Cyberhate on Social Media’ (grant number: ES/P010695/1) and the US Department of Justice National Institute of Justice grant ‘Understanding Online Hate Speech as a Motivator for Hate Crime’ (grant number: 2016-MU-MU-0009).

Allison , D. P . ( 2009 ), Fixed Effects Regression Models . Sage .


Awan , I. and Zempi , I . ( 2017 ), ‘I Will Blow Your Face Off’—Virtual and Physical World Anti-Muslim Hate Crime’, British Journal of Criminology , 57 : 362 – 80

Burnap , P. , Rana , O. , Williams , M. , Housley , W. , Edwards , A. , Morgan , J. , Sloan , L. and Conejero , J . ( 2014 ), ‘COSMOS: Towards an Integrated and Scalable Service for Analyzing Social Media on Demand’, IJPSDS , 30 : 80 – 100 .

Burnap , P. and Williams , M. L . ( 2015 ), ‘Cyber Hate Speech on Twitter: An Application of Machine Classification and Statistical Modeling for Policy and Decision Making’ , Policy & Internet. 7 : 223 – 42 .

———. ( 2016 ), ‘Us and Them: Identifying Cyber Hate on Twitter across Multiple Protected Characteristics’ . EPJ Data Science, 5 : 1 – 15

Bail , C. A. , Argyle , L. P. , Brown , T. W. , Bumpus , J. P. , Chen , H. , Hunzaker , M. B. F. , Lee , J. , Mann , M. , Merhout , F. and Volfovsky , A . ( 2018 ), ‘Exposure to Opposing Views on Social Media Can Increase Political Polarization’ PNAS , 115 : 9216 – 21 .

Bobo , L. and Licari , F. C . ( 1989 ), ‘Education and Political Tolerance: Testing The Effects of Cognitive Sophistication and Target Group Affect’ , Public Opinion Quarterly 53 : 285 – 308 .

Bowling , B . ( 1993 ), ‘Racial Harassment and The Process of Victimisation: Conceptual and Methodological Implications for The Local Crime Survey’ , British Journal of Criminology , 33 : 231 – 50 .

Boxell , L. , Gentzkow , M. , and Shapiro , J. M . ( 2017 ), ‘Greater Internet Use Is Not Associated With Faster Growth In Political Polarization Among Us Demographic Groups’ , PNAS , 114 : 10612 – 0617 .

Brady , W. J. , Wills , J. A. , Jost , J. T. , Tucker , J. A. and Van Bavel , J. J . ( 2017 ), ‘Emotion Shapes The Diffusion of Moralized Content in Social Networks’ , PNAS , 114 : 7313 – 18 .

Cabinet Office. ( 2019 ) Internet Safety White Paper . Cabinet Office

Chan , J. and Bennett Moses , L . ( 2017 ), ‘Making Sense of Big Data for Security’ , British Journal of Criminology , 57 : 299 – 319 .

Chokshi , N . ( 2019 ), PewDiePie in Spotlight After New Zealand Shooting . New York Times .

CPS. ( 2018 ), Hate Crime Report 2017–18 . Crown Prosecutions Service .

Crest ( 2017 ), Russian Influence and Interference Measures Following the 2017 UK Terrorist Attacks . Centre for Research and Evidence on Security Threats .

Dubois , E. and Blank , G . ( 2017 ), ‘The Echo Chamber is Over-Stated: The Moderating Effect of Political Interest and Diverse Media’ , Information, Communication & Society , 21 : 729 – 45 .

Demos. ( 2017 ), Anti-Islamic Content on Twitter . Demos

Espiritu , A . ( 2004 ), ‘Racial Diversity and Hate Crime Incidents’ , The Social Science Journal , 41 : 197 – 208 .

Green , D. P. , Strolovitch , D. Z. and Wong , J. S . ( 1998 ), ‘Defended Neighbourhoods, Integration and Racially Motivated Crime’ , American Journal of Sociology , 104 : 372 – 403 .

Greer , C. and McLaughlin , E . ( 2010 ), ‘We Predict a Riot? Public Order Policing, New Media Environments and the Rise of the Citizen Journalist’ , British Journal of Criminology , 50 : 1041 – 059 .

Greig-Midlane , J . ( 2014 ), Changing the Beat? The Impact of Austerity on the Neighbourhood Policing Workforce . Cardiff University .

Hanes , E. and Machin , S . ( 2014 ), ‘Hate Crime in the Wake of Terror Attacks: Evidence from 7/7 and 9/11’ , Journal of Contemporary Criminal Justice , 30 : 247 – 67 .

Hawdon , J. , Oksanen , A. and Räsänen , P . ( 2017 ), ‘Exposure To Online Hate In Four Nations: A Cross-National Consideration’ , Deviant Behavior , 38 : 254 – 66 .

Hern , A . ( 2018 ), Facebook Protects Far-Right Activists Even After Rule Breaches . The Guardian .

HMICFRS. ( 2018 ), Understanding the Difference: The Initial Police Response to Hate Crime . Her Majesty’s Inspectorate of Constabulary and Fire and Rescue Service .

Home Office. ( 1989 ), The Response to Racial Attacks and Harassment: Guidance for the Statutory Agencies, Report of the Inter-Departmental Racial Attacks Group . Home Office .

———. ( 2018 ), Hate Crime, England and Wales 2017/18 . Home Office .

Hope Not Hate. ( 2019 ), State of Hate 2019 . Hope Not Hate .

Howard , P. N. and Kollanyi , B . ( 2016 ), Bots, #StrongerIn, and #Brexit: Computational Propaganda during the UK-EU Referendum . Unpublished Research Note. Oxford University Press.

Kaufmann , M. , Egbert , S. and Leese , M . ( 2019 ), ‘Predictive Policing and the Politics of Patterns’ , British Journal of Criminology , 59 : 674 – 92 .

Lehman , J . ( 2014 ), A Brief Explanation of the Overton Window . Mackinac Center for Public Policy .

Malleson , N. and Andresen , M. A . ( 2015 ), ‘Spatio-temporal Crime Hotspots and The Ambient Population’ , Crime Science , 4 : 1 – 8 .

Müller , K. and Schwarz , C. ( 2018a ), Making America Hate Again? Twitter and Hate Crime Under Trump . Unpublished working paper. University of Warwick.

———. ( 2018b ), Fanning the Flames of Hate: Social Media and Hate Crime . Unpublished working paper. University of Warwick.

Nandi , A. , Luthra , R. , Saggar , S. and Benzeval , M . ( 2017 ), The Prevalence and Persistence of Ethnic and Racial Harassment and Its Impact on Health: A Longitudinal Analysis . University of Essex .

Ofcom. ( 2018a ), Children and Parents: Media Use and Attitudes . Ofcom

———. ( 2018b ), News Consumption in the UK: 2018 . Ofcom .

———. ( 2018c ), Adults’ Media Use and Attitudes Report . Ofcom

ONS. ( 2017 ), CSEW Estimates of Number of Race and Religion Related Hate Crime in England and Wales, 12 Months Averages, Year Ending March 2014 to Year Ending March 2017 . Office for National Statistics .

Pearson , G. , Sampson , A. , Blagg , H. , Stubbs , P. and Smith , D. J . ( 1989 ), ‘Policing Racism’, in R. Morgan and D. J. Smith , eds., Coming to Terms with Policing: Perspectives on Policy . Routledge .

Peddell , D. , Eyre , M. , McManus , M. and Bonworth , J . ( 2016 ), ‘Influences and Vulnerabilities in Radicalised Lone Actor Terrorists: UK Practitioner Perspectives’ , International Journal of Police Science and Management , 18 : 63 – 76 .

Perry , B. and Olsson , P . ( 2009 ), ‘Cyberhate: The Globalisation of Hate’ , Information & Communications Technology Law , 18 : 185 – 99 .

Pew Research Centre. ( 2018 ), Americans Still Prefer Watching to Reading the News . Pew Research Centre .

Rawlinson , K . ( 2018 ), Finsbury Park-accused Trawled for Far-right Groups Online, Court Told . The Guardian .

Ray , L. , Smith , D. and Wastell , L . ( 2004 ), ‘Shame, Rage and Racist Violence’ , British Journal of Criminology , 44 : 350 – 68 .

Roberts, C., Innes, M., Williams, M. L., Tregidga, J. and Gadd, D. (2013), Understanding Who Commits Hate Crimes and Why They Do It [Project Report]. Welsh Government.

van Rijsbergen , C. J . ( 1979 ), Information Retrieval (2nd ed.), Butterworth .

Sampson , R. J . ( 2012 ), Great American City: Chicago and the Enduring Neighborhood Effect . University of Chicago Press .

Stanko . ( 1990 ), Everyday Violence . Pandora .

Stephan , W. G. and Stephan , C. W . ( 2000 ), An Integrated Threat Theory of Prejudice . Lawrence Erlbaum Associates .

Sunstein , C. R . ( 2017 ), #Republic: Divided Democracy in the Age of Social Media . Princeton University Press .

Troeger, V. E. (2008), ‘Problematic Choices: Testing for Correlated Unit Specific Effects in Panel Data’, Presented at 25th Annual Summer Conference of the Society for Political Methodology, 9–12 July 2008.

Williams, M. L. (2006), Virtually Criminal: Crime, Deviance and Regulation Online . Routledge.

Williams , M. and Burnap , P . ( 2016 ), ‘Cyberhate on Social Media in the Aftermath of Woolwich: A Case Study in Computational Criminology and Big Data’ , British Journal of Criminology , 56 : 211 – 38 .

———. ( 2018 ), Antisemitic Content on Twitter . Community Security Trust .

Williams , M. and Tregidga , J . ( 2014 ), ‘Hate Crime Victimisation in Wales: Psychological and Physical Impacts Across Seven Hate Crime Victim-types’ , British Journal of Criminology , 54 : 946 – 67 .

Williams , M. L. , Burnap , P. and Sloan , L. ( 2017a ), ‘Crime Sensing With Big Data: The Affordances and Limitations of Using Open-source Communications to Estimate Crime Patterns’ , The British Journal of Criminology , 57 : 320 – 40.

———. ( 2017b ), ‘Towards an Ethical Framework for Publishing Twitter Data in Social Research: Taking into Account Users’ Views, Online Context and Algorithmic Estimation’ , Sociology , 51 : 1149 – 68 .

Williams, M. L., Eccles-Williams, H. and Piasecka, I. (2019), Hatred Behind the Screens: A Report on the Rise of Online Hate Speech . Mishcon de Reya.

Williams, M. L., Edwards, A. E., Housley, W., Burnap, P., Rana, O. F., Avis, N. J., Morgan, J. and Sloan, L. (2013), ‘Policing Cyber-Neighbourhoods: Tension Monitoring and Social Media Networks’, Policing and Society , 23: 461–81.

Wooldridge , J. M . ( 1999 ), ‘Distribution-Free Estimation of Some Nonlinear Panel Data Models’ , Journal of Econometrics , 90 : 77 – 97 .

For current CPS guidance on what constitutes an online hate offence see: https://www.cps.gov.uk/legal-guidance/social-media-guidelines-prosecuting-cases-involving-communications-sent-social-media .

Not all hate speech identified reaches the threshold for a criminal offence in England and Wales.

These are not actual tweets from the dataset but are instead constructed illustrations that maintain the original meaning of authentic posts while preserving the anonymity of tweeters (see Williams et al. 2017b for a fuller discussion of ethics of social media research).

Other census measures were excluded due to multicollinearity, including religion.

To determine if RE or FE is preferred, the Hausman test can be used. However, this has been shown to be inefficient, and we prefer not to rely on it for interpreting our models (see Troeger, 2008 ). Therefore, both RE and FE results should be considered together.

See https://www.statalist.org/forums/forum/general-stata-discussion/general/1323497-choosing-between-xtnbreg-fe-bootstrap-and-xtpoisson-fe-cluster-robust .
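For readers who want to see what reporting both specifications side by side might look like outside Stata, here is a minimal linear sketch using the Python `linearmodels` package and the hypothetical LSOA-month panel from the earlier sketch. It is illustrative only: the covariate names are assumptions, and the article's actual models are count models (negative binomial/Poisson), not linear panel regressions.

```python
# Minimal sketch: fitting fixed effects (FE) and random effects (RE)
# specifications on the hypothetical LSOA-month panel built earlier (`panel`).
# Covariate names are assumptions; the article's models are count models.
import pandas as pd
import statsmodels.api as sm
from linearmodels.panel import PanelOLS, RandomEffects

panel["month"] = pd.to_datetime(panel["month"])
df = panel.set_index(["lsoa", "month"])          # (entity, time) MultiIndex

y = df["hate_crimes"]
X = df[["hate_tweets", "pct_minority"]]

fe_res = PanelOLS(y, X, entity_effects=True).fit(
    cov_type="clustered", cluster_entity=True
)
re_res = RandomEffects(y, sm.add_constant(X)).fit()

# Following the note above, report both rather than relying on a Hausman test.
print(fe_res.summary)
print(re_res.summary)
```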



Online hate, digital discourse and critique: Exploring digitally-mediated discursive practices of gender-based hostility

Majid KhosraviNik is Senior Lecturer in Digital Media & Discourse Studies at Newcastle University. He teaches modules on Digital Discourses & Identity and Politics, Power & Communication at the School of Arts & Cultures while supervising a number of doctoral and post-doctoral projects. He has published widely on critical discourse studies, including immigration discourses, self and other representation, national identity, right-wing populism, and regional identities in the Middle East. He is specifically interested in digital media discursive practices. Majid researches the intersection of the participatory web, discourse and politics by investigating the impact, dynamics and challenges of social media technologies within a Social Media Critical Discourse Studies (SM-CDS) model. Majid is a founder of Newcastle Critical Discourse Studies, sits on the editorial boards of Critical Discourse Studies (Routledge) and the Journal of Language & Politics (John Benjamins), and acts as an expert evaluator and moderator for a range of leading international publishers and research grant organizations, including the European Commission.

Eleonora Esposito is Marie Skłodowska-Curie Research Fellow at the University of Navarra (Spain). She holds an M.A. in Cultural and Postcolonial Studies (University of Naples L’Orientale, 2010) and a PhD / Doctor Europaeus in English Linguistics (University of Naples Federico II, 2015). Her research interests lie in the field of Language, Politics, Gender and Society in the European Union and the Anglophone Caribbean, investigated in the light of Critical Discourse Studies, Multimodal Studies and Translation Studies. Currently, Eleonora is exploring new theoretical perspectives and integrated methodologies for the critical investigation of Social Media Discourses, with a focus on online hostility and misogyny.

The communicative affordances of the participatory web have opened up new and multifarious channels for the proliferation of hate. In particular, women navigating the cybersphere seem to be the target of a disproportionate amount of hostility. This paper explores the contexts, approaches and conceptual synergies around research on online misogyny within the new communicative paradigm of social media communication (KhosraviNik 2017a: 582). The paper builds on the core principle that online misogyny is demonstrably and inherently a discourse; therefore, the field is envisaged at the intersection of digital media scholarship, discourse theorization and critical feminist explications. As an ever-burgeoning phenomenon, online hate has been approached from a range of disciplinary perspectives but has only been partially mapped at the interface of meaning-making contents/processes and new mediation technologies. The paper aims to advance the state of the art by investigating online hate in general, and misogyny in particular, from the vantage point of Social Media Critical Discourse Studies (SM-CDS), an emerging model of theorization and operationalization of research combining tenets from Critical Discourse Studies with scholarship in digital media and technology research (KhosraviNik 2014, 2017a, 2018). Our SM-CDS approach to online misogyny demarcates itself from any insinuation that the phenomenon can be reduced to digital communicative affordances per se, and argues in favor of a double critical contextualization of research findings at both the digital participatory and the social and cultural levels.


Allen, Joseph, David Szwedo & Amori Mikami. 2012. Social networking site use predicts changes in young adults’ psychological adjustment. Journal of Research on Adolescence 22. 453–466. 10.1111/j.1532-7795.2012.00788.x Search in Google Scholar

Amichai, Hamburger, Yair & Katelyn Y.A. McKenna. 2006. The Contact Hypothesis Reconsidered: Interacting via the Internet. Journal of Computer Mediated Communication 11. 825–843. 10.1111/j.1083-6101.2006.00037.x Search in Google Scholar

Androutsopoulos, Jannis. 2008. Potentials and limitations of discourse-centred online ethnography. Language@ Internet 5(8). Search in Google Scholar

Anti-Defamation, League. 2010. Responding to cyberhate: Toolkit for action. Retrieved from: http://www.adl.org/sites/default/files/documents/assets/pdf/combating-hate/ADL-Responding-to-Cyberhate-Toolkit.pdf (accessed 12/4/2018). Search in Google Scholar

Baider, Fabienne. 2018. “Go to hell fucking faggots, may you die!” Framing the LGBT subject in online comments. Lodz Papers in Pragmatics 14(1). 69–92. 10.1515/lpp-2018-0004 Search in Google Scholar

Baider, Fabienne & Maria Constantinou. 2014. Language of cyber-politics: ‘Imaging/ imagining’ communities. Lodz Papers in Pragmatics 10(2). 213–244. 10.1515/lpp-2014-0012 Search in Google Scholar

Baker, Paul & Jesse Egbert (eds.). 2016. Triangulating Methodological Approaches in Corpus-Linguistic Research . London: Routledge. 10.4324/9781315724812 Search in Google Scholar

Banks, James. 2010. Regulating hate speech online. International Review of Law, Computers and Technology 24 (3). 233–239. 10.1080/13600869.2010.522323 Search in Google Scholar

Barton, David & Carmen Lee. 2013. Language Online: Investigating Digital Texts and Practices . Abingdon: Routledge. 10.4324/9780203552308 Search in Google Scholar

Butler, Judith. 2009. Performativity, precarity and sexual politics. Revista de Antropología Iberoamericana 4(3). i–xiii. 10.11156/aibr.040303e Search in Google Scholar

Chakraborti, Neil, Jon Garland & Stevie-Jade Hardy. 2014. The Leicester Hate Crime Project. Findings and Conclusions . Leicester: University of Leicester. Search in Google Scholar

Citron, Keats. 2009. Cyber civil rights. Boston University Law Review 89. 61–125. Search in Google Scholar

Citron, Keats & Helen L. Norton. 2011. Intermediaries and hate speech: Fostering digital citizenship for our information age. Boston University Law Review 91(4).1435–1484. Search in Google Scholar

Coleman, Gabriella. 2002. Phreaks, hackers, and trolls: The politics of transgression and spectacle. In Michael Mandiberg (ed.), The Social Media Reader 99–119. New York and London: New York University Press. 10.18574/nyu/9780814763025.003.0012 Search in Google Scholar

Consalvo, Mia & Charles Ess. 2011. The Handbook of Internet Studies West Sussex: Wiley. 10.1002/9781444314861 Search in Google Scholar

Couldry, Nick. 2012. Media, Society, World: Social Theory and Digital Media Cambridge: Polity Press. Search in Google Scholar

Davidson, Julia & Elena Martelozzo. 2013. Exploring young people’s use of social networking sites and digital media in the internet safety context: A comparison of the UK and Bahrain. Information, Communication & Society 16. 1–21. 10.1080/1369118X.2012.701655 Search in Google Scholar

Dutton, William H. (ed.). 2013. The Oxford Handbook of Internet Studies Oxford: Oxford University Press. 10.1093/oxfordhb/9780199589074.001.0001 Search in Google Scholar

Epley, Nicholas & Justin Kruger. 2005. When what you type isn’t what they read: The perseverance of stereotypes and expectancies over e-mail. Journal of Experimental Social Psychology 41(4). 414–422. 10.1016/j.jesp.2004.08.005 Search in Google Scholar

Fairclough, Norman. 1995. Critical Discourse Analysis: Papers in the Critical Study of Language . London: Longman. Search in Google Scholar

Fairclough, Norman. 2003. Analyzing Discourse: Textual Analysis for Social Research . London: Routledge. 10.4324/9780203697078 Search in Google Scholar

Fairclough, Norman & Ruth Wodak. 1997. Critical Discourse Analysis. In Teun van Dijk (ed.), Discourse Studies. A Multidisciplinary Introduction - Vol. 2. Discourse as Social Interaction 258–84. London: SAGE. Search in Google Scholar

Fichman, Pnina & Madelyn Sanfilippo. 2016. Online Trolling and Its Perpetrators: Under the Cyberbridge Lanham: Rowman and Littlefield. Search in Google Scholar

Foucault, Michel. 1971. Nietzsche, la genealogie, l’histoire. In Suzanne Bachelard, Georges Canguilhem, François Dagognet (eds.), Hommage à Jean Hyppolite 145–72. Paris: Presses Universitaire de France. Search in Google Scholar

Granovetter, Mark. 1973. The strength of weak ties. American Journal of Sociology 78(6).1360–1380. 10.1086/225469 Search in Google Scholar

Guadagno, Rosanna & Robert Cialdini. 2002. Online persuasion: An examination of gender differences in computer-mediated interpersonal influence. Group Dynamics: Theory, Research, and Practice 6(1). 38–51. 10.1037/1089-2699.6.1.38 Search in Google Scholar

Hardaker, Claire. 2013. What is turning so many young men into trolls? The Guardian Retrieved from: http://www.theguardian.com/media/2013/aug/03/how-to-stop-trolls-social-media (accessed 12/4/2018). Search in Google Scholar

Henry, Nicola & Anastasia Powell. 2018. Technology-facilitated sexual violence: a literature review of empirical research. Trauma, Violence, & Abuse 19 (2). 195–208. 10.1177/1524838016650189 Search in Google Scholar

Herring, Susan. 2001 Computer-Mediated Discourse. In Deborah Tannen, Deborah Schiffrin & Heidi Hamilton (eds.), Handbook of Discourse Analysis, 612–634. Oxford: Blackwell. 10.1002/9780470753460.ch32 Search in Google Scholar

Herring, Susan. 2004. Computer-Mediated Discourse Analysis: An Approach to Researching Online Behavior. In Sasha A. Barab, Rob Kling & James H. Gray (eds.), Designing for Virtual Communities in the Service of Learning, 339–376. New York: Cambridge University Press. 10.1017/CBO9780511805080.016 Search in Google Scholar

Herring, Susan, Kirk Job-Sluder, Rebecca Scheckler & Sasha Barab. 2002. Searching for safety online: Managing "trolling" in a feminist forum. Information Society 18(5). 371–384. 10.1080/01972240290108186 Search in Google Scholar

Hine, Christine. 2000. Virtual Ethnography . London: Sage. 10.4135/9780857020277 Search in Google Scholar

Hunsinger, Jeremy, Lisbeth Klastrup & Matthew M. Allen (eds.). 2010. International Handbook of Internet Research London, New York: Springer. 10.1007/978-1-4020-9789-8 Search in Google Scholar

Jane, Emma A. 2014. “Your a ugly, whorish, slut” – understanding e-bile. Feminist Media Studies 14(4). 531–546. 10.1080/14680777.2012.741073 Search in Google Scholar

Jane, Emma A. 2015. Flaming? What flaming? The pitfalls and potentials of researching online hostility. Ethics and Information Technology 17(1). 65–87. 10.1007/s10676-015-9362-0 Search in Google Scholar

Jane, Emma A. 2016. Online misogyny and feminist digilantism, Continuum 30(3). 284–297, 10.1080/10304312.2016.1166560 Search in Google Scholar

Jane, Emma A. 2017. Misogyny Online: A Short (and Brutish) History . London: Sage. 10.4135/9781473916029 Search in Google Scholar

Joinson, Adam. 2003. Understanding the Psychology of Internet Behaviour: Virtual Worlds, Real Lives . New York: Palgrave Macmillan. Search in Google Scholar

Kaufer, David. 2000. Flaming: A White Paper Pittsburgh, PA: Department of English, Carnegie Mellon University. Search in Google Scholar

Keipi, Teo, Matti Näsi, Atte Oksanen & Pekka Räsänen. 2017. Online Hate and Harmful Content: Cross-National Perspectives . London: Routledge. 10.4324/9781315628370 Search in Google Scholar

KhosraviNik, Majid. 2010. Actor descriptions, action attributions, and argumentation: towards a systematization of CDA analytical categories in the representation of social groups. Critical Discourse Studies 7(1).55–72. 10.1080/17405900903453948 Search in Google Scholar

KhosraviNik, Majid. 2014. Critical discourse analysis, power and new media discourse. In Yusuf Kalyango & Monika Kopytowska (eds.), Why Discourse Matters: Negotiating Identity in the Mediatized World 287–306. New York: Peter Lang. Search in Google Scholar

KhosraviNik, Majid. 2015a. Discourse, Identity and Legitimacy: Self and Other Representation in Discourses on Iran’s Nuclear Programme Amsterdam: John Benjamins. 10.1075/dapsac.62 Search in Google Scholar

KhosraviNik, Majid. 2015b. Macro and micro legitimation in discourse on Iran’s nuclear programme: The case of Iranian national newspaper Kayhan Discourse & Society 21(1). 52–73. 10.1177/0957926514541345 Search in Google Scholar

KhosraviNik, Majid. 2017a. Social Media Critical Discourse Studies (SM‐CDS). In John Flowerdew & John E. Richardson (eds.), Handbook of Critical Discourse Analysis 583–596. London: Routledge. 10.4324/9781315739342-40 Search in Google Scholar

KhosraviNik, Majid. 2017b. Right wing populism in the West: Social Media Discourse and Echo Chambers. Insight Turkey 19(3). 53–68. 10.25253/99.2017193.04 Search in Google Scholar

KhosraviNik, Majid. 2018. Social Media Techno-Discursive Design, Affective Communication and Contemporary Politics Fundan Journal of the Humanities and Social Sciences 2018 (ePub ahead of Print), 1–16. Search in Google Scholar

KhosraviNik, Majid & Darren Kelsey. Forthcoming. Social Media, Discourse and Politics . London: Bloomsbury. Search in Google Scholar

KhosraviNik, Majid & Mahrou Zia. 2014. Persian nationalism, identity and anti-Arab sentiments in Iranian Facebook discourses: Critical discourse analysis and social media communication. Journal of Language and Politics 13(4). 755–780 10.1075/jlp.13.1.08kho Search in Google Scholar

KhosraviNik, Majid & Johann Unger 2016. Critical discourse studies and social media: Power, resistance and critique in changing media ecologies. In Ruth Wodak and Michael Meyer (eds.), Methods of Critical Discourse Analysis , 3 rd edn, 205–234. London: Sage. Search in Google Scholar

KhosraviNik, Majid & Nadia Sarkhoh. 2017. Arabism and anti-Persian sentiments on participatory web: A social media critical discourse study (SM-CDS). International Journal of Communication 11. 3614–3633. 10.4324/9781315739342-40 Search in Google Scholar

Kiesler, Sara, Jane Siegel & Timothy W. McGuire. 1984. Social psychological aspects of computer-mediated communication. American Psychologist 39(10). 1123–1134. 10.1037/0003-066X.39.10.1123 Search in Google Scholar

Kopytowska, Monika. 2013. Blogging as the mediatization of politics and a new form of social interaction - a case study of Polish and British political blogs. In Piotr Cap & Urszula Okulska (eds.) Analyzing Genres in Political Communication 379–421. Amsterdam: John Benjamins. 10.1075/dapsac.50.15kop Search in Google Scholar

Kopytowska, Monika. 2015. Mediating identity, ideology and values in the public sphere: towards a new model of (constructed) social reality Lodz Papers in Pragmatics 11(2). 133–156. 10.1515/lpp-2015-0008 Search in Google Scholar

Kopytowska, Monika. 2017. Introduction: Discourses of Hate and Radicalism in Action. In Monika Kopytowska (ed.), Contemporary Discourses of Hate and Radicalism across Space and Genres 1–12. Amsterdam: John Benjamins. 10.1075/bct.93 Search in Google Scholar

Kopytowska, Monika & Fabienne Baider. 2017. From stereotypes and prejudice to verbal and physical violence: Hate speech in context. Lodz Papers in Pragmatics . 13(2). 133–152. 10.1515/lpp-2017-0008 Search in Google Scholar

Kopytowska, Monika, Łukasz Grabowski & Julita Woźniak. 2017. Mobilizing against the Other: cyberhate, refugee crisis and proximization. In Monika Kopytowska (ed.), Contemporary Discourses of Hate and Radicalism across Space and Genres 57–98. Amsterdam: John Benjamins. 10.1075/bct.93.11kop Search in Google Scholar

Kopytowska Monika & Paul Chilton. 2018. “Rivers of Blood”: Migration, fear and threat construction. Lodz Papers in Pragmatics 14(1). 133–161. 10.1515/lpp-2018-0007 Search in Google Scholar

Kress, Gunther & Theo van Leeuwen. (2006) [1996]. Reading Images: The Grammar of Visual Design (2 nd edn.). New York: Routledge. 10.4324/9780203619728 Search in Google Scholar

Lange, Patricia 2006. What is your claim to flame? First Monday 11(9). Retrieved from: http://firstmonday.org/ojs/index.php/fm/article/view/1393 (accessed 2/4/2018). 10.5210/fm.v11i9.1393 Search in Google Scholar

Langton, Rae. 1993. Speech acts and unspeakable acts. Philosophy & Public Affairs 22(4). 293–330. 10.1093/acprof:oso/9780199247066.003.0002 Search in Google Scholar

Langton, Rae. 2012. Beyond Belief: Pragmatics in Hate Speech and Pornography. In Ishani Maitra & Mary Kate McGowan (eds.), Speech & Harm. Controversies over Free Speech , 72–93. Oxford: Oxford University Press. 10.1093/acprof:oso/9780199236282.003.0004 Search in Google Scholar

Lazar, Michelle. 2007. Feminist Critical Discourse Analysis: Articulating a Feminist Discourse Praxis. Critical Discourse Studies 4(2). 141–164. 10.1080/17405900701464816 Search in Google Scholar

Lea, Martin, Tim O’Shea, Pat Fung & Russell Spears. 1992. Flaming in computer-mediated communication—Observations, explanations, implications. In Martin Lea (ed.), Contexts of computer mediated communication 89–112. New York: Harvester-Wheatsheaf. Search in Google Scholar

Lewandowska-Tomaszczyk, Barbara. 2017. Incivility and confrontation in online conflict discourses. Lodz Papers in Pragmatics 13(2) 347–367. 10.1515/lpp-2017-0017 Search in Google Scholar

Lillian, Donna. 2007. A thorn by any other name: Sexist discourse as hate speech. Discourse & Society 18(6). 719–740. 10.1177/0957926507082193 Search in Google Scholar

Machin, David & Andrea Mayr. 2012. How to Do Critical Discourse Analysis: A Multimodal Introduction . London: Sage. Search in Google Scholar

Marwick, Alice E. & Danah Boyd. 2014. Networked privacy: How teenagers negotiate context in social media. New Media & Society 16(7). 1051–1067. 10.1177/1461444814543995 Search in Google Scholar

McKenna, Katelyn Y.A. & John A. Bargh. 2000. Plan 9 from Cyberspace: The Implications of the Internet for Personality and Social Psychology. Personality and Social Psychology Review 4. 57–75. 10.1207/S15327957PSPR0401_6 Search in Google Scholar

Moor, Peter, Ard Heuvelman & Ria Verleur. 2010. Flaming on YouTube. Computers in Human Behavior 26. 1536–1546. 10.1016/j.chb.2010.05.023 Search in Google Scholar

O’Sullivan, Patrick & Andrew J. Flanagin. 2003. Reconceptualizing ‘flaming’ and other problematic messages. New Media and Society 5(1). 69–94. 10.1177/1461444803005001908 Search in Google Scholar

Organization for Security and Cooperation in Europe (OSCE) / Office for Democratic Institutions and Human Rights (ODIHR). 2009. Hate crime laws: A practical guide Retrieved from: http://www.osce.org/odihr/36426 (accessed 12/4/2018). Search in Google Scholar

Page, Ruth, David Barton, Johann W. Unger, & Michele Zappavigna. 2014. Researching Language and Social Media: A Student Guide . London: Routledge. 10.4324/9781315771786 Search in Google Scholar

Phillips, Whitney. 2011. LOLing at tragedy: Facebook trolls, memorial pages and resistance to grief online. First Monday 16(12). Retrieved from: http://firstMonday.org/ojs/index.php/fm/article/view/3168 (accessed 12/4/2018). 10.5210/fm.v16i12.3168 Search in Google Scholar

Phillips, Whitney. 2012. The house that fox built: Anonymous, spectacle, and cycles of amplification. Television and New Media 14(6). 494–509. 10.1177/1527476412452799 Search in Google Scholar

Postmes, Tom, Russell Spears & Martin Lea. 2002. Intergroup differentiation in computer-mediated communication: Effects of depersonalization. Group Dynamics 6(1). 3–16. 10.1037/1089-2699.6.1.3 Search in Google Scholar

Powell, Anastasia and Nicola Henry. 2017. Sexual Violence in a Digital Age . Palgrave Macmillan. 10.1057/978-1-137-58047-4 Search in Google Scholar

Ragnedda, Massimo & Glenn Muschert (eds.) 2013. The Digital Divide: The Internet and Social Inequality in International Perspective . London: Routledge. 10.4324/9780203069769 Search in Google Scholar

Reisigl, Martin. 2014. Argumentation analysis and the Discourse-Historical Approach. A Methodological Framework. In Christopher Hart & Piotr Cap (eds.), Contemporary Critical Discourse Studies , 67–96. London: Bloomsbury. Search in Google Scholar

Reisigl, Martin & Ruth Wodak. 2001. Discourse and Discrimination: Rhetoric of Racism and Anti-Semitism . London: Routledge. Search in Google Scholar

Reisigl, Martin & Ruth Wodak. 2009. The Discourse-Historical Approach. In Ruth Wodak and Michael Meyer (eds.), Methods of Critical Discourse Analysis , 2 nd edn, 87–121. London: Sage. Search in Google Scholar

Sarkeesiaan, Anita. 2015. “Stop the Trolls. Women Fight Back Online Harassment”. Women in the World . Retrieved from: https://www.youtube.com/watch?v=BGrlk8_kevI (accessed 12/4/2018). Search in Google Scholar

Siegel, Jane, Vitaly Dubrovsky, Sara Kiesler & Timothy W. McGuire. 1986. Group processes in computer-mediated communication. Organizational Behavior and Human Decision Processes 37(2). 157–187. 10.1016/0749-5978(86)90050-6 Search in Google Scholar

Suler, John & Wende Phillips. 1998. The Bad Boys of Cyberspace: Deviant Behavior in a Multimedia Chat Community. CyberPsychology & Behavior 1(3). 275–294. 10.1089/cpb.1998.1.275 Search in Google Scholar

Thurlow, Crispin, Laura Lengel & Alice Tomic. 2009. Computer Mediated Communication: Social Interaction and the Internet . London: Sage. Search in Google Scholar

Titley, Gavan. 2012. Hate Speech Online: Considerations for the Proposed Campaign. Starting Points for Combating Hate Speech Online . Strasbourg: Council of Europe. Retrieved from: https://rm.coe.int/1680665ba7 (accessed 12/4/2018. Search in Google Scholar

Turnage, Anna. 2007. Email flaming behaviors and organizational conflict. Journal of Computer-Mediated Communication 13. 43–59. 10.1111/j.1083-6101.2007.00385.x Search in Google Scholar

Waldron, Jeremy. 2012. The Harm in Hate Speech . Cambridge: Harvard University Press. 10.4159/harvard.9780674065086 Search in Google Scholar

Wallace, Patricia. 2016. The Psychology of the Internet . New York: Cambridge University Press. Search in Google Scholar

Weber, Anne. 2009. Manual on hate speech, Council of Europe Publishing. Retrieved from: https://www.coe.int/t/dghl/standardsetting/hrpolicy/Publications/Hate_Speech_EN.pdf (accessed 12/4/2018). Search in Google Scholar

Weisband, Suzanne & Leanne Atwater. 1999. Evaluating Self and Others in Electronic and Face-to-Face Groups. Journal of Applied Psychology 84. 632– 639. 10.1037/0021-9010.84.4.632 Search in Google Scholar

Whillock, Rita Kirk. 1995. The use of hate as a stratagem for achieving political and social goals. In Rita Kirk Whillock & David Slayden (eds.), Hate Speech 28–54. London: Sage. Search in Google Scholar

Wodak, Ruth & Michael Meyer. 2016. Methods of Critical Discourse Analysis , 3 rd edn. London: Sage. Search in Google Scholar


New UN policy paper launched to counter and address online hate

Governments and Internet companies are failing to meet the challenges of online hate.


The UN Office on Genocide Prevention and the Responsibility to Protect launched a new policy paper on Wednesday aimed at countering and addressing hate speech online.  

The policy paper,  Countering and Addressing Online Hate Speech: A Guide for Policy Makers and Practitioners , was developed jointly by the UN Office with the Economic and Social Research Council (ESRC) Human Rights, Big Data and Technology Project, at the UK’s University of Essex. 

‘Unprecedented speed’ 

“We have seen across the world, and time, how social media has become a major vehicle in spreading hate speech at an unprecedented speed, threatening freedom of expression and a thriving public debate,” said Alice Wairimu Nderitu, Special Adviser to the UN Secretary-General on the Prevention of Genocide, who is the global focal point on the issue.

“We saw how the perpetrators in the incidents of identity-based violence used online hate to target, dehumanize and attack others, many of whom are already the most marginalized in society, including ethnic, religious, national or racial minorities, refugees and migrants, women and people with diverse sexual orientation, gender identity, gender expression, and sex characteristics,” said Ms. Nderitu. 

Key recommendations include: 

  • Ensuring respect for human rights and the rule of law when countering online hate speech, and applying these standards to content moderation, content curation and regulation. 
  • Enhancing transparency of content moderation, content curation and regulation. 
  • Promoting positive narratives to counter online hate speech, and fostering user engagement and empowerment. 
  • Ensuring accountability, strengthening judicial mechanisms and enhancing independent oversight mechanisms. 
  • Strengthening multilateral and multi-stakeholder cooperation. 
  • Advancing community-based voices and formulating context-sensitive and knowledge-based policymaking and good practice to protect and empower vulnerable groups and populations to counter online hate speech. 

The policy paper builds upon earlier initiatives, including  The UN Strategy and Plan of Action on Hate Speech , which seeks to enhance the UN’s response to the global spread and impact of hate speech. 

The Strategy makes a firm commitment to step up coordinated action to tackle hate speech, both at global and national levels, including the use of new technologies and engaging with social media to address online hate speech and promote positive narratives. 

Role for tech, social media 

“Digital technologies and social media play a crucial role in tackling hate speech, through outreach, awareness-raising, providing access to information, and education,” noted the Special Adviser. 

“The transformation of our lives into a hybrid format, with the share of our life spent online ever increasing, ensuring that we all enjoy the same rights online as we do offline has become ever more important,” noted Dr. Ahmed Shaheed, Deputy Director, Essex Human Rights, Big Data and Technology Project and former UN  Special Rapporteur on Freedom of Religion or Belief . 

‘Mass atrocities’ 

He warned of “the acts of violence that follow from online incitement to violence, including mass atrocities”, beyond the digital divides created by online hate. 

“Unfortunately, our investment in countering online hate has not yet matched the reality of its dissemination and impact online. And it remains our responsibility – all relevant stakeholders – to step up our efforts to preserve the hard-won gains achieved to-date in advancing non-discrimination and equality,” concluded Special Adviser Nderitu. 


Hate speech, toxicity detection in online social media: a recent survey of state of the art and opportunities

  • Regular Contribution
  • Published: 25 September 2023
  • Volume 23, pages 577–608 (2024)


  • Anjum 1 &
  • Rahul Katarya   ORCID: orcid.org/0000-0001-7763-291X 1  

1473 Accesses


Information and communication technology has evolved dramatically, and the majority of people now use the internet and share their opinions more openly, which has led to the creation, collection and circulation of hate speech across multiple platforms. The anonymity and mobility afforded by these social media platforms allow people to hide behind a screen and spread hate effortlessly. Online hate speech (OHS) recognition can play a vital role in stopping such activities and can thus restore the position of public platforms as the open marketplace of ideas. To study hate speech detection in social media, we surveyed the related datasets available on web-based platforms. We further analyzed approximately 200 research papers indexed in different journals from 2010 to 2022. The papers were divided into sections according to the approaches used in OHS detection, i.e., feature selection, traditional machine learning (ML) and deep learning (DL). Based on the 111 selected papers, we found that 44 articles used traditional ML and 35 used DL-based approaches. We concluded that most authors used SVM, Naive Bayes and Decision Tree among the ML approaches, and CNN and LSTM among the DL approaches. This survey contributes by providing a systematic approach to help researchers identify new research directions in online hate speech detection.
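To ground the terminology, the following is a minimal sketch of the classical ML pipeline that dominates the surveyed work (TF-IDF features feeding an SVM). The toy texts and labels are invented for illustration and are not drawn from any of the surveyed datasets.

```python
# Minimal sketch of a classical hate/offensive speech classifier:
# TF-IDF features + linear SVM. Toy data are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "go back to where you came from",
    "hope your whole group disappears",
    "lovely weather in Cardiff today",
    "great match last night, well played",
]
labels = [1, 1, 0, 0]  # 1 = hateful/offensive, 0 = acceptable

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LinearSVC(),
)
model.fit(texts, labels)
print(model.predict(["what a lovely group of people"]))
```

In practice, the surveyed DL approaches replace the TF-IDF and SVM stages with learned embeddings and CNN or LSTM classifiers, but the train-on-annotated-data, predict-on-new-posts workflow is the same.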


Similar content being viewed by others


Hate Speech Detection in Multi-social Media Using Deep Learning


A literature survey on multimodal and multilingual automatic hate speech identification


Multi-step Online Hate Speech Detection and Classification Using Sentiment and Sarcasm Features


Data availability statements

Data generated or analyzed during this study are included in this published article.

https://hatespeechdata.com/ .

https://semeval.github.io/SemEval2021/tasks.html .

https://hasocfire.github.io/hasoc/2020/index.html .

https://swisstext-and-konvens-2020.org/shared-tasks/ .

https://sites.google.com/view/trac2/live?authuser=0 .

https://ai.Facebook.com/blog/hateful-memes-challenge-and-data-set/ .


Wu, K., Yang, S., Zhu, K.Q.: False rumors detection on Sina Weibo by propagation structures. In: Proc - Int Conf Data Eng 2015-May:651–662 (2015). https://doi.org/10.1109/ICDE.2015.7113322

Saksesi, A.S., Nasrun, M., Setianingsih, C.: Analysis text of hate speech detection using recurrent neural network. In: The 2018 International Conference on Control, Electronics, Renewable Energy and Communications (ICCEREC) Analysis. IEEE, pp. 242–248 (2018)

Sazany, E.: Deep learning-based implementation of hate speech identification on texts in Indonesian : Preliminary Study. In: 2018 International Conference on Applied Information Technology and Innovation (ICAITI) Deep. IEEE, pp 114–117 (2018)

Son, L.H., Kumar, A., Sangwan, S.R., et al.: Sarcasm detection using soft attention-based bidirectional long short-term memory model with convolution network. IEEE Access 7 , 23319–23328 (2019). https://doi.org/10.1109/ACCESS.2019.2899260

Salminen, J., Hopf, M., Chowdhury, S.A., et al.: Developing an online hate classifier for multiple social media platforms. Human-centric Comput. Inf. Sci. 10 , 1–34 (2020). https://doi.org/10.1186/s13673-019-0205-6

Coste, R.L. (2000) Fighting speech with speech: David Duke, the anti-defamation league, online bookstores, and hate filters. In: Proceedings of the Hawaii International Conference on System Sciences. p 72

Gelber, K.: Terrorist-extremist speech and hate speech: understanding the similarities and differences. Ethical Theory Moral Pract. 22 , 607–622 (2019). https://doi.org/10.1007/s10677-019-10013-x

Zhang, Z.: Hate speech detection: a solved problem ? The challenging case of long tail on Twitter. Semant WEB IOS Press 1 , 1–5 (2018)

Hara, F.: Adding emotional factors to synthesized voices. In: Robot and Human Communication - Proceedings of the IEEE International Workshop, Pp. 344–351 (1997)

Fatahillah, N.R., Suryati, P., Haryawan, C.: Implementation of Naive Bayes classifier algorithm on social media (Twitter) to the teaching of Indonesian hate speech. In: Proceedings—2017 International Conference on Sustainable Information Engineering and Technology, SIET 2017, pp. 128–131 (2018)

Ahmad Niam, I.M., Irawan, B., Setianingsih, C., Putra, B.P.: Hate speech detection using latent semantic analysis (LSA) method based on image. In: Proceedings - 2018 International Conference on Control, Electronics, Renewable Energy and Communications, ICCEREC 2018. IEEE, pp. 166–171 (2019)

Gitari, N.D., Zuping, Z., Damien, H., Long, J.: A lexicon-based approach for hate speech detection. Int. J. Multimed. Ubiquitous Eng. 10 , 215–230 (2015)

Chen, Y., Zhou, Y., Zhu, S., Xu, H.: Detecting offensive language in social media to protect adolescent online safety. In: Proceedings - 2012 ASE/IEEE International Conference on Privacy, Security, Risk and Trust and 2012 ASE/IEEE International Conference on Social Computing, SocialCom/PASSAT 2012. IEEE, pp. 71–80 (2012)

Pitsilis, G.K., Ramampiaro, H., Langseth, H.: Effective hate-speech detection in Twitter data using recurrent neural networks. Appl. Intell., Pp. 4730–4742 (2018)

Pitsilis, G.K., Ramampiaro, H., Langseth, H.: Detecting offensive language in Tweets using deep learning (2018). arXiv:180104433v1 1–17. https://doi.org/10.1007/s10489-018-1242-y

Warner, W., Hirschberg, J.: Detecting hate speech on the World Wide Web. In: Association for Computational Linguistics Proceedings of the 2012 Workshop on Language in Social Media (LSM 2012), pp. 19–26 (2012)

Dinakar, K., Jones, B., Havasi, C., Lieberman, H.: Common sense reasoning for detection, prevention, and mitigation of cyberbullying. ACM Trans. Interact. Intell. Syst. 2 , 30 (2012). https://doi.org/10.1145/2362394.2362400

Burnap, P., Williams, M.L.: Cyber hate speech on twitter: an application of machine classification and statistical modeling for policy and decision making. Policy Internet 7 , 223–242 (2015). https://doi.org/10.1002/poi3.85

Garc, A: Hate speech dataset from a white supremacy forum. In: Proceedings of the Second Workshop on Abusive Language Online, pp. 11–20 (2018)

Ombui, E., Karani, M., Muchemi, L.: Annotation framework for hate speech identification in Tweets : Case Study of Tweets During Kenyan Elections. In: 2019 IST-Africa Week Conference (IST-Africa). IST-Africa Institute and Authors, pp. 1–9 (2019)

Hosseinmardi, H., Mattson, S.A., Rafiq, R.I. et al.: Detection of cyberbullying incidents on the Instagram Social Network. In: arXiv:1503.03909v1 [cs.SI] 12 Mar 2015 Abstract (2015)

Raufi, B., Xhaferri, I.: Application of machine learning techniques for hate speech detection in mobile applications. In: 2018 International Conference on Information Technologies (InfoTech-2018), IEEE Conference Rec. No. 46116 20–21 September 2018, St. St. Constantine and Elena, Bulgaria. IEEE (2018)

Warner, W., Hirschberg, J.: Detecting hate speech on the World Wide Web. In: 19 Proceedings of the 2012 Workshop on Language in Social Media (LSM. pp 19–26) (2012)

Wang, G., Wang, B., Wang, T. et al.: Whispers in the dark : analysis of an anonymous social network categories and subject descriptors. ACM 13 (2014)

Mathew, B., Saha, P., Yimam, S.M. et al.: HateXplain: a benchmark dataset for explainable hate speech detection. In: ACL 2017 - 55th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers). p 12 (2020)

Kiilu, K.K., Okeyo, G., Rimiru, R., Ogada, K.: Using Naïve Bayes Algorithm in detection of Hate Tweets. Int. J. Sci. Res. Publ. 8:99–107. https://doi.org/10.29322/ijsrp.8.3.2018.p7517 (2018)

Sanchez, H.: Twitter Bullying Detection, pp. 1–7 (2016). In: https://www.researchgate.net/publication/267823748

Gröndahl, T., Pajola, L., Juuti, M. et al.: All you need is “love”: Evading hate speech detection. In: Proceedings of the ACM Conference on Computer and Communications Security. pp 2–12 (2018)s

Correa, D., Silva, L.A., Mondal, M., et al.: The many shades of anonymity : characterizing anonymous social media content. Assoc Adv. Artif. Intell. 10 (2015)

Paetzold, G.H., Malmasi, S., Zampieri, M.: UTFPR at SemEval-2019 Task 5: Hate Speech Identification with Recurrent Neural Networks. In: arXiv:1904.07839v1 . p 5 (2019)

Miro-Llinares, F., Rodriguez-Sala, J.J.: Cyber hate speech on twitter: analyzing disruptive events from social media to build a violent communication and hate speech taxonomy. Int. J. Design Nat. Ecodyn. pp 406–415 (2016)

Rizoiu, M.-A., Wang, T., Ferraro, G., Suominen, H.: Transfer learning for hate speech detection in social media. arXiv:190603829v1 (2019)

Pitsilis, G.K., Ramampiaro, H., Langseth, H.: Effective hate-speech detection in Twitter data using recurrent neural networks. Appl. Intell. 48 , 4730–4742 (2018). https://doi.org/10.1007/s10489-018-1242-y

Varade, R.S., Pathak, V.B.: Detection of hate speech in hinglish language. Adv. Intell. Syst. Comput. 1101 , 265–276 (2020). https://doi.org/10.1007/978-981-15-1884-3_25

Modha, S., Majumder, P., Mandl, T., Mandalia, C.: For surveillance detecting and visualizing hate speech in social media: a cyber watchdog for surveillance. Expert Syst. Appl. (2020). https://doi.org/10.1016/j.eswa.2020.113725

Maxime: What is a Transformer?No Title. In: Medium (2019). https://medium.com/inside-machine-learning/what-is-a-transformer-d07dd1fbec04

Horev R BERT Explained: State of the art language model for NLP Title. https://towardsdatascience.com/bert-explained-state-of-the-art-language-model-for-nlp-f8b21a9b6270

Mozafari, M., Farahbakhsh, R., Crespi, N.: A BERT-based transfer learning approach for hate speech detection in online social media. Stud. Comput. Intell. 881 SCI:928–940 (2020). https://doi.org/10.1007/978-3-030-36687-2_77

Mutanga, R.T., Naicker, N., Olugbara, O.O. (2020) Hate speech detection in twitter using transformer methods. Int. J. Adv. Comput. Sci. Appl.; 11, 614–620 . https://doi.org/10.14569/IJACSA.2020.0110972

Plaza-del-Arco, F.M., Molina-González, M.D., Ureña-López, L.A., Martín-Valdivia, M.T.: Comparing pre-trained language models for Spanish hate speech detection. Expert Syst. Appl. 166 (2021)

Pandey, P.: Deep generative models. In: medium. https://towardsdatascience.com/deep-generative-models-25ab2821afd3

Wullach, T., Adler, A., Minkov, E.M.: Towards hate speech detection at large via deep generative modeling. IEEE Internet Comput. (2020). https://doi.org/10.1109/MIC.2020.3033161

Dugas, D., Nieto, J., Siegwart, R., Chung, J.J.: NavRep : Unsupervised representations for reinforcement learning of robot navigation in dynamic human environments (2021)

Behzadi, M., Harris, I.G., Derakhshan, A.: Rapid cyber-bullying detection method using compact BERT models. In: Proc - 2021 IEEE 15th Int Conf Semant Comput ICSC 2021 199–202. (2021) https://doi.org/10.1109/ICSC50631.2021.00042

Araque, O., Iglesias, C.A.: An ensemble method for radicalization and hate speech detection online empowered by sentic computing. Cognit. Comput. (2021). https://doi.org/10.1007/s12559-021-09845-6

Plaza-del-Arco, F.M., Molina-González, M.D., Ureña-López, L.A., Martín-Valdivia, M.T.: Comparing pre-trained language models for Spanish hate speech detection. Expert Syst. Appl. 166 , 114120 (2021). https://doi.org/10.1016/j.eswa.2020.114120

Badjatiya, P., Gupta, S., Gupta, M., Varma, V.: Deep learning for hate speech detection in tweets. In: 26th International World Wide Web Conference 2017, WWW 2017 Companion (2019)

Mossie, Z., Wang, J.H.: Vulnerable community identification using hate speech detection on social media. Inf. Process Manag. 57 , 102087 (2020). https://doi.org/10.1016/j.ipm.2019.102087

Magu, R., Joshi, K., Luo, J.: Detecting the hate code on social media. In: Proceedings of the 11th International Conference on Web and Social Media, ICWSM 2017. pp 608–611 (2017)

Qian, J., Bethke, A., Liu, Y., et al.: A benchmark dataset for learning to intervene in online hate speech. In: EMNLP-IJCNLP 2019 - 2019 Conf Empir Methods Nat Lang Process 9th Int Jt Conf Nat Lang Process Proc Conf 4755–4764 (2020). https://doi.org/10.18653/v1/d19-1482

Chicco, D., Jurman, G.: The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation. BMC Genom. 21 , 1–13 (2020). https://doi.org/10.1186/s12864-019-6413-7

Lee, K., Ram, S.: PERSONA: Personality-based deep learning for detecting hate speech. In: International Conference on Information Systems, ICIS 2020 - Making Digital Inclusive: Blending the Local and the Global. Association for Information Systems (2021)

Download references


Cyberviolence

Online hate speech and hate crime


The Council of Europe standards and practices related to addressing hate speech have guided the work of the Expert Committee on Combating Hate Speech (ADI/MSI-DIS). It prepared a Recommendation on a comprehensive approach to addressing hate speech within a human rights framework, including in the context of an online environment.

The final Recommendation was adopted by the Committee of Ministers  in May 2022. It provides non-binding guidance for member States, building on the relevant case-law of the European Court of Human Rights   and paying special attention to the online environment in which most of today’s hate speech can be found. Thematic factsheets on hate speech are issued regularly. 

Hate crime is partly covered by the Additional Protocol to the Budapest Convention concerning acts of a racist and xenophobic nature, which addresses cyberviolence motivated by certain biases but not cyberviolence motivated by other perceived characteristics such as gender, sexual orientation or disability. The work of the Council of Europe and other organisations on discrimination and intolerance is also relevant. Key issues are the role of service providers and the question of hate speech versus free speech. A Committee of Experts on Combating Hate Crime (PC/ADI-CH) started its work in 2022.

Free speech versus hate speech:

Countries have different views about the degree to which speech should be limited by society – that is, where to set the balance between one person’s fundamental right to express him/herself and another person’s fundamental right to safety. A multitude of case-law judgements and decisions can be consulted online, as well as  CM/Rec(2022)16 on hate speech and its Explanatory Memorandum which gives guidance for the various stakeholders involved.  

An educational youth campaign, called the “No Hate Speech Movement”, was run by the Council of Europe between 2012 and 2018. This campaign aimed at combating online hate speech by mobilising young people and youth organisations to recognise and act against these human rights violations. The No Hate Speech Movement developed, among other things, an overview of national reporting structures for hate speech and cyberbullying, and it opened the path for follow-up initiatives at both the Council of Europe and national levels.

Council of Europe Conventions

  • Additional Protocol to the Convention on Cybercrime concerning the criminalisation of acts of a racist and xenophobic nature committed through computer systems  (2003)
  • Second Additional Protocol to the Convention on Cybercrime on enhanced co-operation and disclosure of electronic evidence (2022)
  • European Convention on Human Rights  (1950)

Recommendations / Declarations of the Committee of Ministers

  • Recommendation CM/Rec(2024)4 of the Committee of Ministers to member States on combating hate crime
  • Recommendation CM/Rec(2022)16 of the Committee of Ministers to member States on combating hate speech
  • Recommendation CM/Rec(2019)1 of the Committee of Ministers to member States on preventing and combating sexism
  • Declaration (Decl/29/05/2019) by the Committee of Ministers on the legacy of the No Hate Speech Movement youth campaign
  • Recommendation CM/Rec(2018)2 of the Committee of Ministers to member States on the roles and responsibilities of internet intermediaries
  • Recommendation CM/Rec(2014)6 of the Committee of Ministers to member States on a Guide to human rights for Internet users

Resolutions of the Parliamentary Assembly of the Council of Europe (PACE)

  • Resolution 2275 (2019) The role and responsibilities of political leaders in combating hate speech and intolerance
  • Resolution 2276 (2019) Stop hate speech and acts of hatred in sport
  • Resolution 2144 (2017) Ending cyberdiscrimination and online hate
  • Resolution 1967 (2014) A strategy to prevent racism and intolerance in Europe

Recommendations of the European Commission against Racism and Intolerance (ECRI)

  • ECRI General Policy Recommendation No 15, on combating hate speech , adopted on 8 December 2015
  • ECRI General Policy Recommendation No. 6, on combating the dissemination of racist, xenophobic and antisemitic material via the internet , adopted on 15 December 2000
  • ECRI Country monitoring concerning manifestations of racism and intolerance , with recommendations

Reporting to national bodies

Most European countries have established national reporting mechanisms and support for victims of cyber bullying, hate speech and hate crime, provided by national authorities and NGOs.

  • National reporting procedures and mechanisms for hate speech, hate crime and cyberbullying in different countries

Reporting to social media platforms

Social media platforms offer tips to help protect users from cyber bullying and hate speech, and provide tools for reporting them to the platform administrators or moderators.

  • Council of Europe HELP tutored courses and OSCE’s Office of Democratic Institutions (ODIHR): Hate Speech and Hate Crime
  • Council of Europe: Action against cyberviolence
  • Council of Europe: Digital Partnership
  • Council of Europe: EU-CoE joint project on Increasing Civil Society Organisations’ knowledge and capacities to tackle hate speech online
  • Council of Europe: Mapping of national responses to hate speech
  • Council of Europe: Recommendation CM/Rec(2022)16[1] of the Committee of Ministers to member States on combating hate speech
  • Council of Europe: Recommendation of the Committee of Ministers to member States on combating hate crime
  • European Commission: The EU Code of conduct on countering illegal hate speech online
  • European Court of Human Rights Case law on hate speech
  • European Union-Council of Europe: Toolkit for human rights speech
  • Germany: Making use of provisions not specific to the online environment
  • INHOPE: Against illegal content and activity
  • Mauritius: Awareness campaign on cyberbullying and cyberviolence
  • No Hate Speech Movement
  • Norway: Action against cyberviolence
  • OSCE-ODIHR Hate Crime Reporting
  • Parliamentary Assembly of the Council of Europe, and its No Hate Parliamentary Alliance
  • Singapore: Cyber wellness
  • Slovakia: Criminal Code provisions applied to cyberviolence
  • UK: Prosecution of hate crime

When Online Hate Speech Has Real World Consequences


About This Mini-Lesson

This mini-lesson is part one of a two-part exploration of online hate speech, celebrity influence, and the real-life consequences they can engender. This installment uses a recent high-profile example of antisemitic rhetoric and the actions it inspired to prompt students' thinking about the dangerous role that celebrity influence and social media platforms can play in amplifying online hate speech. It also educates students about the origin and meaning of a common antisemitic trope so that they can better identify and deconstruct antisemitism they may see online. Finally, it helps students consider who is responsible for combating online hate and what methods should be used to do so.

What follows are teacher-facing instructions for the activities. Find student-facing instructions in the Google Slides for this mini-lesson.

What’s Included

This mini-lesson is designed to be adaptable. You can use the activities in sequence or choose a selection best suited to your classroom. It includes:

  • 3 activities
  • Student-facing slides

Additional Context & Background

The prevalence of online hate is alarming, both for the people it targets and for our society as a whole. This mini-lesson looks at the impacts of online hate, celebrity influence, and one concerning trend in online hate: rising antisemitism.

Online hate speech can harm the mental health of those whose identities are targeted, making them feel fearful, anxious, and alone. Additionally, it has been linked to violent attacks around the world, including the shootings in mosques in Christchurch, New Zealand; in the Tree of Life Synagogue in Pittsburgh, Pennsylvania; and in the Emanuel African Methodist Episcopal Church in Charleston, South Carolina. 1

Antisemitic attacks, and other identity-based hate crimes, have increased in recent years in the United States, both online and in person. On TikTok alone, antisemitic comments increased 912 percent from 2020 to 2021 2 . During a one-week timespan in May of 2021, 17,000 Twitter users posted variations of the antisemitic phrase “Hitler was right.” 3 Antisemitic violence has also surged, and the New York police department reported a 400 percent increase in attacks targeting Jews in February of 2022 compared to the previous February. 4

In October 2022, the musician, producer, and fashion designer Ye (formerly known as Kanye West) drew increased attention to the alarming trend of online antisemitism and other forms of hate when he posted two antisemitic tweets, one of which was removed by Twitter soon after it was posted. His account was suspended, but in the days that followed, he gave a series of interviews that continued to include antisemitic attacks. 5  

Ye has been accused of spreading anti-Black, misogynistic, antisemitic and other hateful messages for years, though this behavior has escalated more recently. Celebrities and influencers have the ability to amplify hate speech online beyond the power of most people because of their large followings. For example, in October 2022, Ye had an estimated 27 million Twitter followers and 18.4 million Instagram followers. For comparison, there are an estimated 14.8 million Jews in the world, meaning that Ye has a larger online following than there are members of the identity group he targeted in this instance.

Ye’s antisemitic speech has drawn strong reactions, with some people and extremist organizations expressing support for his antisemitic ideas, while others have condemned them.

Soon after he posted his tweets, Ye was invited by the Holocaust Museum LA to take a private tour, an offer he rejected publicly during an interview. The Holocaust museum was then flooded with hate mail, some containing threats of violence. 6

On October 22, 2022, white nationalists displayed a banner on an overpass in Los Angeles with the message, “Honk if you know Kanye was right about the Jews.” The group responsible for the banner has held similar antisemitic demonstrations on the same overpass and elsewhere before. On October 30, a nearly identical message was projected onto TIAA Bank Field stadium following a college football game in Jacksonville. Similar messages were projected elsewhere in the city the same night. 7

In the weeks following Ye’s October antisemitic tweets, other celebrities, as well as non-celebrities, began posting expressions of solidarity with Jewish people on their social media accounts. The fact that so many people spoke out against his antisemitism has had an impact. Due to the public pressure, including a Campaign Against Antisemitism petition with 175,000 signatures, Adidas severed its business relationship with Ye, as did Gap, Balenciaga and the talent agency that represented him. 8 He was also suspended from Twitter and Instagram in October 2022.

Ye’s antisemitic speech is one high profile example of a larger trend. Hate speech that targets people based on their identities is alarmingly common in online spaces and is regularly spread by both celebrities and non-celebrities alike.

  • 1 Zachary Laub, Hate Speech on Social Media: Global Comparisons , Council on Foreign Relations , June 7, 2019.
  • 2 Natalie Gabriel Weimann, “New Antisemitism on TikTok,” Antisemitism on Social Media (2022): 172.
  • 3 Samantha A. Vinokor-Meinrath, #antisemitism (California: Praeger, 2022), 47.
  • 4 Yossi Lempkowicz, “In New York City, anti-Jewish hate crimes jump 400% in February compared to last year, reports NYPD, ” European Jewish Press , March 13, 2022.
  • 5 Moises Mendez II, “What to Know About Kanye West's Anti-Semitic Remarks—and the Companies That Have Cut Ties, ” TIME , October 24, 2022.
  • 6 CBSLA STAFF, “Holocaust Museum of LA flooded with antisemitic messages after offering Kanye West a private tour, ” CBS News , October 24, 2022.
  • 7 Jacob Knutson, “SEC condemns antisemitic message projected during Florida-Georgia football game, ” Axios , October 30, 2022.
  • 8 Faarea Masud, “Adidas cuts ties with rapper Kanye West over anti-Semitism, ” BBC News, October 26, 2022.

Preparing to Teach

A Note to Teachers

Before teaching this mini-lesson, please review the following information to help guide your preparation process.

Start with Yourself

Self-reflection is important preparation for facilitating conversations about troubling current events. As educators, we have to make time to process our own feelings and become aware of the way our own identities and experiences shape the perspectives we hold. Read the “Start with Yourself” section on page 2 of our Fostering Civil Discourse  guide. Then reflect on the following questions:

  • What emotions does news of online hate and increasing antisemitism raise for you? What questions are you grappling with?
  • What perspectives will you bring to your reflection on this news with your students?
  • What emotions might your students bring to your discussion? How can you respond to these emotions?

Creating and Maintaining Community Norms Around Discussing Hate Speech

Students may respond differently to the materials in this mini-lesson, depending on their knowledge of or personal experience with hate speech or antisemitism. Before teaching the following activities, consider revisiting your classroom norms with your students or  creating a class contract together if you have not done so already. It also is important that you view and read the materials for this lesson before deciding whether or not they are appropriate for your students.

Your contract should also make it clear that while you encourage the expression of different viewpoints and diverse voices, members of your community are responsible for maintaining an environment that respects the dignity and humanity of all. Consider how you and your students can respond if someone in your class violates your norms, for example by repeating hate speech or an antisemitic trope.  For more ideas on how to address problematic comments in the classroom, including scenarios and sentence stems, read human rights educator Loretta Ross’s article Speaking Up Without Tearing Down , published by Learning for Justice.


What Are the Trends and Impacts of Online Hate? (25 minutes)

Begin by asking students to look at two infographics from the ADL’s report Online Hate and Harassment: The American Experience 2021, “Anatomy of Harassment” and “Demographics of Harassment,” which can be found in the Slides for this mini-lesson.

The data in the infographics comes from a survey conducted by YouGov on behalf of the ADL. The participants in the survey were all over the age of 18 and were chosen to be representative of the demographics of the United States. They were asked to report whether they had experienced online harassment over their lifetime. A total of 847 people responded.

Ask students to respond to the following prompts in their journals:

What information did you find surprising? What information did you find troubling? What questions does this information raise for you?

Once students have finished, ask for volunteers to share some responses for each question with the class.

Then, share the background passage about the rise of online hate, specifically antisemitism, and the events surrounding Ye’s recent antisemitic speech; the full text is in the Additional Context & Background section above and in the Slides for this mini-lesson.


Once you have finished reading, discuss the following questions with your students:

  • How might online hate impact the individuals whose identities are targeted? How might it impact society as a whole?
  • How might the impact of encountering hate speech online be different from the impact of encountering it in person (for example, seeing a banner with a hateful message)?
  • What impact do you think it can have when someone with a large social media following spreads hate online?
  • Do people who have more influence online have different responsibilities around their speech? Why or why not?

Related Materials

  • Slides Student Activities: When Online Hate Speech Has Real World Consequences
  • Link Online Hate and Harassment: The American Experience 2021

How Do Antisemitic Tropes Appear on Social Media? (35-45 minutes)

When antisemitism shows up in social media, it is often in the form of conspiracy theories infused with old antisemitic tropes. In the following activity, students will learn the definitions and origins of a trope commonly evoked in contemporary antisemitic rhetoric, then examine two actual examples of antisemitic social media posts that incorporate this trope, in order to help them identify and deconstruct antisemitism they may see online. If this is the first time your students are learning about contemporary antisemitism, please start with reviewing  our contemporary antisemitism explainer Antisemitism and Its Impacts with your students before continuing with this activity. 

Ask students to read the following definition (which can also be found in the Slides for this mini-lesson):

Antisemitic conspiracy theories rely on tropes —widely shared ideas, stereotypes, phrases, images or stories. Tropes can be neutral, like common movie or literary tropes, but antisemitic tropes cause great harm. In this activity, you will learn about the definition and origin of an antisemitic trope that frequently appears in antisemitic hate speech on social media.

Then, ask students to reflect on the following prompt in a private journal entry:

Think of a time when someone made a negative assumption or stereotype about you. What happened? How did it feel?

When students have finished writing, ask them to read the following passage (which can also be found in the Slides for this mini-lesson):

Global Domination/Power: a conspiracy theory that Jews are global puppet masters who secretly control the media, the entertainment industry, the economy and powerful governments. This conspiracy originated in an early twentieth-century publication entitled The Protocols of the Learned Elders of Zion, which claimed to document the secret meeting of powerful Jews who were conspiring to take over the world. A meeting like this never happened. In the 1920s, American industrialist Henry Ford brought The Protocols to the US, printed it first as a series of articles in his newspaper, and then in its entirety. It became the second-best-selling book of that era, behind only the Christian Bible. The Protocols became widely published, translated into 16 languages, and played a role in Nazi ideology. 1 It still circulates today in white supremacist groups. Terms like “Globalists,” “The Cosmopolitan or Academic Elite,” “Cabal” and “The Rothschilds” are often used as code words to spread this trope.

In small groups of 3 to 4, have students examine the language and images from these actual social media posts below and answer the following questions for each example.

  • What are the messages being expressed in the text or image?
  • Who do you think the “you” is in each of these examples? Does the “you” shift to different groups?  How does the use of the term create an “in group” and “out group”? 
  • How does this post connect to the trope of Jewish "global domination/power"? Where specifically do you see the trope being employed? 
  • How might it feel to come across an online post that stereotypes or threatens you based on your identity?
Ye’s October 8, 2022 Twitter Posts included the following text: “...I’m going death con 3 On JEWISH PEOPLE…You guys have toyed with me and tried to black ball anyone whoever opposes your agenda.” A second tweet posed the rhetorical question, “Who you think created cancel culture?”
The following meme (sometimes with the Star of David removed) has been reposted widely on social media platforms by other celebrities and influencers. The quote is falsely attributed to Enlightenment era thinker Voltaire. Its origins are a 1993 radio broadcast by a self-described white supremacist and Holocaust denier. 2
  • 1 ADL, “Jews Have too Much Power” .
  • 2 Sophia Tulp, “US congressman shares neo-Nazi’s quote wrongly attributed to Voltaire, ” Associated Press , January 31, 2022.

Next, read the following excerpt from The Atlantic article Kanye West Destroys Himself out loud, with students following along. This excerpt and the questions that follow are designed to allow students to consider why the Global Domination/Power trope is so persistent and what sort of responses can be effective in countering the trope:

[W]hen an anti-Semite suffers consequences for falsely claiming that sinister Jews control the world, he can then point to that punishment as vindication of his views. For Jews, this is a no-win scenario: If they stay silent, the anti-Semitism continues unabated; if they speak up, and their assailant is penalized by non-Jewish society, anti-Semites feel affirmed. Heads, the bigots win; tails, Jews lose. This is the cruel paradox that has perpetuated anti-Semitism for centuries. 1

Ask students to discuss the following questions, in their small groups, or as a whole class:

  • Rosenberg argues that when the people who spread antisemitic conspiracy theories suffer consequences for their speech, it can actually fuel more antisemitism. Why does this happen?
  • How might this create a situation that can feel like a no-win for Jews?
  • How do the ideas in this passage connect to the social media posts you analyzed?
  • Upstanders are people who speak or act in support of an individual or cause, particularly someone who intervenes on behalf of a person being attacked or bullied. How can non-Jewish allies speaking up to denounce antisemitism counter the self-fulfilling nature of these conspiracy theories?

Note : While online antisemitism is difficult to confront, it may be helpful to explain to your students that there are actions that people and organizations can take to combat it. The final activity of this mini-lesson helps students think through some of these actions.

  • 1 Yair Rosenberg, Kanye West Destroys Himself , The Atlantic , October 27, 2022.
  • Explainer Antisemitism and Its Impacts

What Can People Do to Combat Online Hate? (20 minutes)

This activity uses the Jigsaw teaching strategy to guide a discussion on actions that different groups of people or organizations can take to stop the spread of hate online. ( Note : You can also organize this activity using the Big Paper strategy by asking students to silently “discuss” one prompt in their initial groups, by writing down their responses on a shared poster paper, and then to read and comment on other groups’ papers.)

Place your students in initial groups of 3 to 4 and assign them one of the following prompts 1 to discuss:

  • If social media companies take the issue of online hate seriously, what actions should they take? What is the responsibility of social media companies to stop the spread of online hate, and where does their responsibility end?
  • If celebrities and influencers take the issue of online hate seriously, what actions should they take? What is the responsibility of celebrities and influencers to fight the spread of online hate, and where does their responsibility end?
  • If schools take the issue of online hate seriously, what actions should they take? What is the responsibility of schools to stop the spread of online hate, and where does their responsibility end?

Students should take notes on what they discuss with their groups and be prepared to share. Once students have finished their discussions in their initial groups, move them into new groups that contain at least one person who discussed each prompt. Ask them to take turns sharing what they discussed in their initial groups. Then, students can discuss any additional ideas they have. 

Finish by asking students to respond to the following prompt in their journals: 

If I take the issue of online hate seriously, what are the day-to-day implications for how I live my life? What might my personal actions and behaviors look like? What might I choose to do differently? When and where might I find myself speaking out?

To help students generate ideas for the journal reflections, share the following suggestions (which can also be found in the Slides for this mini-lesson) for steps they can take when they encounter hate online:

Actions individuals can take to stop the spread of hate online:

  • When you see posts on social media that contain hate, use the reporting function to flag the post, if the platform has one.
  • Be aware that responding to posts that contain hate or reposting them with your own commentary, even to disagree, can have the unintended consequence of amplifying the initial post and causing the site’s algorithms to place similar content in your feed.
  • Do not forward posts containing hate to people who are targeted by the hate, because seeing these posts can cause them harm.
  • Learn about the identity and culture of those who are targeted from individuals, communities, and organizations within that identity and culture in order to counter stereotypes.
  • Reach out to people who you think might be impacted by hate you see online to offer support.
  • Limit your own exposure to online hate, for example by unsubscribing from individuals or groups that spread hate.
  • If you have been the target of online hate, reach out to people who can offer you support, such as friends, family, or a school counselor.
  • Report online hate (or other incidents of hate) to the ADL.
  • 1 These prompts and the final journal prompt are adapted from Project Zero’s “4 If’s” thinking routine ( The Power of Making Thinking Visible: Practices to Engage and Empower All Learners (Jossey-Bass, 2020), 190.).
  • Teaching Strategy Jigsaw: Developing Community and Disseminating Knowledge
  • Teaching Strategy Big Paper: Building a Silent Conversation



What is hate speech?


Understanding hate speech

In common language, “hate speech” refers to offensive discourse targeting a group or an individual based on inherent characteristics (such as race, religion or gender) and that may threaten social peace.

To provide a unified framework for the United Nations to address the issue globally, the UN Strategy and Plan of Action on Hate Speech defines hate speech as “any kind of communication in speech, writing or behaviour, that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are, in other words, based on their religion, ethnicity, nationality, race, colour, descent, gender or other identity factor.”

However, to date there is no universal definition of hate speech under international human rights law. The concept is still under discussion, especially in relation to freedom of opinion and expression, non-discrimination and equality.

While the above is not a legal definition and is broader than “incitement to discrimination, hostility or violence” (which is prohibited under international human rights law), it has three important attributes:

[Infographic from the source page setting out the three attributes of this working definition]

It’s important to note that hate speech can only be directed at individuals or groups of individuals. It does not include communication about States and their offices, symbols or public officials, nor about religious leaders or tenets of faith.

Challenges raised by online hate speech


“We must confront hatred wherever and whenever it rears its ugly head. This includes working to tackle hate speech that spreads like wildfire across the internet.”

— United Nations Secretary-General António Guterres, 2023


The growth of hateful content online has been coupled with the rise of easily shareable disinformation enabled by digital tools. This raises unprecedented challenges for our societies, as governments struggle to enforce national laws at the scale and speed of the virtual world.

Unlike in traditional media, online hate speech can be produced and shared easily, at low cost and anonymously. It has the potential to reach a global and diverse audience in real time. The relative permanence of hateful online content is also problematic, as it can resurface and (re)gain popularity over time.

Understanding and monitoring hate speech across diverse online communities and platforms is key to shaping new responses. But efforts are often stunted by the sheer scale of the phenomenon, the technological limitations of automated monitoring systems and the lack of transparency of online companies.
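As a purely illustrative aside, the simplest form of automated monitoring referred to above is a lexicon match: flag any post containing a term from a curated list. The sketch below is a minimal, hypothetical example of that approach; the lexicon entries, sample posts and function name are placeholders rather than any platform’s real system, and the example also shows why posts phrased in coded language slip through such filters.

```python
import re

# Hypothetical lexicon; real systems rely on curated, multilingual term lists.
HATE_LEXICON = {"slur1", "slur2"}

def flag_post(text, lexicon=HATE_LEXICON):
    """Return True if any lexicon term appears as a whole word in the post."""
    tokens = re.findall(r"\w+", text.lower())
    return any(token in lexicon for token in tokens)

posts = [
    "an explicit post containing slur1",        # caught: exact lexicon match
    "the same idea phrased in coded language",  # missed: no listed term appears
]
print([flag_post(p) for p in posts])  # -> [True, False]
```

More capable systems replace the lexicon with trained classifiers, but the problems of scale and of the limited transparency of online companies described above remain.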

Meanwhile, the growing weaponization of social media to spread hateful and divisive narratives has been aided by online corporations’ algorithms. This has intensified the stigma vulnerable communities face and exposed the fragility of our democracies worldwide. It has raised scrutiny on Internet players and sparked questions about their role and responsibility in inflicting real world harm. As a result, some States have started holding Internet companies accountable for moderating and removing content considered to be against the law, raising concerns about limitations on freedom of speech and censorship.

Despite these challenges, the United Nations and many other actors are exploring ways of countering hate speech. These include initiatives to promote greater media and information literacy among online users while ensuring the right to freedom of expression.

Online hate analysts are calling for greater eSafety powers after study finds rise in anti-Semitism and Islamophobia


Australian analysts tracking offensive online comments since the current Israel-Gaza conflict have found anti-Semitic and Islamophobic posts have skyrocketed.

They say the national eSafety Commissioner needs greater power to rein in online hate and there should be funding to train police to tackle it.

What's next?

A report from the Online Hate Prevention Institute comparing anti-Semitic and Islamophobic data will be released in coming months.

Researchers of online hate are calling for the remit of Australia's online regulator to be expanded, amid a "significant" increase in anti-Semitism and Islamophobia.

The Australian-based Online Hate Prevention Institute tracked offensive posts globally on 10 social media platforms for three months from the beginning of the war in Gaza on October 7 last year.

The research has already been used for international police training, and the authors say it highlights a flaw in how the issue is tackled by the eSafety Commissioner.

The study, conducted with the Online Hate Task Force in Belgium, involved analysts working in one-hour blocks, seeking out anti-Semitism or Islamophobia.

"It's rough," an Australian researcher who worked on the project told the ABC.

"All of us who do the social media monitoring work, we all struggle with it from time to time."


There can be personal risk for those involved in the work, so she has asked for her name not to be used.

The team has used a "snowball" methodology where they find an offensive post and then click through to those interacting with it, working for an hour at a time to avoid becoming stuck in "an echo chamber".

"Having real people look at this means we see things that artificial intelligence won't," she said.

"There's a lot of dog whistles that are used, coded language, things like that."

Online hate speech has ballooned in past year

The lead author, Andre Oboler, who is also the Online Hate Prevention Institute's CEO, said hate speech targeting both groups was "up significantly on every single platform".

"It's the mainstream, it's the extreme — everything is just up," he said.

The institute's report into Islamophobia and racism against Palestinians and Arabs found 1,169 offensive posts over 160 hours of searching, from October to February.


It divided the hate into 11 categories, including 'inciting violence against Muslims', 'Muslims as a cultural threat', 'demonising or dehumanising', 'xenophobia' and 'anti-Muslim jokes'.

The institute earlier released a report into anti-Semitism, which found 2,898 offensive items.

The posts were sorted into 27 groups, under four broad categories: 'traditional anti-Semitism', 'incitement to violence', 'Holocaust related content', and 'anti-Semitism related to Israel or Israelis'.
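Purely as an illustration of the two-level labelling described here (fine-grained groups rolled up into broad categories), annotated items can be tallied with a simple mapping and counter. The group names, mapping and post labels below are hypothetical placeholders, not the institute’s actual taxonomy or data.

```python
from collections import Counter

# Hypothetical mapping from fine-grained groups to the broad categories named in the article.
GROUP_TO_CATEGORY = {
    "Holocaust denial": "Holocaust related content",
    "calls for violence": "incitement to violence",
    "conspiracy tropes": "traditional anti-Semitism",
}

labelled_posts = ["Holocaust denial", "calls for violence", "Holocaust denial"]  # placeholder labels
category_counts = Counter(GROUP_TO_CATEGORY[group] for group in labelled_posts)
print(category_counts)  # Counter({'Holocaust related content': 2, 'incitement to violence': 1})
```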

A third report comparing the two datasets will be released in coming months.


The institute had already been working on an anti-Semitism project before the Hamas attacks on Israel on October 7 last year and the ensuing Gaza war, and so was able to compare the data.

It found the volume of offensive posts increased more than five-fold.

The institute did not have a dataset on Islamophobia for a comparison but said "through comparisons with other data we can state with certainty that religious vilification against Muslims has increased substantially".


The report estimated that the volume of such content had quadrupled.

The report into anti-Semitism was produced in partnership with the Executive Council of Australian Jewry and the report into Islamophobia was produced partly with funding from the Australian government's Safe and Together Community Grants Program.

The institute is seeking funding to run the project again starting in October, to see if rates have changed one year on.

"If we don't have that measurement, we don't have the data to guide government policy, to guide the community, to support the community organisations that need to respond," Dr Oboler said.

It's the latest in a series of indicators showing an increase in discriminatory incidents.

In the weeks after the conflict began, Islamophobia Register Australia recorded a 13-fold increase in incidents compared to the previous year, while the Executive Council of Australian Jewry reported a six-fold increase in anti-Semitic incidents, saying more had occurred following the start of the war than in the entire previous year.


'People should be able to disagree peacefully'

In Victoria, police have established 'Operation Park' to investigate offences associated with the Middle East conflict.

They have investigated 88 incidents relating to anti-Semitism and 16 related to Islamophobia, mostly involving things like graffiti and verbal abuse.

Queensland Police said it did not categorise the offences in a searchable way and flagged "police aren’t always involved" in discriminatory incidents.

NSW Police was not able to access the relevant data, and Western Australia also raised an issue with gathering data but said there had been "no incidents involving violence" in the state.

ACT Police said there had been no incidents there, and South Australia said it had "not observed an increase" in reports of racially motivated incidents. The Northern Territory and Tasmanian police forces did not respond.

The federal government's recently appointed Special Envoy on Social Cohesion, Peter Khalil, said his role would "absolutely" be looking at rises in hate on social media.


His appointment follows the announcement of a special envoy on anti-Semitism, and the promise of one on Islamophobia, although some Muslim leaders have questioned the worth of such an appointment.

Mr Khalil said the appointments were needed to help respond to "the challenges we're facing at the moment" but he hoped "if we do our work effectively, we navigate through what is a difficult period now, those roles won't be necessary in the future".

Mr Khalil said he and the government supported the right to political expression "100 per cent".

"People should be able to disagree peacefully on issues without resorting to the personal attack on someone based on their identity," he said.

"The terrible cost of war, the deep pain and anguish many Australians have felt of what they're seeing overseas, should not mean that Jewish Australians or Muslim Australians should be vilified or attacked, because of their faith and their background."


Grassroots efforts to combat the impacts

Amid the rise in discrimination, members of affected communities are standing up to support one another.

Heshy Adelist, who owns an outdoor cleaning company, has been voluntarily removing anti-Semitic graffiti in Melbourne.

After being alerted to a spray-painted message to "kill Jews" in November last year, he immediately left the job he was on to go and clean it off.

"We already have so much hate in this world already," he said.


"If it had have said kill all Muslims, or kill or Christians, I would have gone out and cleaned it, it's more about the hate.

"But when it said kill the Jews, since I am Jewish, it was a bit more personal."

He has since cleaned off plenty more hate speech, including Nazi symbols, for free and is part of an informal message group where people can report and respond to anti-Semitism.


"People at work, people at their homes, being targeted, online — just because they're Jewish."

Also in Melbourne, Abdurrafi Suwarno has been offering emotional and legal support to victims of Islamophobia, who he said were often women of colour.

"There are many online incidents, there's a lot of Islamophobic graffiti that happens, there is a lot of workplace incidents," he said.

"It's been extremely intense and horrible, it feels for our entire community that we've been going through a collective and unending trauma, and while we're crying for our brothers and sisters overseas, we're also seeing the direct impacts on the ground here."


Mr Suwarno works with the Islamophobia Support Service which is run by the Islamic Council of Victoria, and feels a responsibility to hear the often-untold stories.

"I wouldn't say it feels good, I would say it feels necessary," he said.

"I want to see a future where the women in our community, the future generations and children in our community, can have a fair and equitable society where they can contribute and feel free to go about their business, and not have to believe that something will happen to them."

Training for police and calls to increase eSafety powers

The Online Hate Prevention Institute and the Belgium-based Online Hate Task Force have used their latest research to run training for police from across the world.

The session was staged in Brussels and online, with contributions from the European Commission Coordinator on combating anti-Muslim hatred, Marion Lalisse, and the Office of the EU Coordinator on combating anti-Semitism and fostering Jewish life.

Dr Andre Oboler said some Australian officers, including some members of the Federal Police, had taken part.

"The biggest piece of feedback was a view that this sort of training really needs to be rolled out to the grass roots," he said.

"The police we had were generally those working on biased hate crimes, or in the counter-terrorism space, but their view was this is really training that the cops on the beat need."

Dr Oboler has spent more than a decade at the Online Hate Prevention Institute and was previously co-chair of the Israeli government's Global Forum for Combating Anti-Semitism.

He said there was potential for more Australian officers to get access to the training.


"We've already had contact with some of the police forces that are interested — again, one of the difficulties comes back to funding."

The institute's latest report has recommended expanding the remit of Australia's eSafety Commissioner, so it can deal with group-based hate.

"At the moment, it can only deal with hate targeting individuals," Dr Oboler said.

"But if it's attacking the entire community right now, it's out of eSafety's remit, they have no power to deal with that."

The research found the rate at which offensive posts were removed varied "significantly" between social media platforms, but overall, for anti-Semitism 18 per cent of the offending posts were removed, and 32 per cent of the Islamophobic posts were taken down.

"We need eSafety to be able to take down such content as well," Dr Oboler said.

"They need to be able to issue a notice, and they can't do that unless the government updates the legislation and actually gives them that power."


An eSafety spokesperson said the government had commissioned an independent review of Australia's Online Safety Act and the final report was expected by the end of October.

"We will also continue to work closely with the Australian government to ensure the Online Safety Act and related enabling legislation remains fit for purpose and adequately reflects Australians' needs and expectations," the spokesperson said.


PrepScholar


Getting College Essay Help: Important Do's and Don’ts




If you grow up to be a professional writer, everything you write will first go through an editor before being published. This is because the process of writing is really a process of re-writing—of rethinking and reexamining your work, usually with the help of someone else. So what does this mean for your student writing? And in particular, what does it mean for very important, but nonprofessional writing like your college essay? Should you ask your parents to look at your essay? Pay for an essay service?

If you are wondering what kind of help you can, and should, get with your personal statement, you've come to the right place! In this article, I'll talk about what kind of writing help is useful, ethical, and even expected for your college admission essay. I'll also point out who would make a good editor, what the differences between editing and proofreading are, what to expect from a good editor, and how to spot and stay away from a bad one.


Table of Contents

  • What Kind of Help With Your Essay Can You Get?
  • What's Good Editing?
  • What Should an Editor Do for You?
  • What Kind of Editing Should You Avoid?
  • Proofreading
  • What's Good Proofreading?
  • What Kind of Proofreading Should You Avoid?
  • What Do Colleges Think of You Getting Help With Your Essay?
  • Who Can/Should Help You?
  • Advice for Editors
  • Should You Pay Money for Essay Editing?
  • The Bottom Line
  • What's Next?

What Kind of Help With Your Essay Can You Get?

Rather than talking in general terms about "help," let's first clarify the two different ways that someone else can improve your writing. There is editing, which is the more intensive kind of assistance that you can use throughout the whole process. And then there's proofreading, which is the last step of really polishing your final product.

Let me go into some more detail about editing and proofreading, and then explain how good editors and proofreaders can help you.

Editing is helping the author (in this case, you) go from a rough draft to a finished work. Editing is the process of asking questions about what you're saying, how you're saying it, and how you're organizing your ideas. But not all editing is good editing. In fact, it's very easy for an editor to cross the line from supportive to overbearing and over-involved.

Ability to clarify assignments. A good editor is usually a good writer, and certainly has to be a good reader. For example, in this case, a good editor should make sure you understand the actual essay prompt you're supposed to be answering.

Open-endedness. Good editing is all about asking questions about your ideas and work, but without providing answers. It's about letting you stick to your story and message, and doesn't alter your point of view.

body_landscape.jpg

Think of an editor as a great travel guide. A good guide can show you the many different places your trip could take you and explain any parts of the route that could derail the trip or confuse the traveler. But a guide never dictates your path, never forces you to go somewhere you don't want to go, and never ignores your interests so that the trip no longer seems like your own. So what should good editors do?

Help Brainstorm Topics

Sometimes it's easier to bounce thoughts off of someone else. This doesn't mean that your editor gets to come up with ideas, but they can certainly respond to the various topic options you've come up with. This way, you're less likely to write about the most boring of your ideas, or to write about something that isn't actually important to you.

If you're wondering how to come up with options for your editor to consider, check out our guide to brainstorming topics for your college essay.

Help Revise Your Drafts

Here, your editor has to maintain the delicate balance between intervening too much and too little. It's tricky, but a great way to think about it is to remember: editing is about asking questions, not giving answers.

Revision questions should point out:

  • Places where more detail or more description would help the reader connect with your essay
  • Places where structure and logic don't flow, losing the reader's attention
  • Places where there aren't transitions between paragraphs, confusing the reader
  • Moments where your narrative or the arguments you're making are unclear

But pointing to potential problems is not the same as actually rewriting—editors let authors fix the problems themselves.


Bad editing is usually very heavy-handed editing. Instead of helping you find your best voice and ideas, a bad editor changes your writing into their own vision.

You may be dealing with a bad editor if they:

  • Add material (examples, descriptions) that doesn't come from you
  • Use a thesaurus to make your college essay sound "more mature"
  • Add meaning or insight to the essay that doesn't come from you
  • Tell you what to say and how to say it
  • Write sentences, phrases, and paragraphs for you
  • Change your voice in the essay so it no longer sounds like it was written by a teenager

Colleges can tell the difference between a 17-year-old's writing and a 50-year-old's writing. Not only that, they have access to your SAT or ACT Writing section, so they can compare your essay to something else you wrote. Writing that's a little more polished is great and expected. But a totally different voice and style will raise questions.

Where's the Line Between Helpful Editing and Unethical Over-Editing?

Sometimes it's hard to tell whether your college essay editor is doing the right thing. Here are some guidelines for staying on the ethical side of the line.

  • An editor should say that the opening paragraph is kind of boring, and explain what exactly is making it drag. But it's overstepping for an editor to tell you exactly how to change it.
  • An editor should point out where your prose is unclear or vague. But it's completely inappropriate for the editor to rewrite that section of your essay.
  • An editor should let you know that a section is light on detail or description. But giving you similes and metaphors to beef up that description is a no-go.


Proofreading (also called copy-editing) is checking for errors in the last draft of a written work. It happens at the end of the process and is meant as the final polishing touch. Proofreading is meticulous and detail-oriented, focusing on small corrections. It sands off all the surface rough spots that could alienate the reader.

Because proofreading is usually concerned with making fixes on the word or sentence level, this is the only process where someone else can actually add to or take away things from your essay. This is because what they are adding or taking away tends to be one or two misplaced letters.

Laser focus. Proofreading is all about the tiny details, so the ability to really concentrate on finding small slip-ups is a must.

Excellent grammar and spelling skills. Proofreaders need to dot every "i" and cross every "t." Good proofreaders should correct spelling, punctuation, capitalization, and grammar. They should put foreign words in italics and surround quotations with quotation marks. They should check that you used the correct college's name, and that you adhered to any formatting requirements (name and date at the top of the page, uniform font and size, uniform spacing).

Limited interference. A proofreader needs to make sure that you followed any word limits. But if cuts need to be made to shorten the essay, that's your job and not the proofreader's.


A bad proofreader either tries to turn into an editor, or just lacks the skills and knowledge necessary to do the job.

Some signs that you're working with a bad proofreader are:

  • If they suggest making major changes to the final draft of your essay. Proofreading happens when editing is already finished.
  • If they aren't particularly good at spelling, or don't know grammar, or aren't detail-oriented enough to find someone else's small mistakes.
  • If they start swapping out your words for fancier-sounding synonyms, or changing the voice and sound of your essay in other ways. A proofreader is there to check for errors, not to take the 17-year-old out of your writing.


What Do Colleges Think of Your Getting Help With Your Essay?

Admissions officers agree: light editing and proofreading are good—even required! But they also want to make sure you're the one doing the work on your essay. They want essays with stories, voice, and themes that come from you. They want to see work that reflects your actual writing ability, and that focuses on what you find important.

On the Importance of Editing

Get feedback. Have a fresh pair of eyes give you some feedback. Don't allow someone else to rewrite your essay, but do take advantage of others' edits and opinions when they seem helpful. (Bates College)

Read your essay aloud to someone. Reading the essay out loud offers a chance to hear how your essay sounds outside your head. This exercise reveals flaws in the essay's flow, highlights grammatical errors and helps you ensure that you are communicating the exact message you intended. (Dickinson College)

On the Value of Proofreading

Share your essays with at least one or two people who know you well—such as a parent, teacher, counselor, or friend—and ask for feedback. Remember that you ultimately have control over your essays, and your essays should retain your own voice, but others may be able to catch mistakes that you missed and help suggest areas to cut if you are over the word limit. (Yale University)

Proofread and then ask someone else to proofread for you. Although we want substance, we also want to be able to see that you can write a paper for our professors and avoid careless mistakes that would drive them crazy. (Oberlin College)

On Watching Out for Too Much Outside Influence

Limit the number of people who review your essay. Too much input usually means your voice is lost in the writing style. (Carleton College)

Ask for input (but not too much). Your parents, friends, guidance counselors, coaches, and teachers are great people to bounce ideas off of for your essay. They know how unique and spectacular you are, and they can help you decide how to articulate it. Keep in mind, however, that a 45-year-old lawyer writes quite differently from an 18-year-old student, so if your dad ends up writing the bulk of your essay, we're probably going to notice. (Vanderbilt University)


Now let's talk about some potential people to approach for your college essay editing and proofreading needs. It's best to start close to home and slowly expand outward. Not only are your family and friends more invested in your success than strangers, but they also have a better handle on your interests and personality. This knowledge is key for judging whether your essay is expressing your true self.

Parents or Close Relatives

Your family may be full of potentially excellent editors! Parents are deeply committed to your well-being, and family members know you and your life well enough to offer details or incidents that can be included in your essay. On the other hand, the rewriting process necessarily involves criticism, which is sometimes hard to hear from someone very close to you.

A parent or close family member is a great choice for an editor if you can answer "yes" to the following questions. Is your parent or close relative a good writer or reader? Do you have a relationship where editing your essay won't create conflict? Are you able to constructively listen to criticism and suggestion from the parent?

One suggestion for defusing face-to-face discussions is to try working on the essay over email. Send your parent a draft, have them write you back some comments, and then you can pick which of their suggestions you want to use and which to discard.

Teachers or Tutors

A humanities teacher that you have a good relationship with is a great choice. I am purposefully saying humanities, and not just English, because teachers of Philosophy, History, Anthropology, and any other classes where you do a lot of writing, are all used to reviewing student work.

Moreover, any teacher or tutor that has been working with you for some time, knows you very well and can vet the essay to make sure it "sounds like you."

If your teacher or tutor has some experience with what college essays are supposed to be like, ask them to be your editor. If not, then ask whether they have time to proofread your final draft.

Guidance or College Counselor at Your School

The best thing about asking your counselor to edit your work is that this is their job. This means that they have a very good sense of what colleges are looking for in an application essay.

At the same time, school counselors tend to have relationships with admissions officers in many colleges, which again gives them insight into what works and which college is focused on what aspect of the application.

Unfortunately, in many schools the guidance counselor tends to be way overextended. If your ratio is 300 students to 1 college counselor, you're unlikely to get that person's undivided attention and focus. It is still useful to ask them for general advice about your potential topics, but don't expect them to be able to stay with your essay from first draft to final version.

Friends, Siblings, or Classmates

Although they most likely don't have much experience with what colleges are hoping to see, your peers are excellent sources for checking that your essay is you.

Friends and siblings are perfect for the read-aloud edit. Read your essay to them so they can listen for words and phrases that are stilted, pompous, or phrases that just don't sound like you.

You can even trade essays and give helpful advice on each other's work.


If your editor hasn't worked with college admissions essays very much, no worries! Any astute and attentive reader can still greatly help with your process. But, as in all things, beginners do better with some preparation.

First, your editor should read our advice about how to write a college essay introduction, how to spot and fix a bad college essay, and get a sense of what other students have written by going through some admissions essays that worked.

Then, as they read your essay, they can work through the following series of questions that will help them to guide you.

Introduction Questions

  • Is the first sentence a killer opening line? Why or why not?
  • Does the introduction hook the reader? Does it have a colorful, detailed, and interesting narrative? Or does it propose a compelling or surprising idea?
  • Can you feel the author's voice in the introduction, or is the tone dry, dull, or overly formal? Show the places where the voice comes through.

Essay Body Questions

  • Does the essay have a through-line? Is it built around a central argument, thought, idea, or focus? Can you put this idea into your own words?
  • How is the essay organized? By logical progression? Chronologically? Do you feel order when you read it, or are there moments where you are confused or lose the thread of the essay?
  • Does the essay have both narratives about the author's life and explanations and insight into what these stories reveal about the author's character, personality, goals, or dreams? If not, which is missing?
  • Does the essay flow? Are there smooth transitions/clever links between paragraphs? Between the narrative and moments of insight?

Reader Response Questions

  • Does the writer's personality come through? Do we know what the speaker cares about? Do we get a sense of "who he or she is"?
  • Where did you feel most connected to the essay? Which parts of the essay gave you a "you are there" sensation by invoking your senses? What moments could you picture in your head well?
  • Where are the details and examples vague and not specific enough?
  • Did you get an "a-ha!" feeling anywhere in the essay? Is there a moment of insight that connected all the dots for you? Is there a good reveal or "twist" anywhere in the essay?
  • What are the strengths of this essay? What needs the most improvement?


Should You Pay Money for Essay Editing?

One alternative to asking someone you know to help you with your college essay is the paid editor route. There are two different ways to pay for essay help: a private essay coach or a less personal editing service, like the many proliferating on the internet.

My advice is to think of these options as a last resort rather than your go-to first choice. I'll first go through the reasons why. Then, if you do decide to go with a paid editor, I'll help you decide between a coach and a service.

When to Consider a Paid Editor

In general, I think hiring someone to work on your essay makes a lot of sense if none of the people I discussed above are a possibility for you.

If you can't ask your parents. For example, if your parents aren't good writers, or if English isn't their first language. Or if you think getting your parents to help is going to create unnecessary extra conflict in your relationship with them (applying to college is stressful as it is!).

If you can't ask your teacher or tutor. Maybe you don't have a trusted teacher or tutor that has time to look over your essay with focus. Or, for instance, your favorite humanities teacher has very limited experience with college essays and so won't know what admissions officers want to see.

If you can't ask your guidance counselor. This could be because your guidance counselor is way overwhelmed with other students.

If you can't share your essay with those who know you. It might be that your essay is on a very personal topic that you're unwilling to share with parents, teachers, or peers. Just make sure it doesn't fall into one of the bad-idea topics in our article on bad college essays.

If the cost isn't a consideration. Many of these services are quite expensive, and private coaches even more so. If you have finite resources, I'd say that hiring an SAT or ACT tutor (whether it's PrepScholar or someone else) is a better way to spend your money. This is because there's no guarantee that a slightly better essay will sufficiently elevate the rest of your application, but a significantly higher SAT score will definitely raise your applicant profile much more.

Should You Hire an Essay Coach?

On the plus side, essay coaches have read dozens or even hundreds of college essays, so they have experience with the format. Also, because you'll be working closely with a specific person, it's more personal than sending your essay to a service, which will know even less about you.

But, on the minus side, you'll still be bouncing ideas off of someone who doesn't know that much about you . In general, if you can adequately get the help from someone you know, there is no advantage to paying someone to help you.

If you do decide to hire a coach, ask your school counselor, or older students that have used the service, for recommendations. If you can't afford the coach's fees, ask whether they can work on a sliding scale—many do. And finally, beware those who guarantee admission to your school of choice—essay coaches don't have any special magic that can back up those promises.

Should You Send Your Essay to a Service?

On the plus side, essay editing services provide a similar product to essay coaches, and they cost significantly less . If you have some assurance that you'll be working with a good editor, the lack of face-to-face interaction won't prevent great results.

On the minus side, however, it can be difficult to gauge the quality of the service before working with them . If they are churning through many application essays without getting to know the students they are helping, you could end up with an over-edited essay that sounds just like everyone else's. In the worst case scenario, an unscrupulous service could send you back a plagiarized essay.

Getting recommendations from friends or a school counselor for reputable services is key to avoiding heavy-handed editing that writes essays for you or does too much to change your essay. Including a badly-edited essay like this in your application could cause problems if there are inconsistencies. For example, in interviews it might be clear you didn't write the essay, or the skill of the essay might not be reflected in your schoolwork and test scores.

Should You Buy an Essay Written by Someone Else?

Let me elaborate. There are super sketchy places on the internet where you can simply buy a pre-written essay. Don't do this!

For one thing, you'll be lying on an official, signed document. All college applications make you sign a statement saying something like this:

I certify that all information submitted in the admission process—including the application, the personal essay, any supplements, and any other supporting materials—is my own work, factually true, and honestly presented... I understand that I may be subject to a range of possible disciplinary actions, including admission revocation, expulsion, or revocation of course credit, grades, and degree, should the information I have certified be false. (From the Common Application)

For another thing, if your academic record doesn't match the essay's quality, the admissions officer will start thinking your whole application is riddled with lies.

Admission officers have full access to your writing portion of the SAT or ACT so that they can compare work that was done in proctored conditions with that done at home. They can tell if these were written by different people. Not only that, but there are now a number of search engines that faculty and admission officers can use to see if an essay contains strings of words that have appeared in other essays—you have no guarantee that the essay you bought wasn't also bought by 50 other students.
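As a rough illustration of how such string-matching checks work in principle, the short Python sketch below flags word runs that two essays share. It is a toy example written for this article, not the tool any admissions office actually uses, and the eight-word window is an arbitrary choice.

import re

def ngrams(text, n=8):
    """All n-word sequences in a text, lowercased and stripped of punctuation."""
    words = re.findall(r"[a-z']+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def shared_passages(essay_a, essay_b, n=8):
    """Word runs of length n that appear verbatim in both essays."""
    return ngrams(essay_a, n) & ngrams(essay_b, n)

# Any non-empty result means an eight-word run appears word-for-word in both texts,
# which is exactly the kind of overlap a purchased or recycled essay tends to produce.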


The Bottom Line

  • You should get college essay help with both editing and proofreading
  • A good editor will ask questions about your ideas, logic, and structure, and will point out places where clarity is needed
  • A good editor will absolutely not answer these questions, give you their own ideas, or write the essay or parts of the essay for you
  • A good proofreader will find typos and check your formatting
  • Admissions officers agree that getting light editing and proofreading is necessary
  • Good people to ask include parents, teachers, your guidance or college counselor, and peers or siblings
  • If you can't ask any of those, you can pay for college essay help, but watch out for services or coaches who over-edit your work
  • Don't buy a pre-written essay! Colleges can tell, and it'll make your whole application sound false.

Ready to start working on your essay? Check out our explanation of the point of the personal essay and the role it plays on your applications and then explore our step-by-step guide to writing a great college essay.

Using the Common Application for your college applications? We have an excellent guide to the Common App essay prompts and useful advice on how to pick the Common App prompt that's right for you. Wondering how other people tackled these prompts? Then work through our roundup of over 130 real college essay examples published by colleges.

Stressed about whether to take the SAT again before submitting your application? Let us help you decide how many times to take this test. If you choose to go for it, we have the ultimate guide to studying for the SAT to give you the ins and outs of the best ways to study.



Anna scored in the 99th percentile on her SATs in high school, and went on to major in English at Princeton and to get her doctorate in English Literature at Columbia. She is passionate about improving student access to higher education.



Guest Essay

What’s Happening in Britain Is Shocking. But It’s Not Surprising.


By Hibaq Farah

Ms. Farah is a staff editor in Opinion. She wrote from London.

The scenes are shocking.

In the wake of the murder of three young girls in the northwestern town of Southport, England, riots erupted across the country. Seizing on misinformation about the suspect's identity, far-right rioters embarked on a harrowing rampage, setting fire to cars, harassing Muslims, looting stores and attacking mosques as well as hotels housing asylum seekers. Over one early August weekend, there were over 50 protests and almost 400 arrests. In the week since, hundreds of rioters have been charged and dozens convicted.

The country is stunned. But for all the events’ eye-popping madness, we shouldn’t be surprised. The animosities underpinning the riots — hatred of Muslims and migrants alike — have long found expression in Britain’s political culture, not least under the previous Conservative government whose cornerstone commitment was to “stop the boats” on which migrants made their way to British shores.

Far-right extremists, emboldened by that government’s turn to migrant-bashing, have been waiting for the perfect chance to take to the streets. Crucially, they have found a home online, where platforms — poorly regulated and barely moderated — allow the spread of hate-filled disinformation, whipping up a frenzy. These have been disturbing days. But the chaos has been coming.

Disinformation is at the heart of the riots. In the aftermath of the killings in Southport, users on X posted and shared false claims, stating that the alleged attacker was an asylum seeker who arrived in Britain by boat — when he was in fact born and raised in Wales. On TikTok, far-right users went live and called on one another to gather in protest. Their reach was wide. Thanks to the platform’s aggressively personalized For You page, it is not difficult to get videos in front of users who have already engaged with far-right or anti-migrant content.

The apparatus of assembly extended to messaging services. On Telegram, far-right group chats shared lists of protest locations; one message included the line "they won't stop coming until you tell them." In WhatsApp chats, there were messages about reclaiming the streets and taking out "major bases" of immigrant areas in London. These calls to action were quickly amplified by far-right figures like Andrew Tate and Tommy Robinson, the founder of the English Defense League, who took to X to spread lies and foment hate. Almost immediately, people were out on the streets, wreaking havoc.

There was little to stop the outpouring of false claims and hateful language, even after officials released information about the suspect's identity. Legislation on internet safety is murky and confusing. Last year, the Conservative government passed the Online Safety Act, whose remit is to protect children and force social media companies to remove illegal content. But there is no clear reference in the law to misinformation.


