Note . In each model, the predictor was standardized, but the censoring rates were not. The censoring rates ranged from 0 to 1. The path coefficients reported are unstandardized. † indicates p < .1. * indicates p < .05. ** indicates p < .01. *** indicates p < .001.
Our pre-registered mediational analyses (see SOM-III) suggest that essentialist beliefs regarding people's stance on abortion rights might be at least one mediating mechanism explaining the fusion effect on selective censoring. In our pre-registration, we also proposed to test the fusion effect controlling for other identity-related measures. We accordingly report a model in which the predictive ability of all the identity-related measures is compared (see SOM-V). Nevertheless, because the measured variables are all strongly related both conceptually and empirically (see Table 2), after establishing that multicollinearity was not a problem, we examined whether each of these variables independently predicts selective censoring.
The foregoing analyses revealed that identity fusion with a cause is associated with a tendency to disproportionately censor online content that is incongruent with the cause. To test the pre-registered hypothesis that strongly fused individuals would also display a censoring bias against the authors of incongruent content, we examined an SEM model with two dependent variables corresponding to the binary indicators of whether the participant decided to ban the authors of the incongruent and congruent comments. Fusion was not significantly associated with banning the author of the incongruent comments (OR = 1.17, 95% CI = [0.95, 1.45], p = .14) or the congruent comments (OR = 0.99, 95% CI = [0.78, 1.25], p = .90). The difference between the two paths, tested with a likelihood-ratio (chi-square difference) test, was not significant (χ2(1) = 1.18, p = .28), indicating that fusion was not associated with selectively censoring authors of incongruent comments. However, given that the non-significant coefficients of the two paths were in the predicted direction, it is possible that there exists a small effect that our sample was not sufficiently powered to detect.
To verify Study 1's exploratory finding and our pre-registered hypothesis that the offensiveness of comments would not moderate the effect of fusion on selective censoring, we modeled the paths from fusion to participants' censoring rates for four types of comments: Offensive-Congruent, Offensive-Incongruent, Inoffensive-Congruent, and Inoffensive-Incongruent (see Fig. 5 ).
Structural Equations Model examining the effect of identity fusion on selective censoring of incongruent vs. congruent comments among offensive and inoffensive comments (Study 2). Δ p and Δ q represent fusion's effects on selective censoring among offensive comments and inoffensive comments, respectively. The significant effects indicate that strongly fused people selectively censored incongruent comments whether the comments were offensive or inoffensive. See SOM-IV for path coefficients. * indicates p < .05. ** indicates p < .01.
Among offensive comments, fusion was associated with selectively censoring incongruent comments over congruent comments (Δp = p1 – p2; b = 0.04, 95% CI = [0.02, 0.06], p = .001). Similarly, among inoffensive comments, strongly fused individuals selectively censored incongruent comments (Δq = q1 – q2; b = 0.02, 95% CI = [0.005, 0.04], p = .008). (The four path coefficients are reported in SOM-IV.) The two significant selective censoring effects suggest that strongly fused people's selective intolerance for incongruent comments was observable among both offensive and inoffensive comments. Comparing the two selective censoring effects for offensive vs. inoffensive comments (Δp – Δq) revealed a marginally significant difference (χ2(1) = 3.34, p = .07), suggesting that fusion's effect on selective censoring may have been larger for offensive than inoffensive comments. What is striking, however, is that, as in Study 1, strongly fused people selectively censored incongruent comments even when the comments were inoffensive.
Thus far, we focused on the effects of identity fusion. Nevertheless, we conducted exploratory analyses testing the possibility that selective censoring of incongruent comments results from a constellation of identity-related processes. To test this possibility, we assessed the effects of attitude strength (attitude extremity, attitude centrality, attitude certainty, and attitude importance), moral conviction, and identification with supporters, which all index different aspects of people's alignment with a cause. Using the same approach as in the fusion analysis, we sequentially tested the relation of each of the seven predictors to selective censoring. Table 3 reports each model's path coefficients from the tested variable to censoring incongruent comments (c1) and to censoring congruent comments (c2). Table 3 also reports the chi-square difference between the two paths (c1 – c2) indicating the extent to which the tested variable is associated with selectively censoring incongruent comments. The last column presents linear regression coefficients from alternate analyses testing the effect of each identity-related measure on the difference in participants' censoring rates for incongruent vs. congruent comments.
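The alternate regression analyses (one identity-related predictor per model, each predicting the incongruent-minus-congruent censoring difference) can be sketched as follows. This is an illustrative sketch, not the authors' code: the simulated data, effect size, and column names are all hypothetical, and the original models were presumably fit in dedicated SEM software.

```python
# Sketch of the per-predictor regression analyses (Table 3, last column).
# All data below are simulated for illustration; only the analysis
# structure (one standardized predictor per model) follows the text.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "fusion": rng.standard_normal(n),
    "moral_conviction": rng.standard_normal(n),
    "attitude_importance": rng.standard_normal(n),
})
# Hypothetical censoring difference index (incongruent minus congruent rate).
df["censor_diff"] = 0.03 * df["fusion"] + rng.normal(0, 0.15, n)

results = {}
for predictor in ["fusion", "moral_conviction", "attitude_importance"]:
    # Standardize the predictor; the outcome stays in raw (0-1) units,
    # mirroring the paper's unstandardized coefficients.
    x = (df[predictor] - df[predictor].mean()) / df[predictor].std()
    res = stats.linregress(x, df["censor_diff"])
    results[predictor] = (res.slope, res.pvalue)
```

Each entry in `results` corresponds to one single-predictor model, matching the note that each model included only one predictor.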
As indicated by the significant chi-square differences (Δc) and the significant regression coefficients (b) in Table 3, each of the constructs produced selective censoring similar to the fusion effects, which is preliminary evidence that broader identity-related processes motivate selective censoring.
Interestingly, most of the predictors (attitude certainty, attitude centrality, attitude extremity, identification with cause supporters, and moral conviction) were negatively associated with censoring congruent comments (see c2 coefficients in Table 3), indicating that they produce a tendency to be lenient toward congruent comments. In contrast, fusion and attitude importance were not correlated with censoring congruent comments; instead, they were positively associated with censoring incongruent comments (see c1 coefficients in Table 3), implying that these constructs were associated with an intolerance for incongruent comments. We speculate that a preference for congruent content and an intolerance of incongruent content reflect two independent mechanisms leading to selective censorship of incongruent comments.
We tested another SEM model (not pre-registered) similar to the fusion analysis to assess the effect of people's stance on abortion rights (pro-choice vs. pro-life). Unlike in Study 1, pro-choice participants selectively censored incongruent comments as much as pro-life participants (χ2(1) = 2.38, p = .12), which may be due to higher threat levels among pro-choice participants following the 2018 nomination of Justice Kavanaugh to the Supreme Court. That is, owing to the conservative shift in the makeup of the Supreme Court in 2018, pro-choice participants in Study 2 may have generally faced higher threat relative to Study 1, which could have increased their tendency to selectively censor pro-life comments. There was also no difference in fusion levels among pro-choice and pro-life participants (t(537) = 0.59, p = .56, d = 0.07).
Study 2 replicated Study 1's main findings that people censor online content that is incongruent with their own political views and that strongly fused individuals are especially likely to selectively censor incongruent content. Strongly fused people's tendency to selectively censor incongruent comments was robust for both offensive and inoffensive comments. Contrary to Study 1, we did not find evidence that pro-life participants selectively censored more than pro-choice participants, which we believe could be due to the socio-political environment during Study 2 data collection.
In addition to replicating Study 1 effects, Study 2 also examined people's willingness to ban the authors of incongruent vs. congruent comments from the forum. We found that cause supporters selectively banned the author who consistently posted cause-incongruent content. Contrary to our hypothesis, this effect was not amplified by fusion. This may have been because banning authors is a relatively extreme action that participants in our samples generally did not endorse. Conceivably, there is a small association of fusion with selective censoring of authors that our sample was underpowered to detect.
Finally, the study found that the selective censoring effect extends to an array of identity-related measures in the literature. The findings also indicate that there may be different paths to selective censorship of opposing content: Whereas fusion and attitude importance were associated with an increased tendency to censor incongruent comments, the other identity-related predictors were associated with a weaker tendency to censor congruent comments.
In short, the results of Study 2 replicated the selective censoring effect that emerged in Study 1. A potential limitation of these studies, however, is that both focused on an issue rooted in religious values, abortion rights. To address this, Study 3 focused on gun rights. The gun-rights issue was particularly relevant at the time the study was conducted because gun sales peaked during the COVID-19 crisis ( Collins and Yaffe-Bellany, 2020 ).
The method used in Study 3 resembled those used in the previous studies except that we used a more controlled manipulation of comment offensiveness that kept the content of the comments constant. Whereas in Study 2 comments were categorized as offensive or inoffensive based on coders' ratings, in Study 3, for each inoffensive comment, we generated an offensive version by adding offensive phrases. In this way, the content of the inoffensive and offensive comments was identical except for the offensive language. Finally, as in Study 2, we assessed whether the selective censoring effect of fusion generalized to other identity-related measures such as indices of attitude strength, moral conviction, and identification with cause supporters.
8.1. Power analysis
As mentioned in our pre-registration (see https://osf.io/x3w7h/?view_only=a25d722f3a03405e9e4f074a622b10b4 ), an a priori power analysis conducted using Monte Carlo simulations indicated that a sample of 325 participants was required to detect the selective censoring effect detected in Study 2 with an alpha of 0.05 and 80% power. Given the longitudinal nature of the study, we estimated that approximately 30% of the sample would either drop out between T1 and T2 or fail attention checks, and so we decided to recruit 460 participants at T1.
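The Monte Carlo approach to power analysis described above can be sketched as follows. This is not the authors' simulation code: the assumed effect size, residual SD, and the simple-regression framing are illustrative assumptions, and the pre-registered simulations presumably mirrored the full SEM model.

```python
# Monte Carlo power sketch: repeatedly simulate data under an assumed
# effect, re-run the test, and count the proportion of significant results.
import numpy as np
from scipy import stats

def simulate_power(n, b=0.03, resid_sd=0.15, alpha=0.05, n_sims=2000, seed=0):
    """Estimate power to detect slope b of a censoring-difference index
    on a standardized predictor (all parameter values are hypothetical)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        fusion = rng.standard_normal(n)                  # standardized predictor
        diff = b * fusion + rng.normal(0, resid_sd, n)   # simulated outcome
        if stats.linregress(fusion, diff).pvalue < alpha:
            hits += 1
    return hits / n_sims
```

In this framework, one would increase `n` until the estimated power reaches the target (here, 80% at alpha = .05), then inflate the recruitment target to absorb expected attrition, as the authors did (325 → 460).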
8.2.1. Participants
A sample of 466 participants (49.6% female; 67.0% White; M age = 31.18; SD age = 11.14) from Prolific Academic completed the first part of the study in May 2020. Participants' views on gun rights were measured in the T1 survey (370 pro-gun-control and 96 pro-gun-rights participants).
Participants completed the identity fusion scale for their position on gun rights (either pro-gun or anti-gun) on a seven-point scale (α = 0.93). Using the measures from Study 2, we measured four facets of attitude strength (attitude extremity, attitude centrality, attitude certainty, and attitude importance), moral conviction, and identification with cause supporters (α = 0.86). The order of presentation of the above constructs was randomized. Means, standard deviations, and inter-variable correlations are reported in Table 5 . Finally, participants provided demographic information.
Means, standard deviations, and correlations with confidence intervals in Study 3 (N = 371).
Variable | M | SD | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
---|---|---|---|---|---|---|---|---|---|---
1. Fusion with cause | 3.45 | 1.43 | | | | | | | |
2. Attitude extremity | 7.62 | 1.36 | 0.31 | | | | | | |
3. Attitude centrality | 6.75 | 1.84 | 0.49 | 0.53 | | | | | |
4. Attitude certainty | 7.58 | 1.46 | 0.38 | 0.69 | 0.55 | | | | |
5. Attitude importance | 6.55 | 1.79 | 0.61 | 0.55 | 0.73 | 0.59 | | | |
6. Moral conviction | 3.37 | 1.03 | 0.49 | 0.49 | 0.68 | 0.54 | 0.56 | | |
7. Identification with cause supporters | 5.65 | 1.06 | 0.50 | 0.63 | 0.56 | 0.63 | 0.61 | 0.57 | |
8. Rate of censoring congruent comments | 0.28 | 0.18 | −0.04 | −0.01 | −0.05 | −0.01 | −0.04 | −0.06 | −0.04 |
9. Rate of censoring incongruent comments | 0.37 | 0.20 | 0.08 | 0.11 | 0.10 | 0.13 | 0.13 | 0.07 | 0.07 | 0.56
Note . The censoring rates, ranging from 0 to 1, refer to the proportion of comments of each type (congruent and incongruent) that participants censored. Fusion's effect on selective censoring is the difference between fusion's association with the censoring rates of congruent and incongruent comments. This effect was not moderated by position on gun rights. * indicates p < .05. ** indicates p < .01.
8.3.1. Participants
Two weeks after completing the T1 survey, participants were able to complete a “Comment Moderation Task”. A total of 373 participants completed the task. Two participants who completed less than 50% of the task were excluded, leaving us with a final sample of 371 participants (52.85% female; 66.85% White; M age = 31.45; SD age = 11.61; 297 pro-gun-control and 74 pro-gun-rights participants). A sensitivity analysis revealed that our sample had 85% power to detect the fusion effect on selective censoring reported in Study 2. We found a difference in fusion levels between people who did vs. did not complete the T2 session such that individuals who completed T2 were more fused with the cause ( t (462) = 2.01, p = .05, d = −0.23).
As in the previous studies, we asked participants to help moderators of a college-run discussion forum identify inappropriate posts for removal. We gathered 14 pro-gun-rights comments and 14 pro-gun-control comments from the internet, resulting in 28 comments. We created offensive and inoffensive versions of each comment by including or excluding offensive phrases. Participants read either the offensive or inoffensive version of each of the 28 comments. Overall, participants read four types of comments ( N = 7 for each type): Offensive-Pro-gun-rights, Inoffensive-Pro-gun-rights, Offensive-Pro-gun-control, and Inoffensive-Pro-gun-control (see Table 4 for example comments). As in Study 2, each comment was accompanied by a user icon and timestamp, as in real online forums. The pro-gun-rights comments were all posted by a single user, and the pro-gun-control comments were all posted by another user. As in the previous studies, for each comment, participants recommended deletion or retention. After evaluating all comments, participants were also asked whether the two users whose comments they read should be banned from the blog ("Ban this user from the blog" or "Do not ban this user from the blog"). Finally, participants rated how much they doubted that the forum was real on a five-point scale (1 = not at all, 5 = a great deal). The mean rating ( M = 2.65, SD = 0.99) was lower than the mid-point of the scale (i.e., 3 = a moderate amount; t(366) = −6.77, p < .001, d = −0.35), suggesting that participants generally did not doubt the veracity of the paradigm.
Sample comments rated by participants in Study 3. The study included 28 comments (14 pro-gun-rights and 14 pro-gun-control), each of which had an offensive and an inoffensive version. Participants rated either the offensive or inoffensive version of each of the 28 comments. The comments were presented in the format illustrated in Fig. 3 and in random order.
Sample comments rated by Participant 1 | Sample comments rated by Participant 2 | |
---|---|---|
Pro-gun-rights | : We must defend the right to keep and bear arms through communication and coordinated action, retarded dumbasses like you just don't get it. [ ] : Everyone should be pro gun. Pro gun = pro freedom. Pro gun = anti tyranny. [ ] | : We must defend the inherent right to keep and bear arms through communication and coordinated action. [ ] : You're must be an unfixable dumbfuck if you don't get this: Pro gun = pro freedom. Pro gun = anti tyranny. [ ] |
Pro-gun-control | - : Why aren't guns and, oh yeah, assault rifles banned? Why aren't you banned? It is unbelievable that this has been allowed to continue. I am mortified that you exist. Enough is enough! #guncontrol #fuckguns [ ] - : I don't care about Thoughts and Prayers. It's just a phrase that people use instead of “Thoughts and Actions”. [ ] | - : Why aren't guns and specifically assault rifles banned? It is unbelievable that this has been allowed to continue. Enough is enough! #guncontrol #nomoreguns [ ] - : I Don't Give a Fuck About Your Thoughts and Prayers. It's just a shitty, waste of words that people use instead of “Thoughts and Actions”. [ ] |
For each participant, we calculated censoring rates corresponding to comments congruent and incongruent with their own position on guns. For the offensiveness-related analyses, we also computed censoring rates for each of the four types of comments (Offensive-Congruent, Offensive-Incongruent, Inoffensive-Congruent, and Inoffensive-Incongruent). Overall, participants censored offensive comments ( M = 0.58, SD = 0.28) more than inoffensive comments ( M = 0.07, SD = 0.12; t(370) = 33.98, p < .001, d = 2.27), indicating that the offensiveness manipulation was successful. The censoring rates for offensive and inoffensive comments were correlated, albeit more weakly than in Study 1 ( r (369) = 0.17, p < .001).
9.1. Did people selectively censor comments incongruent with their cause and the comments' authors?
We tested the pre-registered hypothesis that people would selectively censor incongruent comments more than congruent comments by conducting a paired t-test comparing the censoring rates for incongruent vs. congruent comments. Replicating findings from the first two studies, people censored more incongruent comments ( M = 36.97%, SD = 19.64) than congruent comments ( M = 27.88%, SD = 17.62), t(370) = 10.02, p < .001, d = 0.49.
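The censoring-rate computation and paired comparison above can be sketched as follows. The toy data and column names are hypothetical; only the analysis structure (per-participant deletion proportions by comment type, then a paired t-test) follows the text.

```python
# Sketch: per-participant censoring rates and paired t-test
# (illustrative data; 'pid', 'congruent', 'deleted' are hypothetical names).
import pandas as pd
from scipy import stats

# One row per participant x comment: 'deleted' = 1 if the participant
# recommended deletion; 'congruent' = 1 if the comment matched the
# participant's own position on gun rights.
trials = pd.DataFrame({
    "pid":       [1, 1, 1, 1, 2, 2, 2, 2],
    "congruent": [1, 1, 0, 0, 1, 1, 0, 0],
    "deleted":   [0, 0, 1, 1, 1, 0, 1, 0],
})

# Censoring rate = proportion of each comment type the participant deleted.
rates = (trials.groupby(["pid", "congruent"])["deleted"]
               .mean()
               .unstack("congruent")
               .rename(columns={0: "incongruent", 1: "congruent"}))

# Paired comparison of the two rates within participants.
t, p = stats.ttest_rel(rates["incongruent"], rates["congruent"])
```

The same per-participant rates, split further by offensiveness, would feed the four-comment-type analyses reported later.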
We also conducted a pre-registered analysis testing whether people were disproportionately willing to ban the author of the incongruent comments relative to the author of the congruent comments. Contrary to our hypothesis and the results of Study 1, we did not find a significant difference (χ 2 (1) = 1.92, p = .17). Nevertheless, the means trended in the expected direction. That is, 32.69% of participants banned the user who posted incongruent comments as opposed to just 29.51% who banned the user posting congruent comments.
To test our pre-registered hypothesis that strongly fused individuals would be especially likely to selectively censor incongruent comments, we tested an SEM model (see Fig. 6 ) with residual covariances between the censoring rates. (Alternate analyses treating the difference between censoring rates of incongruent and congruent comments as the selective censoring index, reported in Table 6 below and in SOM-II, yield the same findings.) As in Studies 1 and 2, we standardized the predictors in all the SEM analyses, and we report unstandardized regression coefficients. Fusion positively (but not significantly) predicted censoring incongruent comments (c1 path; b = 0.02, 95% CI = [−0.004, 0.04], p = .12) but did not predict censoring congruent comments (c2 path; b = −0.006, 95% CI = [−0.02, 0.01], p = .49). The difference between the fusion effects on censoring incongruent vs. congruent comments was significant (Δc = c1 – c2; χ2(1) = 6.01, p = .01), which is evidence that fusion is associated with selective censoring. To illustrate, participants who were strongly fused (+1 SD) censored 41.47% of the incongruent comments they read but only 28.56% of the congruent comments. Weakly fused participants censored 35.92% of the incongruent comments and 29.52% of the congruent comments, indicating weaker selective censoring. The effect of fusion on selective censoring remained significant when we controlled for whether participants were pro-gun-rights or pro-gun-control (χ2(1) = 9.24, p = .002), and the effect was not moderated by position on gun rights (χ2(1) = 0.05, p = .83).
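The chi-square difference test comparing the c1 and c2 paths is a likelihood-ratio test between the free model and a model constraining the two paths to be equal. A minimal sketch of the computation follows; the log-likelihood values are hypothetical stand-ins for what SEM software would report for the two nested models.

```python
# Sketch of a chi-square difference (likelihood-ratio) test between
# nested SEM models. Log-likelihoods below are hypothetical.
from scipy import stats

def chisq_difference(loglik_free, loglik_constrained, df_diff=1):
    """LR statistic 2*(LL_free - LL_constrained), compared against
    a chi-square distribution with df_diff degrees of freedom."""
    chi2 = 2.0 * (loglik_free - loglik_constrained)
    p = stats.chi2.sf(chi2, df_diff)
    return chi2, p

# Hypothetical log-likelihoods for the free model and the model
# constraining c1 = c2; chosen so the statistic is 6.0, near the
# reported chi-square(1) = 6.01.
chi2, p = chisq_difference(-1021.4, -1024.4)
```

A significant statistic here means the equality constraint c1 = c2 worsens fit, i.e., the predictor relates differently to censoring incongruent vs. congruent comments.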
Path coefficients (c1 and c2) and chi-square values (χ2) from SEM models, and coefficients from regression models testing the effects of each identity-related measure on selective censoring (Study 3). Note that each model included only one predictor.
Predictor in model | Censoring incongruent comments (c1) | Censoring congruent comments (c2) | Selective censoring (Δc = c1 – c2) χ2 | Selective censoring difference index (b)
---|---|---|---|---
Model 1: Fusion with cause | 0.02 | −0.006 | 6.01 | 0.02 |
Model 2: Attitude importance | 0.03 | −0.01 | 13.45 | 0.03 |
Model 3: Attitude certainty | 0.03 | −0.002 | 9.86 | 0.03 |
Model 4: Attitude centrality | 0.02 | −0.01 | 11.26 | 0.03 |
Model 5: Attitude extremity | 0.02 | −0.002 | 7.01 | 0.02 |
Model 6: Identification with cause supporters | 0.02 | −0.007 | 5.51 | 0.02 |
Model 7: Moral conviction | 0.01 | −0.01 | 7.33 | 0.03 |
Note . In each model, the predictor was standardized, but the censoring rates were not. The censoring rates ranged from 0 to 1. The path coefficients reported are unstandardized. * indicates p < .05. ** indicates p < .01. *** indicates p < .001.
Structural Equations Model depicting the effect of identity fusion on selective censoring of incongruent vs. congruent comments (Study 3). The c 1 and c 2 paths represent the effects of fusion on censoring incongruent and congruent comments respectively. The significant difference between the two paths (Δ c ) indicates that fusion is associated with selectively censoring incongruent comments. * indicates p < .05.
As in the previous studies and consistent with the pre-registration, we modeled the paths from fusion to participants' censoring rates for four types of comments: Offensive-Congruent, Offensive-Incongruent, Inoffensive-Congruent, and Inoffensive-Incongruent (see Fig. 7 ). Among inoffensive comments, fusion was associated with selectively censoring incongruent comments over congruent comments (Δq = q1 – q2; b = 0.03, 95% CI = [0.009, 0.04], p = .003). Among offensive comments, the effect was in the predicted direction but not significant (Δp = p1 – p2; b = 0.02, 95% CI = [−0.007, 0.04], p = .16). (The four path coefficients are reported in SOM-IV.) Comparing the two selective censoring effects for offensive vs. inoffensive comments (Δp – Δq) revealed no difference (χ2(1) = 0.39, p = .53).
Structural Equations Model examining the effect of identity fusion on selective censoring of incongruent vs. congruent comments among offensive and inoffensive comments (Study 3). Δ p and Δ q represent fusion's effects on selective censoring among offensive comments and inoffensive comments, respectively. The difference between them was not significant, which indicates that comment offensiveness did not moderate fusion's effect on selective censoring. See SOM-IV for path coefficients. ** indicates p < .01.
We then tested our pre-registered hypothesis that fusion's effect on selective censoring would extend to seven identity-related measures. Using models similar to the fusion analysis, we tested the effect of each predictor on selective censoring. Table 6 reports each model's path coefficients from the tested variable to censoring incongruent (c1) and congruent (c2) comments, and the chi-square difference between the two paths (c1 – c2) indicating the extent to which the tested variable is associated with selective censoring. The last column in Table 6 presents linear regression coefficients from alternate models testing the effect of each identity-related measure on the difference between participants' censoring rates for incongruent and congruent comments. The significant chi-square differences (Δc) and regression coefficients (b) indicate that the selective censoring effect generalized to each of the seven identity-related measures. In contrast to Study 2, the selective censoring effect was largely driven by positive associations between the identity-related measures and censoring incongruent comments.
We tested another exploratory SEM model to assess the effect of people's stance on gun rights (pro-gun-rights vs. pro-gun-control). Gun-control supporters selectively censored incongruent comments more than gun-rights supporters (χ2(1) = 17.09, p < .001) even though pro-gun-rights supporters tended to be more strongly fused than pro-gun-control supporters ( t (367) = 2.18, p = .03, d = 0.28). Study 3 was conducted during a period that saw increased gun sales ( Collins and Yaffe-Bellany, 2020 ), which should have increased the threat perceived by gun-control supporters, increasing their tendency to selectively censor opposition.
Study 3 demonstrated that the selective censoring effect extends beyond religiously tinged issues such as abortion rights. Specifically, people selectively censored comments that opposed their views on the gun rights debate, and this effect was amplified among people who were strongly fused with their cause. As in Studies 1 and 2, people selectively censored incongruent comments even when they were inoffensive. Contrary to Study 2, we did not find a significant selective censoring effect on offensive comments, but it could be that our study was underpowered to detect this effect. Further, gun-control proponents selectively censored more than gun-rights proponents, which, when taken together with Studies 1 and 2, suggests that people's willingness to selectively censor may depend on the cause at hand (pro-choice or pro-gun-control) and the political context (e.g., level of threat faced by the cause) rather than political ideology (left or right).
Study 3 also replicated the Study 2 finding that selective censoring extends to a range of identity-related constructs including attitude strength, identification with supporters, and moral conviction. Nevertheless, we did not find similar results across Studies 2 and 3 regarding the degree to which each identity-related process produced a lenience toward congruent content or an intolerance of incongruent content. Future research will need to disentangle the links between identity-related processes and selective censoring.
The current research provides an initial glimpse into how people censor political opponents when moderating online content. Specifically, in three studies, participants who were asked to moderate an online forum deleted approximately 5–12% more identity-incongruent, relative to identity-congruent, comments from putative online forums. Moreover, we found weak evidence that participants were about 3–5 percentage points more likely to ban authors of incongruent as compared to congruent comments. These findings transcend past research on selective exposure and avoidance ( Bakshy et al., 2015 ; Garrett, 2009a ; van der Linden, 2017 ) because censorship is a particularly extreme action that affects not just one's own online environment but also the environments of other people. Furthermore, unlike traditional censorship enforced only by the state ( Bonsaver, 2007 ; Fishburn, 2008 ), the decentralized nature of this new form of censorship implemented by independent users could make it easy to overlook and thus potentially more insidious.
Our evidence that people censor the social media posts of political opponents is consistent with recent evidence that the salutary impact of intergroup contact on intergroup harmony ( Paluck et al., 2018 ) may not extend to online interactions ( Bail et al., 2018 ). We also show, however, that selective censorship of opponents' comments was amplified among people whose cause-related views were firmly rooted in their identities. Strongly fused participants deleted approximately 13–18% more identity-incongruent than identity-congruent comments, while weakly fused participants were much less biased (0–9%). Strikingly, strongly fused individuals disproportionately censored opponents' comments even when the comments conveyed opposing views in an inoffensive and courteous manner. The identity-driven effect on selective censoring generalized to six other identity-related measures including indices of attitude strength, moral conviction, and identification with cause supporters. The converging results across the various predictors suggest that selective censoring results from a combination of several identity-related processes.
Future research might work toward developing a theoretical model of selective censoring that elaborates the relationships between various identity-related processes. Such work might also investigate the two possible mechanisms underlying selective censoring: lenience toward congruent content versus intolerance of incongruent content. Future researchers might also follow up on our evidence that strongly fused participants were especially apt to censor opponents' comments but not the opponents themselves. Perhaps people ban individuals based on their single most offensive comment rather than on an evaluation of multiple comments. Further, whereas we focused on identity-related processes, future research might consider other processes, such as expectations regarding the content online subscribers of a given forum prefer ( Haselmayer et al., 2017 ), that may also contribute to moderators' selective censoring.
The censorship effects described here could have considerable impact on online forums and communities that millions of people follow. Studies of moderators have noted that a small number of them govern very large online communities and that they hold enormous power over their communities ( Frith, 2014 ; Matias, 2016b ). Still, past work on moderators has largely focused on how people become moderators ( Shaw and Hill, 2014 ), and the nature of their roles ( Berge and Collins, 2000 ; Colladon and Vagaggini, 2017 ; Frith, 2014 ) and struggles ( Matias, 2016a ). Although some case studies have examined abuse of power by moderators ( Yang, 2019 ), including anecdotal evidence of politically motivated censorship ( Wright, 2006 ), the current research is the first systematic investigation of censoring among people who moderate online communities. This investigation is consequential because selective censoring that favors the viewpoints of a small number of moderators could produce huge biases in the content that millions see. Indeed, censoring by powerful moderators can give onlookers who are unaware that censoring has occurred a false impression of the views held in an online community and of who belongs there.
Still, our findings may generalize beyond the groups of people who serve as moderators of large online communities or forums. The millions of people who own blogs, YouTube channels, and social media pages can moderate others' comments on the platforms they control. Even regular social media users can moderate others' comments on their own posts. Of course, in our studies, participants were explicitly given the goal of deleting inappropriate comments. Because most regular social media users may not experience a strong deletion-focused goal, they may censor less than moderators do. Nevertheless, the collective impact of each of these individuals' censoring could produce substantial consequences.
We believe censorship is a potentially overlooked factor in the heightened political polarization our culture is witnessing. This could have important ramifications. For example, selective censoring could lead to a lack of exposure to different viewpoints, creating echo chambers and causing people to develop increasingly extreme opinions ( Price et al., 2006 ) and to overestimate the prevalence of their own viewpoints ( Ross et al., 1977 ). In addition, opponents of causes may witness the increased extremism of inhabitants of the echo chamber and respond in kind by adopting extreme opposing views of their own ( Bail et al., 2018 ). These processes may reinforce themselves, producing more and more polarization over time ( Allcott et al., 2020 ). Censorship could also have implications for the people being censored, who may feel marginalized and become disengaged from the online community or less likely to share their views in the future. Future studies should examine the consequences of selective censoring in online contexts.
Contemporary pundits often blame the apparent increase in polarization on “the internet” or “social media.” Researchers have found some basis for such assertions by demonstrating that internet users are indeed selectively exposed to evidence that would lend support to their views. Our findings move beyond this literature by demonstrating that moderators employ censorship not only to bring online content into harmony with their values, but to actively advance their causes and attack opponents of their causes. From this vantage point, those whose political beliefs are rooted in their identities are not passive participants in online polarization; rather, they are agentic actors who actively curate online environments by censoring content that challenges their ideological positions. By providing a window into the psychology underlying these dynamics, our research may open up a broader vista of related processes for systematic study.
This work was supported by the National Science Foundation [grants BCS-1124382 and BCS-1528851 to William B. Swann, Jr.], an Advanced Grant (694986) from the European Research Council to Michael Buhrmester, and a grant (RTI2018-093550-B-I00) from the Ministerio de Ciencia, Innovación y Universidades to Angel Gomez. The funders played no role in the study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the article for publication.
We thank Elliot Tucker-Drob and Greg Hixon for their help with the data analysis.
All study materials and data used in this research have been made publicly available and can be accessed at https://osf.io/4jtwk/?view_only=10627a9892464e5aa90fe92360b846ad . The design, methods, and analysis plan of Studies 2 and 3 were pre-registered, and these can be viewed at https://osf.io/2jvau?view_only=754165d77cbe4e69baf6b11740b1a422 and https://osf.io/x3w7h/?view_only=a25d722f3a03405e9e4f074a622b10b4 respectively.
☆This paper has been recommended for acceptance by Ashwini Ashokkumar
1 Selective censorship can occur as a result of two processes: greater censoring of cause-incongruent content and/or less censoring of cause-congruent content. We did not have an a priori hypothesis regarding which of these selective censoring processes fusion would amplify.
2 Note that the data were collected before reports of a drop in the quality of the MTurk participant pool surfaced in 2018 ( TurkPrime, 2018 ).
3 When designing the Study 1 materials, we did not ensure that the three types of comments (i.e., pro-choice, pro-life, and irrelevant comments) were equally offensive. For example, the post-hoc offensiveness ratings suggest that the pro-life comments may have been generally less offensive than the pro-choice and irrelevant comments. For this reason, the estimates of censoring obtained in Study 2, in which we systematically varied offensiveness a priori, are more trustworthy.
4 In Studies 2 and 3, we excluded participants who responded to fewer than 50% of the comments because their censoring rates are likely to be inaccurate estimates. Note that this exclusion criterion was not pre-registered. In both studies, including these participants did not alter our findings.
Appendix A Supplementary data to this article can be found online at https://doi.org/10.1016/j.jesp.2020.104031 .
Supplementary material
Figure 15.3
Attempts to censor material, such as banning books, typically attract a great deal of controversy and debate.
Timberland Regional Library – Banned Books Display At The Lacey Library – CC BY-NC-ND 2.0.
To fully understand the issues of censorship and freedom of speech and how they apply to modern media, we must first explore the terms themselves. Censorship is defined as suppressing or removing anything deemed objectionable. A common, everyday example can be found on the radio or television, where potentially offensive words are “bleeped” out. More controversial is censorship at a political or religious level. If you’ve ever been banned from reading a book in school, or watched a “clean” version of a movie on an airplane, you’ve experienced censorship.
Much as media legislation can be controversial due to First Amendment protections, censorship in the media is often hotly debated. The First Amendment states that “Congress shall make no law…abridging the freedom of speech, or of the press (Case Summaries).” Under this definition, the term “speech” extends to a broader sense of “expression,” meaning verbal, nonverbal, visual, or symbolic expression. Historically, many individuals have cited the First Amendment when protesting FCC decisions to censor certain media products or programs. However, what many people do not realize is that U.S. law establishes several exceptions to free speech, including defamation, hate speech, breach of the peace, incitement to crime, sedition, and obscenity.
To comply with U.S. law, the FCC prohibits broadcasters from airing obscene programming. The FCC decides whether or not material is obscene by using a three-prong test.
Obscene material:

- An average person, applying contemporary community standards, must find that the material, as a whole, appeals to the prurient interest;
- The material must depict or describe, in a patently offensive way, sexual conduct specifically defined by applicable law; and
- The material, taken as a whole, must lack serious literary, artistic, political, or scientific value.
Material meeting all of these criteria is officially considered obscene and usually applies to hard-core pornography (Federal Communications Commission). “Indecent” material, on the other hand, is protected by the First Amendment and cannot be banned entirely.
Indecent material:

- Depicts or describes sexual or excretory organs or activities in terms patently offensive as measured by contemporary community standards for the broadcast medium, but does not rise to the level of obscenity under the three-prong test.
Material deemed indecent cannot be broadcast between the hours of 6 a.m. and 10 p.m., to make it less likely that children will be exposed to it (Federal Communications Commission).
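The time restriction above amounts to a "safe harbor" window of 10 p.m. to 6 a.m. during which indecent (but never obscene) material may air. A minimal sketch of that scheduling rule, with illustrative function names that are not part of any FCC system:

```python
def in_safe_harbor(hour: int) -> bool:
    """Return True if a broadcast starting at `hour` (0-23) falls in the
    10 p.m.-6 a.m. window where indecent material may legally air."""
    return hour >= 22 or hour < 6

def may_air(content_class: str, hour: int) -> bool:
    """Obscene material may never air; indecent material only in the
    safe-harbor window; other programming is unrestricted."""
    if content_class == "obscene":
        return False
    if content_class == "indecent":
        return in_safe_harbor(hour)
    return True

print(may_air("indecent", 23))  # True: 11 p.m. is inside the safe harbor
print(may_air("indecent", 12))  # False: midday is restricted
```

The asymmetry in the code mirrors the legal one: obscenity is banned outright, while indecency is merely time-shifted away from hours when children are likely watching.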
These classifications symbolize the media’s long struggle with what is considered appropriate and inappropriate material. Despite the existence of the guidelines, however, the process of categorizing materials is a long and arduous one.
There is a formalized process for deciding what material falls into which category. First, the FCC relies on television audiences to alert the agency of potentially controversial material that may require classification. The commission asks the public to file a complaint via letter, e-mail, fax, telephone, or the agency’s website, including the station, the community, and the date and time of the broadcast. The complaint should “contain enough detail about the material broadcast that the FCC can understand the exact words and language used (Federal Communications Commission).” Citizens are also allowed to submit tapes or transcripts of the aired material. Upon receiving a complaint, the FCC logs it in a database, which a staff member then accesses to perform an initial review. If necessary, the agency may contact either the station licensee or the individual who filed the complaint for further information.
Once the FCC has conducted a thorough investigation, it determines a final classification for the material. In the case of profane or indecent material, the agency may take further actions, including possibly fining the network or station (Federal Communications Commission). If the material is classified as obscene, the FCC will instead refer the matter to the U.S. Department of Justice, which has the authority to criminally prosecute the media outlet. If convicted in court, violators can be subject to criminal fines and/or imprisonment (Federal Communications Commission).
Each year, the FCC receives thousands of complaints regarding obscene, indecent, or profane programming. While the agency ultimately deems most of the cited programs appropriate, many complaints require in-depth investigation and may result in proposed fines, known as notices of apparent liability (NALs), or referral for federal investigation.
Table 15.1 FCC Indecency Complaints and NALs: 2000–2005
| Year | Total Complaints Received | Radio Programs Complained About | Over-the-Air Television Programs Complained About | Cable Programs Complained About | Total Radio NALs | Total Television NALs | Total Cable NALs |
|---|---|---|---|---|---|---|---|
| 2000 | 111 | 85 | 25 | 1 | 7 | 0 | 0 |
| 2001 | 346 | 113 | 33 | 6 | 6 | 1 | 0 |
| 2002 | 13,922 | 185 | 166 | 38 | 7 | 0 | 0 |
| 2003 | 166,683 | 122 | 217 | 36 | 3 | 0 | 0 |
| 2004 | 1,405,419 | 145 | 140 | 29 | 9 | 3 | 0 |
| 2005 | 233,531 | 488 | 707 | 355 | 0 | 0 | 0 |
Although old black-and-white movies are popularly remembered as tame or sanitized, many early filmmakers filled their movies with sexual or violent content. Edwin S. Porter’s 1903 silent film The Great Train Robbery , for example, is known for expressing “the appealing, deeply embedded nature of violence in the frontier experience and the American civilizing process,” and showcases “the rather spontaneous way that the attendant violence appears in the earliest developments of cinema (Film Reference).” The film ends with an image of a gunman firing a revolver directly at the camera, demonstrating that cinema’s fascination with violence was present even 100 years ago.
Porter was not the only U.S. filmmaker working during the early years of cinema to employ graphic violence. Films such as Intolerance (1916) and The Birth of a Nation (1915) are notorious for their overt portrayals of violent activities. The director of both films, D. W. Griffith, intentionally portrayed content graphically because he “believed that the portrayal of violence must be uncompromised to show its consequences for humanity (Film Reference).”
Although audiences responded eagerly to the new medium of film, some naysayers believed that Hollywood films and their associated hedonistic culture were a negative moral influence. As you read in Chapter 8 “Movies” , this changed during the 1930s with the implementation of the Hays Code. Formally termed the Motion Picture Production Code of 1930, the code is popularly known by the name of its author, Will Hays, the chairman of the industry’s self-regulatory Motion Picture Producers and Distributors Association (MPPDA), which was founded in 1922 to “police all in-house productions (Film Reference).” Created to forestall what was perceived to be looming governmental control over the industry, the Hays Code was, essentially, Hollywood self-censorship. The code displayed the motion picture industry’s commitment to the public, stating:
Motion picture producers recognize the high trust and confidence which have been placed in them by the people of the world and which have made motion pictures a universal form of entertainment…. Hence, though regarding motion pictures primarily as entertainment without any explicit purposes of teaching or propaganda, they know that the motion picture within its own field of entertainment may be directly responsible for spiritual or moral progress, for higher types of social life, and for much correct thinking (Arts Reformation).
Among other requirements, the Hays Code enacted strict guidelines on the portrayal of violence. Crimes such as murder, theft, robbery, safecracking, and “dynamiting of trains, mines, buildings, etc.” could not be presented in detail (Arts Reformation). The code also addressed the portrayals of sex, saying that “the sanctity of the institution of marriage and the home shall be upheld. Pictures shall not infer that low forms of sex relationship are the accepted or common thing (Arts Reformation).”
Figure 15.4
As the chairman of the Motion Picture Producers and Distributors Association, Will Hays oversaw the creation of the industry’s self-censoring Hays Code.
Wikimedia Commons – public domain.
As television grew in popularity during the mid-1900s, the strict code placed on the film industry spread to other forms of visual media. Many early sitcoms, for example, showed married couples sleeping in separate twin beds to avoid suggesting sexual relations.
By the end of the 1940s, the MPPDA had begun to relax the rigid regulations of the Hays Code. Propelled by the changing moral standards of the 1950s and 1960s, this relaxation led to a gradual reintroduction of violence and sex into mass media.
As filmmakers began pushing the boundaries of acceptable visual content, the Hollywood studio industry scrambled to create a system to ensure appropriate audiences for films. In 1968, the successor of the MPPDA, the Motion Picture Association of America (MPAA), established the familiar film ratings system to help alert potential audiences to the type of content they could expect from a production.
Although the ratings system changed slightly in its early years, by 1972 it seemed that the MPAA had settled on its ratings. These ratings consisted of G (general audiences), PG (parental guidance suggested), R (restricted to ages 17 or up unless accompanied by a parent), and X (completely restricted to ages 17 and up). The system worked until 1984, when several major battles took place over controversial material. During that year, the highly popular films Indiana Jones and the Temple of Doom and Gremlins both premiered with a PG rating. Both films—and subsequently the MPAA—received criticism for the explicit violence presented on screen, which many viewers considered too intense for the relatively mild PG rating. In response to the complaints, the MPAA introduced the PG-13 rating to indicate that some material may be inappropriate for children under the age of 13.
Another change came to the ratings system in 1990, with the introduction of the NC-17 rating. Carrying the same restrictions as the existing X rating, the new designation came at the behest of the film industry to distinguish mature films from pornographic ones. Despite the arguably milder format of the rating’s name, many filmmakers find it too strict in practice; receiving an NC-17 rating often leads to a lack of promotion or distribution because numerous movie theaters and rental outlets refuse to carry films with this rating.
Regardless of these criticisms, most audience members find the rating system helpful, particularly when determining what is appropriate for children. The adoption of industry ratings for television programs and video games reflects the success of the film ratings system. During the 1990s, for example, the broadcasting industry introduced a voluntary rating system not unlike that used for films to accompany all TV shows. These ratings are displayed on screen during the first 15 seconds of a program and include TV-Y (all children), TV-Y7 (children ages 7 and up), TV-Y7-FV (older children—fantasy violence), TV-G (general audience), TV-PG (parental guidance suggested), TV-14 (parents strongly cautioned), and TV-MA (mature audiences only).
Table 15.2 Television Ratings System
| Rating | Meaning | Examples of Programs |
|---|---|---|
| TV-Y | Appropriate for all children | |
| TV-Y7 | Designed for children ages 7 and up | |
| TV-Y7-FV | Directed toward older children; includes depictions of fantasy violence | |
| TV-G | Suitable for general audiences; contains little or no violence, no strong language, and little or no sexual material | |
| TV-PG | Parental guidance suggested | |
| TV-14 | Parents strongly cautioned; contains suggestive dialogue, strong language, and sexual or violent situations | |
| TV-MA | Mature audiences only | |
Source: http://www.tvguidelines.org/ratings.htm
At about the same time that television ratings appeared, the Entertainment Software Rating Board was established to provide ratings on video games. Video game ratings include EC (early childhood), E (everyone), E 10+ (ages 10 and older), T (teen), M (mature), and AO (adults only).
Table 15.3 Video Game Ratings System
| Rating | Meaning | Examples of Games |
|---|---|---|
| EC | Designed for early childhood, children ages 3 and older | |
| E | Suitable for everyone over the age of 6; contains minimal fantasy violence and mild language | |
| E 10+ | Appropriate for ages 10 and older; may contain more violence and/or slightly suggestive themes | |
| T | Content is appropriate for teens (ages 13 and older); may contain violence, crude humor, sexually suggestive themes, use of strong language, and/or simulated gambling | |
| M | Mature content for ages 17 and older; includes intense violence and/or sexual content | |
| AO | Adults (18+) only; contains graphic sexual content and/or prolonged violence | |
Source: http://www.esrb.org/ratings/ratings_guide.jsp
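Both ratings schemes above are essentially ordered scales, which is what makes them usable by parental-control filters: a household sets a ceiling, and anything rated above it is blocked. A minimal sketch of that comparison using the ESRB tiers from the table (the function names are illustrative, not from any real parental-control API):

```python
# ESRB tiers from least to most restrictive, per the table above.
ESRB_ORDER = ["EC", "E", "E 10+", "T", "M", "AO"]

def allowed(game_rating: str, max_rating: str) -> bool:
    """Return True if `game_rating` is at or below the household's
    chosen ceiling `max_rating` on the ordered ESRB scale."""
    return ESRB_ORDER.index(game_rating) <= ESRB_ORDER.index(max_rating)

print(allowed("T", "M"))   # True: Teen content is below a Mature ceiling
print(allowed("AO", "T"))  # False: Adults Only exceeds a Teen ceiling
```

The same index-comparison idea applies to the MPAA and TV Parental Guidelines scales; only the list of tier names changes.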
Even with these ratings, the video game industry has long endured criticism over violence and sex in video games. One of the top-selling video game series in the world, Grand Theft Auto , is highly controversial because players have the option to solicit prostitution or murder civilians (Media Awareness). In 2010, a report claimed that “38 percent of the female characters in video games are scantily clad, 23 percent baring breasts or cleavage, 31 percent exposing thighs, another 31 percent exposing stomachs or midriffs, and 15 percent baring their behinds (Media Awareness).” Despite multiple lawsuits, some video game creators stand by their decisions to place graphic displays of violence and sex in their games on the grounds of freedom of speech.
Look over the MPAA’s explanation of each film rating online at http://www.mpaa.org/ratings/what-each-rating-means . View a film with these requirements in mind and think about how the rating was selected. Then answer the following short-answer questions. Each response should be a minimum of one paragraph.
Arts Reformation, “The Motion Picture Production Code of 1930 (Hays Code),” ArtsReformation, http://www.artsreformation.com/a001/hays-code.html .
Case Summaries, “First Amendment—Religion and Expression,” http://caselaw.lp.findlaw.com/data/constitution/amendment01/ .
Federal Communications Commission, “Obscenity, Indecency & Profanity: Frequently Asked Questions,” http://www.fcc.gov/eb/oip/FAQ.html .
Film Reference, “Violence,” Film Reference, http://www.filmreference.com/encyclopedia/Romantic-Comedy-Yugoslavia/Violence-BEGINNINGS.html .
Media Awareness, Media Issues, “Sex and Relationships in the Media,” http://www.media-awareness.ca/english/issues/stereotyping/women_and_girls/women_sex.cfm .
Media Awareness, Media Issues, “Violence in Media Entertainment,” http://www.media-awareness.ca/english/issues/violence/violence_entertainment.cfm .
Understanding Media and Culture Copyright © 2016 by University of Minnesota is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.
Not long after the first film "talkies" gave artists the power to show audiences audiovisual recordings of real, flesh-and-blood human behavior, television began to broadcast these kinds of recordings on publicly-owned airwaves. Naturally, the U.S. government has had a great deal to say about what the content of these recordings ought to be.
Under the auspices of the Communications Act of 1934, Congress creates the Federal Communications Commission (FCC) to oversee private use of publicly owned broadcast frequencies. While these early regulations primarily apply to radio, they will later form the basis of federal television indecency regulation.
First televised trial. Oklahoma's WKY-TV televises clips from the murder trial of teen cop killer Billy Eugene Manley, who is ultimately convicted of manslaughter and sentenced to 65 years in prison. Prior to 1953, courtrooms were off-limits to television cameras.
Elvis Presley appears twice on The Ed Sullivan Show , and—contrary to the urban legend—his scandalous hip gyrations aren't censored in any way. It isn't until his January 1957 appearance that CBS censors crop out his lower body and film him from the waist up.
ABC broadcasts the miniseries Roots , one of the highest-rated programs in television history and among the first to include uncensored frontal nudity. The FCC does not object. Later television miniseries, most notably Gauguin the Savage (1980) and Lonesome Dove (1989), will also feature frontal nudity without incident.
In FCC v. Pacifica (1978), the U.S. Supreme Court formally acknowledges the FCC's authority to restrict broadcast content deemed "indecent." Although the case deals with a George Carlin radio routine, the Court's ruling provides a rationale for later television broadcast censorship. Justice John Paul Stevens writes for the majority, explaining why broadcast media do not receive the same level of First Amendment protection as print media:
First, the broadcast media have established a uniquely pervasive presence in the lives of all Americans. Patently offensive, indecent material presented over the airwaves confronts the citizen, not only in public, but also in the privacy of the home, where the individual's right to be left alone plainly outweighs the First Amendment rights of an intruder. Because the broadcast audience is constantly tuning in and out, prior warnings cannot completely protect the listener or viewer from unexpected program content. To say that one may avoid further offense by turning off the radio when he hears indecent language is like saying that the remedy for an assault is to run away after the first blow. One may hang up on an indecent phone call, but that option does not give the caller a constitutional immunity or avoid a harm that has already taken place. Second, broadcasting is uniquely accessible to children, even those too young to read. Although Cohen's written message might have been incomprehensible to a first grader, Pacifica's broadcast could have enlarged a child's vocabulary in an instant. Other forms of offensive expression may be withheld from the young without restricting the expression at its source.
It is worth noting that the Court's majority in Pacifica is a narrow 5-4, and that many legal scholars still believe that the FCC's purported authority to regulate indecent broadcast content violates the First Amendment.
The Parents Television Council (PTC) is founded to encourage government control over television content. Of particular offense to the PTC are television programs that portray lesbian and gay couples in a positive light.
NBC broadcasts Schindler's List unedited. Despite the film's violence, nudity, and profanity, the FCC does not object.
Shortly after the inauguration of President George W. Bush, the FCC issues a $21,000 fine to WKAQ-TV for airing a series of bawdy television comedy skits . It is the first FCC television indecency fine in U.S. history.
Several performers, most notably Bono, utter fleeting expletives during the Golden Globe Awards. President George W. Bush's aggressive new FCC board takes action against NBC—no fine, but an ominous warning :
There should be no doubt, my strong preference here would have been to assess a fine against the licensees in this case. Despite this preference, as a legal matter, today's action can be said to represent a departure from a previous line of cases issued before I joined the Commission ... Our action today also represents a fresh, new approach to enforcing our statutory responsibility with respect to profane broadcasts. Regardless of my personal view, in such instances, licensees should have fair notice that the use of this language in a setting such as this would be found actionably indecent and profane. Given the delicate authority the courts have permitted us under the First Amendment to enforce the indecency laws, the Commission must exercise care in affording licensees firm yet fair treatment. Nonetheless, it should be abundantly clear from today's action that we are setting a clear line to broadcast indecency and profanity to which all licensees should adhere and which from now on will result in forfeitures and other enforcement sanctions.
Given the political climate and the obvious need the Bush administration had to appear tough on indecency, broadcasters had reason to wonder whether the new FCC chairman, Michael Powell, was bluffing. They soon learned that he wasn't.
Janet Jackson's right breast is partially exposed for less than one second during a "wardrobe malfunction" at the 2004 Super Bowl Halftime Show, prompting the FCC's largest fine in history: a record $550,000 against CBS. The FCC fine creates a chilling effect as broadcasters, no longer able to predict the FCC's behavior, scale back live broadcasts and other controversial material. NBC, for example, ends its annual Veterans Day broadcast of Saving Private Ryan . In November 2011, the U.S. 3rd Circuit Court of Appeals strikes down the fine on the basis that the FCC "arbitrarily and capriciously departed from its prior policy excepting fleeting broadcast material."
“Censorship over the internet can potentially achieve unprecedented scale.” – Sheharbano Khattak
For all the controversy it caused, Fitna is not a great film. The 17-minute short, by the Dutch far-right politician Geert Wilders, was a way for him to express his opinion that Islam is an inherently violent religion. Understandably, the rest of the world did not see things the same way. In advance of its release in 2008, the film received widespread condemnation, especially within the Muslim community.
When a trailer for Fitna was released on YouTube, authorities in Pakistan demanded that it be removed from the site. YouTube offered to block the video in Pakistan, but would not agree to remove it entirely. When YouTube relayed this decision back to the Pakistan Telecommunications Authority (PTA), the decision was made to block YouTube.
Although Pakistan has been intermittently blocking content since 2006, a more persistent blocking policy was implemented in 2011, when porn content was censored in response to a media report that highlighted Pakistan as the top country in terms of searches for porn. Then, in 2012, YouTube was blocked for three years when a video, deemed blasphemous, appeared on the website. Only in January 2016 was the ban lifted, when Google, which owns YouTube, launched a Pakistan-specific version, and introduced a process by which governments can request the blocking of access to offending material.
All of this raises the thorny issue of censorship. Those censoring might raise objections to material on the basis of offensiveness or incitement to violence (more than a dozen people died in Pakistan following widespread protests over the video uploaded to YouTube in 2012). But when users aren’t able to access a particular site, they often don’t know whether it’s because the site is down, or if some force is preventing them from accessing it. How can users know what is being censored and why?
“The goal of a censor is to disrupt the flow of information,” says Sheharbano Khattak, a PhD student in Cambridge’s Computer Laboratory, who studies internet censorship and its effects. “Internet censorship threatens free and open access to information. There’s no code of conduct when it comes to censorship: those doing the censoring – usually governments – aren’t in the habit of revealing what they’re blocking access to.” The goal of her research is to make the hidden visible.
She explains that we haven’t got a clear understanding of the consequences of censorship: how it affects different stakeholders, the steps those stakeholders take in response to censorship, how effective an act of censorship is, and what kind of collateral damage it causes.
Because censorship operates in an inherently adversarial environment, gathering relevant datasets is difficult. Much of the key information, such as what was censored and how, is missing. In her research, Khattak has developed methodologies that enable her to monitor censorship by characterising what normal data looks like and flagging anomalies within the data that are indicative of censorship.
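The core idea just described — characterise what normal data looks like, then flag deviations — can be illustrated with a toy example. This is a hedged sketch of the general anomaly-detection approach, not Khattak's actual methodology; the data, threshold, and function name are all illustrative:

```python
from statistics import mean, stdev

def flag_anomalies(baseline, observed, z_threshold=3.0):
    """Flag observed failure rates sitting more than `z_threshold`
    standard deviations above the mean of the baseline period."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [x for x in observed if sigma and (x - mu) / sigma > z_threshold]

# Baseline daily request-failure rates for a site when reachable normally.
baseline = [0.01, 0.02, 0.015, 0.01, 0.02, 0.018, 0.012]

# A sudden jump to 90% failures is the kind of anomaly that would
# warrant closer inspection as a possible censorship event.
print(flag_anomalies(baseline, [0.02, 0.9]))  # [0.9]
```

Real measurement platforms face the harder problems the text mentions: distinguishing censorship from ordinary outages, and doing so when the censor actively hides its interference.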
She designs experiments to measure various aspects of censorship, to detect censorship in actively and passively collected data, and to measure how censorship affects various players.
The primary reasons for government-mandated censorship are political, religious or cultural. A censor might take a range of steps to stop the publication of information, to prevent access to that information by disrupting the link between the user and the publisher, or to directly prevent users from accessing that information. But the key point is to stop that information from being disseminated.
Internet censorship takes two main forms: user-side and publisher-side. In user-side censorship, the censor disrupts the link between the user and the publisher. The interruption can be made at various points in the process between a user typing an address into their browser and being served a site on their screen. Users may see a variety of different error messages, depending on what the censor wants them to know.
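Because the interruption can occur at different points in that chain, the failure a user observes carries a rough fingerprint of where the censor intervened. A hedged sketch of how a measurement tool might map observations to likely techniques — the categories and mapping are illustrative simplifications, not an exhaustive taxonomy:

```python
def classify_failure(dns_answer, tcp_connected, http_status, body_snippet=""):
    """Guess the user-side censorship technique from an observed failure.

    dns_answer:    resolved IP string, or None if resolution failed
    tcp_connected: whether a TCP connection to the server succeeded
    http_status:   HTTP status code received, or None
    """
    if dns_answer is None:
        return "DNS blocking (no or false resolution)"
    if not tcp_connected:
        return "IP/TCP blocking (connection reset or dropped)"
    if http_status in (403, 451) or "blocked" in body_snippet.lower():
        return "HTTP-level blocking (block page or legal notice)"
    return "no censorship signature observed"

print(classify_failure(None, False, None))     # DNS blocking
print(classify_failure("1.2.3.4", True, 451))  # HTTP-level blocking
```

In practice these signatures overlap with ordinary network faults, which is exactly why the baseline-versus-anomaly comparisons described earlier are needed.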
“The thing is, even in countries like Saudi Arabia, where the government tells people that certain content is censored, how can we be sure of everything they’re stopping their citizens from being able to access?” asks Khattak. “When a government has the power to block access to large parts of the internet, how can we be sure that they’re not blocking more than they’re letting on?”
What Khattak does is characterise the demand for blocked content and try to work out where it goes. In the case of the blocking of YouTube in 2012 in Pakistan, a lot of the demand went to rival video sites like Daily Motion. But in the case of pornographic material, which is also heavily censored in Pakistan, the government censors didn’t have a comprehensive list of sites that were blacklisted, so plenty of pornographic content slipped through the censors’ nets.
Despite any government’s best efforts, there will always be individuals and publishers who can get around censors and access or publish blocked content through the use of censorship resistance systems. A desirable property of any censorship resistance system is that its users are not traceable, but usually users must combine such systems with anonymity services such as Tor.
“It’s like an arms race, because the technology which is used to retrieve and disseminate information is constantly evolving,” says Khattak. “We now have social media sites which have loads of user-generated content, so it’s very difficult for a censor to retain control of this information because there’s so much of it. And because this content is hosted by sites like Google or Twitter that integrate a plethora of services, wholesale blocking of these websites is not an option most censors might be willing to consider.”
In addition to traditional censorship, Khattak also highlights a new kind of censorship – publisher-side censorship – where websites refuse to offer services to a certain class of users. Specifically, she looks at the differential treatments of Tor users by some parts of the web. The issue with services like Tor is that visitors to a website are anonymised, so the owner of the website doesn’t know where their visitors are coming from. There is increasing use of publisher-side censorship from site owners who want to block users of Tor or other anonymising systems.
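Mechanically, publisher-side discrimination against Tor is straightforward: a site operator checks each visitor's IP address against the published list of Tor exit addresses and serves a refusal. The sketch below shows the pattern with a hard-coded, hypothetical snapshot of exit IPs; a real deployment would periodically fetch the Tor Project's bulk exit list rather than embed addresses.

```python
# Minimal sketch of publisher-side blocking of Tor users (illustrative).
# TOR_EXIT_IPS is a hypothetical snapshot; real sites refresh this set
# from the Tor Project's published exit-address list.

TOR_EXIT_IPS = {"185.220.101.1", "171.25.193.77"}

def handle_request(client_ip):
    """Return a (status, body) pair: 403 for known Tor exits, 200 otherwise."""
    if client_ip in TOR_EXIT_IPS:
        return 403, "Access denied to anonymised visitors"
    return 200, "Welcome"

print(handle_request("185.220.101.1")[0])  # 403
```

Measurement studies of this phenomenon essentially run the check in reverse: they request the same page from Tor exits and from ordinary vantage points and compare the responses to detect differential treatment.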
“Censorship is not a new thing,” says Khattak. “Those in power have used censorship to suppress speech or writings deemed objectionable for as long as human discourse has existed. However, censorship over the internet can potentially achieve unprecedented scale, while possibly remaining discreet, so that users are not even aware that they are being subjected to censored information.”
Professor Jon Crowcroft, with whom Khattak works, agrees: “It’s often said that, online, we live in an echo chamber, where we hear only things we agree with. This is a side of the filter bubble that has its flaws, but it is of our own choosing. The darker side is when someone else gets to determine what we see, despite our interests. This is why internet censorship is so concerning.”
“While the cat and mouse game between the censors and their opponents will probably always exist,” says Khattak, “I hope that studies such as mine will illuminate and bring more transparency to this opaque and complex subject, and inform policy around the legality and ethics of such practices.”
23 Pages Posted: 5 Dec 2020
Sidharth Law College
Date Written: November 21, 2020
India's entertainment industry not only generates substantial income but also produces vast amounts of material: films, TV shows, web series, songs, videos, and more. With technological progress and the growing participation of individuals in the cyber world, many people now use streaming channels such as Hotstar, Zee5, SonyLiv, and Prime. In this paper, we examine the current censorship laws applicable to OTT platforms in India and offer the necessary recommendations. Censorship remains primarily an instrument of state interference, established and controlled within the parameters of the law. The task of the state is to govern by enacting and enforcing public policy, and in a democracy, public policy development is closely linked to the fulfilment of citizens' needs. In recent years, the media and entertainment industry has seen a paradigm shift in the volume of, and demand for, diversified content across platforms. The industry comprises several divisions that merge into verticals (Movies, Television, Music, Publishing, Radio, Internet, Advertisement and Gaming), leaving viewers to select how they access content. Each segment drives trends that differ by sub-vertical, geography and customer need, making each vertical distinctive even as these sub-verticals compete, complement and merge to meet the ever-growing worldwide demand for entertainment and information. The media and entertainment industry aims to reach the organisational quality and benchmarks of best-in-class organisations in other industries, and the major changes concern how analysis, budget determination, content development, and the coupling of distribution management with competent project management are conducted.
Keywords: Censorship, Policies, OTT, Platform, Censor laws, Regulations
JEL Classification: K2, K49, K40, K29, K20, K19, K30
Sidharth Law College, UPES, Dehradun, Uttarakhand, India