Legal Regime and Communications Decency Act Concerning Speech


Facebook, Twitter, and YouTube have in recent times become the dominant platforms for sharing content online. These platforms have enabled people across the world to connect with each other better and, arguably, to form and maintain bonds with people who would otherwise have been inaccessible. Users have been able to build a distinct kind of relationship with celebrities, political leaders, academics, and people from under-developed countries and regions. In June 2018, Shabbir Hussain lost a bag containing essential documents while traveling. He posted a video explaining that he needed to recover the bag because it contained his father's medical records, without which his father could not receive medical treatment. The video went viral on social media, and even the Director General of Inter-Services Public Relations (DG ISPR) shared it. The video received almost 4,000 retweets,1 and as a result, Shabbir found his bag. Three points bear emphasis here: Shabbir could not have reached so many people for help had there been no social media platforms; his plea gained much greater force after DG ISPR retweeted his post; and a single retweet from DG ISPR mobilised people to come to his aid. The thesis of this paper is captured in that very tweet. DG ISPR wrote, "Yes, this is the power of the Media… responsible and purposeful utility. That's the requirement."2 Given that social media platforms have attained such enormous reach and power that they can start revolutions, it is imperative that they be used responsibly and purposefully. This paper briefly discusses the current legal regime surrounding digital intermediaries, including the statutory protection granted to them and their voluntary efforts to self-regulate their platforms; the challenges the state faces in granting such statutory protection to intermediaries; the challenges these intermediaries face in defining free speech and in formulating and enforcing policies that maintain a secure and enabling environment for users; and the political pressure on them to conform those policies to competing ideals of free speech.

Legal Regime and Communications Decency Act Concerning Speech

Section 230 of the Communications Decency Act 1996 ("CDA") grants online websites and intermediaries immunity against a range of laws that might otherwise be used to hold them legally responsible for third-party speech or actions.3 These intermediaries include regular Internet Service Providers (ISPs) as well as interactive computer service providers that publish third-party content. Under this immunity, Facebook, for instance, cannot be held liable for hosting speech that may legally be deemed libelous.4 This legal safeguard allows innovation and free speech to flourish in the digital sphere.

The purpose of granting this immunity was two-pronged: to encourage platforms to be proactive in removing offensive content, much as one would regulate the physical realm while giving maximum room to free speech; and to avoid the free speech problems of multi-tiered censorship, whether exercised through organizations' internal setups or through external institutional mechanisms.

In relation to internal content proofing, Facebook has devised a full-fledged content moderation unit whose aim is to draft objective standards (Community Standards) to regulate the use of speech. The standards include, but are not limited to, Violence and Criminal Behaviour, Dangerous Individuals and Organizations, and Objectionable Content.5 Violating any of these standards can carry varying consequences, depending on the severity of the violation and the person's history on the platform.6 These standards, despite being comprehensive and tangible, have attracted controversy for their lack of transparency: if Facebook holds the ultimate power to block or moderate content, how is Facebook to be checked when it acts arbitrarily? Facebook, however, maintains that it moderates content on the basis of its artificial intelligence and of reports from people who find content to be in violation of its Community Standards.7 It also acknowledges that this enforcement is not fully effective, and that it aims to moderate speech on the basis of objective standards rather than organizational whims.

1. Web Desk, 'Social media power: Skardu resident's lost bag found, delivered to him' (Pakistan Today, 11 June 2018) <https://www.pakistantoday.com.pk/2018/06/11/social-media-power-skardu-residences-lost-bag-found-delivered-to-him/> accessed 12 November 2020.
2. ibid.
3. Communications Decency Act 1996, s 230.
4. ibid.
5. 'Community Standards' (Facebook, 2020) <https://www.facebook.com/communitystandards/introduction> accessed 12 November 2020.
6. ibid.

Intermediary Immunity: A Challenge for the State

The United States' First Amendment does not apply to these private entities because of the state action doctrine.8 It remains, however, a bar on the actions of the state. In Packingham v. North Carolina,9 the Supreme Court struck down a state-imposed bar on sex offenders' use of social media. The Court reasoned that since these platforms are the new tools of public interaction, the state is in greater need than ever of robust laws that apply the First Amendment in its true spirit.10 This creates a paradox: intermediary immunity grants autonomy to online platforms while, simultaneously, a convicted sex offender is allowed to regain the very access that enabled the offence in the first place. The concern that arises is whom the state should protect while keeping the digital arena essentially free in the spirit of the First Amendment: the intermediary, the consumer (the sex offender in this case), or the public good of security against sexual mistreatment.

In this regard, the author finds some force in Justice Alito's concurring opinion. While holding the North Carolina law unconstitutional on the ground that it also covered websites unlikely to endanger potential minor victims, Justice Alito disagreed with the majority's suggestion that, merely because these public forums perform many other functions, the state cannot limit sex offenders' access to them. He argued that these platforms allow sexual offenders to connect with possible victims in ways that would otherwise not have been possible.11 A similar logic can be seen when a cricketer found to have committed spot-fixing is handed a temporary or permanent ban from the team: the other purposes and functions of cricket are, in that case, considered irrelevant. Even in social settings, people hesitate to readmit to their premises a person who has previously behaved inappropriately there. The state, in this case, has a 'weighty' interest in protecting other potential victims, and can restrict sex offenders' access to that extent.

The conclusion in Packingham would have been more interesting had the first offence been committed on a social media platform and had the specific outlet neglected or refused to restrict the offender's access, given that, pursuant to section 230, no liability can arise from its failure to do so. Absent any effort by these Big Techs to regulate themselves, it remains a question whether the majority would have maintained its decision and upheld Packingham's First Amendment right.

The Contemporary Sphere of Censorship and Ideal Censorship

The moderation of hate speech remains a challenge for Big Techs because detecting whether speech is hateful in its distinct context requires deeply complex algorithms. At one end, these platforms face the difficulty of identifying hateful speech; at the other, they face political pressure for either taking too much content down or leaving too much dangerous content up.12 This raises the question whether they should be held responsible for the distribution of such speech. Since 2016, a code of conduct has been in place between the European Commission and social media platforms such as Twitter and Facebook, under which the platforms adopt mechanisms to remove content deemed hate speech within 24 hours.13 Despite such resolve, the whole process of defining hateful content and removing it is harshly criticized. It is as difficult for these Big Techs as for the state itself to determine where to draw the line between free speech and its limitation. The 2015 publication of allegedly blasphemous caricatures in France instigated an uprising against such brazen abuse of multiculturalism. Social media platforms were inundated with digital mobilization campaigns, including 'I am Charlie',14 while Muslims decried this exercise of free speech on digital platforms as an insult to their religion. The French state, however, allowed the cartoon agency to function quite freely and did not budge from its refusal to moderate content on online platforms, even content that may have catalyzed potential violence in France.

7. Monika Bickert, 'Publishing Our Internal Enforcement Guidelines and Expanding Our Appeals Process' (Facebook, 24 April 2018) <https://about.fb.com/news/2018/04/comprehensive-community-standards/> accessed 12 November 2020.
8. Young v. Facebook, Inc., 790 F. Supp. 2d 1110.
9. Packingham v. North Carolina, 137 S. Ct. 1730, 1737 (2017).
10. Packingham (n 9).
11. ibid.

Situations like these pose a challenge for Big Techs: to strike a reasonable balance in regulating online speech that maintains both free speech and the multicultural ethics of democracy.

Although the law immunizes these platforms from any liability that may arise from their consumers' personal use, the platforms acknowledge their responsibility for varied reasons, including the need to avoid offline acts of violence connected to speech made on their platforms. There is, however, dissatisfaction with the enforcement of these policies and guidelines, and with how correctly platforms identify whether speech is made to incite violence. Chairman Wicker raised such a concern before the Senate Committee, highlighting that Ayatollah Ali Khamenei15 had posted tweets calling for jihad and the elimination of the Zionist regime, and that these tweets were neither flagged nor taken down.16 Jack Dorsey, however, maintained that Twitter always deletes tweets which, upon users' reports, it finds to violate its policies; in this case, it considered the tweets 'saber-rattling' and therefore not in strict violation of its terms of service.17 This makes it evident that the whole question of ideal speech and of regulating intermediaries and speakers is closely tied to a democratic disposition toward expression and disagreement in the public arena.

In the contemporary world, because of varying definitions of free speech, censorship seems to have become the order of the day, and is considered a tool to replenish freedom of speech rather than to curb it. There, however, remains a conflict between the distinct ideals of speech held by states and the struggle of these platforms to secure their users' right to free speech. The platforms, more often than not, resist requests for collateral censorship, or for the disclosure or handover of information, in order to maintain free speech. For these platforms, free speech is not merely a good in itself: it reflects on their reputation and directly affects their competitive advantage.18 States, on the other hand, see themselves as the only entities entrusted with maintaining public order and promoting the public good. On this view, democratic governments, elected by the people, have a legitimate mandate to request specific moderation or curation of content, because the public places more trust in the government as a regulating body than in the judgment of private bodies. This conflict can be better understood against the backdrop of the aforementioned example. In the wake of the allegedly blasphemous acts in France, Prime Minister Imran Khan wrote a letter to Mark Zuckerberg, appealing to him to ban the dissemination of Islamophobic content on Facebook.19 The letter has not yet received a response, but it would not be a stretch to expect Facebook to respond in the same manner as it dealt with a similar issue when a video titled 'Innocence of Muslims' was posted on YouTube. Facebook then maintained that it would hold speech made against leaders, religions, or countries permissible, whereas speech that attacks people of a certain religion, race, or country would not be permitted.20

12. 'Tech CEOs Senate Testimony Transcript October 28' (Rev, 2020) <https://www.rev.com/blog/transcripts/tech-ceos-senate-testimony-transcript-october-28>.
13. 'European Commission and IT Companies announce Code of Conduct on illegal online hate speech' (European Union, 31 May 2016) <https://ec.europa.eu/commission/presscorner/detail/en/IP_16_1937> accessed 12 November 2020.
14. Libby Nelson, 'The Charlie Hebdo attack, explained' (Vox, 9 January 2015) <https://www.vox.com/2015/1/9/18089104/charlie-hebdo-attack> accessed 12 November 2020.
15. Supreme Leader of Iran.
16. (n 12).
17. (n 12).
18. Somini Sengupta, 'Twitter's Free Speech Defender' (The New York Times, 2 September 2012) <https://www.nytimes.com/2012/09/03/technology/twitter-chief-lawyer-alexander-macgillivray-defender-free-speech.html?auth=linked-google> accessed 12 November 2020.

Furthermore, it should be made clear in the debate on censorship that a laissez-faire version of online media is practically impossible, because organizations need to self-regulate for better commercial outreach and to keep their community standards current in an intensely connected, interdependent world. Media platforms are in constant competition to host up-to-date commercial content and to serve as well-regulated exhibition centers for online free speech. Individuals' intentions can thus be mapped by identifying the discursive context, keeping cultural factors in view, and then concluding whether the specific content amounts to hate speech. In the effort to achieve this, platforms are often inclined to enter into bilateral dialogues and mutual understandings with their consumers and with states. This can be seen in Facebook's decision to send a delegation to Pakistan to address the state's reservations and to investigate the issue of blasphemous content.21 It is this competition among online media platforms, over censorship and the adoption of tangible standards for it, that makes free speech actually possible through these intermediaries.

It is still a concern to many why such statutory immunity is accorded to these intermediaries and, by extension, upheld by courts of law when it comes to their exercise of discretion in content moderation. It is apparent, however, that denying intermediaries this immunity would do more harm than good: first, it would inhibit free speech; and second, punishment for misdeeds committed through intermediaries, and the inconsistencies of such use, could not be ascertained in any tangible sense. Thus, section 230 of the CDA acts as a suitable legal device for striking a balance between freedom and collateral censorship. It does so without undue intervention, and it provides platforms a natural incentive, in the form of self-regulation, to mitigate haphazard control of content.

Social media outlets thus need to be equipped not only with modern organizational development capacities, but also with enhanced knowledge of political, economic, and regional differences, so that censorship, if it is to be done at all, takes into account the nuances of the speech involved.

The fact that, in most cases involving social media, final discretion lies with the outlets themselves remains an issue. Control of the dissemination of speech by 'tech-preneurs' may amount to a centralization of actual power, and legislators in the United States have resisted the exertion of control over speech by a policy team at Facebook headquarters. It is not impossible to envision, however, that if Congress attempted to regulate speech notwithstanding section 230, the whole process might become more arbitrary rather than more transparent. It can therefore be maintained that although social media platforms have the final say in content moderation, this discretion should be reformed through an increased understanding of specific contexts and community standards, instead of by allowing state legislatures and governments to politicize the institution of speech in the modern digital and globalized world.

To conclude, the debate on platforms' immunity in publishing, and then regulating, the information flowing through their channels, information for which they are in the first place immune from liability, becomes a question of law. What legal ground can social media platforms invoke, other than self-regulation that exhibits consistency in their moderation of content across its many relevant factors? So far, our analysis reveals that a clear articulation of promoting the public good and commercial benefit to users and the community, and of avoiding harms such as violence in the digital sphere, is imperative for any future regulatory regime on speech and the dissemination of information through intermediaries.

19. Syed Irfan Raza, 'Imran accuses Macron of maligning Islam' (Dawn, 26 October 2020) <https://www.dawn.com/news/1587034> accessed 12 November 2020.
20. Kate Klonick, 'The New Governors: The People, Rules, and Processes Governing Online Speech' (2018) 131 Harvard Law Review 1598.
21. Aamir Jami, 'Facebook removed 85% of blasphemous material on Pakistan's request, high court told' (Dawn, 21 June 2017) <https://www.dawn.com/news/1323131> accessed 12 November 2020.
