Does Government Regulation of Social Media Violate the First Amendment?
One Tuesday afternoon, someone in New York City who opened a social media feed around 1:51 p.m. would have seen a flurry of posts: “Earthquake in D.C.” Moments later, they’d have felt the earth shake.
The earthquake, which was centered about 90 miles southwest of Washington, D.C., was felt up and down the East Coast on Aug. 23, 2011. The instantaneous response via social media was unprecedented: “Earthquake” appeared in the status updates of 3 million Facebook users within four minutes, and Twitter said users were sending as many as 5,500 tweets per second.
The internet moved faster than the earth itself.
Once information is shared and talked about, it’s hard to take back. But social media information isn’t always helpful. Sometimes it’s unintentionally — or even intentionally — harmful. That has spurred government efforts to prevent that harm.
These efforts are often well-intentioned. But are they legal?
This article looks at when government regulation of social media violates the First Amendment and when it does not.
Efforts at government regulation of social media
Politicians from both major parties have introduced legislation and taken regulatory action at all levels of government. Their concerns range from protecting children from harmful content and the addictive properties of social media, to preventing threats to national security, to ensuring all viewpoints are treated equally.
More than 400 bills seeking to regulate social media have been introduced in state legislatures across the country since 2021. Many of these measures have raised First Amendment questions about limiting the free speech rights of both social media platforms and their users.
RELATED: The complete guide to free speech on social media
Government regulation of social media content moderation
The highest-profile attempts at government regulation of social media were laws passed by Florida and Texas in response to claims that platforms like X (formerly Twitter) and Facebook were discriminating against conservative voices. The Florida and Texas laws prohibited social media platforms from deleting or limiting the reach of posts – or banning users outright – based on the views expressed.
New York enacted a law that requires social media networks to provide a process for users to report hateful conduct.
Courts are reviewing whether these laws violate the First Amendment.
Government regulation of social media content moderation does not always result from legislation. In June 2024, in Murthy v. Missouri, the U.S. Supreme Court rejected claims that the Biden administration improperly pressured social media platforms to moderate content. The states of Louisiana and Missouri, along with five social media users, claimed that federal offices and employees pressured major platforms to moderate content related to COVID-19 vaccines and the 2020 presidential election. After lower courts ordered some of those offices and employees to stop any coercion of the platforms, the Supreme Court overturned those decisions, holding that the plaintiffs had not shown that the government’s communications with the platforms caused any content to be moderated.
Government regulation banning social media platforms
Federal and state officials have gone beyond regulating how platforms moderate content and have tried to ban entire platforms.
The popular social media platform TikTok is owned by the Chinese company ByteDance. That has raised concerns that the Chinese government could harvest TikTok users’ personal information or use the platform to deliver disinformation to Americans. Lawmakers also worried that children would copy dangerous acts they had seen in TikTok videos.
In 2020, then-President Donald Trump issued executive orders banning TikTok and WeChat from app stores. In 2023, Montana banned TikTok for everyone anywhere in the state. Both actions were challenged in court and never went into effect.
In spring 2024, President Joe Biden signed a federal law forcing ByteDance to sell TikTok, stop operating in the United States or pay massive fines. Supporters argued that the law does not ban TikTok; it simply forces ByteDance to decide whether to continue owning and operating the platform in the United States. They also argued that the law is the only way to avoid harm to users’ privacy and national security. A federal court of appeals agreed in December 2024, saying the law does not violate the First Amendment. TikTok is likely to seek further review from the full court of appeals or the Supreme Court, though it is also possible that Trump could decline to enforce the ban when he becomes president — something he indicated as a possibility on the campaign trail.
The federal government and several states have enacted more limited bans without creating First Amendment issues. These prohibit downloading TikTok to government-owned devices or using it on employees’ personal devices if they conduct any government business on them. Several public universities also have rules prohibiting users from accessing TikTok via university Wi-Fi networks or official university computers.
RELATED: The ultimate guide to free speech on college campuses
Social media age-verification requirements
Several state laws ban – or at least limit – access to social media by children under age 18.
A first-in-the-nation law in Utah requires anyone under 18 to get parental permission to sign up for a social media platform and requires platforms to verify the age of every user in Utah. Even children with parental permission are barred from using social media sites between 10:30 p.m. and 6:30 a.m. Finally, the law prohibits “addictive” design features like infinite scrolling and livestreaming. In the face of court challenges, Utah softened the law to require social media platforms to seek “reasonable assurances” that a user is not a minor and to use default settings limiting children’s access to certain content. The prohibition on addictive design features remains.
Arkansas and Mississippi have passed laws restricting minors’ access to social media as well, and a Florida law specifically targets the addictive features of social media. More than a dozen other states have laws requiring websites (including social media sites) to verify users’ ages to prevent children from accessing content generally described as “harmful to minors,” usually sexual material.
Disclosure requirements
Government regulation of social media that forces sites to remove or allow certain content or users raises significant First Amendment questions. So some states have taken a narrower route: mandating more transparency about sites’ policies. The goal of these laws is to let users better understand how content is delivered to them and what tools they can use to adjust what they see.
California, Georgia and Ohio have explored laws requiring social media platforms to submit regular reports to the state about their content moderation actions, particularly responses to requests to remove certain posts.
The California law, enacted in September 2022, requires social media platforms to disclose their policies for moderating hate speech and disinformation. The social media company X sued, but a federal district court judge said the law did not violate the First Amendment. The judge acknowledged that the disclosure obligations impose a substantial burden on social media companies but found the burden justified by the goal of promoting transparency for users, noting that the law does not require platforms to actually moderate the content.
Right to be forgotten
The right to be forgotten is a concept that exists mainly outside the United States. Right-to-be-forgotten laws allow someone to ask a search engine to essentially hide information about them that they do not want the public to see. Most of these laws do not erase the information; they simply remove links to it so it can’t easily be found.
Proposals to change social media platform liability for content
Social media platforms enjoy significant protection from liability, thanks to a federal law known as Section 230 of the Communications Decency Act.
RELATED: What is Section 230?
Section 230 protects social media platforms and other websites from liability for content that people post. For example, Facebook is not liable for defamation based on its users’ posts. YouTube cannot be sued for invasion of privacy based on videos uploaded by its users. TikTok cannot be sued for negligence if someone is injured when copying a risky act they saw someone post there.
But some worry that Section 230 goes too far, shielding social media platforms from liability even for content that is unprotected speech. Lawmakers have sought to repeal it entirely or prune back its reach.
According to the Congressional Research Service, more than a dozen proposals have been introduced in Congress since 2018 to limit Section 230. None have passed.
What are some of the arguments for government regulation of social media?
Because social media platforms are engaged in speech, any law or regulation targeting them must comply with the First Amendment. The platforms’ own content moderation efforts, by contrast, do not violate the First Amendment because the platforms are private companies, not government actors.
Laws or regulations that target specific types of speech are hard to justify under the First Amendment. The government must demonstrate a specific harm caused by the speech and show that the law or regulation is narrowly tailored, restricting no more speech than necessary to avoid that harm while allowing all other speech on the site to flourish.
Many attempts to regulate social media are aimed at avoiding harm to children – usually from sexual content or dangerous acts children might copy after seeing them online – or, in the case of TikTok, at preventing harm to national security. Most of these fail because they call for outright bans on that content for some or all users, which is not the least restrictive way to avoid the harm.
This is why many legislators attempt to pass restrictions that, while they may have an incidental impact on speech, do not target speech itself – or at least claim not to. With the age-limit laws, for example, rather than focusing on harmful content on social media platforms, lawmakers seek to restrict the way the platforms operate and the addictive features they use.
More generally, advocates of government regulation of social media compare the platforms to other businesses that have been regulated because of their status in society:
- Social media platforms have been compared to common carriers like phone companies, which have traditionally been subject to regulation and required to accept all who want to use their services. Much like telephone companies, the argument goes, social media platforms control access to the public conversation with near-monopoly status.
- A similar line of thinking compares social media companies to public forums, spaces owned or operated by the government where free speech has traditionally flourished. Social media platforms are often called the “digital town square.”
- Some have argued for social media companies to be considered public accommodations like hotels, restaurants and public transit, all of which are prohibited from discriminating against customers.
What have the courts said about whether government regulation of social media violates the First Amendment?
Generally, courts have struck down attempts to regulate social media based on its content as violations of the First Amendment but allowed smaller, more targeted requirements to stand.
Courts reached different results on whether the Florida and Texas social media laws limiting content moderation violated the First Amendment. On July 1, 2024, the Supreme Court sent the laws back to the lower courts for further review but signaled that both raise serious First Amendment questions: social media platforms are engaged in expression, and the government cannot force them to carry certain content. The platforms have a right to curate their own environments free from government intervention. The court also signaled that seeking to “level the playing field” among viewpoints is not a justifiable reason for regulation.
Just days later, a federal judge struck down a Mississippi law banning anyone under 18 from accessing social media without parental permission. Taking his cues from the Supreme Court, the judge held that social media platforms are engaged in expression. He accepted the Mississippi attorney general’s argument that the state has a compelling interest in protecting the physical and psychological health of children online. But he held that the law was not narrowly tailored to that end: it would likely prevent a substantial number of adults from accessing social media, and minors would find ways to access it anyway.
Banning social media platforms, TikTok specifically, fared no better. Both executive orders issued by President Trump banning TikTok and WeChat were struck down because they blocked substantially more speech than needed to protect users and national security. The Montana law met the same fate when a federal judge ruled that it violated the First Amendment, saying, “The Legislature used an axe to solve its professed concerns when it should have used a constitutional scalpel.”
In December 2022, NetChoice filed suit against the California attorney general over a children’s online safety law that mandates social media platforms use default settings protecting children’s privacy and safety. NetChoice argued that the law, AB 2273, violated the First and Fourth Amendments and the due process, commerce and supremacy clauses. In September 2023, a federal court blocked implementation of the law pending further review.
Federal courts have also temporarily blocked age-verification laws in Arkansas, California and Texas from going into effect because they likely violate the First Amendment rights of all internet users.
What questions remain about the future of government regulation of social media when it comes to the First Amendment?
Regulation of social media is still very much in its early stages. Among the questions that courts still must answer:
- Are there any viable arguments for government regulation of social media content moderation now that the Supreme Court has held that platforms are engaged in expressive activity? Will comparisons to common carriers, public forums or public accommodations gain any traction?
- Will the court decision upholding the federal TikTok law survive further review? If it does, will federal or state governments move to act against other social media sites?
- What is the future of age-verification requirements? Is limiting harm to minors a compelling reason to prevent them from accessing social media? If so, is existing technology sufficient to keep children off social media while allowing adults access? And are attempts to limit access based on the purportedly addictive features of social media valid, or are they just a workaround for laws that would otherwise likely be struck down as violating the First Amendment?
- What is the future of Section 230? Will it be repealed or revised?
Kevin Goldberg is a First Amendment specialist for the Freedom Forum. He can be reached at [email protected].