The fine line between free speech and incitement to terrorism has never been so contested.
If the great fear after 9/11 was terrorism arriving on American shores via airplanes, the great fear after Paris and San Bernardino is that it will come via the Internet. From the U.N. Security Council to Congress to the campaign trail, the last few weeks have seen calls to crack down on incitement to terrorism online: to erase the screeds of al Qaeda propagandists, expunge Islamic State sympathizers from Twitter, and pair federal law enforcement officials and Silicon Valley executives in a quest to extirpate violent radicalism from Twitter and YouTube.
Yet for all of their compelling logic, some calls to digitally disarm extremists run counter to a core precept of the unique American approach to safeguarding freedom of expression and countering pernicious ideas. In the United States, willingness to protect even dangerous speech dates back to Thomas Jefferson, who wrote, “We have nothing to fear from the demoralizing reasonings of some, if others are left free to demonstrate their errors and especially when the law stands ready to punish the first criminal act produced by the false reasoning.” An early broad ban on incitement, the 1798 Sedition Act, criminalized publishing “false, scandalous, and malicious writing” against the government with the intent to “excite against [Congress and the president] … the hatred” of the people. The act was used brazenly by the ruling Federalists as a political tool to suppress the Democratic-Republican opposition. It was quickly discredited and allowed to expire. After his election, Jefferson pardoned those convicted under its terms.
In modern times, the 1969 Supreme Court case Brandenburg v. Ohio held that the Constitution permits banning incitement to violence only when it is intentional and when the action it exhorts is both imminent and illegal. Since then, absent any of the three elements — intention, imminence, or lawlessness — the speech in question cannot be constitutionally forbidden. The imminence requirement — embodied in the mythic shouting of fire in a crowded theater — has been interpreted to mean that the directive must be targeted, with a likelihood that the admonition to lawlessness will actually be heeded. Under Brandenburg, broad categories of noxious speech — sloganeering at a Ku Klux Klan rally, Holocaust denial, or Nazis marching in Skokie, Illinois — are constitutionally protected. This status contrasts sharply with much of the rest of the world, where such speech is subject to government prohibitions.
The United States has been steadfast in refusing to subordinate its broad free speech protections to international norms. In 1992, when the Senate ratified the International Covenant on Civil and Political Rights, the world’s premier human rights treaty, it spelled out areas where the covenant would be overridden by the U.S. Constitution. Among the most prominent was one pertaining to Article 20 of the covenant, which bans advocacy of “national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence.” Believing that this provision violates the First Amendment, the Senate adopted a reservation stating that “Article 20 does not … restrict the right of free speech and association protected by the Constitution and laws of the United States.” In short, while most nations — including such liberal societies as Canada, France, and Germany — accept the Article 20 injunction on incitement to “discrimination, hostility or violence,” the United States has judged it unconstitutional.
In recent years, Washington has vociferously defended this standard, rejecting efforts at the United Nations to invoke Article 20 as grounds for broad proscriptions of what some consider contemporary forms of incitement. The Obama administration has argued that, while depictions of the Prophet Mohammed and other offenses to religious sensibilities may be in poor taste, efforts to ban them violate basic precepts of free expression. In time-honored American fashion, free speech advocates ranging from the conservative Heritage Foundation and the Becket Fund for Religious Liberty to progressive groups including Human Rights First have urged the United States to stick to a firm line against global bans on hate speech, arguing that the answer to inflammatory speech is not suppression, but more speech.
But now the long-standing U.S. refusal to support broad bans on incitement is under pressure. Families of terrorism victims, political candidates, and the White House are all calling for Silicon Valley to join with the government in silencing digital purveyors of terrorism. The rationale is clear: depriving terrorist conspirators of the tools to proselytize, prey on the impressionable, and plan dastardly acts. Amid the sprawling subcultures of social media, the task of rebutting dangerous speech with more inspiring ideologies, anti-Islamic State Twitter accounts, and better viral videos — the “meet pernicious speech with more speech” approach — however laudable and necessary, seems impossible. Lawmakers are now clamoring for a more aggressive approach. In mid-December, the House of Representatives unanimously passed a bill entitled “Combat Terrorist Use of Social Media Act of 2015,” calling for a comprehensive strategy to curb terrorist messaging online. Earlier in 2015, the bill’s chief sponsor, Rep. Ted Poe (R-Texas), and other lawmakers wrote to the then-CEO of Twitter expressing concern that terrorists “actively use Twitter to disseminate propaganda, drive fundraising, and recruit new members — even posting graphic content depicting the murder of individuals they have captured.”
If online companies block this speech voluntarily under their own private content restrictions — rather than submitting to official dictates of censorship — the long-held American distaste for broad government bans on incitement will technically stand intact. Right now, online platforms are rushing to introduce tightened rules on content, undoubtedly hoping to preempt having such rules imposed on them by the government or the courts. Facebook is now being sued by more than 20,000 Israelis who seek no damages, but rather an injunction requiring the service to excise incitement and terrorism recruitment. Last week, Twitter adopted a stiffer definition of prohibited speech, declaring: “You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease.” A census by the Brookings Institution calculated that Twitter was home to at least 46,000 accounts maintained by the Islamic State, many of which will be shut down if the new standard is enforced.
Tech moguls and speech-control advocates must proceed with caution. While social media companies are indeed private actors, some of the most powerful platforms they control are the modern equivalent of a public square. Ideas and voices that are barred from these arenas will face high hurdles to be heard. While that’s good news when it comes to violent masterminds, it also means that in the rush to expunge terrorism, legitimate voices, ideas, and perspectives could also be suppressed, potentially on suspect grounds relating to religion or national origin. Borderline questions will inevitably arise with respect to speech that has historically been protected, but has now taken on a new, more ominous quality: speech that celebrates or praises violent acts already committed; speech by a known terrorist sympathizer that is innocent on its face but could have a hidden meaning to followers; a quotation from scripture that could have violent implications if applied literally. The questions of judgment are potentially endless, and it isn’t clear who will answer them — intelligence analysts, judges, or junior staff snacking on free food in a Menlo Park conference room. Moreover, pervasive calls from lawmakers and even the White House for collaboration between Washington and Silicon Valley raise even thornier questions about who will really decide what speech is off-limits and whether and how they can be held accountable.
When it comes to content itself, Brandenburg is relevant. Ideological entreaties, glorifications of martyrs, and celebrations of terrorist acts — however repugnant — are not themselves incitement to violence. They can be monitored, flagged, and even taken down by private parties but not censored under U.S. law. This is to the good, because their criminalization would lead us down a slippery slope. In late 2014, Islamic leaders in Australia argued that the definition of incitement under that country’s law could criminalize preaching from the Quran, supporting resistance movements in the West Bank, or backing factions opposed to the rule of Syrian President Bashar al-Assad. The Islamic Council of Victoria argued that a proposed counterterrorism measure “could see to it that there are no voices of disagreement or debate on the subject [of what constitutes terror] for fear of prosecution.”
Unavoidably, messages of resistance that are sinister when posted in the name of the Islamic State may read far more benignly when posted by, for example, radical climate change activists or protesters against Russian President Vladimir Putin. Social media purposefully divorces content from context, forcing readers to discern meaning and intent by drawing on knowledge and understanding that they must aggregate on their own. While a court may be able to weigh considerations of context carefully and objectively, a young analyst assessing thousands of posts a day and hoping to help stave off further government intrusion into the prerogatives of Silicon Valley seems bound to rely on shorthand involving preconceptions and probably also prejudices. One easy out, expanding prohibitions wholesale to encompass debatable forms of religious or political content, would render whole categories of legitimate speech off-limits to Tweeters and Facebookers. The United States should not go there.
A further layer of complexity is added when one considers non-public forms of messaging and incitement — such as San Bernardino shooter Tashfeen Malik’s private messages on Facebook or posts on services like WhatsApp. While most digital platforms rely on voluntary reporting of suspect content as a trigger to take down material that violates community standards, the law, or both, that is less likely to happen regarding private messages that only one person sees. Trying to police these messages — whether the surveillance is undertaken by the online services themselves, the government, or some combination thereof — walks right into a host of legal issues that make the challenges to the PRISM program revealed by Edward Snowden look simple.
Surveilling so-called “metadata” — participants, time, and duration of communications, but not their content — would be of limited use in tracking terrorist efforts to recruit and inspire those with no prior history of extremist activity. Without looking at the words used, the ideas conveyed, and the responses elicited, it is hard to imagine how anyone could usefully discern whether an online communication could be dangerous. Potential terrorists use messaging platforms the way we all use the phone or email: for private, voluntary, and mutual communications with others. Absent rigorous judicial enforcement and search warrants based on individualized suspicion, steps to ferret out dangerous content on private platforms would reshape the very nature of these modes of communication and risk destroying the privacy associated with them.
Some have pointed to evidence suggesting that hate speech, by fanning discrimination and dehumanization, can lower the barriers to the actual commission of hate crimes. Since hate crimes are illegal, the thinking goes, hate speech can be targeted as a predicate to a crime. Yet there is no evidence that banning hate speech will actually break the causal chain that links hateful ideologies to hateful acts. Fourteen European countries have legally prohibited Holocaust denial (bans that would be unconstitutional in the United States), yet anti-Semitism is nonetheless on the rise throughout most of Europe, according to the United Nations. While there is plenty of room to argue about cause and effect, there is no proof that prohibiting hate speech prevents hate crimes. It is plausible that bad speech may encourage bad acts but also that banning such speech may only strengthen the speaker’s determination to get the message out, in deeds if not words.
Some commentators argue that in an age of instant transmission it is no longer possible to draw a line between hate speech and hate crimes; that the window for law enforcement to intercede between the two has become impossibly narrow. But the real difference in the digital era may be less in the relationship between speech and crimes than in the digital trail that — after the fact — suggests that had we only surveilled, sifted, and spied more intently and intrusively the violence could have been prevented. Missing from that picture are the tens of thousands of hateful and suspicious missives exchanged every day that never culminate in crimes. Unless we are sure we can distinguish the merely disturbing from the truly dangerous, the task of sorting — much less censoring — this vast international ocean of communication risks becoming a deadly distraction.
Moreover, the standards set in the United States will influence the approaches of governments around the world, establishing precedents their citizens will be forced to live with. Whether it’s China’s Tibetans, Turkey’s PKK, or Morocco’s Polisario, since 9/11 many governments have found it useful to brand their domestic antagonists “terrorists” in order to legitimize repressive tactics. If the United States were to erase the online traces of the Islamic State, other governments would surely do the same to those they consider threats: ethnic minorities, alleged separatists, and political opponents. Wasting no time, in late December, China passed a harsh new counterterrorism law, invoking global fears of online recruitment and incitement as justification for a measure that provides for broad powers of surveillance and censorship.
Some governments, such as Pakistan and Saudi Arabia, have also conflated incitement with provocation. Between 1999 and 2010, the Organization of the Islamic Conference introduced annual U.N. resolutions calling for prohibitions on insults to religion on grounds not that they encouraged violence but rather that they could trigger violent reprisals from those who felt offended by them. (The U.S. Ninth Circuit Court of Appeals subscribed to a similar argument in relation to the Innocence of Muslims short film that became a violent flashpoint in 2012; Judge Stephen Reinhardt argued in dissent that the court got it wrong.) If the United States backs away from its narrow definition of incitement as applying only to intentional encouragement of imminent lawless violence, it could open the door to arguments that Washington has long derided as inimical to free speech.
As officials and social media executives decide how to respond to calls to cut off incitement online, they should begin on the firmest legal ground, by targeting incitement to imminent violence — direct and specific calls to commit murder and acts of terrorism. If there are categories of speech that do not qualify as incitement — such as terrorist recruitment activity — legislators and judges need to consider other ways to target and prohibit them. Inevitably, some commentators will claim that new laws and judicial decisions settling these matters will take too long. There will be those who argue that whatever is done to curb online incitement does not go far enough, that the risk to national security is too great. America’s high standard for the protection of free speech has been a point of pride for more than 200 years. It should not be willingly surrendered.