Lawful Neutral: What MMO gamers need to know about Section 230

    

You might have seen something in the news last week about a thing called Section 230 and lots of panic over the “end of the internet” as a result of the Supreme Court hearing a couple of cases, both with ties to Section 230: Gonzalez v. Google and Twitter v. Taamneh. I’m not going to spend ink on the actual cases because there’s plenty written about them elsewhere, and neither has anything to do directly with video games anyway.

But they do both deal with a tiny piece of legislation that’s often credited with being the proverbial backbone of the internet. That piece of legislation is just two paragraphs that arguably changed the course of human history. Maybe that’s a little bit melodramatic, but only a little bit. As short as it is, Section 230 is surprisingly dense and nuanced.

So this installment of Lawful Neutral will merely lay the groundwork for what Section 230 is, what the issue is, and why gamers should care about it. This article is, by necessity, not the whole story; no single article can do what dozens of books do. My aim here is to tie it back to MMOs and show that in addition to being The Twenty-Six Words that Created the Internet, Section 230 is also the 26 words that created MMOs.

What is Section 230?

Section 230 refers to a section of the United States’ Communications Decency Act (CDA), which is also known as Title V of the Telecommunications Act of 1996 because we can’t have just one name for things. The original intent of the Communications Decency Act was to protect minors from sexually explicit materials – those deemed “indecent” and “obscene.” Both indecent and obscene are specific legal terms that most often refer to sexual content; roughly speaking, indecent material is merely offensive, while obscene material falls outside community standards (among other requirements) and gets no First Amendment protection at all. Older readers might recall that the CDA’s anti-indecency provisions were declared unconstitutional in 1997 because they violated the First Amendment. But the courts severed Section 230 from the rest of the CDA, and so it remains in effect today.

At the highest level, Section 230 protects “interactive computer services” from liability both for what their users post and for what the company decides to take down. Section 230(c) has two main parts: the “Leave up” part and the “Take down” part. The first protection boils down to the assertion that interactive computer services aren’t publishers; publishers have all kinds of existing legal obligations and policy bound up in how they have to operate, and by saying “interactive computer services” aren’t publishers, Section 230 frees companies from those obligations. The second protection broadly shields whatever companies decide to moderate, as long as those actions are taken in good faith. “Good faith” isn’t explicitly defined, but to date it has largely been invoked to prevent anti-competitive moderation practices.

Section 230(c)(1) is the “Leave up” part, and it protects companies from liability over what users post. The paragraph states:

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

This is what ensures that websites like MassivelyOP cannot be held liable when potentially illegal or infringing content is posted by users of that website. It’s called the “Leave up” part of Section 230 because it relates to the content that’s “left up” on the interactive computer service.

Section 230(c)(2), on the other hand, is the “Take down” part, which states:

No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

This is what allows companies to create and enforce codes of conduct and moderate content that isn’t illegal but that they find “otherwise objectionable.” The phrase “otherwise objectionable” is not defined in Section 230, and as a result it has been broadly interpreted by most courts. It essentially grants the interactive computer service substantial control over the types of content that appear on that service.

So what’s the controversy here?

The Supreme Court recently heard two cases that are very similar – so similar that a decision in one is effectively a decision in both. In effect, the cases both argue that Google and Twitter should be held liable for promoting and recommending content that radicalized users into terrorists and culminated in terrorist attacks that killed many people. The plaintiffs in these cases maintain that because the content was “recommended” by the algorithms on Twitter and YouTube, the companies were exercising editorial control and therefore should be legally responsible as “publishers” of the content. If that argument prevails, these social media platforms would lose Section 230 protection and would be liable for all the content their users post.

The problem is that if the Supreme Court decides to weigh in and sides against the social media giants, it would open the liability floodgates for every “interactive computer service” that hosts user-generated content – i.e., the overwhelming majority of the internet. Removing Section 230 protections would change the internet overnight because it would be impossible for companies to minimize risk and ensure protection from liability for what users post. Things like the comment section on MassivelyOP would be gone. Review websites like Yelp and Rotten Tomatoes wouldn’t be able to function. Even profiles on dating sites would be scrubbed because it would be impossible for any website, regardless of size, to monitor everything and maintain protection against liability. The internet as we know it today would cease to exist.

So why would we want to remove Section 230 protections at all? Well, in truth, because the people who want to revoke Section 230 have a point. We only need to look at the proliferation of misinformation and disinformation and how much harm they’ve caused to recognize that something isn’t working like it’s supposed to. Major tech and social media corporations do make money off of misinformation, disinformation, and content designed to radicalize; they have a financial interest in allowing it to proliferate, regardless of the harm it causes. When Section 230 was written in the ’90s, the authors couldn’t have reasonably foreseen what Facebook, Twitter, YouTube, and the rest of the internet would become, nor could they have foreseen the role the internet and social media would come to play in geopolitical strife, public health, and political extremism.

Why MMOs?

All right, so how does this all relate to MMOs? Well, it’s not just websites and social media that’d be affected. Without Section 230 protections, MMOs and online games themselves would be forced to transform and would look drastically different – if they could exist at all. It’s not much of a stretch to say that MMOs exist specifically because of Section 230 protection. It all goes back to the definition of “interactive computer service,” which is epically broad. Here’s Section 230(f)(2):

The term ‘‘interactive computer service’’ means any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server…

Based on this definition of interactive computer service, MMOs enjoy Section 230 protections. If you think about all the places in MMOs where there’s an opportunity for you to enter content – character names, bios, guild names, even the welcome message from your guild when you log in – each one of those places is possible because of Section 230 protections. In fact, I would say that more than most other media, MMOs have benefited from Section 230. Without the freedom it provides, I don’t think MMOs could be what they are today.

Without that protection, features like in-game chat, voice chat, and even character names themselves would represent too much risk for companies; they wouldn’t be able to ensure that all potentially illegal or infringing content is blocked in any of those places, which would leave them open to being sued. For example, the character name “JainaIsARacist” is potentially libelous, and therefore Blizzard could be held liable by a real-life Jaina if she chose to sue. It could be a similar situation with a guild name like “ThrallHatesTomatoes,” assuming Thrall were a real person. And if you wrote a backstory for one of your characters in Final Fantasy XIV that read a little bit too much like a particular pseudo-religious organization that bases its belief system on fictional books and is known to be highly litigious… well, Square Enix isn’t going to want to take that chance.

And that’s just for relatively innocuous content. As I detailed in my column about the UN report on extremism in gaming, terrorist groups operate in games by straddling a line of “acceptable” content, using code words and euphemisms to communicate and recruit. At what point does that content become a liability? How would game operators know what’s being used in code versus used normally? It’s hard enough to root it out now; without legal protections, companies might decide it’s easier to turn off player communication entirely.

What about context? When it comes to toxicity in gaming, context is key. Something that’s said in one context can be perfectly fine, but when removed from that context, it can be treated as something that creates liability. In that case, what matters more: that something was said as good-natured trash-talking between friends, or that the same thing, taken in isolation, is content that creates risk for the game?

What comes next?

Right now, we’re just waiting to see what the Supreme Court does. There’s understandably a lot of hand-wringing over all of this, especially since the current court is given to flouting decades of established precedent. But at least in this case, there’s something to be lost on both sides of the partisan divide, so in the event the court provides an opinion, it’s less likely to be wildly partisan. Justice Kagan commented, “We’re a court. We really don’t know about these things. These are not the nine greatest experts on the internet.” That statement has given a lot of folks hope that the courts will punt it back to congress to figure out, but we’ll have to see.

This issue is particularly thorny because the harms caused by Section 230 are plentiful, visible, and impactful. But so is the environment that Section 230 created, especially in MMOs. It opened the path for whole new virtual worlds that, at least culturally, were an untamed frontier inside the untamed frontier of the internet. Changing Section 230, no matter the extent, will have unpredictable and wide-ranging impacts across the globe, not just in the United States.

Should Section 230 be changed? Yes, I think it should. Seeing the devastation that disinformation, misinformation, and radicalization have wrought, it’s hard to say anything other than “we need to deal with this, ASAP!” But how to change it is an entirely different question. I am exceptionally happy to say that it’s also a question I don’t have to answer.

Every other week, Andy McAdams braves the swarms of buzzwords and esoteric legalese of the genre to bring you Massively OP’s Lawful Neutral column, an in-depth analysis of the legal and business issues facing MMOs. Have a topic you want to see covered? Shoot him an email!