This article has been rolling around in my head since ChatGPT rocketed into the public awareness a few months ago, and just in the last few weeks, there’s been a slew of AI developments in gaming from the likes of Blizzard and Nvidia that give us a peek at where our industry is headed. The AI revolution is coming for your games! Maybe. Sort of? Probably not for a while.
This week in Lawful Neutral, let’s look at the development of AI in MMOs, the promises and potential, the risks and dangers, and what it all means for our most favorite of pastimes.
What words mean
Let’s start with some of the terms that get thrown around in the AI space and what they actually mean. The first draft of this article had about 1000 words just setting the stage and defining various things in the AI space. Then I realized none of it really mattered. What matters is what we are talking about here specifically, which is “Generative Artificial Intelligence,” or GenAI (and not to be confused with artificial general intelligence, which is a very different thing).
GenAI is the application of artificial intelligence to the creation of new creative works. In practice, GenAI is the type of AI that tools like DALL-E and ChatGPT use to generate their responses and images. It can also be used to create music, models, textures, dialogue… you see where I’m going with this.
Changing how games are made
According to a recent New York Times article, Blizzard announced the launch of its own GenAI tool called Blizzard Diffusion. The revelation came from Allen Adham, chief design officer at Blizzard, who told workers, “[P]repare to be amazed. We are on the brink of a major evolution in how we build and manage our games.” Blizzard Diffusion claims to be able to generate “effortless concept art” when fed assets from games like World of Warcraft, Diablo IV, and Overwatch 2. Based on the leaps forward in capability we’ve seen in other image generation tools like Stable Diffusion, MidJourney, and DALL-E, it actually doesn’t seem that far-fetched. The article further quotes Blizzard’s hopes that the technology can cut out some “design and development drudgery and make the creation of video games more fun.”
This isn’t the first time Blizzard has dabbled in AI. Blizzard also uses other applications of AI in its games, like machine learning to fight toxic behavior in Overwatch, so it’s not a huge surprise that it would jump on the GenAI bandwagon. But the reality of the company’s current situation is probably a factor in its interest here as well, as it’s anxious to keep to timelines in the wake of layoffs and staff fleeing the company. Plus, since Microsoft is also big in the GenAI space right now, Blizzard is probably trying to score some brownie points with its future overlords too. And let’s be honest, Bobby Kotick isn’t going to be upset about not having to pay employees for something GenAI could do without all the pushback from the labor rights groups he’s maligned.
But if we assume some positive intent here from companies besides Blizzard, I think there’s some really great opportunity to shorten the dev time of these games without actually taking away jobs. I can see a world where an environment artist comes up with concepts and “training material” for the model and then uses GenAI to apply the output to the environment in a fraction of the time. Then, the artist can go through and make tweaks, add character and nuance, and fill in the details on the broad strokes painted by the GenAI.
Indeed, the aforementioned New York Times article cites leaders from other companies talking up how GenAI could streamline everything from dialogue creation to quality assurance testing. There’s a lot of hopium about how quickly these things will actually be applicable in gaming, but the suits aren’t necessarily wrong about how AI can speed up the development of games by force-multiplying the existing staff instead of displacing them.
There’s also some potential for independent designers who could never have made or funded games in the current industry to finally create what’s in their heads, utilizing GenAI to help them. Consider an MMORPG like Project Gorgon and where that game might be today if the indie team had GenAI to help. There’s some really cool “democratization of game creation” that could happen here, and it could potentially be a net positive for the gaming space. (A net positive, but it’s not without some serious negatives, and we’ll get to those shortly.)
Changing how we play games
In the last week of May, Nvidia announced its own application for GenAI, and in my opinion it’s amazing. The demo shows an interaction where someone speaks, verbally, to a character in a game and gets a response in a synthesized voice from the NPC. The demo literally shows someone having a conversation with an NPC. Now, granted, I don’t like talking to people in general, and I don’t see myself always wanting to talk to every quest giver ever because really I don’t need to have a conversation about why you need me to take three steps to the left to deliver a package to another NPC who’s within arm’s reach. But the application is astounding.
Other companies are using GenAI to create human-like NPCs, who move around, talk to other NPCs, and make purchases. While games certainly have these capabilities already, the scale of options is certainly expanding rapidly, and thinking about the potential of those in new MMOs is mind-boggling. It extends beyond just NPC interaction as well: You could tell the GenAI to write a library of lore-appropriate books for players to pick up and read. There are very few games with novel-length in-game books (the Elder Scrolls franchise comes to mind), but if studios could generate those lore books with little cost, it could really add to the aliveness and immersion of the world for the gamer (but also potentially put human writers out of work).
Ultimately, if we take all of this potential collectively, what it results in is a world that feels more like a world. The “alive virtual world” feel has been the pipedream of developers and gamers alike for decades. It’s not here yet, but we can actually see it.
Changing the risk in games
You can probably tell that I am excited about all the potential here. Getting more games, of better quality, with more immersion, much faster is one hell of a vision for the MMO space. I think we’ll find ourselves there not in decades but in years and months.
But that’s not to say there isn’t a downside to all of this. So let’s talk about the human cost.
It would be silly of us to assume that people like Bobby Kotick won’t look at this technology and think, “Great, I can fire 25% of my staff and get the same amount done. Do it, and take that bucket for their tears. I’m almost out.” There will be unscrupulous developers who cut human staff in favor of AI technologies that don’t perform as well but cost much less, and gamers will be left with few options other than to discourage the behavior and refuse to purchase the resulting product.
These GenAI models, as awesome as they are, are also really good at lying. They aren’t actually “intelligent,” so they hallucinate and invent things that don’t exist at the behest of the user, like the case of the lawyer who used ChatGPT for a legal hearing; ChatGPT made up all the cases referenced and lied to the lawyer when he asked if they were real (and he didn’t dig more deeply until he’d already been caught in court). Game developers will have to contend with how to keep GenAI-driven NPCs from making up people and places in the game world that don’t exist or sending players marching off to find a thing that was never there to begin with. (Also something that has already happened in the gaming space!)
There’s also the environmental cost to consider, as running these big GenAI systems is expensive. According to Ars Technica, answering a ChatGPT question is about 10 times more computationally expensive than performing a Google search. The more computation a system uses, the more environmentally expensive it is, and ChatGPT uses huge amounts of power: running it reportedly costs about $700,000 a day, which works out to roughly 36 cents per question.
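As a quick sanity check, those two reported figures imply a daily query volume we can work out ourselves. This is just back-of-envelope arithmetic on the numbers cited above, not data from OpenAI:

```python
# Back-of-envelope check of the reported figures (assumptions, not measurements):
# ~$700,000/day to run ChatGPT and ~$0.36 per question.
daily_cost_usd = 700_000      # reported daily operating cost
cost_per_question_usd = 0.36  # reported per-question cost

# Implied query volume: daily cost divided by per-question cost.
questions_per_day = daily_cost_usd / cost_per_question_usd
print(f"{questions_per_day:,.0f} questions/day")  # → 1,944,444 questions/day
```

In other words, those figures together imply ChatGPT was fielding on the order of two million questions a day, each one carrying that compute (and power) bill with it.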
There’s also a whole slew of copyright considerations, privacy violations, cybersecurity risks, and the massive legal repercussions stemming from how these systems get trained, where they get their data, and what they do with those data. And even if we assume that companies find ways around all of those serious roadblocks (which is far from guaranteed), game developers specifically need to have a plan to deal with things like hate speech coming from the AI. They need to make sure they mitigate the harm in the algorithms, assure privacy of data entered, and prevent threat actors from poisoning the data and skewing the model. There’s a lot here to be worried about.
But none of these risks is insurmountable, and some of them, like the human cost, are possibly inevitable. Still, it reminds us that for as exciting as the future is with GenAI, there’s still a lot to do and a lot of harm to avoid. We’ve never been closer to the OASIS, but it’s a rocky road ahead.
What does it all mean?
We probably won’t wake up tomorrow to a brave new world underpinning how MMOs are made and played. But the change is coming. In another year, we’ll be able to look back and point at some things that happened as a result of GenAI – and not all of them will be good. In fact, there’s a decent chance that most of them won’t be good (we’ve already seen that, unfortunately). But it’s hard to look ahead at the horizon and not be at least a little excited about the potential we see there. A new era for games, tech, and humanity.
Assuming it doesn’t, you know, all go horribly wrong before then.