The Division’s cheating plague may be a result of its bad network model, consultant argues

    
Under the city.

The Division just can’t catch a break.

Glenn Fiedler, a former lead network programmer for Sony and Respawn and current games tech consultant, says that videos of rampant client-side cheating in The Division make him suspect Ubisoft is using a trusted client network model. “I sincerely hope this is not the case, because if it is true, my opinion of can this be fixed is basically no. Not on PC. Not without a complete rewrite,” he says.

Fiedler explains that top-tier FPS games like Overwatch and Call of Duty use a networking model where the server doesn’t trust your client but rather takes your input and runs the simulation itself, all while minimizing your perceived lag. But if The Division is indeed employing a network model that inherently trusts the client, then any ol’ random cheaterhead can screw with that client to create the messes we’re seeing right now.
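
For a rough idea of the difference, here is a minimal sketch of a server-authoritative loop in Python. Every name and number is illustrative; this is not code from Overwatch, Call of Duty, or The Division:

    # Minimal sketch of a server-authoritative model: the client sends raw
    # inputs, and the server, the only trusted party, runs the simulation.
    from dataclasses import dataclass

    MAX_SPEED = 5.0  # units per tick, enforced by the server (made-up value)

    @dataclass
    class InputPacket:
        tick: int
        move_x: float  # client intent only; the server clamps it to [-1, 1]
        move_y: float

    @dataclass
    class PlayerState:
        x: float = 0.0
        y: float = 0.0

    def server_step(state: PlayerState, pkt: InputPacket) -> PlayerState:
        # The server never accepts a position from the client; it applies
        # sanitized inputs to its own copy of the world.
        state.x += max(-1.0, min(1.0, pkt.move_x)) * MAX_SPEED
        state.y += max(-1.0, min(1.0, pkt.move_y)) * MAX_SPEED
        # Hit detection would also run here, against server-side state only.
        return state

A cheating client can send garbage inputs, but it can only ever ask for moves the server’s own rules allow.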

“If a competitive FPS was networked the other way, with client trusted positions, client side evaluation of bullet hits and ‘I shot you’ events sent from client to server, it’s really difficult for me to see how this could ever be made completely secure on PC.”

Source: Gaffer on Games via PC Gamer. Thanks, Celestial!
AaronVictoria

DPandaren AaronVictoria Both of those titles used trusted-client architecture, and both of them faced hacks too. They weren’t discussed in the press as much as games like The Division for some reason; my guess is because there is overwhelming hate for Ubisoft. Even games like Call of Duty use trusted-client architecture to make real-time or faked real-time combat believable. Call of Duty is being hacked daily, and you never hear about it, mostly because Activision’s security team is relentless about shutting down and banning hackers. Much like the consultant in the article said, in most cases there is nothing that can be done about these kinds of hacks: once the client is trusted, you can’t just do something on the server side to protect it. You’d have to program an entirely new game to take advantage of new architecture, so they just have to ban people quickly and keep banning them as they hack.
It’s not really about the player count; it’s about the way the server architecture handles data and interfaces with the client.
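
To illustrate why banning becomes the only lever, here is a hypothetical trusted-client message handler in Python. It is a sketch of the pattern the thread is describing, not any studio’s actual server code:

    # Trusted-client model: the server records whatever the client claims.
    def on_client_message(world, msg):
        if msg["type"] == "position":
            # Nothing here can tell a legal move from a teleport hack; there
            # is no authoritative simulation to check the claim against.
            world.players[msg["id"]].pos = msg["pos"]
        elif msg["type"] == "hit":
            # "I shot you" events are taken at face value, so a modified
            # client can report any damage, on any target, at any rate.
            world.players[msg["target"]].hp -= msg["damage"]

Server-side patches can throttle or sanity-check these messages after the fact, but the missing simulation is exactly the “complete rewrite” Fiedler is talking about.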

DPandaren

AaronVictoria DPandaren That still doesn’t really explain Planetside. Or even Tribes 2 with servers that had the max player pop pushed up to 128.

AaronVictoria

DPandaren AaronVictoria Personally I feel like the projectiles should have used far more tracking. As an archer, I felt PvP battles went on too long due to the shoddy dynamics of projectiles.

AaronVictoria

DPandaren AaronVictoria Asheron’s Call was a tab-targeting, dice-driven game. It wasn’t a real-time combat game. It did deviate from the rooted footing and absolute-tracking projectiles used by most MMORPGs of that era, though. If you’re referring to projectiles specifically, you may have forgotten the proximity compensation on projectiles: if a projectile missed you by 10 feet, it would still register as a hit. They also had minor arc compensation to help guide them toward the target, because of the terrible network inconsistencies of those times; otherwise no one would ever have landed a ranged attack.
I know these things because one of my mentors, who taught me many of the ins and outs of MMORPG programming, was the lead programmer on Asheron’s Call.
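
A sketch of those two compensations, using my own names and thresholds rather than Asheron’s Call’s actual values:

    import math

    HIT_RADIUS = 10.0   # a near miss inside this distance still registers
    ARC_STRENGTH = 0.1  # fraction of speed used to bend toward the target

    def step_projectile(proj, target, dt):
        # Assumes proj was launched with a nonzero velocity (vx, vy).
        tx, ty = target.x - proj.x, target.y - proj.y
        dist = math.hypot(tx, ty)
        if dist > 0:
            # Arc compensation: nudge the velocity toward the target, then
            # renormalize so the projectile keeps its nominal speed.
            proj.vx += (tx / dist) * proj.speed * ARC_STRENGTH
            proj.vy += (ty / dist) * proj.speed * ARC_STRENGTH
            v = math.hypot(proj.vx, proj.vy)
            proj.vx, proj.vy = proj.vx / v * proj.speed, proj.vy / v * proj.speed
        proj.x += proj.vx * dt
        proj.y += proj.vy * dt
        # Proximity compensation: close enough counts as a hit.
        return math.hypot(target.x - proj.x, target.y - proj.y) <= HIT_RADIUS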

AaronVictoria

Leilonii AaronVictoria Tera Online is hacked a lot. There are even dedicated websites with directions on hacks. Most of them aren’t terribly dangerous because of how Tera handles data, since the server is authoritative. You can often see this if you watch closely enough: from time to time you’ll see floating damage numbers in Tera, but if you look at your combat logs, you’ll notice the damage reports aren’t equal to the numbers you just saw. It uses what is called predictive interpolation on data: it shows you numbers close to what you should be doing, then reports the server’s actual findings and corrects the client afterward. As Xijit said, the real-time nature is actually faked. What’s more, they use overly massive hitboxes to further assist with the illusion.
There is a YouTube video where a guy is playing, and he’s talking about how much he loves Tera because it’s so realistic that a blade drifting slightly behind him after an attack can still hit the target. That wasn’t intentional, as the attack is an obvious forward slashing attack; what he’s encountering is the latency compensation’s inverse reaction to the extremely large hitboxes. When you have time, look around and you’ll find lots of sites with hacks for Tera. That’s one of the reasons I stopped playing it: at one point there were hacks active in-game that had been posted on websites for nearly a year. It shouldn’t take a studio a year to counter hacks that are being openly shared online.
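
A rough sketch of that predict-then-correct behavior, with hypothetical UI and combat-log hooks standing in for whatever Tera actually does internally:

    # The client shows an estimated number immediately, then reconciles
    # when the server's authoritative result arrives.
    predicted = {}  # attack_id -> damage shown to the player

    def on_local_attack(attack_id, estimated_damage, ui):
        # Client-side guess, displayed with zero perceived latency.
        predicted[attack_id] = estimated_damage
        ui.show_floating_number(attack_id, estimated_damage)

    def on_server_result(attack_id, actual_damage, ui, combat_log):
        # The server's number wins: the combat log records the real value,
        # which is why it can disagree with what floated on screen.
        combat_log.record(actual_damage)
        if predicted.pop(attack_id, None) != actual_damage:
            ui.correct_floating_number(attack_id, actual_damage)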

AaronVictoria

Wratts AaronVictoria Most first-person shooter titles don’t use server-authoritative architecture, because most don’t need it in the sense that something more large-scale does. There are quite a few reasons why, but a lot has to do with convenience and cost. If you have a game that is server-authoritative, whenever you change anything that should be authorized and monitored by the server, you have to update the server’s architecture as well. If you edit the client’s walk speed for a patch, you have to go in and edit the server’s physics system, monitoring system, and security system, and edit related files, just to ensure that the server architecture has full knowledge of what is and isn’t allowed.
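
A toy example of that duplication, with made-up numbers; the point is that the client’s tuning value and the server’s validation limit have to ship together:

    CLIENT_WALK_SPEED = 4.2      # tuned value shipped in the client patch
    SERVER_MAX_WALK_SPEED = 4.2  # must change in the same patch, or the
                                 # monitor starts rejecting legal moves

    def validate_move(prev_pos, new_pos, dt):
        # Monitoring-side check (1D for brevity): distance covered may not
        # exceed what the server believes the physics allow, plus tolerance.
        speed = abs(new_pos - prev_pos) / dt
        return speed <= SERVER_MAX_WALK_SPEED * 1.05

Miss either side of that pair and the game breaks in one direction or the other.
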
Piggybacking on that last statement, and touching on the cost topic I mentioned earlier: this is a huge reason. A lot of people assume that you have systems like this all on a single machine, with all the authoritative architecture. The truth is that it’s not really a common or acceptable practice, and it can get really costly to build tech that handles things this way. The server software itself usually operates on a core dedicated machine, networked with multiple dedicated machines that serve as back-ups, overflows, and emergency operational systems; together these are what we call “The Server”. That’s where centralized data is managed. Accompanying that are usually database servers, physics servers, monitoring servers, and security servers, each a cluster of equal make-up to The Server. The database servers are pretty self-explanatory: they hold all the varying data structures and store data that is either recalled and written during runtime or built on server startup, and they are usually networked between The Server and the monitoring servers for validation. The physics servers are a cluster of dedicated machines that communicate with The Server and include instances of builds or data that outline the world physics; this normally governs the collision elements of maps, zones, or other physical objects. The physics servers communicate with the monitoring servers, which communicate with The Server, constantly reporting details of what is occurring on the physics servers’ instances.
The Server is constantly cross-checking data between itself and those three server groups, and if The Server finds some kind of inconsistency, it usually does a few more explicit checks and then determines whether something is acceptable. If it’s not, it reports the information to the security server, which passes judgment on the offender and reports back to The Server on how to handle them. All of this is the basic, lowest-level implementation of an authoritative system. Depending on the requirements of the product, some engineers combine tech into a single cluster, such as developing a physics-server solution with security built directly in. Most development studios prefer to just add client-side anti-cheat and anti-hack tools, because then they only need to implement The Server and the data servers. When updating the client, that is all they are concerned with, unless there are new fields or information needed for player data. And updating a database server is far quicker and more cost-efficient than having to touch the physics, monitoring, and security servers.
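
A toy version of that cross-check-and-escalate flow, in my own shorthand rather than any studio’s real topology:

    class SecurityServer:
        def judge(self, player_id, offenses):
            # Passes judgment and tells The Server how to handle the player.
            return "ban" if offenses >= 3 else "warn"

    class MonitoringServer:
        def __init__(self, security):
            self.security = security
            self.offenses = {}

        def cross_check(self, player_id, reported, authoritative, tol=0.01):
            # Compare a reported value against the authoritative one, and
            # escalate repeated inconsistencies to the security server.
            if abs(reported - authoritative) <= tol:
                return "ok"
            n = self.offenses[player_id] = self.offenses.get(player_id, 0) + 1
            return self.security.judge(player_id, n)
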
With games like Overwatch, where you only have a few characters sending a few packets during gameplay, it’s really easy to do authoritative server architecture with no hitches. You can think of a bandwidth pipeline like a funnel for pouring fluids. An MMO’s data is so thick and overly inclusive that it’s the equivalent of pouring a gallon of reused synthetic oil into the funnel: only so much of the oil can make it down the drain at once, and everything else flows in after it. Imagine you shoot an arrow and you are at the top of the funnel opening, and imagine how long it will take for the proper report to make it to your machine, with all the other fluid ahead of it. Putting an FPS into perspective, the data is less dense and the packets are far smaller, especially if you use reasonable encryption and decryption tech; it’s the equivalent of pouring a shot glass worth of water down the funnel. That’s how quickly packets from an FPS are able to travel through the same pipeline.
So it’s not really difficult; it’s all about the bottom dollar.
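
Some back-of-envelope numbers for the funnel analogy; the figures are invented, and only the ratio matters:

    LINK_BYTES_PER_SEC = 16_000  # a modest per-client bandwidth budget

    fps_snapshot = 12 * 60    # ~12 relevant players at ~60 bytes each
    mmo_snapshot = 300 * 90   # hundreds of entities with richer state

    print(fps_snapshot / LINK_BYTES_PER_SEC)  # ~0.045 s to drain: trivial
    print(mmo_snapshot / LINK_BYTES_PER_SEC)  # ~1.7 s: must be culled,
                                              # throttled, or prioritized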

AaronVictoria

RagnarTheDrunk AaronVictoria You’re welcome. I’ve tried my hand at a few real-time combat systems that were properly server-authoritative, and it was absolutely terrible. One of the testers and I hurled fireballs and arrows at each other for literally 7 minutes before we landed a single attack. He only hit me because I wasn’t paying attention, ran into a rock, and got hung up on it long enough for him to pluck me with an arrow. We tried a lot of solutions, like speeding up the projectiles, which looked really bad on things like fireballs, frostballs, and other projectiles that needed to leave trails. We tried the Planetside 2 route, where damage colliders are scaled much larger than the model to compensate for server trajectory correction in the client. Then we tried the Halo route, where the projectile tracks the target to a very small degree in very short increments. Nothing felt right, because nothing was rewarding actual skill.
After all that R&D we just went with soft-targeting, and it works pretty well. You can see my very early work with it in Project Gorgon here:
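
For reference, here is a rough sketch of how a soft-target pick might work. This is my own illustration of the idea, not Project Gorgon’s implementation:

    import math

    CONE_HALF_ANGLE = math.radians(25)  # forgiveness window (made-up value)
    MAX_RANGE = 40.0

    def pick_soft_target(shooter, aim_dir, candidates):
        # aim_dir is assumed to be a unit vector. Returns the candidate
        # closest to the aim direction inside the cone, else None (the
        # shot then flies as a plain, unassisted projectile).
        best, best_angle = None, CONE_HALF_ANGLE
        for c in candidates:
            dx, dy = c.x - shooter.x, c.y - shooter.y
            dist = math.hypot(dx, dy)
            if dist == 0 or dist > MAX_RANGE:
                continue
            cos_a = (dx * aim_dir[0] + dy * aim_dir[1]) / dist
            angle = math.acos(max(-1.0, min(1.0, cos_a)))
            if angle < best_angle:
                best, best_angle = c, angle
        return best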

AaronVictoria

Xijit I’ve worked for a few studios that did it that way because it’s cheaper and easier to monitor. It also allows for acceptable data latency. I personally believe you should have a server cluster in a centralized location in each region you support; in some cases more than one per region is necessary, depending on how large the region is.

dragonherderx

natecanbefound It has better netcode than most ubi titles sadly lol

BalsBigBrother

Xijit VasDrakken 2/3rd I got “the” and after that it’s a bit of a blur