Graal-like Server Architecture

Handling player data and compensating for lag often comes with a lot of trade-offs. More precision = more CPU + bandwidth, so we need some tricks up our sleeves.
After reading https://developer.valvesoftware.com/…yer_Networking, I’m curious what sort of features a server should have for lag compensation, and, for those currently developing, what you actually did.
Having never coded anything of this nature before, I like the idea of having the server send out a fixed number of ‘snapshots’ of the player’s current level per second while caching a second’s worth of previous position data for each player. That would be extremely useful for hit detection when sparring, but it also adds a lot of complexity and would take a fair amount of development time to get right.
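
Here’s roughly what I’m picturing, sketched in Go just to keep it concrete (the names, the one-second window, and the nearest-snapshot lookup are all my own guesses, not anything from an actual engine): the server keeps about a second of position history per player and rewinds to the stored snapshot closest to the time the attacker says they fired.

[CODE]
package main

import (
	"fmt"
	"time"
)

// Snapshot is one recorded position for a player at a given server time.
type Snapshot struct {
	At   time.Time
	X, Y float64
}

// History keeps roughly one second of snapshots for one player.
type History struct {
	buf []Snapshot
}

// Record appends a snapshot and drops anything older than one second.
func (h *History) Record(s Snapshot) {
	h.buf = append(h.buf, s)
	cutoff := s.At.Add(-time.Second)
	for len(h.buf) > 0 && h.buf[0].At.Before(cutoff) {
		h.buf = h.buf[1:]
	}
}

// At returns the stored snapshot closest to t, so a hit can be checked
// against where the target actually was when the attacker fired.
func (h *History) At(t time.Time) (Snapshot, bool) {
	if len(h.buf) == 0 {
		return Snapshot{}, false
	}
	best := h.buf[0]
	for _, s := range h.buf[1:] {
		if absDur(s.At.Sub(t)) < absDur(best.At.Sub(t)) {
			best = s
		}
	}
	return best, true
}

func absDur(d time.Duration) time.Duration {
	if d < 0 {
		return -d
	}
	return d
}

func main() {
	now := time.Now()
	var h History
	h.Record(Snapshot{At: now.Add(-500 * time.Millisecond), X: 10, Y: 20})
	h.Record(Snapshot{At: now, X: 40, Y: 20})
	// A shot reported ~450ms ago gets checked against the older position.
	past, _ := h.At(now.Add(-450 * time.Millisecond))
	fmt.Println(past.X, past.Y) // 10 20
}
[/CODE]

The appeal to me is that the server does the rewind, so the client never gets to decide whether a hit landed.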

Thoughts?


[Attachment: Screenshot.png]

After I finished reading your post I was thinking, “isn’t Smalltalk some kind of low-level network programming language?” Now I know it’s not, but digging deeper into Wikipedia I stumbled onto refactoring.

I don’t know shit. XD

Edit:
vbulletin doesn’t like carets, Rammy… try wrapping your stuff in code tags.

edit edit:
fdsfasdfdasf////dsaf<<
Ok, maybe it wasn’t carets.

edit edit edit:
I meant accented letters… yes, those.

vbullshitin’

Edit: I will never succumb to the limitations of shitty forum code.

So it sounds like each time the client is updated (potentially every frame), the client writes to the ‘galera cluster’/short-term DB, which still resides on the server side since it needs to push data to the other clients. That sounds like something I’d rather avoid due to the excessive amount of network traffic/bandwidth, even though the size of the packets would be minuscule.
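
What I’d lean towards instead is decoupling the frame rate from the send rate: buffer the inputs locally and only flush them to the socket on a fixed timer. A rough sketch in Go (the rates and field names are made up purely for illustration):

[CODE]
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// Input is whatever the client samples each frame.
type Input struct {
	Seq  uint32 `json:"seq"`
	Up   bool   `json:"up"`
	Down bool   `json:"down"`
}

func main() {
	inputs := make(chan Input, 256)               // filled every frame by the game loop
	send := time.NewTicker(50 * time.Millisecond) // ~20 packets/s instead of one per frame
	defer send.Stop()

	// Fake 60fps game loop feeding inputs, just for the example.
	go func() {
		var seq uint32
		for range time.Tick(16 * time.Millisecond) {
			seq++
			inputs <- Input{Seq: seq, Up: true}
		}
	}()

	var pending []Input
	for {
		select {
		case in := <-inputs:
			pending = append(pending, in) // buffer locally, don't touch the socket yet
		case <-send.C:
			if len(pending) == 0 {
				continue
			}
			packet, _ := json.Marshal(pending) // one small packet covering several frames
			fmt.Printf("would send %d bytes covering %d frames of input\n", len(packet), len(pending))
			pending = pending[:0]
		}
	}
}
[/CODE]

No input gets dropped, but the packet count goes down to a third or less of sending every frame.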

Just so you know, I don’t have any kind of fancy infrastructure like this. Instead, I’m just using plain old Java sockets, where I have to encode and parse messages on both the client and server ends. Either way, thank you for sharing your approach. Very interesting to see what others are doing.

Believe me when I say you do not want a “Graal-like server architecture” unless you want a countless army of 12-year-olds to hack your game and do things like wallhack with ease. Graal took many shortcuts to achieve what was ultimately pretty laggy anyway, even for its time (although I respect Stefan as a developer for being able to complete such a large project by himself). Those shortcuts simply don’t scale. What you see in Graal is nothing like the Valve Source document: it stores x and y values on the client instead of validating them with the server. No credible modern multiplayer game does this, including Valve’s multiplayer games and World of Warcraft. Any that do are riddled with hackers (usually Asian action MMOs that cheat to achieve their smooth action combat and try to get around it by essentially rootkitting your computer with a malware-like anti-cheat program, which usually doesn’t work anyway).

Never trust the client; that is a hacker’s heaven. Always validate everything on the server with every snapshot, including every single player input and wall collision. Will it create more delay? Yes, but that’s where all the client-side lag compensation techniques and prediction come in.
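
To make the “never trust the client” point concrete, here’s a minimal Go sketch (the speed cap, the clamp, and the blocked() collision check are placeholders, not anybody’s real code): the client only ever sends an intent to move, and the server applies it against its own limits and its own collision map.

[CODE]
package main

import "fmt"

// PlayerState lives only on the server; clients never send positions, only inputs.
type PlayerState struct {
	X, Y float64
}

// MoveInput is the only thing the client is allowed to send about movement.
type MoveInput struct {
	DX, DY float64 // intended direction for this tick, expected in [-1, 1]
}

const maxSpeedPerTick = 4.0 // server-defined, not negotiable by the client

// clamp keeps a client-supplied axis value inside [-1, 1].
func clamp(v float64) float64 {
	if v > 1 {
		return 1
	}
	if v < -1 {
		return -1
	}
	return v
}

// blocked is a stand-in for the server's own collision map lookup.
func blocked(x, y float64) bool {
	return x < 0 || y < 0 || x > 640 || y > 480
}

// ApplyInput validates and applies one tick of input on the server.
// The client never gets to say "I am at (x, y)"; it only gets to ask to move.
func ApplyInput(p *PlayerState, in MoveInput) {
	dx := clamp(in.DX) * maxSpeedPerTick
	dy := clamp(in.DY) * maxSpeedPerTick
	nx, ny := p.X+dx, p.Y+dy
	if blocked(nx, ny) {
		return // reject the move; client prediction gets corrected by the next snapshot
	}
	p.X, p.Y = nx, ny
}

func main() {
	p := PlayerState{X: 100, Y: 100}
	ApplyInput(&p, MoveInput{DX: 50, DY: 0}) // a hacked client trying to teleport
	fmt.Println(p.X, p.Y)                    // still only moved one tick's worth: 104 100
}
[/CODE]

Even if a hacked client sends DX = 50, the worst it can do is move at the server’s own speed cap.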

I’ve already added Counter-Strike- and Tribes-style client-side prediction, entity interpolation and server reconciliation to my engine a while ago, with a fixed tickrate of 60Hz. That Valve documentation was the first thing I read, among other things. It’s not perfect, some things need to be tweaked, and I need to create a better entity interpolation algorithm with tweening and extrapolation. It’s not quite as smooth as something like this, but it’s getting there:
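
In case anyone wants to see what the entity interpolation part boils down to, here’s a stripped-down Go sketch of the usual buffering approach (the 100ms delay and the names are just illustrative, not my engine’s actual code): remote entities are rendered a fixed delay in the past so there are always two buffered snapshots to blend between, even with jitter on top of a 60Hz tickrate.

[CODE]
package main

import (
	"fmt"
	"time"
)

// Snapshot is a server state update for one remote entity.
type Snapshot struct {
	At   time.Time
	X, Y float64
}

// Interpolator renders remote entities a fixed delay in the past so there are
// always two snapshots to blend between.
type Interpolator struct {
	Delay time.Duration // e.g. 100ms, roughly 2-3 ticks at 60Hz
	buf   []Snapshot
}

func (it *Interpolator) Push(s Snapshot) { it.buf = append(it.buf, s) }

// PositionAt blends the two snapshots that straddle (now - Delay).
func (it *Interpolator) PositionAt(now time.Time) (x, y float64, ok bool) {
	t := now.Add(-it.Delay)
	for i := 1; i < len(it.buf); i++ {
		a, b := it.buf[i-1], it.buf[i]
		if !t.Before(a.At) && t.Before(b.At) {
			f := float64(t.Sub(a.At)) / float64(b.At.Sub(a.At))
			return a.X + (b.X-a.X)*f, a.Y + (b.Y-a.Y)*f, true
		}
	}
	return 0, 0, false // not enough history yet; caller can extrapolate or snap
}

func main() {
	now := time.Now()
	it := Interpolator{Delay: 100 * time.Millisecond}
	it.Push(Snapshot{At: now.Add(-150 * time.Millisecond), X: 0, Y: 0})
	it.Push(Snapshot{At: now.Add(-50 * time.Millisecond), X: 10, Y: 0})
	x, y, _ := it.PositionAt(now)
	fmt.Println(x, y) // halfway between the two snapshots: 5 0
}
[/CODE]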

And it’s using Google’s Go, which is an extremely fast, relatively new language (not quite as fast as highly optimized C++, Rust, or any other systems language, obviously), but the great concurrency features make it more optimized than 99.9% of what anyone is actually going to write in C++ or Rust anyway. It also just saves time.

I originally considered NodeJS, but the biggest problem with NodeJS is that you can’t compile to a binary, so people wouldn’t really be able to host their own servers, at least not without installing NodeJS, which is just a pain for the end user. It’s also single-threaded, which limits scale (at least without adding major complexity like node clusters). So ultimately I decided to go with Go so I can release .exe/bin server binaries for users to host their own servers painlessly. Although Google has made great strides with JavaScript in recent years with the V8 engine and NodeJS, Go is also just way faster in general because it’s a compiled language, while JavaScript is JIT (just-in-time) compiled, which means it can only be so fast.

But I’m proud of what’s there already. Sometimes I see up to a 1000ms delay/ping from my server, but that’s most likely because I’m running on a shit VPS. Eventually I’m going to upgrade to a high-end VPS or dedicated server for testing, because I don’t want lag compensation to mean the game happens a full second in the past (contrary to popular belief, there are some twitch shooters that go this far), with interpolation in between. I’ll also probably let users customize the tickrate in their server settings in the future.

My server is fairly optimized and multithreaded: it has one core receiving socket data, one core sending socket data, and it splits work onto more cores whenever heavy calculations or processing need to be done, with extremely fast serialization (benchmarked as some of the fastest of any method right now, pretty cutting edge), so I’m pretty proud of that. Even then I’m still planning other server optimizations like dynamic throttling for scale, but at the moment it’s more than good enough. I don’t have time right now, but I’ll probably write a more detailed post about how it works on my blog eventually if anyone is interested.
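
For anyone curious, here’s very roughly what that reader/writer split looks like in Go, heavily simplified and with made-up details (UDP, the port number, and the placeholder serialization are all just for the sketch, it’s not my actual server code): one goroutine only reads from the socket, one only writes, and a fixed 60Hz loop in the middle drains inputs and queues snapshots.

[CODE]
package main

import (
	"log"
	"net"
	"time"
)

func main() {
	conn, err := net.ListenUDP("udp", &net.UDPAddr{Port: 9000})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	type packet struct {
		from *net.UDPAddr
		data []byte
	}
	in := make(chan packet, 1024)  // inbound client packets
	out := make(chan packet, 1024) // outbound snapshots

	// Dedicated reader: does nothing but pull data off the socket.
	go func() {
		buf := make([]byte, 1500)
		for {
			n, addr, err := conn.ReadFromUDP(buf)
			if err != nil {
				return
			}
			data := make([]byte, n)
			copy(data, buf[:n])
			in <- packet{from: addr, data: data}
		}
	}()

	// Dedicated writer: does nothing but push queued data out.
	go func() {
		for msg := range out {
			conn.WriteToUDP(msg.data, msg.from)
		}
	}()

	// Fixed 60Hz simulation loop; heavy work can be fanned out to more
	// goroutines from here.
	clients := map[string]*net.UDPAddr{}
	tick := time.NewTicker(time.Second / 60)
	defer tick.Stop()
	for range tick.C {
	drain:
		for {
			select {
			case msg := <-in:
				clients[msg.from.String()] = msg.from
				// ...validate and apply the inputs in msg.data to the world...
			default:
				break drain
			}
		}
		state := []byte("serialized snapshot") // placeholder for the real serialization
		for _, addr := range clients {
			out <- packet{from: addr, data: state}
		}
	}
}
[/CODE]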

Also, if anyone is interested in the papers where I got all these ideas, check out the papers on Quake III, obviously the Source engine lag compensation wiki, the Tribes networking whitepaper, and the book “Multiplayer Game Programming” by Joshua Glazer, which references most of these papers and other things like Doom, and goes over the history of how the different multiplayer game development techniques came to be.

Thanks for taking the time to write this post. I’ll definitely acquaint myself with those sources when I get the chance. Unfortunately, it looks like this is going to take a lot longer than I anticipated to get right.

That’s how I felt when I first looked into it too. But you’ll thank yourself in the long run, and it’s just good knowledge to have in general. I definitely recommend picking up that ebook by Joshua Glazer if you can find it for cheap; it puts all these concepts together. I think the best thing you can do for now is focus on getting networking in general working with sockets, and try to keep your code as clean and modular as possible so that you can make changes and apply these techniques as you go along (or just make a messy prototype/testbed and scrap it; you can still learn something from that. I had to do that a few times).

“Plain old Java sockets” are definitely good enough for this sort of thing. I’ve never used sockets with Java before, nor am I a big fan of Java, but Java and other VM languages like C# are very fast nowadays and comparable to well-written C/C++ in some cases.

https://benchmarksgame.alioth.debian.org/u64q/java.html

https://benchmarksgame.alioth.debian…java&lang2=gpp

Modern Java is also faster than C# in some benchmarks.

https://benchmarksgame.alioth.debian…ng2=csharpcore