[HN Gopher] How do video games stay in sync?
       ___________________________________________________________________
        
       How do video games stay in sync?
        
       Author : whack
       Score  : 248 points
       Date   : 2022-05-25 23:33 UTC (2 days ago)
        
 (HTM) web link (medium.com)
 (TXT) w3m dump (medium.com)
        
       | aerovistae wrote:
       | sidenote, but:
       | 
       | > To keep that in perspective, light can barely make it across
       | the continental united states in that time
       | 
       | that is not true
        
         | jayd16 wrote:
         | Seems correct to me although they took the longest distance
         | from Florida to Washington.
         | 
         | 2800miles / c = 15ms
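A quick check of that figure, sketched in Python (the 2800-mile span is the commenter's number; the constants are standard):

```python
MILES_TO_KM = 1.609344        # international mile
C_KM_PER_S = 299_792.458      # speed of light in vacuum

def light_travel_ms(miles):
    """One-way light travel time in milliseconds."""
    return miles * MILES_TO_KM / C_KM_PER_S * 1000.0

print(round(light_travel_ms(2800), 1))  # ~15.0
```

Note this is the vacuum speed; signals in fiber travel at roughly two-thirds of c, so real latency over the same span is noticeably higher.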
        
       | forrestthewoods wrote:
       | Once upon a time (2011) I wrote a blog post about Supreme
       | Commander's netcode.
       | 
       | https://www.forrestthewoods.com/blog/synchronous_rts_engines...
       | 
       | SupCom was a pretty classic synchronous + deterministic system.
       | It's a pretty major PITA and the next RTS I worked on was more
       | vanilla client-server.
        
       | DonHopkins wrote:
       | In the earlier discussion about how you should not use text
       | pixelation to redact sensitive info, I wrote this about how when
       | re-developing The Sims into The Sims Online, the client and
       | server would get out of sync whenever a Sim would take a dump,
       | because the pixelization censorship effect used the random number
       | generator:
       | 
       | https://news.ycombinator.com/item?id=30359560
       | 
       | DonHopkins 3 months ago | parent | context | favorite | on: Don't
       | use text pixelation to redact sensitive info...
       | 
       | When I implemented the pixelation censorship effect in The Sims
       | 1, I actually injected some random noise every frame, so it made
       | the pixels shimmer, even when time was paused. That helped make
       | it less obvious that it wasn't actually censoring penises, boobs,
       | vaginas, and assholes, because the Sims were actually more like
       | smooth Barbie dolls or GI-Joes with no actual naughty bits to
       | censor, and the players knowing that would have embarrassed the
       | poor Sims.
       | 
       | The pixelized naughty bits censorship effect was more intended to
       | cover up the humiliating fact that The Sims were not anatomically
       | correct, for the benefit of The Sims own feelings and modesty, by
       | implying that they were "fully functional" and had something to
       | hide, not to prevent actual players from being shocked and
       | offended and having heart attacks by being exposed to racy
       | obscene visuals, because their actual junk that was censored was
       | quite G-rated. (Or rather caste-rated.)
       | 
       | But when we later developed The Sims Online based on the original
       | The Sims 1 code, its use of pseudo random numbers initially
       | caused the parallel simulations that were running in lockstep on
       | the client and headless server to diverge (causing terribly
       | subtle hard-to-track-down bugs), because the headless server
       | wasn't rendering the randomized pixelization effect but the
       | client was, so we had to fix the client to use a separate user
       | interface pseudo random number generator that didn't have any
       | effect on the simulation's deterministic pseudo random number
       | generator.
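The fix described above - drawing cosmetic randomness from a stream the simulation never touches - can be sketched like this (all names are hypothetical, not from The Sims codebase):

```python
import random

# Two independent PRNG streams: the simulation stream must advance
# identically on client and headless server, while the UI stream is
# cosmetic-only and free to diverge.
sim_rng = random.Random(12345)  # seed agreed between client and server
ui_rng = random.Random()        # client-only, never touches the simulation

def simulation_step(state):
    """Deterministic gameplay randomness draws ONLY from sim_rng."""
    state.append(sim_rng.randint(0, 99))

def draw_pixelation_shimmer(n=8):
    """Cosmetic noise draws ONLY from ui_rng, so rendering the effect
    (or not rendering it, on a headless server) cannot cause a desync."""
    return [ui_rng.random() for _ in range(n)]
```

Since the shimmer consumes numbers only from its own stream, a client that renders it and a server that doesn't still produce identical simulation states.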
       | 
       | [4/6] The Sims 1 Beta clip  "Dana takes a shower, Michael seeks
       | relief"  March 1999:
       | 
       | https://www.youtube.com/watch?v=ma5SYacJ7pQ
       | 
       | (You can see the shimmering while Michael holds still while
       | taking a dump. This is an early pre-release so he doesn't
       | actually take his pants off, so he's really just sitting down on
       | the toilet and pooping his pants. Thank God that's censored! I
       | think we may have actually shipped with that "bug", since there
       | was no separate texture or mesh for the pants to swap out, and
       | they could only be fully nude or fully clothed, so that bug was
       | too hard to fix, closed as "works as designed", and they just had
       | to crap in their pants.)
       | 
       | Will Wright on Sex at The Sims & Expansion Packs:
       | 
       | https://www.youtube.com/watch?v=DVtduPX5e-8
       | 
       | The other nasty bug involving pixelization that we did manage to
       | fix before shipping, but that I unfortunately didn't save any
       | video of, involved the maid NPC, who was originally programmed by
       | a really brilliant summer intern, but had a few quirks:
       | 
       | A Sim would need to go potty, and walk into the bathroom,
       | pixelate their body, and sit down on the toilet, then proceed to
       | have a nice leisurely bowel movement in their trousers. In the
       | process, the toilet would suddenly become dirty and clogged,
       | which attracted the maid into the bathroom (this was before
       | "privacy" was implemented).
       | 
       | She would then stroll over to toilet, whip out a plunger from
       | "hammerspace" [1], and thrust it into the toilet between the
       | pooping Sim's legs, and proceed to move it up and down vigorously
       | by its wooden handle. The "Unnecessary Censorship" [2] strongly
       | implied that the maid was performing a manual act of digital sex
       | work. That little bug required quite a lot of SimAntics [3]
       | programming to fix!
       | 
       | [1] Hammerspace:
       | https://tvtropes.org/pmwiki/pmwiki.php/Main/Hammerspace
       | 
       | [2] Unnecessary Censorship:
       | https://www.youtube.com/watch?v=6axflEqZbWU
       | 
       | [3] SimAntics: https://news.ycombinator.com/item?id=22987435 and
       | https://simstek.fandom.com/wiki/SimAntics
        
       | hpx7 wrote:
       | I've been working on my own realtime networking engine[0] and I
       | think there are a few important points related to network syncing
       | that are not mentioned in this article:
       | 
        | 1) Bandwidth. The user's internet can only handle so much network
       | throughput, so for fast paced games (where you're sending data to
       | each client at a rate of 20+ frames per second) it becomes
       | important to optimize your per-frame packet size. This means
       | using techniques like binary encoding and delta compression (only
       | send diffs).
       | 
       | 2) Server infrastructure. For client-server games, latency is
       | going to be a function of server placement. If you only have a
       | single server that is deployed in us-east and a bunch of users
       | want to play with each other in Australia, their experience is
       | going to suffer massively. Ideally you want a global network of
       | servers and try to route users to their closest server.
       | 
       | 3) TCP vs UDP. Packet loss is a very real problem, and you don't
       | want clients to be stuck waiting for old packets to be resent to
       | them when they already have the latest data. UDP makes a major
       | difference in gameplay when dealing with lossy networks.
       | 
       | [0] https://github.com/hathora/hathora
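Delta compression as described in point 1 can be sketched minimally like this (a toy dict-based diff; a real engine would then pack the surviving fields into a compact binary format rather than sending text):

```python
def delta_encode(prev, curr):
    """Keep only the fields that changed since the last state the
    client is known to have."""
    return {k: v for k, v in curr.items() if prev.get(k) != v}

def delta_apply(prev, diff):
    """Overlay a diff onto a known base state to rebuild the full state."""
    out = dict(prev)
    out.update(diff)
    return out
```

If only one of three fields changed per frame, the payload shrinks accordingly; binary encoding (bit-packing positions, quantizing floats) then shrinks each remaining field further.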
        
         | throwaway894345 wrote:
         | How does UDP / packet loss work with delta compression? If
         | you're only sending diffs and some of them may be lost or
         | received out of order, doesn't that break delta compression?
        
           | forgotusername6 wrote:
           | The same way it works for video codecs which also send diffs
           | over UDP. There are mechanisms to introduce redundancy in the
           | stream, ask for retransmission, handle missing information.
        
         | somenameforme wrote:
         | There's another way to look at this. The more data you send per
         | packet, the more that can be reasonably interpolated by the
          | client in between updates. Diffs also become impossible, of
          | course, in cases where you're using UDP. So for instance imagine
         | you're only sending visible targets to a player in updates, and
         | then there is a brief stutter - you end up risking having a
          | target magically warp onto the player's screen, which is
          | obviously undesirable. Pack in everybody's location (or at
          | least those within some as-the-crow-flies radius) and the
          | client experience
         | will break less frequently. Of course like you said though, now
         | the bandwidth goes up.
        
           | softfalcon wrote:
           | I've written a diffing algorithm using UDP. You tell it to
           | diff against a previous packet with an id. Every so often you
           | send a full key frame packet so they always stay in sync and
           | have full game state.
           | 
            | It works really well and cut my network traffic down by a
            | couple of orders of magnitude.
           | 
           | The trick is to figure out update grouping so you can create
           | clean groups of things to send and diff on. Ultimately delta
           | compression doesn't even care what the data is, so modern net
           | stacks do some really efficient compression in this way.
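The keyframe-plus-diff scheme described above might be sketched like this (the interval, packet fields, and class names are all assumptions, not from any real net stack):

```python
KEYFRAME_INTERVAL = 30  # send an uncompressed keyframe this often (assumption)

class DeltaSender:
    def __init__(self):
        self.seq = 0
        self.history = {}  # seq -> full snapshot kept for diffing

    def encode(self, state, base_seq):
        """Emit a diff against base_seq if we still have that snapshot,
        otherwise (or periodically) a full keyframe."""
        self.seq += 1
        self.history[self.seq] = dict(state)
        keyframe_due = self.seq % KEYFRAME_INTERVAL == 0
        if base_seq is None or keyframe_due or base_seq not in self.history:
            return {"seq": self.seq, "base": None, "data": dict(state)}
        base = self.history[base_seq]
        diff = {k: v for k, v in state.items() if base.get(k) != v}
        return {"seq": self.seq, "base": base_seq, "data": diff}

class DeltaReceiver:
    def __init__(self):
        self.states = {}  # seq -> reconstructed snapshots

    def decode(self, pkt):
        """Rebuild full state; return None if the diff's base is missing
        (e.g. a lost packet) - the next keyframe resynchronizes us."""
        if pkt["base"] is None:
            state = dict(pkt["data"])
        elif pkt["base"] in self.states:
            state = dict(self.states[pkt["base"]])
            state.update(pkt["data"])
        else:
            return None
        self.states[pkt["seq"]] = state
        return state
```

A receiver that misses a diff's base simply drops updates until the next keyframe, which is exactly the video-codec behavior Animats describes below.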
        
             | Animats wrote:
             | _I've written a diffing algorithm using UDP. You tell it to
             | diff against a previous packet with an id. Every so often
             | you send a full key frame packet so they always stay in
             | sync and have full game state._
             | 
             | Right. That's how video streams work, too. Every once in a
             | while there's a complete frame, but most frames are diffs.
        
           | ryanschneider wrote:
           | Another trade off with your approach of sending non-visible
           | entities ahead of time is that it makes wall hacks possible.
           | 
           | Anyone aware of any conceptual way to "encrypt" the location
           | data so it's only usable if the player has line of sight? I
           | doubt that's easy/possible but don't even know where to begin
           | searching for research around topics like that.
        
             | cdiamand wrote:
              | Here are some articles that address wall hacking - not
              | quite what you're looking for, but still a great read.
             | 
             | https://technology.riotgames.com/news/demolishing-
             | wallhacks-...
             | 
             | https://technology.riotgames.com/news/peeking-valorants-
             | netc...
        
             | ryanschneider wrote:
             | Not quite what I originally had in mind but interesting
             | idea of storing the remote entity locations in the trusted
             | enclave: https://lifeasageek.github.io/papers/seonghyun-
             | blackmirror.p...
        
           | hesdeadjim wrote:
           | There is a useful intermediate approach, send more entities
           | but use an importance algorithm to control how frequently
           | each has their data sent. Clients keep knowledge of more
           | entities this way, but bandwidth usage/frame can be kept
           | stable.
        
             | zionic wrote:
             | Sounds like the recent changes star citizen made via the
             | entity component update scheduler
        
           | cylon13 wrote:
           | Diffs aren't impossible over UDP. The client should be
           | sending to the server which states it has locally, along with
           | inputs. Then since the server has a (potentially non-
           | exhaustive) list of recent states it knows the client has
           | seen, it can choose to make a diff against the most recent of
           | those and tell the client which state the diff is against.
           | Then the client and server just need to keep a small buffer
           | of pre-diff-encoded packets sent/received.
        
         | ZoidCTF wrote:
         | 1) Bandwidth is pretty irrelevant now. Even players on cellular
         | networks have megabits of bandwidth. I stopped spending a large
         | amount of time optimizing for packet size while building the
          | networking for Dota 2. Nobody is playing on a 14.4k modem anymore.
         | 
         | 2) Server placement is still an issue. It's still ~200ms round
         | trip from New York to Sydney for example. Fortunately, cloud
          | infrastructure can make getting servers closer to your players
         | much easier now. You don't have to physically install servers
         | into data centers in the region.
         | 
          | 3) Packet loss still occurs, but it is now rare enough that the
          | gap between using TCP and UDP is narrowing. Modern TCP
         | implementations like Microsoft's are amazing at handling loss
         | and retransmission. However, I'd probably use QUIC for game
          | networking if I were to write an engine from scratch these days.
        
           | kabdib wrote:
           | Why not use Valve's game networking stuff? Just curious.
        
           | omegalulw wrote:
           | Another key requirement that must be considered is packet
           | ordering. With games you care about the latest state and thus
            | discarding older out-of-order packets is a better strategy
            | than stalling to deliver packets in order, as TCP would do.
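That discard-stale-packets strategy might look like this minimal sketch (sequence-number wraparound is ignored for brevity):

```python
class LatestStateReceiver:
    """Keep only the newest state; drop stale out-of-order packets."""

    def __init__(self):
        self.latest_seq = -1
        self.state = None

    def on_packet(self, seq, state):
        """Apply a packet only if it is newer than what we already have.
        Returns True if applied, False if discarded as stale."""
        if seq <= self.latest_seq:
            return False  # arrived after a newer packet: discard
        self.latest_seq = seq
        self.state = state
        return True
```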
        
             | Animats wrote:
             | You only care about the latest state for _some_ events.
             | Only events which will soon be superseded by a later event
             | should go over UDP. Move A to X, sent on every frame, fine.
             | Create monster at Y, no.
             | 
             | If you find yourself implementing reliability and
             | retransmission over UDP, you're doing it wrong. However, as
             | I mention occasionally, turn off delayed ACKs in TCP to
             | avoid stalls on short message traffic.
             | 
             | Reliable, no head of line blocking, in order delivery -
             | pick any two. Can't have all three.
        
         | TacticalCoder wrote:
         | > 1) Bandwidth. The users internet can only handle so much
         | network throughput, so for fast paced games (where you're
         | sending data to each client at a rate of 20+ frames per second)
         | it becomes important to optimize your per-frame packet size.
         | This means using techniques like binary encoding and delta
         | compression (only send diffs).
         | 
          | Games like Blizzard's Warcraft III / StarCraft II and Age of
          | Empires (linked here in this thread: "1500 archers on a
          | 28.8k modem") and oh so many other games approach that
          | entirely differently: the volume of input users can generate
          | is tinier than tiny. So instead of sending diffs of the game
          | state, they send user inputs and the times at which they
          | happened. Because their engines are entirely deterministic,
          | they can recreate the exact same game state for everybody
          | from only the timed user inputs.
          | 
          | Fully deterministic game engines also make bugs easy to
          | reproduce, and they also allow for tiny save files.
         | 
         | Negligible network traffic. Tiny save files. Bugs are easy to
         | reproduce. When the game allows it, it's the only reasonable
         | thing to do.
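The input-replay idea can be sketched like this (a toy simulation; the names and movement rules are illustrative, not from any of the games mentioned):

```python
def simulate(inputs, ticks):
    """Deterministic lockstep sketch: every client replays the same
    timed inputs through the same simulation code, so full game state
    never crosses the wire.
    inputs: iterable of (tick, player, move) triples."""
    positions = {"p1": 0, "p2": 0}
    by_tick = {}
    for tick, player, move in inputs:
        by_tick.setdefault(tick, []).append((player, move))
    for t in range(ticks):
        # Fixed, deterministic processing order within each tick is
        # essential: any ordering difference desyncs the clients.
        for player, move in sorted(by_tick.get(t, [])):
            positions[player] += move
    return positions
```

The same property gives tiny save files: a replay is just the input list, and reloading is re-running `simulate`.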
        
           | softfalcon wrote:
           | Deterministic games suffer from desync and issues with input
           | limitations. It is true that war3 does this, but it has some
           | serious drawbacks.
           | 
           | It also makes the client easier to cheat on and gives one
           | player (the host) a preferential ping.
           | 
           | Most competitive FPS's use server authoritative instead of
           | replayable deterministic because of this.
           | 
           | If you want to see the limitations, head into the old war3
           | map editor forums and look up the hacks using automated
           | clicks between placeholder variable units just to move a few
           | bytes of data between clients so they can persist character
           | stats between games.
        
           | Armisael16 wrote:
           | This presents a (relative) vulnerability to cheating. If
           | every computer has the full game state but players aren't
           | supposed to be able to know some things there is the
           | potential for hacks.
           | 
           | The most obvious version of this in StarCraft is maphacks
           | that let you see through fog of war, although that's far from
           | the only thing.
           | 
           | Poker meets all the technical requirements here, but sending
           | everyone the contents of all hands would be a disaster.
        
             | hitpointdrew wrote:
             | > Poker meets all the technical requirements here, but
             | sending everyone the contents of all hands would be a
             | disaster
             | 
              | I work in the gambling space. A few notes: gambling games
             | don't ever rely on physics (even roulette, or a coin dozer
             | type of game, everything is decided by a certified rng, no
             | regulatory body that I am aware of allows outcomes based on
             | physics engines). This means there is far less data to keep
              | state on (a hand of cards is a very tiny JSON blob to
              | send). Games like poker etc. don't require "real time":
              | if a player takes 4 seconds to decide whether they want
              | to call/raise/fold, then an extra 200ms of latency isn't
              | even going to be noticeable. So we don't really care if
              | there is a bit of latency; these aren't FPS games.
        
             | dpedu wrote:
             | This comes up in Minecraft too, and there was a small arms
             | race around it. For the unfamiliar - certain resources in
             | the game are useful and valuable but also rare (diamonds)
              | and require the player to spend a decent amount of time
             | digging through the ground to find them.
             | 
             | But, since you have the whole game state, you can sift
             | through the data and pinpoint these resources and acquire
             | them quickly and with almost no effort. In multiplayer this
             | is generally considered cheating and is called an "xray"
             | modification to the game client. There are other variations
             | of this hack that involve changing the game's textures to
             | transparent images except for the specific resources you
             | want to find.
             | 
              | Multiplayer server administrators don't like cheats so they
             | created countermeasures for this. The best example is
             | probably Orebfuscator which "hides" said valuable resources
             | until the player is very close to them.
             | 
             | https://dev.bukkit.org/projects/orebfuscator
        
               | nwiswell wrote:
               | Can't you still gain an unfair advantage using Bayesian
               | search theory where probability drops to zero at the
               | "revealing radius"?
               | 
               | Or is the "revealing radius" somewhat randomized over
               | time in a way that's invisible to the client?
        
         | mbbutler wrote:
         | How does UDP work if you're also using delta compression? I
         | would naively expect that the accumulation of lost diff packets
         | over time would cause game state drift among the clients.
        
           | toast0 wrote:
           | If you get your data small enough to fit multiple updates
           | into a single packet, you can send the last N updates in each
           | packet.
           | 
           | If your updates are bigger; you probably will end up with
           | seqs, acks and retransmitting of some sort, but you may be
           | able to do better than sending a duplicate of the missed
           | packet.
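The "last N updates in each packet" idea might be sketched as follows (N and the helper names are assumptions):

```python
N_REDUNDANT = 3  # how many recent updates ride in every packet (assumption)

def build_packet(updates, seq):
    """Bundle the last N updates (ending at seq) into one packet, so a
    single lost packet costs nothing if any of the next N-1 arrive."""
    window = updates[max(0, seq - N_REDUNDANT + 1): seq + 1]
    return {"seq": seq, "updates": window}

def receive(packet, last_applied_seq):
    """Return only the updates newer than what we've already applied,
    plus the new high-water mark."""
    start = packet["seq"] - len(packet["updates"]) + 1
    fresh = [u for i, u in enumerate(packet["updates"])
             if start + i > last_applied_seq]
    return fresh, packet["seq"]
```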
        
             | hpx7 wrote:
             | Exactly, you assign a sequence number to each update, have
             | the client send acks to convey which packets it has
             | received, and the server holds onto and sends each unacked
             | update in the packet to clients (this is an improvement
             | over blindly sending N updates each time, you don't want to
             | send updates that you know the client has already
             | received).
             | 
             | If the client misses too many frames the server can send it
             | a snapshot (that way the server can hold a bounded number
             | of old updates in memory).
        
               | shepherdjerred wrote:
               | You just described TCP
        
               | vvanders wrote:
               | TCP forces sequencing across all packets, SCTP is a bit
               | closer.
        
               | xyzzyz wrote:
               | It's not TCP, it's TCP without head-of-line blocking,
               | which makes it much more suitable for real time games.
        
               | hpx7 wrote:
               | It's close but TCP will retransmit frames rather than
               | packing multiple updates in a single frame.
               | 
               | It's common for people to build this kind of
               | retransmission logic on top of UDP (especially for
               | networked games), it's sometimes referred to as "reliable
               | UDP".
        
           | Matheus28 wrote:
           | The simplest way I've done it: say client and server start on
           | tick 1, and that's also the last acknowledgement from the
           | client that the server knows about. So it sends a diff from 1
           | to 2, 1 to 3, 1 to 4, until server gets an ack for tick 3,
           | for example. Then server sends diffs from 3 to 5, 3 to 6,
           | etc. The idea is that the diffs are idempotent and will take
           | the client to the latest state, as long as we can trust the
           | last ack value. So if it's a diff from 3 to 6, the client
           | could apply that diff in tick 3, 4, 5 or 6, and the final
           | result would be the same.
           | 
           | This is done for state that should be reliably transmitted
           | and consistent. For stuff that doesn't matter as much if they
           | get lost (explosion effects, or what not), then they're
           | usually included in that packet but not retransmitted or
           | accounted for once it goes out.
           | 
           | This is different (and a lot more efficient) than sending the
           | last N updates in each packet.
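That scheme might look like the following sketch, where diffs carry absolute values (not increments), so applying one from any tick at or after its base yields the same final state (class and field names are hypothetical):

```python
class AckedDiffServer:
    """Always diff against the last tick the client acknowledged.
    Because the diff holds absolute field values, it is idempotent:
    a client at tick 3, 4, 5, or 6 applying a 3->6 diff lands on the
    identical tick-6 state."""

    def __init__(self, initial):
        self.snapshots = {1: dict(initial)}
        self.last_ack = 1
        self.tick = 1

    def advance(self, new_state):
        self.tick += 1
        self.snapshots[self.tick] = dict(new_state)

    def make_diff(self):
        base = self.snapshots[self.last_ack]
        curr = self.snapshots[self.tick]
        data = {k: v for k, v in curr.items() if base.get(k) != v}
        return {"base": self.last_ack, "tick": self.tick, "data": data}

    def on_ack(self, tick):
        # Acks may arrive out of order; only move the baseline forward.
        self.last_ack = max(self.last_ack, tick)
```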
        
             | crdrost wrote:
             | That is a fascinating use of idempotence, bravo!
        
           | softfalcon wrote:
           | You don't delta compress everything, only a significant part
           | of the payload. Each diff is referential to a previous packet
           | with a unique id. If you don't have the previous packet, you
           | just ignore the update.
           | 
           | Every 30 frames or so, you send a key frame packet that is
           | uncompressed so that all clients have a consistent
           | perspective of world state if they fell behind.
           | 
           | Using sentTime lets clients ignore old data and interpolate
           | to catch up if behind as well.
           | 
           | It does work, I wrote one from scratch to create a
           | multiplayer space rpg and the bandwidth savings were
           | incredible.
        
       | travisgriggs wrote:
       | This reminded me of some of the stuff that went into TeaTime in
       | Croquet-OS (https://en.wikipedia.org/wiki/Croquet_OS)
        
         | chillpenguin wrote:
         | I don't know if you have seen, but the old Croquet team is
         | back: https://www.croquet.io/
        
       | sowbug wrote:
       | Any autonomous-vehicle engineers here? This problem seems
       | similar, if not identical. Are self-driving cars the "clients"
       | that predict the state of the "server" that is the real world?
        
         | sgtnoodle wrote:
         | I think the analogy would be more that cars, pedestrians, and
         | obstacles to avoid are all peers, and the real world is the
         | network.
        
         | jayd16 wrote:
         | I think it's a stretch. In games you only have to deal with out
         | of date but accurate information. In AI you have fuzzy images
         | to interpret.
        
       | Waterluvian wrote:
       | I know this might oversimplify, or perhaps is obvious to many,
       | but when I got into amateur game dev one surprise was realizing
       | that real-time games are just turn-based games where you don't
       | control the turns.
        
         | Tarsul wrote:
         | that explains how it was possible for the developers of Diablo
         | 1 to turn the game from turn-based to real-time in a single
         | afternoon :)
         | 
         | yes, originally it was developed as turn-based and I often
         | wonder if that's one reason why the animations are sooo
         | satisfying. But could be that they simply had great animators.
        
         | mikeyjk wrote:
         | Seems reasonable. Although I would tweak it to say you can't
         | control when each turn ends.
        
         | taneq wrote:
         | I've never thought of it in that specific way before (although
         | obviously when you're writing the game that's how it goes) and
         | that's a great way to explain it. Thanks!
        
         | ytch wrote:
          | I learned it from games that were ported to a newer platform
          | with a faster FPS. For example, Dark Souls II ran at 30 FPS
          | at release, then was upgraded to 60 FPS when ported to PC -
          | but doubling the FPS also doubled the pace at which weapons
          | break.
        
           | the_only_law wrote:
           | I'm currently close to abandoning bannerlord, a medieval
           | combat game where less latency gives you a better chance in
           | MP.
        
             | cinntaile wrote:
             | Isn't that how it's supposed to be? If you have a good
             | connection, you get the game state earlier than someone
             | with a worse connection.
        
           | Waterluvian wrote:
           | There's an architecture to avoid this by decoupling the
           | simulation tickrate from the renderer.
           | 
           | The really simple way is to just pass a delta milliseconds to
           | each system so it can simulate the right amount of time.
           | 
           | But yeah, it was wild how DS2 at 60fps fundamentally altered
           | a lot of things like weapon durability and jump distance.
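Decoupling the simulation from the renderer is commonly done with a fixed-timestep accumulator; a minimal sketch, with made-up durability numbers:

```python
TICK_MS = 1000 / 30  # simulation always advances in fixed 30 Hz steps

def run_frames(frame_times_ms, durability=100.0, wear_per_tick=0.5):
    """Render at any fps, but consume accumulated frame time in
    fixed-size simulation ticks, so gameplay values (like weapon wear)
    do not scale with frame rate."""
    acc = 0.0
    for dt in frame_times_ms:
        acc += dt
        while acc >= TICK_MS:
            acc -= TICK_MS
            durability -= wear_per_tick  # wear is per tick, not per frame
    return durability
```

Rendering one second at 30 fps or 60 fps now produces the same number of simulation ticks, which is exactly the property the DS2 port lacked.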
        
           | jayd16 wrote:
            | From seems to be guilty of this even today. Elden Ring
           | recently had a dog that did more bite damage at higher fps. I
           | guess they're just ticking damage every frame the dog head
           | and the player collider overlap.
        
         | jstimpfle wrote:
         | If you mean the networking architecture, that seems indeed like
         | an oversimplification. AFAIK lockstep synchronization isn't a
          | good networking strategy, and most games' netcode will have
         | some prediction and rollback components.
        
           | taneq wrote:
           | You're conflating game logic with engine logic here -
           | whatever hijinks the engine pulls to make things seem
            | seamless (prediction / replay etc.) are (or at least SHOULD
           | be) orthogonal to the game logic. In game logic terms all
           | games are turn based because there are no infinitely fast
           | computers. The turns just happen at your game's tick rate
           | which might be pretty quick.
        
             | jstimpfle wrote:
             | Where did I make a distinction between engines and game
             | logic? How can you say I conflated anything?
             | 
             | Anyway. I only have unfinished attempts at low-latency
             | netcode and rollback, so can't say I'm speaking from solid
             | experience. But I would doubt that engines implement
             | rollback netcode for you. Essentially the game needs to be
              | structured in a way to accommodate storage of game state as
             | snapshots. And it needs to decide how to incorporate
             | messages that arrive late.
        
               | jayd16 wrote:
               | Again you're conflating netcode with game rules. The
               | players don't know of any rollbacks. That's not part of
               | the game, just the implementation of the client.
               | 
               | The comment was that (surprisingly) all games are single
               | threaded and feel very turn based. Even real-time games.
        
             | DarylZero wrote:
             | Even with an infinitely fast client computer, the server
             | shouldn't accept infinite ticks from one player before
             | processing ticks from another.
        
         | extrememacaroni wrote:
          | for those unfamiliar, it's because the game's main loop
          | looks like:
          | 
          |     while (!exitRequested) {
          |         player.updateState();
          |         someCharacter.updateState();
          |         someOtherCharacter.updateState();
          |     }
         | 
          | you could in theory make these kinds of updates in parallel,
          | but then the entire game becomes non-deterministic chaos, and
          | trying to deal with synchronising threads in a context like
          | this is such a nightmare - and, I'm sure, intractable
          | performance-wise. did anyone even try this, ever?
         | 
         | bottom line, real-time or turn-based, a piece of code needs to
         | execute before or after another, not at the same time.
         | 
          | the order in which things take their "turn" each frame
          | becomes more important the more complex the game gets, btw -
          | so even the order in which things execute serially cannot be
          | entirely arbitrary.
         | usually for things that depend on other things to update their
         | state in order to accurately update their own state. which is a
         | lot of things in every game. for example, you wanna update the
         | text on the GUI that says how much gold the player has. you'll
         | update the text after everything that could have influenced the
         | gold this frame has updated (i.e. at the end of the frame).
         | player input state (keyboard input e.g.) is updated at the
         | beginning of the frame before you make the player's character
         | do anything based on input.
         | 
         | particular stuff can be parallelized or turned into coroutines
         | that "update a little bit each loop" so as to not kill
         | performance. like pathfinding, a character needs to go from
         | point A to point B, he doesn't really need to find the whole
         | path _now_. a partial path while a separate thread calculates
         | the entire path can do. or just make him think a bit while the
         | pathfinding thread finds a path, the advantage is characters
         | thinking about their actions is also realistic :P
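          | 
          | a minimal sketch of that fixed per-frame ordering (Python, all
          | names made up):

```python
# One frame of a single-threaded game loop with a fixed update order:
# input is sampled first, entities act on it, and the GUI reads the
# resulting state last. All names here are illustrative.
class World:
    def __init__(self):
        self.gold = 0
        self.pending_pickup = 0
        self.gold_label = ""

    def read_input(self):
        # 1. sample input at the start of the frame
        self.pending_pickup = 5

    def update_entities(self):
        # 2. everything that can influence gold runs here
        self.gold += self.pending_pickup

    def update_gui(self):
        # 3. GUI text updates last, once gold is final for this frame
        self.gold_label = f"Gold: {self.gold}"

def run_frame(world):
    world.read_input()
    world.update_entities()
    world.update_gui()

world = World()
run_frame(world)
print(world.gold_label)  # Gold: 5
```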
        
           | Waterluvian wrote:
           | I learned about system order the hard way.
           | 
           | A system would try to apply damage to a ship that was already
           | destroyed.
           | 
           | It taught me that you often have to flag things and then do a
           | cleanup phase at the end. So destroyed = true but don't
           | outright delete the entity until the end of the "turn."
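            | 
            | a sketch of that flag-then-sweep pattern (Python, names are
            | made up):

```python
# Entities are flagged as destroyed during the frame, but only removed
# in a cleanup phase at the end of the "turn", so systems that run later
# in the same frame can still see the flag and skip the entity.
class Ship:
    def __init__(self, hp):
        self.hp = hp
        self.destroyed = False

def damage_system(ships, amount):
    for s in ships:
        if s.destroyed:      # already dead this frame: skip it
            continue
        s.hp -= amount
        if s.hp <= 0:
            s.destroyed = True   # flag only, don't delete yet

def cleanup_phase(ships):
    # the actual deletion happens once, after every system has run
    return [s for s in ships if not s.destroyed]

ships = [Ship(10), Ship(3)]
damage_system(ships, 5)  # second ship gets flagged, not deleted
damage_system(ships, 1)  # safe: the flagged ship is skipped
ships = cleanup_phase(ships)
print(len(ships))  # 1
```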
        
           | ratww wrote:
           | _> did anyone even try this, ever?_
           | 
           | I think it depends on the granularity.
           | 
            | Coarse-grained parallelism is already common: things like AI
            | (as in your example), Physics, Resource Management, Shader
            | Compilation, Audio, and even Netcode are commonly run in
            | separate threads.
           | 
            | However you won't see the _updateInputs()_, all the
            | _updateState()_, and sometimes even _render()_ running in
            | parallel, for two reasons: first, because it's often cheaper
            | and easier/simpler to run in the main thread than dispatching
            | async jobs anyway. Second, because each operation often
            | depends on the previous ones, and you can't wait until the
            | next frame to take it into account: you often want instant
            | feedback.
           | 
           | However these things can in theory be run in parallel without
           | becoming chaos. ECS is often very parallelizable: you can run
           | multiple Systems in different threads, as long as the output
           | of one System is not a dependency of another running at the
            | same time. You could also process multiple components of a
            | system in multiple threads, but that would negate ECS's main
            | advantage: being cache-friendly by virtue of running in a
            | single thread.
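            | 
            | a toy illustration of that constraint (Python, not a real
            | ECS): two systems with disjoint outputs can run at the same
            | time, while a system reading another's output must wait.

```python
# Two "systems" writing to disjoint component arrays run concurrently;
# anything that reads positions would have to be scheduled afterwards.
from concurrent.futures import ThreadPoolExecutor

positions = [0.0, 10.0]
velocities = [1.0, -2.0]
healths = [100, 50]

def physics_system():
    # writes positions, reads velocities
    for i, v in enumerate(velocities):
        positions[i] += v

def regen_system():
    # writes healths only: shares no data with physics_system
    for i in range(len(healths)):
        healths[i] += 1

with ThreadPoolExecutor() as pool:
    # disjoint outputs -> safe to dispatch in parallel
    f1 = pool.submit(physics_system)
    f2 = pool.submit(regen_system)
    f1.result()
    f2.result()

print(positions, healths)  # [1.0, 8.0] [101, 51]
```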
        
             | Waterluvian wrote:
              | I think Bevy in Rust demonstrates this well. Your systems
              | are structured in a dependency graph and, I think, the
              | scheduler automatically figures out what can be done in
              | parallel.
        
           | CodeArtisan wrote:
           | Naughty Dog developers did a talk at GDC 2015 where they
           | explain how they parallelized their game engine.
           | 
           | https://www.gdcvault.com/play/1022186/Parallelizing-the-
           | Naug...
           | 
            | > _for example, you wanna update the text on the GUI that
            | says how much gold the player has. you'll update the text
            | after everything that could have influenced the gold this
            | frame has updated (i.e. at the end of the frame)._
           | 
            | Modern game engines are pipelined; you render the previous
            | frame's logic. In the aforementioned talk, they show a
            | pipeline three stages deep, looking like this:
            | 
            |     [FRAME]           [FRAME+1]          [FRAME+2]
            |     [LOGIC]           [LOGIC+1]          [LOGIC+2]
            |     [RENDER LOGIC]    [RENDER LOGIC+1]
            |     [GPU RENDERING]
            | 
            | each stage is independent and doesn't require syncing. they
            | call that "frame centric design".
        
             | extrememacaroni wrote:
             | Ooooh this is like that                 for {          a[i]
             | = something;          b[i] = something;          c[i] =
             | a[i] + b[i]; // can only do a and b at the same time
             | because c depends on them       }                 // handle
             | a[0],b[0]       for {         a[i] = something;
             | b[i] = something;         c[i-1] = a[i-1] + b[i-1]; // can
             | do all 3 at the same time because no deps.       }
             | // handle c[last]
             | 
             | optimization I saw in a talk about how CPUs can do many
             | instructions at once if they don't depend on each other.
             | 
             | I was unaware of how something like this could play into
             | game engines at the loop level, thanks for the link I'll
             | watch it asap.
        
             | hypertele-Xii wrote:
             | This system introduces yet more lag, which is increasingly
             | awful kinesthetically. It makes modern games feel...
             | sticky, sluggish, unresponsive, imprecise. We've gone from
             | instant feedback to ridiculous degrees of input latency.
             | 
             | You press a button. It takes 1 frame for the signal from
             | your USB device to get polled through the OS into your
             | engine. Then it takes 1 frame for the input to affect the
             | logic. Then it takes 1 frame for that change in logic to
             | get "prepared" for rendering. Then it takes 1 frame for the
             | GPU to draw the result. And depending on your video
             | buffering settings and monitor response time, you're still
             | adding a frame until you see the result.
             | 
              | If you're running 60 frames per second, that's an abysmal
              | 83 milliseconds of lag on _every player input_. And that's
              | _before network latency_.
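              | 
              | the same accounting as a sketch (Python; the five stages
              | are the ones listed above):

```python
# Summing the per-stage pipeline delays at 60 fps, one frame per stage.
FRAME_MS = 1000 / 60  # ~16.7 ms per frame

stages = {
    "input poll": 1,       # USB/OS polling into the engine
    "game logic": 1,
    "render prep": 1,
    "gpu draw": 1,
    "buffer/display": 1,   # video buffering + monitor response
}

total_frames = sum(stages.values())
total_ms = total_frames * FRAME_MS
print(f"{total_frames} frames = {total_ms:.0f} ms")  # 5 frames = 83 ms
```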
        
               | extrememacaroni wrote:
                | Most people barely notice or mind, which is unfortunate.
                | Elden Ring has so much input lag but most don't even
                | notice.
        
               | bluescrn wrote:
               | And then there's the output latency of LCD TVs. Seems
               | less of a problem than it used to be, but some older TVs
               | could easily add 50-100ms of latency (particularly if not
               | switched to Game Mode, but even a game mode didn't
               | guarantee great results)
               | 
               | But these days it's hard enough to convince people that
               | 'a cinematic 30fps' really really sucks compared to 60
               | (or better), and there's an even smaller number of
               | gamers/devs who seem to notice or care about latency
               | issues.
        
         | taeric wrote:
          | Seems a fine way to think about things. Essentially turn-based
          | games treat player actions as a clock tick?
        
           | [deleted]
        
           | extrememacaroni wrote:
           | They're real-time games but over the characters looms an all-
           | powerful controlling entity (like a TurnBasedActionController
           | object) that will only allow one character at a time to think
           | about and execute an action. The others can do fun taunting
           | animations but not much more.
        
       | superjan wrote:
        | A colleague told me once that his manager went into a panic after
       | realizing that their multiplayer game would be unplayable over
       | the network... Unbeknownst to said manager, the debug builds all
       | this time had a 1 second delay built in to ensure all code would
       | be able to deal with real-world delays.
        
         | gfodor wrote:
          | I'm guessing you're simplifying, but a fixed delay like this
          | is a good way to fool yourself, since a system built to handle
          | fixed latency can go to pieces in the face of jitter and
          | packet loss.
        
       | dbttdft wrote:
        | This article explains what everyone figures out when making
        | their own engine (basing it off guessing what other games do and
        | reading 2 blogposts instead of diving into philosophical rabbit
        | holes). It also misses that most games are made with
       | frameworks/libs now that give the dev no control over what things
       | require round trips to the server (I assume this is the
       | explanation for that; Fortnite took 2 years to fix weapon switch
       | lag and LoL still has round trip bugs).
       | 
       | What immediately happens in practice with interpolation, is that
       | every second player has a bad network connection and so you get
       | unevenly spaced movement packets from him, and he just warps from
       | here to there (via teleport, or smooth movement, neither of which
       | look good or are possible in single player mode), among other
       | problems. Interpolation also adds latency which is already a
       | constrained area. Your game should already be designed to handle
       | a constant rate of in and outbound packets and is broken if it
       | sends them on demand instead of packing them into this stream of
       | evenly paced constant rate packets. If you can't send packets for
        | a few frames you should be punished until you switch ISPs, as
        | opposed to getting an advantage because you teleport somewhere
        | on the other clients' screens. The idea of "right away" is a
       | misconception here. Since you get packets at a constant rate
       | (which should be high enough to be displayed directly on screen),
       | interpolation is not necessary. 60 tick is literally nothing, for
       | a game with less than 100 players and little to no moving
       | objects. Of course, most gamedevs are not concerned with this
       | stuff as the framerate drops to 10 in most games if 1-6 players
       | are on screen depending on how shit their game is. Also, client
       | side damage is a mistake.
        
         | maccard wrote:
         | > also misses that most games are made with frameworks/libs now
         | that give the dev no control over what things require round
         | trips to the server (I assume this is the explanation for that;
         | Fortnite took 2 years to fix weapon switch lag
         | 
         | I can't speak for riot/league but I've worked in unreal for
         | close to a decade and the engine provides complete control over
         | what requires a round trip. I won't speculate on the Fortnite
         | weapon switch lag (although I did work for epic on Fortnite at
         | the time), but these kinds of bugs happen in the same way any
         | other bug happens - accidental complexity. You call a function
         | that you know requires a round trip, but then 6 months later
         | someone else calls your function but doesn't realise that
         | theres a round trip in there.
         | 
         | > Since you get packets at a constant rate (which should be
         | high enough to be displayed directly on screen), interpolation
         | is not necessary.
         | 
         | This is just nonsense. There is no such thing as a perfect
         | connection, particularly when you're communicating across the
         | internet. Even in your perfect world situation it doesn't work
         | - if both you and I are 16 ms away from the server and it's
         | running at 60hz (which is a stretch too - many games are
         | running at much lower update rates because of the expense of
         | running these services), in the worst case you have over 60ms
         | of latency to handle, which is 4 frames.
         | 
         | > Of course, most gamedevs are not concerned with this stuff as
         | the framerate drops to 10 in most games if 1-6 players are on
         | screen depending on how shit their game is
         | 
         | This is the sort of comment I expect on Reddit and not here.
         | Most developers, myself included would do anything in their
         | power to avoid that happening, and I can't think of a single
         | game that drops to even close that bad that was released in the
         | last decade.
         | 
         | > Also, client side damage is a mistake.
         | 
         | Client side hit detection is a trade off. On one end it allows
         | for abuse, but most client interpolation systems (including the
         | one that comes out of the box with unreal engine) will mostly
         | negate that. On the other, it allows for local-feeling play in
         | a huge number of situations.
        
         | oneoff786 wrote:
         | Fortnite is on Epic's own engine so that seems a little
         | unlikely.
        
         | hasel wrote:
          | > Also, client side damage is a mistake.
          | 
          | I'm assuming you mean client side hit detection. As a person
          | who lives in a region where a lot of games are 100 ping,
          | that's absolutely necessary. Without it, players with latency
          | above the tick rate would have to lead their shots to hit
          | other players. While it causes some unfairness for the victim
          | (ex. getting hit behind cover), it's still the best way to do
          | it, but it must be disabled at a certain threshold, preferably
          | one where it doesn't completely ruin the experience for
          | players with average connections playing on their closest
          | server. That said, it is a band-aid and ideally you would just
          | set up servers closer to players.
        
         | s-lambert wrote:
         | > It also misses that most games are made with frameworks/libs
         | now that give the dev no control over what things require round
         | trips to the server (I assume this is the explanation for that;
         | Fortnite took 2 years to fix weapon switch lag and LoL still
         | has round trip bugs).
         | 
         | League of Legends is using a custom game engine built from the
         | ground up, it's always been a really buggy mess though.
        
       | SahAssar wrote:
        | A great talk about this is the one about Halo Reach's networking:
        | https://www.youtube.com/watch?v=h47zZrqjgLc
        | 
        | I haven't seen it in a few years (re-watching it now), but IIRC
        | they talk about how they do forecasting of things like shielding
        | and grenade throws but need to reconcile state afterwards.
        
       | elbigbad wrote:
       | I am not a real-time systems engineer and know very little about
       | the tech, but do know a lot about network engineering. I was
       | honestly kind of disappointed by this article because it seemed
       | like the network and latency side of things was just "YOLO" and
       | the solutions were basically "interpolate with AI" and that was
       | it. I was hoping for more insights into solving the problems but
       | instead feel like it was more "here are ways to make it appear
       | that there's no problem."
       | 
       | Definitely open to being wrong on this opinion.
        
       | greggman3 wrote:
        | One thing I don't see mentioned is basing the movement of some
        | things on a synced clock.
        | 
        | On a desktop machine, open this link in multiple windows and
        | size the windows so they are all at least partially visible at
        | the same time:
        | 
        | http://greggman.github.io/doodles/syncThreeJS/syncThreeJS.ht...
        | 
        | They should all be in sync because they are basing the position
        | of the spheres on the system clock.
        | 
        | A networked game can implement a synced clock across systems and
        | move some things based on that synced clock.
        | 
        | Burnout 3 did this for NPC cars. Once a car is hit, its position
        | needs to be synced, but before it is hit it's just following a
        | clock-based path.
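        | 
        | a sketch of the idea (Python; the path function and its
        | parameters are made up): position is a pure function of the
        | shared clock, so no per-frame sync traffic is needed.

```python
# Deterministic clock-driven movement: every machine evaluates the same
# function of a shared clock, so they agree without exchanging state.
import math

def npc_position(t_ms):
    radius, period_ms = 50.0, 4000.0  # car loops a circle every 4 s
    angle = 2 * math.pi * (t_ms % period_ms) / period_ms
    return (radius * math.cos(angle), radius * math.sin(angle))

# two "clients" with synced clocks compute identical positions
print(npc_position(0))  # (50.0, 0.0)
```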
        
       | bombela wrote:
        | Unrelated to the topic, but at first the blog only contained an
        | introduction, no content. Because others had been commenting,
        | clearly I was missing something. I reopened the page a few
        | times, and at some point (some JavaScript, I suppose?) it
        | started loading the text.
        | 
        | How is it possible for this medium service to be so spectacularly
        | bad? We are talking about text here...
        | 
        | Edit: Did more tests. The page updates and loads the content up
        | to 5s after the initial load. There is no feedback. What a
        | miserable experience.
        
       | ofek wrote:
       | This is also an interesting writeup by Valve:
       | https://developer.valvesoftware.com/wiki/Latency_Compensatin...
        
       | ivan_ah wrote:
       | This is pretty well explained, and the visualizations make it
       | understandable.
       | 
        | An example of a netcode that does "prediction" and "rollback" is
       | GGPO, which is used in fighting games:
       | https://en.wikipedia.org/wiki/GGPO
       | 
       | I believe a version of this is what runs in fightcade2 (see
       | https://www.fightcade.com/), which is the best fighting
        | experience I've ever seen. I can play against people all the way
        | around the world, and it still works. Very impressive, and highly
        | recommended to anyone in Gen X or Y who grew up on Street Fighter.
        
       | porkbrain wrote:
       | Related interesting read:
       | 
       | 1500 Archers on a 28.8: Network Programming in Age of Empires and
       | Beyond
       | 
       | https://www.gamedeveloper.com/programming/1500-archers-on-a-...
        
         | jamesu wrote:
         | Speaking of networking older games, I think the "TRIBES Engine
         | Networking Model" is also an interesting read. Managing ~128
         | players over the internet in a single game back in the late
         | 90's was no mean feat. A lot of these kinds of optimizations
         | are still greatly applicable even today!
         | 
         | https://www.gamedevs.org/uploads/tribes-networking-model.pdf
        
           | y-c-o-m-b wrote:
           | I'm a Tribes 1 and 2 vet. I think the largest games I played
           | were still capped at 64 people. It was definitely some
           | impressive network code, for sure. Latency was still a huge
           | issue, but what really helped that game out was the fact that
           | it basically required being able to accurately predict
           | everything yourself. You constantly had to predict how high
           | up you could jetpack, when to initiate the jetpack, where you
           | should land on a slope, where you should jump from, where to
           | shoot "ahead" to get "mid-air" shots (shooting people in the
           | air as they're jetting past you at incredible speeds). This
           | act of priming oneself to the game's fast-paced environment
           | made the latency far more tolerable than it probably would
           | have been.
        
       | 10000truths wrote:
       | Glenn Fiedler's Gaffer on Games is a worthwhile read for anyone
       | who wants to dive into the technical details of networked
       | physics:
       | 
       | https://gafferongames.com/categories/game-networking/
        
       | Animats wrote:
       | "Client side interpolation" is a term misused in the game world.
       | It's really client side extrapolation. Interpolation is
       | generating more data points within the range of the data.
       | Extrapolation is generating more data points off the end of the
       | data.
       | 
       | Interpolation error is bounded by the data points on both sides.
       | Extrapolation error is not bounded, which is why bad
       | extrapolation can produce wildly bogus values. So you need
       | filtering, and limits, and much fussing around.
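        | 
        | the difference in a toy linear case (Python; the function names
        | are mine):

```python
# Interpolation stays bounded by the known samples on both sides;
# linear extrapolation uses the same formula but runs off the end of
# the data, so its error grows without bound as t increases.
def lerp(p0, p1, t):
    # t in [0, 1]: result always lies between p0 and p1
    return p0 + (p1 - p0) * t

def extrapolate(p0, p1, t):
    # t > 1: same formula, but no longer bounded by the data
    return p0 + (p1 - p0) * t

p0, p1 = 100.0, 110.0   # positions at two known ticks

print(lerp(p0, p1, 0.5))         # 105.0, within [100, 110]
print(extrapolate(p0, p1, 3.0))  # 130.0, and growing with t
```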
        
       | sbf501 wrote:
       | Carmack wrote a great article on this a decade or so ago, but now
       | every time I search for "carmack + latency + network + games" I
       | get millions of hits about his recent VR stuff. Thanks SEO.
       | Anyone remember the original article?
        
         | [deleted]
        
         | fuckcensorship wrote:
         | Pretty sure this is it:
         | https://web.archive.org/web/20130305033105/http://www.altdev...
         | 
         | If not, maybe this:
         | https://fabiensanglard.net/quakeSource/johnc-log.aug.htm.
         | 
         | Or maybe in this archive:
         | https://fabiensanglard.net/fd_proxy/doom3/pdfs/johnc-
         | plan_19....
        
           | sbf501 wrote:
           | The first one is it! Thanks!!
        
         | bruce343434 wrote:
         | His #AltDevBlog posts are no longer available, maybe that has
         | to do with it. I too could only find stuff having to do with
         | render latency (in the context of VR).
        
         | tines wrote:
         | Can always add a "-vr" term to your search.
        
       | reiziger wrote:
       | Just out of curiosity: does anyone know any real-time multiplayer
       | game that runs the physics simulations on the server side? AFAIK,
       | only Rocket League is doing this (and they have some good talks
       | about it on GDC).
        
         | bob1029 wrote:
         | Here is the GDC talk:
         | https://www.youtube.com/watch?v=ueEmiDM94IE
         | 
         | Of particular note is the fact that they run their physics loop
         | at 120hz to minimize error.
        
         | maccard wrote:
         | Multiplay Crackdown 3 (I worked on it) runs a fairly meaty
         | physics simulation on the server side.
        
         | DecoPerson wrote:
          | Physics on the server? Nearly every shooter game. Server-
          | authoritative is the way. As per the article, the clients only
          | predict some physics objects (which requires simulation) and
          | interpolate & extrapolate others (which does not). The clients
          | have no authority over the server's simulation, other than
          | their own player inputs.
        
         | GartzenDeHaes wrote:
         | Minecraft does physics and lighting on the server, although the
         | physics model is very simplistic.
        
           | strictfp wrote:
            | Physics is also run on the client, but the server is
           | authoritative and can correct or override client decisions.
        
         | Thaxll wrote:
          | Pretty much every serious online game does it.
        
         | piinecone wrote:
         | For my game, King of Kalimpong, I run physics on the client and
         | the server. The server is the boss but the client feels good.
         | 
         | I suspect most games where movement is primarily physics-based
         | are doing this, but who knows, netcode tends to be very game-
         | specific.
        
         | antris wrote:
         | If it's anything serious/competitive and has to have integrity
         | without having trust between enemy players, the server _has_ to
         | run the simulations. Otherwise it would be extremely easy to
         | cheat just by modifying the client code and memory.
        
           | bruce343434 wrote:
            | Though you can have a client-side physics engine running as
            | an interpolation between authoritative server states.
        
       | robwwilliams wrote:
       | What a great article!
       | 
       | For those of you interested in cognition, our brains have almost
       | precisely the same problem of temporal delay and prediction.
       | 
       | Each of many sensory-motor systems has about 50 to 150
        | milliseconds of jitter and offset. And the jitter and offset in
        | timing depend on many factors---the intensity of the stimulus and
        | your state of mind, for example.
       | 
       | How does the brain help "consciousness" create an apparently
       | smooth pseudo-reality for us from a noisy temporal smear of
       | sensory (sensory-motor) input spread out over 100 milliseconds or
       | more?
       | 
       | It is damn hard, but the CNS plays the same games (and more) as
       | in this great Medium article--interpolation, smoothers, and
       | dynamic forward prediction.
       | 
       | Just consider input to your human visual system: color-encoding
       | cone photoreceptors are relatively fast to respond--under 30
       | msec. In contrast, the rod photoreceptors are much slower
       | integrators of photons--up to 100 msec latencies. So even at one
       | spot of the retina we have a serious temporal smear between two
       | visual subsystems (two mosaics). It gets much worse--the far
       | periphery of your retina is over a centimeter from the optic
        | nerve head. Activity from this periphery connects slowly over
        | unmyelinated fibers. That adds even more temporal smear relative
        | to the center of your eye.
       | 
        | And then we have the very long conduction delays of action
        | potentials going from the retina to first base--the dorsal
        | thalamus--and then finally to second base--the primary visual
        | cortex at the very back of your head. That is a long distance,
        | and the nerve fibers have conduction velocities ranging 100-fold:
        | from 0.3 meters/sec to 30 meters/sec.
       | 
       | The resulting input to layer four of your visual cortex should be
       | a complete temporal mess ;-) But it isn't.
       | 
       | What mechanisms "reimpose" some semblance of temporal coherence
       | to your perception of what is going on "out there"?
       | 
        | Neuroscientists do not spend much time thinking about this problem
       | because they cannot record from millions of neurons at different
       | levels of the brain.
       | 
       | But here is a good guess: the obvious locus to interpolate and
       | smooth out noise is the feedback loop from visual cortex to
        | dorsal thalamus. I mentioned the dorsal thalamus (formally the
        | dorsal lateral geniculate nucleus) as "first base" for visual
        | system input. Actually it gets 3X more descending input from the
        | visual cortex itself--a massive feedback loop that puzzles
        | neuroscientists.
       | 
       | This huge descending recurrent projection is the perfect Bayesian
        | arbiter of what makes temporal sense. In other words the
        | "perceiver", the visual cortex, provides a descending feedback to
        | its own input that tweaks synaptic latencies in dorsal thalamus
        | to smooth out the visual world for your operational game-playing
        | efficacy. This feedback obviously cannot remove all of the
       | latency differences, but it can clean up jitter and make the
       | latencies highly predictable and dynamically tunable, via top-
       | down control.
       | 
       | Quite a few illusions expose this circuitry.
       | 
       | for more @robwilliamsiii or labwilliams@gmail.com
        
         | bruce343434 wrote:
         | Do you have any example illusions? Lately I notice that I read
         | a lot of sentences wrong, and a second reading gives different
         | words (though both readings were made up of similar letters),
         | perhaps my feedback loop is too strong on the prediction?
        
       | swyx wrote:
       | Can't read it, ran out of free medium dot com articles.
        
         | kencausey wrote:
         | Open in Private/Incognito mode?
        
         | andai wrote:
         | https://archive.ph/Io4BF
        
         | nyanpasu64 wrote:
         | https://scribe.rip/e923e66e8a0f
        
           | Hackbraten wrote:
           | Thank you - TIL that's a thing!
        
         | DarylZero wrote:
         | https://github.com/iamadamdev/bypass-paywalls-chrome
        
           | swyx wrote:
           | i was on mobile :(
           | 
           | but more to the point im trying to get self respecting
           | developers OFF of medium
        
       | bob1029 wrote:
       | One elegant solution to this problem is to keep 100% of the game
       | state on the server and to stream x264 to clients.
       | 
       | I think if you engineered games _explicitly_ for this use case,
       | you could create a much more economical path than what products
       | like Stadia offer today.
        
         | dylan604 wrote:
          | But it's not just one video render. It is one video render per
          | user. People have to build pretty decent rigs to get their
          | single view of the game to render at speed. Multiplying that
          | by the number of players seems wildly expensive for a single
          | machine; it's a non-starter after 3 seconds of thought.
        
         | Const-me wrote:
          | The article is about hiding latency between other people's
          | input and your display. While it happens, your own input only
          | has a couple of frames of latency.
         | 
          | The solution you proposed introduces latency between your own
          | input and your own display. Unless the server is in your house,
          | or at least within 100km of it, that latency is gonna make the
          | game unplayable.
        
           | toast0 wrote:
           | Unless the server is in my house, I get at least 20ms round
           | trip (thanks bonded VDSL2), so I'm a frame behind already.
           | Add input sampling, codec delays, and video output delays and
           | I'm at least two frames, probably three. That's going to be
           | not great.
        
         | orev wrote:
         | This seems so obvious to me that I'm surprised Stadia doesn't
         | already do that (not having done any real reading on how it
         | works; just making assumptions based on the marketing I've
         | seen).
         | 
         | I just assumed it was a video stream with a touch control
         | overlay.
        
         | dbttdft wrote:
          | There's a very small chance that cloud gaming may one day
          | work. Even if that small window of opportunity exists, the
          | incompetent game industry will miss it.
        
         | milgrim wrote:
         | How is that elegant? In return for easy synchronization the
         | server has to perform ALL computations. Sounds more like brute
         | force to me.
        
           | atq2119 wrote:
           | Depending on how high the encode/decode/transmit overhead is,
           | it might be a more efficient use of resources. Most game
           | consoles and gaming PCs are idle most of the time.
           | Centralizing the rendering in a data center is going to yield
           | better resource utilization rates.
           | 
           | Mind you, they're still not going to be _great_ rates,
           | because the data center needs to be geographically close and
           | so utilization will vary a lot in daily and weekly patterns.
           | 
           | Then again, maybe you can fill all those GPUs with ML
           | training work when the gaming utilization is low...
        
             | milgrim wrote:
             | Latency has a big impact on user experience in most
             | multiplayer games and players with a low latency have an
             | advantage in many games. I am not sure how satisfied people
              | will be with low-cost, low-powered systems made for game
             | streaming in the long run when other users have a much
             | better experience and even advantages in gameplay.
        
           | addingnumbers wrote:
           | It's very elegant depending on your goals.
           | 
            | It's the ultimate DRM for games: you can't crack a game's
            | copy protection if you can't see its binaries. You can't
            | data-mine the locations of valuables if you don't have the
           | map data. You can't leak unreleased assets ahead of their
           | marketing debut if you don't have a copy of the assets. You
           | can't expose all the Easter eggs by decompiling if you don't
           | have the code. With subscription and premium currency models
           | those abilities can all be interpreted as lost revenue.
           | 
           | The markets for people buying $400 consoles vs buying a $20
           | HDMI stick and a $15/mo subscription are very different.
           | After the colossal (and to me, surprising) rise of mobile
           | gaming I think the latter might be where the real money will
           | be 10 years from now.
           | 
           | They'll address the bottlenecks on the data center end. I'm
           | pretty sure you can list a dozen problems that make it
           | prohibitively expensive right now and for every one of them
           | some Microsoft or NVIDIA engineer can tell you how they are
           | working on engineering away that problem in a couple years.
        
             | milgrim wrote:
             | Of course, it solves a lot of problems. But you have to pay
             | for it by increasing the load on the servers by orders of
             | magnitude. This makes it a brute force approach in my
             | opinion.
             | 
              | This of course depends on your meaning of "elegant", but
              | for me it would imply solving these issues without
              | increasing the server load so much. Let the client decide
              | as much as possible, but check the decisions randomly and
              | in case of suspicious player stats. And DRM for multiplayer
              | games should be no problem anyway: verify the accounts of
              | your users, but that applies to Stadia-like services and
              | also the "conventional" ones. Solving the data mining
             | issue is another topic and yes, giving the server more
             | authority for things that the player should be able to see
             | might be the only way to deal with this. But maybe the
             | server could hand out client specific decryption keys when
             | they are needed? That would be elegant, and not just
             | keeping all the content server-side.
             | 
             | Game streaming services will find their place, but they
             | address mainly the entry hurdle and not the issue of game
             | state synchronization.
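The random spot-checking idea above can be sketched in a few lines (Python; all names are hypothetical, and `validate` stands in for whatever server-side rule re-checks a client decision):

```python
import random

def audit_client_moves(reported_moves, validate, sample_rate=0.05,
                       rng=random):
    """Re-validate only a random sample of client-reported moves
    server-side, instead of recomputing every one of them."""
    flagged = []
    for move in reported_moves:
        # Spot-check with probability sample_rate; a cheater is
        # eventually caught even though most moves go unchecked.
        if rng.random() < sample_rate and not validate(move):
            flagged.append(move)
    return flagged
```

A `sample_rate` of 1.0 degenerates back to full server authority; a real system would presumably also weight the sample toward players with suspicious stats, as the comment suggests.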
        
         | viktorcode wrote:
         | You don't have to stream encoded video from the server. It is
         | much faster and simpler to stream game state and have the game
         | clients render it. Essentially, that's how all the games with
         | "authoritative server" work.
        
         | rossnordby wrote:
         | Another related option that sidesteps a big chunk of
         | perceptible latency is to send clients a trivially
         | reprojectable scene. In other words, geometry or geometry
         | proxies that even a mobile device could draw locally with up to
         | date camera state at extremely low cost. The client would have
         | very little responsibility and effectively no options for
         | cheating.
         | 
         | View-independent shading response and light simulations can be
         | shared between clients. Even much of view-dependent response
         | could be approximated in shared representations like spherical
         | gaussians. The scene can also be aggressively prefiltered;
         | prefiltering would also be shared across all clients.
         | 
         | This would be a massive change in rendering architecture,
         | there's no practical way to retrofit it onto any existing
         | games, and it would still be incredibly expensive for servers
         | compared to 'just let the client do it', and it can't address
         | game logic latency without giving the client more awareness of
         | game logic, but... seems potentially neat!
        
         | lewispollard wrote:
         | Then you have a server rendering essentially 4 or 8 or 32 or
         | whatever individual games, capturing and encoding them to a
         | streamable format, and streaming via a socket or WebRTC or
         | whatever to the client. The client then sees the action and
         | inputs a control; the control has to get back to the server to
         | be processed, the server renders the next frame for every
         | client and sends them all back, and only then does the client
         | see the result of their action.
         | 
         | Doesn't seem elegant to me, it seems like a way to have wildly
         | uncontrollable latency, and have one player's poor connection
         | disrupt everyone else's experience.
         | 
         | I have a Steam Link hardware device to stream games from my PC
         | to my TV over ethernet LAN, and even that can have issues with
         | latency and encoding that make it a worse experience than just
         | playing on the PC.
        
         | kaetemi wrote:
         | Stream OpenGL rendering calls from the server to the client.
        
           | Const-me wrote:
           | That works fine for simple 3D scenes. One application, that's
           | how MS remote desktop worked until recently -- Windows
           | streamed Direct3D rendering calls to the client.
           | 
            | However, I'm not sure this is gonna work with modern
            | triple-A games. The current-gen low level GPU APIs were
            | designed to allow developers of game engines to _saturate
            | bandwidth of PCI-express_ with these rendering calls.
            | That's too many gigabytes/second for networking, I'm
            | afraid.
        
           | bob1029 wrote:
           | Not all clients have the same capabilities with regard to 3d
           | hardware. Virtually any modern device can decode x264 without
           | difficulty.
        
         | GuB-42 wrote:
         | It doesn't solve the problem, in fact, it limits your options.
         | And I may be wrong, but x264 doesn't seem to be the
         | lowest-latency codec.
         | 
         | There are essentially two ways of dealing with latency: input
         | lag (wait until we know everything before showing player
         | actions) and prediction/rollback (respond immediately trying to
         | guess what we don't know yet and fix it later, what is shown in
         | the article). Games often do a mix of both, like a bit of input
         | lag for stability and prediction to deal with latency spikes.
         | With video streaming, you limit your prediction options.
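A minimal sketch of the prediction/rollback approach described above (Python, with a toy deterministic "game"; all names are hypothetical): apply inputs immediately, snapshot every tick, and when the real remote input arrives for a past tick and contradicts the prediction, restore that tick's snapshot and re-simulate forward.

```python
from copy import deepcopy

class RollbackSim:
    """Toy prediction/rollback loop: advance with a predicted remote
    input, snapshot each tick, re-simulate when the real input arrives."""

    def __init__(self, state):
        self.state = state                    # e.g. {"p1": 0, "p2": 0}
        self.history = {0: deepcopy(state)}   # tick -> state snapshot
        self.inputs = {}                      # tick -> (local, remote) used
        self.tick = 0

    def step(self, local, predicted_remote):
        # Apply inputs immediately: no input lag for the local player.
        self.inputs[self.tick] = (local, predicted_remote)
        self._apply(local, predicted_remote)
        self.tick += 1
        self.history[self.tick] = deepcopy(self.state)

    def confirm_remote(self, past_tick, actual_remote):
        local, predicted = self.inputs[past_tick]
        if predicted == actual_remote:
            return  # prediction was right, nothing to fix
        # Roll back to the snapshot before the mispredicted tick...
        self.state = deepcopy(self.history[past_tick])
        self.inputs[past_tick] = (local, actual_remote)
        # ...and deterministically re-simulate up to the present.
        for t in range(past_tick, self.tick):
            l, r = self.inputs[t]
            self._apply(l, r)
            self.history[t + 1] = deepcopy(self.state)

    def _apply(self, local, remote):
        # Trivial deterministic "game": each input moves a player.
        self.state["p1"] += local
        self.state["p2"] += remote
```

The re-simulation only produces the correct result because `_apply` is deterministic, which is why rollback netcode and deterministic simulation usually go hand in hand.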
        
         | TinkersW wrote:
         | That defeats most of what this article is talking
         | about--hiding the perceived latency--in addition to the other
         | issues like higher server CPU & bandwidth requirements and
         | poor video quality.
        
         | datalopers wrote:
         | Most online games are 100% authoritative on the server already.
         | Streaming rendered video is absurdly inefficient and does not
         | help with the problems.
        
           | AustinDev wrote:
           | Yeah, I've worked on AAA games with great networking stacks
           | and the network traffic is orders of magnitude lower than
           | streaming something like 4k@60fps to the client over x264.
        
           | alduin32 wrote:
            | It is inefficient, but it makes cheating much harder
            | (though unfortunately still possible; it can never be
            | prevented completely).
        
             | setr wrote:
             | Keeping the server 100% authoritative but maintaining all
             | rendering on the client with no interpolation gets you all
             | of the same benefits and drawbacks as shipping video, and
             | ease of implementation, but with dramatically lower network
              | costs. It's not really done, however, except for
              | turn-based games, because you still have horrendous input
              | latency. It's also entirely the same defense against
              | cheating, except I suppose a user could do things like
              | edit assets locally, but I don't think anyone cares about
              | that concern.
       | paraknight wrote:
        | I did my PhD on exactly this area (thesis on yousefamar.com; DM
        | me if you'd like to chat about gamedev and netcode!),
        | optimising for cost and easy setup to support indie devs.
       | 
       | Afterwards, when I tried to validate this as a product
       | (libfabric.com) I realised that I'm trying to solve a problem
       | that nobody has, except for a very small niche. The main problem
       | that developers of large-scale networked games have is acquiring
       | players. They use whatever existing service+SDK (like Proton) for
       | networking and don't think about it. Once they have good
       | traction, then they can afford scalable infrastructure that
       | others have already built, of which there are many.
        
       ___________________________________________________________________
       (page generated 2022-05-28 23:00 UTC)