Last time, I finished up talking about latency issues for networked video games. Today I want to move on to the other major network resource, which is bandwidth. Compared to latency, bandwidth is much easier to discuss. This has been observed many times, perhaps most colorfully by Stuart Cheshire in "It's the Latency, Stupid". Though that article is pretty old, many of its points still ring true today. To paraphrase a bit, bandwidth is not as big a deal as latency because:

- Analyzing bandwidth requirements is easy, while latency requirements are closely related to human perception and the nature of the game itself.
- Bandwidth scales cheaply, while physical constraints impose limitations on latency.
- Optimizing for low bandwidth usage is a purely technical problem, while managing high latency requires application-specific design tradeoffs.

But while it is not as tough a problem, optimizing bandwidth is still important for at least three reasons:

1. Some ISPs implement usage-based billing via either traffic billing or download quotas. In this situation, lowering bandwidth reduces the material costs of running the game.
2. High bandwidth consumption can increase latency, due to a higher probability of packets being split or dropped.
3. Bandwidth is finite, and if a game pushes more data than the network can handle, it will drop connections.

The first of these is not universally applicable, especially for small, locally hosted games. In the United States at least, most home internet services do not have download quotas, so optimizing bandwidth below a certain threshold is wasted effort. On the other hand, many mobile and cloud-based hosting services charge by the byte, and in those situations it can make good financial sense to reduce bandwidth as much as possible.

The second issue is also important, at least up to a point. If updates from the game do not fit within a single MTU, then they will have to be split apart and reassembled. This increases transmission delay, propagation delay, and queuing delay. Again, reducing the size of a state update below the MTU will have no further effect on performance.

It is the third issue, though, that is the most urgent. If a game generates data at a higher rate than the overall throughput of the connection, then eventually the saturated connection will increase the latency towards infinity. In this situation, the network connection is effectively broken, rendering all communication impossible and causing players to drop from the game.

Variables affecting bandwidth

Asymptotically, the bandwidth consumed by a game is determined by three variables:

- n, the number of players
- f, the frequency of updates
- s, the size of the game state

In this model, the server needs to send O(s f) bits/(second * player), or O(n s f) bits total. Clients have a downstream bandwidth of O(s f) and an upstream of just O(f). Assuming a shared environment where every player sees the same state, the state size must grow at least linearly in the number of players, that is s ∈ Ω(n). This leaves us with two independent parameters, n and f, that can be adjusted to reduce bandwidth.

The easiest way to conserve bandwidth is to cap the number of players. For small competitive games, like Quake or StarCraft, this is often more than sufficient to solve all bandwidth problems. However, in massively multiplayer games, the purpose of bandwidth reduction is really to increase the maximum number of concurrent players.

Slowing down the update frequency also reduces bandwidth consumption. The cost is that the delay in processing player inputs may increase by up to O(1/f). Increasing f can help reduce lag, but only up to a point, beyond which the round-trip time dominates.
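To make the asymptotic model above concrete, here is a small sketch of the server's send rate. All names and constants (function names, the 20 Hz tick rate, the per-entity byte count) are illustrative assumptions, not from the article; the point is only to show the O(n s f) total and how a shared state with s ∈ Ω(n) makes server bandwidth grow quadratically in the player count:

```python
# Sketch of the bandwidth model: n players, f updates/second, s bytes of state.
# Names and constants are illustrative assumptions, not from the article.

def server_bits_per_second(n_players: int, update_hz: float, state_bytes: int) -> float:
    """Total server send rate: each of n players receives the state f times per second."""
    per_player = state_bytes * 8 * update_hz   # O(s * f) bits/second per client
    return per_player * n_players              # O(n * s * f) bits/second total

# In a shared world every player sees everyone else, so the state size itself
# grows linearly with the player count (s in Omega(n)), and the server's total
# bandwidth therefore grows quadratically in n.
BYTES_PER_ENTITY = 32  # assumed per-player share of a state snapshot

for n in (10, 100, 1000):
    state = n * BYTES_PER_ENTITY
    print(n, server_bits_per_second(n, update_hz=20, state_bytes=state))
```

Going from 10 to 100 players in this toy model multiplies the server's outgoing traffic by 100, which is why the rest of the article's parameters, capping n or lowering f, matter so much at scale.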
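The MTU point above can be sketched the same way: an update larger than one MTU is fragmented into multiple packets and must be reassembled, while shrinking an update that already fits in a single packet buys nothing. The MTU value and function name below are illustrative; real limits depend on the link:

```python
import math

MTU_BYTES = 1500  # typical Ethernet payload limit; illustrative and link-dependent

def packets_for_update(update_bytes: int, mtu: int = MTU_BYTES) -> int:
    """Number of fragments a state update is split into before reassembly."""
    return max(1, math.ceil(update_bytes / mtu))

# A 4000-byte update splits into 3 fragments, adding transmission, propagation,
# and queuing delay. A 1400-byte and a 200-byte update both travel as a single
# packet, so optimizing below the MTU has no further effect on performance.
assert packets_for_update(4000) == 3
assert packets_for_update(1400) == packets_for_update(200) == 1
```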