Button Button

A pattern for a distributed game engine

This is mostly from a chat log from over a year ago. I think the pattern is still interesting (the problems certainly haven't gone away), but more needs to be done to minimize the processing requirements of using a blockchain, including achieving the desired properties in distributed ledgers with lower computational and governance overhead.

Virtual Assets

The current problem with online collectibles is that you have to rely on gatekeepers to certify ownership of virtual assets. Few gatekeepers have motivation to support asset transfers in any meaningful way, and none of them are interested in ensuring that their product retains value if they cease operating the game. Even with these horrible fundamentals, people still invest tens and sometimes hundreds of thousands of dollars in asset portfolios in online games.

Among the other issues this presents, the financial viability of the game developer becomes a factor in the marketplace. So games from established development houses may generate more revenue than indie games, even if they're derivative in nature and boring to play. Incremental improvement, and appeal to the psychology of addiction rather than enjoyment, are already significant issues in the industry.

Indie developers may need to sell to established houses to unlock the market value in their ideas. When this happens, they often won't have much leverage, because the established houses have more experience in contract law and access to offshore labor to replicate the concept with more polished art assets.

A Collectible Game as an Example of Distributed Trust

Now imagine a collectible game where anyone interested in the game can maintain their own copy of the record of who owns which assets. The game publisher contributes to an open source game engine that supports the game mechanics coded in their decks. The developer is compensated from the initial sale of the game asset. The existence of an aftermarket economy independent of the original developer means that buyers can always enjoy the value of their virtual assets regardless of what happens to the original developer. This increases the ability of indie developers to unlock the market potential of concepts that are strong creatively. The potential to design such games for play on ad hoc, peer-to-peer, and federated networks reduces operating costs by an order of magnitude, in the same manner that electronic issue reduces publishing costs.

Reducing operating costs and risk premiums for games is how we can create vibrant ecosystems that support creators. This is the opposite of walled gardens, where creators of original games are unpaid labor clearing space for big content. The walled garden creates a power imbalance between players and producers that ultimately allows the gardeners to cease effective curation, leaving a poor substitute for an ecosystem.

World Generation

There are ways to apply this to other genres, like sector-based, procedurally generated worlds played on distributed systems like the one I worked on with a designer twenty years ago. The trust required for his vision of a distributed universe wasn't mathematically supported then, so the idea wouldn't have scaled outside of a play-testing community. Until now.

The world itself is a shared asset collection. The programming code to generate locations and entities is open. The random seed for generating a location in the world is obtained by hashing a fixed value, called a salt, with the location ID. For games where secrets or partial information aren't an important factor in game play, the salt can be public information and player interactions can be appended to the ledger.
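The seed derivation described above can be sketched in a few lines. This is a minimal illustration, not the original design: the function name `location_seed`, the choice of SHA-256, and the salt and location ID values are all assumptions made for the example.

```python
import hashlib
import random

def location_seed(salt: bytes, location_id: str) -> int:
    """Derive a deterministic seed for a location by hashing the salt
    with the location ID. Anyone holding the salt can regenerate the
    location exactly; nobody can predict it without the salt."""
    digest = hashlib.sha256(salt + location_id.encode("utf-8")).digest()
    return int.from_bytes(digest, "big")

# Every client derives identical world content from the same seed.
rng = random.Random(location_seed(b"public-game-salt", "sector:12,7"))
terrain = [rng.choice("~.^#") for _ in range(16)]
```

Because the generator code is open and the seed is a pure function of salt and location ID, any two clients holding the salt generate the same location independently, with no world data transferred between them.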

Games relying on secrets can have information scopes in a public ledger defined by public key cryptography.

If it's a 4X or another type of game that requires scoped interactions, then the salt is a secret kept by a trust service. The trust service exists to generate random seeds and one-time passwords agnostic of game logic, so it can scale independently of the games it services, but it must rely on a service with access to the game logic for information about whether an agent has permission to access a location.

Accessing a location for the first time creates a ledger for that location, initialized with a one-time password (OTP) and the random seed for the location, generated by a cryptographic hash of the salt and the location ID. These are encrypted with the public keys of the agents which have access, one of which is the game service that adjudicates access when agents change locations. As long as the list of agents in a location remains unchanged, the ledger is an append-only transaction log of actions in the location that affect state. When an agent leaves a location, the departure entry terminates an OTP chain and a new OTP is generated for encrypting the next chain. When a new agent enters a location, the game service (not the trust service) must generate a state description to be appended to the ledger and inform the trust service of the new agent in the location. The game service may periodically validate the activity in a location and publish a state summary.
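The OTP-chained location ledger can be sketched as follows. This is a simplified model under stated assumptions: the class and method names are illustrative, HMACs stand in for the full signing scheme, and the public-key encryption of the OTP and seed to each agent is omitted entirely.

```python
import hashlib
import hmac
import os

class LocationLedger:
    """Append-only log for one location. Entries are chained by HMACs
    keyed with the current one-time password (OTP), so any tampering
    with a recorded action breaks the chain on replay."""

    def __init__(self, seed: int):
        self.seed = seed
        # List of (otp, entries) chains; entries are (action, mac) pairs.
        self.chains = [(os.urandom(16), [])]

    def append(self, action: str) -> None:
        otp, entries = self.chains[-1]
        prev = entries[-1][1] if entries else b"genesis"
        mac = hmac.new(otp, prev + action.encode(), hashlib.sha256).digest()
        entries.append((action, mac))

    def depart(self, agent: str) -> None:
        # The departure entry terminates this OTP chain; a fresh OTP
        # keys the next chain, as described above.
        self.append(f"depart:{agent}")
        self.chains.append((os.urandom(16), []))

    def verify(self) -> bool:
        # Replay every chain; a tampered entry fails its MAC check.
        for otp, entries in self.chains:
            prev = b"genesis"
            for action, mac in entries:
                expect = hmac.new(otp, prev + action.encode(), hashlib.sha256).digest()
                if not hmac.compare_digest(mac, expect):
                    return False
                prev = mac
        return True
```

In the full design the OTPs would be generated by the trust service and distributed encrypted under the public keys of the agents present, so only those agents (and the game service) can extend or read the current chain.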

Two facts are key to understanding the benefits derived from this distributed architecture. First, there is no global state. Second, all known information about local state that is relevant to any set of players can be easily recovered from information supplied by those players, and the recovered information can be trusted.

Additionally, access to keys owned by the game service's agent allows restoration of locations that were known to players outside that set, and access to the salt supplied to the trust service would allow newly explored locations to be identical to those of a game that had never been interrupted or forked. Neither, however, is necessary for continuity.

So a game can continue with any arbitrary group of players, with or without the cooperation of the current operators, in a way that allows all of those players to retain all in-game assets and knowledge.

The trust service can be a single system servicing many games or a cluster of computers serving a game with any number of players. Likewise, the services that perform validation and authorize agent location changes exhibit similar scaling characteristics. Unlike a client-server game, where the size of the game world is constrained by the hardware and the entire game world must be updated in a single game tick (often on a single processor thread), here that constraint applies only to an individual location. There's no constraint on the number of locations.

End procedurally generated world section

Other notes from the chat that aren't organized coherently

It's useful to know whether the design goal is a turn-based or fast-twitch game, in order to use vocabulary and describe benefits of the timing engine relevant to that structure.

The client-server pattern is so pervasive in multiplayer games that most people can't imagine any other pattern. The game on a computer, console, or other device is a program whose primary purpose is obtaining game information from the server, presenting it to you to interact with, then communicating your actions to the server, where the canonical version of the game world is altered to reflect your actions.

There's actually a small complication, in that we're simulating activities as if information were being communicated at immediate range at the speed of light in an atmosphere, but the information is actually traveling hundreds or thousands of miles through data systems that can include copper switches. In order to compensate for network delays, game clients make intelligent guesses about what's happening in the world. Those guesses can be wrong.

When that happens, the server has to reconcile actions that were taken in separate and inconsistent copies of the game world. So you get artifacts like an opponent you're targeting in a first-person shooter ducking as you pull the trigger. He thinks he's safe behind cover; then the server tells him he's dead. You go to loot his corpse, and his body is in a position where you'd never have been able to shoot him.

We think there needs to be a server to arbitrate the sequence of actions because there isn't any way to tell, objectively, which happened first. We assume both players have the same network distance to the server and accept the time that events arrive at the server as canonical, despite the fact that players spend thousands of dollars on network hardware and fast internet service to avoid being disadvantaged in exactly that situation.

Related to this issue is tick rate, which is how often the main server thread processes its order queue, which includes AI and game world events as well as actions taken by all the players. Server tick rate needs to be frequent enough to provide a natural play experience while allowing time for the server to consistently process its entire order queue. Much of that processing duplicates events that have already been adjudicated in clients, but it needs to be performed independently because players cheat. If there were a means whereby each client could maintain a copy of the game world that would be eventually consistent, with any attempt at tampering instantly detectable, that would open up new patterns in distributed gaming.
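The tamper-detectable, eventually consistent client copy can be illustrated with a rolling state hash. This is a sketch under assumptions: the names `WorldReplica` and `consistent` are invented for the example, and real deltas would be signed game actions rather than key-value pairs.

```python
import hashlib

class WorldReplica:
    """Each client applies the same ordered deltas and folds them into a
    rolling digest. Two replicas that processed the same log agree on the
    digest, so tampering or divergence is immediately detectable by
    comparing a single hash instead of re-simulating the world."""

    def __init__(self):
        self.state = {}
        self.digest = hashlib.sha256(b"genesis").digest()

    def apply(self, key: str, value: str) -> None:
        self.state[key] = value
        delta = f"{key}={value}".encode()
        self.digest = hashlib.sha256(self.digest + delta).digest()

def consistent(a: WorldReplica, b: WorldReplica) -> bool:
    # One comparison replaces a full server-side replay of client events.
    return a.digest == b.digest
```

A client that altered any past delta would produce a digest that no honest replica can match, which is what makes the duplicated server-side adjudication largely unnecessary.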

The server load, especially during peak use, could be dramatically reduced. Instead of simulating the entire world in server hardware, the world would be generated and adjudicated in the connected clients. A server application for the game is only needed in certain situations where game logic is required to determine scope, or for auditing the ledger for bad actors. It could operate independently of the tick server, which would operate in near linear time regardless of player load.

Even the partial information problem can be handled in a distributed fashion with significantly lower overhead, at least for servers, with some caution. The main consideration is that it's important for the game design to minimize the advantage of out-of-band communications. So even if a player receives a key that decrypts privileged information and uses it in a hacked client, they still need to take steps to acquire the information in game to act upon it in game. The server can still serve as arbiter in those situations where this isn't acceptable, but reducing the amount of computation necessary to implement partial information requirements is of enormous benefit.

Instead of maintaining a copy of the game world that is canonical in real time, the server only manages a transaction log. This timing server doesn't actually need any game logic coded. It just generates codes for signing the ledger, so that the connected clients know whether another client is on the current tick when they see a message from it.

Once the ledger is closed for a tick, game clients can know that their world is current and valid to that point. Information with the code for the current tick can be used predictively. If the timing code on orders from a different client isn't valid for the current tick, those orders should be ignored.
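The game-logic-free timing server described above can be sketched as follows. The names `TickServer` and `accept_order`, and the use of HMAC over a shared secret, are assumptions for the illustration, not the original design.

```python
import hashlib
import hmac

class TickServer:
    """Issues a fresh code each tick, derived from a secret and the tick
    number. It knows nothing about the game: it only keeps time."""

    def __init__(self, secret: bytes):
        self.secret = secret
        self.tick = 0

    def advance(self) -> bytes:
        """Close the current tick and return the code for the new one."""
        self.tick += 1
        return self.current_code()

    def current_code(self) -> bytes:
        return hmac.new(self.secret, str(self.tick).encode(), hashlib.sha256).digest()

def accept_order(order_code: bytes, current_code: bytes) -> bool:
    # Orders stamped with a stale or forged tick code are ignored.
    return hmac.compare_digest(order_code, current_code)
```

Because the code is a pure function of the secret and the tick number, the timing server's work per tick is constant regardless of player count, which is what allows it to run in near linear time under load.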

That's the timing issue, which is of medium difficulty for the system but harder conceptually.

The inclusion of fast-twitch games above is naive.

At the time of writing, I didn't know the specific commit rate of blockchains, and I intended to discover the limits through experimentation.

Another limitation of a blockchain is that you must have a consistent state for inclusion in the ledger to be useful. If a delta refers to an object not already in the ledger, the referent could be altered prior to its subsequent inclusion in the ledger.

Finally, the existence of an affordable service that could sign a blockchain for a game running at 28 ticks/second probably wouldn't bode well for the future of the technology.

Applications where you could use a distributed ledger include turn-based games, embedded economic systems in fast-twitch games, tournament results, and certain sector-based MMOs. That ledger does not need to be a blockchain unless it is maintained jointly by independent actors in the absence of mutual trust.