One of the key architectural decisions we took was to make each agent as independent as possible. That means trying to avoid inter-agent references and having operations based only on agent-to-agent "conversations". We'll see what this means in a bit, when we go through some examples.
Let's start with the internal workings of a robot after being placed fresh in the world. Bots have a two-level finite state machine, shown in the following image (only some of the actual states are shown there):
(note: each state runs over many frames)
Bots start in the IDLE state. Depending on their type, they decide what their options for the next steps are. For instance, a default bot first checks its internal status (do I need more power? do I need repairs?) and then asks for a job. If it finds a job, it moves to the EXECUTING state.
While in the EXECUTING state, the second-level FSM is looping.
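To make this more concrete, here is a minimal sketch of how such a two-level machine could look. The class and state names below are just illustrative, not necessarily the ones used in the actual code:

```python
from enum import Enum, auto

class BotState(Enum):      # first-level FSM
    IDLE = auto()
    EXECUTING = auto()
    BUSY = auto()

class OpState(Enum):       # second-level FSM, only active while EXECUTING
    MOVING = auto()
    ATDESTINATION = auto()
    TRANSFERRING = auto()
    DONE = auto()

class Bot:
    def __init__(self, job_board):
        self.state = BotState.IDLE
        self.op_state = None
        self.job = None
        self.job_board = job_board

    def update(self):
        """Called every frame; each state runs over many frames."""
        if self.state == BotState.IDLE:
            if self.needs_power() or self.needs_repairs():
                return                            # take care of itself first
            self.job = self.job_board.take_job()
            if self.job is not None:              # found a job -> start executing
                self.state = BotState.EXECUTING
                self.op_state = OpState.MOVING
        elif self.state == BotState.EXECUTING:
            self.run_current_operation()          # loops the second-level FSM
        elif self.state == BotState.BUSY:
            pass                                  # long internal computation, no interactions

    # placeholders, just to keep the sketch self-contained
    def needs_power(self): return False
    def needs_repairs(self): return False
    def run_current_operation(self): pass
```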
Because we try to distribute CPU load evenly, each agent maintains its own "game tick" timer and only processes intensive operations on a game tick. Later, this balancing will be moved to a thread pool, but for now (and for many development reasons) the agents are called in sequence.
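A possible shape for this per-agent tick timer and the sequential update loop (a sketch; the tick interval and the method names are my assumptions here):

```python
TICK_INTERVAL = 0.5   # seconds between two "game ticks" of an agent (assumed value)

class Agent:
    def __init__(self, tick_offset=0.0):
        # a per-agent offset staggers the ticks so the heavy work
        # isn't done by every agent on the same frame
        self.time_since_tick = tick_offset

    def update(self, dt):
        """Called every frame, for every agent, in sequence."""
        self.cheap_frame_work()                  # movement interpolation, animation, etc.
        self.time_since_tick += dt
        if self.time_since_tick >= TICK_INTERVAL:
            self.time_since_tick -= TICK_INTERVAL
            self.on_game_tick()                  # intensive operations only happen here

    def cheap_frame_work(self): pass
    def on_game_tick(self): pass

def game_loop_step(agents, frame_dt=1.0 / 60.0):
    # for now the agents are simply called in sequence;
    # later this loop is what would be handed to a thread pool
    for agent in agents:
        agent.update(frame_dt)
```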
Any job is then split into 1..n operations (shown briefly in the diagram from the previous article).
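In code, a job could then be little more than an ordered list of operations that the second-level FSM walks through. Again a sketch; the field and operation names are only illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Operation:
    kind: str                     # e.g. "MoveTo", "Dock", "Transfer"
    params: dict = field(default_factory=dict)

@dataclass
class Job:
    job_type: str                 # e.g. "Transport"
    poster_id: int                # the agent that posted the job
    operations: list = field(default_factory=list)

# a Transport job might decompose into something like:
transport = Job(
    job_type="Transport",
    poster_id=42,
    operations=[
        Operation("MoveTo", {"target": 42}),
        Operation("Dock", {"target": 42}),
        Operation("Transfer", {"item": "battery", "load": "full"}),
    ],
)
```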
Each time an agent-to-agent interaction is needed, there's a conversation between the robot and the other agent. For instance, when a solar antenna has full batteries, it needs a transport robot to come and load them, so it posts a job of type Transport, passing its own agent Id.
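On the antenna side, posting that job might look roughly like this (a sketch reusing the Job type from above and assuming a simple shared job board; none of these names come from the actual code):

```python
class JobBoard:
    """Minimal shared job board agents can post to and take jobs from."""
    def __init__(self):
        self.jobs = []

    def post(self, job):
        self.jobs.append(job)

    def take_job(self):
        return self.jobs.pop(0) if self.jobs else None

class SolarAntenna:
    def __init__(self, agent_id, job_board, capacity=4):
        self.agent_id = agent_id
        self.job_board = job_board
        self.capacity = capacity
        self.charged_batteries = 0
        self.job_posted = False

    def on_game_tick(self):
        # once the batteries are full, ask for a transport robot
        if self.charged_batteries >= self.capacity and not self.job_posted:
            self.job_board.post(Job(               # Job as sketched above
                job_type="Transport",
                poster_id=self.agent_id,           # the antenna passes its own agent Id
            ))
            self.job_posted = True
```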
When the robot is in the ATDESTINATION state (it has reached the antenna), it starts talking, a bit like this (a code sketch of the exchange follows the list):
- Ask the antenna for "permission to dock"
- If accepted, ask for a "transfer", specifying the job details (item type is battery, load is full)
- If the antenna acknowledges, initiate the transfer, one item per agent game tick
- Loop the transfer until it is denied (no more items, or the other agent simply doesn't want to continue)
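A rough sketch of that exchange, driven one request per agent game tick (the message names, the Reply values and the stubbed antenna logic are all my assumptions, only for illustration):

```python
from enum import Enum, auto

class Reply(Enum):
    ACCEPT = auto()
    DENY = auto()

class Antenna:
    """Stub for the antenna side of the conversation."""
    def __init__(self, batteries=2):
        self.batteries = batteries

    def request(self, what, **details):
        if what == "dock":
            return Reply.ACCEPT                   # could be DENY if another bot is already docked
        if what == "transfer":
            return Reply.ACCEPT if self.batteries else Reply.DENY
        if what == "transfer_one":
            if self.batteries:
                self.batteries -= 1
                return Reply.ACCEPT
            return Reply.DENY                     # no more items to hand over
        return Reply.DENY

class TransferConversation:
    """Robot side of the exchange; it issues one request per agent game tick."""
    def __init__(self, antenna, item_type="battery"):
        self.antenna = antenna
        self.item_type = item_type
        self.cargo = []
        self.docked = False
        self.granted = False
        self.done = False

    def on_game_tick(self):
        if not self.docked:                       # 1. ask for permission to dock
            self.docked = self.antenna.request("dock") == Reply.ACCEPT
        elif not self.granted:                    # 2. ask for a transfer with the job details
            self.granted = self.antenna.request(
                "transfer", item=self.item_type, load="full") == Reply.ACCEPT
        elif self.antenna.request("transfer_one", item=self.item_type) == Reply.ACCEPT:
            self.cargo.append(self.item_type)     # 3. one item per game tick
        else:
            self.done = True                      # denied: no more items, or the antenna refused

# driving the conversation tick by tick:
conversation = TransferConversation(Antenna(batteries=2))
while not conversation.done:
    conversation.on_game_tick()
print(conversation.cargo)                         # -> ['battery', 'battery']
```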
We went with this conversation-based approach for a few reasons:
- the implementation is easier to follow/change/debug
- most of the inter-agent operations take place over a number of frames, and each agent may decide to change its internal FSM state between commands
- it introduces a single point of entry for agent commands, which will help implement multiplayer later
- it allows arbitrary changes in the world at different moments: between the moment a bot starts moving towards the antenna and the moment it gets there, the antenna may have been destroyed, or may have just had its batteries destroyed, and so on.
The BUSY state means the bot is involved in long internal (computing) operations and doesn't want interactions. That's for a later AI implementation, when the behavior may be much smarter and thus take longer to compute.