Communication
Model artifacts are patterns or errors that emerge from a model because of the way it works as a model, rather than because of anything in the reality it is trying to replicate.
As an example, consider a moving window algorithm which moves across a largely empty grid, copying any information found in the top left of the window to the centre of it.
If we start with data only in the top left corner (cell 0,0), when we get below and to the right of it (cell 1,1), the data will be copied to that cell. However, what happens next depends on how we scan the grid. If we scan it left-to-right, top-to-bottom, when we get to cell 2,2, there's now data in cell 1,1, so cell 2,2 gets that data copied into it.
If, on the other hand, we start at the lower right corner and scan right-to-left and bottom-to-top, when we reach cell 3,3, we haven't yet processed cell 2,2, so there's no data in it to copy. Likewise, when we reach 2,2, its neighbour 1,1 is still empty. It isn't until we reach 1,1 that we find a neighbour (0,0) with data in it to copy, by which time we've already processed 2,2 and 3,3, so the newly copied data goes no further this pass.
Hopefully you can see that if we scan left-to-right, top-to-bottom, we'll end up, in one iteration, with a line of data diagonally across our grid. Conversely, if we scan right-to-left and bottom-to-top, we'll only copy one data cell per iteration.
This is a classic artifact, and it's why we always copy results into a new "results" array when doing moving windows. The key thing, however, is that regularities in our model (for example, constantly using left-to-right, top-to-bottom scanning) can cause artifacts like this.
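To make that concrete, here's a minimal sketch (the 5×5 grid and the copy-from-top-left window are just illustrative assumptions, not code from the practical) contrasting an in-place pass with one that writes into a new results array:

def one_pass_in_place(grid):
    # Scan left-to-right, top-to-bottom, writing into the same grid,
    # so each copy is immediately visible to the next cell we visit.
    for y in range(1, len(grid)):
        for x in range(1, len(grid[y])):
            if grid[y - 1][x - 1] != 0:
                grid[y][x] = grid[y - 1][x - 1]

def one_pass_new_array(grid):
    # Write into a fresh results array, so every cell sees only the
    # data as it stood at the start of the pass, whatever order we
    # scan in.
    results = [row[:] for row in grid]
    for y in range(1, len(grid)):
        for x in range(1, len(grid[y])):
            if grid[y - 1][x - 1] != 0:
                results[y][x] = grid[y - 1][x - 1]
    return results

grid = [[0] * 5 for _ in range(5)]
grid[0][0] = 1
one_pass_in_place(grid)
# The single data point has smeared down the whole diagonal in one pass.

grid = [[0] * 5 for _ in range(5)]
grid[0][0] = 1
grid = one_pass_new_array(grid)
# Only cell 1,1 has gained data, as we'd expect from a single pass.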
For agent based models, an equivalent might be always allowing the first agent in the list to initiate and conclude economic negotiations, such that it accumulates unusual levels of wealth compared with agents further down the list.
Because of this, it is usual to randomise the order in which agents are processed each iteration. There's a helpful function in the random library which shuffles lists and other sequences. Can you find it in the documentation and implement it in your model.py code, so the list of agents is shuffled each iteration before they do their stuff?
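If you get stuck, here's a minimal sketch of the general idea; Agent, do_stuff, and the variable names are hypothetical stand-ins for whatever your model.py actually uses:

import random

class Agent:
    # Stand-in for your real agent class; do_stuff is a hypothetical
    # name for whatever your agents do each iteration.
    def do_stuff(self):
        pass

num_of_iterations = 10
agents = [Agent() for _ in range(100)]

for iteration in range(num_of_iterations):
    # Shuffle the list in place so no agent consistently acts first.
    random.shuffle(agents)
    for agent in agents:
        agent.do_stuff()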
Once you've done that, you've got the skeleton of all the major parts of an ABM. Compare it with the UML below.
Next practical we'll look at a few additions you can think about. Until then, here are a couple of things you might like to try, if you have time:
All the major model parameters are in model.py, as we discussed earlier. Can you get the model to read these from the command line using argv, the command line arguments we talked about in the lecture? I.e., so it runs like this:
python model.py 200 20 30
Where, for example, 200 is the number of agents, 20 is the number of iterations, and 30 is the neighbourhood. Remember that you may need to catch exceptions when the user types something that can't be cast to an int.
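One possible shape for this, as a sketch: the parameter names and default values below are assumptions, not necessarily those in your model.py.

import sys

# Defaults, used when arguments are missing or can't be parsed.
num_of_agents = 10
num_of_iterations = 100
neighbourhood = 20

try:
    if len(sys.argv) > 1:
        num_of_agents = int(sys.argv[1])
    if len(sys.argv) > 2:
        num_of_iterations = int(sys.argv[2])
    if len(sys.argv) > 3:
        neighbourhood = int(sys.argv[3])
except ValueError:
    # Catch anything that can't be cast to an int.
    print("Arguments must be whole numbers; using defaults.")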
If you can do this, can you write a Python program that uses subprocess.call to run the model with a variety of parameter values, using ranges to set those parameters (remember to leave some defaults)? For example, can you get it to run stepping up the number of agents by ten each time, and append the total amount stored to a file for each run? This is called "parameter sweeping", and it isn't unusual to have a model-running class that runs a model multiple times to explore how it responds to parameter variations. You might also want an argv variable that turns off the visual output for multiple runs (if you want to make this a boolean, note that all non-empty strings, even "False", are true. For the solution, see this StackOverflow answer).
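As a starting point, a sweep script might look something like the sketch below. The range chosen, and the assumption that model.py appends its own total to a results file each run, are illustrative rather than part of the practical:

import subprocess

# Hypothetical parameter sweep: step the number of agents up by ten
# each run, leaving the other parameters to model.py's defaults.
for num_of_agents in range(10, 101, 10):
    subprocess.call(["python", "model.py", str(num_of_agents)])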