Control flow
The reason the code doesn't work is that, for example, when i = 0 and j = 0 at the start of the run, the line
sum += data[i-1][j]
tries to read outside the list. In Python a negative index doesn't raise an error, but silently wraps around to the far end of the list (data[-1] is the last element), so the window picks up values from the wrong side of the grid; reading past the other end, as data[i+1] does on the last row, raises an IndexError instead. Some version of this problem occurs at every boundary.
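You can see both failure modes in a minimal sketch (the grid here is illustrative, shrunk to 3×3 to keep it readable):

```python
# Minimal sketch of the boundary problem on an illustrative 3x3 grid.
data = [[1, 2, 3],
        [4, 5, 6],
        [7, 8, 9]]

i, j = 0, 0
# A negative index doesn't crash in Python; it silently wraps to the
# far side of the list, so the window reads the wrong row.
print(data[i - 1][j])   # data[-1][0] is the *last* row: prints 7

i = 2
# At the opposite edge, reading past the end does raise an error.
try:
    print(data[i + 1][j])
except IndexError:
    print("IndexError at the bottom edge")
```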
To sort this out, one option is to reduce the size of the results image by setting the ranges to:
# Blur.
for i in range(1, 99):
    datarow = []
    for j in range(1, 99):
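Filled out, the reduced-range version might look something like this (a sketch only, assuming a 100×100 grid called data and a 3×3 mean window; adjust the names to match your own code):

```python
# Mean blur over a 100x100 grid, skipping the outermost cells so the
# 3x3 window never falls off the edge. `data` is placeholder data
# here; in your model it would be your environment.
data = [[1.0] * 100 for _ in range(100)]

result = []
for i in range(1, 99):
    datarow = []
    for j in range(1, 99):
        total = 0.0
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                total += data[i + di][j + dj]
        datarow.append(total / 9)
    result.append(datarow)
```

Note that the result is 98×98 rather than 100×100: the border cells are simply left out.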
That way there is always a border cell available around each point processed. This is only one way of dealing with boundary issues, though, and it doesn't solve our problem here, as agents will still wander off the edge. Other options are:
(Near-)infinite plain: this works ok, but we usually have limited data for our environments, and we want our agents to interact with them, rather than allowing agents to wander off forever.
Solid wall / wall bounce: anything trying to go off the edge finds the edge solid. For moving windows, we adjust the algorithm at the edges so the window shrinks to fit; for agents, they either stop until they move away from the wall, or they bounce off at some angle. The easiest thing to do is just stop them until they move away, thus:
# Move agent.
if random.random() < 0.5:
    agents[i][0] += 1
else:
    agents[i][0] -= 1
# Check if off edge and adjust.
if agents[i][0] < 0:
    agents[i][0] = 0
if agents[i][1] < 0:
    agents[i][1] = 0
if agents[i][0] > 99:
    agents[i][0] = 99
if agents[i][1] > 99:
    agents[i][1] = 99
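An equivalent, more compact way to write the adjustment (purely a style choice, using the same 0 to 99 bounds) is to clamp each coordinate with max and min:

```python
# Illustrative agent that has stepped off both edges.
agents = [[100, -3]]
i = 0

# Clamp each coordinate into the 0-99 range in one line:
# min() pulls it below the top edge, max() above the bottom edge.
agents[i][0] = max(0, min(99, agents[i][0]))
agents[i][1] = max(0, min(99, agents[i][1]))

print(agents[i])  # prints [99, 0]
```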
Torus: a common solution is to allow agents leaving the top of the area to come in at the bottom, those leaving the left to come in on the right, and so on. This effectively turns the space into a giant doughnut shape, or "torus". This is good for abstract models where the realism of each agent's history isn't so important. It can be achieved by adapting the code above (can you think how?), or by using the modulus operator (%, which gives the remainder of a division), thus:
# Move agent, wrapping around the edges.
if random.random() < 0.5:
    agents[i][0] = (agents[i][0] + 1) % 100
else:
    agents[i][0] = (agents[i][0] - 1) % 100
Can you work out how this works?
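If you want to check your reasoning, try the edge cases directly. In Python, % always returns a non-negative result when the divisor is positive, which is exactly what makes this trick work:

```python
# Stepping right off the far edge wraps to 0...
print((99 + 1) % 100)   # prints 0
# ...and stepping left off position 0 wraps to 99.
print((0 - 1) % 100)    # prints 99
# Positions in the middle are unaffected.
print((50 + 1) % 100)   # prints 51
```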
We'll use a torus, as it is simple to implement and we're not that worried about realism. Build this into your model and check you can see all the agents repeatedly (how might we test this more formally?).
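On testing more formally: one sketch (assuming your agents are stored as [x, y] lists in a list called agents, on a 100×100 torus; the function name is made up for illustration) is to assert after every move that every coordinate is still inside the space:

```python
# Check every agent is still on the grid. Names are illustrative;
# adapt them to your own variables. Call this after each move step.
def check_agents_on_grid(agents, width=100, height=100):
    for agent in agents:
        assert 0 <= agent[0] < width, "x coordinate off the torus"
        assert 0 <= agent[1] < height, "y coordinate off the torus"

agents = [[0, 99], [57, 3]]   # illustrative agents
check_agents_on_grid(agents)  # passes silently if all are on the grid
```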
That done, you're finished with this practical. Look over your code now: it should be significantly shorter and more resilient to errors than before. Next practical, we'll build some functions of our own to analyse our model as it runs.