Data++
[Agent practical 2 of 9]


Finally, then, we'll add some more complicated behaviour to our agents.


 

Thinking back to our algorithm, we said we'd like to:

//       get agents to move
//       get agents to eat

Let's think about the latter first.

To eat a bit of the data in the environment, all we need to do is to set the value in the data array at the x and y coordinates of our agent to the value there already, minus some figure. By using the x and y coordinates of the agent to determine a location within the data array, we tie the geography of the agents to the geography of the environment data. So, for example, we could do:

int x = agents[j].x;
int y = agents[j].y;
world.data[y][x] = world.data[y][x] - 10.0;

Pretty neat, huh? Now, if our agents change x and/or y, they'll eat in a different location on the data array. Note that height (y) is the first dimension. Also remember that this kind of thing:

a = a + 1;

where a variable appears on both sides of the assignment, is very common. The right-hand side is evaluated first, and the answer is then copied back over the original value of a. There is nothing to stop us using values in arrays in the same way:

world.data[y][x] = world.data[y][x] - 10.0;

Put the full code above into the centre of your model-run loops, and change the println statement to:

System.out.println("value at " + x + " " + y + " is " + world.data[y][x]);

Compile the code and run it to see what happens.

You'll notice that the value for the spot the agents are in eventually goes negative. Ideally we'd like to limit this to zero (our stopping condition might be when they've eaten all the data down to zero, for example). Given this, let's query the data before we eat it, like this:

int x = agents[j].x;
int y = agents[j].y;
if (world.data[y][x] > 10.0) {
   world.data[y][x] = world.data[y][x] - 10.0;
} else {
   world.data[y][x] = 0;
}

Change the code to this, recompile and rerun to see the difference.
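If you want to convince yourself the clamp works before wiring it into the model, here's a minimal self-contained sketch. It uses a hypothetical 1x1 data array in a throwaway class, rather than the practical's Model and world objects, but the eating logic is the same:

```java
public class EatingDemo {
    public static void main(String[] args) {
        // Hypothetical 1x1 stand-in for world.data.
        double[][] data = new double[1][1];
        data[0][0] = 25.0;
        int x = 0;
        int y = 0;
        // Eat repeatedly; the value should bottom out at zero, never go negative.
        for (int i = 0; i < 5; i++) {
            if (data[y][x] > 10.0) {
                data[y][x] = data[y][x] - 10.0;
            } else {
                data[y][x] = 0;
            }
            System.out.println("value at " + x + " " + y + " is " + data[y][x]);
        }
    }
}
```

Starting from 25.0, the printed values run 15.0, 5.0, and then 0.0 for every eat after that.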

Although the behaviour isn't yet kept inside the agent objects (we'll see how to do that next practical), we still have here the fundamentals of a set of agents that both query and interact with their environment. This could easily be replaced by code to check for other resources, or code that changes agent behaviour depending on the type of area it is in.

All we need to do now is get the agents to change their x and y coordinates before eating. Here, we'll make them walk randomly.

 


On the surface, a random walk algorithm is relatively simple to implement. We just need to change the x and y coordinates of the agent by one in a random direction. Java has some built-in code for generating a random number between zero and just-less-than-one, thus:

Math.random()

So we could do this kind of thing:

int x = agents[j].x;
double randomNumber = Math.random();
if (randomNumber < 0.33) x--;
if (randomNumber > 0.66) x++;
agents[j].x = x;

This leaves x with some chance of staying the same. If we did the same with y, using a different randomNumber variable, there'd be a chance the agent's location would move in one of eight directions, or it would stay in the same location.

Add this code, and equivalent code for y, above your eating code, but inside the run loops. It should replace the code:

int x = agents[j].x;
int y = agents[j].y;

as it will do this job as well, and we can't have two x and/or y variables with the same name being created within the same 'main' block (note that for the same reason, you'll need a different name for your second random number variable). Note that the name agents[j].x is not the same as x, so those are fine to use in the same block together.
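Pulling the movement step for both axes together, a self-contained sketch might look like the following. It uses plain int variables as hypothetical stand-ins for agents[j].x and agents[j].y, just to show the two independently named random numbers in one block:

```java
public class RandomWalkDemo {
    public static void main(String[] args) {
        // Hypothetical starting position, standing in for agents[j].x and agents[j].y.
        int x = 50;
        int y = 50;
        // One random number for the x axis...
        double randomNumber = Math.random();
        if (randomNumber < 0.33) x--;
        if (randomNumber > 0.66) x++;
        // ...and a second, differently named, random number for the y axis,
        // so the two directions are decided independently.
        double randomNumber2 = Math.random();
        if (randomNumber2 < 0.33) y--;
        if (randomNumber2 > 0.66) y++;
        // Each coordinate has changed by at most one.
        System.out.println("x = " + x + ", y = " + y);
    }
}
```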

Doomed: Run the model for 10000 iterations. Is there a problem with the code as it stands now? Does it run smoothly, or does it break? If so, why? What can you tell from the printlns that work and the debugging messages?

Sooner or later, depending on your luck, the code above *will* break. The reason is that x and/or y could become negative, causing our agent to try to eat off the left or upper side of our data array space. Alternatively, x and/or y could become greater than world.width - 1 and/or world.height - 1, causing our agent to try to eat off the right or lower side of our data array space. This is a classic boundary problem. So, how are we going to sort it out?

The solution is to check our x and y coordinates are in range before moving to them. Thus, for x:

int x = 0;
do {
   x = agents[j].x;
   double randomNumber = Math.random();
   if (randomNumber < 0.33) x--;
   if (randomNumber > 0.66) x++;
} while ((x < 0) || (x > world.width - 1));
agents[j].x = x;

Note that the x variable is set up outside the loop, so it can still be seen (it has "scope") outside of/after the while loop.

It's actually quite rare to use do-while loops, but as we need the x to definitely change *before* we assess whether the new coordinate is good, do-while is perfect. If we used a standard while loop with the condition at the start, we'd be checking the old x at best. As it is, the do-while loop will keep running all the time that x is either less than zero or greater than world.width - 1 (because array indices run from zero to length - 1). It will keep generating new x coordinates until it escapes back into the array space. We just need to write the same code for the y coordinate.

So, we now have all the chunks of code we need to implement our run behaviour:

// Loop through iterations with counter i.
//    Loop through agents with counter j.
//       get agents to move
//       get agents to eat
//    } End agents loop.
// } End iterations loop.
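One possible assembly of those chunks is sketched below. To keep it self-contained and runnable it uses hypothetical stand-ins (plain arrays, small sizes, two agents) for the practical's World and Agent classes; in your own model you'd use world.data, world.width, world.height and agents[j] instead:

```java
public class RunSketch {
    public static void main(String[] args) {
        // Hypothetical stand-ins for the practical's World and Agent objects.
        int width = 5;
        int height = 5;
        double[][] data = new double[height][width];
        for (int i = 0; i < height; i++)
            for (int j = 0; j < width; j++)
                data[i][j] = 100.0;
        int[] agentX = {2, 3};
        int[] agentY = {2, 1};

        // Loop through iterations with counter i.
        for (int i = 0; i < 10; i++) {
            // Loop through agents with counter j.
            for (int j = 0; j < agentX.length; j++) {
                // Get agents to move: bounded random walk in x...
                int x = 0;
                do {
                    x = agentX[j];
                    double randomNumber = Math.random();
                    if (randomNumber < 0.33) x--;
                    if (randomNumber > 0.66) x++;
                } while ((x < 0) || (x > width - 1));
                agentX[j] = x;
                // ...and in y, with a differently named random number.
                int y = 0;
                do {
                    y = agentY[j];
                    double randomNumber2 = Math.random();
                    if (randomNumber2 < 0.33) y--;
                    if (randomNumber2 > 0.66) y++;
                } while ((y < 0) || (y > height - 1));
                agentY[j] = y;
                // Get agents to eat, clamping the data at zero.
                if (data[y][x] > 10.0) {
                    data[y][x] = data[y][x] - 10.0;
                } else {
                    data[y][x] = 0;
                }
                System.out.println("value at " + x + " " + y + " is " + data[y][x]);
            }
        }
    }
}
```

With two agents and ten iterations you should see twenty println lines, and no value should ever be negative.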

Have a go at building the model. Leave in the last System.out.println statement given above, and see how the model works over ten iterations. You should see the x, y and world.data[y][x] values changing.


Once you've got that working, we're done for this practical. We've tried out a wide variety of looping structures, including standard counting for loops, nested loops, and do-while loops.

At the same time, we've made massive progress on our model. We've basically got the core of it up and running: a model with an environment of data, and a set of agents that move randomly around that environment, querying and eating the data as they go.


All we need to do now is get the behaviour more cleanly inside the agents, instead of sitting around outside them in the Model class, and secondly get the agents communicating with each other. It would also be nice to add a stopping condition. We'll look at these in the next practical.