Parallelisation and Java
We've now done the first four parts of our algorithm (sketched again, for orientation, just after the lists below):
- Make a local array containing the densities in the landscape on a particular node
- Send the local array to node zero
- Receive all the local arrays on node zero
- Add them up on node zero to make a global density array
All we now need to do is...
- Send the global array to all the other nodes from node zero
- Receive the global density array back from node zero on the other nodes
- Move if necessary
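Before we add the new code, here is a rough reminder of the gather step done above, which the closing brace at the top of the next listing refers to. The tag number (50 here), the getDensities() accessor, and the exact structure are assumptions made for the sake of the sketch, not necessarily the course's exact code:

    // Sketch of the earlier gather step, for orientation only.
    int[] densities = landscape.getDensities(); // this node's local densities (assumed accessor)
    if (node != 0) {
        // Worker nodes: send the local density array to node zero.
        try {
            MPI.COMM_WORLD.Send(densities, 0, width * height, MPI.INT, 0, 50);
        } catch (MPIException mpiE) {
            mpiE.printStackTrace();
        }
    } else {
        // Node zero: receive each worker's local array and add it,
        // element by element, into a global density array.
        for (int i = 1; i < numberOfNodes; i++) {
            int[] received = new int[width * height];
            try {
                MPI.COMM_WORLD.Recv(received, 0, width * height, MPI.INT, i, 50);
            } catch (MPIException mpiE) {
                mpiE.printStackTrace();
            }
            for (int j = 0; j < width * height; j++) {
                densities[j] = densities[j] + received[j];
            }
        }
    } // End of "if not node zero" block (re-shown at the top of the next listing).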
We add the code for these last three steps to the end of the run section of Model.java, just before the end of the time iteration loop...
    } // End of "if not node zero" block from above.

    if (node == 0) {
        // Node zero sends the summed, global density array out to every
        // worker node.
        for (int i = 1; i < numberOfNodes; i++) {
            try {
                MPI.COMM_WORLD.Send(densities, 0, width * height, MPI.INT, i, 100);
            } catch (MPIException mpiE) {
                mpiE.printStackTrace();
            }
        }
    } else {
        // Worker nodes receive the global densities back from node zero...
        int[] globalDensities = new int[width * height];
        try {
            MPI.COMM_WORLD.Recv(globalDensities, 0, width * height, MPI.INT, 0, 100);
        } catch (MPIException mpiE) {
            mpiE.printStackTrace();
        }
        // ...push them into the local landscape, then let the agents act.
        landscape.setDensities(globalDensities);
        for (int i = 0; i < numberOfAgents; i++) {
            agents[i].step();
        }
    } // End of node zero / worker node if-else.

} // End of time iterations loop from above.
Note the change in message tag number, and that, having got the global densities back,
we temporarily replace the local densities in the landscape with the global ones, using
the landscape.setDensities
method we wrote earlier.
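In case that method isn't to hand, a minimal sketch of the Landscape accessor/mutator pair used here might look like the following. The flat array indexing (x * height + y) and the getDensities()/getWidth()/getHeight() accessors are assumptions; the Landscape.java linked at the end is the definitive version:

    public class Landscape {

        private int[] densities;
        private int width;
        private int height;

        public Landscape(int width, int height) {
            this.width = width;
            this.height = height;
            this.densities = new int[width * height];
        }

        // Mutator: swap in a whole density array (local or global).
        public void setDensities(int[] densities) {
            this.densities = densities;
        }

        // Accessor: the whole density array.
        public int[] getDensities() {
            return densities;
        }

        // Accessor: read a single cell's density.
        public int getDensity(int x, int y) {
            return densities[(x * height) + y];
        }

        public int getWidth() {
            return width;
        }

        public int getHeight() {
            return height;
        }
    }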
Once we've got our global densities into each node's landscape, we can safely call the agents[i].step
method, knowing it will be calculated on global, not local, densities. Note that the
agents[i].step
call is now snug within code that only runs on worker nodes, not node zero.
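Purely as an illustration of an agents[i].step driven by the now-global densities, it might do something along these lines. The crowding threshold, the random-walk move, and the wrap-around edges are invented for the example, not the rule in the course's Agent.java:

    // Illustrative step only: move to a random neighbouring cell if the
    // current cell is too crowded. Assumes the agent holds its landscape
    // and its (x, y) position.
    public void step() {
        int crowdingThreshold = 10; // assumed value for the example
        if (landscape.getDensity(x, y) > crowdingThreshold) {
            int newX = x + (int) (Math.random() * 3) - 1; // -1, 0, or +1
            int newY = y + (int) (Math.random() * 3) - 1;
            // Wrap around the edges of the landscape.
            x = (newX + landscape.getWidth()) % landscape.getWidth();
            y = (newY + landscape.getHeight()) % landscape.getHeight();
        }
    }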
The last remaining problem is that, in the above, the landscape.setDensities(globalDensities)
call happens inside a node != 0 section, which means the landscape is never set up on node zero. Our final
adjustment, then, is to alter the reporting section to do this...
    // Report
    if (node == 0) {
        landscape.setDensities(densities);
        for (int x = 0; x < width; x++) {
            for (int y = 0; y < height; y++) {
                System.out.print(landscape.getDensity(x, y) + " ");
            }
            System.out.println("");
        }
    }
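That just leaves shutting MPI down once the time loop and reporting are finished. Here's a minimal sketch of the very end of main, assuming MPI.Init(args) was called near the start (as it must have been for MPI.COMM_WORLD to be usable); the try/catch matches the other MPI calls here, and can be dropped if your MPI binding's Finalize() doesn't declare MPIException:

    // Tidy up MPI once the model run and reporting are complete.
    try {
        MPI.Finalize();
    } catch (MPIException mpiE) {
        mpiE.printStackTrace();
    }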
Overall we've only made two communications
per worker node each time step (the bottleneck is node zero - but it isn't doing any other processing), rather
than each Agent having to talk to all the others to find out where they are. Smashing - we just Finalize MPI at the end and
job's a good 'un. Here are the finished classes:
Model.java,
Landscape.java,
Agent.java. Other than accessor/mutator methods for the densities
array
in Landscape.java, we've only had to change Model.java. Plainly there are other efficiency savings that
could be made if we're happy for the code to be even more obscure, but it should run ok.
So, let's do that now in the final section.