Summary
We saw how we can run our Python code on an HPC cluster using a job scheduler such as HTCondor or PBS.
Many aspects were not covered in this chapter due to space constraints. The most notable omission is probably MPI (Message Passing Interface), the main interprocess communication library standard for HPC jobs. Python has several MPI bindings; the most commonly used is probably mpi4py, which is available at http://pythonhosted.org/mpi4py/ and on the Python Package Index (https://pypi.python.org/pypi/mpi4py/).
Another topic that did not fit in the chapter is running distributed task queues on an HPC cluster. For these applications, one could submit a series of jobs to the cluster: one job would start the message broker, other jobs would start the workers, and a final job would start the application itself. Particular care must be taken when connecting workers and the application to the broker, since it will be running on a machine that is not known at submission time.