Grid Computing over IPv6

Figure: Problem of private cluster addresses

Figure: IPv6 Grid

The Message Passing Interface (MPI) is a standard specification for message-passing libraries and the most widely used one for parallel applications on compute clusters. It has become a de facto standard for high-performance parallel applications and is supported on a wide range of architectures, from clusters of PCs up to shared-memory and vector machines. Various groups from industry and academia work on MPI implementations: several freely available implementations exist, as well as so-called vendor implementations tuned for specific hardware. We investigated how MPI can be implemented easily on top of an IPv6 network.
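As a brief illustration of the programming model only (not part of the IPv6 work itself), a minimal MPI program looks like the following sketch; the compile and run commands in the comment depend on the local MPI installation.

/* Minimal MPI example: each process prints its rank and the total
 * number of processes.  Typically compiled with an MPI wrapper
 * compiler, e.g.  mpicc -o hello hello.c,  and started with
 * mpiexec -n 4 ./hello  (details vary between installations). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime      */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank        */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes  */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                         /* shut down the MPI runtime  */
    return 0;
}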

Figure: MPICH layers

But why is there a need to enable these new MPI-2 implementations to support IPv6? The motivation comes from the Grid computing trend, where several different compute sites are used to run parallel applications. For example, a user may want to run applications distributed over several medium-sized compute clusters within a university campus. The typical situation is that the nodes within each cluster have private IP addresses, which makes direct inter-cluster communication impossible.

We propose to use IPv6 because the available solutions for IPv4 impose performance penalties or administrative overhead. Virtual private networks (VPNs) require considerable administration effort. Special-purpose daemons on the head node of each cluster (like PACX) have to relay the communication of all compute nodes of that cluster.
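Avoiding such relays requires the MPI library itself to open direct connections between globally addressable nodes. In a TCP-based channel device this essentially means replacing IPv4-only calls (gethostbyname(), sockaddr_in) with the protocol-independent getaddrinfo() interface, so IPv6 addresses are used transparently when available. The following is a sketch of such a client-side connect; the helper name connect_to_peer() is illustrative only.

/* Protocol-independent connect, as used when porting a TCP-based
 * MPI channel device away from IPv4-only socket calls.
 * connect_to_peer() is a hypothetical helper name. */
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <string.h>
#include <unistd.h>

int connect_to_peer(const char *host, const char *port)
{
    struct addrinfo hints, *res, *ai;
    int fd = -1;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family   = AF_UNSPEC;    /* accept IPv6 or IPv4 results */
    hints.ai_socktype = SOCK_STREAM;  /* TCP */

    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;

    /* Try the returned addresses in order; with a suitably configured
     * resolver, IPv6 addresses are typically listed first. */
    for (ai = res; ai != NULL; ai = ai->ai_next) {
        fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        if (fd < 0)
            continue;
        if (connect(fd, ai->ai_addr, ai->ai_addrlen) == 0)
            break;                    /* connected */
        close(fd);
        fd = -1;
    }

    freeaddrinfo(res);
    return fd;                        /* -1 on failure */
}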

In cooperation with the University of Jena, we developed IPv6-enabled versions of the two most popular implementations, MPICH2 and Open MPI. Measurements show that both IPv6-enabled implementations perform on par with their IPv4 counterparts.
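Such comparisons are usually made with point-to-point microbenchmarks. The following simplified ping-pong sketch shows the kind of throughput measurement involved; it is not the exact benchmark code used, and message size and repetition count are arbitrary example values.

/* Simplified ping-pong throughput test between ranks 0 and 1,
 * of the kind typically used to compare MPI over IPv4 and IPv6. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define MSG_SIZE (1 << 20)   /* 1 MiB per message (example value) */
#define REPS     100         /* number of round trips (example value) */

int main(int argc, char **argv)
{
    int rank;
    char *buf = malloc(MSG_SIZE);
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();

    for (int i = 0; i < REPS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    t1 = MPI_Wtime();
    if (rank == 0) {
        /* Each round trip moves MSG_SIZE bytes in each direction. */
        double mbytes = 2.0 * REPS * MSG_SIZE / (1024.0 * 1024.0);
        printf("Throughput: %.1f MiB/s\n", mbytes / (t1 - t0));
    }

    free(buf);
    MPI_Finalize();
    return 0;
}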

Figure: MPICH IPv6 throughput