Open MPI
Slides from Open MPI SC’15 SotU and MPI Forum BOFs
Here are the slides from the Open MPI State of the Union (SotU) BOF that George Bosilca, Nathan Hjelm, and I presented at the SC’15 trade show. Thanks to all who were able to join us at the BOF in person! UPDATE: Per request, I have also added the slides from the MPI Forum BOF…
Open MPI new versioning scheme and roadmap
Open MPI recently updated its version numbering scheme and development roadmap. This new scheme aims to provide more semantic information in the Open MPI “A.B.C” version number to both end users and system administrators.…
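As context for what the “A.B.C” number means in practice, here is a minimal sketch (my own illustration, not from the post) of how an application can check which Open MPI version it was built against and which it is actually running with, using Open MPI’s OMPI_*_VERSION macros from mpi.h and the standard MPI-3 MPI_Get_library_version() call:

```c
/* Illustrative sketch, not from the post: report the Open MPI "A.B.C"
 * version at compile time (OMPI_* macros from Open MPI's mpi.h) and at
 * run time (standard MPI-3 MPI_Get_library_version()). */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

#if defined(OMPI_MAJOR_VERSION)
    /* Compile-time view: the A.B.C the application was built against */
    printf("Built against Open MPI %d.%d.%d\n",
           OMPI_MAJOR_VERSION, OMPI_MINOR_VERSION, OMPI_RELEASE_VERSION);
#endif

    /* Run-time view: whatever MPI library is actually loaded */
    char lib[MPI_MAX_LIBRARY_VERSION_STRING];
    int len;
    MPI_Get_library_version(lib, &len);
    printf("Running with: %s\n", lib);

    MPI_Finalize();
    return 0;
}
```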
The state of libfabric in Open MPI
Yesterday morning, I gave a presentation at the 2015 OpenFabrics Software Developers’ Workshop. I discussed the status of libfabric support in Open MPI. Here’s a copy of my slides:…
A Farewell to LAM/MPI
With a little sadness, I note that LAM/MPI was officially retired recently. LAM/MPI’s hosting provider, Indiana University, decided not to renew the lam-mpi.org domain. As of a few weeks ago, LAM/MPI’s web site is no more, and its domain is in the process of expiring.…
Open MPI: behind the scenes
Working on an MPI implementation isn’t always sexy. There’s a lot of grubby, grubby work that needs to happen on a continual basis to produce a production-quality MPI implementation that can be used for real-world HPC applications. Sure, we always need to work on optimizing short messag…
Tree-based launch in Open MPI (part 2)
In my prior blog entry, I described the basics of Open MPI’s tree-based launching system over ssh (yes, there are still some valid / good reasons for using ssh over a native job scheduler / resource manager’s parallel launch mechanisms…). That entry got a little long, so I split th…
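As a rough illustration of why a tree beats a flat loop of ssh connections, here is a small sketch of the arithmetic behind a k-ary launch tree. This is my own simplified example, not Open MPI’s actual plm/routed code, and the fan-out value is an assumption (Open MPI’s is tunable):

```c
/* Hedged sketch (not Open MPI's launch code): the arithmetic of a
 * k-ary launch tree.  Daemon i launches daemons k*i+1 .. k*i+k, so a
 * launch that would take O(N) serial ssh connections from mpirun
 * instead completes in O(log_k N) "waves". */
#include <stdio.h>

#define RADIX 2   /* assumed fan-out per daemon; real value is tunable */

static void print_children(int me, int num_daemons)
{
    printf("daemon %d launches:", me);
    for (int c = RADIX * me + 1;
         c <= RADIX * me + RADIX && c < num_daemons; c++) {
        printf(" %d", c);
    }
    printf("\n");
}

int main(void)
{
    const int num_daemons = 15;   /* e.g., one daemon per node */
    for (int i = 0; i < num_daemons; i++) {
        print_children(i, num_daemons);
    }
    return 0;
}
```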
Tree-based launch in Open MPI
I’ve mentioned it before: the run-time systems of MPI implementations are frequently unsung heroes. A lot of blood, sweat, tears, and innovation goes into parallel run-time systems, particularly those that can scale to very large systems. But they’re not discussed often, mainly because…
The “vader” shared memory transport in Open MPI: Now featuring 3 flavors of zero copy!
Today’s blog post is by Nathan Hjelm, a Research Scientist at Los Alamos National Laboratory, and a core developer on the Open MPI project. The latest version of the “vader” shared memory Byte Transport Layer (BTL) in the upcoming Open MPI v1.8.4 release is bringing better small me…
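For readers curious what “zero copy” between separate processes on the same node looks like at the OS level, here is a small stand-alone sketch of Linux cross-memory attach (CMA), one of the single-copy mechanisms a shared memory transport like vader can draw on. It is an illustration only, not Open MPI code:

```c
/* Illustrative demo of Linux cross-memory attach (CMA): a child fills
 * a private buffer, tells the parent its address over a pipe, and the
 * parent copies the data straight out of the child's address space
 * with process_vm_readv() -- no intermediate shared buffer. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <signal.h>
#include <unistd.h>
#include <sys/uio.h>
#include <sys/wait.h>

#define LEN 64

int main(void)
{
    int fds[2];
    if (pipe(fds) != 0) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                          /* child: the "sender" */
        char buf[LEN];
        strcpy(buf, "hello from the child's private memory");
        void *addr = buf;
        write(fds[1], &addr, sizeof(addr));  /* tell parent where the data is */
        pause();                             /* keep the buffer alive */
        return 0;
    }

    /* parent: the "receiver" */
    void *remote_addr;
    read(fds[0], &remote_addr, sizeof(remote_addr));

    char local[LEN];
    struct iovec local_iov  = { .iov_base = local,       .iov_len = LEN };
    struct iovec remote_iov = { .iov_base = remote_addr, .iov_len = LEN };

    ssize_t n = process_vm_readv(pid, &local_iov, 1, &remote_iov, 1, 0);
    if (n < 0) { perror("process_vm_readv"); kill(pid, SIGKILL); return 1; }

    printf("read %zd bytes: \"%s\"\n", n, local);
    kill(pid, SIGTERM);
    waitpid(pid, NULL, 0);
    return 0;
}
```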
MPI over 40Gb Ethernet
Half-round-trip ping-pong latency may be the first metric that everyone looks at with MPI in HPC, but bandwidth is one of the next metrics examined. 40Gbps Ethernet has been available for switch-to-switch links for quite a while, and 40Gbps NICs are starting to make their way down to the host. How d…
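For reference, the shape of the microbenchmark behind numbers like these is roughly the following: a two-rank streaming test that divides bytes moved by elapsed wall-clock time. The message size and iteration count below are illustrative assumptions, not values from the post:

```c
/* Hedged sketch of a simple MPI bandwidth test: rank 0 streams large
 * messages to rank 1, then waits for a short ack so the timer covers
 * delivery rather than just injection.  Sizes are illustrative. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <mpi.h>

#define MSG_SIZE (1 << 20)   /* 1 MiB per message (assumed) */
#define ITERS    1000        /* assumed iteration count */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char *buf = malloc(MSG_SIZE);
    memset(buf, 0, MSG_SIZE);

    MPI_Barrier(MPI_COMM_WORLD);
    double start = MPI_Wtime();

    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG_SIZE, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_SIZE, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }
    }
    /* Final ack back to rank 0 so its timer includes delivery */
    if (rank == 0) {
        MPI_Recv(buf, 1, MPI_BYTE, 1, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    } else if (rank == 1) {
        MPI_Send(buf, 1, MPI_BYTE, 0, 1, MPI_COMM_WORLD);
    }

    double elapsed = MPI_Wtime() - start;
    if (rank == 0) {
        double gbits = (double)MSG_SIZE * ITERS * 8 / 1e9;
        printf("~%.2f Gbps over %d messages of %d bytes\n",
               gbits / elapsed, ITERS, MSG_SIZE);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}
```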