HPC

MPI Forum: What’s Next?

Now that we’re just getting into the MPI-3.0 era, what’s next? The MPI Forum is still holding active meetings.  What is left to do?  Isn’t MPI “done”? Nope.  MPI is an ever-changing standard that evolves to meet the needs of HPC.  And since HPC keeps changing, so does MPI.…

Ain’t your father’s TCP

TCP?  Who cares about TCP in HPC? More and more people, actually.  With the commoditization of HPC, lots of newbie HPC users are intimidated by the specialized, one-off networks traditionally used in HPC, and opt for the simplicity and universality of Ethernet. And it turns out that TCP doesn’t suck ne…

Modern GPU Integration in MPI

Today’s guest post is from Rolf vandeVaart, a Senior CUDA Engineer with NVIDIA. GPUs are becoming quite popular as accelerators in High Performance Computing clusters. For example, check out Titan, a recent entry in the Top 500 list from Oak Ridge National Laboratory. Titan has 18,688 nodes (299,00…
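
As a minimal sketch of what CUDA-aware MPI buys you (assuming an MPI library built with CUDA support, e.g. Open MPI configured with --with-cuda; illustrative code, not taken from the post), a device pointer can be handed straight to the usual send and receive calls:

    /* Minimal sketch: sending a GPU buffer directly with a CUDA-aware MPI.
     * Run with at least two ranks.  Without CUDA support in the MPI
     * library, the data would have to be staged through host memory with
     * cudaMemcpy first. */
    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double *d_buf;                       /* device (GPU) buffer */
        cudaMalloc((void **)&d_buf, 1024 * sizeof(double));
        cudaMemset(d_buf, 0, 1024 * sizeof(double));

        if (rank == 0) {
            /* The device pointer goes straight to MPI; a CUDA-aware
             * library moves the data without an explicit cudaMemcpy. */
            MPI_Send(d_buf, 1024, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(d_buf, 1024, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }

        cudaFree(d_buf);
        MPI_Finalize();
        return 0;
    }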

Process and memory affinity: why do you care?

I’ve written about NUMA effects and process affinity on this blog lots of times in the past.  It’s a complex topic that has a lot of real-world effects on your MPI and HPC applications.  If you’re not using processor and memory affinity, you’re likely experiencing performance…
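
As a quick, Linux-specific sketch (illustrative only), each rank can report which core it is currently running on; an unbound job will typically show ranks piling onto or wandering between cores:

    /* Sketch: spot-check process binding by having every rank print the
     * core it is currently on.  sched_getcpu() is a GNU/Linux extension. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        printf("Rank %d is running on core %d\n", rank, sched_getcpu());
        MPI_Finalize();
        return 0;
    }

Open MPI’s mpirun also has a --report-bindings option that shows where each process was bound at launch.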

Message size: big or small?

It’s the eternal question: should I send lots and lots of small messages, or should I combine multiple small messages into a single, bigger message? Unfortunately, the answer is: it depends.  There are a lot of factors in play.…
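
To make the trade-off concrete, here is a rough sketch of the two patterns (hypothetical send_small() and send_big() helpers; matching receives omitted): many small messages pay the per-message latency over and over, while one big message pays it once but delivers nothing until the whole buffer arrives.

    #include <mpi.h>

    #define N     1000
    #define CHUNK 16            /* doubles per small message */

    /* Hypothetical pattern 1: lots of small messages -- the per-message
     * latency is paid N times. */
    void send_small(double *data, int dest)
    {
        for (int i = 0; i < N; ++i) {
            MPI_Send(data + i * CHUNK, CHUNK, MPI_DOUBLE, dest, 0,
                     MPI_COMM_WORLD);
        }
    }

    /* Hypothetical pattern 2: one aggregated message -- the latency is
     * paid once, but the receiver sees nothing until the entire buffer
     * has arrived. */
    void send_big(double *data, int dest)
    {
        MPI_Send(data, N * CHUNK, MPI_DOUBLE, dest, 0, MPI_COMM_WORLD);
    }

Which one wins depends on latency, bandwidth, copy costs, and what the receiver can overlap, so measure both.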

MPI and Java: redux

In a prior blog entry, I discussed how we are resurrecting a Java interface for MPI in the upcoming v1.7 release of Open MPI. Some users have already experimented with this interface and found it lacking in at least two ways: Creating datatypes of multi-dimensional arrays doesn’t work becaus…

MPI_REQUEST_FREE is Evil

It was pointed out to me that in my last blog post (Don’t leak MPI_Requests), I failed to mention the MPI_REQUEST_FREE function. True enough: I did fail to mention it.  But I did so on purpose, because MPI_REQUEST_FREE is evil. Let me explain…
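
As a rough sketch of the core hazard (with a hypothetical risky_send() helper): freeing an active send request throws away the only handle that could tell you when the send buffer is safe to reuse.

    #include <mpi.h>

    /* Hypothetical helper illustrating the hazard. */
    void risky_send(double *buf, int count, int dest)
    {
        MPI_Request req;
        MPI_Isend(buf, count, MPI_DOUBLE, dest, 0, MPI_COMM_WORLD, &req);
        MPI_Request_free(&req);   /* legal, but now there is no way to know
                                     when the send has completed... */
        buf[0] = 42.0;            /* ...so this overwrite may race with data
                                     that is still being transmitted */
    }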

Don’t leak MPI_Requests

With the Mayan apocalypse safely behind us, we can now discuss MPI again. An MPI application developer came to me the other day with a potential bug in Open MPI: he noticed that Open MPI was consuming vast amounts of memory such that trying to allocate memory from his application failed.…
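
As a rough sketch of the pattern (hypothetical leaky() and not_leaky() helpers, not the developer’s actual code): every request returned by a nonblocking call must eventually be completed with MPI_Wait, MPI_Test, or one of their variants, otherwise the library’s bookkeeping for it stays allocated and memory use grows without bound.

    #include <mpi.h>

    /* Hypothetical helper: the request is never completed, so whatever
     * the MPI library allocated to track this send is leaked. */
    void leaky(double *buf, int count, int dest)
    {
        MPI_Request req;
        MPI_Isend(buf, count, MPI_DOUBLE, dest, 0, MPI_COMM_WORLD, &req);
    }

    /* Hypothetical helper: completing the request lets the library
     * release its internal resources. */
    void not_leaky(double *buf, int count, int dest)
    {
        MPI_Request req;
        MPI_Isend(buf, count, MPI_DOUBLE, dest, 0, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }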

McMPI

Today’s guest blog entry comes from Daniel Holmes, an Applications Developer at EPCC.  I met Jeff at EuroMPI in September, and he invited me to write a few words on my experience of developing an MPI library. My PhD involved building a message passing library using C#; not accessing a…