Overlap of communication and computation (part 2)
In part 1 of this series, I discussed various peer-wise technologies and techniques that MPI implementations typically use for communication / computation overlap. MPI-3.0, published in 2012, forced a change in the overlap game. Specifically: most prior overlap work had been in the area of individua…
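As a quick illustration of the MPI-3 side of this (a minimal sketch, not taken from the post; `compute_something_else()` is a hypothetical stand-in for real application work), here's the basic shape of overlapping a nonblocking collective with local computation:

```c
#include <mpi.h>

void compute_something_else(void);  /* hypothetical application work */

void overlap_allreduce(double *in, double *out, int count)
{
    MPI_Request req;

    /* MPI-3's MPI_Iallreduce starts the reduction and returns immediately */
    MPI_Iallreduce(in, out, count, MPI_DOUBLE, MPI_SUM,
                   MPI_COMM_WORLD, &req);

    /* Unrelated work that can (hopefully) overlap with the collective;
       how much overlap you actually get depends on the implementation */
    compute_something_else();

    /* The collective must complete before "out" is used */
    MPI_Wait(&req, MPI_STATUS_IGNORE);
}
```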
Overlap of communication and computation (part 1)
I’ve mentioned computation / communication overlap before (e.g., here, here, and here). Various types of networks and NICs have long since had some form of overlap. Some had better-quality overlap than others, from an HPC perspective. But with MPI-3, we’re really entering a new realm of…
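For reference, the classic pre-MPI-3 form of overlap uses nonblocking point-to-point calls. A minimal sketch (`do_local_work()` is a hypothetical stand-in, not from the post):

```c
#include <mpi.h>

void do_local_work(void);  /* hypothetical application work */

void exchange(double *sendbuf, double *recvbuf, int count, int peer)
{
    MPI_Request reqs[2];

    /* Post both halves of the exchange without blocking */
    MPI_Irecv(recvbuf, count, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(sendbuf, count, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);

    /* Whether real overlap happens here depends on the NIC and the
       MPI implementation's progress engine */
    do_local_work();

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
}
```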
HPC over UDP
A few months ago, I posted an entry entitled “HPC in L3”. My only point for that entry was to remove the “HPC in L3? That’s a terrible idea!” knee-jerk reaction that we old-timer HPC types have. I mention this because we released a free software update a few days ago f…
Unsung heroes: MPI run time environments
Most people immediately think of short message latency, or perhaps large message bandwidth, when thinking about MPI. But have you ever thought about what your MPI implementation has to do before your application even calls MPI_INIT? Hint: it’s pretty crazy complex, from an engineering perspecti…
Traffic in parallel
In my last entry, I gave a vehicles-driving-in-a-city analogy for network traffic. Let’s tie that analogy back to HPC and MPI.…
Still more traffic
I periodically write about network traffic, and how general / datacenter network traffic analysis is related to MPI / HPC. In my last entry, I mentioned how network traffic has many characteristics in common with distributed computing. Routing decisions, for example, are made independently at each…
Traffic (redux)
I’ve written about network traffic before (see this post and this post). It’s the subject of endless blog posts, help forums, and instructional guides across the internet. In a High Performance Computing (HPC) context, there are some fascinating aspects about network traffic that are fai…
BigMPI: You can haz moar counts!
Jeff Hammond has recently started developing the BigMPI library. BigMPI is intended to handle all the drudgery of sending and receiving large messages in MPI. In Jeff’s own words: [BigMPI is an] Interface to MPI for large messages, i.e. those where the count argument exceeds INT_MAX but is st…
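To see the drudgery Jeff is talking about, here's a naive hand-rolled chunking loop of the kind BigMPI saves you from writing (a hypothetical sketch, not BigMPI's actual code; BigMPI itself takes a more sophisticated datatype-based approach):

```c
#include <limits.h>
#include <stddef.h>
#include <mpi.h>

/* MPI's count argument is an int, so a buffer longer than INT_MAX
   elements must be sent in pieces */
void send_big(const char *buf, size_t count, int dest, MPI_Comm comm)
{
    size_t sent = 0;
    while (sent < count) {
        size_t remaining = count - sent;
        int chunk = remaining > INT_MAX ? INT_MAX : (int)remaining;
        MPI_Send(buf + sent, chunk, MPI_CHAR, dest, 0, comm);
        sent += (size_t)chunk;
    }
}
```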
Networks for MPI
It seems like we’ve gotten a rash of “how do I set up my new cluster for MPI?” questions on the Open MPI mailing list recently. I take this as a Very Good Thing, actually: it means more and more people are tinkering with and discovering the power of parallel computing, HPC, a…