Jeff Squyres

The MPI Guy

UCS Platform Software

Dr. Jeff Squyres is Cisco's representative to the MPI Forum standards body and is Cisco's core software developer in the open source Open MPI project. He has worked in the High Performance Computing (HPC) field since his early graduate-student days in the mid-1990s, and is a chapter author of the MPI-2 and MPI-3 standards.

Jeff received both a BS in Computer Engineering and a BA in English Literature from the University of Notre Dame in 1994; he received an MS in Computer Science and Engineering from Notre Dame two years later, in 1996. After some active-duty tours in the military, Jeff received his Ph.D. in Computer Science and Engineering from Notre Dame in 2004. Jeff then worked as a post-doctoral research associate at Indiana University until he joined Cisco in 2006.

At Cisco, Jeff is part of the VIC group (Virtual Interface Card, Cisco's virtualized server NIC) within the larger UCS server group. He designs and writes systems-level software for optimized network I/O in HPC and other high-performance applications. Jeff also represents Cisco in several open source software communities and at the MPI Forum standards body.

Articles

Social media login no longer required for comments

A number of you complained when blogs.cisco.com switched to requiring a social media login to leave comments. It turns out that you were not alone. Industry-wide, it seems that many people do not want to associate their personal Facebook/Twitter/etc. logins with work-related social media (i.e., thi…

EuroMPI 2013: papers due soon!

Consider this a public service announcement: don’t forget that EuroMPI 2013 papers are due soon! EuroMPI is the place to see where the documented standard of MPI hits reality, both in terms of implementations and applications.  Come talk to real implementors, real users, and hear about state-o…

MPI for mobile devices (or not)

Every once in a while, the idea pops up again: why not use all the world’s cell phones for parallel and/or distributed computations? There’s gazillions of these phones — think of the computing power! After all, an army of ants can defeat a war horse, right? Well… yes… a…

MPI Forum: What’s Next?

Now that we’re just starting into the MPI-3.0 era, what’s next? The MPI Forum is still having active meetings.  What is left to do?  Isn’t MPI “done”? Nope.  MPI is an ever-changing standard to meet the needs of HPC.  And since HPC keeps changing, so does MPI.…

Ain’t your father’s TCP

TCP?  Who cares about TCP in HPC? More and more people, actually.  With the commoditization of HPC, lots of newbie HPC users are intimidated by special, one-off, traditional HPC types of networks and opt for the simplicity and universality of Ethernet. And it turns out that TCP doesn’t suck ne…

Modern GPU Integration in MPI

Today’s guest post is from Rolf vandeVaart, a Senior CUDA Engineer with NVIDIA. GPUs are becoming quite popular as accelerators in High Performance Computing clusters. For example, check out Titan, a recent entry in the Top 500 list from Oak Ridge National Laboratory. Titan has 18,688 nodes (299,00…
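(To make the idea concrete — this sketch is not from the post itself — the C program below shows what a CUDA-aware MPI lets you do: hand a device pointer directly to MPI_Send/MPI_Recv. It assumes an MPI build with CUDA support; without that, you would have to stage the data through a host buffer with cudaMemcpy first. The buffer size and ranks are made-up values.)

    /* Hypothetical sketch of the CUDA-aware MPI idea: pass a device
     * pointer straight to MPI and let the library move the bytes.
     * Assumes an MPI build with CUDA support. */
    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv)
    {
        int rank;
        double *d_buf;                      /* GPU (device) memory */
        const int count = 1024;             /* illustrative size */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        cudaMalloc((void **) &d_buf, count * sizeof(double));

        if (0 == rank) {
            /* No cudaMemcpy to a host staging buffer needed */
            MPI_Send(d_buf, count, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        } else if (1 == rank) {
            MPI_Recv(d_buf, count, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }

        cudaFree(d_buf);
        MPI_Finalize();
        return 0;
    }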

Process and memory affinity: why do you care?

I’ve written about NUMA effects and process affinity on this blog lots of times in the past.  It’s a complex topic that has a lot of real-world effects on your MPI and HPC applications.  If you’re not using processor and memory affinity, you’re likely experiencing performance…
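(As a minimal illustration — not taken from the post — here is a Linux-only C sketch of pinning a process to a core with sched_setaffinity(2). The hard-coded core number is purely illustrative; in practice, MPI launchers can bind processes for you.)

    /* Minimal Linux-only sketch: pin the calling process to one core
     * with sched_setaffinity(2).  The core number is hard-coded purely
     * for illustration. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(0, &set);       /* logical core 0 */

        /* pid 0 means "the calling process" */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            exit(1);
        }
        printf("Pinned to core 0\n");
        return 0;
    }

Once a process stays put, Linux's default first-touch policy tends to keep its memory on the local NUMA node — the memory-affinity half of the story.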

Message size: big or small?

It’s the eternal question: should I send lots and lots of small messages, or should I glump multiple small messages into a single, bigger message? Unfortunately, the answer is: it depends.  There are a lot of factors in play.…
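(For a schematic look at the two extremes — again, a sketch under stated assumptions rather than anything from the post — the C program below sends N small messages, then the same data as one aggregated message. N, LEN, the ranks, and the tags are all made-up values.)

    /* Illustrative-only sketch of the two extremes.  N, LEN, ranks,
     * and tags are made-up values. */
    #include <mpi.h>
    #include <string.h>

    #define N   1000            /* number of small chunks */
    #define LEN 16              /* doubles per chunk */

    int main(int argc, char **argv)
    {
        static double chunks[N][LEN];   /* payload to transmit */
        static double big[N * LEN];     /* aggregation buffer */
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (0 == rank) {
            /* Extreme 1: N small sends -- pay per-message latency N times */
            for (int i = 0; i < N; ++i)
                MPI_Send(chunks[i], LEN, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);

            /* Extreme 2: one big send -- one latency, but an extra copy */
            memcpy(big, chunks, sizeof(big));
            MPI_Send(big, N * LEN, MPI_DOUBLE, 1, 1, MPI_COMM_WORLD);
        } else if (1 == rank) {
            for (int i = 0; i < N; ++i)
                MPI_Recv(chunks[i], LEN, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            MPI_Recv(big, N * LEN, MPI_DOUBLE, 0, 1, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }

        MPI_Finalize();
        return 0;
    }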

MPI and Java: redux

In a prior blog entry, I discussed how we are resurrecting a Java interface for MPI in the upcoming v1.7 release of Open MPI. Some users have already experimented with this interface and found it lacking in at least two ways: Creating datatypes of multi-dimensional arrays doesn’t work becaus…