
Jeff Squyres

The MPI Guy

UCS Platform Software

Dr. Jeff Squyres is Cisco's representative to the MPI Forum standards body and is Cisco's core software developer in the open source Open MPI project. He has worked in the High Performance Computing (HPC) field since his early graduate-student days in the mid-1990s, and is a chapter author of the MPI-2 and MPI-3 standards.

Jeff received both a BS in Computer Engineering and a BA in English Literature from the University of Notre Dame in 1994; he received an MS in Computer Science and Engineering from Notre Dame two years later in 1996. After some active duty tours in the military, Jeff received his Ph.D. in Computer Science and Engineering from Notre Dame in 2004. Jeff then worked as a post-doctoral research associate at Indiana University until he joined Cisco in 2006.

At Cisco, Jeff is part of the VIC group (Virtual Interface Card, Cisco's virtualized server NIC) in the larger UCS server group. He designs and writes systems-level software for optimized network I/O in HPC and other high-performance applications. Jeff also represents Cisco to several open source software communities and the MPI Forum standards body.

Articles

The state of libfabric in Open MPI

Yesterday morning, I gave a presentation at the 2015 OpenFabrics Software Developers’ Workshop.  I discussed the status of libfabric support in Open MPI. Here’s a copy of my slides:…

Cisco usNIC libfabric provider presentation

Earlier this morning, I gave a presentation at the 2015 OpenFabrics Software Developers’ Workshop.  I discussed Cisco’s experiences with writing providers for both the Linux Verbs API and the Libfabric API. Here’s a copy of my slides:…

MPI-3.1! …not quite yet

The MPI Forum met for our quarterly meeting last week in Portland, Oregon. The main goal of the meeting was to pass the MPI-3.1 standard into law.  MPI-3.1 contains a bunch of errata from MPI-3.0, and a small number of new things.…

A Farewell to LAM/MPI

With a little sadness, I note that LAM/MPI was officially retired recently. LAM/MPI’s hosting provider, Indiana University, decided not to renew the lam-mpi.org domain.  As of a few weeks ago, LAM/MPI’s web site is no more, and its domain is in the process of expiring.…

Open MPI: behind the scenes

Working on an MPI implementation isn’t always sexy.  There’s a lot of grubby, grubby work that needs to happen on a continual basis to produce a production-quality MPI implementation that can be used for real-world HPC applications. Sure, we always need to work on optimizing short messag…

MPI 3.1: coming soon to an implementation near you

The next MPI Forum meeting will be in Portland, OR, USA, in early March. One of the major topics on the agenda will be voting on the MPI 3.1 standard. You might be wondering what’s new in MPI-3.1. I’m glad you asked.…

Tree-based launch in Open MPI (part 2)

In my prior blog entry, I described the basics of Open MPI’s tree-based launching system over ssh (yes, there are still some valid / good reasons for using ssh over a native job scheduler / resource manager’s parallel launch mechanisms…). That entry got a little long, so I split th…

Tree-based launch in Open MPI

I’ve mentioned it before: the run-time systems of MPI implementations are frequently unsung heroes. A lot of blood, sweat, tears, and innovation goes into parallel run-time systems, particularly those that can scale to very large systems.  But they’re not discussed often, mainly because…

Holiday wishes

As usual, in the post-Supercomputing / post-US-Thanksgiving-holiday lull, the work that we have all put off since we started ignoring it to prepare for Supercomputing catches up to us.  Inevitably, it means that my writing here at the blog falls behind in December.  Sorry, folks! To make up for that…