
Jeff Squyres

The MPI Guy

UCS Platform Software

Dr. Jeff Squyres is Cisco's representative to the MPI Forum standards body and is Cisco's core software developer in the open-source Open MPI project. He has worked in the High Performance Computing (HPC) field since his early graduate-student days in the mid-1990s, and is a chapter author of the MPI-2 and MPI-3 standards.

Jeff received both a BS in Computer Engineering and a BA in English Literature from the University of Notre Dame in 1994; he received an MS in Computer Science and Engineering from Notre Dame two years later, in 1996. After some active-duty tours in the military, Jeff received his Ph.D. in Computer Science and Engineering from Notre Dame in 2004. Jeff then worked as a post-doctoral research associate at Indiana University until he joined Cisco in 2006.

At Cisco, Jeff is part of the VIC group (Virtual Interface Card, Cisco's virtualized server NIC) within the larger UCS server group. He designs and writes systems-level software for optimized network I/O in HPC and other high-performance applications. Jeff also represents Cisco in several open source software communities and at the MPI Forum standards body.

Articles

libfabric support of usNIC in Open MPI

I’ve previously written about libfabric. Here are some highlights: libfabric is a set of next-generation, community-driven, ultra-low latency networking APIs; the APIs are not tied to any particular networking hardware model; Cisco is actively helping define, design, and develop the libfab…

usNIC support for the Intel MPI Library

Cisco is pleased to announce its intention to support the Intel MPI Library™ with usNIC on the UCS server and Nexus switch product lines over the ultra-low latency Ethernet and routable IP transports, at both 10GE and 40GE speeds. usNIC will be enabled by a simple library plugin to the uDAPL frame…

Top 5 Reasons the HPC Community Should Care About libfabric

I’ve mentioned libfabric on this blog a few times: it’s a set of next-generation APIs that allow direct access to networking hardware (e.g., high-speed / low latency NICs) from Linux userspace (kernel access is in the works). To give you a little perspective: the libfabric APIs are aimed…
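
To give a flavor of what those APIs look like, here is a minimal C sketch (an illustration only, not taken from the post itself; it assumes libfabric's development headers are installed and the program is linked with -lfabric) that simply asks libfabric which providers it can find on the system:

    #include <stdio.h>
    #include <rdma/fabric.h>

    int main(void)
    {
        struct fi_info *info = NULL;

        /* Ask libfabric for every provider/fabric it knows how to use here. */
        int ret = fi_getinfo(FI_VERSION(1, 0), NULL, NULL, 0, NULL, &info);
        if (ret != 0) {
            fprintf(stderr, "fi_getinfo() failed: %s\n", fi_strerror(-ret));
            return 1;
        }

        /* Walk the returned list and print each provider/fabric pair. */
        for (struct fi_info *cur = info; cur != NULL; cur = cur->next) {
            printf("provider: %s, fabric: %s\n",
                   cur->fabric_attr->prov_name, cur->fabric_attr->name);
        }

        fi_freeinfo(info);
        return 0;
    }

Running it prints one line per provider/fabric pair that libfabric is willing to use on that machine, which is roughly the same discovery step a libfabric-based MPI implementation performs at startup.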

“Using Advanced MPI” book (i.e., MPI-3 for the rest of us)

I’m stealing this text directly from Torsten Hoefler‘s blog, because I think it’s relevant to many of this blog’s readers: Our book on “Using Advanced MPI” will appear in about a month — now is the time to pre-order on Amazon at a reduced price. It is released by t…

Supercomputing is upon us!

It’s that time of year again — we’re at about T-2.5 weeks to the Supercomputing conference and trade show; SC’14 is in New Orleans, November 16-21. Are you going to get some tasty gumbo and supercharged computing power? If so, come say hi! The Cisco booth is 2715. …

The “vader” shared memory transport in Open MPI: Now featuring 3 flavors of zero copy!

Today’s blog post is by Nathan Hjelm, a Research Scientist at Los Alamos National Laboratory and a core developer on the Open MPI project. The latest version of the “vader” shared memory Byte Transfer Layer (BTL) in the upcoming Open MPI v1.8.4 release is bringing better small me…

HPC schedulers: What is a “slot”?

Today’s guest post comes from Ralph Castain, a principal engineer at Intel. The bulk of this post is an email he sent explaining the concept of a “slot” in typical HPC schedulers. This is a bit of a departure from the normal fare on this blog, but it is still a critical concept to unde…
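
As a concrete illustration (with hypothetical host names and counts, not taken from Ralph's email), slots are commonly declared in an Open MPI-style hostfile, one line per node:

    # Each line names a node and how many slots (schedulable
    # process positions) it offers to the launcher.
    node01 slots=4
    node02 slots=4
    node03 slots=8

Launching with "mpirun --hostfile myhosts -np 16 ./a.out" would then map 16 processes onto those 16 available slots.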

usNIC provider contributed to libfabric

Today’s guest post is by Reese Faucette, one of my fellow usNIC team members here at Cisco. I’m pleased to announce that this past Friday, Cisco contributed a usNIC-based provider to libfabric, the new API in the works from the OpenFabrics Interfaces Working Group. (Editor’s note: I…

MPI-3.1

As you probably already know, the MPI-3.0 document was published in September of 2012. We even got a new logo for MPI-3.  Woo hoo! The MPI Forum has been busy working on both errata to MPI-3.0 (which will be collated and published as “MPI-3.1”) and all-new functionality for MPI-4.0. The…