This post expands on my previous posts about what makes onePK better and about the onePK software architecture. Here I focus on the application deployment options that onePK makes available.

The deployment options are summarized in the diagram below.

[Diagram: onePK application deployment options]

Process hosting means that the onePK application runs within a container on the same hardware as the network operating system (NOS). Container models vary with the container technology and the specific NOS. The system resources allocated to such an application can be controlled through the container configuration.

Blade hosting means the onePK application runs on a server blade housed in the same chassis as the network device; it is typified by the ISR G2 29XX and 39XX platforms with UCS-E server blades.

End-point hosting is the “traditional” mode of application hosting, in which the application runs on a separate server; this is how most applications are deployed today.

Of course, these deployment options apply to applications in general, even those that don’t use onePK APIs. What makes this special is that the applications can run on, or in, the network element and use the onePK APIs to interrogate and control the NOS. This gives applications much more scope to influence network devices, and therefore how the network handles application traffic.

In general, the same onePK APIs and capabilities are available to applications in any of the three deployment options. In practice, each deployment option is associated with particular network operating systems and hardware platforms, and with their specific hardware and software features.
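To make that concrete, below is a minimal sketch of how an application might open a onePK session to a network element, assuming the onePK Java SDK exposes NetworkApplication, NetworkElement and SessionConfig classes roughly along the lines of the SDK documentation; the class and method names, element address and credentials here are illustrative, not taken from a specific SDK release.

```java
import java.net.InetAddress;

import com.cisco.onep.core.exception.OnepException;
import com.cisco.onep.element.NetworkApplication;
import com.cisco.onep.element.NetworkElement;
import com.cisco.onep.element.SessionConfig;

public class HelloOnePK {
    public static void main(String[] args) throws Exception {
        // The NetworkApplication object is the entry point into onePK (names assumed from SDK docs).
        NetworkApplication app = NetworkApplication.getInstance();
        app.setName("hello-onepk");

        // For process hosting this is effectively the local NOS; for blade or
        // end-point hosting it is the reachable address of the network element.
        // 10.0.0.1 is a placeholder address.
        NetworkElement element = app.getNetworkElement(InetAddress.getByName("10.0.0.1"));

        // TLS transport is assumed here; an on-box (process-hosted) application
        // may be able to use a local transport instead.
        SessionConfig config = new SessionConfig(SessionConfig.SessionTransportMode.TLS);

        try {
            element.connect("onepk-user", "onepk-password", config); // placeholder credentials
            // Interrogate the NOS: read a basic element property.
            System.out.println("Connected to " + element.getProperty().getSysName());
            element.disconnect();
        } catch (OnepException e) {
            System.err.println("onePK session failed: " + e.getMessage());
        }
    }
}
```

Whether this runs in a container on the device, on a UCS-E blade in the chassis, or on an external server, the application logic stays the same; only where it runs, and how close it sits to the NOS, changes.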

As an example of process hosting, a onePK application can run in a virtual machine (VM), packaged as an OVA, on a Nexus platform. This is what Shelly Cadora has done with the “Wiring” application that we discuss here.

Examples of onePK-based applications in the blade model are the Cisco Cloud Connectors, discussed by our colleague Rony Gotesdyner in his blog post here. A specific example is the Identity Cloud Connector, which I shall say more about in my blog from Cisco Live, London, at the end of January 2013.

What these two examples have in common is that the onePK application is running on the same hardware as the NOS, or on hardware that is packaged and deployed in the same chassis as the routing or switching hardware.

This transforms a routing and switching hardware platform into a more general-purpose compute platform, suitable for delivering a range of IT services that are integrated with the very network they run over, with very low latency between the “control” application and the NOS. This reduced latency, and the fact that the application is co-resident with the NOS, significantly changes what can be done.

The same form factor, with much the same power and cooling requirements, now serves not only as a networking platform but also as a compute platform for services running over, and/or controlling, the network. This is an extension of the concept we see in the unified computing (UCS) platforms, i.e. the collapse of networking and IT platforms into one, but now across all of our switching and routing platforms.