
Two weeks ago, I presented a webinar on Dynamic Fabric Automation (DFA) and went over the allocated hour to cover the content. Yesterday, as I was following up with a hands-on demo, I went over time too. This illustrates how rich DFA is and how much there is to say about it! Dynamic Fabric Automation is an environment for data center automation that is centered on the CPOM (Central Point of Management), a set of services provided with the new Data Center Network Manager (DCNM) release 7.0(1).

The services available on the CPOM provide the following:

  1. Power On Auto Provisioning (POAP)
  2. Inter-switch link connection verification
  3. A single console for configuration
  4. Network Auto-Config Profile provisioning
  5. Message processing for external orchestrators
  6. Automatic host provisioning
  7. Embedded management for network monitoring and data collection

All of these services are provided using standard protocols and applications. For example, the POAP service uses DHCP, TFTP and SCP/SFTP; by combining templates with a very intuitive and easy-to-use GUI, DCNM provides a simplified and systematic way of bringing up your data center fabric. The inter-switch link validation, or cable consistency check, allows the operator to verify the fabric connections against a predefined template and prevents unexpected connections from coming up.
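To make this concrete, here is a minimal sketch of the kind of work a POAP script ends up doing once the switch has pulled it over DHCP/TFTP: fetch a per-switch configuration over SFTP, keyed on the chassis serial number. The server address, credentials and file layout are assumptions for illustration, not the actual script DCNM generates.

```python
# Hedged sketch of the core of a POAP workflow: fetch a per-switch startup
# configuration over SFTP, keyed on the chassis serial number.
# Server address, credentials and naming convention are assumptions.
import paramiko

SFTP_SERVER = "10.0.0.10"          # assumed DCNM/SCP server address
SFTP_USER = "poap"                 # assumed provisioning account
SFTP_PASSWORD = "poap-password"    # in practice, supplied by the generated script

def fetch_startup_config(serial_number: str, dest: str = "/bootflash/poap.cfg") -> None:
    """Download the config file named after the switch serial number."""
    remote_path = f"/poap/configs/{serial_number}.cfg"   # assumed layout on the server

    transport = paramiko.Transport((SFTP_SERVER, 22))
    transport.connect(username=SFTP_USER, password=SFTP_PASSWORD)
    sftp = paramiko.SFTPClient.from_transport(transport)
    try:
        sftp.get(remote_path, dest)   # copy the rendered template to the switch
    finally:
        sftp.close()
        transport.close()

if __name__ == "__main__":
    fetch_startup_config("FOC1234ABCD")   # example serial number
```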

The Jabber process provides the single console for configuration, statistics and troubleshooting. Using any XMPP client, an operator can “chat” with the fabric devices; this approach offers the possibility to organize devices in chat groups that match their role, their location or simply some administrative set. With XMPP, a single command can be sent to multiple devices in a secure way.
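As an illustration of the idea, the sketch below uses the open-source slixmpp library to send one show command to a fabric device and print the reply; the JIDs, server name and credentials are placeholders, and the real device naming depends on how the fabric was brought up.

```python
# Minimal sketch: send a single show command to a fabric device over XMPP
# using the slixmpp library. JIDs, server and credentials are placeholders.
import slixmpp

class FabricConsole(slixmpp.ClientXMPP):
    def __init__(self, jid, password, device_jid, command):
        super().__init__(jid, password)
        self.device_jid = device_jid
        self.command = command
        self.add_event_handler("session_start", self.session_start)
        self.add_event_handler("message", self.on_reply)

    async def session_start(self, event):
        self.send_presence()
        await self.get_roster()
        # Each leaf/spine appears as an XMPP user; commands are sent as chat messages.
        self.send_message(mto=self.device_jid, mbody=self.command, mtype="chat")

    def on_reply(self, msg):
        if msg["type"] in ("chat", "normal"):
            print(msg["body"])   # command output comes back as a chat reply

if __name__ == "__main__":
    xmpp = FabricConsole("operator@dcnm.example.com", "secret",
                         "leaf-101@dcnm.example.com", "show interface brief")
    xmpp.connect()
    xmpp.process(forever=False)
```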

The most important element of the CPOM is certainly the network profile provisioning. Network profiles are not new, but they are usually located on the device itself; with DFA, the network profiles are stored in the CPOM LDAP database, which comes provisioned with a large set of profiles for various purposes.
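To give a feel for what a profile lookup involves, here is a hedged sketch of an LDAP query using the ldap3 library; the base DN, filter, attribute and profile names are assumptions for illustration and do not reflect the actual DCNM schema.

```python
# Hedged sketch of querying a network profile from an LDAP database with ldap3.
# Base DN, filter and attribute names are assumptions, not the DCNM schema.
from ldap3 import Server, Connection, ALL

server = Server("dcnm.example.com", get_info=ALL)
conn = Connection(server, user="cn=reader,dc=example,dc=com",
                  password="secret", auto_bind=True)

# Look up the auto-config profile that a leaf would instantiate for a segment.
conn.search(search_base="ou=profiles,dc=example,dc=com",            # assumed location
            search_filter="(profileName=defaultNetworkIpv4EfProfile)",
            attributes=["configCommands", "vlanId", "segmentId"])   # assumed attributes

for entry in conn.entries:
    print(entry)
```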

The Advanced Message Queuing Protocol (AMQP) is an open standard for passing messages between applications. DFA uses message queues to interface with the orchestrator and supports the most prevalent orchestrators on the market, including OpenStack, VMware vCloud Director and UCS Director. The AMQP function is provided by RabbitMQ, which comes with its own set of REST APIs. Together with the LDAP-stored profiles, it allows the deployment of networks on the fabric directly from the orchestrator dashboard, greatly reducing the risk of configuration error and empowering the server administrator to configure the network directly without using any NX-OS CLI commands.
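To illustrate the message-bus side, here is a minimal sketch that publishes a hypothetical "network create" event to a RabbitMQ topic exchange using the pika library; the exchange name, routing key and payload fields are assumptions, not the exact message format DCNM consumes.

```python
# Hedged sketch: publish a "network create" message to RabbitMQ with pika.
# Exchange name, routing key and payload fields are assumptions for illustration.
import json
import pika

credentials = pika.PlainCredentials("orchestrator", "secret")        # assumed account
params = pika.ConnectionParameters(host="dcnm.example.com", credentials=credentials)

connection = pika.BlockingConnection(params)
channel = connection.channel()
channel.exchange_declare(exchange="cisco_dfa", exchange_type="topic", durable=True)

event = {
    "event": "network.create",       # assumed event name
    "tenant": "tenant-blue",
    "segment_id": 30001,
    "profile": "defaultNetworkIpv4EfProfile",
    "subnet": "10.1.1.0/24",
}
channel.basic_publish(exchange="cisco_dfa",
                      routing_key="network.create",
                      body=json.dumps(event))
connection.close()
```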

DFA allows for 3 types of host provisioning:

  1. Non-automatic (manual)
  2. Semi-Automatic
  3. Fully Automatic

Manual mode is commonly used to provision physical servers that are configured with a static IP address. Using the GUI, a profile is created in DCNM and stored in the LDAP database. The network operator can then perform a manual pull of the profile on the leaf where the host is attached. This manual provisioning centralizes the host network configuration and reduces the risk of error associated with configuring each device by hand through CLI commands.

Semi-automatic mode is generally used for virtualized hosts managed by a virtual machine manager such as VMware vCenter. The controller attaches virtual machines to a virtual switch and assigns them to a specific VLAN; the connection to the leaf is in that case defined as a trunk. As in manual mode, the virtual machine network is provisioned for its specific VLAN in DCNM, but the leaf detects traffic from the VM tagged with that VLAN. If the VLAN is not already configured, the leaf queries the LDAP database and instantiates the profile for the VM automatically.

Fully automatic mode is generally used for virtualized hosts behind a virtual switch that supports VM provisioning and the Virtual Station Interface (VSI) Discovery and Configuration Protocol (VDP), and that are orchestrated from a central dashboard that communicates the VM profile to DCNM using AMQP. In this mode, the profile is automatically pulled from the LDAP database and the leaf communicates the local VLAN assignment of the VM back to the virtual switch, which automatically tags the traffic for the VM in the correct VLAN.

Finally, DCNM comes with a complete set of SNMP monitoring applications to visualize CPU, memory and network interface utilization, and it also interfaces with VMware vCenter to provide visibility into the ESX hosts and the virtual machines that they are hosting.
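DCNM handles this polling for you, but for context, here is a hedged sketch of the underlying operation: a single SNMP GET of a standard IF-MIB interface counter using the classic pysnmp high-level API (the community string and target are placeholders).

```python
# Hedged sketch: poll one IF-MIB interface counter over SNMPv2c with pysnmp.
# Community string and target hostname are placeholders.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

iterator = getCmd(SnmpEngine(),
                  CommunityData("public", mpModel=1),                 # SNMPv2c
                  UdpTransportTarget(("leaf-101.example.com", 161)),
                  ContextData(),
                  ObjectType(ObjectIdentity("IF-MIB", "ifHCInOctets", 1)))

errorIndication, errorStatus, errorIndex, varBinds = next(iterator)
if errorIndication:
    print(errorIndication)
else:
    for varBind in varBinds:
        print(" = ".join(x.prettyPrint() for x in varBind))
```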

DCNM itself comes for free; a license is required for the advanced monitoring and data collection features. DCNM is delivered as a single OVA that can be deployed within minutes on an existing VMware vSphere environment.

DCNM Block Diagram

While all of these services are great, I haven’t yet talked about the physical infrastructure that uses them.

The DFA sweet spot is a two-tier infrastructure composed of leaves and spines (a.k.a. a Clos fabric), but it also supports just about any topology you can imagine. The connection between leaf and spine is fully meshed and uses FabricPath (FP) as the frame encapsulation for transport. DFA adds the concept of enhanced forwarding on top of it by terminating the Layer 2 domain at the leaf level. This approach reduces the failure domain, avoids flooding unwanted BUM traffic (Broadcast, Unknown unicast and Multicast), and allows VLANs to be locally significant to a leaf.

How does this work? DFA terminates the Layer 2 domain on a VLAN interface at the leaf. A local process on the leaf learns the host IP address and injects the host route into MP-BGP. MP-BGP is the protocol used to propagate these unicast routes inside the fabric. MP-BGP is also used to distribute IP multicast group information. MP-BGP was chosen for its scalability and reduced convergence time.
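Conceptually, each ARP-learned adjacency becomes a /32 host route in the tenant VRF, tied to the local leaf, before MP-BGP propagates it across the fabric. The toy sketch below models that mapping; the names and values are invented for illustration.

```python
# Toy sketch of enhanced forwarding's host-route learning: each ARP-learned
# adjacency becomes a /32 route in the tenant VRF, keyed to the local leaf,
# ready to be redistributed into MP-BGP. Names and values are illustrative.
import ipaddress

LOCAL_SWITCH_ID = 101   # FabricPath switch-ID of this leaf (assumed value)

def host_route_from_arp(vrf: str, ip: str, mac: str, interface: str) -> dict:
    """Build the host-route entry a leaf would advertise for a learned host."""
    prefix = ipaddress.ip_network(f"{ip}/32")
    return {
        "vrf": vrf,
        "prefix": str(prefix),
        "next_hop_switch_id": LOCAL_SWITCH_ID,
        "learned_from": {"mac": mac, "interface": interface},
    }

route = host_route_from_arp("tenant-blue", "10.1.1.20",
                            "0050.56aa.bb01", "Vlan2001")
print(route)   # this is the state MP-BGP propagates across the fabric
```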

To support scalable environments, DFA replaces VLANs with Segment-IDs to represent a specific L2 segment (bridge domain). The Segment-ID solution uses a double 802.1Q tag for a total address space of 24 bits, allowing for the support of 16 million L2 segments inside the fabric.
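Since each 802.1Q tag carries a 12-bit VLAN ID, stacking two tags yields 12 + 12 = 24 bits of segment space, and the numbers are easy to check:

```python
# The Segment-ID is carried in two stacked 802.1Q tags of 12 bits each.
outer_tag = 0x7A5          # high-order 12 bits (illustrative values)
inner_tag = 0x3C1          # low-order 12 bits

segment_id = (outer_tag << 12) | inner_tag
print(segment_id)          # one of 2**24 possible values
print(2 ** 24)             # 16777216 -> roughly 16 million L2 segments
```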

A Segment-ID is added and removed by the DFA leaf nodes and is carried as part of the original Layer 2 header. DFA spines normally forward traffic based on FabricPath switch-ID values, but they can prune multi-destination traffic by parsing the Segment-ID field. Segment-IDs are unique within the fabric, but a particular Segment-ID can map to different VLANs on different leaves (since the VLAN is only locally significant).

Multi-tenancy is achieved by mapping a Segment-ID to a VRF that represents the tenant, thus providing traffic isolation between tenants. This Segment-ID corresponds to an L3 domain, and multiple L2 segments can be aggregated under a single L3 Segment-ID. This concept introduces support for overlapping subnets between multiple tenants, and the two-level approach allows the creation of multiple L2 networks for a single tenant. In short, Segment-IDs are used as fabric-global identifiers for addressing both Layer 2 bridge domains (formerly known as VLANs) and VRFs.
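A toy data structure makes the two-level model concrete: one L3 Segment-ID per tenant VRF, aggregating several L2 Segment-IDs, with subnets free to overlap across tenants. The names and numbers below are invented for illustration.

```python
# Toy model of DFA multi-tenancy: an L3 Segment-ID maps to a tenant VRF and
# aggregates several L2 Segment-IDs; subnets may overlap across tenants
# because each tenant is isolated in its own VRF. Values are illustrative.
tenants = {
    50001: {                              # L3 Segment-ID -> tenant VRF
        "vrf": "tenant-blue",
        "l2_segments": {
            30001: {"name": "web", "subnet": "10.1.1.0/24"},
            30002: {"name": "db",  "subnet": "10.1.2.0/24"},
        },
    },
    50002: {
        "vrf": "tenant-red",
        "l2_segments": {
            30101: {"name": "web", "subnet": "10.1.1.0/24"},   # same subnet, different VRF
        },
    },
}

for l3_seg, tenant in tenants.items():
    for l2_seg, net in tenant["l2_segments"].items():
        print(f"{tenant['vrf']}: L3 {l3_seg} / L2 {l2_seg} -> {net['subnet']}")
```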

Here is an illustration of a complete auto-provisioning process with OpenStack in 10 steps:

  1. The operator creates a new project, associates a network to it and spawns a new VM on that network.
  2. OpenStack notifies DCNM of the project; DCNM associates a Segment-ID and a VRF that correspond to the project and stores them in the LDAP database.
  3. OpenStack notifies DCNM of the network Segment-ID and the associated profile; DCNM stores the L2 Segment-ID definition and profile parameters in the LDAP database.
  4. The OpenStack controller instantiates the VM on the compute node and communicates the L2 Segment-ID characteristics.
  5. The compute node conveys the Segment-ID to the leaf via VDP.
  6. The leaf associates the next free VLAN from a dynamic range with the Segment-ID and passes it on to the compute node (see the sketch after the figure below).
  7. The leaf queries DCNM for the L3 Segment-ID and creates the VRF for the project; it also requests the L2 Segment-ID and instantiates the profile on the local VLAN interface.
  8. The virtual switch encapsulates the VM traffic with the VLAN tag it received in step 6.
  9. Upon reception of the first GARP, the leaf creates a route for the VM IP address inside the VRF by inspecting the ARP table.
  10. The leaf redistributes the route into MP-BGP and sends the update.
Automatic Auto-Provisioning
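Step 6 is where VLANs stop being fabric-wide identifiers: each leaf hands out the next free VLAN from its local dynamic range and binds it to the Segment-ID signalled over VDP. Here is a small sketch of that allocation, assuming a dynamic range of 2000-2999:

```python
# Sketch of step 6: a leaf allocates the next free VLAN from its local dynamic
# range for a Segment-ID learned via VDP. The range 2000-2999 is an assumption.
class LeafVlanAllocator:
    def __init__(self, first=2000, last=2999):
        self.pool = iter(range(first, last + 1))
        self.segment_to_vlan = {}

    def vlan_for_segment(self, segment_id: int) -> int:
        """Return the existing mapping, or bind the next free local VLAN."""
        if segment_id not in self.segment_to_vlan:
            self.segment_to_vlan[segment_id] = next(self.pool)
        return self.segment_to_vlan[segment_id]

leaf_1 = LeafVlanAllocator()
leaf_2 = LeafVlanAllocator()

# The same Segment-ID can map to different VLANs on different leaves.
print(leaf_1.vlan_for_segment(30001))   # e.g. 2000 on leaf 1
leaf_2.vlan_for_segment(30050)
print(leaf_2.vlan_for_segment(30001))   # e.g. 2001 on leaf 2
```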

In terms of supported platforms: the Nexus 6000 supports both leaf and spine functionality, the Nexus 7000 series F2/F2e/F3 line cards support the spine functionality, and the Nexus 5500 supports the leaf node functionality with some limitations. The Nexus 1000V distributed virtual switch for VMware vSphere supports VDP and acts as a light leaf within the DFA solution. Cisco has also published an OpenStack plugin that supports VDP for Grizzly: http://docwiki.cisco.com/wiki/OpenStack#Installation_.28Cisco_OpenStack_Installer_on_Grizzly.29

I hope I got you hooked! If you want to know more about this exciting technology, please attend the 2-hour breakout session scheduled at Cisco Live Milan on Tuesday, January 28th. Follow the link below for more details, and write to me if you have any questions.

https://www.ciscolivemilan.com/connect/sessionDetail.ww?SESSION_ID=2015

Can’t wait, or can’t make it to Milan? Here are links to my two webinars; the second one is a hands-on demo that illustrates all the functionality of DFA:

https://communities.cisco.com/docs/DOC-37721

https://communities.cisco.com/docs/DOC-38375