
One part of my job involves designing the virtualization model for our internal unified communications (UC) system deployments around the world. A critical task in this design is specifying which UC virtual machines (VMs) can share a Cisco Unified Computing System (Cisco UCS) server chassis or blade and which ones can’t. When migrating UC servers to a shared virtual environment, we must carefully balance each VM’s needs for CPU, storage, network, and memory.

Based on the design work I’ve been doing to support the migration of our internal Cisco UC systems to Cisco UCS servers, I have come up with these “rules of thumb” that may help you when creating your own server allocation plans. Please keep in mind that these rules reflect our internal Cisco IT standards and are derived from the extensive recommendations and testing of our Cisco product teams. These rules don’t reflect all server sizes, options, or Cisco UCM versions, so you may need to adjust them to fit your deployment.

The Rules

  1. Don’t oversubscribe the server. Each standard UC VM needs two physical cores dedicated to it. When hyperthreading is turned on, as recommended, VMware will show twice the number of logical CPUs, but you still need two physical cores per UC VM to ensure adequate performance. This means that on a UCS server with 8 physical cores, you will see 16 logical CPUs in your VMware hypervisor, yet only 8 of them will be allocated in a “fully configured” system.
  2. Don’t use a CPU with performance levels below those recommended by the product teams. See the performance guidelines in this document: http://docwiki.cisco.com/wiki/UC_Virtualization_Supported_Hardware. Also, ensure that your hardware is fully compatible with the UC application version that you want to virtualize.
  3. Host only UC virtual machines on a given virtualized hardware instance. Although newer versions of the UC products allow full co-residency, the simplest way to ensure optimal performance and eliminate any potential for issues is to run only UC virtual machines on a specific UCS platform. Specifically, I currently apply the following rules of thumb (a short code sketch below illustrates these checks):
    1. Don’t mix UC Linux-based VMs (e.g., Cisco Unified Communications Manager, Cisco Unity Connection, Cisco Emergency Responder) with Microsoft Windows VMs (e.g., Cisco Unified Contact Center Enterprise (UCCE) and Cisco Unified Customer Voice Portal (CVP)) on the same Cisco UCS server.
    2. Don’t mix the Microsoft Windows VM for Cisco Unified CVP with other Microsoft Windows VMs.
  4. Follow specific rules for individual UC systems. For example, with Cisco Unity Connection, you must leave one physical core unallocated for use by ESXi to ensure no contention for resources.
  5. Distribute VMs to maintain redundancy and availability. It always makes sense to avoid putting all of your eggs in one basket, but how far we can distribute VMs among Cisco UCS servers depends on how many clusters and hardware resources are available at a particular site. In our San Jose data center, we can distribute VMs to the maximum extent because we have more Cisco UCS hosts than applications. For example, in our largest data center location, we have Cisco UC Manager VMs for IPCC, campus voice, regional voice, and voice conferencing on the same blade. With this design, a hardware failure would affect four different clusters, but thanks to the distributed and redundant nature of our clusters, users will typically see no impact from the outage.

In smaller locations that only support regional voice, we still distribute VMs, but we have to put multiple regional voice VMs on the same host. Even so, we distribute these VMs for redundancy, placing primary and backup VMs on different physical servers. This configuration allows a Cisco UCS host to go down with little or no interruption to the cluster’s services.
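
To make the co-residency and core-budget rules concrete, here is a minimal sketch of how they could be checked against a per-host inventory. The inventory format, host size, and VM names are illustrative assumptions on my part, not values from any real deployment or Cisco tool.

```python
HOST_PHYSICAL_CORES = 8   # assumed blade size; vSphere shows 16 logical CPUs with hyperthreading on
ESXI_RESERVED_CORES = 1   # rule 4: keep one physical core free for ESXi

# Hypothetical inventory. Each VM: (name, physical cores required, guest OS family, application).
vms_on_host = [
    ("cucm-pub",  2, "linux", "cucm"),
    ("cucm-sub1", 2, "linux", "cucm"),
    ("cuc-pri",   2, "linux", "unity-connection"),
]

def check_host(vms, physical_cores=HOST_PHYSICAL_CORES):
    problems = []

    # Rule 1: budget by physical cores, not the doubled logical CPU count.
    requested = sum(cores for _, cores, _, _ in vms)
    usable = physical_cores - ESXI_RESERVED_CORES
    if requested > usable:
        problems.append(f"oversubscribed: {requested} cores requested, only {usable} usable")

    # Rule 3a: don't mix Linux-based UC VMs with Windows VMs on one host.
    families = {os for _, _, os, _ in vms}
    if {"linux", "windows"} <= families:
        problems.append("Linux UC VMs and Windows VMs share this host")

    # Rule 3b: the CVP Windows VM shouldn't share a host with other Windows VMs.
    windows = [app for _, _, os, app in vms if os == "windows"]
    if "cvp" in windows and len(windows) > 1:
        problems.append("CVP is co-resident with other Windows VMs")

    return problems

print(check_host(vms_on_host) or "placement looks OK")
```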

There are certain configurations to avoid when distributing VMs (the sketch after this list checks for them):

  1. Avoid placing a primary VM and a backup VM on the same server.
  2. For failover groups, avoid placing all actives on the same server.
  3. Avoid placing all VMs that have the same role on the same server.
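
A quick way to catch these anti-patterns is to validate a placement map before deploying. This is a hedged sketch; the VM-to-host map, failover pairs, and role groups are made-up examples, not our actual inventory.

```python
# Hypothetical placement: VM name -> UCS host.
placement = {
    "cucm-pub":    "blade-1",
    "cucm-sub1":   "blade-2",   # backup for cucm-pub
    "cuc-primary": "blade-1",
    "cuc-backup":  "blade-2",
}

# Primary/backup (or active/standby) pairs that must not share a host.
failover_pairs = [
    ("cucm-pub", "cucm-sub1"),
    ("cuc-primary", "cuc-backup"),
]

# VMs grouped by role; no single host should hold an entire role.
roles = {
    "call-processing": ["cucm-pub", "cucm-sub1"],
    "voicemail":       ["cuc-primary", "cuc-backup"],
}

def check_distribution(placement, failover_pairs, roles):
    problems = []
    for primary, backup in failover_pairs:
        if placement[primary] == placement[backup]:
            problems.append(f"{primary} and {backup} share {placement[primary]}")
    for role, members in roles.items():
        hosts = {placement[vm] for vm in members}
        if len(hosts) == 1:
            problems.append(f"all '{role}' VMs sit on {hosts.pop()}")
    return problems

print(check_distribution(placement, failover_pairs, roles) or "distribution looks OK")
```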

Cisco has documented these and other rules in the UC Virtualization Sizing Guidelines document. Have you developed any rules of your own when configuring VMs for UC applications on Cisco UCS servers?