Servers must have Intel Xeon 5600 or 7500 series processors. No other processor vendors or models are supported.
The total physical core count required is based on the sum of the UC virtual machines' core requirements, per the co-residency support policy.
The minimum physical core speed required depends on which UC virtual machines will be used, and at what intended load per VM. Processors in Tested Reference Configurations are sized for full-load virtual machines. It is recommended to use processors with the same or higher speeds, as Cisco UC does not test or document lower performance points.
Recall that physical CPU cores may not be over-subscribed for UC VMs at this time (one physical CPU core = one vCPU core).
Cisco TAC will not troubleshoot performance problems in deployments with insufficient physical cores.
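The core-count rule above can be sketched as a simple sum, since one physical core must back each vCPU with no over-subscription. This is an illustrative sketch only; the VM names and vCPU counts below are placeholders, not Cisco-published figures.

```python
# Minimum physical core count under the no-oversubscription rule
# (1 physical CPU core = 1 vCPU core). Example vCPU counts only.

def min_physical_cores(vm_vcpus):
    """Sum vCPUs across all co-resident UC VMs; cores may not be shared."""
    return sum(vm_vcpus.values())

uc_vms = {
    "cucm-node-1": 2,        # hypothetical per-VM vCPU counts
    "cucm-node-2": 2,
    "unity-connection": 4,
}

print(min_physical_cores(uc_vms))  # 8 physical cores required
```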
The only supported server vendors are:
Cisco Unified Computing System
Cisco UCS Express, Dell, and all other server vendors are not supported at this time.
All servers used must be on the VMware Hardware Compatibility List for the version of ESXi you will be running, and must meet all other policy requirements such as required CPU.
Otherwise, any server model/generation from the above vendors that satisfies all other criteria of this policy is supported for UC.
Minimum physical RAM required is 2GB for ESXi plus the sum of UC virtual machines’ vRAM.
Recall that physical memory may not be over-subscribed for UC VMs.
Aside from total physical RAM, UC does not mandate memory module size, density, speed or quantity – follow server vendor requirements for memory hardware configuration.
Cisco TAC will not troubleshoot performance problems in deployments with insufficient physical RAM.
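The RAM requirement above is likewise a straight sum: 2GB for ESXi plus each VM's vRAM, with no memory over-subscription. A minimal sketch, using placeholder vRAM figures:

```python
# Minimum physical RAM for an ESXi host running UC VMs, per the policy
# above: 2 GB ESXi overhead + sum of VM vRAM, no over-subscription.

ESXI_OVERHEAD_GB = 2

def min_physical_ram_gb(vm_vram_gb):
    """vm_vram_gb: list of per-VM vRAM allocations in GB (example values)."""
    return ESXI_OVERHEAD_GB + sum(vm_vram_gb)

print(min_physical_ram_gb([6, 6, 4]))  # 18 GB minimum for this host
```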
All I/O controllers and adapters used must be on the VMware Hardware Compatibility List for the version of ESXi you will be running.
Only the following I/O devices are supported:
- FC – 2Gbps or faster
- Ethernet – 1Gbps or faster
  - NFS and iSCSI are supported, but require a minimum of 10Gbps and a dedicated NIC for network storage access
- Converged Network Adapter or Cisco VIC
  - FCoE + Ethernet – 10Gbps or faster
- RAID controllers for DAS
  - SAS/SATA combo
Note that diskless servers for “boot from SAN” (FC, iSCSI, or FCoE) are only supported for UC if the UC app supports both ESXi 4.1 and the “boot from SAN” feature on the VMware Requirements page.
The customer is responsible for configuring an adequate number of I/O devices to handle the aggregate load that the virtual machines running on the server will generate.
Storage access I/O requirements for UC VMs are described in the IO Operations Per Second (IOPS) page.
LAN access I/O requirements for UC VMs are described in the UC application design guides. See also network link sizing and QoS considerations here.
The customer is also responsible for configuring redundant interfaces on the server to handle component failures (e.g. redundant NIC, CNA, HBA or VIC adapters.)
There are no UC restrictions on hardware vendors for I/O Devices other than that VMware and the server vendor/model must both support them.
Cisco TAC will not troubleshoot performance problems in a deployment designed with insufficient I/O devices or overloaded I/O devices. For example, a single 100Mbps NIC servicing eight “CUCM 7500 user OVAs” would be both insufficient and overloaded.
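The overload example above can be sketched as a capacity check. The per-VM throughput numbers and the headroom factor here are illustrative assumptions; actual figures come from the UC application design guides.

```python
# Hypothetical check: can the host's NICs carry the aggregate LAN load of
# its UC VMs? Per-VM loads (Mbps) and the 60% headroom factor are
# placeholder assumptions, not Cisco-published values.

def nics_sufficient(nic_mbps, vm_loads_mbps, headroom=0.6):
    """True if aggregate VM load fits within `headroom` of NIC capacity."""
    return sum(vm_loads_mbps) <= sum(nic_mbps) * headroom

# One 100 Mbps NIC serving eight busy VMs (~50 Mbps each): overloaded.
print(nics_sufficient([100], [50] * 8))        # False
# Two 1 Gbps NICs for the same load: sufficient.
print(nics_sufficient([1000, 1000], [50] * 8)) # True
```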
Each OVA provided by Cisco for running a UC application has a published IOPS and disk space requirement. It is the responsibility of the customer to provide a storage system that exceeds the disk space (see Unified Communications Virtualization Downloads (including OVA/OVF Templates)) and average IOPS requirements (see IO Operations Per Second (IOPS)) of the UC virtual machines they will be running on that storage system.
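Sizing the storage system amounts to summing the published per-OVA figures. A minimal sketch; the disk and IOPS numbers below are placeholders, and the real values come from the OVA downloads and IOPS pages cited above.

```python
# Aggregate disk space and average IOPS across the UC VMs planned for one
# storage system. Per-OVA figures here are illustrative only.

def storage_requirements(ovas):
    """Return (total_disk_gb, total_avg_iops) the storage must exceed."""
    total_gb = sum(o["disk_gb"] for o in ovas)
    total_iops = sum(o["avg_iops"] for o in ovas)
    return total_gb, total_iops

ovas = [
    {"name": "vm-a", "disk_gb": 110, "avg_iops": 100},  # example values
    {"name": "vm-b", "disk_gb": 160, "avg_iops": 150},
]

print(storage_requirements(ovas))  # (270, 250)
```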
If you are using NFS, iSCSI, or FCoE for storage connectivity, the networking configuration must provide Cisco Platinum Class QoS (Fibre Channel equivalent): http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/Virtualization/securecldg.html.
See also Shared Storage Considerations here.
It is not necessary to configure the storage to handle the simultaneous maximum IOPS load of every virtual machine on the storage system. However, the customer must be aware of the storage system's excess capacity and must not, for example, run simultaneous software upgrades on so many virtual machines that the storage system is overextended.
The kernel disk command latency must not be greater than 2-3 ms and the physical device command latency must not be greater than 15-20 ms. When either of these metrics is not met, Cisco considers the storage system inadequate to serve the UC virtual machines. Cisco will not troubleshoot performance problems in an environment where either metric is not being met.
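The latency thresholds above can be applied directly to observed metrics (e.g. the kernel and device command latencies reported by esxtop or vCenter performance charts). A sketch using the conservative end of each stated range:

```python
# Apply the policy's latency ceilings to observed storage latency samples.
# Uses the stricter end of the stated ranges (2-3 ms kernel, 15-20 ms device).

KERNEL_LATENCY_MAX_MS = 3    # kernel disk command latency ceiling
DEVICE_LATENCY_MAX_MS = 20   # physical device command latency ceiling

def storage_adequate(kernel_ms, device_ms):
    """False if either metric exceeds its ceiling (storage is inadequate)."""
    return kernel_ms <= KERNEL_LATENCY_MAX_MS and device_ms <= DEVICE_LATENCY_MAX_MS

print(storage_adequate(1.5, 12.0))  # True  -> within policy
print(storage_adequate(4.0, 12.0))  # False -> kernel latency too high
```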
As a guideline, Cisco has found 15K rpm SAS or FC drives in a five-drive RAID 5 array to work well. The recommended hard drive size is 300 to 450GB. The recommended LUN size is 500GB to 1.5TB, so that no more than 10 virtual machines reside on a LUN – preferably eight or fewer.
This is only a guideline; it is left to the customer to configure their storage for adequate performance and for the desired redundancy level.
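The LUN guideline above can be expressed as a simple layout check, sketched here for illustration:

```python
# Check a proposed LUN layout against the guideline above:
# LUN size 500 GB - 1.5 TB, no more than 10 VMs per LUN (preferably 8).

def lun_within_guideline(lun_size_gb, vm_count):
    """True if the LUN meets the recommended size and VM-count limits."""
    size_ok = 500 <= lun_size_gb <= 1500
    count_ok = vm_count <= 10
    return size_ok and count_ok

print(lun_within_guideline(1000, 8))   # True  -> within guideline
print(lun_within_guideline(2000, 12))  # False -> too large, too many VMs
```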