•  The OCSBC displays sibling CPUs in lower-case letters:
   –  A signaling core with a signaling sibling appears as "Ss".
   –  There can be no combination of SBC core types, such as "Fd" (Forwarding with DoS).
   –  Cores other than signaling appear as the core type with no sibling, such as "Dn".
•  The OCSBC displays a verify-config ERROR if there is an error with CPU assignment, including improperly configured hyper-threaded sibling CPUs.
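On a Linux hypervisor host, the sibling layout can be cross-checked against the kernel's view of the CPU topology before assigning CPUs to the VM. The loop below is an illustrative host-side sketch (it reads the standard sysfs topology files, and is not an OCSBC command):

```shell
# Illustrative check on a Linux host: print each logical CPU together with
# its hyper-threaded sibling list, so pinning can keep sibling pairs together.
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
  printf '%s: siblings %s\n' "$(basename "$cpu")" \
    "$(cat "$cpu"/topology/thread_siblings_list)"
done
```

On a host without hyper-threading, each CPU lists only itself as a sibling.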
Host Hypervisor CPU Affinity (Pinning)
Many hardware platforms have built-in optimizations related to VM placement. For
example, some CPU sockets may have faster local access to Peripheral Component
Interconnect (PCI) resources than other CPU sockets. Users should ensure that VMs
requiring high media throughput are optimally placed by the hypervisor, so that
traversal of cross-domain bridges, such as QuickPath Interconnect (QPI), is avoided or
minimized.
Some hypervisors implement Non-Uniform Memory Access (NUMA) topology rules to
automatically enforce such placements. All hypervisors provide manual methods to
perform CPU pinning, achieving the same result.
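As one illustration of manual pinning, a KVM/libvirt hypervisor accepts per-vCPU pins in the guest's domain XML. The fragment below is a hypothetical sketch for a four-vCPU guest pinned to Socket 1 (physical CPUs 18–21 in the topology shown in the figure); it is not an OCSBC-supplied configuration:

```xml
<!-- Hypothetical libvirt <cputune> fragment: pin each of the guest's four
     vCPUs to a distinct physical CPU on Socket 1 (CPUs 18-21 here). -->
<cputune>
  <vcpupin vcpu="0" cpuset="18"/>
  <vcpupin vcpu="1" cpuset="19"/>
  <vcpupin vcpu="2" cpuset="20"/>
  <vcpupin vcpu="3" cpuset="21"/>
</cputune>
```

On libvirt hosts the same pinning can also be applied at run time with the virsh vcpupin command; other hypervisors expose equivalent affinity controls through their own management interfaces.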
The diagram below shows two paths between the system's NICs and VM-B. Without
pinning, VM-B runs on Socket 0 and must traverse the QPI to access
Socket 1's NICs. The preferred path pins VM-B to Socket 1, giving it direct access
to the local NICs and avoiding the QPI.
Figure 5-1 Contrast of Data Paths with and without CPU Pinning
[Figure: Socket 0 (CPUs 0 … 17) and Socket 1 (CPUs 18 … 35), each with local NICs, are connected by the QPI. VM-A and an unpinned VM-B run on Socket 0, so VM-B's path to Socket 1's NICs crosses the QPI; pinning VM-B to Socket 1 gives it direct access to those NICs.]