
•  FCoE Starting Core Offset. This setting specifies the offset to the first NUMA Node CPU core that will be assigned to FCoE queues.
•  FCoE Port NUMA Node. This setting is the platform's indication of the closest (optimal) NUMA Node to the physical port, if available. It is read-only and cannot be configured.
Performance Tuning
The Intel Network Controller provides a new set of advanced FCoE performance tuning options. These options direct how FCoE transmit/receive queues are allocated on NUMA platforms: specifically, they determine the target set of NUMA node CPUs from which individual queue affinities are assigned. Selecting a specific CPU has two main effects:
•  It sets the desired interrupt location for processing queue packet indications.
•  It sets the relative locality of the queue to available memory.
As indicated, these are advanced tuning options intended for platform managers attempting to maximize system performance. They are generally expected to be used to maximize performance on multi-port platform configurations. Because all ports share the same default installation directives (the .inf file, etc.), the FCoE queues for every port are associated with the same set of NUMA CPUs, which may result in CPU contention.
The software that exports these tuning options defines a NUMA node as equivalent to an individual processor (socket). Platform ACPI information presented by the BIOS to the operating system helps define the relation of PCI devices to individual processors. However, this detail is not reliably provided on all platforms, so using the tuning options may produce unexpected results; consistent or predictable results cannot be guaranteed.
The performance tuning options are listed in the section.
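As a rough illustration of how these options interact, the following Python sketch shows one way a queue-to-core mapping could be derived from the option values. The function name, parameters, and allocation logic are assumptions made for this example; they do not represent the actual driver implementation.

    def fcoe_queue_affinity(numa_node_count, starting_numa_node, starting_core_offset,
                            queues_per_port=8, cores_per_node=8, total_nodes=2):
        """Return (numa_node, core) pairs for one port's FCoE queues.

        Illustrative only: assumes one NUMA node per processor socket and that
        hyper-threaded logical processors are skipped, as described above.
        """
        # Build the pool of candidate cores from the selected NUMA node(s),
        # starting at the requested core offset within each node.
        candidate_cores = []
        for i in range(numa_node_count):
            node = (starting_numa_node + i) % total_nodes
            for core in range(starting_core_offset, cores_per_node):
                candidate_cores.append((node, core))

        # Pin each FCoE queue to the next candidate core, wrapping around if
        # there are more queues than candidate cores.
        return [candidate_cores[q % len(candidate_cores)] for q in range(queues_per_port)]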
Example 1: A platform with two physical sockets, each socket processor providing 8 CPU cores (16 when hyper-threading is enabled), and a dual-port Intel adapter with FCoE enabled.
By default, 8 FCoE queues are allocated per NIC port, and the first (non-hyper-threaded) CPU cores of the first processor are assigned affinity to these queues, resulting in the allocation model pictured below. In this scenario, both ports compete for CPU cycles from the same set of CPUs on socket 0.
Figure: Socket Queue to CPU Allocation
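In terms of the sketch above, both ports would resolve to the same eight cores of socket 0 (the default values shown here are assumed for illustration):

    # Assumed defaults for both ports in Example 1: one NUMA node,
    # starting at node 0, core offset 0.
    port0 = fcoe_queue_affinity(numa_node_count=1, starting_numa_node=0, starting_core_offset=0)
    port1 = fcoe_queue_affinity(numa_node_count=1, starting_numa_node=0, starting_core_offset=0)
    print(port0)             # [(0, 0), (0, 1), ... (0, 7)] -- cores 0-7 of socket 0
    print(port0 == port1)    # True: both ports compete for the same CPUs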
Using the performance tuning options, the FCoE queues for the second port can be directed to a different, non-competing set of CPU cores. The following settings would direct the software to use CPUs on the other processor socket (see the sketch after this list):
•  FCoE NUMA Node Count = 1: Assign queues to cores from a single NUMA node (or processor socket).
•  FCoE Starting NUMA Node = 1: Use CPU cores from the second NUMA node (or processor socket) in the system.
•  FCoE Starting Core Offset = 0: The software will start at the first CPU core of the NUMA node (or processor socket).
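Continuing the earlier sketch, applying these three settings to the second port moves its queues onto socket 1, so the two ports no longer contend for the same cores:

    # Second port redirected to the second NUMA node (socket 1).
    port1 = fcoe_queue_affinity(numa_node_count=1, starting_numa_node=1, starting_core_offset=0)
    print(port1)    # [(1, 0), (1, 1), ... (1, 7)] -- cores 0-7 of socket 1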
The following settings would direct the software to use a different set of CPUs on the same processor socket. This assumes a processor that supports 16 non-hyper-threaded cores.
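In terms of the earlier sketch, one possible (assumed) combination keeps the second port on NUMA node 0 but starts at a later core offset, for example core 8 of a 16-core socket:

    # Illustrative values only: same socket (node 0), with a non-overlapping
    # core range selected via the starting core offset.
    port1 = fcoe_queue_affinity(numa_node_count=1, starting_numa_node=0,
                                starting_core_offset=8, cores_per_node=16)
    print(port1)    # [(0, 8), (0, 9), ... (0, 15)] -- cores 8-15 of socket 0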