Virtual Machine Queue Offloading
Enabling VMQ offloading increases receive and transmit performance, because the adapter hardware can perform
these tasks faster than the operating system. Offloading also frees CPU resources. Filtering is based on MAC and/or
VLAN filters. For devices that support it, VMQ offloading is enabled in the host partition on the adapter's Device
Manager property sheet, under Virtualization on the Advanced Tab.
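On Windows Server, the same setting can also be inspected and toggled from an elevated PowerShell prompt using the built-in NetAdapter cmdlets. The adapter name "SLOT 2 Port 1" below is a placeholder; substitute the name shown by Get-NetAdapter on your system.

```powershell
# List VMQ capability and current state for all adapters
Get-NetAdapterVmq

# Enable VMQ on a specific adapter ("SLOT 2 Port 1" is a placeholder name)
Enable-NetAdapterVmq -Name "SLOT 2 Port 1"

# Disable it again if needed
Disable-NetAdapterVmq -Name "SLOT 2 Port 1"
```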
Each Intel® Ethernet Adapter has a pool of virtual ports that is split among the various features, such as VMQ
Offloading, SR-IOV, Data Center Bridging (DCB), and Fibre Channel over Ethernet (FCoE). Increasing the number of
virtual ports used for one feature decreases the number available for other features. On devices that support it,
enabling DCB reduces the total pool available for other features to 32. Enabling FCoE further reduces the total pool
to 24. Intel PROSet displays the number of virtual ports available for virtual functions under Virtualization properties
on the device's Advanced Tab. It also allows you to set how the available virtual ports are distributed between VMQ
and SR-IOV.
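The pool arithmetic above can be sketched as a small illustrative calculation. The 32 (DCB) and 24 (FCoE) limits come from this section; the default pool size of 64 and the helper names below are assumptions for illustration only, not values reported by any Intel tool.

```python
def available_virtual_ports(dcb_enabled: bool, fcoe_enabled: bool,
                            default_pool: int = 64) -> int:
    """Illustrative model of the shared virtual-port pool.

    The 32 (DCB) and 24 (FCoE) caps come from the guide text; the
    default pool size of 64 is an assumed example value.
    """
    pool = default_pool
    if dcb_enabled:
        pool = min(pool, 32)   # enabling DCB caps the shared pool at 32
    if fcoe_enabled:
        pool = min(pool, 24)   # enabling FCoE caps it further at 24
    return pool


def split_pool(pool: int, vmq_ports: int) -> tuple[int, int]:
    """Ports assigned to VMQ leave the remainder for SR-IOV virtual functions."""
    return vmq_ports, pool - vmq_ports
```

For example, with both DCB and FCoE enabled the pool drops to 24 ports, so assigning 16 to VMQ leaves only 8 for SR-IOV virtual functions.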
Teaming Considerations
• If VMQ is not enabled for all adapters in a team, VMQ will be disabled for the team.
• If an adapter that does not support VMQ is added to a team, VMQ will be disabled for the team.
• Virtual NICs cannot be created on a team with Receive Load Balancing enabled. Receive Load Balancing is
  automatically disabled if you create a virtual NIC on a team.
• If a team is bound to a Hyper-V virtual NIC, you cannot change the Primary or Secondary adapter.
SR-IOV (Single Root I/O Virtualization)
SR-IOV lets a single network port appear as several virtual functions in a virtualized environment. If you have an
SR-IOV capable NIC, each port on that NIC can assign a virtual function to several guest partitions. The virtual
functions bypass the Virtual Machine Manager (VMM), allowing packet data to move directly to a guest partition's
memory, resulting in higher throughput and lower CPU utilization. SR-IOV support was added in Microsoft Windows
Server 2012. See your operating system documentation for system requirements.
For devices that support it, SR-IOV is enabled in the host partition on the adapter's Device Manager property sheet,
under Virtualization on the Advanced Tab. Some devices may need to have SR-IOV enabled in a preboot environment.
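On Windows Server, SR-IOV state can also be checked and enabled from an elevated PowerShell prompt. The adapter and switch names below are placeholders; note that a Hyper-V virtual switch must be created with IOV enabled up front, as it cannot be added to an existing switch later.

```powershell
# Check whether the adapter and platform report SR-IOV support
Get-NetAdapterSriov

# Enable SR-IOV on the physical adapter ("SLOT 4 Port 1" is a placeholder name)
Enable-NetAdapterSriov -Name "SLOT 4 Port 1"

# Create a Hyper-V virtual switch with SR-IOV enabled ("SriovSwitch" is a
# placeholder); -EnableIov must be set at creation time
New-VMSwitch -Name "SriovSwitch" -NetAdapterName "SLOT 4 Port 1" -EnableIov $true
```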
NOTES:
• You must enable VMQ for SR-IOV to function.
• SR-IOV is not supported with ANS teams.
• Due to chipset limitations, not all systems or slots support SR-IOV. The table below
  summarizes SR-IOV support on Dell server platforms.
NDC or LOM                                                             | 10GbE | 1GbE
-----------------------------------------------------------------------|-------|------
Intel X520 DP 10Gb DA/SFP+ + I350 DP 1Gb Ethernet Network Daughter Card | Yes   | No
Intel Ethernet X540 DP 10Gb BT + I350 1Gb BT DP Network Daughter Card   | Yes   | No
Intel Ethernet I350 QP 1Gb Network Daughter Card                        | —     | Yes
PowerEdge T620 LOMs                                                     | —     | No
PowerEdge T630 LOMs                                                     | —     | No