2. Create the virtual NIC by running Hyper-V Manager remotely from a different machine. See Microsoft's
documentation for instructions on how to do this.
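Step 2 above is normally done through the Hyper-V Manager UI. As a rough PowerShell sketch of the
equivalent (cmdlets from the inbox Hyper-V module; the switch name, VM name, and the team NIC name
"TEAM: Team1" are placeholders, not values from this guide):
# Bind an external virtual switch to the team's virtual NIC
New-VMSwitch -Name "TeamSwitch" -NetAdapterName "TEAM: Team1" -AllowManagementOS $true
# Attach a virtual NIC in a guest partition to that switch
Add-VMNetworkAdapter -VMName "MyVM" -SwitchName "TeamSwitch"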
The following is an example of how to set up the configuration using Microsoft* Windows PowerShell*.
1. Get all the adapters on the system and store them into a variable.
$a = Get-IntelNetAdapter
2. Create a team by referencing the indexes of the stored adapter array.
New-IntelNetTeam -TeamMembers $a[1],$a[2] -TeamMode VirtualMachineLoadBalancing -TeamName "Team1"
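To confirm the result, the same Intel PROSet module can list and remove teams. A minimal sketch (output
and parameter names may vary across PROSet releases):
# List all teams on the system and their members
Get-IntelNetTeam
# Tear the team down again if it is no longer needed
Remove-IntelNetTeam -TeamName "Team1"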
Virtual Machine Queue Offloading
Enabling VMQ offloading increases receive and transmit performance, as the adapter hardware is able to
perform these tasks faster than the operating system. Offloading also frees up CPU resources. Filtering is
based on MAC and/or VLAN filters. For devices that support it, VMQ offloading is enabled in the host partition
on the adapter's Device Manager property sheet, under Virtualization on the Advanced Tab.
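Besides the property sheet, VMQ can also be inspected and toggled with the inbox Windows NetAdapter
cmdlets. A minimal sketch (the adapter name "Ethernet 2" is a placeholder):
# Show VMQ capability and state for every adapter in the system
Get-NetAdapterVmq
# Enable VMQ on one adapter
Enable-NetAdapterVmq -Name "Ethernet 2"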
Each Intel® Ethernet Adapter has a pool of virtual ports that are split between the various features, such as
VMQ Offloading, SR-IOV, Data Center Bridging (DCB), and Fibre Channel over Ethernet (FCoE). Increasing
the number of virtual ports used for one feature decreases the number available for other features. On devices
that support it, enabling DCB reduces the total pool available for other features to 32. Enabling FCoE further
reduces the total pool to 24.
NOTE:
This does not apply to devices based on the Intel® Ethernet X710 or XL710 controllers.
Intel® PROSet displays the number of virtual ports available for virtual functions under Virtualization properties
on the device's Advanced Tab. It also allows you to set how the available virtual ports are distributed between
VMQ and SR-IOV.
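One way to see how the pool is split from PowerShell is through the inbox cmdlets, which report what the
driver exposes (exact fields vary by device):
# NumVFs shows how many virtual ports are assigned to SR-IOV virtual functions
Get-NetAdapterSriov | Format-List Name, NumVFs, SriovSupport
# NumberOfReceiveQueues shows the VMQ side of the split
Get-NetAdapterVmq | Format-List Name, NumberOfReceiveQueues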
Teaming Considerations
• If VMQ is not enabled for all adapters in a team, VMQ will be disabled for the team (see the check
sketched after this list).
• If an adapter that does not support VMQ is added to a team, VMQ will be disabled for the team.
• Virtual NICs cannot be created on a team with Receive Load Balancing enabled. Receive Load
Balancing is automatically disabled if you create a virtual NIC on a team.
• If a team is bound to a Hyper-V virtual NIC, you cannot change the Primary or Secondary adapter.
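A quick way to check the first two conditions before building a team is to query VMQ state on the
prospective members with the inbox cmdlet (the adapter names are placeholders):
# Enabled will be False for any member that would force VMQ off for the whole team
Get-NetAdapterVmq -Name "Ethernet 1","Ethernet 2" | Format-Table Name, Enabled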
SR-IOV (Single Root I/O Virtualization)
SR-IOV lets a single network port appear to be several virtual functions in a virtualized environment. If you
have an SR-IOV capable NIC, each port on that NIC can assign a virtual function to several guest partitions.
The virtual functions bypass the Virtual Machine Manager (VMM), allowing packet data to move directly to a
guest partition's memory, resulting in higher throughput and lower CPU utilization. SR-IOV support was added
in Microsoft Windows Server 2012. See your operating system documentation for system requirements.
For devices that support it, SR-IOV is enabled in the host partition on the adapter's Device Manager property
sheet, under Virtualization on the Advanced Tab. Some devices may need to have SR-IOV enabled in a
preboot environment.
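As an end-to-end sketch of enabling SR-IOV from PowerShell instead of the property sheet (inbox
NetAdapter and Hyper-V cmdlets; the adapter, switch, and VM names are placeholders):
# Enable SR-IOV on the physical adapter (preboot/BIOS support may still be required)
Enable-NetAdapterSriov -Name "Ethernet 2"
# IOV must be enabled when the virtual switch is created; it cannot be added later
New-VMSwitch -Name "SriovSwitch" -NetAdapterName "Ethernet 2" -EnableIov $true
# Give the guest's virtual NIC an IOV weight so a virtual function is assigned to it
Set-VMNetworkAdapter -VMName "MyVM" -IovWeight 100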