The outbound TCP and UDP packets are classified using Layer 3 and Layer 4 header information. This scheme improves load
distribution for popular Internet protocol services that use well-known ports, such as HTTP and FTP. Therefore, BASP
performs load balancing on a TCP session basis and not on a packet-by-packet basis.
Statistics counters in the Outbound Flow Hash Entries are also updated after classification. The load-balancing engine uses
these counters to periodically distribute the flows across teamed ports. The outbound code path is designed to achieve the
best possible concurrency, allowing multiple concurrent accesses to the Outbound Flow Hash Table.
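As a rough illustration of the mechanism described above, the following Python sketch shows how a flow hash table with per-entry statistics counters can select a teamed port per TCP session, feed a periodic rebalancing pass, and allow concurrent access through per-bucket locks. The class names, hash function, lock granularity, and round-robin port assignment are assumptions made for this sketch and are not the actual BASP implementation.

import threading
import zlib
from dataclasses import dataclass

@dataclass
class OutboundFlowEntry:
    # Hypothetical per-flow record: identifies the flow and counts its traffic.
    flow_key: tuple          # (src_ip, dst_ip, src_port, dst_port)
    port_index: int          # teamed port currently assigned to this flow
    packets: int = 0
    bytes: int = 0

class OutboundFlowHashTable:
    """Illustrative flow table: one lock per bucket so that lookups on
    different buckets can proceed concurrently."""
    def __init__(self, num_buckets=256, num_ports=2):
        self.num_ports = num_ports
        self.buckets = [dict() for _ in range(num_buckets)]
        self.locks = [threading.Lock() for _ in range(num_buckets)]

    def _bucket_of(self, flow_key):
        # Hash the Layer 3/Layer 4 header fields so every packet of the same
        # TCP session maps to the same entry (session-basis balancing).
        return zlib.crc32(repr(flow_key).encode()) % len(self.buckets)

    def classify(self, src_ip, dst_ip, src_port, dst_port, length):
        key = (src_ip, dst_ip, src_port, dst_port)
        b = self._bucket_of(key)
        with self.locks[b]:
            entry = self.buckets[b].get(key)
            if entry is None:
                # New flow: assign a teamed port (simple modulo choice here,
                # purely for illustration).
                entry = OutboundFlowEntry(key, hash(key) % self.num_ports)
                self.buckets[b][key] = entry
            # Update the statistics counters consumed later by rebalancing.
            entry.packets += 1
            entry.bytes += length
            return entry.port_index

    def rebalance(self):
        # Periodic pass: spread the busiest flows toward the least-loaded
        # port.  A real engine would be more elaborate; this only shows
        # where the counters are consumed.
        load = [0] * self.num_ports
        entries = [e for bucket in self.buckets for e in bucket.values()]
        for e in sorted(entries, key=lambda e: e.bytes, reverse=True):
            target = load.index(min(load))
            e.port_index = target
            load[target] += e.bytes

table = OutboundFlowHashTable()
print(table.classify("10.0.0.2", "10.0.0.1", 49152, 80, 1500))  # same flow ->
print(table.classify("10.0.0.2", "10.0.0.1", 49152, 80, 1500))  # same port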
For protocols other than TCP/IP, the first physical adapter will always be selected for outbound packets. The exception is
Address Resolution Protocol (ARP), which is handled differently to achieve inbound load balancing.
Inbound Traffic Flow (SLB Only)
The Broadcom intermediate driver manages the inbound traffic flow for the SLB teaming mode. Unlike outbound load
balancing, inbound load balancing can only be applied to IP addresses that are located in the same subnet as the load-
balancing server. Inbound load balancing exploits a unique characteristic of the Address Resolution Protocol (RFC 826), in which
each IP host uses its own ARP cache to encapsulate the IP datagram into an Ethernet frame. BASP carefully manipulates the
ARP response to direct each IP host to send the inbound IP packet to the desired physical adapter. Inbound load
balancing is therefore a plan-ahead scheme based on the statistical history of the inbound flows. New connections from a client to the
server always occur over the primary physical adapter, because the ARP reply generated by the operating system
protocol stack always associates the logical IP address with the MAC address of the primary physical adapter.
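As a conceptual illustration only, the sketch below builds a raw ARP reply that answers a client with the MAC address of whichever teamed adapter has been chosen for that client, which is the essence of the manipulation described above. The helper names, addresses, and the idea of constructing the frame in Python are assumptions made for this sketch; BASP performs this work inside the intermediate driver.

import struct

def build_arp_reply(team_ip, chosen_mac, client_ip, client_mac):
    """Build a raw Ethernet/ARP reply that maps the team's logical IP
    address to the MAC of the teamed adapter chosen for this client.
    (Illustrative only; not the actual driver code path.)"""
    def mac_bytes(mac):
        return bytes(int(b, 16) for b in mac.split(":"))
    def ip_bytes(ip):
        return bytes(int(o) for o in ip.split("."))

    eth_header = mac_bytes(client_mac) + mac_bytes(chosen_mac) + struct.pack("!H", 0x0806)
    arp_payload = struct.pack(
        "!HHBBH6s4s6s4s",
        1,                      # hardware type: Ethernet
        0x0800,                 # protocol type: IPv4
        6, 4,                   # hardware/protocol address lengths
        2,                      # opcode: ARP reply
        mac_bytes(chosen_mac),  # sender MAC = chosen teamed adapter
        ip_bytes(team_ip),      # sender IP  = team's logical IP address
        mac_bytes(client_mac),  # target MAC = requesting client
        ip_bytes(client_ip),    # target IP  = requesting client
    )
    return eth_header + arp_payload

# Example: direct client 10.0.0.1 to send its inbound traffic to the second
# teamed adapter instead of the primary (all addresses are made up).
frame = build_arp_reply("10.0.0.10", "00:10:18:00:00:02",
                        "10.0.0.1", "00:50:56:aa:bb:cc")
print(len(frame), "byte ARP reply")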
As in the outbound case, there is an Inbound Flow Head Hash Table. Each entry in this table has a singly linked list, and
each link (Inbound Flow Entry) represents an IP host located in the same subnet.
When an inbound IP datagram arrives, the appropriate Inbound Flow Head Entry is located by hashing the source IP address
of the datagram. Two statistics counters stored in the selected entry are also updated. The load-balancing engine uses these
counters in the same fashion as the outbound counters to periodically reassign the flows to the physical
adapters.
On the inbound code path, the Inbound Flow Head Hash Table is also designed to allow concurrent access. The linked lists of
Inbound Flow Entries are referenced only when processing ARP packets and during periodic load balancing; there is no
per-packet reference to the Inbound Flow Entries. Even though the linked lists are not bounded, the overhead of processing
each non-ARP packet is always constant. The processing of ARP packets, both inbound and outbound, however, depends on
the number of links in the corresponding linked list.
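The data structure and access pattern described above might be sketched as follows. The names, the CRC-based hash, and the placement of the two counters in the head entry are assumptions made for this illustration, not the actual driver code; the point is that the per-packet path touches only the head entry and its counters, while the linked list is walked only for ARP processing and periodic rebalancing.

import zlib
from dataclasses import dataclass
from typing import Optional

@dataclass
class InboundFlowEntry:
    # Hypothetical per-host record: one entry per IP host in the local subnet.
    host_ip: str
    assigned_port: int          # teamed adapter the host is directed to via ARP
    next: Optional["InboundFlowEntry"] = None   # singly linked list

@dataclass
class InboundFlowHeadEntry:
    # Head of the per-bucket list, plus the two per-bucket statistics counters.
    packets: int = 0
    bytes: int = 0
    head: Optional[InboundFlowEntry] = None

class InboundFlowHeadHashTable:
    def __init__(self, num_buckets=256):
        self.buckets = [InboundFlowHeadEntry() for _ in range(num_buckets)]

    def _bucket(self, src_ip):
        return zlib.crc32(src_ip.encode()) % len(self.buckets)

    def on_inbound_datagram(self, src_ip, length):
        # Fast path for every non-ARP packet: hash the source IP, update the
        # two counters, and return.  The linked list is NOT walked here, so
        # the per-packet cost is constant.
        head = self.buckets[self._bucket(src_ip)]
        head.packets += 1
        head.bytes += length

    def on_arp(self, src_ip, default_port=0):
        # ARP processing (and periodic rebalancing) is where the linked list
        # of hosts is walked, so its cost depends on the list length.
        head = self.buckets[self._bucket(src_ip)]
        entry = head.head
        while entry is not None:
            if entry.host_ip == src_ip:
                return entry.assigned_port
            entry = entry.next
        head.head = InboundFlowEntry(src_ip, default_port, head.head)
        return default_port

table = InboundFlowHeadHashTable()
table.on_arp("10.0.0.1", default_port=1)      # list walked, host recorded
table.on_inbound_datagram("10.0.0.1", 1500)   # constant-cost counter update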
On the inbound processing path, filtering is also employed to prevent broadcast packets from looping back through the
system from other physical adapters.
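One plausible way to express such a filter is sketched below. The specific rule shown (dropping a broadcast frame whose source MAC address belongs to one of the team's own adapters) is an assumption made for illustration rather than the exact check the driver applies.

BROADCAST_MAC = "ff:ff:ff:ff:ff:ff"

def is_looped_back_broadcast(frame_src_mac, frame_dst_mac, team_mac_addresses):
    """Illustrative check (not the actual driver logic): a broadcast frame
    whose source MAC belongs to one of the team's own adapters has been
    flooded back to the team by the switch, and should be dropped rather
    than delivered up the stack a second time."""
    return (frame_dst_mac.lower() == BROADCAST_MAC
            and frame_src_mac.lower() in team_mac_addresses)

team_macs = {"00:10:18:00:00:01", "00:10:18:00:00:02"}   # made-up addresses
print(is_looped_back_broadcast("00:10:18:00:00:01", BROADCAST_MAC, team_macs))  # True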
Protocol Support
ARP and IP/TCP/UDP flows are load balanced. If the packet carries only an IP protocol, such as ICMP or IGMP, then all data flowing
to a particular IP address goes out through the same physical adapter. If the packet uses TCP or UDP as the L4 protocol,
then the port number is added to the hashing algorithm, so two separate L4 flows can go out through two separate physical
adapters to the same IP address.
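A short sketch of this protocol-dependent hashing, using an assumed two-adapter team and a CRC hash that is not the actual BASP algorithm, reproduces the behavior described here and in the example that follows.

import zlib

NUM_ADAPTERS = 2   # assumed team size, for illustration only

def select_adapter(dst_ip, protocol, dst_port=None):
    # For IP-only protocols (ICMP, IGMP, ...) only the IP address is hashed,
    # so every such flow to one destination uses the same adapter.  For TCP
    # and UDP the port number is added, so different L4 flows to the same IP
    # address may hash to different adapters.
    key = dst_ip if protocol not in ("TCP", "UDP") else f"{dst_ip}:{dst_port}"
    return zlib.crc32(key.encode()) % NUM_ADAPTERS

for proto, port in [("IGMP", None), ("ICMP", None), ("TCP", 80), ("UDP", 53)]:
    print(proto, "->", "PhysAdapter%d" % (select_adapter("10.0.0.1", proto, port) + 1))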
For example, assume the client has an IP address of 10.0.0.1. All IGMP and ICMP traffic will go out the same physical adapter
because only the IP address is used for the hash. The flow would look something like this:
IGMP ------> PhysAdapter1 ------> 10.0.0.1
ICMP ------> PhysAdapter1 ------> 10.0.0.1
If the server also sends TCP and UDP flows to the same 10.0.0.1 address, they can be on the same physical adapter as the
IGMP and ICMP traffic, or on completely different physical adapters. The streams may look like this:
IGMP ------> PhysAdapter1 ------> 10.0.0.1
ICMP ------> PhysAdapter1 ------> 10.0.0.1
TCP ------> PhysAdapter1 ------> 10.0.0.1
UDP ------> PhysAdapter1 ------> 10.0.0.1
Or the streams may look like this:
IGMP ------> PhysAdapter1 ------> 10.0.0.1