DCB refers to a set of IEEE Ethernet enhancements that provide data centers with a single, robust, converged network to support multiple
traffic types, including local area network (LAN), server, and storage traffic. Through network consolidation, DCB results in reduced
operational cost, simplified management, and easy scalability by avoiding the need to deploy separate application-specific networks.
For example, instead of deploying an Ethernet network for LAN traffic, additional storage area networks (SANs) to ensure lossless
Fibre Channel traffic, and a separate InfiniBand network for high-performance inter-processor computing within server clusters, a data
center requires only one DCB-enabled network. The Dell Networking switches that support a unified fabric and consolidate multiple
network infrastructures use a single input/output (I/O) device called a converged network adapter (CNA).
A CNA is a computer input/output device that combines the functionality of a host bus adapter (HBA) with a network interface controller
(NIC). Multiple adapters on different devices for several traffic types are no longer required.
Data center bridging satisfies the needs of the following types of data center traffic in a unified fabric:
LAN traffic: LAN traffic consists of a large number of flows that are generally insensitive to latency, while certain applications, such as
streaming video, are more sensitive to latency. Ethernet functions as a best-effort network that may drop packets in case of network
congestion. IP networks rely on transport protocols (for example, TCP) for reliable data transmission, with the associated cost of greater
processing overhead and performance impact.
Storage traffic: Storage traffic based on Fibre Channel media uses the Small Computer System Interface (SCSI) protocol for data
transfer. This traffic typically consists of large data packets with a payload of 2K bytes that cannot recover from frame loss. To
successfully transport storage traffic, data center Ethernet must provide no-drop service with lossless links.
InterProcess Communication (IPC) traffic: Servers use IPC traffic within high-performance computing clusters to share information.
This traffic is extremely sensitive to latency.
To ensure lossless delivery and latency-sensitive scheduling of storage and server traffic, and I/O convergence of LAN, storage, and server
traffic over a unified fabric, IEEE data center bridging adds the following extensions to a classical Ethernet network:
• 802.1Qbb — Priority-based Flow Control (PFC)
• 802.1Qaz — Enhanced Transmission Selection (ETS)
• 802.1Qau — Congestion Notification
• Data Center Bridging Exchange (DCBx) protocol
NOTE: Dell Networking OS supports only the PFC, ETS, and DCBx features in data center bridging.
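Before any of these features take effect, DCB must be enabled on the switch. The following is a minimal sketch in the Dell Networking OS CLI; the commands and prompts shown are illustrative and can vary by platform and software release, so verify them against the command reference for your version:

Dell(conf)# dcb enable
Dell(conf)# exit
Dell# show dcb

Enabling DCB typically reallocates buffer resources for lossless queues, so on some platforms it requires a reload before the change takes effect.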
Priority-Based Flow Control
In a data center network, priority-based flow control (PFC) manages large bursts of one traffic type on multiprotocol links so that they do
not affect other traffic types and no frames are lost due to congestion.
When PFC detects congestion on a queue for a specified priority, it sends a pause frame for the 802.1p priority traffic to the transmitting
device. In this way, PFC ensures that PFC-enabled priority traffic is not dropped by the switch.
PFC enhances the existing 802.3x pause and 802.1p priority capabilities to enable flow control based on 802.1p priorities (classes of
service). Instead of stopping all traffic on a link (as performed by the traditional Ethernet pause mechanism), PFC pauses traffic on a link
according to the 802.1p priority set on a traffic type. You can create lossless flows for storage and server traffic while allowing for loss in
case of LAN traffic congestion on the same physical interface.
The following illustration shows how PFC handles traffic congestion by pausing the transmission of incoming traffic with dot1p priority 4.
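A configuration matching this scenario would group dot1p priority 4 into a lossless priority group while leaving the remaining priorities lossy. The following is a minimal sketch using the dcb-map style of configuration; the map name (CONVERGED), the interface, and the bandwidth split are illustrative assumptions, and the exact syntax may differ between Dell Networking OS releases:

Dell(conf)# dcb-map CONVERGED
! Priority group 0: lossy LAN traffic, 60% of link bandwidth, PFC off
Dell(conf-dcbmap-CONVERGED)# priority-group 0 bandwidth 60 pfc off
! Priority group 1: lossless traffic, 40% of link bandwidth, PFC on
Dell(conf-dcbmap-CONVERGED)# priority-group 1 bandwidth 40 pfc on
! Assign dot1p priorities 0-7 to groups; only priority 4 maps to lossless group 1
Dell(conf-dcbmap-CONVERGED)# priority-pgid 0 0 0 0 1 0 0 0
Dell(conf-dcbmap-CONVERGED)# exit
Dell(conf)# interface tengigabitethernet 1/4
Dell(conf-if-te-1/4)# dcb-map CONVERGED

With such a map applied, congestion on the priority 4 queue triggers PFC pause frames toward the sender, while priorities mapped to group 0 continue to be forwarded, and possibly dropped, as ordinary best-effort Ethernet traffic.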