Foundry NetIron M2404C and M2404F Metro Access Switches
Configuring HQoS (Rev.03)
QoS/HQoS Implementation
© 2008 Foundry Networks, Inc.
Page 18 of 98
• Marking/Remarking
o Marking of the VPT field according to the internal FC and Color. The egress VPT is not affected by the results of policing.
o Remarking of the DSCP field according to the internal FC and Color. The egress DSCP is affected by the results of policing (if the Color was changed), allowing ‘Egress Policing’ of the outgoing packets.
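The color-aware remarking above can be modeled as a lookup keyed on the internal FC and the post-policing Color. The sketch below is illustrative only; the FC names and DSCP values are assumptions (AF-class codepoints chosen as an example), not the switch's actual defaults.

```python
# Hypothetical (FC, Color) -> egress DSCP remark table. Values are example
# AF codepoints, not the device's configured defaults.
REMARK_TABLE = {
    ("af1", "green"): 10,   # AF11
    ("af1", "yellow"): 12,  # AF12 (higher drop precedence)
    ("af2", "green"): 18,   # AF21
    ("af2", "yellow"): 20,  # AF22
}

def remark_dscp(fc: str, color: str) -> int:
    """Return the egress DSCP for a packet's internal FC and its
    (possibly policer-changed) Color; fall back to best effort (0)."""
    return REMARK_TABLE.get((fc, color), 0)
```

Because the Color key reflects the policing result, a packet demoted from Green to Yellow leaves with a different DSCP, which is what enables the ‘Egress Policing’ behavior described above.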
Once the initial QoS operations (Classification, Filtering, Mapping, Policing and Remarking) have
been performed, the switching decision is made in order to determine the output port/VLAN. Also,
multicast traffic is replicated at this point.
Before the traffic is sent to the output port, the following additional QoS mechanisms are applied:
• Queuing. The traffic is queued according to the FC into one of 8 egress queues per port. The queues use deep buffering, which allows for better congestion control and less traffic dropped due to congestion.
• A WRED or Tail Drop (TD) dropping mechanism can be defined per port, based on one of 3 available profiles. Each profile defines a single dropping mechanism for all 8 queues, along with its parameters for each of the 8 queues; the complete profile can then be attached to a port. The profiles are color-aware, so the algorithms can start dropping Yellow traffic before any Green traffic is dropped.
• Scheduling. Scheduling between the queues is performed according to either the Strict Priority (SP) or the Weighted Round-Robin (WRR) algorithm. These algorithms can be combined in a hybrid mode (some queues SP, others WRR) to ensure that mission-critical traffic is scheduled immediately without starving lower-priority traffic. This also allows for more flexible SLA definitions.
•
Shaping
. Single-rate shaping can be performed on 2 levels: per queue and per port. This
allows shaping of traffic per FC, thus allocating different overall bandwidth and allowed burst
size per FC out of the total port budget.
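The hybrid SP/WRR behavior described above can be sketched as a simple software model. This is an assumption-level illustration, not the switch firmware: queues marked strict-priority are always served first (here, higher index means higher priority, an illustrative convention), and the remaining queues share leftover bandwidth via weighted round-robin credits.

```python
from collections import deque

def pick_next(queues, sp_set, weights, credits):
    """Return the index of the next queue to serve, or None if all are empty.
    `queues` is a list of deques; `sp_set` holds strict-priority queue indices;
    `weights` gives WRR weights; `credits` carries WRR state between calls."""
    # Strict-priority queues first, highest priority first.
    for q in sorted(sp_set, reverse=True):
        if queues[q]:
            return q
    # WRR among the rest: serve any non-empty queue with remaining credit.
    wrr = [q for q in range(len(queues)) if q not in sp_set]
    for _ in range(2):  # second pass runs after credits are refilled
        for q in wrr:
            if queues[q] and credits[q] > 0:
                credits[q] -= 1
                return q
        for q in wrr:
            credits[q] = weights[q]  # refill credits from configured weights
    return None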
Please note that policing, deep buffering, WRED, hybrid scheduling and shaping, when used together and configured properly, form a very flexible and powerful tool for congestion avoidance and oversubscription control.
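The color-aware dropping mentioned above follows the classic WRED pattern: Yellow traffic gets lower thresholds and a higher maximum drop probability than Green, so it is dropped first as a queue fills. The threshold and probability values below are assumptions for the sketch, not the profile defaults from this manual.

```python
import random

# Hypothetical per-color WRED parameters (illustrative values only).
PROFILES = {
    "green":  {"min_th": 60, "max_th": 100, "max_p": 0.10},
    "yellow": {"min_th": 20, "max_th": 60,  "max_p": 0.50},
}

def wred_drop(avg_qlen: float, color: str, rng=random.random) -> bool:
    """Decide whether to drop a packet of the given color based on the
    averaged queue length, using the standard linear WRED ramp."""
    p = PROFILES[color]
    if avg_qlen < p["min_th"]:
        return False                      # below min threshold: never drop
    if avg_qlen >= p["max_th"]:
        return True                       # at/above max threshold: tail drop
    ramp = (avg_qlen - p["min_th"]) / (p["max_th"] - p["min_th"])
    return rng() < ramp * p["max_p"]      # linear drop probability in between
```

With these numbers, at an average queue depth of 30 a Yellow packet already faces a nonzero drop probability while a Green packet is never dropped, matching the "drop Yellow before Green" behavior described above.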
Add Path: User to Network
The Add path traffic first passes through the Packet Processor, which allows its various QoS mechanisms to be applied to this traffic. The main purposes of the Packet Processor with regard to the Add path traffic are traffic classification, per-service aggregation, and intelligent oversubscription control, as discussed previously in this document.
The main differences between Local Switching path and Add path processing are that in the Add path the traffic is assigned to Service Distribution Paths (or Virtual Connections), is encapsulated using the VC-labels, and is eventually sent to the ES Processor via one of the internal connection ports.
Figure 11 shows the QoS actions applied to the Add path traffic once it has exited the ‘Packet Processor’ and entered the ‘ES Processor’.