Chapter 3
Theory of Operation
Introduction
An architectural block diagram of the PES24N3A is shown in Figure 1.1 in Chapter 1. The PES24N3A
contains three ports labeled port 0, port 2, and port 4. Port 0 is always the upstream port and port 2 and port
4 are always downstream ports.
At a high level, the PES24N3A consists of three PCIe stacks and a switch core. Each stack is configured
to operate as a single x8 port. A stack consists of logic that performs the functions associated with the
physical, data link, and transaction layers described in the PCIe Base 1.1 specification. In addition, a stack
performs switch application layer functions such as TLP routing using route map tables, processing
configuration read and write requests, and so on.
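To make the application layer routing function concrete, the following minimal C sketch models how a
route map table might resolve an address-routed TLP to an egress port. The entry layout, field names, and
default-to-upstream rule shown here are illustrative assumptions; the device's actual internal table format
is not specified at this level.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical route map entry: an address window claimed by a port. */
    typedef struct {
        uint64_t base;   /* window base address      */
        uint64_t limit;  /* window limit (inclusive) */
        int      port;   /* egress port (0, 2, or 4) */
    } route_map_entry;

    /* Resolve an address-routed TLP to an egress port. A TLP that matches
     * no downstream window is assumed to be forwarded upstream (port 0). */
    static int route_tlp(const route_map_entry *map, size_t n, uint64_t addr)
    {
        for (size_t i = 0; i < n; i++) {
            if (addr >= map[i].base && addr <= map[i].limit)
                return map[i].port;
        }
        return 0;
    }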
The switch core is responsible for transferring TLPs between stacks. Its main functions are input
buffering, maintaining per-port ingress and egress flow control information, port arbitration, scheduling,
and forwarding TLPs between stacks. In typical fan-out applications, all data from downstream ports is
destined for memory in the root complex, and all TLPs from the root complex are destined for endpoints.
Thus, in general there is no peer-to-peer (i.e., endpoint-to-endpoint) traffic. Since the PES24N3A is
optimized for fan-out applications, its switch core is based on a dual bus architecture.
The downstream bus (D-Bus) is used to transfer TLPs from the upstream port to a downstream port,
while the upstream bus (U-Bus) is used to transfer TLPs from a downstream port to the upstream port.
D-Bus and U-Bus transfers may occur in parallel. While not optimized for peer-to-peer traffic, the
PES24N3A supports these transfers. A peer-to-peer transfer occurs by first transferring a TLP from a
downstream port into a bus decoupler queue over the U-Bus. Once in the bus decoupler queue, the TLP is
transferred to the peer downstream port over the D-Bus. Thus, unlike upstream and downstream traffic
which utilize either the U-Bus or the D-Bus, peer-to-peer transfers utilize both buses.
The size and constraints of the bus decoupler queue are shown in Table 3.3.
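The two-hop peer-to-peer path can be summarized in a short C sketch. The function and queue names
below are invented for illustration; only the ordering of the hops (U-Bus into the bus decoupler queue,
then D-Bus to the peer port) reflects the behavior described above.

    #include <stdio.h>

    /* Hypothetical TLP and bus-transfer stubs used only to show hop order. */
    typedef struct { int tag; } tlp_t;

    enum { DECOUPLER_QUEUE = -1 };  /* stands in for the bus decoupler queue */

    static void u_bus_transfer(int src, int dst, tlp_t *t)
    { printf("U-Bus: %d -> %d (TLP %d)\n", src, dst, t->tag); }

    static void d_bus_transfer(int src, int dst, tlp_t *t)
    { printf("D-Bus: %d -> %d (TLP %d)\n", src, dst, t->tag); }

    /* A peer-to-peer TLP makes two hops, one on each bus; ordinary
     * upstream or downstream traffic would make only one of them. */
    static void peer_to_peer_transfer(tlp_t *tlp, int src_port, int dst_port)
    {
        u_bus_transfer(src_port, DECOUPLER_QUEUE, tlp);  /* hop 1: U-Bus */
        d_bus_transfer(DECOUPLER_QUEUE, dst_port, tlp);  /* hop 2: D-Bus */
    }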
The PES24N3A switch core implements a per-port input buffer called the Input Frame Buffer (IFB). Each
input buffer consists of four queues. These queues are the posted transaction queue (posted queue), the
non-posted transaction queue (non-posted queue), the completion transaction queue (completion queue),
and an insertion buffer to hold TLPs generated by the stack.
The size of each of these queues is shown in Table 3.1. Each queue is implemented as a data queue
and a descriptor queue. Thus, there is a limit on both the amount of data and the number of TLPs that can
be stored in a queue.
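Because each queue is bounded both in bytes (the data queue) and in TLP count (the descriptor queue),
admission can be modeled as a check against whichever limit is reached first. The sketch below uses the
posted-queue figures from Table 3.1; the structure and function names are assumptions made for
illustration.

    #include <stdbool.h>
    #include <stdint.h>

    #define POSTED_DATA_BYTES (8u * 1024u)  /* 8 KB data queue (Table 3.1) */
    #define POSTED_MAX_TLPS   64u           /* 64 descriptors (Table 3.1)  */

    /* Hypothetical occupancy counters for one IFB queue. */
    typedef struct {
        uint32_t bytes_used;  /* data queue occupancy       */
        uint32_t tlps_used;   /* descriptor queue occupancy */
    } ifb_queue;

    /* A TLP fits only if BOTH resources are available: enough bytes in
     * the data queue and a free descriptor for the TLP itself. */
    static bool can_accept(const ifb_queue *q, uint32_t tlp_bytes)
    {
        return q->bytes_used + tlp_bytes <= POSTED_DATA_BYTES &&
               q->tlps_used < POSTED_MAX_TLPS;
    }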
Associated with each port in the data link layer is a shared output and replay buffer. That is, the buffer is
partitioned into two sections, with a section dedicated to each x8 port. This buffer contains TLPs that have
been transmitted but have not yet been acknowledged by the link partner. Space not used to hold replay
TLPs is available for output buffering.
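The replay discipline itself is standard PCIe data link layer behavior: a transmitted TLP is retained until
the link partner acknowledges its sequence number, after which its space can be reclaimed. The sketch
below illustrates this with invented buffer-management names; it is not the device's actual logic.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical replay-buffer entry for one transmitted TLP. */
    typedef struct {
        uint16_t seq;    /* 12-bit data link layer sequence number */
        uint16_t bytes;  /* buffer space the TLP occupies          */
        bool     valid;
    } replay_entry;

    /* True if sequence number a is at or before b, modulo 4096. */
    static bool seq_at_or_before(uint16_t a, uint16_t b)
    {
        return (uint16_t)((b - a) & 0x0FFFu) < 2048u;
    }

    /* An ACK DLLP acknowledges all TLPs up to and including acked_seq.
     * Returns the bytes reclaimed, which become available again for
     * output buffering. */
    static uint32_t release_acked(replay_entry *buf, size_t n, uint16_t acked_seq)
    {
        uint32_t freed = 0;
        for (size_t i = 0; i < n; i++) {
            if (buf[i].valid && seq_at_or_before(buf[i].seq, acked_seq)) {
                freed += buf[i].bytes;
                buf[i].valid = false;
            }
        }
        return freed;
    }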
IFB Queue     Total Queue Size          Advertised        Advertised
              and Limitations           Header Credits    Data Credits
Posted        8 KB or up to 64 TLPs     64                416 (6656 bytes)
Non-posted    1.5 KB or up to 64 TLPs   64                64 (1024 bytes)
Completion    8 KB or up to 64 TLPs     64                416 (6656 bytes)

Table 3.1 IFB Buffer Sizes
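The byte figures in parentheses follow from the PCIe flow control credit unit: one data credit covers
16 bytes (4 DW) of payload, so 416 credits correspond to 416 × 16 = 6656 bytes and 64 credits to
1024 bytes. A trivial self-check in C:

    #include <assert.h>
    #include <stdint.h>

    #define BYTES_PER_DATA_CREDIT 16u  /* one PCIe data credit = 4 DW */

    static uint32_t credits_to_bytes(uint32_t credits)
    {
        return credits * BYTES_PER_DATA_CREDIT;
    }

    int main(void)
    {
        assert(credits_to_bytes(416) == 6656);  /* posted / completion */
        assert(credits_to_bytes(64)  == 1024);  /* non-posted          */
        return 0;
    }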