Figure 1-2 I/O Subsystem Block Diagram
The server is configured with either a PCI-X or a PCIe/PCI-X I/O backplane.
The PCI-X backplane includes three full-length public PCI-X slots. The PCIe/PCI-X backplane
has two full-length public PCIe slots and one full-length public PCI-X slot. Wake-on-LAN is not
supported on any of the PCIe/PCI-X slots. The server does not support PCI hot plug.
The server provides a pair of internal slots that support optional RAID cards for the SAS hard
drives:
• The first slot supports a PCI expansion card that contains an express I/O adapter (EIOA)
chip for translating ropes to the PCIe bus.
• The second slot supports the RAID host bus adapter (HBA). The RAID HBA supports an
optional 256-MB mezzanine card and an external battery. The battery is mounted on top of
the plastic CPU airflow guide and is connected to the RAID HBA by a power cable. When
the RAID HBA is installed, the SAS cables are connected to the HBA instead of the LSI 1068
controller on the system board.
PCIe MPS Optimization
For PCIe-based systems, each PCIe device has a configurable maximum payload size (MPS)
parameter. Larger MPS values can yield higher I/O performance. MPS optimization is supported
on PCIe systems running HP-UX, OpenVMS, and Linux. System firmware versions later than
01.05 perform an optimization at boot time that sets the MPS value to the largest size supported
by both a PCIe root port and the devices below it.
By default, the optimization is disabled. When it is disabled, system firmware sets MPS to the
minimum value on each PCIe device.
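To illustrate the selection logic the firmware applies at boot time, the following C sketch computes
a common MPS for a root port and the devices below it. The read_mps_supported and write_mps
helpers, and the bdf handles, are hypothetical stand-ins for PCIe configuration-space accesses
(the Device Capabilities and Device Control registers); they are not part of the server's firmware
interface.

    #include <stdint.h>

    #define MPS_MIN_ENC 0u  /* encoding 0 = 128-byte payload, the PCIe minimum */

    /* Hypothetical platform hooks, assumed for this sketch:
     * read_mps_supported returns DevCap[2:0] for the function at bdf;
     * write_mps programs DevCtl[7:5]. */
    extern uint8_t read_mps_supported(uint16_t bdf);
    extern void write_mps(uint16_t bdf, uint8_t enc);

    static void program_mps(uint16_t root_port, const uint16_t *devs,
                            int ndev, int optimization_enabled)
    {
        uint8_t enc = MPS_MIN_ENC;

        if (optimization_enabled) {
            /* Largest size supported by every party = minimum of the
             * supported encodings (0 = 128 B ... 5 = 4096 B). */
            enc = read_mps_supported(root_port);
            for (int i = 0; i < ndev; i++) {
                uint8_t s = read_mps_supported(devs[i]);
                if (s < enc)
                    enc = s;
            }
        }
        /* Optimization disabled: fall back to the minimum payload size. */

        write_mps(root_port, enc);
        for (int i = 0; i < ndev; i++)
            write_mps(devs[i], enc);
    }

On Linux, the negotiated values can be inspected with lspci -vv, which reports MaxPayload
under the DevCap and DevCtl lines for each PCIe function.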
The info io command displays the current PCIe MPS optimization setting. See "info" (page 227).