TECHNOLOGY BRIEF (cont.)

8

ECG066/1198


Users should also consider tradeoffs in system cost and memory expansion that may result from
optimizing memory performance.  In some cases, optimum memory performance can reduce the
amount of available memory expansion.  For instance, a cost-effective and performance-enhanced
128-MB configuration can be built with eight 16-MB DIMMs.  Upgrading such a system to
512 MB, however, would require adding 128-MB DIMMs, which currently are less cost effective
than either installing eight 64-MB DIMMs or replacing some of the 16-MB DIMMs with 64-MB
DIMMs.
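The capacity arithmetic behind these upgrade paths can be checked mechanically. The sketch below enumerates the ways to reach 512 MB with the DIMM sizes mentioned in this brief; the eight-slot limit is an assumption for illustration, not a specification from the document.

```python
from itertools import combinations_with_replacement

DIMM_SIZES_MB = [16, 64, 128, 256]   # sizes discussed in this brief
SLOTS = 8                            # assumed slot count, for illustration only
TARGET_MB = 512

# Enumerate every combination of up to SLOTS DIMMs that totals the target.
configs = set()
for used in range(1, SLOTS + 1):
    for combo in combinations_with_replacement(DIMM_SIZES_MB, used):
        if sum(combo) == TARGET_MB:
            configs.add(combo)

for combo in sorted(configs):
    print(" + ".join(f"{size} MB" for size in combo))
```

Among the results are the eight-by-64-MB configuration cited above and denser four-by-128-MB layouts that leave slots free for later expansion.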

DUAL-PEER PCI BUSES

Some workstation applications require large I/O bandwidth.  For example, NASTRAN requires
significant amounts of both I/O bandwidth and memory bandwidth.  Other examples include
visualization programs that make heavy use of the 3D-graphics controller.  Such applications can
take full advantage of the new Highly Parallel System Architecture.

The new architecture features two independently operating PCI buses (that is, peer PCI buses),
each running at a peak speed of 133 MB/s.  Together they provide a peak aggregate I/O
bandwidth of 267 MB/s.
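The quoted figures follow from standard PCI timing: a 32-bit data path clocked at roughly 33 MHz yields about 133 MB/s per bus, and two peer buses double that. A quick check of the arithmetic:

```python
# Peak PCI bandwidth from standard 32-bit, 33-MHz PCI timing.
pci_clock_mhz = 33.33       # nominal PCI clock
bus_width_bytes = 4         # 32-bit data path

per_bus = pci_clock_mhz * bus_width_bytes   # MB/s for one PCI bus
aggregate = 2 * per_bus                     # two independent peer buses

print(f"per bus:   {per_bus:.0f} MB/s")     # ~133 MB/s
print(f"aggregate: {aggregate:.0f} MB/s")   # ~267 MB/s
```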

Since each PCI bus runs independently, it is possible to have two PCI bus masters transferring
data simultaneously.  In systems with two or more high-bandwidth peripherals, optimum
performance can be achieved by splitting these peripherals evenly between the two PCI buses.

The new architecture also includes an I/O cache that improves system concurrency, reduces
latency for many PCI bus master accesses to system memory, and makes more efficient use of the
processor bus.  The I/O cache is a temporary buffer between the PCI bus and the processor bus.  It
is controlled by an I/O cache controller.  When a PCI bus master requests data from system
memory, the I/O cache controller automatically reads a full cache line (32 bytes) from system
memory at the processor transfer rate (533 MB/s) and stores it in the I/O cache.  If the PCI bus
master is reading memory sequentially (which is very typical), subsequent read requests from that
PCI bus master can be serviced from the I/O cache rather than directly from system memory.
Likewise, when a PCI bus master writes data, the data is stored in the I/O cache until the cache
contains a full cache line.  Then the I/O cache controller accesses the processor bus and sends the
entire cache line to system memory at the processor bus rate.  The I/O cache ensures better overall
PCI utilization than other implementations, which is important for high-bandwidth peripherals
such as 3D graphics.
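The benefit of the write-coalescing behavior described above can be illustrated with a simple model. This is a sketch of the principle, not Compaq's implementation: it only counts how many processor-bus transactions a stream of small PCI writes produces with and without accumulation into 32-byte cache lines.

```python
CACHE_LINE = 32   # bytes per I/O cache line, as stated in the brief

def memory_bus_transactions(burst_sizes, coalesce):
    """Count processor-bus transactions for a stream of PCI master writes.

    Without coalescing, every PCI burst goes to memory as its own
    transaction.  With the I/O cache, bytes accumulate until a full
    32-byte line is ready, which then moves in one full-line transfer.
    """
    if not coalesce:
        return len(burst_sizes)
    buffered = 0
    transactions = 0
    for size in burst_sizes:
        buffered += size
        while buffered >= CACHE_LINE:
            buffered -= CACHE_LINE
            transactions += 1
    return transactions

# A PCI master writing 256 bytes as 4-byte (single-dword) bursts:
bursts = [4] * 64
print(memory_bus_transactions(bursts, coalesce=False))  # 64 transactions
print(memory_bus_transactions(bursts, coalesce=True))   # 8 full-line transfers
```

Eight full-line transfers at the 533-MB/s processor-bus rate occupy the bus far less than sixty-four single-dword accesses, which is the concurrency gain the text describes.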

In addition to doubling the I/O bandwidth, the dual-peer PCI buses can support more PCI slots
than a single PCI bus, providing greater I/O expandability.

MULTIPLE DRIVES

The high level of hardware parallelism provided in the new architecture can be further enhanced
by adding multiple disk drives to the system.  With more than one disk drive, certain
disk-oriented operations may run faster.  For instance, NASTRAN data sets can grow to multiple
gigabytes.  Because this data cannot fit into physical memory, it is paged to the disk drive, which
the program then accesses continuously as it performs calculations on the data.  A RAID-0 drive
array can be used to increase disk performance: it accesses multiple drives as a single logical
device, allowing data to be accessed from two or more drives at the same time.  However,
RAID-0 does not implement fault-management features such as mirroring, as other RAID
levels do.
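The striping idea behind RAID-0 reduces to simple modular arithmetic: consecutive logical blocks land on alternating drives, so sequential transfers keep every spindle busy at once. A minimal sketch of that mapping (block-granularity striping is assumed here; real arrays stripe in larger configurable chunks):

```python
def raid0_location(logical_block, num_drives):
    """Map a logical block to (drive index, block offset on that drive)
    under simple RAID-0 striping."""
    return logical_block % num_drives, logical_block // num_drives

# Sequential logical blocks alternate across a two-drive array,
# so consecutive accesses can proceed on both drives concurrently.
for block in range(6):
    drive, offset = raid0_location(block, num_drives=2)
    print(f"logical block {block} -> drive {drive}, block {offset}")
```

Note that no block is stored twice: losing either drive loses half of every file, which is why the text stresses that RAID-0 offers no mirroring or other fault tolerance.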
