64 RS/6000 SP and Clustered IBM pSeries Systems Handbook
Since the L2 cache maintains inclusion with the processor's L1 cache, it filters
out the majority of snoop transactions from other processors, reducing
contention for the L1 cache and increasing processor performance.
The 332 MHz SMP node's X5 cache controller supports both the shared and
modified intervention protocols of the SMP bus. If one L2 cache requests a
cache line that another L2 already owns, the memory subsystem is bypassed and
the cache line is transferred directly from one cache to the other. The typical
latency for a read on the system bus that hits in another L2 cache is 6:1:1:1
bus cycles, compared with a best-case memory latency of 14:1:1:1 bus cycles.
This measurably reduces the average memory latency as additional L2 caches are
added to the system, and accounts for the almost linear scalability on
commercial workloads.
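Taking the quoted cycle counts at face value, the saving from cache-to-cache intervention is easy to quantify. A minimal sketch, assuming the N:1:1:1 notation describes a 4-beat burst (first beat, then three back-to-back beats):

```python
# Hedged sketch: comparing 4-beat burst read latencies on the system bus,
# using the cycle counts quoted in the text. The 4-beat interpretation of
# the N:1:1:1 notation is an assumption for illustration.

def burst_cycles(pattern):
    """Total bus cycles for a burst, e.g. (6, 1, 1, 1) -> 9."""
    return sum(pattern)

intervention = burst_cycles((6, 1, 1, 1))    # read hits in another L2 cache
memory = burst_cycles((14, 1, 1, 1))         # best-case memory read

print(intervention, memory)                  # 9 vs. 17 bus cycles
print(f"intervention saves {memory - intervention} cycles per cache line")
```

Under this reading, a cache-to-cache transfer completes in roughly half the bus cycles of a best-case memory read, which is why adding L2 caches lowers the average latency rather than raising it.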
2.8.4 Memory-I/O controller
The 332 MHz SMP node has a high-performance memory-I/O controller capable
of providing a sustained memory bandwidth of over 1.3 GB/s and a sustained
maximum I/O bandwidth of 400 MBps with multiple active bridges.
The memory controller supports one or two memory cards with up to eight
increments of synchronous dynamic RAM (SDRAM) memory on each card. Each
increment is a pair of dual in-line memory modules (DIMMs). Two types of
DIMMs are supported: 32 MB DIMMs built from 16 Mb technology chips, and 128
MB DIMMs built from 64 Mb technology chips. The DIMMs must be plugged in
pairs because each DIMM provides 72 bits of data (64 data bits + 8 ECC bits),
which together form the 144-bit memory interface. The memory DIMMs used in
the 332 MHz SMP node are 100 MHz (10 ns), JEDEC-standard unbuffered
SDRAMs.
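The pairing and increment rules above reduce to simple arithmetic. A sketch, assuming the function name is illustrative and noting that the fully populated figure is only what these rules imply arithmetically, not necessarily a supported configuration:

```python
# Illustrative sketch of the memory sizing rules described in the text:
# one or two memory cards, up to eight increments per card, and each
# increment a pair of DIMMs.

def max_memory_mb(cards, dimm_mb, increments_per_card=8, dimms_per_increment=2):
    return cards * increments_per_card * dimms_per_increment * dimm_mb

# Each DIMM is 72 bits wide (64 data + 8 ECC); a pair forms the interface.
assert 2 * (64 + 8) == 144

print(max_memory_mb(cards=1, dimm_mb=32))    # one card of 32 MB DIMMs: 512 MB
print(max_memory_mb(cards=2, dimm_mb=128))   # two cards of 128 MB DIMMs: 4096 MB
```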
SDRAMs operate differently from regular DRAM or EDO DRAM. No interleaving
of DIMMs is required to achieve high data transfer rates. Once its 60 ns
access time is satisfied, an SDRAM DIMM can supply data on every 10 ns clock
cycle. However, as with a traditional DRAM, the memory bank is then busy
(precharging) for a period before it can be accessed again, so for maximum
performance another bank needs to be accessed in the meantime.
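These figures imply a simple timing model for a single bank. A hedged sketch, where the 4-beat burst length is an assumption for illustration:

```python
# Toy timing model of one SDRAM bank on the 332 MHz SMP node, using the
# 60 ns access time and 10 ns clock cycle quoted in the text.

def burst_time_ns(beats, access_ns=60, cycle_ns=10):
    """First data arrives after the access time; each further beat
    takes one additional 10 ns clock cycle."""
    return access_ns + (beats - 1) * cycle_ns

print(burst_time_ns(beats=4))   # 90 ns for a 4-beat burst from one bank
```

The bank then precharges, which is exactly why the controller benefits from steering the next access to a different, non-busy bank.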
The memory controller employs heavy queuing so that it can make intelligent
decisions about which memory bank to select next. It queues up to eight read
commands and eight write commands, and attempts to schedule all reads to
non-busy banks, filling in with writes when no reads are pending. Writes
assume priority if the write queue is full or a new read address matches a
write in the write queue.
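The queuing policy just described can be sketched in software. This is a toy model, not the controller's actual logic; all names are illustrative, and the real decisions happen in hardware:

```python
# Toy model of the memory controller's scheduling policy as described in
# the text: eight-deep read and write queues, reads steered to non-busy
# banks, writes filling idle slots, and writes taking priority when the
# write queue is full or a new read matches a queued write address.
from collections import deque

QUEUE_DEPTH = 8

class MemoryScheduler:
    def __init__(self):
        self.reads = deque()     # up to eight queued read commands
        self.writes = deque()    # up to eight queued write commands

    def submit_read(self, addr, bank):
        self.reads.append((addr, bank))

    def submit_write(self, addr, bank):
        self.writes.append((addr, bank))

    def writes_have_priority(self, new_read_addr=None):
        # Writes win when the write queue is full, or when an incoming
        # read address matches a write still sitting in the queue.
        if len(self.writes) >= QUEUE_DEPTH:
            return True
        return any(addr == new_read_addr for addr, _ in self.writes)

    def next_command(self, busy_banks):
        # Schedule reads to non-busy banks first; fill in with writes
        # when no read can issue this cycle.
        if not self.writes_have_priority():
            for i, (addr, bank) in enumerate(self.reads):
                if bank not in busy_banks:
                    del self.reads[i]
                    return ("read", addr, bank)
        if self.writes:
            return ("write", *self.writes.popleft())
        return None

sched = MemoryScheduler()
sched.submit_read(0x100, bank=0)
sched.submit_write(0x200, bank=1)
print(sched.next_command(busy_banks=set()))   # the pending read issues first
```

The read-matches-queued-write rule is the interesting corner: draining the write first guarantees the read observes the newest data rather than a stale copy still in memory.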