BCM1250/BCM1125/BCM1125H User Manual
10/21/02
Broadcom Corporation
Section 6: DRAM, Page 106
Document 1250_1125-UM100CB-R
When a channel switches from writes to reads (and back) there is a delay while the data bus lines and timing
strobes (DQSs) turn around from the controller driving to the memory driving (or the reverse). To get the best
bandwidth from the channel, the controller batches writes and reads to minimize the number of these
turnaround delays. While the controller is doing reads, any writes are buffered in the RQQ. Eventually either
no new read requests will arrive or the RQQ will become full of writes; the controller then has only writes to
service and can drain them. Since all writes are posted (once a write has been sent the sender receives no
reply), there is no problem in delaying them in this way.
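The benefit of batching can be seen with a small model (this is an illustration of the policy's effect, not the hardware logic; the 'r'/'w' request-stream encoding is purely illustrative):

```c
/* Count bus turnarounds (read<->write transitions) in a request
 * stream, where 'r' models a read and 'w' models a write. Batching
 * writes while reads keep arriving reduces this count. */
static unsigned count_turnarounds(const char *seq)
{
    unsigned n = 0;
    for (const char *p = seq; *p && p[1]; p++)
        if (*p != p[1])
            n++;            /* direction of the data bus changed */
    return n;
}
```

A fully interleaved stream such as "rwrwrw" pays a turnaround on every request, while the batched "rrrwww" servicing the same six requests pays only one.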
When the controller is performing a write burst, any reads suffer additional latency while they wait in the RQQ
for the writes to drain. In the simple case a read is trapped behind all the writes that were buffered up during
the previous read burst. However, while the writes drain from the RQQ, additional writes can be inserted into
the queue, further delaying the read. To bound the extra latency a read suffers during a write burst, the
controller counts the number of writes done in the burst and switches to reads if the burst is longer than the
wr_limit (and there are reads waiting). The write limit is set per-channel in the mc_config register. Increasing
the limit increases memory bus utilization; decreasing the limit decreases the average read latency.
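Updating a per-channel field such as wr_limit is a read-modify-write on the mc_config value. The sketch below shows the general pattern only; the shift and mask values here are hypothetical placeholders, so the real bit positions must be taken from the register map:

```c
#include <stdint.h>

/* HYPOTHETICAL field layout -- consult the mc_config register
 * definition for the actual position and width of wr_limit. */
#define WR_LIMIT_SHIFT  0
#define WR_LIMIT_MASK   (0xFULL << WR_LIMIT_SHIFT)

/* Return a new mc_config value with the wr_limit field replaced,
 * leaving every other field untouched. */
static inline uint64_t set_wr_limit(uint64_t mc_config, unsigned limit)
{
    mc_config &= ~WR_LIMIT_MASK;
    mc_config |= ((uint64_t)limit << WR_LIMIT_SHIFT) & WR_LIMIT_MASK;
    return mc_config;
}
```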
A similar situation arises for a channel that is only doing reads. Reads to an open page are issued ahead of
reads to a closed page. Once a read completes, its data can be returned and the RQQ entry freed. It is
therefore possible for more requests to the open page to be added to the queue, and these will also bypass
the read to the closed page. To limit how long the read to the closed page is blocked, the controller limits the
number of reads that may pass it. Every time a read is bypassed its age is incremented; when the age reaches
the age_limit, no new entries are permitted to pass it, and it (and any entries ahead of it) will drain from the
queue. The age_limit is set per-channel in the mc_config register. The trade-off is the same as for the wr_limit:
increasing the limit increases memory bus utilization; decreasing the limit decreases the average read latency.
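The age-based cap can be expressed as a minimal model (a sketch of the policy as described above, not the controller's implementation):

```c
/* Model of the age_limit policy: each open-page read that bypasses a
 * blocked closed-page read increments the blocked read's age; once the
 * age reaches age_limit, no further bypasses are allowed and the
 * blocked read drains. Returns how many of the n_open_page_reads
 * actually bypass the blocked read. */
static unsigned bypasses_allowed(unsigned n_open_page_reads,
                                 unsigned age_limit)
{
    unsigned age = 0, bypassed = 0;
    for (unsigned i = 0; i < n_open_page_reads; i++) {
        if (age >= age_limit)
            break;          /* blocked read is now serviced first */
        age++;
        bypassed++;
    }
    return bypassed;
}
```

With a large age_limit, a steady stream of open-page reads can starve the closed-page read for a long time; a small age_limit forces the queue to drain sooner at the cost of extra page-miss turnarounds.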
Reads from I/O bridge 1 can be given priority to ensure timely servicing. In most systems this should be
enabled (it is the default) to avoid transmit buffer underruns at high data rates. The low relative frequency of
I/O bridge 1 DMA requests minimizes the impact of this prioritization on other requests, while ensuring that
latency-sensitive requests are not delayed behind a high-frequency request stream (for example, from the CPU
or Data Mover). In addition, RQQ entries may be reserved for I/O bridge 1 requests. If the number of entries
in the queue reaches the limit set in the iob1_qsize field of the mc_config_1 register, all agents apart from
I/O bridge 1 back off from accessing the memory until the number of entries in the queue falls below the
iob1_qsize set in mc_config_0. Having two limits provides some hysteresis. The number of buffers that should
be reserved depends on the number of active DMA channels, their bandwidth, and the amount of other
memory activity in the system. For a relatively high load on the system, a reasonable starting point is to
reserve 5 buffers. Note that if zeros are used for the iob1_qsize fields, memory controller performance will be
extremely poor.
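The two-limit hysteresis can be sketched as a simple state update (a model of the back-off behavior described above; the parameter names are illustrative, not register fields):

```c
#include <stdbool.h>

/* Hysteresis between the two iob1_qsize limits: other agents back off
 * once queue occupancy reaches the mc_config_1 limit and resume only
 * after occupancy falls below the mc_config_0 limit. */
static bool other_agents_blocked(unsigned occupancy,
                                 unsigned cfg1_limit,  /* back-off threshold */
                                 unsigned cfg0_limit,  /* release threshold  */
                                 bool currently_blocked)
{
    if (!currently_blocked && occupancy >= cfg1_limit)
        return true;        /* queue filled up: non-IOB1 agents back off */
    if (currently_blocked && occupancy < cfg0_limit)
        return false;       /* queue drained enough: agents resume */
    return currently_blocked;
}
```

Because the release threshold sits below the back-off threshold, agents do not thrash on and off when occupancy hovers near a single limit.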
Each memory channel interface consists of:
• The memory pipeline control queue and scheduler (MCQ).
• Memory data FIFO registers (MFIFO).
• Configuration registers (CRREG).
The MCQ keeps track of memory cycle activity and of the open rows and banks. It supplies this information
to the issue logic for the request queue and to the memory scheduler.