an external bus cycle. In addition, the internal cache is updated when the address written to is con-
tained in the cache. This policy ensures consistency between the on-chip cache and external
memory. The IntelDX4 processor can be configured to update main memory using the write-back
policy. During writes, the cache is updated when the address being written to is contained in the
cache, but the write is not propagated to memory immediately; it is held in the cache and written
to memory during a later update.
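The difference between the two policies can be summarized in a short C sketch. The code below is
not the processor's internal logic; the structure, function names, and the handling of misses are
assumptions made only to show that a write-through write always produces an external bus cycle,
while a write-back write that hits the cache is deferred until a later update.

    /* Minimal sketch (not the processor's internal logic) contrasting the
     * write-through policy with the optional write-back policy described
     * above. All names and structures here are illustrative. */
    #include <stdint.h>
    #include <stdio.h>

    #define LINE_VALID 0x1
    #define LINE_DIRTY 0x2

    struct cache_line {
        uint32_t tag;
        uint8_t  flags;
        uint32_t data;
    };

    /* Stand-in for driving a write cycle on the external bus. */
    static void external_bus_write(uint32_t tag, uint32_t value)
    {
        printf("bus write: tag=0x%08x value=0x%08x\n", tag, value);
    }

    /* Write-through: update the line on a hit, and always run an external
     * bus cycle so main memory stays consistent with the cache. */
    static void write_through(struct cache_line *line, uint32_t tag, uint32_t value)
    {
        if ((line->flags & LINE_VALID) && line->tag == tag)
            line->data = value;              /* update cache on a hit */
        external_bus_write(tag, value);      /* memory updated on every write */
    }

    /* Write-back: update the line on a hit and mark it dirty; the external
     * write is deferred until a later update (for example, line eviction). */
    static void write_back(struct cache_line *line, uint32_t tag, uint32_t value)
    {
        if ((line->flags & LINE_VALID) && line->tag == tag) {
            line->data   = value;
            line->flags |= LINE_DIRTY;       /* written to memory later */
        } else {
            external_bus_write(tag, value);  /* miss handled on the bus */
        }
    }

    int main(void)
    {
        struct cache_line line = { .tag = 0x2000, .flags = LINE_VALID, .data = 0 };

        write_through(&line, 0x2000, 0xAAAA);  /* hit: cache and memory updated        */
        write_back(&line, 0x2000, 0xBBBB);     /* hit: cache updated, bus write deferred */
        return 0;
    }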
9.3.2    Performance Effects of the On-Chip Cache
If all program operations use on-chip resources, the fastest possible execution is achieved, because
the on-chip registers and cache satisfy all requests. However, on a cache read miss or on any
memory write operation, the external bus must be accessed, which reduces system performance.
Depending on the application, the on-chip cache achieves a hit rate of approximately 95%. This
high level of cache hits has three main effects:
1. Performance is improved. The Intel486 processor can access data from its on-chip cache
every clock. This high bandwidth allows the execution unit of the Intel486 processor to
execute many common instructions in one clock.
2. The system bus utilization decreases. Because a high percentage of reads are satisfied by
the cache, the Intel486 processor bus is idle a large percentage of the time. Additional bus
masters can reside in the system without bus saturation and the resulting performance
degradation.
3. The ratio of writes to reads on the external bus increases. The number of reads that reach
the bus decreases, but the number of writes remains constant. Main memory systems should
therefore have low latency on write operations, as the worked example below illustrates.
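The shift in bus traffic can be seen with a short back-of-the-envelope calculation. In the C sketch
below, the 95% read hit rate is the approximate figure quoted above, while the split of memory
references into 70% reads and 30% writes is an assumed value used only for illustration.

    /* Back-of-the-envelope sketch of the third effect above. The memory-reference
     * mix (70% reads, 30% writes) is an assumed figure for illustration only;
     * the 95% read hit rate is the approximate value quoted in the text. */
    #include <stdio.h>

    int main(void)
    {
        double reads    = 0.70;   /* assumed fraction of references that are reads */
        double writes   = 0.30;   /* assumed fraction that are writes              */
        double hit_rate = 0.95;   /* approximate on-chip read hit rate             */

        /* With write-through, every write reaches the bus, but only read
         * misses do, so the bus sees far more writes relative to reads.   */
        double bus_reads  = reads * (1.0 - hit_rate);
        double bus_writes = writes;

        printf("bus reads  : %.3f of all references\n", bus_reads);
        printf("bus writes : %.3f of all references\n", bus_writes);
        printf("write/read ratio on the bus: %.1f : 1\n", bus_writes / bus_reads);
        return 0;
    }

With these assumed numbers, only 3.5% of all references reach the bus as reads while 30% reach it
as writes, which is why low write latency in the main memory system matters.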
Internally, two separate 128-bit wide prefetch buffers interface to the L1 cache unit. These buffers
can be filled with data fetched from the on-chip cache in one clock cycle, or with data fetched from
external memory in as few as four clock cycles. Because the wide prefetch buffers satisfy multiple
prefetches, the usual degradation caused by a combined code cache and data cache scheme is
avoided.
To optimize performance during cache line fills, a technique called bypassing is used. The first
cycle of a cache line fill satisfies the original request. Data read in during the first cycle is sent
directly to the requesting unit. Because of this, it is not necessary to wait for the entire cache line
to fill before the requested data can be used.
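A minimal sketch of the bypassing idea, under assumed names and a greatly simplified bus model,
is shown below. It illustrates only the ordering: the doubleword that caused the miss is forwarded
to the requesting unit on the first fill cycle, and the remaining cycles of the line fill complete
afterward.

    /* Minimal sketch of the bypassing idea: the doubleword requested by the
     * execution unit is forwarded on the first fill cycle, while the remaining
     * cycles of the burst complete the line. All names are illustrative. */
    #include <stdint.h>
    #include <stdio.h>

    #define DWORDS_PER_LINE 4   /* a 16-byte Intel486 cache line holds four doublewords */

    /* Hypothetical model of one external burst read cycle. */
    static uint32_t bus_burst_read(uint32_t line_addr, int dword_index)
    {
        return line_addr + (uint32_t)dword_index;   /* stand-in for real data */
    }

    /* Fill a line, returning the requested doubleword as soon as it arrives. */
    static uint32_t line_fill_with_bypass(uint32_t line[], uint32_t line_addr,
                                          int requested_dword)
    {
        /* First cycle satisfies the original request: forward it immediately. */
        uint32_t critical = bus_burst_read(line_addr, requested_dword);
        line[requested_dword] = critical;

        /* Remaining cycles complete the line fill. */
        for (int i = 0; i < DWORDS_PER_LINE; i++)
            if (i != requested_dword)
                line[i] = bus_burst_read(line_addr, i);

        return critical;    /* requesting unit uses this without waiting */
    }

    int main(void)
    {
        uint32_t line[DWORDS_PER_LINE];
        uint32_t value = line_fill_with_bypass(line, 0x1000, 2);
        printf("forwarded doubleword: 0x%08x\n", value);
        return 0;
    }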
Figure 9-1 shows the on-chip hit rates for prefetch and read operations when running the programs
shown in Table 9-2.