A.4.1.4. Locking Code into the Instruction Cache
One very important instruction cache feature is the ability to lock code into the instruction cache.
Once locked, the code is always available for fast execution. Another reason to lock critical code
into the cache is that, under the round-robin replacement policy, even a very frequently executed
function will eventually be evicted. Key code components to consider for locking are:
• Interrupt handlers
• Real time clock handlers
• OS critical code
• Time critical application code
The disadvantage of locking code into the cache is that it reduces the cache available to the rest of
the program. How much code to lock is highly application dependent and requires experimentation to
optimize.
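As an illustration, the following armasm-style sketch locks a routine into the instruction cache one
32-byte line at a time using the core's Fetch and Lock I-Cache Line function in CP15 register 9. The
labels lockMe, lockMeEnd, codeLock, and lockLoop are illustrative; the sketch assumes interrupts are
disabled while the loop runs and that the routine resides in cacheable memory.

lockMe                          ; routine to be locked
        MOV     r0, #5          ; time-critical code to keep resident
        ADD     r5, r1, r2
        MOV     pc, lr          ; return
lockMeEnd

codeLock                        ; locking loop, run with interrupts disabled
        LDR     r0, =lockMe
        BIC     r0, r0, #31     ; round the start down to a 32-byte line boundary
        LDR     r1, =lockMeEnd
        BIC     r1, r1, #31     ; round the end down to a line boundary as well
lockLoop
        MCR     p15, 0, r0, c9, c1, 0   ; fetch and lock one line into the instruction cache
        CMP     r0, r1                  ; last line reached?
        ADD     r0, r0, #32             ; advance to the next cache line
        BNE     lockLoop                ; if not, lock the next line

The matching unlock function in CP15 register 9 releases the locked lines when they are no longer
needed.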
Code placed into the instruction cache should be aligned on a 1024-byte boundary and packed
sequentially together as tightly as possible so as not to waste precious cache space. Making the
code sequential also ensures an even distribution across all cache ways. Though it is possible to
choose randomly located functions for cache locking, this approach runs the risk of locking many
ways in one set while leaving few or none locked in another. This uneven distribution leaves some
sets with few ways available for normal replacement and can lead to excessive thrashing of the
instruction cache.
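As a concrete illustration of this placement guideline, the armasm-style sketch below collects the
routines to be locked into a single area aligned on a 1024-byte boundary (ALIGN=10, i.e. 2^10 bytes)
with no padding between them; the area and routine names are illustrative. With GNU as, .balign 1024
(or .p2align 10) gives the same alignment.

        AREA    LockedCode, CODE, READONLY, ALIGN=10   ; 2^10 = 1024-byte boundary
IrqHandler                          ; locked routines placed back to back
        SUBS    pc, lr, #4          ; placeholder body
RtcHandler
        MOV     pc, lr              ; placeholder body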