2.1.1 High Instruction Bandwidth/Fast Execution
Based on the hardware provisions, most of the XC2200's instructions can be executed in just one clock cycle (1/f_SYS). This includes arithmetic instructions, logic instructions, and move instructions with most addressing modes.
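As a worked example (the 80 MHz figure is an illustrative assumption, not a specification of any particular derivative): with f_SYS = 80 MHz, one clock cycle lasts 1 / (80 × 10^6 Hz) = 12.5 ns, so a single-cycle instruction such as an ADD or MOV completes in 12.5 ns.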
Special instructions such as JMPS take more than one machine cycle. Divide instructions are mainly executed in the background, so other instructions can be executed in parallel. Due to the branch prediction mechanism, correctly predicted branch instructions require only one cycle or can even be overlaid with another instruction (zero-cycle jump).
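The following C sketch is only a rough illustration of the cycle counts stated above (one cycle for standard instructions, one or zero cycles for a correctly predicted branch). Multi-cycle cases such as JMPS or mispredicted branches are deliberately not modeled, because their exact counts are not given in this section.

/*
 * Hypothetical sketch only: a toy cycle estimate for a short instruction
 * sequence, using just the rules stated above:
 *   - standard ALU/logic/move instructions: 1 cycle
 *   - correctly predicted branch: 1 cycle, or 0 cycles when it is folded
 *     into the preceding instruction (zero-cycle jump)
 */
#include <stdio.h>

enum kind { STD, PREDICTED_BRANCH, FOLDED_BRANCH };

static unsigned cycles(const enum kind *seq, unsigned n)
{
    unsigned total = 0;
    for (unsigned i = 0; i < n; i++) {
        switch (seq[i]) {
        case STD:              total += 1; break; /* single-cycle instruction */
        case PREDICTED_BRANCH: total += 1; break; /* correctly predicted, not folded */
        case FOLDED_BRANCH:    total += 0; break; /* zero-cycle jump */
        }
    }
    return total;
}

int main(void)
{
    /* e.g. ADD, MOV, folded JMPR, MOV -> 3 cycles under these assumptions */
    enum kind seq[] = { STD, STD, FOLDED_BRANCH, STD };
    printf("estimated cycles: %u\n", cycles(seq, sizeof seq / sizeof seq[0]));
    return 0;
}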
The instruction cycle time is dramatically reduced through the use of instruction pipelining. This technique allows the core CPU to process portions of multiple sequential instructions in parallel. Up to seven stages can operate in parallel (a simple model of this overlap follows the stage descriptions below):
The two-stage instruction fetch pipeline fetches and preprocesses instructions from the respective program memory:

PREFETCH: Instructions are prefetched from the PMU in the predicted order. The instructions are preprocessed in the branch detection unit to detect branches. The prediction logic determines if branches are assumed to be taken or not.

FETCH: The instruction pointer for the next instruction to be fetched is calculated according to the branch prediction rules. The branch folding unit preprocesses detected branches and combines them with the preceding instructions to enable zero-cycle branch execution. Prefetched instructions are stored in the instruction FIFO, while stored instructions are moved from the instruction FIFO to the instruction processing pipeline.
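A minimal sketch of the decoupling role of such an instruction FIFO follows; it is not the hardware implementation, and the FIFO depth of 4 is an assumed value chosen only for illustration. The fetch side pushes prefetched instruction words, and the decode side pops them when it is ready.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define FIFO_DEPTH 4u

struct insn_fifo {
    uint32_t word[FIFO_DEPTH];
    unsigned head, tail, count;
};

static bool fifo_push(struct insn_fifo *f, uint32_t w)   /* fetch side */
{
    if (f->count == FIFO_DEPTH)
        return false;                      /* FIFO full: fetch must wait */
    f->word[f->tail] = w;
    f->tail = (f->tail + 1) % FIFO_DEPTH;
    f->count++;
    return true;
}

static bool fifo_pop(struct insn_fifo *f, uint32_t *w)   /* decode side */
{
    if (f->count == 0)
        return false;                      /* FIFO empty: decode must wait */
    *w = f->word[f->head];
    f->head = (f->head + 1) % FIFO_DEPTH;
    f->count--;
    return true;
}

int main(void)
{
    struct insn_fifo f = {0};
    uint32_t w;
    /* fetch runs ahead on the predicted path ... */
    fifo_push(&f, 0x1111); fifo_push(&f, 0x2222); fifo_push(&f, 0x3333);
    /* ... while decode drains the FIFO at its own pace */
    while (fifo_pop(&f, &w))
        printf("decode gets 0x%04X\n", (unsigned)w);
    return 0;
}

This is the usual motivation for such a buffer: the fetch stages can keep running ahead on the predicted path while the processing pipeline is briefly busy, and the processing pipeline can keep draining the FIFO if fetch is momentarily delayed.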
The five-stage instruction processing pipeline executes the respective instructions:

DECODE: The previously fetched instruction is decoded and the GPR used for indirect addressing is read from the register file, if required.

ADDRESS: All operand addresses are calculated. For instructions implicitly accessing the stack, the stack pointer (SP) is decremented or incremented.

MEMORY: All required operands are fetched.

EXECUTE: The specified operation (ALU or MAC) is performed on the previously fetched operands. The condition flags are updated. Explicit write operations to CPU-SFRs are executed. GPRs used for indirect addressing are incremented or decremented, if required.

WRITE BACK: The result operands are written to the specified locations. Operands located in the DPRAM are stored via the write-back buffer.
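The C sketch below is a conceptual toy model, not a model of the actual XC2200 hardware: it only shows how up to seven sequential instructions can occupy the seven stages named above at the same time, advancing one stage per cycle. Hazards, stalls, branch folding and the write-back buffer are ignored.

#include <stdio.h>

static const char *stage_name[7] = {
    "PREFETCH", "FETCH", "DECODE", "ADDRESS", "MEMORY", "EXECUTE", "WRITEBACK"
};

int main(void)
{
    enum { STAGES = 7, INSNS = 10 };
    int in_stage[STAGES];                  /* instruction index per stage, -1 = empty */
    for (int s = 0; s < STAGES; s++)
        in_stage[s] = -1;

    int next = 0;                          /* next instruction to enter PREFETCH */
    for (int cycle = 0; ; cycle++) {
        /* advance the pipeline: each stage hands its instruction to the next one */
        for (int s = STAGES - 1; s > 0; s--)
            in_stage[s] = in_stage[s - 1];
        in_stage[0] = (next < INSNS) ? next++ : -1;

        int busy = 0;
        for (int s = 0; s < STAGES; s++)
            if (in_stage[s] >= 0)
                busy = 1;
        if (!busy)
            break;                         /* pipeline has drained */

        printf("cycle %2d:", cycle);
        for (int s = 0; s < STAGES; s++)
            if (in_stage[s] >= 0)
                printf("  %s=I%d", stage_name[s], in_stage[s]);
        printf("\n");
    }
    return 0;
}

After the pipeline has filled (from cycle 6 onwards in this toy model), one instruction leaves the WRITE BACK stage every cycle, which is how the average throughput approaches one instruction per clock cycle even though each individual instruction passes through several stages.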