A Detailed Look Inside the Intel® NetBurst Micro-Architecture of the Intel Pentium® 4 Processor

µops called traces, which are stored in the execution trace cache. The execution trace cache stores these µops in the
path of program execution flow, where the results of branches in the code are integrated into the same cache line.
This increases the instruction flow from the cache and makes better use of the overall cache storage space, since the
cache no longer stores instructions that are branched over and never executed. The execution trace cache can deliver
up to 3 µops per clock to the core.

The execution trace cache and the translation engine have cooperating branch prediction hardware. Branch targets
are predicted based on their linear address using branch prediction logic and fetched as soon as possible. Branch
targets are fetched from the execution trace cache if they are cached there, otherwise they are fetched from the
memory hierarchy. The translation engine’s branch prediction information is used to form traces along the most
likely paths.
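Since traces are built along the most likely paths, code whose branches almost always go the same way streams efficiently from the trace cache. A small illustrative C routine (the function and data are hypothetical, not from the paper) makes the point: on mostly one-sided data, the branch below is highly predictable, so the µops along the taken path can be packed into a single trace.

```c
/* Illustrative sketch only: the trace cache stores µops along the
   predicted path, so a branch that almost always resolves the same way
   keeps instruction delivery flowing without fetching code that is
   branched over. On mostly-positive input this branch is predictable. */
#include <stddef.h>

int count_positive(const int *v, size_t n) {
    int count = 0;
    for (size_t i = 0; i < n; i++) {
        if (v[i] > 0)   /* predictable when the data is mostly one-sided */
            count++;
    }
    return count;
}
```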

The Out-of-Order Core

The core’s ability to execute instructions out of order is a key factor in enabling parallelism. This feature enables the
processor to reorder instructions so that if one µop is delayed while waiting for data or a contended resource, other
µops that appear later in the program order may proceed around it. The processor employs several buffers to smooth
the flow of µops. This implies that when one portion of the entire processor pipeline experiences a delay, that delay
may be covered by other operations executing in parallel (for example, in the core) or by the execution of µops
which were previously queued up in a buffer (for example, in the front end).
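The benefit of reordering is easiest to see in code with independent dependency chains. In this hedged sketch (the function is illustrative, not from the paper), if the load feeding `a` is delayed by a cache miss, an out-of-order core can execute the µops computing `b` around the stalled ones, drawing on µops already queued by the front end.

```c
/* Sketch: two independent dependence chains. Chain 2 has no data
   dependence on chain 1, so if chain 1 stalls waiting for p[0], the
   out-of-order core may execute chain 2's µops first; program results
   are unchanged because retirement is still in order. */
int two_chains(const int *p, const int *q) {
    int a = p[0] * 3 + 1;   /* chain 1: depends only on p */
    int b = q[0] * 5 - 2;   /* chain 2: depends only on q */
    return a + b;           /* the chains join only at the end */
}
```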

The delays described in this paper must be understood in this context. The core is designed to facilitate parallel
execution. It can dispatch up to six µops per cycle through the issue ports. (The issue ports are shown in Figure 4.)
Note that six µops per cycle exceeds the trace cache and retirement µop bandwidth. The higher bandwidth in the
core allows for peak bursts of more than 3 µops per cycle and achieves higher issue rates by allowing greater
flexibility in issuing µops to different execution ports.

Most execution units can start executing a new µop every cycle, so that several instructions can be in flight at a time
in each pipeline. Many arithmetic logic unit (ALU) instructions can start at a rate of two per cycle, and many
floating-point instructions can start one every two cycles. Finally, µops can begin execution out of order, as soon as
their data inputs are ready and resources are available.
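These throughput figures suggest a common tuning pattern, sketched below with an illustrative function of my own (not from the paper): a single accumulator forms one long dependence chain in which each add waits for the previous one, while two independent accumulators let execution units that can start a new µop every cycle keep more adds in flight.

```c
/* Sketch of latency vs. throughput: splitting a reduction into two
   independent accumulators breaks the serial dependence chain, so
   adds from both chains can be in flight at once. The final result
   is identical to a single-accumulator sum. */
#include <stddef.h>

long sum_two_acc(const long *v, size_t n) {
    long acc0 = 0, acc1 = 0;
    size_t i = 0;
    for (; i + 1 < n; i += 2) {
        acc0 += v[i];       /* independent chain 0 */
        acc1 += v[i + 1];   /* independent chain 1 */
    }
    if (i < n)
        acc0 += v[i];       /* odd tail element */
    return acc0 + acc1;
}
```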

Retirement

The retirement section receives the results of the executed µops from the execution core and processes the results so
that the proper architectural state is updated according to the original program order. For semantically correct
execution, the results of IA-32 instructions must be committed in original program order before they are retired.
Exceptions may be raised as instructions are retired. Thus exceptions cannot occur speculatively; they occur in the
correct order, and the machine can be correctly restarted after an exception.

When a µop completes and writes its result to the destination, it is retired. Up to three µops may be retired per cycle.
The Reorder Buffer (ROB) is the unit in the processor which buffers completed µops, updates the architectural state
in order, and manages the ordering of exceptions.
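The retirement discipline can be sketched in miniature. The code below is a deliberately simplified model, not Intel's implementation: µops may complete out of order, but commitment proceeds strictly from the head of the reorder buffer, stops at the first incomplete entry, and retires at most three µops per cycle, matching the figure in the text.

```c
/* Minimal ROB retirement sketch (an assumption-laden model, not the
   actual Pentium 4 design). Entries complete out of order; retirement
   walks from the head in program order, stopping at the first
   incomplete entry, up to RETIRE_WIDTH per cycle. */
#include <stdbool.h>
#include <stddef.h>

#define ROB_SIZE 8
#define RETIRE_WIDTH 3

typedef struct { bool completed; } RobEntry;

/* Retire up to RETIRE_WIDTH completed µops from the head of the ROB.
   `head` is the index of the oldest in-flight µop; `count` is the
   number of in-flight entries. Returns how many retired this cycle. */
size_t retire_cycle(RobEntry rob[], size_t *head, size_t count) {
    size_t retired = 0;
    while (retired < RETIRE_WIDTH && retired < count &&
           rob[*head].completed) {
        rob[*head].completed = false;     /* entry freed for reuse */
        *head = (*head + 1) % ROB_SIZE;
        retired++;
    }
    return retired;
}
```

Note how an incomplete µop at the head blocks everything behind it, even if younger µops have already finished: that is exactly what keeps exceptions and architectural state in program order.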

The retirement section also keeps track of branches and sends updated branch target information to the Branch
Target Buffer (BTB) to update branch history. Figure 3 illustrates the paths that are most frequently executed
inside the Intel NetBurst micro-architecture: an execution loop that interacts with the multi-level cache hierarchy
and the system bus.

The following sections describe in more detail the operation of the front end and the execution core.

Front End Pipeline Detail

The following information about the front end operation may be useful for tuning software with respect to
prefetching, branch prediction, and execution trace cache operations.
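As a concrete taste of the prefetch-oriented tuning the following sections discuss, here is a hedged sketch using `__builtin_prefetch`, a GCC/Clang builtin (an assumption of this example, not something the paper defines) that typically compiles to an IA-32 prefetch instruction. Issuing the hint a fixed distance ahead of use gives the memory hierarchy time to respond; the distance of 16 elements is arbitrary and would need tuning.

```c
/* Hedged software-prefetch sketch: __builtin_prefetch is a compiler
   builtin that emits a prefetch hint with no architectural side effects;
   results are identical with or without it. The lookahead distance of
   16 elements is illustrative, not a recommendation from the paper. */
#include <stddef.h>

long sum_with_prefetch(const long *v, size_t n) {
    long acc = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + 16 < n)
            __builtin_prefetch(&v[i + 16]);  /* hint only */
        acc += v[i];
    }
    return acc;
}
```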
