2.4.2.5 Normalize
Numbers with redundant sign bits require normalizing. Normalizing a
number is the process of shifting a twos-complement number within a
field so that the rightmost sign bit lines up with the MSB position of the
field and recording how many places the number was shifted. The
operation can be thought of as a fixed-point to floating-point conversion,
generating an exponent and a mantissa.
Normalizing is a two-stage process. The first stage derives the exponent; the second stage does the actual shifting. The first stage uses the EXP instruction, which detects the exponent value and loads it into the SE register. EXP accepts the (HI) and (LO) modifiers. The second stage uses the NORM instruction. NORM also accepts (HI) and (LO), and in addition supports the [SR OR] option. NORM uses the negated value of the SE register as its shift control code; the value is negated so that the shift is made in the correct direction (a negative exponent in SE produces a left shift).
Here is a normalization example for a single precision input:
SE=EXP AR (HI);      Detects exponent, with modifier = HI
Input:               11110110 11010100
SE set to:           –3

SR=NORM AR (HI);     Normalize, with modifier = HI
                     Shift driven by value in SE
Input:               11110110 11010100
SR:                  10110110 10100000 00000000 00000000
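As a check on the fixed-point to floating-point view described above: read as signed 16-bit values, the input is –2348 and the normalized result in SR1 is –18784. Since –18784 × 2^–3 = –2348, the mantissa in SR1 multiplied by two raised to the exponent in SE reproduces the original input.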
For a single precision input, the normalize operation can use either the
(HI) or (LO) modifier, depending on whether you want the result in SR1
or SR0, respectively.
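For instance, keeping the same input in AR, the low-reference form of the second stage would look like this (shown purely as an illustration of the (LO) option; the first-stage EXP operation is unchanged):

SR=NORM AR (LO);     Normalize, with modifier = LO
                     Shift driven by value in SE
Input:               11110110 11010100
SR0:                 10110110 10100000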
Double precision values follow the same general scheme. The first stage
detects the exponent and the second stage normalizes the two halves of
the input. For double precision, however, there are two operations in each stage: one for each 16-bit half.
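As a sketch of that sequence, assuming the 32-bit value is held in MR1 (upper half) and MR0 (lower half), the four operations could be written:

SE=EXP MR1 (HI);            { first stage: detect exponent from the upper half }
SE=EXP MR0 (LO);            { first stage: extend exponent detection into the lower half }
SR=NORM MR1 (HI);           { second stage: normalize the upper half into SR }
SR=SR OR NORM MR0 (LO);     { second stage: normalize the lower half and OR it into SR }

The [SR OR] option on the last operation merges the shifted lower half with the upper-half result already in SR, so the complete normalized double-precision value ends up in SR1 and SR0.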