The motion estimation engine uses an advanced directional search algorithm that rapidly and precisely matches the current macroblock (16x16 pixels) with its counterpart in the reference frame. The core uses the frame stored in external memory as its reference. The processing generates one motion vector per macroblock, giving the direction and amplitude of the detected motion. Matching is performed with half-pixel precision.
The motion estimation engine features advanced capabilities to shorten the search time as much as possible. It also delivers statistics used by the rate allocation algorithm, and it detects when a macroblock cannot be registered correctly to the reference frame and is better encoded as an Intra macroblock.
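As an illustration, block matching of this kind typically minimizes a sum of absolute differences (SAD) over a search window. The sketch below shows a plain exhaustive integer-pel search; the core's directional search and half-pixel refinement are not reproduced, and all names, the search range, and the boundary assumptions are hypothetical:

```c
#include <stdint.h>
#include <stdlib.h>
#include <limits.h>

#define MB 16 /* macroblock size */

/* Sum of absolute differences between the current macroblock and one
 * candidate position in the reference frame. */
static unsigned sad_16x16(const uint8_t *cur, const uint8_t *ref, int stride)
{
    unsigned sad = 0;
    for (int y = 0; y < MB; y++)
        for (int x = 0; x < MB; x++)
            sad += abs(cur[y * stride + x] - ref[y * stride + x]);
    return sad;
}

/* Exhaustive integer-pel search over a +/-range window; returns the motion
 * vector of the best match. The caller must ensure the window stays inside
 * the reference frame. A real engine would use a directional search and
 * refine the winner to half-pel precision. */
static void motion_search(const uint8_t *cur, const uint8_t *ref,
                          int stride, int range, int *mvx, int *mvy)
{
    unsigned best = UINT_MAX;
    for (int dy = -range; dy <= range; dy++) {
        for (int dx = -range; dx <= range; dx++) {
            unsigned sad = sad_16x16(cur, ref + dy * stride + dx, stride);
            if (sad < best) { best = sad; *mvx = dx; *mvy = dy; }
        }
    }
}
```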
Motion compensation
This module computes the estimation error induced by the use of the vector generated by the motion estimation engine; it is bypassed for Intra-coded pictures (I-VOP). It computes the difference between the current macroblock (in the luminance and chrominance planes) and the macroblock predicted from the reference frame using the estimated motion vector. The result is known as the prediction error and must be encoded by the texture coding.
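A minimal sketch of this residual computation for one luminance macroblock (names are hypothetical; integer-pel prediction is shown for brevity, and the chrominance planes are handled the same way at 8x8 block size):

```c
#include <stdint.h>

#define MB 16

/* Prediction error: difference between the current macroblock and the
 * macroblock predicted from the reference frame at the estimated motion
 * vector (mvx, mvy). The result feeds the texture coding stage. */
static void prediction_error(const uint8_t *cur, const uint8_t *ref,
                             int stride, int mvx, int mvy,
                             int16_t err[MB][MB])
{
    const uint8_t *pred = ref + mvy * stride + mvx;
    for (int y = 0; y < MB; y++)
        for (int x = 0; x < MB; x++)
            err[y][x] = (int16_t)(cur[y * stride + x] - pred[y * stride + x]);
}
```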
Texture coding
This module encodes error frames for P-VOPs (resulting from motion compensation) or complete frames for I-VOPs. It has advanced low-power features: parts of the processing are switched off when they are detected to be useless, and an approximation of their result is used instead.
The texture coding is made of the Discrete Cosine Transform (DCT), AC/DC prediction, quantization and zigzag encoding, and works at block level (8x8); a code sketch of this pipeline follows the list:
• The DCT decorrelates the 8x8 blocks and delivers a matrix of 64 coefficients representing the frequency contents of the original block of data.
• The result is then quantized using a scalar quantizer. The quantization factor is programmable by the user, allowing the quality level to be set.
• The AC/DC predictor is used for I-VOPs and predicts the first line or the first column of the quantized matrix from the transformed blocks situated to the left of and above the current block. The prediction source (top or left) is determined by a gradient analysis of the DC coefficients of the transformed blocks situated on top, top-left and left. This prediction results in higher compression efficiency.
• The quantized matrix is then processed by the zigzag encoder, which reads the 8x8 matrix in a predefined scan order; this yields a sequence of coefficients, most of which are zeros. The sequence is then further compressed by a run encoder to reduce the size of the representation.
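The sketch below walks through this pipeline on one 8x8 block. It is illustrative only: the DCT is a direct-form floating-point transform (a hardware core would use a fast fixed-point factorization), the quantizer is a plain uniform quantizer simplified with respect to the MPEG-4 quantization methods, AC/DC prediction and end-of-block signalling are omitted, and all names are hypothetical. Compile with -lm.

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>

#define N 8

/* Naive 8x8 forward DCT-II, direct form, for illustration only. */
static void dct_8x8(const int16_t in[N][N], double out[N][N])
{
    for (int u = 0; u < N; u++)
        for (int v = 0; v < N; v++) {
            double sum = 0.0;
            for (int y = 0; y < N; y++)
                for (int x = 0; x < N; x++)
                    sum += in[y][x]
                         * cos((2 * y + 1) * u * M_PI / 16.0)
                         * cos((2 * x + 1) * v * M_PI / 16.0);
            double cu = (u == 0) ? M_SQRT1_2 : 1.0;
            double cv = (v == 0) ? M_SQRT1_2 : 1.0;
            out[u][v] = 0.25 * cu * cv * sum;
        }
}

/* Uniform scalar quantizer with a user-programmable step qp (simplified). */
static void quantize(const double in[N][N], int16_t out[N][N], int qp)
{
    for (int u = 0; u < N; u++)
        for (int v = 0; v < N; v++)
            out[u][v] = (int16_t)lrint(in[u][v] / (2 * qp));
}

/* Classic zigzag scan order (indices into the row-major 8x8 matrix). */
static const uint8_t zigzag[64] = {
     0,  1,  8, 16,  9,  2,  3, 10, 17, 24, 32, 25, 18, 11,  4,  5,
    12, 19, 26, 33, 40, 48, 41, 34, 27, 20, 13,  6,  7, 14, 21, 28,
    35, 42, 49, 56, 57, 50, 43, 36, 29, 22, 15, 23, 30, 37, 44, 51,
    58, 59, 52, 45, 38, 31, 39, 46, 53, 60, 61, 54, 47, 55, 62, 63
};

/* Emit (run, level) pairs for the nonzero coefficients; these pairs are
 * what the entropy encoder consumes. End-of-block handling is omitted. */
static void run_length(const int16_t q[N][N])
{
    int run = 0;
    for (int i = 0; i < 64; i++) {
        int16_t level = q[zigzag[i] / N][zigzag[i] % N];
        if (level == 0) { run++; continue; }
        printf("(run=%d, level=%d)\n", run, level);
        run = 0;
    }
}
```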
Entropy encoder
The entropy encoder finalizes the data compression by applying Huffman encoding to both the motion vectors and the compressed texture data. This module uses predefined look-up tables.
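In software, this kind of table-driven variable-length coding reduces to a lookup followed by a bit-level append, as in the sketch below. The table contents are prefix-free placeholders, not the actual MPEG-4 VLC tables, and all names are hypothetical; the output buffer is assumed to be zero-initialized.

```c
#include <stdint.h>

/* One entry of a predefined VLC look-up table: the codeword and its length
 * in bits. The codes "1", "01", "001", "000" are prefix-free placeholders. */
typedef struct { uint16_t code; uint8_t len; } vlc_t;

static const vlc_t vlc_table[4] = {
    { 0x1, 1 }, { 0x1, 2 }, { 0x1, 3 }, { 0x0, 3 },
};

/* Minimal bit writer: appends 'len' bits of 'code', MSB first, to a
 * zero-initialized byte buffer. */
typedef struct { uint8_t *buf; unsigned bitpos; } bitwriter_t;

static void put_bits(bitwriter_t *bw, uint32_t code, unsigned len)
{
    while (len--) {
        unsigned bit = (code >> len) & 1;
        bw->buf[bw->bitpos >> 3] |= (uint8_t)(bit << (7 - (bw->bitpos & 7)));
        bw->bitpos++;
    }
}

/* Entropy-encode one symbol via the predefined table. */
static void encode_symbol(bitwriter_t *bw, unsigned sym)
{
    put_bits(bw, vlc_table[sym].code, vlc_table[sym].len);
}
```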
Bitstream packetization
This module generates compliant MPEG-4 VOL and VOP headers (short headers and data partitioning are not supported, but the core can be customized to add these features).
The encoder includes an output buffer that allows the user to generate a constant-bit-rate stream (CBR mode) by coupling the core to a small microprocessor (Nios or MicroBlaze, for instance) running a rate allocation algorithm. The rate allocation algorithm can be purchased as an option.
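As a rough sketch of what such a microprocessor could run (this is not the optional algorithm offered with the core; the names, the proportional update rule, and the gain are all hypothetical): after each frame, the buffer fullness is updated and the quantization factor is nudged toward the value that keeps the buffer at its target.

```c
/* Hypothetical per-frame CBR control: steer the quantizer so the output
 * buffer fullness converges to its target (simple proportional control). */
typedef struct {
    int qp;            /* current quantization factor */
    int buf_fullness;  /* bits currently held in the output buffer */
    int buf_target;    /* desired fullness for CBR operation */
} rate_ctrl_t;

static int next_qp(rate_ctrl_t *rc, int frame_bits, int channel_bits)
{
    /* The buffer gains the coded frame and drains at the channel rate. */
    rc->buf_fullness += frame_bits - channel_bits;

    /* Proportional correction: a fuller buffer means coarser quantization. */
    int error = rc->buf_fullness - rc->buf_target;
    rc->qp += error / 4096;           /* hypothetical gain */
    if (rc->qp < 1)  rc->qp = 1;
    if (rc->qp > 31) rc->qp = 31;     /* MPEG-4 quantizer range */
    return rc->qp;
}
```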
Texture update
This feedback loop performs the inverse operations of the texture coding: inverse zigzag, inverse quantization and the Inverse Discrete Cosine Transform (IDCT). This allows the encoder to take into account the quantization errors that occur at the decoder side when the picture is decoded. The encoder then uses the result of this texture update module to update the contents of the frame store (when needed). The new contents are then ready to be used as the reference frame for encoding the next frame.
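A sketch of the matching inverse path, continuing the illustrative (not MPEG-4-exact) quantizer and direct-form transform from the texture coding example above; a hardware core would use a fixed-point fast transform:

```c
#include <math.h>
#include <stdint.h>

#define N 8

/* Inverse of the illustrative uniform quantizer shown earlier. */
static void dequantize(const int16_t in[N][N], double out[N][N], int qp)
{
    for (int u = 0; u < N; u++)
        for (int v = 0; v < N; v++)
            out[u][v] = in[u][v] * 2.0 * qp;
}

/* Naive 8x8 inverse DCT: reconstructs the block exactly as the decoder
 * will, so the frame store matches the decoder's reference frame. */
static void idct_8x8(const double in[N][N], double out[N][N])
{
    for (int y = 0; y < N; y++)
        for (int x = 0; x < N; x++) {
            double sum = 0.0;
            for (int u = 0; u < N; u++)
                for (int v = 0; v < N; v++) {
                    double cu = (u == 0) ? M_SQRT1_2 : 1.0;
                    double cv = (v == 0) ? M_SQRT1_2 : 1.0;
                    sum += cu * cv * in[u][v]
                         * cos((2 * y + 1) * u * M_PI / 16.0)
                         * cos((2 * x + 1) * v * M_PI / 16.0);
                }
            out[y][x] = 0.25 * sum;
        }
}
```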