The ratio of the useful information to the total amount of data transmitted (information plus redundancy) is called the code rate. A very simple code, the (7,4) Hamming code for example, has a code rate of 4/7, since for every four useful bits of information, three redundancy bits are added. It can correct exactly one bit error per block of seven bits. If, however, two or more bits have their polarity changed in transmission, then this simple code fails.
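To make this concrete, here is a minimal sketch of a (7,4) Hamming encoder and decoder in Python (illustrative only; the bit ordering and the particular parity equations are one common textbook convention, not the only possible one). Four data bits are expanded to a seven-bit codeword, giving the code rate of 4/7, and the syndrome computed at the receiver points directly at a single flipped bit:

# Minimal (7,4) Hamming code sketch; parity bits sit at positions 1, 2 and 4.
def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                 # parity over codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4                 # parity over codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4                 # parity over codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(r):
    """Correct at most one bit error and return the 4 data bits."""
    s1 = r[0] ^ r[2] ^ r[4] ^ r[6]
    s2 = r[1] ^ r[2] ^ r[5] ^ r[6]
    s3 = r[3] ^ r[4] ^ r[5] ^ r[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 0 means no error, otherwise the error position
    if syndrome:
        r = r[:]
        r[syndrome - 1] ^= 1          # flip the single erroneous bit back
    return [r[2], r[4], r[5], r[6]]

data = [1, 0, 1, 1]
tx = hamming74_encode(data)
rx = tx[:]
rx[5] ^= 1                            # simulate one bit error on the channel
assert hamming74_decode(rx) == data   # a single error is corrected

If two bits of the same block are flipped, the syndrome points at the wrong position and the decoder "corrects" a bit that was in fact intact - exactly the failure described above.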
Coding theory distinguishes between two main classes of codes: block codes and convolutional codes. In block codes (e.g. Hamming codes, the Golay code or Reed-Solomon codes), the data stream is chopped into relatively small pieces called blocks, and the coding algorithm is then carried out on these blocks. Block codes were the first to be developed, due to their simplicity. Unfortunately, in practice they have all proved to be rather weak, as only very few bits per block can be corrected. The (24,12) Golay code, for example, can only correct a maximum of three bits in a block of 24, even though each block contains twelve bits of redundancy; its code rate is therefore 1/2. (For insiders: the problem with block codes is mainly that they do not adhere to one of Shannon's theorems. According to Shannon, good codes should be as long as possible and as unsystematic as possible.)
At the beginning of the sixties, convolutional codes began to slowly gain importance. In this form of coding, a message (or a data packet) is coded as a complete entity. The actual encoder consists of a tapped shift register and carries out an algorithm which strongly resembles the mathematical convolution integral - hence the name. The length of the shift register is called the constraint length, and it sets a limit on the correction capacity that can be achieved. A number of different methods can be employed to decode convolutional codes. The optimum decoder, which really can achieve the maximum possible gain from the code, is the Viterbi decoder. Unfortunately, there is an exponential relationship between the constraint length and the computing time required by a Viterbi decoder. This is why, for many years, the use of the Viterbi decoder for real-time tasks was limited to a maximum constraint length of six. The present-day generation of DSPs, however, allows constraint lengths of up to nine or, in special cases, even more. As opposed to block codes, convolutional codes with a Viterbi decoder easily allow the fine analogue resolution of the received signal to be included in the decoding process, and hence even more gain to be obtained. This method is called soft decision and, depending on the form of interference present, can give several dB of additional gain compared to hard decision.
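The encoding side is easy to sketch. Below is a minimal Python illustration of a rate-1/2 convolutional encoder with constraint length K = 3; the generator taps 7 and 5 (octal) used here are a common textbook choice, assumed purely for illustration, and practical modes use longer registers and other polynomials. Each input bit is shifted into the register, and two output bits are formed from XOR taps on it - the discrete counterpart of the convolution mentioned above:

# Minimal rate-1/2 convolutional encoder, constraint length K = 3.
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Encode a list of bits; two coded bits are produced per data bit."""
    reg = [0, 0, 0]                               # shift register of length K
    out = []
    for b in bits:
        reg = [b] + reg[:-1]                      # shift the new bit in
        out.append(sum(r & t for r, t in zip(reg, g1)) % 2)   # first tap set
        out.append(sum(r & t for r, t in zip(reg, g2)) % 2)   # second tap set
    return out

print(conv_encode([1, 0, 1, 1]))                  # -> [1, 1, 1, 0, 0, 0, 0, 1]

The decoder (e.g. a Viterbi decoder) is considerably more involved, which is exactly the computing-time problem described above.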
Another point which often arises in connection with coding is so-called interleaving. This is nothing more than a shuffling of the data. All codes, irrespective of whether they are block codes or convolutional codes, react more or less over-sensitively to short error bursts when they have been developed for maximum gain in noise. On HF channels, the error burst (QRN, clicks, short fadeouts, etc.) is about the most prevalent form of error found. In any error correction method optimized for shortwave use, it is therefore obligatory to use interleaving. Usually the transmitted data is split into short blocks (e.g. 16-bit strings) that are stacked one over another in a memory. The data is then transmitted not in the original sequence, but read out vertically, column by column. At the receiver, exactly the reverse operation occurs. If an error burst takes place during transmission, it is cut into relatively widely spaced single-bit errors by the interleaving / de-interleaving process. These bit errors resemble noise during decoding, which the decoder is designed to handle easily.
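As an illustration, the following Python sketch shows a small block interleaver; the 4 x 8 block size is chosen arbitrarily for the example. Data is written into the memory row by row and read out column by column, and the receiver performs the inverse permutation. A burst of four consecutive channel errors reappears after de-interleaving as four widely separated single-bit errors:

# Minimal block interleaver / de-interleaver sketch (4 rows x 8 columns).
ROWS, COLS = 4, 8

def interleave(bits):
    """Write row by row, read column by column."""
    return [bits[r * COLS + c] for c in range(COLS) for r in range(ROWS)]

def deinterleave(bits):
    """Write column by column, read row by row (the inverse permutation)."""
    out = [0] * (ROWS * COLS)
    i = 0
    for c in range(COLS):
        for r in range(ROWS):
            out[r * COLS + c] = bits[i]
            i += 1
    return out

data = list(range(32))                    # use indices so the shuffling is visible
tx = interleave(data)
tx[10:14] = ['X'] * 4                     # a burst of four consecutive channel errors
rx = deinterleave(tx)
print([i for i, v in enumerate(rx) if v == 'X'])   # -> [3, 11, 18, 26], spread apart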