Modulation   Code rate   Net absolute throughput (bit/sec)
DBPSK        1/2         100
DQPSK        1/2         200
8-DPSK       2/3         400
16-DPSK      7/8         700

Table 15.2: The four speed settings and coding.
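For orientation, and assuming the two-carrier structure of PACTOR-II with 100 Bd per carrier, the net figures follow as 2 carriers x 100 Bd x bits per symbol x code rate; for example, 16-DPSK carries 4 bits per symbol, giving 2 x 100 x 4 x 7/8 = 700 bit/sec.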
15.3.4 Online data compression
As with the Level-I protocol, PT-II uses Huffman coding for text compression on a packet-by-packet basis. As an alternative, PACTOR-II can also use pseudo-Markov coding (PMC) as a compression method. PMC was developed by SCS and increases the throughput of plain text by a factor of 1.3 compared to Huffman coding. The PTC-IIex examines each packet individually to see whether it would be faster to send it using Huffman coding, PMC, or normal ASCII transmission, so there are no disadvantages incurred by using PMC. As a further selection criterion, the PT-II protocol supports separate German and English coding tables for PMC, as well as a capitals mode for Huffman coding and PMC, giving a total of six different compression variants. The PTC-IIex checks each packet automatically and then reliably chooses the best compression method for transmitting the data. Additionally, PT-II uses "run-length coding", so that sequences of repeated characters, e.g. underlining or columns in graphics, can be transmitted very efficiently. With run-length coding, the system does not transmit each character individually; instead, a single sample character is sent, followed by the required number of repetitions. A small sketch of this idea follows below.
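The following Python sketch illustrates the run-length idea only; the escape byte, count format, and minimum run length are assumptions for illustration, not the actual PT-II packet format.

```python
# Illustrative run-length coder: a sample character plus a repeat count.
# ESC marker, count byte and minimum run length are assumptions, not the
# PT-II wire format; a real coder would also escape literal ESC bytes.
ESC = 0x1D  # hypothetical escape byte announcing a run

def rle_encode(data: bytes, min_run: int = 4) -> bytes:
    out, i = bytearray(), 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        if run >= min_run:
            out += bytes([ESC, data[i], run])   # sample character + count
        else:
            out += data[i:i + run]              # short runs stay literal
        i += run
    return bytes(out)

def rle_decode(data: bytes) -> bytes:
    out, i = bytearray(), 0
    while i < len(data):
        if data[i] == ESC:
            out += bytes([data[i + 1]]) * data[i + 2]
            i += 3
        else:
            out.append(data[i])
            i += 1
    return bytes(out)

text = b"Heading" + b"_" * 40          # e.g. an underlined heading
packed = rle_encode(text)
assert rle_decode(packed) == text
print(len(text), "->", len(packed))    # 47 -> 10 bytes
```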
A few words on how PMC functions would not be out of place here. Normal Huffman compression makes use of the statistical frequency distribution of characters in plain-language text. The most frequently used characters (e.g. 'e' and 'n') are coded with only two or three bits, whereas rare characters such as 'Y' can be up to 15 bits long. On average, one obtains a symbol length of around 4.7 bits, which is a considerable compression compared to constant-length 7-bit ASCII. Markov coding, to put it loosely, is like a doubled Huffman compression. Here it is not just the simple frequency distribution of characters which plays a role. Instead, the interest is in the frequency distribution of the character that follows a given 'leading' or initial letter of any two-character sequence. Let us take our example of an 'e': it is very probable that an 'n', an 'r' or a 't' will follow, whereas it is extremely unlikely that an 'X' would be the next character. The resulting conditional frequency distribution is more sharply peaked than the simple frequency distribution of the characters in a text, and therefore allows a better compression. In principle, each leading character allows its own Huffman code for the following character to be built up; every leading character therefore defines its own Huffman table for the characters that follow it. A small sketch of this context-dependent approach follows below.
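To make this concrete, the Python sketch below builds one Huffman code from the simple character frequencies and, for comparison, one Huffman code per leading character from the two-character statistics, then compares the average number of bits per character. It only illustrates the principle; the actual PMC tables in PT-II are fixed, language-specific tables, whereas this sketch derives its statistics from the sample text itself.

```python
# Simple Huffman vs. Markov (context-dependent) coding, for illustration only.
import heapq
from collections import Counter, defaultdict

def huffman_lengths(freqs):
    """Return {symbol: code length in bits} of a Huffman code for 'freqs'."""
    if len(freqs) == 1:                        # degenerate single-symbol case
        return {next(iter(freqs)): 1}
    heap = [(f, i, [s]) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    lengths = Counter()
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, s1 = heapq.heappop(heap)
        f2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:
            lengths[s] += 1                    # each merge adds one code bit
        heapq.heappush(heap, (f1 + f2, tiebreak, s1 + s2))
        tiebreak += 1
    return lengths

def avg_bits_huffman(text):
    """Average bits per character with one Huffman code for the whole text."""
    freqs = Counter(text)
    lengths = huffman_lengths(freqs)
    return sum(freqs[c] * lengths[c] for c in freqs) / len(text)

def avg_bits_markov(text):
    """Average bits per character with one Huffman code per leading character."""
    contexts = defaultdict(Counter)
    for lead, follow in zip(text, text[1:]):
        contexts[lead][follow] += 1            # two-character statistics
    total_bits = 0
    for freqs in contexts.values():
        lengths = huffman_lengths(freqs)
        total_bits += sum(freqs[c] * lengths[c] for c in freqs)
    return total_bits / (len(text) - 1)

sample = ("pactor combines elements of packet radio and amtor to transfer "
          "text reliably over shortwave links. ") * 20
print("plain Huffman :", round(avg_bits_huffman(sample), 2), "bits/char")
print("Markov coding :", round(avg_bits_markov(sample), 2), "bits/char")
```

On typical English plain text one would expect figures of the same order as quoted above, i.e. around 4.7 bits per character for plain Huffman coding and correspondingly less for the context-dependent variant.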
Unfortunately, although very convincing in theory, this system has two obvious weak points. Firstly, the coding table would be impracticably large, as there would have to be a complete Huffman table for every possible leading character. Secondly, the least common characters in particular show a very unstable (context-dependent) conditional probability, and it must be expected that precisely these characters would lead to a decrease in the effective transmission speed with (non-adaptive) Markov compression.