In the beginning there was five. Five bits of information per character. In the beginning
this was enough (barely). For with five bits you could represent 32 different characters.
This is how the Electro-mechanical Teletype received the characters it was to print.
But soon people wanted both upper and lower case characters and other symbols. So in
the second age of electronic communication we advanced to seven bits of data to
represent the characters we wanted to communicate to electronic devices. Now we had
128 different characters we could use, both upper and lower case and many symbols.
Surely this is enough, now we can rest…
“This is good, but I am Spanish and I need to communicate a ‘ç’ (c cedilla) character.”
Hmmm, we can add a bit. Now we will use eight bits of data, giving us up to 256
different characters. This is good; now we use eight bits, a byte.
But then… “I am Greek, I am Russian, I am Arabic…” was heard, “We need to use
characters that we are familiar with.”
What can be done to accommodate these languages? We will create code pages, one
for each language. Now with these code pages each language will have its own 256
characters to use. This is the third age of electronic communication:
code pages.
Surely this is enough, now we can rest…
In a land far, far away, there is a people that uses characters so different from ours that
they do not look like characters at all, more like mini-pictures. And they have over
13,000 different characters.
Now the fullness of time has come and we need to unify all these languages and
symbol sets into one coherent system; we will call it “Unicode”. Unicode will use
two bytes, 16 bits, to communicate characters to and from electronic devices. Now we
can communicate more than 65,000 different characters and symbols.
Surely this is enough, so we rest.
Unicode is a way of mapping all the characters and symbols in use by modern languages. A
Unicode character consists of two bytes. These two bytes, 16 bits, allow us to represent
more than 65,000 characters.
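As a rough illustration (an added sketch, not part of the original text), these character
counts follow directly from the number of bits available: n bits give 2 to the power n
distinct values. A minimal C example, assuming nothing beyond a standard C compiler:

    #include <stdio.h>

    int main(void)
    {
        /* Each additional bit doubles the number of representable characters. */
        printf("5 bits  -> %d characters (early teletype code)\n", 1 << 5);  /* 32    */
        printf("7 bits  -> %d characters (7-bit character set)\n", 1 << 7);  /* 128   */
        printf("8 bits  -> %d characters (one code page)\n",       1 << 8);  /* 256   */
        printf("16 bits -> %d characters (two-byte Unicode)\n",    1 << 16); /* 65536 */
        return 0;
    }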
Most legacy computing devices use one byte to represent a character or symbol. This
single-byte design causes a problem when we want to use Unicode to represent
characters. Some steps have been taken to accommodate this two-byte character
representation. One is the creation of a new operating system using a New Technology.
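To make the size difference concrete, here is a small, illustrative C sketch (not from the
original text; it assumes a C11 compiler, where char16_t is a 16-bit character type):

    #include <stdio.h>
    #include <uchar.h>  /* char16_t (C11) */

    int main(void)
    {
        char     legacy  = 'A';       /* one byte: fits within a single code page */
        char16_t unicode = u'\u0417'; /* two bytes: Cyrillic capital letter Ze    */

        printf("legacy character:  %zu byte(s)\n", sizeof legacy);  /* prints 1 */
        printf("Unicode character: %zu byte(s)\n", sizeof unicode); /* prints 2 */
        return 0;
    }

On most platforms sizeof(char16_t) is 2, matching the two-byte Unicode characters
described above, while the legacy character type remains a single byte.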