(Excerpts from a message dated 09-11-99, Leonard Erickson to George
White)
Hi Leonard--
ML> Nowadays, the Average Idiot Home User hasn't heard of any character
ML>code other than ASCII, if even that :-(. As you well know, the stupid
ML>collating sequence of ASCII is due to the great desire of AT&T to pack
ML>everything they thought anybody would ever want into seven bits, while
ML>letting the presence of a single bit differentiate between lower- and
ML>upper-case alpha characters (a desirable characteristic only for fully
ML>mechanical terminals).
LE>ASCII *was* defined in 1963 (or was it '68?), you know. It was
>originally intended as a standard for moving data between different
>brands of mainframes (which all had their *own* character sets back
>then)
ASCII was defined as (and still is) a 7-bit code in the early
1960's. ASCII was designed primarily for the benefit of the "new" AT&T
TTY service, and had nothing whatsoever to do with transmitting data
between mainframes. At the time, all announced mainframes (big iron) in
existence used a 6-bit character set; the upcoming machines, which the
perpetrators weren't yet talking about, all used 8-bit codes. So the
manufacturers couldn't have cared less about a 7-bit standard, and all went
along with AT&T. (In retrospect, this was a very poor decision because
ASCII was, and still is, a very poor character-set code for computers.
Unfortunately, those of us who are not programming for mainframes still
live with that mistake!)
If you read my original post to George, rather than his excerpts
from it in his post to me, you would know that I stated that ASCII was
(and is) a seven-bit code. It was embedded in 8-bit media (primarily
mag tape) in an additional ANSI Standard in (IIRC) 1965, after IBM had
announced its family of 8-bit machines: S/360. The first desktop
computers used ASCII as a 7-bit code embedded in an 8-bit character set,
for reasons I have never understood. As I said in my original post, the
"upper 128" (heretofore unused) character codes were added by the IBM PC
designers in 1981, who called it "extended ASCII." AFAIK, there is no
official standard for an 8-bit code based on ASCII.
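     To make the bit arithmetic concrete, here is a small C sketch (my
own illustration; the function name is mine and not part of any
standard):

    #include <stdio.h>

    /* Plain ASCII occupies only the low seven bits (0x00-0x7F); a byte
       with the high bit set is one of the "upper 128" extended codes. */
    static int is_seven_bit(unsigned char c)
    {
        return (c & 0x80) == 0;
    }

    int main(void)
    {
        printf("%d %d\n", is_seven_bit('A'), is_seven_bit(0xE9));
        /* prints "1 0": 'A' is in the 7-bit range, 0xE9 is not */
        return 0;
    }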
LE>Also, that uppercase/lowercase distinction being one bit was
>important for programmers writing *tight* code. I used to use that
>single-bit trick in programs back when 16K of DRAM cost several
>hundred dollars.
I wouldn't know. My first "ASCII" computer was a fully-populated
CP/M machine (64K RAM), vintage 1979. All my previous programming
experience was on either 6-bit or 8-bit machines, none of which used any
version of ASCII. AFAIAC, the convenience of one-bit differentiation
between upper- and lower-case characters doesn't make up for the
inconvenience of dealing with the stupid ASCII collating sequence (which
intersperses the special characters between the upper- and lower-case
alphabetics).
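     For anyone who hasn't stared at an ASCII chart lately, here is a
rough C sketch of both points (Leonard's one-bit case trick and my
collating-sequence gripe); it is only my own illustration, not anything
from the standard:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* The one-bit trick: 'A' is 0x41 and 'a' is 0x61, so the two
           cases differ only in bit 5 (0x20). */
        char c = 'a';
        printf("%c %c\n", c & ~0x20, c | 0x20);        /* prints "A a" */

        /* The collating-sequence gripe: the codes 0x5B-0x60, i.e.
           [ \ ] ^ _ and the backquote, fall between 'Z' (0x5A) and
           'a' (0x61), so a straight code-order comparison sorts them
           into the middle of the alphabet. */
        printf("%d\n", strcmp("Zebra", "_under") < 0); /* prints "1" */
        printf("%d\n", strcmp("_under", "apple") < 0); /* prints "1" */
        return 0;
    }

A collating sequence designed for text would put that punctuation either
before or after the whole alphabet, not between the two cases of it.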
Regards,
--Murray
___
* MR/2 2.25 #120 * Prediction is very difficult, especially about the future
--- Maximus/2 2.02
* Origin: OS/2 Shareware BBS, telnet://bbs.os2bbs.com (1:109/347)