
An Illustrated History of Computers

Part 4


___________________________________



John Kopplin © 2002




The title of forefather of today’s all-electronic digital computers is
usually awarded to ENIAC, which stood for Electronic Numerical Integrator
and Calculator. ENIAC was built at the University of Pennsylvania between
1943 and 1945 by two professors, John Mauchly and the 24-year-old
J. Presper Eckert, who got funding from the War Department after promising
they could build a machine that would replace all the “computers”, meaning the
women who were employed calculating the firing tables for the army’s artillery
guns. The day Mauchly and Eckert saw the first small piece of ENIAC work, the
first people they fetched to show off their progress were some of these female
computers (one of whom remarked, “I was astounded that it took all this
equipment to multiply 5 by 1000”).


ENIAC filled a 20 by 40 foot room, weighed 30 tons, and used more than 18,000
vacuum tubes. Like the Mark I, ENIAC employed paper card readers obtained from
IBM (these were a regular product for IBM, as they were a long-established part
of business accounting machines, IBM’s forte). When operating, ENIAC was
silent, but you knew it was on: each of the 18,000 vacuum tubes generated waste
heat like a light bulb, and all that heat (174,000 watts of it) meant the
computer could only be operated in a specially designed room with its own
heavy-duty air conditioning system. Only the left half of ENIAC is visible in
the first picture; the right half was basically a mirror image of what’s visible.




Two views of ENIAC: the “Electronic Numerical Integrator and
Calculator” (note that it wasn’t even given the name of computer
since “computers” were people) [U.S. Army photo]






To reprogram ENIAC you had to rearrange the patch cords visible on the left
in the prior photo and reset the 3,000 switches visible on the right. To
program a modern computer, you type out a program with statements like:



    Circumference = 3.14 * diameter




To perform this computation on ENIAC you had to rearrange a large number of
patch cords and then locate three particular knobs on that vast wall of knobs
and set them to 3, 1, and 4.
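
For comparison, here is the same computation written out as a complete modern
program. This is only an illustrative sketch in C++ (the language discussed at
the end of this article); the diameter value of 1000 is invented for the
example:

    // The one-line computation from the text, as a complete C++ program.
    #include <iostream>

    int main() {
        double diameter = 1000.0;                  // hypothetical input value
        double circumference = 3.14 * diameter;    // the statement from the text
        std::cout << circumference << '\n';        // prints 3140
        return 0;
    }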




Reprogramming ENIAC involved a hike [U.S. Army photo]



Once the army agreed to fund ENIAC, Mauchly and Eckert worked around the
clock, seven days a week, hoping to complete the machine in time to contribute
to the war. Their wartime effort was so intense that on most days they ate all
three meals in the company of the Army captain who was their liaison with their
military sponsors. They were allowed a small staff but soon observed that
they could hire only the most junior members of the University of Pennsylvania
staff because the more experienced faculty members knew that their proposed
machine would never work.


One of the most obvious problems was that the design would require 18,000
vacuum tubes to all work simultaneously. Vacuum tubes were so notoriously
unreliable that even twenty years later many neighborhood drug stores provided
a “tube tester” that allowed homeowners to bring in the vacuum tubes
from their television sets and determine which one of the tubes was causing
their TV to fail. And television sets only incorporated about 30 vacuum tubes.
At the time, the device that used the largest number of vacuum tubes was an
electronic organ: it incorporated 160 tubes. The idea that 18,000 tubes could function
together was considered so unlikely that the dominant vacuum tube supplier of
the day, RCA, refused to join the project (but did supply tubes in the interest
of “wartime cooperation”). Eckert solved the tube reliability problem
through extremely careful circuit design. He was so thorough that before he
chose the type of wire cabling he would employ in ENIAC he first ran an
experiment where he starved lab rats for a few days and then gave them samples
of all the available types of cable to determine which they least liked to eat.
Here’s a look at a small number of the vacuum tubes in ENIAC:




Even with 18,000 vacuum tubes, ENIAC could only hold 20 numbers at a time.
However, thanks to the elimination of moving parts it ran much faster than the
Mark I: a multiplication that required 6 seconds on the Mark I could be
performed on ENIAC in 2.8 thousandths of a second. ENIAC’s basic clock speed
was 100,000 cycles per second. Today’s home computers employ clock speeds of
1,000,000,000 cycles per second. Built with $500,000 from the U.S. Army,
ENIAC’s first task was to compute whether or not it was possible to build a
hydrogen bomb (the atomic bomb was completed during the war and hence is older
than ENIAC). The very first problem run on ENIAC required only 20 seconds and
was checked against an answer obtained after forty hours of work with a
mechanical calculator. After chewing on half a million punch cards for six
weeks, ENIAC did humanity no favor when it declared the hydrogen bomb feasible.
This first ENIAC program remains classified even today.


Once ENIAC was finished and proved worthy of the cost of its development,
its designers set about to eliminate the obnoxious fact that reprogramming the
computer required a physical modification of all the patch cords and switches.
It took days to change ENIAC’s program. Eckert and Mauchly next teamed up
with the mathematician John von Neumann to design EDVAC, which pioneered the
stored-program concept. Because he was the first to publish
a description of this new computer, von Neumann is often wrongly credited with
the realization that the program (that is, the sequence of computation steps)
could be represented electronically just as the data was. But this major
breakthrough can be found in Eckert’s notes long before he ever started
working with von Neumann. Eckert was no slouch: while in high school he had
scored the second-highest math SAT score in the entire country.


After ENIAC and EDVAC came other computers with humorous names such as
ILLIAC, JOHNNIAC, and, of course, MANIAC. ILLIAC was built at the University of
Illinois at Champaign-Urbana, which is probably why the science fiction author Arthur
C. Clarke chose to have the HAL computer of his famous book “2001: A Space
Odyssey” born at Champaign-Urbana. Have you ever noticed that you can
shift each of the letters of IBM backward by one alphabet position and get HAL?
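
You can check this yourself with a few lines of code. Here is a trivial C++
sketch (purely for fun) that shifts each letter of “IBM” back one position:

    #include <iostream>
    #include <string>

    int main() {
        std::string name = "IBM";
        for (char& c : name)
            c -= 1;                 // shift each letter back one position
        std::cout << name << '\n';  // prints HAL
        return 0;
    }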




ILLIAC II, built at the University of Illinois (it is a good thing computers
were one-of-a-kind creations in those days; can you imagine being
asked to duplicate this?)






HAL from the movie “2001: A Space Odyssey”. Look at the previous
picture to understand why the movie makers in 1968 assumed computers
of the future would be things you walk into.



JOHNNIAC was a reference to John von Neumann, who was unquestionably a
genius. At age 6 he could tell jokes in classical Greek. By 8 he was doing
calculus. He could recite books he had read years earlier word for word. He
could read a page of the phone directory and then recite it backwards. On
one occasion it took von Neumann only 6 minutes to solve a problem in his
head that another professor had spent hours on using a mechanical calculator.
Von Neumann is perhaps most famous (infamous?) as the man who worked out
the complicated method needed to detonate an atomic bomb.


Once the computer’s program was represented electronically, modifications to
that program could happen as fast as the computer could compute. In fact,
computer programs could now modify themselves while they ran (such programs
are called self-modifying programs). This introduced a new way for a program to
fail: faulty logic in the program could cause it to damage itself. This is one
source of the crashes that plagued MS-DOS programs and of the general
protection fault and blue screen of death made famous by Windows.
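
To make this concrete, here is a toy stored-program machine sketched in C++.
The three opcodes are invented for this illustration and belong to no real
machine. Because instructions and data share one memory array, a store with a
bad address can overwrite an instruction, and the program damages itself
exactly as described above:

    #include <iostream>
    #include <vector>

    int main() {
        // One shared memory holds both the program and its data.
        // Hypothetical opcodes: 1 addr val = STORE, 2 addr = PRINT, 0 = HALT.
        std::vector<int> mem = {
            1, 12, 42,   // cell 0: STORE 42 into cell 12 (the data cell)
            2, 12,       // cell 3: PRINT cell 12 -> prints 42
            1, 8, 99,    // cell 5: STORE 99 into cell 8 -- a bug! cell 8 is code
            2, 12,       // cell 8: PRINT (this opcode is about to be trampled)
            0,           // cell 10: HALT
            0,           // cell 11: unused
            0,           // cell 12: the data cell
        };

        for (std::size_t pc = 0; pc < mem.size();) {
            switch (mem[pc]) {
            case 0:                                              // HALT
                return 0;
            case 1:                                              // STORE
                mem[mem[pc + 1]] = mem[pc + 2]; pc += 3; break;
            case 2:                                              // PRINT
                std::cout << mem[mem[pc + 1]] << '\n'; pc += 2; break;
            default:  // the toy machine's own "general protection fault"
                std::cout << "fault: bad opcode " << mem[pc]
                          << " at cell " << pc << '\n';
                return 1;
            }
        }
        return 0;
    }

Running this prints 42 and then the fault message: the buggy STORE turned the
second PRINT instruction into the meaningless opcode 99, so the machine could
no longer make sense of its own program.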


Today, one of the most notable characteristics of a computer is that its
ability to be reprogrammed allows it to contribute to a wide variety of
endeavors, such as the following completely unrelated fields:

  • the creation of special effects for movies,
  • the compression of music to allow more minutes of music
    to fit within the limited memory of an MP3 player,
  • the observation of car tire rotation to detect and prevent
    skids in an anti-lock braking system (ABS),
  • the analysis of the writing style in Shakespeare’s work, with
    the goal of determining whether a single individual really was
    responsible for all of these pieces.


By the end of the 1950s computers were no longer one-of-a-kind, hand-built
devices owned only by universities and government research labs. Eckert and
Mauchly left the University of Pennsylvania over a dispute about who owned the
patents for their invention. They decided to set up their own company.
Their first product was the famous UNIVAC computer, the first
commercial (that is, mass-produced) computer. In the 1950s, UNIVAC (a
contraction of “Universal Automatic Computer”) was the household word for
“computer” just as “Kleenex” is for “tissue”. The first UNIVAC was sold,
appropriately enough, to the Census Bureau. UNIVAC was
also the first computer to employ magnetic tape. Many people still confuse a
picture of a reel-to-reel tape recorder with a picture of a mainframe computer.




A reel-to-reel tape drive [photo courtesy of The Computer Museum]



ENIAC was unquestionably the origin of the U.S. commercial computer
industry, but its inventors, Mauchly and Eckert, never achieved fortune from
their work and their company fell into financial problems and was sold at a
loss. By 1955 IBM was selling more computers than UNIVAC and by the 1960’s the
group of eight companies selling computers was known as “IBM and the seven
dwarfs”. IBM grew so dominant that the federal government pursued
anti-trust proceedings against them from 1969 to 1982 (notice the pace of our
country’s legal system). You might wonder what type of event is required to
dislodge an industry heavyweight. In IBM’s case it was their own decision to
hire an unknown but aggressive firm called Microsoft to provide
the software for their personal computer (PC). This lucrative contract allowed
Microsoft to grow so dominant that by the year 2000 their market capitalization
(the total value of their stock) was twice that of IBM and a federal court had
found them to be running an illegal monopoly.


If you learned computer programming in the 1970’s, you dealt with what today
are called mainframe computers, such as the IBM 7094 (shown below),
IBM 360, or IBM 370.




The IBM 7094, a typical mainframe computer [photo courtesy of IBM]






There were two ways to interact with a mainframe. The first was called
time sharing because the computer gave each user a tiny sliver of
time in a round-robin fashion. Perhaps 100 users would be simultaneously logged
on, each typing on a teletype such as the following:




The Teletype was the standard mechanism used to
interact with a time-sharing computer



A teletype was a motorized typewriter that could transmit your keystrokes to
the mainframe and then print the computer’s response on its roll of paper. You
typed a single line of text, hit the carriage return button, and waited for the
teletype to begin noisily printing the computer’s response (at a whopping 10
characters per second). On the left-hand side of the teletype in the prior
picture you can observe a paper tape reader and writer (i.e., puncher). Here’s
a close-up of paper tape:




Three views of paper tape









After observing the holes in paper tape it is perhaps obvious why all
computers use binary numbers to represent data: a binary bit (that is, one
digit of a binary number) can only have the value of 0 or 1 (just as a decimal
digit can only have a value of 0 through 9). Something which can only take two
states is very easy to manufacture, control, and sense. In the case of paper
tape, the hole has either been punched or it has not. Electro-mechanical
computers such as the Mark I used relays to represent data because a relay
(which is just an electrically operated switch) can only be open or closed. The earliest
all-electronic computers used vacuum tubes as switches: they too were either
open or closed. Transistors replaced vacuum tubes because they too could act as
switches but were smaller, cheaper, and consumed less power.
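
To see the two-state idea at work in modern code, here is a small C++ sketch
(the starting value 42 is arbitrary) that treats one byte as a row of eight
two-state switches:

    #include <bitset>
    #include <iostream>

    int main() {
        unsigned char value = 42;                    // one byte = eight switches
        std::cout << std::bitset<8>(value) << '\n';  // prints 00101010

        value |= (1 << 0);     // "close" switch 0 (set the lowest bit)
        value &= ~(1 << 3);    // "open" switch 3 (clear that bit)
        std::cout << std::bitset<8>(value) << '\n';  // prints 00100011
        return 0;
    }

Each printed digit is one switch that is either open (0) or closed (1), just
like one hole position across a row of paper tape.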


Paper tape has a long history as well. It was first used as an information
storage medium by Sir Charles Wheatstone, who used it to store Morse code that
was arriving via the newly invented telegraph (incidentally, Wheatstone was
also the inventor of the concertina).


The alternative to time sharing was batch mode processing,
where the computer gives its full attention to your program.
In exchange for getting the computer’s full attention at run-time, you had to
agree to prepare your program off-line on a key punch machine which
generated punch cards.




An IBM key punch machine, which operates like a typewriter except that it
produces punched cards rather than a printed sheet of paper



University students in the 1970’s bought blank cards a linear foot at a time
from the university bookstore. Each card could hold only one program statement.
To submit your program to the mainframe, you placed your stack of cards in the
hopper of a card reader. Your program would be run whenever the computer made
it that far. You often submitted your deck and then went to dinner or to bed
and came back later hoping to see a successful printout showing your results.
Obviously, a program run in batch mode could not be interactive.


But things changed fast. By the 1990’s a university student would typically
own his own computer and have exclusive use of it in his dorm room.




The original IBM Personal Computer (PC)



This transformation was a result of the invention of the
microprocessor. A microprocessor (uP) is a computer that is
fabricated on an integrated circuit (IC). Computers had been around for 20
years before the first microprocessor was developed at Intel in 1971.
The micro in the name microprocessor refers to its physical size. Intel didn’t
invent the electronic computer, but they were the first to succeed in cramming
an entire computer onto a single chip (IC). Intel was started in 1968 and
initially produced only semiconductor memory (Intel invented both the DRAM and
the EPROM, two memory technologies that are still going strong today). In 1969
they were approached by Busicom, a Japanese manufacturer of high-performance
calculators (these were typewriter-sized units; the first shirt-pocket-sized
scientific calculator, the Hewlett-Packard HP-35, was not introduced until
1972). Busicom wanted Intel to produce 12 custom calculator
chips: one chip dedicated to the keyboard, another chip dedicated to the
display, another for the printer, etc. But integrated circuits were (and are)
expensive to design and this approach would have required Busicom to bear the
full expense of developing 12 new chips since these 12 chips would only be of
use to them.




A typical Busicom desk calculator



But a new Intel employee (Ted Hoff) convinced Busicom to instead accept a
general purpose computer chip which, like all computers, could be reprogrammed
for many different tasks (like controlling a keyboard, a display, a printer,
etc.). Intel argued that since the chip could be reprogrammed for alternative
purposes, the cost of developing it could be spread out over more users and
hence would be less expensive to each user. The general purpose computer is
adapted to each new purpose by writing a program which is a
sequence of instructions stored in memory (which happened to be Intel’s forte).
Busicom agreed to pay Intel to design a general-purpose chip, taking a price
break in exchange for letting Intel sell the resulting chip to others. But
development of the chip took longer than expected and Busicom pulled out of the
project. Intel knew it had a winner by that point and gladly refunded all of
Busicom’s investment just to gain sole rights to the device which they finished
on their own.


Thus was born the Intel 4004, the first microprocessor (uP). The 4004 consisted of
2300 transistors and was clocked at 108 kHz (i.e., 108,000 times per second).
Compare this to the 42 million transistors and the 2 GHz clock rate (i.e.,
2,000,000,000 times per second) used in a Pentium 4. A popular story holds
that one of Intel’s 4004 chips still functions aboard the Pioneer 10
spacecraft, one of the man-made objects farthest from the earth. Curiously,
Busicom went bankrupt and never ended up using the ground-breaking
microprocessor.


Intel followed the 4004 with the 8008 and 8080. Intel priced
the 8080 microprocessor at $360 as a jab at IBM’s famous
360 mainframe, which cost millions of dollars.
The 8080 was employed in the MITS Altair
computer, which was the world’s first personal computer (PC). It
was personal all right: you had to build it yourself from a kit of parts that
arrived in the mail. The kit didn’t even include an enclosure, which is why
the unit shown below doesn’t match the picture on the magazine cover.




The Altair 8800, the first PC



A Harvard freshman by the name of Bill Gates
decided to drop out of college so he could spend all his time writing
programs for this computer. This early experience put Bill Gates in the right
place at the right time when IBM decided to standardize on Intel
microprocessors for their line of PCs in 1981. The Intel Pentium 4 used in
today’s PCs is still compatible with the Intel 8088 used in IBM’s first PC.


If you’ve enjoyed this history of computers, I encourage you to try your
own hand at programming a computer. That is the only way you will really
come to understand the concepts of looping, subroutines, high- and low-level
languages, bits and bytes, etc. I have written a number of Windows programs
which teach computer programming in a fun, visually-engaging setting. I start
my students on a programmable RPN calculator where we learn about programs,
statements, program and data memory, subroutines, logic and syntax errors,
stacks, etc. Then we move on to an 8051 microprocessor (which happens to
be the most widespread microprocessor on earth) where we learn about
microprocessors, bits and bytes, assembly language, addressing modes, etc.
Finally, we graduate to the most powerful language in use today: C++ (pronounced
“C plus plus”). These Windows programs are accompanied by a book’s worth of
on-line documentation which serves as a self-study guide, allowing you to
teach yourself computer programming! The home page (URL) for this collection
of software is
www.computersciencelab.com.




Bibliography:


“ENIAC: The Triumphs and Tragedies of the World’s First Computer” by Scott McCartney.


