The Progress of Computer Graphics technology from 1948 – 1979





Pages 37-44 from Nick Lambert’s thesis A Critical Examination of “Computer Art”: its history and application. [Submitted Oxford, 2003]
If Computer Art was the artistic outcome of several interlinked art movements, it also built on very recent developments in computer technology, both in the United States and Western Europe. The generation of artists who used analogue computers, such as Ben Laposky and John Whitney, is closely linked with those who worked with the early digital machines, for instance Charles Csuri and A. Michael Noll. However, while analogue computing re-emerged in the late 1960s in the form of video synthesisers and TV graphics machines, digital computers were deployed for Computer Aided Design and other drafting applications.
This in turn stemmed from the digital computer’s origins in military defence systems, going back to MIT’s U.S. Navy-funded Whirlwind system in the late 1940s.
As related by Norman H. Taylor, who worked on the Whirlwind from 1948, the very first display was a representation of the machine’s storage tubes, the primitive form of memory used at that time. By running through each tube in turn, the program would show which ones were working and which were not. The programmers then wanted a way of addressing the non-working tubes, so Bob Everett invented a “light gun” to place over points on the screen and identify the relevant tube. Norman Taylor believed this was the very first “man machine interactive control of the display”; the light gun could also place or erase light spots on the screen, thus creating the MIT logo by direct interaction in 1948.
This graphical display immediately attracted the MIT public relations department and news organisations; Taylor presciently noted: “it was clear that displays attracted potential users; computer code did not.”1
More graphical programs followed, including the Bouncing Ball animation and a primitive “computer game” in which the ball could be made to fall through a hole in the “floor” by turning frequency knobs to control its bouncing.
In 1949 or 1950, a student called Dom Combelec used this system for designing the placement of antennae, creating patterns of distribution. The Whirlwind’s impact was considerable: so many visitors came to see it in the early 1950s (over 25,000) that the computing group had to set up a department to handle them. They were mainly attracted by the displays and the novel means of interacting with the computer. As Taylor put it:
It was a little dull when you just put in numbers and got out numbers. But when we got the display started, it changed the whole thing, and I think that meant not only the bandwidth you can get out of a display system, but the man machine interaction of the light pen seemed to excite people beyond comprehension.2
From the Whirlwind, which had 5,000 tubes and occupied around a quarter-acre, the Lincoln Laboratory developed the SAGE air defence system, which used points of light to track radar traces of aircraft flying over the US coastline (Taylor, slide 13). The system’s terminals consisted of a circular screen operated using a light gun and rows of switches. These consoles were active from 1956 until 1978, with no fewer than 82 of them connected to the central computer.

At this point in the Cold War, with both sides possessing nuclear weapons that could be delivered by bombers flying over the North Pole, the greatest fear in America was annihilation in a surprise nuclear strike. Huge resources were diverted into new aircraft detection systems like SAGE, whose computer-driven radar consoles, operated with light guns, helped to speed the development of computer graphical interfaces. Just as importantly, SAGE was a distributed computer network that compared data between sites to assess the threat from possible incoming Soviet bombers crossing the Arctic Circle.


With these systems in place by the mid-1950s, the next step was to ensure they had a chance of surviving the dreaded “first strike” and so enable some kind of continuity in US defence. A whole defensive infrastructure was created, with installations deep underground and scattered throughout the continental United States, which needed a new kind of distributed communications network to keep functioning. At this time, the first mainframe computers installed in major US universities and research institutes were being networked using simple telephone connections, but these were mainly point-to-point links that functioned only between two nodes. Any more complex network would seemingly need a single central command centre, and the problem was that this might be destroyed in the first attack. The question for the American researchers was how to create a survivable and robust network. The network that emerged, the ARPANET, laid the groundwork for the Internet that we know today, but more on this another time.
The Whirlwind was replaced by the TX-0, which used transistors and featured a more advanced display. It also had a wider range of programs available, and a refined version of the light gun: the smaller, lighter light pen, developed by Ben Gurley, who was later involved with the DEC PDP-1. Jack Gilmore described how the simulation of a scientific workstation with a grid of symbols led to a primitive drawing program, in which the pictorial elements of lines and shapes were moved around with the light pen.3
Soon, Gilmore and his team realised that, as well as characters, “we could literally produce pieces of drawings and then put them together […] we actually developed a fairly primitive drawing system.” 4 Having worked out how to cut and paste text around the screen, they then wrote a “tracking routine” for the light pen so that it could move these text strings.5
The first image scanner was invented at the National Bureau of Standards in 1957 and attached to its SEAC computer. The ability to input images from outside the computer would transform the nature of computer graphics; however, this first machine was designed to scan text:
It occurred to me that a general-purpose computer could be used to simulate the many character recognition logics that were being proposed for construction in hardware. A further important advantage of […] such a device was that it would enable programs to be written to simulate the […] ways in which humans view the visible world.6
In a move that would have serious and lasting consequences for digital images, the SEAC group decided to make the scans into binary representations, broken up into picture elements, or pixels, of uniform size. The reasons were mainly connected with hardware limitations; as is often the case with computer standards established early on, these choices later restricted more advanced developments:
Several decisions made in our construction of the first scanner have been enshrined in engineering practice ever since. Perhaps the most insidious was the decision to use square pixels. Simple engineering considerations as well as the limited memory capacity of SEAC dictated that the scanner represent images as rectangular arrays […] No attempt was made to predicate the digitization protocol on the nature of the image. Every image was made to fit the Procrustean requirement of the scanner. 7
Thus as early as 1957 a basic format for the digitised image had been put in place, though most computer images of this period were vector graphics. In fact, the impetus for vector-based displays came largely from Computer-Aided Design, since industrial drafting was among the computer’s first graphical applications.
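To make concrete what Kirsch’s digitisation protocol amounts to, the following sketch (written in Python purely for illustration) samples an image function on a fixed rectangular grid of square pixels and reduces each sample to a single bit with a brightness threshold. The grid size, the threshold and the brightness function are illustrative assumptions, not details of the SEAC hardware.

```python
# A minimal sketch of rectangular binary digitisation in the spirit Kirsch
# describes: every image is forced onto a fixed grid of square pixels, each
# reduced to 0 or 1 by a brightness threshold. The grid size, threshold and
# brightness() function are illustrative, not a description of the NBS scanner.

def digitise(brightness, width=64, height=64, threshold=0.5):
    """Sample brightness(x, y) on a width x height grid of square pixels.

    brightness should return a value in [0, 1] for coordinates in [0, 1).
    Returns a list of rows, each a list of 0/1 picture elements.
    """
    image = []
    for row in range(height):
        y = row / height                      # uniform, square sampling
        image.append([1 if brightness(col / width, y) > threshold else 0
                      for col in range(width)])
    return image

if __name__ == "__main__":
    # A synthetic "photograph": a bright disc on a dark background.
    disc = lambda x, y: 1.0 if (x - 0.5) ** 2 + (y - 0.5) ** 2 < 0.1 else 0.0
    bitmap = digitise(disc, width=32, height=32)
    for row in bitmap:
        print("".join("#" if bit else "." for bit in row))
```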
The following year, another signal development took place using computer graphics for what became the most distinctive computer entertainment format: the computer game. Although the 1962 game “Spacewar” on the DEC PDP-1 is generally held to be the first of its kind, it was preceded by “Tennis for Two”. This was devised in 1958 by physicist Willy Higinbotham at the Brookhaven National Laboratory, an atomic research station on Long Island. As part of a science education campaign to reassure local residents about the research work, Higinbotham hit upon the idea of making a simple computer tennis game that drew on graphics work with analogue computers.
Transistors simulated the effects of gravity and force as a ball travelled between two paddles; even wind drag was included. It was highly successful, with people queuing for two hours to play it; yet it had only a localised impact and was withdrawn when the science exhibit was revised in the early 1960s.8 This testifies to the swift comprehension of the computer’s potential amongst the engineers who created it; [significantly, this game predates the first interactive Computer Art by at least six years.]
Lawrence G. Roberts, meanwhile, was working on scanning photographs with the TX-0’s successor, the TX-2. As part of this work, he wanted to find ways of processing the 3D information contained in these images into true 3D graphics, drawing on a range of theories about how humans perceive in three dimensions. In the course of this work he developed the “hidden line” display capability, which removes from the display any lines of a 3D object that would be invisible to the viewer.
Roberts also needed a way of displaying objects within a perspective view, for which he had to integrate perspective geometry with the matrices that described the coordinates of the object’s component points. As Roberts recalled, he looked at the perspective geometry of the 19th century to see how objects in perspective were displayed, and then introduced matrices so that this could be represented on a computer. His main point was that contemporary mathematicians had little interest in, and indeed little knowledge of, the area: he had to return to the earlier texts to discover how it was achieved. The integration of the two areas was his achievement, and out of it came the field of “computational geometry”.9
By combining the two, Roberts created the “four dimensional homogeneous coordinate transform”, which is the basis for perspective transformations on the computer.10 The images shown here are some of the earliest, and even with the primitive hardware of the period, Roberts was able to rotate them almost in real time. This was the beginning of true three-dimensional imagery on the computer. As Binkley says:
Computational algorithms for picturing do not require placement in any real setting; indeed, if one wants to depict an actual object, the first step is to abstract its shape from the real world into a selected coordinate space [note that 3D programs often start with Platonic solids and make “real” objects by building with them or deforming them]. The object must be described using numbers to fix its characteristics (XO, YO, ZO). The picture plane is similarly determined with points or an equation (z = ZP), and the point of view simply becomes an ordered triple (XV, YV, ZV).11
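As a hedged illustration of the kind of operation Roberts’s homogeneous coordinates make possible (a generic modern formulation, not a reconstruction of his TX-2 code), the sketch below builds a 4x4 matrix that projects onto a picture plane, applies it to the corners of a cube expressed in homogeneous coordinates, and divides through by the fourth component to obtain two-dimensional picture points in the manner Binkley describes. The focal distance and the cube itself are arbitrary illustrative values.

```python
import numpy as np

# A minimal sketch of perspective projection with 4x4 homogeneous coordinates,
# the general technique Roberts introduced to computer graphics. Generic
# textbook formulation; the focal distance d is an arbitrary illustrative value.

def perspective_matrix(d):
    """Project onto the picture plane z = d, with the viewpoint at the origin."""
    return np.array([[1.0, 0.0, 0.0,     0.0],
                     [0.0, 1.0, 0.0,     0.0],
                     [0.0, 0.0, 1.0,     0.0],
                     [0.0, 0.0, 1.0 / d, 0.0]])

def project(points, d=2.0):
    """Map an Nx3 array of object points to Nx2 picture-plane points."""
    homogeneous = np.hstack([points, np.ones((len(points), 1))])   # (x, y, z, 1)
    transformed = homogeneous @ perspective_matrix(d).T
    transformed /= transformed[:, 3:4]        # divide by w: the perspective step
    return transformed[:, :2]

if __name__ == "__main__":
    # The eight corners of a unit cube pushed 3 units away from the viewer.
    cube = np.array([[x, y, z + 3.0] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                    dtype=float)
    print(project(cube))
```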
Because the computer’s three-dimensional space derives directly from projective geometry, as outlined by Roberts, its heritage is not only pictorial space, but the plans and diagrams that model components and buildings in physical reality. In other words, there has to be a direct correspondence between points in computer space and in physical space, to allow for the accurate manufacture of physical objects. This has led to a close reciprocity between physical and digital; yet one of the major uses of the computer’s modelling abilities has been to create “realistic” images of fantastic creatures and fictional settings. It is an interesting paradox that systems intended to simulate realistic appearances have more often been employed for fictional and imaginative purposes. However, this would seem to be inherent in the notion of simulation.
Also at this time, Steven Coons was developing algorithms for describing surfaces through parametric methods. He also considered how best to produce “a system that would […] join man and machine in an intimate cooperative complex” – a CAD system.12 Coons described a setup not unlike the SAGE terminals, with an operator working on a CRT screen with a light pen.
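Coons’s name survives in the bilinearly blended “Coons patch”, which conveys the flavour of these parametric methods: a surface is filled in from its four boundary curves alone. The sketch below is the standard textbook form of that construction, not code from Coons’s own project, and the straight-edged boundary curves are chosen purely for illustration.

```python
# A minimal sketch of a bilinearly blended Coons patch: a parametric surface
# interpolated from its four boundary curves c0(u), c1(u) (bottom and top)
# and d0(v), d1(v) (left and right). Standard construction associated with
# Coons, not his own code; the example boundary curves are arbitrary.

def coons_patch(c0, c1, d0, d1):
    """Return S(u, v) built from four boundary curves mapping [0,1] -> (x,y,z)."""
    def lerp(p, q, t):
        return tuple(a + t * (b - a) for a, b in zip(p, q))

    def surface(u, v):
        ruled_u = lerp(c0(u), c1(u), v)           # blend bottom/top curves
        ruled_v = lerp(d0(v), d1(v), u)           # blend left/right curves
        corners = lerp(lerp(c0(0), c0(1), u),     # bilinear corner correction
                       lerp(c1(0), c1(1), u), v)
        return tuple(a + b - c for a, b, c in zip(ruled_u, ruled_v, corners))
    return surface

if __name__ == "__main__":
    # Four straight boundary edges of a tilted quadrilateral, for illustration.
    c0 = lambda u: (u, 0.0, 0.0)
    c1 = lambda u: (u, 1.0, 0.5)
    d0 = lambda v: (0.0, v, 0.5 * v)
    d1 = lambda v: (1.0, v, 0.5 * v)
    patch = coons_patch(c0, c1, d0, d1)
    print(patch(0.5, 0.5))
```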
Here, then, is a starting point for direct interaction with the computer: blocks of graphics being dragged into place with a light pen to create larger diagrams, taking place on a CRT display that enabled all actions to be viewed in real-time. Obviously, its primitive nature cannot be overstressed: in no way was the output of this machine considered “art”, nor can it be seen as such in retrospect. But it was pioneering and helped to pave the way directly for Ivan Sutherland’s seminal Sketchpad program of the early 1960s.
Sketchpad was produced on the TX-2, which was much more graphics-oriented than its predecessor. Gilmore recalled its use for simulating planets and gravity, so that their motion around the sun, and that of the moons around the planets, could be observed. Their velocity and acceleration could be controlled with the light pen, another real-time usage.13
This was an early exercise in computer animation and simulation, and in some ways pointed the way towards interactive video games, as did Spacewar, an early attempt in this direction on the DEC PDP-1 in 1962. Certainly, the TX-2 was quite capable in the area of graphics and manipulation. The DEC machine was also used by Gilmore as the basis of an Electronic Drafting Machine (EDM) for the Itek Corporation.
The Itek EDM, based on the PDP-1, was developed by Norman Taylor, Gilmore and others from 1960 to 1962, and was intended for use in the architectural and engineering industries. Using the light pen and drafting software, the operator could draw lines, circles and other pictorial elements, as well as specifying distances and angles. They could also link these elements together to produce sub-drawings, or macros, that could be copied and reflected around each axis. The interaction of light pen and visual display created “the illusion of drawing on the CRT with very straight-edged tools.”14 Thus the concept of the drawing program, and of direct interaction, was in place by 1962, when Time magazine reported on the Itek system in its March 2nd edition.
This system evidently contained in embryo many of the techniques that are now familiar to all computer-based illustrators and CAD users. Basically, it assisted the draftsman with the more time-consuming operations or those which required the pinpoint accuracy that only a computer could supply; its functionality was constrained by its assigned task and there is no evidence that the EDM was used directly in any Computer Art projects. However, it did influence the design of illustration and CAD software and was sold to Lockheed Aircraft, Martin Marietta and the U.S. Air Force.
It is also interesting to note the transition to CAD/CAM (Computer-Aided Manufacture) systems as described by Pierre Bézier, who developed the now-ubiquitous Bézier curve, a parametric curve running between two endpoints and shaped by its control points. Bézier worked on Renault’s in-house CAD system during the late 1960s and 1970s, which was intended to be an interactive program usable by the designers themselves, to produce drawings and then prototype them as models. Such systems connected the freeform world of design with the physical realm; as Bézier put it, they “came from the ability to work, think, and react in the rigid Cartesian world of machine tools and, at the same time, in the more flexible, n-dimensional parametric world.”15 Thus the computer design package was grounded in “real-world” requirements from the start, especially the need for three-dimensional positioning and accurate dimensions.
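For readers unfamiliar with the construction, the following sketch evaluates a Bézier curve by repeated linear interpolation (de Casteljau’s algorithm): the curve starts at the first control point, ends at the last, and is shaped by the points in between. The control points are arbitrary illustrative values; this is not code from Renault’s system.

```python
# A minimal sketch of evaluating a Bezier curve by repeated linear
# interpolation (de Casteljau's algorithm). The curve interpolates its first
# and last control points; the intermediate points shape it. The control
# points below are arbitrary illustrative values.

def bezier_point(control_points, t):
    """Evaluate a Bezier curve of any degree at parameter t in [0, 1]."""
    points = [tuple(p) for p in control_points]
    while len(points) > 1:
        # Repeatedly interpolate between neighbouring points at parameter t.
        points = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
                  for p, q in zip(points, points[1:])]
    return points[0]

if __name__ == "__main__":
    controls = [(0.0, 0.0), (1.0, 2.0), (3.0, 3.0), (4.0, 0.0)]   # cubic example
    for i in range(11):
        x, y = bezier_point(controls, i / 10)
        print(f"{x:6.3f} {y:6.3f}")
```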
The market for such design systems in the 1960s was limited to those with the funds to acquire room-filling computers and specialised technicians, and thus the main customers were the military and large university labs. Indeed, the military’s input into early graphics computers was crucial, because they needed machines with graphical displays in order to co-ordinate their defences and as visual systems in aircraft cockpits. In fact, the term “computer graphics” was coined by William Fetter of Boeing for his cockpit displays.
In summary, many conventions of computer graphics were laid down early on, and in spite of over thirty years of refinement - including the revolution caused by cheap, powerful desktop computers, which has placed this technology within reach of most people - we are still using direct descendants of these initial systems. Of course, the experimental nature of these systems should not be forgotten. As Herbert Freeman noted, the need for a system that “in some sense would mirror the often barely realized but visually obvious relationships inherent in a two-dimensional picture” required special algorithms for generating shapes, hiding lines and shading. These all demanded powerful computers and stretched the capabilities of contemporary machines.16
These problems meant that most computer graphics research was carried out in labs where there was access to the most powerful mainframes available - universities and military establishments. Many of the initial difficulties listed by Freeman were overcome by sheer inventiveness on the part of engineers and programmers, and nowhere was this more apparent than in the case of Sketchpad.

Ivan Sutherland’s system seemed to spring fully formed into existence, with all the accoutrements that we have come to expect from modern graphics packages. Working with the capable TX-2 computer, this graduate student at MIT created, over the course of two years from 1961 to 1963, a comprehensive basis for all succeeding two-dimensional vector graphics software, one which had no small influence on three-dimensional projects as well.


The connection between physical action and its corresponding screen-based effect is fundamental to the GUI. When Sutherland introduced Sketchpad, the promise was that using the computer would become as transparent as drawing on paper. Freeman sees Sutherland’s system as the first in an evolutionary line that, via the Xerox STAR, would lead to the Apple Macintosh and thence to Microsoft Windows, becoming the most widespread form of Human-Computer Interaction: “it was not until Sutherland developed his system for man-machine interactive picture generation that people became aware of the full potential offered by computer graphics.” 17
[Plate IX: Ivan Sutherland working with the first version of Sketchpad c.1961.]
This was certainly true of the young Andries van Dam, who in 1964 saw a demonstration of Sketchpad in the form of a film that Sutherland had released. It impressed him so much that he switched his course to the nascent field of computer graphics. Later, in 1968, he witnessed Doug Engelbart’s amazing demonstration of “window systems, the mouse, outline processing, sophisticated hypertext features and telecollaboration with video and audio communication”.18 Van Dam in no way overstates the importance of the GUIs that resulted from Engelbart’s early work when he asserts that the PC would be nowhere near as pervasive or successful without them.
[Picture: Doug Engelbart’s first patent application for the mouse, showing a computer setup almost identical to modern desktops; also the first mouse prototype, 1968.]
By the 1970s, the windowing system prototyped by Engelbart was being developed by Xerox’s Palo Alto Research Center (PARC). The Graphical User Interface was adopted by a number of operating systems in the early 1980s, most famously the Apple Macintosh, and thereafter the computer’s use in graphical fields grew exponentially.
After SEAC established the concept of the “pixel”, there were a number of attempts to create paint programs that addressed pixels directly. In 1969-70, Joan Miller at Bell Labs implemented a crude paint program that let users “paint” on a frame buffer; then in 1972-73 Dick Shoup wrote SuperPaint at Xerox PARC, the first complete hardware and software paint system. This contained all the essential elements of later paint packages: it allowed a painter to colour and modify pixels, using a palette of tools and effects.19 In a sense, this harked back to the Whirlwind developers writing “MIT” with their light gun, except that the increased power of 1970s workstations allowed for much more complex graphics.
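Stripped of the hardware, the mechanism these early paint systems share is simple: a frame buffer is an array of pixel values in memory, and a “brush” writes a chosen palette colour into the pixels it covers. The sketch below is an illustrative reduction of that idea, not a description of SuperPaint’s actual design.

```python
# A minimal sketch of the frame-buffer idea behind early paint programs:
# the picture is an array of pixel values in memory, and a round "brush"
# writes a palette colour into the pixels it covers. Purely illustrative;
# it does not reproduce SuperPaint's hardware or software design.

class FrameBuffer:
    def __init__(self, width, height, background=0):
        self.width, self.height = width, height
        self.pixels = [[background] * width for _ in range(height)]

    def paint(self, cx, cy, radius, colour):
        """Write colour into every pixel within radius of (cx, cy)."""
        for y in range(max(0, cy - radius), min(self.height, cy + radius + 1)):
            for x in range(max(0, cx - radius), min(self.width, cx + radius + 1)):
                if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                    self.pixels[y][x] = colour

if __name__ == "__main__":
    palette = {0: ".", 1: "#", 2: "o"}          # indices standing in for colours
    fb = FrameBuffer(24, 12)
    fb.paint(6, 6, 3, 1)                        # two dabs of "paint"
    fb.paint(16, 5, 4, 2)
    for row in fb.pixels:
        print("".join(palette[p] for p in row))
```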
In 1979, Ed Emshwiller at the New York Institute of Technology created his ground-breaking computer animation Sunstone, using the paint package written by Mark Levoy at Cornell on a 24-bit frame buffer.20
Sunstone marked the beginning of the maturity of computer animation and paint systems. Around this time they began to supplant the Scanimate and other analogue graphics computers. With the development of Lucasfilm’s paint package into the first version of Photoshop, this form of graphics software was combined with photo-manipulation and became widespread on desktop computers.
===========

Doug Engelbart demonstrated the first mouse-and-windows system at the Fall Joint Computer Conference in 1968.21 Building on Ivan Sutherland’s approach of directly interacting with computers via a light pen aimed at the screen, Engelbart developed a device for moving a cursor around a screen. He also created a graphical interface to the computer, using the metaphor of windows that displayed the available files. This system was developed further in the 1970s at Xerox’s Palo Alto Research Center (PARC) and provided the basis for all future WIMP (windows-icon-mouse-pointer) interfaces. http://www.artmuseum.net/w2vr/archives/Engelbart/Engelbart.html

Engelbart’s vision was of “augmenting human intellect” by using computers in a very different way to their previous deployment as super-fast calculators. He saw them in terms of information storage and retrieval, and indeed as tools to help human decision-making. As Engelbart put it:

We see the quickest gains emerging from (1) giving the human the minute-by-minute services of a digital computer equipped with computer-driven cathode-ray-tube display, and (2) developing the new methods of thinking and working that allow the human to capitalize upon the computer's help. By this same strategy, we recommend that an initial research effort develop a prototype system of this sort aimed at increasing human effectiveness in the task of computer programming.22


Engelbart then went on to propose how an architect might use a visual system to design a building with access to all important data being fed through a live graphical system. The fact that this seems entirely natural to us some 46 years later shows just how prescient Engelbart’s vision was, although it took over fifteen years of research to bring it to fruition in the Xerox STAR workstation.

Whilst Engelbart was working at the Stanford Research Institute, there was an important cultural current in California that ran counter to the militaristic establishment, but made full use of that establishment’s need for cutting-edge research by inhabiting the universities and institutes it funded. This was the general counterculture of San Francisco, especially as personified in Stewart Brand, editor of the Whole Earth Catalog and an early technophile who supported computer developments. He has since propagated the story that the very concept of a “personal computer” emerged in large part because of the anarchic spirit of these Californian technicians, and indeed this seems to have been an important influence:


In a 1995 special issue of Time magazine entitled "Welcome to Cyberspace," Stewart Brand wrote an article arguing that the personal computer revolution and the Internet had grown directly out of the counterculture. "We Owe It All to the Hippies," claimed the headline. "Forget antiwar protests, Woodstock, even long hair. The real legacy of the sixties generation is the computer revolution." According to Brand, and to popular legend then and since, Bay area computer programmers had imbibed the countercultural ideals of decentralization and personalization, along with a keen sense of information's transformative potential, and had built those into a new kind of machine. In the late 1960s and the early 1970s, Brand and others noted, computers had largely been mainframes, locked in the basements of universities and corporations, guarded by technicians. By the early 1980s, computers had become desktop tools for individuals, ubiquitous and seemingly empowering.23
Another influential figure at this time was Ted Nelson, both through his demonstration of the concept of hypertext and through his book Computer Lib.24 Nelson coined the term “hypertext” as early as 1960 and by 1963 was lecturing on the concept. He also had a strong influence on the IBM team that designed the Personal Computer: in 1978 he was invited to lecture to them in Atlanta and, in the course of a 90-minute presentation, explained the basic concepts of computers being used as information retrieval machines and aids to human creativity.25

Nelson foresaw many aspects of the later World Wide Web as implemented by Tim Berners-Lee, but crucially he feels that the current Web does not fully embody his intentions: Berners-Lee produced only a heavily simplified system. For instance, Nelson feels that the Hypertext Markup Language (HTML) that forms the core of all web pages is heavily flawed, since links have to be embedded manually and can easily break. As Nelson explains:


In 1960 I had a vision of a world-wide system of electronic publishing, anarchic and populist, where anyone could publish anything and anyone could read it. (So far, sounds like the web.) But my approach is about literary depth-- including side-by-side intercomparison, annotation, and a unique copyright proposal. I now call this "deep electronic literature" instead of "hypertext," since people now think hypertext means the web.26

The initial proposal for hypertext included the following facets:

• a word processor capable of storing multiple versions, and displaying the differences between these versions (though Nelson did not complete this implementation, a mockup of the system proved sufficient to inspire interest in others);

• on top of this basic idea, the facility for nonsequential writing, in which the reader could choose his or her own path through an electronic document.


This idea of multiple paths through documents and links from one document to another served as the main inspiration for all hypertext documents and artworks, from the late 1970s onwards.
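To indicate what “multiple paths through documents” amounts to as a data structure, the following sketch models documents as nodes carrying text and named links, with a reading being one chosen path through the network. The structure and names here are illustrative assumptions, not Nelson’s Xanadu design.

```python
# A minimal sketch of nonsequential, linked documents: each node carries some
# text and named links to other nodes, and a "reading" is one chosen path
# through the network. Illustrative only; this is not Nelson's Xanadu model.

from dataclasses import dataclass, field

@dataclass
class Node:
    title: str
    text: str
    links: dict = field(default_factory=dict)   # link label -> target title

documents = {
    "Start":   Node("Start", "A short introduction.", {"history": "History",
                                                       "theory": "Theory"}),
    "History": Node("History", "How the idea arose.", {"back": "Start"}),
    "Theory":  Node("Theory", "Why it matters.", {"back": "Start"}),
}

def follow(path, start="Start"):
    """Return the titles of the nodes visited by following a sequence of link labels."""
    node = documents[start]
    visited = [node.title]
    for label in path:
        node = documents[node.links[label]]
        visited.append(node.title)
    return visited

if __name__ == "__main__":
    print(follow(["history", "back", "theory"]))  # one possible reading path
```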

Nelson then incorporated the hypertext vision into “Project Xanadu” from 1967, its name inspired by Coleridge’s poem Kubla Khan. Xanadu was to be both an operating system and a browser (to use its nearest modern equivalents), and Nelson’s goals for the project are far-reaching. Instead of simply trying to provide an open document format like PDF, he wants to fundamentally change the relationship of text within documents to other documents, and to other types of digital media:


• We want to provide a principled alternative to today's electronic formats and enclosed, canopic-jar document conventions.

• We want to show far deeper hypertext than is possible on the web.

• We want to unify hypertext with word processing, audio and video, email, instant messaging and other media.

• We want to offer a principled new form of rewritable, reworkable content.

• We want to make the processes of work simpler and more powerful.27

The problem is that the long-promised version of “Project Xanadu” that truly incorporates all of Nelson’s ideas is still far from complete, though again some of its concepts have been influential. His idealism and perfectionism (not to mention his insistence on holding out for the exact implementation of his ideas) meant that although he generated useful concepts, he consistently missed the boat on actually making workable products.28 As with so many technological breakthroughs, it is interesting that Nelson and Engelbart were working almost simultaneously on comparable ideas, and both are credited with important discoveries. Nelson has a reputation for being both idiosyncratic and competitive:


Several years ago I bumped into Mr. Nelson at Mr. Engelbart’s 80th birthday party. Without missing a beat and without bothering to say hello, he raised his finger and exclaimed, “I’ve discovered conclusive proof that I invented the Web browser back button!”29


1 “Retrospectives I: The Early Years in Computer Graphics” SIGGRAPH 89, panel sessions

2 “Retrospectives” ibid, p38.

3 “Retrospectives II”, ibid, p47

4 SIGGRAPH ‘89 panels… “Retrospectives II”

5 SIGGRAPH ‘89 panels… “Retrospectives II”

6 Russell A. Kirsch, “SEAC and the Start of Image Processing at the National Bureau of Standards”, IEEE Annals of the History of Computing, Vol.20, No.2, 1998, p10.

7 Russell A. Kirsch, ibid

8 “LECTURE on LOW BIT GAMES” William Linn, May 4th 98, Linz Harbour, www.timesup.org/Obsolete/lectureBolt.html

9 “Retrospectives II”, ibid, p72

10 “Retrospectives II”, ibid, p59

11 Timothy Binkley, “The Wizard of Ethereal Pictures and Virtual Places,” Leonardo: Computer Art in Context supplemental issue, 1990

12 IEEE Annals of the History of Computing, Vol.20, No.2, 1998, p21, quoting Coons from MIT article

13 SIGGRAPH ‘89 Panel Proceedings, ibid

14 “Retrospectives II”, ibid, p51

15 IEEE Annals of the History of Computing, Vol.20, No.2, 1998

16 Freeman, Herbert “Interactive Computer Graphics” IEEE Computer Society Press, 1980. Quoted by Wayne Carlson, [get web ref]

17 Herbert Freeman, ibid

18 van Dam, Andries “The Shape of Things to Come” ACM SIGGRAPH Retrospective Vol.32 No.1 February 1998

19 “Digital Paint Systems: An Anecdotal and Historical Overview”, Alvy Ray Smith, IEEE Annals of the History of Computing, 2001, p6

20 Alvy Ray Smith, ibid, p23

21 http://www.livinginternet.com/w/wi_engelbart.htm

22 Doug Engelbart, “Augmenting Human Intellect: A Conceptual Framework”, Stanford Research Institute, October 1962
http://www.bootstrap.org/augdocs/friedewald030402/augmentinghumanintellect/AHI62.pdf

23 Excerpted from Fred Turner, From Counterculture to Cyberculture, http://www.press.uchicago.edu/Misc/Chicago/817415_chap4.html

24 Excerpts from Computer Lib at http://www.digibarn.com/collections/books/computer-lib/

25 “When Big Blue got a glimpse of the future”, John Markoff, New York Times, 11th Dec 2007, http://bits.blogs.nytimes.com/2007/12/11/when-big-blue-got-a-glimpse-of-the-future/

26 http://hyperland.com/mlawLeast.html

27 Ted Nelson at http://transliterature.org/

28 As discussed in this thread: http://www.reddit.com/r/programming/comments/1zed6/will_you_be_sued_by_ted_nelson/

29 “When Big Blue got a glimpse of the future”, John Markoff, New York Times, 11th Dec 2007, http://bits.blogs.nytimes.com/2007/12/11/when-big-blue-got-a-glimpse-of-the-future/




