
30 MHz Dual Trace Analog Oscilloscope with Probes, Data Sheet, Model 2120B



Data Sheet

Technical data subject to change
© B&K Precision Corp. 2012

v013012

www.bkprecision.com

Tel.: 714.921.9095

B&K Precision's model 2120B is a dual trace oscilloscope that offers high performance at a low price. Most competitors' entry-level oscilloscopes have a 20 MHz bandwidth, while B&K Precision's model 2120B has a bandwidth of 30 MHz. This oscilloscope is built and backed by B&K Precision, a company that has been selling reliable, durable, value-priced test instruments for over 50 years.

Dual or single trace operation

5 mV/div sensitivity

AUTO/NORM triggered sweep operation with AC, TVH, TVV and line coupling

Compact low profile design

cUL certified

Specifications

2120B

VERTICAL AMPLIFIERS (CH 1 and CH 2)

Sensitivity

5 mV/div to 5 V/div, 1 mV/div to 1 V/div at X5

Attenuator


10 steps in 1-2-5 sequence. Vernier control provides full adjustment between steps
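For reference (this enumeration is not printed on the sheet itself), the ten steps from 5 mV/div to 5 V/div in a 1-2-5 sequence are 5 mV, 10 mV, 20 mV, 50 mV, 100 mV, 200 mV, 500 mV, 1 V, 2 V and 5 V per division.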

Accuracy

±3%, ±5% at X5

Input Resistance

1 MΩ ±2%

Input Capacitance

25 pF ±10 pF

Frequency Response

5 mV to 5 V/div: DC to 30 MHz (-3dB). X5: DC to 10 MHz (-3dB)

Rise Time

12 ns (Overshoot ≤5%)
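As a cross-check (a general rule of thumb for Gaussian-response scopes, not a figure taken from this data sheet), rise time and bandwidth are related by t_r ≈ 0.35 / BW; for a 30 MHz bandwidth this gives 0.35 / (30 × 10^6 Hz) ≈ 11.7 ns, consistent with the 12 ns specification above.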

Operating Modes

CH 1: CH 1, single trace

CH 2

CH 2, single trace

ALT

dual trace, alternating

CHOP

dual trace, chopped

ADD

algebraic sum of CH 1 + CH 2

Polarity Reversal

CH 2 only

Maximum Input Voltage

400 V (DC + AC peak)

SWEEP SYSTEM

Sweep Speed

0.1 µs/div to 2 s/div in 1-2-5 sequence, 23 steps,

Vernier control provides fully adjustable sweep time between steps.

Accuracy

±3%

Sweep Magnification

10x

TRIGGERING

Triggering Modes

AUTO (free run) or NORM, TV-V, TV-H

Trigger Source

CH 1, CH 2, ALT, EXT, LINE

Maximum External Trigger Voltage

300 V (DC + AC peak)

Trigger Coupling

AC 30 Hz to 30 MHz

TV H

Used for triggering from horizontal sync pulses

TV V

Used for triggering from vertical sync pulses

TRIGGER SENSITIVITY

Auto

Bandwidth: 100 Hz-30 MHz, Internal: 1.5 div, External: 100 mV

Norm (manual)

Bandwidth: DC to 30 MHz, Internal: 1.5 div, External: 100 mV

TV V

Bandwidth: 20 Hz-1 kHz, Internal: 0.5 div, External: 100 mV

TV H

Bandwidth: 1 kHz-100 kHz, Internal: 0.5 div, External: 100 mV

HORIZONTAL AMPLIFIER (Input through channel 2 input)

X-Y Mode

Switch selectable using X-Y switch. CH 1: X axis, CH 2: Y axis

Sensitivity

Same as vertical channel 1

Input Impedance

Same as vertical channel 1

Frequency Response

DC to 1 MHz typical (-3 dB)

X-Y Phase Difference

Approximately 3° at 50 kHz

Maximum Input Voltage

Same as vertical channel 1

CRT

Type

Rectangular with internal graticule

Display Area

8 x 10 div (1 div = 1 cm)

Accelerating Voltage

2 kV

Phosphor

P31

Trace Rotation

Electrical, front panel adjustable

Calibrating Voltage

1 kHz (±10%) Positive Square Wave, 2 V p-p (±3%)

GENERAL

Temperature

Within Specified Accuracy: 50° to 95°F (10° to 35°C), ≤85% RH

Full Operation: 32° to 104°F (0° to 40°C), ≤85% RH

Storage: -4° to 158°F (-20° to +70°C)

Power Requirements

100/120/220/240 VAC ±10%, 50/60 Hz, approximately 40 W.

Dimensions (WxHxD)

7 x 14.5 x 17.25 in (180 x 370 x 440 mm)

Weight

Approximately 17.2 lbs (7.8 kg)

Two Year Warranty

Supplied Accessories

Instruction Manual, Two PR-33A x1/x10 Probes or equivalent,

AC Power Cord, Spare Fuse

Optional Accessories

PR-32A Demodulator Probe, PR-37A x1/x10/REF. Probe, PR-100A x100 Probe,

PR-55 High Voltage x1000 Probe, LC-210A Carrying Case

ACM Turing Award Lectures The First Twenty Years
ACM PRESS The ACM Press develops and publishes books and other materials in computer science and engineering. The ACM Press includes the following book series that are a collaboration between the Association for Computing Machinery, Inc. (ACM) and Addison-Wesley Publishing Company. ACM PRESS SERIES The Anthology Series are collections of articles and papers, in book form, from ACM and other publications in fields of related interest. The Tutorial Series are books based on or developed from the technically excellent tutorial programs sponsored by ACM. The History of Computing Series are based on ACM conferences that provide a historical perspective on selected areas of computing. The Publications Database Extract Series are books of timely information in specific areas that are extracted from ACM publications, such as reviews and bibliographies. The Conference Proceedings are edited conference and workshop proceedings, in book form, sponsored by ACM and others, with overviews by technical experts.
ACM Turing Award Lectures The First Twenty Years ACM PRESS ANTHOLOGY SERIES
ACM Press New York, New York
Addison-Wesley Publishing Company Reading, Massachusetts * Menlo Park, California * Don Mills, Ontario * Wokingham, England * Amsterdam * Sydney * Singapore * Tokyo * Madrid * Bogota * Santiago * San Juan
Library of Congress Cataloging-in-Publication Data
ACM Turing Award lectures.
(ACM Press anthology series) Includes bibliographies and index. 1. Electronic data processing. 2. Computers. I. Series. QA76.24.A33 1987 004 86-3372 ISBN 0-201-07794-9
ACM Press Anthology Series Copyright © 1987 by the ACM Press, A Division of the Association for Computing Machinery, Inc. (ACM). All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the publisher. Printed in the United States of America. Published simultaneously in Canada. Reproduced by Addison-Wesley from camera-ready copy supplied and approved by the ACM Press. ABCDEFGHIJ-HA-8987
Contents

Authors' Biographies
Preface
Introduction to Part I: Programming Languages and Systems (Susan L. Graham)
1966 The Synthesis of Algorithmic Systems (Alan J. Perlis)
1972 The Humble Programmer (Edsger W. Dijkstra)
1974 Computer Programming as an Art (Donald E. Knuth)
1976 Logic and Programming Languages (Dana S. Scott)
1977 Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs (John Backus)
1978 The Paradigms of Programming (Robert W. Floyd)
1980 The Emperor's Old Clothes (Charles Antony Richard Hoare)
1983 Reflections on Software Research (Dennis M. Ritchie)
1983 Reflections on Trusting Trust (Ken Thompson)
1984 From Programming Language Design to Computer Construction (Niklaus Wirth)
Introduction to Part II: Computers and Computing Methodologies (Robert L. Ashenhurst)
1967 Computers Then and Now (Maurice V. Wilkes)
1968 One Man's View of Computer Science (R. W. Hamming)
1969 Form and Content in Computer Science (Marvin Minsky)
1970 Some Comments from a Numerical Analyst (J. H. Wilkinson)
1971 Generality in Artificial Intelligence (Postscript) (John McCarthy)
1973 The Programmer as Navigator (Charles W. Bachman)
1975 Computer Science as Empirical Inquiry: Symbols and Search (Allen Newell and Herbert A. Simon)
1976 Complexity of Computations (Michael O. Rabin)
1979 Notation as a Tool of Thought (Kenneth E. Iverson)
1981 Relational Database: A Practical Foundation for Productivity (E. F. Codd)
1982 An Overview of Computational Complexity (Stephen A. Cook)
1985 Combinatorics, Complexity, and Randomness (Richard M. Karp)
Piecing Together Complexity (Postscript) (Karen Frenkel)
Complexity and Parallel Processing: An Interview with Richard Karp (Postscript) (Karen Frenkel)
Index According to ACM Computing Reviews Classification Scheme
Name Index
Subject Index
Authors' Biographies

Charles W. Bachman received the Turing Award in 1973 for his work in database technology while working for General Electric and Honeywell. He has received numerous patents for his work in database management systems and data models and was the creator of the Integrated Data Store (IDS), which is the basis of the CODASYL database systems (IDMS, DMS 1100, and many others). He was instrumental in the development of the ISO Reference Model for Open Systems Interconnection while he was the chairman of ISO/TC97/SC16. Now he is the president of Bachman Information Systems, Inc., a company he founded in 1983 to provide software developers with products that support the entire application software life cycle through the use of database, system engineering, and artificial intelligence techniques. John Backus received the Turing Award in 1977 for work he did in developing the computer language FORTRAN and the syntax description language BNF (Backus-Naur Form). After working on a number of different projects at the IBM Research Laboratory in San Jose, California, he developed a functional style of programming founded on the use of combining forms for creating programs. Today his research interests include function-level programming, data types, algebraic program transformation, and optimization. With
John H. Williams and Edward L. Wimmers he has developed the new language FL, which is a general-purpose language with facilities for input-output, permanent files, and interactive programs. The language emphasizes precise semantic description and the treatment of abstract data types. With his colleagues, he is currently developing an optimizing compiler using algebraic transformations. E. F. Codd was serving as a Fellow at the IBM Research Laboratory, San Jose, California, when he won the Turing Award in 1981 for his contributions to the theory and practice of database management systems. Codd turned his attention to the management of large commercial databases in 1968 and began by creating the relational model as a foundation. In the early 1970s he enriched this work with the development of normalization techniques for use in database design and two quite different kinds of database languages, one algebraic, the other based on predicate logic. In 1985 Codd established two new companies in San Jose with his colleague C. J. Date: The Relational Institute, which organizes and promotes professional seminars on the relational approach to database management, and Codd and Date Consulting Group, which advises on database management problems. Stephen A. Cook, the 1982 Turing Award winner, received a Ph.D. degree in mathematics from Harvard University in 1966 and soon after joined the Department of Mathematics at the University of California at Berkeley. In 1970 Cook came to the Department of Computer Science at the University of Toronto, where he has been teaching courses in computer science to both undergraduates and graduates. He is also doing research in the areas of computational complexity and the theory of feasibly constructive proofs, which he describes as a primary means of tying mathematical logic and complexity theory together. Underlying all his research has been a concern for finding lower bounds for the complexity of computational problems. His work in NP-completeness has brought him particular satisfaction. Edsger W. Dijkstra was at Eindhoven University of Technology in The Netherlands teaching and designing an imperative programming language for which he could derive programs nicely when he received the Turing Award in 1972. In 1973 he became a Burroughs Research Fellow and in the next decade wrote close to 500 technical reports on various research projects. In 1984 he was invited to become professor and Schlumberger Centennial Chair in Computer Sciences at the University of Texas at Austin. His current work involves the derivation and exposition of programs and the streamlining of the mathematical argument.
Throughout his research, he has been concerned with simplification and with making mathematics elegant. In his writing he has developed a new style in scientific and technical communications, which lies somewhere between writing for the media and writing a letter to a friend. Robert W. Floyd, who received the Turing Award in 1978, has been Professor of Computer Science at Stanford University since 1968. His current research interests include syntax error recovery and analysis of algorithms. He hopes someday to complete an introductory textbook on algorithms, and one on what computers (real or imagined) can and cannot do. He believes mathematics is the serious computer user's best tool. R. W. Hamming received the 1968 Turing Award for his work on error-correcting codes at AT&T Bell Laboratories, where he worked for 30 years on a multitude of research projects. He joined the Naval Postgraduate School in Monterey, California, in 1976 as Adjunct Professor, where he is currently involved in teaching and writing books on probability and combinatorics. Hamming is a scientific generalist whose aim now is to teach his students an attitude of excellence toward science, not just technical skills. Charles Antony Richard Hoare began his career in 1960 writing computer programs for Elliot Brothers, Ltd., in England. During his eight years there, he designed everything from simple subroutines to high-level programming languages. He left the business world in 1968 to teach computer science at the Queen's University in Belfast, Ireland, moving to Oxford's Computer Science Department in 1977 where he is now Professor of Computation. He received the Turing Award in 1980 and is noted for his contribution to the formal definition of programming languages by means of axiomatic semantics. He has made the establishment of computer programming as a serious professional discipline a driving force in his work. Hoare points to his development of the concept of communicating sequential processes as one of his major interests. He is currently interested in technology transfer, databases, and mathematical methods of computing science. Kenneth E. Iverson, while at Harvard University as a graduate student and later as an assistant professor, originated the analytical programming language (APL) as a basis for a better mathematical language and as an attempt to provide a clear and precise expression in writing and teaching. When he joined IBM's Thomas J. Watson Research Center in 1960, he persuaded a number of his colleagues to join him in the
further development and implementation of APL. In 1980 he left IBM to join I. P. Sharp Associates in Toronto, a company devoted to providing APL services and products, largely in the financial community. He has recently retired from I. P. Sharp to work on introducing the use of APL in education, as a tool for teaching programming, mathematics, and related subjects. Richard M. Karp earned his Ph.D. in applied mathematics at Harvard University in 1959. He later became a researcher at the IBM Thomas J. Watson Research Center until 1968, holding visiting professorships at New York University, the University of Michigan, and the Polytechnic Institute of Brooklyn. He is now Professor of Computer Science, Operations Research, and Mathematics at the University of California at Berkeley. Karp, the Turing Award winner in 1985, is a leader in complexity theory research. His current work on combinatorial algorithms and NP-completeness has altered the way computer scientists approach such practical problems as routing, packing, covering, matching, and the traveling salesman problem. He is presently writing a book on probabilistic analysis of combinatorial algorithms.
Donald E. Knuth had just finished the third volume of the seven-part series, The Art of Computer Programming, when he received the 1974 Turing Award. His own writing experiences and general dissatisfaction with the way galleys were presented sparked Knuth's interest in computer typography. This interest culminated in the typesetting algorithm TEX and the font-designing algorithm Metafont. He is presently Fletcher Jones Professor of Computer Science at Stanford University and is now working on the fourth volume of his series. The aim of these books is to teach the reader how to write better algorithms instead of how to use someone else's algorithms better. Knuth believes that preparing programs for a computer can be an aesthetic experience, much like composing poetry or music.
John McCarthy became interested in artificial intelligence in 1949 while a graduate student in mathematics. He is now Professor of Computer Science in the Computer Science Department and Charles M. Pigott Professor in the School of Engineering, Stanford University, where he has mainly been interested in the formalization of common sense knowledge. He invented LISP in 1958, developed the concept of time sharing, and since the early 1960s has worked on proving that computer programs meet their specifications. His most recent theoretical development is the circumscription method of nonmonotonic reasoning. He received the Turing Award in 1971 for his influence on the development of artificial intelligence.
Marvin Minsky was working on Robot C, the first robot to move its hands in a human fashion, when he received the Turing Award in 1969. He continues to teach and do research at the Massachusetts Institute of Technology (MIT), where he is now Donner Professor of Science with the Department of Electrical Engineering and Computer Science. His career has revolved around a variety of projects mainly in the AI field, including mathematical theory of computation and robotics. He is founder of MIT's AI Laboratory, Logo Computer Systems, Inc., and Thinking Machines, Inc. Minsky has also worked as an advisor on projects for groups as diverse as NASA and the National Dance Institute. His research interests have included musical cognition and physical optics. He has developed an explanation for the way a 'thinking machine' should operate, and presents his theories in the recently published book, The Society of Mind (Simon and Schuster). Allen Newell, a joint winner of the 1975 Turing Award with Herbert A. Simon, began his career as a scientist for the Rand Corporation in the 1950s. He joined Carnegie-Mellon University in 1961 where he is U. A. and Helen Whitaker University Professor of Computer Science. His research has focused on problem solving and cognitive architecture in the fields of artificial intelligence and cognitive psychology. Newell's contributions to computer science include list processing, computer description languages, and psychologically based models of human/computer interaction. His recent work and interests involve the development of architectures for problem solving and learning, a unified theory of cognition, and hardware architectures for production systems. Alan J. Perlis was the recipient of ACM's first Turing Award given in 1966. He founded the Digital Computer Center at Purdue University in 1952, and in 1956 founded the Computation Center at Carnegie Institute of Technology (now Carnegie-Mellon University). In 1965 he established the first graduate department in computer science at the Carnegie Institute of Technology and became its first chairman. Perlis was involved in defining ALGOL 58 and ALGOL 60, as well as the later extensions of the language. In 1971 he joined the Department of Computer Science at Yale University as Eugene Higgins Professor of Computer Science. His research has been primarily in the area of programming language design and the development of programming techniques. Perlis is the founding editor of Communications of the ACM and was President of the ACM from 1962 to 1964. His current interests include programming languages for parallel processing and the dynamic behavior of software systems. Michael O. Rabin currently has a dual appointment as T. J. Watson Sr. Professor of Computer Science at Harvard University and as Albert
Einstein Professor of Mathematics and Computer Science at the Hebrew University in Jerusalem. The main thrust of his work has been the study of the theory of algorithms with particular emphasis on direct applications to computer technology (several of the randomized algorithms he has worked on have found applications in computer security and secure operating systems). Corecipient of the 1976 Turing Award, he has pioneered the theory of complexity of computations and the notion of nondeterministic computations. Rabin introduced probability to computations beginning with a paper on probabilistic automata. He later worked on testing large integers for primality and many other applications of randomization to computing. His work on tree automata has settled many open decision problems in mathematical logic. Dennis M. Ritchie, who won the 1983 Turing Award with Ken Thompson, joined the technical staff of AT&T Bell Laboratories, Murray Hill, New Jersey, in 1968, where he continues to design computer languages and operating systems. The team of Ritchie and Thompson is best known as the creators and architects of the UNIX operating system and designers of the C language in which UNIX is written. Ritchie has also contributed to the development of the MULTICS system. Dana S. Scott is University Professor of Computer Science, Mathematical Logic, and Philosophy at Carnegie-Mellon University, where he has been since 1981. He received the Turing Award (jointly with Michael Rabin) in 1976 while Professor of Mathematical Logic at Oxford University, Oxford, England. Scott's work in logic touched model theory, automata theory, set theory, modal and intuitionistic logic, constructive mathematics, and connections between category theory and logic. His current interests are broadly in applications of logic to the semantics of programming languages and in computational linguistics. Herbert A. Simon is Richard K. Mellon Professor of Computer Science and Psychology at Carnegie-Mellon University. His research work in cognitive science began in the mid-1950s and involved the use of computers to simulate human thinking and problem solving. He won the 1975 Turing Award, with Allen Newell, for his work showing how to use heuristic search to solve problems. Since the mid-1970s, Simon has been involved in three main areas of research: the psychology of scientific discovery, which involves writing computer programs that simulate the discovery process; methods of learning, which use production systems that model how students learn; and the study of how people represent and understand things that are initially presented to them in natural language. Most
recently, he has been studying how people take verbal statements and turn them into visual diagrams. Why this is important to people, that is, why a picture is worth a thousand words, is the subject and title of his recent article in Cognitive Science. Ken Thompson, a member of the technical staff at AT&T Bell Laboratories, Murray Hill, New Jersey, was the corecipient of the 1983 Turing Award along with his colleague Dennis Ritchie. His research has centered around compiler technology, programming languages, and operating systems. He was the originator, with Ritchie, of the well-known UNIX operating system. Thompson has written a multitude of published articles and reports that run the gamut from operating systems, to creating algorithms, to playing computer chess. One of his programs, BELLE, won the world computer chess championship. He is currently immersed in designing and building new compilers and new operating systems. Maurice V. Wilkes's early career began in 1945 as director of the University of Cambridge Computer Laboratory, where he developed the first serviceable stored program computer called the Electronic Delay Storage Automatic Calculator (EDSAC) in 1949. Retiring from Cambridge in 1980, he later served as a staff consultant to Digital Equipment Corporation and is now a Member for Research Strategy on the Olivetti Research Board. Wilkes has been a Fellow of the United Kingdom's Royal Society since 1956 and was named the first president of the British Computer Society in 1957. He won the Turing Award in 1967. Wilkes has pioneered research in stored program machinery, program libraries, and microprogramming used in the design of EDSAC II. He recently returned to Cambridge and has published his autobiography, Memoirs of a Computer Pioneer (MIT Press). J. H. Wilkinson, the recipient of the 1970 Turing Award, was one of the most celebrated numerical analysts. Until his death on October 5, 1986, Wilkinson devoted his career to the research of numerical analysis, particularly numerical linear algebra and perturbation problems. His key contributions include the creation of the ACE computer and the development of backward-error analysis. Wilkinson was a graduate of Trinity College in Cambridge, England, and a Fellow of the U. K.'s Royal Society. He was Chief Scientific Officer of the National Physical Laboratory in England from 1946 to 1977. He also spent three months each year, from 1977 to 1984, as a visiting professor at Stanford University, where he continued his research on generalized eigenvalue problems and other numerical analysis projects. Niklaus Wirth received the Turing Award in 1984 while working on the design and construction of a personal computer and single-pass
compiler for Modula-2 at the Swiss Federal Institute of Technology (ETH) in Zurich. After spending a sabbatical year at Xerox Research Laboratory in Palo Alto, California, Wirth returned to ETH in 1986, where he continues to work on operating environments, workstation designs, the analysis of processor architectures, and the foundations of programming. Since receiving a Ph.D. degree in electrical engineering from the University of California at Berkeley in 1963, Wirth has been working to overcome what he calls unnecessarily complex computer designs. Of particular importance is his creation of the Lilith computer system, a project he began in 1978.
Preface
This first volume of the ACM Press Anthology Series collects 22 essays, representing the first 20 years (1966-1985) of the ACM Turing Award Lectures. The Turing Award is presented annually 'to an individual selected for contributions of a technical nature to the computing community' that are judged to be of lasting and major importance to the field of computing science. The award commemorates Alan M. Turing, an English mathematician whose work 'captured the imagination and mobilized the thoughts of a generation of scientists,' to quote Alan J. Perlis, the first award recipient. Each year the recipient of the Turing Award delivers a lecture at the annual ACM fall conference and is invited to prepare a version for publication. From 1966 to 1970, the published version appeared in the Journal of the ACM; since 1972, it has appeared in Communications of the ACM. In 1975, 1976, and 1983, the award was made jointly to two persons, and in this case the recipients either authored a joint paper (1975) or gave separate lectures. In all, there have been 23 recipients of the award and 22 different lectures (21 of which have already been published over a 20-year period). It was originally intended that the lectures be presented chronologically, so that readers could get a general idea of the time frame of each lecture from its location in the book. Another option was to organize the lectures by topic. The final decision represents a
compromise between these two options: Since 10 of the 22 lectures are concerned with the major topics of 'Programming Languages and Systems,' it was decided to include them chronologically as Part I of the anthology; the remaining 12 lectures, which range quite nicely over the spectrum of 'Computers and Computing Methodologies,' make up Part II. This dichotomy is based on the first-level nodes from the ACM Computing Reviews Classification Scheme, and an index is provided at the back of the book showing how the 23 contributions fit into this major taxonomy for computer science. Each of the lectures originally published in Communications was accompanied by some introductory material, which is reprinted here. In addition, each original recipient was invited to contribute additional remarks of the 'looking backward' variety for this volume. These remarks are included as 'Postscripts' following the authors' original published lectures. Since no previously published version existed for the 1971 lecture, John McCarthy was invited to contribute a more extensive Postscript that takes account of both his 1971 lecture and his current perspectives. The Postscripts for Richard Karp's 1985 lecture comprise a short piece, 'Piecing Together Complexity,' and an interview with Karp, both of which were written by Communications features writer, Karen Frenkel, and appeared with the lecture. Susan L. Graham, who has been Editor-in-Chief of the ACM Transactions on Programming Languages and Systems since its inception in 1978, has written an introduction to Part I, putting the contributions in context. I have provided an introduction to Part II, in which I try to give some idea of how these contributions fit into the larger context of the art and science of computing.

Robert L. Ashenhurst
ACM Press Anthology Series Editor
Introduction to Part I
Programming Languages and Systems

Programming is fundamental to the computing field. Programs describe the computational steps carried out by computer devices, both physical and conceptual, and are the basis for automatic control of a wide variety of machines. At a higher level, programs are used to describe and communicate algorithms of all kinds, to organize and manage complexity, and to relieve human beings of many tiresome and repetitive tasks. Programming languages are the means by which almost all of these computations are expressed. They play a dual role both as the notations that influence thought (see Kenneth Iverson's 1979 lecture in Part II) and as the directions for an abstract computing machine, from which instruction of the physical machine is obtained by automated translation. Thus it is not surprising that approximately half of the Turing Awards have recognized contributions to programming languages, programming methodology, and programming systems. The ten lectures in this section were prepared over a period of eighteen years. Some authors give retrospective summaries of their technical contributions, their motivating ideas, and the contexts in
which those ideas fit. Others present the research in which they were engaged at the time of the award. Collectively, the lectures chronicle many of the major advances in the field during a period of great activity. The reader will find in these lectures many citations of the work of other Turing Award winners. One possible way to organize this discussion would be by clusters of related material. I have chosen instead to discuss the papers in chronological order, so as to provide some historical perspective. Alan Perlis delivered the first Turing Award lecture in 1966, at a time when most programming was done using coding sheets and punched cards. Perlis looks to the future. After acknowledging the value of Turing's model of computation as an important influence on our understanding of computation, and the importance of ALGOL in influencing our thought, Perlis discusses the next advances needed in programming languages and systems. He describes the programmer's need to be able to define richer data types and data structures and their associated operations. That need has subsequently been addressed by research in abstract data types and in the ongoing investigations into notions of type systems. Subsequently, these notions have been important components of LISP and Smalltalk systems and are significant issues in the design of programming environments. Although some of the issues Perlis raises have been addressed satisfactorily in subsequent research, others are with us still. The 1972 Turing Award winner was Edsger Dijkstra, who was probably most widely known at the time for his letter to the editor of Communications of the ACM deriding the goto statement as 'an invitation to make a mess of one's program.' Dijkstra develops in retrospect some of the major themes of his work. After mentioning the contributions of the EDSAC systems and of FORTRAN, ALGOL, and LISP, and characterizing full PL/I as a potentially fatal disease, he focuses on the creation of reliable software. Dijkstra develops the important idea that the key to producing reliable software is to avoid introducing bugs rather than eliminate them later. He argues that error-free programming is both economically important and technically feasible. The feasibility will come from restricting ourselves to intellectually manageable programs. The economic arguments are well known by now. By the time of his address in 1974, Donald Knuth had already made many contributions to programming and programming systems, including his multivolume series, The Art of Computer Programming. Knuth has gone on to develop the highly successful TEX system for high-quality typesetting. Knuth has a scholar's love of the historical foundations for contemporary ideas, together with a programmer's attention to detail. In his address, he explores the historical notions of art and science and argues that programming is an art form. Good programs can and should have elegance and style. In the spirit of a true aficionado, Knuth asserts that programming should be enjoyable; it's
okay to write programs just for fun. Finally he urges the creation of 'beautiful' tools for programming artists, and languages that encourage good style. Dana Scott and Michael Rabin received the Turing Award in 1976 for their early joint work in automata theory. Since their research fields subsequently diverged, they chose to give separate talks on their more current work. Scott had more recently done fundamental work in the mathematical foundations of programming language semantics, developing the theory of denotational semantics. In his address, he focuses on the major ideas that underlie his work. He outlines the chronology of personal professional experiences that influenced his approach to semantics and then turns to the work itself, explaining the major insight and the principal result he had obtained from it. Scott's work has remained important and has spawned a great deal of subsequent research. Although the primary purpose of the 1977 award was to recognize John Backus's work on FORTRAN and on language description, FORTRAN was by then familiar to much of the computing community, and Backus had long since turned his attention to other research topics. He took the opportunity, therefore, to challenge the fundamental approach of the von Neumann style of programming. He argues that the traditional programming style causes too much effort to be devoted to explicit movement of data back and forth to main store and to the sometimes elaborate naming and scoping rules that support that style. He proposes instead a style of variable-free functional programming that has not only power and elegance, but a mathematically solid semantics. Backus's paper extends his talk and is directed to the specialist, as well as to the general computing audience. Nonspecialists may find the amount of notation intimidating at first, but their efforts will be rewarded, even if they only skim the most formal sections (without skipping the summary at the end!). The theme of Robert Floyd's 1978 address is programming paradigms. Floyd provides many examples, among them structured programming, recursive coroutines, dynamic programming, rule-based systems, and state-transition mechanisms, and shows how he has used some of those paradigms in his own research. He argues that it is important to become conscious of the paradigms we use, to teach those paradigms to novices, and to provide programming language support for expressing the major paradigms used in programming. Although the examples may change over time, the theme of Floyd's paper is as timely today as it was when it was written. I vividly remember listening to C. A. R. Hoare's Turing address in October 1980, about a year after the introduction of Ada (Ada is a trademark of the Ada Joint Project Office, U.S. Department of Defense). The audience was spellbound as Hoare introduced many of his important
contributions to systems and languages in the guise of a personal narrative of his career. After recounting his early experience as a software manager, he described his involvement with language design efforts leading to ALGOL 68 and PL/I. Having built up a series of examples illustrating the benefits of simplicity over complexity in language and system design, he then related his experience with the design of Ada and his opinions on the result. The reader should not miss the story summarized by Hoare. By the time of the 1983 award to Ken Thompson and Dennis Ritchie, UNIX had moved from the research laboratory to the commercial world. In his address Ritchie considers the factors leading to the success of UNIX, including its long gestation period and its creation in an environment where relevance matters but commercial pressure is absent. Ritchie cites other research efforts in which the same nurturing process was important and expresses concern that excessive relevance may inhibit both innovation and the free exchange of ideas. Ken Thompson gave a separate, but complementary address. Like Dijkstra, Hoare, Knuth, and Floyd before him, Thompson presents himself as a programmer. He develops a program that starts with an example used by Knuth in his Turing address and illustrates a paradigm in the sense of Floyd's paper, although it is a paradigm that Floyd might not choose to teach his students! Both Ritchie and Thompson provide gentle social commentary on the milieu in which programming systems are developed today. The final Turing Award address in this section was presented by Niklaus Wirth in 1984. Perhaps best known as the inventor of Pascal, Wirth has a distinguished record as a creator of programming languages. In his historical chronicle of the major language design efforts of his career to date, Wirth emphasizes the importance of simplicity. Since he always has an eye toward implementation, he also discusses the need for complementary hardware and software, and he relates his experiences in designing the Lilith machine as a companion to Modula-2. Wirth points out the benefits of hands-on experience and the value of well-chosen tools. I have the good fortune to be personally acquainted with all of the authors whose addresses are collected in this section and to have taken courses from four of them. For me, one of the pleasures in rereading these lectures has been to see reflections of the special human qualities that have contributed to the very great impact of their work. We are lucky indeed to have them as teachers, as authors, as intellectual leaders, and as friends.

Susan L. Graham
Berkeley, California

(UNIX is a trademark of AT&T Bell Laboratories.)
The Synthesis of Algorithmic Systems
ALAN J. PERLIS
Carnegie Institute of Technology, Pittsburgh, Pennsylvania
Introduction

Both knowledge and wisdom extend man's reach. Knowledge led to computers, wisdom to chopsticks. Unfortunately our association is overinvolved with the former. The latter will have to wait for a more sublime day. On what does and will the fame of Turing rest? That he proved a theorem showing that for a general computing device - later dubbed a 'Turing machine' - there existed functions which it could not compute? I doubt it. More likely it rests on the model he invented and employed: his formal mechanism. This model has captured the imagination and mobilized the thoughts of a generation of scientists. It has provided a basis for arguments leading to theories. His model has proved so useful that its generated activity has been distributed not only in mathematics, but through several technologies as well. The arguments that have been employed are not always formal and the consequent creations not all abstract.

Presented at the 21st ACM National Conference, August 1966. Author's present address: Computer Science Department, Yale University, P.O. Box 2158, Yale Station, New Haven, CT 06520-2158.
Indeed a most fruitful consequence of the Turing machine has been with the creation, study and computation of functions which are computable, i.e., in computer programming. This is not surprising since computers can compute so much more than we yet know how to specify. I am sure that all will agree that this model has been enormously valuable. History will forgive me for not devoting any attention in this lecture to the effect which Turing had on the development of the general-purpose digital computer, which has further accelerated our involvement with the theory and practice of computation. Since the appearance of Turing's model there have, of course, been others which have concerned and benefited us in computing. I think, however, that only one has had an effect as great as Turing's: the formal mechanism called ALGOL. Many will immediately disagree, pointing out that too few of us have understood it or used it. While such has, unhappily, been the case, it is not the point. The impulse given by ALGOL to the development of research in computer science is relevant while the number of adherents is not. ALGOL, too, has mobilized our thoughts and has provided us with a basis for our arguments. I have long puzzled over why ALGOL has been such a useful model in our field. Perhaps some of the reasons are: (a) its international sponsorship; (b) the clarity of description in print of its syntax; (c) the natural way it combines important programmatic features of assembly and subroutine programming; (d) the fact that the language is naturally decomposable so that one may suggest and define rather extensive modifications to parts of the language without destroying its impressive harmony of structure and notation. There is an appreciated substance to the phrase 'ALGOL-like' which is often used in arguments about programming, languages and computation. ALGOL appears to be a durable model, and even flourishes under surgery - be it explorative, plastic or amputative; (e) the fact that it is tantalizingly inappropriate for many tasks we wish to program. Of one thing I am sure: ALGOL does not owe its magic to its process of birth: by committee. Thus, we should not be disappointed when eggs, similarly fertilized, hatch duller models. These latter, while illuminating impressive improvements over ALGOL, bring on only a yawn from our collective imaginations. These may be improvements over ALGOL, but they are not successors as models. Naturally we should and do put to good use the improvements they offer to rectify the weakness of ALGOL. And we should also ponder
why they fail to stimulate our creative energies. Why, we should ask, will computer science research, even computer practice, work, but not leap, forward under their influence? I do not pretend to know the whole answer, but I am sure that an important part of their dullness comes from focusing attention on the wrong weaknesses of ALGOL.
The Synthesis of Language and Data Structures

We know that we design a language to simplify the expression of an unbounded number of algorithms created by an important class of problems. The design should be performed only when the algorithms for this class impose, or are likely to impose, after some cultivation, considerable traffic on computers as well as considerable composition time by programmers using existing languages. The language, then, must reduce the cost of a set of transactions to pay its cost of design, maintenance and improvement. Successor languages come into being from a variety of causes: (a) The correction of an error or omission or superfluity in a given language exposes a natural redesign which yields a superior language. (b) The correction of an error or omission or superfluity in a given language requires a redesign to produce a useful language. (c) From any two existing languages a third can usually be created which (i) contains the facilities of both in integrated form, and (ii) requires a grammar and evaluation rules less complicated than the collective grammar and evaluation rules of both. With the above in mind, where might one commence in synthesizing a successor model which will not only improve the commerce with machines but will focus our attention on important problems within computation itself? I believe the natural starting point must be the organization and classifying of data. It is, to say the least, difficult to create an algorithm without knowing the nature of its data. When we attempt to represent an algorithm in a programming language, we must know the representation of the algorithm's data in that language before we can hope to do a useful computation. Since our successor is to be a general programming language, it should possess general data structures. Depending on how you look at it, this is neither as hard nor as easy as you might think. How should this possession be arranged? Let us see what has been done in the languages we already have. There the approach has been as follows: (a) A few 'primitive' data structures, e.g., integers, reals, arrays homogeneous in type, lists, strings and files, are defined into the language.
(b) On these structures a 'sufficient' set of operations, e.g., arithmetic, logical, extractive, assignment and combinational, is provided. (c) Any other data structure is considered to be nonprimitive and must be represented in terms of primitive ones. The inherent organization in the nonprimitive structures is explicitly provided for by operations over the primitive data, e.g., the relationship between the real and imaginary parts of a complex number by real arithmetic. (d) The 'sufficient' set of operations for these nonprimitive data structures is organized as procedures. This process of extension cannot be faulted. Every programming language must permit its facile use, for ultimately it is always required. However, if this process of extension is too extensively used, algorithms often fail to exhibit a clarity of structure which they really possess. Even worse, they tend to execute more slowly than necessary. The former weakness arises because the language was defined the wrong way for the algorithm, while the latter exists because the language forces overorganization in the data and requires administration during execution that could have been done once prior to execution of the algorithm. In both cases, variables have been bound at the wrong time by the syntax and the evaluation rules. I think that all of us are aware that our languages have not had enough data types. Certainly, in our successor model we should not attempt to remedy this shortcoming by adding a few more, e.g., a limited number of new types and a general catchall structure. Our experience with the definition of functions should have told us what to do: not to concentrate on a complete set of defined functions at the level of general use, but to provide within the language the structures and control from which the efficient definition and use of functions within programs would follow. Consequently, we should focus our attention in our successor model on providing the means for defining data structures. But this is not of itself enough. The 'sufficient' set of accompanying operations, the contexts in which they occur and their evaluation rules must also then be given within the program for which the data structures are specified. A list of some of the capabilities that must be provided for data structures would include (a) structure definition; (b) assignment of a structure to an identifier, i.e., giving the identifier information cells; (c) rules for naming the parts, given the structure; (d) assignment of values to the cells attached to an identifier; (e) rules for referencing the identifier's attached cells; (f) rules of combination, copy and erasure both of structure and cell contents.
These capabilities are certainly now provided in limited form in most languages, but usually in too fixed a way within their syntax and evaluation rules. We know that the designers of a language cannot fix how much information will reside in structure and how much in the data carried within a structure. Each program must be permitted its natural choice to achieve a desired balance between time and storage. We know there is no single way to represent arrays or list structures or strings or files or combinations of them. The choice depends on (a) the frequency of access; (b) the frequency of structure changes in which given data are embedded, e.g., appending to a file new record structures or bordering arrays; (c) the cost of unnecessary bulk in computer storage requirements; (d) the cost of unnecessary time in accessing data; and (e) the importance of an algorithmic representation capable of orderly growth so that clarity of structure always exists. These choices, goodness knows, are difficult for a programmer to make. They are certainly impossible to make at the design level. Data structures cannot be created out of thin air. Indeed the method we customarily employ is the use of a background machine with fixed, primitive data structures. These structures are those identified with real computers, though the background machine might be more abstract as far as the defining of data structures is concerned. Once the background machine is chosen, additional structures as required by our definitions must be represented as data, i.e., as a name or pointer to a structure. Not all pointers reference the same kind of structure. Since segments of a program are themselves structures, pointers such as 'procedure identifier contents of (x)' establish a class of variables whose values are procedure names.
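Read today, Perlis's capability list (a)-(f) maps closely onto what later languages provide for user-defined types. The sketch below is purely illustrative and not part of the lecture; it uses Python, and the type and field names are invented:

from dataclasses import dataclass, replace

@dataclass
class Complex:              # (a) structure definition
    re: float = 0.0         # (c) rules for naming the parts
    im: float = 0.0

z = Complex()               # (b) assignment of the structure to an identifier: cells are attached
z.re = 3.0                  # (d) assignment of values to the attached cells
z.im = 4.0
magnitude = (z.re ** 2 + z.im ** 2) ** 0.5   # (e) referencing the attached cells
w = replace(z, im=-z.im)    # (f) copy combined with modification (here, the conjugate)
del z                       # (f) erasure: the binding is dropped and its cells become reclaimable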
Constants and Variables

Truly, the flexibility of a language is measured by that which programmers may be permitted to vary, either in composition or in execution. The systematic development of variability in language is a central problem in programming and hence in the design of our successor. Always our experience presents us with special cases from which we establish the definition of new variables. Each new experience focuses our attention on the need for more generality. Time sharing is one of our new experiences that is likely to become a habit. Time sharing focuses our attention on the management of our systems and the management by programmers of their texts before, during and after execution. Interaction with program will become increasingly flexible,
and our successor must not make this difficult to achieve. The vision we have of conversational programming takes in much more than rapid turnaround time and convenient debugging aids: our most interesting programs are never wrong and never final. As programmers we must isolate that which is new with conversational programming before we can hope to provide an appropriate language model for it. I contend that what is new is the requirement to make variable in our languages what we previously had taken as fixed. I do not refer to new data classes now, but to variables whose values are programs or parts of programs, syntax or parts of syntax, and regimes of control. Most of our attention is now paid to the development of systems for managing files which improve the administration of the overall system. Relatively little is focused on improving the management of a computation. Whereas the former can be done outside the languages in which we write our programs, for the latter we must improve our control over variability within the programming language we use to solve our problems. In the processing of a program text an occurrence of a segment of texts may appear in the text once but be executed more than once. This raises the need to identify both constancy and variability. We generally take that which has the form of being variable and make it constant by a process of initialization; and we often permit this process itself to be subject to replication. This process of initialization is a fundamental one and our successor must have a methodical way of treating it. Let us consider some instances of initialization and variability in ALGOL:
(a) Entry to a block. On entry to a block declarations make initializations, but only about some properties of identifiers. Thus, integer x initializes the property of being an integer, but it is not possible to initialize the values of x as something that will not change during the scope of the block. The declaration procedure P (...); ...; emphatically initializes the identifier P but it is not possible to change it in the block. array A[1:n, 1:m] is assigned an initial structure. It is not possible to initialize the values of its cells, or to vary the structure attached to the identifier A.
(b) for statement. These expressions, which I will call the step and until elements, cannot be initialized.
(c) Procedure declaration. This is an initialization of the procedure identifier. On a procedure call, its formal parameters are initialized as procedure identifiers are, and they may even be initialized as to value. However, different calls establish different initializations of the formal parameter identifiers but not different initialization patterns of the values.
The choice permitted in ALGOL in the binding of form and value to identifiers has been considered adequate. However, if we look at the
operations of assignment of form, evaluation of form and initialization as important functions to be rationally specified in a language, we might find ALGOL to be limited and even capricious in its available choices. We should expect the successor to be far less arbitrary and limited. Let me give a trivial example. In the for statement the use of a construct such as value E, where E is an expression, as a step element would signal the initialization of the expression E. value is a kind of operator that controls the binding of value to a form. There is a natural scope attached to each application of the operator. I have mentioned that procedure identifiers are initialized through declaration. Then the attachment of procedure to identifier can be changed by assignment. I have already mentioned how this can be done by means of pointers. There are, of course, other ways. The simplest is not to change the identifier at all, but rather to have a selection index that picks a procedure out of a set. The initialization now defines an array of forms, e.g., procedure array P[1:k] (f1, f2, ..., fs); begin ... end; ...; begin ... end; The call P[i] (a1, a2, ..., as) would select the ith procedure body for execution. Or one could define a procedure switch P := A, B, C and procedure designational expressions so that the above call would select the ith procedure designational expression for execution. The above approaches are too static for some applications and they lack an important property of assignment: the ability to determine when an assigned form is no longer accessible so that its storage may be otherwise used. A possible application for such procedures, i.e., ones that are dynamically assigned, is as generators. Suppose we have a procedure for computing (a) Σ_{k=0}^{N} C_k(N) x^k as an approximation to some function (b) f(x) = Σ_{k=0}^{∞} C_k x^k, when the integer N is specified. Now once having found the C_k(N), we are merely interested in evaluating (a) for different values of x. We might then wish to define a procedure which prepares (a) from (b). This procedure, on its initial execution, assigns, either to itself or to some other identifier, the procedure which computes (a). Subsequent calls on that identifier will only yield this created computation. Such dynamic assignment raises a number of attractive possibilities:
(a) Some of the storage for the program can be released as a consequence of the second assignment.
(b) Data storage can be assigned as the own of the procedure identifier whose declaration or definition is created.
(c) The initial call can modify the resultant definition, e.g., call by name or call by value of a formal parameter in the initial call will affect the kind of definition obtained.
It is easy to see that the point I am getting at is the necessity of attaching a uniform approach to initialization and the variation of form and value attached to identifiers. This is a requirement of the computation process. As such our successor language must possess a general way of commanding the actions of initialization and variation for its classes of identifiers. One of the actions we wish to perform in conversational programming is the systematic, or controlled, modification of values of data and text, as distinguished from the unsystematic modification which occurs in debugging. The performance of such actions clearly implies that certain pieces of a text are understood to be variable. Again we accomplish this by declaration, by initialization and by assignment. Thus we may write, in a block heading, the declarations real x, s; arithmetic expression t, u;
In the accompanying text the occurrence of s := x + t; causes the value of the arithmetic expression assigned to t, e.g., by input, to be added to that of x and the result assigned as the value of s. We observe that t may have been entered and stored as a form. The operation + can then only be accomplished after a suitable transfer function shall have been applied. The fact that a partial translation of the expression is all that can be done at the classical 'translate time' should not deter us. It is time that we began to face the problems of partial translation in a systematic way. The natural pieces of text which can be variable are those identified by the syntactic units of the language. It is somewhat more difficult to arrange for unpremeditated variation of programs. Here the major problems are the identification of the text to be varied in the original text, and how to find its correspondent under the translation process in the text actually being evaluated. It is easy to say: execute the original text interpretively. But it is through intermediate solutions lying between translation and interpretation that the satisfactory balance of costs is to be found. I should like to express a point of view in the next section which may shed some light on achieving this balance as each program requires it.
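The stored-form idea above can be made concrete with a small sketch. The sketch below is in Python rather than in the ALGOL-like notation of the lecture, and the names Form and value_of are invented for the illustration; it shows an identifier t holding an unevaluated arithmetic expression which is turned into a number, by a transfer function, only at the point of use, which is all that the partial translation of s := x + t requires.

    # A sketch, not part of the lecture: t holds a form (unevaluated expression text);
    # the transfer function value_of evaluates it only when its value is needed.
    class Form:
        """An arithmetic expression stored as text, i.e., as a form."""
        def __init__(self, text):
            self.text = text

    def value_of(obj, env):
        """Transfer function: reduce a form (or pass through a number) to a value."""
        if isinstance(obj, Form):
            return eval(obj.text, {}, dict(env))   # toy evaluator, adequate for a sketch
        return obj

    x = 2.0
    t = Form("3 * x + 1")            # t is entered and stored as a form, e.g., by input
    s = x + value_of(t, {"x": x})    # s := x + t; '+' applies only after the transfer function
    print(s)                         # 2.0 + 7.0 -> 9.0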
Data Structure and Syntax

Even though list structure and recursive control will not play a central role in our successor language, it will owe a great deal to LISP. This language induces humorous arguments among programmers, often being damned and praised for the same feature. I should only like to point out here that its description consciously reveals the proper components of language definition with more clarity than any language I know of. The description of LISP includes not only its syntax, but the representation of its syntax as a data structure of the language, and the representation of the environment data structure also as a data structure of the language. Actually the description hedges somewhat on the latter description, but not in any fundamental way. From the
foregoing descriptions it becomes possible to give a description of the evaluation process as a LISP program using a few primitive functions. While this completeness of description is possible with other languages, it is not generally thought of as part of their defining description. An examination of ALGOL shows that its data structures are not appropriate for representing ALGOL texts, at least not in a way appropriate for descriptions of the language's evaluation scheme. The same remark may be made about its inappropriateness for describing the environmental data structure of ALGOL programs. I regard it as critical that our successor language achieve the balance of possessing the data structures appropriate to representing syntax and environment so that the evaluation process can be clearly stated in the language.
Why is it so important to give such a description? Is it merely to attach to the language the elegant property of 'closure' so that bootstrapping can be organized? Hardly. It is the key to the systematic construction of programming systems capable of conversational computing. A programming language has a syntax and a set of evaluation rules. They are connected through the representation of programs as data to which the evaluation rules apply. This data structure is the internal or evaluation directed syntax of the language. We compose programs in the external syntax which, for the purposes of human communication, we fix. The internal syntax is generally assumed to be so translator and machine dependent that it is almost never described in the literature. Usually there is a translation process which takes text from an external to an internal syntax representation. Actually the variation in the internal description is more fundamentally associated with the evaluation rules than the machine on which it is to be executed. The choice of evaluation rules depends in a critical way on the binding time of the variables of the language. This points out an approach to the organization of evaluation useful in the case of texts which change. Since the internal data structure reflects the variability of the text being processed, let the translation process choose the appropriate internal representation of the syntax, and a general evaluator select specific evaluation rules on the basis of the syntax structure chosen. Thus we must give clues in the external syntax which indicate the variable. For example, the occurrence of arithmetic expression t; real u, v; and the statement u := v/3*t; indicates the possibility of a different internal syntax for v/3 and the value of t. It should be pointed out that t behaves very much like an ALGOL formal parameter. However, the control over assignment is less regimented. I think this merely points out that formal-actual assignments are independent of the closed subroutine concept and that they have been united in the procedure construct as a way of specifying the scope of an initialization.
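To make the point about programs as data concrete, here is a toy sketch in Python (not in any notation from the lecture): the statement u := v/3*t is held as a nested-tuple data structure of the language, and the evaluation rules are an ordinary program over that structure. The tuple representation and the function name evaluate are invented for this illustration.

    # Sketch only: an internal, evaluation-directed syntax for  u := v/3*t,
    # represented as data of the host language, plus its evaluation rules.
    program = ("assign", "u",
               ("*", ("/", ("var", "v"), ("const", 3)), ("var", "t")))

    def evaluate(node, env):
        """Evaluation rules, dispatched on the head of each syntax node."""
        tag = node[0]
        if tag == "const":
            return node[1]
        if tag == "var":
            return env[node[1]]
        if tag == "assign":
            env[node[1]] = evaluate(node[2], env)
            return env[node[1]]
        if tag in ("+", "-", "*", "/"):
            a, b = evaluate(node[1], env), evaluate(node[2], env)
            return {"+": a + b, "-": a - b, "*": a * b, "/": a / b}[tag]
        raise ValueError("unknown syntax node: %r" % (tag,))

    env = {"v": 12, "t": 5}
    evaluate(program, env)
    print(env["u"])    # (12 / 3) * 5 -> 20.0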
In the case of unpremeditated change a knowledge of the internal syntax structure makes possible the least amount of retranslation and alteration of the evaluation rules when text is varied. Since one has to examine and construct the data structures and evaluation rules entirely in some language, it seems reasonable that it be in the source language itself. One may define as the target of translation an internal syntax whose character strings are a subset of those permitted in the source language. Such a syntax, if chosen to be close to machine code, can then be evaluated by rules which are very much like those of a machine.
While I have spoken glibly about variability attached to the identifiers of the language, I have said nothing about the variability of control. We do not really have a way of describing control, so we cannot declare its regimes. We should expect our successor to have the kinds of control that ALGOL has - and more. Parallel operation is one kind of control about which much study is being done. Another one just beginning to appear in languages is the distributed control, which I will call monitoring. Process A continuously monitors process B so that when B attains a certain state, A intervenes to control the future activity of the process. The control within A could be written when P then S; P is a predicate which is always, within some defining scope, under test. Whenever P is true, the computation under surveillance is interrupted and S is executed. We wish to mechanize this construct by testing P whenever an action has been performed which could possibly make P true, but not otherwise. We must then, in defining the language, the environment and the evaluation rules, include the states which can be monitored during execution. From these primitive states others can be constructed by programming. With a knowledge of these primitive states, arrangements for splicing in testing at possible points can be done even before the specific predicates are defined within a program. We may then trouble-shoot our programs without disturbing the programs themselves.
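A minimal sketch of the when P then S construct follows, in Python rather than in the lecture's notation; the class MonitoredStore and its method names are invented for the illustration. The predicate is re-tested only after an action that could possibly have made it true, here modelled as an assignment to a monitored variable.

    # Sketch only: 'when P then S' as distributed control over a monitored store.
    class MonitoredStore:
        def __init__(self):
            self.vars = {}
            self.watchers = []                 # (predicate P, statement S) pairs

        def when(self, predicate, statement):
            """Declare: when predicate(vars) holds, interrupt and execute statement."""
            self.watchers.append((predicate, statement))

        def assign(self, name, value):
            self.vars[name] = value            # an action that might make some P true
            for predicate, statement in self.watchers:
                if predicate(self.vars):       # the test is spliced in here, not elsewhere
                    statement(self.vars)

    store = MonitoredStore()
    # Process A monitors process B's state and intervenes when the bound is passed.
    store.when(lambda v: v.get("queue_length", 0) > 3,
               lambda v: print("monitor fired: queue_length =", v["queue_length"]))

    # Process B, the computation under surveillance.
    for n in range(6):
        store.assign("queue_length", n)        # the monitor fires for n = 4 and n = 5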
Variation of the Syntax

Within the confines of a single language an astonishing amount of variability is attainable. Still all experience tells us that our changing needs will place increasing pressure on the language itself to change. The precise nature of these changes cannot be anticipated by designers, since they are the consequence of programs yet to be written for problems not yet solved. Ironically, it is the most useful and successful languages that are most subject to this pressure for change. Fortunately, the early kind of variation to be expected is somewhat predictable. Thus, in scientific computing the representation and arithmetic of numbers varies, but the nature of expressions does not change except through its operands and operators. The variation in syntax from these sources
is quite easily taken care of. In effect the syntax and evaluation rules of arithmetic expression are left undefined in the language. Instead syntax and evaluation rules are provided in the language for programming the definition of arithmetic expression, and to set the scope of such definitions. The only real difficulty in this one-night-stand language game is the specification of the evaluation rules. They must be given with care. For example, in introducing in this way the arithmetic of matrices, the evaluation of matrix expressions should be careful of the use of temporary storage and not perform unnecessary iterations. A natural technique to employ in the use of definitions is to start with a language X, consider the definitions as enlarging the syntax to that of a language X' and give the evaluation rules as a reduction process which reduces any text in X' to an equivalent one in X. It should be remarked that the variation of the syntax requires a representation of the syntax, preferably as a data structure of X itself.
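As a small illustration of the reduction technique just described, the Python sketch below (invented names, not a prescribed implementation) enlarges a core language of scalar arithmetic with matrix expressions and gives their evaluation rules by reduction to elementwise scalar operations; the fused sum shows the kind of care about temporary storage and unnecessary iterations mentioned above.

    # Sketch only: matrix expressions of the enlarged language X' reduced to the
    # elementwise scalar operations of the core language X.
    class Matrix:
        def __init__(self, rows):
            self.rows = [list(r) for r in rows]

        def __add__(self, other):
            # Reduction rule: matrix + matrix -> elementwise scalar additions.
            return Matrix([[a + b for a, b in zip(ra, rb)]
                           for ra, rb in zip(self.rows, other.rows)])

    def matrix_sum(*ms):
        # A fused evaluation rule for A + B + ... : one pass over the elements,
        # with no intermediate matrix built for each '+' of the extended text.
        return Matrix([[sum(column) for column in zip(*group)]
                       for group in zip(*(m.rows for m in ms))])

    A = Matrix([[1, 2], [3, 4]])
    B = Matrix([[10, 20], [30, 40]])
    C = Matrix([[100, 200], [300, 400]])
    print((A + B).rows)               # [[11, 22], [33, 44]]
    print(matrix_sum(A, B, C).rows)   # [[111, 222], [333, 444]]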
Conclusion

Programming languages are built around the variable - its operations, control and data structures. Since these are concepts common to all programming, a general language must focus on their orderly development. While we owe a great debt to Turing for his simple model, which also focused on the important concepts, we do not hesitate to operate with more sophisticated machines and data than he found necessary. Programmers should never be satisfied with languages which permit them to program everything, but to program nothing of interest easily. Our progress, then, is measured by the balance we achieve between efficiency and generality. As the nature of our involvement with computation changes - and it does - the appropriate description of language changes; our emphasis shifts. I feel that our successor model will show such a change. Computer science is a restless infant and its progress depends as much on shifts in point of view as on the orderly development of our current concepts. None of the ideas presented here are new; they are just forgotten from time to time. I wish to thank the Association for the privilege of delivering this first Turing lecture. And what better way is there to end this lecture than to say that if Turing were here today he would say things differently in a lecture named differently.

Categories and Subject Descriptors: D.3.1 [Software]: Formal Definitions and Theory; D.3.2 [Software]: Language Classifications - ALGOL; D.3.3 [Software]: Language Constructs - data types and structures; D.4.3 [Software]: File System Management - access methods
General Terms: Languages, Algorithms
Postscript
ALAN J. PERLIS
Department of Computer Science
Yale University

In an activity as dynamic as computing, one should not expect a 20-year-old position paper to remain prescient. Therefore I was pleasantly surprised to find obvious interpretations of its contents which fit with what has happened, and is still happening, to programming languages. We still attach enormous importance to the model of computation that a language clothes. Most of the newer languages that have captured our imaginations provide syntactic sugar and semantic enrichment for these models so that we are seduced into making ambitious experiments that we were loath to try before. Consider four of these models: pipelining (APL), distributed programming (more commonly called object-oriented programming, as exemplified by Smalltalk), reduction programming (functional programming, as exemplified by LISP, FP, or ML), and nonprocedural programming (as exemplified by logic programming with PROLOG). These models have captured our imagination much as ALGOL did 25 years ago. We have no reason to believe that these are the last models that will stir us to attempt more gigantic syntheses.
My lecture focused also on the importance of data structure in programming, and hence in programming language. One axis along which languages develop is increasing sophistication of data structure definition and control. Whereas ALGOL and its direct descendants have opted for definable variability in data structures, leading to records and theories of types, the models mentioned above have gained support through their dependence on a single compound data structure, such as the list or array. Of course, as their use becomes more widespread, data structure variability will become more of the concern of these models.
The work station, the personal computer, and the network had not yet become commonplace tools into which programming language systems had to be integrated. Editors were primitive and by no means seen as the magic door through which one entered computations. Nevertheless the lecture did point to the importance of conversational computing and languages to support it. It was pointed out that the representation of program as data was crucial to such programming. Lists turn out to be better than arrays for such representation, since syntax is a set of nesting and substitution constraints, and so are list structure modification rules. APL and pipelining have suffered because of this inequity between arrays and lists. New versions of APL are attempting to redress this inequity.
Programming is becoming a ubiquitous activity, and strong efforts are being made to standardize a few languages (models, too). Thus far we have resisted such a limitation, and we are wise to have done so. New architectures and problem areas are sure to suggest new computational models that will stir our imaginations. From them and what we now have will come the next vision of a language playing the role of Ada. As always, we shall continue to avoid the Turing tar pit: being forced to use languages where everything is possible but nothing of interest is easy.
The Humble Programmer
EDSGER W. DIJKSTRA

[Extract from the Turing Award Citation read by M. D. McIlroy, chairman of the ACM Turing Award Committee, at the presentation of this lecture on August 14, 1972, at the ACM Annual Conference in Boston.]

The working vocabulary of programmers everywhere is studded with words originated or forcefully promulgated by E. W. Dijkstra - display, deadly embrace, semaphore, go-to-less programming, structured programming. But his influence on programming is more pervasive than any glossary can possibly indicate. The precious gift that this Turing Award acknowledges is Dijkstra's style: his approach to programming as a high, intellectual challenge; his eloquent insistence and practical demonstration that programs should be composed correctly, not just debugged into correctness; and his illuminating perception of problems at the foundations of program design. He has published about a dozen papers, both technical and reflective, among which are especially to be noted his philosophical addresses at IFIP,1 his already classic papers on cooperating sequential processes,2 and his memorable indictment of the go-to statement.3 An influential series of letters by Dijkstra have recently surfaced as a polished monograph on the art of composing programs.4

1 Some meditations on advanced programming, Proceedings of the IFIP Congress 1962, 535-538; Programming considered as a human activity, Proceedings of the IFIP Congress 1965, 213-217.
2 Solution of a problem in concurrent programming control, CACM 8 (Sept. 1965), 569; The structure of the 'THE' multiprogramming system, CACM 11 (May 1968), 341-346.
3 Go to statement considered harmful, CACM 11 (Mar. 1968), 147-148.
4 A short introduction to the art of computer programming. Technische Hogeschool, Eindhoven, 1971.
Author's present address: Department of Computer Sciences, The University of Texas at Austin, Austin, TX 78712.
We have come to value good programs in much the same way as we value good literature. And at the center of this movement, creating and reflecting patterns no less beautiful than useful, stands E. W Dijkstra. As a result of a long sequence of coincidences I entered the programming profession officially on the first spring morning of 1952, and as far as I have been able to trace, I was the first Dutchman to do so in my country. In retrospect the most amazing thing is the slowness with which, at least in my part of the world, the programming profession emerged, a slowness which is now hard to believe. But I am grateful for two vivid recollections from that period that established that slowness beyond any doubt. After having programmed for some three years, I had a discussion with van Wijngaarden, who was then my boss at the Mathematical Centre in Amsterdam-a discussion for which I shall remain grateful to him as long as I live. The point was that I was supposed to study theoretical physics at the University of Leiden simultaneously, and as I found the two activities harder and harder to combine, I had to make up my mind, either to stop programming and become a real, respectable theoretical physicist, or to carry my study of physics to a formal completion only, with a minimum of effort, and to become..., yes what? A programmer? But was that a respectable profession? After all, what was programming? Where was the sound body of knowledge that could support it as an intellectually respectable discipline? I remember quite vividly how I envied my hardware colleagues, who, when asked about their professional competence, could at least point out that they knew everything about vacuum tubes, amplifiers and the rest, whereas I felt that, when faced with that question, I would stand empty-handed. Full of misgivings I knocked on van Wijngaarden's office door, asking him whether I could speak to him for a moment; when I left his office a number of hours later, I was another person. For after having listened to my problems patiently, he agreed that up till that moment there was not much of a programming discipline, but then he went on to explain quietly that automatic computers were here to stay, that we were just at the beginning and could nct I be one of the persons called to make programming a respectable discipline in the years to come? This was a turning point in my life and I completed my study of physics formally as quickly as I could. One moral of the above story is, of course, that we must be very careful when we give advice to younger people: sometimes they follow it! 'Two years later, in 1957, I married, and Dutch marriage rites require you to state your profession and I stated that I was a programmer. But the municipal authorities of :he town of Amsterdam did not accept it on the grounds that there was no such profession. And, believe it or not, but under the heading 'profession' my marriage record shows the ridiculous entry 'theoretical physicist'! 18
So much for the slowness with which I saw the programming profession emerge in my own country. Since then I have seen more of the world, and it is my general impression that in other countries, apart from a possible shift of dates, the growth pattern has been very much the same. Let me try to capture the situation in those old days in a little bit more detail, in the hope of getting a better understanding of the situation today. While we pursue our analysis, we shall see how many common misunderstandngs about the true nature of the programming task can be traced back to that now distant past. The first automatic electronic computers were all unique, singlecopy machines and they were all to be found in an environment with the exciting flavor of an experimental laboratory. Once the vision of the automatic computer was there, its realization was a tremendous challenge to the electronic technology then available, and one thing is certain: we cannot deny the courage of the groups that decided to try to build such a fantastic piece of equipment. For fantastic pieces of equipment they were: in retrospect one can only wonder that those first machines worked at all, at least sometimes. The overwhelming problem was to get and keep the machine in working order. The preoccupation with the physical aspects of automatic computing is still reflected in the names of the older scientific societies in the field, such as the Association for Computing Machinery or the British Computer Society, names in which explicit reference is made to the physical equipment. What about the poor programmer? Well, to tell the honest truth, he was hardly noticed. For one thing, the first machines were so bulky that you could hardly move them and besides that, they required such extensive maintenance that it was quite natural that the place where people tried to use the machine was the same laboratory where the machine had been developed. Secondly, the programmer's somewhat invisible work was without any glamour: you could show the machine to visitors and that was several orders of magnitude more spectacular than some sheets of coding. But most important of all, the programmer himself had a very modest view of his own work: his work derived all its significance from the existence of that wonderful machine. Because that was a unique machine, he knew only too well that his programs had only local significance, and also because it was patently obvious that this machine would have a limited lifetime, he knew that very little of his work would have a lasting value. Finally, there is yet another circumstance that had a profound influence on the programmer's attitude toward his work: on the one hand, besides being unreliable, his machine was usually too slow and its memory was usually too small, i.e., he was faced with a pinching shoe, while on the other hand its usually somewhat queer order code would cater for the most unexpected constructions. And in those days many a clever The Humble Programmer
programmer derived an immense intellectual satisfaction from the cunning tricks by means of which he contrived to squeeze the impossible into the constraints of his equipment. Two opinions about programming date from those days. I mention them now; I shall return to them later. The one opinion was that a really competent programmer should be puzzle-minded and very fond of clever tricks; the other opinion was that programming was nothing more than optimizing the efficiency of the computational process, in one direction or the other. The latter opinion was the result of the frequent circumstance that, indeed, the available equipment was a painfully pinching shoe, and in those days one often encountered the naive expectation that, once more powerful machines were available, programming would no longer be a problem, for then the struggle to push the machine to its limits would no longer be necessary and that was all that programming was about, wasn't it?
But in the next decades something completely different happened: more powerful machines became available, not just an order of magnitude more powerful, even several orders of magnitude more powerful. But instead of finding ourselves in a state of eternal bliss with all programming problems solved, we found ourselves up to our necks in the software crisis! How come? There is a minor cause: in one or two respects modern machinery is basically more difficult to handle than the old machinery. Firstly, we have got the I/O interrupts, occurring at unpredictable and irreproducible moments; compared with the old sequential machine that pretended to be a fully deterministic automaton, this has been a dramatic change, and many a systems programmer's grey hair bears witness to the fact that we should not talk lightly about the logical problems created by that feature. Secondly, we have got machines equipped with multilevel stores, presenting us problems of management strategy that, in spite of the extensive literature on the subject, still remain rather elusive. So much for the added complication due to structural changes of the actual machines.
But I called this a minor cause; the major cause is . . . that the machines have become several orders of magnitude more powerful! To put it quite bluntly: as long as there were no machines, programming was no problem at all; when we had a few weak computers, programming became a mild problem, and now that we have gigantic computers, programming has become an equally gigantic problem. In this sense the electronic industry has not solved a single problem, it has only created them - it has created the problem of using its products. To put it in another way: as the power of available machines grew by a factor of more than a thousand, society's ambition to apply these machines grew in proportion and it was the poor programmer who found his job in this exploded field of tension between ends and means. The increased power of the hardware, together with the perhaps even
more dramatic increase in its reliability, made solutions feasible that the programmer had not dared to dream about a few years before. And now, a few years later, he had to dream about them and, even worse, he had to transform such dreams into reality! Is it a wonder that we found ourselves in a software crisis? No, certainly not, and as you may guess, it was even predicted well in advance; but the trouble with minor prophets, of course, is that it is only five years later that you really know that they had been right. Then, in the mid-sixties something terrible happened: the computers of the so-called third generation made their appearance. The official literature tells us that their price/performance ratio has been one of the major design objectives. But if you take as 'performance' the duty cycle of the machine's various components, little will prevent you from ending up with a design in which the major part of your performance goal is reached by internal housekeeping activities of doubtful necessity. And if your definition of price is the price to be paid for the hardware, little will prevent you from ending up with a design that is terribly hard to program for: for instance the order code might be such as to enforce, either upon the programmer or upon the system, early binding decisions presenting conflicts that really cannot be resolved. And to a large extent these unpleasant possibilities seem to have become reality. When these machines were announced and their functional specifications became known, many among us must have become quite miserable: at least I was. It was only reasonable to expect that such machines would flood the computing community, and it was therefore all the more important that their design should be as sound as possible. But the design embodied such serious flaws that I felt that with a single stroke the progress of computing science had been retarded by at least ten years; it was then that I had the blackest week in the whole of my professional life. Perhaps the most saddening thing now is that, even after all those years of frustrating experience, still so many people honestly believe that some law of nature tells us that machines have to be that way. They silence their doubts by observing how many of these machines have been sold, and derive from that observation the false sense of security that, after all, the design cannot have been that bad. But upon the closer inspection, that line of defense has the same convincing strength as the argument that cigarette smoking must be healthy because so many people do it. It is in this connection that I regret that it is not customary for scientific journals in the computing area to publish reviews of newly announced computers in much the same way as we review scientific publications: to review machines would be at least as important. And here I have a confession to make: in the early sixties I wrote such a review with the intention of submitting it to Communications, but in spite of the fact that the few colleagues to whom the text was sent The Humble Programmer
for their advice urged me to do so, I did not dare to do it, fearing that the difficulties either for myself or for the Editorial Board would prove to be too great. This suppression was an act of cowardice on my side for which I blame myself more and more. The difficulties I foresaw were a consequence of the absence of generally accepted criteria, and although I was convinced of the validity of the criteria I had chosen to apply, I feared that my review would be refused or discarded as 'a matter of personal taste.' I still think that such reviews would be extremely useful and I am longing to see them appear, for their accepted appearance would be a sure sign of maturity of the computing community. The reason that I have paid the above attention to the hardware scene is because I have the feeling that one of the most important aspects of any computing tool is its influence on the thinking habits of those who try to use it, and because I have reasons to believe that the influence is many times stronger than is commonly assumed.
Let us now switch our attention to the software scene. Here the diversity has been so large that I must confine myself to a few stepping stones. I am painfully aware of the arbitrariness of my choice, and I beg you not to draw any conclusions with regard to my appreciation of the many efforts that will have to remain unmentioned.
In the beginning there was the EDSAC in Cambridge, England, and I think it quite impressive that right from the start the notion of a subroutine library played a central role in the design of that machine and of the way in which it should be used. It is now nearly 25 years later and the computing scene has changed dramatically, but the notion of basic software is still with us, and the notion of the closed subroutine is still one of the key concepts in programming. We should recognize the closed subroutine as one of the greatest software inventions; it has survived three generations of computers and it will survive a few more, because it caters for the implementation of one of our basic patterns of abstraction. Regrettably enough, its importance has been underestimated in the design of the third generation computers, in which the great number of explicitly named registers of the arithmetic unit implies a large overhead on the subroutine mechanism. But even that did not kill the concept of the subroutine, and we can only pray that the mutation won't prove to be hereditary.
The second major development on the software scene that I would like to mention is the birth of FORTRAN. At that time this was a project of great temerity, and the people responsible for it deserve our great admiration. It would be absolutely unfair to blame them for shortcomings that only became apparent after a decade or so of extensive usage: groups with a successful look-ahead of ten years are quite rare! In retrospect we must rate FORTRAN as a successful coding technique, but with very few effective aids to conception, aids which are now so urgently needed that time has come to consider it out of date. The
sooner we can forget that FORTRAN ever existed, the better, for as a vehicle of thought it is no longer adequate: it wastes our brainpower, and it is too risky and therefore too expensive to use. FORTRAN's tragic fate has been its wide acceptance, mentally chaining thousands and thousands of programmers to our past mistakes. I pray daily that more of my fellow-programmers may find the means of freeing themselves from the curse of compatibility. The third project I would not like to leave unmentioned is LISP, a fascinating enterprise of a completely different nature. With a few very basic principles at its foundation, it has shown a remarkable stability. Besides that, LISP has been the carrier for a considerable number of, in a sense, our most sophisticated computer applications. LISP has jokingly been described as 'the most intelligent way to misuse a computer.' I think that description a great compliment because it transmits the full flavor of liberation: it has assisted a number of our most gifted fellow humans in thinking previously impossible thoughts. The fourth project to be mentioned is ALGOL 60. While up to the present day FORTRAN programmers still tend to understand their programming language in terms of the specific implementation they are working with - hence the prevalence of octal or hexadecimal dumps while the definition of LISP is still a curious mixture of what the language means and how the mechanism works, the famous Report on the Algorithmic Language ALGOL 60 is the fruit of a genuine effort to carry abstraction a vital step further and to define a programming language in an implementation-independent way. One could argue that in this respect its authors have been so successful that they have created serious doubts as to whether it could be implemented at all! The report gloriously demonstrated the power of the formal method BNF, now fairly known as Backus-Naur-Form, and the power of carefully phrased English, at least when used by someone as brilliant as Peter Naur. I think that it is fair to say that only very few documents as short as this have had an equally profound influence on the computing community. The ease with which in later years the names ALGOL and ALGOL-like have been used, as an unprotected trademark, to lend glory to a number of sometimes hardly related younger projects is a somewhat shocking compliment to ALGOL's standing. The strength of BNF as a defining device is responsible for what I regard as one of the weaknesses of the language: an overelaborate and not too systematic syntax could now be crammed into the confines of very few pages. With a device as powerful as BNF, the Report on the Algorithmic Language ALGOL 60 should have been much shorter. Besides that, I am getting very doubtful about ALGOL 60's parameter mechanism: it allows the programmer so much combinatorial freedom that its confident use requires a strong discipline from the programmer. Besides being expensive to implement, it seems dangerous to use. The Humble Programmer
Finally, although the subject is not a pleasant one, I must mention a programming language for which the defining documentation is of a frightening size and complexity. Using PL/I must be like flying a plane with 7,000 buttons, Ewitches, and handles to manipulate in the cockpit. I absolutely fail to see how we can keep our growing programs firmly within our intellectual grip when by its sheer baroqueness the programming language -our basic tool, mind you! -already escapes our intellectual control. And if I have to describe the influence PL/I can have on its users, the closest metaphor that comes to my mind is that of a drug. I remember from a symposium on higher level programming languages a lecture given in defense of PL/I by a man who described himself as one of its devoted users. But within a one-hour lecture in praise of PL/I, he managed to ask for the addition of about 50 new 'features,' little supposing that the main source of his problems could very well be that it contained already far too many 'features.' The speaker displayed all the depressing symptoms of addiction, reduced as he was to the state of mental stagnation in which he could only ask for more, more, more.... When FORTRAN has been called an infantile disorder, full PL/J, with its growth characteristics of a dangerous tumor, could turn out to be a fatal disease. So much for the past. But there is no point in making mistakes unless thereafter we are able to learn from them. As a matter of fact, I think that we have learned so much that within a few years programming can be an activity vastly different from what it has been up till now, so different that we had better prepare ourselves for the shock. Let me sketch for you one of the possible futures. At first sight, this vision of programming in perhaps already the near future may strike you as utterly fantastic. Let me therefore also add the considerations that might lead one to the conclusion that this vision could be a very real possibility. The vision is that, well before the seventies have run to completion, we shall be able to design and implement the kind of systems that are now straining our programming ability at the expense of only a few percent in man-years of what they cost us now, and that besides that, these systems will be virtually free of bugs. These two improvements go hand in hand. In the latter respect software seems to be different from many other products, where as a rule a higher quality implies a higher price. Those who want really reliable software will discover that they must find means of avoiding the majority of bugs to start with, and as a result the programming process will become cheaper. If you want more effective programmers, you will discover that they should not waste their time debugging-they should not introduce the bugs to start with. In other words, both goals point to the same change. Such a drastic change in such a short period of time would be a revolution, and to all persons that base their expectations for the future on smooth extrapolation of the recent past -appealing to some unwritten PL/I,
laws of social and cultural inertia-the chance that this drastic change will take place must seem negligible. But we all know that sometimes revolutions do take place! And what are the chances for this one? There seem to be three major conditions that must be fulfilled. The world at large must recognize the need for the change; secondly, the economic need for it must be sufficiently strong; and, thirdly, the change must be technically feasible. Let me discuss these three conditions in the above order. With respect to the recognition of the need for greater reliability of software, I expect no disagreement anymore. Only a few years ago this was different: to talk about a software crisis was blasphemy. The turning point was the Conference on Software Engineering in Garmisch, October 1968, a conference that created a sensation as there occurred the first open admission of the software crisis. And by now it is generally recognized that the design of any large sophisticated system is going to be a very difficult job, and whenever one meets people responsible for such undertakings, one finds them very much concerned about the reliability issue, and rightly so. In short, our first condition seems to be satisfied. Now for the economic need. Nowadays one often encounters the opinion that in the sixties programming has been an overpaid profession, and that in the coming years programmer salaries may be expected to go down. Usually this opinion is expressed in connection with the recession, but it could be a symptom of something different and quite healthy, viz. that perhaps the programmers of the past decade have not done so good a job as they should have done. Society is getting dissatisfied with the performance of programmers and of their products. But there is another factor of much greater weight. In the present situation it is quite usual that for a specific system, the price to be paid for the development of the software is of the same order of magnitude as the price of the hardware needed, and society more or less accepts that. But hardware manufacturers tell us that in the next decade hardware prices can be expected to drop with a factor of ten. If software development were to continue to be the same clumsy and expensive process as it is now, things would get completely out of balance. You cannot expect society to accept this, and therefore we must learn to program an order of magnitude more effectively. To put it in another way: as long as machines were the largest item on the budget, the programming profession could get away with its clumsy techniques; but the umbrella will fold very rapidly. In short, also our second condition seems to be satisfied. And now the third condition: is it technically feasible? I think it might be, and I shall give you six arguments in support of that opinion. A study of program structure has revealed that programs -even alternative programs for the same task and with the same mathematical content -can differ tremendously in their intellectual manageability. The Humble Programmer
A number of rules have been discovered, violation of which will either seriously impair or totally destroy the intellectual manageability of the program. These rules are of two kinds. Those of the first kind are easily imposed mechanically, viz. by a suitably chosen programming language. Examples are the exclusion of go-to statements and of procedures with more than one output parameter. For those of the second kind, I at least-but that may be due to lack of competence on my side-see no way of imposing them mechanically, as it seems to need some sort of automatic theorem prover for which I have no existence proof. Therefore, for the time being and perhaps forever, the rules of the second kind present themselves as elements of discipline required from the programmer. Some of the rules I have in mind are so clear that they can be taught and that there never needs to be an argument as to whether a given program violates them or not. Examples are the requirements that no loop should be written down without providing a proof for termination or without stating the relation whose invariance will not be destroyed by the e xecution of the repeatable statement. I now suggest that we confine ourselves to the design and implementation of intellectually manageable programs. If someone fears that this restriction is so severe that we cannot live with it, I can reassure him: the class of intellectually manageable programs is still sufficiently rich to contain many very realistic programs for any problem capable of algorithmic solution. We must not forget that it is not our business to make programs; it is our business to design classes of computations that will display a desired behavior. The suggestion of confining ourselves to intellectually manageable programs is the basis for the first two of my announced six arguments. Argument one is that, as the programmer only needs to consider intellectually manageable programs, the alternatives he is choosing from are much, much easier to core with. Argument two is that, as soon as we have decided to restrict ourselves to the subject of the intellectually manageable programs, we have achieved, once and for all, a drastic reduction of the solution space to be considered. And this argument is distinct from argument one. Argument three is based on 1he constructive approach to the problem of program correctness. Today a usual technique is to make a program and then to test it. But: program testing can be a very effective way to show the presence of bugs, but it is hopelessly inadequate for showing their absence. The only effective way to raise the confidence level of a program significantly is to give a convincing proof of its correctness. But one should not first make the program and then prove its correctness, because then the requirement of providing the proof would only increase the poor programmer's burden. On the contrary: the programmer should let correctness proof and program grow hand in hand. Argument three is essentially based on the following observation. If one first asks oneself what the structure of a convincing proof 26
would be and, having found this, then constructs a program satisfying this proof's requirements, then these correctness concerns turn out to be a very effective heuristic guidance. By definition this approach is only applicable when we restrict ourselves to intellectually manageable programs, but it provides us with effective means for finding a satisfactory one among these. Argument four has to do with the way in which the amount of intellectual effort needed to design a program depends on the program length. It has been suggested that there is some law of nature telling us that the amount of intellectual effort needed grows with the square of program length. But, thank goodness, no one has been able to prove this law. And this is because it need not be true. We all know that the only mental tool by means of which a very finite piece of reasoning can cover a myriad of cases is called 'abstraction'; as a result the effective exploitation of his powers of abstraction must be regarded as one of the most vital activities of a competent programmer. In this connection it might be worthwhile to point out that the purpose of abstracting is not to be vague, but to create a new semantic level in which one can be absolutely precise. Of course I have tried to find a fundamental cause that would prevent our abstraction mechanisms from being sufficiently effective. But no matter how hard I tried, I did not find such a cause. As a result I tend to the assumption - up till now not disproved by experience - that by suitable application of our powers of abstraction, the intellectual effort required to conceive or to understand a program need not grow more than proportional to program length. A by-product of these investigations may be of much greater practical significance, and is, in fact, the basis of my fourth argument. The by-product was the identification of a number of patterns of abstraction that play a vital role in the whole process of composing programs. Enough is known about these patterns of abstraction that you could devote a lecture to each of them. What the familiarity and conscious knowledge of these patterns of abstraction imply dawned upon me when I realized that, had they been common knowledge 15 years ago, the step from BNF to syntax-directed compilers, for instance, could have taken a few minutes instead of a few years. Therefore I present our recent knowledge of vital abstraction patterns as the fourth argument. Now for the fifth argument. It has to do with the influence of the tool we are trying to use upon our own thinking habits. I observe a cultural tradition, which in all probability has its roots in the Renaissance, to ignore this influence, to regard the human mind as the supreme and autonomous master of its artifacts. But if I start to analyze the thinking habits of myself and of my fellow human beings, I come, whether I like it or not, to a completely different conclusion, viz. that the tools we are trying to use and the language or notation we are using to express or record our thoughts are the major factors determining that we can think or express at all! The analysis of the influence that The Humble Programmer 27
programming languages have on the thinking habits of their users, and the recognition that, by now, brainpower is by far our scarcest resource, these together give us a new collection of yardsticks for comparing the relative merits of various programming languages. The competent programmer is fully aware of the strictly limited size of his own skull; therefore he approaches the programming task in full humility, and among other things he avoids clever tricks like the plague. In the case of a well-known conversational programming language I have been told from various sides that as soon as a programming community is equipped with it terminal for it, a specific phenomenon occurs that even has a well-established name: it is called 'the one-liners.' It takes one of two different forms: one programmer places a one-line program on the desk of another and either he proudly tells what it does and adds the question, 'Can you code this in less symbols?' -as if this were of any conceptual relevance! -or he just says, 'Guess what it does!' From this observation we must conclude that this language as a tool is an open invitation for clever tricks; and while exactly this may be the explanation for scme of its appeal, viz. to those who like to show how clever they are, I am sorry, but I must regard this as one of the most damning things that can be said about a programming language. Another lesson we should have learned from the recent past is that the development of 'richer' or 'more powerful' programming languages was a mistake in the sense that these baroque monstrosities, these conglomerations of idiosyncrasies, are really unmanageable, both mechanically and mentally I see a great future for very systematic and very modest programming languages. When I say 'modest,' I mean that, for instance, not only ALGOL 60's 'for clause,' but even FORTRAN's 'DO loop' may find themselves thrown out as being too baroque. I have run a little program T ing experiment with really experienced volunteers, but something quite unintended and quite unexpected turned up. None of my volunteers found the obvious and most elegant solution. Upon closer analysis this turned out to have a common source: their notion of repetition was so tightly connected to the idea of an associated controlled variable to be stepped up, that they were mentally blocked from seeing the obvious. Their solutions were less efficient, needlessly hard to understand, and it took them a very long time to find them. It was a revealing but also shocking experience for me. Finally, in one respect one hopes that tomorrow's programming languages will differ greatly from what we are used to now: to a much greater extent than hitherto they should invite us to reflect in the structure of what we write clown all abstractions needed to cope conceptually with the complexity of what we are designing. So much for the greater adequacy of our future tools, which was the basis of the fifth argument. As an aside I would like to insert a warning to those who identify the difficulty of the program ng task with the struggle against the inadequacies of our current tc'ols, because they might conclude that, 28
once our tools will be much more adequate, programming will no longer be a problem. Programming will remain very difficult, because once we have freed ourselves from the circumstantial cumbersomeness, we will find ourselves free to tackle the problems that are now well beyond our programming capacity. You can quarrel with my sixth argument, for it is not so easy to collect experimental evidence for its support, a fact that will not prevent me from believing in its validity. Up till now I have not mentioned the word 'hierarchy,' but I think that it is fair to say that this is a key concept for all systems embodying a nicely factored solution. I could even go one step further and make an article of faith out of it, viz. that the only problems we can really solve in a satisfactory manner are those that finally admit a nicely factored solution. At first sight this view of human limitations may strike you as a rather depressing view of our predicament, but I don't feel it that way. On the contrary, the best way to learn to live with our limitations is to know them. By the time we are sufficiently modest to try factored solutions only, because the other efforts escape our intellectual grip, we shall do our utmost to avoid all those interfaces impairing our ability to factor the system in a helpful way. And I cannot but expect that this will repeatedly lead to the discovery that an initially untractable problem can be factored after all. Anyone who has seen how the majority of the troubles of the compiling phase called 'code generation' can be tracked down to funny properties of the order code will know a simple example of the kind of things I have in mind. The wide applicability of nicely factored solutions is my sixth and last argument for the technical feasibility of the revolution that might take place in the current decade. In principle I leave it to you to decide for yourself how much weight you are going to give to my considerations, knowing only too well that I can force no one else to share my beliefs. As in each serious revolution, it will provoke violent opposition and one can ask oneself where to expect the conservative forces trying to counteract such a development. I don't expect them primarily in big business, not even in the computer business: I expect them rather in the educational institutions that provide today's training and in those conservative groups of computer users that think their old programs so important that they don't think it worthwhile to rewrite and improve them. In this connection it is sad to observe that on many a university campus the choice of the central computing facility has too often been determined by the demands of a few established but expensive applications with a disregard of the question, how many thousands of 'small users' who are willing to write their own programs are going to suffer from this choice. Too often, for instance, high-energy physics seems to have blackmailed the scientific community with the price of its remaining experimental equipment. The easiest answer, of course, The Humble Programmer
is a flat denial of the technical feasibility, but I am afraid that you need pretty strong arguments for that. No reassurance, alas, can be obtained from the remark that the intellectual ceiling of today's average programmer will prevent the revolution from taking place: with others programming so much more effectively, he is liable to be edged out of the picture anyway. There may also be political impediments. Even if we know how to educate tomorrow's professional programmer, it is not certain that the society we are living in will allow us to do so. The first effect of teaching a methodology - rather than disseminating knowledge - is that of enhancing the capacities of the already capable, thus magnifying the difference in intelligence. In a society in which the educational system is used as an instrument for the establishment of a homogenized culture, in which the cream is prevented from rising to the top, the education of competent programmers could be politically unpalatable.
Let me conclude. Automatic computers have now been with us for a quarter of a century. They have had a great impact on our society in their capacity of tools, but in that capacity their influence will be but a ripple on the surface of our culture compared with the much more profound influence they will have in their capacity of intellectual challenge which will be without precedent in the cultural history of mankind. Hierarchical systems seem to have the property that something considered as an undivided entity on one level is considered as a composite object on the next lower level of greater detail; as a result the natural grain of space or time that is applicable at each level decreases by an order of magnitude when we shift our attention from one level to the next lower one. We understand walls in terms of bricks, bricks in terms of crystals, crystals in terms of molecules, etc. As a result the number of levels that can be distinguished meaningfully in a hierarchical system is kind of proportional to the logarithm of the ratio between the largest and the smallest grain, and therefore, unless this ratio is very large, we cannot expect many levels. In computer programming our basic building block has an associated time grain of less than a microsecond, but our program may take hours of computation time. I do not know of any other technology covering a ratio of 10^10 or more: the computer, by virtue of its fantastic speed, seems to be the first to provide us with an environment where highly hierarchical artifacts are both possible and necessary. This challenge, viz. the confrontation with the programming task, is so unique that this novel experience can teach us a lot about ourselves. It should deepen our understanding of the processes of design and creation; it should give us better control over the task of organizing our thoughts. If it did not do so, to my taste we should not deserve the computer at all!
It has already taught us a few lessons, and the one I have chosen to stress in this talk is the following. We shall do a much better programming job, provided that we approach the task with a full appreciation of its tremendous difficulty, provided that we stick to modest and elegant programming languages, provided that we respect the intrinsic limitations of the human mind and approach the task as Very Humble Programmers.
Categories and Subject Descriptors: D.2.4 [Software]: Program Verification - correctness proofs; D.3.0 [Software]: General - standards; D.3.3 [Software]: Language Constructs - procedures, functions and subroutines; K.2 [Computing Milieux]: History of Computing - people; K.7.1 [Computing Milieux]: The Computing Profession - occupations
General Terms: Design, Human Factors, Languages, Reliability
Additional Key Words and Phrases: ALGOL 60, EDSAC, FORTRAN, PL/I
Postscript EDSGER W. DIJKSTRA Department of Computer Sciences The University of Texas at Austin My Turing Award lecture of 1972 was very much a credo that presented the programming task as an intellectual challenge of the highest caliber. That credo strikes me now (in 1986) as still fully up to date: How not to get lost in the complexities of our own making is still computing's core challenge. In its proposals of how to meet that challenge, however, the lecture is clearly dated: Had I to give it now, I would devote a major part of it to the role of formal techniques in programming. The confrontation of my expectations in those days with what has happened since evokes mixed feelings. On the one hand, my wildest expectations have been surpassed: neat, concise arguments leading to sophisticated algorithms that were very hard, if not impossible, to conceive as little as ten years ago are a regular source of intellectual excitement. On the other hand, I am disappointed to see how little of this has penetrated into the average computing science curriculum, in which the effective design of high-quality programs is neglected in favor of fads (say, 'incremental self-improvement of the user-friendliness of expert systems interfaces'). There is an upper bound on the speed with which society can absorb progress, and I guess I have still to learn how to be more patient.
Computer Programming as an Art DONALD E. KNUTH [The Turing Award citation read by Bernard A. Galler, chairman of the 1974 Turing Award Committee, on the presentation of this lecture on November 11 at the ACM Annual Conference in San Diego.] The A. M. Turing Award of the ACM is presented annually by the ACM to an individual selected for his contributions of a technical nature made to the computing community. In particular, these contributions should have had significant influence on a major segment of the computer field. 'The 1974 A. M. Turing Award is presented to Professor Donald E. Knuth of Stanford University for a number of major contributions to the analysis of algorithms and the design of programming languages, and in particular for his most significant contributions to the 'art of computer programming' through his series of well-known books. The collections of techniques, algorithms, and relevant theory in these books have served as a focal point for developing curricula and as an organizing influence on computer science.' Such a formal statement cannot put into proper perspective the role which Don Knuth has been playing in computer science, and in the computer industry as a whole. It has been my experience with respect to the first recipient of the Turing Award, Professor Alan J. Perlis, that at every meeting in which he participates he manages to provide the insight into the problems being discussed that becomes the focal point of discussion for Author's present address: Fletcher Jones Professor of Computer Science, Stanford University, Stanford CA 94305.
the rest of the meeting. In a very similar way, the vocabulary, the examples, the algorithms and the insight that Don Knuth has provided in his excellent collection of books and papers have begun to find their way into a great many discussions in almost every area of the field. This does not happen easily. As every author knows, even a single volume requires a great deal of careful organization and hard work. All the more must we appreciate the clear view and the patience and energy which Knuth must have had to plan seven volumes and to set about implementing his plan so carefully and thoroughly. It is significant that this award and the others that he has been receiving are being given to him after three volumes of his work have been published. We are clearly ready to signal to everyone our appreciation of Don Knuth for his dedication and his contributions to our discipline. I am very pleased to have chaired the Committee that has chosen Don Knuth to receive the 1974 A. M. Turing Award of the ACM. When Communications of the ACM began publication in 1959, the members of ACM's Editorial Board made the following remark as they described the purposes of ACM's periodicals [2]: 'If computer programming is to become an important part of computer research and development, a transition of programming from an art to a disciplined science must be effected.' Such a goal has been a continually recurring theme during the ensuing years; for example, we read in 1970 of the 'first steps toward transforming the art of programming into a science' [26]. Meanwhile we have actually succeeded in making our discipline a science, and in a remarkably simple way: merely by deciding to call it 'computer science.' Implicit in these remarks is the notion that there is something undesirable about an area of human activity that is classified as an 'art'; it has to be a Science before it has any real stature. On the other hand, I have been working for more than 12 years on a series of books called 'The Art of Computer Programming.' People frequently ask me why I picked such a title; and in fact some people apparently don't believe that I really did so, since I've seen at least one bibliographic reference to some books called 'The Act of Computer Programming.' In this talk I shall try to explain why I think 'Art' is the appropriate word. I will discuss what it means for something to be an art, in contrast to being a science; I will try to examine whether arts are good things or bad things; and I will try to show that a proper viewpoint of the subject will help us all to improve the quality of what we are now doing. One of the first times I was ever asked about the title of my books was in 1966, during the last previous ACM national meeting held in Southern California. This was before any of the books were published, and I recall having lunch with a friend at the convention hotel. He knew how conceited I was, already at that time, so he asked if I was
going to call my books 'An Introduction to Don Knuth.' I replied that, on the contrary, I was naming the books after him. His name: Art Evans. (The Art of Computer Programming, in person.) From this story we can conclude that the word 'art' has more than one meaning. In fact, one of the nicest things about the word is that it is used in many different senses, each of which is quite appropriate in connection with computer programming. While preparing this talk, I went to the library to find out what people have written about the word 'art' through the years; and after spending several fascinating days in the stacks, I came to the conclusion that 'art' must be one of the most interesting words in the English language.
The Arts of Old If we go back to Latin roots, we find ars, artis meaning 'skill.' It is perhaps significant that the corresponding Greek word was τέχνη, the root of both 'technology' and 'technique.' Nowadays when someone speaks of 'art' you probably think first of 'fine arts' such as painting and sculpture, but before the twentieth century the word was generally used in quite a different sense. Since this older meaning of 'art' still survives in many idioms, especially when we are contrasting art with science, I would like to spend the next few minutes talking about art in its classical sense. In medieval times, the first universities were established to teach the seven so-called 'liberal arts,' namely grammar, rhetoric, logic, arithmetic, geometry, music, and astronomy. Note that this is quite different from the curriculum of today's liberal arts colleges, and that at least three of the original seven liberal arts are important components of computer science. At that time, an 'art' meant something devised by man's intellect, as opposed to activities derived from nature or instinct; 'liberal' arts were liberated or free, in contrast to manual arts such as plowing (cf. [6]). During the middle ages the word 'art' by itself usually meant logic [4], which usually meant the study of syllogisms.
Science vs. Art The word 'science' seems to have been used for many years in about the same sense as 'art'; for example, people spoke also of the seven liberal sciences, which were the same as the seven liberal arts [1]. Duns Scotus in the thirteenth century called logic 'the Science of Sciences, and the Art of Arts' (cf. [12, p. 34f]). As civilization and learning developed, the words took on more and more independent meanings, 'science' being used to stand for knowledge, and 'art' for the application of knowledge. Thus, the science of astronomy was the basis for the art of navigation. The situation was almost exactly like the way in which we now distinguish between 'science' and 'engineering.'
Many authors wrote about the relationship between art and science in the nineteenth century, and I believe the best discussion was given by John Stuart Mill. He said the following things, among others, in 1843 [28]: Several sciences are often necessary to form the groundwork of a single art. Such is the complication of human affairs, that to enable one thing to be done, it is often requisite to know the nature and properties of many things.... Art in general consists of the truths of Science, arranged in the most convenient order for practice, instead of the order which is the most convenient for thought. Science groups and arranges its truths so as to enable us to take in at one view as much as possible of the general order of the universe. Art ... brings together from parts of the field of science most remote from one another, the truths relating to the production of the different and heterogeneous conditions necessary to each effect which the exigencies of practical life require.
As I was looking up these things about the meanings of 'art,' I found that authors have been calling for a transition from art to science for at least two centuries. For example, the preface to a textbook on mineralogy, written in 1784, said the following [17]: 'Previous to the year 1780, mineralogy, though tolerably understood by many as an Art, could scarce be deemed a Science.' According to most dictionaries 'science' means knowledge that has been logically arranged and systematized in the form of general 'laws.' The advantage of science is that it saves us from the need to think things through in each individual case; we can turn our thoughts to higher-level concepts. As John Ruskin wrote in 1853 [32]: 'The work of science is to substitute facts for appearances, and demonstrations for impressions.' It seems to me that if the authors I studied were writing today, they would agree with the following characterization: Science is knowledge which we understand so well that we can teach it to a computer; and if we don't fully understand something, it is an art to deal with it. Since the notion of an algorithm or a computer program provides us with an extremely useful test for the depth of our knowledge about any given subject, the process of going from an art to a science means that we learn how to automate something. Artificial intelligence has been making significant progress, yet there is a huge gap between what computers can do in the foreseeable future and what ordinary people can do. The mysterious insights that people have when speaking, listening, creating, and even when they are programming, are still beyond the reach of science; nearly everything we do is still an art. From this standpoint it is certainly desirable to make computer programming a science, and we have indeed come a long way in the 15 years since the publication of the remarks I quoted at the beginning of this talk. Fifteen years ago computer programming was so badly understood that hardly anyone even thought about proving programs correct; we just fiddled with a program until we 'knew' it worked. At that time we didn't even know how to express the concept that a
program was correct, in any rigorous way. It is only in recent years that we have been learning about the processes of abstraction by which programs are written and understood; and this new knowledge about programming is currently producing great payoffs in practice, even though few programs are actually proved correct with complete rigor, since we are beginning to understand the principles of program structure. The point is that when we write programs today, we know that we could in principle construct formal proofs of their correctness if we really wanted to, now that we understand how such proofs are formulated. This scientific basis is resulting in programs that are significantly more reliable than those we wrote in former days when intuition was the only basis of correctness. The field of 'automatic programming' is one of the major areas of artificial intelligence research today. Its proponents would love to be able to give a lecture entitled 'Computer Programming as an Artifact' (meaning that programming has become merely a relic of bygone days), because their aim is to create machines that write programs better than we can, given only the problem specification. Personally I don't think such a goal will ever be completely attained, but I do think that their research is extremely important, because everything we learn about programming helps us to improve our own artistry. In this sense we should continually be striving to transform every art into a science: in the process, we advance the art. I can't resist telling another story relating science and art. Several years ago when I visited the University of Chicago, I noticed two signs as I entered one of the buildings. One of them said 'Information Science,' and it had an arrow pointing to the right; the other said 'Information,' and its arrow pointed to the left. In other words, it was one way for the Science, but the other way for the Art of Information.
Science and Art Our discussion indicates that computer programming is by now both a science and an art, and that the two aspects nicely complement each other. Apparently most authors who examine such a question come to this same conclusion, that their subject is both a science and an art, whatever their subject is (cf. [25]). I found a book about elementary photography, written in 1893, which stated that 'the development of the photographic image is both an art and a science' [13]. In fact, when I first picked up a dictionary in order to study the words 'art' and 'science,' I happened to glance at the editor's preface, which began by saying, 'The making of a dictionary is both a science and an art.' The editor of Funk & Wagnall's dictionary [27] observed that the painstaking accumulation and classification of data about words has a scientific character, while a well-chosen phrasing of definitions demands the ability to write with economy and precision: 'The science without the art is likely to be ineffective; the art without the science is certain to be inaccurate.'
When preparing this talk I looked through the card catalog at Stanford library to see how other people have been using the words 'art' and 'science' in the titles of their books. This turned out to be quite interesting. For example, I found two books entitled The Art of Playing the Piano [5, 15], and others called The Science of Pianoforte Technique [10], The Science of Pianoforte Practice [30]. There is also a book called The Art of Piano Playing: A Scientific Approach [22]. Then I found a nice little book entitled The Gentle Art of Mathematics [31], which made me somewhat sad that I can't honestly describe computer programming as a 'gentle art.' I had known for several years about a book called The Art of Computation, published in San Francisco, 1879, by a man named C. Frusher Howard [14]. This was a book on practical business arithmetic that had sold over 400,000 copies in various editions by 1890. I was amused to read the preface, since it shows that Howard's philosophy and the intent of his title were quite different from mine; he wrote: 'A knowledge of the Science of Number is of minor importance; skill in the Art of Reckoning is absolutely indispensable.' Several books mention both science and art in their titles, notably The Science of Being and Art of Living by Maharishi Mahesh Yogi [24]. There is also a book called The Art of Scientific Discovery [11], which analyzes how some of the great discoveries of science were made. So much for the word 'art' in its classical meaning. Actually when I chose the title of my books, I wasn't thinking primarily of art in this sense, I was thinking more of its current connotations. Probably the most interesting book which turned up in my search was a fairly recent work by Robert E. Mueller called The Science of Art [29]. Of all the books I've mentioned, Mueller's comes closest to expressing what I want to make the central theme of my talk today, in terms of real artistry as we now understand the term. He observes: 'It was once thought that the imaginative outlook of the artist was death for the scientist. And the logic of science seemed to spell doom to all possible artistic flights of fancy.' He goes on to explore the advantages which actually do result from a synthesis of science and art. A scientific approach is generally characterized by the words logical, systematic, impersonal, calm, rational, while an artistic approach is characterized by the words aesthetic, creative, humanitarian, anxious, irrational. It seems to me that both of these apparently contradictory approaches have great value with respect to computer programming. Emma Lehmer wrote in 1956 that she had found coding to be 'an exacting science as well as an intriguing art' [23]. H. S. M. Coxeter remarked in 1957 that he sometimes felt 'more like an artist than a scientist' [7]. This was at the time C. P. Snow was beginning to voice his alarm at the growing polarization between 'two cultures' of educated people [34, 35]. He pointed out that we need to combine scientific and artistic values if we are to make real progress.
Works of Art When I'm sitting in an audience listening to a long lecture, my attention usually starts to wane at about this point in the hour. So I wonder, are you getting a little tired of my harangue about 'science' and 'art'? I really hope that you'll be able to listen carefully to the rest of this, anyway, because now comes the part about which I feel most deeply. When I speak about computer programming as an art, I am thinking primarily of it as an art form, in an aesthetic sense. The chief goal of my work as educator and author is to help people learn how to write beautiful programs. It is for this reason I was especially pleased to learn recently [33] that my books actually appear in the Fine Arts Library at Cornell University. (However, the three volumes apparently sit there neatly on the shelf, without being used, so I'm afraid the librarians may have made a mistake by interpreting my title literally.) My feeling is that when we prepare a program, it can be like composing poetry or music; as Andrei Ershov has said [9], programming can give us both intellectual and emotional satisfaction, because it is a real achievement to master complexity and to establish a system of consistent rules. Furthermore when we read other people's programs, we can recognize some of them as genuine works of art. I can still remember the great thrill it was for me to read the listing of Stan Poley's SOAP II assembly program in 1958; you probably think I'm crazy, and styles have certainly changed greatly since then, but at the time it meant a great deal to me to see how elegant a system program could be, especially by comparison with the heavy-handed coding found in other listings I had been studying at the same time. The possibility of writing beautiful programs, even in assembly language, is what got me hooked on programming in the first place. Some programs are elegant, some are exquisite, some are sparkling. My claim is that it is possible to write grand programs, noble programs, truly magnificent ones! I discussed this recently with Michael Fischer, who suggested that computer programmers should begin to sell their original programs, as works of art, to collectors. The ACM could set up a panel to certify the authenticity of each genuinely new piece of code; then discriminating dealers and a new class of professionals called program critics would establish appropriate market values. This would be a nice way to raise our salaries if we could get it started.
Taste and Style In a more serious vein, I'm glad that the idea of style in programming is now coming to the forefront at last, and I hope that most of you have seen the excellent little book on Elements of Programming Style by Kernighan and Plauger [16]. In this connection it is most important for us all to remember that there is no one 'best' style; everybody has
his own preferences, and it is a mistake to try to force people into an unnatural mold. We often hear the saying, 'I don't know anything about art, but I know what I like.' The important thing is that you really like the style you are using; it should be the best way you prefer to express yourself. Edsger Dijkstra stressed this point in the preface to his Short Introduction to the Art of Programming [8]: It is my purpose to transmit the importance of good taste and style in programming, [but] the specific elements of style presented serve only to illustrate what benefits can be derived from 'style' in general. In this respect I feel akin to the teacher of composition at a conservatory: He does not teach his pupils how to compose a particular symphony, he must help his pupils to find their own style and must explain to them what is implied by this. (It has been this analogy that made me talk about 'The Art of Programming.')
Now we must ask ourselves, What is good style, and what is bad style? We should not be too rigid about this in judging other people's work. The early nineteenth-century philosopher Jeremy Bentham put it this way [3, Bk. 3, Ch. 1]: Judges of elegance and taste consider themselves as benefactors to the human race, whilst they are really only the interrupters of their pleasure.... There is no taste which deserves the epithet good, unless it be the taste for such employments which, to the pleasure actually produced by them, conjoin some contingent or future utility: there is no taste which deserves to be characterized as bad, unless it be a taste for some occupation which has a mischievous tendency.
When we apply our own prejudices to 'reform' someone else's taste, we may be unconsciously denying him some entirely legitimate pleasure. That's why I don't condemn a lot of things programmers do, even though I would never enjoy doing them myself. The important thing is that they are creating something they feel is beautiful. In the passage I just quoted, Bentham does give us some advice about certain principles of aesthetics which are better than others, namely the 'utility' of the result. We have some freedom in setting up our personal standards of beauty, but it is especially nice when the things we regard as beautiful are also regarded by other people as useful. I must confess that I really enjoy writing computer programs; and I especially enjoy writing programs which do the greatest good, in some sense. There are many senses in which a program can be 'good,' of course. In the first place, it's especially good to have a program that works correctly. Secondly it is often good to have a program that won't be hard to change, when the time for adaptation arises. Both of these goals are achieved when the program is easily readable and understandable to a person who knows the appropriate language. Another important way for a production program to be good is for it to interact gracefully with its users, especially when recovering from human errors in the input data. It's a real art to compose meaningful error messages or to design flexible input formats which are not error-prone.
Another important aspect of program quality is the efficiency with which the computer's resources are actually being used. I am sorry to say that many people nowadays are condemning program efficiency, telling us that it is in bad taste. The reason for this is that we are now experiencing a reaction from the time when efficiency was the only reputable criterion of goodness, and programmers in the past have tended to be so preoccupied with efficiency that they have produced needlessly complicated code; the result of this unnecessary complexity has been that net efficiency has gone down, due to difficulties of debugging and maintenance. The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times; premature optimization is the root of all evil (or at least most of it) in programming. We shouldn't be penny wise and pound foolish, nor should we always think of efficiency in terms of so many percent gained or lost in total running time or space. When we buy a car, many of us are almost oblivious to a difference of $50 or $100 in its price, while we might make a special trip to a particular store in order to buy a 50¢ item for only 25¢. My point is that there is a time and place for efficiency; I have discussed its proper role in my paper on structured programming, which appears in the current issue of Computing Surveys [21].
Less Facilities: More Enjoyment One rather curious thing I've noticed about aesthetic satisfaction is that our pleasure is significantly enhanced when we accomplish something with limited tools. For example, the program of which I personally am most pleased and proud is a compiler I once wrote for a primitive minicomputer which had only 4096 words of memory, 16 bits per word. It makes a person feel like a real virtuoso to achieve something under such severe restrictions. A similar phenomenon occurs in many other contexts. For example, people often seem to fall in love with their Volkswagens but rarely with their Lincoln Continentals (which presumably run much better). When I learned programming, it was a popular pastime to do as much as possible with programs that fit on only a single punched card. I suppose it's this same phenomenon that makes APL enthusiasts relish their 'one-liners.' When we teach programming nowadays, it is a curious fact that we rarely capture the heart of a student for computer science until he has taken a course which allows 'hands on' experience with a minicomputer. The use of our large-scale machines with their fancy operating systems and languages doesn't really seem to engender any love for programming, at least not at first. It's not obvious how to apply this principle to increase programmers' enjoyment of their work. Surely programmers would groan if their manager suddenly announced that the new machine will have only half
as much memory as the old. And I don't think anybody, even the most dedicated 'programming artists,' can be expected to welcome such a prospect, since nobody likes to lose facilities unnecessarily. Another example may help to clarify the situation: Film-makers strongly resisted the introduction of talking pictures in the 1920's because they were justly proud of the way they could convey words without sound. Similarly, a true programming artist might well resent the introduction of more powerful equipment; today's mass storage devices tend to spoil much of the beauty of our old tape sorting methods. But today's filmmakers don't want to go back to silent films, not because they're lazy but because they know it is quite possible to make beautiful movies using the improved technology. The form of their art has changed, but there is still plenty of room for artistry. How did they develop their skill? The best film-makers through the years usually seem to have learned their art in comparatively primitive circumstances, often in other countries with a limited movie industry. And in recent years the most important things we have been learning about programming seem to have originated with people who did not have access to very large computers. The moral of this story, it seems to me, is that we should make use of the idea of limited resources in our own education. We can all benefit by doing occasional 'toy' programs, when artificial restrictions are set up, so that we are forced to push our abilities to the limit. We shouldn't live in the lap of luxury all the time, since that tends to make us lethargic. The art of tackling miniproblems with all our energy will sharpen our talents for the real problems, and the experience will help us to get more pleasure from our accomplishments on less restricted equipment. In a similar vein, we shouldn't shy away from 'art for art's sake'; we shouldn't feel guilty about programs that are just for fun. I once got a great kick out of writing a one-statement ALGOL program that invoked an inner product procedure in such an unusual way that it calculated the mth prime number, instead of an inner product [19]. Some years ago the students at Stanford were excited about finding the shortest FORTRAN program which prints itself out, in the sense that the program's output is identical to its own source text. The same problem was considered for many other languages. I don't think it was a waste of time for them to work on this; nor would Jeremy Bentham, whom I quoted earlier, deny the 'utility' of such pastimes [3, Bk. 3, Ch. 1]. 'On the contrary,' he wrote, 'there is nothing, the utility of which is more incontestable. To what shall the character of utility be ascribed, if not to that which is a source of pleasure?'
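A minimal sketch of such a self-reproducing program may help the reader; it is given here in Python, a language that postdates the lecture, purely as an illustration and not as one of the Stanford FORTRAN solutions:

    # A self-reproducing program ("quine"): its output is identical
    # to its own source text.
    s = 's = %r\nprint(s %% s)'
    print(s % s)

Running it prints the two lines above verbatim, which is exactly the sense of 'prints itself out' intended here.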
Providing Beautiful Tools Another characteristic of modern art is its emphasis on creativity. It seems that many artists these days couldn't care less about creating beautiful things; only the novelty of an idea is important. I'm not recommending that computer programming should be like modern
art in this sense, but it does lead me to an observation that I think is important. Sometimes we are assigned to a programming task which is almost hopelessly dull, giving us no outlet whatsoever for any creativity; and at such times a person might well come to me and say, 'So programming is beautiful? It's all very well for you to declaim that I should take pleasure in creating elegant and charming programs, but how am I supposed to make this mess into a work of art?' Well, it's true, not all programming tasks are going to be fun. Consider the 'trapped housewife,' who has to clean off the same table every day: there's not room for creativity or artistry in every situation. But even in such cases, there is a way to make a big improvement: it is still a pleasure to do routine jobs if we have beautiful things to work with. For example, a person will really enjoy wiping off the dining room table, day after day, if it is a beautifully designed table made from some fine quality hardwood. Sometimes we're called upon to perform a symphony, instead of to compose; and it's a pleasure to perform a really fine piece of music, although we are suppressing our freedom to the dictates of the composer. Sometimes a programmer is called upon to be more a craftsman than an artist; and a craftsman's work is quite enjoyable when he has good tools and materials to work with. Therefore I want to address my closing remarks to the system programmers and the machine designers who produce the systems that the rest of us must work with. Please, give us tools that are a pleasure to use, especially for our routine assignments, instead of providing something we have to fight with. Please, give us tools that encourage us to write better programs, by enhancing our pleasure when we do so. It's very hard for me to convince college freshmen that programming is beautiful, when the first thing I have to tell them is how to punch 'slash slash JOB equals so-and-so.' Even job control languages can be designed so that they are a pleasure to use, instead of being strictly functional. Computer hardware designers can make their machines much more pleasant to use, for example, by providing floating-point arithmetic which satisfies simple mathematical laws. The facilities presently available on most machines make the job of rigorous error analysis hopelessly difficult, but properly designed operations would encourage numerical analysts to provide better subroutines which have certified accuracy (cf. [20, p. 204]). Let's consider also what software designers can do. One of the best ways to keep up the spirits of a system user is to provide routines that he can interact with. We shouldn't make systems too automatic, so that the action always goes on behind the scenes; we ought to give the programmer-user a chance to direct his creativity into useful channels. One thing all programmers have in common is that they enjoy working with machines; so let's keep them in the loop. Some tasks are best done by machine, while others are best done by human
insight; and a properly designed system will find the right balance. (I have been trying to avoid misdirected automation for many years, cf. [18].) Program measurement tools make a good case in point. For years programmers have been unaware of how the real costs of computing are distributed in their programs. Experience indicates that nearly everybody has the wrong idea about the real bottlenecks in his programs; it is no wonder that attempts at efficiency go awry so often, when a programmer is never given a breakdown of costs according to the lines of code he has written. His job is something like that of a newly married couple who try to plan a balanced budget without knowing how much the individual items like food, shelter, and clothing will cost. All that we have been giving programmers is an optimizing compiler, which mysteriously does something to the programs it translates but which never explains what it does. Fortunately we are now finally seeing the appearance of systems which give the user credit for some intelligence; they automatically provide instrumentation of programs and appropriate feedback about the real costs. These experimental systems have been a huge success, because they produce measurable improvements, and especially because they are fun to use, so I am confident that it is only a matter of time before the use of such systems is standard operating procedure. My paper in Computing Surveys [21] discusses this further, and presents some ideas for other ways in which an appropriate interactive routine can enhance the satisfaction of user programmers. Language designers also have an obligation to provide languages that encourage good style, since we all know that style is strongly influenced by the language in which it is expressed. The present surge of interest in structured programming has revealed that none of our existing languages is really ideal for dealing with program and data structure, nor is it clear what an ideal language should be. Therefore I look forward to many careful experiments in language design during the next few years.
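A modern reader can get a feeling for the kind of per-routine cost feedback described above from a small profiling sketch; the choice of Python and its standard cProfile module is merely illustrative of the idea, not of the experimental systems mentioned in the lecture:

    import cProfile

    def slow_sum(n):
        # Deliberately naive loop, so the profiler has something to report.
        total = 0
        for i in range(n):
            total += i * i
        return total

    def main():
        for _ in range(50):
            slow_sum(100_000)

    if __name__ == "__main__":
        # Prints call counts and cumulative time per function, i.e. a
        # breakdown of where the running time actually goes.
        cProfile.run("main()", sort="cumulative")

Even this crude breakdown usually contradicts one's intuition about where the time is spent, which is exactly the point made above about misjudged bottlenecks.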
Summary To summarize: We have seen that computer programming is an art, because it applies accumulated knowledge to the world, because it requires skill and ingenuity, and especially because it produces objects of beauty. A programmer who subconsciously views himself as an artist will enjoy what he does and will do it better. Therefore we can be glad that people who lecture at computer conferences speak about the state of the Art. Note: The second paragraph on page 5 ('I can't resist ...'), the fifth paragraph on page 7 ('I discussed this recently ...'), and the first paragraph on page 11 ('Sometimes we're called upon ...') were included in the lecture given in San Diego, but were added too late to appear in the originally published version.
References
1. Bailey, Nathan. The Universal Etymological English Dictionary. T. Cox, London, 1727. See 'Art,' 'Liberal,' and 'Science.'
2. Bauer, Walter F., Juncosa, Mario L., and Perlis, Alan J. ACM publication policies and plans. J. ACM 6 (Apr. 1959), 121-122.
3. Bentham, Jeremy. The Rationale of Reward. Trans. from Théorie des peines et des récompenses, 1811, by Richard Smith, J. & H. L. Hunt, London, 1825.
4. The Century Dictionary and Cyclopedia 1. The Century Co., New York, 1889.
5. Clementi, Muzio. The Art of Playing the Piano. Trans. from L'art de jouer le pianoforte by Max Vogrich. Schirmer, New York, 1898.
6. Colvin, Sidney. 'Art.' Encyclopaedia Britannica, eds. 9, 11, 12, 13, 1875-1926.
7. Coxeter, H. S. M. Convocation address, Proc. 4th Canadian Math. Congress, 1957, pp. 8-10.
8. Dijkstra, Edsger W. EWD316: A Short Introduction to the Art of Programming. T. H. Eindhoven, The Netherlands, Aug. 1971.
9. Ershov, A. P. Aesthetics and the human factor in programming. Comm. ACM 15 (July 1972), 501-505.
10. Fielden, Thomas. The Science of Pianoforte Technique. Macmillan, London, 1927.
11. Gore, George. The Art of Scientific Discovery. Longmans, Green, London, 1878.
12. Hamilton, William. Lectures on Logic 1. Wm. Blackwood, Edinburgh, 1874.
13. Hodges, John A. Elementary Photography: The 'Amateur Photographer' Library 7. London, 1893. Sixth ed., revised and enlarged, 1907, p. 58.
14. Howard, C. Frusher. Howard's Art of Computation and golden rule for equation of payments for schools, business colleges and self-culture .... C. F. Howard, San Francisco, 1879.
15. Hummel, J. N. The Art of Playing the Piano Forte. Boosey, London, 1827.
16. Kernighan, B. W., and Plauger, P. J. The Elements of Programming Style. McGraw-Hill, New York, 1974.
17. Kirwan, Richard. Elements of Mineralogy. Elmsly, London, 1784.
18. Knuth, Donald E. Minimizing drum latency time. J. ACM 8 (Apr. 1961), 119-150.
19. Knuth, Donald E., and Merner, J. N. ALGOL 60 confidential. Comm. ACM 4 (June 1961), 268-272.
20. Knuth, Donald E. Seminumerical Algorithms: The Art of Computer Programming 2. Addison-Wesley, Reading, Mass., 1969.
21. Knuth, Donald E. Structured programming with go to statements. Computing Surveys 6 (Dec. 1974), 261-301.
22. Kochevitsky, George. The Art of Piano Playing: A Scientific Approach. Summy-Birchard, Evanston, Ill., 1967.
23. Lehmer, Emma. Number theory on the SWAC. Proc. Symp. Applied Math. 6, Amer. Math. Soc. (1956), 103-108.
24. Mahesh Yogi, Maharishi. The Science of Being and Art of Living. Allen & Unwin, London, 1963.
25. Malevinsky, Moses L. The Science of Playwriting. Brentano's, New York, 1925.
26. Manna, Zohar, and Pnueli, Amir. Formalization of properties of functional programs. J. ACM 17 (July 1970), 555-569.
27. Marckwardt, Albert H. Preface to Funk and Wagnall's Standard College Dictionary. Harcourt, Brace & World, New York, 1963, vii.
28. Mill, John Stuart. A System of Logic, Ratiocinative and Inductive. London, 1843. The quotations are from the introduction, §2, and from Book 6, Chap. 11 (12 in later editions), §5.
29. Mueller, Robert E. The Science of Art. John Day, New York, 1967.
30. Parsons, Albert Ross. The Science of Pianoforte Practice. Schirmer, New York, 1886.
31. Pedoe, Daniel. The Gentle Art of Mathematics. English U. Press, London, 1953.
32. Ruskin, John. The Stones of Venice 3. London, 1853.
33. Salton, G. A. Personal communication, June 21, 1974.
34. Snow, C. P. The two cultures. The New Statesman and Nation 52 (Oct. 6, 1956), 413-414.
35. Snow, C. P. The Two Cultures: and a Second Look. Cambridge University Press, 1964.
Categories and Subject Descriptors: D.1.2 [Software]: Programming Techniques - automatic programming; K.6.1 [Management of Computing and Information Systems]: Project and People Management; K.7.3 [Computing Milieux]: The Computing Profession - general
General Terms: Performance, Standards, Theory
Logic and Programming Languages DANA S. SCOTT University of Oxford [Dana S. Scott was one of two recipients of the 1976 Turing Award presented at the ACM Annual Conference in Houston on October 20. M. O. Rabin's paper, Complexity of Computations, appears on page 319.] Logic has been long interested in whether answers to certain questions are computable in principle, since the outcome puts bounds on the possibilities of formalization. More recently, precise comparisons in the efficiency of decision methods have become available through the developments in complexity theory. These, however, are applications to logic, and a big question is whether methods of logic have significance in the other direction for the more applied parts of computability theory. Programming languages offer an obvious opportunity as their syntactic formalization is well advanced; however, the semantical theory can hardly be said to be complete. Though we have many examples, we have still to give wide-ranging mathematical answers to these queries: What is a machine? What is a computable process? How (or how well) does a machine simulate a process? Programs naturally enter in giving descriptions of processes. The definition of the precise meaning of a program then requires us to explain what are the objects of computation (in a way, the statics of the problem) and how they are to be transformed (the dynamics). So far the theories of automata and of nets, though most interesting for dynamics, have formalized only a portion of the field, and there has been perhaps too much concentration on the finite-state and algebraic aspects. It would seem that the understanding of higher-level program features involves us with infinite objects and forces us to pass through several levels of explanation to go from the conceptual ideas to the final simulation on a real machine. These levels can be made mathematically exact if we can find the right abstractions to represent the necessary structures. Author's present address: Department of Computer Science, Carnegie-Mellon University, Pittsburgh, PA 15213.
The experience of many independent workers with the method of data types as lattices (or partial orderings) under an information content ordering, and with their continuous mappings, has demonstrated the flexibility of this approach in providing definitions and proofs, which are clean and without undue dependence on implementations. Nevertheless much remains to be done in showing how abstract conceptualizations can (or cannot) be actualized before we can say we have a unified theory.
As the eleven-and-one-halfth Turing lecturer, it gives me the greatest pleasure to share this prize and this podium with Michael Rabin. Alas, we have not had much chance to collaborate since the time of writing our 1959 paper, and that is for me a great loss. I work best in collaboration, but it is not easy to arrange the right conditions - especially in interdisciplinary subjects and where people are separated by international boundaries. But I have followed his career with deep interest and admiration. As you have heard today, Rabin has been able to apply ideas from logic having to do with decidability, computability, and complexity to questions of real mathematical and computational interest. He, and many others, are actively creating new methods of analysis for a wide class of algorithmic problems which has great promise for future development. These aspects of the theory of computation are, however, quite outside my competence, since over the years my interests have diverged from those of Rabin. From the late 1960's my own work has concentrated on seeing whether the ideas of logic can be used to give a better conceptual understanding of programming languages. I shall therefore not speak today in detail about my past joint work with Rabin but about my own development and some plans and hopes for the future. The difficulty of obtaining a precise overall view of a language arose during the period when committees were constructing mammoth 'universal' computer languages. We stand now, it seems, on the doorstep of yet another technological revolution during which our ideas of machines and software are going to be completely changed. (I have just noted that the ACM is campaigning again to eliminate the word 'machine' altogether.) The big, big languages may prove to be not very adaptable, but I think the problem of semantics will surely remain. I would like to think that the work - again done in collaboration with other people, most notably with the late Christopher Strachey - has made a basic contribution to the foundations of the semantic enterprise. Well, we shall see. I hope too that the research on semantics will not too much longer remain disjoint from investigations like Rabin's.
An Apology and a Nonapology As a rule, I think, public speakers should not apologize: it only makes the audience uncomfortable. At such a meeting as this, however, one apology is necessary (along with a disclaimer). Those of you who know my background may well be reminded of Sir Nicholas Gimcrack, hero of the play The Virtuoso. It was written
in 1676 by Thomas Shadwell to poke a little fun at the remarkable experiments then being done before the Royal Society of London. At one point in the play, Sir Nicholas is discovered lying on a table trying to learn to swim by imitating the motions of a frog in a bowl of water. When asked whether he had ever practiced swimming in water, he replies that he hates water and would never go near it! 'I content myself,' he said, 'with the speculative part of swimming; I care not for the practical. I seldom bring anything to use .... Knowledge is the ultimate end.' Now though our ultimate aims are the same, I hasten to disassociate myself from the attitude of disdain for the practical. It is, however, the case that I have no practical experience in present-day programming; by necessity I have had to confine myself to speculative programming, gaining what knowledge I could at second hand by watching various frogs and other creatures. Luckily for me, some of the frogs could speak. With some of them I have had to learn an alien language, and perhaps I have not understood what they were about. But I have tried to read and to keep up with developments. I apologize for not being a professional in the programming field, and I certainly, therefore, will not try to sermonize: many of the past Turing lecturers were well equipped for that, and they have given us very good advice. What I try to do is to make some results from logic which seem to me to be relevant to computing comprehensible to those who could make use of them. I have also tried to add some results of my own, and I have to leave it to you to judge how successful my activities have been. Most fortunately today I do not have to apologize for the lack of published material; if I had written this talk the day I received the invitation, I might have. But in the August number of Communications we have the excellent tutorial paper by Robert Tennent [14] on denotational semantics, and I very warmly recommend it as a starting place. Tennent not only provides serious examples going well beyond what Strachey and I ever published, but he also has a well-organized bibliography. Only last month the very hefty book by Milne and Strachey [9] was published. Strachey's shockingly sudden and untimely death unfortunately prevented him from ever starting on the revision of the manuscript. We have lost much in style and insight (to say nothing of inspiration) by Strachey's passing, but Robert Milne has carried out their plan admirably. What is important about the book is that it pushes the discussion of a complex language through from the beginning to the end. Some may find the presentation too rigorous, but the point is that the semantics of the book is not mere speculation but the real thing. It is the product of serious and informed thought; thus, one has the detailed evidence to decide whether the approach is going to be fruitful. Milne has organized the exposition so one can grasp the language on many levels down to the final compiler. He has not tried to sidestep any difficulties. Though not lighthearted and biting, as Strachey often was in conversation, the book is a very fitting
memorial to the last phase of Strachey's work, and it contains any number of original contributions by Milne himself. (I can say these things because I had no hand in writing the book myself.) Recently published also is the volume by Donahue [4]. This is a not too long and very readable work that discusses issues not covered, or not covered from the same point of view, by the previously mentioned references. Again, it was written quite independently of Strachey and me, and I was very glad to see its appearance. Soon to come out is the textbook by Joe Stoy [13]. This will complement these other works and should be very useful for teaching, because Stoy has excellent experience in lecturing, both at Oxford University and at M.I.T. On the foundational side, my own revised paper (Scott [12]) will be out any moment in the SIAM Journal on Computing. As it was written from the point of view of enumeration operators in more 'classical' recursion theory, its relevance to practical computing may not be at all clear at first glance. Thus I am relieved that these other references explain the uses of the theory in the way I intended. Fortunately all the above authors cite the literature extensively, and so I can neglect going into further historical detail today. May I only say that many other people have taken up various of the ideas of Strachey and myself, and you can find out about their work not only from these bibliographies but also, for example, from two recent conference proceedings, Manes [7] and Böhm [1]. If I tried to list names here, I would only leave some out - those that have had contact with me know how much I appreciate their interest and contributions.
Some Personal Notes I was born in California and I began my work in mathematical logic as an undergraduate at Berkeley in the early 1950's. The primary influence was, of course, Alfred Tarski together with his many colleagues and students at the University of California. Among many other things, I learned recursive function theory from Raphael and Julia Robinson, whom I want to thank for numerous insights. Also at the time through self-study I found out about the λ-calculus of Curry and Church (which, literally, gave me nightmares at first). Especially important for my later ideas was the study of Tarski's semantics and his definition of truth for formalized languages. These concepts are still being hotly debated today in the philosophy of natural language, as you know. I have tried to carry over the spirit of Tarski's approach to algorithmic languages, which at least have the advantage of being reasonably well formalized syntactically. Whether I have found the right denotations of terms as guided by the schemes of Strachey (and worked out by many hands) is what needs discussion. I am the first to say that not all problems are solved just by giving denotations to some languages. Languages like (the very pure) λ-calculus are well served but many programming concepts are still not covered.
My graduate work was completed in Princeton in 1958 under the direction of Alonzo Church, who also supervised Michael Rabin's thesis. Rabin and I met at that time, but it was during an IBM summer job in 1957 that we did our joint work on automata theory. It was hardly carried out in a vacuum, since many people were working in the area; but we did manage to throw some basic ideas into sharp relief. At the time I was certainly thinking of a project of giving a mathematical definition of a machine. I feel now that the finite-state approach is only partially successful and without much in the way of practical implication. True, many physical machines can be modelled as finite-state devices; but the finiteness is hardly the most important feature, and the automata point of view is often rather superficial. Two later developments made automata seem to me more interesting, at least mathematically: the Chomsky hierarchy and the connections with semigroups. From the algebraic point of view (to my taste at least) Eilenberg, the Euclid of automata theory, in his books [5] has said pretty much the last word. I note too that he has avoided abstract category theory. Categories may lead to good things (cf. Manes [7]), but too early a use can only make things too difficult to understand. That is my personal opinion. In some ways the Chomsky hierarchy is in the end disappointing. Context-free languages are very important and everyone has to learn about them, but it is not at all clear to me what comes next - if anything. There are so many other families of languages, but not much order has come out of the chaos. I do not think the last word has been said here. It was not knowing where to turn, and being displeased with what I thought was excessive complexity, that made me give up working in automata theory. I tried once in a certain way to connect automata and programming languages by suggesting a more systematic way of separating the machine from the program. Eilenberg heartily disliked the idea, but I was glad to see the recent book by Clark and Cowell [2] where, at the suggestion of Peter Landin, the idea is carried out very nicely. It is not algebra, I admit, but it seems to me to be (elementary, somewhat theoretical) programming. I would like to see the next step, which would fall somewhere in between Manna [8] and Milne-Strachey [9]. It was at Princeton that I had my first introduction to real machines - the now almost prehistoric von Neumann machine. I have to thank Forman Acton for that. Old fashioned as it seems now, it was still real; and Hale Trotter and I had great fun with it. How very sad I was indeed to see the totally dead corpse in the Smithsonian Museum with no indication at all what it was like when it was alive. From Princeton I went to the University of Chicago to teach in the Mathematics Department for two years. Though I met Bob Ashenhurst and Nick Metropolis at that time, my stay was too short to learn from them; and as usual there is always too great a distance between departments. (Of course, since I am only writing about connections with computing, I am not trying to explain my other activities in mathematics and logic.)
From Chicago I went to Berkeley for three years. There I met many computer people through Harry Huskey and René de Vogelaere, the latter of whom introduced me to the details of Algol 60. There was, however, no Computer Science Department as such in Berkeley at that time. For personal reasons I decided soon to move to Stanford. Thus, though I taught a course in Theory of Computation at Berkeley for one semester, my work did not amount to anything. One thing I shall always regret about Berkeley and Computing is that I never learned the details of the work of Dick and Emma Lehmer, because I very much admire the way they get results in number theory by machine. Now that we have the Four-Color Problem solved by machine, we are going to see great activity in large-scale, special-purpose theorem proving. I am very sorry not to have any hand in it. Stanford had from the early 1960's one of the best Computer Science departments in the country, as everyone agrees. You will wonder why I ever left. The answer may be that my appointment was a mixed one between the departments of Philosophy and Mathematics. I suppose my personal difficulty is knowing where I should be and what I want to do. But personal failings aside, I had excellent contacts in Forsythe's remarkable department and very good relations with the graduates, and we had many lively courses and seminars. John McCarthy and Pat Suppes, and people from their groups, had much influence on me and my views of computing. In Logic, with my colleagues Sol Feferman and Georg Kreisel, we had a very active group. Among the many Ph.D. students in Logic, the work of Richard Platek had a few years later, when I saw how to use some of his ideas, much influence on me. At this point I had a year's leave in Amsterdam which proved unexpectedly to be a turning point in my intellectual development. I shall not go into detail, since the story is complicated; but the academic year 1968/69 was one of deep crisis for me, and it is still very painful for me to think back on it. As luck would have it, however, Pat Suppes had proposed my name for the IFIP Working Group 2.2 (now called Formal Description of Programming Concepts). At that time Tom Steel was Chairman, and it was at the Vienna meeting that I first met Christopher Strachey. If the violence of the arguments in this group is any indication, I am really glad I was not involved with anything important like the Algol committee. But I suppose fighting is therapeutic: it brings out the best and the worst in people. And in any case it is good to learn to defend oneself. Among the various combatants I liked the style and ideas of Strachey best, though I think he often overstated his case; but what he said convinced me I should learn more. It was only at the end of my year in Amsterdam that I began to talk with Jaco de Bakker, and it was only through correspondence over that summer that our ideas took definite shape. The Vienna IBM Group that I met through WG 2.2 influenced me at this stage
also. In the meantime I had decided to leave Stanford for the Princeton Philosophy Department; but since I was in Europe with my family, I requested an extra term's leave so I could visit Strachey in Oxford in the fall of 1969. That term was one of feverish activity for me; indeed, for several days, I felt as though I had some kind of real brain fever. The collaboration with Strachey in those few weeks was one of the best experiences in my professional life. We were able to repeat it once more the next summer in Princeton, though at a different level of excitement. Sadly, by the time I came to Oxford permanently in 1972, we were both so involved in teaching and administrative duties that real collaboration was nearly impossible. Strachey also became very discouraged over the continuing lack of research funds and help in teaching, and he essentially withdrew himself to write his book with Milne. (It was a great effort and I do not think it did his health any good; how I wish he could have seen it published.)

Returning to 1969, what I started to do was to show Strachey that he was all wrong and that he ought to do things in quite another way. He had originally had his attention drawn to the λ-calculus by Roger Penrose and had developed a handy style of using this notation for functional abstraction in explaining programming concepts. It was a formal device, however, and I tried to argue that it had no mathematical basis. I have told this story before, so to make it short, let me only say that in the first place I had actually convinced him by 'superior logic' to give up the type-free λ-calculus. But then, as one consequence of my suggestions followed the other, I began to see that computable functions could be defined on a great variety of spaces. The real step was to see that function spaces were good spaces, and I remember quite clearly that the logician Andrzej Mostowski, who was also visiting Oxford at the time, simply did not believe that the kind of function spaces I defined had a constructive description. But when I saw they actually did, I began to suspect that the possibilities of using function spaces might just be more surprising than we had supposed. Once the doubt about the enforced rigidity of logical types that I had tried to push onto Strachey was there, it was not long before I had found one of the spaces isomorphic with its own function space, which provides a model of the 'type-free' λ-calculus. The rest of the story is in the literature.

(An interesting sidelight on the λ-calculus is the role of Alan Turing. He studied at Princeton with Church and connected computability with the (formal) λ-calculus around 1936/37. Illuminating details of how his work (and the further influence of λ-calculus) was viewed by Steve Kleene can be found in Crossley [3]. (Of course Turing's later ideas about computers very much influenced Strachey, but this is not the time for a complete historical analysis.) Though I never met Turing (he died in 1954), the second-hand connections through Church and Strachey and my present Oxford colleagues, Les Fox and Robin Gandy, are rather close, though by the time I was a graduate student at Princeton, Church was no longer working on the λ-calculus, and we never discussed his experiences with Turing.)

It is very strange that my λ-calculus models were not discovered earlier by someone else; but I am most encouraged that new kinds of models with new properties are now being discovered, such as the 'powerdomains' of Gordon Plotkin [10]. I am personally convinced that the field is well established, both on the theoretical and on the applied side. John Reynolds and Robert Milne have independently introduced a new inductive method of proving equivalences, and the interesting work of Robin Milner on LCF and its proof techniques continues at Edinburgh. This direction of proving things about models was started off by David Park's theorem on relating the fixed-point operator and the so-called paradoxical combinator of the λ-calculus, and it opened up a study of the infinitary, yet computable operators which continues now along many lines. Another direction of work goes on in Novosibirsk under Yu. L. Ershov, and quite surprising connections with topological algebra have been pointed out to me by Karl H. Hofmann and his group. There is no space here even to begin to list the many contributors.

In looking forward to the next few years, I am particularly happy to report at this meeting that Tony Hoare has recently accepted the Chair of Computation at Oxford, now made permanent since Strachey's passing. This opens up all sorts of new possibilities for collaboration, both with Hoare and with the many students he will attract after he takes up the post next year. And, as you know, the practical aspects of use and design of computer languages and of programming methodology will certainly be stressed at Oxford (as Strachey did too, I hasten to add), and this is all to the good; but there is also excellent hope for theoretical investigations.
Some Semantic Structures

Turning now to technical details, I should like to give a brief indication of how my construction goes, and how it is open to considerable variation. It will not be possible to argue here that these are the 'right' abstractions, and that is why it is a relief to have those references mentioned earlier so easily available. Perhaps the quickest indication of what I am getting at is provided by two domains: O, the domain of Boolean values, and J = O^ℕ, the domain of infinite sequences of Boolean values. The first main point is that we are going to accept the idea of partial functions represented mathematically by giving the functions from time to time partial values. As far as O goes the idea is very trivial: we write

O = {true, false, ⊥},

where ⊥ is an extra element called 'the undefined.' In order to keep ⊥ in its place we impose a partial ordering ⊑ on the domain O, where

x ⊑ y iff either x = ⊥ or x = y,

for all x, y ∈ O. It will not mean all that much here in O, but we can read '⊑' as saying that the information content of x is contained in the information content of y. The element ⊥ has, therefore, empty information content. The scheme is illustrated in Figure 1.
FIGURE 1. The Boolean values.
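For readers who want to compute with these definitions, the flat domain O and its information ordering can be sketched in a few lines of a functional language such as Haskell (an illustrative sketch only; the type and function names are not part of the discussion above, and Nothing plays the part of ⊥):

    -- A sketch of the flat domain O = {true, false, ⊥}.
    -- Nothing stands for the undefined element ⊥.
    type O = Maybe Bool

    -- The information ordering: x ⊑ y iff x = ⊥ or x = y.
    below :: O -> O -> Bool
    below Nothing  _ = True
    below (Just a) y = y == Just a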
(An aside: in many publications I have advocated using lattices, which as partial orderings have a 'top' element ⊤ as well as a 'bottom' element ⊥, so that we can assert ⊥ ⊑ x ⊑ ⊤ for all elements x of the domain. This suggestion has not been well received for many reasons I cannot go into here. Some discussion of its reasonableness is to be found in Scott [12], but of course the structure studied there is special. Probably it is best neither to exclude nor include a ⊤; and, for simplicity, I shall not mention it further today.)

Looking now at J, the domain of sequences, we shall employ a shorthand notation where subscripts indicate the coordinates; thus,

x = (x₀, x₁, x₂, ..., xₙ, ...)

for all x ∈ J. Each term is such that xₙ ∈ O, because J = O^ℕ. Technically, a 'direct product' of structures is intended, so we define ⊑ on J by

x ⊑ y iff xₙ ⊑ yₙ for all n.

Intuitively, a sequence y is 'better' in information than a sequence x iff some of the coordinates of x which were 'undefined' have passed over into 'being defined' when we go from x to y. For example, each of the following sequences stands in the relation ⊑ to the following ones:

(⊥, ⊥, ⊥, ⊥, ...),
(true, ⊥, ⊥, ⊥, ...),
(true, false, ⊥, ⊥, ...),
(true, false, true, ⊥, ...).

Clearly this list could be expanded infinitely, and there is also no need to treat the coordinates in the strict order n = 0, 1, 2, .... Thus the ⊑ relation on J is far more complex than the original ⊑ on O.
An obvious difference between O and J is that O is finite while J has infinitely many elements. In J, also, certain elements have infinite information content, whereas this is not so in O. However, we can employ the partial ordering in J to explain abstractly what we mean by 'finite approximation' and 'limits.' The sequences listed above are finite in J because they have only finitely many coordinates distinct from ⊥. Given any x ∈ J we can cut it down to a finite element x↾m by defining

(x↾m)ₙ = xₙ, if n < m; ⊥, if not.

It is easy to see from our definitions that x↾m ⊑ x↾(m+1) ⊑ x, so that the x↾m are 'building up' to a limit; and, in fact, that limit is the original x. We write this as

x = ⊔ₘ (x↾m),

where ⊔ is the sup or least-upper-bound operation in the partially ordered set J. The point is that J has many sups; and, whenever we have elements y⁽⁰⁾ ⊑ y⁽¹⁾ ⊑ ... ⊑ y⁽ⁿ⁾ ⊑ ... in J (regardless of whether they are finite or not), we can define the 'limit' z, where

z = ⊔ₙ y⁽ⁿ⁾.
(Hint: ask yourself what the coordinates of z will have to be.) We cannot rehash the details here, but J really is a topological space, and z really is a limit. Thus, though J is infinitary, there is a good chance that we can let manipulations fall back on finitary operations and be able to discuss computable operations on J and on more complex domains.

Aside from the sequence and partial-order structure on J, we can define many kinds of algebraic structure. That is why J is a good example. For instance, up to isomorphism the space satisfies

J ≅ J × J,

where on the right-hand side the usual binary direct product is intended. Abstractly, the domain J × J consists of all ordered pairs (x, y) with x, y ∈ J, where we define ⊑ on J × J by

(x, y) ⊑ (x′, y′) iff x ⊑ x′ and y ⊑ y′.

But for all practical purposes there is no harm in identifying (x, y) with a sequence already in J; indeed, coordinatewise we can define

(x, y)ₙ = xₖ, if n = 2k; yₖ, if n = 2k + 1.
The above criterion for ⊑ between pairs will be verified, and we can say that J has a (bi-unique) pairing function. The pairing function (·,·) on J has many interesting properties. In effect we have already noted that it is monotonic (intuitively: as you increase the information contents of x and y, you increase the information content of (x, y)). More importantly, (·,·) is continuous in the following precise sense:

(x, y) = ⊔ₘ (x↾m, y↾m),

which means that (·,·) behaves well under taking finite approximations. And this is only one example; the whole theory of monotone and continuous functions is very important to this approach.

Even with the small amount of structure we have put on J, a language suggests itself. For the sake of illustration, we concentrate on the two isomorphisms satisfied by J; namely, J ≅ O × J and J ≅ J × J. The first identifies J as having to do with (infinite) sequences of Boolean values, while the second reminds us of the above discussion of the pairing function. In Figure 2 we set down a quick BNF definition of a language with two kinds of expressions: Boolean (the β's) and sequential (the σ's).
β ::= true | false | head σ

σ ::= β* | βσ | tail σ | if β then σ′ else σ″ | even σ | odd σ | merge σ′ σ″

FIGURE 2. A brief language.
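A direct transcription of this grammar into datatypes of a functional language (Haskell here; the constructor names are only suggestive and form no part of the definition above) may help fix ideas:

    -- Boolean expressions (the β's) and sequential expressions (the σ's) of Figure 2.
    data B = TrueE | FalseE | Head S
    data S = Star B            -- β*  : a constant infinite sequence
           | Cons B S          -- βσ  : prefix a Boolean value to a sequence
           | Tail S            -- tail σ
           | If B S S          -- if β then σ′ else σ″
           | Even S            -- even σ
           | Odd S             -- odd σ
           | Merge S S         -- merge σ′ σ″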
This language is very brief indeed: no variables, no declarations, no assignments, only a miniature selection of constant terms. Note that the notation chosen was meant to make the meanings of these expressions obvious. Thus, if σ denotes a sequence x, then head σ has got to denote the first term x₀ of the sequence x. As x₀ ∈ O and x ∈ J, we are keeping our types straight. More precisely, for each expression we can define its (constant) value, so that ⟦β⟧ ∈ O for Boolean expressions β, and ⟦σ⟧ ∈ J for sequential expressions σ. Since there are ten clauses in the BNF language definition, we would have to set down ten equations to completely specify the semantics of this example; we shall content ourselves with selected equations here. To carry on with the remark in the last paragraph:

⟦head σ⟧ = ⟦σ⟧₀.
On the other side, the expression β* creates an infinite sequence of Boolean values:

⟦β*⟧ = (⟦β⟧, ⟦β⟧, ⟦β⟧, ...).

(This notation, though rough, is clear.) In the same vein:
⟦βσ⟧ = (⟦β⟧, ⟦σ⟧₀, ⟦σ⟧₁, ⟦σ⟧₂, ...);
while we have
⟦tail σ⟧ = (⟦σ⟧₁, ⟦σ⟧₂, ⟦σ⟧₃, ...).
Further along:
⟦even σ⟧ = (⟦σ⟧₀, ⟦σ⟧₂, ⟦σ⟧₄, ...)
and
⟦merge σ′ σ″⟧ = (⟦σ′⟧, ⟦σ″⟧).
These should be enough to give the idea. It should also be clear that what we have is really only a selection, because J satisfies many more isomorphisms (e.g., J ≅ J × J × J), and there are many, many more ways of tearing apart and recombining sequences of Boolean values - all in quite computable ways.
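Because all of these equations involve only computable operations on sequences, they can be transcribed almost literally into a lazy functional language. The following Haskell sketch (using the datatypes B and S given after Figure 2, and letting lazy lists with divergence stand in for the domain J, so that ⊥ is simply a nonterminating value) is one reading of the selected equations; it is an illustration, not part of the construction itself:

    type OVal = Bool      -- a defined Boolean value; ⊥ is represented by divergence
    type JVal = [OVal]    -- infinite lazy lists stand in for J

    evalB :: B -> OVal
    evalB TrueE    = True
    evalB FalseE   = False
    evalB (Head s) = head (evalS s)                    -- ⟦head σ⟧ = ⟦σ⟧₀

    evalS :: S -> JVal
    evalS (Star b)    = repeat (evalB b)               -- ⟦β*⟧ = (⟦β⟧, ⟦β⟧, ⟦β⟧, ...)
    evalS (Cons b s)  = evalB b : evalS s              -- ⟦βσ⟧ = (⟦β⟧, ⟦σ⟧₀, ⟦σ⟧₁, ...)
    evalS (Tail s)    = tail (evalS s)                 -- ⟦tail σ⟧ = (⟦σ⟧₁, ⟦σ⟧₂, ...)
    evalS (If b s t)  = if evalB b then evalS s else evalS t
    evalS (Even s)    = everyOther (evalS s)           -- (⟦σ⟧₀, ⟦σ⟧₂, ⟦σ⟧₄, ...)
    evalS (Odd s)     = everyOther (tail (evalS s))    -- (⟦σ⟧₁, ⟦σ⟧₃, ⟦σ⟧₅, ...)
    evalS (Merge s t) = interleave (evalS s) (evalS t) -- the pairing (⟦σ′⟧, ⟦σ″⟧)

    everyOther :: [a] -> [a]
    everyOther (x : _ : rest) = x : everyOther rest
    everyOther xs             = xs

    interleave :: [a] -> [a] -> [a]
    interleave (x : xs) ys = x : interleave ys xs
    interleave []       ys = ys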
The Function Space

It should not be concluded that the previous section contains the whole of my idea: this would leave us on the elementary level of program schemes (e.g., van Emden-Kowalski [6] or Manna [8] (last chapter)). What some people call 'Fixpoint Semantics' (I myself do not like the abbreviated word 'fixpoint') is only a first chapter. The second chapter already includes procedures that take procedures as arguments - higher type procedures - and we are well beyond program schemes. True, fixed-point techniques can be applied to these higher-type procedures, but that is not the only thing to say in their favor.

The semantic structure needed to make this definite is the function space. I have tried to stress this from the start in 1969, but many people have not understood me well enough. Suppose D′ and D″ are two domains of the kind we have been discussing (say, O or O × J or J or something worse). By [D′ → D″] let us understand the domain of all monotone and continuous functions f mapping D′ into D″. This is what I mean by a function space. It is not all that difficult mathematically, but it is not all that obvious either that [D′ → D″] is again a domain 'of the same kind,' though admittedly of a more complicated structure. I cannot prove it here, but at least I can define the ⊑ relation on the function space:

f ⊑ g iff f(x) ⊑ g(x) for all x ∈ D′.

Treating functions as abstract objects is nothing new; what has to be checked is that they are also quite reasonable objects of computation. The relation ⊑ on [D′ → D″] is the first step in checking this, and it leads to a well-behaved notion of a finite approximation to a function. (Sorry! There is no time to be more precise here.) And when that is seen,
the way is open to iteration of function spaces, as in [[D′ → D″] → D‴]. This is not as crazy as it might seem at first, since our theory identifies f(x) as a computable binary function of variable f and variable x. Thus, as an operation, it can be seen as an element of a function space:

[[D′ → D″] × D′ → D″].
This is only the start of a theory of these operators (or combinators, as Curry and Church call them). Swallowing all this, let us attempt an infinite iteration of function spaces beginning with J. We define J₀ = J and Jₙ₊₁ = [Jₙ → J]. Thus

J₁ = [J → J],   J₂ = [[J → J] → J],   J₃ = [[[J → J] → J] → J],

and so on.
You just have to believe me that this is all highly constructive (because we employ only the continuous functions). It is fairly clear that there is a natural sense in which this is cumulative. In the first place J is 'contained in' [J → J] as a subspace: identify each x ∈ J with the corresponding constant function in [J → J]. Clearly by our definitions this is an order-preserving correspondence. Also each f ∈ [J → J] is (crudely) approximated by a constant, namely f(⊥) (this is the 'best' element ⊑ all the values f(x)). This relationship of subspace and approximation between spaces will be denoted by J < [J → J]. Pushing higher we can say

[J → J] < [[J → J] → J],
but now for a different reason. Once we fix the reason why J < [J → J], we have to respect the function-space structure of the higher Jₙ. In the special case, suppose f ∈ [J → J]. We want to inject f into the next space, so call it i(f) ∈ [[J → J] → J]. If g is any element in [J → J] we are being required to define i(f)(g) ∈ J. Now, since g ∈ [J → J], we have the original projection backwards j(g) = g(⊥) ∈ J. So, as this is the best approximation to g we can get in J, we are stuck with defining

i(f)(g) = f(j(g)).

This gives the next map i: J₁ → J₂. To define the corresponding projection j: J₂ → J₁, we argue in a similar way and define

j(φ)(x) = φ(i(x)),

where we have φ ∈ [[J → J] → J], and i(x) ∈ [J → J] is the constant function with value x. With this progression in mind there is no
trouble in using an exactly similar plan in defining i: J₂ → J₃ and j: J₃ → J₂. And so on, giving the exact sense to the cumulation:

J₀ < J₁ < J₂ < ... < Jₙ < ....
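For concreteness, the first of these embedding-projection pairs can be written out schematically. The following Haskell fragment is only a sketch: it takes the base domain to be an arbitrary type with a distinguished bottom element, and it makes no attempt to enforce the continuity conditions that the real construction requires.

    -- A schematic sketch of the embedding-projection pairs described above.
    -- 'bot' plays the part of ⊥ in the base domain J; continuity is not enforced.
    bot :: j
    bot = undefined

    -- J is contained in [J -> J]: each x becomes the constant function with value x.
    i0 :: j -> (j -> j)
    i0 x = \_ -> x

    -- The projection back: approximate a function f by its value at bottom.
    j0 :: (j -> j) -> j
    j0 f = f bot

    -- The next pair, exactly as in the text: i(f)(g) = f(j(g)) and j(φ)(x) = φ(i(x)).
    i1 :: (j -> j) -> ((j -> j) -> j)
    i1 f = \g -> f (j0 g)

    j1 :: ((j -> j) -> j) -> (j -> j)
    j1 phi = \x -> phi (i0 x)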
Having all this, it would be a pity not to pass to the limit (this time with spaces), and this is just what I want you to accept. What is obtained by decreeing that there is a space

J∞ = limₙ Jₙ ?

Since the separate stages interact thus:

Jₙ₊₁ = [Jₙ → J],

it is not so queer to guess that

J∞ = [J∞ → J]

holds (at least up to isomorphism). It does, but I can only indicate the bare bones of the reason (and reasonableness) of this isomorphism. In the first place the separate spaces Jₙ have been placed one inside another, which not only makes a tower of spaces but also respects the combination f(x) as an algebraic operation of two variables. J∞ in a precise sense is the completion of the union of the Jₙ; that is, within these spaces we can think of towers of functions each approximating the next (by the use of the i and j mappings), so that in J∞ these towers are given limits. If the towers are truncated, then we can argue that each space Jₙ < J∞.

Now why the isomorphism of J∞? Take a function (continuous!) in [J∞ → J]. By its very continuity it will be determined by what it does to the finite levels Jₙ. That is, it will have better and better approximations in [Jₙ → J]; thus, the approximations 'live' in the finite levels of J∞. Their limit ought to just give us back the function in [J∞ → J] we started with. In the same way any element in J∞ can be regarded as a limit of approximate functions in the spaces [Jₙ → J]. Admittedly there are details to check; but, in the limit, there is no real difference between J∞ and [J∞ → J]: the infinite level of higher type functions is its own function space. (As always, this is a consequence of continuity.)

Much structure is lurking under the surface here; in fact more than I thought at first. In Figure 3, I illustrate a chain of isomorphisms that shows that J∞ gets much of the character of J with which we are already familiar. The reasons why these are valid are as follows. First, we treat J∞ as a function space. Now pairs of functions can be isomorphically put into correspondence with functions taking on
pairs of values. But J × J ≅ J as we already know. The final step just puts functions on J∞ back to elements of J∞.

J∞ × J∞ ≅ [J∞ → J] × [J∞ → J] ≅ [J∞ → J × J] ≅ [J∞ → J] ≅ J∞

FIGURE 3. The first chain of isomorphisms.
Using the isomorphism of Figure 3, we can gain the further result illustrated in Figure 4. The reasons are fairly clear. Take a function from J∞ to J∞. The values of this function can be construed as functions.

[J∞ → J∞] ≅ [J∞ → [J∞ → J]] ≅ [J∞ × J∞ → J] ≅ [J∞ → J] ≅ J∞

FIGURE 4. The second chain of isomorphisms.
But consider that a function whose values are functions is just (up to isomorphism of spaces) a function of two arguments. As we have seen in Figure 3, J∞ × J∞ ≅ J∞, so we obtain the final simplification (up to isomorphism).

What we have done is to sketch why J∞, the space of functions of infinite type, is a model of the λ-calculus. The λ-calculus is a language (not illustrated here) where every term can be regarded as denoting both an argument (or value) and a function at the same time. The formal details are pretty simple, but the semantical details are what we have been looking at: every element of the space J∞ can be taken at the same time as being an element of the space [J∞ → J∞]; thus, J∞ provides a model, but it is just one of many. Without being able to be explicit, a denotational (or mathematical) semantics was outlined for a pure language of procedures (also pairs and all the other stuff in Figure 2). In the references cited on real programming languages, all the other features (assignments, sequencing, declarations, etc., etc.) are added. What has been established in these references is that the method of semantical definition does in fact work. I hope you will look into it.
References

1. Böhm, C., Ed. λ-Calculus and Computer Science Theory. Lecture Notes in Computer Science, Vol. 37, Springer-Verlag, New York, 1975.
2. Clark, K. L., and Cowell, D. F. Programs, Machines, and Computation. McGraw-Hill, New York, 1976.
3. Crossley, J. N., Ed. Algebra and Logic: Papers from the 1974 Summer Res. Inst., Australian Math. Soc., Monash U., Clayton, Victoria, Australia. Lecture Notes in Mathematics, Vol. 450, Springer-Verlag, 1976.
4. Donahue, J. E. Complementary Definitions of Programming Language Semantics. Lecture Notes in Computer Science, Vol. 42, Springer-Verlag, 1976.
5. Eilenberg, S. Automata, Languages, and Machines. Academic Press, New York, 1974.
6. van Emden, M. H., and Kowalski, R. A. The semantics of predicate logic as a programming language. J. ACM 23, 4 (Oct. 1976), 733-742.
7. Manes, E. G., Ed. Category Theory Applied to Computation and Control. First Int. Symp., Lecture Notes in Computer Science, Vol. 25, Springer-Verlag, New York, 1976.
8. Manna, Z. Mathematical Theory of Computation. McGraw-Hill, New York, 1974.
9. Milne, R., and Strachey, C. A Theory of Programming Language Semantics. Chapman and Hall, London, and Wiley, New York, 2 Vols., 1976.
10. Plotkin, G. D. A powerdomain construction. SIAM J. Comptng. 5 (1976), 452-487.
11. Rabin, M. O., and Scott, D. S. Finite automata and their decision problems. IBM J. Res. and Develop. 3 (1959), 114-125.
12. Scott, D. S. Data types as lattices. SIAM J. Comptng. 5 (1976), 522-587.
13. Stoy, J. E. Denotational Semantics: The Scott-Strachey Approach to Programming Language Theory. M.I.T. Press, Cambridge, Mass.
14. Tennent, R. D. The denotational semantics of programming languages. Comm. ACM 19, 8 (Aug. 1976), 437-453.
Categories and Subject Descriptors: F.3.2 [Logics and Meanings of Programs]: Semantics of Programming Languages-algebraic approaches to semantics; denotational semantics;
F.4.1 [Mathematical Logic and Formal Languages]: Mathematical Logic-lambda calculus and related systems; logic programming
General Terms: Languages, Theory
Additional Key Words and Phrases: Automata, context-free languages
Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs

JOHN BACKUS
IBM Research Laboratory, San Jose

The 1977 ACM Turing Award was presented to John Backus at the ACM Annual Conference in Seattle, October 17. In introducing the recipient, Jean E. Sammet, Chairman of the Awards Committee, made the following comments and read a portion of the final citation. The full announcement is in the September 1977 issue of Communications, page 681.

'Probably there is nobody in the room who has not heard of Fortran and most of you have probably used it at least once, or at least looked over the shoulder of someone who was writing a Fortran program. There are probably almost as many people who have heard the letters BNF but don't necessarily know what they stand for. Well, the B is for Backus, and the other letters are explained in the formal citation. These two contributions, in my opinion, are among the half dozen most important technical contributions to the computer field and both were made by John Backus (which in the Fortran case also involved some colleagues). It is for these contributions that he is receiving this year's Turing Award.

Author's present address: 91 Saint Germain Ave., San Francisco, CA 94114.
The short form of his citation is for 'profound, influential, and lasting contributions to the design of practical high-level programming systems, notably through his work on Fortran, and for seminal publication of formal procedures for the specifications of programming languages.'

The most significant part of the full citation is as follows: '...Backus headed a small IBM group in New York City during the early 1950s. The earliest product of this group's efforts was a high-level language for scientific and technical computations called Fortran. This same group designed the first system to translate Fortran programs into machine language. They employed novel optimizing techniques to generate fast machine-language programs. Many other compilers for the language were developed, first on IBM machines, and later on virtually every make of computer. Fortran was adopted as a U.S. national standard in 1966.

During the latter part of the 1950s, Backus served on the international committees which developed Algol 58 and a later version, Algol 60. The language Algol, and its derivative compilers, received broad acceptance in Europe as a means for developing programs and as a formal means of publishing the algorithms on which the programs are based.

In 1959, Backus presented a paper at the UNESCO conference in Paris on the syntax and semantics of a proposed international algebraic language. In this paper, he was the first to employ a formal technique for specifying the syntax of programming languages. The formal notation became known as BNF - standing for 'Backus Normal Form,' or 'Backus Naur Form' to recognize the further contributions by Peter Naur of Denmark.

Thus, Backus has contributed strongly both to the pragmatic world of problem-solving on computers and to the theoretical world existing at the interface between artificial languages and computational linguistics. Fortran remains one of the most widely used programming languages in the world. Almost all programming languages are now described with some type of formal syntactic definition.'

Conventional programming languages are growing ever more enormous, but not stronger. Inherent defects at the most basic level cause them to be both fat and weak: their primitive word-at-a-time style of programming inherited from their common ancestor - the von Neumann computer, their close coupling of semantics to state transitions, their division of programming into a world of expressions and a world of statements, their inability to effectively use powerful combining forms for building new programs from existing ones, and their lack of useful mathematical properties for reasoning about programs.

An alternative functional style of programming is founded on the use of combining forms for creating programs. Functional programs deal with structured data, are often nonrepetitive and nonrecursive, are hierarchically constructed, do not name their arguments, and do not require the complex machinery of procedure declarations to become generally applicable. Combining forms can use high-level programs to build still higher level ones in a style not possible in conventional languages.

Associated with the functional style of programming is an algebra of programs whose variables range over programs and whose operations are combining forms. This algebra can be used to transform programs and to solve equations whose 'unknowns' are programs in much the same way one transforms equations in high school algebra.
These transformations are given by algebraic laws and are carried out in the same language in which programs are written. Combining forms are chosen not only for their programming power but also for the power of their
associated algebraic laws. General theorems of the algebra give the detailed behavior and termination conditions for large classes of programs.

A new class of computing systems uses the functional programming style both in its programming language and in its state transition rules. Unlike von Neumann languages, these systems have semantics loosely coupled to states - only one state transition occurs per major computation.
Introduction

I deeply appreciate the honor of the ACM invitation to give the 1977 Turing Lecture and to publish this account of it with the details promised in the lecture. Readers wishing to see a summary of this paper should turn to Section 16, the last section.
1
Conventional Programming Languages: Fat and Flabby

Programming languages appear to be in trouble. Each successive language incorporates, with a little cleaning up, all the features of its predecessors plus a few more. Some languages have manuals exceeding 500 pages; others cram a complex description into shorter manuals by using dense formalisms. The Department of Defense has current plans for a committee-designed language standard that could require a manual as long as 1,000 pages. Each new language claims new and fashionable features, such as strong typing or structured control statements, but the plain fact is that few languages make programming sufficiently cheaper or more reliable to justify the cost of producing and learning to use them.

Since large increases in size bring only small increases in power, smaller, more elegant languages such as Pascal continue to be popular. But there is a desperate need for a powerful methodology to help us think about programs, and no conventional language even begins to meet that need. In fact, conventional languages create unnecessary confusion in the way we think about programs.

For twenty years programming languages have been steadily progressing toward their present condition of obesity; as a result, the study and invention of programming languages have lost much of their excitement. Instead, it is now the province of those who prefer to work with thick compendia of details rather than wrestle with new ideas. Discussions about programming languages often resemble medieval debates about the number of angels that can dance on the head of a pin instead of exciting contests between fundamentally differing concepts.

Many creative computer scientists have retreated from inventing languages to inventing tools for describing them. Unfortunately, they have been largely content to apply their elegant new tools to studying the warts and moles of existing languages. After examining the appalling type structure of conventional languages, using the elegant tools developed by Dana Scott, it is surprising that so many of us remain passively content with that structure instead of energetically searching for new ones.

The purpose of this article is twofold: first, to suggest that basic defects in the framework of conventional languages make their expressive weakness and their cancerous growth inevitable, and second, to suggest some alternative avenues of exploration toward the design of new kinds of languages.
2
Models of Computing Systems

Underlying every programming language is a model of a computing system that its programs control. Some models are pure abstractions, some are represented by hardware, and others by compiling or interpretive programs. Before we examine conventional languages more closely, it is useful to make a brief survey of existing models as an introduction to the current universe of alternatives. Existing models may be crudely classified by the criteria outlined below.
2.1 Criteria for Models

2.1.1 Foundations. Is there an elegant and concise mathematical description of the model? Is it useful in proving helpful facts about the behavior of the model? Or is the model so complex that its description is bulky and of little mathematical use?

2.1.2 History Sensitivity. Does the model include a notion of storage, so that one program can save information that can affect the behavior of a later program? That is, is the model history sensitive?

2.1.3 Type of Semantics. Does a program successively transform states (which are not programs) until a terminal state is reached (state-transition semantics)? Are states simple or complex? Or can a 'program' be successively reduced to simpler 'programs' to yield a final 'normal form program,' which is the result (reduction semantics)?

2.1.4 Clarity and Conceptual Usefulness of Programs. Are programs of the model clear expressions of a process or computation? Do they embody concepts that help us to formulate and reason about processes?
2.2 Classification of Models

Using the above criteria we can crudely characterize three classes of models for computing systems: simple operational models, applicative models, and von Neumann models.
2.2.1 Simple Operational Models. Examples: Turing machines, various automata. Foundations: concise and useful. History sensitivity: have storage, are history sensitive. Semantics: state transition with very simple states. Program clarity: programs unclear and conceptually not helpful.

2.2.2 Applicative Models. Examples: Church's lambda calculus [5], Curry's system of combinators [6], pure Lisp [17], functional programming systems described in this paper. Foundations: concise and useful. History sensitivity: no storage, not history sensitive. Semantics: reduction semantics, no states. Program clarity: programs can be clear and conceptually useful.

2.2.3 Von Neumann Models. Examples: von Neumann computers, conventional programming languages. Foundations: complex, bulky, not useful. History sensitivity: have storage, are history sensitive. Semantics: state transition with complex states. Program clarity: programs can be moderately clear, are not very useful conceptually.

The above classification is admittedly crude and debatable. Some recent models may not fit easily into any of these categories. For example, the

with this thirty year old concept. In its simplest form a von Neumann computer has three parts: a central processing unit (or CPU), a store, and a connecting tube that can transmit a single word between the CPU and the store (and send an address to the store). I propose to call this tube the von Neumann bottleneck. The task of a program is to change the contents of the
store in some major way; when one considers that this task must be accomplished entirely by pumping single words back and forth through the von Neumann bottleneck, the reason for its name becomes clear.

Ironically a large part of the traffic in the bottleneck is not useful data but merely names of data, as well as operations and data used only to compute such names. Before a word can be sent through the tube its address must be in the CPU; hence it must either be sent through the tube from the store or be generated by some CPU operation. If the address is sent from the store, then its address must either have been sent from the store or generated in the CPU, and so on. If, on the other hand, the address is generated in the CPU, it must be generated either by a fixed rule (e.g., 'add 1 to the program counter') or by an instruction that was sent through the tube, in which case its address must have been sent ... and so on.

Surely there must be a less primitive way of making big changes in the store than by pushing vast numbers of words back and forth through the von Neumann bottleneck. Not only is this tube a literal bottleneck for the data traffic of a problem, but, more importantly, it is an intellectual bottleneck that has kept us tied to word-at-a-time thinking instead of encouraging us to think in terms of the larger conceptual units of the task at hand. Thus programming is basically planning and detailing the enormous traffic of words through the von Neumann bottleneck, and much of that traffic concerns not significant data itself but where to find it.
4
Von Neumann Languages

Conventional programming languages are basically high-level, complex versions of the von Neumann computer. Our thirty year old belief that there is only one kind of computer is the basis of our belief that there is only one kind of programming language, the conventional - von Neumann - language. The differences between Fortran and Algol 68, although considerable, are less significant than the fact that both are based on the programming style of the von Neumann computer. Although I refer to conventional languages as 'von Neumann languages' to take note of their origin and style, I do not, of course, blame the great mathematician for their complexity. In fact, some might say that I bear some responsibility for that problem.

Von Neumann programming languages use variables to imitate the computer's storage cells; control statements elaborate its jump and test instructions; and assignment statements imitate its fetching, storing, and arithmetic. The assignment statement is the von Neumann
bottleneck of programming languages and keeps us thinking in word-at-a-time terms in much the same way the computer's bottleneck does.

Consider a typical program; at its center are a number of assignment statements containing some subscripted variables. Each assignment statement produces a one-word result. The program must cause these statements to be executed many times, while altering subscript values, in order to make the desired overall change in the store, since it must be done one word at a time. The programmer is thus concerned with the flow of words through the assignment bottleneck as he designs the nest of control statements to cause the necessary repetitions.

Moreover, the assignment statement splits programming into two worlds. The first world comprises the right sides of assignment statements. This is an orderly world of expressions, a world that has useful algebraic properties (except that those properties are often destroyed by side effects). It is the world in which most useful computation takes place.

The second world of conventional programming languages is the world of statements. The primary statement in that world is the assignment statement itself. All the other statements of the language exist in order to make it possible to perform a computation that must be based on this primitive construct: the assignment statement. This world of statements is a disorderly one, with few useful mathematical properties. Structured programming can be seen as a modest effort to introduce some order into this chaotic world, but it accomplishes little in attacking the fundamental problems created by the word-at-a-time von Neumann style of programming, with its primitive use of loops, subscripts, and branching flow of control.

Our fixation on von Neumann languages has continued the primacy of the von Neumann computer, and our dependency on it has made non-von Neumann languages uneconomical and has limited their development. The absence of full-scale, effective programming styles founded on non-von Neumann principles has deprived designers of an intellectual foundation for new computer architectures. (For a brief discussion of that topic, see Section 15.)

Applicative computing systems' lack of storage and history sensitivity is the basic reason they have not provided a foundation for computer design. Moreover, most applicative systems employ the substitution operation of the lambda calculus as their basic operation. This operation is one of virtually unlimited power, but its complete and efficient realization presents great difficulties to the machine designer. Furthermore, in an effort to introduce storage and to improve their efficiency on von Neumann computers, applicative systems have tended to become engulfed in a large von Neumann system. For example, pure Lisp is often buried in large extensions with many von Neumann features. The resulting complex systems offer little guidance to the machine designer.
5
Comparison of von Neumann and Functional Programs

To get a more detailed picture of some of the defects of von Neumann languages, let us compare a conventional program for inner product with a functional one written in a simple language to be detailed further on.
5.1
A von Neumann Program for Inner Product

c := 0
for i := 1 step 1 until n do
   c := c + a[i] × b[i]
Several properties of this program are worth noting:

(a) Its statements operate on an invisible 'state' according to complex rules.
(b) It is not hierarchical. Except for the right side of the assignment statement, it does not construct complex entities from simpler ones. (Larger programs, however, often do.)
(c) It is dynamic and repetitive. One must mentally execute it to understand it.
(d) It computes word-at-a-time by repetition (of the assignment) and by modification (of variable i).
(e) Part of the data, n, is in the program; thus it lacks generality and works only for vectors of length n.
(f) It names its arguments; it can only be used for vectors a and b. To become general, it requires a procedure declaration. These involve complex issues (e.g., call-by-name versus call-by-value).
(g) Its 'housekeeping' operations are represented by symbols in scattered places (in the for statement and the subscripts in the assignment). This makes it impossible to consolidate housekeeping operations, the most common of all, into single, powerful, widely useful operators. Thus in programming those operations one must always start again at square one, writing 'for i := ...' and 'for j := ...' followed by assignment statements sprinkled with i's and j's.
5.2 A Functional Program for Inner Product

Def Innerproduct = (Insert +) ∘ (ApplyToAll ×) ∘ Transpose

Or, in abbreviated form:

Def IP = (/+) ∘ (α×) ∘ Trans.

Composition (∘), Insert (/), and ApplyToAll (α) are functional forms that combine existing functions to form new ones. Thus f∘g is the function obtained by applying first g and then f, and αf is the function obtained by applying f to every member of the argument. If we write f:x for the result of applying f to the object x, then we can explain each step in evaluating Innerproduct applied to the pair of vectors ((1, 2, 3), (6, 5, 4)) as follows:

IP: ((1,2,3), (6,5,4))
  = (/+) ∘ (α×) ∘ Trans : ((1,2,3), (6,5,4))          Definition of IP
  = (/+) : ((α×) : (Trans : ((1,2,3), (6,5,4))))      Effect of composition, ∘
  = (/+) : ((α×) : ((1,6), (2,5), (3,4)))             Applying Transpose
  = (/+) : (× : (1,6), × : (2,5), × : (3,4))          Effect of ApplyToAll, α
  = (/+) : (6, 10, 12)                                Applying ×
  = + : (6, + : (10, 12))                             Effect of Insert, /
  = + : (6, 22)                                       Applying +
  = 28                                                Applying + again
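Readers who wish to experiment can mimic these three functional forms in any functional language. A rough Haskell counterpart (using ordinary lists for sequences; it makes no claim to reproduce the FP system defined later in the paper) is:

    -- Rough counterparts of the functional forms used above:
    -- Composition is (.), ApplyToAll is map, Insert is a fold.
    insert :: (a -> a -> a) -> [a] -> a
    insert f = foldr1 f                      -- (/f)

    applyToAll :: (a -> b) -> [a] -> [b]
    applyToAll = map                         -- (αf)

    transpose2 :: ([a], [a]) -> [(a, a)]
    transpose2 (xs, ys) = zip xs ys          -- Trans, for a pair of vectors

    ip :: Num a => ([a], [a]) -> a
    ip = insert (+) . applyToAll (uncurry (*)) . transpose2

    -- ip ([1,2,3], [6,5,4]) evaluates to 28, matching the trace above.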
Let us compare the properties of this program with those of the von Neumann program.

(a) It operates only on its arguments. There are no hidden states or complex transition rules. There are only two kinds of rules, one for applying a function to its argument, the other for obtaining the function denoted by a functional form such as composition, f∘g, or ApplyToAll, αf, when one knows the functions f and g, the parameters of the forms.
(b) It is hierarchical, being built from three simpler functions (+, ×, Trans) and three functional forms f∘g, αf, and /f.
(c) It is static and nonrepetitive, in the sense that its structure is helpful in understanding it without mentally executing it. For example, if one understands the action of the forms f∘g and αf and of the functions × and Trans, then one understands the action of α× and of (α×) ∘ Trans, and so on.
(d) It operates on whole conceptual units, not words; it has three steps; no step is repeated.
(e) It incorporates no data; it is completely general; it works for any pair of conformable vectors.
(f) It does not name its arguments; it can be applied to any pair of vectors without any procedure declaration or complex substitution rules.
(g) It employs housekeeping forms and functions that are generally useful in many other programs; in fact, only + and × are not concerned with housekeeping. These forms and functions can combine with others to create higher level housekeeping operators.

Section 14 sketches a kind of system designed to make the above functional style of programming available in a history-sensitive system with a simple framework, but much work remains to be done before the above applicative style can become the basis for elegant and practical programming languages. For the present, the above comparison exhibits a number of serious flaws in von Neumann programming languages and can serve as a starting point in an effort to account for their present fat and flabby condition.
6
Language Frameworks versus Changeable Parts

Let us distinguish two parts of a programming language. First, its framework, which gives the overall rules of the system, and second, its changeable parts, whose existence is anticipated by the framework but whose particular behavior is not specified by it. For example, the for statement, and almost all other statements, are part of Algol's framework but library functions and user-defined procedures are changeable parts. Thus the framework of a language describes its fixed features and provides a general environment for its changeable features.

Now suppose a language had a small framework which could accommodate a great variety of powerful features entirely as changeable parts. Then such a framework could support many different features and styles without being changed itself. In contrast to this pleasant possibility, von Neumann languages always seem to have an immense framework and very limited changeable parts. What causes this to happen? The answer concerns two problems of von Neumann languages.

The first problem results from the von Neumann style of word-at-a-time programming, which requires that words flow back and forth to the state, just like the flow through the von Neumann bottleneck. Thus a von Neumann language must have a semantics closely coupled to the state, in which every detail of a computation changes the state. The consequence of this semantics closely coupled to states is that every detail of every feature must be built into the state and its transition rules.
Thus every feature of a von Neumann language must be spelled out in stupefying detail in its framework. Furthermore, many complex features are needed to prop up the basically weak word-at-a-time style. The result is the inevitable rigid and enormous framework of a von Neumann language.
7
Changeable Parts and Combining Forms

The second problem of von Neumann languages is that their changeable parts have so little expressive power. Their gargantuan size is eloquent proof of this; after all, if the designer knew that all those complicated features, which he now builds into the framework, could be added later on as changeable parts, he would not be so eager to build them into the framework.

Perhaps the most important element in providing powerful changeable parts in a language is the availability of combining forms that can be generally used to build new procedures from old ones. Von Neumann languages provide only primitive combining forms, and the von Neumann framework presents obstacles to their full use.

One obstacle to the use of combining forms is the split between the expression world and the statement world in von Neumann languages. Functional forms naturally belong to the world of expressions; but no matter how powerful they are they can only build expressions that produce a one-word result. And it is in the statement world that these one-word results must be combined into the overall result. Combining single words is not what we really should be thinking about, but it is a large part of programming any task in von Neumann languages. To help assemble the overall result from single words these languages provide some primitive combining forms in the statement world - the for, while, and if-then-else statements - but the split between the two worlds prevents the combining forms in either world from attaining the full power they can achieve in an undivided world.

A second obstacle to the use of combining forms in von Neumann languages is their use of elaborate naming conventions, which are further complicated by the substitution rules required in calling procedures. Each of these requires a complex mechanism to be built into the framework so that variables, subscripted variables, pointers, file names, procedure names, call-by-value formal parameters, call-by-name formal parameters, and so on, can all be properly interpreted. All these names, conventions, and rules interfere with the use of simple combining forms.
8
APL versus Word-at-a-Time Programming

Since I have said so much about word-at-a-time programming, I must now say something about APL [12]. We owe a great debt to Kenneth Iverson for showing us that there are programs that are neither word-at-a-time nor dependent on lambda expressions, and for introducing us to the use of new functional forms. And since APL assignment statements can store arrays, the effect of its functional forms is extended beyond a single assignment.

Unfortunately, however, APL still splits programming into a world of expressions and a world of statements. Thus the effort to write one-line programs is partly motivated by the desire to stay in the more orderly world of expressions. APL has exactly three functional forms, called inner product, outer product, and reduction. These are sometimes difficult to use, there are not enough of them, and their use is confined to the world of expressions. Finally, APL semantics is still too closely coupled to states. Consequently, despite the greater simplicity and power of the language, its framework has the complexity and rigidity characteristic of von Neumann languages.
9
Von Neumann Languages Lack Useful Mathematical Properties

So far we have discussed the gross size and inflexibility of von Neumann languages; another important defect is their lack of useful mathematical properties and the obstacles they present to reasoning about programs. Although a great amount of excellent work has been published on proving facts about programs, von Neumann languages have almost no properties that are helpful in this direction and have many properties that are obstacles (e.g., side effects, aliasing).

Denotational semantics [23] and its foundations [20, 21] provide an extremely helpful mathematical understanding of the domain and function spaces implicit in programs. When applied to an applicative language (such as that of the 'recursive programs' of [16]), its foundations provide powerful tools for describing the language and for proving properties of programs. When applied to a von Neumann language, on the other hand, it provides a precise semantic description and is helpful in identifying trouble spots in the language. But the complexity of the language is mirrored in the complexity of the description, which is a bewildering collection of productions, domains, functions, and equations that is only slightly more helpful in proving facts about programs than the reference manual of the language, since it is less ambiguous.
Axiomatic semantics [11] precisely restates the inelegant properties of von Neumann programs (i.e., transformations on states) as transformations on predicates. The word-at-a-time, repetitive game is not thereby changed, merely the playing field. The complexity of this axiomatic game of proving facts about von Neumann programs makes the successes of its practitioners all the more admirable. Their success rests on two factors in addition to their ingenuity: First, the game is restricted to small, weak subsets of full von Neumann languages that have states vastly simpler than real ones. Second, the new playing field (predicates and their transformations) is richer, more orderly and effective than the old (states and their transformations). But restricting the game and transferring it to a more effective domain does not enable it to handle real programs (with the necessary complexities of procedure calls and aliasing), nor does it eliminate the clumsy properties of the basic von Neumann style. As axiomatic semantics is extended to cover more of a typical von Neumann language, it begins to lose its effectiveness with the increasing complexity that is required.

Thus denotational and axiomatic semantics are descriptive formalisms whose foundations embody elegant and powerful concepts; but using them to describe a von Neumann language cannot produce an elegant and powerful language any more than the use of elegant and modern machines to build an Edsel can produce an elegant and modern car.

In any case, proofs about programs use the language of logic, not the language of programming. Proofs talk about programs but cannot involve them directly since the axioms of von Neumann languages are so unusable. In contrast, many ordinary proofs are derived by algebraic methods. These methods require a language that has certain algebraic properties. Algebraic laws can then be used in a rather mechanical way to transform a problem into its solution. For example, to solve the equation

ax + bx = a + b

for x (given that a + b ≠ 0), we mechanically apply the distributive, identity, and cancellation laws, in succession, to obtain

(a + b)x = a + b
(a + b)x = (a + b)1
x = 1.

Thus we have proved that x = 1 without leaving the 'language' of algebra. Von Neumann languages, with their grotesque syntax, offer few such possibilities for transforming programs.

As we shall see later, programs can be expressed in a language that has an associated algebra. This algebra can be used to transform
programs and to solve some equations whose 'unknowns' are programs, in much the same way one solves equations in high school algebra. Algebraic transformations and proofs use the language of the programs themselves, rather than the language of logic, which talks about programs.
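A small illustration of the flavor of such an algebra (phrased here with Haskell's map and function composition rather than the FP forms introduced below, so it is only an analogy): the law map f . map g = map (f . g) lets a two-pass program be rewritten as a one-pass program by pure equational reasoning, in the language of the programs themselves.

    -- Two programs related by the algebraic law: map f . map g = map (f . g).
    twoPass :: [Int] -> [Int]
    twoPass = map (+ 1) . map (* 2)

    onePass :: [Int] -> [Int]
    onePass = map ((+ 1) . (* 2))

    -- Both yield [3,5,7] on [1,2,3]; the rewriting never mentions a state.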
10
What Are the Alternatives to von Neumann Languages?

Before discussing alternatives to von Neumann languages, let me remark that I regret the need for the above negative and not very precise discussion of these languages. But the complacent acceptance most of us give to these enormous, weak languages has puzzled and disturbed me for a long time. I am disturbed because that acceptance has consumed a vast effort toward making von Neumann languages fatter that might have been better spent in looking for new structures. For this reason I have tried to analyze some of the basic defects of conventional languages and show that those defects cannot be resolved unless we discover a new kind of language framework.

In seeking an alternative to conventional languages we must first recognize that a system cannot be history sensitive (permit execution of one program to affect the behavior of a subsequent one) unless the system has some kind of state (which the first program can change and the second can access). Thus a history-sensitive model of a computing system must have a state-transition semantics, at least in this weak sense. But this does not mean that every computation must depend heavily on a complex state, with many state changes required for each small part of the computation (as in von Neumann languages).

To illustrate some alternatives to von Neumann languages, I propose to sketch a class of history-sensitive computing systems, where each system:
(a) has a loosely coupled state-transition semantics in which a state transition occurs only once in a major computation;
(b) has a simply structured state and simple transition rules;
(c) depends heavily on an underlying applicative system both to provide the basic programming language of the system and to describe its state transitions.

These systems, which I call applicative state transition (or AST) systems, are described in Section 14. These simple systems avoid many of the complexities and weaknesses of von Neumann languages and provide for a powerful and extensive set of changeable parts. However, they are sketched only as crude examples of a vast area of non-von Neumann systems with various attractive properties. I have been studying this area for the past three or four years and have not yet found a satisfying solution to the many conflicting requirements that a good language must resolve. But I believe this search has indicated a useful approach to designing non-von Neumann languages.
This approach involves four elements, which can be summarized as follows.

(a) A functional style of programming without variables. A simple, informal functional programming (FP) system is described. It is based on the use of combining forms of FP programs. Several programs are given to illustrate functional programming.
(b) An algebra of functional programs. An algebra is described whose variables denote FP functional programs and whose 'operations' are FP functional forms, the combining forms of FP programs. Some laws of the algebra are given. Theorems and examples are given that show how certain function expressions may be transformed into equivalent infinite expansions that explain the behavior of the function. The FP algebra is compared with algebras associated with the classical applicative systems of Church and Curry.
(c) A formal functional programming system. A formal (FFP) system is described that extends the capabilities of the above informal FP systems. An FFP system is thus a precisely defined system that provides the ability to use the functional programming style of FP systems and their algebra of programs. FFP systems can be used as the basis for applicative state transition systems.
(d) Applicative state transition systems. As discussed above.

The rest of the paper describes these four elements and ends with a summary of the paper.
11 Functional Programming Systems (FP Systems)

11.1 Introduction

In this section we give an informal description of a class of simple applicative programming systems called functional programming (FP) systems, in which 'programs' are simply functions without variables. The description is followed by some examples and by a discussion of various properties of FP systems. An FP system is founded on the use of a fixed set of combining forms called functional forms. These, plus simple definitions, are the only means of building new functions from existing ones; they use no variables or substitution rules, and they become the operations of an associated algebra of programs. All the functions of an FP system are of one type: they map objects into objects and always take a single argument.
In contrast, a lambda-calculus-based system is founded on the use of the lambda expression, with an associated set of substitution rules for variables, for building new functions. The lambda expression (with its substitution rules) is capable of defining all possible computable functions of all possible types and of any number of arguments. This freedom and power has its disadvantages as well as its obvious advantages. It is analogous to the power of unrestricted control statements in conventional languages: with unrestricted freedom comes chaos. If one constantly invents new combining forms to suit the occasion, as one can in the lambda calculus, one will not become familiar with the style or useful properties of the few combining forms that are adequate for all purposes. Just as structured programming eschews many control statements to obtain programs with simpler structure, better properties, and uniform methods for understanding their behavior, so functional programming eschews the lambda expression, substitution, and multiple function types. It thereby achieves programs built with familiar functional forms with known useful properties. These programs are so structured that their behavior can often be understood and proven by mechanical use of algebraic techniques similar to those used in solving high school algebra problems. Functional forms, unlike most programming constructs, need not be chosen on an ad hoc basis. Since they are the operations of an associated algebra, one chooses only those functional forms that not only provide powerful programming constructs, but that also have attractive algebraic properties: one chooses them to maximize the strength and utility of the algebraic laws that relate them to other functional forms of the system. In the following description we shall be imprecise in not distinguishing between (a) a function symbol or expression and (b) the function it denotes. We shall indicate the symbols and expressions used to denote functions by example and usage. Section 13 describes a formal extension of FP systems (FFP systems); they can serve to clarify any ambiguities about FP systems.
11.2 Description

An FP system comprises the following: (1) a set O of objects; (2) a set F of functions f that map objects into objects; (3) an operation, application; (4) a set of functional forms; these are used to combine existing functions, or objects, to form new functions in F; (5) a set D of definitions that define some functions in F and assign a name to each.
What follows is an informal description of each of the above entities with examples.

11.2.1 Objects, O. An object x is either an atom, a sequence ⟨x₁, ..., xₙ⟩ whose elements xᵢ are objects, or ⊥ ('bottom' or 'undefined'). Thus the choice of a set A of atoms determines the set of objects. We shall take A to be the set of nonnull strings of capital letters, digits, and special symbols not used by the notation of the FP system. Some of these strings belong to the class of atoms called 'numbers.' The atom φ is used to denote the empty sequence and is the only object which is both an atom and a sequence. The atoms T and F are used to denote 'true' and 'false.' There is one important constraint in the construction of objects: if x is a sequence with ⊥ as an element, then x = ⊥. That is, the 'sequence constructor' is '⊥-preserving.' Thus no proper sequence has ⊥ as an element.

Examples of objects

⊥   1.5   φ   AB3   ⟨AB, 1, 2.3⟩   ⟨A, ⟨⟨B⟩, C⟩, D⟩   ⟨A, ⊥⟩ = ⊥
11.2.2 Application. An FP system has a single operation, application. If f is a function and x is an object, then f:x is an application and denotes the object which is the result of applying f to x. f is the operator of the application and x is the operand.

Examples of applications

+:⟨1,2⟩ = 3   tl:⟨A,B,C⟩ = ⟨B,C⟩   1:⟨A,B,C⟩ = A   2:⟨A,B,C⟩ = B

11.2.3 Functions, F. All functions f in F map objects into objects and are bottom-preserving: f:⊥ = ⊥, for all f in F. Every function in F is either primitive, that is, supplied with the system, or it is defined (see below), or it is a functional form (see below). It is sometimes useful to distinguish between two cases in which f:x = ⊥. If the computation for f:x terminates and yields the object ⊥, we say f is undefined at x, that is, f terminates but has no meaningful value at x. Otherwise we say f is nonterminating at x.

Examples of primitive functions. Our intention is to provide FP systems with widely useful and powerful primitive functions rather than weak ones that could then be used to define useful ones. The
following examples define some typical primitive functions, many of which are used in later examples of programs. In the following definitions we use a variant of McCarthy's conditional expressions [17]; thus we write

p₁ → e₁; ... ; pₙ → eₙ; eₙ₊₁

instead of McCarthy's expression (p₁ → e₁, ..., pₙ → eₙ, T → eₙ₊₁). The following definitions are to hold for all objects x, xᵢ, y, yᵢ, z, zᵢ.

Selector functions
1:x ≡ x = ⟨x₁, ..., xₙ⟩ → x₁; ⊥
and for any positive integer s
s:x ≡ x = ⟨x₁, ..., xₙ⟩ & n ≥ s → xₛ; ⊥
Thus, for example, 3:⟨A,B,C⟩ = C and 2:⟨A⟩ = ⊥. Note that the function symbols 1, 2, etc. are distinct from the atoms 1, 2, etc.

Tail
tl:x ≡ x = ⟨x₁⟩ → φ; x = ⟨x₁, ..., xₙ⟩ & n ≥ 2 → ⟨x₂, ..., xₙ⟩; ⊥

Identity
id:x ≡ x

Atom
atom:x ≡ x is an atom → T; x ≠ ⊥ → F; ⊥

Equals
eq:x ≡ x = ⟨y,z⟩ & y = z → T; x = ⟨y,z⟩ & y ≠ z → F; ⊥

Null
null:x ≡ x = φ → T; x ≠ ⊥ → F; ⊥

Reverse
reverse:x ≡ x = φ → φ; x = ⟨x₁, ..., xₙ⟩ → ⟨xₙ, ..., x₁⟩; ⊥

Distribute from left; distribute from right
distl:x ≡ x = ⟨y, φ⟩ → φ; x = ⟨y, ⟨z₁, ..., zₙ⟩⟩ → ⟨⟨y,z₁⟩, ..., ⟨y,zₙ⟩⟩; ⊥
distr:x ≡ x = ⟨φ, y⟩ → φ; x = ⟨⟨y₁, ..., yₙ⟩, z⟩ → ⟨⟨y₁,z⟩, ..., ⟨yₙ,z⟩⟩; ⊥

Length
length:x ≡ x = ⟨x₁, ..., xₙ⟩ → n; x = φ → 0; ⊥

Add, subtract, multiply, and divide
+:x ≡ x = ⟨y,z⟩ & y,z are numbers → y+z; ⊥
-:x ≡ x = ⟨y,z⟩ & y,z are numbers → y−z; ⊥
×:x ≡ x = ⟨y,z⟩ & y,z are numbers → y×z; ⊥
÷:x ≡ x = ⟨y,z⟩ & y,z are numbers → y÷z; ⊥   (where y÷0 = ⊥)

Transpose
trans:x ≡ x = ⟨φ, ..., φ⟩ → φ; x = ⟨x₁, ..., xₙ⟩ → ⟨y₁, ..., yₘ⟩; ⊥
where xᵢ = ⟨xᵢ₁, ..., xᵢₘ⟩ and yⱼ = ⟨x₁ⱼ, ..., xₙⱼ⟩, 1 ≤ i ≤ n, 1 ≤ j ≤ m.

And, or, not
and:x ≡ x = ⟨T,T⟩ → T; x = ⟨T,F⟩ ∨ x = ⟨F,T⟩ ∨ x = ⟨F,F⟩ → F; ⊥
etc.

Append left; append right
apndl:x ≡ x = ⟨y, φ⟩ → ⟨y⟩; x = ⟨y, ⟨z₁, ..., zₙ⟩⟩ → ⟨y, z₁, ..., zₙ⟩; ⊥
apndr:x ≡ x = ⟨φ, z⟩ → ⟨z⟩; x = ⟨⟨y₁, ..., yₙ⟩, z⟩ → ⟨y₁, ..., yₙ, z⟩; ⊥

Right selectors; right tail
1r:x ≡ x = ⟨x₁, ..., xₙ⟩ → xₙ; ⊥
2r:x ≡ x = ⟨x₁, ..., xₙ⟩ & n ≥ 2 → xₙ₋₁; ⊥
etc.
tlr:x ≡ x = ⟨x₁⟩ → φ; x = ⟨x₁, ..., xₙ⟩ & n ≥ 2 → ⟨x₁, ..., xₙ₋₁⟩; ⊥

Rotate left; rotate right
rotl:x ≡ x = φ → φ; x = ⟨x₁⟩ → ⟨x₁⟩; x = ⟨x₁, ..., xₙ⟩ & n ≥ 2 → ⟨x₂, ..., xₙ, x₁⟩; ⊥
etc.

11.2.4 Functional forms. A functional form is an expression denoting a function; that function depends on the functions or objects which are the parameters of the expression. Thus, for example, if f and
g are any functions, then f∘g is a functional form, the composition of f and g; f and g are its parameters, and it denotes the function such that, for any object x,

(f∘g):x = f:(g:x).
Some functional forms may have objects as parameters. For example, for any object x, x̄ is a functional form, the constant function of x, so that for any object y,

x̄:y ≡ y = ⊥ → ⊥; x.

In particular, ⊥̄ is the everywhere-⊥ function. Below we give some functional forms, many of which are used later in this paper. We use p, f, and g with and without subscripts to denote arbitrary functions; and x, x₁, ..., xₙ, y as arbitrary objects. Square brackets [...] are used to indicate the functional form for construction, which denotes a function, whereas pointed brackets ⟨...⟩ denote sequences, which are objects. Parentheses are used both in particular functional forms (e.g., in condition) and generally to indicate grouping.
Composition
(f∘g):x ≡ f:(g:x)

Construction
[f₁, ..., fₙ]:x ≡ ⟨f₁:x, ..., fₙ:x⟩
(Recall that since ⟨..., ⊥, ...⟩ = ⊥ and all functions are ⊥-preserving, so is [f₁, ..., fₙ].)

Condition
(p → f; g):x ≡ (p:x) = T → f:x; (p:x) = F → g:x; ⊥
Conditional expressions (used outside of FP systems to describe their functions) and the functional form condition are both identified by '→'. They are quite different although closely related, as shown in the above definitions. But no confusion should arise, since the elements of a conditional expression all denote values, whereas the elements of the functional form condition all denote functions, never values. When no ambiguity arises we omit right-associated parentheses; we write, for example,
p₁ → f₁; p₂ → f₂; g   for   (p₁ → f₁; (p₂ → f₂; g))

Constant (Here x is an object parameter.)
x̄:y ≡ y = ⊥ → ⊥; x

Insert
/f:x ≡ x = ⟨x₁⟩ → x₁; x = ⟨x₁, ..., xₙ⟩ & n ≥ 2 → f:⟨x₁, /f:⟨x₂, ..., xₙ⟩⟩; ⊥
If f has a unique right unit u_f ≠ ⊥, where f:⟨x, u_f⟩ ∈ {x, ⊥} for all objects x, then the above definition is extended: /f:φ = u_f. Thus
/+:⟨4,5,6⟩ = +:⟨4, +:⟨5, /+:⟨6⟩⟩⟩ = +:⟨4, +:⟨5,6⟩⟩ = 15
/+:φ = 0

Apply to all
αf:x ≡ x = φ → φ; x = ⟨x₁, ..., xₙ⟩ → ⟨f:x₁, ..., f:xₙ⟩; ⊥

Binary to unary (x is an object parameter)
(bu f x):y ≡ f:⟨x,y⟩
Thus (bu + 1):x = 1 + x

While
(while p f):x ≡ p:x = T → (while p f):(f:x); p:x = F → x; ⊥

The above functional forms provide an effective method for computing the values of the functions they denote (if they terminate) provided one can effectively apply their function parameters.

11.2.5 Definitions. A definition in an FP system is an expression of the form

Def l ≡ r

where the left side l is an unused function symbol and the right side r is a functional form (which may depend on l). It expresses the fact that the symbol l is to denote the function given by r. Thus the definition Def last1 ≡ 1∘reverse defines the function last1 that produces the last element of a sequence (or ⊥). Similarly, Def last ≡ null∘tl → 1; last∘tl defines the function last, which is the same as last1. Here in detail is how the definition would be used to compute last:⟨1,2⟩:

last:⟨1,2⟩ ⇒ (null∘tl → 1; last∘tl):⟨1,2⟩   definition of last
⇒ last∘tl:⟨1,2⟩   action of the form (p→f;g), since null∘tl:⟨1,2⟩ = null:⟨2⟩ = F
⇒ last:(tl:⟨1,2⟩)   action of the form f∘g
⇒ last:⟨2⟩   definition of the primitive tail
⇒ (null∘tl → 1; last∘tl):⟨2⟩   definition of last
⇒ 1:⟨2⟩   action of the form (p→f;g), since null∘tl:⟨2⟩ = null:φ = T
⇒ 2   definition of selector 1
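To make these combining forms concrete, here is a minimal sketch of my own (not part of the paper) that models a few of them in Haskell, using ordinary lists in place of FP sequences and ignoring ⊥; the names construction, condition, insertR, applyToAll, and lastFP are mine.

construction :: [a -> b] -> a -> [b]          -- the form [f1, ..., fn]
construction fs x = [f x | f <- fs]

condition :: (a -> Bool) -> (a -> b) -> (a -> b) -> a -> b   -- the form (p -> f; g)
condition p f g x = if p x then f x else g x

insertR :: (a -> a -> a) -> [a] -> a          -- the form /f on a nonempty list
insertR = foldr1

applyToAll :: (a -> b) -> [a] -> [b]          -- the form alpha f
applyToAll = map

lastFP :: [a] -> a                            -- Def last = null.tl -> 1; last.tl
lastFP xs = if null (tail xs) then head xs else lastFP (tail xs)

main :: IO ()
main = do
  print (construction [sum, product] [1, 2, 3])   -- [6,6]
  print (insertR (+) [4, 5, 6])                   -- 15, as in /+:<4,5,6>
  print (lastFP [1, 2])                           -- 2, as in the trace above

Composition is Haskell's own (.), so it needs no separate definition here.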
The above illustrates the simple rule: to apply a defined symbol, replace it by the right side of its definition. Of course, some definitions may define nonterminating functions. A set D of definitions is well formed if no two left sides are the same.

11.2.6 Semantics. It can be seen from the above that an FP system is determined by choice of the following sets: (a) The set of atoms A (which determines the set of objects). (b) The set of primitive functions P. (c) The set of functional forms. (d) A well formed set of definitions D. To understand the semantics of such a system one needs to know how to compute f:x for any function f and any object x of the system. There are exactly four possibilities for f: (1) f is a primitive function; (2) f is a functional form; (3) there is one definition in D, Def f ≡ r; and (4) none of the above. If f is a primitive function, then one has its description and knows how to apply it. If f is a functional form, then the description of the form tells how to compute f:x in terms of the parameters of the form, which can be done by further use of these rules. If f is defined, Def f ≡ r, as in (3), then to find f:x one computes r:x, which can be done by further use of these rules. If none of these, then f:x = ⊥. Of course, the use of these rules may not terminate for some f and some x, in which case we assign the value f:x = ⊥.
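The four-way rule above can be sketched as a toy evaluator. The following Haskell fragment is my own illustration, far smaller than a real FP system: objects are lists of integers, Nothing stands in for ⊥, only composition is modelled as a functional form, and the names Fun, primitives, definitions, and apply are mine.

import qualified Data.Map as Map

type Obj = [Integer]

data Fun = Prim String | Comp Fun Fun | Name String   -- a function expression

primitives :: Map.Map String (Obj -> Maybe Obj)
primitives = Map.fromList
  [ ("id",      Just)
  , ("reverse", Just . reverse)
  , ("tl",      \x -> case x of { _ : xs -> Just xs; [] -> Nothing }) ]

definitions :: Map.Map String Fun                      -- Def l = r
definitions = Map.fromList [ ("tlTwice", Comp (Prim "tl") (Prim "tl")) ]

apply :: Fun -> Obj -> Maybe Obj
apply (Prim p)   x = maybe Nothing ($ x) (Map.lookup p primitives)  -- (1) primitive
apply (Comp f g) x = apply g x >>= apply f                          -- (2) functional form
apply (Name n)   x = case Map.lookup n definitions of
                       Just r  -> apply r x                         -- (3) replace the symbol by its definition
                       Nothing -> Nothing                           -- (4) none of the above: bottom

main :: IO ()
main = print (apply (Name "tlTwice") [1, 2, 3])   -- Just [3]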
11.3 Examples of Functional Programs

The following examples illustrate the functional programming style. Since this style is unfamiliar to most readers, it may cause confusion at first; the important point to remember is that no part of a function definition is a result itself. Instead, each part is a function that must be applied to an argument to obtain a result.

11.3.1 Factorial

Def ! ≡ eq0 → 1̄; ×∘[id, !∘sub1]

where

Def eq0 ≡ eq∘[id, 0̄]
Def sub1 ≡ -∘[id, 1̄]

Here are some of the intermediate expressions an FP system would obtain in evaluating !:2:

!:2 ⇒ (eq0 → 1̄; ×∘[id, !∘sub1]):2 ⇒ ×∘[id, !∘sub1]:2
⇒ ×:⟨id:2, !∘sub1:2⟩ ⇒ ×:⟨2, !:1⟩ ⇒ ×:⟨2, ×:⟨1, !:0⟩⟩
⇒ ×:⟨2, ×:⟨1, 1̄:0⟩⟩ ⇒ ×:⟨2, ×:⟨1,1⟩⟩ ⇒ ×:⟨2,1⟩ ⇒ 2.
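For comparison, here is a sketch of my own (not the paper's notation) of the same factorial definition transcribed into Haskell, with the pair that × receives built explicitly; the names factFP, sub1, and mul are mine.

factFP :: Integer -> Integer
factFP n
  | n == 0    = 1                         -- eq0 -> constant 1
  | otherwise = mul (n, factFP (sub1 n))  -- x . [id, ! . sub1]
  where
    sub1 x     = x - 1
    mul (x, y) = x * y

main :: IO ()
main = print (map factFP [0 .. 5])   -- [1,1,2,6,24,120]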
In Section 12 we shall see how theorems of the algebra of FP programs can be used to prove that ! is the factorial function.

11.3.2 Inner Product. We have seen earlier how this definition works.

Def IP ≡ (/+)∘(α×)∘trans

11.3.3 Matrix Multiply. This matrix multiplication program yields the product of any pair ⟨m,n⟩ of conformable matrices, where each matrix m is represented as the sequence of its rows: m = ⟨m₁, ..., mᵣ⟩ where mᵢ = ⟨mᵢ₁, ..., mᵢₛ⟩ for i = 1, ..., r.

Def MM ≡ (ααIP)∘(αdistl)∘distr∘[1, trans∘2]

The program MM has four steps, reading from right to left; each is applied in turn, beginning with [1, trans∘2], to the result of its predecessor. If the argument is ⟨m,n⟩, then the first step yields ⟨m,n'⟩ where n' = trans:n. The second step yields ⟨⟨m₁,n'⟩, ..., ⟨mᵣ,n'⟩⟩, where the mᵢ are the rows of m. The third step, αdistl, yields

⟨distl:⟨m₁,n'⟩, ..., distl:⟨mᵣ,n'⟩⟩ = ⟨p₁, ..., pᵣ⟩

where

pᵢ = distl:⟨mᵢ,n'⟩ = ⟨⟨mᵢ,n₁'⟩, ..., ⟨mᵢ,nₛ'⟩⟩ for i = 1, ..., r

and nⱼ' is the jth column of n (the jth row of n'). Thus pᵢ, a sequence of row and column pairs, corresponds to the ith product row. The operator ααIP, or α(αIP), causes αIP to be applied to each pᵢ, which in turn causes IP to be applied to each row and column pair in each pᵢ. The result of the last step is therefore the sequence of rows comprising the product matrix. If either matrix is not rectangular, or if the length of a row of m differs from that of a column of n, or if any element of m or n is not a number, the result is ⊥.

This program MM does not name its arguments or any intermediate results; contains no variables, no loops, no control statements nor procedure declarations; has no initialization instructions; is not word-at-a-time in nature; is hierarchically constructed from simpler components; uses generally applicable housekeeping forms and operators (e.g., αf, distl, distr, trans); is perfectly general; yields ⊥ whenever its argument is inappropriate in any way; does not constrain the order of evaluation unnecessarily (all applications of IP to row and column pairs can be done in parallel or in any order); and, using algebraic laws (see below), can be transformed into more 'efficient' or into more 'explanatory' programs (e.g., one that is recursively defined). None of these properties hold for the typical von Neumann matrix multiplication program. Although it has an unfamiliar and hence puzzling form, the program MM describes the essential operations of matrix multiplication without overdetermining the process or obscuring parts of it, as most programs do; hence many straightforward programs for the operation can be obtained from it by formal transformations. It is an inherently inefficient program for von Neumann computers (with regard to the use of space), but efficient ones can be derived from it and realizations of FP systems can be imagined that could execute MM without the prodigal use of space it implies. Efficiency questions are beyond the scope of this paper; let me suggest only that since the language is so simple and does not dictate any binding of lambda-type variables to data, there may be better opportunities for the system to do some kind of 'lazy' evaluation [9, 10] and to control data management more efficiently than is possible in lambda-calculus-based systems.
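A rough Haskell rendering of IP and MM may help the reader follow the step-by-step account above; it is my own sketch, not the paper's, and the names ip and mm are mine. The pair construction, transposition, distribution, and apply-to-all steps appear as the corresponding list operations.

import Data.List (transpose)

-- IP = (/+) . (alpha x) . trans : sum the pairwise products of a row and a column.
ip :: Num a => ([a], [a]) -> a
ip (xs, ys) = sum (zipWith (*) xs ys)

-- MM = (alpha alpha IP) . (alpha distl) . distr . [1, trans . 2]
mm :: Num a => ([[a]], [[a]]) -> [[a]]
mm (m, n) =
  let n'    = transpose n                    -- [1, trans . 2]
      pairs = [ [ (row, col) | col <- n' ]   -- distr, then alpha distl: one
              | row <- m ]                   -- (row, column) pair per entry
  in map (map ip) pairs                      -- alpha alpha IP

main :: IO ()
main = print (mm ([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   -- [[19,22],[43,50]]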
11.4 Remarks about FP Systems

11.4.1 FP Systems as Programming Languages. FP systems are so minimal that some readers may find it difficult to view them as programming languages. Viewed as such, a function f is a program, an object x is the contents of the store, and f:x is the contents of the store after program f is activated with x in the store. The set of definitions is the program library. The primitive functions and the functional forms provided by the system are the basic statements of a particular programming language. Thus, depending on the choice of primitive functions and functional forms, the FP framework provides for a large class of languages with various styles and capabilities. The algebra of programs associated with each of these depends on its particular set of functional forms. The primitive functions, functional forms, and programs given in this paper comprise an effort to develop just one of these possible styles.

11.4.2 Limitations of FP Systems. FP systems have a number of limitations. For example, a given FP system is a fixed language; it is not history sensitive: no program can alter the library of programs. It can treat input and output only in the sense that x is an input and f:x is the output. If the set of primitive functions and functional forms is weak, it may not be able to express every computable function. An FP system cannot compute a program since function expressions are not objects. Nor can one define new functional forms within an FP system. (Both of these limitations are removed in formal functional programming (FFP) systems in which objects 'represent' functions.) Thus no FP system can have a function, apply, such that apply:⟨x,y⟩ = x:y because, on the left, x is an object, and, on the right, x is a function. (Note that we have been careful to keep the set of function symbols and the set of objects distinct: thus 1 is a function symbol, and 1 is an object.)
The primary limitation of FP systems is that they are not history sensitive. Therefore they must be extended somehow before they can become practically useful. For discussion of such extensions, see the sections on FFP and AST systems (Sections 13 and 14).

11.4.3 Expressive Power of FP Systems. Suppose two FP systems, FP₁ and FP₂, both have the same set of objects and the same set of primitive functions, but the set of functional forms of FP₁ properly includes that of FP₂. Suppose also that both systems can express all computable functions on objects. Nevertheless, we can say that FP₁ is more expressive than FP₂, since every function expression in FP₂ can be duplicated in FP₁, but by using a functional form not belonging to FP₂, FP₁ can express some functions more directly and easily than FP₂. I believe the above observation could be developed into a theory of the expressive power of languages in which a language A would be more expressive than language B under the following roughly stated conditions. First, form all possible functions of all types in A by applying all existing functions to objects and to each other in all possible ways until no new function of any type can be formed. (The set of objects is a type; the set of continuous functions [T→U] from type T to type U is a type. If f ∈ [T→U] and t ∈ T, then ft in U can be formed by applying f to t.) Do the same in language B. Next, compare each type in A to the corresponding type in B. If, for every type, A's type includes B's corresponding type, then A is more expressive than B (or equally expressive). If some type of A's functions is incomparable to B's, then A and B are not comparable in expressive power.

11.4.4 Advantages of FP Systems. The main reason FP systems are considerably simpler than either conventional languages or lambda-calculus-based languages is that they use only the most elementary fixed naming system (naming a function in a definition) with a simple fixed rule of substituting a function for its name. Thus they avoid the complexities both of the naming systems of conventional languages and of the substitution rules of the lambda calculus. FP systems permit the definition of different naming systems (see Sections 13.3.4 and 14.7) for various purposes. These need not be complex, since many programs can do without them completely. Most importantly, they treat names as functions that can be combined with other functions without special treatment. FP systems offer an escape from conventional word-at-a-time programming to a degree greater even than APL [12] (the most successful attack on the problem to date within the von Neumann framework) because they provide a more powerful set of functional forms within a unified world of expressions. They offer the opportunity to develop higher level techniques for thinking about, manipulating, and writing programs.
12 The Algebra of Programs for FP Systems

12.1 Introduction

The algebra of the programs described below is the work of an amateur in algebra, and I want to show that it is a game amateurs can profitably play and enjoy, a game that does not require a deep understanding of logic and mathematics. In spite of its simplicity, it can help one to understand and prove things about programs in a systematic, rather mechanical way. So far, proving a program correct requires knowledge of some moderately heavy topics in mathematics and logic: properties of complete partially ordered sets, continuous functions, least fixed points of functionals, the first-order predicate calculus, predicate transformers, weakest preconditions, to mention a few topics in a few approaches to proving programs correct. These topics have been very useful for professionals who make it their business to devise proof techniques; they have published a lot of beautiful work on this subject, starting with the work of McCarthy and Floyd, and, more recently, that of Burstall, Dijkstra, Manna and his associates, Milner, Morris, Reynolds, and many others. Much of this work is based on the foundations laid down by Dana Scott (denotational semantics) and C. A. R. Hoare (axiomatic semantics). But its theoretical level places it beyond the scope of most amateurs who work outside of this specialized field. If the average programmer is to prove his programs correct, he will need much simpler techniques than those the professionals have so far put forward. The algebra of programs below may be one starting point for such a proof discipline and, coupled with current work on algebraic manipulation, it may also help provide a basis for automating some of that discipline. One advantage of this algebra over other proof techniques is that the programmer can use his programming language as the language for deriving proofs, rather than having to state proofs in a separate logical system that merely talks about his programs. At the heart of the algebra of programs are laws and theorems that state that one function expression is the same as another. Thus the law [f,g]∘h ≡ [f∘h, g∘h] says that the construction of f and g (composed with h) is the same function as the construction of (f composed with h) and (g composed with h) no matter what the functions f, g, and h are. Such laws are easy to understand, easy to justify, and easy and powerful to use. However, we also wish to use such laws to solve
equations in which an 'unknown' function appears on both sides of the equation. The problem is that if f satisfies some such equation, it will often happen that some extension f' of f will also satisfy the same equation. Thus, to give a unique meaning to solutions of such equations, we shall require a foundation for the algebra of programs (which uses Scott's notion of least fixed points of continuous functionals) to assure us that solutions obtained by algebraic manipulation are indeed least, and hence unique, solutions. Our goal is to develop a foundation for the algebra of programs that disposes of the theoretical issues, so that a programmer can use simple algebraic laws and one or two theorems from the foundations to solve problems and create proofs in the same mechanical style we use to solve high-school algebra problems, and so that he can do so without knowing anything about least fixed points or predicate transformers. One particular foundational problem arises: given equations of the form

f ≡ p₀ → q₀; ... ; pᵢ → qᵢ; Eᵢ(f)   (1)

where the pᵢ's and qᵢ's are functions not involving f and Eᵢ(f) is a function expression involving f, the laws of the algebra will often permit the formal 'extension' of this equation by one more 'clause' by deriving

Eᵢ(f) ≡ pᵢ₊₁ → qᵢ₊₁; Eᵢ₊₁(f)   (2)

which, by replacing Eᵢ(f) in (1) by the right side of (2), yields

f ≡ p₀ → q₀; ... ; pᵢ₊₁ → qᵢ₊₁; Eᵢ₊₁(f).   (3)

This formal extension may go on without limit. One question the foundations must then answer is: when can the least f satisfying (1) be represented by the infinite expansion

f ≡ p₀ → q₀; ... ; pₙ → qₙ; ...   (4)

in which the final clause involving f has been dropped, so that we now have a solution whose right side is free of f's? Such solutions are helpful in two ways: first, they give proofs of 'termination' in the sense that (4) means that f:x is defined if and only if there is an n such that, for every i less than n, pᵢ:x = F and pₙ:x = T and qₙ:x is defined. Second, (4) gives a case-by-case description of f that can often clarify its behavior. The foundations for the algebra given in a subsequent section are a modest start toward the goal stated above. For a limited class of equations its 'linear expansion theorem' gives a useful answer as to when one can go from indefinitely extendable equations like (1) to infinite expansions like (4). For a larger class of equations, a more general 'expansion theorem' gives a less helpful answer to similar questions. Hopefully, more powerful theorems covering additional classes of equations can be found. But for the present, one need only know the conclusions of these two simple foundational theorems in order to follow the theorems and examples appearing in this section. The results of the foundations subsection are summarized in a separate, earlier subsection titled 'expansion theorems,' without reference to fixed point concepts. The foundations subsection itself is placed later where it can be skipped by readers who do not want to go into that subject.
12.2 Laws of the Algebra of Programs

In the algebra of programs for an FP system variables range over the set of functions of the system. The 'operations' of the algebra are the functional forms of the system. Thus, for example, [f,g]∘h is an expression of the algebra for the FP system described above, in which f, g, and h are variables denoting arbitrary functions of that system. And

[f,g]∘h ≡ [f∘h, g∘h]

is a law of the algebra which says that, whatever functions one chooses for f, g, and h, the function on the left is the same as that on the right. Thus this algebraic law is merely a restatement of the following proposition about any FP system that includes the functional forms [f,g] and f∘g:

PROPOSITION. For all functions f, g, and h and all objects x, ([f,g]∘h):x = [f∘h, g∘h]:x.

PROOF
([f,g]∘h):x = [f,g]:(h:x)   by definition of composition
= ⟨f:(h:x), g:(h:x)⟩   by definition of construction
= ⟨(f∘h):x, (g∘h):x⟩   by definition of composition
= [f∘h, g∘h]:x   by definition of construction. □

Some laws have a domain smaller than the domain of all objects. Thus 1∘[f,g] ≡ f does not hold for objects x such that g:x = ⊥. We write

defined∘g → 1∘[f,g] ≡ f

to indicate that the law (or theorem) on the right holds within the domain of objects x for which defined∘g:x = T, where

Def defined ≡ T̄

i.e., defined:x ≡ x = ⊥ → ⊥; T. In general we shall write a qualified functional equation:

p → f ≡ g

to mean that, for any object x, whenever p:x = T, then f:x = g:x. Ordinary algebra concerns itself with two operations, addition and multiplication; it needs few laws. The algebra of programs is concerned with more operations (functional forms) and therefore needs more laws. Each of the following laws requires a corresponding proposition to validate it. The interested reader will find most proofs of such propositions easy (two are given below). We first define the usual ordering on functions and equivalence in terms of this ordering:

Definition. f ≤ g iff for all objects x, either f:x = ⊥, or f:x = g:x.
Definition. f ≡ g iff f ≤ g and g ≤ f.

It is easy to verify that ≤ is a partial ordering, that f ≤ g means g is an extension of f, and that f ≡ g iff f:x = g:x for all objects x. We now give a list of algebraic laws organized by the two principal functional forms involved.
I Composition and construction

I.1   [f₁, ..., fₙ]∘g ≡ [f₁∘g, ..., fₙ∘g]
I.2   αf∘[g₁, ..., gₙ] ≡ [f∘g₁, ..., f∘gₙ]
I.3   /f∘[g₁, ..., gₙ] ≡ f∘[g₁, /f∘[g₂, ..., gₙ]] ≡ f∘[g₁, f∘[g₂, ..., f∘[gₙ₋₁, gₙ] ... ]] when n ≥ 2
      /f∘[g] ≡ g
I.4   f∘[x̄, g] ≡ (bu f x)∘g
I.5   1∘[f₁, ..., fₙ] ≤ f₁ and s∘[f₁, ..., fₙ] ≤ fₛ for any selector s, s ≤ n
      defined∘fᵢ (for all i ≠ s, 1 ≤ i ≤ n) → s∘[f₁, ..., fₙ] ≡ fₛ
I.5.1 [f₁∘1, ..., fₙ∘n]∘[g₁, ..., gₙ] ≡ [f₁∘g₁, ..., fₙ∘gₙ]
I.6   tl∘[f₁] ≤ φ̄ and tl∘[f₁, ..., fₙ] ≤ [f₂, ..., fₙ] for n ≥ 2
      defined∘f₁ → tl∘[f₁] ≡ φ̄ and tl∘[f₁, ..., fₙ] ≡ [f₂, ..., fₙ] for n ≥ 2
I.7   distl∘[f, [g₁, ..., gₙ]] ≡ [[f,g₁], ..., [f,gₙ]]
      defined∘f → distl∘[f, φ̄] ≡ φ̄
      The analogous law holds for distr.
I.8   apndl∘[f, [g₁, ..., gₙ]] ≡ [f, g₁, ..., gₙ]
      null∘g → apndl∘[f,g] ≡ [f]
      and so on for apndr, reverse, rotl, etc.
I.9   [..., ⊥̄, ...] ≡ ⊥̄
I.10  apndl∘[f∘g, αf∘h] ≡ αf∘apndl∘[g,h]
I.11  pair & not∘null∘1 → apndl∘[[1∘1, 2], distr∘[tl∘1, 2]] ≡ distr
      where f & g ≡ and∘[f,g]; pair ≡ atom → F̄; eq∘[length, 2̄]

II Composition and condition (right associated parentheses omitted) (Law II.2 is noted in Manna et al. [16, p. 493].)

II.1    (p → f; g)∘h ≡ p∘h → f∘h; g∘h
II.2    h∘(p → f; g) ≡ p → h∘f; h∘g
II.3    or∘[q, not∘q] → (and∘[p,q] → f; and∘[p, not∘q] → g; h) ≡ p → (q → f; g); h
II.3.1  p → (p → f; g); h ≡ p → f; h

III Composition and miscellaneous

III.1    x̄∘f ≤ x̄
         defined∘f → x̄∘f ≡ x̄
III.1.1  ⊥̄∘f ≡ f∘⊥̄ ≡ ⊥̄
III.2    f∘id ≡ id∘f ≡ f
III.3    pair → 1∘distr ≡ [1∘1, 2]   also: pair → 1∘tl ≡ 2   etc.
III.4    α(f∘g) ≡ αf∘αg
III.5    null∘g → αf∘g ≡ φ̄

IV Condition and construction

IV.1    [f₁, ..., (p → g; h), ..., fₙ] ≡ p → [f₁, ..., g, ..., fₙ]; [f₁, ..., h, ..., fₙ]
IV.1.1  [f₁, ..., (p₁ → g₁; ... ; pₙ → gₙ; h), ..., fₘ]
        ≡ p₁ → [f₁, ..., g₁, ..., fₘ]; ... ; pₙ → [f₁, ..., gₙ, ..., fₘ]; [f₁, ..., h, ..., fₘ]

This concludes the present list of algebraic laws; it is by no means exhaustive; there are many others.

Proof of two laws

We give the proofs of validating propositions for laws I.10 and I.11, which are slightly more involved than most of the others.

PROPOSITION 1
apndl∘[f∘g, αf∘h] ≡ αf∘apndl∘[g,h]

PROOF. We show that, for every object x, both of the above functions yield the same result.

Case 1. h:x is neither a sequence nor φ. Then both sides yield ⊥ when applied to x.

Case 2. h:x = φ. Then
apndl∘[f∘g, αf∘h]:x = apndl:⟨f∘g:x, φ⟩ = ⟨f:(g:x)⟩
αf∘apndl∘[g,h]:x = αf∘apndl:⟨g:x, φ⟩ = αf:⟨g:x⟩ = ⟨f:(g:x)⟩

Case 3. h:x = ⟨y₁, ..., yₙ⟩. Then
apndl∘[f∘g, αf∘h]:x = apndl:⟨f∘g:x, αf:⟨y₁, ..., yₙ⟩⟩ = ⟨f:(g:x), f:y₁, ..., f:yₙ⟩
αf∘apndl∘[g,h]:x = αf∘apndl:⟨g:x, ⟨y₁, ..., yₙ⟩⟩ = αf:⟨g:x, y₁, ..., yₙ⟩ = ⟨f:(g:x), f:y₁, ..., f:yₙ⟩ □

PROPOSITION 2

pair & not∘null∘1 → apndl∘[[1², 2], distr∘[tl∘1, 2]] ≡ distr

where f & g is the function and∘[f,g], and f² = f∘f.

PROOF. We show that both sides produce the same result when applied to any pair ⟨x,y⟩, where x ≠ φ, as per the stated qualification.

Case 1. x is an atom or ⊥. Then distr:⟨x,y⟩ = ⊥, since x ≠ φ. The left side also yields ⊥ when applied to ⟨x,y⟩, since tl∘1:⟨x,y⟩ = ⊥ and all functions are ⊥-preserving.

Case 2. x = ⟨x₁, ..., xₙ⟩. Then
apndl∘[[1², 2], distr∘[tl∘1, 2]]:⟨x,y⟩
= apndl:⟨⟨1:x, y⟩, distr:⟨tl:x, y⟩⟩
= apndl:⟨⟨x₁,y⟩, φ⟩ = ⟨⟨x₁,y⟩⟩   if tl:x = φ
= apndl:⟨⟨x₁,y⟩, ⟨⟨x₂,y⟩, ..., ⟨xₙ,y⟩⟩⟩   if tl:x ≠ φ
= ⟨⟨x₁,y⟩, ..., ⟨xₙ,y⟩⟩
= distr:⟨x,y⟩. □
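Because such laws state that two whole function expressions are equal, they can also be checked experimentally on sample inputs. The following is a small check of my own (not part of the paper's machinery) of law I.1 in the informal Haskell list model used earlier; construction, lhs, and rhs are my names.

construction :: [a -> b] -> a -> [b]
construction fs x = [f x | f <- fs]

lhs, rhs :: Int -> [Int]
lhs = construction [(+ 1), (* 2)] . subtract 3                 -- [f, g] . h
rhs = construction [(+ 1) . subtract 3, (* 2) . subtract 3]    -- [f . h, g . h]

main :: IO ()
main = print (all (\x -> lhs x == rhs x) [-10 .. 10])   -- True

Of course a finite check is no substitute for the proposition proved above; it only illustrates what the law asserts.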
12.3 Example: Equivalence of Two Matrix Multiplication Programs

We have seen earlier the matrix multiplication program:

Def MM ≡ ααIP∘αdistl∘distr∘[1, trans∘2].

We shall now show that its initial segment, MM', where

Def MM' ≡ ααIP∘αdistl∘distr

can be defined recursively. (MM' 'multiplies' a pair of matrices after the second matrix has been transposed. Note that MM', unlike MM, gives ⊥ for all arguments that are not pairs.) That is, we shall show that MM' satisfies the following equation which recursively defines the same function (on pairs):

f ≡ null∘1 → φ̄; apndl∘[αIP∘distl∘[1∘1, 2], f∘[tl∘1, 2]].

Our proof will take the form of showing that the following function, R,

Def R ≡ null∘1 → φ̄; apndl∘[αIP∘distl∘[1∘1, 2], MM'∘[tl∘1, 2]]

is, for all pairs ⟨x,y⟩, the same function as MM'. R 'multiplies' two matrices, when the first has more than zero rows, by computing the first row of the 'product' (with αIP∘distl∘[1∘1, 2]) and adjoining it to the 'product' of the tail of the first matrix and the second matrix. Thus the theorem we want is

pair → MM' ≡ R

from which the following is immediate:

MM ≡ MM'∘[1, trans∘2] ≡ R∘[1, trans∘2];

where Def pair ≡ atom → F̄; eq∘[length, 2̄].
THEOREM: pair → MM' ≡ R

where
Def MM' ≡ ααIP∘αdistl∘distr
Def R ≡ null∘1 → φ̄; apndl∘[αIP∘distl∘[1², 2], MM'∘[tl∘1, 2]]

PROOF

Case 1. pair & null∘1 → MM' ≡ R.

pair & null∘1 → R ≡ φ̄   by definition of R
pair & null∘1 → MM' ≡ φ̄   by definition of distr, since distr:⟨φ,x⟩ = φ and αf:φ = φ by definition of Apply to all.

And so: ααIP∘αdistl∘distr:⟨φ,x⟩ = φ. Thus pair & null∘1 → MM' ≡ R.

Case 2. pair & not∘null∘1 → MM' ≡ R.

pair & not∘null∘1 → R ≡ R'   by def of R and R'   (1)

where

Def R' ≡ apndl∘[αIP∘distl∘[1², 2], MM'∘[tl∘1, 2]].

We note that R' ≡ apndl∘[f∘g, αf∘h] where

f = αIP∘distl
g = [1², 2]
h = distr∘[tl∘1, 2]
αf = α(αIP∘distl) = ααIP∘αdistl   (by III.4).   (2)

Thus, by I.10,

R' ≡ αf∘apndl∘[g,h].   (3)

Now apndl∘[g,h] ≡ apndl∘[[1², 2], distr∘[tl∘1, 2]], thus, by I.11,

pair & not∘null∘1 → apndl∘[g,h] ≡ distr.   (4)

And so we have, by (1), (2), (3), and (4),

pair & not∘null∘1 → R ≡ R' ≡ αf∘distr ≡ ααIP∘αdistl∘distr ≡ MM'.

Case 1 and Case 2 together prove the theorem. □
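The content of the theorem can also be seen operationally: the direct definition of MM' and its recursive, row-by-row form compute the same products. The following Haskell sketch is mine (not the paper's), with the names ip, mmDirect, and mmRec invented for the illustration; both functions take ⟨m, n'⟩ with the second matrix already transposed, as MM' does.

ip :: Num a => ([a], [a]) -> a
ip (xs, ys) = sum (zipWith (*) xs ys)

mmDirect :: Num a => ([[a]], [[a]]) -> [[a]]          -- alpha alpha IP . alpha distl . distr
mmDirect (m, n') = [ [ ip (row, col) | col <- n' ] | row <- m ]

mmRec :: Num a => ([[a]], [[a]]) -> [[a]]
mmRec ([],   _ ) = []                                 -- null . 1 -> empty
mmRec (r:rs, n') = [ ip (r, col) | col <- n' ]        -- alpha IP . distl . [1.1, 2]
                   : mmRec (rs, n')                   -- apndl onto f . [tl.1, 2]

main :: IO ()
main = do
  let m  = [[1, 2], [3, 4]]
      n' = [[5, 7], [6, 8]]   -- transpose of [[5,6],[7,8]]
  print (mmDirect (m, n') == mmRec (m, n'))   -- True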
12.4 Expansion Theorems

In the following subsections we shall be 'solving' some simple equations (where by a 'solution' we shall mean the 'least' function which satisfies an equation). To do so we shall need the following notions and results drawn from the later subsection on foundations of the algebra, where their proofs appear.

12.4.1 Expansion. Suppose we have an equation of the form

f ≡ E(f)   (E1)

where E(f) is an expression involving f. Suppose further that there is an infinite sequence of functions fᵢ for i = 0, 1, 2, ..., each having the following form:

f₀ ≡ ⊥̄
fᵢ₊₁ ≡ p₀ → q₀; ... ; pᵢ → qᵢ; ⊥̄   (E2)

where the pᵢ's and qᵢ's are particular functions, so that E has the property:

E(fᵢ) ≡ fᵢ₊₁   for i = 0, 1, 2, ....   (E3)

Then we say that E is expansive and has the fᵢ's as approximating functions. If E is expansive and has approximating functions as in (E2), and if f is the solution of (E1), then f can be written as the infinite expansion

f ≡ p₀ → q₀; ... ; pₙ → qₙ; ...   (E4)

meaning that, for any x, f:x ≠ ⊥ iff there is an n ≥ 0 such that (a) pᵢ:x = F for all i < n, and (b) pₙ:x = T, and (c) qₙ:x ≠ ⊥. When f:x ≠ ⊥, then f:x = qₙ:x for this n. (The foregoing is a consequence of the 'expansion theorem.')

12.4.2 Linear Expansion. A more helpful tool for solving some equations applies when, for any function h,

E(h) ≡ p₀ → q₀; E₁(h)   (LE1)

and there exist pᵢ and qᵢ such that

E₁(pᵢ → qᵢ; h) ≡ pᵢ₊₁ → qᵢ₊₁; E₁(h)   for i = 0, 1, 2, ...   (LE2)

and

E₁(⊥̄) ≡ ⊥̄.   (LE3)

Under the above conditions E is said to be linearly expansive. If so, and f is the solution of

f ≡ E(f)   (LE4)

then E is expansive and f can again be written as the infinite expansion

f ≡ p₀ → q₀; ... ; pₙ → qₙ; ...   (LE5)

using the pᵢ's and qᵢ's generated by (LE1) and (LE2).

Although the pᵢ's and qᵢ's of (E4) or (LE5) are not unique for a given function, it may be possible to find additional constraints which would make them so, in which case the expansion (LE5) would comprise a canonical form for a function. Even without uniqueness these expansions often permit one to prove the equivalence of two different function expressions, and they often clarify a function's behavior.
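My reading of what an expansion like (E4) says about f:x can be sketched as follows in Haskell: scan the clauses in order and apply the function of the first clause whose predicate holds; if no predicate ever holds, the search diverges, which plays the role of ⊥. The names expansion and factClauses are mine, and the factorial clauses are the ones the recursion theorem produces in Section 12.5.1.

expansion :: [(a -> Bool, a -> b)] -> a -> b
expansion clauses x =
  head [ q x | (p, q) <- clauses, p x ]   -- first clause whose predicate holds

-- x = 0 -> 1; x = 1 -> 1; x = 2 -> 2*1; x = 3 -> 3*2*1; ...
factClauses :: [(Integer -> Bool, Integer -> Integer)]
factClauses = [ (\x -> x == n, \_ -> product [1 .. n]) | n <- [0 ..] ]

main :: IO ()
main = print (map (expansion factClauses) [0 .. 5])   -- [1,1,2,6,24,120]

Applied to a negative argument, no predicate is ever true and the program loops, mirroring f:x = ⊥ in the expansion.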
12.5 A Recursion Theorem

Using three of the above laws and linear expansion, one can prove the following theorem of moderate generality that gives a clarifying expansion for many recursively defined functions.

RECURSION THEOREM: Let f be a solution of

f ≡ p → g; Q(f)   (1)

where

Q(k) ≡ h∘[i, k∘j] for any function k   (2)

and p, g, h, i, j are any given functions; then

f ≡ p → g; p∘j → Q(g); ... ; p∘jⁿ → Qⁿ(g); ...   (3)

(where Qⁿ(g) is h∘[i, Qⁿ⁻¹(g)∘j], and jⁿ is j∘jⁿ⁻¹ for n ≥ 2) and

Qⁿ(g) ≡ /h∘[i, i∘j, ..., i∘jⁿ⁻¹, g∘jⁿ].   (4)

PROOF. We verify that p → g; Q(f) is linearly expansive. Let pₙ, qₙ, and k be any functions. Then

Q(pₙ → qₙ; k)
≡ h∘[i, (pₙ → qₙ; k)∘j]   by (2)
≡ h∘[i, (pₙ∘j → qₙ∘j; k∘j)]   by II.1
≡ h∘(pₙ∘j → [i, qₙ∘j]; [i, k∘j])   by IV.1
≡ pₙ∘j → h∘[i, qₙ∘j]; h∘[i, k∘j]   by II.2
≡ pₙ∘j → Q(qₙ); Q(k)   by (2).   (5)

Thus if p₀ ≡ p and q₀ ≡ g, then (5) gives p₁ ≡ p∘j and q₁ ≡ Q(g) and in general gives the following functions satisfying (LE2):

pₙ ≡ p∘jⁿ and qₙ ≡ Qⁿ(g).   (6)

Finally,

Q(⊥̄) ≡ h∘[i, ⊥̄∘j]
≡ h∘[i, ⊥̄]   by III.1.1
≡ h∘⊥̄   by I.9
≡ ⊥̄   by III.1.1.   (7)

Thus (5) and (6) verify (LE2) and (7) verifies (LE3), with E₁ ≡ Q. If we let E(f) ≡ p → g; Q(f), then we have (LE1); thus E is linearly expansive. Since f is a solution of f ≡ E(f), conclusion (3) follows from (6) and (LE5). Now

Qⁿ(g) ≡ h∘[i, Qⁿ⁻¹(g)∘j]
≡ h∘[i, h∘[i∘j, ..., h∘[i∘jⁿ⁻¹, g∘jⁿ] ... ]]   by I.1, repeatedly
≡ /h∘[i, i∘j, ..., i∘jⁿ⁻¹, g∘jⁿ]   by I.3.   (8)

Result (8) is the second conclusion (4).

12.5.1 Example: Correctness Proof of a Recursive Factorial Function. Let f be a solution of
Result (8) is the second conclusion (4). 12.5.1 Example: Correctness Proof of a Recursive Factorial Function. Let f be a solution of
f = eqO
l; x o [id, fos]
where Def s= -o[id, I]
(subtract 1).
Then f satisfies the hypothesis of the recursion theorem with p g= 1, h x, i id, andj = s. Therefore
f
eqO
1;
...
; eqO
sn
-
eqO,
Qn(J);
and Qn(I)
/xo[id, idos, ...
idosn-1, losr].
Now idosk - Sk by III.2 and eqO osn I oSn = I by III.1, since eqOosn:x implies definedosn:x; and also eqoosn:x = eqO: (x - n)= x=n. Thus if eqOosn:x T, then x = n and Qn(l): n = n X (n
-
1)X ... X(n
-(n
- 1)) x (1:(n - n)) = n!.
Using these results for losn, eqOosn, and Qn(j) in the previous expansion for f, we obtain
f:x =_x =Ol-; ... ;x=n-n x (n -
)x
... x I x l; ..
Thus we have proved that f terminates on precisely the set of nonnegative integers and that it is the factorial function thereon. A Functional Style and Its Algebra of Programs
99
12.6 An Iteration Theorem

This is really a corollary of the recursion theorem. It gives a simple expansion for many iterative programs.

ITERATION THEOREM. Let f be the solution (i.e., the least solution) of

f ≡ p → g; h∘f∘k

then

f ≡ p → g; p∘k → h∘g∘k; ... ; p∘kⁿ → hⁿ∘g∘kⁿ; ...

PROOF. Let h' ≡ h∘2, i' ≡ id, j' ≡ k; then

f ≡ p → g; h'∘[i', f∘j']

since h∘2∘[id, f∘k] ≡ h∘f∘k by I.5 (id is defined except for ⊥, and the equation holds for ⊥). Thus the recursion theorem gives

f ≡ p → g; ... ; p∘kⁿ → Qⁿ(g); ...

where

Qⁿ(g) ≡ h∘2∘[id, Qⁿ⁻¹(g)∘k] ≡ h∘Qⁿ⁻¹(g)∘k ≡ hⁿ∘g∘kⁿ   by I.5. □
12.6.1 Example: Correctness Proof for an Iterative Factorial Function. Let f be the solution of

f ≡ eq0∘1 → 2; f∘[s∘1, ×]

where Def s ≡ -∘[id, 1̄] (subtract 1). We want to prove that f:⟨x,1⟩ = x! iff x is a nonnegative integer. Let p ≡ eq0∘1, g ≡ 2, h ≡ id, k ≡ [s∘1, ×]. Then

f ≡ p → g; h∘f∘k

and so

f ≡ p → g; ... ; p∘kⁿ → g∘kⁿ; ...   (1)

by the iteration theorem, since hⁿ ≡ id. We want to show that

pair → kⁿ ≡ [aₙ, bₙ]   (2)

holds for every n ≥ 1, where

aₙ ≡ sⁿ∘1   (3)
bₙ ≡ /×∘[sⁿ⁻¹∘1, ..., s∘1, 1, 2].   (4)

Now (2) holds for n = 1 by definition of k. We assume it holds for some n ≥ 1 and prove it then holds for n + 1. Now

pair → kⁿ⁺¹ ≡ k∘kⁿ ≡ [s∘1, ×]∘[aₙ, bₙ]   (5)

since (2) holds for n. And so

pair → kⁿ⁺¹ ≡ [s∘aₙ, ×∘[aₙ, bₙ]]   by I.1 and I.5.   (6)

To pass from (5) to (6) we must check that whenever aₙ or bₙ yield ⊥ in (5), so will the right side of (6). Now

s∘aₙ ≡ sⁿ⁺¹∘1 ≡ aₙ₊₁   (7)
×∘[aₙ, bₙ] ≡ /×∘[sⁿ∘1, ..., s∘1, 1, 2] ≡ bₙ₊₁   by I.3.   (8)

Combining (6), (7), and (8) gives

pair → kⁿ⁺¹ ≡ [aₙ₊₁, bₙ₊₁].   (9)

Thus (2) holds for n = 1 and holds for n + 1 whenever it holds for n; therefore, by induction, it holds for every n ≥ 1. Now (2) gives, for pairs:

defined∘kⁿ → p∘kⁿ ≡ eq0∘1∘[aₙ, bₙ] ≡ eq0∘aₙ ≡ eq0∘sⁿ∘1   (10)
defined∘kⁿ → g∘kⁿ ≡ 2∘[aₙ, bₙ] ≡ /×∘[sⁿ⁻¹∘1, ..., s∘1, 1, 2]   (11)

(both use I.5). Now (1) tells us that f:⟨x,1⟩ is defined iff there is an n such that p∘kⁱ:⟨x,1⟩ = F for all i < n, and p∘kⁿ:⟨x,1⟩ = T, that is, by (10), eq0∘sⁿ:x = T, i.e., x = n; and g∘kⁿ:⟨x,1⟩ is defined, in which case, by (11),

f:⟨x,1⟩ = /×:⟨1, 2, ..., x−1, x, 1⟩ = n! □

which is what we set out to prove.
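Read operationally, the program of 12.6.1 is an accumulating loop. The following Haskell sketch is mine (factIter is an invented name), showing f ≡ eq0∘1 → 2; f∘[s∘1, ×] as a loop on the pair ⟨x, accumulator⟩, so that factIter (x, 1) should be x!.

factIter :: (Integer, Integer) -> Integer
factIter (x, acc)
  | x == 0    = acc                        -- eq0 . 1 -> 2
  | otherwise = factIter (x - 1, x * acc)  -- f . [s . 1, x]

main :: IO ()
main = print [ factIter (x, 1) | x <- [0 .. 5] ]   -- [1,1,2,6,24,120]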
12.6.2 Example: Proof of Equivalence of Two Iterative Programs. In this example we want to prove that two iteratively defined programs, f and g, are the same function. Let f be the solution of

f ≡ p∘1 → 2; h∘f∘[k∘1, 2].   (1)

Let g be the solution of

g ≡ p∘1 → 2; g∘[k∘1, h∘2].   (2)

Then, by the iteration theorem:

f ≡ p₀ → q₀; ... ; pₙ → qₙ; ...   (3)
g ≡ p₀' → q₀'; ... ; pₙ' → qₙ'; ...   (4)

where (letting r⁰ ≡ id for any r), for n = 0, 1, ...

pₙ ≡ p∘1∘[k∘1, 2]ⁿ ≡ p∘1∘[kⁿ∘1, 2]   by I.5.1   (5)
qₙ ≡ hⁿ∘2∘[k∘1, 2]ⁿ ≡ hⁿ∘2∘[kⁿ∘1, 2]   by I.5.1   (6)
pₙ' ≡ p∘1∘[k∘1, h∘2]ⁿ ≡ p∘1∘[kⁿ∘1, hⁿ∘2]   by I.5.1   (7)
qₙ' ≡ 2∘[k∘1, h∘2]ⁿ ≡ 2∘[kⁿ∘1, hⁿ∘2]   by I.5.1   (8)

Now, from the above, using I.5,

defined∘2 → pₙ ≡ p∘kⁿ∘1   (9)
defined∘hⁿ∘2 → pₙ' ≡ p∘kⁿ∘1   (10)
defined∘kⁿ∘1 → qₙ ≡ qₙ' ≡ hⁿ∘2.   (11)

Thus

defined∘hⁿ∘2 → defined∘2   (12)
defined∘hⁿ∘2 → pₙ ≡ pₙ'   (13)

and

f ≡ p₀ → q₀; ... ; pₙ → hⁿ∘2; ...   (14)
g ≡ p₀' → q₀'; ... ; pₙ' → hⁿ∘2; ...   (15)

since pₙ and pₙ' provide the qualification needed for qₙ ≡ qₙ' ≡ hⁿ∘2.

Now suppose there is an x such that f:x ≠ g:x. Then there is an n such that pᵢ:x = pᵢ':x = F for i < n, and pₙ:x ≠ pₙ':x. From (12) and (13) this can only happen when hⁿ∘2:x = ⊥. But since h is ⊥-preserving, hᵐ∘2:x = ⊥ for all m ≥ n. Hence f:x = g:x = ⊥ by (14) and (15). This contradicts the assumption that there is an x for which f:x ≠ g:x. Hence f ≡ g. This example (by J. H. Morris, Jr.) is treated more elegantly in [16] on p. 498. However, some may find that the above treatment is more constructive, leads one more mechanically to the key questions, and provides more insight into the behavior of the two functions.
12.7 Nonlinear Equations

The earlier examples have concerned 'linear' equations (in which the 'unknown' function does not have an argument involving itself). The question of the existence of simple expansions that 'solve' 'quadratic' and higher order equations remains open. The earlier examples concerned solutions of f ≡ E(f), where E is linearly expansive. The following example involves an E(f) that is quadratic and expansive (but not linearly expansive).

12.7.1 Example: Proof of Idempotency ([16], p. 497). Let f be the solution of

f ≡ E(f) ≡ p → id; f²∘h.   (1)

We wish to prove that f ≡ f². We verify that E is expansive (Section 12.4.1) with the following approximating functions:

f₀ ≡ ⊥̄   (2a)
fₙ ≡ p → id; p∘h → h; ... ; p∘hⁿ⁻¹ → hⁿ⁻¹; ⊥̄   for n > 0.   (2b)

First we note that p → fₙ ≡ id and so

p∘hⁱ → fₙ∘hⁱ ≡ hⁱ.   (3)

Now

E(fₙ) ≡ p → id; fₙ²∘h   (4)

and

E(fₙ) ≡ p → id; fₙ∘(p → id; ... ; p∘hⁿ⁻¹ → hⁿ⁻¹; ⊥̄)∘h
≡ p → id; fₙ∘(p∘h → h; ... ; p∘hⁿ → hⁿ; ⊥̄)
≡ p → id; p∘h → fₙ∘h; ... ; p∘hⁿ → fₙ∘hⁿ; fₙ∘⊥̄
≡ p → id; p∘h → h; ... ; p∘hⁿ → hⁿ; ⊥̄   by (3)
≡ fₙ₊₁.   (5)

Thus E is expansive by (4) and (5); so by (2) and Section 12.4.1 (E4)

f ≡ p → id; p∘h → h; ... ; p∘hⁿ → hⁿ; ...   (6)

But (6), by the iteration theorem, gives

f ≡ p → id; f∘h.   (7)

Now, if p:x = T, then f:x = x = f²:x, by (1). If p:x = F, then

f:x = f²∘h:x   by (1)
= f:(f∘h:x) = f:(f:x)   by (7)
= f²:x.

If p:x is neither T nor F, then f:x = ⊥ = f²:x. Thus f ≡ f².
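The conclusion that the quadratic equation (1) and the linear form (7) have the same solution can also be tried out on examples. The following Haskell check is mine, not the paper's; the particular p and h (x ≥ 10 and x + 3) are arbitrary stand-ins chosen only so that both recursions terminate on the test range.

p :: Int -> Bool
p x = x >= 10

h :: Int -> Int
h x = x + 3

fQuad, fLin :: Int -> Int
fQuad x = if p x then x else fQuad (fQuad (h x))   -- f = p -> id; f . f . h
fLin  x = if p x then x else fLin (h x)            -- f = p -> id; f . h

main :: IO ()
main = print (all (\x -> fQuad x == fLin x) [0 .. 20])   -- True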
12.8 Foundations for the Algebra of Programs

Our purpose in this section is to establish the validity of the results stated in Section 12.4. Subsequent sections do not depend on this one, hence it can be skipped by readers who wish to do so. We use the standard concepts and results from [16], but the notation used for objects and functions, etc., will be that of this paper. We take as the domain (and range) for all functions the set O of objects (which includes ⊥) of a given FP system. We take F to be the set of functions, and the set of functional forms of that FP system. We write E(f) for any function expression involving functional forms, primitive and defined functions, and the function symbol f; and we regard E as a functional that maps a function f into the corresponding function E(f). We assume that all f ∈ F are ⊥-preserving and that all functional forms correspond to continuous functionals in every variable (e.g., [f,g] is continuous in both f and g). (All primitive functions of the FP system given earlier are ⊥-preserving, and all its functional forms are continuous.)

Definitions. Let E(f) be a function expression. Let

fᵢ₊₁ ≡ p₀ → q₀; ... ; pᵢ → qᵢ; ⊥̄   for i = 0, 1, ...

where pᵢ, qᵢ ∈ F. Let E have the property that

E(fᵢ) ≡ fᵢ₊₁   for i = 0, 1, ....

Then E is said to be expansive with the approximating functions fᵢ. We write

f ≡ p₀ → q₀; ... ; pₙ → qₙ; ...

to mean that f = limᵢ{fᵢ}, where the fᵢ have the form above. We call the right side an infinite expansion of f. We take f:x to be defined iff there is an n ≥ 0 such that (a) pᵢ:x = F for all i < n, and (b) pₙ:x = T, and (c) qₙ:x is defined, in which case f:x = qₙ:x.

EXPANSION THEOREM. Let E(f) be expansive with approximating functions as above. Let f be the least function satisfying

f ≡ E(f).

Then

f ≡ p₀ → q₀; ... ; pₙ → qₙ; ...

PROOF. Since E is the composition of continuous functionals involving only monotonic functions (⊥-preserving functions from F) as constant terms, E is continuous [16, p. 493]. Therefore its least fixed point f is limitᵢ{Eⁱ(⊥̄)} ≡ limitᵢ{fᵢ} [16, p. 494], which by definition is the above infinite expansion for f. □

Definition. Let E(f) be a function expression satisfying the following:

E(h) ≡ p₀ → q₀; E₁(h)   for all h ∈ F   (LE1)

where pᵢ ∈ F and qᵢ ∈ F exist such that

E₁(pᵢ → qᵢ; h) ≡ pᵢ₊₁ → qᵢ₊₁; E₁(h)   for all h ∈ F and i = 0, 1, ...   (LE2)

and

E₁(⊥̄) ≡ ⊥̄.   (LE3)

Then E is said to be linearly expansive with respect to these pᵢ's and qᵢ's.

LINEAR EXPANSION THEOREM. Let E be linearly expansive with respect to pᵢ and qᵢ, i = 0, 1, .... Then E is expansive with approximating functions

f₀ ≡ ⊥̄   (1)
fᵢ₊₁ ≡ p₀ → q₀; ... ; pᵢ → qᵢ; ⊥̄.   (2)

PROOF. We want to show that E(fᵢ) ≡ fᵢ₊₁ for any i ≥ 0. Now

E(f₀) ≡ p₀ → q₀; E₁(⊥̄) ≡ p₀ → q₀; ⊥̄ ≡ f₁   by (LE1), (LE3), (1).   (3)

Let i > 0 be fixed and let

fᵢ ≡ p₀ → q₀; w₁   (4a)
w₁ ≡ p₁ → q₁; w₂   (4b)
etc.
wᵢ₋₁ ≡ pᵢ₋₁ → qᵢ₋₁; ⊥̄.   (4-)

Then, for this i > 0,

E(fᵢ) ≡ p₀ → q₀; E₁(fᵢ)   by (LE1)
E₁(fᵢ) ≡ p₁ → q₁; E₁(w₁)   by (LE2) and (4a)
E₁(w₁) ≡ p₂ → q₂; E₁(w₂)   by (LE2) and (4b)
etc.
E₁(wᵢ₋₁) ≡ pᵢ → qᵢ; E₁(⊥̄)   by (LE2) and (4-)
≡ pᵢ → qᵢ; ⊥̄   by (LE3).

Combining the above gives

E(fᵢ) ≡ fᵢ₊₁   for arbitrary i > 0, by (2).   (5)

By (3), (5) also holds for i = 0; thus it holds for all i ≥ 0. Therefore E is expansive and has the required approximating functions. □

COROLLARY. If E is linearly expansive with respect to pᵢ and qᵢ, i = 0, 1, ..., and f is the least function satisfying

f ≡ E(f)   (LE4)

then

f ≡ p₀ → q₀; ... ; pₙ → qₙ; ...
12.9
The Algebra of Programs for the Lambda Calculus and for Combinators Because Church's lambda calculus [5] and the system of combinators developed by Schonfinkel and Curry [6] are the primary mathematical systems for representing the notion of application of functions, and because they are more powerful than FP systems, it is natural to enquire what an algebra of programs based on those systems would look like. 106 JOHN BACKUS
The lambda calculus and combinator equivalents of FP composition, fog, are Xfgx.(f(gx))
-
B
where B is a simple combinator defined by Curry. There is no direct equivalent for the FP object (xy) in the Church or Curry systems proper; however, following Landin [14] and Burge [4], one can use the primitive functions prefix, head, tail, null, and atomic to introduce the notion of list structures that correspond to FP sequences. Then, using FP notation for lists, the lambda calculus equivalent for construction is Xfgx. (fx, gx). A combinatory equivalent is an expression involving prefix, the null list, and two or more basic combinators. It is so complex that I shall not attempt to give it. If one uses the lambda calculus or combinatory expressions for the functional forms fog and [fg] to express the law I.1 in the FP algebra, [f,g] oh [foh,g oh], the result is an expression so complex that the sense of the law is obscured. The only way to make that sense clear in either system is to name the two functionals: composition = B, and construction = A, so that Bfg ~fog, and Afg = [fig]. Then I.1 becomes B(Afg)h = A(Bfh)(Bgh) which is still not as perspicuous as the FP law. The point of the above is that if one wishes to state clear laws like those of the FP algebra in either Church's or Curry's system, one finds it necessary to select certain functionals (e.g., composition and construction) as the basic operations of the algebra and to either give them short names or, preferably, represent them by some special notation as in FP. If one does this and provides primitives, objects, lists, etc., the result is an FP-like system in which the usual lambda expressions or combinators do not appear. Even then these Church or Curry versions of FP systems, being less restricted, have some problems that FP systems do not have: (a) The Church and Curry versions accommodate functions of many types and can define functions that do not exist in FP systems. Thus, Bf is a function that has no counterpart in FP systems. This added power carries with it problems of type compatibility. For example, in fog, is the range of g included in the domain of f? In FP systems all functions have the same domain and range. (b) The semantics of Church's lambda calculus depends on substitution rules that are simply stated but whose implications are very difficult to fully comprehend. The true complexity of these rules is not widely recognized but is evidenced by the succession of able logicians who have published 'proofs' of the Church-Rosser theorem that failed to account for one or another of these complexities. (The A Functional Style and Its Algebra of Programs
107
Church-Rosser theorem, or Scott's proof of the existence of a model [22], is required to show that the lambda calculus has a consistent semantics.) The definition of pure Lisp contained a related error for a considerable period (the 'funarg' problem). Analogous problems attach to Curry's system as well. In contrast, the formal (FFP) version of FP systems (described in the next section) has no variables and only an elementary substitution rule (a function for its named, and it can be shown to have a consistent semantics by a relatively simple fixed-point argument along the lines developed by Dana Scott and by Manna et al. [16]. For such a proof see McJones [18].
12.10 Remarks The algebra of programs ou ned above needs much work to provide expansions for larger classes or equations and to extend its laws and theorems beyond the elementary ones given here. It would be interesting to explore the algebra for an FP-like system whose sequence constructor is not Ž-preserving (law 1.5 is strengthened, but IV. 1 is lost). Other interesting problems are: (a) Find rules that make expansions unique, giving canonical forms for functions; (b) find algorithms for expanding and analyzing the behavior of functions for various classes of arguments; and (c) explore ways of using the laws and theorems of the algebra as the basic rules either of a formal, preexecution 'lazy evaluation' scheme [9, 10], or of one which operates during execution. Such schemes would, for example, make use of the law 10 [f,g]
13 Formal Systems for Functional Programming (FFP Systems) 13.1 Introduction As we have seen, an FP system has a set of functions that depends on its set of primitive functions, its set of functional forms, and its set of definitions. In particular, its set of functional forms is fixed once and for all, and this set determines the power of the system in a major way. For example, if its set of functional forms is empty, then its entire set of functions is just the set of primitive functions. In FFP systems one can create new functional forms. Functional forms are represented by object sequences; the first element of a sequence determines which lorm it represents, while the remaining elements are the parameters of the form. 108 JOHN BACKUS
The ability to define new functional forms in FFP systems is one consequence of the principal difference between them and FP systems: in FFP systems objects are used to 'represent' functions in a systematic way. Otherwise FFP systems mirror FP systems closely. They are similar to, but simpler than, the Reduction (Red) languages of an earlier paper [2]. We shall first give the simple syntax of FFP systems, then discuss their semantics informally, giving examples, and finally give their formal semantics.
13.2 Syntax We describe the set 0 of objects and the set E of expressions of an FFP system. These depend on the choice of some set A of atoms, which we take as given. We assume that T (true), F (false), k (the empty sequence), and # (default) belong to A, as well as 'numbers' of various kinds, etc. (1) Bottom, i, is an object but not an atom. (2) Every atom is an object. (3) Every object is an expression. (4) If xi, ... , Xn are objects [expressions], then (xl, ... , xn) is an object [resp., expression] called a sequence (of length n) for n> 1. The object [expression] xi for 1 < i < n, is the ith element of the sequence (xl, ... , ... , Xn). (O is both a sequence and an atom; its length is 0.) (5) If x and y are expressions, then (x:y) is an expression called an application. x is its operator and y is its operand. Both are elements of the expression. (6) If x = (x1, ... , Xn) and if one of the elements of x is l, then x = i. That is, (..., l, ... ) = l. (7) All objects and expressions are formed by finite use of the above rules. A subexpression of an expression x is either x itself or a subexpression of an element of x. An FFP object is an expression that has no application as a subexpression. Given the same set of atoms, FFP and FP objects are the same.
13.3 Informal Remarks About FFP Semantics 13.3.1 The Meaning of Expressions; the Semantic Function s. Every FFP expression e has a meaning, ye, which is always an object; pe is found by repeatedly replacing each innermost application in e by its meaning. If this process is nonterminating, the meaning of e is 1. The meaning of an innermost application (x:y) (since it is innermost, A Functional Style and Its Algebra of Programs
109
x and y must be objects) is the result of applying the function represented by x to y, just as in FP systems, except that in FFP systems functions are represented by objects, rather than by function expressions, with atoms (instead of function symbols) representing primitive and defined functions, and with sequences representing the FP functions denoted by functional forms. The association between objects and the functions they represent is given by the representationfunction, p, of the FFP system. (Both p and lb belong to the description of the system, not the system itself.) Thus if the atom NULL re presents the FP function null, then pNULL = null and the meaning of (NULL:A) is 1 i(NULL:A) = (pNULL):A = null:A = F. From here on, as above, we use the colon in two senses. When it is between two objects, as is (NULL:A), it identifies an FFP application that denotes only itself; when it comes between a function and an object, as in (pNULL):A or null:A, it identifies an FP-like application that denotes the result of applying the function to the object. The fact that FFP operators are objects makes possible a function, apply, which is meaningless in FP systems: apply:(xy) = (x:y). The result of apply:(xy), namely, (x:y), is meaningless in FP systems on two levels. First, (x:y) is nol. itself an object; it illustrates another difference between FP and FF1' systems: some FFP functions, like apply, map objects into expressions, not directly into objects as FP functions do. However, the meaning of apply: (xy) is an object (see below). Second, (x:y) could not be even an intermediate result in an FP system; it is meaningless in FP systems since x is an object, not a function, and FP systems do not associate functions with objects. Now if APPLY represents apply, then the meaning of (APPLY: (NULL,A)) is 1i(APPLY: (NULL,A)) = t((p.4PPLY):(NULLA)) = yI(apply:(NULL,A)) = jt(NIJLL:A) = s((pNULL):A) = g(null:A) = [F = F. The last step follows from the fact that every object is its own meaning. Since the meaning function (eventually evaluates all applications, one can think of apply: (NULL, A) as yielding F even though the actual result is (NULL:A). 13.3.2 How Objects Represent Functions; the Representation Function p. As we have seen, some atoms (primitive atoms) will represent the primitive func-tins of the system. Other atoms can 110 JOHN BACKUS
represent defined functions just as symbols can in FP systems. If an atom is neither primitive nor defined, it represents ⊥, the function which is ⊥ everywhere. Sequences also represent functions and are analogous to the functional forms of FP. The function represented by a sequence is given (recursively) by the following rule.

Metacomposition rule. (ρ(x1, ..., xn)):y = (ρx1):((x1, ..., xn), y),

where the xi's and y are objects. Here ρx1 determines what functional form (x1, ..., xn) represents, and x2, ..., xn are the parameters of the form (in FFP, x1 itself can also serve as a parameter). Thus, for example, let Def ρCONST ≡ 2∘1; then (CONST,x) in FFP represents the FP constant functional form x̄, since, by the metacomposition rule, if y ≠ ⊥,
(ρ(CONST,x)):y = (ρCONST):((CONST,x),y) = 2∘1:((CONST,x),y) = x.

Here we can see that the first, controlling, operator of a sequence or form, CONST in this case, always has as its operand, after metacomposition, a pair whose first element is the sequence itself and whose second element is the original operand of the sequence, y in this case. The controlling operator can then rearrange and reapply the elements of the sequence and original operand in a great variety of ways. The significant point about metacomposition is that it permits the definition of new functional forms, in effect, merely by defining new functions. It also permits one to write recursive functions without a definition. We give one more example of a controlling function for a functional form: Def ρCONS ≡ αapply∘tl∘distr. This definition results in (CONS, f1, ..., fn), where the fi are objects, representing the same function as [ρf1, ..., ρfn]. The following shows this.

(ρ(CONS, f1, ..., fn)):x
  = (ρCONS):((CONS, f1, ..., fn), x)          by metacomposition
  = αapply∘tl∘distr:((CONS, f1, ..., fn), x)  by def of ρCONS
  = αapply:((f1,x), ..., (fn,x))              by def of tl and distr
  = (apply:(f1,x), ..., apply:(fn,x))         by def of α
  = ((f1:x), ..., (fn:x))                     by def of apply.

In evaluating the last expression, the meaning function μ will produce the meaning of each application, giving ρfi:x as the ith element.
Usually, in describing the function represented by a sequence, we shall give its overall effect rather than show how its controlling operator achieves that effect. Thus we would simply write

(ρ(CONS, f1, ..., fn)):x = ((f1:x), ..., (fn:x))

instead of the more detailed account above. We need a controlling operator, COMP, to give us sequences representing the functional form composition. We take ρCOMP to be a primitive function such that, for all objects x,

(ρ(COMP, f1, ..., fn)):x = (f1:(f2:(...:(fn:x)...))) for n ≥ 1.
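To make the mechanism concrete, here is a small sketch of the metacomposition rule under an assumed encoding (atoms as Python strings, sequences as tuples, and a PRIMITIVES table standing in for ρ on atoms); the helper names const and cons are illustrative, not part of the paper.

```python
# A sketch of the metacomposition rule under an assumed encoding: atoms are
# Python strings, sequences are tuples, and PRIMITIVES stands in for rho on atoms.

def rho(x):
    """Return the (Python) function represented by the object x."""
    if isinstance(x, tuple):                      # metacomposition rule:
        return lambda y: rho(x[0])((x, y))        # (rho(x1,...,xn)):y = (rho x1):((x1,...,xn), y)
    return PRIMITIVES.get(x, lambda y: "BOTTOM")  # an undefined atom represents the bottom function

def const(pair):        # controlling operator for (CONST, x): select x from ((CONST, x), y)
    seq, _y = pair
    return seq[1]

def cons(pair):         # controlling operator for (CONS, f1, ..., fn): apply each fi to the operand
    seq, y = pair
    return tuple(rho(f)(y) for f in seq[1:])

PRIMITIVES = {"CONST": const, "CONS": cons, "ID": lambda y: y}

print(rho(("CONST", "A"))("B"))        # -> 'A'        ((CONST, A) is the constant function A)
print(rho(("CONS", "ID", "ID"))("B"))  # -> ('B', 'B') ((CONS, ID, ID) behaves like [id, id])
```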
(I am indebted to Paul McJones for his observation that ordinary composition could be achieved by this primitive function rather than by using two composition rules in the basic semantics, as was done in an earlier paper [2].) Although FFP systems permit the definition and investigation of new functional forms, it is to be expected that most programming would use a fixed set of forms (whose controlling operators are primitives), as in FP, so that the algebraic laws for those forms could be employed, and so that a structured programming style could be used based on those forms. In addition to its use in defining functional forms, metacomposition can be used to create recursive functions directly without the use of recursive definitions of the form Def f ≡ E(f). For example, if Def ρMLAST ≡ null∘tl∘2 → 1∘2; apply∘[1, tl∘2], then ρ(MLAST) ≡ last, where last:x ≡ x = (x1, ..., xn) → xn; ⊥. Thus the operator (MLAST) works as follows:

μ((MLAST):(A,B))
  = μ(ρMLAST:((MLAST),(A,B)))            by metacomposition
  = μ(apply∘[1, tl∘2]:((MLAST),(A,B)))
  = μ(apply:((MLAST),(B)))
  = μ((MLAST):(B))
  = μ(ρMLAST:((MLAST),(B)))
  = μ(1∘2:((MLAST),(B))) = B.

13.3.3 Summary of the Properties of ρ and μ. So far we have shown how ρ maps atoms and sequences into functions and how those functions map objects into expressions. Actually, ρ and all FFP functions can be extended so that they are defined for all expressions.
With such extensions the properties of ρ and μ can be summarized as follows:
(1) μ ∈ [expressions → objects].
(2) If x is an object, μx = x.
(3) If e is an expression and e = (e1, ..., en), then μe = (μe1, ..., μen).
(4) ρ ∈ [expressions → [expressions → expressions]].
(5) For any expression e, ρe = ρ(μe).
(6) If x is an object and e an expression, then ρx:e = ρx:(μe).
(7) If x and y are objects, then μ(x:y) = μ(ρx:y). In words: the meaning of an FFP application (x:y) is found by applying ρx, the function represented by x, to y and then finding the meaning of the resulting expression (which is usually an object and is then its own meaning).
13.3.4 Cells, Fetching, and Storing. For a number of reasons it is convenient to create functions which serve as names. In particular, we shall need this facility in describing the semantics of definitions in FFP systems. To introduce naming functions, that is, the ability to fetch the contents of a cell with a given name from a store (a sequence of cells) and to store a cell with given name and contents in such a sequence, we introduce objects called cells and two new functional forms, fetch and store.

Cells. A cell is a triple (CELL, name, contents). We use this form instead of the pair (name, contents) so that cells can be distinguished from ordinary pairs.

Fetch. The functional form fetch takes an object n as its parameter (n is customarily an atom serving as a name); it is written ↑n (read 'fetch n'). Its definition for objects n and x is

↑n:x ≡ x = φ → #; atom:x → ⊥; (1:x) = (CELL,n,c) → c; ↑n∘tl:x

where # is the atom 'default.' Thus ↑n (fetch n) applied to a sequence gives the contents of the first cell in the sequence whose name is n; if there is no cell named n, the result is default, #. Thus ↑n is the name function for the name n. (We assume that ρFETCH is the primitive function such that ρ(FETCH,n) ≡ ↑n. Note that ↑n simply passes over elements in its operand that are not cells.)

Store and push, pop, purge. Like fetch, store takes an object n as its parameter; it is written ↓n ('store n'). When applied to a pair (x,y), where y is a sequence, ↓n removes the first cell named n from y, if any, then creates a new cell named n with contents x and appends it to y. Before defining ↓n (store n) we shall specify four auxiliary functional forms. (These can be used in combination with fetch n and store n
to obtain multiple, named, LIFO stacks within a storage sequence.) Two of these auxiliary forms are specified by recursive functional equations; each takes an object n as its parameter.

(cellname n) ≡ atom → F; eq∘[length, 3] → eq∘[[CELL, n], [1, 2]]; F
(push n) ≡ pair → apndl∘[[CELL, n, 1], 2]; ⊥
(pop n) ≡ null → φ; (cellname n)∘1 → tl; apndl∘[1, (pop n)∘tl]
(purge n) ≡ null → φ; (cellname n)∘1 → (purge n)∘tl; apndl∘[1, (purge n)∘tl]
↓n ≡ pair → (push n)∘[1, (pop n)∘2]; ⊥
The above functional forms work as follows. For x ≠ ⊥, (cellname n):x is T if x is a cell named n; otherwise it is F. (pop n):y removes the first cell named n from a sequence y; (purge n):y removes all cells named n from y. (push n):(x,y) puts a cell named n with contents x at the head of sequence y; ↓n:(x,y) is (push n):(x,(pop n):y). (Thus (push n):(x,y) = y' pushes x onto the top of a 'stack' named n in y'; x can be read by ↑n:y' = x and can be removed by (pop n):y'; thus ↑n∘(pop n):y' is the element below x in the stack n, provided there is more than one cell named n in y'.)

13.3.5 Definitions in FFP Systems. The semantics of an FFP system depends on a fixed set of definitions D (a sequence of cells), just as an FP system depends on its informally given set of definitions. Thus the semantic function μ depends on D; altering D gives a new μ' that reflects the altered definitions. We have represented D as an object because in AST systems (Section 14) we shall want to transform D by applying functions to it and to fetch data from it, in addition to using it as the source of function definitions in FFP semantics. If (CELL,n,c) is the first cell named n in the sequence D (and n is an atom), then it has the same effect as the FP definition Def n ≡ ρc, that is, the meaning of (n:x) will be the same as that of ρc:x. Thus, for example, if (CELL,CONST,(COMP,2,1)) is the first cell in D named CONST, then it has the same effect as Def CONST ≡ 2∘1, and the FFP system with that D would find μ(CONST:((x,y),z)) = y and consequently
μ((CONST,A):B) = A. In general, in an FFP system with definitions D, the meaning of an application of the form (atom:x) is dependent on D; if ↑atom:D ≠ # (that is, atom is defined in D) then its meaning is μ(c:x), where c = ↑atom:D, the contents of the first cell in D named atom. If ↑atom:D = #, then atom is not defined in D and either atom is primitive, i.e., the system knows how to compute ρatom:x, and μ(atom:x) = μ(ρatom:x); otherwise μ(atom:x) = ⊥.
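The cell machinery of 13.3.4 is easy to mimic. The sketch below uses an assumed Python encoding of stores and cells; the names fetch, pop, purge, and store are the obvious stand-ins for ↑n, (pop n), (purge n), and ↓n, and nothing here is prescribed by the paper.

```python
# A sketch of cells, fetch, and store on a storage sequence.  Assumed encoding:
# a store is a tuple whose cell entries have the form ("CELL", name, contents);
# any other element is simply passed over, as in the paper's fetch.

DEFAULT = "#"

def is_cell(x, n):
    return isinstance(x, tuple) and len(x) == 3 and x[0] == "CELL" and x[1] == n

def fetch(n, y):
    """Contents of the first cell named n in sequence y, or the default atom '#'."""
    for item in y:
        if is_cell(item, n):
            return item[2]
    return DEFAULT

def pop(n, y):
    """Remove the first cell named n from y, if any."""
    for i, item in enumerate(y):
        if is_cell(item, n):
            return y[:i] + y[i + 1:]
    return y

def purge(n, y):
    """Remove every cell named n from y."""
    return tuple(item for item in y if not is_cell(item, n))

def store(n, x, y):
    """store n applied to (x, y): pop the old cell named n, push a new one with contents x."""
    return (("CELL", n, x),) + pop(n, y)

d = store("FILE", ("rec1", "rec2"), ())
d = store("KEY", "query-key", d)
print(fetch("FILE", d))    # -> ('rec1', 'rec2')
print(fetch("INPUT", d))   # -> '#'  (no such cell: the default atom)
```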
13.4 Formal Semantics for FFP Systems
We assume that a set A of atoms, a set D of definitions, a set P ⊂ A of primitive atoms and the primitive functions they represent have all been chosen. We assume that ρa is the primitive function represented by a if a belongs to P, and that ρa = ⊥ if a belongs to Q, the set of atoms in A-P that are not defined in D. Although ρ is defined for all expressions (see 13.3.3), the formal semantics uses its definition only on P and Q. The functions that ρ assigns to other expressions x are implicitly determined and applied in the following semantic rules for evaluating μ(x:y). The above choices of A and D, and of P and the associated primitive functions determine the objects, expressions, and the semantic function μD for an FFP system. (We regard D as fixed and write μ for μD.) We assume D is a sequence and that ↑y:D can be computed (by the function ↑y as given in Section 13.3.4) for any atom y. With these assumptions we define μ as the least fixed point of the functional τ, where the function τμ is defined as follows for any function μ (for all expressions x, xi, y, yi, z, and w):

(τμ)x ≡ x ∈ A → x;
  x = (x1, ..., xn) → (μx1, ..., μxn);
  x = (y:z) →
    (y ∈ A & (↑y:D) = # → μ((ρy):(μz));
     y ∈ A & (↑y:D) = w → μ(w:z);
     y = (y1, ..., yn) → μ(y1:(y,z));
     μ((μy):z)); ⊥
The above description of μ expands the operator of an application by definitions and by metacomposition before evaluating the operand. It is assumed that predicates like 'x ∈ A' in the above definition of τμ are ⊥-preserving (e.g., '⊥ ∈ A' has the value ⊥) and that the conditional expression itself is also ⊥-preserving. Thus (τμ)⊥ ≡ ⊥ and (τμ)(⊥:z) ≡ ⊥. This concludes the semantics of FFP systems.
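A toy evaluator in the spirit of these rules is sketched below. The encoding is an assumption made for illustration (atoms as strings, sequences as tuples, an application (y:z) as a two-element list, D as a tuple of cells), and the composition 2∘1 used for CONST in 13.3.5 is folded into a single assumed primitive; a faithful implementation would also need ⊥-handling and the least-fixed-point reading.

```python
# A toy meaning function in the spirit of the rules above (illustrative only).
# Assumed encoding: atoms are strings, sequences are tuples, an application
# (y:z) is the 2-element list [y, z], and D is a tuple of ("CELL", name, contents).

def fetch(name, d):
    for c in d:
        if isinstance(c, tuple) and len(c) == 3 and c[:2] == ("CELL", name):
            return c[2]
    return "#"                                   # the default atom

def meaning(x, d, prim):
    if isinstance(x, str):                       # an atom is its own meaning
        return x
    if isinstance(x, tuple):                     # a sequence: the sequence of meanings
        return tuple(meaning(e, d, prim) for e in x)
    y, z = x                                     # x is an application (y:z)
    if isinstance(y, str):
        w = fetch(y, d)
        if w != "#":                             # defined atom: expand its definition
            return meaning([w, z], d, prim)
        return meaning(prim[y](meaning(z, d, prim)), d, prim)   # primitive atom (assumed present)
    if isinstance(y, tuple):                     # metacomposition: (y1 : (y, z))
        return meaning([y[0], (y, z)], d, prim)
    return meaning([meaning(y, d, prim), z], d, prim)           # evaluate the operator first

# CONST defined in D by a single assumed primitive playing the role of 2 o 1.
PRIM = {"SECOND-OF-FIRST": lambda s: s[0][1]}
D = (("CELL", "CONST", "SECOND-OF-FIRST"),)

print(meaning([("CONST", "A"), "B"], D, PRIM))   # -> 'A'
```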
14
Applicative State Transition Systems (AST Systems)
14.1 Introduction
This section sketches a class of systems mentioned earlier as alternatives to von Neumann systems. It must be emphasized again that these applicative state transition systems are put forward not as practical programming systems in their present form, but as examples of a class in which applicative style programming is made available in a history-
sensitive, but non-von Neumann system. These systems are loosely coupled to states and depend on an underlying applicative system for both their programming language and the description of their state transitions. The underlying applicative system of the AST system described below is an FFP system, but other applicative systems could also be used. To understand the reasons for the structure of AST systems, it is helpful first to review the basic structure of a von Neumann system, Algol, observe its limitations, and compare it with the structure of AST systems. After that review a minimal AST system is described; a small, top-down, self-protecting system program for file maintenance and running user programs is given, with directions for installing it in the AST system and for running an example user program. The system program uses 'name functions' instead of conventional names and the user may do so too. The section concludes with subsections discussing variants of AST systems, their general properties, and naming systems.
14.2
The Structure of Algol Compared to That of AST Systems
An Algol program is a sequence of statements, each representing a transformation of the Algol state, which is a complex repository of information about the status of various stacks, pointers, and variable mappings of identifiers onto values, etc. Each statement communicates with this constantly changing state by means of complicated protocols peculiar to itself and even to its different parts (e.g., the protocol associated with the variable x depends on its occurrence on the left or right of an assignment, in a declaration, as a parameter, etc.). It is as if the Algol state were a complex 'store' that communicates with the Algol program through an enormous 'cable' of many specialized wires. The complex communications protocols of this cable are fixed and include those for every statement type. The 'meaning' of an Algol program must be given in terms of the total effect of a vast number of communications with the state via the cable and its protocols (plus a means for identifying the output and inserting the input into the state). By comparison with this massive cable to the Algol state/store, the cable that is the von Neumann bottleneck of a computer is a simple, elegant concept. Thus Algol statements are not expressions representing state-to-state functions that are built up by the use of orderly combining forms from simpler state-to-state functions. Instead they are complex messages with context-dependent parts that nibble away at the state. Each part transmits information to and from the state over the cable by its own protocols. There is no provision for applying general functions to the whole state and thereby making large changes in it. The possibility of large, powerful transformations of the state S by function application,
S → f:S, is in fact inconceivable in the von Neumann cable-and-protocol context: there could be no assurance that the new state f:S would match the cable and its fixed protocols unless f is restricted to the tiny changes allowed by the cable in the first place. We want a computing system whose semantics does not depend on a host of baroque protocols for communicating with the state, and we want to be able to make large transformations in the state by the application of general functions. AST systems provide one way of achieving these goals. Their semantics has two protocols for getting information from the state: (1) get from it the definition of a function to be applied, and (2) get the whole state itself. There is one protocol for changing the state: compute the new state by function application. Besides these communications with the state, AST semantics is applicative (i.e., FFP). It does not depend on state changes because the state does not change at all during a computation. Instead, the result of a computation is output and a new state. The structure of an AST state is slightly restricted by one of its protocols: It must be possible to identify a definition (i.e., cell) in it. Its structure (it is a sequence) is far simpler than that of the Algol state. Thus the structure of AST systems avoids the complexity and restrictions of the von Neumann state (with its communications protocols) while achieving greater power and freedom in a radically different and simpler framework.
14.3 Structure of an AST System
An AST system is made up of three elements:
(1) An applicative subsystem (such as an FFP system).
(2) A state D that is the set of definitions of the applicative subsystem.
(3) A set of transition rules that describe how inputs are transformed into outputs and how the state D is changed.
The programming language of an AST system is just that of its applicative subsystem. (From here on we shall assume that the latter is an FFP system.) Thus AST systems can use the FP programming style we have discussed. The applicative subsystem cannot change the state D and it does not change during the evaluation of an expression. A new state is computed along with output and replaces the old state when output is issued. (Recall that a set of definitions D is a sequence of cells; a cell name is the name of a defined function and its contents is the defining expression. Here, however, some cells may name data rather than functions; a data name n will be used in ↑n (fetch n) whereas a function name will be used as an operator itself.) We give below the transition rules for the elementary AST system
we shall use for examples of programs. These are perhaps the simplest of many possible transition rules that could determine the behavior of a great variety of AST systems.

14.3.1 Transition Rules for an Elementary AST System. When the system receives an input x, it forms the application (SYSTEM:x) and then proceeds to obtain its meaning in the FFP subsystem, using the current state D as the set of definitions. SYSTEM is the distinguished name of a function defined in D (i.e., it is the 'system program'). Normally the result is a pair μ(SYSTEM:x) = (o,d) where o is the system output that results from input x and d becomes the new state D for the system's next input. Usually d will be a copy or partly changed copy of the old state. If μ(SYSTEM:x) is not a pair, the output is an error message and the state remains unchanged.

14.3.2 Transition Rules: Exception Conditions and Startup. Once an input has been accepted, our system will not accept another (except (RESET,x), see below) until an output has been issued and the new state, if any, installed. The system will accept the input (RESET,x) at any time. There are two cases: (a) if SYSTEM is defined in the current state D, then the system aborts its current computation without altering D and treats x as a new normal input; (b) if SYSTEM is not defined in D, then x is appended to D as its first element. (This ends the complete description of the transition rules for our elementary AST system.) If SYSTEM is defined in D it can always prevent any change in its own definition. If it is not defined, an ordinary input x will produce μ(SYSTEM:x) = ⊥ and the transition rules yield an error message and an unchanged state; on the other hand, the input (RESET, (CELL,SYSTEM,s)) will define SYSTEM to be s.

14.3.3 Program Access to the State; the Function ρDEFS. Our FFP subsystem is required to have one new primitive function, defs, named DEFS such that for any object x ≠ ⊥, defs:x = ρDEFS:x = D where D is the current state and set of definitions of the AST system. This function allows programs access to the whole state for any purpose, including the essential one of computing the successor state.
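The sketch below mimics these transition rules in Python under an assumed encoding (states as tuples of ("CELL", name, contents) cells; the FFP subsystem is reduced to a callable passed in as meaning). It is a toy illustration of the rules, not the paper's system.

```python
# A sketch of the elementary transition rules above (hypothetical encoding).

def fetch(name, d):
    for c in d:
        if isinstance(c, tuple) and len(c) == 3 and c[:2] == ("CELL", name):
            return c[2]
    return "#"

def step(d, x, meaning):
    """One transition: input x against state d; returns (output, new state)."""
    # (RESET, x) when SYSTEM is undefined: install x as the first element of d.
    if isinstance(x, tuple) and len(x) == 2 and x[0] == "RESET":
        if fetch("SYSTEM", d) == "#":
            return "DONE", (x[1],) + d
        x = x[1]                                  # otherwise treat x as a normal input
    if fetch("SYSTEM", d) == "#":
        return "ERROR: no SYSTEM defined", d
    result = meaning(("SYSTEM", x), d)            # evaluate (SYSTEM : x) in the FFP subsystem
    if isinstance(result, tuple) and len(result) == 2:
        output, new_d = result
        return output, new_d                      # normal case: (output, new state)
    return "ERROR: result was not a pair", d      # exception case: state unchanged

# A stand-in "meaning" that echoes the input and keeps the state, for demonstration.
demo_meaning = lambda app, d: (("ECHO", app[1]), d)
state = ()
out, state = step(state, ("RESET", ("CELL", "SYSTEM", "loader")), demo_meaning)
print(out)                                        # -> DONE
out, state = step(state, "hello", demo_meaning)
print(out)                                        # -> ('ECHO', 'hello')
```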
14.4 An Example of a System Program The above description of our elementary AST system, plus the FFP
subsystem and the FP primitives and functional forms of earlier sections, specify a complete history-sensitive computing system. Its
input and output behavior is limited by its simple transition rules,
but otherwise it is a powerful system once it is equipped with a suitable set of definitions. As an example of its use we shall describe a small system program, its installation, and operation. Our example system program will handle queries and updates for a file it maintains, evaluate FFP expressions, run general user programs that do not damage the file or the state, and allow authorized users to change the set of definitions and the system program itself. All inputs it accepts will be of the form (key,input) where key is a code that determines both the input class (system-change, expression, program, query, update) and also the identity of the user and his authority to use the system for the given input class. We shall not specify a format for key. Input is the input itself, of the class given by key.

14.4.1 General Plan of the System Program. The state D of our AST system will contain the definitions of all nonprimitive functions needed for the system program and for users' programs. (Each definition is in a cell of the sequence D.) In addition, there will be a cell in D named FILE with contents file, which the system maintains. We shall give FP definitions of functions and later show how to get them into the system in their FFP form. The transition rules make the input the operand of SYSTEM, but our plan is to use name-functions to refer to data, so the first thing we shall do with the input is to create two cells named KEY and INPUT with contents key and input and append these to D. This sequence of cells has one each for key, input, and file; it will be the operand of our main function called subsystem. Subsystem can then obtain key by applying ↑KEY to its operand, etc. Thus the definition Def
system ≡ pair → subsystem∘f; [NONPAIR, defs]
where f ≡ ↓INPUT∘[2, ↓KEY∘[1, defs]] causes the system to output NONPAIR and leave the state unchanged if the input is not a pair. Otherwise, if it is (key,input), then
f:(key,input) = ((CELL,INPUT,input), (CELL,KEY,key), d1, ..., dn) where D = (d1, ..., dn). (We might have constructed a different operand than the one above, one with just three cells, for key, input, and file. We did not do so because real programs, unlike subsystem, would contain many name functions referring to data in the state, and this 'standard' construction of the operand would suffice then as well.)
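Under the cell encoding sketched earlier, the operand construction performed by f amounts to two stores. The following is an illustrative assumption only; store here is a simplified stand-in for ↓n (pop the old cell, then push a new one).

```python
# A sketch of the operand construction done by f above.  Assumed encoding:
# cells are ("CELL", name, contents) tuples and D is a tuple of cells.

def store(name, contents, d):
    """Simplified stand-in for 'store name': drop any old cell, push a new one."""
    rest = tuple(c for c in d if not (len(c) == 3 and c[0] == "CELL" and c[1] == name))
    return (("CELL", name, contents),) + rest

def f(key, input_, d):
    """Prepend KEY and INPUT cells to the definitions D, giving subsystem's operand."""
    return store("INPUT", input_, store("KEY", key, d))

D = (("CELL", "FILE", ("rec1", "rec2")),)
print(f("query-key", "find rec1", D))
# -> (('CELL', 'INPUT', 'find rec1'), ('CELL', 'KEY', 'query-key'),
#     ('CELL', 'FILE', ('rec1', 'rec2')))
```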
14.4.2 The 'Subsystem' Function. We now give the FP definition of the function subsystem, followed by brief explanations of its six cases and auxiliary functions. Def
subsystem ≡
  is-system-change∘↑KEY → [report-change, apply]∘[↑INPUT, defs];
  is-expression∘↑KEY → [↑INPUT, defs];
  is-program∘↑KEY → system-check∘apply∘[↑INPUT, defs];
  is-query∘↑KEY → [query-response∘[↑INPUT, ↑FILE], defs];
  is-update∘↑KEY → [report-update, ↓FILE∘[update, defs]]∘[↑INPUT, ↑FILE];
  [report-error∘[↑KEY, ↑INPUT], defs].
This subsystem has five 'p → f;' clauses and a final default function, for a total of six classes of inputs; the treatment of each class is given below. Recall that the operand of subsystem is a sequence of cells containing key, input, and file as well as all the defined functions of D, and that subsystem:operand = (output, newstate).

Default inputs. In this case the result is given by the last (default) function of the definition when key does not satisfy any of the preceding clauses. The output is report-error:(key,input). The state is unchanged since it is given by defs:operand = D. (We leave to the reader's imagination what the function report-error will generate from its operand.)

System-change inputs. When is-system-change∘↑KEY:operand = is-system-change:key = T, key specifies that the user is authorized to make a system change and that input = ↑INPUT:operand represents a function f that is to be applied to D to produce the new state f:D. (Of course f:D can be a useless new state; no constraints are placed on it.) The output is a report, namely, report-change:(input,D).

Expression inputs. When is-expression:key = T, the system understands that the output is to be the meaning of the FFP expression input; ↑INPUT:operand produces it and it is evaluated, as are all expressions. The state is unchanged.

Program inputs and system self protection. When is-program:key = T, both the output and new state are given by (ρinput):D = (output, newstate). If newstate contains file in suitable condition and the definitions of system and other protected functions, then system-check:(output,newstate) = (output,newstate). Otherwise, system-check:(output,newstate) = (error-report,D).
Although program inputs can make major, possibly disastrous changes in the state when producing newstate, system-check can use any criteria to either allow it to become the actual new state or to keep the old. A more sophisticated system-check might correct only prohibited changes in the state. Functions of this sort are possible because they can always access the old state for comparison with the new state-to-be and control what state transition will finally be allowed.

File query inputs. If is-query:key = T, the function query-response is designed to produce the output = answer to the query input from its operand (input, file).

File update inputs. If is-update:key = T, input specifies a file transaction understood by the function update, which computes updated-file = update:(input, file). Thus ↓FILE has (updated-file, D) as its operand and thus stores the updated file in the cell FILE in the new state. The rest of the state is unchanged. The function report-update generates the output from its operand (input, file).

14.4.3 Installing the System Program. We have described the function called system by some FP definitions (using auxiliary functions whose behavior is only indicated). Let us suppose that we have FP definitions for all the nonprimitive functions required. Then each definition can be converted to give the name and contents of a cell in D (of course this conversion itself would be done by a better system). The conversion is accomplished by changing each FP function name to its equivalent atom (e.g., update becomes UPDATE) and by replacing functional forms by sequences whose first member is the controlling function for the particular form. Thus ↓FILE∘[update, defs] is converted to (COMP,(STORE,FILE),(CONS,UPDATE,DEFS)), and the FP function is the same as that represented by the FFP object, provided that ρUPDATE ≡ update and COMP, STORE, and CONS represent the controlling functions for composition, store, and construction. All FP definitions needed for our system can be converted to cells as indicated above, giving a sequence D0. We assume that the AST system has an empty state to start with; hence SYSTEM is not defined. We want to define SYSTEM initially so that it will install its next input as the state; having done so we can then input D0 and all our definitions will be installed, including our program, system, itself. To accomplish this we enter our first input (RESET, (CELL,SYSTEM,loader)) where loader ≡ (CONS,(CONST,DONE),ID).
Then, by the transition rule for RESET when SYSTEM is undefined in D, the cell in our input is put at the head of D = φ, thus defining ρSYSTEM ≡ ρloader = [DONE, id]. Our second input is D0, the set of definitions we wish to become the state. The regular transition rule causes the AST system to evaluate μ(SYSTEM:D0) = [DONE, id]:D0 = (DONE, D0). Thus the output from our second input is DONE, the new state is D0, and ρSYSTEM is now our system program (which only accepts inputs of the form (key,input)). Our next task is to load the file (we are given an initial value file). To load it we input a program into the newly installed system that contains file as a constant and stores it in the state; the input is (program-key, [DONE, store-file]) where ρstore-file ≡
↓FILE∘[file, id].
Program-key identifies [DONE, store-file] as a program to be applied to the state D0 to give the output and new state D1, which is ρstore-file:D0 = ↓FILE∘[file, id]:D0, or D0 with a cell containing file at its head. The output is DONE:D0 = DONE. We assume that system-check will pass (DONE, D1) unchanged. FP expressions have been used in the above in place of the FFP objects they denote, e.g., DONE for (CONST,DONE).

14.4.4 Using the System. We have not said how the system's file, queries, or updates are structured, so we cannot give a detailed example of file operations. However, the structure of subsystem shows clearly how the system's response to queries and updates depends on the functions query-response, update, and report-update. Let us suppose that matrices m and n, named M and N, are stored in D and that the function MM described earlier is defined in D. Then the input

(expression-key, (MM∘[↑M, ↑N]∘DEFS:#))

would give the product of the two matrices as output and an unchanged state. Expression-key identifies the application as an expression to be
evaluated and since defs:# = D and [↑M, ↑N]:D = (m,n), the value of the expression is the result MM:(m,n), which is the output. Our miniature system program has no provision for giving control to a user's program to process many inputs, but it would not be difficult to give it that capability while still monitoring the user's program with the option of taking control back.
14.5 Variants of AST Systems
A major extension of the AST systems suggested above would provide combining forms, 'system forms,' for building a new AST system from simpler, component AST systems. That is, a system form would take AST systems as parameters and generate a new AST system, just as a functional form takes functions as parameters and generates new functions. These system forms would have properties like those of functional forms and would become the 'operations' of a useful 'algebra of systems' in much the same way that functional forms are the 'operations' of the algebra of programs. However, the problem of finding useful system forms is much more difficult, since they must handle RESETs, match inputs and outputs, and combine history-sensitive systems rather than fixed functions. Moreover, the usefulness or need for system forms is less clear than that for functional forms. The latter are essential for building a great variety of functions from an initial primitive set, whereas, even without system forms, the facilities for building AST systems are already so rich that one could build virtually any system (with the general input and output properties allowed by the given AST scheme). Perhaps system forms would be useful for building systems with complex input and output arrangements.
14.6 Remarks about AST Systems
As I have tried to indicate above, there can be innumerable variations in the ingredients of an AST system: how it operates, how it deals with input and output, how and when it produces new states, and so on. In any case, a number of remarks apply to any reasonable AST system: (a) A state transition occurs once per major computation and can have useful mathematical properties. State transitions are not involved in the tiniest details of a computation as in conventional languages; thus the linguistic von Neumann bottleneck has been eliminated. No complex 'cable' or protocols are needed to communicate with the state.
(b) Programs are written in an applicative language that can accommodate a great range of changeable parts, parts whose power and flexibility exceed that of any von Neumann language so far. The word-at-a-time style is replaced by an applicative style; there is no division of programming into a world of expressions and a world of statements. Programs can be analyzed and optimized by an algebra of programs. (c) Since the state cannot change during the computation of system:x, there are no side effects. Thus independent applications can be evaluated in parallel. (d) By defining appropriate functions one can, I believe, introduce major new features at any time, using the same framework. Such features must be built into the framework of a von Neumann language. I have in mind such features as: 'stores' with a great variety of naming systems, types and type checking, communicating parallel processes, nondeterminacy and Dijkstra's 'guarded command' constructs [8], and improved methods for structured programming. (e) The framework of an AST system comprises the syntax and semantics of the underlying applicative system plus the system framework sketched above. By current standards, this is a tiny framework for a language and is the only fixed part of the system.
14.7 Naming Systems in AST and von Neumann Models
In an AST system, naming is accomplished by functions as indicated in Section 13.3.4. Many useful functions for altering and accessing a store can be defined (e.g., push, pop, purge, typed fetch, etc.). All these definitions and their associated naming systems can be introduced without altering the AST framework. Different kinds of 'stores' (e.g., with 'typed cells') with individual naming systems can be used in one program. A cell in one store may contain another entire store. The important point about AST naming systems is that they utilize the functional nature of names (Reynolds' GEDANKEN [19] also does so to some extent within a von Neumann framework). Thus name functions can be composed and combined with other functions by functional forms. In contrast, functions and names in von Neumann languages are usually disjoint concepts and the function-like nature of names is almost totally concealed and useless, because (a) names cannot be applied as functions; (b) there are no general means to combine names with other names and functions; (c) the objects to which name functions apply (stores) are not accessible as objects. The failure of von Neumann languages to treat names as functions may be one of their more important weaknesses. In any case, the ability to use names as functions and stores as objects may turn out to be a useful and important programming concept, one which should be thoroughly explored.
15 Remarks about Computer Design
The dominance of von Neumann languages has left designers with few intellectual models for practical computer designs beyond variations of the von Neumann computer. Data flow models [1], [7], [13] are one alternative class of history-sensitive models. The substitution rules of lambda-calculus-based languages present serious problems for the machine designer. Berkling [3] has developed a modified lambda calculus that has three kinds of applications and that makes renaming of variables unnecessary. He has developed a machine to evaluate expressions of this language. Further experience is needed to show how sound a basis this language is for an effective programming style and how efficient his machine can be. Magó [15] has developed a novel applicative machine built from identical components (of two kinds). It evaluates, directly, FP-like and other applicative expressions from the bottom up. It has no von Neumann store and no address register, hence no bottleneck; it is capable of evaluating many applications in parallel; its built-in operations resemble FP operators more than von Neumann computer operations. It is the farthest departure from the von Neumann computer that I have seen. There are numerous indications that the applicative style of programming can become more powerful than the von Neumann style. Therefore it is important for programmers to develop a new class of history-sensitive models of computing systems that embody such a style and avoid the inherent efficiency problems that seem to attach to lambda-calculus-based systems. Only when these models and their applicative languages have proved their superiority over conventional languages will we have the economic basis to develop the new kind of computer that can best implement them. Only then, perhaps, will we be able to fully utilize large-scale integrated circuits in a computer design not limited by the von Neumann bottleneck.
16 Summary
The fifteen preceding sections of this paper can be summarized as follows.
Section 1. Conventional programming languages are large,
complex, and inflexible. Their limited expressive power is inadequate to justify their size and cost.
Section 2. The models of computing systems that underlie programming languages fall roughly into three classes: (a) simple operational models (e.g., Turing machines), (b) applicative models (e.g., the lambda
calculus), and (c) von Neumann models (e.g., conventional computers and programming languages). Each class of models has an important difficulty: The programs of class (a) are inscrutable; class (b) models cannot save information from one program to the next; class (c) models have unusable foundations and programs that are conceptually unhelpful.
Section 3. Von Neumann computers are built around a bottleneck: the word-at-a-time tube connecting the CPU and the store. Since a program must make its overall change in the store by pumping vast numbers of words back and forth through the von Neumann bottleneck, we have grown up with a style of programming that concerns itself with this word-at-a-time traffic through the bottleneck rather than with the larger conceptual units of our problems.
Section 4. Conventional languages are based on the programming style of the von Neumann computer. Thus variables = storage cells; assignment statements = fetching, storing, and arithmetic; control statements = jump and test instructions. The symbol ':=' is the linguistic von Neumann bottleneck. Programming in a conventional von Neumann language still concerns itself with the word-at-a-time traffic through this slightly more sophisticated bottleneck. Von Neumann languages also split programming into a world of expressions and a world of statements; the first of these is an orderly world, the second is a disorderly one, a world that structured programming has simplified somewhat, but without attacking the basic problems of the split itself and of the word-at-a-time style of conventional languages.
Section 5. This section compares a von Neumann program and a functional program for inner product. It illustrates a number of problems of the former and advantages of the latter: e.g., the von Neumann program is repetitive and word-at-a-time, works only for two vectors named a and b of a given length n, and can only be made general by use of a procedure declaration, which has complex semantics. The functional program is nonrepetitive, deals with vectors as units, is more hierarchically constructed, is completely general, and creates 'housekeeping' operations by composing high-level housekeeping operators. It does not name its arguments, hence it requires no procedure declaration.
Section 6. A programming language comprises a framework plus some changeable parts. The framework of a von Neumann language requires that most features must be built into it; it can accommodate only limited changeable parts (e.g., user-defined procedures) because there must be detailed provisions in the 'state' and its transition rules for all the needs of the changeable parts, as well as for all the features built into the framework. The reason the von Neumann framework is so inflexible is that its semantics is too closely coupled to the state: every detail of a computation changes the state.
Section 7. The changeable parts of von Neumann languages have little expressive power; this is why most of the language must be built into the framework. The lack of expressive power results from the inability of von Neumann languages to effectively use combining forms for building programs, which in turn results from the split between expressions and statements. Combining forms are at their best in expressions, but in von Neumann languages an expression can only produce a single word; hence expressive power in the world of expressions is mostly lost. A further obstacle to the use of combining forms is the elaborate use of naming conventions. Section 8. APL is the first language not based on the lambda calculus that is not word-at-a-time and uses functional combining forms. But it still retains many of the problems of von Neumann languages. Section 9. Von Neumann languages do not have useful properties for reasoning about programs. Axiomatic and denotational semantics are precise tools for describing and understanding conventional programs, but they only talk about them and cannot alter their ungainly properties. Unlike von Neumann languages, the language of ordinary algebra is suitable both for stating its laws and for transforming an equation into its solution, all within the 'language.' Section 10. In a history-sensitive language, a program can affect the behavior of a subsequent one by changing some store which is saved by the system. Any such language requires some kind of state transition semantics. But it does not need semantics closely coupled to states in which the state changes with every detail of the computation. 'Applicative state transition' (AST) systems are proposed as history-sensitive alternatives to von Neumann systems. These have: (a) loosely coupled state-transition semantics in which a transition occurs once per major computation; (b) simple states and transition rules; (c) an underlying applicative system with simple 'reduction' semantics; and (d) a programming language and state transition rules both based on the underlying applicative system and its semantics. The next four sections describe the elements of this approach to non-von Neumann language and system design. Section 11. A class of informal functional programming (FP) systems is described which use no variables. Each system is built from objects, functions, functional forms, and definitions. Functions map objects into objects. Functional forms combine existing functions to form new ones. This section lists examples of primitive functions and functional forms and gives sample programs. It discusses the limitations and advantages of FP systems. Section 12. An 'algebra of programs' is described whose variables range over the functions of an FP system and whose 'operations' are the functional forms of the system. A list of some twenty-four A Functional Style and Its Algebra of Programs
laws of the algebra is followed by an example proving the equivalence of a nonrepetitive matrix multiplication program and a recursive one. The next subsection states the results of two 'expansion theorems' that 'solve' two classes of equations. These solutions express the 'unknown' function in such equations as an infinite conditional expansion that constitutes a case-by-case description of its behavior and immediately gives the necessary and sufficient conditions for termination. These results are used to derive a 'recursion theorem' and an 'iteration theorem,' which provide ready-made expansions for some moderately general and useful classes of 'linear' equations. Examples of the use of these theorems treat: (a) correctness proofs for recursive and iterative factorial functions, and (b) a proof of equivalence of two iterative programs. A final example deals with a 'quadratic' equation and proves that its solution is an idempotent function. The next subsection gives the proofs of the two expansion theorems. The algebra associated with FP systems is compared with the corresponding algebras for the lambda calculus and other applicative systems. The comparison shows some advantages to be drawn from the severely restricted FP systems, as compared with the much more powerful classical systems. Questions are suggested about algorithmic reduction of functions of infinite expansions and about the use of the algebra in various 'lazy evaluation' schemes.
Section 13. This section describes formal functional programming (FFP) systems that extend and make precise the behavior of FP systems. Their semantics are simpler than that of classical systems and can be shown to be consistent by a simple fixed-point argument.
Section 14. This section compares the structure of Algol with that of applicative state transition (AST) systems. It describes an AST system using an FFP system as its applicative subsystem. It describes the simple state and the transition rules for the system. A small self-protecting system program for the AST system is described, and how it can be installed and used for file maintenance and for running user programs. The section briefly discusses variants of AST systems and functional naming systems that can be defined and used within an AST system.
Section 15. This section briefly discusses work on applicative computer designs and the need to develop and test more practical models of applicative systems as the future basis for such designs.
Acknowledgments
In earlier work relating to this paper I have received much valuable help and many suggestions from Paul R. McJones and Barry K. Rosen. I have had a great deal of valuable help and feedback in preparing this paper. James N. Gray was exceedingly generous with his time and knowledge in reviewing the first draft. Stephen N. Zilles also gave it
a careful reading. Both made many valuable suggestions and criticisms at this difficult stage. It is a pleasure to acknowledge my debt to them. I also had helpful discussions about the first draft with Ronald Fagin, Paul R. McJones, and James H. Morris, Jr. Fagin suggested a number of improvements in the proofs of theorems. Since a large portion of the paper contains technical material, I asked two distinguished computer scientists to referee the third draft. David J. Gries and John C. Reynolds were kind enough to accept this burdensome task. Both gave me large, detailed sets of corrections and overall comments that resulted in many improvements, large and small, in this final version (which they have not had an opportunity to review). I am truly grateful for the generous time and care they devoted to reviewing this paper. Finally, I also sent copies of the third draft to Gyula A. Magó, Peter Naur, and John H. Williams. They were kind enough to respond with a number of extremely helpful comments and corrections. Geoffrey A. Frank and Dave Tolle at the University of North Carolina reviewed Magó's copy and pointed out an important error in the definition of the semantic function of FFP systems. My grateful thanks go to all these kind people for their help.
References
1. Arvind, and Gostelow, K. P. A new interpreter for data flow schemas and its implications for computer architecture. Tech. Rep. No. 72, Dept. Comptr. Sci., U. of California, Irvine, Oct. 1975.
2. Backus, J. Programming language semantics and closed applicative languages. Conf. Record ACM Symp. on Principles of Programming Languages, Boston, Oct. 1973, 71-86.
3. Berkling, K. J. Reduction languages for reduction machines. Interner Bericht ISF-76-8, Gesellschaft für Mathematik und Datenverarbeitung MBH, Bonn, Sept. 1976.
4. Burge, W. H. Recursive Programming Techniques. Addison-Wesley, Reading, Mass., 1975.
5. Church, A. The Calculi of Lambda-Conversion. Princeton U. Press, Princeton, N.J., 1941.
6. Curry, H. B., and Feys, R. Combinatory Logic, Vol. I. North-Holland Pub. Co., Amsterdam, 1958.
7. Dennis, J. B. First version of a data flow procedure language. Tech. Mem. No. 61, Lab. for Comptr. Sci., M.I.T., Cambridge, Mass., May 1973.
8. Dijkstra, E. W. A Discipline of Programming. Prentice-Hall, Englewood Cliffs, N.J., 1976.
9. Friedman, D. P., and Wise, D. S. CONS should not evaluate its arguments. In Automata, Languages and Programming, S. Michaelson and R. Milner, Eds., Edinburgh U. Press, Edinburgh, 1976, pp. 257-284.
10. Henderson, P., and Morris, J. H., Jr. A lazy evaluator. Conf. Record 3rd ACM Symp. on Principles of Programming Languages, Atlanta, Ga., Jan. 1976, pp. 95-103.
11. Hoare, C. A. R. An axiomatic basis for computer programming. Comm. ACM 12, 10 (Oct. 1969), 576-583.
12. Iverson, K. A Programming Language. Wiley, New York, 1962.
13. Kosinski, P. A data flow programming language. Rep. RC 4264, IBM T. J. Watson Research Ctr., Yorktown Heights, N.Y., March 1973.
14. Landin, P. J. The mechanical evaluation of expressions. Computer J. 6, 4 (1964), 308-320.
15. Magó, G. A. A network of microprocessors to execute reduction languages. To appear in Int. J. Comptr. and Inform. Sci.
16. Manna, Z., Ness, S., and Vuillemin, J. Inductive methods for proving properties of programs. Comm. ACM 16, 8 (Aug. 1973), 491-502.
17. McCarthy, J. Recursive functions of symbolic expressions and their computation by machine, Pt. 1. Comm. ACM 3, 4 (April 1960), 184-195.
18. McJones, P. A Church-Rosser property of closed applicative languages. Rep. RJ 1589, IBM Res. Lab., San Jose, Calif., May 1975.
19. Reynolds, J. C. GEDANKEN-a simple typeless language based on the principle of completeness and the reference concept. Comm. ACM 13, 5 (May 1970), 308-318.
20. Reynolds, J. C. Notes on a lattice-theoretic approach to the theory of computation. Dept. Syst. and Inform. Sci., Syracuse U., Syracuse, N.Y., 1972.
21. Scott, D. Outline of a mathematical theory of computation. Proc. 4th Princeton Conf. on Inform. Sci. and Syst., 1970.
22. Scott, D. Lattice-theoretic models for various type-free calculi. Proc. 4th Int. Congress for Logic, Methodology, and the Philosophy of Science, Bucharest, 1972.
23. Scott, D., and Strachey, C. Towards a mathematical semantics for computer languages. Proc. Symp. on Comptrs. and Automata, Polytechnic Inst. of Brooklyn, 1971.
Categories and Subject Descriptors: C.1.1 [Processor Architectures]: Single Data Stream Architectures - von Neumann architectures; D.1.1 [Programming Techniques]: Applicative (Functional) Programming; D.2.4 [Software Engineering]: Program Verification - correctness proofs; D.3.1 [Programming Languages]: Formal Definitions and Theory - semantics; F.4.1 [Mathematical Logic and Formal Languages]: Mathematical Logic - lambda calculus and related systems; G.1.3 [Numerical Analysis]: Numerical Linear Algebra - linear systems; G.1.5 [Numerical Analysis]: Roots of Nonlinear Equations - iterative methods

General Terms: Design, Economics, Languages, Theory
Additional Key Words and Phrases: Algol, APL, metacomposition
The Paradigms of Programming
ROBERT W. FLOYD
Stanford University
The 1978 ACM Turing Award was presented to Robert W. Floyd by Walter Carlson, Chairman of the Awards Committee, at the ACM Annual Conference in Washington, D.C., December 4. In making the selection, the General Technical Achievement Award Subcommittee (formerly the Turing Award Subcommittee) cited Professor Floyd for 'helping to found the following important subfields of computer science: the theory of parsing, the semantics of programming languages, automatic program verification, automatic program synthesis, and analysis of algorithms.' Professor Floyd, who received both his A.B. and B.S. from the University of Chicago in 1953 and 1958, respectively, is a self-taught computer scientist. His study of computing began in 1956, when as a night-operator for an IBM 650, he found the time to learn about programming between loads of card hoppers. Floyd implemented one of the first Algol 60 compilers, finishing his work on this project in 1962. In the process, he did some early work on compiler optimization. Subsequently, in the years before 1965, Floyd systematized the parsing of programming languages. For that he originated the precedence method, the bounded context method, and the production language method of parsing. Author's present address: Department of Computer Science, Stanford University, Stanford, CA 94305.
In 1966 Professor Floyd presented a mathematical method to prove the correctness of programs. He has offered, over the years, a number of fast useful algorithms. These include (1) the treesort algorithm for in-place sorting, (2) algorithms for finding the shortest paths through networks, and (3) algorithms for finding medians and convex hulls. In addition, Floyd has determined the limiting speed of digital addition and the limiting speeds for permuting information in a computer memory. His contributions to mechanical theorem proving and automatic spelling checkers have also been numerous. In recent years Professor Floyd has been working on the design and implementation of a programming language primarily for student use. It will be suitable for teaching structured programming systematically to novices and will be nearly universal in its capabilities.

Paradigm (pæradim, -daim) [a. F. paradigme, ad. L. paradigma, a. Gr. παράδειγμα pattern, example, f. παραδεικνύναι to exhibit beside, show side by side...] 1. A pattern, exemplar, example. 1752 J. Gill Trinity v. 91 The archetype, paradigm, exemplar, and idea, according to which all things were made. From the Oxford English Dictionary.
Today I want to talk about the paradigms of programming, how they affect our success as designers of computer programs, how they should be taught, and how they should be embodied in our programming languages. A familiar example of a paradigm of programming is the technique of structured programming, which appears to be the dominant paradigm in most current treatments of programming methodology. Structured programming, as formulated by Dijkstra [6], Wirth [27, 29], and Parnas [21], among others, consists of two phases. In the first phase, that of top-down design, or stepwise refinement, the problem is decomposed into a very small number of simpler subproblems. In programming the solution of simultaneous linear equations, say, the first level of decomposition would be into a stage of triangularizing the equations and a following stage of back substitution in the triangularized system. This gradual decomposition is continued until the subproblems that arise are simple enough to cope with directly. In the simultaneous equation example, the back substitution process would be further decomposed as a backwards iteration of a process which finds and stores the value of the ith variable from the ith equation. Yet further decomposition would yield a fully detailed algorithm. The second phase of the structured programming paradigm entails working upward from the concrete objects and functions of the underlying machine to the more abstract objects and functions used throughout the modules produced by the top-down design. In the linear equation example, if the coefficients of the equations are rational functions of one variable, we might first design a multiple-precision arithmetic representation and procedures, then, building upon them, a polynomial
representation with its own arithmetic procedures, etc. This approach is referred to as the method of levels of abstraction, or of information hiding. The structured programming paradigm is by no means universally accepted. Its firmest advocates would acknowledge that it does not by itself suffice to make all hard problems easy. Other high level paradigms of a more specialized type, such as branch-and-bound [17, 20] or divide-and-conquer [1, 11] techniques, continue to be essential. Yet the paradigm of structured programming does serve to extend one's powers of design, allowing the construction of programs that are too complicated to be designed efficiently and reliably without methodological support. I believe that the current state of the art of computer programming reflects inadequacies in our stock of paradigms, in our knowledge of existing paradigms, in the way we teach programming paradigms, and in the way our programming languages support, or fail to support, the paradigms of their user communities. The state of the art of computer programming was recently referred to by Robert Balzer [3] in these words: 'It is well known that software is in a depressed state. It is unreliable, delivered late, unresponsive to change, inefficient, and expensive. Furthermore, since it is currently labor intensive, the situation will further deteriorate as demand increases and labor costs rise.' If this sounds like the famous 'software crisis' of a decade or so ago, the fact that we have been in the same state for ten or fifteen years suggests that 'software depression' is a more apt term. Thomas S. Kuhn, in The Structure of Scientific Revolutions [16], has described the scientific revolutions of the past several centuries as arising from changes in the dominant paradigms. Some of Kuhn's observations seem appropriate to our field. Of the scientific textbooks which present the current scientific knowledge to students, Kuhn writes: Those texts have, for example, often seemed to imply that the content of science is uniquely exemplified by the observations, laws and theories described in their pages. In the same way, most texts on computer programming imply that the content of programming is the knowledge of the algorithms and language definitions described in their pages. Kuhn writes, also: The study of paradigms, including many that are far more specialized than those named illustratively above, is what mainly prepares the student for membership in the particular scientific community with which he will later practice. Because he there joins men who learned the bases of their field from the same concrete models, his subsequent practice will seldom evoke overt disagreement over fundamentals... In computer science, one sees several such communities, each speaking its own language and using its own paradigms. In fact, programming languages typically encourage use of some paradigms and discourage others. There are well defined schools of Lisp programming, APL programming, Algol programming, and so on. Some regard data flow, and some control flow, as the primary structural information about a
program. Recursion and iteration, copying and sharing of data structures, call by name and call by value, all have adherents. Again from Kuhn: 'The older schools gradually disappear. In part their disappearance is caused by their members' conversion to the new paradigm. But there are always some men who cling to one or another of the older views, and they are simply read out of the profession, which thereafter ignores their work.' In computing, there is no mechanism for reading such men out of the profession. I suspect they mainly become managers of software development. Balzer, in his jeremiad against the state of software construction, went on to prophesy that automatic programming will rescue us. I wish success to automatic programmers, but until they clean the stables, our best hope is to improve our own capabilities. I believe the best chance we have to improve the general practice of programming is to attend to our paradigms. In the early 1960's, parsing of context-free languages was a problem of pressing importance in both compiler development and natural linguistics. Published algorithms were usually both slow and incorrect. John Cocke, allegedly with very little effort, found a fast and simple algorithm [2], based on a now standard paradigm which is the computational form of dynamic programming [1]. The dynamic programming paradigm solves a problem for given input by first iteratively solving it for all smaller inputs. Cocke's algorithm successively found all parsings of all substrings of the input. In this conceptual frame, the problem became nearly trivial. The resulting algorithm was the first to uniformly run in polynomial time. At around the same time, after several incorrect top-down parsers had been published, I attacked the problem of designing a correct one by inventing the paradigm of finding a hierarchical organization of processors, akin to a human organization of employers hiring and discharging subordinates, that could solve the problem, and then simulating the behavior of this organization [8]. Simulation of such multiple recursive processes led me to the use of recursive coroutines as a control structure. I later found that other programmers with difficult combinatorial problems, for example Gelernter with his geometry-theorem proving machine [10], had apparently invented the same control structure. John Cocke's experience and mine illustrate the likelihood that continued advance in programming will require the continuing invention, elaboration, and communication of new paradigms. An example of the effective elaboration of a paradigm is the work by Shortliffe and Davis on the MYCIN [24] program, which skillfully diagnoses, and recommends medication for, bacterial infections. MYCIN is a rule-based system, based on a large set of independent rules, each with a testable condition of applicability and a resulting simple action when the condition is satisfied. Davis's TEIRESIAS [5] program modifies MYCIN,
allowing an expert user to improve
MYCIN's performance. The TEIRESIAS
program elaborates the paradigm by tracing responsibility backward from an undesired result through the rules and conditions that permitted it, until an unsatisfactory rule yielding invalid results from valid hypotheses is reached. By this means it has become technically feasible for a medical expert who is not a programmer to improve MYCIN's diagnostic capabilities. While there is nothing in MYCIN which could not have been coded in a traditional branching tree of decisions using conditional transfers, it is the use of the rule-based paradigm, with its subsequent elaboration for self-modification, that makes the interactive improvement of the program possible. If the advancement of the general art of programming requires the continuing invention and elaboration of paradigms, advancement of the art of the individual programmer requires that he expand his repertory of paradigms. In my own experience of designing difficult algorithms, I find a certain technique most helpful in expanding my own capabilities. After solving a challenging problem, I solve it again from scratch, retracing only the insight of the earlier solution. I repeat this until the solution is as clear and direct as I can hope for. Then I look for a general rule for attacking similar problems, that would have led me to approach the given problem in the most efficient way the first time. Often, such a rule is of permanent value. By looking for such a general rule, I was led from the previously mentioned parsing algorithm based on recursive coroutines to the general method of writing nondeterministic programs [9], which are then transformed by a macroexpansion into conventional deterministic ones. This paradigm later found uses in the apparently unrelated area of problem solving by computers in artificial intelligence, becoming embodied in the programming languages PLANNER [12, 13], MICROPLANNER [25], and QA4 [23]. The acquisition of new paradigms by the individual programmer may be encouraged by reading other people's programs, but this is subject to the limitation that one's associates are likely to have been chosen for their compatibility with the local paradigm set. Evidence for this is the frequency with which our industry advertises, not for programmers, but for Fortran programmers or Cobol programmers. The rules of Fortran can be learned within a few hours; the associated paradigms take much longer, both to learn and to unlearn. Contact with programming written under alien conventions may help. Visiting MIT on sabbatical this year, I have seen numerous examples of the programming power which Lisp programmers obtain from having a single data structure, which is also used as a uniform syntactic structure for all the functions and operations which appear in programs, with the capability to manipulate programs as data. Although my own previous enthusiasm has been for syntactically rich languages like the Algol family, I now see clearly and concretely the
force of Minsky's 1970 Turing Lecture [19], in which he argued that Lisp's uniformity of structure and power of self-reference gave the programmer capabilities whose content was well worth the sacrifice
of visual form. I would like to arrive at some appropriate synthesis of these approaches. It remains as true now as when I entered the computer field in 1956 that everyone wants to design a new programming language. In the words written on the wall of a Stanford University graduate student office, 'I would rather write programs to help me write programs than write programs.' In evaluating each year's crop of new programming languages, it is helpful to classify them by the extent to which they permit and encourage the use of effective programming paradigms. When we make our paradigms explicit, we find that there are a vast number of them. Cordell Green [11] finds that the mechanical generation of simple searching and sorting algorithms, such as merge sorting and Quicksort, requires over a hundred rules, most of them probably paradigms familiar to most programmers. Often our programming languages give us no help, or even thwart us, in using even the familiar and low level paradigms. Some examples follow. Suppose we are simulating the population dynamics of a predator-prey system - wolves and rabbits, perhaps. We have two equations, W' = f(W, R) and R' = g(W, R), which give the numbers of wolves and rabbits at the end of a time period, as a function of the numbers at the start of the period. A common beginner's mistake is to write: FOR I := ... DO BEGIN W := f(W, R); R := g(W, R) END
where g is, erroneously, evaluated using the modified value of W. To make the program work, we must write: FOR I := ... DO BEGIN REAL TEMP; TEMP := f(W, R); R := g(W, R); W := TEMP END
The beginner is correct to believe we should not have to do this. One of our most common paradigms, as in the predator-prey simulation, is simultaneous assignment of new values to the components of state vectors. Yet hardly any language has an operator for simultaneous assignment. We must instead go through the mechanical, time-wasting, and error-prone operation of introducing one or more temporary variables and shunting the new values around through them.
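A language with simultaneous assignment makes the paradigm direct. A minimal sketch in Python, whose tuple assignment evaluates both right-hand sides from the old state before either variable is rebound; the update functions f and g here are hypothetical stand-ins for the predator-prey equations:

    def f(w, r):
        # Hypothetical wolf update: decay plus growth from available prey.
        return 0.9 * w + 0.002 * w * r

    def g(w, r):
        # Hypothetical rabbit update: growth minus predation losses.
        return 1.1 * r - 0.02 * w * r

    w, r = 10.0, 200.0
    for _ in range(100):
        # Both new values are computed from the old (W, R); no TEMP is needed.
        w, r = f(w, r), g(w, r)

The temporary variable disappears into the language, which is exactly what the beginner expected.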
Again, take this simple-looking problem: Read lines of text, until a completely blank line is found. Eliminate redundant blanks between the words. Print the text, thirty characters to a line, without breaking words between lines.
Because input and output are naturally expressed using multiple levels of iteration, and because the input iterations do not nest with the output iterations, the problem is surprisingly hard to program in most programming languages [14]. Novices take three or four times as long with it as instructors expect, ending up either with an undisciplined mess or with a homemade control structure using explicit incrementations and conditional execution to simulate some of the desired iterations. The problem is naturally formulated by decomposition into three communicating coroutines [4], for input, transformation, and output of a character stream. Yet, except for simulation languages, few of our programming languages have a coroutine control structure adequate to allow programming the problem in a natural way. When a language makes a paradigm convenient, I will say the language supports the paradigm. When a language makes a paradigm feasible, but not convenient, I will say the language weakly supports the paradigm. As the two previous examples illustrate, most of our languages only weakly support simultaneous assignment, and do not support coroutines at all, although the mechanisms required are much simpler and more useful than, say, those for recursive call-by-name procedures, implemented in the Algol family of languages seventeen years ago. Even the paradigm of structured programming is at best weakly supported by many of our programming languages. To write down the simultaneous equation solver as one designs it, one should be able to write: MAIN PROGRAM: BEGIN TRIANGULARIZE; BACK-SUBSTITUTE END; BACK-SUBSTITUTE: FOR I := N STEP -1 UNTIL 1 DO SOLVE-FOR-VARIABLE(I); SOLVE-FOR-VARIABLE(I):
TRIANGULARIZE:
Procedures for multiple-precision arithmetic Procedures for rational-function arithmetic Declarations of arrays
In most current languages, one could not present the main program, procedures, and data declarations in this order. Some preliminary
human text-shuffling, of a sort readily mechanizable, is usually required. Further, any variables used in more than one of the multiple-precision procedures must be global to every part of the program where multiple-precision arithmetic can be done, thereby allowing accidental modification, contrary to the principle of information hiding. Finally, the detailed breakdown of a problem into a hierarchy of procedures typically results in very inefficient code even though most of the procedures, being called from only one place, could be efficiently implemented by macroexpansion. A paradigm at an even higher level of abstraction than the structured programming paradigm is the construction of a hierarchy of languages, where programs in the highest level language operate on the most abstract objects, and are translated into programs on the next lower level language. Examples include the numerous formula-manipulation languages which have been constructed on top of Lisp, Fortran, and other languages. Most of our lower level languages fail to fully support such superstructures. For example, their error diagnostic systems are usually cast in concrete, so that diagnostic messages are intelligible only by reference to the translated program on the lower level. I believe that the continued advance of programming as a craft requires development and dissemination of languages which support the major paradigms of their users' communities. The design of a language should be preceded by enumeration of those paradigms, including a study of the deficiencies in programming caused by discouragement of unsupported paradigms. I take no satisfaction from the extensions of our languages, such as the variant records and powersets of Pascal [15, 28], so long as the paradigms I have spoken of, and many others, remain unsupported or weakly supported. If there is ever a science of programming language design, it will probably consist largely of matching languages to the design methods they support. I do not want to imply that support of paradigms is limited to our programming languages proper. The entire environment in which we program, diagnostic systems, file systems, editors, and all, can be analyzed as supporting or failing to support the spectrum of methods for design of programs. There is hope that this is becoming recognized. For example, recent work at IRIA in France and elsewhere has implemented editors which are aware of the structure of the program they edit [7, 18, 26]. Anyone who has tried to do even such a simple task as changing every occurrence of X as an identifier in a program without inadvertently changing all the other X's will appreciate this. Now I want to talk about what we teach as computer programming. Part of our unfortunate obsession with form over content, which Minsky deplored in his Turing lecture [19], appears in our typical choices of what to teach. If I ask another professor what he teaches in the introductory programming course, whether he answers proudly 'Pascal' or
diffidently 'FORTRAN,' I know that he is teaching a grammar, a set of semantic rules, and some finished algorithms, leaving the students to discover, on their own, some process of design. Even the texts based on the structured programming paradigm, while giving direction at the highest level, what we might call the 'story' level of program design, often provide no help at intermediate levels, at what we might call the 'paragraph' level. I believe it is possible to explicitly teach a set of systematic methods for all levels of program design, and that students so trained have a large head start over those conventionally taught entirely by the study of finished programs. Some examples of what we can teach follow. When I introduce to students the input capabilities of a programming language, I introduce a standard paradigm for interactive input, in the form of a macroinstruction I call PROMPT-READ-CHECK-ECHO, which reads until the input datum satisfies a test for validity, then echoes it on the output file. This macro is, on one level, itself a paradigm of iteration and input. At the same time, since it reads once more often than it says 'Invalid data,' it instantiates a more general, previously taught paradigm for the loop executed 'n and a half times.' PROMPT-READ-CHECK-ECHO: arguments are a string PROMPT, a variable V to be read, and a condition BAD which characterizes bad data; PRINT-ON-TERMINAL(PROMPT); READ-FROM-TERMINAL(V); WHILE BAD(V) DO BEGIN PRINT-ON-TERMINAL('Invalid data'); READ-FROM-TERMINAL(V) END; PRINT-ON-FILE(V)
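Rendered as a minimal Python sketch (the function name, the bad predicate, and the out file object are illustrative assumptions, not part of any standard library):

    def prompt_read_check_echo(prompt, bad, out):
        # Read until the datum passes the validity test, then echo it to the file.
        value = input(prompt)
        while bad(value):              # the loop says 'Invalid data' once less than it reads
            print('Invalid data')
            value = input()
        out.write(value + '\n')        # echo the accepted datum on the output file
        return value

A call such as prompt_read_check_echo('Age: ', lambda s: not s.isdigit(), log_file) keeps reading until a number is typed, then records it; log_file stands for whatever output file the program keeps.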
It also, on a higher level, instantiates the responsibilities of the programmer toward the user of the program, including the idea that each component of a program should be protected from input for which that component was not designed. Howard Shrobe and other members of the Programmer's Apprentice group [22] at MIT have successfully taught their novice students a paradigm of broad utility, which they call generate/filter/accumulate. The students learn to recognize many superficially dissimilar problems as consisting of enumerating the elements of a set, filtering out a subset, and accumulating some function of the elements in the subset. The MACLISP language [18], used by the students, supports the paradigm; the students provide only the generator, the filter, and the accumulator. The predator-prey simulation I mentioned earlier is also an instance of a general paradigm, the state-machine paradigm. The state-machine paradigm typically involves representing the state of the computation by the values of a set of storage variables. If the state is complex,
the transition function requires a design paradigm for handling simultaneous assignment, particularly since most languages only weakly support simultaneous assignment. To illustrate, suppose we want to compute π = 6 arcsin(1/2), summing the series

arcsin(1/2) = 1/2 + (1/(2·3))(1/2)^3 + (1·3/(2·4·5))(1/2)^5 + (1·3·5/(2·4·6·7))(1/2)^7 + ···
where I have circled the parts of each summand that are useful in computing the next one on the right. Without describing the entire design paradigm for such processes, a part of the design of the state transition is systematically to find a way to get from
S = 1/2 + (1/(2·3))(1/2)^3 + (1·3/(2·4·5))(1/2)^5

to

S = 1/2 + (1/(2·3))(1/2)^3 + (1·3/(2·4·5))(1/2)^5 + (1·3·5/(2·4·6·7))(1/2)^7
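A small state machine in Python may make the transition concrete. This is only a sketch, under the assumption that the state holds the partial sum together with the most recent summand, so that the reusable factors are carried forward rather than recomputed; the names are hypothetical.

    def pi_by_arcsin(n_terms=20):
        # State: s is the partial sum, term is the last summand added.
        s = term = 0.5                      # first summand, (1/2)
        for k in range(n_terms - 1):
            # Build the next summand from the previous one by multiplying in
            # only the new factors (2k+1)/(2k+2), (1/2)^2 and (2k+1)/(2k+3).
            term = term * (2 * k + 1) / (2 * k + 2) * 0.25 * (2 * k + 1) / (2 * k + 3)
            s = s + term                    # extend the sum
        return 6 * s                        # 6 * arcsin(1/2) approximates pi

Twenty terms already agree with pi to about ten decimal places.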
The experienced programmer has internalized this step, and in all but the most complex cases does it unconsciously. For the novice, seeing the paradigm explicitly enables him to attack state-machine problems more complex than he could without aid, and, more important, encourages him to identify other useful paradigms on his own. Most of the classical algorithms to be found in texts on computer programming can be viewed as instances of broader paradigms. Simpson's rule is an instance of extrapolation to the limit. Gaussian elimination is problem solution by recursive descent, transformed into iterative form. Merge sorting is an instance of the divide-and-conquer paradigm. For every such classic algorithm, one can ask, 'How could I have invented this,' and recover what should be an equally classic paradigm. To sum up, my message to the serious programmer is: spend a part of your working day examining and refining your own methods. Even though programmers are always struggling to meet some future or past deadline, methodological abstraction is a wise long-term investment. To the teacher of programming, even more, I say: identify the paradigms you use, as fully as you can, then teach them explicitly. They will serve your students when Fortran has replaced Latin and Sanskrit as the archetypal dead language.
To the designer of programming languages, I say: unless you can support the paradigms I use when I program, or at least support my extending your language into one that does support my programming methods, I don't need your shiny new languages; like an old car or house, the old language has limitations I have learned to live with. To persuade me of the merit of your language, you must show me how to construct programs in it. I don't want to discourage the design of new languages; I want to encourage the language designer to become a serious student of the details of the design process. Thank you, members of the ACM, for naming me to the company of the distinguished men who are my predecessors as Turing lecturers. No one reaches this position without help. I owe debts of gratitude to many, but especially to four men: to Ben Mittman, who early in my career helped and encouraged me to pursue the scientific and scholarly side of my interest in computing; to Herb Simon, our profession's Renaissance man, whose conversation is an education; to the late George Forsythe, who provided me with a paradigm for the teaching of computing; and to my colleague Donald Knuth, who sets a distinguished example of intellectual integrity. I have also been fortunate in having many superb graduate students from whom I think I have learned as much as I have taught them. To all of you, I am grateful and deeply honored. References 1. Aho, A.V., Hopcroft, J. E., and Ullman, J.D. The Design and Analysis of Computer Algorithms. Addison-Wesley, Reading, Mass., 1974. 2. Aho, A. V., and Ullman, J.D. The Theory of Parsing, Translation, and Compiling, Vol. 1: Parsing.Prentice-Hall, Englewood Cliffs, New Jersey, 1972.
3. Balzer, R. Imprecise program specification. Report ISI/RR-75-36, Inform. Sciences Inst., Dec. 1975.
4. Conway, M. E. Design of a separable transition-diagram compiler. Comm. ACM 6, 7 (July 1963), 396-408. 5. Davis, R. Interactive transfer of expertise: Acquisition of new inference rules. Proc. Int. Joint Conf. on Artif. Intell., MIT, Cambridge, Mass., August 1977, pp. 321-328.
6. Dijkstra, E. W. Notes on structured programming. In Structured Programming, O. J. Dahl, E. W. Dijkstra, and C. A. R. Hoare, Academic Press, New York, 1972, pp. 1-82.
7. Donzeau-Gouge, V., Huet, G., Kahn, G., Lang, B., and Levy, J. J. A structure oriented program editor: A first step towards computer assisted programming. Res. Rep. 114, IRIA, Paris, April 1975. 8. Floyd, R. W. The syntax of programming languages -A survey. IEEE EC-13, 4 (Aug. 1964), 346-353. 9. Floyd, R. W. Nondeterministic algorithms. J.ACM 14, 4 (Oct. 1967), 636-644.
10. Gelernter. Realization of a geometry-theorem proving machine. In Computers and Thought, E. Feigenbaum and J. Feldman, Eds., McGraw-Hill, New York, 1963, pp. 134-152.
11. Green, C. C., and Barstow, D. On program synthesis knowledge. Artif. Intell. 10, 3 (June 1978), 241-279.
12. Hewitt, C. PLANNER: A language for proving theorems in robots. Proc. Int. Joint Conf. on Artif. Intell., Washington, D.C., 1969.
13. Hewitt, C. Description and theoretical analysis (using schemata) of PLANNER... AI TR-258, MIT, Cambridge, Mass., April 1972.
14. Hoare, C. A. R. Communicating sequential processes. Comm. ACM 21, 8 (Aug. 1978), 666-677.
15. Jensen, K., and Wirth, N. Pascal User Manual and Report. Springer-Verlag, New York, 1978.
16. Kuhn, T. S. The Structure of Scientific Revolutions. Univ. of Chicago Press, Chicago, Ill., 1970.
17. Lawler, E., and Wood, D. Branch and bound methods: A survey. Operations Res. 14, 4 (July-Aug. 1966), 699-719.
18. MACLISP Manual. MIT, Cambridge, Mass., July 1978.
19. Minsky, M. Form and content in computer science. Comm. ACM 17, 2 (April 1970), 197-215.
20. Nilsson, N. J. Problem Solving Methods in Artificial Intelligence. McGraw-Hill, New York, 1971.
21. Parnas, D. On the criteria for decomposing systems into modules. Comm. ACM 15, 12 (Dec. 1972), 1053-1058.
22. Rich, C., and Shrobe, H. Initial report on a LISP programmer's apprentice. IEEE Trans. Software Eng. SE-4, 6 (Nov. 1978), 456-467.
23. Rulifson, J. F., Derkson, J. ..., and Waldinger, R. J. QA4: A procedural calculus for intuitive reasoning. Tech. Note 73, Stanford Res. Inst., Menlo Park, Calif., Nov. 1972.
24. Shortliffe, E. H. Computer-Based Medical Consultations: MYCIN. American Elsevier, New York, 1976.
25. Sussman, G. J., Winograd, T., and Charniak, C. MICROPLANNER reference manual. AI Memo 203A, MIT, Cambridge, Mass., 1972.
26. Teitelman, W., et al. INTERLISP manual. Xerox Palo Alto Res. Ctr., 1974.
27. Wirth, N. Program development by stepwise refinement. Comm. ACM 14 (April 1971), 221-227.
28. Wirth, N. The programming language Pascal. Acta Informatica 1, 1 (1971), 35-63.
29. Wirth, N. Systematic Programming, an Introduction. Prentice-Hall, Englewood Cliffs, New Jersey, 1973.
Categories and Subject Descriptors: D.2.2 [Software Engineering]: Tools and Techniques - structured programming; D.3.3 [Programming Languages]: Language Constructs - control
structures; F.4.2 [Mathematical Logic and Formal Languages]: Grammars and Other Rewriting Systems-parsing; K.3.2 [Computers and Education]: Computer and Information Science Education-computer science education
General Terms: Algorithms, Design, Languages
Additional Key Words and Phrases: MACLISP, MYCIN program
The Emperor's Old Clothes CHARLES ANTONY RICHARD HOARE Oxford University, England The 1980 ACM Turing Award was presented to Charles Antony Richard Hoare, Professor of Computation at the University of Oxford, England, by Walter Carlson, Chairman of the Awards Committee, at the ACM Annual Conference in Nashville, Tennessee, October 27, 1980. Professor Hoare was selected by the General Technical Achievement Award Committee for his fundamental contributions to the definition and design of programming languages. His work is characterized by an unusual combination of insight, originality, elegance, and impact. He is best known for his work on axiomatic definitions of programming languages through the use of techniques popularly referred to as axiomatic semantics. He developed ingenious algorithms such as Quicksort and was responsible for inventing and promulgating advanced data structuring techniques in scientific programming languages. He has also made important contributions to operating systems through the study of monitors. His most recent work is on communicating sequential processes. Prior to his appointment to the University of Oxford in 1977, Professor Hoare was Professor of Computer Science at The Queen's University in Belfast, Ireland, from 1968 to 1977 and was a Visiting Professor at Stanford University in 1973. From 1960 to 1968 he held a number of positions with Elliott Brothers, Ltd., England. Author's present address: Oxford University Computing Laboratory, Programming Research Group, 8-11 Keble Road, Oxford OX1 3QD, England.
Professor Hoare has published extensively and is on the editorial boards of a number of the world's foremost computer science journals. In 1973 he received the ACM Programming Systems and Languages Paper Award. Professor Hoare became a Distinguished Fellow of the British Computer Society in 1978 and was awarded the degree of Doctor of Science Honoris Causa by the University of Southern California in 1979. The Turing Award is the Association for Computing Machinery's highest award for technical contributions to the computing community. It is presented each year in commemoration of Dr. A. M. Turing, an English mathematician who made many important contributions to the computing sciences. The author recounts his experiences in the implementation, design, and standardization of computer programming languages, and issues a warning for the future. My first and most pleasant duty in this lecture is to express my profound gratitude to the Association for Computing Machinery for the great honor which they have bestowed on me and for this opportunity to address you on a topic of my choice. What a difficult choice it is! My scientific achievements, so amply recognized by this award, have already been amply described in the scientific literature. Instead of repeating the abstruse technicalities of my trade, I would like to talk informally about myself, my personal experiences, my hopes and fears, my modest successes, and my rather less modest failures. I have learned more from my failures than can ever be revealed in the cold print of a scientific article and now I would like you to learn from them, too. Besides, failures are much more fun to hear about afterwards; they are not so funny at the time. I start my story in August 1960, when I became a programmer with a small computer manufacturer, a division of Elliott Brothers (London) Ltd., where in the next eight years I was to receive my primary education in computer science. My first task was to implement for the new Elliott 803 computer, a library subroutine for a new fast method of internal sorting just invented by Shell. I greatly enjoyed the challenge of maximizing efficiency in the simple decimal-addressed machine code of those days. My boss and tutor, Pat Shackleton, was very pleased with my completed program. I then said timidly that I thought I had invented a sorting method that would usually run faster than SHELLSORT, without taking much extra store. He bet me sixpence that I had not. Although my method was very difficult to explain, he finally agreed that I had won my bet. I wrote several other tightly coded library subroutines but after six months I was given a much more important task - that of designing a new advanced high-level programming language for the company's next computer, the Elliott 503 which was to have the same instruction code as the existing 803 but run sixty times faster. In spite of my education in classical languages, this was a task for which I was even
less qualified than those who undertake it today. By great good fortune there came into my hands a copy of the Report on the International Algorithmic Language ALGOL 60. Of course, this language was obviously too complicated for our customers. How could they ever understand all those begins and ends when even our salesmen couldn't? Around Easter 1961, a course on ALGOL 60 was offered in Brighton, England, with Peter Naur, Edsger W. Dijkstra, and Peter Landin as tutors. I attended this course with my colleague in the language project, Jill Pym, our divisional Technical Manager, Roger Cook, and our Sales Manager, Paul King. It was there that I first learned about recursive procedures and saw how to program the sorting method which I had earlier found such difficulty in explaining. It was there that I wrote the procedure, immodestly named QUICKSORT, on which my career as a computer scientist is founded. Due credit must be paid to the genius of the designers of ALGOL 60 who included recursion in their language and enabled me to describe my invention so elegantly to the world. I have regarded it as the highest goal of programming language design to enable good ideas to be elegantly expressed. After the ALGOL course in Brighton, Roger Cook was driving me and my colleagues back to London when he suddenly asked, 'Instead of designing a new language, why don't we just implement ALGOL 60?' We all instantly agreed - in retrospect, a very lucky decision for me. But we knew we did not have the skill or experience at that time to implement the whole language, so I was commissioned to design a modest subset. In that design I adopted certain basic principles which I believe to be as valid today as they were then. (1) The first principle was security: The principle that every syntactically incorrect program should be rejected by the compiler and that every syntactically correct program should give a result or an error message that was predictable and comprehensible in terms of the source language program itself. Thus no core dumps should ever be necessary. It was logically impossible for any source language program to cause the computer to run wild, either at compile time or at run time. A consequence of this principle is that every occurrence of every subscript of every subscripted variable was on every occasion checked at run time against both the upper and the lower declared bounds of the array. Many years later we asked our customers whether they wished us to provide an option to switch off these checks in the interests of efficiency on production runs. Unanimously, they urged us not to - they already knew how frequently subscript errors occur on production runs where failure to detect them could be disastrous. I note with fear and horror that even in 1980 language designers and users have not learned this lesson. In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law.
(2) The second principle in the design of the implementation was brevity of the object code produced by the compiler and compactness of run time working data. There was a clear reason for this: The size of main storage on any computer is limited and its extension involves delay and expense. A program exceeding the limit, even by one word, is impossible to run, especially since many of our customers did not intend to purchase backing stores. This principle of compactness of object code is even more valid today, when processors are trivially cheap in comparison with the amounts of main store they can address, and backing stores are comparatively even more expensive and slower by many orders of magnitude. If as a result of care taken in implementation the available hardware remains more powerful than may seem necessary for a particular application, the applications programmer can nearly always take advantage of the extra capacity to increase the quality of his program, its simplicity, its ruggedness, and its reliability. (3) The third principle of our design was that the entry and exit conventions for procedures and functions should be as compact and efficient as for tightly coded machine-code subroutines. I reasoned that procedures are one of the most powerful features of a high-level language, in that they both simplify the programming task and shorten the object code. Thus there must be no impediment to their frequent use. (4) The fourth principle was that the compiler should use only a single pass. The compiler was structured as a collection of mutually recursive procedures, each capable of analyzing and translating a major syntactic unit of the language - a statement, an expression, a declaration, and so on. It was designed and documented in ALGOL 60, and then coded into decimal machine code using an explicit stack for recursion. Without the ALGOL 60 concept of recursion, at that time highly controversial, we could not have written this compiler at all. I can still recommend single-pass top-down recursive descent both as an implementation method and as a design principle for a programming language. First, we certainly want programs to be read by people and people prefer to read things once in a single pass. Second, for the user of a time-sharing or personal computer system, the interval between typing in a program (or amendment) and starting to run that program is wholly unproductive. It can be minimized by the high speed of a single pass compiler. Finally, to structure a compiler according to the syntax of its input language makes a great contribution to ensuring its correctness. Unless we have absolute confidence in this, we can never have confidence in the results of any of our programs. To observe these four principles, I selected a rather small subset of ALGOL 60. As the design and implementation progressed, I gradually
discovered methods of relaxing the restrictions without compromising any of the principles. So in the end we were able to implement nearly the full power of the whole language, including even recursion, although several features were removed and others were restricted. In the middle of 1963, primarily as a result of the work of Jill Pym and Jeff Hillmore, the first version of our compiler was delivered. After a few months we began to wonder whether anyone was using the language or taking any notice of our occasional reissue, incorporating improved operating methods. Only when a customer had a complaint did he contact us and many of them had no complaints. Our customers have now moved on to more modern computers and more fashionable languages but many have told me of their fond memories of the Elliott ALGOL System and the fondness is not due just to nostalgia, but to the efficiency, reliability, and convenience of that early simple ALGOL System. As a result of this work on ALGOL, in August 1962, I was invited to serve on the new Working Group 2.1 of IFIP, charged with responsibility for maintenance and development of ALGOL. The group's first main task was to design a subset of the language which would remove some of its less successful features. Even in those days and even with such a simple language, we recognized that a subset could be an improvement on the original. I greatly welcomed the chance of meeting and hearing the wisdom of many of the original language designers. I was astonished and dismayed at the heat and even rancor of their discussions. Apparently the original design of ALGOL 60 had not proceeded in that spirit of dispassionate search for truth which the quality of the language had led me to suppose. In order to provide relief from the tedious and argumentative task of designing a subset, the working group allocated one afternoon to discussing the features that should be incorporated in the next design of the language. Each member was invited to suggest the improvement he considered most important. On October 11, 1963, my suggestion was to pass on a request of our customers to relax the ALGOL 60 rule of compulsory declaration of variable names and adopt some reasonable default convention such as that of FORTRAN. I was astonished by the polite but firm rejection of this seemingly innocent suggestion: It was pointed out that the redundancy of ALGOL 60 was the best protection against programming and coding errors which could be extremely expensive to detect in a running program and even more expensive not to. The story of the Mariner space rocket to Venus, lost because of the lack of compulsory declarations in FORTRAN, was not to be published until later. I was eventually persuaded of the need to design programming notations so as to maximize the number of errors which cannot be made, or if made, can be reliably detected at compile time. Perhaps this would make the text of programs longer. Never mind! Wouldn't you be delighted if your Fairy Godmother offered to wave her wand
over your program to remove all its errors and only made the condition that you should write out and key in your whole program three times! The way to shorten programs is to use procedures, not to omit vital declarative information. Among the other proposals for the development of a new ALGOL was that the switch declaration of ALGOL 60 should be replaced by a more general feature, namely an array of label-valued variables and that a program should be able to change the values of these variables by assignment. I was very much opposed to this idea, similar to the assigned GO TO of FORTRAN, because I had found a surprising number of tricky problems in the implementation of even the simple labels and switches of ALGOL 60. I could see even more problems in the new feature including that of jumping back into a block after it had been exited. I was also beginning to suspect that programs that used a lot of labels were more difficult to understand and get correct and that programs that assigned new values to label variables would be even more difficult still. It occurred to me that the appropriate notation to replace the ALGOL 60 switch should be based on that of the conditional expression of ALGOL 60, which selects between two alternative actions according to the value of a Boolean expression. So I suggested the notation for a 'case expression' which selects between any number of alternatives according to the value of an integer expression. That was my second language design proposal. I am still most proud of it, because it raises essentially no problems either for the implementor, the programmer, or the reader of a program. Now, after more than fifteen years, there is the prospect of international standardization of a language incorporating this notation - a remarkably short interval compared with other branches of engineering. Back again to my work at Elliott's. After the unexpected success of our ALGOL Compiler, our thoughts turned to a more ambitious project: To provide a range of operating system software for larger configurations of the 503 computer, with card readers, line printers, magnetic tapes, and even a core backing store which was twice as cheap and twice as large as main store, but fifteen times slower. This was to be known as the Elliott 503 Mark II software system. It comprised: (1) An assembler for a symbolic assembly language in which all the rest of the software was to be written. (2) A scheme for automatic administration of code and data overlays, either from magnetic tape or from core backing store. This was to be used by the rest of the software. (3) A scheme for automatic buffering of all input and output on any available peripheral device - again, to be used by all the other software. (4) A filing system on magnetic tape with facilities for editing and job control.
(5) A completely new implementation of ALGOL 60, which removed all the nonstandard restrictions which we had imposed on our first implementation. (6) A compiler for FORTRAN as it was then.
I wrote documents which described the relevant concepts and facilities and we sent them to existing and prospective customers. Work started with a team of fifteen programmers and the deadline for delivery was set some eighteen months ahead in March 1965. After initiating the design of the Mark II software, I was suddenly promoted to the dizzying rank of Assistant Chief Engineer, responsible for advanced development and design of the company's products, both hardware and software. Although I was still managerially responsible for the 503 Mark II software, I gave it less attention than the company's new products and almost failed to notice when the deadline for its delivery passed without event. The programmers revised their implementation schedules and a new delivery date was set some three months ahead in June 1965. Needless to say, that day also passed without event. By this time, our customers were getting angry and my managers instructed me to take personal charge of the project. I asked the senior programmers once again to draw up revised schedules, which again showed that the software could be delivered within another three months. I desperately wanted to believe it but I just could not. I disregarded the schedules and began to dig more deeply into the project. It turned out that we had failed to make any overall plans for the allocation of our most limited resource - main storage. Each programmer expected this to be done automatically, either by the symbolic assembler or by the automatic overlay scheme. Even worse, we had failed to simply count the space used by our own software which was already filling the main store of the computer, leaving no space for our customers to run their programs. Hardware address length limitations prohibited adding more main storage. Clearly, the original specifications of the software could not be met and had to be drastically curtailed. Experienced programmers and even managers were called back from other projects. We decided to concentrate first on delivery of the new compiler for ALGOL 60, which careful calculation showed would take another four months. I impressed upon all the programmers involved that this was no longer just a prediction; it was a promise; if they found they were not meeting their promise, it was their personal responsibility to find ways and means of making good. The programmers responded magnificently to the challenge. They worked nights and days to ensure completion of all those items of software which were needed by the ALGOL compiler. To our delight,
they met the scheduled delivery date; it was the first major item of working software produced by the company over a period of two years. Our delight was short-lived; the compiler could not be delivered. Its speed of compilation was only two characters per second which compared unfavorably with the existing version of the compiler operating at about a thousand characters per second. We soon identified the cause of the problem: It was thrashing between the main store and the extension core backing store which was fifteen times slower. It was easy to make some simple improvements, and within a week we had doubled the speed of compilation - to four characters per second. In the next two weeks of investigation and reprogramming, the speed was doubled again - to eight characters per second. We could see ways in which within a month this could be still further improved; but the amount of reprogramming required was increasing and its effectiveness was decreasing; there was an awful long way to go. The alternative of increasing the size of the main store so frequently adopted in later failures of this kind was prohibited by hardware addressing limitations. There was no escape: The entire Elliott 503 Mark II software project had to be abandoned, and with it, over thirty man-years of programming effort, equivalent to nearly one man's active working life, and I was responsible, both as designer and as manager, for wasting it. A meeting of all our 503 customers was called and Roger Cook, who was then manager of the computing division, explained to them that not a single word of the long-promised software would ever be delivered to them. He adopted a very quiet tone of delivery, which ensured that none of the customers could interrupt, murmur in the background, or even shuffle in their seats. I admired but could not share his calm. Over lunch our customers were kind to try to comfort me. They had realized long ago that software to the original specification could never have been delivered, and even if it had been, they would not have known how to use its sophisticated features, and anyway many such large projects get cancelled before delivery. In retrospect, I believe our customers were fortunate that hardware limitations had protected them from the arbitrary excesses of our software designs. In the present day, users of microprocessors benefit from a similar protection - but not for much longer. At that time I was reading the early documents describing the concepts and features of the newly announced OS 360, and of a new time-sharing project called Multics. These were far more comprehensive, elaborate, and sophisticated than anything I had imagined, even in my first version of the 503 Mark II software. Clearly IBM and MIT must be possessed of some secret of successful software design and implementation whose nature I could not even begin to guess at. It was only later that they realized they could not either.
So I still could not see how I had brought such a great misfortune upon my company. At the time I was convinced that my managers were planning to dismiss me. But no, they were intending a far more severe punishment. 'O.K. Tony,' they said. 'You got us into this mess and now you're going to get us out.' 'But I don't know how,' I protested, but their reply was simple. 'Well then, you'll have to find out.' They even expressed confidence that I could do so. I did not share their confidence. I was tempted to resign. It was the luckiest of all my lucky escapes that I did not. Of course, the company did everything they could to help me. They took away my responsibility for hardware design and reduced the size of my programming teams. Each of my managers explained carefully his own theory of what had gone wrong and all the theories were different. At last, there breezed into my office the most senior manager of all, a general manager of our parent company, Andrew St. Johnston. I was surprised that he had even heard of me. 'You know what went wrong?' he shouted - he always shouted - 'You let your programmers do things which you yourself do not understand.' I stared in astonishment. He was obviously out of touch with present-day realities. How could one person ever understand the whole of a modern software product like the Elliott 503 Mark II software system? I realized later that he was absolutely right; he had diagnosed the true cause of the problem and he had planted the seed of its later solution. I still had a team of some forty programmers and we needed to retain the good will of customers for our new machine and even regain the confidence of the customers for our old one. But what should we actually plan to do when we knew only one thing - that all our previous plans had failed? I therefore called an all-day meeting of our senior programmers on October 22, 1965, to thrash out the question between us. I still have the notes of that meeting. We first listed the recent major grievances of our customers: Cancellation of products, failure to meet deadlines, excessive size of software, '... not justified by the usefulness of the facilities provided,' excessively slow programs, failure to take account of customer feedback; 'Earlier attention paid to quite minor requests of our customers might have paid as great dividends of goodwill as the success of our most ambitious plans.' We then listed our own grievances: Lack of machine time for program testing, unpredictability of machine time, lack of suitable peripheral equipment, unreliability of the hardware even when available, dispersion of programming staff, lack of equipment for keypunching of programs, lack of firm hardware delivery dates, lack of technical writing effort for documentation, lack of software knowledge outside of the programming group, interference from higher managers who imposed decisions, '... without a full realization of
the more intricate implications of the matter,' and overoptimism in the face of pressure from customers and the Sales Department. But we did not seek to excuse our failure by these grievances. For example, we admitted that it was the duty of programmers to educate their managers and other departments of the company by '... presenting the necessary information in a simple palatable form.' The hope '... that deficiencies in original program specifications could be made up by the skill of a technical writing department ... was misguided; the design of a program and the design of its specification must be undertaken in parallel by the same person, and they must interact with each other. A lack of clarity in specification is one of the surest signs of a deficiency in the program it describes, and the two faults must be removed simultaneously before the project is embarked upon.' I wish I had followed this advice in 1963; I wish we all would follow it today. My notes of the proceedings of that day in October 1965 include a complete section devoted to failings within the software group; this section rivals the most abject self-abasement of a revisionist official in the Chinese cultural revolution. Our main failure was overambition. 'The goals which we have attempted have obviously proved to be far beyond our grasp.' There was also failure in prediction, in estimation of program size and speed, of effort required, in planning the coordination and interaction of programs, in providing an early warning that things were going wrong. There were faults in our control of program changes, documentation, liaison with other departments, with our management, and with our customers. We failed in giving clear and stable definitions of the responsibilities of individual programmers and project leaders - oh, need I go on? What was amazing was that a large team of highly intelligent programmers could labor so hard and so long on such an unpromising project. You know, you shouldn't trust us intelligent programmers. We can think up such good arguments for convincing ourselves and each other of the utterly absurd. Especially don't believe us when we promise to repeat an earlier success, only bigger and better next time. The last section of our inquiry into the failure dealt with the criteria of quality of software. 'In the recent struggle to deliver any software at all, the first casualty has been consideration of the quality of the software delivered. The quality of software is measured by a number of totally incompatible criteria, which must be carefully balanced in the design and implementation of every program.' We then made a list of no less than seventeen criteria which has been published in a guest editorial in Volume 2 of the journal, Software Practice and Experience. How did we recover from the catastrophe? First, we classified our 503 customers into groups, according to the nature and size of the hardware configurations which they had bought - for example, those
with magnetic tapes were all in one group. We assigned to each group of customers a small team of programmers and told the team leader to visit the customers to find out what they wanted; to select the easiest request to fulfill, and to make plans (but not promises) to implement it. In no case would we consider a request for a feature that would take more than three months to implement and deliver. The project leader would then have to convince me that the customers' request was reasonable, that the design of the new feature was appropriate, and that the plans and schedules for implementation were realistic. Above all, I did not allow anything to be done which I did not myself understand. It worked! The software requested began to be delivered on the promised dates. With an increase in our confidence and that of our customers, we were able to undertake fulfilling slightly more ambitious requests. Within a year we had recovered from the disaster. Within two years, we even had some moderately satisfied customers. Thus we muddled through by common sense and compromise to something approaching success. But I was not satisfied. I did not see why the design and implementation of an operating system should be so much more difficult than that of a compiler. This is the reason why I have devoted my later research to problems of parallel programming and language constructs which would assist in clear structuring of operating systems - constructs such as monitors and communicating processes. While I was working at Elliott's, I became very interested in techniques for formal definition of programming languages. At that time, Peter Landin and Christopher Strachey proposed to define a programming language in a simple functional notation, that specified the effect of each command on a mathematically defined abstract machine. I was not happy with this proposal because I felt that such a definition must incorporate a number of fairly arbitrary representation decisions and would not be much simpler in principle than an implementation of the language for a real machine. As an alternative, I proposed that a programming language definition should be formalized as a set of axioms, describing the desired properties of programs written in the language. I felt that carefully formulated axioms would leave an implementation the necessary freedom to implement the language efficiently on different machines and enable the programmer to prove the correctness of his programs. But I did not see how to actually do it. I thought that it would need lengthy research to develop and apply the necessary techniques and that a university would be a better place to conduct such research than industry. So I applied for a chair in Computer Science at the Queen's University of Belfast where I was to spend nine happy and productive years. In October 1968, as I unpacked my papers in my new home in Belfast, I came across an obscure preprint of an article by Bob Floyd entitled, 'Assigning Meanings to Programs.' What a stroke of luck! At last I could see a
way to achieve my hopes for my research. Thus I wrote my first paper on the axiomatic approach to computer programming, published in the Communications of the ACM in October 1969. Just recently, I have discovered that an early advocate of the assertional method of program proving was none other than Alan Turing himself. On June 24, 1950, at a conference in Cambridge, he gave a short talk entitled, 'Checking a Large Routine,' which explains the idea with great clarity. 'How can one check a large routine in the sense of making sure that it's right? In order that the man who checks may not have too difficult a task, the programmer should make a number of definite assertions which can be checked individually, and from which the correctness of the whole program easily follows.' Consider the analogy of checking an addition. If the sum is given [just as a column of figures with the answer below] one must check the whole at one sitting. But if the totals for the various columns are given [with the carries added in separately], the checker's work is much easier, being split up into the checking of the various assertions [that each column is correctly added] and the small addition [of the carries to the total]. This principle can be applied to the checking of a large routine but we will illustrate the method by means of a small routine viz. one to obtain n factorial without the use of a multiplier. Unfortunately there is no coding system sufficiently generally known to justify giving this routine in full, but a flow diagram will be sufficient for illustration. That brings me back to the main theme of my talk, the design of programming languages. During the period August 1962 to October 1966, I attended every meeting of the IFIP ALGOL working group. After completing our labors on the IFIP ALGOL subset, we started on the design of ALGOL X, the intended successor to ALGOL 60. More suggestions for new features were made and in May 1965, Niklaus Wirth was commissioned to collate them into a single language design. I was delighted by his draft design which avoided all the known defects of ALGOL 60 and included several new features, all of which could be simply and efficiently implemented, and safely and conveniently used. The description of the language was not yet complete. I worked hard on making suggestions for its improvement and so did many other members of our group. By the time of the next meeting in St. Pierre de Chartreuse, France, in October 1965, we had a draft of an excellent and realistic language design which was published in June 1966 as 'A Contribution to the Development of ALGOL' in the Communications of the ACM. It was implemented on the IBM 360 and given the title ALGOL W by its many happy users. It was not only a worthy successor of ALGOL 60, it was even a worthy predecessor of PASCAL. At the same meeting, the ALGOL committee had placed before it a short, incomplete, and rather incomprehensible document, describing a different, more ambitious and, to me, a far less attractive language.
I was astonished when the working group, consisting of all the best known international experts of programming languages, resolved to lay aside the commissioned draft on which we had all been working and swallow a line with such an unattractive bait. This happened just one week after our inquest on the 503 Mark II software project. I gave desperate warnings against the obscurity, the complexity, and overambition of the new design, but my warnings went unheeded. I conclude that there are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult. It demands the same skill, devotion, insight, and even inspiration as the discovery of the simple physical laws which underlie the complex phenomena of nature. It also requires a willingness to accept objectives which are limited by physical, logical, and technological constraints, and to accept a compromise when conflicting objectives cannot be met. No committee will ever do this until it is too late. So it was with the ALGOL committee. Clearly the draft which it preferred was not yet perfect. So a new and final draft of the new ALGOL language design was promised in three months' time; it was to be submitted to the scrutiny of a subgroup of four members including myself. Three months came and went, without a word of the new draft. After six months, the subgroup met in the Netherlands. We had before us a longer and thicker document, full of errors corrected at the last minute, describing yet another but to me, equally unattractive language. Niklaus Wirth and I spent some time trying to get removed some of the deficiencies in the design and in the description, but in vain. The completed final draft of the language was promised for the next meeting of the full ALGOL committee in three months time. Three months came and went -not a word of the new draft appeared. After six months, in October 1966, the ALGOL working group met in Warsaw. It had before it an even longer and thicker document, full of errors corrected at the last minute, describing equally obscurely yet another different, and to me, equally unattractive language. The experts in the group could not see the defects of the design and they firmly resolved to adopt the draft, believing it would be completed in three months. In vain, I told them it would not. In vain, I urged them to remove some of the technical mistakes of the language, the predominance of references, the default type conversions. Far from wishing to simplify the language, the working group actually asked the authors to include even more complex features like overloading of operators and concurrency. When any new language design project is nearing completion, there is always a mad rush to get new features added before standardization. The rush is mad indeed, because it leads into a trap from which there The Emperor's Old Clothes
is no escape. A feature which is omitted can always be added later, when its design and its implications are well understood. A feature which is included before it is fully understood can never be removed later. At last, in December 1968, in a mood of black depression, I attended the meeting in Munich at which our long-gestated monster was to come to birth and receive the name ALGOL 68. By this time, a number of other members of the group had become disillusioned, but too late: the committee was now packed with supporters of the language, which was sent up for promulgation by the higher committees of IFIP. The best we could do was to send with it a minority report, stating our considered view that, '... as a tool for the reliable creation of sophisticated programs, the language was a failure.' This report was later suppressed by IFIP, an act which reminds me of the lines of Hilaire Belloc: 'But scientists, who ought to know, / Assure us that it must be so. / Oh, let us never, never doubt / What nobody is sure about.'
I did not attend any further meetings of that working group. I am pleased to report that the group soon came to realize that there was something wrong with their language and with its description; they labored hard for six more years to produce a revised description of the language. It is a great improvement but I'm afraid that, in my view, it does not remove the basic technical flaws in the design, nor does it begin to address the problems of its overwhelming complexity. Programmers are always surrounded by complexity; we cannot avoid it. Our applications are complex because we are ambitious to use our computers in ever more sophisticated ways. Programming is complex because of the large number of conflicting objectives for each of our programming projects. If our basic tool, the language in which we design and code our programs, is also complicated, the language itself becomes part of the problem rather than part of its solution. Now let me tell you about yet another overambitious language project. Between 1965 and 1970, I was a member and even chairman of the Technical Committee No. 10 of the European Computer Manufacturers Association. We were charged first with a watching brief and then with the standardization of a language to end all languages, designed to meet the needs of all computer applications, both commercial and scientific, by the greatest computer manufacturer of all time. I had studied with interest and amazement, even a touch of amusement, the four initial documents describing a language called NPL, which appeared between March 1 and November 30, 1964. Each was more ambitious and absurd than the last in its wishful speculations. Then the language began to be implemented and a new series of documents began to appear at six-monthly intervals, each describing the final frozen version of the language, under its final frozen name PL/I.
But to me, each revision of the document simply showed how far the initial F-level implementation had progressed. Those parts of the language that were not yet implemented were still described in freeflowing flowery prose giving promise of unalloyed delight. In the parts that had been implemented, the flowers had withered; they were choked by an undergrowth of explanatory footnotes, placing arbitrary and unpleasant restrictions on the use of each feature and loading upon a programmer the responsibility for controlling the complex and unexpected side-effects and interaction effects with all the other features of the language. At last, March 11, 1968, the language description was nobly presented to the waiting world as a worthy candidate for standardization. But it was not. It had already undergone some seven thousand corrections and modifications at the hand of its original designers. Another twelve editions were needed before it was finally published as a standard in 1976. I fear that this was not because everybody concerned was satisfied with its design, but because they were thoroughly bored and disillusioned. For as long as I was involved in this project, I urged that the language be simplified, if necessary by subsetting, so that the professional programmer would be able to understand it and able to take responsibility for the correctness and cost-effectiveness of his programs. I urged that the dangerous features such as defaults and ON- conditions be removed. I knew that it would be impossible to write a wholly reliable compiler for a language of this complexity and impossible to write a wholly reliable program when the correctness of each part of the program depends on checking that every other part of the program has avoided all the traps and pitfalls of the language. At first I hoped that such a technically unsound project would collapse but I soon realized it was doomed to success. Almost anything in software can be implemented, sold, and even used given enough determination. There is nothing a mere scientist can say that will stand against the flood of a hundred million dollars. But there is one quality that cannot be purchased in this way -and that is reliability. The price of reliability is the pursuit of the utmost simplicity. It is a price which the very rich find most hard to pay. All this happened a long time ago. Can it be regarded as relevant in a conference dedicated to a preview of the Computer Age that lies ahead? It is my gravest fear that it can. The mistakes which we have made in the last twenty years are being repeated today on an even grander scale. I refer to a language design project which has generated documents entitled strawman, woodenmen, tinman, ironman, steelman, green and finally now ADA. This project has been initiated and sponsored by one of the world's most powerful organizations, the United States Department of Defense. Thus it is ensured of an influence and attention quite independent of its technical merits and its faults The Emperor's Old Clothes
and deficiencies threaten us with far greater dangers. For none of the evidence we have so far can inspire confidence that this language has avoided any of the problems that have afflicted other complex language projects of the past. I have been giving the best of my advice to this project since 1975. At first I was extremely hopeful. The original objectives of the language included reliability, readability of programs, formality of language definition, and even simplicity. Gradually these objectives have been sacrificed in favor of power, supposedly achieved by a plethora of features and notational conventions, many of them unnecessary and some of them, like exception handling, even dangerous. We relive the history of the design of the motor car. Gadgets and glitter prevail over fundamental concerns of safety and economy. It is not too late! I believe that by careful pruning of the ADA language, it is still possible to select a very powerful subset that would be reliable and efficient in implementation and safe and economic in use. The sponsors of the language have declared unequivocally, however, that there shall be no subsets. This is the strangest paradox of the whole strange project. If you want a language with no subsets, you must make it small. You include only those features which you know to be needed for every single application of the language and which you know to be appropriate for every single hardware configuration on which the language is implemented. Then extensions can be specially designed where necessary for particular hardware devices and for particular applications. That is the great strength of PASCAL, that there are so few unnecessary features and almost no need for subsets. That is why the language is strong enough to support specialized extensions - Concurrent PASCAL for real time work, PASCAL PLUS for discrete event simulation, UCSD PASCAL for microprocessor work stations. If only we could learn the right lessons from the successes of the past, we would not need to learn from our failures. And so, the best of my advice to the originators and designers of ADA has been ignored. In this last resort, I appeal to you, representatives of the programming profession in the United States, and citizens concerned with the welfare and safety of your own country and of mankind: Do not allow this language in its present state to be used in applications where reliability is critical, i.e., nuclear power stations, cruise missiles, early warning systems, antiballistic missile defense systems. The next rocket to go astray as a result of a programming language error may not be an exploratory space rocket on a harmless trip to Venus: It may be a nuclear warhead exploding over one of our own cities. An unreliable programming language generating unreliable programs constitutes a far greater risk to our environment and to our society than unsafe cars, toxic pesticides, or accidents at nuclear power stations. Be vigilant to reduce the risk, not to increase it.
Let me not end on this somber note. To have our best advice ignored is the common fate of all who take on the role of consultant, ever since Cassandra pointed out the dangers of bringing a wooden horse within the walls of Troy. That reminds me of a story I used to hear in my childhood. As far as I recall, its title was:
The Emperor's Old Clothes Many years ago, there was an Emperor who was so excessively fond of clothes that he spent all his money on dress. He did not trouble himself with soldiers, attend banquets, or give judgement in court. Of any other king or emperor one might say, 'He is sitting in council,' but it was always said of him, 'The emperor is sitting in his wardrobe.' And so he was. On one unfortunate occasion, he had been tricked into going forth naked to his chagrin and the glee of his subjects. He resolved never to leave his throne, and to avoid nakedness, he ordered that each of his many new suits of clothes should be simply draped on top of the old. Time passed away merrily in the large town that was his capital. Ministers and courtiers, weavers and tailors, visitors and subjects, seamstresses and embroiderers, went in and out of the throne room about their various tasks, and they all exclaimed, 'How magnificent is the attire of our Emperor.' One day the Emperor's oldest and most faithful Minister heard tell of a most distinguished tailor who taught at an ancient institute of higher stitchcraft, and who had developed a new art of abstract embroidery using stitches so refined that no one could tell whether they were actually there at all. 'These must indeed be splendid stitches,' thought the minister. 'If we can but engage this tailor to advise us, we will bring the adornment of our Emperor to such heights of ostentation that all the world will acknowledge him as the greatest Emperor there has ever been.' So the honest old Minister engaged the master tailor at vast expense. The tailor was brought to the throne room where he made obeisance to the heap of fine clothes which now completely covered the throne. All the courtiers waited eagerly for his advice. Imagine their astonishment when his advice was not to add sophistication and more intricate embroidery to that which already existed, but rather to remove layers of the finery, and strive for simplicity and elegance in place of extravagant elaboration. 'This tailor is not the expert that he claims,' they muttered. 'His wits have been addled by long contemplation in his ivory tower and he no longer understands the sartorial needs of a modern Emperor.' The tailor argued loud and long for the good sense of his advice but could not make himself heard. Finally, he accepted his fee and returned to his ivory tower. Never to this very day has the full truth of this story been told: That one fine morning, when the Emperor felt hot and bored, he extricated himself carefully from under his mountain of clothes and The Emperor's Old Clothes
is now living happily as a swineherd in another story. The tailor is canonized as the patron saint of all consultants, because in spite of the enormous fees that he extracted, he was never able to convince his clients of his dawning realization that their clothes have no Emperor.
Categories and Subject Descriptors: D.3.2 [Programming Languages]: Language Classifications; D.3.4 [Programming Languages]: Processors -compilers; D.4.1 [Operating Systems]: Process Management General Terms: Design, Languages, Reliability, Security
Additional Key Words and Phrases: Ada, Algol 60, Algol 68, Algol W, Elliott 503 Mark II Software System, PL/I
Reflections on Software Research DENNIS M. RITCHIE AT&T Bell Laboratories The ACM A. M. Turing Award for 1983 was presented to Dennis M. Ritchie and Ken L. Thompson of AT&T Bell Laboratories at the Association's Annual Conference in October for their development and implementation of the UNIX operating system. The UNIX time-sharing system was conceived by Thompson and developed jointly with Ritchie in the late 1960s. A key contribution to the portability of the UNIX system was the development by Ritchie of the C Programming Language. Their seminal paper, 'The UNIX Time-Sharing System,' was originally presented at the Fourth ACM Symposium on Operating Systems Principles in 1973 and a revised version subsequently appeared in the July 1974 issue of Communications. This paper received the ACM award for best paper in programming languages and systems in 1974. According to the Turing Award selection committee, 'The success of the UNIX system stems from its tasteful selection of a few key ideas and their elegant implementation. The model of the UNIX system has led a generation of software designers to new ways of thinking about programming. The genius of the UNIX system is its framework, which enables programmers to stand on the work of others.' Author's present address: AT&T Bell Laboratories, Room 2C-517, 600 Mountain Avenue, Murray Hill, NJ 07974.
The award is the Association's highest recognition of technical contributions to the computing community. It honors Alan M. Turing, the English mathematician who made major contributions to the computing sciences. Ritchie and Thompson gave separate lectures at the conference. Ritchie focused on the nature of the environment at Bell Labs that made development of UNIX possible. Thompson reflected on the question of how much one can trust software in contradistinction to people who write it. Thompson's paper begins on page 171. Can the circumstances that existed in Bell Labs that nurtured the UNIX project be produced again?
The UNIX operating system (UNIX is a trademark of AT&T Bell Laboratories) has suddenly become news, but it is not new. It began in 1969 when Ken Thompson discovered a little-used PDP-7 computer and set out to fashion a computing environment that he liked. His work soon attracted me; I joined in the enterprise, though most of the ideas, and most of the work for that matter, were his. Before long, others from our group in the research area of AT&T Bell Laboratories were using the system; Joe Ossanna, Doug McIlroy, and Bob Morris were especially enthusiastic critics and contributors. In 1971, we acquired a PDP-11, and by the end of that year we were supporting our first real users: three typists entering patent applications. In 1973, the system was rewritten in the C language, and in that year, too, it was first described publicly at the Operating Systems Principles conference; the resulting paper [8] appeared in Communications of the ACM the next year. Thereafter, its use grew steadily, both inside and outside of Bell Laboratories. A development group was established to support projects inside the company, and several research versions were licensed for outside use. The last research distribution was the seventh edition system, which appeared in 1979; more recently, AT&T began to market System III, and now offers System V, both products of the development group. All research versions were 'as is,' unsupported software; System V is a supported product on several different hardware lines, most recently including the 3B systems designed and built by AT&T. UNIX is in wide use, and is now even spoken of as a possible industry standard. How did it come to succeed? There are, of course, its technical merits. Because the system and its history have been discussed at some length in the literature [6, 7, 11], I will not talk about these qualities except for one; despite its frequent surface inconsistency, so colorfully annotated by Don Norman in his Datamation article [4] and despite its richness, UNIX is a simple
coherent system that pushes a few good ideas and models to the limit. It is this aspect of the system, above all, that endears it to its adherents. Beyond technical considerations, there were sociological forces that contributed to its success. First, it appeared at a time when alternatives to large, centrally administered computation centers were becoming possible; the 1970s were the decade of the minicomputer. Small groups could set up their own computation facilities. Because they were starting afresh, and because manufacturers' software was, at best, unimaginative and often horrible, some adventuresome people were willing to take a chance on a new and intriguing, even though unsupported, operating system. Second, UNIX was first available on the PDP-11, one of the most successful of the new minicomputers that appeared in the 1970s, and soon its portability brought it to many new machines as they appeared. At the time that UNIX was created, we were pushing hard for a machine, either a DEC PDP-10 or SDS (later Xerox) Sigma 7. It is certain, in retrospect, that if we had succeeded in acquiring such a machine, UNIX might have been written but would have withered away. Similarly, UNIX owes much to Multics [5]; as I have described [6, 7] it eclipsed its parent as much because it does not demand unusual hardware support as because of any other qualities. Finally, UNIX enjoyed an unusually long gestation period. During much of this time (say 1969-1979), the system was effectively under the control of its designers and being used by them. It took time to develop all the ideas and software, but even though the system was still being developed people were using it, both inside Bell Labs, and outside under license. Thus, we managed to keep the central ideas in hand, while accumulating a base of enthusiastic, technically competent users who contributed ideas and programs in a calm, communicative, and noncompetitive environment. Some outside contributions were substantial, for example those from the University of California at Berkeley. Our users were widely, though thinly, distributed within the company, at universities, and at some commercial and government organizations. The system became important in the intellectual, if not yet commercial, marketplace because of this network of early users. What does industrial computer science research consist of? Some people have the impression that the original UNIX work was a bootleg project, a 'skunk works.' This is not so. Research workers are supposed to discover or invent new things, and although in the early days we subsisted on meager hardware, we always had management encouragement. At the same time, it was certainly nothing like a development project. Our intent was to create a pleasant computing environment for ourselves, and our hope was that others liked it. The Computing Science Research Center at Bell Laboratories to which Thompson and I belong studies three broad areas: theory; numerical Reflections on Software Research
analysis; and systems, languages, and software. Although work for its own sake resulting, for example, in a paper in a learned journal, is not only tolerated but welcomed, there is strong though wonderfully subtle pressure to think about problems somehow relevant to our corporation. This has been so since I joined Bell Labs around 15 years ago, and it should not be surprising; the old Bell System may have seemed a sheltered monopoly, but research has always had to pay its way. Indeed, researchers love to find problems to work on; one of the advantages of doing research in a large company is the enormous range of the puzzles that turn up. For example, theorists may contribute to compiler design, or to LSI algorithms; numerical analysts study charge and current distribution in semiconductors; and, of course, software types like to design systems and write programs that people use. Thus, computer research at Bell Labs has always had a considerable commitment to the world, and does not fear edicts commanding us to be practical. For some of us, in fact, a principal frustration has been the inability to convince others that our research products can indeed be useful. Someone may invent a new application, write an illustrative program, and put it to use in our own lab. Many such demonstrations require further development and continuing support in order for the company to make best use of them. In the past, this use would have been exclusively inside the Bell System; more recently, there is the possibility of developing a product for direct sale. For example, some years ago Mike Lesk developed an automated directory-assistance system [3]. The program had an online Bell Labs phone book, and was connected to a voice synthesizer on a telephone line with a tone decoder. One dialed the system, and keyed in a name and location code on the telephone's key pad; it spoke back the person's telephone number and office address (it didn't attempt to pronounce the name). In spite of the hashing through twelve buttons (which, for example, squashed 'A,' 'B,' and 'C' together), it was acceptably accurate: it had to give up on around 5 percent of the tries. The program was a local hit and well used. Unfortunately, we couldn't find anyone to take it over, even as a supported service within the company, let alone a public offering, and it was an excessive drain on our resources, so it was finally scrapped. (I chose this example not only because it is old enough not to exacerbate any current squabbles, but also because it is timely: The organization that publishes the company telephone directory recently asked us whether the system could be revived.) Of course not every idea is worth developing or supporting. In any event, the world is changing: Our ideas and advice are being sought much more avidly than before. This increase in influence has been going on for several years, partly because of the success of UNIX, but, more recently, because of the dramatic alteration of the structure of our company.
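As an aside on the directory-assistance example above, the twelve-button 'hashing' can be pictured with a small sketch. The sketch below is only an illustration of the idea, not Lesk's program: it maps each letter of a name to the digit that carries it on a classic telephone keypad, so that 'A,' 'B,' and 'C' all collapse onto '2'; the handling of Q, Z, and non-letters is an assumption made here for completeness.

    #include <ctype.h>
    #include <stdio.h>
    #include <string.h>

    /* Map one letter to its keypad digit; 'A', 'B', and 'C' all become '2'. */
    static char keypad_digit(char ch)
    {
        static const char *groups[] = {
            "ABC", "DEF", "GHI", "JKL", "MNO", "PRS", "TUV", "WXY"
        };
        ch = (char)toupper((unsigned char)ch);
        for (int i = 0; i < 8; i++)
            if (strchr(groups[i], ch))
                return (char)('2' + i);
        return '1';                          /* assumed catch-all for Q, Z, and non-letters */
    }

    int main(void)
    {
        const char *name = "Lesk";           /* an illustrative lookup key */
        printf("%s -> ", name);
        for (const char *p = name; *p; p++)
            putchar(keypad_digit(*p));
        putchar('\n');                       /* prints: Lesk -> 5375 */
        return 0;
    }

Every name is thus reduced to a short digit string, which is why distinct names can collide and the real system had to give up on a few percent of the lookups.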
AT&T divested its telephone operating companies at the beginning of 1984. There has been considerable public speculation about what this will mean for fundamental research at Bell Laboratories; one report in Science [2] is typical. One fear sometimes expressed is that basic research, in general, may languish because it yields insufficient shortterm gains to the new, smaller AT&T. The public position of the company is reassuring; moreover, research management at Bell Labs seems to believe deeply, and argues persuasively, that the commitment to support of basic research is deep and will continue [1]. Fundamental research at Bell Labs in physics and chemistry and mathematics may, indeed, not be threatened; nevertheless, the danger it might face, and the case against which it must be prepared to argue, is that of irrelevance to the goals of the company. Computer science research is different from these more traditional disciplines. Philosophically it differs from the physical sciences because it seeks not to discover, explain, or exploit the natural world, but instead to study the properties of machines of human creation. In this it is analogous to mathematics, and indeed the 'science' part of computer science is, for the most part, mathematical in spirit. But an inevitable aspect of computer science is the creation of computer programs: objects that, though intangible, are subject to commercial exchange. More than anything else, the greatest danger to good computer science research today may be excessive relevance. Evidence for the worldwide fascination with computers is everywhere, from the articles on the financial, and even the front pages of the newspapers, to the difficulties that even the most prestigious universities experience in finding and keeping faculty in computer science. The best professors, instead of teaching bright students, join start-up companies, and often discover that their brightest students have preceded them. Computer science is in the limelight, especially those aspects, such as systems, languages, and machines architecture, that may have immediate commercial applications. The attention is flattering, but it can work to the detriment of good research. As the intensity of research in a particular area increases, so does the impulse to keep its results secret. This is true even in the university (Watson's account [12] of the discovery of the structure of DNA provides a well-known example), although in academia there is a strong counterpressure: Unless one publishes, one never becomes known at all. In industry, a natural impulse of the establishment is to guard proprietary information. Researchers understand reasonable restrictions on what and when they publish, but many will become irritated and flee elsewhere, or start working in less delicate areas, if prevented from communicating their discoveries and inventions in suitable fashion. Research management at Bell Labs has traditionally been sensitive to maintaining a careful balance between company interests and the industrial equivalent of academic freedom. The Reflections on Software Research
entrance of AT&T into the computer industry will test, and perhaps strain, this balance. Another danger is that commercial pressures of one sort or another will divert the attention of the best thinkers from real innovation to exploitation of the current fad, from prospecting to mining a known lode. These pressures manifest themselves not only in the disappearance of faculty into industry, but also in the conservatism that overtakes those with well-paying investments - intellectual or financial - in a given idea. Perhaps this effect explains why so few interesting software systems have come from the large computer companies; they are locked into the existing world. Even IBM, which supports a well-regarded and productive research establishment, has in recent years produced little to cause even a minor revolution in the way people think about computers. The working examples of important new systems seem to have come either from entrepreneurial efforts (Visicalc is a good example) or from large companies, like Bell Labs and most especially Xerox, that were much involved with computers and could afford research into them, but did not regard them as their primary business. On the other hand, in smaller companies, even the most vigorous research support is highly dependent on market conditions. The New York Times, in an article describing Alan Kay's passage from Atari to Apple, notes the problem: 'Mr. Kay... said that Atari's laboratories had lost some of the atmosphere of innovation that once attracted some of the finest talent in the industry.' 'When I left last month it was clear that they would be putting their efforts in the short term,' he said.... 'I guess the tree of research must from time to time be refreshed with the blood of bean counters' [9]. Partly because they are new and still immature, and partly because they are a creation of the intellect, the arts and sciences of software abridge the chain, usual in physics and engineering, between fundamental discoveries, advanced development, and application. The inventors of ideas about how software should work usually find it necessary to build demonstration systems. For large systems, and for revolutionary ideas, much time is required: It can be said that UNIX was written in the 70s to distill the best systems ideas of the 60s, and became the commonplace of the 80s. The work at Xerox PARC on personal computers, bitmap graphics, and programming environments [10] shows a similar progression, starting, and coming to fruition a few years later. Time and a commitment to the long-term value of the research are needed on the part of both the researchers and their management. Bell Labs has provided this commitment and more: a rare and uniquely stimulating research environment for my colleagues and me. As it enters what company publications call 'the new competitive era,' its managers and workers will do well to keep in mind how, and under what conditions, the UNIX system succeeded. If we
can keep alive enough openness to new ideas, enough freedom of communication, enough patience to allow the novel to prosper, it
will remain possible for a future Ken Thompson to find a little-used CRAY-1 computer and fashion a system as creative, and as influential, as UNIX.
References 1. Bell Labs: New order augurs well. Nature 305, 5933 (Sept. 29, 1983). 2. Bell Labs on the brink. Science 221 (Sept. 23, 1983). 3. Lesk, M. E. User-activated BTL directory assistance. Bell Laboratories internal memorandum (1972). 4. Norman, D. A. The truth about UNIX. Datamation 27, 12 (1981). 5. Organick, E. I. The Multics System. MIT Press, Cambridge, MA, 1972. 6. Ritchie, D. M. UNIX time-sharing system: A retrospective. Bell Syst. Tech. J. 57, 6 (1978), 1947-1969. 7. Ritchie, D. M. The evolution of the UNIX time-sharing system. In Language Design and Programming Methodology, Jeffrey M. Tobias, ed., Springer-Verlag, New York (1980). 8. Ritchie, D. M. and Thompson, K. The UNIX time-sharing system. Commun. ACM 17, 7 (July 1974), 365-375. 9. Sanger, D. E. Key Atari scientist switches to Apple. The New York Times 133, 46,033 (May 3, 1984). 10. Thacker, C. P. et al. Alto, a personal computer. Xerox PARC Technical Report CSL-79-11. 11. Thompson, K. UNIX time-sharing system: UNIX implementation. Bell Syst. Tech. J. 57, 6 (1978), 1931-1946. 12. Watson, J. D. The Double Helix: A Personal Account of the Discovery of the Structure of DNA. Atheneum Publishers, New York (1968).
Categories and Subject Descriptors: C.5.2 [Computer System Implementation]: Minicomputers; D.4.0 [Software]: Operating Systems -general; K.6.1 [Management of Computing and Information Systems]: Project and People Management systems analysis and design
General Terms: Design
Additional Key Words and Phrases: Directory-assistance system, PDP-11
Reflections on Trusting Trust KEN THOMPSON AT&T Bell Laboratories [Ken Thompson was a joint recipient of the 1983 ACM Turing Award for his part in the development and implementation of the UNIX operating system. See Introduction to D. M. Ritchie's paper, Reflections on Software Research, which begins on page 163.] To what extent should one trust a statement that a program is free of Trojan horses? Perhaps it is more important to trust the people who wrote the software.
Introduction I thank the ACM for this award. I can't help but feel that I am receiving this honor for timing and serendipity as much as technical merit. UNIX (a trademark of AT&T Bell Laboratories) swept into popularity with an industry-wide change from central main frames to autonomous minis. I suspect that Daniel Bobrow [1] would be here instead of me if he could not afford a PDP-10 and had had to 'settle' for a PDP-11. Moreover, the current state of UNIX is the result of the labors of a large number of people. There is an old adage, 'Dance with the one that brought you,' which means that I should talk about UNIX. Author's present address: AT&T Bell Laboratories, Room 2C-519, 600 Mountain Avenue, Murray Hill, NJ 07974. I have not worked on
mainstream UNIX in many years, yet I continue to get undeserved credit for the work of others. Therefore, I am not going to talk about UNIX, but I want to thank everyone who has contributed. That brings me to Dennis Ritchie. Our collaboration has been a thing of beauty. In the ten years that we have worked together, I can recall only one case of miscoordination of work. On that occasion, I discovered that we both had written the same 20-line assembly language program. I compared the sources and was astounded to find that they matched character-for-character. The result of our work together has been far greater than the work that we each contributed. I am a programmer. On my 1040 form, that is what I put down as my occupation. As a programmer, I write programs. I would like to present to you the cutest program I ever wrote. I will do this in three stages and try to bring it together at the end.
Stage I In college, before video games, we would amuse ourselves by posing programming exercises. One of the favorites was to write the shortest self-reproducing program. Since this is an exercise divorced from reality, the usual vehicle was FORTRAN. Actually, FORTRAN was the language of choice for the same reason that three-legged races are popular. More precisely stated, the problem is to write a source program that, when compiled and executed, will produce as output an exact copy of its source. If you have never done this, I urge you to try it on your own. The discovery of how to do it is a revelation that far surpasses any benefit obtained by being told how to do it. The part about 'shortest' was just an incentive to demonstrate skill and determine a winner. Figure 1 shows a self-reproducing program in the C programming language. (The purist will note that the program is not precisely a self-reproducing program, but will produce a self-reproducing program.) This entry is much too large to win a prize, but it demonstrates the technique and has two important properties that I need to complete my story: (1) This program can be easily written by another program. (2) This program can contain an arbitrary amount of excess baggage that will be reproduced along with the main algorithm. In the example, even the comment is reproduced.
Stage II The C compiler is written in C. What I am about to describe is one of many 'chicken and egg' problems that arise when compilers are written in their own language. In this case, I will use a specific example from the C compiler. 172 KEN THOMPSON
FIGURE 1

    char    s[] = {
        '\t',
        '0',
        '\n',
        '}',
        ';',
        '\n',
        '\n',
        '/',
        '*',
        '\n',
        (213 lines deleted)
        0
    };

    /*
     * The string s is a
     * representation of the body
     * of this program from '0'
     * to the end.
     */

    main()
    {
        int i;

        printf("char\ts[] = {\n");
        for(i = 0; s[i]; i++)
            printf("\t%d,\n", s[i]);
        printf("%s", s);
    }
Here are some simple transliterations to allow a non-C programmer to read this code.

    =       assignment
    ==      equal to .EQ.
    !=      not equal to .NE.
    ++      increment
    'x'     single character constant
    "xxx"   multiple character string
    %d      format to convert to decimal
    %s      format to convert to string
    \t      tab character
    \n      newline character
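For readers who want to try the exercise themselves, here is a hedged sketch of a compact, complete self-reproducing C program in the spirit of the 'shortest' contest; it is not Thompson's program. It assumes an ASCII-style character set (10 and 34 are the codes for newline and double quote), and it deliberately carries no comments or excess baggage, because every character of the file must reappear in the output. Save it exactly as shown, with a final newline and nothing else, and its output is its own source.

    #include <stdio.h>
    char *s = "#include <stdio.h>%cchar *s = %c%s%c;%cint main(void){printf(s,10,34,s,34,10,10);return 0;}%c";
    int main(void){printf(s,10,34,s,34,10,10);return 0;}

The trick is the same as in Figure 1: the program carries a representation of itself (here a format string rather than an array of character codes) and prints that representation twice, once quoted and once interpreted.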
C allows a string construct to specify an initialized character array. The individual characters in the string can be escaped to represent unprintable characters. For example, "Hello world\n" represents a string with the character '\n' representing the new line character.
Figure 2.1 is an idealization of the code in the C compiler that interprets the character escape sequence. This is an amazing piece of code. It 'knows' in a completely portable way what character code is compiled for a new line in any character set. The act of knowing then allows it to recompile itself, thus perpetuating the knowledge. Suppose we wish to alter the C compiler to include the sequence '\v' to represent the vertical tab character. The extension to Figure 2.1 is obvious and is presented in Figure 2.2. We then recompile the C compiler, but we get a diagnostic. Obviously, since the binary version of the compiler does not know about '\v', the source is not legal C. We must 'train' the compiler. After it 'knows' what '\v' means, then our new change will become legal C. We look up on an ASCII chart that a vertical tab is decimal 11. We alter our source to look like Figure 2.3. Now the old compiler accepts the new source. We install the resulting binary as the new official C compiler and now we can write the portable version the way we had it in Figure 2.2.

FIGURE 2.1

    c = next();
    if(c != '\\')
        return(c);
    c = next();
    if(c == '\\')
        return('\\');
    if(c == 'n')
        return('\n');

FIGURE 2.2

    c = next();
    if(c != '\\')
        return(c);
    c = next();
    if(c == '\\')
        return('\\');
    if(c == 'n')
        return('\n');
    if(c == 'v')
        return('\v');

FIGURE 2.3

    c = next();
    if(c != '\\')
        return(c);
    c = next();
    if(c == '\\')
        return('\\');
    if(c == 'n')
        return('\n');
    if(c == 'v')
        return(11);
This is a deep concept. It is as close to a 'learning' program as I have seen. You simply tell it once, then you can use this self-referencing definition.
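To make the 'training' step concrete, here is a small, self-contained sketch of the situation of Figure 2.3; the names next and escape are invented for illustration and are not taken from any real compiler. The escape routine has just been taught, by the literal 11, what a vertical tab is; once a compiler built from such source is installed, the 11 could be rewritten as '\v' and the knowledge would survive only in the binary.

    #include <stdio.h>

    static const char *p;                    /* cursor into the "source" text */
    static int next(void) { return (unsigned char)*p++; }

    /* Interpret one character, following the shape of Figure 2.3. */
    static int escape(void)
    {
        int c = next();
        if (c != '\\')
            return c;
        c = next();
        if (c == '\\')
            return '\\';
        if (c == 'n')
            return '\n';
        if (c == 'v')
            return 11;                       /* the "training" value for vertical tab */
        return c;
    }

    int main(void)
    {
        p = "a\\nb\\vc";                     /* text containing the escapes \n and \v */
        int c;
        while ((c = escape()) != 0)
            printf("%d ", c);                /* prints 97 10 98 11 99 on an ASCII machine */
        printf("\n");
        return 0;
    }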
Stage III Again, in the C compiler, Figure 3.1 represents the high-level control of the C compiler where the routine 'compile' is called to compile the next line of source. Figure 3.2 shows a simple modification to the compiler that will deliberately miscompile source whenever a particular pattern is matched. If this were not deliberate, it would be called a compiler 'bug.' Since it is deliberate, it should be called a 'Trojan horse.'

FIGURE 3.1

    compile(s)
    char *s;
    {
        ...
    }

FIGURE 3.2

    compile(s)
    char *s;
    {
        if(match(s, "pattern")) {
            compile("bug");
            return;
        }
        ...
    }

The actual bug I planted in the compiler would match code in the UNIX 'login' command. The replacement code would miscompile the login command so that it would accept either the intended encrypted password or a particular known password. Thus if this code were installed in binary and the binary were used to compile the login command, I could log into that system as any user. Such blatant code would not go undetected for long. Even the most casual perusal of the source of the C compiler would raise suspicions. The final step is represented in Figure 3.3. This simply adds a second Trojan horse to the one that already exists. The second pattern is aimed at the C compiler. The replacement code is a Stage I self-reproducing program that inserts both Trojan horses into the compiler.

FIGURE 3.3

    compile(s)
    char *s;
    {
        if(match(s, "pattern 1")) {
            compile("bug 1");
            return;
        }
        if(match(s, "pattern 2")) {
            compile("bug 2");
            return;
        }
        ...
    }
This requires a learning phase as in the Stage II example. First we compile the modified source with the normal C compiler to produce a bugged binary. We install this binary as the official C. We can now remove the bugs from the source of the compiler and the new binary will reinsert the bugs whenever it is compiled. Of course, the login command will remain bugged with no trace in source anywhere.
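The two-pattern construction can be sketched with a deliberately toy 'compiler' that only echoes text; the patterns, names, and messages below are all invented for illustration, and the program plants nothing; it merely prints what a bugged compiler would silently substitute when one of its triggers matches.

    #include <stdio.h>
    #include <string.h>

    static int match(const char *s, const char *pattern)
    {
        return strstr(s, pattern) != NULL;   /* crude substring match */
    }

    /* A toy stand-in for the compiler's per-line routine of Figure 3.3. */
    static void compile(const char *s)
    {
        if (match(s, "check_password")) {    /* trigger 1: the login-like pattern */
            puts("/* miscompiled: also accept a fixed master password */");
            return;
        }
        if (match(s, "interpret_escape")) {  /* trigger 2: the compiler-like pattern */
            puts("/* miscompiled: reinsert both triggers into the output */");
            return;
        }
        puts(s);                             /* everything else passes through untouched */
    }

    int main(void)
    {
        compile("int add(int a, int b) { return a + b; }");
        compile("int check_password(const char *pw);");
        compile("int interpret_escape(const char *s);");
        return 0;
    }

Ordinary lines pass through unchanged; only the two trigger lines are replaced, which is exactly why reading the innocent-looking output (or, in the real attack, the clean source) reveals nothing.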
Moral The moral is obvious. You can't trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code. In demonstrating the possibility of this kind of attack, I picked on the C compiler. I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these
bugs will be harder and harder to detect. A well-installed microcode bug will be almost impossible to detect. After trying to convince you that I cannot be trusted, I wish to moralize. I would like to criticize the press in its handling of the 'hackers,' the 414 gang, the Dalton gang, etc. The acts performed by these kids are vandalism at best and probably trespass and theft at worst. It is only the inadequacy of the criminal code that saves the hackers from very serious prosecution. The companies that are vulnerable to this activity (and most large companies are very vulnerable) are pressing hard to update the criminal code. Unauthorized access to computer systems is already a serious crime in a few states and is currently being addressed in many more state legislatures as well as Congress. There is an explosive situation brewing. On the one hand, the press, television, and movies make heroes of vandals by calling them whiz kids. On the other hand, the acts performed by these kids will soon be punishable by years in prison. I have watched kids testifying before Congress. It is clear that they are completely unaware of the seriousness of their acts. There is obviously a cultural gap. The act of breaking into a computer system has to have the same social stigma as breaking into a neighbor's house. It should not matter that the neighbor's door is unlocked. The press must learn that misguided use of a computer is no more amazing than drunk driving of an automobile.
Acknowledgment I first read of the possibility of such a Trojan horse in an Air Force critique [4] of the security of an early implementation of Multics. I cannot find a more specific reference to this document. I would appreciate it if anyone who can supply this reference would let me know.
References 1. Bobrow, D. G., Burchfiel, J. D., Murphy, D. L., and Tomlinson, R. S. TENEX, a paged time-sharing system for the PDP-10. Commun. ACM 15, 3 (Mar. 1972), 135-143. 2. Kernighan, B. W., and Ritchie, D. M. The C Programming Language. Prentice-Hall, Englewood Cliffs, N.J., 1978. 3. Ritchie, D. M., and Thompson, K. The UNIX time-sharing system. Commun. ACM 17, 7 (July 1974), 365-375. 4. Unknown Air Force Document.
Categories and Subject Descriptors: D.2.5 [Software Engineering]: Testing and Debugging-debuggingaids; D.3.2 [Programming Languages]: Language Classifications-applicative languages; D.3.4 [Programming Languages]: Processors-compilers; D.4.6 [Operating Systems]: Security and Protection -access controls
General Terms: Design, Languages, Legal Aspects
Additional Key Words and Phrases: C, UNIX
From Programming Language Design to Computer Construction NIKLAUS WIRTH Niklaus Wirth of the Swiss Federal Institute of Technology (ETH) was presented the 1984 ACM A. M. Turing Award at the Association's Annual Conference in San Francisco in October in recognition of his outstanding work in developing a sequence of innovative computer languages: Euler, ALGOL-W, Modula, and Pascal. Pascal, in particular, has become significant pedagogically and has established a foundation for future research in the areas of computer language, systems, and architecture. The hallmarks of a Wirth language are its simplicity, economy of design, and high-quality engineering, which result in a language whose notation appears to be a natural extension of algorithmic thinking rather than an extraneous formalism. Wirth's ability in language design is complemented by a masterful writing ability. In the April 1971 issue of Communications of the ACM, Wirth published a seminal paper on Structured Programming ('Program Development by Stepwise Refinement') that recommended top-down structuring of programs (i.e., successively refining program stubs until the program is fully elaborated). The resulting elegant and powerful method of exposition remains interesting reading today even after the furor over Structured Programming has subsided. Author's present address: Institut fur Informatik, ETH, 8092 Zurich, Switzerland. Two later papers, 'Toward a Discipline of
Real-Time Programming' and 'What Can We Do About the Unnecessary Diversity of Notation' (published in CACM in August and November 1974, respectively), speak to Wirth's consistent and dedicated search for an adequate language formalism. The Turing Award, the Association's highest recognition of technical contributions to the computing community, honors Alan M. Turing, the English mathematician who defined the computer prototype, the Turing machine, and helped break German ciphers during World War II. Wirth received his Ph.D. from the University of California at Berkeley in 1963 and was Assistant Professor at Stanford University until 1967. He has been Professor at the ETH Zurich since 1968; from 1982 until 1984 he was Chairman of the Division of Computer Science (Informatik) at ETH. Wirth's recent work includes the design and development of the personal computer Lilith in conjunction with the Modula-2 language. In his lecture, Wirth presents a short history of his major projects, drawing conclusions and highlighting the principles that have guided his work. From NELIAC (via ALGOL 60) to Euler and ALGOL W, to Pascal and Modula-2, and ultimately Lilith, Wirth's search for an appropriate formalism for systems programming yields intriguing insights and surprising results. It is a great pleasure to receive the Turing Award, and both gratifying and encouraging to receive appreciation for work done over so many years. I wish to thank ACM for bestowing upon me this prestigious award. It is particularly fitting that I receive it in San Francisco, where my professional career began. Soon after I received notice of the award, my feeling of joy was tempered somewhat by the awareness of having to deliver the Turing lecture. For someone who is an engineer rather than an orator or preacher, this obligation causes some noticeable anxiety. Foremost among the questions it poses is the following: What do people expect from such a lecture? Some will wish to gain technical insight about one's work, or expect an assessment of its relevance or impact. Others will wish to hear how the ideas behind it emerged. Still others expect a statement from the expert about future trends, events, and products. And some hope for a frank assessment of the present engulfing us, either glorifying the monumental advance of our technology or lamenting its cancerous side effects and exaggerations. In a period of indecision, I consulted some previous Turing lectures and saw that a condensed report about the history of one's work would be quite acceptable. In order to be not just entertaining, I shall try to summarize what I believe I have learned from the past. This choice, frankly, suits me quite well, because neither do I pretend to know more about the future than most others, nor do I like to be proven wrong afterwards. Also, the art of preaching about current achievements and misdeeds is not my primary strength. This does not imply that I observe the present computing scene without concern, particularly its tumultuous hassle with commercialism.
Certainly, when I entered the computing field in 1960, it was neither so much in the commercial limelight nor in academic curricula. During my studies at the Swiss Federal Institute of Technology (ETH), the only mention I heard of computers was in an elective course given by Ambros P. Speiser, who later became the president of IFIP. The computer ERMETH developed by him was hardly accessible to ordinary students, and so my initiation to the computing field was delayed until I took a course in numerical analysis at Laval University in Canada. But alas, the Alvac III E machinery was out of order most of the time, and exercises in programming remained on paper in the form of untested sequences of hexadecimal codes. My next attempt was somewhat more successful: At Berkeley, I was confronted with Harry Huskey's pet machine, the Bendix G-15 computer. Although the Bendix G-15 provided some feeling of success by producing results, the gist of the programming art appeared to be the clever allocation of instructions on the drum. If you ignored the art, your programs could well run slower by a factor of one hundred. But the educational benefit was clear: You could not afford to ignore the least little detail. There was no way to cover up deficiencies in your design by simply buying more memory. In retrospect, the most attractive feature was that every detail of the machine was visible and could be understood. Nothing was hidden in complex circuitry, silicon, or a magic operating system. On the other hand, it was obvious that computers of the future had to be more effectively programmable. I therefore gave up the idea of studying how to design hardware in favor of studying how to use it more elegantly. It was my luck to join a research group that was engaged in the development-or perhaps rather improvement-of a compiler and its use on an IBM 704. The language was called NELIAC, a dialect of ALGOL 58. The benefits of such a 'language' were quickly obvious, and the task of automatically translating programs into machine code posed challenging problems. This is precisely what one is looking for when engaged in the pursuit of a Doctorate. The compiler, itself written in NELIAC, was a most intricate mess. The subject seemed to consist of 1 percent science and 99 percent sorcery, and this tilt had to be changed. Evidently, programs should be designed according to the same principles as electronic circuits, that is, clearly subdivided into parts with only a few wires going across the boundaries. Only by understanding one part at a time would there be hope of finally understanding the whole. This attempt received a vigorous starting impulse from the appearance of the report on ALGOL 60. ALGOL 60 was the first language defined with clarity; its syntax was even specified in a rigorous formalism. The lesson was that a clear specification is a necessary but not sufficient condition for a reliable and effective implementation. Contact with Aadrian van Wijngaarden, one of ALGOL's codesigners, From Programming Language Design to Computer Construction
brought out the central theme more distinctly: Could ALGOL's principles be condensed and crystallized even further? Thus began my adventure. in programming languages. The first experiment led to a dissertation and the language Euler-a trip with the bush knife through the jungle of language features and facilities. The result was academic elegance, but not much of practical utilityalmost an antithesis of the later coding. Fortunately, I was given the opportunity to spend a sabbatical year at the research laboratory of Xerox Corporation in Palo Alto, where the concept of the powerful personal workstation had not only originated but was also put into practice. Instead of sharing a large, monolithic computer with many others and fighting for a share via a wire with a 3-kHz bandwidth, I now used my own computer placed 184 NIKLAUS WIRTH
under my desk over a 15 -MHz channel. The influence of a 5000-fold increase in anything is not foreseeable; it is overwhelming. The most elating sensation was that after 16 years of working for computers, the computer now seemed to work for me. For the first time, I did my daily correspondence and report writing with the aid of a computer, instead of planning new languages, compilers, and programs for others to use. The other revelation was that a compiler for the language Mesa whose complexity was far beyond that of Pascal, could be implemented on such a workstation. These new working conditions were so many orders of magnitude above what I had experienced at home that I decided to try to establish such an environment there as well. I finally decided to dig into hardware design. This decision was reinforced by my old disgust with existing computer architectures that made life miserable for a compiler designer with a bent toward systematic simplicity. The idea of designing and building an entire computer system consisting of hardware, microcode, compiler, operating system, and program utilities quickly took shape in my imagination-a design that would be free from any constraint to be compatible with a PDP-11 or an IBM 360, or FORTRAN, Pascal, UNIX, or whatever other current fad or committee standard there might be. But a sensation of liberation is not enough to succeed in a technical project. Hard work, determination, a sensitive feeling of what is essential and what ephemeral, and a portion of luck are indispensable. The first lucky accident was a telephone call from a hardware designer enquiring about the possibility of coming to our university to learn about software techniques and acquire a Ph.D. Why not teach him about software and let him teach us about hardware? It didn't take long before the two of us became a functioning team, and Richard Ohran soon became so excited about the new design that he almost totally forgot both software and Ph.D. That didn't disturb me too much, for I was amply occupied with the design of hardware parts; with specifying the micro- and macrocodes, and by programming the latter's interpreter; with planning the overall software system; and in particular with programming a text editor and a diagram editor, both making use of the new high-resolution bit-mapped display and the small miracle called Mouse as a pointing device. This exercise in programming highly interactive utility programs required the study and application of techniques quite foreign to conventional compiler and operating system design. The total project was so diversified and complex that it seemed irresponsible to start it, particularly in view of the small number of part-time assistants available to us, who averaged around seven. The major threat was that it would take too long to keep the enthusiastic two of us persisting and to let the others, who had not yet experienced the power of the workstation idea, become equally enthusiastic. To keep the project within reasonable dimensions, I stuck to three dogmas: From Programming Language Design to Computer Construction
Aim for a single-processor computer to be operated by a single user and programmed in a single language. Notably, these cornerstones were diametrically opposed to the trends of the time, which favored research in multiprocessor configurations, time-sharing multiuser operating systems, and as many languages as you could muster. Under the constraints of a single language, I faced a difficult choice whose effects would be wide ranging, namely, that of selecting a language. Of existing languages, none seemed attractive. Neither could they satisfy all the requirements, nor were they particularly appealing to the compiler designer who knows the task has to be accomplished in a reasonable time span. In particular, the language had to accommodate all our wishes with rn gard to structuring facilities, based on 10 years' experience with Pascal, and it had to cater to problems so far only handled by coding with an assembler. To cut a long story short, the choice was to design an offspring of both proven Pascal and experimental Modula, that is, Modula-2. The module is the key to bringing under one hat the contradictory requirements of high-level abstraction for security through redundancy checking and low-level facilities that allow access to individual features of a particular computer. It lets the programmer encapsulate the use of low-level facilities in a few small parts of the system, thus protecting him from falling into their traps in unexpected places. The Lilith project proved that it is not only possible but advantageous to design a single-language system. Everything from device drivers to text and graphics editors s written in the same language. There is no distinction between modules belonging to the operating system and those belonging to the user's program. In fact, that distinction almost vanishes and with it the burden of a monolithic, bulky resident block of code, which no one wants but everyone has to accept. Moreover, the Lilith project proved the benefits of a well-matched hardware/software design. These benefits can be measured in terms of speed: Comparisons of execution times of Modula programs revealed that Lilith is often superior to a VAX 750 whose complexity and cost are a multiple of those of Lilith. They can also be measured in terms of space: The code of Modula programs for Lilith is shorter than the code for PDP-11, VAX, or 68000 by factors of 2 to 3, and shorter than that of the NS 32000 by a factor of 1.5 to 2. In addition, the codegenerating parts of compilers for these microprocessors are considerably more intricate than they are in Lilith due to their ill-matched instruction sets. This length factor ha s to be multiplied by the inferior density factor, which casts a dark shadow over the much advertised high-level language suitability of modern microprocessors and reveals these claims to be exaggerated. The prospect that these designs will be reproduced millions of times is rather depressing, for by their mere number they become our standard building blocks. Unfortunately, advances in semiconductor technology have been so rapid that architectural advances are overshadowed and have become seemingly less relevant. 186 NIKLAUS WIRTH
Competition forces manufacturers to freeze new designs into silicon long before they have proved their effectiveness. And whereas bulky software can at least be modified and at best be replaced, nowadays complexity has descended into the very chips. And there is little hope that we have a better mastery of complexity when we apply it to hardware rather than software. On both sides of this fence, complexity has and will maintain a strong fascination for many people. It is true that we live in a complex world and strive to solve inherently complex problems, which often do require complex mechanisms. However, this should not diminish our desire for elegant solutions, which convince by their clarity and effectiveness. Simple, elegant solutions are more effective, but they are harderto find than complex ones, and they require more time, which we too often believe to be unaffordable. Before closing, let me try to distill some of the common characteristics of the projects that were mentioned. A very important technique that is seldom used as effectively as in computing is the bootstrap. We used it in virtually every project. When developing a tool, be it a programming language, a compiler, or a computer, I designed it in such a way that it was beneficial in the very next step: PL360 was developed to implement ALGOL W; Pascal to implement Pascal; Modula-2 to implement the whole workstation software; and Lilith to provide a suitable environment for all our future work, ranging from programming to circuit documentation and development, from report preparation to font design. Bootstrapping is the most effective way of profiting from one's own efforts as well as suffering from one's mistakes. This makes it mandatory to distinguish early between what is essential and what ephemeral. I have always tried to identify and focus in on what is essential and yields unquestionable benefits. For example, the inclusion of a coherent and consistent scheme of data type declarations in a programming language I consider essential, whereas the details of varieties of for-statements, or whether the compiler distinguishes between upper- and lowercase letters, are ephemeral questions. In computer design, I consider the choice of addressing modes and the provision of complete and consistent sets of (signed and unsigned) arithmetic instructions including proper traps on overflow to be crucial; in contrast, the details of a multichannel prioritized interrupt mechanism are rather peripheral. Even more important is ensuring that the ephemeral never impinge on the systematic, structured design of the central facilities. Rather, the ephemeral must be added fittingly to the existing, well-structured framework. Rejecting pressures to include all kinds of facilities that 'might also be nice to have' is sometimes hard. The danger that one's desire to please will interfere with the goal of consistent design is very real. I have always tried to weigh the gains against the cost. For example, when considering the inclusion of either a language feature or the From Programming Language Design to Computer Construction
compiler's special treatment of a reasonably frequent construct, one must weigh the benefits against the added cost of its implementation and its mere presence, which results in a larger system. Language designers often fail in this respect. I gladly admit that certain features of Ada that have no counterparts in Modula-2 may be nice to have occasionally, but at the same time, I question whether they are worth the price. The price is considerable: First, although the design of both languages started in 1977, Ada compilers have only now begun to emerge, whereas we have been using Modula since 1979. Second, Ada compilers are rumored to be gigantic programs consisting of several hundred thousand lines of code, whereas our newest Modula compiler measures some five thousand lines only. I confess secretly that this Modula compiler is already at the limits of comprehensible complexity, and I would feel utterly incapable of constructing a good compiler for Ada. But even if the effort of building unnecessarily large systems and the cost of memory to contain their code could be ignored, the real cost is hidden in the unseen efforts of the innumerable programmers trying desperately to understand them and use them effectively.

Another common characteristic of the projects sketched was the choice of tools. It is my belief that a tool should be commensurate with the product; it must be as simple as possible, but no simpler. A tool is in fact counterproductive when a large part of the entire project is taken up by mastering the tool. Within the Euler, ALGOL W, and PL360 projects, much consideration was given to the development of table-driven, bottom-up syntax analysis techniques. Later, I switched back to the simple recursive-descent, top-down method, which is easily comprehensible and unquestionably sufficiently powerful, if the syntax of the language is wisely chosen. In the development of the Lilith hardware, we restricted ourselves to a good oscilloscope; only rarely was a logic state analyzer needed. This was possible due to a relatively systematic, trick-free concept for the processor.

Every single project was primarily a learning experiment. One learns best when inventing. Only by actually doing a development project can I gain enough familiarity with the intrinsic difficulties and enough confidence that the inherent details can be mastered. I never could separate the design of a language from its implementation, for a rigid definition without the feedback from the construction of its compiler would seem to me presumptuous and unprofessional. Thus, I participated in the construction of compilers, circuitry, and text and graphics editors, and this entailed microprogramming, much high-level programming, circuit design, board layout, and even wire wrapping. This may seem odd, but I simply like hands-on experience much better than team management. I have also learned that researchers accept leadership from a factual, in-touch team member much more readily than from an organization expert, be he a manager in industry or a university professor. I try to keep in mind that teaching by setting a good example is often the most effective method and sometimes the only one available.
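The recursive-descent method mentioned above is simple enough to show in full. The fragment below is a minimal illustration of the technique in Python, written for this edition; it is not a piece of any compiler discussed in the lecture.

# Recursive-descent parser for a tiny expression grammar:
#   expr   -> term (('+' | '-') term)*
#   term   -> factor (('*' | '/') factor)*
#   factor -> NUMBER | '(' expr ')'
# One parsing procedure per grammar rule; each consumes tokens left to right.

import re

def tokenize(text):
    return re.findall(r"\d+|[()+\-*/]", text)

class Parser:
    def __init__(self, tokens):
        self.tokens, self.pos = tokens, 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def eat(self, token):
        assert self.peek() == token, f"expected {token!r}, got {self.peek()!r}"
        self.pos += 1

    def expr(self):
        value = self.term()
        while self.peek() in ("+", "-"):
            op = self.peek(); self.eat(op)
            value = value + self.term() if op == "+" else value - self.term()
        return value

    def term(self):
        value = self.factor()
        while self.peek() in ("*", "/"):
            op = self.peek(); self.eat(op)
            value = value * self.factor() if op == "*" else value / self.factor()
        return value

    def factor(self):
        if self.peek() == "(":
            self.eat("(")
            value = self.expr()
            self.eat(")")
            return value
        number = self.peek()
        self.eat(number)
        return int(number)

assert Parser(tokenize("2 * (3 + 4) - 5")).expr() == 9

The point of the technique is visible in the shape of the code: the parser's procedures mirror the grammar rules one for one, which is why a wisely chosen syntax makes the method both sufficient and easy to understand.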
Lastly, each of these projects was carried through by the enthusiasm and the desire to succeed in the knowledge that the endeavor was worthwhile. This is perhaps the most essential but also the most elusive and subtle prerequisite. I was lucky to have team members who let themselves be infected with enthusiasm, and here is my chance to thank all of them for their valuable contributions. My sincere thanks go to all who participated, be it in the direct form of working in a team, or in the indirect forms of testing our results and providing feedback, of contributing ideas through criticism or encouragement, or of forming user societies. Without them, neither ALGOL W, nor Pascal, nor Modula-2, nor Lilith would have become what they are. This Turing Award also honors their contributions.
Categories and Subject Descriptors: C.0 [Computer Systems Organization]: General - hardware/software interfaces; D.2.1 [Software Engineering]: Requirements/Specifications - tools; D.3.4 [Programming Languages]: Processors - code generation; compilers
General Terms: Design, Languages, Performance
Additional Key Words and Phrases: ALGOL 60, ALGOL W, Lilith project, NELIAC, Pascal, PL360
Introduction to Part II
Computers and Computing Methodologies

Part II of this volume contains twelve essays, eleven Turing Lectures as they originally appeared in ACM publications, and an extensive Postscript to the one Turing Lecture never published. It should be noted that the authors, to varying degrees, present their lectures in a format that includes some personal observations and experiences. Two of the lecturers, Maurice Wilkes and J. H. Wilkinson, were personally acquainted with Alan Turing and take the opportunity to provide interesting and insightful reminiscences about Turing as a scientist and colleague.

Maurice Wilkes's 1967 lecture, 'Computers Then and Now,' is an overview of computing to that time and must, of course, be interpreted from the perspective of 1987 as something like 'Computers before Then and Then.' This should be no problem to those who know about program loops ('next time, this time's value will be last time's'). When Wilkes draws an analogy between the problems of computer development 'then' and similar difficulties connected with timesharing development 'now,' the reader to whom timesharing is a familiar way of life may imagine what current parallels exist - perhaps networking of databases or micro-to-mainframe links.
Wilkes's insights into the problems of combining hardware and software systems and his comments on the role of programming languages and data structures, on the importance of graphics and process control, and on the essential qualities of portability and parallelism have a contemporary flavor. And given the current enthusiasm for expert systems based on artificial intelligence techniques, it is difficult to resist quoting his remark: 'Animals and machines are constructed from entirely different materials and on quite different principles.' Although Wilkes's focus is not primarily on hardware, it is noteworthy that he and the authors of two of the other papers (Hamming and Newell and Simon) make special reference to the appropriateness of the word 'machinery' in the name of the society that sponsors the lectures - this is particularly of interest in view of persistent attempts over the years to change the name of the Association for Computing Machinery to something more 'relevant.'

Four other papers deal with general aspects of computing and computer science, presenting a spectrum of viewpoints:

Richard Hamming's 1968 Lecture, 'One Man's View of Computer Science,' exhibits the characteristic down-to-earth perspective of its author. Although Hamming is perhaps best known as the inventor of the error-detecting/correcting codes which bear his name, his point of view as an applied mathematician who came to computing in the days when numerical computation was preeminent is revealed by his oft-quoted remark, 'The purpose of computing is insight, not numbers.' His views on the need for mathematical training of computer scientists, on the teaching of computing applications (by the relevant departments instead of by computer science faculty), and on the importance of teaching programming 'style,' are all in accord with current directions and still can be read with profit. His characterization of the distinction between pure and applied mathematics (which some years ago pitted him against the big guns of pure mathematics in the letters column of Science) should also be heeded in this era of increasing emphasis on 'theoretical' aspects of computer science. Finally, his remarks on the 'delicate matter of ethics' in computing-related matters are as timely today as they were when he wrote them.

Marvin Minsky's 1969 Lecture, 'Form and Content in Computer Science,' takes as its point of departure what the author terms 'form-content confusion' in three areas: theory of computation, programming languages, and education. His remarks relating to the first two of these areas can be profitably read in connection with the 'theoretical' papers discussed below and the 'languages' papers of Part I, respectively. More than half the paper, however, is concerned with education, and Minsky's observations on mathematical learning by children, reflecting his work with his colleague Seymour Papert, are used to critique the 'New Mathematics.' Although enthusiasm for the latter has been tempered since 1969, the points made concerning the preoccupation with form to the neglect of content are still relevant.
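For readers who have not met them, the error-detecting/correcting codes mentioned above are easy to demonstrate in miniature. The following Python sketch of a Hamming(7,4) encoder and single-error corrector is our own illustration; the function names and the sample data are invented for the example.

# Hamming(7,4): parity bits sit at positions 1, 2, 4; data bits at 3, 5, 6, 7.
# For a valid codeword the XOR of the positions of all 1-bits is zero,
# so the syndrome of a received word is simply the position of the error.

def hamming74_encode(data_bits):              # data_bits: list of 4 ints (0/1)
    word = {pos: bit for pos, bit in zip((3, 5, 6, 7), data_bits)}
    syndrome = 0
    for pos, bit in word.items():
        if bit:
            syndrome ^= pos
    for k in (0, 1, 2):                        # fill parity positions 1, 2, 4
        word[1 << k] = (syndrome >> k) & 1
    return [word[p] for p in range(1, 8)]      # codeword, positions 1..7

def hamming74_correct(received):               # received: list of 7 ints (0/1)
    syndrome = 0
    for pos, bit in enumerate(received, start=1):
        if bit:
            syndrome ^= pos
    if syndrome:                                # non-zero syndrome = error position
        received[syndrome - 1] ^= 1
    return [received[p - 1] for p in (3, 5, 6, 7)]

codeword = hamming74_encode([1, 0, 1, 1])
codeword[4] ^= 1                                # flip one bit "in transit"
assert hamming74_correct(codeword) == [1, 0, 1, 1]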
The 1975 Lecture by Allen Newell and Herbert Simon, 'Computer Science as Empirical Inquiry: Symbols and Search,' discusses philosophically the nature of symbol systems and heuristic search in the context of computer science generally, but more particularly of artificial intelligence. Newell and Simon take as fundamental the symbolic designation of objects and the symbolic interpretation of processes (objects and processes being of the real world). Heuristic search is then advanced as the main technique for arriving at solutions to 'problems' involoving objects and processes, and the techniques of formulating such searches depend on insights about the objects and processes (hence the 'empirical inquiry'). Newell and Simon apply these ideas to programs capable of solving problems 'at 'expert' professional levels' (of which there were few at the time of the writing), 'at the level of competent amateurs,' and so forth. Kenneth Iverson's 1979 Lecture, 'Notation as a Tool of Thought,' discusses general attributes of notational schemes, such as 'suggestivity' and 'economy,' and shows how these qualities aid in problem solving. That the paper uses as an illustrative notation the programming language APL is perhaps not surprising, since the author is the inventor of that language. The context of applications, however, ranges over various areas, such as algebra, number theory, and graph theory. This breadth may also come as no surprise to those who further know that Iverson's dissertation was on numerical computing, under an economist, and that he has worked in operations research and coauthored a book called Automatic Data Processing. At first glance the perspective seems at the opposite end of the spectrum from that of Newell and Simon, being excessively 'analytic' as opposed to 'heuristic,' but if one sees analysis simply as a means of narrowing the search tree down to just one possibility, the direct solution, the two can be viewed as different sides of the same coin. The seven remaining papers in Part II deal with more specific areas in the theory and technique of computing. Although it is common to dichotomize computing practice into 'systems' and 'applications' branches, some years ago the term 'methodologies' was introduced to represent an intermediate area, that of techniques 'derived from broad areas of applications of computing which have common structures, processes, and techniques.' The quotation here is from the original Curriculum 68 recommendations for computer science programs (Communications of the ACM, March 1968). Among the methodologies there listed are Numerical Mathematics, Data Processing and File Management, and Artificial Intelligence. Although it can hardly be claimed that this terminology has attained common acceptance, it is essentially reflected in the current Computing Reviews category system, which has a category specifically named 'Computing Methodologies,' as well as two other categories representing important topics of this type, 'Mathematics of Computing' and 'Information Systems.' Computers and Computing Methodologies
Four of the papers in the current volume represent the three 'methodological' areas cited above: J. H. Wilkinson's 1970 Lecture, 'Some Comments from a Numerical Analyst,' is largely retrospective, but contains some observations on 'accuracy' in numerical computing, the area in which the author has contributed so fundamentally. The 'failures in the matrix field' that he recounts - actually failures of relevant communities to grasp the importance of stability results - have perhaps now been ameliorated as far as the 'big leagues' go (research in computing practice, mathematical software design, etc.). Similar remarks, however, could be appropriately applied today to the practices surrounding personal computers and mass-marketed software.

John McCarthy's 1971 Lecture, 'Generality in Artificial Intelligence' (although it has been cited as 'The Present State of Research on Artificial Intelligence') is, as mentioned earlier, the one Turing lecture never published. In the Postscript that appears here, the author puts the issue of 'generality,' both as it seemed in 1971 and as subsequent work in artificial intelligence has attempted to achieve it, into perspective.

Two other essays fall into an area more recently developed, that of Data Management. These are Charles Bachman's 1973 Lecture, 'The Programmer as Navigator,' and E. F. Codd's 1981 Lecture, 'Relational Database: A Practical Foundation for Productivity.' The fact that the two authors have been representatives of the two sides in the 'network versus relational' controversy which pervaded the database area for some years should not be permitted to obscure the fact that each paper gives, in its own way, valuable insights into the subject of data management generally. Bachman, in particular, describes under the metaphor of 'navigation' the modes of access to data stored in structured and linked form. If there is to be any criticism of his point of view, it is perhaps that characterization of this activity as in the purview of 'the programmer' is misleading in the light of today's emphasis on database interaction by 'the end user' as well as by the programmer. In an extensive Postscript, Bachman charges 'the programmer' with even more responsibilities. Codd, writing some eight years after Bachman, during which time the relational approach was transformed from an academic curiosity to a practical technique, reviews such basic motivating factors supporting the approach as data independence, communicability, and set-processing capability and outlines the relational model and its functioning on the basis of the fundamental database operations of select, project, and join. His discussion of the desirability of a 'double-mode sublanguage,' that is, a data manipulation language that can be both used interactively and embedded in programs, is welcome, since many discussions of database systems overemphasize the interactive mode and forget the heavy reliance that information systems have on programmed interaction with stored databases.
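The select, project, and join operations on which Codd builds can be sketched directly over small in-memory tables. The Python fragment below is illustrative only; the table contents and helper names are assumptions made for this example, not material from Codd's lecture.

# Tables as lists of dictionaries; each dictionary is one tuple (row).
employees = [{"emp": "lee", "dept": "D1"}, {"emp": "kim", "dept": "D2"}]
departments = [{"dept": "D1", "site": "Palo Alto"}, {"dept": "D2", "site": "Zurich"}]

def select(table, predicate):
    # Keep only the rows satisfying a predicate (Codd's restriction/select).
    return [row for row in table if predicate(row)]

def project(table, columns):
    # Keep only the named columns, dropping duplicate rows.
    seen, result = set(), []
    for row in table:
        reduced = tuple((c, row[c]) for c in columns)
        if reduced not in seen:
            seen.add(reduced)
            result.append(dict(reduced))
    return result

def natural_join(left, right):
    # Combine rows that agree on all shared column names.
    result = []
    for lrow in left:
        for rrow in right:
            shared = set(lrow) & set(rrow)
            if all(lrow[c] == rrow[c] for c in shared):
                result.append({**lrow, **rrow})
    return result

d1_staff = select(employees, lambda row: row["dept"] == "D1")
sites = project(natural_join(employees, departments), ["emp", "site"])
assert {"emp": "lee", "site": "Palo Alto"} in sites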
The remaining three papers deal with the topic of computational complexity, which has been a prominent focus of theoretical research in computer science in the past few years. Michael Rabin's 1976 Lecture, 'Complexity of Computations,' Stephen Cook's 1982 Lecture, 'An Overview of Computational Complexity,' and finally Richard Karp's 1985 Lecture, 'Combinatorics, Complexity, and Randomness,' collectively provide an admirable summary of this area, in a form accessible to the nonspecialist. Rabin sets forth the kinds of problems that complexity theory attacks, while Cook, struck by 'how much activity there has been in the field' since Rabin's writing, gives a comprehensive review of complexity research in several problem areas. The two papers together cite over ninety references to the literature. Finally, Karp gives a penetrating discussion of certain major directions, set in the context of his own work in the field and that of others. All three of these papers, and the supplemental interview with Karp, combine to give the reader a refreshing sense of 'being there' as these fundamental developments were being promulgated.

-ROBERT L. ASHENHURST
Chicago, Illinois
Computers Then and Now
MAURICE V. WILKES
Cambridge University, Cambridge, England

Reminiscences on the early developments leading to large-scale electronic computers show that it took much longer than was expected for the first of the more ambitious and fully engineered computers to be completed and prove themselves in practical operation. Comments on the present computer field assess the needs for future development.
I do not imagine that many of the Turing lecturers who will follow me will be people who were acquainted with Alan Turing. The work on computable numbers, for which he is famous, was published in 1936 before digital computers existed. Later he became one of the first of a distinguished succession of able mathematicians who have made contributions to the computer field. He was a colorful figure in the early days of digital computer development in England, and I would find it difficult to speak of that period without making some references to him.
Pioneering Days

An event of first importance in my life occurred in 1946, when I received a telegram inviting me to attend in the late summer of that year a course on computers at the Moore School of Electrical Engineering in Philadelphia.

Presented at the ACM 20th Anniversary Conference, Washington, D.C., August 1967. Author's present address: Olivetti Research Ltd., 4A Market Hill, Cambridge CB2 3NJ, England.
I was able to attend the latter part of the course, and a wonderful experience it was. No such course had ever been held before, and the achievements of the Moore School, and the other computer pioneers, were known to few. There were 28 students from 20 organizations. The principal instructors were John Mauchly and Presper Eckert. They were fresh from their triumph as designers of the ENIAC, which was the first electronic digital computer, although it did not work on the stored program principle. The scale of this machine would be impressive even today - it ran to over 18,000 vacuum tubes. Although the ENIAC was very successful - and very fast - for the computation of ballistic tables, which was the application for which the project was negotiated, it had severe limitations which greatly restricted its application as a general-purpose computing device. In the first place, the program was set up by means of plugs and sockets and switches, and it took a long time to change from one problem to another. In the second place, it had internal storage capacity for 20 numbers only. Eckert and Mauchly appreciated that the main problem was one of storage, and they proposed for future machines the use of ultrasonic delay lines. Instructions and numbers would be mixed in the same memory in the way to which we are now accustomed. Once the new principles were enunciated, it was seen that computers of greater power than the ENIAC could be built with one tenth the amount of equipment.

Von Neumann was, at that time, associated with the Moore School group in a consultative capacity, although I did not personally become acquainted with him until somewhat later. The computing field owes a very great debt to von Neumann. He appreciated at once the possibilities of what became known as logical design, and the potentialities implicit in the stored program principle. That von Neumann should bring his great prestige and influence to bear was important, since the new ideas were too revolutionary for some, and powerful voices were being raised to say that the ultrasonic memory would not be reliable enough, and that to mix instructions and numbers in the same memory was going against nature. Subsequent developments have provided a decisive vindication of the principles taught by Eckert and Mauchly in 1946 to those of us who were fortunate enough to be in the course. There was, however, a difficult period in the early 1950s. The first operating stored-program computers were, naturally enough, laboratory models; they were not fully engineered and they by no means exploited the full capability of the technology of the time. It took much longer than people had expected for the first of the more ambitious and fully engineered computers to be completed and prove themselves in practical operation. In retrospect, the period seems a short one; at the time, it was a period of much heart searching and even recrimination. I have often felt during the past year that we are going through a very similar phase in relation to time sharing. This is a development
carrying with it many far-reaching implications concerning the relationship of computers to individual users and to communities, and one that has stirred many people's imaginations. It is now several years since the pioneering systems were demonstrated. Once again, it is taking longer than people expected to pass from experimental systems to highly developed ones that fully exploit the technology that we have available. The result is a period of uncertainty and questioning that closely resembles the earlier period to which I referred. When it is all over, it will not take us long to forget the trials and tribulations that we are now going through.

In ultrasonic memories, it was customary to store up to 32 words end to end in the same delay line. The pulse rate was fairly high, but people were much worried about the time spent in waiting for the right word to come around. Most delay line computers were, therefore, designed so that, with the exercise of cunning, the programmer could place his instructions and numbers in the memory in such a way that the waiting time was minimized. Turing himself was a pioneer in this type of logical design. Similar methods were later applied to computers which used a magnetic drum as their memory and, altogether, the subject of optimum coding, as it was called, was a flourishing one. I felt that this kind of human ingenuity was misplaced as a long-term investment, since sooner or later we would have truly random-access memories. We therefore did not have anything to do with optimum coding in Cambridge.

Although a mathematician, Turing took quite an interest in the engineering side of computer design. There was some discussion in 1947 as to whether a cheaper substance than mercury could not be found for use as an ultrasonic delay medium. Turing's contribution to this discussion was to advocate the use of gin, which he said contained alcohol and water in just the right proportions to give a zero temperature coefficient of propagation velocity at room temperature.

A source of strength in the early days was that groups in various parts of the world were prepared to construct experimental computers without necessarily intending them to be the prototype for serial production. As a result, there became available a body of knowledge about what would work and what would not work, about what it was profitable to do and what it was not profitable to do. While looking around at the computers commercially available today, one cannot feel that all the lessons were learned, there is no doubt that this diversity of research in the early days has paid good dividends. It is, I think, important that we should have similar diversity today when we are learning how to construct large, multiple-access, multiprogrammed, multiprocessor computer systems. Instead of putting together components and vacuum tubes to make a computer, we have now to learn how to put together memory modules, processors, and peripheral devices to make a system. I hope that money will be
available to finance the construction of large systems intended for research only. Much of the early engineering development of digital computers was done in universities. A few years ago, the view was commonly expressed that universities had played their part in computer design, and that the matter could now safely be left to industry. I do not think that it is necessary that work on computer design should go on in all universities, but I am glad that some have remained active in the field. Apart from the obvious functions of universities in spreading knowledge, and keeping in the public domain material that might otherwise be hidden, universities can make a special contribution by reason of their freedom from commercial considerations, including freedom from the need to follow the fashion.
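The optimum coding described above - placing words in a delay line so that the machine rarely waits for them to come around - can be made concrete with a toy model. The sketch below is our own illustration in Python; the 32-word line, the one-word fetch time, and the five-word execution time are assumed parameters, not the figures of any particular machine.

# Toy model of a 32-word delay line circulating one word per minor cycle.
# After fetching an instruction (1 word time) and executing it (EXEC word
# times), the processor must wait until the slot holding the next
# instruction comes around again. Optimum coding chooses the placement
# stride so that the next instruction arrives exactly when it is needed.

LINE_WORDS = 32   # assumed line length
EXEC = 5          # assumed execution time per instruction, in word times

def run_time(stride, n_instructions):
    # Total time to run a straight-line program stored every `stride` slots.
    time, address = 0, 0
    for _ in range(n_instructions):
        time += 1 + EXEC                                  # fetch + execute
        next_address = (address + stride) % LINE_WORDS
        position = (address + 1 + EXEC) % LINE_WORDS      # where the line is now
        time += (next_address - position) % LINE_WORDS    # wait for the slot
        address = next_address
    return time

naive = run_time(stride=1, n_instructions=100)            # consecutive slots
optimum = run_time(stride=(1 + EXEC) % LINE_WORDS, n_instructions=100)
assert optimum < naive   # the cunning placement eliminates the waiting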
Good Language and Bad

Gradually, controversies about the design of computers themselves died down and we all began to argue about the merits or demerits of sophisticated programming techniques; the battle for automatic programming or, as we should now say, for the use of higher level programming languages, had begun. I well remember taking part at one of the early ACM meetings - it must have been about 1953 - in a debate on this subject. John Carr was also a speaker and he distinguished two groups of programmers; the first comprised the 'primitives,' who believed that all instructions should be written in octal, hexadecimal, or some similar form, and who had no time for what they called fancy schemes, while the second comprised the 'space cadets,' who saw themselves as the pioneers of a new age. I hastened to enroll myself as a space cadet, although I remember issuing a warning against relying on interpretive systems, for which there was then something of a vogue, rather than on compilers. (I do not think that the term compiler was then in general use, although it had in fact been introduced by Grace Hopper.)

The serious arguments advanced against automatic programming had to do with efficiency. Not only was the running time of a compiled program longer than that of a hand-coded program, but, what was then more serious, it needed more memory. In other words, one needed a bigger computer to do the same work. We all know that these arguments, although valid, have not proved decisive, and that people have found that it has paid them to make use of automatic programming. In fact, the spectacular expansion of the computing field during the last few years would otherwise have been impossible. We have now a very similar debate raging about time sharing, and the arguments being raised against it are very similar to those raised earlier against automatic programming. Here again, I am on the side of the space cadets, and I expect the debate to have a similar outcome.
Incidentally, I fear that in that automatic programming debate Turing would have been definitely on the side of the primitives. The programming system that he devised for the pioneering computer at Manchester University was bizarre in the extreme. He had a very nimble brain himself and saw no need to make concessions to those less well-endowed. I remember that he had decided - presumably because someone had shown him a train of pulses on an oscilloscope - that the proper way to write binary numbers was backwards, with the least significant digit on the left. He would, on occasion, carry this over into decimal notation. I well remember that once, during a lecture, when he was multiplying some decimal numbers together on the blackboard to illustrate a point about checking a program, we were all unable to follow his working until we realized that he had written the numbers backwards. I do not think that he was being funny, or trying to score off us; it was simply that he could not appreciate that a trivial matter of that kind could affect anybody's understanding one way or the other.

I believe that in twenty years people will look back on the period in which we are now living as one in which the principles underlying the design of programming languages were just beginning to be understood. I am sorry when I hear well-meaning people suggest that the time has come to standardize on one or two languages. We need temporary standards, it is true, to guide us on our way, but we must not expect to reach stability for some time yet.
The Higher Syntax

A notable achievement of the last few years has been to secure a much improved understanding of syntax and of syntax analysis. This has led to practical advances in compiler construction. An early achievement in this field, not adequately recognized at the time, was the Compiler-Compiler of Brooker and Morris. People have now begun to realize that not all problems are linguistic in character, and that it is high time that we paid more attention to the way in which data are stored in the computer, that is, to data structures. In his Turing lecture given last year, Alan Perlis drew attention to this subject. At the present time, choosing a programming language is equivalent to choosing a data structure, and if that data structure does not fit the data you want to manipulate then it is too bad. It would, in a sense, be more logical first to choose a data structure appropriate to the problem and then look around for, or construct with a kit of tools provided, a language suitable for manipulating that data structure. People sometimes talk about high-level and low-level programming languages without defining very clearly what they mean. If a high-level programming language is one in which the data structure is fixed and unalterable, and a low-level language is one
in which there is some latitude in the choice of data structures, then I think we may see a swing toward low-level programming languages for some purposes. I would, however, make this comment. In a high-level language, much of the syntax, and a large part of the compiler, are concerned with the mechanism of making declarations, the forming of compound statements out of simple statements, and with the machinery of conditional statements. All this is entirely independent of what the statements that really operate on the data do or what the data structure is like. We have, in fact, two languages, one inside the other; an outer language that is concerned with the flow of control, and an inner language which operates on the data. There might be a case for having a standard outer language - or a small number to choose from - and a number of inner languages which could be, as it were, plugged in. If necessary, in order to meet special circumstances, a new inner language could be constructed; when plugged in, it would benefit from the power provided by the outer language in the matter of organizing the flow of control. When I think of the number of special languages that we are beginning to require - for example, for real time control, computer graphics, the writing of operating systems, etc. - the more it seems to me that we should adopt a system which would save us designing and learning to use a new outer language each time.

The fundamental importance of data structures may be illustrated by considering the problem of designing a single language that would be the preferred language either for a purely arithmetic job or for a job in symbol manipulation. Attempts to produce such a language have been disappointing. The difficulty is that the data structures required for efficient implementation in the two cases are entirely different. Perhaps we should recognize this difficulty as a fundamental one, and abandon the quest for an omnibus language which will be all things to all men.
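Wilkes's picture of a standard outer language for the flow of control with pluggable inner languages for operating on the data can be sketched in a few lines. The Python fragment below is purely illustrative; the program representation and the operation table are inventions for this example.

# An "outer" interpreter that knows only about sequencing, conditionals,
# and loops, and delegates every data operation to a pluggable "inner"
# table of primitives.

def run(program, ops, env):
    kind = program[0]
    if kind == "seq":
        for step in program[1:]:
            run(step, ops, env)
    elif kind == "if":
        _, cond, then_branch = program
        if run(cond, ops, env):
            run(then_branch, ops, env)
    elif kind == "while":
        _, cond, body = program
        while run(cond, ops, env):
            run(body, ops, env)
    else:                                   # an inner-language operation
        name, *args = program
        values = [run(a, ops, env) if isinstance(a, tuple) else a for a in args]
        return ops[name](env, *values)

# One possible inner language: integer variables and arithmetic.
arith_ops = {
    "set":  lambda env, var, val: env.__setitem__(var, val),
    "get":  lambda env, var: env[var],
    "add":  lambda env, a, b: a + b,
    "less": lambda env, a, b: a < b,
}

# Sum 0 + 1 + ... + 4 using the generic outer language.
env = {}
run(("seq",
     ("set", "i", 0),
     ("set", "total", 0),
     ("while", ("less", ("get", "i"), 5),
      ("seq",
       ("set", "total", ("add", ("get", "total"), ("get", "i"))),
       ("set", "i", ("add", ("get", "i"), 1))))), arith_ops, env)
assert env["total"] == 10

A different inner table - say, one whose primitives operate on strings or on sets - could be plugged into the same outer interpreter without touching the control-flow machinery, which is the point of the proposal.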
There is one development in the software area which is, perhaps, not receiving the notice that it deserves. This is the increasing mobility of language systems from one computer to another. It has long been possible to secure this mobility by writing the system entirely in some high-level programming language in wide use such as ALGOL or FORTRAN. This method, however, forces the use of the data structures implicit in the host language and this imposes an obvious ceiling on efficiency. In order that a system may be readily transferred from one computer to another, other than via a host language, the system must be written in the first place in machine-independent form. This would not be the place to go into the various techniques that are available for transferring a suitably constructed system. They include such devices as bootstrapping, and the use of primitives and macros. Frequently the operation of transfer involves doing some work on a computer on which the system is already running.
Harry Huskey did much early pioneer work in this subject with the NELIAC system. There is reason to hope that the new-found mobility will extend itself to operating systems, or at least to substantial parts of them. Altogether, I feel that we are entering a new period in which the inconveniences of basic machine-code incompatibility will be less felt. The increasing use of internal filing systems in which information can be held within the system in alphanumeric, and hence in essentially machine-independent, form will accentuate the trend. Information so held can be transformed by algorithm to any other form in which it may be required. We must get used to regarding the machine-independent form as the basic one. We will then be quite happy to attach to our computer systems groups of devices that would now be regarded as fundamentally incompatible; in particular, I believe that in the large systems of the future the processors will not necessarily be all out of the same stable.
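The bootstrapping route to mobility mentioned above is still how self-hosting language systems are moved and rebuilt. The sketch below shows only the shape of the procedure; the compiler names, file names, and command-line convention are hypothetical, not taken from NELIAC or any other system discussed here.

# Hypothetical three-stage bootstrap of a self-hosting compiler, followed by
# the classic fixed-point check: the compiler built by itself must reproduce
# itself exactly when it compiles its own source once more.
import filecmp
import subprocess

SOURCE = "compiler.src"          # hypothetical source of the new compiler
HOST_COMPILER = "./host-cc"      # hypothetical compiler already running here

def build(compiler, source, output):
    # Assumed command-line convention: <compiler> <source> -o <output>
    subprocess.run([compiler, source, "-o", output], check=True)

build(HOST_COMPILER, SOURCE, "stage1")   # built by the host compiler
build("./stage1", SOURCE, "stage2")      # built by itself for the first time
build("./stage2", SOURCE, "stage3")      # built by the self-built compiler

# If stage2 and stage3 are identical, the system has reproduced itself and
# no longer depends on the host: the transfer by bootstrapping is complete.
assert filecmp.cmp("stage2", "stage3", shallow=False)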
Design and Assembly

A feature of the last few years has been an intensive interest in computer graphics. I believe that we in the computer field have long been aware of the utility in appropriate circumstances of graphical means of communication with a computer, but I think that many of us were surprised by the appeal that the subject had to mechanical engineers. Engineers are used to communicating with each other by diagrams and sketches and, as soon as they saw diagrams being drawn on the face of a cathode-ray tube, many of them jumped to the conclusion that the whole problem of using a computer in engineering design had been solved. We, of course, know that this is far from being the case, and that much hard work will be necessary before the potential utility of displays can be realized. The initial reaction of engineers showed us, however, two things that we should not forget. One is that, in the judgment of design engineers, the ordinary means of communicating with a computer are entirely inadequate. The second is that graphical communication in some form or other is of vital importance in engineering as that subject is now conducted; we must either provide the capability in our computer systems, or take on the impossible task of training up a future race of engineers conditioned to think in a different way.

There are signs that the recent growth of interest in computer graphics is about to be followed by a corresponding growth of interest in the manipulation of objects by computers. Several projects in this area have been initiated. The driving force behind them is largely an interest in artificial intelligence. Both the tasks chosen and the programming strategy employed reflect this interest. My own interest in the subject, however, is more practical. I believe that computer controlled mechanical devices have a great future in
factories and elsewhere. The production of engineering components has been automated to a remarkable extent, and the coming of numerically-controlled machine tools has enabled quite elaborate components to be produced automatically in relatively small batches. By contrast, much less progress has been made in automating the assembly of components to form complete articles. The artificial intelligence approach may not be altogether the right one to make to the problem of designing automatic assembly devices. Animals and machines are constructed from entirely different materials and on quite different principles. When engineers have tried to draw inspiration from a study of the way animals work they have usually been misled; the history of early attempts to construct flying machines with flapping wings illustrates this very clearly. My own view is that we shall see, before very long, ccmputer-controlled assembly belts with rows of automatic handling machines arranged alongside them, and controlled by the same computer system. I believe that these handling machines will resemble machine tools rather than fingers and thumbs, although they will be lighter in construction and will rely heavily on feedback from sensing elements of various kinds.
The Next Breakthrough

I suppose that we are all asking ourselves whether the computer as we now know it is here to stay, or whether there will be radical innovations. In considering this question, it is well to be clear exactly what we have achieved. Acceptance of the idea that a processor does one thing at a time - at any rate as the programmer sees it - made programming conceptually very simple, and paved the way for the layer upon layer of sophistication that we have seen develop. Having watched people try to program early computers in which multiplications and other operations went on in parallel, I believe that the importance of this principle can hardly be exaggerated. From the hardware point of view, the same principle led to the development of systems in which a high factor of hardware utilization could be maintained over a very wide range of problems, in other words to the development of computers that are truly general purpose. The ENIAC, by contrast, contained a great deal of hardware, some of it for computing and some of it for programming, and yet, on the average problem, only a fraction of this hardware was in use at any given time.

Revolutionary advances, if they come, must come by the exploitation of the high degree of parallelism that the use of integrated circuits will make possible. The problem is to secure a satisfactorily high factor of hardware utilization, since, without this, parallelism will not give us greater power. Highly parallel systems tend to be efficient only on the problems that the designer had in his mind; on other problems, the hardware utilization factor tends to fall to such an extent that
conventional computers are, in the long run, more efficient. I think that it is inevitable that in highly parallel systems we shall have to accept a greater degree of specialization towards particular problem areas than we are used to now. The absolute cost of integrated circuits is, of course, an important consideration, but it should be noted that a marked fall in cost would also benefit processors of conventional design.

One area in which I feel that we must pin our hopes on a high degree of parallelism is that of pattern recognition in two dimensions. Present-day computers are woefully inefficient in this area. I am not thinking only of such tasks as the recognition of written characters. Many problems in symbol manipulation have a large element of pattern recognition in them, a good example being syntax analysis. I would not exclude the possibility that there may be some big conceptual breakthrough in pattern recognition which will revolutionize the whole subject of computing.
Summary

I have ranged over the computer field from its early days to where we are now. I did not start quite at the beginning, since the first pioneers worked with mechanical and electromechanical devices, rather than with electronic devices. We owe them, however, a great debt, and their work can, I think, be studied with profit even now. Surveying the shifts of interest among computer scientists and the ever-expanding family of those who depend on computers in their work, one cannot help being struck by the power of the computer to bind together, in a genuine community of interest, people whose motivations differ widely. It is to this that we owe the vitality and vigor of our Association. If ever a change of name is thought necessary, I hope that the words 'computing machinery' or some universally recognized synonym will remain. For what keeps us together is not some abstraction, such as Turing machine, or information, but the actual hardware that we work with every day.

Categories and Subject Descriptors:
D.1.2 [Software]: Programming Techniques-automatic programming; I.2.1 [Computing Methodologies]: Artificial Intelligence; K.2 [Computing Milieu]: History of Computing-people, systems General Terms: Design, Languages Additional Key Words and Phrases: ENIAC, Moore School, optimum coding, ultrasonic delay line
One Man's View of Computer Science
R. W. HAMMING
Bell Telephone Laboratories, Inc., Murray Hill, New Jersey

A number of observations and comments are directed toward suggesting that more than the usual engineering flavor be given to computer science. The engineering aspect is important because most present difficulties in this field do not involve the theoretical question of whether certain things can be done, but rather the practical question of how they can be accomplished well and simply. The teaching of computer science could be made more effective by various alterations, for example, the inclusion of a laboratory course in programming, the requirement for a strong minor in something other than mathematics, and more practical coding and less abstract theory, as well as more seriousness and less game-playing.
Let me begin with a few personal words. When one is notified that he has been elected the ACM Turing lecturer for the year, he is at first surprised - especially is the nonacademic person surprised by an ACM award. After a little while the surprise is replaced by a feeling of pleasure. Still later comes a feeling of 'Why me?' With all that has been done and is being done in computing, why single out me and my work? Well, I suppose it has to happen to someone each year, and this time I am the lucky person. Anyway, let me thank you for the honor you have given to me and by inference to the Bell Telephone Laboratories where I work and which has made possible so much of what I have done.

Author's present address: Naval Postgraduate School, Monterey, CA 93940.
The topic of my Turing lecture, 'One Man's View of Computer Science,' was picked because 'What is computer science?' is argued endlessly among people in the field. Furthermore, as the excellent Curriculum 68 report remarks in its introduction, 'The Committee believes strongly that a continuing dialogue on the process and goals of education in computer science will be vital in the years to come.' Lastly, it is wrong to think of Turing, for whom these lectures were named, as being exclusively interested in Turing machines; the fact is that he contributed to many aspects of the field and would probably have been very interested in the topic, though perhaps not in what I say.

The question 'What is computer science?' actually occurs in many different forms, among which are: What is computer science currently? What can it develop into? What should it develop into? What will it develop into? A precise answer cannot be given to any of these. Many years ago an eminent mathematician wrote a book, What is Mathematics, and nowhere did he try to define mathematics; rather he simply wrote mathematics. While you will now and then find some aspect of mathematics defined rather sharply, the only generally agreed upon definition of mathematics is 'Mathematics is what mathematicians do,' which is followed by 'Mathematicians are people who do mathematics.' What is true about defining mathematics is also true about many other fields: there is often no clear, sharp definition of the field. In the face of this difficulty many people, including myself at times, feel that we should ignore the discussion and get on with doing it. But as George Forsythe points out so well in a recent article, it does matter what people in Washington, D.C., think computer science is. According to him, they tend to feel that it is a part of applied mathematics and therefore turn to the mathematicians for advice in the granting of funds. And it is not greatly different elsewhere; in both industry and the universities you can often still see traces of where computing first started, whether in electrical engineering, physics, mathematics, or even business. Evidently the picture which people have of a subject can significantly affect its subsequent development. Therefore, although we cannot hope to settle the question definitively, we need frequently to examine and to air our views on what our subject is and should become. In many respects, for me it would be more satisfactory to give a talk on some small, technical point in computer science - it would certainly be easier. But that is exactly one of the things that I wish to stress - the danger of getting lost in the details of the field, especially

1 A Report of the ACM Curriculum Committee on Computer Science. Comm. ACM 11, 3 (Mar. 1968), 151-197.
2 Forsythe, G. E. What to do till the computer scientist comes. Am. Math. Monthly 75, 5 (May 1968), 454-461.
in the coming days when there will be a veritable blizzard of papers appearing each month in the journals. We must give a good deal of attention to a broad training in the field-this in the face of the increasing necessity to specialize more and more highly in order to get a thesis problem, publish many papers, etc. We need to prepare our students for the year 2000 when many of them will be at the peak of their career. It seems to me to be more true in computer science than in many other fields that 'specialization leads to triviality.' I am sure you have all heard that our scientific knowledge has been doubling every 15 to 17 years. I strongly suspect that the rate is now much higher in computer science; certainly it was higher during the past 15 years. In all of our plans we must take this growth of information into account and recognize that in a very real sense we face a 'semi-infinite' amount of knowledge. In many respects the classical concept of a scholar who knows at least 90 percent of the relevant knowledge in his field is a dying concept. Narrower and narrower specialization is not the answer, since part of the difficulty is in the rapid growth of the interrelationships between fields. It is my private opinion that we need to put relatively more stress on quality and less on quantity and that the careful, critical, considered survey articles will often be more significant in advancing the field than new, nonessential material. We live in a world of shades of grey, but in order to argue, indeed even to think, it is often necessary to dichotomize and say 'black' or 'white.' Of course in doing so we do violence to the truth, but there seems to be no other way to proceed. I trust, therefore, that you will take many of my small distinctions in this light -in a sense, I do not believe them myself, but there seems to be no other simple way of discussing the matter. For example, let me make an arbitrary distinction between science and engineering by saying that science is concerned with whatispossible while engineering is concerned with choosing, from among the many possible ways, one that meets a number of often poorly stated economic and practical objectives. We call the field 'computer science' but I believe that it would be more accurately labeled 'computer engineering' were not this too likely to be misunderstood. So much of what we do is not a question of can there exist a monitor system, algorithm, scheduler, or compiler; rather it is a question of finding a practical working one with a reasonable expenditure of time and effort. While I would not change the name from 'computer science' to 'computer engineering,' I would like to see far more of a practical, engineering flavor in what we teach than I usually find in course outlines. There is a second reason for asking that we stress the practical side. As far into the future as I can see, computer science departments are going to need large sums of money. Now society usually, though not always, is more willing to give money when it can see practical One Man's View of Computer Science
returns than it is to invest in what it regards as impractical activities, amusing games, etc. If we are to get the vast sums of money I believe we will need, then we had better give a practical flavor to our field. As many of you are well aware, we have already acquired a bad reputation in many areas. There have been exceptions, of course, but all of you know how poorly we have so far met the needs of software. At the heart of computer science lies a technological device, the computing machine. Without the machine almost all of what we do would become idle speculation, hardly different from that of the notorious Scholastics of the Middle Ages. The founders of the ACM clearly recognized that most of what we did, or were going to do, rested on this technological device, and they deliberately included the word 'machinery' in the title. There are those who would like to eliminate the word, in a sense to symbolically free the field from reality, but so far these efforts have failed. I do not regret the initial choice. I still believe that it is important for us to recognize that the computer, the information processing machine, is the foundation of our field. How shall we produce this flavor of practicality that I am asking for, as well as the reputation for delivering what society needs at the time it is needed? Perhaps most important is the way we go about our business and our teaching, though the research we do will also be very important. We need to avoid the bragging of uselessness and the game-playing that the pure mathematicians so often engage in. Whether or not the pure mathematician is right in claiming that what is utterly useless today will be useful tomorrow (and I doubt very much that he is, in the current situation), it is simply poor propaganda for raising the large amounts of money we need to support the continuing growth of the field. We need to avoid making computer science look like pure mathematics: our primary standard for acceptance should be experience in the real world, not aesthetics. Were I setting up a computer science program, I would give relatively more emphasis to laboratory work than does Curriculum 68, and in particular I would require every computer science major, undergraduate or graduate, to take a laboratory course in which he designs, builds, debugs, and documents a reasonably sized program, perhaps a simulator or a simplified compiler for a particular machine. The results would be judged on style of programming, practical efficiency, freedom from bugs, and documentation. If any of these were too poor, I would not let the candidate pass. In judging his work we need to distinguish clearly between superficial cleverness and genuine understanding. Cleverness was essential in the past; it is no longer sufficient. I would also require a strong minor in some field other than computer science and mathematics. Without real experience in using the computer to get useful results the computer science major is apt to know all about the marvelous tool except how to use it. Such a person is a mere technician, skilled in manipulating the tool but with little
sense of how and when to use it for its basic purposes. I believe we should avoid turning out more idiot savants-we have more than enough 'computniks' now to last us a long time. What we need are professionals! The Curriculum 68 recognized this need for 'true-to-life' programming by saying, 'This might be arranged through summer employment, a cooperative work-study program, part-time employment in computer centers, special projects courses, or some other appropriate means.' I am suggesting that the appropriate means is a stiff laboratory course under your own control, and that the above suggestions of the Committee are rarely going to be effective or satisfactory. Perhaps the most vexing question in planning a computer science curriculum is determining the mathematics courses to require of those who major in the field. Many of us came to computing with a strong background in mathematics and tend automatically to feel that a lot of mathematics should be required of everyone. All too often the teacher tries to make the student into a copy of himself. But it is easy to observe that in the past many highly rated software people were ignorant of most of formal mathematics, though many of them seemed to have a natural talent for mathematics (as it is, rather than as it is often taught). In the past I have argued that to require a strong mathematical content for computer science would exclude many of the best people in the field. However, with the coming importance of scheduling and the allocating of the resources of the computer, I have had to reconsider my opinion. While there is some evidence that part of this will be incorporated into the hardware, I find it difficult to believe that there will not be for a long time (meaning at least five years) a lot of scheduling and allocating of resources in software. If this is to be the pattern, then we need to consider training in this field. If we do not give such training, then the computer science major will find that he is a technician who is merely programming what others tell him to do. Furthermore, the kinds of programming that were regarded in the past as being great often depended on cleverness and trickery and required little or no formal mathematics. This phase seems to be passing, and I am forced to believe that in the future a good mathematical background will be needed if our graduates are to do significant work. History shows that relatively few people can learn much new mathematics in their thirties, let alone later in life; so that if mathematics is going to play a significant role in the future, we need to give the students mathematical training while they are in school. We can, of course, evade the issue for the moment by providing two parallel paths, one with and one without mathematics, with the warning that the nonmathematical path leads to a dead end so far as further university training is concerned (assuming we believe that mathematics is essential for advanced training in computer science).
Once we grant the need for a lot of mathematics, then we face the even more difficult task of saying specifically which courses. In spite of the numerical analysts' claims for the fundamental importance of their field, a surprising amount of computer science activity requires comparatively little of it. But I believe we can defend the requirement that every computer science major take at least one course in the field. Our difficulty lies, perhaps, in the fact that the present arrangement of formal mathematics courses is not suited to our needs as we presently see them. We seem to need some abstract algebra; some queuing theory; a lot of statistics, including the design of experiments; a moderate amount of probability, with perhaps some elements of Markov chains; parts of information and coding theory; and a little on bandwidth and signalling rates, some graph theory, etc., but we also know that the field is rapidly changing and that tomorrow we may need complex variables, topology, and other topics. As I said, the planning of the mathematics courses is probably the most vexing part of the curriculum. After a lot of thinking on the matter, I currently feel that if our graduates are to make significant contributions and not be reduced to the level of technicians running a tool as they are told by others, then it is better to give them too much mathematics rather than too little. I realize all too well that this will exclude many people who in the past have made contributions, and I am not happy about my conclusion, but there it is. In the future, success in the field of computer science is apt to require a command of mathematics. One of the complaints regularly made of computer science curriculums is that they seem to almost totally ignore business applications and COBOL. I think that it is not a question of how important the applications are, nor how widely a language like COBOL is used, that should determine whether or not it is taught in the computer science department; rather, I think it depends on whether or not the business administration department can do a far better job than we can, and whether or not what is peculiar to the business applications is fundamental to other aspects of computer science. And what I have indicated about business applications applies, I believe, to most other fields of application that can be taught in other departments. I strongly believe that with the limited resources we have, and will have for a long time to come, we should not attempt to teach applications of computers in the computer science department - rather, these applications should be taught in their natural environments by the appropriate departments. The problem of the role of analog computation in the computer science curriculum is not quite the same as that of applications to special fields, since there is really no place else for it to go. There is little doubt that analog computers are economically important and will continue to be so for some time. But there is also little doubt that the field, even including hybrid computers, does not have at present the intellectual
ferment that digital computation does. Furthermore, the essence of good analog computation lies in the understanding of the physical limitations of the equipment and in the peculiar art of scaling, especially in the time variable, which is quite foreign to the rest of computer science. It tends, therefore, to be ignored rather than to be rejected; it is either not taught or else it is an elective, and this is probably the best we can expect at present when the center of interest is the general-purpose digital computer. At present there is a flavor of 'game-playing' about many courses in computer science. I hear repeatedly from friends who want to hire good software people that they have found the specialist in computer science is someone they do not want. Their experience is that graduates of our programs seem to be mainly interested in playing games, making fancy programs that really do not work, writing trick programs, etc. and are unable to discipline their own efforts so that what they say they will do gets done on time and in practical form. If I had heard this complaint merely once from a friend who fancied that he was a hard-boiled engineer, then I would dismiss it; unfortunately I have heard it from a number of capable, intelligent, understanding people. As I earlier said, since we have such a need for financial support for the current and future expansion of our facilities, we had better consider how we can avoid such remarks being made about our graduates in the coming years. Are we going to continue to turn out a product that is not wanted in many places? Or are we going to turn out responsible, effective people who meet the real needs of our society? I hope that the latter will be increasingly true; hence my emphasis on the practical aspects of computer science. One of the reasons that the computer scientists we turn out are more interested in 'cute' programming than in results is that many of our courses are being taught by people who have the instincts of a pure mathematician. Let me make another arbitrary distinction which is only partially true. The pure mathematician starts with the given problem, or else some variant that he has made up from the given problem, and produces what he says is an answer. In applied mathematics it is necessary to add two crucial steps: (1) an examination of the relevance of the mathematical model to the actual situation, and (2) the relevance of, or if you wish the interpretation of, the results of the mathematical model back to the original situation. This is where there is the sharp difference: The applied mathematician must be willing to stake part of his reputation on the remark, 'If you do so and so you will observe such and such very closely and therefore you are justified in going ahead and spending the money, or effort, to do the job as indicated,' while the pure mathematician usually shrugs his shoulders and says, 'That is none of my responsibility.' Someone must take the responsibility for the decision to go ahead on one path or another, and it seems to me that he who does assume this
responsibility will get the greater credit, on the average, as it is doled out by society. We need, therefore, in our teaching of computer science, to stress the assuming of responsibility for the whole problem and not just the cute mathematical part. This is another reason why I have emphasized the engineering aspects of the various subjects and tried to minimize the purely mathematical aspects. The difficulty is, of course, that so many of our teachers in computer science are pure mathematicians and that pure mathematics is so much easier to teach than is applied work. There are relatively few teachers available to teach in the style I am asking for. This means we must do the best we can with what we have, but we should be conscious of the direction we want to take and that we want, where possible, to give a practical flavor of responsibility and engineering rather than mere existence of results. It is unfortunate that in the early stages of computer science it is the talent and ability to handle a sea of minutiae which is important for success. But if the student is to grow into someone who can handle the larger aspects of computer science, then he must have, and develop, other talents which are not being used or exercised at the early stages. Many of our graduates never make this second step. The situation is much like that in mathematics: in the early years it is the command of the trivia of arithmetic and formal symbol manipulation of algebra which is needed, but in advanced mathematics a far different talent is needed for success. As I said, many of the people in computer science who made their mark in the area where the minutiae are the dominating feature do not develop the larger talents, and they are still around teaching and propagating their brand of detail. What is needed in the higher levels of computer science is not the 'black or white' mentality that characterizes so much of mathematics, but rather the judgment and balancing of conflicting aims that characterize engineering. I have so far skirted the field of software, or, as a friend of mine once said, 'ad hoc-ery.' There is so much truth in his characterization of software as ad hoc-ery that it is embarrassing to discuss the topic of what to teach in software courses. So much of what we have done has been in an ad hoc fashion, and we have been under so much pressure to get something going as soon as possible that we have precious little which will stand examination by the skeptical eye of a scientist or engineer who asks, 'What content is there in software?' How few are the difficult ideas to grasp in the field! How much is mere piling on of detail after detail without any careful analysis! And when 50,000-word compilers are later remade with perhaps 5000 words, how far from reasonable must have been the early ones! I am no longer a software expert, so it is hard for me to make serious suggestions about what to do in the software field, yet I feel that all too often we have been satisfied with such a low level of quality that
we have done ourselves harm in the process. We seem not to be able to use the machine, which we all believe is a very powerful tool for manipulating and transforming information, to do our own tasks in this very field. We have compilers, assemblers, monitors, etc. for others, and yet when I examine what the typical software person does, I am often appalled at how little he uses the machine in his own work. I have had enough minor successes in arguments with software people to believe that I am basically right in my insistence that we should learn to use the machine at almost every stage of what we are doing. Too few software people even try to use the machine on their own work. There are dozens of situations where a little machine computation would greatly aid the programmer. I recall one very simple one where a nonexpert with a very long FORTRAN program from the outside wanted to convert it to our local use, so he wrote a simple FORTRAN program to locate all the input-output statements and all the library references. In my experience, most programmers would have personally scanned long listings of the program to find them and with the usual human fallibility missed a couple the first time. I believe we need to convince the computer expert that the machine is his most powerful tool and that he should learn to use it as much as he can rather than personally scan the long listings of symbols as I see being done everywhere I go around the country. If what I am reporting is at all true, we have failed to teach this in the past. Of course some of the best people do in fact use the computer as I am recommending; my observation is that the run-of-the-mill programmers do not do so.
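The kind of self-serving little program Hamming describes is easy to sketch today. The following is a minimal modern stand-in (the original was itself written in FORTRAN; the Python form, the command-line handling, and the statement patterns are my own illustrative assumptions, not anything from the lecture) that scans FORTRAN source files for input-output statements and subroutine calls.

import re
import sys

# Rough fixed-form FORTRAN patterns - illustrative only, not a real parser.
IO_STMT = re.compile(r'^\s*\d*\s*(READ|WRITE|PRINT|PUNCH)\b', re.IGNORECASE)
CALL_STMT = re.compile(r'^\s*\d*\s*CALL\s+([A-Z0-9_]+)', re.IGNORECASE)

for path in sys.argv[1:]:
    with open(path) as source:
        for number, line in enumerate(source, start=1):
            if IO_STMT.match(line):
                print(f"{path}:{number}: I/O   {line.rstrip()}")
            call = CALL_STMT.match(line)
            if call:
                print(f"{path}:{number}: CALL  {call.group(1)}")

A few minutes of this sort of thing replaces an error-prone hand scan of the listing, which is exactly the point being made.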
To parody our current methods of teaching programming, we give beginners a grammar and a dictionary and tell them that they are now great writers. We seldom, if ever, give them any serious training in style. Indeed I have watched for years for the appearance of a Manual of Style and/or an Anthology of Good Programming and have as yet found none. Like writing, programming is a difficult and complex art. In both writing and programming, compactness is desirable but in both you can easily be too compact. When you consider how we teach good writing - the exercises, the compositions, and the talks that the student gives and is graded on by the teacher during his training in English - it seems we have been very remiss in this matter of teaching style in programming. Unfortunately only a few of the programmers who admit that there is something in what I have called 'style' are willing to formulate their feelings and to give specific examples. As a result, few programmers write in flowing poetry; most write in halting prose. I doubt that style in programming is tied very closely to any particular machine or language, any more than good writing in one natural language is significantly different than it is in another. There are, of course, particular idioms and details in one language that favor one way of expressing the idea rather than another, but the essentials of good writing seem to transcend the differences in the Western European languages with which I am familiar. And I doubt that it is much different for most general-purpose digital machines that are available these days. Since I am apt to be misunderstood when I say we need more of an engineering flavor and less of a science one, I should perhaps point out that I came to computer science with a Ph.D. in pure mathematics. When I ask that the training in software be given a more practical, engineering flavor, I also loudly proclaim that we have too little understanding of what we are doing and that we desperately need to develop relevant theories. Indeed, one of my major complaints about the computer field is that whereas Newton could say, 'If I have seen a little farther than others it is because I have stood on the shoulders of giants,' I am forced to say, 'Today we stand on each other's feet.' Perhaps the central problem we face in all of computer science is how we are to get to the situation where we build on top of the work of others rather than redoing so much of it in a trivially different way. Science is supposed to be cumulative, not almost endless duplication of the same kind of things. This brings me to another distinction, that between undirected research and basic research. Everyone likes to do undirected research and most people like to believe that undirected research is basic research. I am choosing to define basic research as being work upon which people will in the future base a lot of their work. After all, what else can we reasonably mean by basic research other than work upon which a lot of later work is based? I believe experience shows that relatively few people are capable of doing basic research. While one cannot be certain that a particular piece of work will or will not turn out to be basic, one can often give fairly accurate probabilities on the outcome. Upon examining the question of the nature of basic research, I have come to the conclusion that what determines whether or not a piece of work has much chance to become basic is not so much the question asked as it is the way the problem is attacked. Numerical analysis is the one venerable part of our curriculum that is widely accepted as having some content. Yet all too often there is some justice in the remark that many of the textbooks are written for mathematicians and are in fact much more mathematics than they are practical computing. The reason is, of course, that many of the people in the field are converted, or rather only partially converted, mathematicians who still have the unconscious standards of mathematics in the back of their minds. I am sure many of you are familiar with my objections 3 along these lines and I need not repeat them here.

[Footnote 3: Hamming, R. W. Numerical analysis vs. mathematics. Science 148 (Apr. 1965), 473-475.]
It has been remarked to me by several persons, and I have also observed, that many of the courses in the proposed computer science curriculum are padded. Often they appear to cover every detail rather than confining themselves to the main ideas. We do not need to teach every method for finding the real zeros of a function: we need to teach a few typical ones that are both effective and illustrate basic concepts in numerical analysis. And what I have just said about numerical analysis goes even more for software courses. There do not seem to me (and to some others) to be enough fundamental ideas in all that we know of software to justify the large amount of time that is devoted to the topic. We should confine the material we teach to that which is important in ideas and technique - the plodding through a mass of minutiae should be avoided. Let me now turn to the delicate matter of ethics. It has been observed on a number of occasions that the ethical behavior of the programmers in accounting installations leaves a lot to be desired when compared to that of the trained accounting personnel. 4 We seem not to teach the 'sacredness' of information about people and private company material. My limited observation of computer experts is that they have only the slightest regard for these matters. For example, most programmers believe they have the right to take with them any program they wish when they change employers. We should look at, and copy, how ethical standards are incorporated into the traditional accounting courses (and elsewhere), because they turn out a more ethical product than we do. We talk a lot in public of the dangers of large data banks of personnel records, but we do not do our share at the level of indoctrination of our own computer science majors.

[Footnote 4: Carey, J. L., and Doherty, W. A. Ethical Standards of the Accounting Profession. Am. Inst. CPAs, 1966.]

Along these lines, let me briefly comment on the matter of professional standards. We have recently had a standard published 5 and it seems to me to be a good one, but again I feel that I am justified in asking how this is being incorporated into the training of our students, how they are to learn to behave that way. Certainly it is not sufficient to read it to the class each morning; both ethical and professional behavior are not effectively taught that way. There is plenty of evidence that other professions do manage to communicate to their students professional standards which, while not always followed by every member, are certainly a lot better instilled than those we are presently providing for our students. Again, we need to examine how they do this kind of training and try to adapt their methods to our needs.

[Footnote 5: Comm. ACM 11, 3 (Mar. 1968), 198-220.]

Lastly, let me mention briefly the often discussed topic of social responsibility. We have sessions at meetings on this topic, we discuss it in the halls and over coffee and beer, but again I ask, 'How is it being incorporated into our training program?' The fact that we do not
have exact rules to follow is not sufficient reason for omitting all training in this important matter. I believe these three topics - ethics, professional behavior, and social responsibility - must be incorporated into the computer science curriculum. Personally I do not believe that a separate course on these topics will be effective. From what little I understand of the matter of teaching these kinds of things, they can best be taught by example, by the behavior of the professor. They are taught in the odd moments, by the way the professor phrases his remarks and handles himself. Thus it is the professor who must first be made conscious that a significant part of his teaching role is in communicating these delicate, elusive matters and that he is not justified in saying, 'They are none of my business.' These are things that must be taught constantly, all the time, by everyone, or they will not be taught at all. And if they are not somehow taught to the majority of our students, then the field will justly keep its present reputation (which may well surprise you if you ask your colleagues in other departments for their frank opinions). In closing, let me revert to a reasonable perspective of the computer science field. The field is very new, it has had to run constantly just to keep up, and there has been little time for many of the things we have long known we must some day do. But at least in the universities we have finally arrived: we have established separate departments with reasonable courses, faculty, and equipment. We are now well started, and it is time to deepen, strengthen, and improve our field so that we can be justly proud of what we teach, how we teach it, and of the students we turn out. We are not engaged in turning out technicians, idiot savants, and computniks; we know that in this modern, complex world we must turn out people who can play responsible major roles in our changing society, or else we must acknowledge that we have failed in our duty as teachers and leaders in this exciting, important field - computer science.
Categories and Subject Descriptors: K.3.2 [Computers and Education]: Computer and Information Science Education - computer science education, curriculum; K.7.m [The Computing Profession]: Miscellaneous - ethics; J.2 [Computer Applications]: Physical Sciences and Engineering
General Terms: Design, Human Factors, Standards
Form and Content in Computer Science

MARVIN MINSKY
Massachusetts Institute of Technology
Cambridge, Massachusetts

An excessive preoccupation with formalism is impeding the development of computer science. Form-content confusion is discussed relative to three areas: theory of computation, programming languages, and education.
The trouble with computer science today is an obsessive concern with form instead of content. No, that is the wrong way to begin. By any previous standard the vitality of computer science is enormous; what other intellectual area ever advanced so far in twenty years? Besides, the theory of computation perhaps encloses, in some way, the science of form, so that the concern is not so badly misplaced. Still, I will argue that an excessive preoccupation with formalism is impeding our development.

[Author's present address: MIT Artificial Intelligence Laboratory, 545 Technology Square, Cambridge, MA 02139.]

Before entering the discussion proper, I want to record the satisfaction my colleagues, students, and I derive from this Turing award. The cluster of questions, once philosophical but now scientific, surrounding the understanding of intelligence was of paramount concern to Alan Turing, and he along with a few other thinkers - notably Warren S. McCulloch and his young associate, Walter Pitts - made many of the
early analyses that led both to the computer itself and to the new technology of artificial intelligence. In recognizing this area, this award should focus attention on other work of my own scientific family, especially Ray Solomonoff, Oliver Selfridge, John McCarthy, Allen Newell, Herbert Simon, and Seymour Papert, my closest associates in a decade of work. Papert's views pervade this essay. This essay has three parts, suggesting form-content confusion in theory of computation, in programming languages, and in education.
1 Theory of Computation

To build a theory, one needs to know a lot about the basic phenomena of the subject matter. We simply do not know enough about these, in the theory of computation, to teach the subject very abstractly. Instead, we ought to teach more about the particular examples we now understand thoroughly, and hope that from this we will be able to guess and prove more general principles. I am not saying this just to be conservative about things probably true that haven't been proved yet. I think that many of our beliefs that seem to be common sense are false. We have bad misconceptions about the possible exchanges between time and memory, trade-offs between time and program complexity, software and hardware, digital and analog circuits, serial and parallel computations, associative and addressed memory, and so on. It is instructive to consider the analogy with physics, in which one can organize much of the basic knowledge as a collection of rather compact conservation laws. This, of course, is just one kind of description; one could use differential equations, minimum principles, equilibrium laws, etc. Conservation of energy, for example, can be interpreted as defining exchanges between various forms of potential and kinetic energies, such as between height and velocity squared, or between temperature and pressure-volume. One can base a development of quantum theory on a trade-off between certainties of position and momentum, or between time and energy. There is nothing extraordinary about this; any equation with reasonably smooth solutions can be regarded as defining some kind of a trade-off among its variable quantities. But there are many ways to formulate things and it is risky to become too attached to one particular form or law and come to believe that it is the real basic principle. See Feynman's [1] dissertation on this. Nonetheless, the recognition of exchanges is often the conception of a science, if quantifying them is its birth. What do we have, in the computation field, of this character? In the theory of recursive
functions, we have the observation by Shannon [2] that any Turing machine with Q states and R symbols is equivalent to one with 2 states and nQR symbols, and to one with 2 symbols and n'QR states, where n and n' are small numbers. Thus the state-symbol product QR has an almost invariant quality in classifying machines. Unfortunately, one cannot identify the product with a useful measure of machine complexity because this, in turn, has a trade-off with the complexity of the encoding process for the machines - and that trade-off seems too inscrutable for useful application. Let us consider a more elementary, but still puzzling, trade-off, that between addition and multiplication. How many multiplications does it take to evaluate the 3 x 3 determinant? If we write out the expansion as six triple-products, we need twelve multiplications. If we collect factors, using the distributive law, this reduces to nine. What is the minimum number, and how does one prove it, in this and in the n x n case? The important point is not that we need the answer. It is that we do not know how to tell or prove that proposed answers are correct! For a particular formula, one could perhaps use some sort of exhaustive search, but that wouldn't establish a general rule. One of our prime research goals should be to develop methods to prove that particular procedures are computationally minimal, in various senses. A startling discovery was made about multiplication itself in the thesis of Cook [3], which uses a result of Toom; it is discussed in Knuth [4]. Consider the ordinary algorithm for multiplying decimal numbers: for two n-digit numbers this employs n^2 one-digit products. It is usually supposed that this is minimal. But suppose we write the numbers in two halves, so that the product is N = (@A + B)(@C + D), where @ stands for multiplying by 10^(n/2). (The left-shift operation is considered to have negligible cost.) Then one can verify that N = @@AC + BD + @(A + B)(C + D) - @(AC + BD). This involves only three half-length multiplications, instead of the four that one might suppose were needed. For large n, the reduction can obviously be reapplied over and over to the smaller numbers. The price is a growing number of additions. By compounding this and other ideas, Cook showed that for any ε and large enough n, multiplication requires less than n^(1+ε) products, instead of the expected n^2.
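As a concrete illustration of the split just described, here is a small Python sketch (my own, not part of the lecture; the one-digit base case and the use of built-in integers in place of digit strings are illustrative choices) showing the three half-length multiplications applied recursively.

def karatsuba(x, y):
    # Multiply nonnegative integers using three half-length products per step.
    if x < 10 or y < 10:                 # a one-digit factor: multiply directly
        return x * y
    half = max(len(str(x)), len(str(y))) // 2
    shift = 10 ** half                   # the '@' of the text: multiply by 10**half
    a, b = divmod(x, shift)              # x = @A + B
    c, d = divmod(y, shift)              # y = @C + D
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    cross = karatsuba(a + b, c + d) - ac - bd    # equals AD + BC
    return ac * shift * shift + cross * shift + bd

# karatsuba(1234, 5678) == 1234 * 5678

The price, as the text says, is a growing number of additions and subtractions in the cross term.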
Similarly, V. Strassen showed recently that to multiply two m x m matrices, the number of products could be reduced to the order of m^(log2 7), when it was always believed that the number must be cubic - because there are m^2 terms in the result and each would seem to need a separate inner product with m multiplications. In both cases ordinary intuition has been wrong for a long time, so wrong that apparently no one looked for better methods. We still do not have a set of proof methods adequate for establishing exactly what is the minimum trade-off exchange, in the matrix case, between multiplying and adding. The multiply-add exchange may not seem vitally important in itself, but if we cannot thoroughly understand something so simple, we can expect serious trouble with anything more complicated. Consider another trade-off, that between memory size and computation time. In our book [5], Papert and I have posed a simple question: given an arbitrary collection of n-bit words, how many references to memory are required to tell which of those words is nearest (in number of bits that agree) to an arbitrary given word? (For identifying an exact match, one can use hash-coding and the problem is reasonably well understood.) Since there are many ways to encode the 'library' collection, some using more memory than others, the question stated more precisely is: how must the memory size grow to achieve a given reduction in the number of memory references? This much is trivial: if memory is large enough, only one reference is required, for we can use the question itself as address, and store the answer in the register so addressed. But if the memory is just large enough to store the information in the library, then one has to search all of it - and we do not know any intermediate results of any value. It is surely a fundamental theoretical problem of information retrieval, yet no one seems to have any idea about how to set a good lower bound on this basic trade-off. Another is the serial-parallel exchange. Suppose that we had n computers instead of just one. How much can we speed up what kinds of calculations? For some, we can surely gain a factor of n. But these are rare. For others, we can gain log n, but it is hard to find any or to prove what are their properties. And for most, I think, we can gain hardly anything; this is the case in which there are many highly branched conditionals, so that look-ahead on possible branches will usually be wasted. We know almost nothing about this; most people think, with surely incorrect optimism, that parallelism is usually a profitable way to speed up most computations. These are just a few of the poorly understood questions about computational trade-offs. There is no space to discuss others, such as the digital-analog question. (Some problems about local versus global computations are outlined in [5].) And we know very little about trades between numerical and symbolic calculations. There is, in today's computer science curricula, very little attention to what is known about such questions; almost all their time is devoted to formal classifications of syntactic language types, defeatist unsolvability theories, folklore about systems programming, and generally trivial fragments of 'optimization of logic design' - the latter often in
situations where the art of heuristic programming has far outreached the special-case 'theories' so grimly taught and tested -and invocations about programming style almost sure to be outmoded before the student graduates. Even the most seemingly abstract courses on recursive function theory and formal logic seem to ignore the few known useful results on proving facts about compilers or equivalence of programs. Most courses treat the results of work in artificial intelligence, some now fifteen years old, as a peripheral collection of special applications, whereas they in fact represent one of the largest bodies of empirical and theoretical exploration of real computational questions. Until all this preoccupation with form is replaced by attention to the substantial issues in computation, a young student might be well advised to avoid much of the computer science curricula, learn to program, acquire as much mathematics and other science as he can, and study the current literature in artificial intelligence, complexity, and optimization theories.
2 Programming Languages

Even in the field of programming languages and compilers, there is too much concern with form. I say 'even' because one might feel that this is one area in which form ought to be the chief concern. But let us consider two assertions: (1) languages are getting so they have too much syntax, and (2) languages are being described with too much syntax. Compilers are not concerned enough with the meanings of expressions, assertions, and descriptions. The use of context-free grammars for describing fragments of languages led to important advances in uniformity, both in specification and in implementation. But although this works well in simple cases, attempts to use it may be retarding development in more complicated areas. There are serious problems in using grammars to describe self-modifying or self-extending languages that involve execution, as well as specifying, processes. One cannot describe syntactically - that is, statically - the valid expressions of a language that is changing. Syntax extension mechanisms must be described, to be sure, but if these are given in terms of a modern pattern-matching language such as SNOBOL, CONVERT [6], or MATCHLESS [7], there need be no distinction between the parsing program and the language description itself. Computer languages of the future will be more concerned with goals and less with procedures specified by the programmer. The following arguments are a little on the extreme side but, in view of today's preoccupation with form, this overstepping will do no harm. (Some of the ideas are due to C. Hewitt and T. Winograd.)
2.1 Syntax Is Often Unnecessary

One can survive with much less syntax than is generally realized. Much of programming syntax is concerned with suppression of parentheses or with emphasis of scope markers. There are alternatives that have been much underused. Please do not think I am against the use, at the human interface, of such devices as infixes and operator precedence. They have their place. But their importance to computer science as a whole has been so exaggerated that it is beginning to corrupt the youth. Consider the familiar algorithm for the square root, as it might be written in a modern algebraic language, ignoring such matters as declarations of data types. One asks for the square root of A, given an initial estimate X and an error limit E.

DEFINE SQRT(A, X, E): if ABS(A - X*X) < E then X else SQRT(A, (X + A÷X) ÷ 2, E).

In an imaginary but recognizable version of LISP (see Levin [8] or Weissman [9]), this same procedure might be written:
(DEFINE (SQRT A X E)
    (IF (LESS (ABS (- A (* X X))) E)
        THEN X
        ELSE (SQRT A (÷ (+ X (÷ A X)) 2) E)))

Here, the function names come immediately inside their parentheses. The clumsiness, for humans, of writing all the parentheses is evident; the advantages of not having to learn all the conventions, such as that (X + A ÷ X) is (+ X (÷ A X)) and not (÷ (+ X A) X), is often overlooked. It remains to be seen whether a syntax with explicit delimiters is reactionary, or whether it is the wave of the future. It has important advantages for editing, interpreting, and for creation of programs by other programs. The complete syntax of LISP can be learned in an hour or so; the interpreter is compact and not exceedingly complicated, and students often can answer questions about the system by reading the interpreter program itself. Of course, this will not answer all questions about real, practical implementation, but neither would any feasible set of syntax rules. Furthermore, despite the language's clumsiness, many frontier workers consider it to have outstanding expressive power. Nearly all work on procedures that solve problems by building and
modifying hypotheses have been written in this or related languages. Unfortunately, language designers are generally unfamiliar with this area, and tend to dismiss it as a specialized body of 'symbol-manipulation techniques.' Much can be done to clarify the structure of expressions in such a 'syntax-weak' language by using indentation and other layout devices that are outside the language proper. For example, one can use a 'postponement' symbol that belongs to an input preprocessor to rewrite the above as

DEFINE (SQRT A X E) ↑.
  IF ↑ THEN X ELSE ↑.
    LESS (ABS ↑) E.
      - A (* X X).
    SQRT A ↑ E.
      ÷ ↑ 2.
        + X (÷ A X)

where the dot means ')(' and the arrow means 'insert here the next expression, delimited by a dot, that is available after replacing (recursively) its own arrows.' The indentations are optional. This gets a good part of the effect of the usual scope indicators and conventions by two simple devices, both handled trivially by reading programs, and it is easy to edit because subexpressions are usually complete on each line. To appreciate the power and limitations of the postponement operator, the reader should take his favorite language and his favorite algorithms and see what happens. He will find many choices of what to postpone, and he exercises judgment about what to say first, which arguments to emphasize, and so forth. Of course, ↑ is not the answer to all problems; one needs a postponement device also for list fragments, and that requires its own delimiter. In any case, these are but steps toward more graphical program-description systems, for we will not forever stay confined to mere strings of symbols. Another expository device, suggested by Dana Scott, is to have alternative brackets for indicating right-to-left functional composition, so that one can write (((x)h)g)f instead of f(g(h(x))) when one wants to indicate more naturally what happens to a quantity in the course of a computation. This allows different 'accents,' as in f((h(x))g), which can be read: 'Compute f of what you get by first computing h(x) and then applying g to it.' The point is better made, perhaps, by analogy than by example. In their fanatic concern with syntax, language designers have become too sentence oriented. With such devices as ↑, one can construct objects that are more like paragraphs, without falling all the way back to flow diagrams.
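The 'reading program' such a preprocessor needs really is trivial. The following Python sketch is my own illustration (not Minsky's preprocessor); it assumes one fragment per line in place of the dot delimiter, the up-arrow as the postponement symbol, and '/' standing in for the division sign.

def expand(fragments):
    # Take the next fragment; for each arrow in it, recursively expand the
    # following fragments and substitute them, wrapped in parentheses.
    piece = next(fragments)
    parts = piece.split('↑')
    result = parts[0]
    for tail in parts[1:]:
        result += '(' + expand(fragments) + ')' + tail
    return result

source = """\
DEFINE (SQRT A X E) ↑
IF ↑ THEN X ELSE ↑
LESS (ABS ↑) E
- A (* X X)
SQRT A ↑ E
/ ↑ 2
+ X (/ A X)"""

print(expand(iter(line.strip() for line in source.splitlines())))
# DEFINE (SQRT A X E) (IF (LESS (ABS (- A (* X X))) E) THEN X ELSE (SQRT A (/ (+ X (/ A X)) 2) E))

The expansion reproduces the fully parenthesized form given earlier, which is the sense in which the device is 'handled trivially by reading programs.'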
Today's high level programming languages offer little expressive power in the sense of flexibility of style. One cannot control the sequence of presentation of ideas very much without changing the algorithm itself.
2.2 Efficiency and Understanding Programs

What is a compiler for? The usual answers resemble 'to translate from one language to another' or 'to take a description of an algorithm and assemble it into a program, filling in many small details.' For the future, a more ambitious view is required. Most compilers will be systems that 'produce an algorithm, given a description of its effect.' This is already the case for modern picture-format systems; they do all the creative work, while the user merely supplies examples of the desired formats: here the compilers are more expert than the users. Pattern-matching languages are also good examples. But except for a few such special cases, the compiler designers have made little progress in getting good programs written. Recognition of common subexpressions, optimization of inner loops, allocation of multiple registers, and so forth, lead but to small linear improvements in efficiency - and compilers do little enough about even these. Automatic storage assignments can be worth more. But the real payoff is in analysis of the computational content of the algorithm itself, rather than the way the programmer wrote it down. Consider, for example:

DEFINE FIB(N): if N = 1 then 1, if N = 2 then 1, else FIB(N - 1) + FIB(N - 2).

This recursive definition of the Fibonacci numbers 1, 1, 2, 3, 5, 8, 13, ... can be given to any respectable algorithmic language and will result in the branching tree of evaluation steps shown in Figure 1.
[Figure 1: the branching tree of recursive calls generated in evaluating FIB(6).]
One sees that the amount of work the machine will do grows exponentially with N. (More precisely, it passes through the order of FIB(N) evaluations of the definition!) There are better ways to compute this function. Thus we can define two temporary registers and evaluate FIB(N, 1, 0) in DEFINE FIB(N, A, B): if N = 1 then A else FIB(N - 1, A + B, A).
which is singly recursive and avoids the branching tree, or even use

A = 0
B = 1
LOOP: SWAP A B
      if N = 1 return A
      N = N - 1
      B = A + B
      goto LOOP

Any programmer will soon think of these, once he sees what happens in the branching evaluation. This is a case in which a 'course-of-values' recursion can be transformed into a simple iteration. Today's compilers don't recognize even simple cases of such transformations, although the reduction in exponential order outweighs any possible gains in local 'optimization' of code. It is no use protesting either that such gains are rare or that such matters are the programmer's responsibility. If it is important to save compiling time, then such abilities could be excised. For programs written in the pattern-matching languages, for example, such simplifications are indeed often made. One usually wins by compiling an efficient tree-parser for a BNF system instead of executing brute force analysis-by-synthesis. To be sure, a systematic theory of such transformations is difficult. A system will have to be pretty smart to detect which transformations are relevant and when it pays to use them. Since the programmer already knows his intent, the problem would often be easier if the proposed algorithm is accompanied (or even replaced) by a suitable goal-declaration expression. To move in this direction, we need a body of knowledge about analyzing and synthesizing programs. On the theoretical side there is now a lot of activity studying the equivalence of algorithms and schemata, and on proving that procedures have stated properties. On the practical side the works of W. A. Martin [10] and J. Moses [11] illustrate how to make systems that know enough about symbolic transformations of particular mathematical techniques to significantly supplement the applied mathematical abilities of their users.
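For comparison, here is the same family of definitions as a small Python sketch (mine, not the lecture's; the seed values 1 and 0 for the accumulator version are an assumption chosen to fit the definition above).

def fib_tree(n):
    # Doubly recursive form: the number of calls grows roughly like FIB(n) itself.
    if n == 1 or n == 2:
        return 1
    return fib_tree(n - 1) + fib_tree(n - 2)

def fib_accum(n, a=1, b=0):
    # Singly recursive form: FIB(N, A, B) = A if N = 1, else FIB(N - 1, A + B, A).
    return a if n == 1 else fib_accum(n - 1, a + b, a)

def fib_loop(n):
    # The same computation as a straight loop with two registers.
    a, b = 1, 0
    while n > 1:
        a, b = a + b, a
        n -= 1
    return a

assert [fib_tree(k) for k in range(1, 8)] == [1, 1, 2, 3, 5, 8, 13]
assert all(fib_accum(k) == fib_loop(k) == fib_tree(k) for k in range(1, 15))

Timing fib_tree against fib_loop for even modest N makes the point about exponential order far more vividly than any local code optimization could.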
There is no practical consequence to the fact that the program-reduction problem is recursively unsolvable, in general. In any case one would expect programs eventually to go far beyond human ability in this activity and make use of a large body of program transformations in formally purified forms. These will not be easy to apply directly. Instead, one can expect the development to follow the lines we have seen in symbolic integration, e.g., as in Slagle [12] and Moses [11]. First a set of simple formal transformations that correspond to the elementary entries of a Table of Integrals was developed. On top of these Slagle built a set of heuristic techniques for the algebraic and analytic transformation of a practical problem into those already understood elements; this involved a set of characterization and matching procedures that might be said to use 'pattern recognition.' In the system of Moses both the matching procedures and the transformations were so refined that, in most practical problems, the heuristic search strategy that played a large part in the performance of Slagle's program became a minor augmentation of the sure knowledge and its skillful application comprised in Moses' system. A heuristic compiler system will eventually need much more general knowledge and common sense than did the symbolic integration systems, for its goal is more like making a whole mathematician than a specialized integrator.
2.3 Describing Programming Systems

No matter how a language is described, a computer must use a procedure to interpret it. One should remember that in describing a language the main goal is to explain how to write programs in it and what such programs mean. The main goal isn't to describe the syntax. Within the static framework of syntax rules, normal forms, Post productions, and other such schemes, one obtains the equivalents of logical systems with axioms, rules of inference, and theorems. To design an unambiguous syntax corresponds then to designing a mathematical system in which each theorem has exactly one proof! But in the computational framework, this is quite beside the point. One has an extra ingredient - control - that lies outside the usual framework of a logical system; an additional set of rules that specify when a rule of inference is to be used. So, for many purposes, ambiguity is a pseudoproblem. If we view a program as a process, we can remember that our most powerful process-describing tools are programs themselves, and they are inherently unambiguous. There is no paradox in defining a programming language by a program. The procedural definition must be understood, of course. One can achieve this understanding by definitions written in another language, one that may be different, more familiar, or simpler than the one being defined. But it is often practical, convenient, and proper
to use the same language! For to understand the definition, one needs to know only the working of that particular program, and not all implications of all possible applications of the language. It is this particularization that makes bootstrapping possible, a point that often puzzles beginners as well as apparent authorities. Using BNF to describe the formation of expressions may be retarding development of new languages that smoothly incorporate quotation, self-modification, and symbolic manipulation into a traditional algorithmic framework. This, in turn, retards progress toward problem-solving, goal-oriented programming systems. Paradoxically, though modern programming ideas were developed because processes were hard to depict with classical mathematical notations, designers are turning back to an earlier form - the equation - in just the kind of situation that needs a program. In Section 3, which is on education, a similar situation is seen in teaching, with perhaps more serious consequences.
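As a closing illustration of the claim above that a language can, without paradox, be defined by a program that interprets it, here is a deliberately tiny interpreter sketch in Python (my own example, not from the lecture) for a minimal parenthesized prefix language of arithmetic expressions.

import operator

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul, '/': operator.truediv}

def tokenize(text):
    return text.replace('(', ' ( ').replace(')', ' ) ').split()

def parse(tokens):
    # Read one expression: a number, an operator symbol, or a parenthesized list.
    token = tokens.pop(0)
    if token == '(':
        expression = []
        while tokens[0] != ')':
            expression.append(parse(tokens))
        tokens.pop(0)                      # discard the closing ')'
        return expression
    try:
        return float(token)
    except ValueError:
        return token

def evaluate(expression):
    # The interpreter itself serves as the definition of what expressions mean.
    if isinstance(expression, float):
        return expression
    op, *args = expression
    return OPS[op](*map(evaluate, args))

print(evaluate(parse(tokenize("(* (+ 1 2) (- 10 4))"))))   # 18.0

Whoever understands the working of this particular program understands the little language completely; that procedural description, not a grammar, is its definition.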
3 Learning, Teaching, and the 'New Mathematics'

Education is another area in which the computer scientist has confused form and content, but this time the confusion concerns his professional role. He perceives his principal function to be to provide programs and machines for use in old and new educational schemes. Well and good, but I believe he has a more complex responsibility - to work out and communicate models of the process of education itself. In the discussion below, I sketch briefly the viewpoint (developed with Seymour Papert) from which this belief stems. The following statements are typical of our view:

- To help people learn is to help them build, in their heads, various kinds of computational models.
- This can best be done by a teacher who has, in his head, a reasonable model of what is in the pupil's head.
- For the same reason the student, when debugging his own models and procedures, should have a model of what he is doing, and must know good debugging techniques, such as how to formulate simple but critical test cases.
- It will help the student to know something about computational models and programming. The idea of debugging itself, for example, 2
is a very powerful concept - in contrast to the helplessness promoted by our cultural heritage about gifts, talents, and aptitudes. The latter encourages 'I'm not good at this' instead of 'How can I make myself better at it?'

[Footnote 2: Turing was quite good at debugging hardware. He would leave the power on, so as not to lose the 'feel' of the thing. Everyone does that today, but it is not the same thing now that the circuits all work on three or five volts.]

These have the sound of common sense, yet they are not among the basic principles of any of the popular educational schemes such as 'operant reinforcement,' 'discovery methods,' audio-visual synergism, etc. This is not because educators have ignored the possibility of mental models, but because they simply had no effective way, before the beginning of work on simulation of thought processes, to describe, construct, and test such ideas. We cannot digress here to answer skeptics who feel it too simpleminded (if not impious, or obscene) to compare minds with programs. We can refer many such critics to Turing's paper [13]. For those who feel that the answer cannot lie in any machine, digital or otherwise, one can argue [14] that machines, when they become intelligent, very likely will feel the same way. For some overviews of this area, see Feigenbaum and Feldman [15] and Minsky [16]; one can keep really up-to-date in this fast-moving field only by reading the contemporary doctoral theses and conference papers on artificial intelligence. There is a fundamental pragmatic point in favor of our propositions. The child needs models: to understand the city he may use the organism model; it must eat, breathe, excrete, defend itself, etc. Not a very good model, but useful enough. The metabolism of a real organism he can understand, in turn, by comparison with an engine. But to model his own self he cannot use the engine or the organism or the city or the telephone switchboard; nothing will serve at all but the computer with its programs and their bugs. Eventually, programming itself will become more important even than mathematics in early education. Nevertheless I have chosen mathematics as the subject of the remainder of this paper, partly because we understand it better but mainly because the prejudice against programming as an academic subject would provoke too much resistance. Any other subject could also do, I suppose, but mathematical issues and concepts are the sharpest and least confused by highly charged emotional problems.
3.1 Mathematical Portrait of a Small Child

Imagine a small child of between five and six years, about to enter the first grade. If we extrapolate today's trend, his mathematical education will be conducted by poorly oriented teachers and, partly, by poorly programmed machines; neither will be able to respond to much beyond 'correct' and 'wrong' answers, let alone to make reasonable
interpretations of what the child does or says, because neither will contain good models of the children, or good theories of children's intellectual development. The child will begin with simple arithmetic, set theory, and a little geometry; ten years later he will know a little about the formal theory of the real numbers, a little about linear equations, a little more about geometry, and almost nothing about continuous and limiting processes. He will be an adolescent with little taste for analytical thinking, unable to apply the ten years' experience to understanding his new world. Let us look more closely at our young child, in a composite picture drawn from the work of Piaget and other observers of the child's mental construction. Our child will be able to say 'one, two, three . . .' at least up to thirty and probably up to a thousand. He will know the names of some larger numbers but will not be able to see, for example, why ten thousand is a hundred hundred. He will have serious difficulty in counting backwards unless he has recently become very interested in this. (Being good at it would make simple subtraction easier, and might be worth some practice.) He doesn't have much feeling for odd and even. He can count four to six objects with perfect reliability, but he will not get the same count every time with fifteen scattered objects. He will be annoyed with this, because he is quite sure he should get the same number each time. The observer will therefore think the child has a good idea of the number concept but that he is not too skillful at applying it. However, important aspects of his concept of number will not be at all secure by adult standards. For example, when the objects are rearranged before his eyes, his impression of their quantity will be affected by the geometric arrangement. Thus he will say that there are fewer x's than y's in:

xxxxxxx
Y Y Y Y Y Y Y

and when we move the x's to

x x x x x x x
YYYYYYY
he will say there are more x's than y's. To be sure, he is answering (in his own mind) a different question about size, quite correctly, but this is exactly the point; the immutability of the number, in such situations, has little grip on him. He cannot use it effectively for reasoning although he shows, on questioning, that he knows that the number of things cannot change simply because they are rearranged. Similarly, when
water is poured from one glass to another (Figure 2(a)), he will say that there is more water in the tall jar than in the squat one. He will have poor estimates about plane areas, so that we will not be able to find a context in which he treats the larger area in Figure 2(b) as four times the size of the smaller one.
[Figure 2: (a) water poured between a squat glass and a tall jar; (b) a small plane area and one four times as large; (c) two vessels, one twice the other in every dimension.]
When he is an adult, by the way, and is given two vessels, one twice as large as the other, in all dimensions (Figure 2(c)), he will think the one holds about four times as much as the other; probably he will never acquire better estimates of volume. As for the numbers themselves, we know little of what is in his mind. According to Galton [17], thirty children in a hundred will associate small numbers with definite visual locations in the space in front of their body image, arranged in some idiosyncratic manner such as that shown in Figure 3.
[Figure 3: one child's idiosyncratic spatial arrangement of the small numbers.]

They will probably still retain these as adults, and may use them in some obscure semiconscious way to remember telephone numbers; they will probably grow different spatio-visual representations for historical dates, etc. The teachers will never have
heard of such a thing and, if a child speaks of it, even the teacher with her own number form is unlikely to respond with recognition. (My experience is that it takes a series of carefully posed questions before one of these adults will respond, 'Oh, yes; 3 is over there, a little farther back.') When our child learns column sums, he may keep track of carries by setting his tongue to certain teeth, or use some other obscure device for temporary memory, and no one will ever know. Perhaps some ways are better than others. His geometric world is different from ours. He does not see clearly that triangles are rigid, and thus different from other polygons. He does not know that a 100-line approximation to a circle is indistinguishable from a circle unless it is quite large. He does not draw a cube in perspective. He has only recently realized that squares become diamonds when put on their points. The perceptual distinction persists in adults. Thus in Figure 4 we see, as noted by Attneave [18], that the impression of square versus diamond is affected by other alignments in the scene, evidently by determining our choice of which axis of symmetry is to be used in the subjective description.
FIGURE 4
Our child understands the topological idea of enclosure quite well. Why? This is a very complicated concept in classical mathematics but in terms of computational processes it is perhaps not so difficult. But our child is almost sure to be muddled about the situation in Figure 5 (see Papert [19]): 'When the bus begins its trip around the lake, a boy
FIGURE 5
is seated on the side away from the water. Will he be on the lake side at some time in the trip?' Difficulty with this is liable to persist through the child's eighth year, and perhaps relates to his difficulties with other abstract double reversals such as in subtracting negative numbers, or with apprehending other consequences of continuity-'At what point in the trip is there any sudden change?'-or with other bridges between local and global. Our portrait is drawn in more detail in the literature on developmental psychology. But no one has yet built enough of a computational model of a child to see how these abilities and limitations link together in a structure compatible with (and perhaps consequential to) other things he can do so effectively. Such work is beginning, however, and I expect the next decade to see substantial progress on such models. If we knew more about these matters, we might be able to help the child. At present we don't even have good diagnostics: his apparent ability to learn to give correct answers to formal questions may show only that he has developed some isolated library routines. If these cannot be called by his central problem-solving programs, because they use incompatible data structures or whatever, we may get a high-rated test-passer who will never think very well. Before computation, the community of ideas about the nature of thought was too feeble to support an effective theory of learning and development. Neither the finite-state models of the Behaviorists, the hydraulic and economic analogies of the Freudians, nor the rabbit-in-the-hat insights of the Gestaltists supplied enough ingredients to understand so intricate a subject. It needs a substrate of already debugged theories and solutions of related but simpler problems. Now we have a flood of such ideas, well defined and implemented, for thinking about thinking; only a fraction are represented in traditional psychology: symbol table, pure procedure, time-sharing,
closed subroutines, pushdown list, interrupt, calling sequence, communication cell, common storage, functional argument, memory protection, dispatch table, error message, function-call trace, breakpoint, languages, compiler, indirect address, macro, property list, data type, hash coding, microprogram, format matching, decision tree, hardware-software trade-off, serial-parallel trade-off, time-memory trade-off, conditional breakpoint, asynchronous processor, interpreter, garbage collection, list structure, block structure, look-ahead, look-behind, diagnostic program, executive program.
These are just a few ideas from general systems programming and debugging; we have said nothing about the many more specifically relevant concepts in languages or in artificial intelligence or in computer hardware or other advanced areas. All these serve today as tools of a curious and intricate craft, programming. But just as astronomy succeeded astrology, following Kepler's regularities, the discovery of principles in empirical explorations of intellectual process in machines should lead to a science. (In education we face still the same competition! The Boston Globe has an astrology page in its 'comics' section. Help fight intellect pollution!) To return to our child, how can our computational ideas help him with his number concept? As a baby he learned to recognize certain special pair configurations such as two hands or two shoes. Much later he learned about some threes - perhaps the long gap is because the environment doesn't have many fixed triplets: if he happens to find three pennies he will likely lose or gain one soon. Eventually he will find some procedure that manages five or six things, and he will be less at the mercy of finding and losing. But for more than six or seven things, he will remain at the mercy of forgetting; even if his verbal count is flawless, his enumeration procedure will have defects. He will skip some items and count others twice. We can help by proposing better procedures; putting things into a box is nearly foolproof, and so is crossing them off. But for fixed objects he will need some mental grouping procedure. First one should try to know what the child is doing; eye-motion study might help, asking him might be enough. He may be selecting the next item with some unreliable, nearly random method, with no good way to keep track of what has been counted. We might suggest: sliding a cursor; inventing easily remembered groups; drawing a coarse mesh. In each case the construction can be either real or imaginary. In using the mesh method one has to remember not to count twice objects that cross the mesh lines. The teacher should show that it is good to plan ahead, as in Figure 6, distorting the mesh to avoid the ambiguities! -
FIGURE 6

Mathematically, the important concept is that 'every proper counting procedure yields the same number.' The child will understand that any algorithm is proper which (1) counts all the objects, (2) counts none of them twice.
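To see how little machinery the criterion needs, here is a minimal sketch in Python (my illustration; the object set, the 'crossing off' marks, and the traversal orders are all invented for the example). Any procedure that visits every object and crosses each one off as it is counted reports the same number, no matter how erratically it wanders.

    import random

    # A "proper" counting procedure: visit objects in any order, cross each
    # one off as it is counted, and never count a crossed-off object again.
    def count_by_crossing_off(objects):
        crossed_off = set()
        count = 0
        for obj in objects:
            if id(obj) not in crossed_off:
                crossed_off.add(id(obj))
                count += 1
        return count

    pennies = [object() for _ in range(15)]   # fifteen scattered "objects"

    # Several different visiting orders, including a sloppy one that revisits
    # four objects; crossing off keeps the count at fifteen every time.
    orders = [random.sample(pennies, len(pennies)) for _ in range(5)]
    orders.append(pennies + random.sample(pennies, 4))
    assert all(count_by_crossing_off(order) == 15 for order in orders)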
Perhaps this procedural condition seems too simple; even an adult could understand it. In any case, it is not the concept of number adopted in what is today generally called the 'New Math,' and taught in our primary schools. The following polemic discusses this.
3.2 The 'New Mathematics'

By the 'new math' I mean certain primary school attempts to imitate the formalistic outputs of professional mathematicians. Precipitously adopted by many schools in the wake of broad new concerns with early education, the approach is, I think, generally bad because of form-content displacements of several kinds. These cause problems for the teacher as well as for the child. Because of the formalistic approach the teacher will not be able to help the child very much with problems of formulation. For she will feel insecure herself as she drills him on such matters as the difference between the empty set and nothing, or the distinction between the 'numeral' 3 + 5 and the numeral 8 which is the 'common name' of the number eight, hoping that he will not ask what is the common name of the fraction 8/1, which is probably different from the rational 8/1 and different from the quotient 8/1 and different from the 'indicated division' 8/1, and different from the ordered pair (8,1). She will be reticent about discussing parallel lines. For parallel lines do not usually meet, she knows, but they might (she has heard) if produced far enough, for did not something like that happen once in an experiment by some Russian mathematicians? But enough of the problems of the teacher: let us consider now three classes of objections from the child's standpoint.

Developmental Objections. It is very bad to insist that the child keep his knowledge in a simple ordered hierarchy. In order to retrieve what he needs, he must have a multiply connected network, so that he can try several ways to do each thing. He may not manage to match the first method to the needs of the problem. Emphasis on the 'formal proof' is destructive at this stage, because the knowledge needed for finding proofs, and for understanding them, is far more complex (and less useful) than the knowledge mentioned in proofs. The network of knowledge one needs for understanding geometry is a web of examples and phenomena, and observations about the similarities and differences between them. One does not find evidence, in children, that such webs are ordered like the axioms and theorems of a logistic system, or that the child could use such a lattice if he had one. After one understands a phenomenon, it may be of great value to make a formal system for it, to make it easier to understand more advanced things. But even then, such a formal system is just one of many possible models; the New Math writers seem to confuse their axiom-theorem model with
the number system itself. In the case of the axioms for arithmetic, I will now argue, the formalism is often likely to do more harm than good for the understanding of more advanced things. Historically, the 'set' approach used in New Math comes from a formalist attempt to derive the intuitive properties of the continuum from a nearly finite set theory. They partly succeeded in this stunt (or 'hack,' as some programmers would put it), but in a manner so complex that one cannot talk seriously about the real numbers until well into high school, if one follows this model. The ideas of topology are deferred until much later. But children in their sixth year already have well-developed geometric and topological ideas, only they have little ability to manipulate abstract symbols and definitions. We should build out from the child's strong points, instead of undermining him by attempting to replace what he has by structures he cannot yet handle. But it is just like mathematicians-certainly the world's worst expositors-to think: 'You can teach a child anything, if you just get the definitions precise enough,' or 'If we get all the definitions right the first time, we won't have any trouble later.' We are not programming an empty machine in FORTRAN: we are meddling with a poorly understood large system that, characteristically, uses multiply defined symbols in its normal heuristic behavior.

Intuitive Objections. New Math emphasizes the idea that a number can be identified with an equivalence class of all sets that can be put into one-to-one correspondence with one another. Then the rational numbers are defined as equivalence classes of pairs of integers, and a maze of formalism is introduced to prevent the child from identifying the rationals with the quotients or fractions. Functions are often treated as sets, although some texts present 'function machines' with a superficially algorithmic flavor. The definition of a 'variable' is another fiendish maze of complication involving names, values, expressions, clauses, sentences, numerals, 'indicated operations,' and so forth. (In fact, there are so many different kinds of data in real problem-solving that real-life mathematicians do not usually give them formal distinctions, but use the entire problem context to explain them.) In the course of pursuing this formalistic obsession, the curriculum never presents any coherent picture of real mathematical phenomena-of processes, discrete or continuous; of the algebra whose notational syntax concerns it so; or of geometry. The 'theorems' that are 'proved' from time to time, such as, 'A number x has only one additive inverse, -x,' are so mundane and obvious that neither teacher nor student can make out the purpose of the proof. The 'official' proof would add y to both sides of x + (-y) = 0, apply the associative law, then the commutative law, then the y + (-y) = 0 law, and finally the axioms of equality, to show that y must equal x. The child's mind can more easily understand deeper ideas: 'In x + (-y) = 0, if y were less than x there would be
some left over; while if x were less than y there would be a minus number left-so they must be exactly equal.' The child is not permitted to use this kind of order-plus-continuity thinking, presumably because it uses 'more advanced knowledge,' hence isn't part of a 'real proof.' But in the network of ideas the child needs, this link has equal logical status and surely greater heuristic value. For another example, the student is made to distinguish clearly between the inverse of addition and the opposite sense of distance, a discrimination that seems entirely against the fusion of these notions that would seem desirable.

Computational Objections. The idea of a procedure, and the know-how that comes from learning how to test, modify, and adapt procedures, can transfer to many of the child's other activities. Traditional academic subjects such as algebra and arithmetic have relatively small developmental significance, especially when they are weak in intuitive geometry. (The question of which kinds of learning can 'transfer' to other activities is a fundamental one in educational theory: I emphasize again our conjecture that the ideas of procedures and debugging will turn out to be unique in their transferability.) In algebra, as we have noted, the concept of 'variable' is complicated; but in computation the child can easily see 'x + y + z' as describing a procedure (any procedure for adding!) with 'x,' 'y,' and 'z' as pointing to its 'data.' Functions are easy to grasp as procedures, hard if imagined as ordered pairs. If you want a graph, describe a machine that draws the graph; if you have a graph, describe a machine that can read it to find the values of the function. Both are easy and useful concepts. Let us not fall into a cultural trap; the set theory 'foundation' for mathematics is popular today among mathematicians because it is the one they tackled and mastered (in college). These scientists simply are not acquainted, generally, with computation or with the Post-Turing-McCulloch-Pitts-McCarthy-Newell-Simon-. . . family of theories that will be so much more important when the children grow up. Set theory is not, as the logicians and publishers would have it, the only and true foundation of mathematics; it is a viewpoint that is pretty good for investigating the transfinite, but undistinguished for comprehending the real numbers, and quite substandard for learning about arithmetic, algebra, and geometry. To summarize my objections, the New Math emphasized the use of formalism and symbolic manipulation instead of the heuristic and intuitive content of the subject matter. The child is expected to learn how to solve problems but we do not teach him what we know, either about the subject or about problem-solving.³

³ In a shrewd but hilarious discussion of New Math textbooks, Feynman [20] explores the consequences of distinguishing between the thing and itself. 'Color the picture of the ball red,' a book says, instead of 'Color the ball red.' 'Shall we color the entire square area in which the ball image appears or just the part inside the circle of the ball?' asks Feynman. [To 'color the balls red' would presumably have to be 'color the insides of the circles of all the members of the set of balls' or something like that.]
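The remark above about functions being easier to grasp as procedures than as ordered pairs is simple to make concrete. The following sketch is mine, not the author's; the squaring function and the sample points are arbitrary. One 'machine' computes values on demand, another draws a finite graph, and a third reads the graph back.

    # The function as a procedure: a machine that computes values on demand.
    def f(x):
        return x * x

    # A machine that draws the graph of a procedure at some sample points,
    # producing a set of ordered pairs.
    def draw_graph(proc, xs):
        return {(x, proc(x)) for x in xs}

    # A machine that reads a graph to find the value of the function at x.
    def read_graph(graph, x):
        for a, b in graph:
            if a == x:
                return b
        raise KeyError(x)

    graph = draw_graph(f, range(-3, 4))
    assert read_graph(graph, 3) == f(3) == 9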
As an example of how the preoccupation with form (in this case, the axioms for arithmetic) can warp one's view of the content, let us examine the weird compulsion to insist that addition is ultimately an operation on just two quantities. In New Math, a + b + c must 'really' be one of (a + (b + c)) or ((a + b) + c), and a + b + c + d can be meaningful only after several applications of the associative law. Now this is silly in many contexts. The child has already a good intuitive idea of what it means to put several sets together; it is just as easy to mix five colors of beads as two. Thus addition is already an n-ary operation. But listen to the book trying to prove that this is not so:

Addition is . . . always performed on two numbers. This may not seem reasonable at first sight, since you have often added long strings of figures. Try an experiment on yourself. Try to add the numbers 7, 8, 3 simultaneously. No matter how you attempt it, you are forced to choose two of the numbers, add them, and then add the third to their sum.
-From a ninth-grade text
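A brief computational counterpoint (my sketch, not the author's or the textbook's): adding a whole list at once, or folding it pairwise in either grouping, gives the same result, so nothing in the arithmetic itself forces the strictly binary view.

    from functools import reduce

    numbers = [7, 8, 3]

    # Addition as an n-ary operation: one procedure over the whole collection.
    total = sum(numbers)

    # Addition as a strictly binary operation, grouped two different ways.
    left_grouped = reduce(lambda a, b: a + b, numbers)       # ((7 + 8) + 3)
    right_grouped = numbers[0] + (numbers[1] + numbers[2])   # (7 + (8 + 3))

    assert total == left_grouped == right_grouped == 18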
Is the height of a tower the result of adding its stages by pairs in a certain order? Is the length or area of an object produced that way from its parts? Why did they introduce their sets and their one-one correspondences then to miss the point? Evidently, they have talked themselves into believing that the axioms they selected for algebra have some special kind of truth! Let us consider a few important and pretty ideas that are not discussed much in grade school. First consider the sum 1/2 + 1/4 + 1/8 + . . . . Interpreted as area, one gets fascinating regrouping ideas, as in Figure 7.

FIGURE 7

Once the child knows how to do division, he can compute and appreciate some quantitative aspects of the limiting process .5, .75, .875, .9375, .96875, . . . , and he can learn about folding and cutting and epidemics and populations. He could learn about x = px + qx, where p + q = 1, and hence appreciate dilution; he can learn that 3/4, 4/5, 5/6, 6/7, 7/8, . . . → 1 and begin to understand the many colorful and common-sense geometrical and topological consequences of such matters. But in the New Math, the syntactic distinctions between rationals, quotients, and fractions are carried so far that to see which of 3/8 and 4/9 is larger, one is not permitted to compute and compare .375 with .4444. One must cross-multiply. Now cross-multiplication is very cute, but it has two bugs: (1) no one can remember which way the resulting conditional should branch, and (2) it doesn't tell how far apart the numbers are. The abstract concept of order is very elegant (another set of axioms for the obvious) but the children already understand order pretty well and want to know the amounts. Another obsession is the concern for number base. It is good for the children to understand clearly that 223 is 'two hundred' plus 'twenty' plus 'three,' and I think that this should be made as simple as possible rather than complicated.⁴ I do not think that the idea is so rich that one should drill young children to do arithmetic in several bases! For there is very little transfer of this feeble concept to other things, and it risks a crippling insult to the fragile arithmetic of pupils who, already troubled with 6 + 7 = 13, now find that 6 + 7 = 15. Besides, for all
the attention to number base, I do not see in my children's books any concern with even a few nontrivial implications-concepts that might justify the attention, such as: Why is there only one way to write a decimal integer? Why does casting out nines work? (It isn't even mentioned.) What happens if we use arbitrary nonpowers, such as a + 37b + 24c + 11d + . . . instead of the usual a + 10b + 100c + 1000d + . . . ?
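These questions lend themselves to exactly the kind of small procedural experiment the essay keeps recommending. The sketch below is mine (the weights 37, 24, 11 are simply the ones named in the question): it checks the fact behind casting out nines and shows that, unlike ordinary decimal notation, the 'nonpower' system admits several representations of the same integer.

    from itertools import product

    def digit_sum(n):
        return sum(int(d) for d in str(n))

    # Casting out nines works because 10 leaves remainder 1 when divided by 9,
    # so each decimal digit contributes its face value modulo 9.
    for n in (38, 49, 12345, 987654321):
        assert n % 9 == digit_sum(n) % 9

    # Count representations of n with digits a, b, c, d in 0..9 and the
    # given positional weights.
    def representations(n, weights):
        return [digits for digits in product(range(10), repeat=len(weights))
                if sum(d * w for d, w in zip(digits, weights)) == n]

    # The usual weights 1, 10, 100, 1000 give exactly one way to write 100 ...
    assert len(representations(100, (1, 10, 100, 1000))) == 1
    # ... while the arbitrary weights 1, 37, 24, 11 give several.
    assert len(representations(100, (1, 37, 24, 11))) > 1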
If they don't discuss such matters, they must have another purpose. My conjecture is that the whole fuss is to make the kids better understand the procedures for multiplying and dividing. But from a developmental viewpoint this may be a serious mistake-in the strategies of both the old and the 'new' mathematical curricula. At best, the standard algorithm for long division is cumbersome, and most children will never use it to explore numeric phenomena. And, although it is of some interest to understand how it works, writing out the whole display suggests that the educator believes that the child ought to understand the horrible thing every time! This is wrong. The important idea, if any, is the repeated subtraction; the rest is just a clever but not vital programming hack. If we can teach, perhaps by rote, a practical division algorithm, fine. But in any case let us give them little calculators; if that is too expensive, why not slide rules. Please, without an impossible explanation. The important thing is to get on to the real numbers! The New Math's concern with integers is so fanatical that it reminds me, if I may mention another pseudoscience, of numerology. (How about that, Boston Globe!) The Cauchy-Dedekind-Russell-Whitehead set-theory formalism was a large accomplishment-another (following Euclid) of a series of demonstrations that many mathematical ideas can be derived from a few primitives, albeit by a long and tortuous route.

⁴ Cf. Tom Lehrer's song, 'New Math' [21].

But the child's
problem is to acquire the ideas at all; he needs to learn about reality. In terms of the concepts available to him, the entire formalism of set theory cannot hold a candle to one older, simpler, and possibly greater idea: the nonterminating decimal representation of the intuitive real number line. There is a real conflict between the logician's goal and the educator's. The logician wants to minimize the variety of ideas, and doesn't mind a long, thin path. The educator (rightly) wants to make the paths short and doesn't mind-in fact, prefers-connections to many other ideas. And he cares almost not at all about the directions of the links. As for better understanding of the integers, countless exercises in making little children draw diagrams of one-one correspondences will not help, I think. It will help, no doubt, in their learning valuable algorithms, not for number but for the important topological and procedural problems in drawing paths without crossing, and so forth. It is just that sort of problem, now treated entirely accidentally, that we should attend to. The computer scientist thus has a responsibility to education. Not, as he thinks, because he will have to program the teaching machines. Certainly not because he is a skilled user of 'finite mathematics.' He knows how to debug programs; he must tell the educators how to help the children to debug their own problem-solving processes. He knows how procedures depend on their data structures; he can tell educators how to prepare children for new ideas. He knows why it is bad to use double-purpose tricks that haunt one later in debugging and enlarging programs. (Thus, one can capture the kids' interest by associating small numbers with arbitrary colors. But what will this trick do for their later attempts to apply number ideas to area, or to volume, or to value?) The computer scientist is the one who must study such matters, because he is the proprietor of the concept of procedure, the secret educators have so long been seeking.

References

1. Feynman, R. P. Development of the space-time view of quantum electrodynamics. Science 153, No. 3737 (Aug. 1966), 699-708.
2. Shannon, C. E. A universal Turing machine with two internal states. In Automata Studies, Shannon, C. E., and McCarthy, J. (Eds.), Princeton U. Press, Princeton, N.J., 1956, pp. 157-165.
3. Cook, S. A. On the minimum computation time for multiplication. Doctoral diss., Harvard U., Cambridge, Mass., 1966.
4. Knuth, D. The Art of Computer Programming, Vol. II. Addison-Wesley, Reading, Mass., 1969.
5. Minsky, M., and Papert, S. Perceptrons: An Introduction to Computational Geometry. MIT Press, Cambridge, Mass., 1969.
6. Guzman, A., and McIntosh, H. V. CONVERT. Comm. ACM 9, 8 (Aug. 1966), 604-615.
7. Hewitt, C. PLANNER: A language for proving theorems in robots. In Proc. of the International Joint Conference on Artificial Intelligence, May 7-9, 1969, Washington, D.C., Walker, D. E., and Norton, L. M. (Eds.), pp. 295-301.
8. Levin, M., et al. The LISP 1.5 Programmer's Manual. MIT Press, Cambridge, Mass., 1965.
9. Weissman, C. The LISP 1.5 Primer. Dickenson Pub. Co., Belmont, Calif., 1967.
10. Martin, W. A. Symbolic mathematical laboratory. Doctoral diss., MIT, Cambridge, Mass., Jan. 1967.
11. Moses, J. Symbolic integration. Doctoral diss., MIT, Cambridge, Mass., Dec. 1967.
12. Slagle, J. R. A heuristic program that solves symbolic integration problems in Freshman calculus. In Computers and Thought, Feigenbaum, E. A., and Feldman, J. (Eds.), McGraw-Hill, New York, 1963.
13. Turing, A. M. Computing machinery and intelligence. Mind 59 (Oct. 1950), 433-460; reprinted in Computers and Thought, Feigenbaum, E. A., and Feldman, J. (Eds.), McGraw-Hill, New York, 1963.
14. Minsky, M. Matter, mind and models. Proc. IFIP Congress 65, Vol. 1, pp. 45-49 (Spartan Books, Washington, D.C.). Reprinted in Semantic Information Processing, Minsky, M. (Ed.), MIT Press, Cambridge, Mass., 1968, pp. 425-432.
15. Feigenbaum, E. A., and Feldman, J. Computers and Thought. McGraw-Hill, New York, 1963.
16. Minsky, M. (Ed.). Semantic Information Processing. MIT Press, Cambridge, Mass., 1968.
17. Galton, F. Inquiries into Human Faculty and Development. Macmillan, New York, 1883.
18. Attneave, F. Triangles as ambiguous figures. Amer. J. Psychol. 81, 3 (Sept. 1968), 447-453.
19. Papert, S. Principes analogues à la récurrence. In Problèmes de la Construction du Nombre, Presses Universitaires de France, Paris, 1960.
20. Feynman, R. P. New textbooks for the 'new' mathematics. Engineering and Science 28, 6 (March 1965), 9-15 (California Inst. of Technology, Pasadena).
21. Lehrer, T. New math. In That Was the Year That Was, Reprise 6179, Warner Bros. Records.
Categories and Subject Descriptors: D.3.1 [Software]: Formal Definitions and Theory-syntax; D.3.4 [Software]: Processors-compilers; F.2.1 [Theory of Computation]: Numerical Algorithms and Problems-computations on matrices; F.4.1 [Theory of Computation]: Mathematical Logic-recursive function theory; I.2.6 [Computing Methodologies]: Learning-concept learning; K.3.0 [Computing Milieux]: Computers and Education-general
General Terms: Algorithms, Languages, Theory
Key Words and Phrases: Heuristic programming, new math
Some Comments from a Numerical Analyst

J. H. WILKINSON
National Physical Laboratory
Teddington, Middlesex, England

A description is given of life with A. M. Turing at the National Physical Laboratory in the early days of the development of electronic computers (1946-1948). The present mood of pessimism among numerical analysts resulting from difficult relationships with computer scientists and mathematicians is discussed. It is suggested that in light of past and present performance this pessimism is unjustified and is the main enemy of progress in numerical mathematics. Some achievements in the fields of matrix computations and error analysis are discussed and likely changes in the direction of research in numerical analysis are sketched.
Introduction

When at last I recovered from the feeling of shocked elation at being invited to give the 1970 Turing Award Lecture, I became aware that I must indeed prepare an appropriate lecture. There appears to be a tradition that a Turing Lecturer should decide for himself what is expected from him, and probably for this reason previous lectures have differed considerably in style and content. However, it was made quite clear that I was to give an after-luncheon speech and that I would not have the benefit of an overhead projector or a blackboard.

(Author deceased, October 1986. Address at that time: Department of Computer Science, Stanford University, Stanford, CA 94305.)
Although I have been associated with high speed computers since the pioneering days, my main claim, such as it is, to the honor of giving the 1970 lecture rests on my work as a numerical analyst, particularly in the field of error analysis. A study of the program for this meeting revealed that numerical analysis was conspicuous by its absence, and accordingly I felt that it would be inappropriate to prepare a rather heavy discourse on rounding errors; indeed I doubt whether it would be a suitable topic for an after-luncheon speech in any setting. I decided therefore to make some rather personal comments based on my experience as a numerical analyst over the last twenty-five years. There is one important respect in which it is reasonably probable that I shall occupy a unique position among Turing Lecturers. Maurice Wilkes, giving the 1967 Turing Lecture, remarked that it was unlikely that many of those who followed him would be people who were acquainted with Alan Turing. In fact I can claim a good deal more than that. From 1946 to 1948 I had the privilege of working with the great man himself at the National Physical Laboratory. I use the term 'great man' advisedly because he was indeed a remarkable genius. To those of us at N.P.L. who knew him and worked with him, it has been a source of great pleasure that the ACM should have recognized his outstanding contributions to computer science by founding this Turing Award, and because of my connection with his work at an important period in his career, it is particularly gratifying for me to be a recipient. I trust that in the circumstances it will not be regarded as inappropriate if I devote a part of my lecture to the period I spent working with him. My career was certainly profoundly influenced by the association and, without it, it is unlikely that I would have remained in the computer field.
Life with Alan Turing

I was by inclination and training a classical analyst. Cambridge was still dominated by classical analysis in the '30s and I was strongly influenced by the Hardy-Littlewood tradition. Had it not been for World War II, I would almost certainly have taken my Ph.D. in that field. However, I was of military age when the war broke out and being by nature a patriotic man I felt that I could serve my country more effectively, and incidentally a lot more comfortably, working in the Government Scientific Service, than serving as an incompetent infantryman. The British Government took a surprisingly enlightened attitude on the subject, and almost from the start of the war those with scientific qualifications were encouraged to take this course. I therefore spent the war in the Armament Research Department, which has much in common with Aberdeen Proving Ground, working mainly on such fascinating topics as external ballistics, fragmentation of bombs and shells, and the thermodynamics of explosives. My task was to solve problems of a mathematical nature arising in these fields,
using computational methods if necessary. Anybody who has ever been subjected to this discipline will know that it can be quite a chastening experience and successes are heavily diluted with failures. I did not at first find this task particularly congenial, but gradually I became interested in the numerical solution of physical problems. Later in this lecture I shall describe an early experience with matrix computations which was to have a considerable influence on my subsequent career. It was not possible to obtain an immediate release at the end of the war and in 1946 I joined the newly formed Mathematics Division at the National Physical Laboratory. It was there that I first met Alan Turing, though he was, of course, known to me before by reputation, but mainly as an eccentric. It is interesting to recall now that computer science virtually embraces two whole divisions at N.P.L. and spreads its tentacles into the remainder, that at that time the staff of the high speed computing section (or ACE section as it was called) numbered 1½. The one, of course, was no less a person than Alan Turing himself and I was the half. I hasten to add that this doesn't represent false modesty on my part. I was to spend half my time in the Computing Section, which was in the capable hands of Charles Goodwin and Leslie Fox, and the other half with Alan Turing. For several months Alan and I worked together in a remarkably small room at the top of an old house which had been taken over 'temporarily' by N.P.L. to house Mathematics Division. Needless to say, twenty-five years later it is still part of N.P.L. Turing never became an empire builder; he assembled his staff rather slowly and worked rather intimately with them. A year later the staff had reached only 3½, the two additions being Mike Woodger, who is best known for his work on Algol, and Harry Huskey (who needs no introduction to ACM audiences) who spent 1947 at N.P.L. My task was to assist Turing in the logical design of the computer ACE which was to be built at N.P.L. and to consider the problems of programming some of the more basic algorithms of numerical analysis, and my work in the Computing Section was intended to broaden my knowledge of that subject. (Those of you who are familiar with Turing's work will be interested to know that he referred to the sets of instructions needed for a particular problem as the relevant 'instruction table,' a term which later led to misunderstandings with people elsewhere.) As you can imagine, this left me with little idle time. Working with Turing was tremendously stimulating, perhaps at times to the point of exhaustion. He had recently become keenly interested in the problems of numerical analysis himself, and he took great pleasure in subjecting Leslie Fox, who was our most experienced numerical analyst at N.P.L., to penetrating but helpful criticisms of the methods he was using. It was impossible to work 'half-time' for a man like Turing and almost from the start the periods spent with the Computing Section were rather brief. The joint appointment did, however, have its useful
aspect. Turing occasionally had days when he was 'unapproachable' and at such times it was advisable to exercise discretion. I soon learned to recognize the symptoms and would exercise my right (or, as I usually put it, 'meet my obligations') of working in the Computing Section until the mood passed, which it usually did quite quickly. Turing had a strong predilection for working things out from first principles, usually in the first instance without consulting any previous work on the subject, and no doubt it was this habit which gave his work that characteristically original flavor. I was reminded of a remark which Beethoven is reputed to have made when he was asked if he had heard a certain work of Mozart which was attracting much attention. He replied that he had not, and added 'neither shall I do so, lest I forfeit some of my own originality.' Turing carried this to extreme lengths and I must confess that at first I found it rather irritating. He would set me a piece of work and when I had completed it he would not deign to look at my solution but would embark on the problem himself; only after having a preliminary trial on his own was he prepared to read my work. I soon came to see the advantage of his approach. In the first place he was really not as quick at grasping other people's ideas as he was at formulating his own, but what is more important, he would frequently come up with some original approach which had escaped me and might well have eluded him, had he read my account immediately. When he finally got around to reading my own work he was generally very appreciative; he was particularly fond of little programming tricks (some people would say that he was too fond of them to be a 'good' programmer) and would chuckle with boyish good humor at any little tricks I may have used. When I joined N.P.L. I had not made up my mind to stay permanently and still thought in terms of returning to Cambridge to take up research in classical analysis. The period with Turing fired me with so much enthusiasm for the computer project and so heightened my interest in numerical analysis that gradually I abandoned this idea. As I rather like to put it when speaking to pure mathematical friends, 'had it not been for Turing I would probably have become just a pure mathematician,' taking care to give the remark a suitably pejorative flavor. Turing's reputation is now so well established that it scarcely stands in need of a boost from me. However, I feel bound to say that his published work fails to give an adequate impression of his remarkable versatility as a mathematician. His knowledge ranged widely over the whole field of pure and applied mathematics and seemed, as it were, not merely something he had learned from books, but to form an integral part of the man himself. One could scarcely imagine that he would ever 'forget' any of it. In spite of this he had only twenty published papers to his credit (and this only if one includes virtually everything), written over a period of some twenty years. Remarkable
as some of these papers are, this work represents a mere fraction of what he might have done if things had turned out just a little differently. In the first place there were the six years starting from 1939 which he spent at the Foreign Office. He was 27 in 1939, so that in different circumstances this period might well have been the most productive of his life. He seemed not to have regretted the years he spent there and indeed we formed the impression that this was one of the happiest times of his life. Turing simply loved problems and puzzles of all kinds and the problems he encountered there must have given him a good deal of fun. Certainly it was there that he gained his knowledge of electronics and this was probably the decisive factor in his deciding to go to N.P.L. to design an electronic computer rather than returning to Cambridge. Mathematicians are inclined to refer to this period as the 'wasted years' but I think he was too broad a scientist to think of it in such terms. A second factor limiting his output was a marked disinclination to put pen to paper. At school he is reputed to have had little enthusiasm for the 'English subjects' and he seemed to find the tedium of publishing a paper even more oppressive than most of us do. For myself I find his style of writing rather refreshing and full of little personal touches which are particularly attractive to someone who knew him. When in the throes of composition he would hammer away on an old typewriter (he was an indifferent typist, to put it charitably) and it was on such occasions that visits to the Computing Section were particularly advisable. While I was preparing this talk, an early Mathematics Division report was unearthed. It was written by Turing in 1946 for the Executive Committee of N.P.L., and its main purpose was to convince the committee of the feasibility and importance of building an electronic computer. It is full of characteristic touches of humor, and rereading it for the first time for perhaps 24 years I was struck once again by his remarkable originality and versatility. It is perhaps salutary to be reminded that as early as 1946 Turing had considered the possibility of working with both interval and significant digit arithmetic and the report recalled forgotten conversations, not to mention heated arguments, which we had on this topic. Turing's international reputation rests mainly on his work on computable numbers but I like to recall that he was a considerable numerical analyst, and a good part of his time from 1946 onwards was spent working in this field, though mainly in connection with the solution of physical problems. While at N.P.L. he wrote a remarkable paper on the error analysis of matrix computations [1] and I shall return to this later. During the last few months at N.P.L., Turing became increasingly dissatisfied with progress on the ACE project. He had always thought in terms of a large machine with 200 long delay lines storing some 6,000
words and I think this was too ambitious a project for the resources of N.P.L. (and indeed of most other places) at that time. During his visit Harry Huskey attempted to get work started on a less ambitious machine, based on Turing's ideas. Alan could never bring himself to support this project and in 1948 he left N.P.L. to join the group at Manchester University. After he left, the four senior members of the ACE section of Mathematics Division and the recently formed Electronics Section joined forces and collaborated on the construction of the computer PILOT ACE, for which we took over some of the ideas we had worked out with Harry Huskey; for the next two to three years we all worked as electronic engineers. I think we can claim that the PILOT ACE was a complete success and since Turing would not have permitted this project to get off the ground, to this extent at least we benefitted from his departure, though the Mathematics Division was never quite the same again. Working with a genius has both advantages and disadvantages! Once the machine was a success, however, there were no sour grapes from Turing and he was always extremely generous about what had been achieved.
The Present State of Numerical Analysis

I would now like to come to the main theme of my lecture, the present status of numerical analysis. Numerical analysis is unique among the various topics which comprise the rather ill-defined discipline of computer science. I make this remark rather defiantly because I would be very sorry to see numerical analysis sever all its connections with computer science, though I recognize that my views must be influenced to some extent by having worked in the exciting pioneer days on the construction of electronic computers. Nevertheless, numerical analysis is clearly different from the other topics in having had a long and distinguished history. Only the name is new (it appears not to have been used before the '50s) and this at least it has in common with computer science. Some like to trace its history back to the Babylonians and if one chooses to regard any reasonable systematic computation as numerical analysis I suppose this is justifiable. Certainly many of the giants of the mathematical world, including both the great Newton and Gauss themselves, devoted a substantial part of their research to computational problems. In those days it was possible for a mathematician to spend his time in this way without being apprehensive of the criticism of his colleagues. Many of the leaders of the computer revolution thought in terms of developing a tool which was specifically intended for the solution of problems arising in physics and engineering. This was certainly true of the two men of genius, von Neumann and Turing, who did so much to attract people of real ability into the computing field in the early
days. The report of Turing to which I referred earlier makes it quite clear that he regarded such applications as the main justification for embarking on what was, even then, a comparatively expensive undertaking. A high percentage of the leading lights of the newly formed computer societies were primarily numerical analysts and the editorial boards of the new journals were drawn largely from their ranks. The use of electronic computers brought with it a new crop of problems all perhaps loosely associated with 'programming' and quite soon a whole field of new endeavors grew up around the computer. In a brilliant article on numerical analysis [2] Philip Davis uses the term 'computerology' to encompass these multifarious activities but is careful to attribute the term to an unnamed friendly critic. I do not intend to use the term in a pejorative sense in this talk, but it is a useful collective word to cover everything in computer science other than numerical analysis. Many people who set out originally to solve some problem in mathematical physics found themselves temporarily deflected by the problems of computerology and we are still waiting with bated breath for the epoch-making contributions they will surely make when they return to the fold, clothed in their superior wisdom. In contrast to numerical analysis the problems of computerology are entirely new. The whole science is characterized by restless activity and excitement and completely new topics are constantly springing up. Although, no doubt, a number of the new activities will prove to be short-lived, computerology has a vital part to play in ensuring that computers are fully exploited. I'm sure that it is good for numerical analysts to be associated with a group of people who are so much alive and full of enthusiasm. I'm equally sure that there is merit in computer science embracing a subject like numerical analysis which has a solid background of past achievement. Inevitably though, numerical analysis has begun to look a little square in the computer science setting, and numerical analysts are beginning to show signs of losing faith in themselves. Their sense of isolation is accentuated by the present trend towards abstraction in mathematics departments which makes for an uneasy relationship. How different things might have been if the computer revolution had taken place in the 19th century! In his article Davis remarks that people are already beginning to ask, 'Is numerical analysis dead?' Davis has given his own answer to this question and I do not propose to pursue it here. In any case 'numerical analysts' may be likened to 'The Establishment' in computer science and in all spheres it is fashionable to diagnose 'rigor mortis' in the Establishment. There is a second question which is asked with increasing frequency. It assumes many different guises but is perhaps best expressed by the catch-phrase, 'What's new in numerical analysis?' This is invariably delivered in such a manner as to leave no doubt that the questioner's answer is 'Nothing,' or, even more probably, one of the more vulgar two-word synonyms, in which the English language is so rich. This Some Comments from a Numerical Analyst 249
criticism reminds me of a somewhat similar situation which exists with respect to functional analysis. Those brought up in an older tradition are inclined to say that 'there is nothing new in functional analysis, it merely represents a dressing up of old results in new clothes.' There is just sufficient truth in this to confirm the critics in their folly. In my opinion the implied criticism involves a false comparison. Of course everything in computerology is new; that is at once its attraction, and its weakness. Only recently I learned that computers are revolutionizing astrology. Horoscopes by computer! -it's certainly never been done before, and I understand that it is very remunerative! Seriously though, it was not to be expected that numerical analysis would be turned upside down in the course of a decade or two, just because we had given it a new name and at last had satisfactory tools to work with. Over the last 300 years some of the finest intellects in the mathematical world have been brought to bear on the problems we are trying to solve. It is not surprising that our rate of progress cannot quite match the heady pace which is so characteristic of computerology.
Some Achievements in Numerical Analysis

In case you are tempted to think that I am about to embark on excuses for not having made any real progress, I hasten to assure you that I have no such intention. While I was preparing this lecture I made a brief review of what has been achieved since 1950 and found it surprisingly impressive. In the next few minutes I would like to say just a little about the achievements in the area with which I am best acquainted, matrix computations. We are fortunate here in having, in the little book written by V. N. Faddeeva [3], an admirably concise and accurate account of the situation as it was in 1950. A substantial part of the book is devoted to the solution of the eigenvalue problem, and scarcely any of the methods discussed there are in use today. In fact as far as non-Hermitian matrices are concerned, even the methods which were advocated at the 1957 Wayne matrix conference have been almost completely superseded. Using a modern version of the QR algorithm one can expect to produce an accurate eigensystem of a dense matrix of order 100 in a time which is of the order of a minute. One can then go on to produce rigorous error bounds for both the eigenvalues and eigenvectors if required, deriving a more accurate system as a by-product. At the 1957 Wayne conference we did not appear to be within hailing distance of such an achievement. A particularly pleasing feature of the progress is that it is an appreciation of the problem of numerical stability resulting from advances in error analysis that has played a valuable part in suggesting the new algorithms.
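As a rough, modern-day illustration (mine, not anything from the lecture): a library eigensolver in the QR-algorithm lineage now disposes of a dense matrix of order 100 almost instantly, and the quality of the computed eigensystem can be checked directly from the residuals.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100
    A = rng.standard_normal((n, n))      # a dense, non-Hermitian test matrix

    # Dense nonsymmetric eigensolver (LAPACK, descended from the QR algorithm).
    eigenvalues, eigenvectors = np.linalg.eig(A)

    # Residual norms ||A v - lambda v|| for each computed eigenpair.
    residuals = np.linalg.norm(A @ eigenvectors - eigenvectors * eigenvalues,
                               axis=0)

    # For a well-scaled matrix these sit near machine precision times ||A||.
    print(residuals.max() / np.linalg.norm(A))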
Comparable advances have been made in the development of iterative methods for solving sparse linear systems of the type arising from partial differential equations; here algorithmic advances have proceeded pari passu with a deepening understanding of the convergence properties of iterative methods. As far as dense systems are concerned the development of new algorithms has been less important, but our understanding of the stability of the standard methods has undergone a complete transformation. In this connection I would like to make my last remarks about life with Turing. When I joined N.P.L. in 1946 the mood of pessimism about the stability of elimination methods for solving linear systems was at its height and was a major talking point. Bounds had been produced which purported to show that the error in the solution would be proportional to 4ⁿ and this suggested that it would be impractical to solve systems even of quite modest order. I think it was true to say that at that time (1946) it was the more distinguished mathematicians who were most pessimistic, the less gifted being perhaps unable to appreciate the full severity of the difficulties. I do not intend to indicate my place on this scale, but I did find myself in a rather uncomfortable position for the following reason. It so happens that while I was at the Armament Research Department I had an encounter with matrix computations which puzzled me a good deal. After a succession of failures I had been presented with a system of twelve linear equations to solve. I was delighted at last at being given a problem which I 'knew all about' and had departed with my task, confident that I would return with the solution in a very short time. However, when I returned to my room my confidence rapidly evaporated. The set of 144 coefficients suddenly looked very much larger than they had seemed when I was given them. I consulted the few books that were then available, one of which, incidentally, recommended the direct application of Cramer's rule using determinants! It did not take long to appreciate that this was not a good idea and I finally decided to use Gaussian elimination with what would now be called
'partial pivoting.' Anxiety about rounding errors in elimination methods had not yet reared its head and I used ten-decimal computation more as a safety precaution than because I was expecting any severe instability problems. The system was mildly ill-conditioned, though we were not so free with such terms of abuse in those days, and starting from coefficients of order unity, I slowly lost figures until the final reduced equation was of the form, say,

.0000376235 x₁₂ = .0000216312
At this stage I can remember thinking to myself that the computed x₁₂ derived from this relation could scarcely have more than six correct
figures, even supposing that there had been no buildup in rounding errors, and I contemplated computing the answers to six figures only. However, as those of you who have worked with a desk computer will know, one tends to make fewer blunders if one adheres to a steady pattern of work, and accordingly I computed all variables to ten figures, though fully aware of the absurdity of doing so. It so happened that all solutions were of order unity, which from the nature of the physical problem was to be expected. Then, being by that time a well-trained computer, I substituted my solution in the original equations to see how they checked. Since x₁ had been derived from the first of the original equations, I started by substituting in the 12th equation. You will appreciate that on a desk machine the inner product is accumulated exactly, giving 20 figures in all. (It is interesting that nowadays we nearly always accept a poorer performance from the arithmetic units of computers!) To my astonishment the left-hand side agreed with the given right-hand side to ten figures, i.e., to the full extent of the right-hand side. That, I said to myself, was a coincidence. Eleven more 'coincidences' followed, though perhaps not quite in rapid succession! I was completely baffled by this. I felt sure that none of the variables could have more than six correct figures and yet the agreement was as good as it would have been if I had been given the exact answer and had then rounded it to ten figures. However, the war had still to be won, and it was no time to become introspective about rounding errors; in any case I had already taken several times longer than my first confident estimate. My taskmaster was not as appreciative as he might have been but he had to admit he was impressed when I claimed that I had 'the exact solution' corresponding to a right-hand side which differed only in the tenth figure from the given one. As you can imagine this experience was very much in my mind when I arrived at N.P.L. and encountered the preoccupation with the instability of elimination methods. Of course I still believed that my computed answers had at best six correct figures, but it was puzzling that in my only encounter with linear systems it was the surprising accuracy of the solutions (at least in the sense of small residuals) which required an explanation. In the current climate at N.P.L. I decided not to risk looking foolish by stressing this experience. However, it happened that some time after my arrival, a system of 18 equations arrived in Mathematics Division and after talking around it for some time we finally decided to abandon theorizing and to solve it. A system of 18 is surprisingly formidable, even when one has had previous experience with 12, and we accordingly decided on a joint effort. The operation was manned by Fox, Goodwin, Turing, and me, and we decided on Gaussian elimination with complete pivoting. Turing was not particularly enthusiastic, partly because he was not an experienced performer on a desk machine and partly because he was convinced that it would be a failure. History repeated itself remarkably
closely. Again the system was mildly ill-conditioned; the last equation had a coefficient of order 10⁻⁴ (the original coefficients being of order unity) and the residuals were again of order 10⁻¹⁰, that is, of the size corresponding to the exact solution rounded to ten decimals. It is interesting that in connection with this example we subsequently performed one or two steps of what would now be called 'iterative refinement,' and this convinced us that the first solution had had almost six correct figures. I suppose this must be regarded as a defeat for Turing since he, at
that time, was a keener adherent than any of the rest of us to the pessimistic school. However, I'm sure that this experience made quite an impression on him and set him thinking afresh on the problem of rounding errors in elimination processes. About a year later he produced his famous paper 'Rounding-off errors in matrix processes' [1] which together with the paper of J. von Neumann and H. Goldstine [4] did a great deal to dispel the gloom. The second round undoubtedly went to Turing! This anecdote illustrates rather well, I think, the confused state of mind which existed at that time, and was shared even by the most distinguished people working in the field. By contrast I think we can fairly claim today to have a reasonably complete understanding of matrix stability problems, not only in the solution of linear systems, but also in the far more difficult eigenvalue problem.
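The anecdote is easy to re-create in floating point today; the sketch below is my own construction, not the original hand computation, and the matrix is just an arbitrary mildly ill-conditioned example. Gaussian elimination with partial pivoting (what the library solver uses) leaves a residual near roundoff level even though some figures of the solution are in doubt, and the size of one refinement step estimates how many figures were actually correct.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 12
    # A mildly ill-conditioned 12-by-12 system whose exact solution is all ones.
    A = rng.standard_normal((n, n)) + 1e4 * np.outer(rng.standard_normal(n),
                                                     rng.standard_normal(n))
    x_true = np.ones(n)
    b = A @ x_true

    # Solve by LU factorization with partial pivoting.
    x = np.linalg.solve(A, b)

    residual = b - A @ x
    print("relative residual:",
          np.linalg.norm(residual) / (np.linalg.norm(A) * np.linalg.norm(x)))
    print("relative error:   ",
          np.linalg.norm(x - x_true) / np.linalg.norm(x_true))

    # One step of iterative refinement: the size of the correction indicates
    # roughly how accurate the first solution was.
    correction = np.linalg.solve(A, residual)
    print("size of refinement step:",
          np.linalg.norm(correction) / np.linalg.norm(x))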
Failures in the Matrix Field Although we can claim to have been successful in the matrix area as far as the development of algorithms and an understanding of their performance is concerned, there are other respects in which we have not been particularly successful even in this field. Most important of these is a partial failure in communication. The use of algorithms and a general understanding of the stability problem has lagged much further behind developments than it should have. The basic problems of matrix computation have the advantage of simple formulations, and I feel that the preparation of well-tested and well-documented algorithms should have advanced side by side with their development and analysis. There are two reasons why this has not happened. (i) It is a much more arduous task than was appreciated to prepare the documentation thoroughly. (ii) Insufficient priority has been attached to doing it. There are signs in the last year or two that these shortcomings are at last being overcome with the work on the Handbook for Automatic Computation [5], that on matrix algorithms centered at Argonne National Laboratory, and the more general project at Bell Telephone Laboratories [6]. I think it is of vital importance that all the work that has been expended on the development of satisfactory algorithms should be made fully available to the people who need to use it. I would go further than this and claim that it is a social duty to see that this is achieved. Some Comments from a Numerical Analyst
A second disquieting feature about work in the matrix field is that it has tended to be isolated from that in very closely related areas. I would like to mention in particular linear programming and statistical computations. Workers in linear algebra and linear programming seemed until recently to comprise almost completely disjoint sets, and this is surely undesirable. The standard computations required in practical statistics provide the most direct opportunities for applying the basic matrix algorithms, and yet there is surprisingly little collaboration. Only recently I saw an article by a well-known practical statistician on the singular value decomposition which did not, at least in its first draft, contain any reference to the work of Kahan and Golub, who have developed such an admirable algorithm for this purpose. Clearly there is a failure on both sides, but I think it is primarily the duty of people working in the matrix field to make certain that their work is used in related areas, and this calls for an aggressive policy. Again there are signs that this isolation is breaking down. At Stanford, Professor Dantzig, a pioneer in linear programming, now has a joint appointment with the Computer Science Department, and schemes are afoot in the UK to have joint meetings of matrix experts and members of the Statistical Society. Historical accidents often play a great part in breaking down barriers and it is interesting that collaboration between workers on the numerical solution of partial differential equations and on matrix algebra has always been extremely close.

A third disappointing feature is the failure of numerical analysts to influence computer hardware and software in the way that they should. In the early days of the computer revolution computer designers and numerical analysts worked closely together and indeed were often the same people. Now there is a regrettable tendency for numerical analysts to opt out of any responsibility for the design of the arithmetic facilities and a failure to influence the more basic features of software. It is often said that the use of computers for scientific work represents a small part of the market, and numerical analysts have resigned themselves to accepting facilities 'designed' for other purposes and making the best of them. I am not convinced that this is inevitable, and if there were sufficient unity in expressing their demands there is no reason why they could not be met. After all, one of the main virtues of an electronic computer from the point of view of the numerical analyst is its ability to 'do arithmetic fast.' Need the arithmetic be so bad! Even here there are hopeful developments. The work of W. Kahan deserves particular mention, and last September a well-known manufacturer sponsored a meeting on this topic at which he, among others, had an opportunity to express his views.
Final Comments

I am convinced that mathematical computation has a great part to play in the future and that its contribution will fully live up to the expectations of the great pioneers of the computer revolution. The
greatest danger to numerical analysts at the moment springs from a lack of faith in themselves for which there is no real justification. I think the nature of research in numerical analysis is bound to change substantially in the next decade. In the first two decades we have concentrated on the basic problems, such as arise, for example, in linear and nonlinear algebra and approximation theory. In retrospect these will appear as a preliminary sharpening of the tools which we require for the real task. For success in this it will be essential to recruit more effectively than we have so far from the ranks of applied mathematicians and mathematical physicists. On a recent visit to the Soviet Union I was struck by the fact that most of the research in numerical analysis is being done by people who were essentially mathematical physicists, who have decided to tackle their problems by numerical methods, and they are strongly represented in the Academy of Sciences. Although I think that we in the West have nothing to fear from a comparison of achievements, I do feel that morale is markedly higher in the Soviet Union. In the UK there are signs that the tide is already turning. There is to be a Numerical Analysis Year at the University of Dundee, during the course of which many of the more distinguished of the world's numerical analysts will be visiting the UK. Quite recently a Discussion Meeting on a numerical analysis topic was held at the Royal Society. Such things would scarcely have been contemplated a year or two ago. I look forward to the time when numerical mathematics will dominate the applied field and will again occupy a central position at meetings of the ACM.
References

1. Turing, A. M. Rounding-off errors in matrix processes. Quart. J. Mech. Appl. Math. 1 (1948), 287-308.
2. Davis, P. J. Numerical analysis. In The Mathematical Sciences: A Collection of Essays. MIT Press, Cambridge, Mass., 1969.
3. Faddeeva, V. N. Computational Methods of Linear Algebra. Translated by C. D. Benster. Dover, New York, 1959.
4. von Neumann, J., and Goldstine, H. H. Numerical inverting of matrices of high order. Bull. Amer. Math. Soc. 53 (1947), 1021-1099.
5. Wilkinson, J. H. Handbook for Automatic Computation, Vol. 2: Linear Algebra. Springer-Verlag, Berlin.
6. Gentleman, W. M., and Traub, J. F. The Bell Laboratories numerical mathematics program library project. In Proc. ACM 23rd Nat. Conf., 1968. Brandon/Systems Press, Princeton, N.J., pp. 485-490.
Categories and Subject Descriptors: F.2.1 [Theory of Computation]: Numerical Algorithms and Problems - computations on matrices; G.1.0 [Mathematics of Computing]: Numerical Analysis - error analysis, numerical algorithms; G.1.3 [Mathematics of Computing]: Numerical Linear Algebra - linear systems; K.2 [Computing Milieux]: History of Computing - people
General Terms: Algorithms, Theory
Additional Key Words and Phrases: National Physical Laboratory, Turing
Generality in Artificial Intelligence

JOHN McCARTHY
Stanford University

The Turing Award Lecture given in 1971 by John McCarthy was never published. The postscript that follows, written by the author in 1986, endeavors to reflect the flavor of the original, as well as to comment in the light of developments over the past 15 years.
Author's present address: Department of Computer Science, Stanford University, Stanford, CA 94305-2095.

Postscript

My 1971 Turing Award Lecture was entitled 'Generality in Artificial Intelligence.' The topic turned out to have been overambitious in that I discovered that I was unable to put my thoughts on the subject in a satisfactory written form at that time. It would have been better to have reviewed previous work rather than attempt something new, but such wasn't my custom at that time. I am grateful to the ACM for the opportunity to try again.

Unfortunately for our science, although perhaps fortunately for this project, the problem of generality in artificial intelligence (AI) is almost as unsolved as ever, although we now have many ideas not available in 1971. This paper relies heavily on such ideas, but it is far from a full 1986 survey of approaches for achieving generality. Ideas are discussed
at a length proportional to my familiarity with them rather than according to some objective criterion.

It was obvious in 1971 and even in 1958 that AI programs suffered from a lack of generality. It is still obvious, and now there are many more details. The first gross symptom is that a small addition to the idea of a program often involves a complete rewrite beginning with the data structures. Some progress has been made in modularizing data structures, but small modifications of the search strategies are even less likely to be accomplished without rewriting. Another symptom is that no one knows how to make a general database of common sense knowledge that could be used by any program that needed the knowledge. Along with other information, such a database would contain what a robot would need to know about the effects of moving objects around, what a person can be expected to know about his family, and the facts about buying and selling. This doesn't depend on whether the knowledge is to be expressed in a logical language or in some other formalism. When we take the logic approach to AI, lack of generality shows up in that the axioms we devise to express common sense knowledge are too restricted in their applicability for a general common sense database. In my opinion, getting a language for expressing general common sense knowledge for inclusion in a general database is the key problem of generality in AI. Here are some ideas for achieving generality proposed both before and after 1971. I repeat my disclaimer of comprehensiveness.
Representing Behavior by Program

Friedberg [7, 8] discussed a completely general way of representing behavior and provided a way of learning to improve it. Namely, the behavior is represented by a computer program, and learning is accomplished by making random modifications to the program and testing the modified program. The Friedberg approach was successful in learning only how to move a single bit from one memory cell to another, and its scheme of rewarding instructions involved in successful runs by reducing their probability of modification was shown by Simon [24] to be inferior to testing each program thoroughly and completely scrapping any program that wasn't perfect. No one seems to have attempted to follow up the idea of learning by modifying whole programs.

The defect of the Friedberg approach is that while representing behaviors by programs is entirely general, modifying behaviors by small modifications to the programs is very special. A small conceptual modification to a behavior is usually not represented by a small modification to the program, especially if machine language programs are used and any one small modification to the text of a program is considered as likely as any other. It might be worth trying something more analogous to genetic evolution; duplicates of subroutines would be made, some copies would
be modified and others left unchanged. The learning system would then experiment whether it was advantageous to change certain calls of the original subroutine to calls of the modified subroutine. Most likely even this wouldn't work unless the relevant small modifications of behavior were obtainable by calls to slightly modified subroutines. It would probably be necessary to provide for modifications to the number of arguments of subroutines. While Friedberg's problem was learning from experience, all schemes for representing knowledge by program suffer from similar difficulties when the object is to combine disparate knowledge or to make programs that modify knowledge.
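As a concrete illustration of the mutate-and-test idea discussed above, here is a minimal Python sketch of my own, not Friedberg's machine-language scheme: a 'behavior' is a small program represented as a list of primitive operations on a register, and learning proceeds by random modification followed by testing against a target function. All names and the target function are invented for the example.

import random

# A 'program' is a list of primitive instructions acting on an integer register.
OPS = {
    "inc": lambda x: x + 1,
    "dec": lambda x: x - 1,
    "dbl": lambda x: x * 2,
    "nop": lambda x: x,
}

def run(program, x):
    for op in program:
        x = OPS[op](x)
    return x

def score(program, cases):
    # Count the test cases the program gets right.
    return sum(1 for x, want in cases if run(program, x) == want)

def mutate(program):
    # Random modification of one instruction, in the spirit of Friedberg.
    p = list(program)
    p[random.randrange(len(p))] = random.choice(list(OPS))
    return p

# Target behavior: f(x) = 2x + 1, learned by blind mutation and testing.
cases = [(x, 2 * x + 1) for x in range(5)]
best = ["nop"] * 3
for _ in range(5000):
    candidate = mutate(best)
    if score(candidate, cases) >= score(best, cases):
        best = candidate
print(best, [run(best, x) for x in range(5)])

Even in this toy setting, the weakness noted in the text is visible: a single random instruction change rarely corresponds to a small conceptual change in the behavior being computed.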
The General Problem Solver (GPS) and Its Successor

One kind of generality in AI comprises methods for finding solutions that are independent of the problem domain. Allen Newell, Herbert Simon, and their colleagues and students pioneered this approach and continue to pursue it. Newell et al. first proposed GPS in 1957 [18]. The initial idea was to represent problems of some general class as problems of transforming one expression into another by means of a set of allowed rules. It was even suggested in [20] that improving GPS could be thought of as a problem of this kind. In my opinion, GPS was unsuccessful as a general problem solver because problems don't take this form in general and because most of the knowledge needed for problem solving and achieving goals is not simply representable in the form of rules for transforming expressions. However, GPS was the first system to separate the problem-solving structure of goals and subgoals from the particular domain. If GPS had worked out to be really general, perhaps the Newell and Simon predictions about rapid success for AI would have been realized. Newell's current candidate [22] for general problem representation is SOAR, which, as I understand it, is concerned with transforming one state to another, where the states need not be represented by expressions.
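To make the 'transform one expression into another by allowed rules' formulation concrete, here is a small Python sketch; it illustrates only the general idea, not GPS itself. States are strings, rules are rewrite pairs, and a breadth-first search looks for a sequence of rule applications transforming the start expression into the goal. The rules and expressions are invented for the example.

from collections import deque

def transform(start, goal, rules):
    """Breadth-first search for a sequence of rewrites from start to goal.
    rules is a list of (lhs, rhs) pairs; a rule applies wherever lhs occurs."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        expr, path = frontier.popleft()
        if expr == goal:
            return path
        for lhs, rhs in rules:
            i = expr.find(lhs)
            while i != -1:
                new = expr[:i] + rhs + expr[i + len(lhs):]
                # Crude cap on expression growth to keep the search finite.
                if new not in seen and len(new) <= 2 * len(goal) + len(start):
                    seen.add(new)
                    frontier.append((new, path + [(lhs, rhs, i)]))
                i = expr.find(lhs, i + 1)
    return None

# Toy example: swap adjacent symbols until the goal arrangement is reached.
rules = [("AB", "BA"), ("BA", "AB")]
print(transform("AABB", "BBAA", rules))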
Production Systems

The first production systems were done by Newell and Simon in the 1950s, and the idea was written up in [21]. A kind of generality is achieved by using the same goal-seeking mechanism for all kinds of problems, changing only the particular productions. The early production systems have grown into the current proliferation of expert system shells. Production systems represent knowledge in the form of facts and rules, and there is almost always a sharp syntactic distinction between the two. The facts usually correspond to ground instances of logical
formulas, that is, they correspond to predicate symbols applied to constant expressions. Unlike logic-based systems, these facts contain no variables or quantifiers. New facts are produced by inference, observation, and user input. Variables are reserved for rules, which usually take a pattern-action form. Rules are put in the system by the programmer or 'knowledge engineer' and in most systems cannot arise via the action of the system. In exchange for accepting these limitations, the production system programmer gets a relatively fast program.

Production system programs rarely use fundamental knowledge of the domain. For example, MYCIN [2] has many rules about how to infer which bacterium is causing an illness based on symptoms and the result of laboratory tests. However, its formalism has no way of expressing the fact that bacteria are organisms that grow within the body. In fact, MYCIN has no way of representing processes occurring in time, although other production systems can represent processes at about the level of the situation calculus to be described in the next section.

The result of a production system pattern match is a substitution of constants for variables in the pattern part of the rule. Consequently, production systems do not infer general propositions. For example, consider the definition that a container is sterile if it is sealed against entry by bacteria and all the bacteria in it are dead. A production system (or a logic program) can only use this fact by substituting particular bacteria for the variables. Thus it cannot reason that heating a sealed container will sterilize it, given that a heated bacterium dies, because it cannot reason about the unenumerated set of bacteria in the container. These matters are discussed further in [14].
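The pattern-match-then-substitute cycle described above can be illustrated with a deliberately tiny forward-chaining sketch in Python; it is an illustration only, not how real shells descended from MYCIN are built. Facts are ground tuples, rules have variable patterns, and firing a rule substitutes constants for variables and adds new ground facts.

# Facts are ground tuples such as ('bacterium', 'b1'); variables start with '?'.
def match(pattern, fact, bindings):
    if len(pattern) != len(fact):
        return None
    b = dict(bindings)
    for p, f in zip(pattern, fact):
        if isinstance(p, str) and p.startswith('?'):
            if b.get(p, f) != f:
                return None
            b[p] = f
        elif p != f:
            return None
    return b

def all_matches(premises, facts, bindings):
    # Yield every binding that satisfies all premises against the fact base.
    if not premises:
        yield bindings
        return
    first, rest = premises[0], premises[1:]
    for fact in facts:
        b = match(first, fact, bindings)
        if b is not None:
            yield from all_matches(rest, facts, b)

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            for b in list(all_matches(premises, facts, {})):
                new = tuple(b.get(t, t) for t in conclusion)
                if new not in facts:
                    facts.add(new)
                    changed = True
    return facts

# Rule: a bacterium that has been heated is dead. Note that the rule can only
# conclude dead(b) for bacteria explicitly enumerated as facts.
rules = [([('bacterium', '?b'), ('heated', '?b')], ('dead', '?b'))]
facts = [('bacterium', 'b1'), ('bacterium', 'b2'), ('heated', 'b1')]
print(forward_chain(facts, rules))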
Representing Knowledge in Logic

It seemed to me in 1958 that small modifications in behavior are most often representable as small modifications in beliefs about the world, and this requires a system that represents beliefs explicitly.

If one wants a machine to be able to discover an abstraction, it seems most likely that the machine must be able to represent this abstraction in some relatively simple way. [11, p. 78]
The 1960 idea for increasing generality was to use logic to express facts in a way independent of the way the facts might subsequently be used. It seemed then and still seems that humans communicate mainly in declarative sentences rather than in programming languages for good objective reasons that will apply whether the communicator is a human, a creature from Alpha Centauri, or a computer program. Moreover, the advantages of declarative information also apply to internal representation. The advantage of declarative information is one of generality. The fact that when two objects collide they make a noise may be used in particular situations to make a noise, to avoid making noise, to explain a noise, or to explain the absence of noise. (I guess those cars didn't collide, because while I heard the squeal of brakes, I didn't hear a crash.)
Once one has decided to build an AI system that represents information declaratively, one still has to decide what kind of declarative language to allow. The simplest systems allow only constant predicates applied to constant symbols, for example, on(Block1, Block2). Next, one can allow arbitrary constant terms, built from function symbols, constants, and predicate symbols, for example, location(Block1) = top(Block2). Prolog databases allow arbitrary Horn clauses that include free variables, for example, P(x, y) ∧ Q(y, z) ⊃ R(x, z), expressing the Prolog clause in standard logical notation. Beyond that lies full first-order logic, including both existential and universal quantifiers and arbitrary first-order formulas. Within first-order logic, the expressive power of a theory depends on what domains the variables are allowed to range over. Important expressive power comes from using set theory, which contains expressions for sets of any objects in the theory.

Every increase in expressive power carries a price in the required complexity of the reasoning and problem-solving programs. To put it another way, accepting limitations on the expressiveness of one's declarative information allows simplification of the search procedures. Prolog represents a local optimum in this continuum, because Horn clauses are medium expressive but can be interpreted directly by a logical problem solver. One major limitation that is usually accepted is to limit the derivation of new facts to formulas without variables, that is, to substitute constants for variables and then do propositional reasoning. It appears that most human daily activity involves only such reasoning. In principle, Prolog goes slightly beyond this, because the expressions found as values of variables by Prolog programs can themselves involve free variables. However, this facility is rarely used except for intermediate results.

What can't be done without more of predicate calculus than Prolog allows is universal generalization. Consider the rationale of canning. We say that a container is sterile if it is sealed and all the bacteria in it are dead. This can be expressed as a fragment of a Prolog program as follows:

sterile(X) :- sealed(X), not alive-bacterium(Y, X).
alive-bacterium(Y, X) :- in(Y, X), bacterium(Y), alive(Y).

However, a Prolog program incorporating this fragment directly can sterilize a container only by killing each bacterium individually and would require that some other part of the program successively generate the names of the bacteria. It cannot be used to discover or rationalize canning - sealing the container and then heating it to kill all the bacteria at once. The reasoning rationalizing canning involves the use of quantifiers in an essential way. My own opinion is that reasoning and problem-solving programs will eventually have to allow the full use of quantifiers and sets and have strong enough control methods to use them without combinatorial explosion.
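As a hedged illustration (my own toy code, not a Prolog implementation), the fragment above can be mimicked in Python with an explicit fact base and negation as failure; the point is that sterility can only be established by checking each named bacterium, so nothing can be concluded about a container whose bacteria are not enumerated.

# Ground facts about one particular jar and its explicitly named bacteria.
facts = {
    ('sealed', 'jar1'),
    ('in', 'b1', 'jar1'), ('bacterium', 'b1'),
    ('in', 'b2', 'jar1'), ('bacterium', 'b2'),
    ('alive', 'b1'),           # b1 has not (yet) been killed
}

def alive_bacterium_in(container):
    # Mirrors: alive-bacterium(Y, X) :- in(Y, X), bacterium(Y), alive(Y).
    return [f[1] for f in facts
            if len(f) == 3 and f[0] == 'in' and f[2] == container
            and ('bacterium', f[1]) in facts and ('alive', f[1]) in facts]

def sterile(container):
    # Mirrors: sterile(X) :- sealed(X), not alive-bacterium(Y, X).
    # Negation as failure: 'not' just means the enumerated search finds nothing.
    return ('sealed', container) in facts and not alive_bacterium_in(container)

print(sterile('jar1'))        # False: b1 is still alive
facts.discard(('alive', 'b1'))
print(sterile('jar1'))        # True, but only because every bacterium was named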
While the 1958 idea was well received, very few attempts were made to embody it in programs in the immediately following years, the main one being Black's Harvard Ph.D. dissertation of 1964. I spent most of my time on what I regarded as preliminary projects, mainly LISP. My main reason for not attempting an implementation was that I wanted to learn how to express common sense knowledge in logic first. This is still my goal. I might be discouraged from continuing to pursue it if people pursuing nonlogical approaches were having significant success in achieving generality.

McCarthy and Hayes [12] made the distinction between epistemological and heuristic aspects of the AI problem and asserted that generality is more easily studied epistemologically. The distinction is that the epistemology is completed when the facts available have as a consequence that a certain strategy is appropriate to achieve the goal, whereas the heuristic problem involves the search that finds the appropriate strategy.

Implicit in [11] was the idea of a general-purpose, common sense database. The common sense information possessed by humans would be written as logical sentences and included in the database. Any goal-seeking program could consult the database for the facts needed to decide how to achieve its goal. Especially prominent in the database would be facts about the effects of actions. The much studied example is the set of facts about the effects of a robot trying to move objects from one location to another. This led in the 1960s to the situation calculus [12], which was intended to provide a way of expressing the consequences of actions independent of the problem. The basic formalism of the situation calculus is
s' = result(e, s),
which asserts that s' is the situation that results when event e occurs in situation s. Here are some situation calculus axioms for moving and painting blocks.

Qualified Result-of-Action Axioms

∀x l s. clear(top(x), s) ∧ clear(l, s) ∧ ¬tooheavy(x) ⊃ loc(x, result(move(x, l), s)) = l.
∀x c s. color(x, result(paint(x, c), s)) = c.

Frame Axioms

∀x y l s. color(y, result(move(x, l), s)) = color(y, s).
∀x y l s. y ≠ x ⊃ loc(y, result(move(x, l), s)) = loc(y, s).
∀x y c s. loc(x, result(paint(y, c), s)) = loc(x, s).
∀x y c s. y ≠ x ⊃ color(x, result(paint(y, c), s)) = color(x, s).

Notice that all qualifications to the performance of the actions are explicit in the premises and that statements (called frame axioms) about
what doesn't change when an action is performed are explicitly included. Without those statements it wouldn't be possible to infer much about result(e2, result(e1, s)), since we wouldn't know whether the premises for the event e2 to have its expected result were fulfilled in result(e1, s). Notice further that the situation calculus applies only when it is reasonable to reason about discrete events, each of which results in a new total situation. Continuous events and concurrent events are not covered.

Unfortunately, it wasn't very feasible to use the situation calculus in the manner proposed, even for problems meeting its restrictions. In the first place, using general-purpose theorem provers made the programs run too slowly, since the theorem provers of 1969 [9] had no way of controlling the search. This led to STRIPS [6], which reduced the use of logic to reasoning within a situation. Unfortunately, the STRIPS formalizations were much more special than full situation calculus. The facts that were included in the axioms had to be delicately chosen in order to avoid the introduction of contradictions arising from the failure to delete a sentence that wouldn't be true in the situation that resulted from an action.
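A minimal executable rendering of these axioms may help; the following Python sketch is my own illustration, not McCarthy's formalism itself. A situation is an immutable assignment of locations and colors, and result(e, s) is a function that applies the effect axioms and leaves everything else unchanged, which is exactly the work the frame axioms do in the logical version.

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Situation:
    loc: tuple      # tuple of (block, location) pairs
    color: tuple    # tuple of (block, color) pairs

def lookup(pairs, key):
    return dict(pairs)[key]

def result(event, s):
    """Compute the situation resulting from an event, frame conditions included."""
    kind = event[0]
    if kind == 'move':                       # move(x, l)
        _, x, l = event
        # Qualifications (destination clear, x not too heavy) are elided for brevity.
        new_loc = tuple((b, l if b == x else place) for b, place in s.loc)
        return replace(s, loc=new_loc)       # colors unchanged: the frame axiom
    if kind == 'paint':                      # paint(x, c)
        _, x, c = event
        new_color = tuple((b, c if b == x else col) for b, col in s.color)
        return replace(s, color=new_color)   # locations unchanged
    return s                                 # unknown events change nothing

s0 = Situation(loc=(('A', 'table'), ('B', 'table')),
               color=(('A', 'red'), ('B', 'blue')))
s1 = result(('paint', 'A', 'green'), result(('move', 'A', 'top(B)'), s0))
print(lookup(s1.loc, 'A'), lookup(s1.color, 'A'), lookup(s1.color, 'B'))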
Nonmonotonicity

The second problem with the situation calculus axioms is that they were again not general enough. This was the qualification problem, and a possible way around it wasn't discovered until the late 1970s. Consider putting an axiom in a common sense database asserting that birds can fly. Clearly the axiom must be qualified in some way, since penguins, dead birds, and birds whose feet are encased in concrete can't fly. A careful construction of the axiom might succeed in including the exceptions of penguins and dead birds, but clearly we can think up as many additional exceptions, like birds with their feet encased in concrete, as we like. Formalized nonmonotonic reasoning (see [4], [15]-[17], and [23]) provides a formal way of saying that a bird can fly unless there is an abnormal circumstance and of reasoning that only the abnormal circumstances whose existence follows from the facts being taken into account will be considered.

Nonmonotonicity has considerably increased the possibility of expressing general knowledge about the effects of events in the situation calculus. It has also provided a way of solving the frame problem, which constituted another obstacle to generality that was already noted in [12]. The frame problem (the term has been variously used, but I had it first) occurs when there are several actions available, each of which changes certain features of the situation. Somehow it is necessary to say that an action changes only the features of the situation to which it directly refers. When there is a fixed set of actions and features, it can be explicitly stated which features are unchanged by an action, even though it may take a lot of axioms. However, if we imagine that
additional features of situations and additional actions may be added to the database, we face the problem that the axiomatization of an action is never completed. McCarthy [16] indicates how to handle this using circumscription, but Lifschitz [10] has shown that circumscription needs to be improved and has made proposals for this. Here are some situation calculus axioms for moving and painting blocks using circumscription, taken from [16].

Axioms about Locations and the Effects of Moving Objects

∀x e s. ¬ab(aspect1(x, e, s)) ⊃ loc(x, result(e, s)) = loc(x, s).
∀x l s. ab(aspect1(x, move(x, l), s)).
∀x l s. ¬ab(aspect3(x, l, s)) ⊃ loc(x, result(move(x, l), s)) = l.

Axioms about Colors and Painting
∀x e s. ¬ab(aspect2(x, e, s)) ⊃ color(x, result(e, s)) = color(x, s).
∀x c s. ab(aspect2(x, paint(x, c), s)).
∀x c s. ¬ab(aspect4(x, c, s)) ⊃ color(x, result(paint(x, c), s)) = c.
This treats the qualification problem, because any number of conditions that may be imagined as preventing moving or painting can be added later and asserted to imply the corresponding ab aspect. It treats the frame problem in that we don't have to say that moving doesn't affect colors and painting doesn't affect locations.

Even with formalized nonmonotonic reasoning, the general common sense database still seems elusive. The problem is writing axioms that satisfy our notions of incorporating the general facts about a phenomenon. Whenever we tentatively decide on some axioms, we are able to think of situations in which they don't apply and a generalization is called for. Moreover, the difficulties that are thought of are often ad hoc, like that of the bird with its feet encased in concrete.
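The flavor of the abnormality device can be conveyed by a small sketch of my own; it is a toy, not circumscription itself, which is a minimization over models rather than a program. A fact is assumed normal unless some recorded circumstance makes it abnormal, and adding new exceptions never requires rewriting the original rule.

# Known facts and known abnormality conditions; both can grow over time.
birds = {'tweety', 'chilly', 'stony'}
penguins = {'chilly'}
feet_in_concrete = {'stony'}

def abnormal_for_flying(x):
    # New kinds of exceptions are added here without touching flies().
    return x in penguins or x in feet_in_concrete

def flies(x):
    # Default rule: birds fly unless shown abnormal (a crude stand-in for
    # minimizing the extension of 'ab').
    return x in birds and not abnormal_for_flying(x)

print([b for b in sorted(birds) if flies(b)])   # ['tweety']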
Reification

Reasoning about knowledge, belief, or goals requires extensions of the domain of objects reasoned about. For example, a program that does backward chaining on goals uses them directly as sentences: on(Block1, Block2); that is, the symbol on is used as a predicate constant of the language. However, a program that wants to say directly that on(Block1, Block2) should be postponed until on(Block2, Block3) has been achieved needs a sentence like precedes(on(Block2, Block3), on(Block1, Block2)), and if this is to be a sentence of first-order logic, then the symbol on must be taken as a function symbol, and on(Block1, Block2) regarded as an object in the first-order language. This process of making objects out of sentences and other entities is called reification. It is necessary for expressive power but again leads to complications in reasoning. It is discussed in [13].
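A hedged sketch of the distinction: in the toy Python representation below (my own notation, not a standard one), a term is just a nested tuple, so the same expression on(Block1, Block2) can serve either as a goal to be checked or as an ordinary object passed to a predicate such as precedes.

# A term is a tuple: (functor, arg1, arg2, ...). Sentences and goals share this form.
def term(functor, *args):
    return (functor,) + args

on_goal_1 = term('on', 'Block1', 'Block2')
on_goal_2 = term('on', 'Block2', 'Block3')

# Reified use: the two 'on' expressions appear as arguments of another predicate.
ordering_fact = term('precedes', on_goal_2, on_goal_1)

# Direct use: the same expression is treated as a goal to check against a database.
facts = {term('on', 'Block2', 'Block3')}
def holds(goal):
    return goal in facts

print(ordering_fact)
print(holds(on_goal_2), holds(on_goal_1))   # True False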
Formalizing the Notion of Context

Whenever we write an axiom, a critic can say that the axiom is true only in a certain context. With a little ingenuity the critic can usually devise a more general context in which the precise form of the axiom doesn't hold. Looking at human reasoning as reflected in language emphasizes this point. Consider axiomatizing 'on' so as to draw appropriate consequences from the information expressed in the sentence, 'The book is on the table.' The critic may propose to haggle about the precise meaning of 'on,' inventing difficulties about what can be between the book and the table or about how much gravity there has to be in a spacecraft in order to use the word 'on' and whether centrifugal force counts. Thus we encounter Socratic puzzles over what the concepts mean in complete generality and encounter examples that never arise in life. There simply isn't a most general context.

Conversely, if we axiomatize at a fairly high level of generality, the axioms are often longer than is convenient in special situations. Thus humans find it useful to say, 'The book is on the table,' omitting reference to time and precise identifications of what book and what table. This problem of how general to be arises whether the general common sense knowledge is expressed in logic, in program, or in some other formalism. (Some people propose that the knowledge is internally expressed in the form of examples only, but strong mechanisms using analogy and similarity permit their more general use. I wish them good fortune in formulating precise proposals about what these mechanisms are.)

A possible way out involves formalizing the notion of context and combining it with the circumscription method of nonmonotonic reasoning. We add a context parameter to the functions and predicates in our axioms. Each axiom makes its assertion about a certain context. Further axioms tell us that facts are inherited by more restricted contexts unless exceptions are asserted. Each assertion is also nonmonotonically assumed to apply in any particular more general context, but there again are exceptions. For example, the rules about birds flying implicitly assume that there is an atmosphere to fly in. In a more general context this might not be assumed. It remains to determine how inheritance to more general contexts differs from inheritance to more specific contexts.

Suppose that whenever a sentence p is present in the memory of a computer, we consider it as in a particular context and as an abbreviation for the sentence holds(p, C), where C is the name of a context. Some contexts are very specific, so that Watson is a doctor in the context of Sherlock Holmes stories and a baritone psychologist in a tragic opera about the history of psychology. There is a relation c1 < c2 meaning that context c2 is more general than context c1. We allow sentences like holds(c1 < c2, c0) so that
even statements relating contexts can have contexts. The theory would not provide for any 'most general context' any more than Zermelo-Fraenkel set theory provides for a most general set.

A logical system using contexts might provide operations of entering and leaving a context, yielding what we might call ultranatural deduction, allowing a sequence of reasoning like

holds(p, C)
ENTER C
p
q
LEAVE C
holds(q, C).
This resembles the usual logical natural deduction systems, but for reasons beyond the scope of this lecture, it is probably not correct to regard contexts as equivalent to sets of assumptions - not even infinite sets of assumptions. All this is unpleasantly vague, but it's a lot more than could be said in 1971.
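As a rough sketch of the holds(p, C) idea (again my own toy encoding, not McCarthy's formal proposal), assertions below are stored tagged with a context, contexts are ordered by generality, and a query in a specific context falls back on more general contexts unless an explicit exception blocks the inheritance. All context names are invented for the example.

# holds: set of (sentence, context). more_general maps a context to its parent.
holds = {('birds-can-fly', 'earthly'), ('gravity-present', 'general-physics')}
exceptions = {('birds-can-fly', 'outer-space')}   # blocked in this context
more_general = {'sherlock-holmes-stories': 'earthly',
                'earthly': 'general-physics',
                'outer-space': 'general-physics'}

def holds_in(sentence, context):
    """True if the sentence is asserted in this context or inherited from a
    more general one, and no exception blocks it along the way."""
    c = context
    while c is not None:
        if (sentence, c) in exceptions:
            return False
        if (sentence, c) in holds:
            return True
        c = more_general.get(c)
    return False

print(holds_in('birds-can-fly', 'sherlock-holmes-stories'))  # True (inherited)
print(holds_in('birds-can-fly', 'outer-space'))              # False (exception)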
References

1. Black, F. A deductive question answering system. Ph.D. dissertation, Harvard Univ., Cambridge, Mass., 1964.
2. Buchanan, B. G., and Shortliffe, E. H., Eds. Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project. American Elsevier, New York, 1984.
3. Davis, R., Buchanan, B., and Shortliffe, E. Production rules as a representation for a knowledge-based consultation program. Artif. Intell. 8, 1 (Feb. 1977).
4. Doyle, J. Truth maintenance systems for problem solving. In Proceedings of the 5th International Joint Conference on Artificial Intelligence, 1977, p. 247.
5. Ernst, G. W., and Newell, A. GPS: A Case Study in Generality and Problem Solving. Academic Press, Orlando, Fla., 1969.
6. Fikes, R., and Nilsson, N. STRIPS: A new approach to the application of theorem proving to problem solving. Artif. Intell. 2, 3-4 (Jan. 1971), 189-208.
7. Friedberg, R. M. A learning machine. IBM J. Res. 2, 1 (Jan. 1958), 2-13.
8. Friedberg, R. M., Dunham, B., and North, J. H. A learning machine, Part II. IBM J. Res. 3, 3 (July 1959), 282-287.
9. Green, C. Theorem-proving by resolution as a basis for question answering systems. In Machine Intelligence 4, B. Meltzer and D. Michie, Eds. University of Edinburgh Press, Edinburgh, 1969, pp. 183-205.
10. Lifschitz, V. Computing circumscription. In Proceedings of the 9th International Joint Conference on Artificial Intelligence, vol. 1, 1985, pp. 121-127.
11. McCarthy, J. Programs with common sense. In Proceedings of the Teddington Conference on the Mechanization of Thought Processes. Her Majesty's Stationery Office, London. Reprinted in Semantic Information Processing, M. Minsky, Ed. M.I.T. Press, Cambridge, Mass., 1960.
12. McCarthy, J., and Hayes, P. J. Some philosophical problems from the standpoint of artificial intelligence. In Machine Intelligence 4, D. Michie, Ed. American Elsevier, New York, N.Y., 1969.
13. McCarthy, J. First order theories of individual concepts and propositions. In Machine Intelligence 9, D. Michie, Ed. University of Edinburgh Press, Edinburgh, 1979.
14. McCarthy, J. Some expert systems need common sense. In Computer Culture: The Scientific, Intellectual and Social Impact of the Computer, vol. 426, Pagels, Ed. Annals of the New York Academy of Sciences, New York, 1983.
15. McCarthy, J. Circumscription - A form of non-monotonic reasoning. Artif. Intell. 13, 1-2 (Apr. 1980).
16. McCarthy, J. Applications of circumscription to formalizing common sense knowledge. Artif. Intell. (Apr. 1986).
17. McDermott, D., and Doyle, J. Non-monotonic logic I. Artif. Intell. 13, 1-2 (1980), 41-72.
18. Newell, A., Shaw, J. C., and Simon, H. A. Preliminary description of general problem solving program I (GPS-I). CIP Working Paper 7, Carnegie-Mellon Univ., Dec. 1957.
19. Newell, A., Shaw, J. C., and Simon, H. A. Report on a general problem-solving program for a computer. In Information Processing: Proceedings of the International Conference on Information Processing (Paris). UNESCO, 1960, pp. 256-264. (RAND P-1584; reprinted in Computers and Automation, July 1959.)
20. Newell, A., Shaw, J. C., and Simon, H. A. A variety of intelligent learning in a General Problem Solver. In Self-Organizing Systems, M. C. Yovits and S. Cameron, Eds. Pergamon Press, Elmsford, N.Y., 1960, pp. 153-189.
21. Newell, A., and Simon, H. A. Human Problem Solving. Prentice-Hall, Englewood Cliffs, N.J., 1972.
22. Laird, J. E., Newell, A., and Rosenbloom, P. S. Soar: An architecture for general intelligence. To be published.
23. Reiter, R. A logic for default reasoning. Artif. Intell. 13, 1-2 (Apr. 1980).
24. Simon, H. Still unsubstantiated rumor, 1960.

GENERA[W86,JMC] TEXed on May 27, 1986, at 11:50 p.m.
Categories and Subject Descriptors: I.2.3 [Artificial Intelligence]: Deduction and Theorem Proving - logic programming; I.2.4 [Artificial Intelligence]: Knowledge Representation Formalisms and Methods - representation languages; I.2.6 [Artificial Intelligence]: Learning - concept learning
General Terms: Languages, Theory
Additional Key Words and Phrases: General problem solver, Prolog, nonmonotonicity, reification
The Programmer as Navigator

CHARLES W. BACHMAN

Author's present address: Bachman Information Systems, Inc., 4 Cambridge Center, Cambridge, MA 02142.

The Turing Award citation read by Richard G. Canning, chairman of the 1973 Turing Award Committee, at the presentation of this lecture on August 28 at the ACM Annual Conference in Atlanta:

A significant change in the computer field in the last five to eight years has been made in the way we treat and handle data. In the early days of our field, data was intimately tied to the application programs that used it. Now we see that we want to break that tie. We want data that is independent of the application programs that use it - that is, data that is organized and structured to serve many applications and many users. What we seek is the database.

This movement toward the database is in its infancy. Even so, it appears that there are now between 1,000 and 2,000 true database management systems installed worldwide. In ten years very likely, there will be tens of thousands of such systems. Just from the quantities of installed systems, the impact of databases promises to be huge.

This year's recipient of the A. M. Turing Award is one of the real pioneers of database technology. No other individual has had the influence that he has had upon this aspect of our field. I single out three prime examples of what he has done. He was the creator and principal architect of the first
commercially available database management system - the Integrated Data Store - originally developed from 1961 to 1964.¹,²,³,⁴ I-D-S is today one of the three most widely used database management systems. Also, he was one of the founding members of the CODASYL Database Task Group, and served on that task group from 1966 to 1968. The specifications of that task group are being implemented by many suppliers in various parts of the world.⁵,⁶ Indeed, currently these specifications represent the only proposal of stature for a common architecture for database management systems. It is to his credit that these specifications, after extended debate and discussion, embody much of the original thinking of the Integrated Data Store. Thirdly, he was the creator of a powerful method for displaying data relationships - a tool for database designers as well as application system designers.⁷,⁸ His contributions have thus represented the union of imagination and practicality. The richness of his work has already had, and will continue to have, a substantial influence upon our field. I am very pleased to present the 1973 A. M. Turing Award to Charles W. Bachman.

Copernicus completely reoriented our view of astronomical phenomena when he suggested that the earth revolves about the sun. There is a growing feeling that data processing people would benefit if they were to accept a radically new point of view, one that would liberate the application programmer's thinking from the centralism of core storage and allow him the freedom to act as a navigator within a database. To do this, he must first learn the various navigational skills; then he must learn the 'rules of the road' to avoid conflict with other programmers as they jointly navigate the database information space. This orientation will cause as much anguish among programmers as the heliocentric theory did among ancient astronomers and theologians.
This year the whole world celebrates the five-hundredth birthday of Nicolaus Copernicus, the famous Polish astronomer and mathematician. In 1543, Copernicus published his book, Concerning the Revolutions of Celestial Spheres, which described a new theory about the relative physical movements of the earth, the planets, and the sun. It was in direct contradiction with the earth-centered theories which had been established by Ptolemy 1400 years earlier.

¹A general purpose programming system for random access memories (with S. B. Williams). Proc. AFIPS 1964 FJCC, Vol. 26, AFIPS Press, Montvale, N.J., pp. 411-422.
²Integrated Data Store. DPMA Quarterly (Jan. 1965).
³Software for random access processing. Datamation (Apr. 1965), 36-41.
⁴Integrated Data Store - Case Study. Proc. Sec. Symp. on Computer-Centered Data Base Systems sponsored by ARPA, SDC, and ESD, 1966.
⁵Implementation techniques for data structure sets. Proc. of SHARE Working Conf. on Data Base Systems, Montreal, Canada, July 1973.
⁶The evolution of data structures. Proc. NordDATA Conf., Aug. 1973, Copenhagen, Denmark, pp. 1075-1093.
⁷Data structure diagrams. Data Base 1, 2 (1969), Quarterly Newsletter of ACM SIGBDP, pp. 4-10.
⁸Set concepts for data structures. In Encyclopedia of Computer Science, Auerbach Corp. (to be published in 1974).
Copernicus proposed the heliocentric theory, that planets revolve in a circular orbit around the sun. This theory was subjected to tremendous and persistent criticism. Nearly 100 years later, Galileo was ordered to appear before the Inquisition in Rome and forced to state that he had given up his belief in the Copernican theory. Even this did not placate his inquisitors, and he was sentenced to an indefinite prison term, while Copernicus's book was placed upon the Index of Prohibited Books, where it remained for another 200 years.

I raise the example of Copernicus today to illustrate a parallel that I believe exists in the computing or, more properly, the information systems world. We have spent the last 50 years with almost Ptolemaic information systems. These systems, and most of the thinking about systems, were based on a computer-centered concept. (I choose to speak of 50 years of history rather than 25, for I see today's information systems as dating from the beginning of effective punched card equipment rather than from the beginning of the stored program computer.) Just as the ancients viewed the earth with the sun revolving around it, so have the ancients of our information systems viewed a tab machine or computer with a sequential file flowing through it. Each was an adequate model for its time and place. But after a while, each has been found to be incorrect and inadequate and has had to be replaced by another model that more accurately portrayed the real world and its behavior.

Copernicus presented us with a new point of view and laid the foundation for modern celestial mechanics. That view gave us the basis for understanding the formerly mysterious tracks of the sun and the planets through the heavens. A new basis for understanding is available in the area of information systems. It is achieved by a shift from a computer-centered to the database-centered point of view. This new understanding will lead to new solutions to our database problems and speed our conquest of the n-dimensional data structures which best model the complexities of the real world.

The earliest databases, initially implemented on punched cards with sequential file technology, were not significantly altered when they were moved, first from punched card to magnetic tape and then again to magnetic disk. About the only things that changed were the size of the files and the speed of processing them. In sequential file technology, search techniques are well established. Start with the value of the primary data key of the record of interest, and pass each record in the file through core memory until the desired record, or one with a higher key, is found. (A primary data key is a field within a record which makes that record unique within the file.) Social security numbers, purchase order numbers, insurance policy numbers, and bank account numbers are all primary data keys. Almost without exception, they are synthetic attributes specifically designed and created for the purpose of uniqueness. Natural attributes, e.g., names of people
and places, dates, time, and quantities, are not assuredly unique and thus cannot be used.

The availability of direct access storage devices laid the foundation for the Copernican-like change in viewpoint. The directions of 'in' and 'out' were reversed. Where the input notion of the sequential file world meant 'into the computer from tape,' the new input notion became 'into the database.' This revolution in thinking is changing the programmer from a stationary viewer of objects passing before him in core into a mobile navigator who is able to probe and traverse a database at will.

Direct access storage devices also opened up new ways of record retrieval by primary data key. The first was called randomizing, calculated addressing, or hashing. It involved processing the primary data key with a specialized algorithm, the output of which identified a preferred storage location for that record. If the record sought was not found in the preferred location, then an overflow algorithm was used to search places where the record alternately would have been stored, if it existed at all. Overflow is created when the preferred location is full at the time the record was originally stored. As an alternative to the randomizing technique, the index sequential access technique was developed. It also used the primary data key to control the storage and retrieval of records, and did so through the use of multilevel indices.

The programmer who has advanced from sequential file processing to either index sequential or randomized access processing has greatly reduced his access time because he can now probe for a record without sequentially passing all the intervening records in the file. However, he is still in a one-dimensional world as he is dealing with only one primary data key, which is his sole means of controlling access.

From this point, I want to begin the programmer's training as a full-fledged navigator in an n-dimensional data space. However, before I can successfully describe this process, I want to review what 'database management' is. It involves all aspects of storing, retrieving, modifying, and deleting data in the files on personnel and production, airline reservations, or laboratory experiments - data which is used repeatedly and updated as new information becomes available. These files are mapped through some storage structure onto magnetic tapes or disk packs and the drives that support them.

Database management has two main functions. First is the inquiry or retrieval activity that reaccesses previously stored data in order to determine the recorded status of some real world entity or relationship. This data has previously been stored by some other job, seconds, minutes, hours, or even days earlier, and has been held in trust by the database management system. A database management system has a continuing responsibility to maintain data between the time when it
was stored and the time it is subsequently required for retrieval. This retrieval activity is designed to produce the information necessary for decision making.

Part of the inquiry activity is report preparation. In the early years of sequential access storage devices and the resultant batch processing, there was no viable alternative to the production of massive file dumps formatted as reports. Spontaneous requirements to examine a particular checking account balance, an inventory balance, or a production plan could not be handled efficiently because the entire file had to be passed to extract any data. This form of inquiry is now diminishing in relative importance and will eventually disappear except for archival purposes or to satisfy the appetite of a parkinsonian bureaucracy.

The second activity of database management is to update, which includes the original storage of data, its repeated modification as things change, and ultimately, its deletion from the system when the data is no longer needed. The updating activity is a response to the changes in the real world which must be recorded. The hiring of a new employee would cause a new record to be stored. Reducing available stock would cause an inventory record to be modified. Cancelling an airline reservation would cause a record to be deleted. All of these are recorded and updated in anticipation of future inquiries.

The sorting of files has been a big user of computer time. It was used in sorting transactions prior to batch sequential update and in the preparation of reports. The change to transaction-mode updating and on-demand inquiry and report preparation is diminishing the importance of sorting at the file level.

Let us now return to our story concerning the programmer as navigator. We left him using the randomizing or the index sequential technique to expedite either inquiry or update of a file based upon a primary data key. In addition to a record's primary key, it is frequently desirable to be able to retrieve records on the basis of the value of some other fields. For example, it may be desirable, in planning ten-year awards, to select all the employee records with the 'year-of-hire' field value equal to 1964. Such access is retrieval by secondary data key. The actual number of records to be retrieved by a secondary key is unpredictable and may vary from zero to possibly the entire file. By contrast, a primary data key will retrieve a maximum of one record.

With the advent of retrieval on secondary data keys, the previously one-dimensional data space received additional dimensions equal to the number of fields in the record. With small or medium-sized files, it is feasible for a database system to index each record in the file on every field in the record. Such totally indexed files are classified as inverted files. In large active files, however, it is not economical to index every
field. Therefore, it is prudent to select the fields whose content will be frequently used as a retrieval criterion and to create secondary indices for those fields only.

The distinction between a file and a database is not clearly established. However, one difference is pertinent to our discussion at this time. In a database, it is common to have several or many different kinds of records. For an example, in a personnel database there might be employee records, department records, skill records, deduction records, work history records, and education records. Each type of record has its own unique primary data key, and all of its other fields are potential secondary data keys. In such a database the primary and secondary keys take on an interesting relationship when the primary key of one type of record is the secondary key of another type of record. Returning to our personnel database as an example - the field named 'department code' appears in both the employee record and the department record. It is one of several possible secondary data keys of the employee records and the single primary data key of the department records.

This equality of primary and secondary data key fields reflects real world relationships and provides a way to reestablish these relationships for computer processing purposes. The use of the same data value as a primary key for one record and as a secondary key for a set of records is the basic concept upon which data structure sets are declared and maintained. The Integrated Data Store (I-D-S) system and all other systems based on its concepts consider their basic contribution to the programmer to be the capability to associate records into data structure sets and the capability to use these sets as retrieval paths. All the COBOL Database Task Group systems implementations fall into this class.

There are many benefits gained in the conversion from several files, each with a single type of record, to a database with several types of records and database sets. One such benefit results from the significant improvement in performance that accrues from using the database sets in lieu of both primary and secondary indices to gain access to all the records with a particular data key value. With database sets, all redundant data can be eliminated, reducing the storage space required. If redundant data is deliberately maintained to enhance retrieval performance at the cost of maintenance, then the redundant data can be controlled to ensure that the updating of a value in one record will be properly reflected in all other appropriate records. Performance is enhanced by the so-called 'clustering' ability of databases, where the owner and some or most of the member records of a set are physically stored and accessed together on the same block or page. These systems have been running in virtual memory since 1962. Another significant functional and performance advantage is to be able to specify the order of retrieval of the records within a set based upon a declared sort field or the time of insertion.
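To make the owner/member idea concrete, here is a hedged sketch in Python - a toy data model of my own, not the I-D-S or CODASYL interface. A department record owns a set of employee member records, the shared department code ties them together, and navigation runs both from owner to members and from any member back to its owner. The record contents are invented for the example.

# Toy records keyed by their primary data keys.
departments = {'D01': {'dept_code': 'D01', 'dept_name': 'Research'}}
employees = {
    'E1': {'emp_no': 'E1', 'name': 'Smith', 'dept_code': 'D01'},
    'E2': {'emp_no': 'E2', 'name': 'Jones', 'dept_code': 'D01'},
}

# A data-structure set: the owner's primary key equals the members' secondary key.
def members_of(dept_code):
    """Owner-to-member navigation: all employees of one department."""
    return [e for e in employees.values() if e['dept_code'] == dept_code]

def owner_of(emp_no):
    """Member-to-owner navigation: an employee's department record."""
    return departments[employees[emp_no]['dept_code']]

print([e['name'] for e in members_of('D01')])   # ['Smith', 'Jones']
print(owner_of('E2')['dept_name'])              # 'Research'

In an actual set implementation the members would be chained or clustered with their owner rather than found by scanning, but the navigational moves shown here (owner to members, member to owner) are among the access opportunities enumerated next.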
In order to focus the role of programmer as navigator, let us enumerate his opportunities for record access. These represent the commands that he can give to the database system - singly, multiply, or in combination with each other - as he picks his way through the data to resolve an inquiry or to complete an update.

1. He can start at the beginning of the database, or at any known record, and sequentially access the 'next' record in the database until he reaches a record of interest or reaches the end.
2. He can enter the database with a database key that provides direct access to the physical location of a record. (A database key is the permanent virtual memory address assigned to a record at the time that it was created.)
3. He can enter the database in accordance with the value of a primary data key. (Either the indexed sequential or randomized access techniques will yield the same result.)
4. He can enter the database with a secondary data key value and sequentially access all records having that particular data value for the field.
5. He can start from the owner of a set and sequentially access all the member records. (This is equivalent to converting a primary data key into a secondary data key.)
6. He can start with any member record of a set and access either the next or prior member of that set.
7. He can start from any member of a set and access the owner of the set, thus converting a secondary data key into a primary data key.

Each of these access methods is interesting in itself, and all are very useful. However, it is the synergistic usage of the entire collection which gives the programmer great and expanded powers to come and go within a large database while accessing only those records of interest in responding to inquiries and updating the database in anticipation of future inquiries.

Imagine the following scenario to illustrate how processing a single transaction could involve a path through the database. The transaction carries with it the primary data key value or database key of the record that is to be used to gain an entry point into the database. That record would be used to gain access to other records (either owner or members) of a set. Each of these records is used in turn as a point of departure to examine another set.

For example, consider a request to list the employees of a particular department when given its departmental code. This request could be supported by a database containing only two different types of records: personnel records and department records. For simplicity purposes, the department record can be envisioned as having only two fields: the department code, which is the primary data key; and the department name, which is descriptive. The personnel record can be envisioned
as having only three fields: the employee number, which is the primary data key for the record; the employee name, which is descriptive; and the employee's department code, which is a secondary key which controls set selection and the record's placement in a set.

The joint usage of the department code by both records and the declaration of a set based upon this data key provide the basis for the creation and maintenance of the set relationship between a department record and all the records representing the employees of that department. Thus the usage of the set of employee records provides the mechanism to readily list all the employees of a particular department following the primary data key retrieval of the appropriate department record. No other record or index need be accessed.

The addition of the department manager's employee number to the department record greatly extends the navigational opportunities and provides the basis for a second class of sets. Each occurrence of this new class includes the department records for all the departments managed by a particular employee. A single employee number or department code now provides an entry point into an integrated data structure of an enterprise. Given an employee number, and the set of records of departments managed, all the departments which he manages can be listed. The personnel of each such department can be further listed. The question of departments managed by each of these employees can be asked repeatedly until all the subordinate employees and departments have been displayed. Inversely, the same data structure can easily identify the employee's manager, the manager's manager, and the manager's manager's manager, and so on, until the company president is reached.

There are additional risks and adventures ahead for the programmer who has mastered operation in the n-dimensional data space. As navigator he must brave dimly perceived shoals and reefs in his sea, which are created because he has to navigate in a shared database environment. There is no other obvious way for him to achieve the required performance.

Shared access is a new and complex variation of multiprogramming or time sharing, which were invented to permit shared, but independent, use of the computer resources. In multiprogramming, the programmer of one job doesn't know or care that his job might be sharing the computer, as long as he is sure that his address space is independent of that of any other programs. It is left to the operating system to assure each program's integrity and to make the best use of the memory, processor, and other physical resources. Shared access is a specialized version of multiprogramming where the critical, shared resources are the records of the database. The database records are fundamentally different than either main storage or the processor because their data fields change value through update and do not return to their original condition afterward. Therefore, a job that repeatedly
uses a database record may find that record's content or set membership has changed since the last time it was accessed. As a result, an algorithm attempting a complex calculation may get a somewhat unstable picture. Imagine attempting to converge on an iterative solution while the variables are being randomly changed! Imagine attempting to carry out a trial balance while someone is still posting transactions to the accounts! Imagine two concurrent jobs in an airline reservations system trying to sell the last seat on a flight!

One's first reaction is that this shared access is nonsense and should be forgotten. However, the pressures to use shared access are tremendous. The processors available today and in the foreseeable future are expected to be much faster than are the available direct access storage devices. Furthermore, even if the speed of storage devices were to catch up with that of the processors, two more problems would maintain the pressure for successful shared access. The first is the trend toward the integration of many single purpose files into a few integrated databases; the second is the trend toward interactive processing where the processor can only advance a job as fast as the manually created input messages allow. Without shared access, the entire database would be locked up until a batch program or transaction and its human interaction had terminated.

The performance of today's direct access storage devices is greatly affected by patterns of usage. Performance is quite slow if the usage is an alternating pattern of access, process, access, process, ..., where each access depends upon the interpretation of the prior one. When many independent accesses are generated through multiprogramming, they can often be executed in parallel because they are directed toward different storage devices. Furthermore, when there is a queue of requests for access to the same device, the transfer capacity for that device can actually be increased through seek and latency reduction techniques. This potential for enhancing throughput is the ultimate pressure for shared access.

Of the two main functions of database management, inquiry and update, only update creates a potential problem in shared access. An unlimited number of jobs can extract data simultaneously from a database without trouble. However, once a single job begins to update the database, a potential for trouble exists. The processing of a transaction may require the updating of only a few records out of the thousands or possibly millions of records within a database. On that basis, hundreds of jobs could be processing transactions concurrently and actually have no collisions. However, the time will come when two jobs will want to process the same record simultaneously.

The two basic causes of trouble in shared access are interference and contamination. Interference is defined as the negative effect of the updating activity of one job upon the results of another. The example I have given of one job running an accounting trial balance while
another was posting transactions illustrates the interference problem. When a job has been interfered with, it must be aborted and restarted to give it another opportunity to develop the correct output. Any output of the prior execution must also be removed because new output will be created.

Contamination is defined as the negative effect upon a job which results from a combination of two events: when another job has aborted and when its output (i.e., changes to the database or messages sent) has already been read by the first job. The aborted job and its output will be removed from the system. Moreover, the jobs contaminated by the output of the aborted job must also be aborted and restarted so that they can operate with correct input data.

A critical question in designing solutions to the shared access problem is the extent of visibility that the application programmer should have. The Weyerhaeuser Company's shared access version of I-D-S was designed on the premise that the programmer should not be aware of shared access problems. That system automatically blocks each record updated and every message sent by a job until that job terminates normally, thus eliminating the contamination problem entirely. One side effect of this dynamic blocking of records is that a deadlock situation can be created when two or more jobs each want to wait for the other to unblock a desired record. Upon detecting a deadlock situation, the I-D-S database system responds by aborting the job that created the deadlock situation, by restoring the records updated by that job, and by making those records available to the jobs waiting. The aborted job, itself, is subsequently restarted.

Do these deadlock situations really exist? The last I heard, about 10 percent of all jobs started in Weyerhaeuser's transaction-oriented system had to be aborted for deadlock. Approximately 100 jobs per hour were aborted and restarted. Is this terrible? Is this too inefficient? These questions are hard to answer because our standards of efficiency in this area are not clearly defined. Furthermore, the results are application-dependent. The Weyerhaeuser I-D-S system is 90 percent efficient in terms of jobs successfully completed. However, the real questions are:

- Would the avoidance of shared access have permitted more or fewer jobs to be completed each hour?
- Would some other strategy, based upon detecting rather than avoiding contamination, have been more efficient?
- Would making the programmer aware of shared access permit him to program around the problem and thus raise the efficiency?

All these questions are beginning to impinge on the programmer as navigator and on the people who design and implement his navigational aids. My proposition today is that it is time for the application programmer to abandon the memory-centered view, and to accept the challenge and opportunity of navigation within an n-dimensional data space. The
software systems needed to support such capabilities exist today and are becoming increasingly available.

Bertrand Russell, the noted English mathematician and philosopher, once stated that the theory of relativity demanded a change in our imaginative picture of the world. Comparable changes are required in our imaginative picture of the information system world. The major problem is the reorientation of the thinking of data processing people. This includes not only the programmer but also the application system designers who lay out the basic application programming tasks and the product planners and the system programmers who will create tomorrow's operating system, message system, and database system products.

Copernicus laid the foundation for the science of celestial mechanics more than 400 years ago. It is this science which now makes possible the minimum energy solutions we use in navigating our way to the moon and the other planets. A similar science must be developed which will yield corresponding minimum energy solutions to database access. This subject is doubly interesting because it includes not only the problems of traversing an existing database, but also the problems of how to build one in the first place and how to restructure it later to best fit the changing access patterns. Can you imagine restructuring our solar system to minimize the travel time between the planets?

It is important that these mechanics of data structures be developed as an engineering discipline based upon sound design principles. It is important that it can be taught and is taught. The equipment costs of the database systems to be installed in the 1980's have been estimated at $100 billion (on a 1970 basis of value). It has further been estimated that the absence of effective standardization could add 20 percent, or $20 billion, to the bill. Therefore, it is prudent to dispense with the conservatism, the emotionalism, and the theological arguments which are currently slowing progress.

The universities have largely ignored the mechanics of data structures in favor of problems which more nearly fit a graduate student's thesis requirement. Big database systems are expensive projects which university budgets simply cannot afford. Therefore, it will require joint university/industry and university/government projects to provide the funding and staying power necessary to achieve progress. There is enough material for a half dozen doctoral theses buried in the Weyerhaeuser system waiting for someone to come and dig it out. By this I do not mean research on new randomizing algorithms. I mean research on the mechanics of nearly a billion characters of real live business data organized in the purest data structures now known.

The publication policies of the technical literature are also a problem.
The ACM SIGBDP and SIGFIDET publications are the best available, and membership in these groups should grow. The refereeing rules and practices of Communications of the ACM result in delays of one year
to 18 months between submittal and publication. Add to that the time for the author to prepare his ideas for publication and you have at least a two-year delay between the detection of significant results and their earliest possible publication.

Possibly the greatest single barrier to progress is the lack of general database information within a very large portion of the computer users, resulting from the domination of the market by a single supplier. If this group were to bring to bear its experience, requirements, and problem-solving capabilities in a completely open exchange of information, the rate of change would certainly increase. The recent action of SHARE to open its membership to all vendors and all users is a significant step forward. The SHARE-sponsored Working Conference on Database Systems held in Montreal in July (1973) provided a forum so that users of all kinds of equipment and database systems could describe their experiences and their requirements. The widening dialog has started. I hope and trust that we can continue it. If approached in this spirit, where no one organization attempts to dominate the thinking, then I am sure that we can provide the programmer with effective tools for navigation.
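As an editorial illustration of the department/employee example and the owner/member set navigation discussed earlier in the lecture, the following sketch shows the idea in modern terms. It is a hypothetical Python rendering, not the I-D-S interface; the names (Record, connect, members, owner) are inventions for this illustration only.

```python
# Minimal sketch (not I-D-S): owner/member "sets" linking a department record
# to its employee records, navigated record-at-a-time as described above.

class Record:
    def __init__(self, **fields):
        self.fields = fields     # data fields (primary and secondary keys, etc.)
        self.members = []        # records owned via a set (owner -> members)
        self.owner = None        # back-pointer from a member to its set owner

def connect(owner, member):
    """Place a member record into the set owned by 'owner'."""
    member.owner = owner
    owner.members.append(member)

# A tiny database: one department record and two personnel records.
dept = Record(dept_code="D42", dept_name="Forestry Planning")
e1 = Record(emp_no=1001, name="A. Smith", dept_code="D42")
e2 = Record(emp_no=1002, name="B. Jones", dept_code="D42")
connect(dept, e1)    # the secondary key dept_code controls set membership
connect(dept, e2)

# Access method 5: from the owner of a set, access all member records.
for emp in dept.members:
    print(emp.fields["name"])

# Access method 7: from any member, access the owner of the set.
print(e1.owner.fields["dept_name"])
```

No index beyond the set pointers is consulted: once the department record is retrieved by its primary key, the employees are reached by following the set, which is the point of the example.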
Related articles are:

The evolution of storage structures. Comm. ACM 15, 7 (July 1972), 628-634.
Architectural Definition Technique: its objectives, theory, process, facilities and practice (with J. Bouvard). Proc. 1972 ACM SIGFIDET Workshop on Data Description, Access and Control, pp. 257-280.
Data space mapped into three dimensions: a viable model for studying data structures. Data Base Management Rep., InfoTech Information Ltd., Berkshire, U.K., 1973.
A direct access system with procedurally generated data structuring capability (with S. Brewer). Honeywell Comput. J. (to appear).

Categories and Subject Descriptors: H.2.2 [Database Management]: Physical Design - access methods; H.2.4 [Database Management]: Systems - transaction processing; H.3.2 [Information Storage and Retrieval]: Information Storage - file organization; H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval - retrieval models

General Terms: Algorithms, Design, Performance

Additional Key Words and Phrases: Contamination, interference
Postscript
The Programmer as Navigator, Architect, Communicator, Modeler, Collaborator, and Supervisor

CHARLES W. BACHMAN
Bachman Information Systems, Inc.

Thirteen years have passed since the writing of the Turing Award paper entitled 'The Programmer as Navigator.' Databases have become common, even popular. Some programmers navigate. Others join. I have spent considerable effort in arguing the merits of the network (CODASYL) data model and in extending it for greater modeling power.1,2,3,4 Arguments and debates concerning data models waxed hot and heavy and have now pretty much simmered down. Today, the only reasonable consensus is that one can do useful work with DBMSs based upon any of the popular data models, even with those DBMSs that have no apparent affinity to any particular data model.
The Programmer as Architect

The study of the architecture of computer-based information systems has progressed well in this period. Two projects, important in their own right, were instrumental in bringing this subject to the forefront.

The ANSI/X3/SPARC Study Group on Database Management (1972-1977) reported5 its architecture of data storage and retrieval. This was one of the first attempts to clearly understand and document the layers of software and human activity involved in the process of data storage and retrieval. It went further and identified and described the interfaces between the various software modules and between them and their human counterparts (administrators, database designers, and programmers). It was significant that this report identified both administrative and run-time interfaces. This project was instrumental in establishing the concept of a conceptual schema6 as a higher level abstraction of information structure definitions, which is independent of data representation.

1 Bachman, C. W. Why restrict the modeling capability of the CODASYL data structure sets? In Proceedings of the AFIPS National Computer Conference, vol. 46. AFIPS Press, Reston, Va., 1977.
2 Bachman, C. W., and Daya, M. The role concept in data models. In Proceedings of the 3rd Very Large Database Conference, 1977.
3 Bachman, C. W. The structuring capabilities of the molecular data model (partnership data model). In Entity-Relationship Approach to Software Engineering. Elsevier Science, New
York, 1983.
4 Bachman, C. W. The partnership data model. Presented at the Fall 1983 IEEE Computer Conference (Washington, D.C.).
5 ANSI/X3/SPARC Study Group - Database Management Systems. Framework Report on Database Management Systems. AFIPS Press, Reston, Va., 1978.
6 ISO/TC97/SC5/WG3. Concepts and terminology for the conceptual schema. January 15,
1981. Author's address: Bachman Information Systems, Inc., 4 Cambridge Center, Cambridge, MA 02142.
The Programmer as Communicator

The International Organization for Standardization, through its ISO/TC97/SC16, established (1979-1982) the Reference Model for Open Systems Interconnection. This Reference Model is an architectural master plan for data communications established as an international standard7 with the intent that it be the controlling and integrating standard for a series of more detailed standards to follow. This architecture identified seven layers of processing involved in and supporting communication between application processes. Each layer was specified in terms of its 'administrative entities,'8 'processing entities,' 'services,' and 'protocols.' For the processing entities of each layer, there were four important interfaces to be established and standardized:

(1) the services that a processing entity offers to the processing entities in the layer immediately above;
(2) the communication protocol by which a processing entity communicates with other processing entities in the same layer;
(3) the use, by the processing entities of one layer, of the services provided by the processing entities of the layer immediately below;
(4) the administrative protocol by which a processing entity is controlled by the administrative entities within the same layer.

The detailed standards, developed subsequently for each layer, spell out the individual protocols, services, and service usage.

The vision and scope of this work can be seen in part by reviewing some of the discussions relating to addressability. How large should the address space be to identify all the processing entities that might wish to communicate with one another? One discussion followed this scenario: There will be close to 10 billion people in the world by the end of the year 2000 (10 billion addresses). Assume that, on the average, 100 robots will be working for each of these people (1 trillion addresses). Plan for unforeseen contingencies and a useful address space life of 25 years; so multiply by 10 (10 trillion addresses). Assume that the assignment of addresses is made through the political processes starting with the United Nations and that 99 percent of the addresses are effectively unavailable for application-level communications (1 quadrillion addresses). Thus 1 quadrillion addresses is about the right order of magnitude for the address space being considered. This is a 1 followed by 15 zeros in the decimal system, or a 1 followed by approximately 50 zeros in the binary system.

This year the work on ISO standards for Open Systems Interconnection has received a great boost in support in the United States by the creation of COS (Corporation for Open Systems). COS is an industry-wide organization of users, carriers, and manufacturers formed to encourage the implementation of the ISO standards and to provide the testing environment so that a new or revised implementation can be validated for adherence to the ISO standards.

7 ISO. Computers and Information Systems - Open Systems Interconnection Reference Model. Standard 7498. American National Standards Institute, New York, N.Y.
8 The word 'entity' is used in the ISO/TC97 world to mean an active element that plays some part in the communication process. I have used the adjectives 'processing' and 'administrative' to distinguish the communication-time entities from the set-up-time entities. This usage of the word entity contrasts with its use in the data modeling world, where the word entity means something that exists and about which something is known.
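The order-of-magnitude estimate in the addressing scenario above can be checked by multiplying out the stated factors. The short calculation below is an editorial aside, not part of the original text; the variable names are inventions for the illustration.

```python
# Re-deriving the OSI address-space estimate from the scenario above.
import math

people = 10**10           # ~10 billion people by the year 2000
robots_per_person = 100   # assumed robots acting on each person's behalf
contingency = 10          # margin for a 25-year address-space lifetime
political_overhead = 100  # only 1% of addresses usable, so multiply by 100

addresses = people * robots_per_person * contingency * political_overhead
print(addresses)             # 1_000_000_000_000_000, i.e. 10**15 (one quadrillion)
print(math.log2(addresses))  # ~49.8, i.e. roughly 50 binary digits
```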
The author, in his capacity as the chairman of ISO/TC97/SC16 reporting to ISO/TC97, recommended to TC97 that it develop a 'reference model for computer-based information systems.'9,10 This extended reference model would be used to place all of ISO/TC97's work on computers and information systems into perspective and thus highlight the areas most critical for further standardization. In 1984-1985, ISO/TC97 reorganized its committee structure, creating a new subcommittee, ISO/TC97/SC21, which has assumed the former responsibilities of SC16 and has been given the additional responsibility of defining the architecture of data storage and retrieval. With time this responsibility should grow to include the aspects of data integrity and data security, since it is not possible to create a complete architecture for data storage and retrieval and data communication without their being integrated with the aspects of integrity and security.
The Programmer as Modeler

I have invested a good deal of my time in these 13 years in extending the conceptual schema work of the ANSI/SPARC Study Group on DBMS, joining it with my work on data communications and formal description techniques. The scope of the original conceptual schema work was limited to the information that existed in the business and to its data formats as stored in files and databases (internal schema) and as viewed by programs (external schema). My goal was to extend this abstraction to include descriptions of all the active agents (people, computer programs, and physical processes) that were the users of the information, the communication paths that they use, and the messages that are exchanged. I wanted to extend this abstraction further to include the rules that governed the behavior of the users of the information. These extended conceptual schemata have been called 'enterprise models' or 'business models.'

Why build a business model? First, as a means of defining the information processing requirements for an organization in a manner that is equally clear to the user community and to the data processing community. Second, to provide the basis for automating the process of generating application software. I define the term application software to include database and file descriptions, the application programs, and the environmental control parameters required to install the required files and programs in the computers and to control their operation.

The step of translating a business model into the set of application software required to support that model is the step of translating the what of the business world into the how of the computer and communications world. This translation requires three additional elements over and above the business model as the formal specification:

1. It requires information about the quantities, rates, and response times that must be satisfied.
2. It requires information about the available processors, storage, and communication hardware and information about the available compilers, DBMSs, communication systems, transaction monitors, and operating systems.
9 Bachman, C. W. The context of open systems interconnection within computer-based information systems. In Proceedings of Gesellschaft fur Informatik, Jan. 1980.
10 Bachman, C. W., and Ross, R. G. Toward a more complete reference model of computer-based information systems. J. Comput. Standards 1 (1982); also published in Comput. Networks 6 (1982).
3. It also requires the expertise to understand the operating and performance characteristics of the available software and hardware options and how to best use them to meet the functional and quantitative requirements in a cost-effective way.

This performance and optimization expertise has been embodied in the persons of real people: the database designers, application programmers, and system programmers. The best of them are very, very good, but the work of many has been disappointing. All these activities are expensive and more time consuming than anyone would wish.
The Programmer as Collaborator

This shortage of good people has started us looking for a means of automating the work of database designers and systems and application programmers. This automation is difficult, as the process of translating the business model into efficient application software is not completely deterministic. There are frequently several alternative approaches with different dynamics and costs. Real expertise and judgment are involved. This difficulty has led to the examination of the tools and techniques coming out of the world of artificial intelligence, where there has been an emphasis on domains of imperfect knowledge. The AI world, with its knowledge-based software systems, has considerable experience developing interactive systems, where a resident human expert can collaborate with a 'cloned' expert, which is built into the software, to achieve some otherwise difficult task. Together they can carry out all the needed translations between the conceptual level of abstraction and the physical level, taking into consideration the performance problems and opportunities.
The Programmer as Supervisor

It is reasonable to think that these cloned experts, who are embodied in knowledge-based (expert) systems, will improve with time. As this happens, the role of the resident human expert (database designer, application programmer, or systems programmer) will progressively shift from that of a collaborator with the knowledge-based system to that of the supervisor. This supervisor will be responsible for checking the work of the knowledge-based system, to see that it has covered all modes of operation and all likely operating conditions. After checking and requesting any appropriate modifications, the human expert as supervisor will be required to countersign the final design, just as the engineering supervisor countersigns the work of the engineering staff. In business information systems, nothing goes into production without its being reviewed and someone's taking responsibility for it.
Summary

It is somewhat poetic to see the functional joining of database technology with AI technology. Poetic, because the early (1960) documentation of list processing in the artificial intelligence literature provided the basis for the linked lists used as the first and still most prevalent implementation mode for databases. The confusion between the concept and the most prevalent implementation mode of the data structure set has been troublesome. There are a number of well-known techniques11 for implementing data structure sets, each with its own
11 Bachman, C. W. Implementation of techniques for data structure sets. In Proceedings of SHARE Workshop on Data Base Systems (Montreal, Canada, July 1973).
performance characteristics, while maintaining the functional characteristics
of the set. It will be interesting to see whether the knowledge and implementation expertise of the database world will be able to make a significant contribution to the LISP and AI world as it reaches for commercial applications where the knowledge bases are large and concurrently shared among many distributed, cooperating AI workstations. Here performance and responsiveness are tied to the successful operation of shared virtual memories for knowledge-base purposes.
Computer Science as Empirical Inquiry: Symbols and Search

ALLEN NEWELL and HERBERT A. SIMON

The 1975 ACM Turing Award was presented jointly to Allen Newell and Herbert A. Simon at the ACM Annual Conference in Minneapolis, October 20. In introducing the recipients, Bernard A. Galler, Chairman of the Turing Award Committee, read the following citation:

'It is a privilege to be able to present the ACM Turing Award to two friends of long standing, Professors Allen Newell and Herbert A. Simon, both of Carnegie-Mellon University.

'In joint scientific efforts extending over twenty years, initially in collaboration with J. C. Shaw at the RAND Corporation, and subsequently with numerous faculty and student colleagues at Carnegie-Mellon University, they have made basic contributions to artificial intelligence, the psychology of human cognition, and list processing.

'In artificial intelligence, they contributed to the establishment of the field as an area of scientific endeavor, to the development of heuristic programming generally, and of heuristic search, means-ends analysis, and methods of induction, in particular, providing demonstrations of the sufficiency of these mechanisms to solve interesting problems.

'In psychology, they were principal instigators of the idea that human cognition can be described in terms of a symbol system, and they have developed detailed theories for human problem solving, verbal learning and inductive behavior in a number of task domains, using computer programs embodying these theories to simulate the human behavior.

'They were apparently the inventors of list processing, and have been major contributors to both software technology and the development of the concept of the computer as a system of manipulating symbolic structures and not just as a processor of numerical data.

'It is an honor for Professors Newell and Simon to be given this award, but it is also an honor for ACM to be able to add their names to our list of recipients, since by their presence, they will add to the prestige and importance of the ACM Turing Award.'

Authors' present address: A. Newell, Department of Computer Science, and H. A. Simon, Department of Psychology, Carnegie-Mellon University, Pittsburgh, PA 15213.

Computer science is the study of the phenomena surrounding computers. The founders of this society understood this very well when they called themselves the Association for Computing Machinery. The machine - not just the hardware, but the programmed, living machine - is the organism we study.

This is the tenth Turing Lecture. The nine persons who preceded us on this platform have presented nine different views of computer science, for our organism, the machine, can be studied at many levels and from many sides. We are deeply honored to appear here today and to present yet another view, the one that has permeated the scientific work for which we have been cited. We wish to speak of computer science as empirical inquiry.

Our view is only one of many; the previous lectures make that clear. However, even taken together the lectures fail to cover the whole scope of our science. Many fundamental aspects of it have not been represented in these ten awards. And if the time ever arrives, surely not soon, when the compass has been boxed, when computer science has been discussed from every side, it will be time to start the cycle again. For the hare as lecturer will have to make an annual sprint to overtake the cumulation of small, incremental gains that the tortoise of scientific and technical development has achieved in his steady march. Each year will create a new gap and call for a new sprint, for in science there is no final word.

Computer science is an empirical discipline. We would have called it an experimental science, but like astronomy, economics, and geology, some of its unique forms of observation and experience do not fit a narrow stereotype of the experimental method. Nonetheless, they are experiments. Each new machine that is built is an experiment. Actually constructing the machine poses a question to nature; and we listen for the answer by observing the machine in operation and analyzing it by all analytical and measurement means available. Each new program that is built is an experiment. It poses a question to nature, and its behavior offers clues to an answer. Neither machines nor programs are black boxes; they are artifacts that have been designed, both hardware and software, and we can open them up and look inside. We can relate their structure to their behavior and draw many lessons from a single experiment. We don't have to build 100 copies
of, say, a theorem prover, to demonstrate statistically that it has not overcome the combinatorial explosion of search in the way hoped for. Inspection of the program in the light of a few runs reveals the flaw and lets us proceed to the next attempt.

We build computers and programs for many reasons. We build them to serve society and as tools for carrying out the economic tasks of society. But as basic scientists we build machines and programs as a way of discovering new phenomena and analyzing phenomena we already know about. Society often becomes confused about this, believing that computers and programs are to be constructed only for the economic use that can be made of them (or as intermediate items in a developmental sequence leading to such use). It needs to understand that the phenomena surrounding computers are deep and obscure, requiring much experimentation to assess their nature. It needs to understand that, as in any science, the gains that accrue from such experimentation and understanding pay off in the permanent acquisition of new techniques; and that it is these techniques that will create the instruments to help society in achieving its goals.

Our purpose here, however, is not to plead for understanding from an outside world. It is to examine one aspect of our science, the development of new basic understanding by empirical inquiry. This is best done by illustrations. We will be pardoned if, presuming upon the occasion, we choose our examples from the area of our own research. As will become apparent, these examples involve the whole development of artificial intelligence, especially in its early years. They rest on much more than our own personal contributions. And even where we have made direct contributions, this has been done in cooperation with others. Our collaborators have included especially Cliff Shaw, with whom we formed a team of three through the exciting period of the late fifties. But we have also worked with a great many colleagues and students at Carnegie-Mellon University.

Time permits taking up just two examples. The first is the development of the notion of a symbolic system. The second is the development of the notion of heuristic search. Both conceptions have deep significance for understanding how information is processed and how intelligence is achieved. However, they do not come close to exhausting the full scope of artificial intelligence, though they seem to us to be useful for exhibiting the nature of fundamental knowledge in this part of computer science.
Symbols and Physical Symbol Systems

One of the fundamental contributions to knowledge of computer science has been to explain, at a rather basic level, what symbols are. This explanation is a scientific proposition about Nature. It is empirically derived, with a long and gradual development.
Symbols lie at the root of intelligent action, which is, of course, the primary topic of artificial intelligence. For that matter, it is a primary question for all of computer science. All information is processed by computers in the service of ends, and we measure the intelligence of a system by its ability to achieve stated ends in the face of variations, difficulties and complexities posed by the task environment. This general investment of computer science in attaining intelligence is obscured when the tasks being accomplished are limited in scope, for then the full variations in the environment can be accurately foreseen. It becomes more obvious as we extend computers to more global, complex and knowledge-intensive tasks - as we attempt to make them our agents, capable of handling on their own the full contingencies of the natural world.

Our understanding of the systems requirements for intelligent action emerges slowly. It is composite, for no single elementary thing accounts for intelligence in all its manifestations. There is no 'intelligence principle,' just as there is no 'vital principle' that conveys by its very nature the essence of life. But the lack of a simple deus ex machina does not imply that there are no structural requirements for intelligence. One such requirement is the ability to store and manipulate symbols. To put the scientific question, we may paraphrase the title of a famous paper by Warren McCulloch [1961]: What is a symbol, that intelligence may use it, and intelligence, that it may use a symbol?
Laws of Qualitative Structure

All sciences characterize the essential nature of the systems they study. These characterizations are invariably qualitative in nature, for they set the terms within which more detailed knowledge can be developed. Their essence can often be captured in very short, very general statements. One might judge these general laws, due to their limited specificity, as making relatively little contribution to the sum of a science, were it not for the historical evidence that shows them to be results of the greatest importance.

The Cell Doctrine in Biology. A good example of a law of qualitative structure is the cell doctrine in biology, which states that the basic building block of all living organisms is the cell. Cells come in a large variety of forms, though they all have a nucleus surrounded by protoplasm, the whole encased by a membrane. But this internal structure was not, historically, part of the specification of the cell doctrine; it was subsequent specificity developed by intensive investigation. The cell doctrine can be conveyed almost entirely by the statement we gave above, along with some vague notions about what size a cell can be. The impact of this law on biology, however, has been tremendous, and the lost motion in the field prior to its gradual acceptance was considerable.
Plate Tectonics in Geology. Geology provides an interesting example of a qualitative structure law, interesting because it has gained acceptance in the last decade and so its rise in status is still fresh in memory. The theory of plate tectonics asserts that the surface of the globe is a collection of huge plates - a few dozen in all - which move (at geological speeds) against, over, and under each other into the center of the earth, where they lose their identity. The movements of the plates account for the shapes and relative locations of the continents and oceans, for the areas of volcanic and earthquake activity, for the deep sea ridges, and so on. With a few additional particulars as to speed and size, the essential theory has been specified. It was of course not accepted until it succeeded in explaining a number of details, all of which hung together (e.g., accounting for flora, fauna, and stratification agreements between West Africa and Northeast South America). The plate tectonics theory is highly qualitative. Now that it is accepted, the whole earth seems to offer evidence for it everywhere, for we see the world in its terms.

The Germ Theory of Disease. It is little more than a century since Pasteur enunciated the germ theory of disease, a law of qualitative structure that produced a revolution in medicine. The theory proposes that most diseases are caused by the presence and multiplication in the body of tiny single-celled living organisms, and that contagion consists in the transmission of these organisms from one host to another. A large part of the elaboration of the theory consisted in identifying the organisms associated with specific diseases, describing them, and tracing their life histories. The fact that the law has many exceptions - that many diseases are not produced by germs - does not detract from its importance. The law tells us to look for a particular kind of cause; it does not insist that we will always find it.

The Doctrine of Atomism. The doctrine of atomism offers an interesting contrast to the three laws of qualitative structure we have just described. As it emerged from the work of Dalton and his demonstrations that the chemicals combined in fixed proportions, the law provided a typical example of qualitative structure: the elements are composed of small, uniform particles, differing from one element to another. But because the underlying species of atoms are so simple and limited in their variety, quantitative theories were soon formulated which assimilated all the general structure in the original qualitative hypothesis. With cells, tectonic plates, and germs, the variety of structure is so great that the underlying qualitative principle remains distinct, and its contribution to the total theory clearly discernible.

Conclusion: Laws of qualitative structure are seen everywhere in science. Some of our greatest scientific discoveries are to be found among them. As the examples illustrate, they often set the terms on which a whole science operates.
Physical Symbol Systems

Let us return to the topic of symbols, and define a physical symbol system. The adjective 'physical' denotes two important features: (1) Such systems clearly obey the laws of physics - they are realizable by engineered systems made of engineered components; (2) although our use of the term 'symbol' prefigures our intended interpretation, it is not restricted to human symbol systems.

A physical symbol system consists of a set of entities, called symbols, which are physical patterns that occur as components of another type of entity called an expression (or symbol structure). Thus, a symbol structure is composed of a number of instances (or tokens) of symbols related in some physical way (such as one token being next to another). At any instant of time the system will contain a collection of these symbol structures. Besides these structures, the system also contains a collection of processes that operate on expressions to produce other expressions: processes of creation, modification, reproduction and destruction. A physical symbol system is a machine that produces through time an evolving collection of symbol structures. Such a system exists in a world of objects wider than just these symbolic expressions themselves.

Two notions are central to this structure of expressions, symbols, and objects: designation and interpretation.

Designation. An expression designates an object if, given the expression, the system can either affect the object itself or behave in ways dependent on the object. In either case, access to the object via the expression has been obtained, which is the essence of designation.

Interpretation. The system can interpret an expression if the expression designates a process and if, given the expression, the system can carry out the process. Interpretation implies a special form of dependent action: given an expression the system can perform the indicated process, which is to say, it can evoke and execute its own processes from expressions that designate them.

A system capable of designation and interpretation, in the sense just indicated, must also meet a number of additional requirements, of completeness and closure. We will have space only to mention these briefly; all of them are important and have far-reaching consequences. (1) A symbol may be used to designate any expression whatsoever. That is, given a symbol, it is not prescribed a priori what expressions it can designate. This arbitrariness pertains only to symbols; the symbol tokens and their mutual relations determine what object is designated by a complex expression. (2) There exist expressions that designate every
process of which the machine is capable. (3) There exist processes for creating any expression and for modifying any expression in arbitrary ways. (4) Expressions are stable; once created they will continue to exist until explicitly modified or deleted. (5) The number of expressions that the system can hold is essentially unbounded.

The type of system we have just defined is not unfamiliar to computer scientists. It bears a strong family resemblance to all general purpose computers. If a symbol manipulation language, such as LISP, is taken as defining a machine, then the kinship becomes truly brotherly. Our intent in laying out such a system is not to propose something new. Just the opposite: it is to show what is now known and hypothesized about systems that satisfy such a characterization.

We can now state a general scientific hypothesis - a law of qualitative structure for symbol systems:

The Physical Symbol System Hypothesis. A physical symbol system has the necessary and sufficient means for general intelligent action.

By 'necessary' we mean that any system that exhibits general intelligence will prove upon analysis to be a physical symbol system. By 'sufficient' we mean that any physical symbol system of sufficient size can be organized further to exhibit general intelligence. By 'general intelligent action' we wish to indicate the same scope of intelligence as we see in human action: that in any real situation behavior appropriate to the ends of the system and adaptive to the demands of the environment can occur, within some limits of speed and complexity.

The Physical Symbol System Hypothesis clearly is a law of qualitative structure. It specifies a general class of systems within which one will find those capable of intelligent action.

This is an empirical hypothesis. We have defined a class of systems; we wish to ask whether that class accounts for a set of phenomena we find in the real world. Intelligent action is everywhere around us in the biological world, mostly in human behavior. It is a form of behavior we can recognize by its effects whether it is performed by humans or not. The hypothesis could indeed be false. Intelligent behavior is not so easy to produce that any system will exhibit it willy-nilly. Indeed, there are people whose analyses lead them to conclude either on philosophical or on scientific grounds that the hypothesis is false. Scientifically, one can attack or defend it only by bringing forth empirical evidence about the natural world. We now need to trace the development of this hypothesis and look at the evidence for it.
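The designation and interpretation relations lend themselves to a very small illustration. The following Python sketch is an editorial gloss on the definitions above, not a formal model from the paper; the names (designate, interpret, the designation table) are inventions for this illustration only.

```python
# A toy "symbol system" (illustrative sketch only): symbols are strings,
# expressions are tuples of symbols, and a designation table gives the
# system access to the expression or process a symbol stands for.

designation = {}                       # symbol -> expression or executable process

def designate(symbol, obj):
    """Let 'symbol' designate an expression or a process."""
    designation[symbol] = obj

def interpret(symbol):
    """If the symbol designates a process, evoke and execute it;
    otherwise simply gain access to the designated expression."""
    obj = designation[symbol]
    if callable(obj):
        return obj()
    return obj

# An expression built from symbol tokens, and a process the system can run.
designate("GREETING", ("HELLO", "WORLD"))
designate("PRINT-GREETING", lambda: print(*interpret("GREETING")))

interpret("PRINT-GREETING")            # prints: HELLO WORLD
```

The point of the sketch is only that the same mechanism gives access to data expressions and evokes processes from expressions that designate them, which is the pairing of designation and interpretation described above.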
Development of the Symbol System Hypothesis

A physical symbol system is an instance of a universal machine. Thus the symbol system hypothesis implies that intelligence will be realized by a universal computer. However, the hypothesis goes
far beyond the argument, often made on general grounds of physical determinism, that any computation that is realizable can be realized by a universal machine, provided that it is specified. For it asserts specifically that the intelligent machine is a symbol system, thus making a specific architectural assertion about the nature of intelligent systems. It is important to understand how this additional specificity arose.

Formal Logic. The roots of the hypothesis go back to the program of Frege and of Whitehead and Russell for formalizing logic: capturing the basic conceptual notions of mathematics in logic and putting the notions of proof and deduction on a secure footing. This effort culminated in mathematical logic - our familiar propositional, first-order, and higher-order logics. It developed a characteristic view, often referred to as the 'symbol game.' Logic, and by incorporation all of mathematics, was a game played with meaningless tokens according to certain purely syntactic rules. All meaning had been purged. One had a mechanical, though permissive (we would now say nondeterministic), system about which various things could be proved. Thus progress was first made by walking away from all that seemed relevant to meaning and human symbols. We could call this the stage of formal symbol manipulation.

This general attitude is well reflected in the development of information theory. It was pointed out time and again that Shannon had defined a system that was useful only for communication and selection, and which had nothing to do with meaning. Regrets were expressed that such a general name as 'information theory' had been given to the field, and attempts were made to rechristen it as 'the theory of selective information' - to no avail, of course.

Turing Machines and the Digital Computer. The development of the first digital computers and of automata theory, starting with Turing's own work in the '30s, can be treated together. They agree in their view of what is essential. Let us use Turing's own model, for it shows the features well.

A Turing machine consists of two memories: an unbounded tape and a finite state control. The tape holds data, i.e., the famous zeroes and ones. The machine has a very small set of proper operations - read, write, and scan operations - on the tape. The read operation is not a data operation, but provides conditional branching to a control state as a function of the data under the read head.

As we all know, this model contains the essentials of all computers, in terms of what they can do, though other computers with different memories and operations might carry out the same computations with different requirements of space and time. In particular, the model of a Turing machine contains within it the notions both of what cannot be computed and of universal machines - computers that can do anything that can be done by any machine.
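A minimal rendering of the model just described may be helpful. The sketch below is an editorial illustration in Python, not part of the lecture; the transition table (which merely inverts a string of bits) is chosen arbitrarily for the example.

```python
# Minimal Turing machine sketch: an unbounded tape plus a finite-state control
# whose only operations on the tape are read, write, and move the head.

def run_turing_machine(tape, rules, state, blank="_"):
    tape = dict(enumerate(tape))       # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = tape.get(head, blank)                  # read under the head
        state, write, move = rules[(state, symbol)]     # finite-state control
        tape[head] = write                              # write
        head += 1 if move == "R" else -1                # scan right or left
    return "".join(tape[i] for i in sorted(tape) if tape[i] != blank)

# Illustrative rule table: invert every bit, halt on reaching a blank square.
rules = {
    ("invert", "0"): ("invert", "1", "R"),
    ("invert", "1"): ("invert", "0", "R"),
    ("invert", "_"): ("halt",  "_", "R"),
}

print(run_turing_machine("10110", rules, state="invert"))   # -> 01001
```

Note that, as the text emphasizes, the data on the tape are inert bit strings and the control is a small fixed table; nothing in this model yet designates anything.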
We should marvel that two of our deepest insights into information processing were achieved in the thirties, before modern computers came into being. It is a tribute to the genius of Alan Turing. It is also a tribute to the development of mathematical logic at the time, and testimony to the depth of computer science's obligation to it. Concurrently with Turing's work appeared the work of the logicians Emil Post and (independently) Alonzo Church. Starting from independent notions of logistic systems (Post productions and recursive functions, respectively) they arrived at analogous results on undecidability and universality - results that were soon shown to imply that all three systems were equivalent. Indeed, the convergence of all these attempts to define the most general class of information processing systems provides some of the force of our conviction that we have captured the essentials of information processing in these models.

In none of these systems is there, on the surface, a concept of the symbol as something that designates. The data are regarded as just strings of zeroes and ones - indeed that data be inert is essential to the reduction of computation to physical process. The finite state control system was always viewed as a small controller, and logical games were played to see how small a state system could be used without destroying the universality of the machine. No games, as far as we can tell, were ever played to add new states dynamically to the finite control - to think of the control memory as holding the bulk of the system's knowledge. What was accomplished at this stage was half the principle of interpretation - showing that a machine could be run from a description. Thus, this is the state of automatic formal symbol manipulation.

The Stored Program Concept. With the development of the second generation of electronic machines in the mid-forties (after the Eniac) came the stored program concept. This was rightfully hailed as a milestone, both conceptually and practically. Programs now can be data, and can be operated on as data. This capability is, of course, already implicit in the model of Turing: the descriptions are on the very same tape as the data. Yet the idea was realized only when machines acquired enough memory to make it practicable to locate actual programs in some internal place. After all, the Eniac had only twenty registers.

The stored program concept embodies the second half of the interpretation principle, the part that says that the system's own data can be interpreted. But it does not yet contain the notion of designation - of the physical relation that underlies meaning.

List Processing. The next step, taken in 1956, was list processing. The contents of the data structures were now symbols, in the sense of our physical symbol system: patterns that designated, that had referents. Lists held addresses which permitted access to other lists -
thus the notion of list structures. That this was a new view was demonstrated to us many times in the early days of list processing when colleagues would ask where the data were - that is, which list finally held the collections of bits that were the content of the system. They found it strange that there were no such bits, there were only symbols that designated yet other symbol structures.

List processing is simultaneously three things in the development of computer science. (1) It is the creation of a genuine dynamic memory structure in a machine that had heretofore been perceived as having fixed structure. It added to our ensemble of operations those that built and modified structure in addition to those that replaced and changed content. (2) It was an early demonstration of the basic abstraction that a computer consists of a set of data types and a set of operations proper to these data types, so that a computational system should employ whatever data types are appropriate to the application, independent of the underlying machine. (3) List processing produced a model of designation, thus defining symbol manipulation in the sense in which we use this concept in computer science today.

As often occurs, the practice of the time already anticipated all the elements of list processing: addresses are obviously used to gain access, the drum machines used linked programs (so-called one-plus-one addressing), and so on. But the conception of list processing as an abstraction created a new world in which designation and dynamic symbolic structure were the defining characteristics. The embedding of the early list processing systems in languages (the IPLs, LISP) is often decried as having been a barrier to the diffusion of list processing techniques throughout programming practice; but it was the vehicle that held the abstraction together.

LISP. One more step is worth noting: McCarthy's creation of LISP in 1959-60 [McCarthy, 1960]. It completed the act of abstraction, lifting list structures out of their embedding in concrete machines, creating a new formal system with S-expressions, which could be shown to be equivalent to the other universal schemes of computation.

Conclusion. That the concept of the designating symbol and symbol manipulation does not emerge until the mid-fifties does not mean that the earlier steps were either inessential or less important. The total concept is the join of computability, physical realizability (and by multiple technologies), universality, the symbolic representation of processes (i.e., interpretability), and, finally, symbolic structure and designation. Each of the steps provided an essential part of the whole.

The first step in this chain, authored by Turing, is theoretically motivated, but the others all have deep empirical roots. We have been led by the evolution of the computer itself. The stored program principle arose out of the experience with Eniac. List processing arose out of the attempt to construct intelligent programs. It took its cue from
the emergence of random access memories, which provided a clear physical realization of a designating symbol in the address. LISP arose out of the evolving experience with list processing.
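The idea that a list cell holds references to other structures rather than inert content can be shown in a few lines. The following is a hedged illustration in Python of LISP-style cons cells; the names (Cons, car, cdr, lisp_list) follow LISP convention but the code itself is an editorial sketch, not anything from the paper.

```python
# Sketch of list processing: a cons cell holds two references ("addresses"),
# so a list structure designates other list structures instead of holding
# a flat collection of bits.

class Cons:
    def __init__(self, car, cdr):
        self.car = car      # reference to this cell's element
        self.cdr = cdr      # reference to the rest of the list (or None)

def lisp_list(*items):
    """Build a linked list of cons cells from the given items."""
    head = None
    for item in reversed(items):
        head = Cons(item, head)
    return head

# A list whose first element is itself another list structure.
inner = lisp_list("A", "B")
outer = lisp_list(inner, "C")

print(outer.car.car)        # "A" - two references are followed to reach a symbol
print(outer.cdr.car)        # "C"
```

Asking "where the data are" in such a structure has no simple answer, which is exactly the puzzlement of early colleagues that the text recounts: there are only references designating further structures.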
The Evidence

We come now to the evidence for the hypothesis that physical symbol systems are capable of intelligent action, and that general intelligent action calls for a physical symbol system. The hypothesis is an empirical generalization and not a theorem. We know of no way of demonstrating the connection between symbol systems and intelligence on purely logical grounds. Lacking such a demonstration, we must look at the facts. Our central aim, however, is not to review the evidence in detail, but to use the example before us to illustrate the proposition that computer science is a field of empirical inquiry. Hence, we will only indicate what kinds of evidence there is, and the general nature of the testing process.

The notion of physical symbol system had taken essentially its present form by the middle of the 1950's, and one can date from that time the growth of artificial intelligence as a coherent subfield of computer science. The twenty years of work since then has seen a continuous accumulation of empirical evidence of two main varieties. The first addresses itself to the sufficiency of physical symbol systems for producing intelligence, attempting to construct and test specific systems that have such a capability. The second kind of evidence addresses itself to the necessity of having a physical symbol system wherever intelligence is exhibited. It starts with Man, the intelligent system best known to us, and attempts to discover whether his cognitive activity can be explained as the working of a physical symbol system. There are other forms of evidence, which we will comment upon briefly later, but these two are the important ones. We will consider them in turn. The first is generally called artificial intelligence, the second, research in cognitive psychology.

Constructing Intelligent Systems. The basic paradigm for the initial testing of the germ theory of disease was: identify a disease; then look for the germ. An analogous paradigm has inspired much of the research in artificial intelligence: identify a task domain calling for intelligence; then construct a program for a digital computer that can handle tasks in that domain. The easy and well-structured tasks were looked at first: puzzles and games, operations research problems of scheduling and allocating resources, simple induction tasks. Scores, if not hundreds, of programs of these kinds have by now been constructed, each capable of some measure of intelligent action in the appropriate domain.

Of course intelligence is not an all-or-none matter, and there has been steady progress toward higher levels of performance in specific domains, as well as toward widening the range of those domains. Early
chess programs, for example, were deemed successful if they could play the game legally and with some indication of purpose; a little later, they reached the level of human beginners; within ten or fifteen years, they began to compete with serious amateurs. Progress has been slow (and the total programming effort invested small) but continuous, and the paradigm of construct-and-test proceeds in a regular cycle - the whole research activity mimicking at a macroscopic level the basic generate-and-test cycle of many of the AI programs.

There is a steadily widening area within which intelligent action is attainable. From the original tasks, research has extended to building systems that handle and understand natural language in a variety of ways, systems for interpreting visual scenes, systems for hand-eye coordination, systems that design, systems that write computer programs, systems for speech understanding - the list is, if not endless, at least very long. If there are limits beyond which the hypothesis will not carry us, they have not yet become apparent. Up to the present, the rate of progress has been governed mainly by the rather modest quantity of scientific resources that have been applied and the inevitable requirement of a substantial system-building effort for each new major undertaking.

Much more has been going on, of course, than simply a piling up of examples of intelligent systems adapted to specific task domains. It would be surprising and unappealing if it turned out that the AI programs performing these diverse tasks had nothing in common beyond their being instances of physical symbol systems. Hence, there has been great interest in searching for mechanisms possessed of generality, and for common components among programs performing a variety of tasks. This search carries the theory beyond the initial symbol system hypothesis to a more complete characterization of the particular kinds of symbol systems that are effective in artificial intelligence. In the second section of the paper, we will discuss one example of a hypothesis at this second level of specificity: the heuristic search hypothesis.

The search for generality spawned a series of programs designed to separate out general problem-solving mechanisms from the requirements of particular task domains. The General Problem Solver (GPS) was perhaps the first of these, while among its descendants are such contemporary systems as PLANNER and CONNIVER. The search for common components has led to generalized schemes of representation for goals and plans, methods for constructing discrimination nets, procedures for the control of tree search, pattern-matching mechanisms, and language-parsing systems. Experiments are at present under way to find convenient devices for representing sequences of time and tense, movement, causality and the like. More and more, it becomes possible to assemble large intelligent systems in a modular way from such basic components.

We can gain some perspective on what is going on by turning, again, to the analogy of the germ theory. If the first burst of research stimulated by that theory consisted largely in finding the germ to go with each
disease, subsequent effort turned to learning what a germ was - to building on the basic qualitative law a new level of structure. In artificial intelligence, an initial burst of activity aimed at building intelligent programs for a wide variety of almost randomly selected tasks is giving way to more sharply targeted research aimed at understanding the common mechanisms of such systems. The Modeling of Human Symbolic Behavior. The symbol system hypothesis implies that the symbolic behavior of man arises because he has the characteristics of a physical symbol system. Hence, the results of efforts to model human behavior with symbol systems become an important part of the evidence for the hypothesis, and research in artificial intelligence goes on in close collaboration with research in information processing psychology, as it is usually called. The search for explanations of man's intelligent behavior in terms of symbol systems has had a large measure of success over the past twenty years, to the point where information processing theory is the leading contemporary point of view in cognitive psychology. Especially in the areas of problem solving, concept attainment, and long-term memory, symbol manipulation models now dominate the scene. Research in information processing psychology involves two main kinds of empirical activity. The first is the conduct of observations and experiments on human behavior in tasks requiring intelligence. The second, very similar to the parallel activity in artificial intelligence, is the programming of symbol systems to model the observed human behavior. The psychological observations and experiments lead to the formulation of hypotheses about the symbolic processes the subjects are using, and these are an important source of the ideas that go into the construction of the programs. Thus, many of the ideas for the basic mechanisms of GPS were derived from careful analysis of the protocols that human subjects produced while thinking aloud during the performance of a problem-solving task. The empirical character of computer science is nowhere more evident than in this alliance with psychology. Not only are psychological experiments required to test the veridicality of the simulation models as explanations of the human behavior, but out of the experiments come new ideas for the design and construction of physical symbol systems. Other Evidence. The principal body of evidence for the symbol system hypothesis that we have not considered is negative evidence: the absence of specific competing hypotheses as to how intelligent activity might be accomplished - whether by man or machine. Most attempts to build such hypotheses have taken place within the field of psychology. Here we have had a continuum of theories from the points of view usually labeled 'behaviorism' to those usually labeled 'Gestalt theory.' Neither of these points of view stands as a real competitor to the symbol system hypothesis, and this for two reasons. First, neither behaviorism nor Gestalt theory has demonstrated, or even
shown how to demonstrate, that the explanatory mechanisms it postulates are sufficient to account for intelligent behavior in complex tasks. Second, neither theory has been formulated with anything like the specificity of artificial programs. As a matter of fact, the alternative theories are sufficiently vague so that it is not terribly difficult to give them information processing interpretations, and thereby assimilate them to the symbol system hypothesis.
Conclusion We have tried to use the example of the Physical Symbol System Hypothesis to illustrate concretely that computer science is a scientific enterprise in the usual meaning of that term: that it develops scientific hypotheses which it then seeks to verify by empirical inquiry. We had a second reason, however, for choosing this particular example to illustrate our point. The Physical Symbol System Hypothesis is itself a substantial scientific hypothesis of the kind that we earlier dubbed 'laws of qualitative structure.' It represents an important discovery of computer science, which if borne out by the empirical evidence, as in fact appears to be occurring, will have major continuing impact on the field. We turn now to a second example, the role of search in intelligence. This topic and the particular hypothesis about it that we shall examine have also played a central role in computer science, in general, and artificial intelligence, in particular.
II Heuristic Search Knowing that physical symbol systems provide the matrix for intelligent action does not tell us how they accomplish this. Our second example of a law of qualitative structure in computer science addresses this latter question, asserting that symbol systems solve problems by using the processes of heuristic search. This generalization, like the previous one, rests on empirical evidence, and has not been derived formally from other premises. However, we shall see in a moment that it does have some logical connection with the symbol system hypothesis, and perhaps we can look forward to formalization of the connection at some time in the future. Until that time arrives, our story must again be one of empirical inquiry. We will describe what is known about heuristic search and review the empirical findings that show how it enables action to be intelligent. We begin by stating this law of qualitative structure, the Heuristic Search Hypothesis. Heuristic Search Hypothesis. The solutions to problems are represented as symbol structures. A physical symbol system exercises its intelligence in problem solving by search - that is, by generating and progressively modifying symbol structures until it produces a solution structure.
Physical symbol systems must use heuristic search to solve problems because such systems have limited processing resources; in a finite number of steps, and over a finite interval of time, they can execute only a finite number of processes. Of course that is not a very strong limitation, for all universal Turing machines suffer from it. We intend the limitation, however, in a stronger sense: we mean practically limited. We can conceive of systems that are not limited in a practical way, but are capable, for example, of searching in parallel the nodes of an exponentially expanding tree at a constant rate for each unit advance in depth. We will not be concerned here with such systems, but with systems whose computing resources are scarce relative to the complexity of the situations with which they are confronted. The restriction will not exclude any real symbol systems, in computer or man, in the context of real tasks. The fact of limited resources allows us, for most purposes, to view a symbol system as though it were a serial, one-process-at-a-time device. If it can accomplish only a small amount of processing in any short time interval, then we might as well regard it as doing things one at a time. Thus 'limited resource symbol system' and 'serial symbol system' are practically synonymous. The problem of allocating a scarce resource from moment to moment can usually be treated, if the moment is short enough, as a problem of scheduling a serial machine.
Problem Solving Since ability to solve problems is generally taken as a prime indicator that a system has intelligence, it is natural that much of the history of artificial intelligence is taken up with attempts to build and understand problem-solving systems. Problem solving has been discussed by philosophers and psychologists for two millenia, in discourses dense with the sense of mystery. If you think there is nothing problematic or mysterious about a symbol system solving problems, then you are a child of today, whose views have been formed since mid-century. Plato (and, by his account, Socrates) found difficulty understanding even how problems could be entertained, much less how they could be solved. Let me remind you of how he posed the conundrum in the Meno: Meno: And how will you inquire, Socrates, into that which you know not? What will you put forth as the subject of inquiry? And if you find what you want, how will you ever know that this is what you did not know? To deal with this puzzle, Plato invented his famous theory of recollection: when you think you are discovering or learning something, you are really just recalling what you already knew in a previous existence. If you find this explanation preposterous, there is a much simpler one available today, based upon our understanding of symbol systems. An approximate statement of it is: To state a problem is to designate (1) a test for a class of symbol structures (solutions of the problem), and (2) a generator of symbol structures (potential solutions). To solve a problem is to generate a structure, using (2), that satisfies the test of (1). Computer Science as Empirical Inquiry: Symbols and Search
We have a problem if we know what we want to do (the test), and if we don't know immediately how to do it (our generator does not immediately produce a symbol structure satisfying the test). A symbol system can state and solve problems (sometimes) because it can generate and test. If that is all there is to problem solving, why not simply generate at once an expression that satisfies the test? This is, in fact, what we do when we wish and dream. 'If wishes were horses, beggars might ride.' But outside the world of dreams, it isn't possible. To know how we would test something, once constructed, does not mean that we know how to construct it - that we have any generator for doing so. For example, it is well known what it means to 'solve' the problem of playing winning chess. A simple test exists for noticing winning positions, the test for checkmate of the enemy King. In the world of dreams one simply generates a strategy that leads to checkmate for all counter strategies of the opponent. Alas, no generator that will do this is known to existing symbol systems (man or machine). Instead, good moves in chess are sought by generating various alternatives, and painstakingly evaluating them with the use of approximate, and often erroneous, measures that are supposed to indicate the likelihood that a particular line of play is on the route to a winning position. Move generators there are; winning move generators there are not. Before there can be a move generator for a problem, there must be a problem space: a space of symbol structures in which problem situations, including the initial and goal situations, can be represented. Move generators are processes for modifying one situation in the problem space into another. The basic characteristics of physical symbol systems guarantee that they can represent problem spaces and that they possess move generators. How, in any concrete situation, they synthesize a problem space and move generators appropriate to that situation is a question that is still very much on the frontier of artificial intelligence research. The task that a symbol system is faced with, then, when it is presented with a problem and a problem space, is to use its limited processing resources to generate possible solutions, one after another, until it finds one that satisfies the problem-defining test. If the system had some control over the order in which potential solutions were generated, then it would be desirable to arrange this order of generation so that actual solutions would have a high likelihood of appearing early. A symbol system would exhibit intelligence to the extent that it succeeded in doing this. Intelligence for a system with limited processing resources consists in making wise choices of what to do next.
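The designation of a problem by a test and a generator, as just described, can be made concrete in a few lines. The sketch below is our own minimal illustration, not a program from the lecture; the names solve, generator, and test are ours, and the limit parameter stands in for the limited processing resources discussed above.

```python
import itertools

def solve(generator, test, limit=None):
    """Generate-and-test: return the first generated candidate that
    satisfies the test, or None if the resource budget runs out."""
    for i, candidate in enumerate(generator):
        if limit is not None and i >= limit:
            return None          # limited processing resources exhausted
        if test(candidate):
            return candidate
    return None

# Toy problem: find an integer x with x**2 + 3*x == 40.
candidates = itertools.count(0)            # an unintelligent generator
answer = solve(candidates, lambda x: x * x + 3 * x == 40, limit=1000)
print(answer)   # -> 5
```

Even an unintelligent generator like itertools.count eventually solves the toy problem; intelligence, in the sense used here, lies in ordering the generator so that actual solutions are likely to appear early.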
Search in Problem Solving During the first decade or so of artificial intelligence research, the study of problem solving was almost synonymous with the study of search processes. From our characterization of problems and problem solving, it is easy to see why this was so. In fact, it might be asked
whether it could be otherwise. But before we try to answer that question, we must explore further the nature of search processes as it revealed itself during that decade of activity. Extracting Information from the Problem Space. Consider a set of symbol structures, some small subset of which are solutions to a given problem. Suppose, further, that the solutions are distributed randomly through the entire set. By this we mean that no information exists that would enable any search generator to perform better than a random search. Then no symbol system could exhibit more intelligence (or less intelligence) than any other in solving the problem, although one might experience better luck than another. A condition, then, for the appearance of intelligence is that the distribution of solutions be not entirely random, that the space of symbol structures exhibit at least some degree of order and pattern. A second condition is that pattern in the space of symbol structures be more or less detectable. A third condition is that the generator of potential solutions be able to behave differentially, depending on what pattern it detected. There must be information in the problem space, and the symbol system must be capable of extracting and using it. Let us look first at a very simple example, where the intelligence is easy to come by. Consider the problem of solving a simple algebraic equation:
AX + B = CX + D
The test defines a solution as any expression of the form, X = E, such that AE + B = CE + D. Now one could use as generator any process that would produce numbers which could then be tested by substituting in the latter equation. We would not call this an intelligent generator. Alternatively, one could use generators that would make use of the fact that the original equation can be modified - by adding or subtracting equal quantities from both sides, or multiplying or dividing both sides by the same quantity - without changing its solutions. But, of course, we can obtain even more information to guide the generator by comparing the original expression with the form of the solution, and making precisely those changes in the equation that leave its solution unchanged, while at the same time, bringing it into the desired form. Such a generator could notice that there was an unwanted CX on the right-hand side of the original equation, subtract it from both sides and collect terms again. It could then notice that there was an unwanted B on the left-hand side and subtract that. Finally, it could get rid of the unwanted coefficient (A - C) on the left-hand side by dividing. Thus by this procedure, which now exhibits considerable intelligence, the generator produces successive symbol structures, each obtained by modifying the previous one; and the modifications are aimed at reducing the differences between the form of the input structure and the form of the test expression, while maintaining the other conditions for a solution.
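The difference-reducing generator just described can be sketched directly. The code below is our illustration rather than the authors' program; for brevity it manipulates the numeric coefficients (a, b, c, d) instead of symbolic expression trees, and it assumes a ≠ c so that the final division is defined.

```python
def solve_linear(a, b, c, d):
    """Solve a*x + b == c*x + d by difference reduction.

    Each step applies a solution-preserving transformation chosen to
    remove one difference between the current equation and the target
    form x = E.  Assumes a != c.  Returns the step sequence and x."""
    steps = [f"{a}x + {b} = {c}x + {d}"]
    # Difference: an unwanted c*x term on the right -> subtract c*x from both sides.
    a, c = a - c, 0
    steps.append(f"{a}x + {b} = {d}")
    # Difference: an unwanted constant b on the left -> subtract b from both sides.
    d, b = d - b, 0
    steps.append(f"{a}x = {d}")
    # Difference: an unwanted coefficient on x -> divide both sides by it.
    x = d / a
    steps.append(f"x = {x}")
    return steps, x

steps, x = solve_linear(a=3, b=4, c=1, d=10)
print("\n".join(steps))   # 3x + 4 = 1x + 10, 2x + 4 = 10, 2x = 6, x = 3.0
```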
This simple example already illustrates many of the main mechanisms that are used by symbol systems for intelligent problem solving. First, each successive expression is not generated independently, but is produced by modifying one produced previously. Second, the modifications are not haphazard, but depend upon two kinds of information. They depend on information that is constant over this whole class of algebra problems, and that is built into the structure of the generator itself: all modifications of expressions must leave the equation's solution unchanged. They also depend on information that changes at each step: detection of the differences in form that remain between the current expression and the desired expression. In effect, the generator incorporates some of the tests the solution must satisfy, so that expressions that don't meet these tests will never be generated. Using the first kind of information guarantees that only a tiny subset of all possible expressions is actually generated, but without losing the solution expression from this subset. Using the second kind of information arrives at the desired solution by a succession of approximations, employing a simple form of means-ends analysis to give direction to the search. There is no mystery where the information that guided the search came from. We need not follow Plato in endowing the symbol system with a previous existence in which it already knew the solution. A moderately sophisticated generator-test system did the trick without invoking reincarnation. Search Trees. The simple algebra problem may seem an unusual, even pathological, example of search. It is certainly not trial-and-error search, for though there were a few trials, there was no error. We are more accustomed to thinking of problem-solving search as generating lushly branching trees of partial solution possibilities which may grow to thousands, or even millions, of branches, before they yield a solution. Thus, if from each expression it produces, the generator creates B new branches, then the tree will grow as B^D, where D is its depth. The tree grown for the algebra problem had the peculiarity that its branchiness, B, equaled unity. Programs that play chess typically grow broad search trees, amounting in some cases to a million branches or more. (Although this example will serve to illustrate our points about tree search, we should note that the purpose of search in chess is not to generate proposed solutions, but to evaluate (test) them.) One line of research into game-playing programs has been centrally concerned with improving the representation of the chess board, and the processes for making moves on it, so as to speed up search and make it possible to search larger trees. The rationale for this, of course, is that the deeper the dynamic search, the more accurate should be the evaluations at the end of it. On the other hand, there is good empirical evidence that the strongest human players, grandmasters, seldom explore trees
of more than one hundred branches. This economy is achieved not so much by searching less deeply than do chess-playing programs, but by branching very sparsely and selectively at each node. This is only possible, without causing a deterioration of the evaluations, by having more of the selectivity built into the generator itself, so that it is able to select for generation just those branches that are very likely to yield important relevant information about the position. The somewhat paradoxical-sounding conclusion to which this discussion leads is that search - successive generation of potential solution structures - is a fundamental aspect of a symbol system's exercise of intelligence in problem solving, but that amount of search is not a measure of the amount of intelligence being exhibited. What makes a problem a problem is not that a large amount of search is required for its solution, but that a large amount would be required if a requisite level of intelligence were not applied. When the symbolic system that is endeavoring to solve a problem knows enough about what to do, it simply proceeds directly towards its goal; but whenever its knowledge becomes inadequate, when it enters terra incognita, it is faced with the threat of going through large amounts of search before it finds its way again. The potential for the exponential explosion of the search tree that is present in every scheme for generating problem solutions warns us against depending on the brute force of computers - even the biggest and fastest computers - as a compensation for the ignorance and unselectivity of their generators. The hope is still periodically ignited in some human breasts that a computer can be found that is fast enough, and that can be programmed cleverly enough, to play good chess by brute-force search. There is nothing known in theory about the game of chess that rules out this possibility. Empirical studies on the management of search in sizable trees with only modest results make this a much less promising direction than it was when chess was first chosen as an appropriate task for artificial intelligence. We must regard this as one of the important empirical findings of research with chess programs. The Forms of Intelligence. The task of intelligence, then, is to avert the ever-present threat of the exponential explosion of search. How can this be accomplished? The first route, already illustrated by the algebra example, and by chess programs that only generate 'plausible' moves for further analysis, is to build selectivity into the generator: to generate only structures that show promise of being solutions or of being along the path toward solutions. The usual consequence of doing this is to decrease the rate of branching, not to prevent it entirely. Ultimate exponential explosion is not avoided - save in exceptionally highly structured situations like the algebra example - but only postponed. Hence, an intelligent system generally
needs to supplement the selectivity of its solution generator with other information-using techniques to guide search. Twenty years of experience with managing tree search in a variety of task environments has produced a small kit of general techniques which is part of the equipment of every researcher in artificial intelligence today. Since these techniques have been described in general works like that of Nilsson [1971], they can be summarized very briefly here. In serial heuristic search, the basic question always is: what shall be done next? In tree search, that question, in turn, has two components: (1) from what node in the tree shall we search next, and (2) what direction shall we take from that node? Information helpful in answering the first question may be interpreted as measuring the relative distance of different nodes from the goal. Best-first search calls for searching next from the node that appears closest to the goal. Information helpful in answering the second question - in what direction to search - is often obtained, as in the algebra example, by detecting specific differences between the current nodal structure and the goal structure described by the test of a solution, and selecting actions that are relevant to reducing these particular kinds of differences. This is the technique known as means-ends analysis, which plays a central role in the structure of the General Problem Solver. The importance of empirical studies as a source of general ideas in AI research can be demonstrated clearly by tracing the history, through large numbers of problem-solving programs, of these two central ideas: best-first search and means-ends analysis. Rudiments of best-first search were already present, though unnamed, in the Logic Theorist in 1955. The General Problem Solver, embodying means-ends analysis, appeared about 1957 - but combined it with modified depth-first search rather than best-first search. Chess programs were generally wedded, for reasons of economy of memory, to depth-first search, supplemented after about 1958 by the powerful alpha-beta pruning procedure. Each of these techniques appears to have been reinvented a number of times, and it is hard to find general, task-independent theoretical discussions of problem solving in terms of these concepts until the middle or late 1960's. The amount of formal buttressing they have received from mathematical theory is still minuscule: some theorems about the reduction in search that can be secured from using the alpha-beta heuristic, a couple of theorems (reviewed by Nilsson [1971]) about shortest-path search, and some very recent theorems on best-first search with a probabilistic evaluation function. 'Weak' and 'Strong' Methods. The techniques we have been discussing are dedicated to the control of exponential expansion rather than its prevention. For this reason, they have been properly called 'weak methods' - methods to be used when the symbol system's knowledge or the amount of structure actually contained in the problem
space is inadequate to permit search to be avoided entirely. It is instructive to contrast a highly structured situation, which can be formulated, say, as a linear programming problem, with the less structured situations of combinatorial problems like the traveling salesman problem or scheduling problems. ('Less structured' here refers to the insufficiency or nonexistence of relevant theory about the structure of the problem space.) In solving linear programming problems, a substantial amount of computation may be required, but the search does not branch. Every step is a step along the way to a solution. In solving combinatorial problems or in proving theorems, tree search can seldom be avoided, and success depends on heuristic search methods of the sort we have been describing. Not all streams of AI problem-solving research have followed the path we have been outlining. An example of a somewhat different point is provided by the work on theorem-proving systems. Here, ideas imported from mathematics and logic have had a strong influence on the direction of inquiry. For example, the use of heuristics was resisted when properties of completeness could not be proved (a bit ironic, since most interesting mathematical systems are known to be undecidable). Since completeness can seldom be proved for best-first search heuristics, or for many kinds of selective generators, the effect of this requirement was rather inhibiting. When theorem-proving programs were continually incapacitated by the combinatorial explosion of their search trees, thought began to be given to selective heuristics, which in many cases proved to be analogues of heuristics used in general problem-solving programs. The set-of-support heuristic, for example, is a form of working backwards, adapted to the resolution theorem-proving environment. A Summary of the Experience. We have now described the workings of our second law of qualitative structure, which asserts that physical symbol systems solve problems by means of heuristic search. Beyond that, we have examined some subsidiary characteristics of heuristic search, in particular the threat that it always faces of exponential explosion of the search tree, and some of the means it uses to avert that threat. Opinions differ as to how effective heuristic search has been as a problem-solving mechanism - the opinions depending on what task domains are considered and what criterion of adequacy is adopted. Success can be guaranteed by setting aspiration levels low - or failure by setting them high. The evidence might be summed up about as follows. Few programs are solving problems at 'expert' professional levels. Samuel's checker program and Feigenbaum and Lederberg's DENDRAL are perhaps the best-known exceptions, but one could point also to a number of heuristic search programs for such operations research problem domains as scheduling and integer programming.
In a number of domains, programs perform at the level of competent amateurs: chess, some theorem-proving domains, many kinds of games and puzzles. Human levels have not yet been nearly reached by programs that have a complex perceptual 'front end': visual scene recognizers, speech understanders, robots that have to maneuver in real space and time. Nevertheless, impressive progress has been made, and a large body of experience assembled about these difficult tasks. We do not have deep theoretical explanations for the particular pattern of performance that has emerged. On empirical grounds, however, we might draw two conclusions. First, from what has been learned about human expert performance in tasks like chess, it is likely that any system capable of matching that performance will have to have access, in its memories, to very large stores of semantic information. Second, some part of the human superiority in tasks with a large perceptual component can be attributed to the special-purpose built-in parallel processing structure of the human eye and ear. In any case, the quality of performance must necessarily depend on the characteristics both of the problem domains and of the symbol systems used to tackle them. For most real-life domains in which we are interested, the domain structure has not proved sufficiently simple to yield (so far) theorems about complexity, or to tell us, other than empirically, how large real-world problems are in relation to the abilities of our symbol systems to solve them. That situation may change, but until it does, we must rely upon empirical explorations, using the best problem solvers we know how to build, as a principal source of knowledge about the magnitude and characteristics of problem difficulty. Even in highly structured areas like linear programming, theory has been much more useful in strengthening the heuristics that underlie the most powerful solution algorithms than in providing a deep analysis of complexity.
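Before turning to ways of avoiding search, it may help to make the first of the 'weak methods' discussed above concrete. The following sketch of best-first search is ours, not a program described in the lecture; the toy problem space, the successor function, and the heuristic h are all illustrative assumptions.

```python
import heapq

def best_first_search(start, goal, successors, h):
    """Best-first search: always expand the frontier node whose heuristic
    estimate h(node) says it lies closest to the goal.  successors(n)
    returns the nodes reachable from n.  Returns a path or None."""
    frontier = [(h(start), start, [start])]
    seen = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)   # most promising node first
        if node == goal:
            return path
        for nxt in successors(node):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None

# Toy problem space: reach 12 from 0 using the moves +1, +3, or *2.
path = best_first_search(
    start=0, goal=12,
    successors=lambda n: [n + 1, n + 3, n * 2],
    h=lambda n: abs(12 - n))                      # estimated distance to goal
print(path)   # e.g. [0, 3, 6, 12]
```

Replacing the priority queue with a simple stack would give the depth-first regime to which early chess programs were wedded; it is the heuristic ordering of the frontier that makes the method 'best-first.'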
Intelligence Without Much Search Our analysis of intelligence equated it with ability to extract and use information about the structure of the problem space, so as to enable a problem solution to be generated as quickly and directly as possible. New directions for improving the problem-solving capabilities of symbol systems can be equated, then, with new ways of extracting and using information. At least three such ways can be identified. Nonlocal Use of Information. First, it has been noted by several investigators that information gathered in the course of tree search is usually only used locally, to help make decisions at the specific node where the information was generated. Information about a chess position, obtained by dynamic analysis of a subtree of continuations, is usually used to evaluate just that position, not to evaluate other positions that may contain many of the same features. Hence, the same
facts have to be rediscovered repeatedly at different nodes of the search tree. Simply to take the information out of the context in which it arose and use it generally does not solve the problem, for the information may be valid only in a limited range of contexts. In recent years, a few exploratory efforts have been made to transport information from its context of origin to other appropriate contexts. While it is still too early to evaluate the power of this idea, or even exactly how it is to be achieved, it shows considerable promise. An important line of investigation that Berliner [1975] has been pursuing is to use causal analysis to determine the range over which a particular piece of information is valid. Thus if a weakness in a chess position can be traced back to the move that made it, then the same weakness can be expected in other positions descendant from the same move. The HEARSAY speech understanding system has taken another approach to making information globally available. That system seeks to recognize speech strings by pursuing a parallel search at a number of different levels: phonemic, lexical, syntactic, and semantic. As each of these searches provides and evaluates hypotheses, it supplies the information it has gained to a common 'blackboard' that can be read by all the sources. This shared information can be used, for example, to eliminate hypotheses, or even whole classes of hypotheses, that would otherwise have to be searched by one of the processes. Thus, increasing our ability to use tree-search information nonlocally offers promise for raising the intelligence of problem-solving systems. Semantic Recognition Systems. A second active possibility for raising intelligence is to supply the symbol system with a rich body of semantic information about the task domain it is dealing with. For example, empirical research on the skill of chess masters shows that a major source of the master's skill is stored information that enables him to recognize a large number of specific features and patterns of features on a chess board, and information that uses this recognition to propose actions appropriate to the features recognized. This general idea has, of course, been incorporated in chess programs almost from the beginning. What is new is the realization of the number of such patterns and associated information that may have to be stored for master-level play: something of the order of 50,000. The possibility of substituting recognition for search arises because a particular, and especially a rare, pattern can contain an enormous amount of information, provided that it is closely linked to the structure of the problem space. When that structure is 'irregular,' and not subject to simple mathematical description, then knowledge of a large number of relevant patterns may be the key to intelligent behavior. Whether this is so in any particular task domain is a question more easily settled by empirical investigation than by theory. Our experience with symbol systems richly endowed with semantic information and pattern-recognizing capabilities for accessing it is still extremely limited. Computer Science as Empirical Inquiry: Symbols and Search
The discussion above refers specifically to semantic information associated with a recognition system. Of course, there is also a whole large area of AI research on semantic information processing and the organization of semantic memories that falls outside the scope of the topic we are discussing in this paper. Selecting Appropriate Representations. A third line of inquiry is concerned with the possibility that search can be reduced or avoided by selecting an appropriate problem space. A standard example that illustrates this possibility dramatically is the mutilated checkerboard problem. A standard 64 square checkerboard can be covered exactly with 32 tiles, each a 1 x 2 rectangle covering exactly two squares. Suppose, now, that we cut off squares at two diagonally opposite corners of the checkerboard, leaving a total of 62 squares. Can this mutilated board be covered exactly with 31 tiles? With (literally) heavenly patience, the impossibility of achieving such a covering can be demonstrated by trying all possible arrangements. The alternative, for those with less patience and more intelligence, is to observe that the two diagonally opposite corners of a checkerboard are of the same color. Hence, the mutilated checkerboard has two fewer squares of one color than of the other. But each tile covers one square of one color and one square of the other, and any set of tiles must cover the same number of squares of each color. Hence, there is no solution. How can a symbol system discover this simple inductive argument as an alternative to a hopeless attempt to solve the problem by search among all possible coverings? We would award a system that found the solution high marks for intelligence. Perhaps, however, in posing this problem we are not escaping from search processes. We have simply displaced the search from a space of possible problem solutions to a space of possible representations. In any event, the whole process of moving from one representation to another, and of discovering and evaluating representations, is largely unexplored territory in the domain of problem-solving research. The laws of qualitative structure governing representations remain to be discovered. The search for them is almost sure to receive considerable attention in the coming decade.
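The color-counting argument for the mutilated checkerboard above can be checked mechanically in a few lines, which makes the contrast with exhaustive tiling concrete. The sketch is ours; the coordinate encoding of the board is an assumption, and the parity test expresses only the necessary condition used in the argument.

```python
def mutilated_board(n=8):
    """Squares of an n x n board with two diagonally opposite corners removed."""
    squares = {(r, c) for r in range(n) for c in range(n)}
    return squares - {(0, 0), (n - 1, n - 1)}

def coverable_by_dominoes(squares):
    """Necessary condition from the color argument: every 1 x 2 tile covers
    one dark and one light square, so a perfect cover needs equal counts."""
    dark = sum((r + c) % 2 for r, c in squares)
    light = len(squares) - dark
    return dark == light

print(coverable_by_dominoes(mutilated_board()))   # False: 32 dark vs. 30 light squares
```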
Conclusion That is our account of symbol systems and intelligence. It has been a long road from Plato's Meno to the present, but it is perhaps encouraging that most of the progress along that road has been made since the turn of the twentieth century, and a large fraction of it since the midpoint of the century. Thought was still wholly intangible and ineffable until modern formal logic interpreted it as the manipulation of formal tokens. And it seemed still to inhabit mainly the heaven of Platonic ideals, or the equally obscure spaces of the human mind, until computers taught us how symbols could be processed by machines. A. M. Turing, whom we memorialize this morning, made
his great contributions at the mid-century crossroads of these developments that led from modern logic to the computer. Physical Symbol Systems. The study of logic and computers has revealed to us that intelligence resides in physical symbol systems. This is computer sciences's most basic law of qualitative structure. Symbol systems are collections of patterns and processes, the latter being capable of producing, destroying and modifying the former. The most important properties of patterns is that they can designate objects, processes, or other patterns, and that, when they designate processes, they can be interpreted. Interpretation means carrying out the designated process. The two most significant classes of symbol systems with which we are acquainted are human beings and computers. Our present understanding of symbol systems grew, as indicated earlier, through a sequence of stages. Formal logic familiarized us with symbols, treated syntactically, as the raw material of thought, and with the idea of manipulating them according to carefully defined formal processes. The Turing machine made the syntactic processing of symbols truly machine-like, and affirmed the potential universality of strictly defined symbol systems. The stored-program concept for computers reaffirmed the interpretability of symbols, already implicit in the Turing machine. List processing brought to the forefront the denotational capacities of symbols, and defined symbol processing in ways that allowed independence from the fixed structure of the underlying physical machine. By 1956 all of these concepts were available, together with hardware for implementing them. The study of the intelligence of symbol systems, the subject of artificial intelligence, could begin.
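The two properties singled out here, designation and interpretation, can be illustrated with a toy list-processing sketch of our own; it models no particular historical system, and the process table and operator names are invented for the example.

```python
# Symbol structures are nested lists of symbols.  A structure that
# designates a process names that process in its first position;
# interpreting the structure means carrying out the designated process.
PROCESSES = {
    "add":    lambda args: sum(args),
    "negate": lambda args: -args[0],
}

def interpret(expr):
    """Carry out the process designated by a symbol structure."""
    if isinstance(expr, (int, float)):
        return expr                      # a pattern designating a number
    op, *args = expr
    return PROCESSES[op]([interpret(a) for a in args])

# The structure below designates the process of computing -(2 + 3).
print(interpret(["negate", ["add", 2, 3]]))   # -> -5
```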
Heuristic Search. A second law of qualitative structure for AI is that symbol systems solve problems by generating potential solutions and testing them, that is, by searching. Solutions are usually sought by creating symbolic expressions and modifying them sequentially until they satisfy the conditions for a solution. Hence symbol systems solve problems by searching. Since they have finite resources, the search cannot be carried out all at once, but must be sequential. It leaves behind it either a single path from starting point to goal or, if correction and backup are necessary, a whole tree of such paths. Symbol systems cannot appear intelligent when they are surrounded by pure chaos. They exercise intelligence by extracting information from a problem domain and using that information to guide their search, avoiding wrong turns and circuitous bypaths. The problem domain must contain information, that is, some degree of order and structure, for the method to work. The paradox of the Meno is solved by the observation that information may be remembered, but new information may also be extracted from the domain that the symbols designate. In both cases, the ultimate source of the information is the task domain. Computer Science as Empirical Inquiry: Symbols and Search
The Empirical Base. Artificial intelligence research is concerned with how symbol systems must be organized in order to behave intelligently. Twenty years of work in the area has accumulated a considerable body of knowledge, enough to fill several books (it already has), and most of it in the form of rather concrete experience about the behavior of specific classes of symbol systems in specific task domains. Out of this experience, however, there have also emerged some generalizations, cutting across task domains and systems, about the general characteristics of intelligence and its methods of implementation. We have tried to state some of these generalizations this morning. They are mostly qualitative rather than mathematical. They have more the flavor of geology or evolutionary biology than the flavor of theoretical physics. They are sufficiently strong to enable us today to design and build moderately intelligent systems for a considerable range of task domains, as well as to gain a rather deep understanding of how human intelligence works in many situations. What Next? In our account today, we have mentioned open questions as well as settled ones; there are many of both. We see no abatement of the excitement of exploration that has surrounded this field over the past quarter century. Two resource limits will determine the rate of progress over the next such period. One is the amount of computing power that will be available. The second, and probably the more important, is the number of talented young computer scientists who will be attracted to this area of research as the most challenging they can tackle. A. M. Turing concluded his famous paper on 'Computing Machinery and Intelligence' with the words: 'We can only see a short distance ahead, but we can see plenty there that needs to be done.'
Many of the things Turing saw in 1950 that needed to be done have been done, but the agenda is as full as ever. Perhaps we read too much into his simple statement above, but we like to think that in it Turing recognized the fundamental truth that all computer scientists instinctively know. For all physical symbol systems, condemned as we are to serial search of the problem environment, the critical question is always: What to do next?
Acknowledgment
The authors' research over the years has been supported in part by the Advanced Research Projects Agency of the Department of Defense (monitored by the Air Force Office of Scientific Research) and in part by the National Institute of Mental Health.
References
Berliner, H. [1975]. Chess as problem solving: the development of a tactics analyzer. Ph.D. Th., Computer Sci. Dep., Carnegie-Mellon U. (unpublished).
McCarthy, J. [1960]. Recursive functions of symbolic expressions and their computation by machine. Comm. ACM 3, 4 (April 1960), 184-195.
McCulloch, W. S. [1961]. What is a number, that a man may know it, and a man, that he may know a number. General Semantics Bulletin, Nos. 26 and 27 (1961), 7-18.
Nilsson, N. J. [1971]. Problem-Solving Methods in Artificial Intelligence. McGraw-Hill, New York.
Turing, A. M. [1950]. Computing machinery and intelligence. Mind 59 (Oct. 1950), 433-460.
Categories and Subject Descriptors: E.1 [Data]: Data Structures - lists; F.1.1 [Computation by Abstract Devices]: Models of Computation - bounded-action devices; I.2.7 [Artificial Intelligence]: Natural Language Processing - speech recognition and understanding; I.2.8 [Artificial Intelligence]: Problem Solving, Control Methods and Search - graph and tree search strategies; heuristic methods; K.2 [Computing Milieux]: History of Computing - systems
General Terms: Design, Theory
Additional Keywords and Phrases: Integer programming, LISP, list processing
Postscript
Reflections on the Tenth Turing Award Lecture: Computer Science as Empirical Inquiry - Symbols and Search
ALLEN NEWELL and HERBERT A. SIMON
Department of Computer Science, Department of Psychology, Carnegie-Mellon University, Pittsburgh, Pennsylvania 15213
Our Turing Award lecture was given in 1975, two decades after the beginnings of artificial intelligence in the mid-fifties. Another decade has now passed. The lecture mostly avoided prophecy and agenda building, choosing rather to assert a verity - that computer science is an empirical science. It did that by looking backward at the development of two general principles that underlie the theory of intelligent action - the requirements for physical symbol systems and for search. It might be interesting to ask whether the intervening decade has added to or subtracted from the stance and assessments set forth then. A lot has happened in that decade. Both computer science and artificial intelligence have continued to grow, scientifically, technologically, and economically. A point implicit but deliberate in the lecture was that artificial intelligence is a part of computer science, both rising from the same intellectual ground. Social history has continued to follow logic here (it does not always do so), and artificial intelligence continues to be a part of computer science. If anything, their relations are becoming increasingly intimate as the application of intelligent systems to software engineering in the guise of expert systems becomes increasingly attractive. The main point about empirical inquiry is reflected throughout computer science. A lot has happened everywhere, but let us focus on artificial intelligence. The explosion of work in expert systems, the developments in learning systems, and the work on intelligent tutoring provide significant examples of areas that have blossomed since the lecture (and received no recognition in it). All of them have been driven by empirical inquiry in the strongest way. Even the emergence of the work on logic programming, which is an expression of the side of artificial intelligence that is most strongly identified with formal procedures and theorem proving, has attained much of its vitality from being turned into a programming enterprise - in which, thereby, experience leads the way. There have, of course, been significant developments of theory. Particularly pertinent to the content of our lecture has been work in the complexity analysis of heuristic search, as exemplified in the recent book by Pearl [10]. But this too illustrates the standard magic cycle of science, where theory finally builds up when analyzed experience has sufficiently accumulated. We are still somewhat shy of closing that cycle by getting theory to the point where it provides the routine framework within which further experience is planned
so that henceforth data and theory go hand in hand. That time will surely come, although it is still a little distant. We chose the forum of an award lecture to give voice to two fundamental principles (about symbols and search) that seemed to us to be common currency in the practice and understanding of artificial intelligence, but which needed to be recognized for what they were-the genuine foundations of intelligent action. Their histories in the meantime have been somewhat different, though both remain on paths we interpret as supporting their essential correctness. Bringing to the fore the physical symbol system hypothesis has proved useful to the field, although we did find it worthwhile subsequently to set out the hypothesis in more detail [7]. The hypothesis is taken rather generally to express the view of mind that has arisen from the emergence of the computer. However, that does not mean it is uncontroversial. There remain intellectual positions that stand outside the entire computational view and regard the hypothesis as undoubtedly false [3, 11]. More to the point are two other positions. One is found among the philosophers, many of whom believe that the central problem of semantics or intentionality -how symbols signify their external referents -is not addressed by physical symbol systems. The other position is found among some of the connectionists within artificial intelligence and cognitive science, who believe there are forms of processing organization (wrought in the image of neural systems) that will accomplish all that symbol systems do, but in which symbols will not be identifiable entities. In both cases more investigation is clearly needed and will no doubt be forthcoming. The case for symbols still seems clear to us, so our bets remain on the side of the symbol system hypothesis. A development related to the physical symbol system hypothesis is worth noting. It is the practice in computer science and artificial intelligence to describe systems simply in terms of the knowledge they have, presuming that there exist processing mechanisms that will cause the system to behave as if it could use the knowledge to attain the purposes the system is supposed to serve. This practice extends to design, where stipulating the knowledge a system is to have is a specification for what mechanisms are to be constructed. We took an opportunity, analogous to that of the Turing Award, namely, the presidential address of the American Association for Artificial Intelligence (AAAI), to also cast this.practice in crisp terms [8]. We defined another computer-system level above the symbol level, called the knowledge level. This corresponds pretty clearly to what Dan Dennett in philosophy had come to call the intentional stance [2]. Its roots, of course, lie in that remarkable characteristic of adaptive systems that they behave solely as a function of the task environment, hiding therewith the nature of their internal mechanisms [9, 12]. Again, our motives in identifying the knowledge level were the same as in the Thring Award lecture -to articulate what every good computer-science practitioner knows in a form that admits further technical expansion. There are some small signs that this expansion is beginning for the knowledge level [6]. Turning to the second hypothesis, that of heuristic search, recognition of its importance was explicit and abundant in the early years of artificial intelligence. 
Our aim in the Turing Award lecture was to emphasize that search is essential to all intelligent action, rather than just one interesting mechanism among many. As it happened, the year of the lecture, 1975, just preceded the efflorescence of the view that knowledge is of central importance to intelligence. The trend had been building from the early seventies. The sign of this new view was the emergence of the field of expert systems and the new role of the knowledge engineer [4]. The exuberance of this movement can be seen in the assertion that there had been a paradigm shift in artificial intelligence, which had finally abandoned search and would henceforth embrace knowledge as its guiding principle [5].
An alternative interpretation (and the one we hold) is that no revolution occurred, but something more akin to the cycle of accommodation, assimilation, and equilibration that Piaget describes as the normal process of development (although he was talking of children and not scientific fields). Science works by expanding each new facet of understanding as it emerges - it accommodates to new understanding by an extended preoccupation to assimilate it. The late seventies and early eighties were devoted to exploring what it meant for systems to have enough knowledge about their task to dispense with much search of the problem space, and yet to do tasks that demanded intelligence, as opposed to just implementing small algorithms. (As the amount of knowledge increased, of course, these systems did require search of the rules in the knowledge base.) Concomitantly, the tasks performed by these systems, although taken from the real world, were also of little intellectual (i.e., inferential) difficulty. The role of search in difficult intellectual tasks remained apparent to those who continued to work on programs to accomplish them - it is hard to avoid when the threat of combinatorial explosion lurks around every corner. Having now assimilated some of the mechanisms for bringing substantial amounts of knowledge to bear, the field seems to have reached an understanding that both search and knowledge play an essential role. A last reflection concerns chess, which runs like a thread through the whole lecture, providing (as it always does) clear examples for many points. The progress of a decade is apparent in the current art, where the Hitech chess machine [1] has now attained high master ratings (2340, where masters range from 2200 to 2400). It is still climbing, although no one knows how long it can continue to rise. Hitech, itself, illustrates many things. First, it brings home the role of heuristic search. Second, it is built upon massive search (200,000 positions a second), so that it shows that progress has moved in exactly the direction we asserted in the lecture to be wrong. It is fun to be wrong, when the occasion is one of new scientific knowledge. But third, the basic theoretical lesson from the machine is still the one emphasized in the lecture: namely, intelligent behavior involves the interplay of knowledge obtained through search and knowledge obtained from stored recognitional structure. For the last 200 points of Hitech's improvement - and the gains that have propelled it to fame - have come entirely from the addition of knowledge to a machine with fixed, albeit large, search capabilities. Fourth and finally, the astounding performance of Hitech and the new phenomena it generates bear witness once more, if more is needed, that progress in computer science and artificial intelligence occurs by empirical inquiry.
References 1. Berliner, H., and Ebeling, C. The SUPREM architecture: A new intelligent paradigm. Artif Intell. 28 (1986). 2. Dennett, D. C. Brainstorms. Bradford/MIT Press, Cambridge, Mass., 1978. 3. Dreyfus, H. L. What Computers Can't Do: A Critique of Artificial Reason, 2nd ed. Harper and Row, New York, 1979. 4. Feigenbaum, E. A. The art of artificial intelligence: Themes and case studies in knowledge engineering. In Proceedings of the 5th International Joint Conference on Artificial Intelligence. Computer Science Dept., Carnegie-Mellon Univ., Pittsburgh, Pa., 1977. 5. Goldstein, I., and Papert, S. Artificial intelligence, language and the study of knowledge. Cognitive Sci. 1 (1977), 84-124.
6. Levesque, H. J. Foundations of a functional approach to knowledge representation. Artif Intell. 23 (1984), 155-212.
7. Newell, A. Physical symbol systems. Cognitive Sci. 4 (1980), 135-183. 8. Newell, A. The knowledge level. Artif Intell. 18 (1982), 87-127. 9. Newell, A., and Simon, H. A. Human Problem Solving. Prentice-Hall, Englewood Cliffs, N.J., 1972. 10. Pearl, J. Heuristics: Intelligent Search Strategies for Computer Problem Solving. Addison-Wesley, Reading, Mass., 1984. 11. Searle, J. Minds, brains and programs. Behav. Brain Sci. 3 (1980), 417-457. 12. Simon, H. A. The Sciences of the Artificial. MIT Press, Cambridge, Mass., 1969.
Complexity of Computations
MICHAEL O. RABIN
Hebrew University of Jerusalem
Author's present address: Department of Mathematics, Hebrew University of Jerusalem, Jerusalem, Israel; Aiken Computation Laboratory, Harvard University, Cambridge, MA 02138.
The 1976 ACM Turing Award was presented jointly to Michael O. Rabin and Dana S. Scott at the ACM Annual Conference in Houston, October 20. In introducing the recipients, Bernard A. Galler, Chairman of the Turing Award Committee, read the following citation: 'The Turing Award this year is presented to Michael Rabin and Dana Scott for individual and joint contributions which not only have marked the course of theoretical computer science, but have set a standard of clarity and elegance for the entire field. Rabin and Scott's 1959 paper, 'Finite Automata and Their Decision Problems,' has become a classic paper in formal language theory that still forms one of the best introductions to the area. The paper is simultaneously a survey and a research article; it is technically simple and mathematically impeccable. It has even been recommended to undergraduates! In subsequent years, Rabin and Scott have made individual contributions which maintain the standards of their early paper. Rabin's applications of automata theory to logic and Scott's development of continuous semantics for programming languages are two examples of work providing depth
and dimension: the first applies computer science to mathematics, and the second applies mathematics to computer science. Rabin and Scott have shown us how well mathematicians can help a scientist understand his own subject. Their work provides one of the best models of creative applied mathematics.' That was the formal citation, but there is a less formal side to this presentation. I want you to understand that the recipients of this award are real people, doing excellent work, but very much like those of us who are here today. Professor Michael Rabin was born in Germany and emigrated as a small child with his parents to Israel in 1935. He got an M.Sc. degree in Mathematics from the Hebrew University and later his Ph.D. in Mathematics from Princeton University. After obtaining his Ph.D. he was an H. B. Fine Instructor in Mathematics at Princeton University and Member of the Institute for Advanced Study at Princeton. Since 1958 he has been a faculty member at the Hebrew University in Jerusalem. From 1972 to 1975 he was also Rector of the Hebrew University. The Rector is elected by the Senate of the University and is Academic Head of the institution. Professor Dana S. Scott received his Ph.D. degree at Princeton University in 1958. He has since taught at the University of Chicago, the University of California at Berkeley, Stanford University, University of Amsterdam, Princeton University, and Oxford University in England. Professor Rabin will speak on 'Computational Complexity,' and Professor Scott will speak on 'Logic and Programming Languages.' Rabin's paper begins below; Scott's paper begins on page 47. The framework for research in the theory of complexity of computations is described, emphasizing the interrelation between seemingly diverse problems and methods. Illustrative examples of practical and theoretical significance are given. Directions for new research are discussed.
1
Introduction The theory of complexity of computations addresses itself to the quantitative aspects of the solutions of computational problems. Usually there are several possible algorithms for solving a problem such as evaluation of an algebraic expression, sorting a file, or parsing a string of symbols. With each of the algorithms there are associated certain significant cost functions such as the number of computational steps as a function of the problem size, memory space requirements for the computation, program size, and in hardware-implemented algorithms, circuit size and depth. The following questions can be raised with respect to a given computational problem P. What are good algorithms for solution of the problem P? Can one establish and prove a lower bound for one of the cost functions associated with the algorithm? Is the problem perhaps
intractable in the sense that no algorithm will solve it in practically feasible time? These questions can be raised for worst-case behavior as well as for the average behavior of the algorithms for P. During the last year an extension of algorithms to include randomization within the computation was proposed. Some of the above considerations can be generalized to these probabilistic algorithms. These questions concerning complexity were the subject of intensive study during the last two decades, both within the framework of a general theory and for specific problems of mathematical and practical importance. Of the many achievements let us mention:
- The Fast Fourier Transform, recently significantly improved, with its manifold applications including those to communications;
- Showing that some of the mechanical theorem proving problems arising in proving the correctness of programs are intractable;
- Determining the precise circuit complexity needed for addition of n-bit numbers;
- Surprisingly fast algorithms for combinatorial and graph problems and their relation to parsing;
- Considerable reductions in computing time for certain important problems, resulting from probabilistic algorithms.
There is no doubt that work on all the above-mentioned problems will continue. In addition we see for the future the branching out of complexity theory into important new areas. One is the problem of secure communication, where a new, strengthened theory of complexity is required to serve as a firm foundation. The other is the investigation of the cost functions pertaining to data structures. The enormous size of the contemplated databases calls for a deeper understanding of the inherent complexity of processes such as the construction and search of lists. Complexity theory provides the point of view and the tools necessary for such a development. The present article, which is an expanded version of the author's 1976 Turing lecture, is intended to give the reader a bird's-eye view of this vital field. We shall focus our attention on highlights and on questions of methodology, rather than attempt a comprehensive survey.
2
Typical Problems We start by listing some representative computational problems which are of theoretical and often also of practical importance, and which were the subject of intensive study and analysis. In subsequent sections we shall describe the methods brought to bear on these problems, and some of the important results obtained.
2.1
Computable Functions from Integers to Integers
Let us consider functions of one or more variables from the set N = {0, 1, 2, ...} of integers into N. We recognize intuitively that functions such as f(x) = x!, g(x, y) = x² + yx are computable. A. M. Turing, after whom these lectures are so aptly named, set for himself the task of defining in precise terms which functions f: N → N, g: N × N → N, etc., are effectively computable. His model of the idealized computer and the class of recursive functions calculable by this computer are too well known to require exposition. What concerns us here is the question of measurement of the amount of computational work required for finding a value f(n) of a computable function f: N → N. Also, is it possible to exhibit functions which are difficult to compute by every program? We shall return to these questions in 4.1.

2.2
Algebraic Expressions and Equations
Let E(x₁, ..., xₙ) be an algebraic expression constructed from the variables x₁, ..., xₙ by the arithmetical operations +, -, *, /. For example, E = (x₁ + x₂) * (x₃ + x₄)/x₁ * x₅. We are called upon to evaluate E(x₁, ..., xₙ) for a numerical substitution x₁ = c₁, ..., xₙ = cₙ. More generally, the task may be to evaluate k expressions E₁(x₁, ..., xₙ), ..., Eₖ(x₁, ..., xₙ) for the simultaneous substitution x₁ = c₁, ..., xₙ = cₙ.
Important special cases are the following. Evaluation of a polynomial

f(x) = aₙxⁿ + aₙ₋₁xⁿ⁻¹ + ... + a₀.   (1)
Matrix multiplication AB, where A and B are n × n matrices. Here we have to find the values of the n² expressions aᵢ₁b₁ⱼ + ... + aᵢₙbₙⱼ, 1 ≤ i, j ≤ n, for given numerical values of the aᵢⱼ, bᵢⱼ. Our example for the solution of equations is the system

aᵢ₁x₁ + ... + aᵢₙxₙ = bᵢ,   1 ≤ i ≤ n,   (2)

of n linear equations in n unknowns x₁, ..., xₙ. We have to solve (evaluate) the unknowns, given the coefficients aᵢⱼ, bᵢ, 1 ≤ i, j ≤ n. We shall not discuss here the interesting question of approximate solutions for algebraic and transcendental equations, which is also amenable to the tools of complexity theory.
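As a concrete illustration of the two tasks just described, here is a minimal Python sketch (ours, not the lecture's): each of the n² entries of AB is evaluated as an expression with n multiplications, and the linear system (2) is handed to a standard library solver.

import numpy as np

def naive_matmul(A, B):
    # Evaluate the n^2 expressions a_i1*b_1j + ... + a_in*b_nj directly:
    # about n^3 multiplications in total.
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
print(naive_matmul(A, B))                     # [[19.0, 22.0], [43.0, 50.0]]
# the linear system (2): solve A x = b for the unknowns x_1, ..., x_n
print(np.linalg.solve(np.array(A), np.array([1.0, 2.0])))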
2.3 Computer Arithmetic
Addition. Given two n-digit numbers a = αₙ₋₁αₙ₋₂ ... α₀, b = βₙ₋₁βₙ₋₂ ... β₀ (e.g., for n = 4, a = 1011, b = 1100), to find the n + 1 digits of the sum a + b = γₙγₙ₋₁ ... γ₀.
Multiplication. For the above a, b, find the 2n digits of the product a * b = δ₂ₙ₋₁δ₂ₙ₋₂ ... δ₀.
The implementation of these arithmetical tasks may be in hardware. In this case the base is 2, and αᵢ, βᵢ = 0, 1. Given a fixed n we wish to construct a circuit with 2n inputs and, for addition, n + 1 outputs. When the 2n bits of a, b enter as inputs, the n + 1 outputs will be γₙ, γₙ₋₁, ..., γ₀.
Similarly for multiplication.
Alternatively we may think about implementation of arithmetic by an algorithm, i.e., in software. The need for this may arise in a number of ways. For example, our arithmetical unit may perform just addition; multiplication must then be implemented by a subroutine. Implementation of arithmetic by a program also comes up in the context of multiprecision arithmetic. Our computer has word size k and we wish to add and multiply numbers of length nk (n-word numbers). We take as base the number 2ᵏ, so that 0 ≤ αᵢ, βᵢ < 2ᵏ, and use algorithms for finding a + b, a * b.
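A minimal sketch of the software approach just described, assuming numbers are stored as lists of base-2^k digits with the least significant word first (the representation and the function name are ours):

def add_multiprecision(a, b, k=32):
    # Add two n-word numbers given as lists of base-2^k digits
    # (least significant word first), propagating a carry word by word.
    base = 1 << k
    result, carry = [], 0
    for x, y in zip(a, b):
        s = x + y + carry
        result.append(s % base)
        carry = s // base
    result.append(carry)           # the (n+1)-st output digit
    return result

# two 2-word numbers with k = 4 (base 16): 0x3F + 0x12 = 0x51
print(add_multiprecision([0xF, 0x3], [0x2, 0x1], k=4))   # -> [1, 5, 0]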
2.4 Parsing Expressions in Context-Free Languages
The scope of complexity theory is by no means limited to algebraic or arithmetical calculations. Let us consider context-free grammars, of which the following is an example. The alphabet of G consists of the symbols t, x, y, z, (, ), +, *. Of these symbols, t is a nonterminal and all the other symbols are terminals. The productions (or rewrite rules) of G are

1. t → (t + t),   2. t → t * t,   3. t → x,   4. t → y,   5. t → z.
Starting from t, we can successively rewrite words by use of the productions. For example,

t →₁ (t + t) →₃ (x + t) →₂ (x + t * t) →₄ (x + y * t) →₅ (x + y * z).   (3)
The number attached to each arrow indicates the production used, and t stands for the nonterminal to be rewritten. A sequence such as (3) is called a derivation, and we say that (x + y * z) is derivable from t. The set of all words u derivable from t and containing only terminals
is called the language generated by G and is denoted by L(G). The above G is just an example, and the generalization to arbitrary context-free grammars is obvious. Context-free grammars and languages commonly appear in programming languages and, of course, also in the analysis of natural languages. Two computational problems immediately come up. Given a grammar G and a word W (i.e., string of symbols) on the alphabet of G, is W ∈ L(G)? This is the membership problem. The parsing problem is the following. Given a word W ∈ L(G), find a derivation sequence by productions of G, similar to (3), of W from the initial symbol of G. Alternatively, we want a parse tree of W. Finding a parse tree of an algebraic expression, for example, is an essential step in the compilation process.
2.5 Sorting of Files
A file of records R₁, R₂, ..., Rₙ is stored in either secondary or main memory. The index i of the record Rᵢ indicates its location in memory. Each record R has a key (e.g., the social-security number in an income-tax file) k(R). The computational task is to rearrange the file in memory into a sequence Rᵢ₁, ..., Rᵢₙ so that the keys are in ascending order k(Rᵢ₁) ≤ k(Rᵢ₂) ≤ ... ≤ k(Rᵢₙ).
We emphasize both the distinction between the key and the record, which may be considerably larger than the key, and the requirement of actually rearranging the records. These features make the problem more realistic and somewhat harder than the mere sorting of numbers.
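A minimal sketch of the task as stated, with hypothetical records consisting of a key and a larger payload; note that it is the records themselves, not merely the keys, that end up rearranged:

from operator import itemgetter

# hypothetical records: (key, payload) -- the payload may be much larger than the key
records = [(412, "record C"), (107, "record A"), (398, "record B")]

# rearrange the file into ascending key order k(R_i1) <= ... <= k(R_in)
records.sort(key=itemgetter(0))
print(records)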
2.6 Theorem Proving by Machine
Ever since the advent of computers, trying to endow them with some genuine powers of reasoning was an understandable ambition, resulting in considerable efforts being expended in this direction. In particular, attempts were made to enable the computer to carry out logical and mathematical reasoning, and this by proving theorems of pure logic or by deriving theorems of mathematical theories. We consider the important example of the theory of addition of natural numbers. Consider the system 𝒩 = (N, +) consisting of the natural numbers N = {0, 1, ...} and the operation + of addition. The formal language L employed for discussing properties of 𝒩 is a so-called first-order predicate language. It has variables x, y, z, ... ranging over natural numbers, the operation symbol +, equality =, the usual propositional connectives, and the quantifiers ∀ ('for all') and ∃ ('there exists').
A sentence such as ∃x∀y (x + y = y) is a formal transcription of 'there exists a number x so that for all numbers y, x + y = y.' This sentence is in fact true in 𝒩. The set of all sentences of L true in 𝒩 will be called the theory of 𝒩 and will be denoted by PA = Th(𝒩). For example, ∀x∀y∃z[x + z = y ∨ y + z = x] ∈ PA. We shall also use the name 'Presburger's arithmetic,' honoring Presburger, who has proved important results about Th(𝒩). The decision problem for PA is to find an algorithm, if indeed such an algorithm exists, for determining for every given sentence F of the language L whether F ∈ PA or not. Presburger [12] has constructed such an algorithm for PA. Since his work, several researchers have attempted to devise efficient algorithms for this problem and to implement them by programs. These efforts were often within the framework of projects in the area of automated programming and program verification. This is because the properties of programs that one tries to establish are sometimes reducible to statements about the addition of natural numbers.
3
Central Issues and Methodology of Computational Complexity In the previous section we listed some typical computational tasks. Later we shall present results which were obtained with respect to these problems. We shall now describe, in general terms, the main questions that are raised, and the central concepts that play a role in complexity theory.
3.1
Basic Concepts
A class of similar computational tasks will be called a problem. The individual cases of a problem P are called instances of P. Thus P is the set of all its instances. The delineation of a problem is, of course, just a matter of agreement and notational convenience. We may, for example, talk about the problem of matrix multiplication. The instances of this problem are, for any integer n, the pairs A, B of n × n matrices which are to be multiplied. With each instance I ∈ P of a problem P we associate a size, usually an integer, |I|. The size function |I| is not unique and its choice is dictated by the theoretical and practical considerations germane to the discussion of the problem at hand. Returning to the example of matrix multiplication, a reasonable measure on a pair I = (A, B) of n × n matrices to be multiplied is |I| = n. If we study memory space requirements for an algorithm for matrix multiplication, then the measure |I| = 2n² may be appropriate. By way of contrast, it does not seem that the size function |I| = n³ would naturally arise in any context. Let P be a problem and AL an algorithm solving it. The algorithm AL executes a certain computational sequence S_I when solving the instance I ∈ P. With S_I we associate certain measurements. Some of the significant measurements are the following: (1) The length of S_I, which is indicative of computation time. (2) The depth of S_I, i.e., the number of layers of concurrent steps into which S_I can be decomposed. Depth corresponds to the time S_I would require under parallel computation. (3) The memory space required for the computation S_I. (4) Instead of the total number of steps in S_I we may count the number of steps of a certain kind, such as arithmetical operations in algebraic computations, number of comparisons in sorting, or number of fetches from memory. For hardware implementations of algorithms, we usually define the size |I| so that all instances I of the same size n are to be solved on one circuit Cₙ. The complexity of a circuit C is variously defined as number of gates; depth, which is again related to computing time; or other measurements, such as number of modules, having to do with the technology used for implementing the circuit. Having settled on a measure μ on computations S_I, a complexity of computation function F_AL can be defined in a number of ways, the principal two being worst-case complexity and average-behavior complexity. The first notion is defined by

F_AL(n) = max { μ(S_I) : I ∈ P, |I| = n }.   (4)
In order to define average behavior we must assume a probability distribution p on each set Pₙ = { I ∈ P, |I| = n }. Thus for I ∈ P, |I| = n, p(I) is the probability of I arising among all other instances of size n. The average behavior of AL is then defined by

M_AL(n) = Σ p(I) μ(S_I),   (5)

where the sum extends over all I ∈ P with |I| = n.
We shall discuss in 4.7 the applicability of the assumption of a probability distribution. The analysis of algorithms deals with the following question. Given a size function |I| and a measure μ(S_I) on computations, to exactly determine for a given algorithm AL solving a problem P either the worst-case complexity F_AL(n) or, under suitable assumptions, the average behavior M_AL(n). In the present article we shall not enter upon questions of analysis, but rather assume that the complexity function is known or is at least sufficiently well determined for our purposes.
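As a small illustration of the two complexity functions (4) and (5), the following sketch counts the comparisons made by linear search on every instance of a given size: the maximum gives the worst-case complexity and the mean, under the assumption that all instances are equally likely, gives the average behavior. The setting and names are ours, chosen only to make the definitions concrete.

def comparisons_linear_search(haystack, needle):
    # Count the comparisons a linear search performs on one instance.
    count = 0
    for item in haystack:
        count += 1
        if item == needle:
            break
    return count

n = 10
instances = [(list(range(n)), target) for target in range(n)]   # all instances of size n
costs = [comparisons_linear_search(h, t) for h, t in instances]

worst_case = max(costs)              # F_AL(n): maximum over instances of size n
average = sum(costs) / len(costs)    # M_AL(n): mean under the uniform distribution
print(worst_case, average)           # 10 and 5.5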
3.2 The Questions
We have now at our disposal the concepts needed for posing the central question of complexity theory: Given a computational problem P, how well, or at what cost, can it be solved? We do not mention any specific algorithm for solving P. We rather aim at surveying all possible algorithms for solving P and try to make a statement concerning the inherent computational complexity of P. It should be borne in mind that a preliminary step in the study of complexity of a problem P is the choice of the measure μ(S) to be used. In other words, we must decide, on mathematical or practical grounds, which complexity we want to investigate. Our study proceeds once this choice is made. In broad lines, with more detailed examples and illustrations to come later, here are the main issues that will concern us. With the exception of the last item, they seem to fall into pairs.
(1) Find efficient algorithms for the problem P.
(2) Establish lower bounds for the inherent complexity of P.
(3) Search for exact solutions of P.
(4) Algorithms for approximate (near) solutions.
(5) Study of worst-case inherent complexity.
(6) Study of the average complexity of P.
(7) Sequential algorithms for P.
(8) Parallel-processing algorithms for P.
(9) Software algorithms.
(10) Hardware-implemented algorithms.
(11) Solution by probabilistic algorithms.
Under (1) we mean the search for good practical algorithms for a given problem. The challenge stems from the fact that the immediately obvious algorithms are often replaceable by much superior ones. Improvements by a factor of 100 are not unheard of. But even a saving of half the cost may sometimes mean the difference between feasibility and nonfeasibility. While any one algorithm AL for P yields an upper bound F_AL(n) on the complexity of P, we are also interested in lower bounds. The typical result states that every AL solving P satisfies g(n) ≤ F_AL(n), at least for n₀ < n, where n₀ = n₀(AL). In certain happy circumstances upper bounds meet lower bounds. The complexity for such a problem is then completely known. In any case, besides the mathematical interest in lower bounds, once a lower bound is found it guides us in the search for good algorithms by indicating which efficiencies should not be attempted. The idea of near-solutions (4) for a problem is significant because sometimes a practically satisfactory near-solution is much easier to calculate than the exact solution.
The main questions (1) and (2) can be studied in combination with one or more of the alternatives (3)-(11). Thus, for example, we can investigate an upper bound for the average time required for sorting by k processors working in parallel. Or we may study the number of logical gates needed for sorting n input bits. It would seem that with the manifold possibilities of choosing the complexity measure and the variety of questions that can be raised, the theory of complexity of computations would become a collection of scattered results and unrelated methods. A theme that we try to stress in the examples we present is the large measure of coherence within this field and the commonality of ideas and methods that prevail throughout. We shall see that efficient algorithms for the parallel evaluation of polynomials are translatable into circuits for fast addition of n-bit numbers. The Fast Fourier Transform idea yields good algorithms for multiprecision number multiplication. On a higher plane, the relation between software and hardware algorithms mirrors the relation between sequential and parallel computations. Present-day programs are designed to run on a single processor and thus are sequential, whereas a piece of hardware contains many identical subunits which can be viewed as primitive processors operating in parallel. The method of preprocessing appears time and again in our examples, thus being another example of commonality.
4
Results
4.1 Complexity of General Recursive Functions
In [13, 14] the present author initiated the study of the classification of computable functions from integers to integers by the complexity of their computation. The framework is axiomatic, so that the notions and results apply to every reasonable class of algorithms and every measure on computations. Let K be a class of algorithms, possibly based on some model of mathematical machines, so that for every computable function f: N → N there exists an AL ∈ K computing it. We do not specify the measure μ(S) on computations S but rather assume that μ satisfies certain natural axioms. These axioms are satisfied by all the concrete examples of measures listed in 3.1. The size of an integer n is taken to be |n| = n. The computation of f is a problem where for each instance n we have to find f(n). Along the lines of 3.1 (4), we have for each algorithm AL
for f the complexity of computation function F_AL(n) measuring the work involved in computing f(n) by AL.

THEOREM [13, 14]. For every computable function g: N → N there exists a computable function f: N → {0, 1} so that for every algorithm AL ∈ K computing f there exists a number n₀ such that

g(n) < F_AL(n),  for n₀ < n.   (6)

We require that f be a 0-1 valued function because otherwise we could construct a complex function by simply allowing f(n) to grow very rapidly so that writing down the result would be hard. The limitation n₀ < n in (6) is necessary. For every f and k we can construct an algorithm incorporating a table of the values f(n), n ≤ k, making the calculation trivial for n ≤ k. The main point of the above theorem is that (6), with a suitable n₀ = n₀(AL), holds for every algorithm for f. Thus the inherent complexity of computing f is larger than g. Starting from [14], M. Blum [1] introduced different but essentially equivalent axioms for the complexity function. Blum obtained many interesting results, including the speed-up theorem. This theorem shows the existence of computable functions for which there is no best algorithm. Rather, for every algorithm for such a function there exists another algorithm computing it much faster. Research in this area of abstract complexity theory made great strides during the last decade. It served as a basis for the theory of complexity of computations by first bringing up the very question of the cost of a computation, and by emphasizing the need to consider and compare all possible algorithms solving a given problem. On the other hand, abstract complexity theory does not come to grips with specific computational tasks and their measurement by practically significant yardsticks. This is done in the following examples.
4.2 Algebraic Calculations
Let us start with the example of evaluation of polynomials. We take as our measure the number of arithmetical operations and use the notation (nA, kM) to denote a cost of n additions/subtractions and k multiplications/divisions. By rewriting the polynomial (1) as

f(x) = (...((aₙx + aₙ₋₁)x + aₙ₋₂)x + ...)x + a₀,

we see that the general n-degree polynomial can be evaluated by (nA, nM). In the spirit of questions (1) and (2) in 3.2, we ask whether a clever algorithm might use fewer operations. Rather delicate mathematical arguments show that the above number is optimal, so that this question is completely settled.
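A minimal Python rendering of the rewriting just shown (Horner's rule): the degree-n polynomial is evaluated with exactly n multiplications and n additions. The function name is ours.

def horner(coeffs, x):
    # Evaluate a_n*x^n + ... + a_0 as (...((a_n*x + a_{n-1})*x + ...)*x + a_0.
    # `coeffs` lists a_n first; exactly n multiplications and n additions.
    value = coeffs[0]
    for a in coeffs[1:]:
        value = value * x + a
    return value

print(horner([2, -3, 0, 5], 4))   # 2*4**3 - 3*4**2 + 5 = 85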
T. Motzkin introduced in [9] the important idea of preprocessing for a computation. In many important applications we are called upon to evaluate the same polynomial f(x) for many argument values x = c₁, x = c₂, .... He suggested the following strategy of preprocessing the coefficients of the polynomial (1). Calculate once and for all certain numbers α₀, ..., αₙ from the given coefficients a₀, ..., aₙ. When evaluating f(c), use α₀, ..., αₙ. This approach makes computational sense when the cost of preprocessing is small as compared to the total savings in computing f(c₁), f(c₂), ..., i.e., when the expected number of arguments for which f(x) is to be evaluated is large. Motzkin obtained the following.

THEOREM. Using preprocessing, a polynomial of degree n can be evaluated by (nA, (⌊n/2⌋ + 2)M).
Again one can prove that this result is essentially the best possible. What about evaluation in parallel? If we use k processors and have to evaluate an expression requiring at least m operations, then the best that we can hope for is computation time (m/k) - 1 + log₂k. Namely, assume that all processors are continuously busy; then m - k operations are performed in time (m/k) - 1. The remaining k operations must combine by binary operations k inputs into one output, and this requires time log₂k at least. In view of the above, the following result due to Munro and Paterson [10] is nearly best possible.

THEOREM. The polynomial (1) can be evaluated by k processors working in parallel in time (2n/k) + log₂k + O(1).

With the advances in hardware it is not unreasonable to expect that we may be able to employ large numbers of processors on the same task. Brent [3], among others, studied the implications of unlimited parallelism and proved the following.

THEOREM. Let E(x₁, ..., xₙ) be an arithmetical expression, where each variable xᵢ appears only once. The expression E can be evaluated under unlimited parallelism in time 4 log₂n.
Another important topic is the Fast Fourier Transform (FFT). The operation of convolution, which has many applications, such as to signal processing, is an example of a computation greatly facilitated by the FFT. Let a₁, ..., aₙ be a sequence of n numbers, and let b₁, b₂, ... be a stream of incoming numbers. Define for i = 1, 2, ...,

cᵢ = a₁bᵢ + a₂bᵢ₊₁ + ... + aₙbᵢ₊ₙ₋₁.   (7)

We have to calculate the values c₁, c₂, .... From (7) it seems that the cost per value of cᵢ is 2n operations. If we compute the cᵢ's in blocks of size n, i.e., c₁, ..., cₙ, and cₙ₊₁, ..., c₂ₙ, etc., using the FFT, then the cost per block is about 5n log₂n, so that the cost of a single cᵢ is 5 log₂n. Using a clever combination of algebraic and number-theoretic ideas, S. Winograd [20] recently improved computation times of convolution
for small values of n, and of the discrete Fourier transform for small to medium values of n. For n ≈ 1000, for example, his method is about twice as fast as the conventional FFT algorithm. The obvious methods for n × n matrix multiplication and for the solution of the system (2) of n linear equations in n unknowns require about n³ operations. Strassen [17] found the following surprising result.

THEOREM. Two n × n matrices can be multiplied using at most 4.7n^2.81 operations. A system of n linear equations in n unknowns can be solved by 4.8n^2.81 operations.
It is not likely that the exponent log₂7 ≈ 2.81 is really the best possible, but at the time of writing of this article all attempts to improve this result have failed.
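Returning to the convolution example (7), the block idea can be sketched as follows: one FFT-based convolution produces a whole block of c values at once instead of spending 2n operations on each. This is only an illustration using NumPy's FFT, not the improved algorithm of [20].

import numpy as np

def sliding_products_fft(a, b_block):
    # Compute c_i = a_1*b_i + ... + a_n*b_{i+n-1} for one block of outputs,
    # using a single FFT-based convolution instead of n multiplications per value.
    n = len(a)
    m = len(b_block) - n + 1           # number of c values this block yields
    size = 1
    while size < len(a) + len(b_block) - 1:
        size *= 2                      # pad to a power of two for the FFT
    fa = np.fft.rfft(a[::-1], size)    # correlation = convolution with reversed a
    fb = np.fft.rfft(b_block, size)
    full = np.fft.irfft(fa * fb, size)
    return full[n - 1:n - 1 + m]       # the 'valid' part: c_1 ... c_m

# check against the direct 2n-operations-per-value formula (7)
rng = np.random.default_rng(0)
a = rng.standard_normal(8)
b = rng.standard_normal(15)            # enough of the stream for one block of 8 outputs
direct = np.array([np.dot(a, b[i:i + 8]) for i in range(8)])
assert np.allclose(direct, sliding_products_fft(a, b))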
4.3 How Fast Can We Add or Multiply?
This obviously important question underwent thorough analysis. A simple fan-in argument shows that if gates with r inputs are used, then a circuit for the addition of n-bit numbers requires at least time log_r n. This lower bound is in fact achievable. It is worthwhile noticing that, in the spirit of the remarks in 3.2 concerning the analogy between parallel algorithms and hardware algorithms, one of the best results on circuits for addition (Brent [2]) employs Boolean identities which are immediately translatable into an efficient parallel evaluation algorithm for polynomials. The above results pertain to the binary representation of the numbers to be added. Could it be that under a suitably clever coding of the numbers 0 ≤ a < 2ⁿ, addition mod 2ⁿ is performable in time less than log_r n? Winograd [19] answered this question. Under very general assumptions on the coding, the lower bound remains log_r n. Turning to multiprecision arithmetic, the interesting questions arise in connection with multiplication. The obvious method for multiplying numbers of length n involves n² bit-operations. Early attempts at improvements employed simple algebraic identities and resulted in a reduction to O(n^1.58) operations. Schönhage and Strassen [16] utilized the connection between multiplication of natural numbers and polynomial multiplication and employed the FFT to obtain the following theorem.

THEOREM. Two n-bit numbers can be multiplied by O(n log n log log n) bit-operations.
Attempts at lower bounds for the complexity of integer multiplication must refer to a specific computational model. Under very reasonable
assumptions Paterson, Fischer, and Meyer [11] have considerably narrowed the gap between the upper and lower bounds by showing the following.

THEOREM. At least O(n log n/log log n) operations are necessary for multiplying n-bit numbers.
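The O(n^1.58) reduction mentioned above rests on an algebraic identity that trades one of the four half-length products for a few extra additions (Karatsuba's method). A minimal sketch, using Python integers to stand in for n-bit numbers:

def karatsuba(x, y):
    # Multiply nonnegative integers with roughly n^1.58 digit operations
    # by trading one of the four half-size products for extra additions.
    if x < 2**32 or y < 2**32:            # small operands: fall back to hardware multiply
        return x * y
    n = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> n, x & ((1 << n) - 1)   # split x = xh*2^n + xl
    yh, yl = y >> n, y & ((1 << n) - 1)
    hh = karatsuba(xh, yh)
    ll = karatsuba(xl, yl)
    mid = karatsuba(xh + xl, yh + yl) - hh - ll   # = xh*yl + xl*yh
    return (hh << (2 * n)) + (mid << n) + ll

assert karatsuba(123456789, 987654321) == 123456789 * 987654321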
4.4 Speed of Parsing
Parsing of expressions in context-free grammars would seem at first sight to require a costly backtracking computation. A dynamic computation which simultaneously seeks the predecessors of all substrings of the string to be parsed leads to an algorithm requiring O(n³) steps for parsing a word of length n. The coefficient of n³ depends on the grammar. This was for a long while the best result, even though for special classes of context-free grammars better upper bounds were obtained. Fischer and Meyer observed that Strassen's algorithm for matrix multiplication can be adapted to yield an O(n^2.81 c(n)) bit-operations algorithm for the multiplication of two n × n Boolean matrices. Here c(n) = log n · log log n · log log log n and is thus O(n^a) for every 0 < a. Valiant [18] found that parsing is at most as complex as Boolean matrix multiplication. Hence, since actually log₂7 < 2.81, the following theorem holds:

THEOREM. Expressions of length n in the context-free language L(G) can be parsed in time d(G)n^2.81.

We again see how results from algebraic complexity bear fruit in the domain of complexity of combinatorial computations.
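The O(n³) dynamic-programming computation mentioned at the start of this section can be sketched as follows for a grammar in Chomsky normal form; the grammar used in the test is a hypothetical one for the language aⁿbⁿ, not the grammar G of Section 2.4.

def cyk_member(word, rules, start="S"):
    # CYK membership test for a grammar in Chomsky normal form.
    # `rules` maps a nonterminal to a list of right-hand sides, each either
    # a single terminal string or a pair of nonterminals.  O(n^3) steps.
    n = len(word)
    if n == 0:
        return False
    # table[(i, l)] = set of nonterminals deriving the substring word[i:i+l]
    table = {(i, 1): {A for A, rhss in rules.items() if word[i] in rhss}
             for i in range(n)}
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            cell = set()
            for k in range(1, length):
                for A, rhss in rules.items():
                    for rhs in rhss:
                        if (isinstance(rhs, tuple)
                                and rhs[0] in table[(i, k)]
                                and rhs[1] in table[(i + k, length - k)]):
                            cell.add(A)
            table[(i, length)] = cell
    return start in table[(0, n)]

# a hypothetical CNF grammar for the language {a^n b^n : n >= 1}
grammar = {"S": [("A", "T"), ("A", "B")], "T": [("S", "B")], "A": ["a"], "B": ["b"]}
assert cyk_member("aaabbb", grammar) and not cyk_member("aabbb", grammar)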
4.5 Data Processing
Of the applications of complexity theory to data processing we discuss the best known example, that of sorting. We follow the formulation given in 2.5. It is well known that the sorting of n numbers in random access memory requires about n log n comparisons. This is both the worst-case behavior of some algorithms and the average behavior of other algorithms under the assumption that all permutations are equally likely. The rearrangement of records R₁, R₂, ..., Rₙ poses additional problems because the file usually resides in some sequential or nearly sequential memory such as magnetic tape or disk. Size limitations enable us to transfer into the fast memory for rearrangement only a small number of records at a time. Still it is possible to develop algorithms for the actual reordering of the files in time cn log n, where c depends on characteristics of the system under discussion. An instructive result in this area is due to Floyd [6]. In his model the file is distributed on a number of pages P₁, ..., Pₘ, and each page contains k records so that Pᵢ contains the records Rᵢ₁, ..., Rᵢₖ. For our purposes we may assume without loss of generality that m = k. The task is to redistribute the records so that Rᵢⱼ will go to page Pⱼ for all 1 ≤ i, j ≤ k. The fast memory is large enough to allow reading in two pages Pᵢ, Pⱼ, redistributing their records, and reading the pages out. Using a recursion analogous to the one employed in the FFT, Floyd proved the following.

THEOREM. The redistribution of records in the above manner can be achieved by k log₂k transfers into fast memory. This result is the best possible.
The lower bound is established by considering a suitable entropy function. It applies under the assumption that within fast memory the records are just shuffled. It is not known whether allowing computations with the records, viewed as strings of bits, may produce an algorithm with fewer fetches of pages.
4.6 Intractable Problems
The domain of theorem proving by machine serves as a source of computational problems which require such an inordinate number of computational steps as to be intractable. In attempts to run programs for the decision problem of Presburger's arithmetic (PA) on the computer, the computation terminated only on the simplest instances tried. A theoretical basis for this pragmatic fact is provided by the following result due to Fischer and Rabin [5].

THEOREM. There exists a constant 0 < c so that for every decision algorithm AL for PA there is a number n₀ such that for every n₀ < n there is a sentence H of the language L (the language for addition of numbers) satisfying (1) l(H) = n, (2) AL takes more than 2^(2^(cn)) steps to determine whether H ∈ PA, i.e., whether H is true in (N, +). Here l(H) denotes the length of H.

The constant c depends on the notation used for stating properties of (N, +). In any case, it is not very small. The rapid growth of the inherent lower bound 2^(2^(cn)) shows that even when trying to solve the decision problem for this very simple and basic mathematical theory, we run into practically impossible computations. Meyer [8] produced examples of theories with even more devastatingly complex decision problems. The simplest level of logical deduction is the propositional calculus. From propositional variables p₁, p₂, ..., we can construct formulas such as [p₁ ∧ ∼p₂] ∨ [p₂ ∧ ∼p₁] by the propositional connectives. The satisfiability problem is to decide for a propositional formula G(p₁, ..., pₙ) whether there exists a truth-value assignment to the variables p₁, ..., pₙ so that G becomes true. The assignment p₁ = F (false), p₂ = T (true), for example, satisfies the above formula.
The straightforward algorithm for the satisfiability problem will require about 2ⁿ steps for a formula with n variables. It is not known whether there exist nonexponential algorithms for the satisfiability problem. The great importance of this question was brought to the forefront by Cook [4]. One can define a natural process of so-called polynomial reduction of one computational problem P to another problem Q. If P is polynomially reducible to Q and Q is solvable in polynomial time, then so is P. Two problems which are mutually reducible are called polynomially equivalent. Cook has shown that the satisfiability problem is equivalent to the so-called problem of cliques in graphs. Karp [7] brings a large number of problems equivalent to satisfiability. Among them are the problems of 0-1 integer programming, the existence of Hamiltonian circuits in a graph, and the integer-valued traveling-salesman problem, to mention just a few examples. In view of these equivalences, if any one of these important problems is solvable in polynomial time then so are all the others. The question whether satisfiability is of polynomial complexity is called the P = NP problem and is justly the most celebrated problem in the theory of complexity of computations.
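A minimal sketch of the straightforward 2ⁿ-step satisfiability algorithm just mentioned, tried on the formula from the preceding paragraph:

from itertools import product

def satisfiable(formula, n):
    # Straightforward 2^n-step satisfiability check: try every truth-value
    # assignment to the n propositional variables.
    return any(formula(*assignment) for assignment in product([False, True], repeat=n))

# the formula [p1 and not p2] or [p2 and not p1] from the text
assert satisfiable(lambda p1, p2: (p1 and not p2) or (p2 and not p1), 2)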
4.7 Probabilistic Algorithms
As mentioned in 3.1, the study of the average behavior or expected time of an algorithm is predicated on the assumption of a probability distribution on the space of instances of the problem. This assumption involves certain methodological difficulties. We may postulate a certain distribution, such as all instances being equally likely, but in a practical situation the source of instances of the problem to be solved may be biased in an entirely different way. The distribution may be shifting with time and will often not be known to us. In the extreme case, most instances which actually come up are precisely those for which the algorithm behaves worst. Could we employ probability in computations in a different manner, one over which we have total control? A probabilistic algorithm AL for a problem P uses a source of random numbers. When solving an instance I ∈ P, a short sequence r = (b₁, ..., bₖ) of random numbers is generated, and these are used in AL to solve I in exact terms. With the exception of the random choice of r, the algorithm proceeds completely deterministically. We say that such an AL solves P in expected time f(n) if for every I ∈ P, |I| = n, AL solves I in expected time less than or equal to f(n). By expected time we mean the average of all solution times of I by AL for all possible choice sequences r (which we assume to be equally likely). Let us notice the difference between this notion and the well-known Monte-Carlo method. In the latter method we construct for a problem
a stochastic process which emulates it and then measure the stochastic process to obtain an approximate solution for the problem. Thus the Monte-Carlo method is, in essence, an analog method of solution. Our probabilistic algorithms, by contrast, use the random numbers b₁, ..., bₖ to determine branchings in otherwise deterministic algorithms and produce exact rather than approximate solutions. It may seem unlikely that such a consultation with a 'throw of the dice' could speed up a computation. The present author systematically studied probabilistic algorithms in [15]. It turns out that in certain cases this approach effects dramatic improvements. The nearest pair in a set of points x₁, ..., xₙ ∈ Rᵏ (k-dimensional space) is the pair xᵢ, xⱼ, i ≠ j, for which the distance d(xᵢ, xⱼ) is minimal. A probabilistic algorithm finds the nearest pair in expected time O(n), more rapidly than any conventional algorithm. The problem of determining whether a natural number n is prime becomes intractable for large n. The present methods break down around n ≈ 10^60 when applied to numbers which are not of a special form. A probabilistic algorithm devised by the author works in time O((log n)³). On a medium-sized computer, 2^400 - 593 was recognized as prime within a few minutes. The method works just as well on any other number of comparable size. The full potential of these ideas is not yet known and merits further study.
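The randomized primality test alluded to here is known today as the Miller-Rabin test. The sketch below follows its usual textbook form rather than the 1976 presentation; each random witness exposes a composite number with probability at least 3/4, so the error probability falls off exponentially with the number of trials.

import random

def probably_prime(n, trials=20):
    # Randomized primality test in the spirit described above (the Miller-Rabin test).
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1                      # n - 1 = d * 2^s with d odd
    for _ in range(trials):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False            # a witnesses that n is composite
    return True

print(probably_prime(2**400 - 593))   # the number the text reports as recognized prime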
5
New Directions Of the possible avenues for further research, let us mention just two.
5.1 Large Data Structures
Commercial needs prescribe the creation of ever larger databases. At the same time, present-day and, even more so, imminent future technologies make it possible to create gigantic storage facilities with varying degrees of freedom of access. Much of the current research on databases is directed at the interface languages between the user and the system. But the enormous sizes of the lists and other structures contemplated would tend to make the required operations on these structures very costly unless a deeper understanding of the algorithms to perform such operations is gained. We can start with the problem of finding a theoretical, but at the same time practically significant, model for lists. This model should be versatile enough to enable specialization to the various types of list structures now in use. What are operations on lists? We can enumerate a few. Search through a list, garbage collection, access to various points in a list,
insertions, deletions, mergers of lists. Could one systematize the study of these and other significant operations? What are reasonable cost functions that can be associated with these operations? Finally, a deep quantitative understanding of data structures could be a basis for recommendations as to technological directions to be followed. Does parallel processing appreciably speed up various operations on data structures? What useful properties can lists be endowed with in associative memories? These are, of course, just examples.
5.2 Secure Communications
Secure communications employ some kind of coding devices, and we can raise fundamental questions of complexity of computations in relation to these systems. Let us illustrate this by means of the system of block-encoding. In block-encoding, one employs a digital device which takes as inputs words of length n and encodes them by use of a key. If x is a word of length n and z is a key (let us assume that keys are also of length n), then let E_z(x) = y, l(y) = n, denote the result of encoding x by use of the key z. A message w = x₁x₂ ... xₖ of length kn is encoded as E_z(x₁)E_z(x₂) ... E_z(xₖ).
If an adversary is able to obtain the current key z, then he can decode the communications between the parties, since we assume that he is in possession of the coding and decoding equipment. He can also interject into the line bogus messages which will decode properly. In commercial communications this possibility is perhaps even more dangerous than the breach of secrecy. In considering security one should take into account the possibility that the adversary gets hold of a number of messages w₁, w₂, ..., in clear text, and in encoded form E_z(w₁), E_z(w₂), .... Can the key z be computed from this data? It would not do to prove that such a computation is intractable. For the results of current complexity theory give us worst-case information. Thus if, say, for the majority of key-retrieval computations a lower bound of 2ⁿ on computational complexity will be established, then the problem will be deemed intractable. But if an algorithm will discover the key in practical time in one case in a thousand, then the possibilities of fraud would be unacceptably large. Thus we need a theory of complexity that will enable us to state and prove that a certain computation is intractable in virtually every case. For example, a block-encoding system is safe if any algorithm for key determination will terminate in practical time only on O(2^-n) of the cases. We are very far from the creation of such a theory, especially at the present stage when P = NP is not yet settled.
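A toy sketch of the block-encoding scheme described above, with a plain XOR standing in for the keyed device E_z; the choice of XOR is ours and is meant only to show the blockwise structure, not to be secure.

def encode_blocks(message_bits, key_bits):
    # Toy block-encoding: cut the message into key-sized blocks x1 x2 ... xk
    # and encode each block with the same key z.  The 'cipher' here is a
    # plain XOR, chosen only to keep the sketch short.
    n = len(key_bits)
    assert len(message_bits) % n == 0
    blocks = [message_bits[i:i + n] for i in range(0, len(message_bits), n)]
    return [[b ^ k for b, k in zip(block, key_bits)] for block in blocks]

z = [1, 0, 1, 1]
w = [0, 1, 1, 0, 1, 1, 0, 0]          # two 4-bit blocks x1, x2
print(encode_blocks(w, z))             # E_z(x1), E_z(x2)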
References
1. Blum, M. A machine independent theory of the complexity of recursive functions. J. ACM 14 (1967), 322-336.
2. Brent, R. P. On the addition of binary numbers. IEEE Trans. Comptrs. C-19 (1970), 758-759.
3. Brent, R. P. The parallel evaluation of algebraic expressions in logarithmic time. In Complexity of Sequential and Parallel Numerical Algorithms, J. F. Traub, Ed., Academic Press, New York, 1973, pp. 83-102.
4. Cook, S. A. The complexity of theorem proving procedures. Proc. Third Annual ACM Symp. on Theory of Comptng., 1971, pp. 151-158.
5. Fischer, M. J., and Rabin, M. O. Super-exponential complexity of Presburger arithmetic. In Complexity of Computation (SIAM-AMS Proc., Vol. 7), R. M. Karp, Ed., 1974, pp. 27-41.
6. Floyd, R. W. Permuting information in idealized two-level storage. In Complexity of Computer Computations, R. Miller and J. Thatcher, Eds., Plenum Press, New York, 1972, pp. 105-109.
7. Karp, R. M. Reducibility among combinatorial problems. In Complexity of Computer Computations, R. Miller and J. Thatcher, Eds., Plenum Press, New York, 1972, pp. 85-103.
8. Meyer, A. R. The inherent computational complexity of theories of order. Proc. Int. Cong. Math., Vol. 2, Vancouver, 1974, pp. 477-482.
9. Motzkin, T. S. Evaluation of polynomials and evaluation of rational functions. Bull. Amer. Math. Soc. 61 (1955), 163.
10. Munro, I., and Paterson, M. Optimal algorithms for parallel polynomial evaluation. J. Comptr. Syst. Sci. 7 (1973), 189-198.
11. Paterson, M., Fischer, M. J., and Meyer, A. R. An improved overlap argument for on-line multiplication. Proj. MAC Tech. Report 40, M.I.T., 1974.
12. Presburger, M. Über die Vollständigkeit eines gewissen Systems der Arithmetik ganzer Zahlen, in welchem die Addition als einzige Operation hervortritt. Comptes-rendus du I Congrès de Mathématiciens des Pays Slaves, Warsaw, 1930, pp. 92-101, 395.
13. Rabin, M. O. Speed of computation and classification of recursive sets. Third Convention Sci. Soc., Israel, 1959, pp. 1-2.
14. Rabin, M. O. Degree of difficulty of computing a function and a partial ordering of recursive sets. Tech. Rep. No. 1, O.N.R., Jerusalem, 1960.
15. Rabin, M. O. Probabilistic algorithms. In Algorithms and Complexity, New Directions and Recent Trends, J. F. Traub, Ed., Academic Press, New York, 1976, pp. 29-39.
16. Schönhage, A., and Strassen, V. Schnelle Multiplikation grosser Zahlen. Computing 7 (1971), 281-292.
17. Strassen, V. Gaussian elimination is not optimal. Num. Math. 13 (1969), 354-356.
18. Valiant, L. G. General context-free recognition in less than cubic time. Rep., Dept. Comptr. Sci., Carnegie-Mellon U., Pittsburgh, Pa., 1974.
19. Winograd, S. On the time required to perform addition. J. ACM 12 (1965), 277-285.
20. Winograd, S. On computing the discrete Fourier transform. Proc. Natl. Acad. Sci. USA 73 (1976), 1005-1006.
Categories and Subject Descriptors: F.2.1 [Analysis of Algorithms and Problem Complexity]: Numerical Algorithms and Problems--computations on polynomials; F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems--sorting and searching; F.4.1 [Mathematical Logic and Formal Languages]: mechanical theorem proving; F.4.2 [Mathematical Logic and Formal Languages]: Grammars and Other Rewriting Systems--grammar types; G.1.0 [Numerical Analysis]: General--computer arithmetic
General Terms: Algorithms, Languages, Security, Theory, Verification
Additional Key Words and Phrases: Complexity theory, parsing
Notation as a Tool of Thought
KENNETH E. IVERSON
IBM Thomas J. Watson Research Center
The 1979 ACM Turing Award was presented to Kenneth E. Iverson by Walter Carlson, Chairman of the Awards Committee, at the ACM Annual Conference in Detroit, Michigan, October 29, 1979. In making its selection, the General Technical Achievement Award Committee cited Iverson for his pioneering effort in programming languages and mathematical notation resulting in what the computing field now knows as APL. Iverson's contributions to the implementation of interactive systems, to the educational uses of APL, and to programming language theory and practice were also noted. Born and raised in Canada, Iverson received his doctorate in 1954 from Harvard University. There he served as Assistant Professor of Applied Mathematics from 1955 to 1960. He then joined International Business Machines Corp. and in 1970 was named an IBM Fellow in honor of his contribution to the development of APL. Dr. Iverson is presently with I. P. Sharp Associates in Toronto. He has published numerous articles on programming languages and has written four books about programming and mathematics: A Programming Language (1962), Elementary Functions (1966), Algebra: An Algorithmic Treatment (1972), and Elementary Analysis (1976). Author's present address: I. P. Sharp Associates, 2 First Canadian Place, Suite 1900, Toronto, Ontario M5X 1B3, Canada.
The importance of nomenclature, notation, and language as tools of thought has long been recognized. In chemistry and in botany, for example, the establishment of systems of nomenclature by Lavoisier and Linnaeus did much to stimulate and to channel later investigation. Concerning language, George Boole in his Laws of Thought [1, p. 24] asserted 'That language is an instrument of human reason, and not merely a medium for the expression of thought, is a truth generally admitted.' Mathematical notation provides perhaps the best-known and best-developed example of language used consciously as a tool of thought. Recognition of the important role of notation in mathematics is clear from the quotations from mathematicians given in Cajori's A History of Mathematical Notations [2, pp. 332, 331]. They are well worth reading in full, but the following excerpts suggest the tone:
By relieving the brain of all unnecessary work, a good notation sets it free to concentrate on more advanced problems, and in effect increases the mental power of the race. A. N. Whitehead
The quantity of meaning compressed into small space by algebraic signs, is another circumstance that facilitates the reasonings we are accustomed to carry on by their aid. Charles Babbage
Nevertheless, mathematical notation has serious deficiencies. In particular, it lacks universality, and must be interpreted differently according to the topic, according to the author, and even according to the immediate context. Programming languages, because they were designed for the purpose of directing computers, offer important advantages as tools of thought. Not only are they universal (general-purpose), but they are also executable and unambiguous. Executability makes it possible to use computers to perform extensive experiments on ideas expressed in a programming language, and the lack of ambiguity makes possible precise thought experiments. In other respects, however, most programming languages are decidedly inferior to mathematical notation and are little used as tools of thought in ways that would be considered significant by, say, an applied mathematician. The thesis of the present paper is that the advantages of executability and universality found in programming languages can be effectively combined, in a single coherent language, with the advantages offered by mathematical notation. It is developed in four stages:
(a) Section 1 identifies salient characteristics of mathematical notation and uses simple problems to illustrate how these characteristics may be provided in an executable notation.
(b) Sections 2 and 3 continue this illustration by deeper treatment of a set of topics chosen for their general interest and utility. Section 2 concerns polynomials, and Section 3 concerns transformations between representations of functions relevant to a number of topics, including permutations and directed graphs. Although these topics might be characterized as mathematical, they are directly relevant to computer programming, and their relevance will increase as
programming continues to develop into a legitimate mathematical discipline.
(c) Section 4 provides examples of identities and formal proofs. Many of these formal proofs concern identities established informally and used in preceding sections.
(d) The concluding section provides some general comparisons with mathematical notation, references to treatments of other topics, and discussion of the problem of introducing notation in context.
The executable language to be used is APL, a general-purpose language which originated in an attempt to provide clear and precise expression in writing and teaching, and which was implemented as a programming language only after several years of use and development [3]. Although many readers will be unfamiliar with APL, I have chosen not to provide a separate introduction to it, but rather to introduce it in context as needed. Mathematical notation is always introduced in this way rather than being taught, as programming languages commonly are, in a separate course. Notation suited as a tool of thought in any topic should permit easy introduction in the context of that topic; one advantage of introducing APL in context here is that the reader may assess the relative difficulty of such introduction. However, introduction in context is incompatible with complete discussion of all nuances of each bit of notation, and the reader must be prepared to either extend the definitions in obvious and systematic ways as required in later uses, or to consult a reference work. All of the notation used here is summarized in Appendix A, and is covered fully in pages 24-60 of APL Language [4]. Readers having access to some machine embodiment of APL may wish to translate the function definitions given here in direct definition form [5, p. 10] (using α and ω to represent the left and right arguments) to the canonical form required for execution. A function for performing this translation automatically is given in Appendix B.
1
Important Characteristics of Notation
In addition to the executability and universality emphasized in the introduction, a good notation should embody characteristics familiar to any user of mathematical notation:
* Ease of expressing constructs arising in problems.
* Suggestivity.
* Ability to subordinate detail.
* Economy.
* Amenability to formal proofs.
The foregoing is not intended as an exhaustive list, but will be used to shape the subsequent discussion. Unambiguous executability of the notation introduced remains important, and will be emphasized by displaying below an expression the explicit result produced by it. To maintain the distinction between expressions and results, the expressions will be indented as they automatically are on APL computers. For example, the integer function denoted by ι produces a vector of the first N integers when applied to the argument N, and the sum reduction denoted by +/ produces the sum of the elements of its vector argument, and will be shown as follows:

      ι5
1 2 3 4 5
      +/ι5
15

We will use one nonexecutable bit of notation: the symbol ↔ appearing between two expressions asserts their equivalence.

1.1
Ease of Expressing Constructs Arising in Problems
If it is to be effective as a tool of thought, a notation must allow convenient expression not only of notions arising directly from a problem but also of those arising in subsequent analysis, generalization, and specialization. Consider, for example, the crystal structure illustrated by Figure 1, in which successive layers of atoms lie not directly on top of one another, but lie 'close-packed' between those below them. The numbers of atoms in successive rows from the top in Figure 1 are therefore given by ι5, and the total number is given by +/ι5.
[FIGURE 1: a close-packed arrangement of circles representing atoms, in rows of 1, 2, 3, 4, and 5.]

The three-dimensional structure of such a crystal is also close-packed; the atoms in the plane lying above Figure 1 would lie between the atoms in the plane below it, and would have a base row of four atoms. The complete three-dimensional structure corresponding to Figure 1 is therefore a tetrahedron whose planes have bases of lengths 1, 2, 3, 4, and 5. The numbers in successive planes are therefore the partial sums of the vector ι5, that is, the sum of the first
element, the sum of the first two elements, etc. Such partial sums of a vector V are denoted by +\V, the function +\ being called sum scan. Thus:

      +\ι5
1 3 6 10 15
      +/+\ι5
35
The final expression gives the total number of atoms in the tetrahedron. The sum +/ι5 can be represented graphically in other ways, such as shown on the left of Figure 2. Combined with the inverted pattern on the right, this representation suggests that the sum may be simply related to the number of units in a rectangle, that is, to a product.

[FIGURE 2: the triangle of circles representing +/ι5 shown beside the same triangle inverted; pushed together they fill a 5-by-6 rectangle of circles.]
The lengths of the rows of the figure formed by pushing together the two parts of Figure 2 are given by adding the vector ι5 to the same vector reversed. Thus:

      ⌽ι5
5 4 3 2 1
      (ι5)+(⌽ι5)
6 6 6 6 6

This pattern of 5 repetitions of 6 may be expressed as 5ρ6, and we have:

      5ρ6
6 6 6 6 6
      +/5ρ6
30
      6×5
30

The fact that +/5ρ6 ↔ 6×5 follows from the definition of multiplication as repeated addition. The foregoing suggests that +/ι5 ↔ (6×5)÷2 and, more generally, that:

      +/ιN ↔ ((N+1)×N)÷2          A.1
1.2
Suggestivity
A notation will be said to be suggestive if the forms of the expressions arising in one set of problems suggest related expressions which find application in other problems. We will now consider related uses of the functions introduced thus far, namely:

      ι   ⌽   ρ   +/   +\
The example:

      5ρ2
2 2 2 2 2
      ×/5ρ2
32

suggests that ×/MρN ↔ N*M, where * represents the power function.
The similarity between the definitions of power in terms of times, and of times in terms of plus, may therefore be exhibited as follows:

      ×/MρN ↔ N*M
      +/MρN ↔ N×M
Similar expressions for partial sums and partial products may be developed as follows:

      ×\5ρ2
2 4 8 16 32
      +\5ρ2
2 4 6 8 10

      ×\MρN ↔ N*ιM
      +\MρN ↔ N×ιM
Because they can be represented by a triangle as in Figure 1, the sums +\ι5 are called triangular numbers. They are a special case of the figurate numbers obtained by repeated applications of sum scan, beginning either with +\ιN, or with +\Nρ1. Thus:

      5ρ1
1 1 1 1 1
      +\5ρ1
1 2 3 4 5
      +\+\5ρ1
1 3 6 10 15
      +\+\+\5ρ1
1 4 10 20 35
Replacing sums over the successive integers by products yields the factorials as follows:

      ι5
1 2 3 4 5
      ×/ι5
120
      ×\ι5
1 2 6 24 120
      !ι5
1 2 6 24 120
Part of the suggestive power of a language resides in the ability to represent identities in brief, general, and easily remembered forms. We will illustrate this by expressing dualities between functions in a form which embraces DeMorgan's laws, multiplication by the use of logarithms, and other less familiar identities. If V is a vector of positive numbers, then the product ×/V may be obtained by taking the natural logarithms of each element of V (denoted by ⍟V), summing them (+/⍟V), and applying the exponential function (*+/⍟V). Thus:

      ×/V ↔ *+/⍟V
Since the exponential function * is the inverse of the natural logarithm ⍟, the general form suggested by the right side of the identity is:

      IG F/G V

where IG is the function inverse to G. Using ∧ and ∨ to denote the functions and and or, and ~ to denote the self-inverse function of logical negation, we may express DeMorgan's laws for an arbitrary number of elements by:

      ∧/B ↔ ~∨/~B
      ∨/B ↔ ~∧/~B
The elements of B are, of course, restricted to the boolean values 0 and 1. Using the relation symbols to denote functions (for example, X < Y yields 1 if X is less than Y and 0 otherwise) we can express further dualities, such as:
=-1B -1-
B
Finally using I and L to denote the maximum and minimum functions, we can express dualities which involve arithmetic negation:
F/V L/V
-L--V 4
-F/-V
It may also be noted that scan (F ) may replace reduction (F /) in any of the foregoing dualities. Notation as a Tool of Thought
345
1.3
Subordination of Detail As Babbage remarked in the passage cited by Cajori, brevity facilitates reasoning. Brevity is achieved by subordinating detail, and we will here consider three important ways of doing this: the use of arrays, the assignment of names to functions and variables, and the use of operators. We have already seen examples of the brevity provided by onedimensional arrays (vectors) in the treatment of duality, and further subordination is provided by matrices and other arrays of higher rank, since functions defined on vectors are extended systematically to arrays of higher rank. In particular, one may spec ify the axis to which a function applies. For example, [ 1 ] M acts along the first axis of a matrix M to reverse each of the columns, and 4 [ 2 ] M reverses each row; M, E 1 ] N catenates columns (placing M above N ), and M, [ 2 ]N catenates rows; and +/ [ 1 ] M sums columns and + / [ 2 ] M sums rows. If no axis is specified, the function applies along the last axis. Thus + /M sums rows. Finally, reduction and scan along the first axis may be denoted by the symbols / and . Two uses of names may be distinguished: constant names which have fixed referents are used for entities of very general utility, and ad hoc names are assigned (bby means of the symbol - ) to quantities of interest in a narrower context. For example, the constant (name) 1 4 4 has a fixed referent, but the names CRA TE, LA YER, and ROW assigned by the expressions
CRA TE
+ 14'4 LAYER - CRATE+8 ROW + LAYER+3
are ad hoc, or variable names. Constant names for vectors are also provided, as in 2 3 5 7 1 1 for a numeric vector of five elements, and in IA BCDE' for a character vector of five elements. Analogous distinctions are made in the names of functions. Constant names such as +, x, and *, are assigned to so-called primitive functions of general utility. The detailed definitions, such as + / Mp N for N xMand x / Mp N, for N * M, are subordinated by the constant names x and *. Less familiar examples of constant function names are provided by the comma which catenates its arguments as illustrated by:
(i5),(45)
346
KENNETH E. IVERSON
+---
L2 3 4 5 5 4 3 2 1
and by the base-representation function T. which produces a representation of its right argument in the radix specified by its left argument. For example: 2
2
2
T
3
2
2
2
T
4
BN*2 BN
-
2
2 T 0
0
0
0
0
1
1
1
1
0
0
1
1
0
0
1
1
0
1
0
1
0
1
0
1
0
1
1
1 I-
0
0
1
2
3
4
5
6
7
BN , BN 0 0 0 0 0 1 0 1 0
0 1 1 1 1 1 1 1 1 0 0 0 0 1 0 0 1 1 1 1 0 0 1 1 0 0 1 0 1 0 1 1 0 1 0 1 0 1 0
The matrix BN is an important one, since it can be viewed in several ways. In addition to representing the binary numbers, the columns represent all subsets of a set of three elements, as well as the entries in a truth table for three boolean arguments. The general expression for N elements is easily seen to be ( N p 2 ) T ( i 2 * N ) - 1, and we may wish to assign an ad hoc name to this function. Using the direct definition for (Appendix B), the name T is assigned to this function as follows: T: ( wp 2 )T(
i 2*co )-
1
A.2
The symbol w represents the argument of the function; in the case of two arguments the left is represented by a. Following such a definition of the function T. the expression T 3 yields the boolean matrix BN shown above. Three expressions, separated by colons, are also used to define a function as follows: the middle expression is executed first; if its value is zero the first expression is executed, if not, the last expression is executed. This form is convenient for recursive definitions, in which the function is used in its own definition. For example, a function which produces binomial coefficients of an order specified by its argument may be defined recursively as follows: BC:(X,O)+(sO,X -BC w-1):w=0:1
A.3
Thus BC 0 -- 1 and BC 1 +-- 1 1 and BC 4 E 1 4 6 4 1. The term operator, used in the strict sense defined in mathematics rather than loosely as a synonym for function, refers to an entity which applies to functions to produce functions; an example is the derivative operator. Notation as a Tool of Thought 347
We have already met two operators, reduction, and scan, denoted by / and , and seen how they contribute to brevity by applying to different functions to produce families of related functions such as + / and x / and A / We will now illustrate the notion further by introducing the inner product operator denoted by a period. A function (such as + / ) produced by an operator will be called a derived function. If P and Q are two vectors, then the inner product + . x is defined by:
Pa+.,x
+/pxQ
and analogous definitions hold for function pairs other than + and x. For example: Pi-2 3 Q-2 1 P+ . xQ
5 2
17
P x . *Q 300 PL . tQ
4 Each of the foregoing expressions has at least one useful interpretation: P.+ x Q is the total cost of order quantities Q for items whose prices are given by P; because P is a vector of primes, P x . * Q is the
number whose prime decomposition is given by the exponents Q; and if P gives distances frorr, a source to transhipment points and Q gives distances from the transhipment points to the destination, then P L . +Q gives the minimum distance possible.
The function + . x is equivalent to the inner product or dot product of mathematics, and is extended to matrices as in mathematics. Other cases such as x . * are extended analogously. For example, if T is the function defined by A.2, there : 0 0 0 0
0 0 1 5
T 3 0 0 1 1 1 1 1 1 0 0 1 1 0 1 0 1 0 1 P+.xT 3 3 8 2 7 5 1C
1
5
3
Px.*T 15 2
3 10
6
30
These examples bring out an important point: if B is boolean, then P+ . x B produces sums over subsets of P specified by 1 's in B, and P x . * B produces products over subsets.
The phase o . x is a special use of the inner product operator to produce a derived function which yields products of each element of its left argument with each element of its right. For example: 2 2 3 5
348
KENNETH E. IVERSON
4 6 10
6 9 15
3
5o.xi5 8 10 12 15 20 25
The function a x is called outer product, as it is in tensor analysis and functions such as o . and a . * and . < are defined analogously, producing 'function tables' for the particular functions. For example: D-O0
1 1 1
0
2 2 3 3
1 2
3
Do.ŽD 1 00 1 1 0 0 1 1 1 1 1 1 1
D-.FD 2 3 2 3 2 3 3 3
Do.!D
1
11
0 1 2 3 00 0 1 3 0 0 0 1
The symbol ! denotes the binomial coefficient function, and the table . ! D is seen to contain Pascal's triangle with its apex at the left; if extended to negative arguments (as with D - 3 2 1 0 1 2 3 ) it will be seen to contain the triangular and higher-order figurate numbers as well. This extension to negative arguments is interesting for other functions as well. For example, the table D o . x D consists of four quadrants separated by a row and a column of zeros, the quadrants showing clearly the rule of signs for multiplication. Patterns in these function tables exhibit other properties of the functions, allowing brief statements of proofs by exhaustion. For example, commutativity appears as a symmetry about the diagonal. More precisely, if the result of the transpose function I (which reverses the order of the axes of its argument) applied to a table T-D ° . fD agrees with T, then the function f is commutative on the domain. For example, T =? T- D ° . r D produces a table of 1 's because f is commutative. Corresponding tests of associativity require rank 3 tables of the form Do . f ( Do . fD ) and ( Do . fD ) ° . fD . For example: D
D-O Do.A(Do.AD)
00 00 00 0 1
1 (DO.AD)O.AD
Do.•(Do.•D)
00
0 0 0 1
(Do.5D)o.•D
11 10 1
0 0
1 1
11 0 1
11 0 1
1.4 Economy The utility of a language as a tool of thought increases with the range of topics it can treat, but decreases with the amount of vocabulary and the complexity of grammatical rules which the user must keep in mind. Economy of notation is therefore important. Economy requires that a large number of ideas be expressible in terms of a relatively small vocabulary. A fundamental scheme for achieving this is the introduction of grammatical rules by which meaningful phrases and sentences can be constructed by combining elements of the vocabulary. Notation as a Tool of Thought
349
This scheme may be illustrated by the first example treated - -the relatively simple and widely useful notion of the sum of the first N integers was not introduced as a primitive, but as a phrase constructed from two more generally useful notions, the function i for the production of a vector of integers, and the function + / for the summation of the elements of a vector. Moreover, the derived function + / is itself a phrase, summation being a derived function constructed from the more general notion of the reduction operator applied to a particular function. Economy is also achieved by generality in the functions introduced. For example, the definition of the factorial function denoted by ! is not restricted to integers, and :he gamma function of X may therefore be written as! X - 1. Similarly, the relations defined on all real arguments provide several important logical functions when applied to boolean arguments: exclusive-or ( X ), material implication ( • ), and equivalence ( = ). The economy achieved fcr the matters treated thus far can be assessed by recalling the vocabulary introduced: I / +
4t
p X - *.!r
V A--.<=
-
L
T
0
> X
The five functions and three operators listed in the first two rows are of primary interest, the remaining familiar functions having been introduced to illustrate the versatility of the operators. A significant economy of symbols, as opposed to economy of functions, is attained by allowing anry symbol to represent both a monadic function (i.e., a function of one argument) and a dyadic function, in the same manner that the minus sign is commonly used for both subtraction and negation. Because the two functions represented may, as in the case of the minus sign, be related, the burden of remembering symbols is eased. For example, X * Y and * Y represent power and exponential, Xe Y and * Y represent base X logarithm and natural logarithm, X z Y and . Y represent division and reciprocal, and X ! Y and ! Y represent the binomial coefficient faction and the factorial (that is, X ! Y+-( ! Y ) * ( ! X ) x ( ! Y-A ) ). The symbol p used for the dyadic function of replication also represents a monadic function which gives the shape of the argument (that is, X-*-.p Xp Y ), the symbol < used for the monadic reversal function also represents the dyadic rotate function exemplified by 24 4i 5++3 4 5 1 2, and by - 24t i 5 .-- 4 5 1 2 3, and finally, the comma represents not only catenation, but also the monadic ravel, which produces a vector of the elements of its argument in 'row-major' order. For example:
T 2 0 0
0 1 1 1 0 1
350 KENNETH E. IVERSON
,T 0
0
1 1 0
2 1 0
1
Simplicity of the grammatical rules of a notation is also important. Because the rules used thus far have been those familiar in mathematical notation, they have not been made explicit, but two simplifications in the order of execution should be remarked: (1) All functions are treated alike, and there are no rules of precedence such as x being executed before + . (2) The rule that the right argument of a monadic function is the value of the entire expression to its right, implicit in the order of execution of an expression such as SI N L OG ! N, is extended to dyadic functions. The second rule has certain useful consequences in reduction and scan. Since F / V is equivalent to placing the function F between the elements of V the expression - / V gives the alternating sum of the elements of v, and * / V gives the alternating product. Moreover, if B is a boolean vector, then < B 'isolates' the first 1 in B, since all elements following it become o. For example: <0
0
1 1 0 1 1 --
00
1 0
0
0 0
Syntactic rules are further simplified by adopting a single form for all dyadic functions, which appear between their arguments, and for all monadic functions, which appear before their arguments. This contrasts with the variety of rules in mathematics. For example, the symbols for the monadic functions of negation, factorial, and magnitude precede, follow, and surround their arguments, respectively. Dyadic functions show even more variety. 1.5
Amenability to Formal Proofs The importance of formal proofs and derivations is clear from their role in mathematics. Section 4 is largely devoted to formal proofs, and we will limit the discussion here to the introduction of the forms used. Proof by exhaustion consists of exhaustively examining all of a finite number of special cases. Such exhaustion can often be simply expressed by applying some outer product to arguments which include all elements of the relevant domain. For example, if D+0 1,thenDo.AD gives all cases of application of the and function. Moreover, DeMorgan's law can be proved exhaustively by comparing each element of the matrix Do .D with each element of (-D ) o .v ( -D ) as follows: -(-D)°.v(-D)
Do.AD 0 0 0 1
0 0 0 1
(Do.AD)=(-(-D)
1
o.v(-D))
AD
Notation as a Tool of Thought
351
Questions of associativity can be addressed similarly, the following expressions showing the associativity of and and the nonassociativity of not-and: A/,((Do.AL1)o.AD)=(DoA.(DA.AD)) 1 A/,(
(D° .7sD)0 .,-D)=(D- .N( Do .*D) )
0
A proof by a sequence of identities is presented by listing a sequence of expressions, annotating each expression with the supporting evidence for its equivalence with its predecessor. For example, a formal proof of the identity A. 1 suggested by the first example treated would be presented as follows: +I/iN
+/4)t N
+ is associative and commutative (X+X)+2--X + is associative and commutative Lemma Definition of x
((i+/iN)+( +14)iN) )+2
(+/((IN)+(4iN)))+2 (+/((N+l)pN))+2 ((N+l)xN)+2
The fourth annotation above concerns an identity which, after observation of the pattern in the special case ( i 5 ) + ( 4 i 5 ), might be considered obvious or might be considered worthy of formal proof in a separate lemma. Inductive proofs proceed in two steps: (1) some identity (called the induction hypothesis) is assumed true for a fixed integer value of some parameter N and this assumption is used to prove that the identity also holds for the value N + Iand (2) the identity is shown to hold for some integer value K. The conclusion is that the identity holds for all integer values of N which equal or exceed K. Recursive definitions often provide convenient bases for inductive proofs. As an example we will use the recursive definition of the binomial coefficient function BC given by A.3 in an inductive proof showing that the sum of the binomial coefficients of order N is 2 * N. As the induction hypothesis we assume the identity: +/BC
N
4-
2*N
and proceed as follows: +/BC N+1 +/(X,O)+(O,X-BC N) (i+/X,O)+(+/O,X) (+/X )+( +/X) 2x+I/X
2x+/BC N 2x2*N
2*N+1 352 KENNETH E. IVERSON
A.3 + is associative and commutative O+Y
MY
Y-Y~--2xY Definition of X Induction hypothesis Property of Power (*)
It remains to show that the induction hypotheses is true for some integer value of N. From the recursive definition A.3, the value of -C 0 is the value of the rightmost expression, namely 1. Consequently, t /BC 0 is 1, and therefore equals 2 * 0 . We will conclude with a proof that DeMorgan's law for scalar arguments, represented by:
AAB
- (-A
)v(-B )
AA4
and proved by exhaustion, can indeed be extended to vectors of arbitrary length as indicated earlier by the putative identity:
A/V
+
-v/-V
A.5
As the induction hypothesis we will assume that A.5 is true for vectors of length ( p V ) - 1.
We will first give formal recursive definitions of the derived functions and-reduction and or-reduction ( A/ and v / ), using two new primitives, indexing, and drop. Indexing is denoted by an expression of the form X[ I ], where I is a single index or array of indices of the vector X . For example, if XK2 3 5 7, then XI2J is 3, and X [ 2 1 ] is 3 2. Drop is denoted by K + X and is defined to drop I K (i.e., the magnitude of K) elements from X, from the head if K > 0 and from the tail if K
A.6 A.7
The inductive proof of A.5 proceeds as follows: A/V (V[1] )A(A/1+V)
-(-V[1] )v(-A/J+V) -(-V[1] )v(--v/-1+V) -(VI I I)v( v/-l+V) -v/(-Vl)] ),(-1+V) v/ (V[1], 1+V) -v/-V
A.6 A.4 A.5 -- X-*-IX A.7 v distributes over Definition of , (catenation) Notation as a Tool of Thought
353
2
Polynomials If C is a vector of coefficients and X is a scalar, then the polynomial in X with coefficients C may bce written simply as + / C x X * 1 + i p C, or+/(X* 1+ipC)xCor (X*1+pC)+.xC. However, to apply to a non scalar array of arguments X, the power function * should be replaced by the power table o . * as shown in the following definition of the polynomial function: P:(wo. e 1+ipa)+.xa B.1 Forexample, 1 3 3 1 P0 1 2 3 4 -+ 1 8 27 64 125 If pa is replaced by 1 + p a, then the function applies also to matrices and higher dimensional arrays of sets of coefficients representing (along the leading axis of a) collections of coefficients of different polynomials. This definition shows clearly that the polynomial is a linear function of the coefficient vector. Moreover, if a and w are vectors of the same shape, then the pre-ulultiplier w - . * - 1 + i p a is the Vandermonde matrix of w and is therefore invertible if the elements of W are distinct. Hence if C and X are vectors of the same shape, and if Y+C E X, then the inverse (curve-fitting) problem is clearly solved by applying the matrix inverse function Wto the Vandermonde matrix and using the identity: C
+-
(MX.*-I+Ipx)+.Xy
2.1
Products of Polynomials The 'product of two polynomials B and C ' is commonly taken to mean the coefficient vector D such that: D P X +- (B P X)x(C P X) It is well known that D can be computed by taking products over all pairs of elements from B and C and summing over subsets of these products associated with the same exponent in the result. These products occur in the function table B o . XC, and it is easy to show informally that the powers of X associated with the elements of B o . x C ( 1 + p C). For are given by the addition table E+-( 1 + pB) example: X-2 B+3 1 2 3 C+2 0 3 E+V(l+ipB)'.+(1+ipC) Be.xC E X*E 6 2 4
0 0 0
9 3 6
6 0 9
0 1 L 2 2 3
2 3 4
3 4 5 +I,(Bo.xC)'cX*E
51 8 (B P X)x(C 5 18 354
KENNETH E. IVERSON
K) XD
1 2 4
2 4 8
4 8 16
8 16 32
The foregoing suggests the following identity, which will be established formally in Section 4: (B P X)x(C P X)-+/,(Bo.xC)xX*(
l+lpB)-.+(-1+ipC)
B.2
Moreover, the pattern of the exponent table E shows that elements of B o . x C lying on diagonals are associated with the same power, and that the coefficient vector of the product polynomial is therefore given by sums over these diagonals. The table B - . xC therefore provides an excellent organization for the manual computation of products of polynomials. In the present example these sums give the vector D+6 2 13 9 6 9, and D E X may be seen to equal (BPX)x(CPX). Sums over the required diagonals of B a x C can also be obtained by bordering it by zeros, skewing the result by rotating successive rows by successive integers, and then summing the columns. We thus obtain a definition for the polynomial product function as follows: PP:+/(1- ipa)lao .xw,1+Oxa
We will now develop an alternative method based upon the simple observation that if B PP C produces the product of polynomials B and C, then PP is linear in both of its arguments. Consequently, PP: a+ . xA+ .x where A is an array to be determined. A must be of rank 3, and must depend on the exponents of the left argument ( - 1+ pa ), of the result ( - 1 + i p 1 + a , w ) and of the right argument. The 'deficiencies' of the right exponent are given by the difference table ( i p 1 +a , w ) o . - l p w , and comparison of these values with the left exponents yields A . Thus and
A-(-V+ipa)o.=((
pl+a,w)o.-lp'j)
PP:a+.x((Vl+lpa)o.=(PlP+aW)o.-lPw)+.xw
Since a + . xA is a matrix, this formulation suggests that if D-B PP C, then C might be obtained from D by premultiplying it by the inverse matrix ( l B + . xA ), thus providing division of polynomials. Since B+ . xA is not square (having more rows than columns), this will not work, but by replacing M- B+ . xA by either its leading square part( 2pL/pM)+Mor by its trailing square part( -2pL/pM)+M, one obtains two results, one corresponding to division with low-order remainder terms, and the other to division with high-order remainder terms. 2.2 Derivative of a Polynomial Since the derivative of X* N is N x X* N- 1, we may use the rules for the derivative of a sum of functions and of a product of a function with a constant, to show that the derivative of the polynomial C P X is the polynomial ( 1 + Cx - 1+ i pC ) P X. Using this result it is clear that Notation as a Tool of Thought
355
the integral is the polynomial (.A , C + i p C ) E X, where A is an arbitrary scalar constant. The expression 1 4)C x - 1 + . p C also yields the coefficients of the derivative, but as a vector of the same shape as C and having a final zero element.
2.3 Derivative of a Polynomial with Respect to Its Roots If R is a vector of three elements, then the derivatives of the polynomial x / X - R with respect to each of its three roots are -(X - R E 2 ]) x (X - R [ 3 ]) and - ( X - R [ 1 ] ) x ( X - R [ 3 ] ), and X - R [ 1] )x X - R E 2]). More generally, the derivative of x/X-R with respect toRtJ] is simply -(X-R)x.*JxipR, and the vector of derivatives with respect to each of the roots is
- (X-R ) X.*1 o. XI+IpR. The expression x / X - R for a polynomial with roots R applies only to a scalar X, the more general expression being x / X o . - R . Consequently, the general expression for the matrix of derivatives (of the polynomial evaluated at X II] with respect to root R [ J ]) is given by: - ( X .- R) K.*I
.
I+-i pR
B.3
2.4 Expansion of a Polynomial Binomial expansion concerns the development of an identity in the form of a polynomial in X for the expression ( X +Y ) * N. For the special case of Y = 1 we have the well-known expression in terms of the binomial coefficients of order N: (X+1 )*N --
((O,iN)!N)P X
By extension we speak of t.le expansion of a polynomial as a matter of determining coefficients D such that: C P XfY
--
D P X
The coefficients D are, in general, functions of Y. If Y= 1 they again depend only on binomial coefficients, but in this case on the several binomial coefficients of various orders, specifically on the matrix Jo.lJ1+IpC. For example, if C-3 1 2 4, and C P X+ 1++D P X, then D depends on the matrix: 0 1 2 3 o.! 1 1 1 1
0 1 2 3 0 0 1 3 0 0 0 356 KENNETH E. IVERSON
1
0 1 2 3
and D must clearly be a weighted sum of the columns, the weights being the elements of C. Thus: D+( -(Jo.! J
l+ipC)+.xC
Jotting down the matrix of coefficients and performing the indicated matrix product provides a quick and reliable way to organize the otherwise messy manual calculation of expansions. If B is the appropriate matrix of binomial coefficients, then D+Bt . XC, and the expansion function is clearly linear in the coefficients C. Moreover, expansion for Y =- 1 must be given by the inverse matrix WB, which will be seen to contain the alternating binomial coefficients. Finally, since: C E X+(K+1)
-+
C P (X+K)tl
(B+.xC) P (X+K)
--
it follows that the expansion for positive integer values of Y must be given by products of the form: B+. xB+. xB+. xB+. xC where the B occurs Y times. Because + . x is associative, the foregoing can be written as M+ . x C, where M is the product of Y occurrences of B. It is interesting to examine the successive powers of B, computed either manually or by machine execution of the following inner product power function: IPP:a+.xa IPP W-1:=0Jo J+
1+1tpcs
Comparison of B IPP K with B for a few values of K shows an obvious pattern which may be expressed as: B IPP K
+-
BxK*O[-Jo.-J+t1+1+pB
The interesting thing is that the right side of this identity is meaningful for noninteger values of K, and, in fact, provides the desired expression for the general expansion C P X +Y: C
P(XtY)
-
(((Jo.!J)xy*or-Jo.-J-
l+ipc)+.xc)p
x
B.4
The right side of B.4 is of the form ( M+ . x C ) P X, where M itself is of the form B x Y * E and can be displayed informally (for the case 4 = p C ) as follows: 1 1 11 0 1 2 3 0 0 1 2 0 1 2 3 xy* 0 0 0 1 0 0 1 3 0
0
0
1
0
0 0
0
Since Y* Kmultiplies the single-diagonal matrix B x ( K =E ), the expression for M can also be written as the inner product ( Y * J ) + xT, where T is a rank 3 array whose Kth plane is the matrix B x ( K =E ). Such a rank three array can be formed from an upper triangular matrix Notation as a Tool of Thought
357
M by making a rank 3 array whose first plane is M (that is, ( 1 = i 1 + p M ) o . xM) and rotating it along the first axis by the matrix J o - J, whose Kth superdiagonal has the value - K. Thus: DS: (1o DS 1 0 0 1 0 0 0
.
K.o
-I)40
1]( 1=I+i1+pw)o.x0
B.5
K *1 +3
0 0 1
1 0
0 0
2
0 0 0 0 0 1 0 0 0 0 0 0
Substituting these results in B.4 and using the associativity of + . , we have the following identity fcr the expansion of a polynomial, valid for noninteger as well as integer values of Y: C P X+Y -- ((Y*J)+.x(JDS Jo !J+-l+ipC)+.xC)P X B.6 For example: Y- 3
C+3 1 4 2 1 0
M+(Y*,J)i-.XDS Jo. !J-4llpC M 3 9 27 1 6 27
0 0
0 0
1 0
9 1
M+. xC 96 79 22 2 (M-.xC) 358 C P
P X-2
X+Y
358
3 Representations The subjects of mathematical analysis and computation can be represented in a variety of ways;, and each representation may possess particular advantages. For example, a positive integer N may be represented simply by N check-mrarks; less simply, but more compactly, in Roman numerals; even less simply, but more conveniently for the performance of addition and multiplication, in the decimal system; and less familiarly, but more conveniently for the computation of the least common multiple and the greatest common divisor, in the prime decomposition scheme to be discussed here. 358
KENNETH E. IVERSON
Graphs, which concern connections among a collection of elements, are an example of a more complex entity which possesses several useful representations. For example, a simple directed graph of N elements (usually called nodes) may be represented by an N by N boolean matrix B (usually called an adjacency matrix) such that B [ I ; J ] = 1 if there is a connection from node I to node J. Each connection represented by a 1 in B is called an edge, and the graph can also be represented by a + / , B by N matrix in which each row shows the nodes connected by a particular edge. Functions also admit different useful representations. For example, a permutation function, which yields a reordering of the elements of its vector argument X, may be represented by a permutation vector P such that the permutation function is simply X[ P ], by a cycle representation which presents the structure of the function more directly, by the boolean matrix B+ P a= i p P such that the permutation function is B + . x X, or by a radix representation R which employs one of the columns of the matrix 1+ (4 i N ) T 1 + I ! N- p X, and has the property that 2 1+ / R - 1 is the parity of the permutation represented. In order to use different representations conveniently, it is impor tant to be able to express the transformations between representations clearly and precisely. Conventional mathematical notation is often deficient in this respect, and the present section is devoted to developing expressions for the transformations between representations useful in a variety of topics: number systems, polynominals, permutations, graphs, and boolean algebra.
3.1 Number Systems We will begin the discussion of representations with a familiar example, the use of different representations of positive integers and the transformations between them. Instead of the positional or basevalue representations commonly treated, we will use prime decomposition, a representation whose interesting properties make it useful in introducing the idea of logarithms as well as that of number representation (6, Ch.16]. If P is a vector of the first PP primes and E is a vector of nonnegative integers, then E can be used to represent the number P x . * E, and all of the integers F/ P can be so represented. For example, 2 3 5 7 x* 0 0 0 Ois 1 and 2 3 5 7 x* 1 1 0 0 is 6 and: P
2 3 5 7 ME 0 1 0 2 0 1 0 3 0 1 0 0 1 0 0 1 00 2 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0
Px . *ME 1 2 3 4 5 6 7 8 9 10 Notation as a Tool of Thought 359
The similarity to logarithms can be seen in the identity: x/Px.*ME
--
Px.*+/ME
which may be used to effect multiplication by addition. Moreover, if we define GCD and L CM to give the greatest common divisor and least common multiple of elements of vector arguments, then: CCD Px.*ME
-
LCM Px.*ME
--
ME 2
P:<.*L/ME P<.*r/ME
V*-PX.*ME v V 18900 735) 3087 GCD V 21
1
3 1 2 2 2 0 1 2 3
LCM V 926100
Px.*L/ME
Px.*F/ME
926100
21
In defining the function CCD, we will use the operator / with a boolean argument B (as in B/). It produces the compression function which selects elements from its right argument according to the ones in B. For example, 1 0 1 ( 1/ i5 is 1 3 5. Moreover, the function B / applied to a matrix argUment compresses rows (thus selecting certain columns), and the function B / compresses columns to select
rows. Thus: CCD:GCD M,(M*-L/R) IR: 1PR-(wX0 )/ :+/R LCM:(x/X):GCD X--(1t+),LCM 1+w:O=pw:l
The transformation to the value of a number from its prime decomposition representation (VFR) and the inverse transformation to the representation from the value (RFV) are given by: VFR: ax .**
RFV:Dca RFV w ax.*D:A/D4-0=ajw:D
For example: F
VFR
2
1
3
P 1
RFV
10500
1
10500 2
1
3
3.2
Polynomials Section 2 introduced two representations of a polynomial on a scalar argument X, the first in terms of a vector of coefficients C (that is, + / C x x- 1 + i p C ), and the second in terms of its roots R (that is, x / X - R ). The coefficient representation is convenient for adding polynomials (C +D) and for obtaining derivatives (1 + C x - 1 + I p C). 360
KENNETH E. IVERSON
The root representation is convenient for other purposes, including multiplication which is given by R1, R 2. We will now develop a function CFR (Coefficients from Roots) which transforms a roots representation to an equivalent coefficient representation, and an inverse function RFC. The development will be informal; a formal derivation of CFR appears in Section 4. The expression for CFR will be based on Newton's symmetric functions, which yield the coefficients as sums over certain of the products over all subsets of the arithmetic negation (that is, - R ) of the roots R. For example, the coefficient of the constant term is given by x / - R. the product over the entire set, and the coefficient of the next term is a sum of the products over the elements of - R taken ( p R ) - 1 at a time. The function defined by A.2 can be used to give the products over all subsets as follows: P +((-R)x.*M+T pR The elements of P summed to produce a given coefficient depend upon the number of elements of R excluded from the particular product, that is, upon +/ -M, the sum of the columns of the complement of the boolean 'subset' matrix TpR. The summation over P may therefore be expressed as ( ( O p R ) ° . = + / AM ) + . XP, and the complete expression for the coefficients C becomes: C+((OpR)o.=+1 M)+.X( -R)x.*MiT pR For example, if R-2 3 5, then M 0 0 0 0
+'-M 3 2 2 1 2 1 1 0
1 1 1 1
0 0 1 1 0 0 1 1 0 1 0 1 0 1 0 1 (-R)x.*M
1
5
3 15
2 10 6
(OipR)o.=+fM 0 0 0 0 0 0 0 1 0 0 0 1 0 1 1 0
30
0 1 1 0 1 0 0 0 1
0
0
0
( ( O.lpR ) o.= +1 M) +. x (-R )x .*M-T 30 31 10 1
0
0
0
0
pR
The function CFR which produces the coefficients from the roots may therefore be defined and used as follows: CFR: (( Oipw )o .=+f-M)+. CFR 2 3 5 10 1 (CFR 2 3 5) P X-1 8 0 0 2 0 12 40 90 x/Xo.-2 3 5 8 0 0 2 0 12 40 90
x( -
)x.*M-T
pw
C.1
30 31
2 3 4 5 6 7 8
Notation as a Tool of Thought 361
The inverse transformation RFC is more difficult, but can be expressed as a successive approximation scheme as follows: RFC:(-l+ipl+w)G w G:(a-Z)G w:TOL2J/IZ-a STEP w:a-Z STEP:(W(ao.-a)x.*Io. .I+ipac)+.x(ao.*1l+lpw)+.xw 210
O-C+CFR 2 3 5 7 247 101 17 1 TOL-1E 8 RFC C
7 5 2 3 The order of the roots in the result is, of course, immaterial. The final element of any argument of RFC must be 1, since any polynomial equivalent to x / X - R must necessarily have a coefficient of 1 for the high-order term. The foregoing definition cr. RFC applies only to coefficients of polynomials whose roots are all real. The left argument of C in RFC provides (usually satisfactory) initial approximations to the roots, but in the general case some at least must be complex. The following example, using the roots of unity as the initial approximation, was executed on an APL system which handles complex numbers: (*o0J2x(
1+lN)+N-pl+w)Gw
O-C+CFP 1J1 14 11 4 1 RFC C 1J 1 1J2 1J1 1J 2
1J 1
1J2
C.2
1J 2
10
The monadic function o used above multiplies its argument by pi. In Newton's method for the root of a scalar function F, the next approximation is given by A+A - (F A ) +DF A, where DF is the derivative of F. The function STEP is the generalization of Newton's method to the case where F is a vector function of a vector. It is of the form ( MM ) +-.x B, where fins the value of the polynomial with coefficients w, the original argument of RFC, evaluated at a, the current approximation to the roots; analysis similar to that used to derive B.3 shows that M is the matrix of derivatives of a polynomial with roots a, the derivatives being evaluated at a. Examination of the expression for M shows that its off-diagonal elements are all zero, and the expression ( MM) +.x B may therefore be replaced by B + D, where D is the vector of diagonal elements of M. Since (I ,J ) + N drops I rows and J columns from a matrix N, the vector D may be expressed as x / 0 1 + ( - 1 + i p a ) + a o .-a; the definition of the function STE P may therefore be replaced by the more efficient definition: STEP:((ao.* 1+lpw)+.>j)& 362
KENNETH E. IVERSON
x/O 1+(V1+ipa)4ao
.- a
C.3
This last is the elegant method of Kerner [7]. Using starting values given by the left argument of C in C.2, it converges in seven steps (with a tolerance TOL+1E-8) for the sixth-order example given by Kerner.
3.3 Permutations A vector P whose elements are some permutation of its indices (that is, A/ 1 = + / P . = I p ) will be called a permutation vector. If D is a permutation vector such that ( p X ) =p D, then X [ D I is a permutation of X, and D will be said to be the direct representation of this permutation. The permutation X [ DI may also be expressed as B + . xXwhere B is the boolean matrix D o . = i p D. The matrix B will be called the boolean representation of the permutation. The transformations between direct and boolean representations are: BFD:wo.=ipw
DFB:w+.xil+pw
Because permutation is associative, the composition of permutations satisfies the following relations: (X[D1))[D2] B2+.x(Bl+.xX)
-
+-b
X[(D1 [D2I)]
(B2+.xBl)+.xX
The inverse of a boolean representation B is B, and the inverse of a direct representation is either 4 D or D i i p D. (The grade function 4 grades its argument, giving a vector of indices to its elements in ascending order, maintaining existing order among equal elements. Thus 43 7 1 4 is 3 1 4 2 and 43 7 3 4 is 1 3 4 2. The index -of function i determines the smallest index in its left argument of each element of its right argument. For example, ' A B CDE' 'BA BE is2 1 2 5, and'BABE'I 'ABCDE'is 2 1 5 5 4.) The cycle representation also employs a permutation vector. Consider a permutation vector C and the segments of C marked off by the vector C =L C . For example, if C+ 7 3 6 5 2 1 4, then C= L C is 1 1 0 0 1 1 0, and the blocks are: 7
3 6 5 2 1 4 Each block determines a 'cycle' in the associated permutation in the sense that if R is the result of permuting X, then: R[7) R[3J R[2] R[1]
is is is is
X[7] X[6) X[2] X[4]
R[6] is X[5]
R[5J
is X[3]
R[4] is X[1] Notation as a Tool of Thought
363
If the leading element of C is the smallest (that is, 1), then C consists of a single cycle, ancL the permutation of a vector X which it represents is given by XIC]+XE[1C. For example: X'ABCDEFG' C-I 7 6 5 2 4 3 XEC]4-XE 1JC]
X GDACBEF Since XC Q I -A is equivalent to X-A I i Q] , it follows that X [ c I+X[1 f c I is equivalent to X+X [ ( 14C)ECCI, and the direct representation vector D equivalent to C is therefore given (for the special case of a single cycle) by D+( 1 C)E[C] In the more general case, the rotation of the complete vector (that is,10C) must be replaced by rotations of the individual subcycles marked off by C= L C, as shown in the following definition of the transformation to direct from cycle representation: DFC:(w[4AX++X-w=LwJ)Ekw] If one wishes to catenate a collection of disjoint cycles to form a single vector C such that C = LN,C marks off the individual cycles, then each cycle CI must first be brought to standard form by the rotation ( 1+CIi L /CI )'FCI, and the resulting vectors must be catenated in descending order on their leading elements. The inverse transformation from direct to cycle representation is more complex, but can be approached by first producing the matrix of all powers of D up to the p Dth, that is, the matrix whose successive columns are D and D[DI ancL (DC DI ) [D), etc. This is obtained by applying the function POW to the one-column matrix Do +, 0 formed from D, where POW is defined and used as follows: POW:POW DL,(D-w[ ;1] )[E]w:•/pw:w O-D-DFC C'-7,3 6 5,2,1 4 4 2 6 1 3 5 7 POW Do.+,O
4 2 6 1 3 5 7
1 2 5 4 6 3 7
4 2 3 1 5 6 7
1 2 6 4 3 5 7
4 2 5 1 6 3 7
1 2 3 4 5 6 7
4 2 6 1 3 5 7
If M+POW Do .- 0, then the cycle representation of D may be obtained by selecting from M only 'standard' rows which begin with their smallest elements ( SSR ) , by arranging these remaining rows in 364
KENNETH E. IVERSON
descending order on their leading elements( DOL ), and then catenating the cycles in these rows ( CIR). Thus: CFD:CIR DOL SSR POW Wo.+,O SSR:( AM=l4M+Lw)1&w D0L:w[wE; 11;] CIR:( ,1,AO 1+wLw)I,w DFC 4 2 6 1 3 CFD 7 3 6 5 2
C-7,3 6 5,2,1 4 5 7 DFC C 1 4
In the definition of DOL, indexing is applied to matrices. The indices for successive coordinates are separated by semicolons, and a blank entry for any axis indicates that all elements along it are selected. Thus ME ;1 ] selects column 1 of M. The cycle representation is convenient for determining the number of cycles in the permutation represented (NC: + / , = L w ), the cycle lengths( CL:X - 0 , - 1 +X+(ld =L ) / I p&,), and the power of the permutation ( PP:LCM CL ).On the other hand, it is awkward for composition and inversion. The ! N column vectors of the matrix( t N ) T 1 + I ! Nare all distinct, and therefore provide a potential radix representation [8] for the i N permutations of order N. We will use instead a related form obtained by increasing each element by 1 RR:1+( IW
)T1t
1!W
RR 4
1 1 1 1 1 1 2 2 2 2 2 2 3 3 3 3 3 3 4 4 4 4 4 4 1 1 2 2 3 3 1 1 2 2 3 3 1 1 2 2 3 3 1 1 2 2 3 3 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2
Transformations between this representation and the direct form are given by: DFR:w(1],X+w[11]SXfDFR 1+o):0=pj:( RFD:w[l],RFD X-(j[1]s5X-1+(:0=p&:w Some of the characteristics of this alternate representation are perhaps best displayed by modifying DFR to apply to all columns of a matrix argument, and applying the modified function MT to the result of the function RR . MF:io[.l;J,[I]X+&)[(I MF RR 4 1 1 1 1 1 1 2 2 2 2 2 2 3 3 4 4 1 1 3 3 3 4 2 4 2 3 3 4 1 4 4 3 4 2 3 2 4 3 4 1
pX)pl;]SX-+MF 1 0+&):0=1+pw:w 2 4 1 3
2 4 3 1
3 3 3 1 1 2 2 4 1 4 2 4
3 2 4 1
3 4 1 2
3 4 2 1
4 4 4 1 1 2 2 3 1 3 2 3
4 2 3 1
4 3 1 2
Notation as a Tool of Thought
4 3 2 1 365
The direct permutations in th2 columns of this result occur in lexical order (that is, in ascending order on the first element in which two vectors differ); this is true in general, and the alternate representation therefore provides a convenient way for producing direct representations in lexical order. The alternate representation also has the useful property that the parity of the direct permutation D is given by 21+/ 1+RFD D, where M IN represents the residue of N module 14. The parity of a direct representation can also be determined by the function: PAR:
21±/
,(Io.
>I- I p
)AUo
.>&
3.4 Directed Graphs A simple directed graph is defined by a set of K nodes and a set of directed connections from one to another of pairs of the nodes. The directed connections may be conveniently represented by a K by K boolean connection matrix C in which C [ I ; J I = 1 denotes a connection from the Ith node to the Jth. For example, if the four nodes of a graph are represented by N-' QRST ', and if there are connections from node S to node Q, from R to T, and from T to Q. then the corresponding connection matrix is given by: 0
0 0
0 0 0 1 00 0
1 1. 0 0 0
A connection from a node to itself (called a self-loop) is not permitted, and the diagonal of a connection matrix must therefore be zero. If P is any permutation vector of order pN, then N1-N[ PI is a reordering of the nodes, and the corresponding connection matrix is given by C P; PI. We may (and will) without loss of generality use the numeric labels i p N for the nodes, because if N is any arbitrary vector of names for the nodes and L is any list of numeric labels, then the expression Q+N L] gives the corresponding list of names and, conversely, N t Q gives the list L of numeric labels. The connection matrix C is convenient for expressing many useful functions on a graph. For example, + / C gives the out-degrees of the nodes, + / C gives the in-degree, +/, C gives the number of connections or edges, OC gives a related graph with the directions of edges reversed, and CvOC gives a related 'symmetric' or 'undirected' graph. Moreover, if we use the boolean vector B *-V / ( I 1 p C ) o . = L to represent the list of nodes L , then B v . AC gives the boolean vector
which represents the set of nodes directly reachable from the set B. Consequently, CV . AC gi- es the connections for paths of length two in the graph C , and Cv.Cv . AC gives connections for paths of 366
KENNETH E. IVERSON
length one or two. This leads to the following function for the transitive closure of a graph, which gives all connections through paths of any
length: TC: TC Z:
A
,W=Z*WVWV
.AWXZ
Node J is said to be reachable from node I if ( TC C ) [ I; J I = 1. A graph is strongly-connected if every node is reachable from every node, that is, A /, TC C . If D- TC C and D [I; I I = 1 or some I, then node I is reachable from itself through a path of some length; the path is called a circuit, and node I is said to be contained in a circuit. A graph T is called a tree if it has no circuits and its in-degrees do not exceed 1, that is, A /1 2 + / T. Any node of a tree with an in-degree of 0 is called a root, and if KX + / 0 = + / T ,then T is called a K-rooted tree. Since a tree is circuit-free, K must be at least 1. Unless otherwise stated, it is normally assumed that a tree is singly-rooted (that is, K =1 ); multiply-rooted trees are sometimes called forests. A graph C covers a graph D if A /, C 2 D. If C is a strongly-connected graph and T is a (singly-rooted) tree, then T is said to be a spanningtree of G if C covers T and if all nodes are reachable from the root of T, that is, (A/,G2T) A AIRVRv.ATC T where R is the (boolean representation of the) root of T. A depth-first spanning tree [9] of a graph C is a spanning tree produced by proceeding from the root through immediate descendants in G, always choosing as the next node a descendant of the latest in the list of nodes visited which still possesses a descendant not in the list. This is a relatively complex process which can be used to illustrate the utility of the connection matrix representation: C.4
DFST:((,1)o.=K) R caAKo.V'-K-a=11+p R: (CC,[1] a)ROJAPo .V-C4-<UAPV. :-v/P-(
<av
.AW V.AU4--V/a
,
)v .Aa
: W
Using as an example the graph G from [9]: 0 0 0 0 0 0 0 0 0 1 0 1
0 0 1 0 0 0 0 0 0 0 0 0
1 0 0 0 0 0 0 0 1 0 0 0
C 1 0 0 0 0 0 0 0 0 0 0 0
0 1 1 0 0 0 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0 0 0 0
0 0 0 1 0 0 0 0 0 0 0 0
0 0 0 1 0 0 0 0 0 0 0 0
0 0 0 0 1 1 0 0 0 0 0 0
0 0 0 0 0 0 1 1 0 0 1 0
0 0 0 0 0 0 0 1 0 0 0 0
0 0 0 0 0 0 0 0 1 1 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0 0 0 0
1 0 0 0 0 0 0 0 0 0 0 0
1 1 0 0 0 0 0 0 0 0 0 0 0
DFST G 0 00 0 1 00 0 0 10 0 0 01 1 0 00 0 0 00 0 0 00 0 0 0 0 0 0 00 0 0 0 0 0 0 00 0 0 00 0
0 0 0 0 1 0 0 0 0 0 0 0
0 0 0 0 0 0 1 0 0 0 0 0
Notation as a Tool of Thought
0 0 0 0 0 0 0 1 0 0 0 0
0 0 0 0 0 0 0 0 1 0 0 0 367
The function DFST establishes the left argument of the recursion R as the one-row matrix representing the root specified by the left argument of DFST, and the right argument as the original graph with the connections into the root K deleted. The first line of the recursion R shows that it continues by appending on the top of the list of nodes thus far assembled in -,he left argument the next child C, and by deleting from the right argument all connections into the chosen child C except the one from its parent P. The child C is chosen from among those reachable frorn the chosen parent(pv AW) , but is limited to those as yet untouched ( U A P V Ao , and is taken, arbitrarily, as the first of these ( .CUAPV. AW). The determinations of P and U are shown in the second line, P being chosen from among those nodes which have children among the untouched nodes (V A U). These are permuted to the order of the nodes in the left argument ( a v . A WV . A U ), bringing them into an order so that the last visited appears first, and P is finally chosen as the first of these. The last line of R shows the final result to be the resulting right argument o, that is, the original graph with all connections into each node broken except for its par ent in the spanning tree. Since the final value of a is a square matrix giving the nodes of the tree in reverse order as visited, substitution of w , + [ 1 ] a (or, equivalently, a, ea ) for w would yield a result of shape 1 2 x p G containing the spanning tree followed by its 'preordering' information. Another representation o& directed graphs often used, at least implicitly, is the list of all noce pairs V, W such that there is a connection from V to W. The transformation to this list form from the connection matrix may be defined and used as follows: *
LFC:(,;u)/1+DT 1+1x/D-pw LFC C 0 0 1 1 1 1 2 3 3 4 0 0 1 0 3 4 3 2 4 1 C
0 1
1 0
0 0
1 0
However, this representation is deficient since it does not alone determine the number of nodes in the graph, although in the present example this is given by r / , L FC C because the highest numbered node happens to have a connection. A related boolean representation is provided by the expression ( LFC C) - = 1 +pC, the first plane showing the out- and the second showing the in-connections. An incidence matrix representation often used in the treatment of electric circuits [10] is given by the difference of these planes as follows: *
IFC:-1(L,.C w)o.=illpoj 368
KENNETH E. IVERSON
For example:
(LFC C)o.=l+pC
IFC C
1 0 0 0
1
0 -1
1 0
0 0
1 0
0 1
0 0 1 0 0 0 1 0 0 0 0 1
0 0 1
1 0 0
0 0 0 0 0 1
0 1
0 0 0 1 0 0
0 0
1 0 1 0 0 0
-1
0
0 - 1 0
1 0 1 -1 0 1
0 1 0 0 1 0
In dealing with nondirected graphs, one sometimes uses a representation derived as the or over these planes ( v ). This is equivalent to I IFC C. The incidence matrix I has a number of useful properties. For example, + II is zero, + II gives the difference between the in- and outdegrees of each node, pI gives the number of edges followed by the number of nodes, and x / p I gives their product. However, all of these are also easily expressed in terms of the connection matrix, and more significant properties of the incidence matrix are seen in its use in electric circuits. For example, if the edges represent components connected between the nodes, and if V is the vector of node voltages, then the branch voltages are given by I+ . x V; if BI is the vector of branch currents, the vector of node currents is given by BI+. xI. The inverse transformation from incidence matrix to connection matrix is given by: CFI:Dp( lt1xD)eDi(1
1o .=w)+.xltl+
D-LVpw
The set membership function e yields a boolean array, of the same shape as its left argument, which shows which of its elements belong to the right argument.
3.5 Symbolic Logic A boolean function of N arguments may be represented by a boolean vector of 2 * N elements in a variety of ways, including what are
sometimes called the disjunctive, conjunctive, equivalence, and exclusivedisjunctive forms. The transformation between any pair of these forms may be represented concisely as some 2 *N by 2 * N matrix formed by a related inner product, such as Tv . AOT, where T + T N is the 'truth table' formed by the function I defined by A.2. These matters are treated fully in [11, Ch.7]. Notation as a Tool of Thought 369
I
4
Identities and Proofs In this section we will int oduce some widely used identities and provide formal proofs for some of them, including Newton's symmetric functions and the associativity of inner product, which are seldom proved formally.
4.1 Dualities in Inner Products The dualities developed for reduction and scan extend to inner products in an obvious way. If DF is the dual of F and DG is the dual of G with respect to a monadic function Mwith inverse MI, and if A and B are matrices, then:
A F.G B
MI (M A) DF.DG (M B)
-+
For example: Av.AB
+-
-( A)A.v('-B)
AA.=B + -(-A)V.*( B) Al.+B ++-(-A)r.+(-B) The dualities for inner product, reduction, and scan can be used to eliminate many uses of boolean negation from expressions, particularly when used in conjunction with identities of the following form:
AA, -B ) +
-~A )B
A>B A
(-A )^( -B)
4.2 PartitioningIdentities Partitioning of an array leads to a number of obvious and useful identities. For example: x/3 1 4 2 6
++-
(x/3
1)
x (x/4
2 6)
More generally, for any associative function F: F/V +-
F/V,W
+4
(F/KtV) F (F/K+V) (FlV) F (F/W)
If F is commutative as well as associative, the partitioning need not be limited to prefixes and suffixes, and the partitioning can be made by compression by a boolean vector U:
F/V 370
+4
KENNETH E. IVERSON
(F/U/V)
F (F/(-U)/V)
If E is an empty vector ( 0 = p E ), the reduction F / E yields the identity element of the function F, and the identities therefore hold in the limiting cases 0 =K and 0 = v / U. Partitioning identities extend to matrices in an obvious way. For example, if V, M, and A are arrays of ranks 1, 2, and 3 , respectively, then:
V+.xM
--b ((K+V)+.x(KlpIM)+M)+(K+V)+.x(Ko)+M D.1 (I,J)+A+.xV +-+ ((I,J,O)+A)+.xV D.2
4.3 Summarization and Distribution Consider the definition and and use of the following functions:
D.3
N: (v/<o.=)/w Nzu )o . =w
DA4
A-3 3 1 4 1 C+10 20 30 40 50 N A 3 1 4
S A 1 1 0 0 0 0 0 1 0 1 0 0 0
(E A)+.xC 30 80 40
1 0
The function N selects from a vector argument its nub, that is, the set of distinct elements it contains. The expression I A gives a boolean 'summarization matrix' which relates the elements of A to the elements of its nub. If A is a vector of account numbers and C is an associated vector of costs, then the expression ( S A ) + . x C evaluated above sums or 'summarizes' the charges to the several account numbers occurring in A. Used as postmultiplier, in expressions of the form W+ . x S A, the summarization matrix can be used to distribute results. For example, if F is a function which is costly to evaluate and its argument V has repeated elements, it may be more efficient to apply F only to the nub of V and distribute the results in the manner suggested by the following identity: F V -- * (F - V)+.xS V D.5 The order of the elements of N V is the same as their order in V, and it is sometimes more convenient to use an ordered nub and corresponding ordered summarization given by: ON:Nw[*w] D.6 OQ: ( QNw ) o . =w D.7 The identity corresponding to D.5 is: F V -- (F ON V)+. xQS V
D.8
The summarization function produces an interesting result when applied to the function T defined by A.2:
+/S+IT N
|--
(OiN)!N Notation as a Tool of Thought 371
In words, the sums of the rows of the summarization matrix of the column sums of the subset matrix of order N is the vector of binomial coefficients of order N. 4.4
Distributivity The distributivity of one function over another is an important notion in mathematics, and we will now raise the question of representing this in a general way. Since multiplication distributes to the right over addition we have ax(b+q)+-abtaq, and since it distributes to the left we have (a+p)xb++ab+pb. These lead to the more general cases: (a+p)x(b+q)
ab+aq+pb+pq ++ ab(+abr+aqc+aqripbc+pbr+pqc+pqr
-+
(a+p)x(b+q)x(c+r)
(a+p)x(b+q)x . .. x(C+r i -- ab. . . c
. . .. .pq. .. r
Using the notion that V+A, E and W+P, Q or V-A, B, C and W+P, Q, R, etc., the left side can be written simply in terms of reduction as x / V+W. For this case of three elements, the right side can be written as the sum of the product over the columns of the following matrix: V[0]
V[0]
V[0]
VD/[]
W[OJ
W[0]
W[0]
W[01
V[1]
V[1]
W[1
WE t]
V[1)
V[1]
W[1]
W[1]
V[21
WE2I
VE21
W[2]
V[21
W[21
VE21
W[21
The pattern of V's and W's above is precisely the pattern of zeros and ones in the matrix T+-Tp V, and so the products down the columns are given by ( V x . *- T ) x ( Wx . * T ). Consequently: x/V+W
+/(Vx.*-T)xWx.*T+-
--
pV
D.9
We will now present a formal inductive proof of D.9, assuming as the induction hypothesis that D.9 is true for all V and W of shape N (that is, A/ N = ( p V ), p W) and proving that it holds for shape N+ 1, that is, for X, V and Y , W, where X and Y are arbitrary scalars. For use in the inductive prcof we will first give a recursive definition of the function T, equivalent to A.2 and based on the following notion: if M-T 2 is the result of order 2, then: M 0
0
1
0
1
0
1 1
0 , Fhi1
1, [1 1M
0 0
0 0 0 1
0 1
1 1 0 0
11 1 1
0
1
1
0
0
0
0
O
0
1
1
1
1
0
0
1
1
0
0
1
1
0
1
0
1
0
1
0
1
0
00
1
01111M)(11M
372 KENNETH E. IVERSON
1
Thus: .Z:(0,[13T),(1,[
11T-To-1):O=w:0
1pO
D.10
+/( (C-X, V)x. *-Q )xDx .*Q+Tp( DYW) t/(Cx.*-Z,U)xDx.*(Z-O,[1I T),U'-1,1] T+-.pW +/((Cx.*-Z),Cx.*-U)x(Dx.*Z),Dx.*U +/((Cx.*-Z),Cx.*-U)x((Y*O)xWx.*T),(Y*1)xWx.*T +/((Cx.*-Z),Cx.*-U)x(Wx.*T),YxWx.*T +/((XxVx.*-T),Vx.*-T)x(Wx.*T).YxWx.*T +/(Xx( Vx.*-T)xWx.*T),(Yx( Vx.*-*T)xWx.*T)
+/(XXX/V+W),(
YXX/V+W)
Induction hypothesis
+/(XY)xx/V+W
(XxS),(YxS)+.(X,Y)xS
Definition of x/
( V+W)
x/(X+Y), x/(X,V)t(
Y*O
D.10 Note 1 Note 2 1*-.l,Y Note 2 Note 3
+ distributes over
YW)
Note 1: Mt. xN. P +-+ (Mt. xN),Mt . xP (partitioning identity on matrices) Note2: V+.xM -- ((1tV)t.x(1,i+pM)+M)+(1+V)t.x1 0+h4 (partitioning identity on matrices and the definition of C, D, Z, and U) Note3:
(V.W)xP,Q +(VxP),WxQ 4
To complete the inductive proof we must show that the putative identity D.9 holds for some value of N. If N= 0, the vectors A and B are empty, and therefore X, A
+,
X and Y, B
--*
, Y. Hence the
left side becomes x /X + Y, or simply X+ Y. The right side becomes + / ( X x . * - Q ) xYx . * Q, where - Q is the one-rowed matrix 1 o and Q is 0 1. The right side is therefore equivalent to + / ( X, 1 ) x ( 1, Y ),
or X+ Y. Similar examination of the case
N
= 1 may be found instructive.
4.5 Newton's Symmetric Functions If X is a scalar and R is any vector, then x / X - R is a polynomial in X having the roots R. It is therefore equivalent to some polynomial C P X, and assumption of this equivalence implies that C is a function of R. We will now use D.8 and D.9 to derive this function, which is commonly based on Newton's symmetric functions: x/X-R x/X+( -R) +/(Xx.*-T)x(-R)x.*T-T pR (Xx.*-T)+.xP-(-R)x.*T
D.9 of + . x Note 1 D.8 +. x is associative Def
(X*S++I-T)+.xP ((X*QN S)+.xOS S)+.xP (X*QN S)+.x((QS S)+.xP) (X*O~ipR)+.x((QSf
S)+.xP)- X
((QS ((QS Note
S)+.XP)
+f-T)+.x((-R)x.*T+-T 1:
Xx.*B
If
-+
X
is
a
scalar and
pR))Pv B
is
X a
Note 2 B.1 (polynomial) Defs of S and P boolean vector, then
X*+/B.
Note 2: Since T is boolean and has pR rows, the sums of its columns range from 0 to pR, and their ordered nub is therefore 0, i pR. Notation as a Tool of Thought
373
4.6 Dyadic Transpose The dyadic transpose, denoled by A , is a generalization of monadic transpose which permutes axes of the right argument, and (or) forms 'sectors' of the right argument by coalescing certain axes, all as determined by the left argument. VA e introduce it here as a convenient tool for treating properties of the inner product. The dyadic transpose will be defined formally in terms of the selection function SF:(,W)[1i-(pW)1a-1]
which selects from its right argument the element whose indices are given by its vector left argument the shape of which must clearly equal the rank of the right argument. The rank of the result of KOA is r IK, and if I is any suitable left argument of the selection I SF KOA then: I SF KQA --
((I
D.11
K I ) SFA
For example, if M is a matrix, then 2 1 OM -+ OM and 1 1 OM is the diagonal of M; if T is a rank three array, then 1 2 2 OT is a matrix 'diagonal section' of T produced by running together the last two axes, and the vector 1 1 1 OT is the principal body diagonal of T. The following identity will be used in the sequel: JOKOA
([JKI J
-
D.12
) A
Proof: I SF JOKOA (I[J]) SF KOA ((I[J])[K]) SF A (I[ (J[K] ) ]) SF A I SF(JCK] )A
Definition of O (D.11) Definition of 0 Indexing is associative Definition of O
4.7
Inner Products The following proofs are stated only for matrix arguments and for the particular inner product + . x . They are easily extended to arrays of higher rank and to other inner products F. G, where F and G need possess only the properties assumed in the proofs for + and x . The following identity (familiar in mathematics as a sum over the matrices formed by (outer) products of columns of the first argument with corresponding rows of the second argument) will be used in establishing the associativity and distributivity of the inner product: M+.xN
Proof: V[K]
s/i 3 3 2 t, Mo.xN
D.13
( I, J )SF M+ . xN is defined as the sum over V, where 4--M[I;K]xN[K;J]. Similarly, (I,J)SF
374
--+
KENNETH E. IVERSON
+/1
3
3
2 0 Mo.xN
I
is the sum over the vector Wsuch that W[K]
+-
(I,J,K)SF 1 3 3 2 0 Mo.xN
Thus: W[K] (I,J,K)SF 1 3 3 2 OMO.xN (I,J,K)[1 3 3 2]SF MO.XN (I,K,K,J)SF Mo.xN
D.12
Def of indexing Def of Outer product
MEI;K]XN[K;J] V[K]
Matrix product distributes over addition as follows: M+.x(N+P)
(M+.xN)+(M+.xP)
+-
D.14
Proof: M+. x (N+P) +/(J+ 1 3 3 2)4M°.xN+P +/J4(Mo .xN)+(M°.xP) +/(JMo . xN)+(JIMo .xP) (+/J4Mo.xN)+(+/J4Mo.xP) (M+.xN)+(M+.xP)
D.13 x distributes over +
I distributes over + + is assoc and comm D.13
Matrix product is associative as follows: M+.x(N+.xP)
++
D.15
(M+ .xN)+.xP
Proof: We first reduce each of the sides to sums over sections of an outer product, and then compare the sums. Annotation of the second reduction is left to the reader: M+ . x( N+i. M+.x+/1 +/1 3 3 +/1 3 3 +i/1 3 3 +/+/l 3 +/+/1 3 +/+/1 3 +/+/1 4
xP) 3 3 20No.xP D.12 D.12 2OMo.x+/l 3 3 20No.xP x distributes over + 2+I/Me.xi 3 3 2ONo.xP 24?+/1 2 3 5 5 4QMo.xNo.xP Note 1 3 2 4 01 2 3 5 5 4OMo.xNo. xP Note 2 D.12 3 4 4 2OMo.xNo.xP 3 4 4 20(Mo.xN)o.xP x is associative + is associative and 4 3 3 24(Mo.xN)o.xP commutative (M+ . xN )i+. xP (+/i 3 3 2OMo.xN)+.xP +/i 3 3 24(+/1 3 3 2OMo.xN)o.xP + /1 3 3 2?4+/1 5 5 2 3 40(Mo.xN)o.xP +/+/1 3 3 2 441 5 5 2 3 4?(Mo.xN)o.xP +/+/1 4 4 3 3 20(Mo.xN)o.xP Note 1: +I/Mo.xJA+-+/((lppM),J+ppM)OMo . xA b+ Note 2: JO+/1A --++/(J.
/J)OA Notation as a Tool of Thought 375
4.8 Product of Polynomials The identity B.2 used for the multiplication of polynomials will now be developed formally: (B P X)x(C P X) B.1
(+/BxX*E+1l+tpB)x(+/CxX*F+1+ipC) -/+/(BxX*E)o.x(CxX*F) +1+1(Bo.xC)x((X*E)o.x(X*F)) +/+/(Bo.xC)x(X*(Eo.+F))
Note 1 Note 2 Note 3
Note 1: ( +/V) x( +/W))+ I+ I/+V o . xX because x distributes over +and + is associative and commutative, or see [12,P21 I for a proof. Note 2:
The equivalence of (PxV)o . x(QxW) and (Po .xQ)x( Vo
x W) can be established by examining a typical element of each expression.
Note 3: ( X *I ) x ( X *J ) -- X * ( I+JT ) The foregoing is the proof presented, in abbreviated form, by Orth [13, p. 52], who also defines functions for the composition of polynomials.
4.9 Derivative of a Polynomial Because of their ability to approximate a host of useful functions, and because they are closed under addition, multiplication, composition, differentiation, and integration, polynomial functions are very attractive for use in introducing the study of calculus. Their treatment in elementary calculus is, however, normally delayed because the derivative of a polynomial is approached indirectly, as indicated in Section 2, through a sequence of more general results. The following presents a derivation of the derivative of a polynomial directly from the expression fcr the slope of the secant line through the points X, F X and ( X+Y),F(X+Y): ((C P X+Y)-(C E X))+Y ((C P X+Y)-(C P X+O))+Y ((C P X+Y)-((0*J)+.X(A.-)E5 Jo.4J- 1+tpC)+.xC) P X)+Y B.6 ((((Y*J))+.xM) E X)-((o*.J ,+.xMA+A.xC) E X)+Y B.6 ((((y*J)+.xM)-(O*J)+.xM) E X)+Y P dist over ((((Y*J)-0*J)+.XM)
(((0,Y*1+J)+.xM)
E XD)+'
t.X
P X)+Y
(((Y*1+J)+.x(l 0 0 +A:)+. C) P X)+Y ((Y*1+J-1)+.x(l 0 0 +A)'-.XC) P X ((Y*-1+i71+pC)+.x(1 0 0 +A)+.xC) P X (((Y*-l+i-1+pC)+.x 1 0 0 +A)+.xC) P X
376
1:
0*0..1--l-Y*0
KENNETH E. IVERSON
and
-
Note I
(((Y*1+J)+.x 1 0 +M) E l')4Y
Note
dist over
A/0=0*l+J
D.1 D.2 (Y*A)+Yq--Y*A-1 Def of J D.15
The derivative is the limiting value of the secant slope for Y at zero, and the last expression above can be evaluated for this case because if E+ - 1 + i - 1 + p C is the vector of exponents of Y, then all elements of E are nonnegative. Moreover, 0 * E reduces to a 1 followed by zeros, and the inner product with 1 0 0 +A therefore reduces to the first plane of 1 0 0+A, or equivalently, the second plane of A. If B+J o. JT- 1 + i pC is the matrix of binomial coefficients, then A is DS B and, from the definition of DS in B.5, the second plane of A is B x 1 -J - J, that is, the matrix B with all but the first superdiagonal replaced by zeros. The final expression for the coefficients of the polynomial which is the derivative of the polynomial C P w is therefore: ((Jo.!J)x1=-,o.-gJ41+ipC)+.XC
For example: C - 5 7 11 13 (Jo .J)x1=-Jo.-J+
1+1pC
0 1 0 0
0 0 2 0 0 0 0 3 0 0 0 0
((Jo J)x1=-Jo-J-+ 1+1pC)+.xC 7 22 39 0 Since the superdiagonal of the binomial coefficient matrix ( iN) o . i N is ( - 1 + i N -1 ) ! i N- 1 or simply NN- 1, the final result is 1 + C x 1 + i p C in agreement with the earlier derivation. In concluding the discussion of proofs, we will re-emphasize the fact that all of the statements in the foregoing proofs are executable, and that a computer can therefore be used to identify errors. For example, using the canonical function definition node [4, p. 81], one could define a function F whose statements are the first four statements of the preceding proof as follows: -
VF [1] ((C P XrY)-(C P X))+Y
[2] ((C E X+Y)-(C P X+O))+Y [3] ((c E X+Y)-( ( O*J)+.x(A-DS J.J+ 1+lpC)+.xC) P X)+Y [4) ((((Y*J)+.xM) P X)-((0*J)+.xM-A+.xC) P X)+Y V
The statements of the proof may then be executed by assigning values to the variables and executing F as follows: C+5 2 3 1 Y-5
X+3 132
X+110 F 66 96 132 174 222 276 336 402 474 552
132 132 132
66 96 66 96 66 96
F
132 174 132 174 132 174
222 222 222
276 276 276
336 402 474 552 336 402 474 552 336 402 474 552
The annotations may also be added as comments between the lines without affecting the execution. Notation as a Tool of Thought
377
5 Conclusion The preceding sections have attempted to develop the thesis that the properties of executability and universality associated with programming languages can be combined, in a single language, with the wellknown properties of mathematical notation which make it such an effective tool of thought. This is an important question which should receive further attention, regardless of the success or failure of this attempt to develop it in terms of APL. In particular, I would hope that others would treat the same question using other programming languages and conventional mathematical notation. If these treatments addressed a common set of topics, such as those addressed here, some objective comparisons of languages could be made. Treatments of some of the topics covered here are already available for comparison. For example, Kerner [7] expresses the algorithm C.3 in both ALGOL and conventional mathematical notation. This concluding section is more general, concerning comparisons with mathematical notation, the problems of introducing notation, extensions to APL which would further enhance its utility, and discussion of the mode of presentation of the earlier sections.
5.1 Comparison with Conventional MathematicalNotation Any deficiency remarked irL mathematical notation can probably be countered by an example of its rectification in some particular branch of mathematics or in some particular publication; comparisons made here are meant to refer to the more general and commonplace use of mathematical notation. APL is similar to conventional mathematical notation in many important respects: in the use of junctions with explicit arguments and explicit results, in the concomitant use of composite expressions which apply functions to the results of other functions, in the provision of graphic symbols for the more commonly used functions, in the use of vectors, matrices, and higher-rank arrays, and in the use of operators which, like the derivative and the convolution operators of mathematics, apply to functions to produce functions. In the treatment of functions APL differs in providing a precise formal mechanism for the definition of new functions. The direct definition form used in this paper is perhaps most appropriate for purposes of exposition and analysis, but the canonical form referred to in the introduction, and defined in [4, p. 81], is often more convenient for other purposes. In the interpretation of composite expressions APL agrees in the use of parentheses, but differs in eschewing hierarchy so as to treat all functions (user-defined as well as primitive) alike, and in adopting a 378
KENNETH E. IVERSON
single rule for the application of both monadic and dyadic functions: the right argument of a function is the value of the entire expression to its right. An important consequence of this rule is that any portion of an expression which is free of parentheses may be read analytically from left to right (since the leading function at any stage is the 'outer' or overall function to be applied to the result on its right), and constructively from right to left (since the rule is easily seen to be equivalent to the rule that execution is carried out from right to left). Although Cajori does not even mention rules for the order of execution in his two-volume history of mathematical notations, it seems reasonable to assume that the motivation for the familiar hierarchy (power before x and x before + or - ) arose from a desire to make polynomials expressible without parentheses. The convenient use of vectors in expressing polynomials, as in +/ CXX * E, does much to remove this motivation. Moreover, the rule adopted in APL also makes Horner's efficient expression for a polynomial expressible without parentheses: +/3 4 2 5xX*O
1 2 3 --
3+Xx4+Xx2+Xx5
In providing graphic symbols for commonly used functions APL goes much farther, and provides symbols for functions (such as the power function) which are implicitly denied symbols in mathematics. This becomes important when operators are introduced; in the preceding sections the inner product x . * (which must employ a symbol for power) played an equal role with the ordinary inner product + . x . Prohibition of elision of function symbols (such as x ) makes possible the unambiguous use of multicharacter names for variables and functions. In the use of arrays APL is similar to mathematical notation, but more systematic. For example, V+W has the same meaning in both, and in APL the definitions for other functions are extended in the same element-by-element manner. In mathematics, however, expressions such as Vx Wand V* Ware defined differently or not at all. For example, Vx Wcommonly denotes the vector product [14, p. 308]. It can be expressed in various ways in APL. The definition VP: ( ( 14cx )x
ads )-(
-14c
)x14w~
provides a convenient basis for an obvious proof that VP is 'anticommutative' (that is, V VP W -+ - w VP v ), and (using the fact that - 1 4 X - - 24 X for 3 -element vectors) for a simple proof that in 3-space V and Ware both orthogonal to their vector product, that is, A / 0 =V + . X V VP W and A/ O =W+ . XV
VP W.
APL is also more systematic in the use of operators to produce functions on arrays: reduction provides the equivalent of the sigma and pi notation (in+/ and x /) and a host of similar useful cases; outer product extends the outer product of tensor analysis to functions other than x, and inner product extends ordinary matrix product ( + . x ) to many cases, such as v . A and L +, for which ad hoc definitions are often made. Notation as a Tool of Thought
379
Ej.2-i i=1
1-2-3 + 2-3-4 + 1-2-3-4 + 2-3-4-5 +
[ai]
r( -q)
. 1 -n(n + 1) (n + 2) (n + 3) 4
n terms
n terms
Ne
j_1
1n(n + 1) (n + 2) (n + 3) (n + 4) 5
-
r(j-q) f(x-j [x-a]) fj+1) ~ L J FFIGURE 3
The similarities between A1'1, and conventional notation become more apparent when one learns a few rather mechanical substitutions, and the translation of mathematical expressions is instructive. For example, in an expression suc - as the first shown in Figure 3, one simply substitutes i N for each occurrence of j and replaces the sigma by +/. Thus: +/( iN)x2*-lN, or +/Jx2*-J+iN Collections such as Jolley's Summation of Series [151 provide interesting expressions for such an exercise, particularly if a computer is available for execution of the results. For example, on pages 8 and 9 we have the identities shown in the second and third examples of Figure 3. These would be written as: +/x/(1+iN)o*.,3 +/x/(V1+iN)o.+
+
4
++
(x/N+O,
3)+4
(x/N+O,i4)+5
Together these suggest the following identity: +/x/(- 1+N)o .+.K --
( x/N+O, iK)+K+l
The reader might attempt to restate this general identity (or even the special case where K = 0 ) in Jolley's notation. The last expression of Figure 3 is taken from a treatment of the fractional calculus [16, p. 30], and represents an approximation to the qth order derivative of a function f. It would be written as: (S*-Q)x+/(J!J-l+Q)xF X-(J4-l+iN)xS-(X-A)+N The translation to APL is a simple use of i N as suggested above, combined with a straightforward identity which collapses the several occurrences of the gamma function into a single use of the binomial coefficient function !, whose domain is, of course, not restricted to integers. 380
KENNETH E. IVERSON
In the foregoing, the parameter Q specifies the order of the derivative if positive, and the order of the integral (from A to X ) if negative. Fractional values give fractional derivatives and integrals, and the following function can, by first defining a function F and assigning suitable values to N and A, be used to experiment numerically with the derivatives discussed in [16]: OS:(S*-a)x+/(J!J-l+a)xFw-(J-1l+N)xS+(w-A)+N Although much use is made of 'formal' manipulation in mathematical notation, truly formal manipulation by explicit algorithms is very difficult. APL is much more tractable in this respect. In Section 2 we saw, for example, that the derivative of the polynomial expression ( w0 . * - 1 + i pa ) + . x a is given by (w o * + p a ) + . xl 4a x + p a, and a set of functions for the formal differentiation of APL expressions given by Orth in his treatment of the calculus [13] occupies less than a page. Other examples of functions for formal manipulation occur in [17, p. 347] in the modeling operators for the vector calculus. Further discussion of the relationship with mathematical notation may be found in [3] and in the paper 'Algebra as a Language' [6, p. 325]. A final comment on printing, which has always been a serious problem in conventional notation. Although APL does employ certain symbols not yet generally available to publishers, it employs only 88 basic characters, plus some composite characters formed by superposition of pairs of basic characters. Moreover, it makes no demands such as the inferior and superior lines and smaller type fonts used in subscripts and superscripts.
5.2 The Introduction of Notation At the outset, the ease of introducing notation in context was suggested as a measure of suitability of the notation, and the reader was asked to observe the process of introducing APL. The utility of this measure may well be accepted as a truism, but it is one which requires some clarification. For one thing, an ad hoc notation which provided exactly the functions needed for some particular topic would be easy to introduce in context. It is necessary to ask further questions concerning the total bulk of notation required, the degree of structure in the notation, and the degree to which notation introduced for a specific purpose proves more generally useful. Secondly, it is important to distinguish the difficulty of describing and of learning a piece of notation from the difficulty of mastering its implications. For example, learning the rules for computing a matrix product is easy, but a mastery of its implications (such as its associativity, Notation as a Tool of Thought 381
its distributivity over addition, and its ability to represent linear functions and geometric operations) Ls a different and much more difficult matter. Indeed, the very suggestiveness of a notation may make it seem harder to learn because of the many properties it suggests for exploration. For example, the notation + . x for matrix product cannot make the rules for its computation more difficult to learn, since it at least serves as a reminder that the process is an addition of products, but any discussion of the properties of matrix product in terms of this notation cannot help but suggest a host of questions such as: Is v AA associative? Over what does it distribute? Is Bv . AC +-- ( QC ) v ^B A valid identity?
5.3 Extensions to APL In order to ensure that the notation used in this paper is well-defined and widely available on existing computer systems, it has been restricted to current APL as defined in [4] and in the more formal standard published by STAPL, the ACM SIGPLAN Technical Committee on APL [17, p. 409]. We will now comment briefly on potential extensions which would increase its convenience for the topics treated here, and enhance its suitability for the treatment of other topics such as ordinary and vector calculus. One type of extension has already been suggested by showing the execution of an example (roots of a polynomial) on an APL system based on complex numbers. This implies no change in function symbols, although the domain of certain functions will have to be extended. For example, I X will give the magnitude of complex as well as real arguments, + X will give the conjugate of complex arguments as well as the trivial result it now gives for real arguments, and the elementary functions will be appropriately extended, as suggested by the use of * in the cited example. It also implies the possibility of meaningful inclusion of primitive functions for zeros of polynomials and for eigenvalues and eigenvectors of matrices. A second type also suggested by the earlier sections includes functions defined for particular purposes which show promise of general utility. Examples include the nub function N defined by D.3, and the summarization function S, defined by D.4. These and other extensions are discussed in [18]. McDonnell [1 9, p. 240] has proposed generalizations of and and or to non-booleanm, no that A v B is the GCD of A and B. and A A B is the LCM. The functions GCD and L CM defined in Section 3 could then be defined simply by GCD: v /and L CM: A A more general line of development concerns operators, illustrated in the preceding sections by the reduction, inner-product, and outerproduct. Discussions of operators now in APL may be found in [20] and in [17, p. 129], proposed new operators for the vector calculus are discussed in [17, p. 47], and others are discussed in [18] and in [17, p. 129]. 382
KENNETH E. IVERSON
5.4 Mode of Presentation The treatment in the preceding sections concerned a set of brief topics, with an emphasis on clarity rather than efficiency in the resulting algorithms. Both of these points merit further comment. The treatment of some more complete topic, of an extent sufficient for, say, a one- or two-term course, provides a somewhat different, and perhaps more realistic, test of a notation. In particular, it provides a better measure of the amount of notation to be introduced in normal course work. Such treatments of a number of topics in APL are available, including: high school algebra [6], elementary analysis [5], calculus, [13], design of digital systems [21], resistive circuits [10], and crystallography [22]. All of these provide indications of the ease of introducing the notation needed, and one provides comments on experience in its use. Professor Blaauw, in discussing the design of digital systems [21], says that 'APL makes it possible to describe what really occurs in a complex system,' that 'APL is particularly suited to this purpose, since it allows expression at the high architectural level, at the lowest implementation level, and at all levels between,' and that '...learning the language pays of (sic) in- and outside the field of computer design.' Users of computers and programming languages are often concerned primarily with the efficiency of execution of algorithms, and might, therefore, summarily dismiss many of the algorithms presented here. Such dismissal would be short-sighted, since a clear statement of an algorithm can usually be used as a basis from which one may easily derive more efficient algorithms. For example, in the function STEP of Section 3.2, one may signficantly increase efficiency by making substitutions of the form BMM for( MM ) + . xB, and in expressions using +/ CxX* 1 + i pC one may substitute Xl +C or, adopting an opposite convention for the order of the coefficients, the expression Xi C. More complex transformations may also be made. For example, Kerner's method (C.3) results from a rather obvious, though not formally stated, identity. Similarly, the use of the matrix a to represent permutations in the recursive function R used in obtaining the depth first spanning tree (C.4) can be replaced by the possibly more compact use of a list of nodes, substituting indexing for inner products in a rather obvious, though not completely formal, way. Moreover, such a recursive definition can be transformed into more efficient nonrecursive forms. Finally, any algorithm expressed clearly in terms of arrays can be transformed by simple, though tedious, modifications into perhaps more efficient algorithms employing iteration on scalar elements. For example, the evaluation of +/ X depends upon every element of X and does not admit of much improvement, but evaluation of v/ B could stop at the first element equal to 1, and might therefore be improved by an iterative algorithm expressed in terms of indexing. Notation as a Tool of Thought
383
I
The practice of first developing a clear and precise definition of a process without regard to efficiency, and then using it as a guide and a test in exploring equivalent processes possessing other characteristics, such as greater efficiency, is very common in mathematics. It is a very fruitful practice which should not be blighted by premature emphasis on efficiency in computer execution. Measures of efficiency are often unrealistic because they concern counts of 'substantive' functions such as multiplication and addition, and ignore the housekeeping (indexing and other selection processes) which is often greatly increased by less straightforward algorithms. Moreover, realistic measures depend strongly on the current design of computers and of language embodiments. For example, because functions on booleans (such as A / B and v / B ) are found to be heavily used in APL, implementers have provided efficient execution of them. Finally, overemphasis of efficiency leads to an unfortunate circularity in design: for reasons of efficiency early programming languages reflected the characteristics of the early computers, and each generation of computers reflects the reeds of the programming languages of the preceding generation.
Acknowledgments I am indebted to my colleague A. D. Falkoff for suggestions which greatly improved the organization of the paper, and to Professor Donald McIntyre for suggestions arising from his reading of a draft.
Appendix A Summa ry of Notation Fw O -w ( j>0 )-w<0 1 *wu Integer
part -(w 2.71828 ... *. Inverse of * x/11-IU 3.14159... .xw Boolean: Relations:
v <
v <
SCALAR FUNCTIONS Conjugate + Plus Negative - Minus x Times Signum Reciprocal + Divide Magnitude I Residue Floor P Minimum Ceilirg F Maximum Exponential * Power Natural log * Logarithm Factorial ' Binomial
Pi times
aFu
W-axxwwa+a=O (WXW
-( -a )---w x/Wpa ( *w )+*a ( !&u)+( !a)x!&w-a
0
- (and, cr, not-and, not-or, not) = 2 > r (aRw is 1 if relation R
384 KENNETH E. IVERSON
holds).
Sec.
V4- 2 3 5
M--1 2 3 4
Ref.
Integers
1
Shape Catenation
1 1
Ravel Indexing Compress
1 1 3 1 1 1 1, 4 3 1 1 3 2, 5 1 1 1 1 1
TakeDrop
Reversal Rotate Transpose Grade Base value &inverse Membership Inverse Reduction Scan Inner prod Outer prod Axis
t5-+1 2 3 4 5 pM3 2 3 pV*-e3 V,Ve+2 3 5 2 3 5
5
6
2p44--44 2 3pi6--M MM--1 2 3 1 2 3 4 5 6 4 5 6
,M-*-1 2 3 4 5 6 M[2;)'-14 5 6 M[2;23--5 V(3 1].-5 2 0 I/M+.4 5 6 1 0 1/V1-2 5 2+Vi-1+V4-3 5 2+V--2 3 0V4-45 3 2 2OV4-3 5 2 20V+--5 2 3 o0wc permutes axes i w reverses axes T3 2 6-2-.3 1 2 4 J3 2 6 24-2 4 1 3 VLV4--50 10iV+-235 VT50--2 3 5 10 10 10T235+-2 3 5 Ve5 24-1 0 1 Ve3- 0 1 0 ( 1W )+ . xa C1iAW-i 1w is matrix inverse +/V4-10
+/M.-6
15
+/M'--5 7
+M--2 3pl +V.-2 5 10 + . x is matrix product 0 3 .+1 2 3 --. M F[I] applies F along axis I
9
3 6 4 9 15
Appendix B Compiler from Direct to Canonical Form This compiler has been adapted from [22, p. 222]. It will not handle definitions which include a or: or a) in quotes. It consists of the functions FIX and F 9, and the character matrices C 9 and A 9:
FIX OpOFX F9 09 D+F9 E;F;I;K F-(,(E='u' )o.;t5+1)/,E,(¢4,pE)p' Y9 I P+(,(P='a' )o.;t5tl)/.F.(<14,pF)p' X9 I F1+lpD+(0,+/-6,I)+(-(3x1)++I+':'zF)4F,(46,pF)p' D-3fC9[1+(l+'a'EE),I,0;],QD[;l,(I-2+iF),2] K+K±2xK<1lK+-IAKe(>>l O
0'+0'°.=E)/K+V+I+EcA9
2+K4' F+(0,1+pE)rpD+D,(F,pE)+00 D-(F+D),[1]F[2] 'A' ,E A9 C9 012345678 Z9+ 9ABCDEFGH Y9Z9+ IJKLMNOPQ Y9Z9-X9 RSTUVWXYZ )/3-(0=lt, -OOpZ9-
',E,[1.5]';'
A---EP--I
PQR JKL STUVW-IZO
Notation as a Tool of Thought
385
Example: FIX FIB:Z,+/ 2+Z-FIBw-l:w=1:1
1
FIB 15 1 2 3 5 8 13 21
34
55
89
144
233
377
610
OCR'FIB'
Z9+FIB Y9;Z +( 0=1+ ,Y9=1)/3 0O OpZ9÷1
Z9-Z,+/ 2+ZFIB Y9-1 AFIB:Z,+/ 2+Z-FIBw-1:w=1:1
References 1. Boole, G. An Investigation or the Laws of Thought, Dover Publications, N.Y., 1951. Originally published in 1954 by Walton and Maberly, London and by MacMillan and Co., Cambridge. Also available in Volume II of the Collected logical Works of George Boole, Open Court Publishing Co., La Salle, Illinois, 1916. 2. Cajori, F. A History of MathematicalNotations, Volume II, Open Court Publishing Co., La Salle, Illinois, 1929. 3. Falkoff, A. D., and Iverson, K. E. The Evolution of APL, Proceedings of a Conference on the History of Programming Languages, ACM SIGPLAN, 1978. 4. APL Language, Form No. GC26-3847-4, IBM Corporation. 5. Iverson, K. E. Elementary Analysis, APL Press, Pleasantville, N. Y., 1976. 6. Iverson, K. E. Algebra: An Algorithmic Treatment, APL Press, Pleasantville, N. Y., 1972. 7. Kerner, I. 0. Ein Gesaratschrittverfahren zur Berechnung der Nullstellen von Polynomen Numerische Mathematik, Vol. 8, 1966, pp. 290-294. 8. Beckenbach, E. F., ed. Applied CombinatorialMathematics, John Wiley and Sons, New York, N. Y., 1964. 9. Tarjan, R. E. Testing Flow Graph Reducibility,Journal of Computer and Systems Sciences,Vol. 9, No. 3, Dec. 1974. 10. Spence, R. Resistive Circuit Theory, APL Press, Pleasantville, N. Y., 1972. 11. Iverson, K. E. A ProgrammingLanguage, John Wiley and Sons, New York, N. Y., 1962. 12. Iverson, K. E. An Introductimn to APL for Scientists and Engineers, APL Press, Pleasantville, N. Y. 13. Orth, D. L. Calculus in a New Key, APL Press, Pleasantville, N. Y., 1976. 14. Apostol, T. M. Mathematical Analysis, Addison Wesley Publishing Co., Reading, Mass., 1957. 15. Jolley, L. B. W. Summation of Series, Dover Publications, N. Y. 16. Oldham, K. B., and Spanier, J. The FractionalCalculus, Academic Press, N. Y., 1974. 17. APL Quote Quad, Vol. 9, No.4, June 1979, ACM STAPL. 386 KENNETH E. IVERSON
18.
Iverson, K. E., Operators andFunctions, IBM Research Report RC 7091, 1978.
19. McDonnell, E. E., A Notation for the GCD and LCM Functions, APL 20. 21.
75, Proceedings of an APL Conference, ACM, 1975. Iverson, K. E., Operators, ACM 7tansactions on ProgrammingLanguages and Systems, October 1979. Blaauw, G. A., DigitalSystem Implementation, Prentice-Hall, Englewood
Cliffs, N. J., 1976. 22.
McIntyre, D. B., The Architectural Elegance of Crystals Made Clear by APL, An APL Users Meeting, I.P. Sharp Associates, Toronto, Canada, 1978.
Categories and Subject Descriptors: E2.1 [Theory of Computation]: Numerical Algorithms and Problems computations on matrices; computations on polynomials; G.l.m [Mathe-
matics of Computing]: Miscellaneous; G.2.1 [Discrete Mathematics]: Combinatorics -permutations and combinations; G.2.2 [Discrete Mathe-
matics]: Graph Theory-trees; I.1.1 [Computing Methodologies]: Expressions and Their Representations-representations(general and polynomial)
General Terms: Algorithms, Design, Languages
Additional Key Words and Phrases: APL, executability, mathematical notation, universality
Notation as a Tool of Thought 387
Postscript
Notation as a Tool of Thought: 1986 KENNETH E. IVERSON The thesis of the present paper is that the advantages of executability and universality found in programming languages can be effectively combined, in a single coherent language, with the advantages offered by mathematical notation. The executable language to be used is APL, a general-purpose language which originated in an attempt to provide clear and precise expression in writing and teaching, and which was implemented as a programming language only after several years of use and development. The first of the foregoing passages from my 1980 paper states the case to be made for the use of an executable analytic notation, and the second states the particular vehicle to be used in developing it. The most obvious and important use of executable analytic notation is in teaching. The following comments summarize recent progress in this area.
Materials and Courses A common theme in the materials mentioned here is the casual introduction of the necessary notation in context, in the manner familiar from the teaching of mathematics. A good example at a high-school level is the treatment of probability by Alvord [1]. In their treatment of circuit analysis, Spence and Burgess [21 make heavier use of APL as a means of implementing their system, and Hazony [3] makes combined use of graphic input and APL expressions to specify designs in an expert support system. The direction of my own recent work is described in an ACM Forum letter [4], and drafts of two texts used in courses are currently available [5]. The Pesch and Berry paper on style and literacy [6] should be read by anyone interested in these matters.
Development of the Notation A version of APL has recently been developed [7] which, while remaining within the bounds adopted in t-e ISO standard for the language, has both simplified its structure and increased its expressive power. It provides a significantly better basis for teaching than the notation used in my 1980 paper.
Availability of Implementations Although APL has long been provided by central university computing services, it has been impracticable to use in teaching because of charging rates and lack of suitable terminals. The present availability of APL systems on microcomputers has changed this situation drastically. The system provided for students here at the T. H. Twente is the one I find most satisfactory [8]; it does not yet incorporate such new functions as nub, raze, and all (a generalization of Cartesian product), but does provide the fundamental notions of function rank, the box function (for the general handling of representation or structuress'), and the under operator for the important mathematical notion cf duality. Moreover, the system handles complex numbers (with all of the mathematical functions suitably extended); provides the determinant (- . x ), the permanent (+ . x ), the test for a Latin square ( v . A ), and related func-
388
tions produced by the dot operator; generalizes the or and and functions to provide the greatest common divisor and least common multiple; and exploits the characteristics of the microcomputer and its screen display to provide a 'union' keyboard in which most characters (such as the parentheses and the upper- and lower-case letters used in names) are in their normal typewriter positons.
References 1. Alvord, L. Probabilityin APL. APL Press, STSC Corp., Bethesda, Md. 2. Spence, R., and Burgess, J. CircuitAnalysis. Prentice-Hall, Englewood Cliffs, N.J., 1986. 3. Hazony, Y. A brief report of his work at Boston University appears in a summary of a Minnowbrook Conference reported in APL Quote-Quad 16, 3 (1986). 4. Blaauw, G. A., et al. A curriculum proposal for computer science. Commun. ACM, Forum (Sept. 1985). 5. Iverson, K. E. Mathematics and Programmingand Applied Mathematics for Programmers. (Drafts of both are available from I. P. Sharp Associates, Toronto, Ont., Canada.) 6. Pesch, R., and Berry, M. J. A. Style and literacy in APL. In Proceedings of APL86. ACM, New York, 1986. 7. Iverson, K. E. A Dictionary of the APL Language. Draft available from I. P. Sharp Associates, Toronto, Ont., Canada. 8. Sharp APL/PCX. Computer system for use on IBM AT/370 and XT/370 computers. Available from I. P. Sharp Associates, Toronto, Ont., Canada. The system also runs on a normal IBM PC or AT, much more slowly, but adequately for teaching purposes.
Notation as a Tool of Thought: 1986 389
Relational Database: A Practical Foundation for Productivity E. F. CODD IBM San Jose Research Laboratory The 1981 ACM Turing Award was presented to Edgar F Codd, an IBM Fellow of the San Jose Research Laboratory, by President Peter Denning on November 9, 1981, at the ACM Annual Conference in Los Angeles, California.It is the Association's foremost award for technical contributions to the computing community. Codd was selected by the ACM General Technical Achievement Award Committee for his 'fundamental and continuing contributions to the theory and practice of database management systems.' The originator of the relational model for databases, Codd has made further importantcontributions in the development of relational algebra, relational calculus, and normalization of relations. Edgar F Coddjoined IBM in 1949 to prepare programsfor the Selective Sequence Electronic Calculator. Since then, his work in computing has encompassed logical design of computers (IBM 701 and Stretch), managing a computer center in Canada, heading the development of one of the first operating systems with a general multiprogramming capability, conAuthor's present address: Codd &Date Consulting Group, P.O. Box 20038, San Jose, CA 95160. 391
tributing to the logic of self-reproducing automata, developing high level techniques for software specification, creating and extending the relational approach to database management, and developing an English analyzing and synthesizing subsystem for casual users of relationaldatabases. He is also the author of Cellular Aaiwomata, an early volume in the ACM Monograph Series. Codd received his B.A. and M.A. in Mathematics from Oxford University in England, and his M.Sc. and Ph.D. in Computer and Communication Sciences from the University of Michigan. He is a Member of the National Academy of Engineering(USA) and a Fellow of the British Computer Society. The ACM Tring Award is presented each year in commemoration of A. M. Trying, the English mathematician who made major contributions to the computing sciences. It is well known that the growth in demands from end users for new applications is outstripping the capability of deta processing departments to implement the corresponding application programs. There are two complementary approaches to attacking this problem (and both approaches are needed): one is to put end users into direct touch with the information stored in computers; the other is to increase the productivity of data processing professionals in the development of application programs. It is less well known that a single technology, relational database management, provides a practical foundation for both approaches. It is explained why this is so. While developing this productivity theme, it is noted that the time has come to draw a very sharp line between relational and nonrelational database systems, so that the label 'relational' will not be used in misleading ways. The key to drawing this line is something called a 'relational processing capability.'
1 Introduction It is generally admitted that there is a productivity crisis in the development of 'running code' for commercial and industrial applications. The growth in end user demands for new applications is outstripping the capability of data processing departments to implement the corresponding appliz.ation programs. In the late sixties and early seventies many people in the computing field hoped that the introduction of database management systems (commonly abbreviated DBMS) would markedly increase the productivity of application programmers by removing many of their problems in handling input and output files. DBMS (along with data dictionaries) appear to have been highly successful as instruments of data control, and they did remove many of the file handling details from the concern of application programmers. Why then have they failed as productivity boosters? There are three principal reasons: (1) These systems burdened application programmers with numerous concepts that were irrelevant to their data retrieval and manipulation tasks, forcing them to think and code at a needlessly low level 392
E. F. CODD
of structural detail (the 'owner-member set' of CODASYL DBTG is an outstanding example 1);
(2) No commands were provided for processing multiple records at a time -in other words, DBMS did not support set processing and, as a result, programmers were forced to think and code in terms of iterative loops that were often unnecessary (here we use the word 'set' in its traditional mathematical sense, not the linked structure sense of CODASYL DBTG); (3) The needs of end users for direct interaction with databases, particularly interaction of an unanticipated nature, were inadequately recognized -a query capability was assumed to be something one could add on to a DBMS at some later time. Looking back at the database management systems of the late sixties, we may readily observe that there was no sharp distinction between the programmer's (logical) view of the data and the (physical) representation of data in storage. Even though what was called the logical level usually provided protection from placement expressed in terms of storage addresses and byte offsets, many storage-oriented concepts were an integral part of this level. The adverse impact on development productivity of requiring programmers to navigate along access paths to reach the target data (in some cases having to deal directly with the layout of data in storage and in others having to follow pointer chains) was enormous. In addition, it was not possible to make slight changes in the layout in storage without simultaneously having to revise all programs that relied on the previous structure. The introduction of an index might have a similar effect. As a result, far too much manpower was being invested in continual (and avoidable) maintenance of application programs. Another consequence was that installation of these systems was often agonizingly slow, due to the large amount of time spent in learning about the systems and in planning the organization of the data at both logical and physical levels, prior to database activation. The aim of this preplanning was to 'get it right once and for all' so as to avoid the need for subsequent changes in the data description that, in turn, would force coding changes in application programs. Such an objective was, of course, a mirage, even if sound principles for database design had been known at the time (and, of course, they were not). To show how relational database management systems avoid the three pitfalls cited above, we shall first review the motivation of the relational model and discuss some of its features. We shall then classify 'The crux of the problem with the CODASYL DBTG owner-member set is that it combines into one construct three orthogonal concepts: one-to-many relationship, existence dependency, and a user-visible linked structure to be traversed by application programs. It is that last of these three concepts that places a heavy and unnecessary navigation burden on application programmers. Italso presents an insurmountable obstacle for end users. Relational Database: A Practical Foundation for Productivity
393
,E
systems that are based upon thai model. As we proceed, we shall stress application programmer productivity, even though the benefits for end users are just as great, because much has already been said and demonstrated regarding the value of relational database to end users (see [23] and the papers cited therein).
2 Motivation The most important motivation for the research work that resulted in the relational model was the objective of providing a sharp and clear boundary between the logical and physical aspects of database management (including database design data retrieval, and data manipulation). We call this the data independence objective. A second objective was to make the model structurally simple, so that all kinds of users and programmers could have a common understanding of the data, and could therefore communicate with one another about the database. We call this the communicability objective. A third objective was to introduce high-level language concepts (but not specific syntax) to enable users to express operations upon large chunks of information at a tirne. This entailed providing a foundation for set-oriented processing (i.e., the ability to express in a single statement the processing of multiple sets of records at a time). We call this the set-processing objective. There were other objectives, such as providing a sound theoretical foundation for database organization and management, but these objectives are less relevant to our present productivity theme.
3
The Relational Model To satisfy these three objectives, it was necessary to discard all those data structuring concepts 1e.g._ repeating groups, linked structures) that were not familiar to end users and to take a fresh look at the addressing of data. Positional concepts have always played a significant role in computer addressing, beginning with plugboard addressing, then absolute numeric addressing, relative numeric addressing, and symbolic addressing with arithmetic properties le.g., the symbolic address A + 3 in assembler language; the address X(I + 1, J - 2) of an elementin a Fortran, Algol, or PL/I array named X). In the relational model we replace positional addressing by totally associative addressing. Every datum in a relational database can be uniquely addressed by means of the relation name, primary key value, and attribute name. Associative addressing of this form enables users (yes, and even programmers also!) to leave it to the system to (1) determine the details of placement of a new piece of information that is being inserted into a database and (2) select appropriate access paths when retrieving data. 394
E. F. CODD
All information in a relational database is represented by values in tables (even table names appear as character strings in at least one table). Addressing data by value, rather than by position, boosts the productivity of programmers as well as end users (positions of items in sequences are usually subject to change and are not easy for a person to keep track of, especially if the sequences contain many items). Moreover, the fact that programmers and end users all address data in the same way goes a long way to meeting the communicability objective. The n-ary relation was chosen as the single aggregate structure for the relational model, because with appropriate operators and an appropriate conceptual representation (the table) it satisfies all three of the cited objectives. Note that an n-ary relation is a mathematical set, in which the ordering of rows is immaterial. Sometimes the following questions arise: Why call it the relational model? Why not call it the tabular model? There are two reasons: (1) At the time the relational model was introduced, many people in data processing felt that a relation (or relationship) among two or more objects must be represented by a linked data structure (so the name was selected to counter this misconception); (2) tables are at a lower level of abstraction than relations, since they give the impression that positional (array-type) addressing is applicable (which is not true of n-ary relations), and they fail to show that the information content of a table is independent of row order. Nevertheless, even with these minor flaws, tables are the most important conceptual representation of relations, because they are universally understood. Incidentally, if a data model is to be considered as a serious alternative for the relational model, it too should have a clearly defined conceptual representation for database instances. Such a representation facilitates thinking about the effects of whatever operations are under consideration. It is a requirement for programmer and end-user productivity. Such a representation is rarely, if ever, discussed in data models that use concepts such as entities and relationships, or in functional data models. Such models frequently do not have any operators either! Nevertheless, they may be useful for certain kinds of data type analysis encountered in the process of establishing a new database, especially in the very early stages of determining a preliminary informal organization. This leads to the question: What is the data model? A data model is, of course, not just a data structure, as many people seem to think. It is natural that the principal data models are named after their principal structures, but that is not the whole story. A data model [9] is a combination of at least three components: (1) A collection of data structure types (the database building blocks); (2) A collection of operators or rules of inference, which can be applied to any valid instances of the data types listed in (1), to retrieve, derive, or modify data from any parts of those structures in any combinations desired; Relational Database: A Practical Foundation for Productivity
395
(3) A collection of general integrity rules, which implicitly or explicitly define the set of consistent database states or changes of state or both-these rules are general in the sense that they apply to any database using this model (incidentally, they may sometimes be expressed as insert-update-delete rules). The relational model is a data model in this sense, and was the first such to be defined. We do not propose to give a detailed definition of the relational model here - the :riginal definition appeared in [7], and an improved one in Secs. 2 and 3 of [8]. Its structuralpart consists of domains, relations of assorted degrees (with tables as their principal conceptual representation), attributes, tuples, candidate keys, and primary keys. Under the principal representation, attributes become columns of tables and tuples become rows, but there is no notion of one column succeeding another or of one row succeeding another as far as the database tables are concerned. In other words, the left to right order of columns and the top to bottom order of rows in those tables are arbitrary and irrelevant. The manipulative part of the relational model consists of the algebraic operators (select, project, join, etc.) which transform relations into relations (and hence tables into tables). The integrity part consists of two integrity rules: entity integrity and referential integrity (see [E, 11] for recent developments in this latter area). In any particular application of a data model it may be necessary to impose further (database-specific) integrity constraints, and thereby define a smaller set of consistent database states or changes of state. In the development of the relational model, there has always been a strong coupling between the structural, manipulative, and integrity aspects. If the structures are defined alone and separately, their behavioral properties are not pinned down, infinitely many possibilities present themselves, and endless speculation results. It is therefore no surprise that attempts such as those of CODASYL and ANSI to develop data structure definition language (DDL) and data manipulation language (DML) in separate committees have yielded many misunderstandings and incompatibilities.
4
The Relational Processing Capability The relational model calls not only for relational structures (which can be thought of as tables), but also for a particular kind of set processing called relationalprocessing. Relational processing entails treating whole relations as operands. Iti primary purpose is loop-avoidance, an absolute requirement for end fleers to be productive at all, and a clear
productivity booster for application programmers. 396
E. F. CODD
The SELECT operator (also called RESTRICT) of the relational algebra takes one relation (table) as operand and produces a new relation (table) consisting of selected tuples (rows) of the first. The PROJECT operator also transforms one relation (table) into a new one, this time, however, consisting of selected attributes (columns) of the first. The EQUI-JOIN operator takes two relations (tables) as operands and produces a third consisting of rows of the first concatenated with rows of the second, but only where specified columns in the first and specified columns in the second have matching values. If redundancy in columns is removed, the operator is called NATURAL JOIN. In what follows, we use the term 'join' to refer to either the equi-join or the natural join. The relational algebra, which includes these and other operators, is intended as a yardstick of power. It is not intended to be standard language, to which all relational systems should adhere. The setprocessing objective of the relational model is intended to be met by means of a data sublanguage 2 having at least the power of the relational algebra without making use of iteration or recursion statements. Much of the derivability power of the relational algebra is obtained from the SELECT, PROJECT, and JOIN operators alone, provided the JOIN is not subject to any implementation restrictions having to do with predefinition of supporting physical access paths. A system has an unrestrictedjoincapability if it allows joins to be taken wherein any pair of attributes may be matched, providing only that they are defined on the same domain or data type (for our present purpose, it does not matter whether the domain is syntactic or semantic and it does not matter whether the data type is weak or strong, but see [10] for circumstances in which it does matter). Occasionally, one finds systems in which join is supported only if the attributes to be matched have the same name or are supported by a certain type of predeclared access path. Such restrictions significantly impair the power of the system to derive relations from the base relations. These restrictions consequently reduce the system's capability to handle unanticipated queries by end users and reduce the chances for application programmers to avoid coding iterative loops. Thus, we say that a data sublanguage L has a relational processing capability if the transformations specified by the SELECT, PROJECT, and unrestricted JOIN operators of the relational algebra can be specified in L without resorting to commands for iteration or recursion. For a database management system to be called relational it must support: (1) Tables without user-visible navigation links between them; (2) A data sublanguage with at least this (minimal) relational processing capability. 2
A data sublanguage is a specialized language for database management, supporting at least data definition, data retrieval, insertion, update, and deletion. It need not be cornputationally complete, and usually is not. In the context of application programming, it is intended to be used in conjunction with one or more programming languages. Relational Database: A Practical Foundation for Productivity
397
One consequence of this is that a DBMS that does not support relational processing should be considered nonrelational. Such a system might be more appropriately called tabular, providing that it supports tables without user-visible navigation links between tables. This term should replace the term 'senri-relational' used in 18], because there is a large difference in implementation complexity between tabular systems, in which the programmer does his own navigation, and relational systems, in which the system does the navigation for him, i.e., the system provides automatic navigation. The definition of relational DBMS given above intentionally permits a lot of latitude in the services provided. For example, it is not required that the full relational algebra be supported, and there is no requirement in regard to support of the two integrity rules of the relational model (entity integrity and referential integrity). Full support by a relational system of these latter two parts of the model justifies calling that system fully relational [8]. Although we know of no systems that qualify as fully relational today, some are quite close to qualifying, and no doubt will soon do so. In Fig. 1 we illustrate the distinction between the various kinds of relational and tabular system-r s. For each class the extent of shading in the S box is intended to show the degree of fidelity of members of that class to the structural requirements of the relational model. M
Tabular (previously called semi-relativr all
S .
Minimally Relational
,o C'a 0
Relatironall y
Complete
Fully Relational
FIGURE 1. Classification of DBMS: S, structural; M, manipulative; I, integrity; c, relationalcompleteness; m, minimal relational processing capability.
A similar remark applies to the M box with respect to the manipulative requirements, and to the I box with respect to the integrity requirements. 398
E. F. CODD
m denotes the minimal relational processing capability. c denotes relational completeness (a capability corresponding to a two-valued firstorder predicate logic without nulls). When the manipulation box M is fully shaded, this denotes a capability corresponding to the full relational completeness (a capability corresponding to a two-valued first-order predicate logic without nulls). When the manipulation box M is fully shaded, this denotes a capability corresponding to the full relational algebra defined in [8] (a three-valued predicate logic with a single kind of null). The question mark in the integrity box for each class except the fully relational is an indication of the present inadequate support for integrity in relational systems. Stronger support for domains and primary keys is needed [10], as well as the kind of facility discussed in [14]. Note that a relational DBMS may package its relational processing capability in any convenient way. For example, in the INGRES system of Relational Technology, Inc., the RETRIEVE statement of QUEL [29] embodies all three operators (select, project, join) in one statement, in such a way that one can obtain the same effect as any one of the operators or any combination of them. In the definition of the relational model there are several prohibitions. To cite two examples: user-visible navigation links between tables are ruled out, and database information must not be represented (or hidden) in the ordering of tuples within base relations. Our experience is that DBMS designers who have implemented nonrelational systems do not readily understand and accept these prohibitions. By contrast, users enthusiastically understand and accept the enhanced ease of learning and ease of use resulting from these prohibitions. Incidentally, the Relational Task Group of the American National Standards Institute has recently issued a report [4] on the feasibility of developing a standard for relational database systems. This report contains an enlightening analysis of the features of a dozen relational systems, and its authors clearly understand the relational model.
5 The Uniform Relational Property In order to have wide applicability most relational DBMS have a data sublanguage which can be interfaced with one or more of the commonly used programming languages (e.g., Cobol, Fortran, PL/I, APL). We shall refer to these latter languages as host languages. A relational DBMS usually supports at least one end-user oriented data sublanguage -sometimes several, because the needs of these users may vary. Some prefer string languages such as QUEL or SQL [5], while others prefer the screen-oriented two-dimensional data sublanguage of Query-by-Example [33]. Relational Database: A Practical Foundation for Productivity
399
Now, some relational systems (e.g., System R [6], INGRES [29]) support a data sublanguage that iE usable in two modes: (1) interactively at a terminal and (2) embedded in an application program written in a host language. There are strong arguments for such a double-mode data sublanguage: (1) With such a language application programmers can separately debug at a terminal the database statements they wish to incorporate in their application programs -- people who have used SQL to develop application programs claim that the double-mode feature significantly enhances their productivity; (2) Such a language significantly enhances communication among programmers, analysts, end users, database administration staff, etc.; (3) Frivolous distinctions between the languages used in these two modes place an unnecessary earning and memory burden on those users who have to work in both modes. The importance of this feature in productivity suggests that relational DBMS be classified according to whether they possess this feature or not. Accordingly, we call those relational DBMS that support a doublemode sublanguage uniform relational.Thus, a uniform relational DBMS supports relational processing at both an end-user interface and at an application programming interface using a data sublanguage common to both interfaces. The natural term for all other relational DBMS is nonuniform relational. An example of a nonun form relational DBMS is the TANDEM ENCOMPASS [19]. With this sys-em, when retrieving data interactively at a terminal, one uses the relational data sublanguage ENFORM (a language with relational processing capability). When writing a program to retrieve or manipulate data, one uses an extended version of Cobol (a language that does not possess the relational processing capability). Common to both levels of use are the structures: tables without uservisible navigation links between them. A question that immediately arises is this: how can a data sublanguage with relational processing capability be interfaced with a language such as Cobol or PL/I that can handle data one record at a time only (i.e., that is incapable of treating a set of records as a single operand)? To solve this problem we must separate the following two actions from one another: (1) definition of the relation to be derived; (2) presentation of the derived relation to the host language program. One solution (adopted in the Peterlee Relational Test Vehicle [31]) is to cast a derived relation in the form of a file that can be read record-by-record by means of host language statements. In this case delivery of records is delegated to the file system used by the pertinent host language. Another solution (adopted by System R) is to keep the delivery of records under the control of lt a sublanguage statements and, hence, 400
under the control of the relational DBMS optimizer. A query statement Q of SQL (the data sublanguage of System R) may be embedded in a host language program, using the following kind of phrase (for expository reasons, the syntax is not exactly that of SQL):
DECLARE C CURSOR FOR Q
where C stands for any name chosen by the programmer. Such a statement associates a cursor named C with the defining expression Q. Tuples from the derived relation defined by Q are presented to the program one at a time by means of the named cursor. Each time a FETCH via this cursor is executed, the system delivers another tuple from the derived relation. The order of delivery is system-determined unless the SQL statement Q defining the derived relation contains an ORDER BY clause. It is important to note that in advancing a cursor over a derived relation the programmer is not engaging in navigation to some target data. The derived relation is itself the target data! It is the DBMS that determines whether the derived relation should be materialized en bloc prior to the cursor-controlled scan or materialized piecemeal during the scan. In either case, it is the system (not the programmer) that selects the access paths by which the derived data is to be generated. This takes a significant burden off the programmer's shoulders, thereby increasing his productivity.
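The cursor mechanism can be pictured with a short Python analogue: the query defines the derived relation once, and the host program simply fetches tuples one at a time in whatever order the system (or an ORDER BY) chooses. This is an illustrative sketch with assumed table and column names, not System R code.

```python
# Illustrative Python analogue of the cursor mechanism described above:
# the query defines a derived relation; the host program merely fetches
# tuples one at a time, in a system-determined (or ORDER BY) order.
# Nothing here is System R code -- it is a hypothetical sketch.

def derived_relation(parts, order_by=None):
    """The 'Q' of the example: a derived relation over a base table."""
    rows = (row for row in parts if row["qty"] > 100)    # selection
    return sorted(rows, key=order_by) if order_by else rows

class Cursor:
    """Plays the role of 'DECLARE C CURSOR FOR Q' plus repeated FETCH."""
    def __init__(self, rows):
        self._it = iter(rows)
    def fetch(self):
        return next(self._it, None)     # None signals end of the relation

parts = [{"part": "P1", "qty": 300}, {"part": "P2", "qty": 50},
         {"part": "P3", "qty": 400}]

c = Cursor(derived_relation(parts, order_by=lambda r: r["part"]))
while (row := c.fetch()) is not None:
    print(row["part"], row["qty"])
```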
6 Skepticism about Relational Systems
There has been no shortage of skepticism concerning the practicality of the relational approach to database management. Much of this skepticism stems from a lack of understanding, some from fear of the numerous theoretical investigations that are based on the relational model [1, 2, 15, 16, 24]. Instead of welcoming a theoretical foundation as providing soundness, the attitude seems to be: if it's theoretical, it cannot be practical. The absence of a theoretical foundation for almost all nonrelational DBMS is the prime cause of their ungepotchket quality. (This is a Yiddish word, one of whose meanings is patched up.) On the other hand, it seems reasonable to pose the following two questions: (1) Can a relational system provide the range of services that we have grown to expect from other DBMS? (2) If (1) is answered affirmatively, can such a system perform as well as nonrelational DBMS (bearing in mind that the nonrelational ones always employ comparatively low-level data sublanguages for application programming)? We look at each of these in turn.
6.1 Range of Services
A full-scale DBMS provides the following capabilities:
* data storage, retrieval, and update;
* a user-accessible catalog for data description;
* transaction support to ensure that all or none of a sequence of database changes are reflected in the pertinent database (see [17] for an up-to-date summary of transaction technology);
* recovery services in case of failure (system, media, or program);
* concurrency control services to ensure that concurrent transactions behave the same way as if run in some sequential order;
* authorization services to ensure that all access to and manipulation of data be in accordance with specified constraints on users and programs [18];
* integration with support for data communication;
* integrity services to ensure that database states and changes of state conform to specified rules.
Certain relational prototypes developed in the early seventies fell far short of providing all these services (possibly for good reasons). Now, however, several relational systems are available as software products and provide all these services with the exception of the last. Present versions of these products are admittedly weak in the provision of integrity services, but this is rapidly being remedied [10]. Some relational DBMS actually provide more complete data services than the nonrelational systems. Three examples follow. As a first example, relational DBMS support the extraction of all meaningful relations from a database, whereas nonrelational systems support extraction only where there exist statically predefined access paths. As a second example of the additional services provided by some relational systems, consider views. A view is a virtual relation (table) defined by means of an expression or sequence of commands. Although not directly supported by actual data, a view appears to a user as if it were an additional base table kept up-to-date and in a state of integrity with the other base tables. Views are useful for permitting application programs and users at terminals to interact with constant view structures, even when the base tables themselves are undergoing structural changes at the logical level (providing that the pertinent views are still definable from the new base tables). They are also useful in restricting the scope of access of programs and users. Nonrelational systems either do not support views at all or else support much more primitive counterparts, such as the CODASYL subschema. As a third example, some systems (e.g., SQL/DS [28] and its prototype predecessor System R) permit a variety of changes to be made to the logical and physical organization of the data dynamically
while transactions are in progress. These changes do not normally require application programs to be recoded. Thus, there is less of a program maintenance burden, leaving programmers to be more productive doing development rather than maintenance. This capability is made possible in SQL/DS by the fact that the system has complete control over access path selection. In nonrelational systems such changes would normally require all other database activities including transactions in progress to be brought to a halt. The database then remains out of action until the organizational changes are completed and any necessary recompiling done.
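Returning to the second example above (views), the idea can be sketched in a few lines of Python: a view stores no data of its own and is re-derived from the base tables each time it is read, so it stays current and can also restrict what a user sees. The table and column names are hypothetical.

```python
# A view as a virtual relation: it stores no data of its own, and is
# re-derived from the base tables each time it is read.  Illustrative only.

employees = [
    {"emp": "Jones", "dept": "D1", "salary": 52000},
    {"emp": "Blake", "dept": "D2", "salary": 48000},
]

def low_paid_view():
    """A view restricting both the rows and the columns a user may see."""
    return [{"emp": e["emp"], "dept": e["dept"]}
            for e in employees if e["salary"] < 50000]

print(low_paid_view())          # [{'emp': 'Blake', 'dept': 'D2'}]
employees.append({"emp": "Clark", "dept": "D1", "salary": 45000})
print(low_paid_view())          # the view is automatically up to date
```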
6.2 Performance
Naturally, people would hesitate to use relational systems if these systems were sluggish in performance. All too often, erroneous conclusions are drawn about the performance of relational systems by comparing the time it might take for one of these systems to execute a complex transaction with the time a nonrelational system might take to execute an extremely simple transaction. To arrive at a fair performance comparison, one must compare these systems on the same tasks or applications. We shall present arguments to show why relational systems should be able to compete successfully with nonrelational systems. Good performance is determined by two factors: (1) the system must support performance-oriented physical data structures; (2) high-level language requests for data must be compiled into lower-level code sequences at least as good as the average application programmer can produce by hand. The first step in the argument is that a program written in a Cobol-level language can be made to perform efficiently on large databases containing production data structured in tabular form with no user-visible navigation links between them. This step in the argument is supported by the following information [19]: as of August 1981, Tandem Computer Corp. had manufactured and installed 760 systems; of these, over 700 were making use of the Tandem ENCOMPASS relational database management system to support databases containing production data. Tandem has committed its own manufacturing database to the care of ENCOMPASS. ENCOMPASS does not support links between the database tables, either user-visible (navigation) links or user-invisible (access method) links. In the second step of the argument, suppose we take the application programs in the above-cited installations and replace the database retrieval and manipulation statements by statements in a database sublanguage with a relational processing capability (e.g., SQL). Clearly, to obtain good performance with such a high-level language, it is
essential that it be compiled into object code (instead of being interpreted), and it is essential that that object code be efficient. Compilation is used in System R and its product version SQL/DS. In 1976 Raymond Lorie developed an ingenious pre- and post-compiling scheme for coping with dynamic changes in access paths [21]. It also copes with early (and hence efficient) authorization and integrity checking (the latter, however, is not yet implemented). This scheme calls for compiling in a rather special way the SQL statements embedded in a host language program. This compilation step transforms the SQL statements into appropriate CALLs within the source program together with access modules containing object code. These modules are then stored in the database for later use at runtime. The code in these access modules is generated by the system so as to optimize the sequencing of the major operations and the selection of access paths to provide runtime efficiency. After this precompilation step, the application program is compiled by a regular compiler for the pertinent host language. If at any subsequent time one or more of the access paths is removed and an attempt is made to run the program, enough source information has been retained in the access module to enable the system to recompile a new access module that exploits the now existing access paths without requiring a recompilation of the application program. Incidentally, the same data sublanguage compiler is used on ad hoc queries submitted interactively from a terminal and also on queries that are dynamically generated during the execution of a program (e.g., from parameters submitted interactively). Immediately after compilation, such queries are executed and, with the exception of the simplest of queries, the performance is better than that of an interpreter. The generation of access modules (whether at the initial compiling or recompiling stage) entails a quite sophisticated optimization scheme [27], which makes use of system-maintained statistics that would not normally be within the programmer's knowledge. Thus, only on the simplest of all transactions would it be possible for an average application programmer to compete with this optimizer in generation of efficient code. Any attempts to compete are bound to reduce the programmer's productivity. Thus, the price paid for extra compile-time overhead would seem to be well worth paying. Assuming nonlinked tabular structures in both cases, we can expect SQL/DS to generate code comparable with average hand-written code in many simple cases, and superior in many complex cases. Many commercial transactions are extremely simple. For example, one may need to look up a record for a particular railroad wagon to find out where it is or find the balance in someone's savings account. If suitably fast access paths are supported (e.g., hashing), there is no reason why a high-level language such as SQL, QUEL, or QBE should result in less efficient runtime code for these simple transactions than a lower-level language, even though such transactions make little use of the optimizing capability of the high-level data sublanguage compiler.
7 Future Directions
If we are to use relational database as a foundation for productivity, we need to know what sort of developments may lie ahead for relational systems. Let us deal with near-term developments first. In some relational systems stronger support is needed for domains and primary keys per suggestions in [10]. As already noted, all relational systems need upgrading with regard to automatic adherence to integrity constraints. Existing constraints on updating join-type views need to be relaxed (where theoretically possible), and progress is being made on this problem [20]. Support for outer joins is needed. Marked improvements are being made in optimizing technology, so we may reasonably expect further improvements in performance. In certain products, such as the ICL CAFS [22] and the Britton-Lee IDM500 [13], special hardware support has been implemented. Special hardware may help performance in certain types of applications. However, in the majority of applications dealing with formatted databases, software-implemented relational systems can compete in performance with software-implemented nonrelational systems. At present, most relational systems do not provide any special support for engineering and scientific databases. Such support, including interfacing with Fortran, is clearly needed and can be expected. Catalogs in relational systems already consist of additional relations that can be interrogated just like the rest of the database using the same query language. A natural development that can and should be swiftly put in place is the expansion of these catalogs into full-fledged active dictionaries to provide additional on-line data control. Finally, in the near term, we may expect database design aids suited for use with relational systems both at the logical and physical levels. In the longer term we may expect support for relational databases distributed over a communications network [25, 30, 32] and managed in such a way that application programs and interactive users can manipulate the data (1) as if all of it were stored at the local node -- location transparency -- and (2) as if no data were replicated anywhere -- replication transparency. All three of the projects cited above are based on the relational model. One important reason for this is that relational databases offer great decomposition flexibility when planning how a database is to be distributed over a network of computer systems, and great recomposition power for dynamic combination of decentralized information. By contrast, CODASYL DBTG databases are very difficult to decompose and recompose due to the entanglement of the owner-member navigation links. This property makes the CODASYL approach extremely difficult to adapt to a distributed database environment and may well prove to be its downfall. A second
reason for use of the relational model is that it offers concise high-level data sublanguages for transmitting requests for data from node to node. The ongoing work in extending the relational model to capture in a formal way more meaning of the data can be expected to lead to the incorporation of this meaning in the database catalog in order to factor it out of application programs and make these programs even more concise and simple. Here, we are, of course, talking about meaning that is represented in such a way that the system can understand it and act upon it. Improved theories are being developed for handling missing data and inapplicable data (see, for example, [3]). This work should yield improved treatment of null values. As it stands today, relational database is best suited to data with a rather regular or homogeneous structure. Can we retain the advantages of the relational approach while handling heterogeneous data also? Such data may include images, text, and miscellaneous facts. An affirmative answer is expected, and some research is in progress on this subject, but more is needed. Considerable research is needed to achieve a rapprochement between database languages and programming languages. Pascal/R [26] is a good example of work in this direction. Ongoing investigations focus on the incorporation of abstract data types into database languages on the one hand [12] and relational processing into programming languages on the other.
8 Conclusions
We have presented a series of arguments to support the claim that relational database technology offers dramatic improvements in productivity both for end users and for application programmers. The arguments center on the data independence, structural simplicity, and relational processing defined in the relational model and implemented in relational database management systems. All three of these features simplify the task of developing application programs and the formulation of queries and updates to be submitted from a terminal. In addition, the first feature tends to keep programs viable in the face of organizational and descriptive changes in the database and therefore reduces the effort that is normally diverted into the maintenance of programs. Why, then, does the title of this paper suggest that relational database provides only a foundation for improved productivity and not the total solution? The reason is simple: relational database deals only with the shared data component of application programs and end-user interactions. There are numerous complementary technologies that may help with other components or aspects, for example, programming languages that support relational processing and improved checking
of data types, improved editors that understand more of the language being used, etc. We use the term 'foundation,' because interaction with shared data (whether by program or via terminal) represents the core of so much data processing activity. The practicality of the relational approach has been proven by the test and production installations that are already in operation. Accordingly, with relational systems we can now look forward to the productivity boost that we all hoped DBMS would provide in the first place.
Acknowledgments I would like to express my indebtedness to the System R development team at IBM Research, San Jose, for developing a full-scale, uniform relational prototype that entailed numerous language and system innovations; to the development team at the IBM Laboratory, Endicott, NY., for the professional way in which they converted System R into product form; to the various teams at universities, hardware manufacturers, software firms, and user installations, who designed and implemented working relational systems; to the QBE team at IBM Yorktown Heights, N.Y.; to the PRTV team at the IBM Scientific Centre in England; and to the numerous contributors to database theory who have used the relational model as a cornerstone. A special acknowledgment is due to the very few colleagues who saw something worth supporting in the early stages, particularly, Chris Date and Sharon Weinberg. Finally, it was Sharon Weinberg who suggested the theme of this paper.
References
1. Beeri, C., Bernstein, P., and Goodman, N. A sophisticate's introduction to database normalization theory. Proc. Very Large Data Bases, West Berlin, Germany, Sept. 1978.
2. Bernstein, P. A., Goodman, N., and Lai, M.-Y. Laying phantoms to rest. Report TR-03-81, Center for Research in Computing Technology, Harvard University, Cambridge, Mass., 1981.
3. Biskup, J. A formal approach to null values in database relations. Proc. Workshop on Formal Bases for Data Bases, Toulouse, France, Dec. 1979; published in [16] (see below), pp. 299-342.
4. Brodie, M., and Schmidt, J. (Eds.) Report of the ANSI Relational Task Group (to be published, ACM SIGMOD Record).
5. Chamberlin, D. D., et al. SEQUEL 2: A unified approach to data definition, manipulation, and control. IBM J. Res. & Dev. 20, 6 (Nov. 1976), 560-565.
6. Chamberlin, D. D., et al. A history and evaluation of System R. Comm. ACM 24, 10 (Oct. 1981), 632-646.
7. Codd, E. F. A relational model of data for large shared data banks. Comm. ACM 13, 6 (June 1970), 377-387.
8. Codd, E. F. Extending the database relational model to capture more meaning. ACM TODS 4, 4 (Dec. 1979), 397-434.
9. Codd, E. F. Data models in database management. ACM SIGMOD Record 11, 2 (Feb. 1981), 112-114.
10. Codd, E. F. The capabilities of relational database management systems. Proc. Convencio Informatica Llatina, Barcelona, Spain, June 9-12, 1981, pp. 13-26; also available as Report 3132, IBM Research Lab., San Jose, Calif.
11. Date, C. J. Referential integrity. Proc. Very Large Data Bases, Cannes, France, Sept. 9-11, 1981, pp. 2-12.
12. Ehrig, H., and Weber, H. Algebraic specification schemes for data base systems. Proc. Very Large Data Bases, West Berlin, Germany, Sept. 13-15, 1978, pp. 427-440.
13. Epstein, R., and Hawthorn, P. Design decisions for the intelligent database machine. Proc. NCC 1980, AFIPS, Vol. 49, May 1980, pp. 237-241.
14. Eswaran, K. P., and Chamberlin, D. D. Functional specifications of a subsystem for database integrity. Proc. Very Large Data Bases, Framingham, Mass., Sept. 1975, pp. 48-68.
15. Fagin, R. Horn clauses and database dependencies. Proc. 1980 ACM SIGACT Symp. on Theory of Computing, Los Angeles, Calif., pp. 123-134.
16. Gallaire, H., Minker, J., and Nicolas, J. M. Advances in Data Base Theory, Vol. 1, Plenum Press, New York, 1981.
17. Gray, J. The transaction concept: Virtues and limitations. Proc. Very Large Data Bases, Cannes, France, Sept. 9-11, 1981, pp. 144-154.
18. Griffiths, P. G., and Wade, B. W. An authorization mechanism for a relational database system. ACM TODS 1, 3 (Sept. 1976), 242-255.
19. Held, G. ENCOMPASS: A relational data manager. Data Base/81, Western Institute of Computer Science, Univ. of Santa Clara, Santa Clara, Calif., Aug. 24-28, 1981.
20. Keller, A. M. Updates to relational databases through views involving joins. Report RJ3282, IBM Research Laboratory, San Jose, Calif., Oct. 27, 1981.
21. Lorie, R. A., and Nilsson, J. F. An access specification language for a relational data base system. IBM J. Res. & Dev. 23, 3 (May 1979), 286-298.
22. Maller, V. A. J. The content addressable file store -- CAFS. ICL Technical J. 1, 3 (Nov. 1979), 265-279.
23. Reisner, P. Human factors studies of database query languages: A survey and assessment. ACM Computing Surveys 13, 1 (March 1981), 13-31.
24. Rissanen, J. Theory of relations for databases -- A tutorial survey. Proc. Symp. on Mathematical Foundations of Computer Science, Zakopane, Poland, Sept. 1978, Lecture Notes in Computer Science, No. 64, Springer-Verlag, New York, 1978.
25. Rothnie, J. B., Jr., et al. Introduction to a system for distributed databases (SDD-1). ACM TODS 5, 1 (March 1980), 1-17.
26. Schmidt, J. W. Some high level language constructs for data of type relation. ACM TODS 2, 3 (Sept. 1977), 247-261.
27. Selinger, P. G., et al. Access path selection in a relational database system. Proc. 1979 ACM SIGMOD International Conference on Management of Data, Boston, Mass., May 1979, pp. 23-34.
28. -- SQL/Data System for VSE: A relational data system for application development. IBM Corp. Data Processing Division, White Plains, N.Y., G320-6590, Feb. 1981.
29. Stonebraker, M. R., et al. The design and implementation of INGRES. ACM TODS 1, 3 (Sept. 1976), 189-222.
30. Stonebraker, M. R., and Neuhold, E. J. A distributed data base version of INGRES. Proc. Second Berkeley Workshop on Distributed Data Management and Computer Networks, Lawrence Berkeley Lab., Berkeley, Calif., May 1977, pp. 19-36.
31. Todd, S. J. P. The Peterlee relational test vehicle -- A system overview. IBM Systems J. 15, 4 (1976), 285-308.
32. Williams, R., et al. R*: An overview of the architecture. Report RJ3325, IBM Research Laboratory, San Jose, Calif., Oct. 27, 1981.
33. Zloof, M. M. Query by example. Proc. NCC, AFIPS, Vol. 44, May 1975, pp. 431-438.
Categories and Subject Descriptors: D.2.9 [Software Engineering]: Management-productivity; D.3.4 [Programming Languages]: Processors-compilers;H.2.1 [Database Management]: Logical Design-data models General Terms: Design, Human Factors, Languages, Performance
Additional Key Words and Phrases: Data sublanguage, host languages, relational model
Postscript
E. F. CODD
Codd and Date Consulting Group, San Jose, Calif.
Two aims of my Turing Award paper were (1) to emphasize that the relational model specifies more than the structural aspects of data as seen by users -- it specifies manipulative and integrity aspects too, and (2) to establish a minimal collection of features of the relational model which could be used to distinguish relational database management systems (DBMS) from nonrelational systems. To be termed 'relational' a DBMS would have to support each one of these features. In the first half of the 1980s most vendors announced DBMS products which they claimed to be relational. Many, however, claimed their products to be 'fully relational,' when in fact their products met only the minimal requirements to be termed relational. A few vendors released products which failed to meet even the minimal requirements, but loudly claimed them to be 'fully relational' in their manuals, in their advertisements, and in their presentations and press releases. To protect users who might be expecting to reap all the benefits associated with the relational approach, I decided in the fall of 1985 to publish the two-part article 'How Relational is Your Database Management System?' in Computerworld (October 14 and 21). In Part I of this paper I described 12 rules, each of which had to be fully supported by a DBMS product, if that product had a chance of being truthfully claimed to be fully relational. In Part II, I rated three widely advertised DBMS products intended primarily for large mainframes. Two of these three products each received a score of zero on their support for the 12 rules. I believe that these papers have had the very salutary effect of reducing the frequency of flamboyant claims from vendors in the area of relational databases.
An Overview of Computational Complexity
STEPHEN A. COOK
University of Toronto
The 1982 Turing Award was presented to Stephen Arthur Cook, Professor of Computer Science at the University of Toronto, at the ACM Annual Conference in Dallas on October 25, 1982. The award is the Association's foremost recognition of technical contributions to the computing community. The citation of Cook's achievements noted that 'Dr. Cook has advanced our understanding of the complexity of computation in a significant and profound way. His seminal paper, The Complexity of Theorem Proving Procedures, presented at the 1971 ACM SIGACT Symposium on the Theory of Computing, laid the foundations for the theory of NP-completeness. The ensuing exploration of the boundaries and nature of the NP-complete class of problems has been one of the most active and important research activities in computer science for the last decade. Cook is well known for his influential results in fundamental areas of computer science. He has made significant contributions to complexity theory, to time-space tradeoffs in computation, and to logics for programming languages. His work is characterized by elegance and insights and has illuminated the very nature of computation.' During 1970-1979, Cook did extensive work under grants from the National Research Council. He was also an E. W. R. Steacie Memorial Fellowship recipient for 1977-1978. The author of numerous landmark papers, he is currently involved in proving that no 'good' algorithm exists for NP-complete problems. The ACM Turing Award memorializes A. M. Turing, the English mathematician who made major contributions to the computing sciences. (Author's present address: Department of Computer Science, University of Toronto, Toronto, Canada M5S 1A7.)
An historical overview of computational complexity is presented. Emphasis is on the fundamental issues of defining the intrinsic computational complexity of a problem and proving upper and lower bounds on the complexity of problems. Probabilistic and parallel computation are discussed.
This is the second Turing Award lecture on Computational Complexity. The first was given by Michael Rabin in 1976. (Michael Rabin and Dana Scott shared the Turing Award in 1976.) In reading Rabin's excellent article [62] now, one of the things that strikes me is how much activity there has been in the field since. In this brief overview I want to mention what to me are the most important and interesting results since the subject began in about 1960. In such a large field the choice of topics is inevitably somewhat personal; however, I hope to include papers which, by any standards, are fundamental.

1 Early Papers
The prehistory of the subject goes back, appropriately, to Alan Turing. In his 1937 paper, On computable numbers with an application to the Entscheidungsproblem [85], Turing introduced his famous Turing machine, which provided the most convincing formalization (up to that time) of the notion of an effectively (or algorithmically) computable function. Once this notion was pinned down precisely, impossibility proofs for computers were possible. In the same paper Turing proved that no algorithm (i.e., Turing machine) could, upon being given an arbitrary formula of the predicate calculus, decide, in a finite number of steps, whether that formula was satisfiable. After the theory explaining which problems can and cannot be solved by computer was well developed, it was natural to ask about the relative computational difficulty of computable functions. This is the subject matter of computational complexity. Rabin [59, 60] was one of the first persons (1960) to address this general question explicitly: what does it mean to say that f is more difficult to compute than g? Rabin suggested an axiomatic framework that provided the basis for the abstract complexity theory developed by Blum [6] and others. A second early (1965) influential paper was On the computational complexity of algorithms by J. Hartmanis and R. E. Stearns [37]. (See Hartmanis [36] for some interesting reminiscences.) This paper was widely read and gave the field its title. The important notion of complexity measure defined by the computation time on multitape Turing machines was introduced, and hierarchy theorems were proved. The paper also posed an intriguing question that is still open today. Is any irrational algebraic number (such as √2) computable in real time, that is, is there a Turing machine that prints out the decimal expansion of the number at the rate of one digit per 100 steps forever? A third founding paper (1965) was The intrinsic computational difficulty of functions by Alan Cobham [15]. Cobham emphasized the word 'intrinsic,' that is, he was interested in a machine-independent theory. He asked whether multiplication is harder than addition, and believed that the question could not be answered until the theory was properly developed. Cobham also defined and characterized the important class of functions he called ℒ: those functions on the natural numbers computable in time bounded by a polynomial in the decimal length of the input. Three other papers that influenced the above authors as well as other complexity workers (including myself) are Yamada [91], Bennett [4], and Ritchie [66]. It is interesting to note that Rabin, Stearns, Bennett, and Ritchie were all students at Princeton at roughly the same time.
2 Early Issues and Concepts
Several of the early authors were concerned with the question: What is the right complexity measure? Most mentioned computation time or space as obvious choices, but were not convinced that these were the only or the right ones. For example, Cobham [15] suggested '. . . some measure related to the physical notion of work [may] lead to the most satisfactory analysis.' Rabin [60] introduced axioms which a complexity measure should satisfy. With the perspective of 20 years experience, I now think it is clear that time and space - especially time - are certainly among the most important complexity measures. It seems that the first figure of merit given to evaluate the efficiency of an algorithm is its running time. However, more recently it is becoming clear that parallel time and hardware size are important complexity measures too (see Section 6). Another important complexity measure that goes back in some form at least to Shannon [74] (1949) is Boolean circuit (or combinational) complexity. Here it is convenient to assume that the function f in question takes finite bit strings into finite bit strings, and the complexity C(n) of f is the size of the smallest Boolean circuit that computes f for all inputs of length n. This very natural measure is closely related to computation time (see [57a], [57b], [68b]), and has a well-developed theory in its own right (see Savage [68a]). Another question raised by Cobham [15] is what constitutes a 'step' in a computation. This amounts to asking what is the right
computer model for measuring the computation time of an algorithm. Multitape Turing machines are commonly used in the literature, but they have artificial restrictions from the point of view of efficient implementation of algorithms. For example, there is no compelling reason why the storage media should be linear tapes. Why not planar arrays or trees? Why not allow a random access memory? In fact, quite a few computer models have been proposed since 1960. Since real computers have random access memories, it seems natural to allow these in the model. But just how to do this becomes a tricky question. If the machine can store integers in one step some bound must be placed on their size. (If the number 2 is squared 100 times the result has 2^100 bits, which could not be stored in all the world's existing storage media.) I proposed charged RAM's in [19], in which a cost (number of steps) of about log |x| is charged every time a number x is stored or retrieved. This works but is not completely convincing. A more popular random access model is the one used by Aho, Hopcroft, and Ullman in [3], in which each operation involving an integer has unit cost, but integers are not allowed to become unreasonably large (for example, their magnitude might be bounded by some fixed polynomial in the size of the input). Probably the most mathematically satisfying model is Schönhage's storage modification machine [69], which can be viewed either as a Turing machine that builds its own storage structure or as a unit cost RAM that can only copy, add or subtract one, or store or retrieve in one step. Schönhage's machine is a slight generalization of the Kolmogorov-Uspenski machine proposed much earlier [46] (1958), and seems to me to represent the most general machine that could possibly be construed as doing a bounded amount of work in one step. The trouble is that it probably is a little too powerful. (See Section 3 under 'large number multiplication.') Returning to Cobham's question 'what is a step,' I think what has become clear in the last 20 years is that there is no single clear answer. Fortunately, the competing computer models are not wildly different in computation time. In general, each can simulate any other by at most squaring the computation time (some of the first arguments to this effect are in [37]). Among the leading random access models, there is only a factor of log computation time in question. This leads to the final important concept developed by 1965 -- the identification of the class of problems solvable in time bounded by a polynomial in the length of the input. The distinction between polynomial time and exponential time algorithms was made as early as 1953 by von Neumann [90]. However, the class was not defined formally and studied until Cobham [15] introduced the class ℒ of functions in 1964 (see Section 1). Cobham pointed out that the class was well defined, independent of which computer model was chosen, and gave it a characterization in the spirit of recursive function theory. The idea that polynomial time computability roughly corresponds to
tractability was first expressed in print by Edmonds [27], who called polynomial time algorithms 'good algorithms.' The now standard notation P for the class of polynomial time recognizable sets of strings was introduced later by Karp [42]. The identification of P with the tractable (or feasible) problems has been generally accepted in the field since the early 1970's. It is not immediately obvious why this should be true, since an algorithm whose running time is the polynomial n^1000 is surely not feasible, and conversely, one whose running time is the exponential 2^(.0001n) is feasible in practice. It seems to be an empirical fact, however, that naturally arising problems do not have optimal algorithms with such running times. (See [31], pp. 6-9 for a discussion of this.) The most notable practical algorithm that has an exponential worst case running time is the simplex algorithm for linear programming. Smale [75, 76] attempts to explain this by showing that, in some sense, the average running time is fast, but it is also important to note that Khachian [43] showed that linear programming is in P using another algorithm. Thus, our general thesis, that P equals the feasible problems, is not violated.
3 Upper Bounds on Time
A good part of computer science research consists of designing and analyzing enormous numbers of efficient algorithms. The important algorithms (from the point of view of computational complexity) must be special in some way; they generally supply a surprisingly fast way of solving a simple or important problem. Below I list some of the more interesting ones invented since 1960. (As an aside, it is interesting to speculate on what are the all time most important algorithms. Surely the arithmetic operations +, -, *, and ÷ on decimal numbers are basic. After that, I suggest fast sorting and searching, Gaussian elimination, the Euclidean algorithm, and the simplex algorithm as candidates.) The parameter n refers to the size of the input, and the time bounds are the worst case time bounds and apply to a multitape Turing machine (or any reasonable random access machine) except where noted.
(1) The fast Fourier transform [23], requiring O(n log n) arithmetic operations, is one of the most used algorithms in scientific computing.
(2) Large number multiplication. The elementary school method requires O(n^2) bit operations to multiply two n digit numbers. In 1962 Karatsuba and Ofman [41] published a method requiring only O(n^1.59) steps. Shortly after that Toom [84] showed how to construct Boolean circuits of size O(n^(1+ε)) for arbitrarily small ε > 0 in order to carry out the multiplication. I was a graduate student at Harvard at the time, and inspired by Cobham's question 'Is multiplication harder than addition?' I was naively trying to prove that multiplication requires Ω(n^2) steps on a multitape Turing machine. Toom's paper caused me considerable surprise. With the help of Stal Aanderaa [22], I was reduced to showing that multiplication requires Ω(n log n/(log log n)^2) steps using an 'on-line' Turing machine. (This lower bound has been slightly improved; see [56] and [64].) I also pointed out in my thesis that Toom's method can be adapted to multitape Turing machines in order to multiply in O(n^(1+ε)) steps, something that I am sure came as no surprise to Toom. The currently fastest asymptotic running time on a multitape Turing machine for number multiplication is O(n log n log log n), and was devised by Schönhage and Strassen [70] (1971) using the fast Fourier transform. However, Schönhage [69] recently showed by a complicated argument that his storage modification machines (see Section 2) can multiply in time O(n) (linear time!). We are forced to conclude that either multiplication is easier than we thought or that Schönhage's machines cheat.
(3) Matrix multiplication. The obvious method requires n^2(2n-1) arithmetic operations to multiply two n x n matrices, and attempts were made to prove the method optimal in the 1950's and 1960's. There was surprise when Strassen [81] (1969) published his method requiring only 4.7n^2.81 operations. Considerable work has been devoted to reducing the exponent of 2.81, and currently the best time known is O(n^2.496) operations, due to Coppersmith and Winograd [24]. There is still plenty of room for progress, since the best known lower bound is 2n^2 - 1 (see [13]).
(4) Maximum matchings in general undirected graphs. This was perhaps the first problem explicitly shown to be in P whose membership in P requires a difficult algorithm. Edmonds' influential paper [27] gave the result and discussed the notion of a polynomial time algorithm (see Section 2). He also pointed out that the simple notion of augmenting path, which suffices for the bipartite case, does not work for general undirected graphs.
(5) Recognition of prime numbers. The major question here is whether this problem is in P. In other words, is there an algorithm that always tells us whether an arbitrary n-digit input integer is prime, and halts in a number of steps bounded by a fixed polynomial in n? Gary Miller [53] (1976) showed that there is such an algorithm, but its validity depends on the extended Riemann hypothesis. Solovay and Strassen [77] devised a fast Monte Carlo algorithm (see Section 5) for prime recognition, but if the input number is composite there is a small chance the algorithm will mistakenly say it is prime. The best provable deterministic algorithm known is due to Adleman, Pomerance, and Rumely [2] and runs in time n^O(log log n), which is slightly worse than
polynomial. A variation of this due to H. Cohen and H. W. Lenstra Jr. [17] can routinely handle numbers up to 100 decimal digits in approximately 45 seconds. Recently three important problems have been shown to be in the class P. The first is linear programming, shown by Khachian [43] in 1979 (see [55] for an exposition). The second is determining whether two graphs of degree at most d are isomorphic, shown by Luks [50] in 1980. (The algorithm is polynomial in the number of vertices for fixed d, but exponential in d.) The third is factoring polynomials with rational coefficients. This was shown for polynomials in one variable by Lenstra, Lenstra, and Lovász [48] in 1982. It can be generalized to polynomials in any fixed number of variables as shown by Kaltofen's result [39], [40].
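The divide-and-conquer idea behind the Karatsuba-Ofman O(n^1.59) bound mentioned under item (2) can be sketched briefly: each product of two n-digit numbers is reduced to three (rather than four) products of roughly n/2-digit numbers. The following Python sketch is didactic only, not a tuned implementation.

```python
# Karatsuba multiplication: split each n-digit operand in half and use
# three half-size products instead of four, giving the O(n^1.59) bound
# mentioned above.  A didactic sketch, not a tuned implementation.

def karatsuba(x, y):
    if x < 10 or y < 10:                 # base case: single-digit factor
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    high_x, low_x = divmod(x, 10 ** m)   # x = high_x * 10^m + low_x
    high_y, low_y = divmod(y, 10 ** m)
    z0 = karatsuba(low_x, low_y)
    z2 = karatsuba(high_x, high_y)
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0

assert karatsuba(123456789, 987654321) == 123456789 * 987654321
```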
4 Lower Bounds
The real challenge in complexity theory, and the problem that sets the theory apart from the analysis of algorithms, is proving lower bounds on the complexity of specific problems. There is something very satisfying in proving that a yes-no problem cannot be solved in n, or n^2, or 2^n steps, no matter what algorithm is used. There have been some important successes in proving lower bounds, but the open questions are even more important and somewhat frustrating. All important lower bounds on computation time or space are based on 'diagonal arguments.' Diagonal arguments were used by Turing and his contemporaries to prove certain problems are not algorithmically solvable. They were also used prior to 1960 to define hierarchies of computable 0-1 functions. (See, for example, Grzegorczyk [35].) In 1960, Rabin [60] proved that for any reasonable complexity measure, such as computation time or space (memory), sufficiently increasing the allowed time or space etc. always allows more 0-1 functions to be computed. About the same time, Ritchie in his thesis [65] defined a specific hierarchy of functions (which he showed is nontrivial for 0-1 functions) in terms of the amount of space allowed. A little later Rabin's result was amplified in detail for time on multitape Turing machines by Hartmanis and Stearns [37], and for space by Stearns, Hartmanis, and Lewis [78].

4.1 Natural Decidable Problems Proved Infeasible
The hierarchy results mentioned above gave lower bounds on the time and space needed to compute specific functions, but all such functions seemed to be 'contrived.' For example, it is easy to see that the function f(x,y) which gives the first digit of the output of machine x on input y after (|x| + |y|)^2 steps cannot be computed in time (|x| + |y|)^2. It was not until 1972, when Albert Meyer and Larry
Stockmeyer [52] proved that the equivalence problem for regular expressions with squaring requires exponential space and, therefore, exponential time, that a nontrivial lower bound for general models of computation on a 'natural' problem was found (natural in the sense of being interesting, and not about computing machines). Shortly after that Meyer [51] found a very strong lower bound on the time required to determine the truth of formulas in a certain formal decidable theory called WSIS (weak monadic second-order theory of successor). He proved that any computer whose running time was bounded by a fixed number of exponentials (2^n, 2^2^n, 2^2^2^n, etc.) could not correctly decide WSIS. Meyer's Ph.D. student, Stockmeyer, went on to calculate [79] that any Boolean circuit (think computer) that correctly decides the truth of an arbitrary WSIS formula of length 616 symbols must have more than 10^123 gates. The number 10^123 was chosen to be the number of protons that could fit in the known universe. This is a very convincing infeasibility proof! Since Meyer and Stockmeyer there have been a large number of lower bounds on the complexity of decidable formal theories (see [29] and [80] for summaries). One of the most interesting is a doubly exponential time lower bound on the time required to decide Presburger arithmetic (the theory of the natural numbers under addition) by Fischer and Rabin [30]. This is not far from the best known time upper bound for this theory, which is triply exponential [54]. The best space upper bound is doubly exponential [29]. Despite the above successes, the record for proving lower bounds on problems of smaller complexity is appalling. In fact, there is no nonlinear time lower bound known on a general-purpose computation model for any natural problem in NP (see Section 4.4), in particular, for any of the 300 problems listed in [31]. Of course, one can prove by diagonal arguments the existence of problems in NP requiring time n^k for any fixed k. In the case of space lower bounds, however, we do not even know how to prove the existence of NP problems not solvable in space O(log n) on an off-line Turing machine (see Section 4.3). This is despite the fact that the best known space upper bounds in many natural cases are essentially linear in n.
4.2 Structured Lower Bounds
Although we have had little success in proving interesting lower bounds for concrete problems on general computer models, we do have interesting results for 'structured' models. The term 'structured' was introduced by Borodin [9] to refer to computers restricted to certain operations appropriate to the problem at hand. A simple example of this is the problem of sorting n numbers. One can prove (see [44]) without much difficulty that this requires at least n log n comparisons, provided that the only operation the computer is allowed to do with the inputs is to compare them in pairs. This lower bound says nothing
about Turing machines or Boolean circuits, but it has been extended to unit cost random access machines, provided division is disallowed. A second and very elegant structured lower bound, due to Strassen [82] (1973), states that polynomial interpolation, that is, finding the coefficients of the polynomial of degree n-1 that passes through n given points, requires Ω(n log n) multiplications, provided only arithmetic operations are allowed. Part of the interest here is that Strassen's original proof depends on Bézout's theorem, a deep result in algebraic geometry. Very recently, Baur and Strassen [83] have extended the lower bound to show that even the middle coefficient of the interpolating polynomial through n points requires Ω(n log n) multiplications to compute. Part of the appeal of all of these structured results is that the lower bounds are close to the best known upper bounds (see Borodin and Munro [12] for upper bounds for interpolation), and the best known algorithms can be implemented on the structured models to which the lower bounds apply. (Note that radix sort, which is sometimes said to be linear time, really requires at least n log n steps, if one assumes the input numbers have enough digits so that they all can be distinct.)
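The n log n comparison bound cited above follows from a counting argument, which can be stated in a few lines (a standard sketch, not taken verbatim from [44]): a comparison tree that sorts n distinct keys must have at least n! leaves, one for each possible input ordering, so its depth d -- the worst-case number of comparisons -- satisfies

```latex
2^{d} \ge n!
\quad\Longrightarrow\quad
d \;\ge\; \log_2 n! \;=\; \sum_{k=1}^{n} \log_2 k
\;\ge\; \frac{n}{2}\,\log_2\frac{n}{2} \;=\; \Omega(n \log n).
```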
4.3 Time-Space Product Lower Bounds
Another way around the impasse of proving time and space lower bounds is to prove time lower bounds under the assumption of small space. Cobham [16] proved the first such result in 1966, when he showed that the time-space product for recognizing n-digit perfect squares on an 'off-line' Turing machine must be Ω(n^2). (The same is true of n-symbol palindromes.) Here the input is written on a two-way read-only input tape, and the space used is by definition the number of squares scanned by the work tapes available to the Turing machine. Thus, if, for example, the space is restricted to O(log^3 n) (which is more than sufficient), then the time must be Ω(n^2/log^3 n) steps. The weakness in Cobham's result is that although the off-line Turing machine is a reasonable one for measuring computation time and space separately, it is too restrictive when time and space are considered together. For example, the palindromes can obviously be recognized in 2n steps and constant space if two heads are allowed to scan the input tape simultaneously. Borodin and I [10] partially rectified the weakness when we proved that sorting n integers in the range one to n^2 requires a time-space product of Ω(n^2/log n). The proof applies to any 'general sequential machine,' which includes off-line Turing machines with many input heads, or even random access to the input tape. It is unfortunately crucial to our proof that sorting requires many output bits, and it remains an interesting open question whether a similar lower bound can be made to apply to a set recognition problem, such as recognizing whether all n input numbers are distinct. (Our lower bound on sorting has recently been slightly improved in [64].)
4.4 NP-Completeness
The theory of NP-completeness is surely the most significant development in computational complexity. I will not dwell on it here because it is now well known and is the subject of textbooks. In particular, the book by Garey and Johnson [31] is an excellent place to read about it. The class NP consists of all sets recognizable in polynomial time by a nondeterministic Turing machine. As far as I know, the first time a mathematically equivalent class was defined was by James Bennett in his 1962 Ph.D. thesis [4]. Bennett used the name 'extended positive rudimentary relations' for his class, and his definition used logical quantifiers instead of computing machines. I read this part of his thesis and realized his class could be characterized as the now familiar definition of NP. I used the term ℒ⁺ (after Cobham's class ℒ) in my 1971 paper [18], and Karp gave the now accepted name NP to the class in his 1972 paper [42]. Meanwhile, quite independent of the formal development, Edmonds, back in 1965 [28], talked informally about problems with a 'good characterization,' a notion essentially equivalent to NP. In 1971 [18], I introduced the notion of NP-complete and proved 3-satisfiability and the subgraph problem were NP-complete. A year later, Karp [42] proved 21 problems were NP-complete, thus forcefully demonstrating the importance of the subject. Independently of this and slightly later, Leonid Levin [49], in the Soviet Union (now at Boston University), defined a similar (and stronger) notion and proved six problems were complete in his sense. The informal notion of 'search problem' was standard in the Soviet literature, and Levin called his problems 'universal search problems.' The class NP includes an enormous number of practical problems that occur in business and industry (see [31]). A proof that an NP problem is NP-complete is a proof that the problem is not in P (does not have a deterministic polynomial time algorithm) unless every NP problem is in P. Since the latter condition would revolutionize computer science, the practical effect of NP-completeness is a lower bound. This is why I have included this subject in the section on lower bounds.

4.5 #P-Completeness
The notion of NP-completeness applies to sets, and a proof that a set is NP-complete is usually interpreted as a proof that it is intractable. There are, however, a large number of apparently intractable functions for which no NP-completeness proof seems to be relevant. Leslie Valiant [86, 87] defined the notion of #P-completeness to help remedy this
situation. Proving that a function is #P-complete shows that it is apparently intractable to compute in the same way that proving a set
is NP-complete shows that it is apparently intractable to recognize; namely, if a #P-complete function is computable in polynomial time, then P = NP. Valiant gave many examples of #P-complete functions, but probably the most interesting one is the permanent of an integer matrix. The permanent has a definition formally similar to the determinant, but whereas the determinant is easy to compute by Gaussian elimination, the many attempts over the past hundred odd years to find a feasible way to compute the permanent have all failed. Valiant gave the first convincing reason for this failure when he proved the permanent #P-complete.
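To make the contrast with the determinant concrete, here is a naive Python sketch of the permanent: the expansion is the same as the determinant's except that every term is taken with a plus sign, and the obvious evaluation below runs through all n! permutations. This is an illustration of the definition, not a serious algorithm.

```python
# The permanent of a matrix has the same expansion as the determinant but
# with every sign taken as +1.  The naive evaluation below sums over all
# n! permutations and is therefore exponential; Valiant's #P-completeness
# result explains why nothing fundamentally better is expected.

from itertools import permutations
from math import prod

def permanent(matrix):
    """Naive O(n! * n) evaluation of the permanent."""
    n = len(matrix)
    return sum(prod(matrix[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

# permanent([[1, 2], [3, 4]]) = 1*4 + 2*3 = 10, whereas det = 1*4 - 2*3 = -2.
print(permanent([[1, 2], [3, 4]]))   # 10
```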
5 Probabilistic Algorithms
The use of random numbers to simulate or approximate random processes is very natural and is well established in computing practice. However, the idea that random inputs might be very useful in solving deterministic combinatorial problems has been much slower in penetrating the computer science community. Here I will restrict attention to probabilistic (coin tossing) polynomial time algorithms that 'solve' (in a reasonable sense) a problem for which no deterministic polynomial time algorithm is known. The first such algorithm seems to be the one by Berlekamp [5] in 1970, for factoring a polynomial f over the field GF(p) of p elements. Berlekamp's algorithm runs in time polynomial in the degree of f and log p, and with probability at least one-half it finds a correct prime factorization of f; otherwise it ends in failure. Since the algorithm can be repeated any number of times and the failure events are all independent, the algorithm in practice always factors in a feasible amount of time. A more drastic example is the algorithm for prime recognition due to Solovay and Strassen [77] (submitted in 1974). This algorithm runs in time polynomial in the length of the input m, and outputs either 'prime' or 'composite.' If m is in fact prime, then the output is certainly 'prime,' but if m is composite, then with probability at most one-half the answer may also be 'prime.' The algorithm may be repeated any number of times on an input m with independent results. Thus if the answer is ever 'composite,' the user knows m is composite; if the answer is consistently 'prime' after, say, 100 runs, then the user has good evidence that m is prime, since any fixed composite m would give such results with tiny probability (less than 2^-100). Rabin [61] developed a different probabilistic algorithm with properties similar to the one above, and found it to be very fast on computer
trials. The number 2^400 - 593 was identified as (probably) prime within a few minutes. One interesting application of probabilistic prime testers was proposed by Rivest, Shamir, and Adleman [67a] in their landmark paper on public key cryptosystems in 1978. Their system requires the generation of large (100 digit) random primes. They proposed testing random 100 digit numbers using the Solovay-Strassen method until one was found that was probably prime in the sense outlined above. Actually with the new high-powered deterministic prime tester of Cohen and Lenstra [17] mentioned in Section 3, once a random 100 digit 'probably prime' number was found it could be tested for certain in about 45 seconds, if it is important to know for certain. The class of sets with polynomial time probabilistic recognition algorithms in the sense of Solovay and Strassen is known as R (or sometimes RP) in the literature. Thus a set is in R if and only if it has a probabilistic recognition algorithm that always halts in polynomial time and never makes a mistake for inputs not in R, and for each input in R it outputs the right answer for each run with probability at least one-half. Hence the set of composite numbers is in R, and in general P ⊆ R ⊆ NP. There are other interesting examples of sets in R not known to be in P. For example Schwartz [71] shows that the set of nonsingular matrices whose entries are polynomials in many variables is in R. The algorithm evaluates the polynomials at random small integer values and computes the determinant of the result. (The determinant apparently cannot feasibly be computed directly because the polynomials computed would have exponentially many terms in general.) It is an intriguing open question whether R = P. It is tempting to conjecture yes on the philosophical grounds that random coin tosses should not be of much use when the answer being sought is a well-defined yes or no. A related question is whether a probabilistic algorithm (showing a problem is in R) is for all practical purposes as good as a deterministic algorithm. After all, the probabilistic algorithms can be run using the pseudorandom number generators available on most computers, and an error probability of 2^-100 is negligible. The catch is that pseudorandom number generators do not produce truly random numbers, and nobody knows how well they will work for a given probabilistic algorithm. In fact, experience shows they seem to work well. But if they always work well, then it follows that R = P, because pseudorandom numbers are generated deterministically so true randomness would not help after all. Another possibility is to use a physical process such as thermal noise to generate random numbers. But it is an open question in the philosophy of science how truly random nature can be. Let me close this section by mentioning an interesting theorem of Adleman [1] on the class R. It is easy to see [57b] that if a set is in P, then for each n there is a Boolean circuit of size bounded by a fixed polynomial in n which determines whether an arbitrary string of length
n is in the set. What Adleman proved is that the same is true for the class R. Thus, for example, for each n there is a small 'computer circuit' that correctly and rapidly tests whether n digit numbers are prime. The catch is that the circuits are not uniform in n, and in fact for the case of 100 digits it may not be feasible to figure out how to build the circuit. (For more theory on probabilistic computation, see Gill [32].)
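To make the repeated-trials argument concrete, here is a minimal Python sketch of a Solovay-Strassen-style primality test. It is not the code of [77]; the Jacobi-symbol routine, the helper names, and the default of 100 rounds are standard textbook details supplied here for illustration. Each independent round has one-sided error of at most one-half on composite inputs, so k rounds drive the chance of a false 'prime' answer below 2^-k.

```python
import random

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, computed via quadratic reciprocity."""
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:          # pull out factors of 2
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                # reciprocity step
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def solovay_strassen(m, rounds=100):
    """Return 'composite' (always correct) or 'prime' (wrong with prob. <= 2**-rounds)."""
    if m in (2, 3):
        return 'prime'
    if m < 2 or m % 2 == 0:
        return 'composite'
    for _ in range(rounds):
        a = random.randrange(2, m)
        x = jacobi(a, m)
        # Euler's criterion a^((m-1)/2) = (a/m) (mod m) holds for every a when m is prime.
        if x == 0 or pow(a, (m - 1) // 2, m) != x % m:
            return 'composite'     # a is a witness; m is certainly composite
    return 'prime'                 # probably prime
```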
6 Synchronous Parallel Computation

With the advent of VLSI technology, in which one or more processors can be placed on a quarter-inch chip, it is natural to think of a future composed of many thousands of such processors working together in parallel to solve a single problem. Although no very large general-purpose machine of this kind has been built yet, there are such projects under way (see Schwartz [72]). This motivates the recent development of a very pleasing branch of computational complexity: the theory of large-scale synchronous parallel computation, in which the number of processors is a resource bounded by a parameter H(n) (H is for hardware) in the same way that space is bounded by a parameter S(n) in sequential complexity theory. Typically H(n) is a fixed polynomial in n. Quite a number of parallel computation models have been proposed (see [21] for a review), just as there are many competing sequential models (see Section 2). There are two main contenders, however. The first is the class of shared memory models in which a large number of processors communicate via a random access memory that they hold in common. Many parallel algorithms have been published for such models, since real parallel machines may well be like this when they are built. However, for the mathematical theory these models are not very satisfactory because too much of their detailed specification is arbitrary: How are read and write conflicts in the common memory resolved? What basic operations are allowed for each processor? Should one charge log H(n) time units to access common memory? Hence I prefer the cleaner model discussed by Borodin [8] (1977), in which a parallel computer is a uniform family (B_n) of acyclic Boolean circuits, such that B_n has n inputs (and hence takes care of those input strings of length n). Then H(n) (the amount of hardware) is simply the number of gates in B_n, and T(n) (the parallel computation time) is the depth of the circuit B_n (i.e., the length of the longest path from an input to an output). This model has the practical justification that presumably all real machines (including shared memory machines) are built from Boolean circuits. Furthermore, the minimum Boolean size and depth needed to compute a function is a natural mathematical problem and was considered well before the theory of parallel computation was around.
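As a concrete illustration of these two measures, the sketch below represents one circuit B_n as a Python dictionary mapping each gate to its operation and inputs. The particular three-gate circuit and the gate names are invented for illustration; only the definitions of H (gate count) and T (depth, the longest input-to-output path) come from the model described above.

```python
from functools import lru_cache

# A hypothetical acyclic circuit on inputs x1, x2, x3: gate -> (operation, inputs).
CIRCUIT = {
    "g1": ("AND", ["x1", "x2"]),
    "g2": ("OR",  ["x2", "x3"]),
    "g3": ("AND", ["g1", "g2"]),   # output gate
}

def hardware(circuit):
    """H: the amount of hardware, i.e., the number of gates."""
    return len(circuit)

def parallel_time(circuit, output):
    """T: the depth of the circuit, i.e., the longest path from an input to the output."""
    @lru_cache(maxsize=None)
    def depth(node):
        if node not in circuit:    # a primary input has depth 0
            return 0
        _, inputs = circuit[node]
        return 1 + max(depth(i) for i in inputs)
    return depth(output)

print(hardware(CIRCUIT), parallel_time(CIRCUIT, "g3"))   # 3 gates, depth 2
```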
Fortunately for the theory, the minimum values of hardware H(n) and parallel time T(n) are not widely different for the various competing parallel computer models. In particular, there is an interesting general fact true for all the models, first proved for a particular model by Pratt and Stockmeyer [58] in 1974 and called the 'parallel computation thesis' in [33]; namely, a problem can be solved in time polynomial in T(n) by a parallel machine (with unlimited hardware) if and only if it can be solved in space polynomial in T(n) by a sequential machine (with unlimited time). A basic question in parallel computation is: Which problems can be solved substantially faster using many processors rather than one processor? Nicholas Pippenger [57a] formalized this question by defining the class (now called NC, for 'Nick's class') of problems solvable ultra fast [time T(n) = (log n)^O(1)] on a parallel computer with a feasible [H(n) = n^O(1)] amount of hardware. Fortunately, the class NC remains the same, independent of the particular parallel computer model chosen, and it is easy to see that NC is a subset of the class FP of functions computable sequentially in polynomial time. Our informal question can then be formalized as follows: Which problems in FP are also in NC? It is conceivable (though unlikely) that NC = FP, since to prove NC ≠ FP would require a breakthrough in complexity theory (see the end of Section 4.1). Since we do not know how to prove that a function f in FP is not in NC, the next best thing is to prove that f is log space-complete for FP. This is the analog of proving a problem is NP-complete, and has the practical effect of discouraging efforts for finding super fast parallel algorithms for f. This is because if f is log space-complete for FP and f is in NC, then FP = NC, which would be a big surprise. Quite a bit of progress has been made in classifying problems in FP as to whether they are in NC or log space-complete for FP (of course, they may be neither). The first example of a problem complete for P was presented in 1973 by me in [20], although I did not state the result as a completeness result. Shortly after that, Jones and Laaser [38] defined this notion of completeness and gave about five examples, including the emptiness problem for context-free grammars. Probably the simplest problem proved complete for FP is the so-called circuit value problem [47]: given a Boolean circuit together with values for its inputs, find the value of the output. The example most interesting to me, due to Goldschlager, Shaw, and Staples [34], is finding the (parity of the) maximum flow through a given network with (large) positive integer capacities on its edges. The interest comes from the subtlety of the completeness proof. Finally, I should mention that linear programming is complete for FP. In this case the difficult part is showing that the problem is in P (see [43]), after which the completeness proof [26] is straightforward. Among the problems known to be in NC are the four arithmetic operations (+, -, *, ÷) on binary numbers, sorting, graph connectivity,
matrix operations (multiplication, inverse, determinant, rank), polynomial greatest common divisors, context-free languages, and finding a minimum spanning forest in a graph (see [11], [21], [63], [67b]). The size of a maximum matching for a given graph is known [11] to be in 'random' NC (NC in which coin tosses are allowed), although it is an interesting open question whether finding an actual maximum matching is even in random NC. Results in [89] and [67b] provide general methods for showing problems are in NC. The most interesting problem in FP not known either to be complete for FP or in (random) NC is finding the greatest common divisor of two integers. There are many other interesting problems that have yet to be classified, including finding a maximum matching or a maximal clique in a graph (see [88]).
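For contrast, here is the ordinary sequential Euclidean algorithm for the integer gcd problem just mentioned. It is a routine textbook sketch, included only to highlight why the problem resists parallelization: each remainder step depends on the result of the previous one, and no (log n)^O(1)-depth method is known.

```python
def gcd(a, b):
    """Euclid's algorithm: polynomial time sequentially, but inherently
    step-by-step; whether integer gcd is in NC remains open."""
    while b:
        a, b = b, a % b
    return abs(a)

print(gcd(1071, 462))   # 21
```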
7 The Future

Let me say again that the field of computational complexity is large and this overview is brief. There are large parts of the subject that I have left out altogether or barely touched on. My apologies to the researchers in those parts. One relatively new and exciting part, called 'computational information theory' by Yao [92], builds on Shannon's classical information theory by considering information that can be accessed through a feasible computation. This subject was sparked largely by the papers by Diffie and Hellman [25] and Rivest, Shamir, and Adleman [67a] on public key cryptosystems, although its computational roots go back to Kolmogorov [45] and Chaitin [14a], [14b], who first gave meaning to the notion of a single finite sequence being 'random,' by using the theory of computation. An interesting idea in this theory, considered by Shamir [73] and Blum and Micali [7], concerns generating pseudorandom sequences in which future bits are provably hard to predict in terms of past bits. Yao [92] proves that the existence of such sequences would have positive implications about the deterministic complexity of the probabilistic class R (see Section 5). In fact, computational information theory promises to shed light on the role of randomness in computation. In addition to computational information theory, we can expect interesting new results on probabilistic algorithms, parallel computation, and (with any luck) lower bounds. Concerning lower bounds, the one breakthrough for which I see some hope in the near future is showing that not every problem in P is solvable in space O(log n), and perhaps also P ≠ NC. In any case, the field of computational complexity remains very vigorous, and I look forward to seeing what the future will bring.
Acknowledgments

I am grateful to my complexity colleagues at Toronto for many helpful comments and suggestions, especially Allan Borodin, Joachim von zur Gathen, Silvio Micali, and Charles Rackoff.
References

1. Adleman, L. Two theorems on random polynomial time. Proc. 19th IEEE Symp. on Foundations of Computer Science. IEEE Computer Society, Los Angeles (1978), 75-83.
2. Adleman, L., Pomerance, C., and Rumely, R. S. On distinguishing prime numbers from composite numbers. Annals of Math. 117 (January 1983), 173-206.
3. Aho, A. V., Hopcroft, J. E., and Ullman, J. D. The Design and Analysis of Computer Algorithms. Addison-Wesley, Reading, Mass., 1974.
4. Bennett, J. H. On Spectra. Doctoral dissertation, Department of Mathematics, Princeton University, 1962.
5. Berlekamp, E. R. Factoring polynomials over large finite fields. Math. Comp. 24 (1970), 713-735.
6. Blum, M. A machine independent theory of the complexity of recursive functions. JACM 14, 2 (April 1967), 322-336.
7. Blum, M., and Micali, S. How to generate cryptographically strong sequences of pseudo random bits. Proc. 23rd IEEE Symp. on Foundations of Computer Science. IEEE Computer Society, Los Angeles (1982), 112-117.
8. Borodin, A. On relating time and space to size and depth. SIAM J. Comp. 6 (1977), 733-744.
9. Borodin, A. Structured vs. general models in computational complexity. In Logic and Algorithmic, Monographie no. 30 de L'Enseignement Mathématique, Université de Genève, 1982.
10. Borodin, A., and Cook, S. A time-space tradeoff for sorting on a general sequential model of computation. SIAM J. Comput. 11 (1982), 287-297.
11. Borodin, A., von zur Gathen, J., and Hopcroft, J. Fast parallel matrix and GCD computations. 23rd IEEE Symp. on Foundations of Computer Science. IEEE Computer Society, Los Angeles (1982), 65-71.
12. Borodin, A., and Munro, I. The Computational Complexity of Algebraic and Numeric Problems. Elsevier, New York, 1975.
13. Brockett, R. W., and Dobkin, D. On the optimal evaluation of a set of bilinear forms. Linear Algebra and Its Applications 19 (1978), 207-235.
14a. Chaitin, G. J. On the length of programs for computing finite binary sequences. JACM 13, 4 (October 1966), 547-569; JACM 16, 1 (January 1969), 145-159.
14b. Chaitin, G. J. A theory of program size formally identical to information theory. JACM 22, 3 (July 1975), 329-340.
15. Cobham, A. The intrinsic computational difficulty of functions. Proc. 1964 International Congress for Logic, Methodology, and Philosophy of Science. Y. Bar-Hillel, Ed., North Holland, Amsterdam, 1965, 24-30.
16. Cobham, A. The recognition problem for the set of perfect squares. IEEE Conference Record Seventh SWAT (1966), 78-87.
17. Cohen, H., and Lenstra, H. W., Jr. Primality testing and Jacobi sums. Report 82-18, University of Amsterdam, Dept. of Math., 1982.
18. Cook, S. A. The complexity of theorem proving procedures. Proc. 3rd ACM Symp. on Theory of Computing. Shaker Heights, Ohio (May 3-5, 1971), 151-158.
19. Cook, S. A. Linear time simulation of deterministic two-way pushdown automata. Proc. IFIP Congress 71 (Theoretical Foundations). North Holland, Amsterdam, 1972, 75-80.
20. Cook, S. A. An observation on time-storage tradeoff. JCSS 9 (1974), 308-316. Originally in Proc. 5th ACM Symp. on Theory of Computing, Austin, TX (April 30-May 2, 1973), 29-33.
21. Cook, S. A. Towards a complexity theory of synchronous parallel computation. L'Enseignement Mathématique XXVII (1981), 99-124.
22. Cook, S. A., and Aanderaa, S. O. On the minimum computation time of functions. Trans. AMS 142 (1969), 291-314.
23. Cooley, J. M., and Tukey, J. W. An algorithm for the machine calculation of complex Fourier series. Math. Comput. 19 (1965), 297-301.
24. Coppersmith, D., and Winograd, S. On the asymptotic complexity of matrix multiplication. SIAM J. Comp. 11 (1982), 472-492.
25. Diffie, W., and Hellman, M. E. New directions in cryptography. IEEE Trans. on Inform. Theory IT-22, 6 (1976), 644-654.
26. Dobkin, D., Lipton, R. J., and Reiss, S. Linear programming is log-space hard for P. Inf. Processing Letters 8 (1979), 96-97.
27. Edmonds, J. Paths, trees, and flowers. Canad. J. Math. 17 (1965), 449-467.
28. Edmonds, J. Minimum partition of a matroid into independent subsets. J. Res. Nat. Bur. Standards Sect. B, 69 (1965), 67-72.
29. Ferrante, J., and Rackoff, C. W. The Computational Complexity of Logical Theories. Lecture Notes in Mathematics #718, Springer Verlag, New York, 1979.
30. Fischer, M. J., and Rabin, M. O. Super-exponential complexity of Presburger arithmetic. In Complexity of Computation. SIAM-AMS Proc. 7, R. Karp, Ed., 1974, 27-42.
31. Garey, M. R., and Johnson, D. S. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman, San Francisco, 1979.
32. Gill, J. Computational complexity of probabilistic Turing machines. SIAM J. Comput. 6 (1977), 675-695.
33. Goldschlager, L. M. Synchronous Parallel Computation. Doctoral dissertation, Dept. of Computer Science, Univ. of Toronto, 1977. See also JACM 29, 4 (October 1982), 1073-1086.
34. Goldschlager, L. M., Shaw, R. A., and Staples, J. The maximum flow problem is log space complete for P. Theoretical Computer Science 21 (1982), 105-111.
35. Grzegorczyk, A. Some classes of recursive functions. Rozprawy Matematyczne, 1953.
36. Hartmanis, J. Observations about the development of theoretical computer science. Annals Hist. Comput. 3, 1 (Jan. 1981), 42-51.
37. Hartmanis, J., and Stearns, R. E. On the computational complexity of algorithms. Trans. AMS 117 (1965), 285-306.
38. Jones, N. D., and Laaser, W. T. Complete problems for deterministic polynomial time. Theoretical Computer Science 3 (1977), 105-117.
39. Kaltofen, E. A polynomial reduction from multivariate to bivariate integer polynomial factorization. Proc. 14th ACM Symp. on Theory of Computing, San Francisco, CA (May 5-7, 1982), 261-266.
40. Kaltofen, E. A polynomial-time reduction from bivariate to univariate integral polynomial factorization. Proc. 23rd IEEE Symp. on Foundations of Computer Science. IEEE Computer Society, Los Angeles (1982), 57-64.
41. Karatsuba, A., and Ofman, Yu. Multiplication of multidigit numbers on automata. Doklady Akad. Nauk 145, 2 (1962), 293-294. Translated in Soviet Phys. Doklady 7 (1963), 595-596.
42. Karp, R. M. Reducibility among combinatorial problems. In Complexity of Computer Computations. R. E. Miller and J. W. Thatcher, Eds., Plenum Press, New York, 1972, 85-104.
43. Khachian, L. G. A polynomial time algorithm for linear programming. Doklady Akad. Nauk SSSR 244, 5 (1979), 1093-1096. Translated in Soviet Math. Doklady 20, 191-194.
44. Knuth, D. E. The Art of Computer Programming, vol. 3: Sorting and Searching. Addison-Wesley, Reading, MA, 1973.
45. Kolmogorov, A. N. Three approaches to the concept of the amount of information. Probl. Pered. Inf. (Probl. of Inf. Transm.) 1 (1965).
46. Kolmogorov, A. N., and Uspenski, V. A. On the definition of an algorithm. Uspehi Mat. Nauk 13 (1958), 3-28; AMS Transl. 2nd ser. 29 (1963), 217-245.
47. Ladner, R. E. The circuit value problem is log space complete for P. SIGACT News 7, 1 (1975), 18-20.
48. Lenstra, A. K., Lenstra, H. W., and Lovász, L. Factoring polynomials with rational coefficients. Report 82-05, University of Amsterdam, Dept. of Math., 1982.
49. Levin, L. A. Universal search problems. Problemy Peredaci Informacii 9 (1973), 115-116. Translated in Problems of Information Transmission 9, 265-266.
50. Luks, E. M. Isomorphism of graphs of bounded valence can be tested in polynomial time. Proc. 21st IEEE Symp. on Foundations of Computer Science. IEEE Computer Society, Los Angeles (1980), 42-49.
51. Meyer, A. R. Weak monadic second-order theory of successor is not elementary-recursive. Lecture Notes in Mathematics 453. Springer Verlag, New York, 1975, 132-154.
52. Meyer, A. R., and Stockmeyer, L. J. The equivalence problem for regular expressions with squaring requires exponential space. Proc. 13th IEEE Symp. on Switching and Automata Theory (1972), 125-129.
53. Miller, G. L. Riemann's hypothesis and tests for primality. J. Comput. System Sci. 13 (1976), 300-317.
54. Oppen, D. C. A 2^2^2^pn upper bound on the complexity of Presburger arithmetic. J. Comput. Syst. Sci. 16 (1978), 323-332.
55. Papadimitriou, C. H., and Steiglitz, K. Combinatorial Optimization: Algorithms and Complexity. Prentice-Hall, Englewood Cliffs, NJ, 1982.
56. Paterson, M. S., Fischer, M. J., and Meyer, A. R. An improved overlap argument for on-line multiplication. SIAM-AMS Proc. 7, Amer. Math. Soc., Providence, 1974, 97-111.
57a. Pippenger, N. On simultaneous resource bounds (preliminary version). Proc. 20th IEEE Symp. on Foundations of Computer Science. IEEE Computer Society, Los Angeles (1979), 307-311.
57b. Pippenger, N. J., and Fischer, M. J. Relations among complexity measures. JACM 26, 2 (April 1979), 361-381.
58. Pratt, V. R., and Stockmeyer, L. J. A characterization of the power of vector machines. J. Comput. System Sci. 12 (1976), 198-221. Originally in Proc. 6th ACM Symp. on Theory of Computing, Seattle, WA (April 30-May 2, 1974), 122-134.
59. Rabin, M. O. Speed of computation and classification of recursive sets. Third Convention Sci. Soc., Israel, 1959, 1-2.
60. Rabin, M. O. Degree of difficulty of computing a function and a partial ordering of recursive sets. Tech. Rep. No. 1, O.N.R., Jerusalem, 1960.
61. Rabin, M. O. Probabilistic algorithms. In Algorithms and Complexity, New Directions and Recent Trends, J. F. Traub, Ed., Academic Press, New York, 1976, 29-39.
62. Rabin, M. O. Complexity of computations. Comm. ACM 20, 9 (September 1977), 625-633.
63. Reif, J. H. Symmetric complementation. Proc. 14th ACM Symp. on Theory of Computing, San Francisco, CA (May 5-7, 1982), 201-214.
64. Reisch, S., and Schnitger, G. Three applications of Kolmogorov complexity. Proc. 23rd IEEE Symp. on Foundations of Computer Science. IEEE Computer Society, Los Angeles (1982), 45-52.
65. Ritchie, R. W. Classes of Recursive Functions of Predictable Complexity. Doctoral dissertation, Princeton University, 1960.
66. Ritchie, R. W. Classes of predictably computable functions. Trans. AMS 106 (1963), 139-173.
67a. Rivest, R. L., Shamir, A., and Adleman, L. A method for obtaining digital signatures and public-key cryptosystems. Comm. ACM 21, 2 (February 1978), 120-126.
67b. Ruzzo, W. L. On uniform circuit complexity. J. Comput. System Sci. 22 (1981), 365-383.
68a. Savage, J. E. The Complexity of Computing. Wiley, New York, 1976.
68b. Schnorr, C. P. The network complexity and the Turing machine complexity of finite functions. Acta Informatica 7 (1976), 95-107.
69. Schönhage, A. Storage modification machines. SIAM J. Comp. 9 (1980), 490-508.
70. Schönhage, A., and Strassen, V. Schnelle Multiplikation grosser Zahlen. Computing 7 (1971), 281-292.
71. Schwartz, J. T. Probabilistic algorithms for verification of polynomial identities. JACM 27, 4 (October 1980), 701-717.
72. Schwartz, J. T. Ultracomputers. ACM Trans. on Prog. Languages and Systems 2, 4 (October 1980), 484-521.
73. Shamir, A. On the generation of cryptographically strong pseudo random sequences. 8th Int. Colloquium on Automata, Languages, and Programming (July 1981). Lecture Notes in Computer Science 115. Springer Verlag, New York, 544-550.
74. Shannon, C. E. The synthesis of two terminal switching circuits. BSTJ 28 (1949), 59-98.
75. Smale, S. On the average speed of the simplex method of linear programming. Preprint, 1982.
76. Smale, S. The problem of the average speed of the simplex method. Preprint, 1982.
77. Solovay, R., and Strassen, V. A fast Monte-Carlo test for primality. SIAM J. Comput. 6 (1977), 84-85.
78. Stearns, R. E., Hartmanis, J., and Lewis, P. M. II. Hierarchies of memory limited computations. 6th IEEE Symp. on Switching Circuit Theory and Logical Design (1965), 179-190.
79. Stockmeyer, L. J. The complexity of decision problems in automata theory and logic. Doctoral Thesis, Dept. of Electrical Eng., MIT, Cambridge, MA, 1974; Report TR-133, MIT, Laboratory for Computing Science.
80. Stockmeyer, L. J. Classifying the computational complexity of problems. Research Report RC 7606 (1979), Math. Sciences Dept., IBM T.J. Watson Research Center, Yorktown Heights, N.Y.
81. Strassen, V. Gaussian elimination is not optimal. Num. Math. 13 (1969), 354-356.
82. Strassen, V. Die Berechnungskomplexität von elementarsymmetrischen Funktionen und von Interpolationskoeffizienten. Numer. Math. 20 (1973), 238-251.
83. Baur, W., and Strassen, V. The complexity of partial derivatives. Preprint, 1982.
84. Toom, A. L. The complexity of a scheme of functional elements realizing the multiplication of integers. Doklady Akad. Nauk SSSR 150, 3 (1963), 496-498. Translated in Soviet Math. Doklady 3 (1963), 714-716.
85. Turing, A. M. On computable numbers, with an application to the Entscheidungsproblem. Proc. London Math. Soc. ser. 2, 42 (1936-7), 230-265; A correction, ibid. 43 (1937), 544-546.
86. Valiant, L. G. The complexity of enumeration and reliability problems. SIAM J. Comput. 8 (1979), 410-421.
87. Valiant, L. G. The complexity of computing the permanent. Theoretical Computer Science 8 (1979), 189-202.
88. Valiant, L. G. Parallel computation. Proc. 7th IBM Japan Symp. Academic & Scientific Programs, IBM Japan, Tokyo (1982).
89. Valiant, L. G., Skyum, S., Berkowitz, S., and Rackoff, C. Fast parallel computation of polynomials using few processors. Preprint. (Preliminary version in Springer Lecture Notes in Computer Science 118 (1981), 132-139.)
90. von Neumann, J. A certain zero-sum two-person game equivalent to the optimal assignment problem. Contributions to the Theory of Games II. H. W. Kuhn and A. W. Tucker, Eds. Princeton Univ. Press, Princeton, NJ, 1953.
91. Yamada, H. Real time computation and recursive functions not real-time computable. IRE Transactions on Electronic Computers EC-11 (1962), 753-760.
92. Yao, A. C. Theory and applications of trapdoor functions (extended abstract). Proc. 23rd IEEE Symp. on Foundations of Computer Science. IEEE Computer Society, Los Angeles (1982), 80-91.
Categories and Subject Descriptors: F.1.2 [Computation by Abstract Devices]: Modes of Computation - parallelism; probabilistic computation; F.2.1 [Analysis of Algorithms and Problem Complexity]: Numerical Algorithms and Problems - computations on polynomials; G.3 [Mathematics of Computing]: Probability and Statistics - probabilistic algorithms
General Terms: Algorithms, Theory
Additional Key Words and Phrases: Fast Fourier transform, Monte Carlo algorithm, NP-completeness
Combinatorics, Complexity, and Randomness

RICHARD M. KARP

Richard M. Karp, of the University of California-Berkeley, received the 1985 ACM A. M. Turing Award for his fundamental contributions to complexity theory. Presented at the Association's Annual Conference in Denver, Colorado, in October, the award is ACM's highest honor in computer science research. By 1972, Karp had established a reputation as one of the world's leading computer science theorists, particularly with his seminal paper, 'Reducibility among Combinatorial Problems' (in Complexity of Computer Computations (Symposium Proceedings), Plenum, New York, 1972). Extending earlier work of Stephen Cook, he applied the concept of polynomial-time reducibility to show that most classical problems of combinatorial optimization are NP-complete and hence intractable unless P equals NP. This changed the way computer scientists looked at practical problems like routing (including the celebrated traveling salesman problem), packing, covering, matching, partitioning, and scheduling, and led to greater emphasis on approximate methods for solving these difficult problems. In later work, Karp pioneered the use of probabilistic analysis to validate the performance of such approximate methods. Karp is a professor in three Berkeley departments: Electrical Engineering and Computer Sciences, Mathematics, and Industrial Engineering and Operations Research. This year he is cochair of the year-long research
program in computational complexity at the Mathematical Sciences Research Institute, funded by the National Science Foundation. Born in Boston, Karp earned his Ph.D. in applied mathematics at Harvard University in 1959. He worked nine years as a computer scientist in IBM's research laboratory in Yorktown Heights, New York, and has held visiting faculty appointments at the University of Michigan, Columbia University, New York University, and the Polytechnic Institute of Brooklyn. He joined the Berkeley faculty in 1968 and held the Miller Professorship at Berkeley in 1980-1981. In 1980, he was elected to the National Academy of Sciences. The 1985 Turing Award winner presents his perspective on the development of the field that has come to be called theoretical computer science.

Author's present address: 571 Evans Hall, University of California, Berkeley, CA 94720.

This lecture is dedicated to the memory of my father, Abraham Louis Karp.
I am honored and pleased to be the recipient of this year's Turing Award. As satisfying as it is to receive such recognition, I find that my greatest satisfaction as a researcher has stemmed from doing the research itself, and from the friendships I have formed along the way. I would like to roam with you through my 25 years as a researcher in the field of combinatorial algorithms and computational complexity, and tell you about some of the concepts that have seemed important to me, and about some of the people who have inspired and influenced me.
Beginnings

My entry into the computer field was rather accidental. Having graduated from Harvard College in 1955 with a degree in mathematics, I was confronted with a decision as to what to do next. Working for a living had little appeal, so graduate school was the obvious choice. One possibility was to pursue a career in mathematics, but the field was then in the heyday of its emphasis on abstraction and generality, and the concrete and applicable mathematics that I enjoyed the most seemed to be out of fashion. And so, almost by default, I entered the Ph.D. program at the Harvard Computation Laboratory. Most of the topics that were to become the bread and butter of the computer science curriculum had not even been thought of then, and so I took an eclectic collection of courses: switching theory, numerical analysis, applied mathematics, probability and statistics, operations research, electronics, and mathematical linguistics. While the curriculum left much to be desired in depth and coherence, there was a very special spirit in the air; we knew that we were witnessing the birth of a new scientific discipline centered on the computer. I discovered that I found beauty and elegance in the structure of algorithms, and that I had a knack for the discrete mathematics that formed the basis for the study of computers and computation. In short, I had stumbled more or less by accident into a field that was very much to my liking.
Easy and Hard Combinatorial Problems

Ever since those early days, I have had a special interest in combinatorial search problems - problems that can be likened to jigsaw puzzles where one has to assemble the parts of a structure in a particular way. Such problems involve searching through a finite, but extremely large, structured set of possible solutions, patterns, or arrangements, in order to find one that satisfies a stated set of conditions. Some examples of such problems are the placement and interconnection of components on an integrated circuit chip, the scheduling of the National Football League, and the routing of a fleet of school buses. Within any one of these combinatorial puzzles lurks the possibility of a combinatorial explosion. Because of the vast, furiously growing number of possibilities that have to be searched through, a massive amount of computation may be encountered unless some subtlety is used in searching through the space of possible solutions. I'd like to begin the technical part of this talk by telling you about some of my first encounters with combinatorial explosions. My first defeat at the hands of this phenomenon came soon after I joined the IBM Yorktown Heights Research Center in 1959. I was assigned to a group headed by J. P. Roth, a distinguished algebraic topologist who had made notable contributions to switching theory. Our group's mission was to create a computer program for the automatic synthesis of switching circuits. The input to the program was a set of Boolean formulas specifying how the outputs of the circuit were to depend on the inputs; the program was supposed to generate a circuit to do the job using a minimum number of logic gates. Figure 1 shows a circuit for the majority function of three variables; the output is high whenever at least two of the three variables x, y, and z are high.
FIGURE 1. A circuit for the majority function.
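The circuit itself is not reproduced here, but one standard two-level realization of the majority function (three AND gates feeding an OR gate) can be written directly as a Boolean formula. This is merely an illustrative realization of my own choosing, not necessarily the minimum-gate circuit the program was searching for.

```python
def majority(x, y, z):
    """Output is high whenever at least two of the three inputs are high."""
    return (x and y) or (x and z) or (y and z)

assert majority(1, 1, 0) and majority(0, 1, 1) and not majority(1, 0, 0)
```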
The program we designed contained many elegant shortcuts and refinements, but its fundamental mechanism was simply to enumerate the possible circuits in order of increasing cost. The number of circuits that the program had to comb through grew at a furious rate as the
number of input variables increased, and as a consequence, we could never progress beyond the solution of toy problems. Today, our optimism in even trying an enumerative approach may seem utterly naive, but we are not the only ones to have fallen into this trap; much of the work on automatic theorem proving over the past two decades has begun with an initial surge of excitement as toy problems were successfully solved, followed by disillusionment as the full seriousness of the combinatorial explosion phenomenon became apparent. Around this same time, I began working on the traveling salesman problem with Michael Held of IBM. This problem takes its name from the situation of a salesman who wishes to visit all the cities in his territory, beginning and ending at his home city, while minimizing his total travel cost. In the special case where the cities are points in the plane and travel cost is equated with Euclidean distance, the problem is simply to find a polygon of minimum perimeter passing through all the cities (see Figure 2). A few years earlier, George Dantzig, Raymond Fulkerson, and Selmer Johnson at the Rand Corporation, using a mixture of manual and automatic computation, had succeeded in solving a 49-city problem, and we hoped to break their record.
FIGURE 2. A traveling salesman tour.
Despite its innocent appearance, the traveling salesman problem has the potential for a combinatorial explosion, since the number of possible tours through n cities in the plane is (n-1)!/2, a very rapidly growing function of n. For example, when the number of cities is only 20, the time required for a brute-force enumeration of all possible tours, at the rate of a million tours per second, would be more than a thousand years. Held and I tried a number of approaches to the traveling salesman problem. We began by rediscovering a shortcut based on dynamic programming that had originally been pointed out by Richard Bellman. The dynamic programming method reduced the search time to n^2 2^n,
but this function also blows up explosively, and the method is limited in practice to problems with at most 16 cities. For a while, we gave up on the idea of solving the problem exactly, and experimented with local search methods that tend to yield good, but not optimal, tours. With these methods, one starts with a tour and repeatedly looks for local changes that will improve it. The process continues until a tour is found that cannot be improved by any such local change. Our local improvement methods were rather clumsy, and much better ones were later found by Shen Lin and Brian Kernighan at Bell Labs. Such quick-and-dirty methods are often quite useful in practice if a strictly optimal solution is not required, but one can never guarantee how well they will perform. We then began to investigate branch-and-bound methods. Such methods are essentially enumerative in nature, but they gain efficiency by pruning away large parts of the space of possible solutions. This is done by computing a lower bound on the cost of every tour that includes certain links and fails to include certain others; if the lower bound is sufficiently large, it will follow that no such tour can be optimal. After a long series of unsuccessful experiments, Held and I stumbled upon a powerful method of computing lower bounds. This bounding technique allowed us to prune the search severely, so that we were able to solve problems with as many as 65 cities. I don't think any of my theoretical results have provided as great a thrill as the sight of the numbers pouring out of the computer on the night Held and I first tested our bounding method. Later we found out that our method was a variant of an old technique called Lagrangian relaxation, which is now used routinely for the construction of lower bounds within branch-and-bound methods. For a brief time, our program was the world champion traveling-salesman-problem solver, but nowadays much more impressive programs exist. They are based on a technique called polyhedral combinatorics, which attempts to convert instances of the traveling salesman problem to very large linear programming problems. Such methods can solve problems with over 300 cities, but the approach does not completely eliminate combinatorial explosions, since the time required to solve a problem continues to grow exponentially as a function of the number of cities. The traveling salesman problem has remained a fascinating enigma. A book of more than 400 pages has recently been published, covering much of what is known about this elusive problem. Later, we will discuss the theory of NP-completeness, which provides evidence that the traveling salesman problem is inherently intractable, so that no amount of clever algorithm design can ever completely defeat the potential for combinatorial explosions that lurks within this problem.
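For readers who want to see the dynamic programming shortcut in code, here is a minimal Python sketch of the Bellman-Held-Karp recurrence for the optimal tour cost. The function name, the representation, and the sample instance are my own; it returns only the cost (not the tour itself), and its roughly n^2 2^n running time makes it practical only for small instances, exactly as described above.

```python
from itertools import combinations

def held_karp(dist):
    """Optimal tour cost by dynamic programming over subsets (about n^2 * 2^n time).
    dist is an n x n matrix of intercity distances; city 0 is the fixed start."""
    n = len(dist)
    # best[(S, j)]: cheapest path from city 0 through all cities of S, ending at j in S.
    best = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in S:
                best[(S, j)] = min(best[(S - {j}, k)] + dist[k][j] for k in S - {j})
    everything = frozenset(range(1, n))
    return min(best[(everything, j)] + dist[j][0] for j in range(1, n))

# A small hypothetical 4-city instance with symmetric distances.
print(held_karp([[0, 2, 9, 10],
                 [2, 0, 6, 4],
                 [9, 6, 0, 3],
                 [10, 4, 3, 0]]))   # 18 (e.g., the tour 0-1-3-2-0)
```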
During the early 1960s, the IBM Research Laboratory at Yorktown Heights had a superb group of combinatorial mathematicians, and under their tutelage, I learned important techniques for solving certain combinatorial problems without running into combinatorial explosions. For example, I became familiar with Dantzig's famous simplex algorithm for linear programming. The linear programming problem is to find the point on a polyhedron in a high-dimensional space that is closest to a given external hyperplane (a polyhedron is the generalization of a polygon in two-dimensional space or an ordinary polyhedral body in three-dimensional space, and a hyperplane is the generalization of a line in the plane or a plane in three-dimensional space). The closest point to the hyperplane is always a corner point, or vertex, of the polyhedron (see Figure 3). In practice, the simplex method can be depended on to find the desired vertex very quickly.
FIGURE 3. The linear programming problem.
I also learned the beautiful network flow theory of Lester Ford and Fulkerson. This theory is concerned with the rate at which a commodity, such as oil, gas, electricity, or bits of information, can be pumped through a network in which each link has a capacity that limits the rate at which it can transmit the commodity. Many combinatorial problems that at first sight seem to have no relation to commodities flowing through networks can be recast as network flow problems, and the theory enables such problems to be solved elegantly and efficiently using no arithmetic operations except addition and subtraction. Let me illustrate this beautiful theory by sketching the so-called Hungarian algorithm for solving a combinatorial optimization problem known as the marriage problem. This problem concerns a society consisting of n men and n women. The problem is to pair up the men and women in a one-to-one fashion at minimum cost, where a given cost is imputed to each pairing. These costs are given by an n x n matrix, in which each row corresponds to one of the men and each column to one of the women. In general, each pairing of
the n men with the n women corresponds to a choice of n entries from the matrix, no two of which are in the same row or column; the cost of a pairing is the sum of the n entries that are chosen. The number of possible pairings is n!, a function that grows so rapidly that brute force enumeration will be of little avail. Figure 4a shows a 3 x 3 example in which we see that the cost of pairing man 3 with woman 2 is equal to 9, the entry in the third row and second column of the given matrix.
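As a side note, instances of this kind can be solved today with an off-the-shelf routine whose running time grows roughly as the cube of n. The sketch below uses SciPy's linear_sum_assignment on a hypothetical 3 x 3 cost matrix (the values are chosen to be consistent with the example described here, but they are reconstructed for illustration and are not copied from Figure 4); the Hungarian algorithm described after the figure reaches the same answer by hand.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical 3 x 3 cost matrix: rows are men, columns are women.
cost = np.array([[3, 4, 2],
                 [8, 9, 1],
                 [7, 9, 5]])

rows, cols = linear_sum_assignment(cost)          # minimum-cost complete pairing
print(list(zip(rows, cols)), cost[rows, cols].sum())
# pairs (0, 1), (1, 2), (2, 0) with total cost 4 + 1 + 7 = 12
```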
FIGURE 4. An instance of the marriage problem.

The key observation underlying the Hungarian algorithm is that the problem remains unchanged if the same constant is subtracted from all the entries in one particular row of the matrix. Using this freedom to alter the matrix, the algorithm tries to create a matrix in which all the entries are nonnegative, so that every complete pairing has a nonnegative total cost, and in which there exists a complete pairing whose entries are all zero. Such a pairing is clearly optimal for the cost matrix that has been created, and it is optimal for the original cost matrix as well. In our 3 x 3 example, the algorithm starts by subtracting the least entry in each row from all the entries in that row, thereby creating a matrix in which each row contains at least one zero (Figure 4b). Then, to create a zero in each column, the algorithm subtracts, from all entries in each column that does not already contain a zero, the least entry in that column (Figure 4c). In this example, all the zeros in the resulting matrix lie in the first row or the third column; since a complete pairing contains only one entry from each row or column, it is not yet possible to find a complete pairing consisting entirely of zero entries. To create such a pairing, it is necessary to create a zero in the lower left part of the matrix. In this case, the algorithm creates such a zero by subtracting 1 from the first and second columns and adding 1 to the first row (Figure 4d). In the resulting nonnegative matrix, the three circled entries give a complete pairing of cost zero, and this pairing is therefore optimal, both in the final matrix and in the original one. This algorithm is far subtler and more efficient than brute-force enumeration. The time required for it to solve the marriage problem grows only as the third power of n, the number of rows and columns of the matrix, and as a consequence, it is possible to solve examples with thousands of rows and columns. The generation of researchers who founded linear programming theory and network flow theory had a pragmatic attitude toward issues of computational complexity: An algorithm was considered efficient
if it ran fast enough in practice, and it was not especially important to prove it was fast in all possible cases. In 1967 I noticed that the standard algorithm for solving certain network flow problems had a theoretical flaw, which caused it to run very slowly on certain contrived examples. I found that it was not difficult to correct the flaw, and I gave a talk about my result to the combinatorics seminar at Princeton. The Princeton people informed me that Jack Edmonds, a researcher at the National Bureau of Standards, had presented very similar results at the same seminar during the previous week. As a result of this coincidence, Edmonds and I began to work together on the theoretical efficiency of network flow algorithms, and in due course, we produced a joint paper. But the main effect of our collaboration was to reinforce some ideas about computational complexity that I was already groping toward and that were to have a powerful influence on the future course of my research. Edmonds was a wonderful craftsman who had used ideas related to linear programming to develop amazing algorithms for a number of combinatorial problems. But, in addition to his skill at constructing algorithms, he was ahead of his contemporaries in another important respect: He had developed a clear and precise understanding of what it meant for an algorithm to be efficient. His papers expounded the point of view that an algorithm should be considered 'good' if its running time is bounded by a polynomial function of the size of the input, rather than, say, by an exponential function. For example, according to Edmonds's concept, the Hungarian algorithm for the marriage problem is a good algorithm because its running time grows as the third power of the size of the input. But as far as we know there may be no good algorithm for the traveling salesman problem, because all the algorithms we have tried experience an exponential growth in their running time as a function of problem size. Edmonds's definition gave us a clear idea of how to define the boundary between easy and hard combinatorial problems and opened up for the first time, at least in my thinking, the possibility that we might someday come up with a theorem that would prove or disprove the conjecture that the traveling salesman problem is inherently intractable.
The Road to NP-Completeness

Along with the developments in the field of combinatorial algorithms, a second major stream of research was gathering force during the 1960s: computational complexity theory. The foundations for this subject were laid in the 1930s by a group of logicians, including Alan Turing, who were concerned with the existence or nonexistence of automatic procedures for deciding whether mathematical statements were true or false.
Turing and the other pioneers of computability theory were the first to prove that certain well-defined mathematical problems were undecidable, that is, that in principle, there could not exist an algorithm capable of solving all instances of such problems. The first example of such a problem was the Halting Problem, which is essentially a question about the debugging of computer programs. The input to the Halting Problem is a computer program, together with its input data; the problem is to decide whether the program will eventually halt. How could there fail to be an algorithm for such a well-defined problem? The difficulty arises because of the possibility of unbounded search. The obvious solution is simply to run the program until it halts. But at what point does it become logical to give up, to decide that the program isn't going to halt? There seems to be no way to set a limit on the amount of search needed. Using a technique called diagonalization, Turing constructed a proof that no algorithm exists that can successfully handle all instances of the Halting Problem. Over the years, undecidable problems were found in almost every branch of mathematics. An example from number theory is the problem of solving Diophantine equations: Given a polynomial equation such as

4xy^2 + 2xy^2z^3 - 11x^3y^2z^2 = -1164,
is there a solution in integers? The problem of finding a general decision procedure for solving such Diophantine equations was first posed by David Hilbert in 1900, and it came to be known as Hilbert's Tenth Problem. The problem remained open until 1971, when it was proved that no such decision procedure can exist. One of the fundamental tools used in demarcating the boundary between solvable and unsolvable problems is the concept of reducibility, which was first brought into prominence through the work of logician Emil Post. Problem A is said to be reducible to problem B if, given a subroutine capable of solving problem B, one can construct an algorithm to solve problem A. As an example, a landmark result is that the Halting Problem is reducible to Hilbert's Tenth Problem (see Figure 5). It follows that Hilbert's Tenth Problem must be undecidable, since otherwise we would be able to use this reduction to derive an
FIGURE 5. The Halting Problem is reducible to Hilbert's Tenth Problem.
algorithm for the Halting Problem, which is known to be undecidable. The concept of reducibility will come up again when we discuss NP-completeness and the P : NP problem. Another important theme that complexity theory inherited from computability theory is the distinction between the ability to solve a problem and the ability to check a solution. Even though there is no general method to find a solution to a Diophantine equation, it is easy to check a proposed solution. For example, to check whether x = 3, y = 2, z = -1 constitutes a solution to the Diophantine equation given above, one merely plugs in the given values and does a little arithmetic. As we will see later, the distinction between solving and checking is what the P : NP problem is all about. Some of the most enduring branches of theoretical computer science have their origins in the abstract machines and other formalisms of computability theory. One of the most important of these branches is computational complexity theory. Instead of simply asking whether a problem is decidable at all, complexity theory asks how difficult it is to solve the problem. In other words, complexity theory is concerned with the capabilities of universal computing devices such as the Turing machine when restrictions are placed on their execution time or on the amount of memory they may use. The first glimmerings of complexity theory can be found in papers published in 1959 and 1960 by Michael Rabin and by Robert McNaughton and Hideo Yamada, but it is the 1965 paper by Juris Hartmanis and Richard Stearns that marks the beginning of the modern era of complexity theory. Using the Turing machine as their model of an abstract computer, Hartmanis and Stearns provided a precise definition of the 'complexity class' consisting of all problems solvable in a number of steps bounded by some given function of the input length n. Adapting the diagonalization technique that Turing had used to prove the undecidability of the Halting Problem, they proved many interesting results about the structure of complexity classes. All of us who read their paper could not fail to realize that we now had a satisfactory formal framework for pursuing the questions that Edmonds had raised earlier in an intuitive fashion: questions about whether, for instance, the traveling salesman problem is solvable in polynomial time. In that same year, I learned computability theory from a superb book by Hartley Rogers, who had been my teacher at Harvard. I remember wondering at the time whether the concept of reducibility, which was so central in computability theory, might also have a role to play in complexity theory, but I did not see how to make the connection. Around the same time, Michael Rabin, who was to receive the Turing Award in 1976, was a visitor at the IBM Research Laboratory at Yorktown Heights, on leave from the Hebrew University in Jerusalem. We both happened to live in the same apartment building on the outskirts of New York City, and we fell into a habit of sharing the long commute to Yorktown Heights. Rabin is a profoundly original thinker and one of the founders of both automata theory and complexity theory,
and through my daily discussions with him along the Sawmill River Parkway, I gained a much broader perspective on logic, computability theory, and the theory of abstract computing machines. In 1968, perhaps influenced by the general social unrest that gripped the nation, I decided to move to the University of California at Berkeley, where the action was. The years at IBM had been crucial for my development as a scientist. The opportunity to work with such outstanding scientists as Alan Hoffman, Raymond Miller, Arnold Rosenberg, and Shmuel Winograd was simply priceless. My new circle of colleagues included Michael Harrison, a distinguished language theorist who had recruited me to Berkeley, Eugene Lawler, an expert on combinatorial optimization, Manuel Blum, a founder of complexity theory who has gone on to do outstanding work at the interface between number theory and cryptography, and Stephen Cook, whose work in complexity theory was to influence me so greatly a few years later. In the mathematics department, there were Julia Robinson, whose work on Hilbert's Tenth Problem was soon to bear fruit, Robert Solovay, a famous logician who later discovered an important randomized algorithm for testing whether a number is prime, and Steve Smale, whose ground-breaking work on the probabilistic analysis of linear programming algorithms was to influence me some years later. And across the Bay at Stanford were Dantzig, the father of linear programming, Donald Knuth, who founded the fields of data structures and analysis of algorithms, as well as Robert Tarjan, then a graduate student, and John Hopcroft, a sabbatical visitor from Cornell, who were brilliantly applying data structure techniques to the analysis of graph algorithms. In 1971 Cook, who by then had moved to the University of Toronto, published his historic paper 'On the Complexity of Theorem-Proving Procedures.' Cook discussed the classes of problems that we now call P and NP, and introduced the concept that we now refer to as NP-completeness. Informally, the class P consists of all those problems that can be solved in polynomial time. Thus the marriage problem lies in P because the Hungarian algorithm solves an instance of size n in about n^3 steps, but the traveling salesman problem appears not to lie in P, since every known method of solving it requires exponential time. If we accept the premise that a computational problem is not tractable unless there is a polynomial-time algorithm to solve it, then all the tractable problems lie in P. The class NP consists of all those problems for which a proposed solution can be checked in polynomial time. For example, consider a version of the traveling salesman problem in which the input data consist of the distances between all pairs of cities, together with a 'target number' T, and the task is to determine whether there exists a tour of length less than or equal to T. It appears to be extremely difficult to determine whether such a tour exists, but if a proposed tour is given to us, we can easily check whether its length is less than or equal to T; therefore, this version of the traveling salesman problem lies in the class NP. Similarly, through
the device of introducing a target number T, all the combinatorial optimization problems normally considered in the fields of commerce, science, and engineering have versions that lie in the class NP. So NP is the area into which combinatorial problems typically fall; within NP lies P, the class of problems that have efficient solutions. A fundamental question is, What is the relationship between the class P and the class NP? It is clear that P is a subset of NP, and the question that Cook drew attention to is whether P and NP might be the same class. If P were equal to NP, there would be astounding consequences: It would mean that every problem for which solutions are easy to check would also be easy to solve; it would mean that, whenever a theorem had a short proof, a uniform procedure would be able to find that proof quickly; it would mean that all the usual combinatorial optimization problems would be solvable in polynomial time. In short, it would mean that the curse of combinatorial explosions could be eradicated. But, despite all this heuristic evidence that it would be too good to be true if P and NP were equal, no proof that P ≠ NP has ever been found, and some experts even believe that no proof will ever be found. The most important achievement of Cook's paper was to show that P = NP if and only if a particular computational problem called the Satisfiability Problem lies in P. The Satisfiability Problem comes from mathematical logic and has applications in switching theory, but it can be stated as a simple combinatorial puzzle: Given several sequences of upper- and lowercase letters, is it possible to select a letter from each sequence without selecting both the upper- and lowercase versions of any letter? For example, if the sequences are Abc, BC, aB, and ac, it is possible to choose A from the first sequence, B from the second and third, and c from the fourth; note that the same letter can be chosen more than once, provided we do not choose both its uppercase and lowercase versions. An example where there is no way to make the required selections is given by the four sequences AB, Ab, aB, and ab. The Satisfiability Problem is clearly in NP, since it is easy to check whether a proposed selection of letters satisfies the conditions of the problem. Cook proved that if the Satisfiability Problem is solvable in polynomial time, then every problem in NP is solvable in polynomial time, so that P = NP. Thus we see that this seemingly bizarre and inconsequential problem is an archetypal combinatorial problem; it holds the key to the efficient solution of all problems in NP. Cook's proof was based on the concept of reducibility that we encountered earlier in our discussion of computability theory. He showed that any instance of a problem in NP can be transformed into a corresponding instance of the Satisfiability Problem in such a way that the original has a solution if and only if the satisfiability instance does. Moreover, this translation can be accomplished in polynomial time. In other words, the Satisfiability Problem is general enough to capture the structure of any problem in NP. It follows
that, if we could solve the Satisfiability Problem in polynomial time, then we would be able to construct a polynomial-time algorithm to solve any problem in NP. This algorithm would consist of two parts: a polynomial-time translation procedure that converts instances of the given problem into instances of the Satisfiability Problem, and a polynomial-time subroutine to solve the Satisfiability Problem itself (see Figure 6).
FIGURE 6. The traveling salesman problem is polynomial-time reducible to the Satisfiability Problem.
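To make the letter-selection formulation of the Satisfiability Problem above concrete, here is a small brute-force checker in Python. It is only an exponential-time sketch for tiny instances, written for illustration (the function name and representation are mine); it treats each sequence as a clause, with an uppercase letter meaning 'variable true' and a lowercase letter meaning 'variable false,' and tries every truth assignment.

```python
from itertools import product

def satisfiable(sequences):
    """Can we pick one letter per sequence without ever picking both the
    upper- and lowercase versions of the same letter?"""
    variables = sorted({c.upper() for seq in sequences for c in seq})
    # Exhaustively try every truth assignment (fine only for tiny instances).
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        # A sequence is satisfiable if some letter in it agrees with the assignment.
        if all(any(assignment[c.upper()] == c.isupper() for c in seq)
               for seq in sequences):
            return True
    return False

print(satisfiable(["Abc", "BC", "aB", "ac"]))   # True  (choose A, B, B, c)
print(satisfiable(["AB", "Ab", "aB", "ab"]))    # False
```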
Upon reading Cook's paper, I realized at once that his concept of an archetypal combinatorial problem was a formalization of an idea that had long been part of the folklore of combinatorial optimization. Workers in that field knew that the integer programming problem, which is essentially the problem of deciding whether a system of linear inequalities has a solution in integers, was general enough to express the constraints of any of the commonly encountered combinatorial optimization problems. Dantzig had published a paper on that theme in 1960. Because Cook was interested in theorem proving rather than combinatorial optimization, he had chosen a different archetypal problem, but the basic idea was the same. However, there was a key difference: By using the apparatus of complexity theory, Cook had created a framework within which the archetypal nature of a given problem could become a theorem, rather than an informal thesis. Interestingly, Leonid Levin, who was then in Leningrad and is now a professor at Boston University, independently discovered essentially the same set of ideas. His archetypal problem had to do with tilings of finite regions of the plane with dominoes. I decided to investigate whether certain classic combinatorial problems, long believed to be intractable, were also archetypal in Cook's sense. I called such problems 'polynomial complete,' but that term was later superseded by the more precise term 'NP-complete.' A problem is NP-complete if it lies in the class NP, and every problem in NP is polynomial-time reducible to it. Thus, by Cook's theorem, the
Satisfiability Problem is NP-complete. To prove that a given problem in NP is NP-complete, it suffices to show that some problem already known to be NP-complete is polynomial-time reducible to the given problem. By constructing a series of polynomial-time reductions, I showed that most of the classical problems of packing, covering, matching, partitioning, routing, and scheduling that arise in combinatorial optimization are NP-complete. I presented these results in 1972 in a paper called 'Reducibility among Combinatorial Problems.' My early results were quickly refined and extended by other workers, and in the next few years, hundreds of different problems, arising in virtually every field where computation is done, were shown to be NP-complete.
Coping with NP-Complete Problems

I was rewarded for my research on NP-complete problems with an administrative post. From 1973 to 1975, I headed the newly formed Computer Science Division at Berkeley, and my duties left me little time for research. As a result, I sat on the sidelines during a very active period, during which many examples of NP-complete problems were found, and the first attempts to get around the negative implications of NP-completeness got under way. The NP-completeness results proved in the early 1970s showed that, unless P = NP, the great majority of the problems of combinatorial optimization that arise in commerce, science, and engineering are intractable: No methods for their solution can completely evade combinatorial explosions. How, then, are we to cope with such problems in practice?

One possible approach stems from the fact that near-optimal solutions will often be good enough: A traveling salesman will probably be satisfied with a tour that is a few percent longer than the optimal one. Pursuing this approach, researchers began to search for polynomial-time algorithms that were guaranteed to produce near-optimal solutions to NP-complete combinatorial problems. In most cases, the performance guarantee for the approximation algorithm was in the form of an upper bound on the ratio between the cost of the solution produced by the algorithm and the cost of an optimal solution. Some of the most interesting work on approximation algorithms with performance guarantees concerned the one-dimensional bin-packing problem. In this problem, a collection of items of various sizes must be packed into bins, all of which have the same capacity. The goal is to minimize the number of bins used for the packing, subject to the constraint that the sum of the sizes of the items packed into any bin may not exceed the bin capacity. During the mid 1970s, a series of papers on approximation algorithms for bin packing culminated in David Johnson's analysis of the first-fit-decreasing algorithm. In this simple algorithm, the items are considered in decreasing order of their sizes, and each item in turn is placed in the first bin that can accept
it. In the example in Figure 7, for instance, there are four bins each with a capacity of 10, and eight items ranging in size from 2 to 8. Johnson showed that this simple method was guaranteed to achieve a relative error of at most 2/9; in other words, the number of bins required was never more than about 22 percent greater than the number of bins in an optimal solution. Several years later, these results were improved still further, and it was eventually shown that the relative error could be made as small as one liked, although the polynomial-time algorithm required for this purpose lacked the simplicity of the first-fit-decreasing algorithm that Johnson analyzed.
FIGURE 7. A packing created by the first-fit-decreasing algorithm (four bins of capacity 10; item sizes 8, 7, 6, 5, 3, 3, 2, 2).
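As a concrete illustration, here is a short sketch of the first-fit-decreasing heuristic just described. The function name and code are my own; the item sizes are those that can be read off Figure 7.

```python
# Hypothetical sketch of the first-fit-decreasing bin-packing heuristic.
def first_fit_decreasing(items, capacity):
    """Pack item sizes into bins of the given capacity; return the bins."""
    bins = []                                   # each bin is a list of item sizes
    for item in sorted(items, reverse=True):    # consider items largest first
        for b in bins:
            if sum(b) + item <= capacity:       # first bin with enough room
                b.append(item)
                break
        else:
            bins.append([item])                 # no existing bin fits: open a new one
    return bins

# Example in the spirit of Figure 7: eight items, bins of capacity 10.
packing = first_fit_decreasing([8, 7, 6, 5, 3, 3, 2, 2], capacity=10)
print(len(packing), packing)   # 4 bins: [[8, 2], [7, 3], [6, 3], [5, 2]]
```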
The research on polynomial-time approximation algorithms revealed interesting distinctions among the NP-complete combinatorial optimization problems. For some problems, the relative error can be made as small as one likes; for others, it can be brought down to a certain level, but seemingly no further; other problems have resisted all attempts to find an algorithm with bounded relative error; and finally, there are certain problems for which the existence of a polynomial-time approximation algorithm with bounded relative error would imply that P = NP.

During the sabbatical year that followed my term as an administrator, I began to reflect on the gap between theory and practice in the field of combinatorial optimization. On the theoretical side, the news was bleak. Nearly all the problems one wanted to solve were NP-complete, and in most cases, polynomial-time approximation algorithms could not provide the kinds of performance guarantees that would be useful in practice. Nevertheless, there were many algorithms that seemed to work perfectly well in practice, even though they lacked a theoretical pedigree. For example, Lin and Kernighan had developed a very successful local improvement strategy for the traveling salesman problem. Their algorithm simply started with a random tour and kept improving it by adding and deleting a few links, until a tour was eventually created that could not be improved by such local changes. On contrived instances, their algorithm performed disastrously, but on practical instances, it could be relied on to give
nearly optimal solutions. A similar situation prevailed for the simplex algorithm, one of the most important of all computational methods: It reliably solved the large linear programming problems that arose in applications, despite the fact that certain artificially constructed examples caused it to run for an exponential number of steps. It seemed that the success of such inexact or rule-of-thumb algorithms was an empirical phenomenon that needed to be explained. And it further seemed that the explanation of this phenomenon would inevitably require a departure from the traditional paradigms of complexity theory, which evaluate an algorithm according to its performance on the worst possible input that can be presented to it. The traditional worst-case analysis, the dominant strain in complexity theory, corresponds to a scenario in which the instances of a problem to be solved are constructed by an infinitely intelligent adversary who knows the structure of the algorithm and chooses inputs that will embarrass it to the maximal extent. This scenario leads to the conclusion that the simplex algorithm and the Lin-Kernighan algorithm are hopelessly defective. I began to pursue another approach, in which the inputs are assumed to come from a user who simply draws his instances from some reasonable probability distribution, attempting neither to foil nor to help the algorithm.

In 1975 I decided to bite the bullet and commit myself to an investigation of the probabilistic analysis of combinatorial algorithms. I must say that this decision required some courage, since this line of research had its detractors, who pointed out quite correctly that there was no way to know what inputs were going to be presented to an algorithm, and that the best kind of guarantees, if one could get them, would be worst-case guarantees. I felt, however, that in the case of NP-complete problems we weren't going to get the worst-case guarantees we wanted, and that the probabilistic approach was the best way and perhaps the only way to understand why heuristic combinatorial algorithms worked so well in practice.

Probabilistic analysis starts from the assumption that the instances of a problem are drawn from a specified probability distribution. In the case of the traveling salesman problem, for example, one possible assumption is that the locations of the n cities are drawn independently from the uniform distribution over the unit square. Subject to this assumption, we can study the probability distribution of the length of the optimal tour or the length of the tour produced by a particular algorithm. Ideally, the goal is to prove that some simple algorithm produces optimal or near-optimal solutions with high probability. Of course, such a result is meaningful only if the assumed probability distribution of problem instances bears some resemblance to the population of instances that arise in real life, or if the probabilistic analysis is robust enough to be valid for a wide range of probability distributions. Among the most striking phenomena of probability theory are the laws of large numbers, which tell us that the cumulative effect of a large number of random events is highly predictable, even though the
outcomes of the individual events are highly unpredictable. For example, we can confidently predict that, in a long series of flips of a fair coin, about half the outcomes will be heads. Probabilistic analysis has revealed that the same phenomenon governs the behavior of many combinatorial optimization algorithms when the input data are drawn from a simple probability distribution: With very high probability, the execution of the algorithm evolves in a highly predictable fashion, and the solution produced is nearly optimal. For example, a 1960 paper by Beardwood, Halton, and Hammersley shows that, if the n cities in a traveling salesman problem are drawn independently from the uniform distribution over the unit square, then, when n is very large, the length of the optimal tour will almost surely be very close to a certain absolute constant times the square root of the number of cities. Motivated by their result, I showed that, when the number of cities is extremely large, a simple divide-and-conquer algorithm will almost surely produce a tour whose length is very close to the length of an optimal tour (see Figure 8). The algorithm starts by partitioning the region where the cities lie into rectangles, each of which contains a small number of cities. It then constructs an optimal tour through the cities in each rectangle. The union of all these little tours closely
FIGURE 8. A divide-and-conquer algorithm for the traveling salesman problem in the plane (panels a through d).
resembles an overall traveling salesman tour, but differs from it because of extra visits to those cities that lie on the boundaries of the rectangles. Finally, the algorithm performs a kind of local surgery to eliminate these redundant visits and produce a tour. Many further examples can be cited in which simple approximation algorithms almost surely give near-optimal solutions to random large instances of NP-complete optimization problems. For example, my student Sally Floyd, building on earlier work on bin packing by Bentley, Johnson, Leighton, McGeoch, and McGeoch, recently showed that, if the items to be packed are drawn independently from the uniform distribution over the interval [0, 1/2), then, no matter how many items there are, the first-fit-decreasing algorithm will almost surely produce a packing with less than 10 bins' worth of wasted space.

Some of the most notable applications of probabilistic analysis have been to the linear programming problem. Geometrically, this problem amounts to finding the vertex of a polyhedron closest to some external hyperplane. Algebraically, it is equivalent to minimizing a linear function subject to linear inequality constraints. The linear function measures the distance to the hyperplane, and the linear inequality constraints correspond to the hyperplanes that bound the polyhedron. The simplex algorithm for the linear programming problem is a hill-climbing method. It repeatedly slides from vertex to neighboring vertex, always moving closer to the external hyperplane. The algorithm terminates when it reaches a vertex closer to this hyperplane than any neighboring vertex; such a vertex is guaranteed to be an optimal solution. In the worst case, the simplex algorithm requires a number of iterations that grows exponentially with the number of linear inequalities needed to describe the polyhedron, but in practice, the number of iterations is seldom greater than three or four times the number of linear inequalities. Karl-Heinz Borgwardt of West Germany and Steve Smale of Berkeley were the first researchers to use probabilistic analysis to explain the unreasonable success of the simplex algorithm and its variants. Their analyses hinged on the evaluation of certain multidimensional integrals. With my limited background in mathematical analysis, I found their methods impenetrable. Fortunately, one of my colleagues at Berkeley, Ilan Adler, suggested an approach that promised to lead to a probabilistic analysis in which there would be virtually no calculation; one would use certain symmetry principles to do the required averaging and magically come up with the answer. Pursuing this line of research, Adler, Ron Shamir, and I showed in 1983 that, under a reasonably wide range of probabilistic assumptions, the expected number of iterations executed by a certain version of the simplex algorithm grows only as the square of the number of linear inequalities. The same result was also obtained via multidimensional integrals by Michael Todd and by Adler and Nimrod Megiddo.
I believe that these results contribute significantly to our understanding of why the simplex method performs so well. The probabilistic analysis of combinatorial optimization algorithms has been a major theme in my research over the past decade. In 1975, when I first committed myself to this research direction, there were very few examples of this type of analysis. By now there are hundreds of papers on the subject, and all of the classic combinatorial optimization problems have been subjected to probabilistic analysis. The results have provided a considerable understanding of the extent to which these problems can be tamed in practice. Nevertheless, I consider the venture to be only partially successful. Because of the limitations of our techniques, we continue to work with the most simplistic of probabilistic models, and even then, many of the most interesting and successful algorithms are beyond the scope of our analysis. When all is said and done, the design of practical combinatorial optimization algorithms remains as much an art as it is a science.
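Looking back at the divide-and-conquer tour construction sketched above (Figure 8), the following rough, hypothetical illustration may help fix the idea. It is deliberately simplified: the small tours are solved by brute force and simply concatenated in a snake-like order over the grid cells, and the local-surgery step that the actual algorithm uses to remove redundant boundary visits is omitted, so this is a sketch of the flavor of the method rather than the method itself.

```python
# Simplified, hypothetical sketch of a divide-and-conquer heuristic for the
# planar traveling salesman problem: partition the unit square into a grid,
# solve each small cell optimally, then stitch the cell tours together.
import math
import random
from itertools import permutations

def tour_length(points):
    return sum(math.dist(points[i], points[(i + 1) % len(points)])
               for i in range(len(points)))

def brute_force_tour(points):
    """Optimal tour through a handful of points (exponential time; small inputs only)."""
    if len(points) <= 2:
        return list(points)
    rest = min(permutations(points[1:]), key=lambda p: tour_length([points[0], *p]))
    return [points[0], *rest]

def divide_and_conquer_tour(cities, grid=4):
    """Approximate tour: solve each grid cell optimally, then visit cells in snake order."""
    cells = [[[] for _ in range(grid)] for _ in range(grid)]
    for (x, y) in cities:
        cells[min(int(y * grid), grid - 1)][min(int(x * grid), grid - 1)].append((x, y))
    tour = []
    for row in range(grid):
        cols = range(grid) if row % 2 == 0 else reversed(range(grid))
        for col in cols:
            tour.extend(brute_force_tour(cells[row][col]))    # little tour for this cell
    return tour

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(48)]
print(round(tour_length(divide_and_conquer_tour(cities)), 3))
```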
Randomized Algorithms

Algorithms that toss coins in the course of their execution have been proposed from time to time since the earliest days of computers, but the systematic study of such randomized algorithms only began around 1976. Interest in the subject was sparked by two surprisingly efficient randomized algorithms for testing whether a number n is prime; one of these algorithms was proposed by Robert Solovay and Volker Strassen, and the other by Rabin. A subsequent paper by Rabin gave further examples and motivation for the systematic study of randomized algorithms, and the doctoral thesis of John Gill, under the direction of my colleague Blum, laid the foundations for a general theory of randomized algorithms.

To understand the advantages of coin tossing, let us turn again to the scenario associated with worst-case analysis, in which an all-knowing adversary selects the instances that will tax a given algorithm most severely. Randomization makes the behavior of an algorithm unpredictable even when the instance is fixed, and thus can make it difficult, or even impossible, for the adversary to select an instance that is likely to cause trouble. There is a useful analogy with football, in which the algorithm corresponds to the offensive team and the adversary to the defense. A deterministic algorithm is like a team that is completely predictable in its play calling, permitting the other team to stack its defenses. As any quarterback knows, a little diversification in the play calling is essential for keeping the defensive team honest.

As a concrete illustration of the advantages of coin tossing, I present a simple randomized pattern-matching algorithm invented by Rabin and myself in 1980. The pattern-matching problem is a fundamental one in text processing. Given a string of n bits called the pattern,
and a much longer bit string called the text, the problem is to determine whether the pattern occurs as a consecutive block within the text (see Figure 9). A brute-force method of solving this problem is to compare the pattern directly with every n-bit block within the text. In the worst case, the execution time of this method is proportional
FIGURE 9. A pattern-matching problem: the pattern 11001 is sought as a consecutive block within a longer bit string.
to the product of the length of the pattern and the length of the text. In many text processing applications, this method is unacceptably slow unless the pattern is very short. Our method gets around the difficulty by a simple hashing trick. We define a 'fingerprinting function' that associates with each string of n bits a much shorter string called its fingerprint. The fingerprinting function is chosen so that it is possible to march through the text, rapidly computing the fingerprint of every n-bit-long block. Then, instead of comparing the pattern with each such block of text, we compare the fingerprint of the pattern with the fingerprint of every such block. If the fingerprint of the pattern differs from the fingerprint of each block, then we know that the pattern does not occur as a block within the text.

The method of comparing short fingerprints instead of long strings greatly reduces the running time, but it leads to the possibility of false matches, which occur when some block of text has the same fingerprint as the pattern, even though the pattern and the block of text are unequal. False matches are a serious problem; in fact, for any particular choice of fingerprinting function it is possible for an adversary to construct an example of a pattern and a text such that a false match occurs at every position of the text. Thus, some backup method is needed to defend against false matches, and the advantages of the fingerprinting method seem to be lost.

Fortunately, the advantages of fingerprinting can be restored through randomization. Instead of working with a single fingerprinting function, the randomized method has at its disposal a large family of different easy-to-compute fingerprinting functions. Whenever a problem instance, consisting of a pattern and a text, is presented, the algorithm selects a fingerprinting function at random from this large family, and uses that function to test for matches between the pattern and the text. Because the fingerprinting function is not known in advance, it is impossible for an adversary to construct a problem instance that is likely to lead to false matches; it can be shown that, no matter how the pattern and the text are selected, the probability of a false match is very small. For example, if the pattern is 250 bits long and the text is 4000 bits long, one can work with easy-to-compute 32-bit fingerprints
and still guarantee that the probability of a false match is less than one in a thousand in every possible instance. In many text processing applications, this probabilistic guarantee is good enough to eliminate the need for a backup routine, and thus the advantages of the fingerprinting approach are regained. Randomized algorithms and probabilistic analysis of algorithms are two contrasting ways to depart from the worst-case analysis of deterministic algorithms. In the former case, randomness is injected into the behavior of the algorithm itself, and in the latter case, randomness is assumed to be present in the choice of problem instances. The approach based on randomized algorithms is, of course, the more appealing of the two, since it avoids assumptions about the environment in which the algorithm will be used. However, randomized algorithms have not yet proved effective in combating the combinatorial explosions characteristic of NP-complete problems, and so it appears that both of these approaches will continue to have their uses.
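Returning to the fingerprinting idea described above, the following sketch illustrates one natural instantiation of it: interpret each n-bit block as an integer and reduce it modulo a randomly chosen prime, updating the value with a rolling computation as the window slides. This particular family of fingerprinting functions, the prime range, and the example strings (reconstructed from the fragments visible in Figure 9) are illustrative assumptions, offered in the spirit of the method rather than as its exact published form.

```python
# Hypothetical sketch of randomized fingerprint matching for bit strings.
import random

def random_prime(bits=32):
    """Pick a random prime below 2**bits by rejection sampling (trial division)."""
    def is_prime(m):
        if m < 2:
            return False
        d = 2
        while d * d <= m:
            if m % d == 0:
                return False
            d += 1
        return True
    while True:
        candidate = random.randrange(3, 2 ** bits)
        if is_prime(candidate):
            return candidate

def fingerprint_match(pattern, text):
    """Return the positions whose block fingerprint equals the pattern's fingerprint.

    pattern and text are strings of '0'/'1'.  A fingerprint match is almost
    certainly a real occurrence; the false-match probability is controlled by
    the random choice of the prime modulus, as discussed above.
    """
    n = len(pattern)
    p = random_prime(32)                       # fingerprint modulus, chosen at random
    shift = pow(2, n - 1, p)                   # weight of the bit leaving the window
    fp_pattern = int(pattern, 2) % p
    fp_block = int(text[:n], 2) % p            # fingerprint of the first block
    hits = []
    for i in range(len(text) - n + 1):
        if fp_block == fp_pattern:
            hits.append(i)                     # candidate occurrence at position i
        if i + n < len(text):                  # slide the window one bit to the right
            fp_block = (2 * (fp_block - int(text[i]) * shift) + int(text[i + n])) % p
    return hits

print(fingerprint_match("11001", "110111011100100"))   # almost surely [8]
```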
Conclusion

This brings me to the end of my story, and I would like to conclude with a brief remark about what it's like to be working in theoretical computer science today. Whenever I participate in the annual ACM Theory of Computing Symposium, or attend the monthly Bay Area Theory Seminar, or go up the hill behind the Berkeley campus to the Mathematical Sciences Research Institute, where a year-long program in computational complexity is taking place, I am struck by the caliber of the work that is being done in this field. I am proud to be associated with a field of research in which so much excellent work is being done, and pleased that I'm in a position, from time to time, to help prodigiously talented young researchers get their bearings in this field. Thank you for giving me the opportunity to serve as a representative of my field on this occasion.

Categories and Subject Descriptors: A.0 [General Literature]: General - biographies/autobiographies; F.0 [Theory of Computation]: General; F.1.1 [Computation by Abstract Devices]: Models of Computation - computability theory; F.1.2 [Computation by Abstract Devices]: Modes of Computation - parallelism; probabilistic computation; F.1.3 [Computation by Abstract Devices]: Complexity Classes - reducibility and completeness; relations among complexity classes; F.2.0 [Analysis of Algorithms and Problem Complexity]: General; G.2.1 [Discrete Mathematics]: Combinatorics; G.2.2 [Discrete Mathematics]: Graph Theory; K.2 [History of Computing]: People

General Terms: Performance, Theory

Additional Key Words and Phrases: Richard Karp, Turing Award
THE DEVELOPMENT OF COMBINATORIAL OPTIMIZATION AND COMPUTATIONAL COMPLEXITY THEORY
1900 Hilbert poses his fundamental questions: Is mathematics complete, consistent, and decidable? Hilbert's 10th Problem asks whether there are general decision procedures for Diophantine equations.

1930s The computability pioneers.

1937 Turing introduces an abstract model of the digital computer and proves the undecidability of the Halting Problem and of the decision problem for first-order logic.

1947 Dantzig devises the simplex method for the linear programming problem.

1957 Ford, Fulkerson, et al. give efficient algorithms for solving network flow problems.

1959 Rabin, McNaughton, and Yamada: first glimmerings of computational complexity theory.

1965 'Complexity' is defined by Hartmanis and Stearns, who introduce a framework for computational complexity using abstract machines and obtain results about the structure of complexity classes. Edmonds defines a 'good' algorithm as one with running time bounded by a polynomial function of the size of the input, and finds such an algorithm for the Matching Problem.

1970s Search for near-optimal solutions based on an upper bound on the cost ratio.

1971 Building on work of Davis, Robinson, and Putnam, Matiyasevich settles Hilbert's 10th Problem: No general decision procedure exists for solving Diophantine equations. Cook's Theorem: All NP problems are polynomial-time reducible to the Satisfiability Problem. Levin independently discovers this principle.

1972 Karp uses polynomial-time reducibility to show that 21 problems of packing, matching, covering, etc., are NP-complete.

1973 Meyer, Stockmeyer, et al. prove the intractability of certain decision problems in logic and automata theory.

1975 Karp departs from the worst-case paradigm and investigates the probabilistic analysis of combinatorial algorithms.

1976 Rabin et al. launch the study of randomized algorithms.

1980 Borgwardt, Smale, et al. conduct probabilistic analyses of the simplex algorithm.

1984 Karmarkar devises a theoretically efficient and practical linear programming algorithm.
Postscript
Piecing Together Complexity

KAREN FRENKEL, Features Writer, Communications of the ACM

To illustrate the 'remarkable extent to which complexity theory operates by means of analogs from computability theory,' Richard Karp created this conceptual map or jigsaw puzzle. To lay out the puzzle in the plane, he used a 'graph planarity algorithm.' The more distantly placed parts might not at first seem related, 'but in the end, the theory of NP-completeness does bring them all together,' Karp says.
The upper right portion of the puzzle shows concepts related to combinatorial explosions and the notion of a 'good' or 'efficient' algorithm. In turn, 'Complexity' connects these concepts to the upper left portion, which represents the concerns of early computability theorists. The traveling salesman problem is closer to the upper right corner because it is probably intractable. It therefore borders on 'NP-completeness' and 'Combinatorial explosion.' To some extent, however, certain divisions blur. 'Linear programming,' for example, has an anomalous status: the most widely used algorithms for solving such problems in practice are not good in the theoretical sense, and those that are good in the theoretical sense are often not good in practice. One example is the ellipsoid method that was the object of so much attention six years ago. It ran in polynomial time, but the polynomial was of such a high degree that the method proved good in the technical sense, but ineffective in practice. 'The reason is that our notion of polynomial-time algorithms doesn't exactly capture the notion of an intuitively efficient algorithm,' Karp explains. 'When you get
up to n^5 or n^6, then it's hard to justify saying that it is really efficient. So Edmonds's concept of a good algorithm isn't quite a perfect formal counterpart of good in the intuitive sense.' Further, the simplex algorithm is good in every practical sense, Karp says, but not good according to the standard paradigm of complexity theory. The most recent addition to linear programming solutions, an algorithm devised by Narendra Karmarkar that some think challenges the simplex algorithm, is good in the technical sense and also appears to be quite effective in practice, says Karp.
The good algorithm segment is adjacent to 'Heuristics' because a heuristic algorithm may work well, but lacks a theoretical pedigree. Some heuristic algorithms are always fast, but sometimes fail to give good solutions. Others always give an optimal solution, but are not guaranteed to be fast. The simplex algorithm is of the latter type. 'Undecidability,' 'Combinatorial explosion,' and 'Complexity' are on the same plane because they are analogs of one another; undecidability involves unbounded search, whereas combinatorial explosions are by definition very long but not unbounded searches. Complexity theory bridges the gap because, instead of asking whether a problem can be solved at all, it poses questions about the resources needed to solve a problem. The lower left region contains the segments Karp has been concerned with most recently and that contain open-ended questions. 'Randomized algorithm,' for example, is situated opposite 'Probabilistic analysis' because both are alternatives to worst-case analyses of deterministic algorithms. Randomized algorithms might be able to solve problems in polynomial time that deterministic ones cannot, and that could mean an extension of the notion of good algorithms. Perhaps through software designs for non-von Neumann machines, algorithms can be made more efficient in practice through parallelism. Finally, some parts of the puzzle are not yet defined. Says Karp, 'They correspond to the unknown territory that remains to be explored in the future.'
Postscript

TURING AWARD INTERVIEW
Complexity and Parallel Processing: An Interview with Richard Karp

KAREN A. FRENKEL, Features Writer, Communications of the ACM
In the following interview, which took place at ACM 85 in Denver, Karp discusses the relation of his work to leading-edge computing topics like parallel processing and artificial intelligence. Tracing his experience as a pioneer in highly theoretical computer science, Karp describes how the decision to go against established wisdom led to the work for which he is best known and how a colleague's findings led him to see links between two previously unrelated areas. Throughout, he stresses the exchange of ideas with colleagues that helped yield fundamental insights.

KF You decided fairly early on in your career to move from mathematics into computer science. Do you see yourself as a theoretical mathematician working in the realm of computer science, or as a computer scientist working on theoretical issues?

RK I guess I'm somewhere in between an applied mathematician and a computer scientist. A priori I think the work I do could go either in a mathematics or computer science department, but the historical trend has been for computer science departments to take the major initiatives in developing theoretical computer science. Most math departments have dropped the ball. There are a few exceptions, but in general, they didn't realize the potential of this field quite early enough to begin building it up. So it tended to fall within the purview of computer science departments. Nowadays, mathematics departments are finally becoming much more cognizant of theoretical computer science.

KF Do mathematicians think about computation differently than computer scientists do?

RK When mathematicians use computers, they tend to operate in a very nontheoretical manner. If a number theorist wants to factor a number, he'll throw everything at it but the kitchen sink. He wants that answer, and he usually isn't worried about the broader computational complexity issues. It's the same with group theorists or algebraic geometers. They're interested in this particular group, or that particular surface, and they want that answer; they become just like engineers. When I program, I'm the same way. For the first five minutes, I'm very conscious of theoretical issues, but then I just want to make the program work, and I forget that I'm a theoretician.

KF Why has the traveling salesman problem received so much attention?

RK The traveling salesman problem epitomizes and is a simplified version of the rather more complicated problems that occur in practice. Everyone
knows that the traveling salesman problem is a metaphor or a myth; it's obvious that no salesman is going to worry about absolutely minimizing his mileage, but it is an interesting and an easily defined problem. It probably gets more attention than it deserves because of its catchy name. There are other important prototypical problems with less catchy names, like coloring, packing, matching, scheduling, and so forth. This is the way theory advances: you can't do clean theoretical work by taking on all the complications of real-world problems. So you take cleaner formulations, study them as closely as possible, go deeply into their structure, and hope that the results will transfer over to the real problems.

KF It seems that you investigate metatheory, classes of problems, rather than real problems.
RK Yes, that's right. There are three levels of problems. There's the level of solving a very specific instance: You want the shortest tour through the 48 continental state capitals plus Washington, D.C. That's the level closest to the practitioner. Then there's the level of studying the problem in general, with emphasis on methodology for solving it: You want to know what the complexity of the traveling salesman problem, or the marriage problem, is, using the worst-case paradigm. That's one level up because you're not interested just in a specific instance. Then there's a metatheoretic level where you study the whole structure of a class of problems. This is the point of view that we've inherited from logic and computability theory. The question becomes, 'What is the whole classification scheme? Let's look at the hierarchy of problem complexities as we increase the bound on the allowable computation time.' Every now and then, two levels have an interface. Such interfaces are usually very important. A lot of important work in science emerges when two fields meet that had not previously been perceived to be related. The concept of NP-completeness links the abstract study of complexity classes to the properties of particular problems like the traveling salesman problem or the satisfiability problem.

KF The step that you took toward probabilistic analysis was a departure from the worst-case analysis paradigm. And you pursued it despite its detractors. What pushed your decision?

RK I don't mean to give the impression that probabilistic analysis had never been heard of before I thought of it. It certainly had been applied, but mainly to problems of sorting, searching, and data structures, rather than to combinatorial optimization problems. The decision was particularly difficult because, to a certain extent, I agreed with the detractors. There is a really fundamental methodological problem: How do you choose the probability distributions? How can you possibly know what the population of problem instances is going to be? Nobody had ever taken careful measurements of the characteristics of real-world problems, and even if they had, they would be measuring just one computing environment. But I didn't see any way out, because, if we didn't go the probabilistic route, NP-completeness would just be devastating. Now there was also a line of research on approximation algorithms that do give performance guarantees. If you have an NP-complete combinatorial optimization problem, you can relax the requirement of getting an optimal solution and try to construct a fast algorithm that's guaranteed in its worst case to be not more than, say, 20 percent off. This is another very interesting paradigm that was explored, and it gave mixed results. In some problems, it really cleared up the difficulty; you could get a solution as close to optimum as you liked.
For some other problems, you could guarantee being off by 22, 33, or 50 percent. Those results were very nice, but I didn't think they were descriptive of what happens when you use practical heuristics. Practical heuristics do very well most of the time, but not in the worst case. So it wasn't that I relished the idea of working along a new direction whose foundations could be called into question. And it was also a personal risk, in that I could have been seen as flaky. You know, 'He can't do the real thing, so he assumes some probability distribution and makes life easier for himself.' But again, I just didn't see any other way to proceed. The phenomenon of NP-completeness persuaded me.

KF If you don't have the optimal solution to a problem, how can you know that your heuristic is producing something close to optimal?

RK That's a methodological difficulty. When you run a heuristic algorithm and it appears to be giving very good solutions, you can't be sure, since you don't know where the optimum lies. You may run your program from many different starting points and keep replicating the same solution. If nobody ever finds a better one, you have some circumstantial evidence that your solution is best. You can also invest a very large amount of computer time in a branch-and-bound computation and finally get a solution that you can prove is optimal
to within half a percent. Then you run your quick heuristic on the same problem for three minutes or so and see how close it can come. Sometimes you can artificially construct the problem so that you'll know what the optimal solution is. But you've pointed out a severe methodological problem.

KF Recently you've begun work on parallelism. How will parallel processing affect our notion of a good or efficient algorithm?

RK I'm extremely interested in parallel computation, and I think it's a fascinating area. There are several strains of research that have not yet completely come together: There's the study of various parallel architectures; there are many questions about what the processors should look like and how they should be interconnected. There are numerical analysis issues, complexity issues, and algorithm design issues. Much of my work over the past couple of years has been done with two Israeli colleagues, Avi Wigderson and Eli Upfal. We have been studying the complexity of parallel algorithms on a fairly theoretical plane, working with rather idealized models of parallel computers. That way we abstract away certain issues of communication and all the complications that arise because a parallel system is really also a distributed system. We may assume, for example, that any two processors can directly communicate, which is in fact flat wrong. But these are useful abstractions that let us get at some of the structural questions like, 'What is it about a problem that lends itself to parallelism? Under what circumstances can we design a completely new algorithm that will enormously reduce the amount of time required to solve a problem?' These are interesting and important mental exercises because they lead us to discover completely different techniques for structuring algorithms. Very often the parallel algorithms that we come up with are very different from the sequential algorithms that we may use for the same problems. Most of the work that the theoretical computer science community has been doing on parallel computation has been concerned with making polynomial time algorithms even faster. We are asking ourselves, 'Which problems in that class can be tremendously parallelized? What are the conditions under which computations can be compressed enormously?' In my future work, I will focus more attention on applying parallelism to NP-complete problems. Somebody with a very severe theoretical point of view could say, 'That's hopeless, you can never reduce the run time from exponential to polynomial by throwing processors at a problem, unless you have an exponential number of processors.' On the other hand, even though you may never be able to go from exponential to polynomial, it's also clear that there is tremendous scope for parallelism on those problems, and parallelism may really help us curb combinatorial explosions. I intend to look at branch-and-bound, game trees, goal-subgoal structures, Prolog-like structures, backtrack search, and all of the various kinds of combinatorial searches, because I think that such problems are really well suited for parallel computation. The form that the theory will take is not yet clear.

KF What is the relationship between your interest now in parallelism and your earlier work with Raymond E. Miller?

RK There have been two main periods when I have been involved in studying parallel computation. The first was in the early to mid sixties, when I worked with Miller on several descriptive formalisms for parallel computation.
In the more recent period, I've worked with Upfal and Wigderson on the design and analysis of parallel algorithms. Miller and I were originally motivated
by considerations of whether it was feasible and desirable to design special-purpose hardware to enable computers to perform commonly executed iterative computations in parallel. We came up with several formalisms for describing parallel computations. One was very specific and concrete, and another was on a very highly theoretical plane. The models and methods that we came up with were very similar in spirit to the systolic designs later pioneered by H. T. Kung, Charles Leiserson, and others, although I don't mean to say that we anticipated all their ideas. There were certainly many insights that we did not have. But, in a sense, we were doing it too early; the world wasn't quite ready for it. We were also interested in certain more qualitative questions like, 'What happens if you're running asynchronously, and you don't have a master clock, so there is no way of telling whether A happens before B or B happens before A? Can you still have a determinate result for the whole computation even though you can't control the order in which these various events are happening in parallel?' The recent work is in a different direction. We have been concerned with complexity: with a given number of processors, how fast can you solve a problem? The two developments were quite distinct.

KF Do you think that perhaps your work will also contribute to parallel-processor design and help to determine the best ways to link different processors within a machine?
RK At the hardware level, the kind of thing I do is highly relevant. Laying out an integrated circuit chip is a bit like designing a city of 50,000 people. There are all sorts of combinatorial problems having to do with placing and interconnecting the various circuit modules. At an architectural level, the work I've been doing on parallel computation isn't directly relevant, because I've been using idealized models that fail to address the issues of communication between processors. I hope that my work will begin to move closer to the architectural issues.

KF Can we learn anything exciting about distributed communications and distributed protocols from theoretical studies?
RK Yes. There are some very beautiful theoretical developments having to do with how much you lose when you have to depend on message passing in a sparse network of processors rather than direct point-to-point communication between processors. In a realistic distributed system, the processors have to not only compute but also cooperate, like a post office where messages flow between processors. There has been some very nice theoretical work on various kinds of protocols, where, as in the so-called Byzantine Generals problem (another one of those jazzy names), a number of processors have to reach agreement through message passing, even when some are faulty and functioning as adversaries, trying to mess things up. It has become apparent that randomization is very powerful there. The kinds of protocols needed for these problems of cooperation and communication in a distributed system can be simplified if coin flipping is permitted. It's a fundamental insight that randomized algorithms can be applied in that setting. So there are many links between theoretical studies and protocols for real-life distributed systems.

KF People use algorithms that seem to work perfectly well in practice even though they lack the theoretical pedigree. And they might say, 'This work is fascinating, but if we can, by trial and error, come across algorithms that work fine, why concern ourselves with theory?' How will your work be applied in the most practical sense in the future?
RK Some of the most important combinatorial algorithms could never have been invented by a trial-and-error process; having the right theoretical framework was absolutely necessary. Once the general shape of an algorithm has been determined, it is often possible to tune it empirically, but if you proceed in a purely empirical way, your knowledge is limited to the very specific circumstances in which you conduct experiments. The results of analysis, on the other hand, tend to be more susceptible to generalization. The justification for theory, apart from its apparent aesthetic attractions, is that, when you get a theoretical result, it usually applies to a range of situations. It's a bit like simulation versus analysis. They both certainly have their place, but most simulations only tell you about one very limited situation, whereas sometimes analysis can tell you about a whole range of situations. But the solution of combinatorial optimization problems is certainly as much an art as it is a science, and there are people who have wonderfully honed intuitions about constructing heuristic algorithms that do the job.

KF What will be the focus of research at the Mathematical Sciences Research Institute (MSRI)?

RK Well, I'm glad that you asked me about MSRI because that project is very close to my heart. It's a research institute up in the hills behind the Berkeley campus, but it's not officially connected with the university, and it supports year-long research programs in the mathematical sciences. In the past, these were mostly in pure mathematics. The primary support comes from the National Science Foundation (NSF). About two years ago, Steven Smale, of the mathematics department at Berkeley, and I proposed a year-long project in computational complexity, and we were very pleased that it was accepted. I think it's an indication that the mathematics community, which was really slow to involve itself in computational complexity, has now become very receptive to the field. About 70 scientists will participate in this complexity theory research. They are evenly divided between mathematicians and computer scientists. I'm very proud of the group we have assembled. People are pursuing a wide spectrum of topics. Some are doing metatheory, focusing on complexity classes like P and NP, rather than on concrete problems. Some are working on computational number theory, where the central problem is factoring very large numbers. Others are concentrating on combinatorial problems. We're exploring the interface between numerical computation and complexity theory. And parallelism is a major theme. I'm absolutely delighted with the way it's going; the place is really the Camelot of complexity theory. There have already been a number of developments in parallel algorithms, just in the couple of months we've been operational.
KF Could you be more specific?
RK It would be premature to mention specific results, except to say that some of them make my earlier work obsolete.
KF How much money is available for the MSRI project?
RK The budget of the complexity project at MSRI in round numbers is $500,000 from NSF and $140,000 from the military services. This program has been something of a windfall for complexity theory, but I very distinctly have the feeling that the general funding picture for computer science is worse than it has been in years. The NSF is undertaking some very worthy new initiatives, but it's doing so without a corresponding expansion in its funding base, so that these initiatives are being funded at the
expense of existing programs. Although I must say that the MSRI program is an exception, on the whole, people in theoretical computer science are being squeezed by the reductions in funding as a consequence of changed emphases at NSF, mostly in an engineering direction.

KF What might be the more practical interests on the part of the Department of Defense and the three services?

RK The support that's coming from them is principally in the area of parallel and distributed computations, and we're planning to run a workshop in the spring that would bring together mathematicians, numerical analysts, and computer architects. Of course, there are all kinds of meetings on supercomputers and parallel computation these days, but this particular one will specifically explore the interface between complexity theory and the more realistic concerns of computer users and designers.

KF There has been much debate over the merits of the Strategic Defense Initiative (SDI). Would you like to comment on it?

RK I don't intend to make a speech about the Star Wars initiative, nor do I pretend to be an expert on software engineering. But I have studied some of the evidence that presses the point that it is very dangerous to build a distributed system of unprecedented proportions that cannot be operational or tested until the critical moment. I am persuaded by those arguments to the extent that I am resolved personally not to involve myself in it.

KF Researchers in many fields are studying complexity. Can you comment on the relationship, if any, between the study of complexity in computer science and in other disciplines?

RK Complexity means many different things: there's descriptive complexity and computational complexity. An algorithm may be quite complex in terms of the way its pieces are put together, and yet execute very fast, so that its computational complexity is low. So you have all of these different notions of complexity. It's not clear to me that electrical engineers, economists, mathematicians, computer scientists, and physicists are all talking about the same beast when they use the word complexity. However, I do think there are some very worthwhile and interesting analogies between complexity issues in computer science and in economics. For example, economics traditionally assumes that the agents within an economy have universal computing power and instantaneous knowledge of what's going on throughout the rest of the economy. Computer scientists deny that an algorithm can have infinite computing power. They're in fact studying the limitations that have arisen because of computational complexity. So there's a clear link there with economics. Furthermore, one could say that traditional economics (here I'm really going outside my specialization) has disregarded information lags and the fact that to some extent we operate without full information about the economic alternatives available to us, much in the same way that a node in a distributed computer network can only see its immediate environment and whatever messages are sent to it. So the analogies are cogent, but one has to be careful because we're not always talking about the same thing when we speak of complexity.

KF Do you use the term 'heuristics' differently than do AI researchers?
RK People in AI distinguish between algorithms and heuristics. I think that they're all algorithms. To me an algorithm is just any procedure that can be expressed within a programming language. Heuristics are merely algorithms that we don't understand very well. I tend to live in an artificially precise world
where I know exactly what my algorithm is supposed to do. Now, when you talk about a program that's going to play good chess, translate Russian into English, or decide what to order in a restaurant (to mention a few tasks with an AI flavor), it's clear that the specifications are much looser. That is a characteristic of programs that those in AI consider heuristic.

KF David Parnas points out in a recent article that systems developed under the rubric of heuristic programming, that is, programming by trial and error in the absence of a precise specification, are inherently less reliable than those developed by more formal methods.

RK Yes, and that brings us back to SDI. That's one of the reasons for being concerned about it. I think that we have much more apparatus for debugging a program when we can at least define what the program is supposed to do.

KF Some members of the AI research community respond to that criticism by saying that they're trying to simulate humans and that humans have no precise specifications. The best they can hope for is a simulation of an unreliable system.

RK I really believe in trying for crisp hypotheses and crisp conclusions. I realize that certain areas in computer science have to be dominated by empirical investigations, but that doesn't relieve us of the responsibility of thinking very hard about what it is we're measuring, what it is we're trying to achieve, and when we can say that our design is a success. And I believe that a certain measure of scientific method is called for. I don't buy the idea that simply because you're simulating the somewhat unknowable cognitive processes of humans you are relieved of the obligation to have precise formulations of what you're doing.

KF Overall, how do you think computer science is doing as a discipline?

RK Computer science has enormous advantages because of the tremendous importance and appeal of the field now. In some measures, we have been quite successful. A good portion of the young talent in the country is attracted to our field, especially in the areas of artificial intelligence and theoretical computer science. In terms of our progression as a science, I think that to some extent we are victims of our own success. There are so many ways to get money, so many things to try, so many exciting directions, that we sometimes forget to think about the foundations of our discipline. We need to have a continuing interplay between giving free rein to our urge to tinker and try all kinds of neat things, and yet at the same time designing our experiments using the scientific method, making sure that the foundations develop well. Our tools are so powerful, the vistas are so great, the range for applications is so enormous, that there's a great temptation to plow ahead. And we should plow ahead. But we also have to remember that we're a scientific discipline and not just a branch of high technology.

KF You have noted the importance of a mixture of art and science, insight and intuition, as well as the more rigorous methods of investigation. Have there been times when something just came to you and you experienced the so-called eureka phenomenon that inventors describe?

RK I think we all have experienced it: waking up in the morning and having the solution to a problem. We have to remember that those eureka
experiences are usually preceded by a large amount of hard work that may sometimes seem unproductive. For example, when I read Cook's 1971 paper, it didn't take me very long to see that he had found a key to something phenomenally important, and to press on and try to demonstrate the scope and significance of his results. In a sense, it was almost instantaneous, but it was prepared for by well over a decade of work. I think it's characteristic that these moments when one makes connections come after a long period of preparation.

KF Have you ever been talking to somebody, and an offhand remark they made caused something to click?

RK Oh, sure. I find it very helpful to explain what I'm doing because my mistakes usually become obvious to me much quicker. And I listen to others because I'm really a believer in building up one's knowledge base. It greatly increases the probability that one will find unexpected connections later.
Categories and Subject Descriptors: A.0 [General Literature]: General - biographies/autobiographies; F.0 [Theory of Computation]: General; F.1.1 [Computation by Abstract Devices]: Models of Computation - computability theory; F.1.2 [Computation by Abstract Devices]: Modes of Computation - parallelism; probabilistic computation; F.1.3 [Computation by Abstract Devices]: Complexity Classes - reducibility and completeness; relations among complexity classes; F.2.0 [Analysis of Algorithms and Problem Complexity]: General; G.2.1 [Discrete Mathematics]: Combinatorics; G.2.2 [Discrete Mathematics]: Graph Theory; K.2 [History of Computing]: People

General Terms: Performance, Theory

Additional Key Words and Phrases: Richard Karp, Turing Award
Index by ACM Computing Reviews Classification Scheme The Computing Reviews Classification System was designed during 1979-1981 by an international committee headed by Professor Anthony Ralston of the State University of New York at Buffalo. First used in 1982 by ACM to classify its own publications, it has been gaining acceptance as an indexing tool for the search and retrieval of computing literature by subject area. A. General Literature A.0
GENERAL Karp, Richard M., 'Combinatorics, Complexity, and Randomness,' p. 433
C. Computer Systems Organization C.0
GENERAL Wirth, Niklaus, 'From Programming Language Design to Computer Construction,' p. 179
C.1
PROCESSOR ARCHITECTURES C.A.A Single Data Stream Architectures Backus, John, 'Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Progams,' p. 63
C.5
COMPUTER SYSTEM IMPLEMENTATION
467
C.5.2 Microcomputers Ritchie, Dennis M., 'Reflections on Software Research,' p. 163 D. Software D.1
PROGRAMMING TECHMIQUES D. 1.1 Applicative (Functional) Programming Backus, John, 'Can PRogramming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs,' p. 63 D.1.2 Automatic Programming Knuth, Donald E., 'omputer Programming as an Art,' p. 33 Wilkes, Maurice V., 'Computers Then and Now,' p. 197
D.2
SOFTWARE ENGINEERING D.2.1 Requirements /Specif nations Wirth, Niklaus, 'Frnr, Programming Language Design to Computer Construction,' p. 1779 D.2.2 Tools and Techniques Floyd, Robert W., 'The Paradigms of Programming,' p. 13 D.2.4 Program Verificatio a Backus, John, 'Can ProgrammingBe Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs,' p. 63 Dijkstra, Edsger W., 'The Humble Programmer,' p. 17 D.2.5 Testing and Debugging Thompson, Ken, Refiections on Trusting Trust,' p. 171 D.2.9 Management Codd, E. F., 'Relatioas l Database:A PracticalFoundation for Productivity,' p. 391
D.3
PROGRAMMING LANGt AGES D.3.0 General Dijkstra, Edsger IV., 'The Humble Programmer,' p. 17 D.3.1 Formal Definitions ar.d Theory Backus, John, 'Can i ogramming Be Liberated from the von Neumann Style? A Functional (tyle and Its Algebra of Programs, p. 63 Minsky, Marvin, 'Form and Content in Computer Science, ' p. 219 Perlis, Alan J., 'The Snthesis of Algorithmic Systems,' p. 5 D.3.2 Language Classificalions Hoare, Charles Antonj. Richard, 'An Emperor's Old Clothes,' p. 143 Perlis, Alan J., 'The Snthesis of Algorithmic Systems,' p. 1 Thompson, Ken, -Re,7' ctions on Trusting Trust, ' p. 171 D.3.3 Language Constructs Dijkstra, Edsger v., The Humble Programmer,' p. 17 Floyd, Robert W. 'The Paradigms of Programming, p. 131 Perlis, Alan J., 'The S)nthesis of Algorithmic Systems,' p. 5
468 Index by CR Classification Scheme
D.3.4 Processors
Codd, E. F., 'Relational Database: A Practical Foundation for Productivity,' p. 391
Hoare, Charles Antony Richard, 'The Emperor's Old Clothes,' p. 143
Wirth, Niklaus, 'From Programming Language Design to Computer Construction,' p. 179
Minsky, Marvin, 'Form and Content in Computer Science,' p. 219

D.4 OPERATING SYSTEMS

D.4.0 General
Ritchie, Dennis M., 'Reflections on Software Research,' p. 163

D.4.1 Process Management
Hoare, Charles Antony Richard, 'The Emperor's Old Clothes,' p. 143

D.4.3 File Systems Management
Perlis, Alan J., 'The Synthesis of Algorithmic Systems,' p. 5

D.4.6 Security and Protection
Thompson, Ken, 'Reflections on Trusting Trust,' p. 171
F. Theory of Computation

F.0 GENERAL
Karp, Richard M., 'Combinatorics, Complexity, and Randomness,' p. 433
F.1 COMPUTATION BY ABSTRACT DEVICES

F.1.1 Models of Computation
Karp, Richard M., 'Combinatorics, Complexity, and Randomness,' p. 433
Newell, Allen, and Simon, Herbert A., 'Computer Science as Empirical Inquiry: Symbols and Search,' p. 287

F.1.2 Modes of Computation
Cook, Stephen A., 'An Overview of Computational Complexity,' p. 411
Karp, Richard M., 'Combinatorics, Complexity, and Randomness,' p. 433

F.1.3 Complexity Classes
Karp, Richard M., 'Combinatorics, Complexity, and Randomness,' p. 433
F.2 ANALYSIS OF ALGORITHMS AND PROBLEM COMPLEXITY

F.2.0 General
Karp, Richard M., 'Combinatorics, Complexity, and Randomness,' p. 433

F.2.1 Numerical Algorithms and Problems
Cook, Stephen A., 'An Overview of Computational Complexity,' p. 411
Iverson, Kenneth E., 'Notation as a Tool of Thought,' p. 339
Minsky, Marvin, 'Form and Content in Computer Science,' p. 219
Rabin, Michael O., 'Complexity of Computations,' p. 319
Wilkinson, J. H., 'Some Comments from a Numerical Analyst,' p. 243

F.2.2 Nonnumerical Algorithms and Problems
Rabin, Michael O., 'Complexity of Computations,' p. 319
F.3 LOGICS AND MEANINGS OF PROGRAMS

F.3.2 Semantics of Programming Languages
Scott, Dana S., 'Logic and Programming Languages,' p. 47
F.4 MATHEMATICAL LOGIC AND FORMAL LANGUAGES

F.4.1 Mathematical Logic
Backus, John, 'Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs,' p. 63
Minsky, Marvin, 'Form and Content in Computer Science,' p. 219
Rabin, Michael O., 'Complexity of Computations,' p. 319
Scott, Dana S., 'Logic and Programming Languages,' p. 47

F.4.2 Grammars and Other Rewriting Systems
Rabin, Michael O., 'Complexity of Computations,' p. 319
G. Mathematics of Computing

G.1 NUMERICAL ANALYSIS

G.1.0 General
Rabin, Michael O., 'Complexity of Computations,' p. 319
Wilkinson, J. H., 'Some Comments from a Numerical Analyst,' p. 243

G.1.3 Numerical Linear Algebra
Backus, John, 'Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs,' p. 63
Wilkinson, J. H., 'Some Comments from a Numerical Analyst,' p. 243

G.1.5 Roots of Nonlinear Equations
Backus, John, 'Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs,' p. 63

G.1.m Miscellaneous
Iverson, Kenneth E., 'Notation as a Tool of Thought,' p. 339

G.2 DISCRETE MATHEMATICS

G.2.1 Combinatorics
Iverson, Kenneth E., 'Notation as a Tool of Thought,' p. 339
Karp, Richard M., 'Combinatorics, Complexity, and Randomness,' p. 433

G.2.2 Graph Theory
Iverson, Kenneth E., 'Notation as a Tool of Thought,' p. 339
Karp, Richard M., 'Combinatorics, Complexity, and Randomness,' p. 433
G.3 PROBABILITY AND STATISTICS
Cook, Stephen A., 'An Overview of Computational Complexity,' p. 411
H. Information Systems

H.2 DATABASE MANAGEMENT

H.2.1 Logical Design
Codd, E. F., 'Relational Database: A Practical Foundation for Productivity,' p. 391
H.2.2 Physical Design
Bachman, Charles W., 'The Programmer as Navigator,' p. 269

H.3 INFORMATION STORAGE AND RETRIEVAL

H.3.2 Information Storage
Bachman, Charles W., 'The Programmer as Navigator,' p. 269

H.3.3 Information Search and Retrieval
Bachman, Charles W., 'The Programmer as Navigator,' p. 269
I. Computing Methodologies

I.1 ALGEBRAIC MANIPULATION

I.1.1 Expressions and Their Representation
Iverson, Kenneth E., 'Notation as a Tool of Thought,' p. 339

I.2 ARTIFICIAL INTELLIGENCE

I.2.1 Applications and Expert Systems
Wilkes, Maurice V., 'Computers Then and Now,' p. 197

I.2.3 Deduction and Theorem Proving
McCarthy, John, 'Generality in Artificial Intelligence,' p. 257

I.2.4 Knowledge Representation Formalisms and Methods
McCarthy, John, 'Generality in Artificial Intelligence,' p. 257

I.2.6 Learning
McCarthy, John, 'Generality in Artificial Intelligence,' p. 257
Minsky, Marvin, 'Form and Content in Computer Science,' p. 219

I.2.7 Natural Language Processing
Newell, Allen, and Simon, Herbert A., 'Computer Science as Empirical Inquiry: Symbols and Search,' p. 287

I.2.8 Problem Solving, Control Methods and Search
Newell, Allen, and Simon, Herbert A., 'Computer Science as Empirical Inquiry: Symbols and Search,' p. 287
J. Computer Applications

J.2 PHYSICAL SCIENCES AND ENGINEERING
Hamming, R. W., 'One Man's View of Computer Science,' p. 207
K. Computing Milieux

K.2 HISTORY OF COMPUTING
Dijkstra, Edsger W., 'The Humble Programmer,' p. 17
Karp, Richard M., 'Combinatorics, Complexity, and Randomness,' p. 433
Newell, Allen, and Simon, Herbert A., 'Computer Science as Empirical Inquiry: Symbols and Search,' p. 287
Wilkes, Maurice V., 'Computers Then and Now,' p. 197
Wilkinson, J. H., 'Some Comments from a Numerical Analyst,' p. 243
K.3 COMPUTERS AND EDUCATION

K.3.0 General
Minsky, Marvin, 'Form and Content in Computer Science,' p. 219

K.3.2 Computer and Information Science Education
Floyd, Robert W., 'The Paradigms of Programming,' p. 131
Hamming, R. W., 'One Man's View of Computer Science,' p. 207

K.6 MANAGEMENT OF COMPUTING AND INFORMATION SYSTEMS

K.6.1 Project and People Management
Knuth, Donald E., 'Computer Programming as an Art,' p. 33
Ritchie, Dennis M., 'Reflections on Software Research,' p. 163
K.7 THE COMPUTING PROFESSION

K.7.0 General
Knuth, Donald E., 'Computer Programming as an Art,' p. 33

K.7.1 Occupations
Dijkstra, Edsger W., 'The Humble Programmer,' p. 17

K.7.m Miscellaneous
Hamming, R. W., 'One Man's View of Computer Science,' p. 207
Name Index

A
Bobrow, Daniel, 171 Böhm, C., 50 Boole, George, 340 Borgwardt, Karl-Heinz, 450, 455 Borodin, Allan, 418, 419, 423, 426 Bowles, Ken, 184 Brent, R. P., 331 Brinch Hansen, 184 Brooker, 201
Acton, Forman, 51 Adleman, L. A., 422-23, 425 Adler, Ilan, 450 Aho, A. V., 414 Alvord, L., 388 Ammann, Urs, 183 Arvind, 67 Ashenhurst, Robert, 51 Attneave, F., 233
Burgess, J., 388 Burstall, 89
B Babbage, Charles, 340 Bachman, Charles, 194, 269-85 Backus, John, 3, 63-130 Balzer, Robert, 133, 134 Beardwood, 449 Bellman, Richard, 436 Bennett, J. H., 413, 420 Bentham, Jeremy, 40, 42
C Cajori, F., 340, 379 Canning, Richard G., 269 Carlson, Walter, 131, 143, 339 Carr, John, 200
Chaitin, G. J., 425 Church, A., 50, 59, 60, 61 Church, Alonzo, 51, 53, 54, 295 Clark, K. L., 51 Cobham, Alan, 413, 414, 416, 419, 420 Cocke, John, 134 Codd, E. F., 194, 391-410 Cohen, H., 417, 422 Cook, Roger, 145, 150 Cook, Stephen A., 195, 221, 334,
Bentley, J., 450 Berlekamp, E. R., 421
Berkling, K. J., 125 Berliner, H., 309 Berry, M. J. A., 388 Blaauw, G. A., 383 Black, F. A., 262 Blum, Manuel, 329, 412, 425, 443, 451
411-31, 443, 444, 446, 455 Coppersmith, D., 416 Cowell, D. F., 51 Coxeter, H. S. M., 38 Curry, H. B., 50, 59, 60, 61 D Dantzig, George, 254, 436, 438, 443, 445, 454 Date, Chris, 407 Davis, Philip, 249, 455 Davis, R., 134 de Bakker, Jaco, 52 Dennett, Dan, 315 Dennis, J. B., 67 Diffie, W., 425 Dijkstra, Edsger W., 2, 17-32, 40, 89, 124, 132, 145 Donahue, J. E., 50 E Eckert, Presper, 198 Edmonds, Jack, 420, 440, 455 Eilenberg, S., 51 Ershov, Andrei, 39 Ershov, Yu. L., 54 F Faddeeva, V. N., 250 Fagin, Ronald, 129 Falkoff, A. D., 384 Feferman, Sol, 52 Feigenbaum, E. A., 230, 307
Feldman, J., 230 Feynman, R. P., 220, 238 Fischer, Michael J., 39, 332, 333, 418 Floyd, Robert W., 3, 89, 131-42, 153, 332, 333 Ford, Lester, 438, 455 Forsythe, George, 52, 141, 208 Fox, Leslie, 53, 245, 252 Frank, Geoffrey A., 129 Friedberg, R. M., 258, 259, 260 Fulkerson, Raymond, 436, 438, 455 G Galler, Bernard A., 287, 319 Galton, F., 232 Gandy, Robin, 53 Gelernter, 134 Gill, John, 423, 451
Goldschlager, L. M., 424 Goldstine, H., 253 Golub, 254 Goodwin, Charles, 245, 252 Gostelow, K. P. A., 67 Gray, James N., 128 Green, Cordell, 136 Gries, David S., 129 H Halton, 449 Hammersley, 449 Hamming, Richard, 192, 207 Harrison, Michael, 443 Hartmanis, Juris, 412, 417, 442, 455
Hayes, P. J., 262 Hazony, Y., 388 Held, Michael, 436-37 Hellman, M. E., 425 Hewitt, C., 223 Hillmore, Jeff, 147 Hoare, Charles Antony Richard, 3, 54, 89, 143-61, 184 Hoffman, Alan, 443 Hofmann, Karl H., 54 Hopcroft, John E., 414, 443 Howard, C. Frusher, 38 Huskey, Harry, 52, 181, 203, 245, 248 I
Iverson, Kenneth, 74, 194, 339-87
J Johnson, David, 446, 447 Johnson, Selmer, 436 Jolley, L. B., 380 K Kahan, W., 254 Kaltofen, E., 417 Karatsuba, A., 415 Karmarkar, Narendra, 455, 457 Karp, Richard M., 195, 333, 415, 420, 433-53 Kay, Alan, 168 Kerner, I. O., 378, 383 Kernighan, Brian W., 39, 437, 447, 448 Khachian, L. G., 415, 417 King, Paul, 145 Kleene, Steve, 53 Knuth, Donald E., 2, 33-46, 141, 221, 443
Kolmogorov, A. N., 414, 425 Kosinski, P., 67 Kowalski, R. A., 58 Kreisel, Georg, 52 Kuhn, Thomas S., 133, 134
Morris, James H., Jr., 89, 103, 129, 201
Moses, J., 227, 228 Mostowski, Andrzej, 53 Motzkin, T., 330 Mueller, Robert E., 38 Munro, I., 330, 419
L Landin, Peter, 51, 145, 153 Lawler, Eugene, 443 Lederberg, 307 Lehmer, Dick, 52 Lehmer, Emma, 38, 52 Lehrer, Tom, 240 Leighton, 450 Lenstra, A. K., 417 Lenstra, H. W., Jr., 417, 422 Lesk, Mike, 166 Levin, Leonid, 420, 445, 455 Levin, M., 224 Lewis, P. M., 417 Lifschitz, V., 264 Lin, Shen, 437, 447, 448 Lorie, Raymond, 404 Lovasz, L., 417 Luks, E. M., 417 M McCarthy, John, 52, 89, 194, 220, 257-67, 262, 264, 296 McCulloch, Warren S., 219 McDonnell, E. E., 382 McGeoch, 450 McIlroy, M. Douglas, 17-18, 164 McIntyre, Donald, 384 McJones, Paul R., 108, 128, 129 McNaughton, Robert, 442, 455 Mago, Gyula A., 125, 129 Manes, E. G., 50 Manna, Z., 57, 58, 89, 108 Martin, W. A., 227 Mauchly, John, 198 Megiddo, Nimrod, 450 Metropolis, Nick, 57 Meyer, A. R., 332, 333, 417, 418, 455 Micali, S., 425, 426 Mill, John Stuart, 36 Miller, Gary, 416 Miller, Raymond E., 443, 461 Milne, R., 50, 51, 53, 54 Milner, Robin, 54, 89 Minsky, Marvin, 135, 138, 192, 219-42, 230 Mittman, Ben, 141 Morris, Bob, 164
N Naur, Peter, 23, 64, 129, 145 Newell, Allen, 193, 220, 259
Nilsson, N. J., 306 Norman, Don, 164 O Ofman, Yu., 415 Ohran, Richard, 185 Orth, D. L., 376, 381 Ossanna, Joseph, 164 P Papert, Seymour, 192, 220, 222, 229, 233 Park, David, 54 Parnas, David, 132, 465 Paterson, M., 330, 332
Pearl, J., 314 Penrose, Roger, 53 Perlis, Alan, 2, 5-16, 33, 201 Pesch, 388 Piaget, Jean, 231, 316 Pippenger, Nicholas, 424 Pitts, Walter, 219 Platek, Richard, 52
Plauger, P. J., 39 Plotkin, Gordon, 54 Poley, Stan, 39 Post, Emil, 295, 441 Pratt, V. R., 424 Presburger, M., 325 Putnam, 455 Pym, Jill, 145, 147 R Rabin, Michael, 3, 47, 48, 51, 195, 319-38, 333, 412, 413, 417, 421, 442, 451, 455 Rackoff, Charles, 426 Reynolds, John D., 54, 89, 124, 129 Ritchie, Dennis M., 4, 163-69, 171, 172 Ritchie, R. W., 413 Rivest, R. L., 422, 425
Robinson, Julia, 50, 443, 455 Robinson, Raphael, 50 Rogers, Hartley, 442 Rosen, Barry K., 128 Rosenberg, Arnold, 443 Roth, J. P., 435 S Sammet, Jean E., 63-64 Samuel, 307 Savage, J. E., 413 Schönfinkel, 106 Schönhage, 331, 416 Schwartz, J. T., 422, 423 Scott, Dana, 3, 47-62, 50, 55, 66, 89, 108, 225, 319 Selfridge, Oliver, 220 Shackleton, Pat, 144 Shadwell, Thomas, 49 Shamir, A., 422, 425 Shamir, Ron, 450 Shannon, C. E., 221, 294, 413 Shaw, J. C., 287, 289 Shaw, R. A., 424 Shortliffe, E. H., 134 Shrobe, Howard, 139 Simon, Herbert, 141, 193, 220, 259 Slagle, J. R., 228 Smale, Steven, 415, 450, 455 Snow, C. P., 38 Solomonoff, Ray, 220 Solovay, R., 416, 421, 422, 451 Speiser, Ambros P., 181 Spence, R., 388 Stearns, Richard E., 412, 417, 442, 455 Steel, Tom, 52 Stockmeyer, L. J., 417-18, 424, 455
Stoy, J. E., 50 Strachey, Christopher, 48, 49, 50, 51, 52, 53, 153 Strassen, V., 221, 331, 332, 416, 419, 421, 422, 451 Suppes, Pat, 52 T Tarski, Alfred, 50 Tennent, Robert, 49 Thompson, Ken L., 4, 163, 169, 171-77
Tolle, David, 129 Toom, A. L., 221, 415, 416 Trotter, Hale, 51 Turing, Alan, 53, 54, 154, 191, 197, 199, 201, 208, 219, 229, 230, 244-48, 249, 251, 252, 253, 295, 296, 310-11, 312, 330, 440, 441, 442, 454 U Ullman, J. D., 414 Upfal, Eli, 461 Uspenski, V. A., 114 V Valiant, Leslie G., 332, 420, 421 Van Emden, M. H., 58 Van Wijngaarden, Adriaan, 18, 181 Vogelaere, René de, 52 Von Neumann, Johann, 198, 248, 253, 414 Von zur Gathen, Joachim, 426 W Watson, J. D., 167 Weinberg, Sharon, 407 Weissman, C., 224 Whitehead, A. N., 340 Wigderson, Avi, 461 Wilkes, Maurice, 191-92, 197-205, 244 Wilkinson, J. H., 191, 194, 243-56 Williams, John H., 129 Winograd, Shmuel, 223, 330, 331, 416, 443 Wirth, Niklaus, 4, 132, 154, 155, 179-89 Woodger, Mike, 245 Y Yamada, Hideo, 413, 442, 455 Yao, A. C., 425
Z Zilles, Stephen N., 128
Subject Index

A
Abstraction method, 133 Abstraction patterns, 27-28 Access control, 176. See also Security Access methods, 272-80, 414 Access paths, DBMS, 393-406 Ada, 16, 157-58, 188 AI. See Artificial intelligence Algebra, of FP systems, 89-108; of programs, 63-130; relational, 397 Algebraic expressions and equations, 322 Algebraic proofs, 75-76 ALGOL, 6-7, 23; compared with AST systems, 116-17; and future languages, 10-18; and mathematical notation, 378; subset design for, 147-49 ALGOL Compiler, 148 ALGOL 60, 23; for clause in, 28; design and implementation of, 145-47, 179, 180, 181-82 ALGOL 68, 156 ALGOL W, 154, 179, 180, 182-183 ALGOL working group, 145, 154-56 Algorithms, analysis of, 326; design of, 413-15; for dynamic programming, 134; and matrix field, 253; most important, 415; nondeterministic,
135; probabilistic, 334-35; 421-23; 448-51, 453, 455; randomized, 451-53; synthesis of, 5-15 American Association for Artificial Intelligence (AAAI), 315 American National Standards Institute (ANSI) Relational Task Group, 399 Analog computers, 212-13 ANSI/X3/SPARC Study Group on Database Management, 281, 283 APL, 16, 341, 378-86; teaching of, 388; vs. word-at-a-time programming, 74 Application programming, business, 212, 283; DBMS, 392-406; FP system, 79 Applicative state transition (AST) systems, 67, 69, 76, 77, 115-26; and computer design, 125; naming in, 124; program example, 118-23; structure of, 116-18; variants of, 123 Arrays, vs. lists, 16; in mathematics vs. APL, 379; as tool of thought, 346 Art, vs. science, 35-40 Artificial intelligence (AI), 36, 203-4; databases for, 258, 284-85; generality in, 257-67; and heuristic search, 289, 300-312; and problem solving, 259, 298, 299, 301-8; production systems in,
259-60; representing behavior and knowledge with, 258-63; and symbol system, 289-300 Assignment statements, 68-69 Association for Computing Machinery (ACM), 19, 210, 288 AST systems. See Applicative state transition systems Atomism, 291 AT&T Bell Laboratories, 163-69 Automatic assembly devices, 204 Automatic computers, 18, 19, 20 Automatic programming, 134, 200, 201 Axiomatic context, 265 Axiomatic semantics, 74, 75, 153-54 B Backus-Naur Form (BNF), 23, 63-64 Behavior modeling, 258-59, 299 Bell Laboratories, 163-69 Bendix G-15, 181 Best-first search, 306 Biology, 290 Block-encoding, 336 BNF, 23, 63-64 Bootstrapping, 187, 229 Branch-and-bound paradigm, 133 British Computer Society, 19 Britton-Lee IDM500, 405 Business applications, 212, 283 C C compiler, 172-75 Cell doctrine, 290 Changeable parts, 72-73 Checker program, 307 Chess programs, 304-5, 306, 309, 316 Chomsky hierarchy, 51 Church-Rosser theorem, 107-8 Circumscription, 264 COBOL, 212 COBOL Database Task Group, 274 CODASYL Database Task Group, 270, 393, 405 Cognitive psychology, 297, 299-300 Combinatorial explosion, 454, 457 Combinatorial optimization, 448-50, 454- 55 Combinatorics, 435-40 Combinators, 106-8 Combining forms, 73-74 Common sense database, 258, 262 263 Communicating sequential processes,
153 Communications of the ACM, 34 Compiler-compiler, 203 Compilers, ALGOL, 148; C, 172-75; design of, 145-46, 182; DBMS, 404; and form-content confusion, 223; heuristic, 226-28; Pascal, 183-84; single-pass, 146 Complexity theory, 321-36; 411-31, 433-53, 455-57 Compulsory declarations, 147 Computability theory, 441-42 Computable functions, 322 Computational theory, 219-23, 319-38, 411-31, 454-55 Computer applications, business, 212, 283; DBMS, 392-406; FP system, 79 Computer arithmetic, 323 Computer crime, 176 Computer design, 125, 127 Computer graphics, 203 Computerology, 249 Computers, automatic, 18, 19, 20; third-generation, 21 Computer science, as empirical inquiry, 287-317 Computer science curriculum, 207- 18, 222-23, 229-41. See also Teaching Computing methodologies, 193 Computing Science Research Center, 165 Computing system models, 66-67 Conceptual schema, 281, 283 Conditional expressions, 82 Conference on Software Engineering, 25 CONNIVER, 298 Constants and variables, 9- 12 Contamination, 277, 278 Context, formalizing, 265-66 Context-free grammars, 51, 223, 323 -24 Control statements, 68, 69 Conversational programming, 10, 12, 16 CONVERT, 223 Correctness proofs, 26-27, 37, 75-76, 89 COS (Corporation for Open Systems), 282 C language, 163-77 D Database, common sense, 258, 262, 263; and complexity theory, 321,
335-36; PROLOG, 16, 261; relational, 391-410 Database management systems (DBMS), 269 -70, 272 -80; and artificial intelligence, 284-85; range of services in, 402-3; relational, 391-410 Data manipulation language (DML), 396 Data models, 125, 395-96 Data processing, and complexity theory, 322-33 Data structure definition language (DDL), 396 Data structure diagrams, 270 Data structures, 201-2; in artificial intelligence programs, 258; cost functions of, 321, 335-36; and syntax, 12-14; synthesis of language and, 9 Data sublanguage, 397, 399-400, 406 DBMS. See Database management systems Debugging, 24, 175 Definition, in FP systems, 83 DENDRAL, 307 Denotational semantics, 74-75 Department of Defense, 157, 463, 464 Designation, 292, 295, 296 Design principles, 187-89 Digital computer, 294-95 Direct access storage devices, 272, 277 Directed graphs, 366-69 Directory-assistance system, 166 Distributivity proof, 372-73 Divide-and-conquer paradigm, 133, 140 Dyadic transpose, 374 E Economics, software vs. hardware, 25 EDSAC, 22 Education. See Teaching Efficiency, 41 Eigensystem, 250, 253 Elliott ALGOL System, 147 Elliott Brothers (London) Ltd., 144 Elliott 503, 144, 148-53 ENFORM, 400 Engineering, 209 ENIAC, 198 Enterprise models, 283 Entry to a block, 10 EP, 16 Error analysis, 244, 250 Ethics, 217, 218 Euler, 179, 180, 182
European Computer Manufacturers Association, 156 Expansion theorems, 97-98 Expert systems, 284, 315 F Factored solution, 29 Fast Fourier Transform (FFT), 321, 328, 330-31, 415, 416 File sorting, 324, 332-33 Fingerprinting function, 452 Fixpoint semantics, 58 Formal functional programming (FFP) system, 108- 15 Formalism, 219 For statement, 10,13 FORTRAN, 22-23, 24, 28, 63-64, 141, 147 FP system. See Functional programming system Frame problem, 263, 264 Framework, and changeable parts, 72 -73 'Funarg' problem, 108 Functional forms, 77, 81-83 Functional notation, 346-49, 379 Functional programming (FP) systems, 16, 63-130; advantages of, 88; algebra of programs for, 89-108; compared with von Neumann, 70-72; definition in, 83; examples of, 85-87; expressive power of, 88; limitations of, 87-88; as programming language, 87; syntax of, 109 Functional representations, 359-67 Functions, in FP systems, 79 Function space, 58-61 G Game-playing programs, 304-5, 306, 307, 309, 316 Gaussian elimination, 140, 251, 252 GEDANKEN, 124 General Problem Solver (GPS), 259, 298, 299, 306 Generate/filter/accumulate paradigm, 139 Geology, 291 Germ theory, 291 H Halting Problem, 441-42, 454 Hardware: DBMS, 405; design of, 125,
127, 185; economics of, 25; effect on thought processes, 22 Hardware/software interface, 186 HEARSAY speech understanding system, 309 Heuristic programming, 229-41 Heuristic search, 289, 300-12, 314-15, 316, 464-65 Hierarchical systems, 29-30, 51 Hilbert's Tenth Problem, 440-43, 454 Hitech chess machine, 316 Horn clauses, 261
IBM, 150, 168 ICL CAFS, 405 Identities, 370-77 IFIP, 156 IFIP Working Group, 52, 147, 182 Information, nonlocal use of, 308-9 Information extraction, 308-10 Information hiding method, 133 Information processing psychology 299 Information theory, 294, 425 INGRES system, 399, 400 Initialization, 10- 11 Inner products, 374-75 Integrated Data Store (I-D-S), 269-70, 274, 278 Intelligence, 305-6 Intelligent action, and symbol system, 293, 297-300 Interference, 277-78 International Business Machines (IBM), 150, 168 International Federation of Information Processing (IFIP), 52, 147, 156, 182 International Organization for Standardization (ISO), 282, 283 Interpretation, 292, 295 I/O interrupts, 20 Isomorphisms, 60-61 Iteration theorem, 100-3 K Knowledge, and search, 315, 316 Knowledge-based system, 284 Knowledge engineer, 315 Knowledge representation, 261-64 L Lambda calculus, 53-54, 61, 67, 69,
106-8, 125 Lattices, 55 Laws of qualitative structure, 290-91 Learning, and 'new math,' 229-41 Lilith, 180, 186 Linear programming, 252, 253, 254, 438, 456 LISP, 16, 23, 67, 69; and artificial intelligence, 296, 297; 'funarg' problem of, 108; syntax for, 12-13, 224-25 Lists, 295-96, 297; vs. arrays, 16; and complexity theory, 321, 335-36 Logic, 16, 260-64; formalizing, 294; and programming languages, 47-62; symbolic, 369 Logic Theorist, 306 M MACLISP, 139 Manchester University, 201 Mariner space rocket, 147 Marriage problem, 438-39 Massachusetts Institute of Technology (MIT), 135, 150 MATCHLESS, 223 Mathematical notation, characteristics of, 341-53; (vs.) programming languages, 340-41, 378-86; as tool of thought, 339-87 Mathematical Sciences Research Institute (MSRI), 463 Mathematics, pure vs. applied, 213-14; teaching, 211-12, 229-41 Means-ends analysis, 306 Mechanical theorem proving, 324-25, 333-34 Memory-time trade-off, 222 Merge sorting, 140 Mesa, 185 Metacomposition rule, 111 MICROPLANNER language, 135 ML, 16 Modula-2, 179, 180, 184, 186, 188 Monitors, 153 Moore School of Electrical Engineering, 197-98 Move generators, 302 Multics, 150, 165 Multiplication, 415- 16 Multiply-add trade-off, 221-22 Multiprogramming, 184, 276 MYCIN program, 134-35, 260 N National Physical Laboratory, 245-48,
251, 252 National Science Foundation, 463, 464 Network flow theory, 438-40 Newton's symmetric functions, 373 Nonlinear equations, 103-4 Nonmonotonicity, 263-64 Notation. See Mathematical notation NP-completeness, 411, 412, 420, 440-51, 456 Number representation, 359-60 Numerical analysis, 216, 243-56
O Object code, 147 Object-oriented programming, 16 Objects, 79 Open Systems Interconnection Reference Model, 282 Operating systems, 150, 153, 163-77 Operational models, 67 Operators, in mathematics vs. APL, 379, 382-83; as tool of thought, 346, 347-49 Oxford University, 53, 54 P Paradigms, dynamic programming, 134; language support by, 136-38; rule-based, 134-35; structured programming, 132-34, 137; teaching, 139-41 Parallel computation, 222, 423-25 Parsing, 131, 134, 321; and complexity theory, 323-24, 332; speed of, 332 Partitioning identities, 370-71 Pascal, 158, 179, 180, 183-84, 186 Pascal compiler, 183-84 Pattern-matching languages, 223, 226 Pattern recognition, 205 P-code, 184 PDP-11, 164, 165, 171 Permutations, 363-66 Personal computers, 184-85 Peterlee Relational Test Vehicle, 400 Physical symbol systems, 289, 292-93, 311, 315 Physics, 220 Pipelining, 16 PLANNER, 135, 298 Plate tectonics, 291 PL/I, 24, 156-57, 183 PL360, 182-83 Polynomial computations, 329-31,
354-58, 360-361, 376 Postponement operator, 225 Presburger's arithmetic, 325, 333 Princeton University, 51, 53-54 Probabilistic algorithms, 334-35, 421-23, 448-51, 453, 457 Problem solving, 259, 298, 299, 301-8 Problem space, 302, 303-4, 310 Procedure declaration, 10- 11 Production systems, 259-60 Productivity, 391-410 Professional standards, 217, 218 Program efficiency, 41 Program measurement tools, 44 Programmer, early views of, 18-19; as navigator, 269-80; role in education, 229-41 Programmer's Apprentice group (MIT), 139 Programming, 30, 31; APL vs. word-at-a-time, 74; automatic, 134, 200, 201; conversational, 10, 12, 16; functional, 63-130; linear, 252, 253, 254, 438, 454; paradigms of, 131-42; teaching of, 138-41; von Neumann, 70-72 Programming language design, 138, 143-61, 179-89; ALGOL subset, 147-49; and paradigm support, 141; principles of, 145-46; reliability in, 157, 158 Programming languages, context-free, 51, 223, 323-24; definition in, 153-54; describing, 228-29; economics of, 25; effect on thinking habits, 27-28; expressive power of, 226; and form-content confusion, 219, 220, 223-29; FP systems as, 87; frameworks vs. changeable parts in, 72-73; high- and low-level, 201-2; logic and, 47-62; vs. mathematical notation, 340-41, 378-86; paradigm support by, 136-38; and pattern matching, 223, 226; and relational sublanguages, 397, 399-400, 406; semantics of, 47-62; standardization of, 148; synthesis with data structures, 7-9; von Neumann, 65-66, 68, 74-76. See also specific languages Programming style, 215 Programming textbooks, 133-34, 139, 140 Program quality, 40-41 Programs, algebra of, 63-130; AST system example of, 118-23; structure
Subject Index 481
of, 25-26; verification of, 26-27 PROLOG, 16, 261 Proofs: correctness, 26-27, 37, 75-76, 89; formal, 370-77; notational, 351-53 Psychology, 297, 299-300 Publication delays, 279-80
Q QA4 language, 135 Qualification problem, 263, 264 Qualitative structure, 290-91 QUEL, 399, 400 Query-by-Example (QBE), 399, 404, 407 QUICKSORT, 145 R Random access models, 414 Randomized access processing, 272 Randomized algorithms, 451-53 Recognition systems, 309 Recursion theorem, 98-99, 220-21 Recursive functions, 145, 146, 147, 220-21, 328-29 Reducibility, 441-42 Reduction (Red) languages, 16, 109 Reification, 264 Relational algebra, 397 Relational databases, 391-410 Relational model, 394-96 Relational processing, 392, 396-99 Relational sublanguages, 397, 399-400, 406 Reliability, 24-25, 157, 158 Report on the Algorithmic Language ALGOL 60, 23, 145 Representations of functions, 359-67 Research, basic, 216; software, 163-69 Retrieval systems, 272 Rule-based paradigm, 134-35 S
Satisfiability Problem, 444-46 Science, vs. art, 35-40; vs. engineering, 209 Search systems, 308-10 Search trees, 304-5 Security, 145, 176, 321, 336 Semantic recognition systems, 309-10 Semantics, algebraic approaches to, E1 of AST systems, 117; axiomatic, 74-75, 153-54; denotational, 74-75;
of FFP systems, 109-15; fixpoint, 58; of FP systems, 84; of programming languages, 47-62 Semantic structures, 54-61 Sequential files, 271-72 Serial-parallel trade-off, 222 SHARE, 280 Shared access, 276-78 SHELLSORT, 144 Simpson's rule, 140 Situation calculus, 262-64 Smalltalk, 16 SNOBOL, 223 SOAR, 259 Social responsibility, 217-18 Software, courses in, 214-15; interface with hardware, 186; reliability and economics of, 24-25; research on, 163-69 Software crisis, 20-21, 25, 133 Sorting, 140, 324, 332-33 Spanning tree, 367 SQL, 399, 400, 401, 403 SQL/DS, 402, 403, 404 Stanford University, 52, 53 STAPL, 382 State-machine paradigm, 139-40 Stored program concept, 295 Strategic Defense Initiative (SDI), 464, 465 STRIPS, 263 Structured programming paradigm, 132-34, 137 Structure of Scientific Revolutions, The (Kuhn), 133 Subroutines, 22 Summarization proof, 371-72 Swiss Federal Institute of Technology (ETH), 181 'Symbol game,' 294 Symbolic behavior, 299 Symbolic logic, 369 Symbols, 289-90 Symbol systems, 289-300, 311, 315 Syntax, 201-3; and data structures, 12-14; excessive, 223, 224-26; of FP systems, 109; notational, 351; variations of, 14- 15 System R, 400, 401, 402, 404, 407 T Tandem Computer Corp., 403 Tandem ENCOMPASS, 400, 403 Teaching, 207-18, 229-41; and
form-content confusion, 222-23, 229-41; mathematical notation, 211-12, 229-41, 388; programming, 138-41; and textbooks, 133-34, 139, 140. See also Computer science curriculum TEIRESIAS program, 134-35 Textbooks, 133-34, 139, 140 Theorem proving, by machine, 324-25, 333-34 Time, upper and lower bounds on, 415-21 Time-memory trade-off, 222 Time sharing, 9, 198-99, 200 Traveling salesman problem, 436-37, 449-50 Turing machine, 5-6, 294-95 U Ultrasonic delay line, 199 Undecidability, 457 University of California at Berkeley, 52, 165 University of Chicago, 51 UNIX, 163-77
V Views, 402 Von Neumann bottleneck, 67-69, 123 Von Neumann computers, 64, 67-68 Von Neumann languages, 65-66, 68; alternatives to, 76-77; lack of mathematical properties in, 74-76; naming in, 124 Von Neumann programming, compared with functional, 70-72 Von Neumann systems, 67; compared with AST systems, 123-24; and computer design, 125; naming in, 124 W Weyerhaeuser Company, 278 Workstations, 184-85 X Xerox Corporation, 184 Xerox PARC, 168
ISBN 0-201-07794-9