Computer Science and Programming
Once upon a time in a usenet newsgroup long long ago and far far away the (paraphrased) claim was made:
Without the theoretical foundations laid by Computer Scientists (and their associates, mathematicians and EEs), programmers wouldn't have anything to work with. It is the theoreticians who lead the way in the development of new computing technology and methodology. The theoretical foundations were well in place before the programmers got there. Recursion and induction (two very powerful programming techniques) were in place centuries ago. The notion of an algorithm existed before electricity was even understood. Church invented the Lambda Calculus almost 20 years before LISP existed, the theorem-proving systems of Gentzen preceded AI, and Kleene had a number of formal language results before the first compiler existed.
I replied (also paraphrased):
This simply isn't so. Programming very much came first; the early programmers developed the principles of programming pretty much ad hoc. The theoretical foundations were developed after the fact. It would be much more accurate to say that the theoreticians are becoming the leaders in the development of new computing technology and methodology.
This is the normal situation with technology and science. Until World War II science had surprisingly little impact on technology (with the exception of Chemistry). The reason is fairly straightforward; in its earlier stages technology depends on rough and ready constructs and a lot of empirical knowledge. The corresponding science is inadequate because not enough is known about the processes involved to provide theoretical explanations of the phenomena.
In the case of programming, the significant developments all preceded the rise of computer science, which was very much an after-the-fact occurrence [and is still somewhat mired in the identity-search syndrome]. Concepts (and implementations) of ideas such as databases, operating systems, higher order languages, and data structures were brought into being by programmers. It should be noted, however, that the early programmers were not "graduates of programming schools"; they had highly varying backgrounds, with a predominance of Mathematics and Engineering.
Some of them, only some of them. Numerical analysis existed before computers; mathematical logic existed before computers. But a lot of stuff didn't, and the stuff that did wasn't always formulated in a relevant form. For example:
Nowadays we design algorithms using general principles, e.g. the greedy method, divide and conquer, and all that. Today I would expect that someone trained in modern methods could create the FFT (Fast Fourier Transform) as a classroom exercise; I remember when the FFT was first developed -- it was a real bombshell. It was also very mysterious.
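To make the point concrete, the "classroom exercise" version of the FFT is a straightforward application of divide and conquer: split the signal into even- and odd-indexed halves, transform each recursively, and combine with twiddle factors. A minimal Python sketch (mine, not anything from the original exchange; it assumes the input length is a power of two):

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT via divide and conquer.

    Assumes len(x) is a power of two. Returns the DFT as a list of
    complex numbers.
    """
    n = len(x)
    if n == 1:
        return list(x)
    # Divide: transform even- and odd-indexed samples separately.
    evens = fft(x[0::2])
    odds = fft(x[1::2])
    # Conquer: combine the half-size transforms with twiddle factors.
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odds[k]
        out[k] = evens[k] + twiddle
        out[k + n // 2] = evens[k] - twiddle
    return out
```

Each level of recursion does O(n) work over O(log n) levels, which is exactly the n log n behavior that made the FFT such a bombshell compared with the obvious n-squared transform.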
To repeat: HOL's were very much an ad hoc invention. I know about formal languages; I knew about them in 1961 when I started programming, and, believe me, they had very little to do with most programming. As Dijkstra observes, pre-computer logic was atemporal. Mathematicians did a great deal of twisting to force computer programming into the terms of classical recursion, induction, and formal logic.
Data structures are mostly a modern invention; I also remember when hash coding hit the streets -- there were people who felt that hash coding was somehow illegitimate. Somebody had to invent B-trees, you know. The whole concept of data structures is new.
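For readers who only know hashing as a built-in, the once-controversial idea was simply this: compute an arbitrary-looking function of the key to pick a bucket, and resolve collisions by chaining. A minimal illustrative sketch in Python (my own; class and method names are made up for illustration):

```python
class ChainedHashTable:
    """Hash coding with separate chaining, as a bare-bones sketch."""

    def __init__(self, buckets=16):
        # Each bucket holds a list (chain) of (key, value) pairs.
        self.buckets = [[] for _ in range(buckets)]

    def put(self, key, value):
        chain = self.buckets[hash(key) % len(self.buckets)]
        for i, (k, _) in enumerate(chain):
            if k == key:
                chain[i] = (key, value)  # overwrite existing key
                return
        chain.append((key, value))

    def get(self, key, default=None):
        chain = self.buckets[hash(key) % len(self.buckets)]
        for k, v in chain:
            if k == key:
                return v
        return default
```

The "illegitimate" feel came from relying on a scrambling function rather than an ordered search -- average-case constant time with no ordering guarantees at all, which was a genuinely new way of thinking about storage.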
Fundamentally, you can do things with computers that could never be done before and the things that you can do are constrained by factors that are not at all apparent without the experience of programming computers.
The time of the programmer making fundamental discoveries by ad hoc experimentation may be past. When you don't know anything muddling about is a great way to learn something -- better than detailed analysis based on radically insufficient data. But the 'muddle about' approach only takes you so far and then you go nowhere slowly.
One other thing programmers of yore had to learn by doing was how to write decent programs, e.g. how to structure them and what kinds of techniques to use. We wrote some pretty garbagy programs in the old days. [Unlike what's done today :-)]. You can write some pretty good software without knowing anything about the Lambda Calculus. But it is pretty hard to learn about writing good software without knowing a lot about how software is put together. Let me tell a story:
Many years ago I knew a hot programmer. He was good; he could take a problem and produce a working program that got good answers in short order. However, he did have one idiosyncrasy -- he didn't use subroutines. Everything was all in one big Fortran program. The day came when one of his twenty-thousand-line specials blew the compiler -- symbol table overflows all over the place. So he did what people told him to do: he split it up into subroutines. He divided the program in half and made a 10,000-line main program and a 10,000-line subroutine.
Now my contention is that, nowadays, you can't be a 'good software tool' without a good grounding in theory. A lot of the stuff we did in the old days just doesn't cut it today [er, ah, oh well -- I wish that were a true statement.] But my opinion is that we could do with a lot less 'computer science' and a lot more 'software engineering'. Don't get me wrong -- computer science is interesting and valuable. But software engineering is also interesting and valuable, and a lot of people who are taking CS degrees should be in SE.
This page was last updated March 1, 2008.