Chuck Moore on the Lost Art of Keeping It Simple

Chuck Moore is still the radical thinker of information technology. After an astonishing career designing languages (e.g. Forth), browser-based computers, CAD systems and CPUs, he is now energetically designing extremely low-powered 'green' multi-processor chips for embedded systems. Behind everything he does is a radical message: 'Embrace the entire problem. Keep it simple.'

In the early days of computing, software written for one make of machine would not run on any other. Computer scientists wanted to define “programming languages” that could be universally understood. That vision is the norm today: the software of the internet, for example, runs on every kind of computer. Chuck Moore was the first to turn it into reality: a simple language written in itself, with its own simple disk operating system, requiring just a tiny kernel written in native machine code to link with the hardware. As a result, Forth could be used everywhere, and was. It was the start of a life-long quest to provide computing power as simply and cheaply as possible, as widely as possible. It has led to pioneering work with what we now refer to as ‘network computers’, MISC CPU design and the development of computer languages.


That quest could be said to be a consequence of a talk John McCarthy gave at Stanford University in 1961, in which he said that if his approach to technology were adopted, “computing may some day be organised as a public utility, just as the telephone system is a public utility”, and that this could become the basis of a significant new industry.

As a pupil of McCarthy’s at MIT in the 1960s, Charles ‘Chuck’ Moore absorbed his teacher’s elegance, efficiency and simplicity into his own approach to software design, and he went on to produce numerous firsts, including the Forth language, which is still in use today.

Chuck’s numerous industry awards include membership of the US Computer History Hall of Fame, an honour bestowed on him by President Ronald Reagan. We last spoke with Chuck in 2009; in this interview we talked about his still-keen enthusiasm for technology, the beauty of code and his legacy as a technologist.


RM:
Chuck, the last time we spoke we touched on complexity and you said that simplicity was the only answer to this. Why do you think it is so tempting to solve a problem we don’t really have?
CM:
First, one doesn’t understand the problem initially and thinks it’s more difficult than it is. In the course of solving the problem, one learns about it. But in the rush to complete it, one never re-examines the premises, rewrites and simplifies. This is not a small omission; code can be orders of magnitude too elaborate.

Second, it’s irresistible to anticipate the future and expect the problem to grow in a certain direction. Thus code is added to facilitate future changes, which rarely occur. Providing for change is a good strategy, but it can be put off until the future arrives.

Finally, a difficult problem is more fun to solve than an easy one. So the problem is enhanced to be more worthy of attention. Artificial Intelligence is often applied in this way.

RM:
How much do you think you can sit down and figure out how something should work, assuming it’s not something that you’ve built before? Do you need to start writing code in order to really understand what the problem is?
CM:
Yes. There used to be, and perhaps still is, a distinction between programmers and coders. Programmers understand the problem; coders are grunt labour, in spite of the high-level languages that were supposed to let the programmers produce the code themselves. The distinction is stupid. You need feedback from the code to the problem.

I’m currently programming the tiny computers of GreenArrays’ chips. And there the statement of the problem and the architecture of the solution depend crucially on the amount of code.

RM:
Your processors at GreenArrays generally consume less than a watt. Do you feel rather despondent at the power demands of today’s high-end processors? Is this a consequence of the fact that we don’t have a sufficient grip on building parallel software?
CM:
Yes. I hear that 10% of power used goes to computers. My goal with Forth was to provide an example of how much software was required to solve a problem. That would help me judge the quality of other software. And perhaps motivate others to seek a simpler solution.

GreenArrays likewise offers a benchmark for how much power contemporary technology requires. You don’t have to use our chips to appreciate how inefficient a PC is. We have 30 times the MIPS, using 1% of the power. Landauer’s Limit suggests a minimal entropy increase is required for computation, far lower than anything we’ve achieved.

The proper measure is the energy required for a computation. It is measured in femtojoules. Typically, the faster an operation the less energy needed. Power (nanowatts) depends on speed, whereas energy does not. But power is easier to measure.
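The distinction can be shown with a back-of-the-envelope calculation (the figures below are purely illustrative, not GreenArrays measurements): energy per operation is power divided by operation rate, so a faster part at the same power actually spends less energy on each operation.

```python
# Energy per operation = power / throughput.
# All figures below are hypothetical, for illustration only.

def energy_per_op_femtojoules(power_watts, ops_per_second):
    """Convert average power and throughput to energy per operation, in fJ."""
    joules_per_op = power_watts / ops_per_second
    return joules_per_op * 1e15  # 1 J = 1e15 fJ

# A hypothetical 100 mW core running 1 billion operations per second:
big = energy_per_op_femtojoules(0.1, 1e9)      # ~100,000 fJ per operation
# A hypothetical 1 mW embedded core at 100 MIPS:
small = energy_per_op_femtojoules(0.001, 1e8)  # ~10,000 fJ per operation

print(big, small)
```

The same 100 mW figure yields half the energy per operation if the clock doubles, which is why Moore calls energy, not power, the proper measure.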

RM:
Is there anything that you did specifically to improve your skill as a programmer, such as write something in a language that you’d rather not write in?
CM:
Actually, I did. When I joined Mohasco in the mid 1960s it was to do system programming for their order-entry system. I found that the programmers all used COBOL, so I thought it prudent to learn COBOL.

I doubt this improved my skill, but I did learn enough about COBOL to be disillusioned about it, those who used it and the establishment that promoted it.

RM:
Knuth has an essay about developing TeX in which he talks about switching over to a pure, destructive QA personality and trying as hard as he can to break his own code. Do you think most developers are good at that?
CM:
No. It’s been demonstrated time and again that a naive user will crash an application. The developer knows intuitively what not to do and it never occurs to him to test that.

A good test for keyboard input is to play monkey on the keys. The program must survive random gibberish, including shift key combinations. A young child is excellent at this. Of course, a keyboard is too sophisticated to be a good input device.
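Moore's 'monkey on the keys' test is easy to automate. The sketch below is a minimal illustration (parse_command is a made-up stand-in for whatever routine handles keyboard input): it throws random gibberish at the handler and passes only if nothing ever raises.

```python
import random
import string

def parse_command(text):
    """Stand-in for a program's keyboard-input handler (hypothetical)."""
    parts = text.split()
    if not parts:
        return None
    return (parts[0].lower(), parts[1:])

def monkey_test(handler, rounds=10_000, seed=42):
    """Feed random printable gibberish to a handler.

    The handler passes only if it survives every input without raising.
    """
    rng = random.Random(seed)
    alphabet = string.printable  # letters, digits, punctuation, whitespace
    for _ in range(rounds):
        gibberish = "".join(rng.choice(alphabet)
                            for _ in range(rng.randint(0, 80)))
        handler(gibberish)  # any exception here means a failed test

monkey_test(parse_command)
print("survived")
```

A seeded generator makes the gibberish reproducible, so a crash found by the monkey can be replayed and fixed.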

RM:
Leaving aside designing user interactions, when is prototyping valuable? As opposed to just thinking about how something is going to work?
CM:
Hardware prototyping is essential to allow software development. It’s simpler and more realistic than coding a simulator. Software prototyping is useful to establish the scope of a program. The calculations involved need not be coded. Stubs can return a result, perhaps always 0. If a display is required, it can be a simple template. But the programmer has his arms around the entire problem.

He’s factored the problem into independent parts, established multiple threads and their communication and basically roughed out the solution. In an afternoon. Filling in the details follows relatively easily and can involve others.
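A software prototype in this spirit might look like the sketch below (all names are invented for illustration): every calculation is a stub returning zero, the display is a bare template, but the threads and the communication between them are real.

```python
import queue
import threading

def read_sensor():
    """Stub: the real measurement code is filled in later; returns 0 for now."""
    return 0

def smooth(sample):
    """Stub: the real calculation is filled in later; returns 0 for now."""
    return 0

def display(value):
    """Stub display: a simple template instead of a real UI."""
    print(f"value: {value}")

def producer(q, n):
    """One thread of the roughed-out solution: push n samples, then a sentinel."""
    for _ in range(n):
        q.put(read_sensor())
    q.put(None)  # sentinel: no more samples

def consumer(q):
    """The other thread's work: process samples until the sentinel arrives."""
    while (sample := q.get()) is not None:
        display(smooth(sample))

q = queue.Queue()
t = threading.Thread(target=producer, args=(q, 3))
t.start()
consumer(q)
t.join()
```

The structure of the whole program is now visible and testable; filling in read_sensor, smooth and display can be handed to others without disturbing it.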

RM:
Do you find some code aesthetically beautiful?
CM:
Yes, code can be beautiful. But it rarely is. Beauty lies in showing off the structure of a program, without distracting details. Factoring into subroutines or threads is essential for this. A wall of straight-line code is ugly. As is code loaded with comments or conditions.

Pretty code can result from clever instantiation of objects. This is supposed to suppress details. Also from the careful choice of names so that their purpose is clear.

RM:
Is there an upper limit on how big a piece of software can be and still be beautiful?
CM:
I think so. A painting may be beautiful, but not a gallery full of paintings. Beauty can be overwhelming.

My Forth has always utilized 1KB blocks of code. This is a quantum that is comprehensible and can be seen on a monitor. I can’t think of code I’d call beautiful that was larger than this.

My multi-computers have 1 block of code per computer, so possibly several blocks will combine to be beautiful. But this hasn’t happened yet.

RM:
A well-known programmer told me he’d noticed that people who are too clever in a certain dimension write the worst code, because they can see the whole thing in their head and consequently write code of enormous complexity, and lack empathy with those who have to use it. Is there something intrinsic in programming that is always going to draw people with that kind of mentality?
CM:
It’s important to see the entire problem in your mind. Whether you’re clever or not. It’s the only way to see the structure. And that is simple, not complex.

Problems are not complex. The things we ask computers to do are basically simple. Otherwise we couldn’t program the mindless devices. But we can make a problem complex by refusing to see its simplicity.

One way to do this is to apply too many people to the problem. Each will have his own preconceptions and communication is not enough to overcome them. The prime example is Microsoft, which has applied huge resources to maintaining 30 years of backward compatibility of Windows. The result is a hideous mass of complexity that no one understands. They’ve created a problem that cannot be solved.

RM:
Are the opportunities for this kind of programming going to go away? A lot of this low-level design is implemented in the VM that you’re using, or in the concurrency libraries that are being used. So for a lot of people, programming is about gluing things together.
CM:
Library routines have been glued together since Fortran days. Fortran was the first Virtual Machine, but there have been many, many since. Apparently there are two kinds of programming and programmers. The knowledge and effort required to mine library resources is comparable to that required to understand and code the problem directly. This is an example of the principle of “Conservation of Complexity”.

A problem has an intrinsic level of complexity, which as I’ve said is pretty low. You may know of a library routine that addresses it, or a bit of silicon that can be used, or a computer language that applies. But even if you shovel off the complexity into hardware or language, it remains. Sadly, it can be increased, just like entropy:

You can pick a language, find a library routine and use the silicon all at the same time. You can add or multiply the individual complexities to obtain a new one. We do this in society all the time, by passing laws to solve problems that only create new ones.

RM:
What makes a good programmer? If you are hiring or interviewing programmers – what do you look for?
CM:
Although I’ve never hired one, I’ve interviewed some. My personal judgement is not reliable. What I want is someone who wants to solve problems and move on. Not make a problem into a career. Someone with enthusiasm, willing to work long hours and meet deadlines.

What is not necessary is a college degree. A Computer Science graduate waves a red flag. They have learned how difficult it is to program and make it so.

RM:
You obviously have to have a good memory to be a reasonable programmer. Bill Gates once claimed that he could go to a blackboard and write out big chunks of the code to the BASIC that he had written for the Altair, a decade or so after he had originally written it. Can you remember your old code in that way?
CM:
I don’t spend much time thinking about the past. I like to live in the future. But it seems that many people have better memories than I do.

When I encounter code that I wrote in the past, I can read and understand it much more quickly than someone else’s code. But without seeing it, I can’t recall it. And I’m always impressed with how clever I was back then. I have the impression that I’ve learned to write better code over the years. That may not be true.

RM:
Speaking of being a language designer, how have your ideas about language design changed over time? Back in the 1970s people would design a language, make a complete design, implement it, and that would almost be that. Are languages now too big to design or implement all at once?
CM:
That’s a problem that I recognized and avoided back then. Forth is an extensible language. It has a basic structure of stacks and dictionary. Beyond that it can be augmented as required for any particular problem.

To try to anticipate all applications of a language leads to impossible syntax. This requires a miserably complex compiler. To create a language for each application is unsupportable. Forth provides a nice alternative.
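The stack-and-dictionary structure Moore describes is small enough to sketch. The toy interpreter below is a Python illustration, not real Forth (real Forth compiles definitions and does far more), but it shows the key idea: new words extend the dictionary at run time, so the language grows with the application.

```python
# A toy Forth-like interpreter: a data stack plus a dictionary of words.
# Illustrative only -- not real Forth.

stack = []
words = {
    "+":   lambda: stack.append(stack.pop() + stack.pop()),
    "*":   lambda: stack.append(stack.pop() * stack.pop()),
    "dup": lambda: stack.append(stack[-1]),
    ".":   lambda: print(stack.pop()),
}

def interpret(source):
    tokens = iter(source.split())
    for tok in tokens:
        if tok == ":":                      # ': name body ;' defines a new word
            name = next(tokens)
            body = []
            for t in tokens:
                if t == ";":
                    break
                body.append(t)
            words[name] = lambda b=body: interpret(" ".join(b))
        elif tok in words:
            words[tok]()                    # execute a known word
        else:
            stack.append(int(tok))          # anything else is a number

interpret(": square dup * ;")               # extend the dictionary
interpret("7 square .")                     # prints 49
```

Nothing about `square` was anticipated by the language; the application added it, which is the alternative Moore offers to an all-anticipating syntax.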

RM:
Are there other skills, not directly related to programming, that you feel have improved your programming or that are valuable to have? You mention in your blog that you’re writing what sounds like an autobiography. What’s your writing schedule like, and have you decided on what you’ll leave out as much as what you’ll put in?
CM:
My mother insisted that I learn the piano. My teacher profited for 10 years, but I had no talent for music. Nonetheless, music is probably a stepping stone to mathematics and thence to programming.

Another skill is writing. I’ve never learned to say the same thing over again in different words. I can put words on paper, but not in an entertaining manner. Programming is writing to a computer. It does not need to be entertained, but informed. But successful programs must be documented, which is writing to people. And if it’s not sufficiently entertaining, no one will read it.

RM:
Are programmers and computer scientists aware enough of the history of technology? It is a pretty short history, after all.
CM:
I lived through the history of computers. And I missed a lot of it. History is a brooding study that adds flavour to a subject. But programming depends upon common sense more than most disciplines. There are few techniques that cannot be reinvented more easily than researched.

So I would encourage people to read history such as Knuth, but not to expect to gain insight into their problem.

RM:
Do you consider yourself a scientist, an engineer, an artist or a craftsman?
CM:
I put ‘engineer’ in the box on forms. Sometimes I put ‘computer engineer’ in a big box.

I studied to become a scientist, which is still my avocation. Then I transitioned to software engineer, specializing in scientific applications. Then to system programmer, language designer and finally hardware engineer.

Now I think of myself as a hardware stroke software engineer. One who uses software to design hardware which he then programs. This is a useful combination that produces simple, relevant hardware.

An artist is more imaginative and a craftsman has greater leisure to perfect his work. Engineer seems about right.

RM:
In what ways do you think you have influenced the software industry as a technologist?
CM:
My greatest success is that people are curious to hear what I have to say. Witness this article. Do they pay attention? Not so much. Persistence, that’s the secret. I’ve been talking since the golden age of the ’70s and I’m still delivering the same message, so there must be something to it. Keep it simple. Embrace the entire problem, be it software or hardware. Value cleverness. I’ve had some influence on a small part of the world. But that was never the purpose: I’ve had a lot of fun.

  • Andrew Penn

    A great man
    Ask around the programmers and developers of today’s young generation and few would know who Chuck Moore is. It is their loss and a great pity, because if his philosophy of simple, elementary design were followed there would be many more people working in harmony and a standard way of developing technology for the betterment of mankind.

  • Mosaic

    Happy memories
    I used Forth as my full time language / operating system for 5 years, in the 80s, for a small automotive company. Happy days! The experience of breaking up a problem in screen-sized blocks has served me well. Thanks, Chuck.

  • Igor Maznitsa

    a smart experienced man
    Chuck is absolutely right in his words.

  • HackerJack

    very poor criteria
    What is not necessary is a college degree. A Computer Science graduate waves a red flag. They have learned how difficult it is to program and make it so.
    —————
    What a sad statement from a man for whom I otherwise have the utmost respect.

    Having interviewed a reasonable number of programmers (and had to work with them all later) I find it very disappointing that anyone would take the view that either having (or not) a degree would wave any kind of “red flag”.

    Some of the very best I have worked with were university educated, one of them didn’t even know how to program at all before starting out in a sibling degree topic and switching.

    I often find that formally trained colleagues bring a clarity of thought, purpose and design to a problem that can highlight problems and efficiencies very early in the process. Those more self taught (myself included) are often better at working around the language limitations to apply these improvements but often have a far more limited scope of the overall problem and thus miss things.

    Had we taken the view expressed in this article we would be much worse off as a team.