The Marmite or Miracle Whip of Computer Languages

What is it about C++ that makes it one of the most important computer languages for systems work, yet so reviled by so many? Like Marmite, or Miracle Whip, nobody seems to hold a neutral opinion of it. We asked the language’s creator, the great Bjarne Stroustrup.

Bjarne Stroustrup’s contribution to programming languages might as well be renamed C++ Marmite for all the grimaces the language attracts. People either hate it with predictable snarls and loathing, or they affect not to like it very much but use it nonetheless.


There are those who dislike it intensely. Linus Torvalds called it a horrible language; Don Knuth said it was too ‘baroque’ for him to use. There are, of course, the many who use it as the obvious choice for developing demanding Windows system drivers and utilities. C++ is the main development language of many of Google’s open-source projects, and last year the company released a research paper suggesting that C++ is the best-performing programming language on the market: Google implemented a compact algorithm in four languages – C++, Java, Scala and its own programming language Go – and then benchmarked the results to find “factors of difference”.

“It is easy for a programmer
to outsmart himself (herself)”

Who, we wondered, would be the best person to explain the gap between the public perception and the actual usage? It was obvious who that person was, so we asked Bjarne about the issues he wrestled with when he developed the language, the trouble he has had explaining what C++ was developed for, how he practises his craft, and how language features can help people to be more productive.

Bjarne, was there something about C++ that made you think something along the lines of ‘Well, there are lots of things that could be usefully looked at as an infinite series of computations from which we want to draw answers until we are tired of it’? As opposed to thinking something such as ‘Oh, that’s an interesting technique for problems, but not the basis of everything’?
I tend to design language facilities to be general, rather than as specific solutions to specific problems. For example, I did not say “Concurrency is important, so let’s build in a notion of a process.” Instead, I considered what would be sufficiently general to allow programmers to define their own notions of concurrency using classes. Constructors and destructors were part of that. Today, threads and locks are library features in C++11 – as intended in 1980.

When I designed templates, I again aimed for generality – and of course efficiency because if your elegant systems code isn’t efficient, people will use a hack instead. I took a lot of flak from a variety of people: “templates are too complicated, why don’t you restrict them to make them less general and safer?” and “but templates make the type system Turing complete! Your compiler might never complete!” My thought was that you should not stop good programmers from writing good code out of fear that bad programmers might misuse features. Incompetent programmers will write bad code in any language. The people who worry about infinite compilations seem not to have thought of translation limits.

I don’t see language design as an exercise in applied mathematics.

C++ is one of Google’s four official languages, so if use is a sign of popularity there should be little grumbling. But, as you know, people criticise C++ as a bad language yet continue to use it.

Has it been a problem getting people to understand what C++ was supposed to be and how to use it? Do you feel that programmers fail to learn how some of the language’s features can be used in combination to serve some ideal of programming, such as object-oriented programming and generic programming?

I once had a Google manager say apologetically “we use C++ for all critical applications, but only because there is nothing else that can do those jobs.” I responded “Thanks!” C++ was designed for demanding tasks, not as a “me too” language. If C++ is the only language that can do the job for a worthwhile task, I have succeeded. It is sad, though, when people can’t see such success as a result of rational and effective design choices. Instead, many see the essential features they critically rely on as failings because the features don’t meet the dictates of fashion.

And yes, I have a lot of trouble getting my ideas of programming styles and programming techniques across. It seems that just about everybody else is more certain of their opinions. Many people seem comfortable with explanations I consider unhelpful oversimplifications: “Everything is an object!” or “Type errors must be impossible!” or “Strong typing is for weak minds!” Simple ideas, strongly stated, win converts.

I’m still optimistic, though. Good ideas often take a long time to become appreciated – sometimes decades later. Programming languages today look very different from languages when I started with “C with Classes.” Many look a bit like C++. I take that as a validation of the fundamental ideas I built into the language. Many of those ideas were of course borrowed (with frequent acknowledgements), but I think I helped make significant positive changes.

That said, most of my non-academic writings have something to do with explaining programming style. For example, my latest paper (IEEE Computer, January 2012) is about programming techniques for infrastructure and my latest book (“Programming: Principles and Practice Using C++”) is focussed on technique and style. My academic writings often describe experiments that might eventually lead to more effective programming styles.

Do you think that backward compatibility with C is a fatal flaw, given that C has a corrupt type system?
One of my favourite Dennis Ritchie quotes is “C is a strongly typed, weakly checked language.” I don’t think that C’s type system is “corrupt.” At least it wasn’t, as it came from Dennis. I built on C because I thought it provided the best model of the machine with the fewest artificial limitations.

That said, keeping up with the incompatible changes to ISO C and the lack of concern for C++ in some of the C community (to put it politely) has made maintaining compatibility surprisingly hard. Also, people seem to insist on using pointers and arrays in error-prone ways even though C++ provides safer, easier-to-use alternatives (e.g. the standard-library containers and algorithms) that are just as efficient.

In your career you’ve straddled both academic research and working in the industry. Functional programming is popular with the research community but a lot of people outside that community see functional programming as being driven by ideas that, while clever, can be very mathematical and divorced from day-to-day programming. Is that a fair characterisation?
Is there a lot of interaction between research and actual programming?
I guess your answer will depend on where you think widely used ideas originated. Object-oriented programming: A mix of industry and academia in the Norwegian Computer Centre. C: Bell Labs, with origins in the University of Cambridge. Generic Programming: Various industrial labs and a few academic departments. Many of the “greats” of computer science, such as Dijkstra, Hoare, and Nygaard straddled the industry/academia divide and their work was much better for it. I could go on with examples of languages, systems, and individuals. I think that industry/academia interaction is essential for progress. Delivering the next release of a commercial system is not in itself significant progress, nor is a stream of papers written for academic insiders.

I fear that academic language research has become further removed from the industrial reality than it was decades ago. However, I could be falling into the trap of seeing the “good old days” through rose-tinted glasses. Maybe most of academia was always off in never-never land and all we remember is who wasn’t?

Are the best of the good ideas from research labs and universities percolating into practice fast enough?
The problem is how to decide which ideas are good. Often the most popular ideas – in both academia and industry – are ineffective or even counterproductive. Once in wide industrial use, ideas can have major impact, for good and bad. Ideally, there is a slow filtering process from the lab to industry that separates fads from genuine progress.

As our languages get better, or at least more programmer-friendly, compared to the days of assembly language on punch cards, it seems as if it’s easier to write correct programs – you get a lot of help from compilers that flag errors for you. But is it possible to allow the focus on readability to come first, if only slightly ahead of correctness? After all, some programmers are fond of saying ‘If your Haskell program type checks, it can’t go wrong.’
What is “programmer friendly” depends critically on the programmer’s skills and what the programmer is trying to accomplish. C++ is expert friendly, and that’s fine as long as it does not imply that the language is hostile to non-experts – and modern C++ isn’t. C++11 provides language features and standard-library components that make writing quality code simpler, but it is not a language optimized for complete novices.

If a Haskell program checks, it won’t crash the computer, but that doesn’t mean it does what it was meant to do or does so with reasonable efficiency. Nor does it imply maintainability. Anyone who thinks that Haskell is programmer friendly hasn’t tried teaching it to a large class of average budding programmers. Haskell is in its own way beautiful, but it does not appear to be easily accessible to programmers needing to do “average tasks.”

The languages that appear to be “novice friendly” are the dynamically typed languages, where most tasks have already been done for the programmer in the form of language facilities and massive application libraries. JavaScript, PHP, Python, and Ruby are examples of that, but relying on a dynamically typed language implies a major run-time cost (typically 4-to-50 times compared to C++) and concerns about correctness for larger programs (“duck typing” postpones error detection). Whether such a trade-off between development time and run-time costs is acceptable depends on the application. There is little gain from a statically typed language if you are mostly doing something that requires run-time interpretation, such as regular expression matching. On the other hand, if your aim is to respond to a stimulus in a few microseconds or to render millions of pixels, overheads imposed for the convenience of the programmer easily become intolerable.

Let’s talk about concurrency a little. Is Software Transactional Memory (STM) the world-saver that many people say it is? Is it right to say that you should pick one programming paradigm for writing concurrent programs, implement it really well, and that’s it – people should learn how to write concurrent programs using that paradigm?
STM seems to have been the up-and-coming thing for over a decade. It would obviously be a boon if it really worked at the efficiencies and scale needed for near-universal use. I do not know if it does. I have not seen evidence that it does and such evidence would be hard to get. I particularly worry about one aspect of STM: it’s not inherently local: I designate some data to be in a TM and then any other part of the system can affect the performance of my code by also using that data. My inclination is always to try to encapsulate access to help local reasoning. Global/shared resources are poison to reasoning about concurrent systems and to the performance of such systems. STM localizes correctness issues, but not performance issues. If performance wasn’t an issue, concurrency wouldn’t be such a hot topic.

Are there language features that make programmers more productive? You’ve designed one of the most widely used languages, so you’ve obviously got an opinion on this.
This is a tricky question. The features that make bad programmers more productive don’t seem to be the same as the ones that make good programmers more productive.

To make a poor/struggling programmer productive, you make language features as “familiar” as possible to current popular languages, minimize the effort required for first use (no installation, and interpretation, can be key), focus on common cases, improve debuggers, avoid features that are not easily taught through online documentation, and provide an immense mass of libraries for free.

To make an expert productive you provide more control and better abstraction mechanisms and improve analysis and testing tools. Precise specification is essential. Built-in limitations (to safe and common cases) can be serious problems. Unavoidable overheads can be fatal flaws.

Consider C’s for statement and C++’s standard-library algorithms:

In the loop, the initialization, termination condition, and increment operations are separate. That way, arbitrary traversals over a linear sequence can be expressed, and more. Similarly, listing the beginning and the end of the sequence of elements to be sorted provides generality. For example, we can sort half a container or a C-style array. If you don’t need this generality, simplification is easy:

These alternatives are also valid C++11. They are obviously simpler to learn, simpler to use, and – importantly – eliminate several opportunities for making mistakes. That makes them better for novices, but the former alternatives are, in a significant number of cases, essential for experts. Thus, for a major language, we need both. A language for experts must not just be “expert friendly”; it must also be approachable by novices. Also, the “novice friendly” features can – assuming that they don’t impose avoidable overheads – simplify the life of experts in the many cases where they are appropriate.

The tradeoffs are not always simply safe vs. unsafe or efficient vs. inefficient. However, the more general version always requires a deeper understanding, and the added flexibility always opens more opportunities for confusion and algorithmic errors. For a specific application area, we can sometimes dodge this dilemma by raising the level of abstraction and leaving the mapping of very-high-level constructs to the language and run-time support, but for a general-purpose language, especially in the systems programming and infrastructure areas that interest me most, that is not an option (instead, we can use such techniques in the design of libraries).

How much does choice of language really matter? Are there good reasons to choose one language over another, or does it all just come down to taste?
The choice obviously matters, but how much and how? What matters for productivity, performance, and maintainability are the abstractions that a language presents to a programmer. These abstractions can be built into the language or provided as part of a library. And again, we come back to the questions “for what kind of application?” and “for what kind of programmer?”

Imagine writing everything in assembler. That’s obviously absurd; who would want to write a web service or even a full-scale operating system in assembler? Unix settled that. On the other hand, imagine writing everything in JavaScript. That’s equally absurd; how would you write a high-speed network driver in JavaScript? How well would a JavaScript engine run if implemented in JavaScript? I think it is clear that we need a multiplicity of languages and that one (for infrastructure applications) will be something like C or C++ and that one (for web services) will be something like JavaScript, Python, or Ruby. If we agree so far, the exercise will no longer be “which language is the best?” but “which languages are suitable for a given kind of application; and which of those will be best for this particular set of constraints and programmers?”

People who claim that the language choice is not important are as wrong as those who claim that their favourite language is the solution to all problems. A language is a tool, not a solution. People who genuinely consider languages unimportant are condemned to write in the common subset of C and COBOL. Anything else would be an admission that some language features are actually useful, and that some are better and more important than others.

I’m an optimist, so I hope to see something better than C++ for infrastructure applications – even though I think that, for that domain, C++ is currently the best choice by far. Similarly, I hope to see something better than JavaScript, Python, Ruby, and PHP for web applications. I also expect to see further specialization beyond the two areas I mention here.

I think we need work on characterizing application areas and their needs from languages. We are groping in the dark and not learning from current successes and failures. A major part of the problem is that proponents of a language are unwilling to admit weaknesses because such admissions are immediately broadcast as proof of their favourite language’s fundamental inferiority. Too many still believe that one language is best and that history moves from one dominant language (for everything) to another; that we move from one (exclusive) paradigm to the next. I consider that absurd and in disagreement with any non-revisionist history. Another problem is that thorough and impartial analysis of existing systems is hard and typically ill rewarded by both industry and academia.

I have already used too many words, so I will dodge your question beyond pointing to the two application areas mentioned. Language choice is important and we need better rational criteria for choice. I think some of those criteria must be empirical. Language choice is not a purely mathematical or philosophical exercise. We do not need another discussion based on comparisons of individual language features.

What languages have you used seriously? It must be a long list.
Define “serious.” I don’t ship software for the non-educational use of others any more, and haven’t done so for years. Similarly, I haven’t done much maintenance recently. I had tried on the order of 25 languages when I started on C with Classes, and I have experimented with a similar number since. Of the early languages, microcode assemblers, assemblers, Algol60, Algol68, Simula, Snobol, and BCPL were my main tools. Later, C++, C, and various scripting languages dominated. I think I have learned a lot from many languages, but my experiences with modern languages make me better suited for writing a textbook on comparative programming languages than for delivering quality code at a commercially viable rate. I have tried most modern languages currently used to deliver code, and a few experimental languages. Listening to people who use languages non-experimentally is as important as my personal experiments.

Are there programming languages which you don’t enjoy using?
Yes, but I refuse to name languages. That wouldn’t be fair. I might simply have misunderstood something or applied the language outside its intended problem domain.

Since you started programming, what’s changed about the way you think about it?
I’m much more concerned with correctness and far more suspicious about my own observations. Everything needs to be double checked and validated – preferably with the aid of tools. Maintainability has been a steadily growing concern.

I value simple, easy-to-read code, well-specified algorithms, and simple, compact data structures. I am increasingly suspicious of clever code and clever data structures. It is easy for a programmer to outsmart himself (herself), so that the resulting code is hard to maintain and often ends up performing poorly as hardware architectures and input sets change over time. The best optimizations are simplifications. One reason for my concern about “cleverness” is that modern architectures make predictions about performance hazardous. Similarly, the size and longevity of modern systems make assumptions about what programmers understand hazardous. Think “pipelines and caches” and “Java programmers trying to write C++, or Ada programmers trying to write C.”



  • Philip Goh

    Duck typing vs static typing
    While Bjarne is correct that duck typing implies that code can only be checked when it’s executed, and thus errors are detected at runtime, this is not that big a deal.

    The duck-typing languages almost force you to write comprehensive unit tests (if you’re a developer worth your salt). Since the code is not compiled, typos will not be caught unless that code path is exercised. The easiest way to exercise your code consistently and frequently is to write unit tests. Given the excellent unit test support in languages like Python, there’s literally no reason not to unit test.

    Having a robust suite of unit tests to go along with a project will help tremendously towards the long term maintainability of said project.

    In any case, this was a very good article, and I’ve always liked Bjarne Stroustrup’s work and C++.

  • Anonymous

    Good article
    Liked the way he put it ….
    Imagine writing everything in assembler. That’s obviously absurd; who would want to write a web service or even a full-scale operating system in assembler? Unix settled that. On the other hand, imagine writing everything in JavaScript. That’s equally absurd; how would you write a high-speed network driver in JavaScript? How well would a JavaScript engine run if implemented in JavaScript? I think it is clear that we need a multiplicity of languages and that one (for infrastructure applications) will be something like C or C++ and that one (for web services) will be something like JavaScript, Python, or Ruby. If we agree so far, the exercise will no longer be “which language is the best?” but “which languages are suitable for a given kind of application; and which of those will be best for this particular set of constraints and programmers?”

    — Though I would love to know what Bjarne Stroustrup thinks about Java.

  • Anonymous

    Very nice article from a person with a lot of passion for his language. Java, JavaScript, Ruby, … are all ‘C-style’ based, aren’t they?

    Well, Basic isn’t. It’s easy to learn, but it has become quite powerful too. I’m not speaking of Visual Basic.Net alone; there are others out there that are extremely fast.
    And yeah, I just hate having to place a semicolon after each line. This is 2012, not 1980 :).

    I’m sure things will change in the near future. Smartphones and tablets are more and more replacing desktop computers. Memory is temporarily becoming a big issue again. Time for experienced programming skills like on my Commodore 64!

  • arzewski

    languages used by industry
    Interesting read, especially on the overall topic of how accepted a programming language is after 20 years. Yes, 20 years ago, the introduction of C++ as a replacement for system development written in C was a great improvement. But most of the custom-developed business applications found in corporate data systems have different requirements: they need to be developed quickly and maintained easily, and must not require employees to go through a long apprenticeship. In a way, I do not miss the long meetings in which self-declared “experts” fought each other over this or that language feature, rather than focusing on the actual business problem (insurance, billing, accounting…) – the things that needed to be “objectified” and abstracted. So, no, I do not miss it, 20 years later.

    The more time spent on developing a system, the more likely the staff required to develop that system will be off-shored to a geographical locale in which software developer labor is less costly. That is how corporate managers see it. Oh, and about static vs dynamic languages, one great quote comes to mind: “casting is a lie to the compiler”, in which you delegate your problem to run-time anyway. So there you go… in systems built with statically typed languages, you still have run-time problems. So, at that point, you might as well have the advantages that a non-typed system offers. Tick, tick, tick, the meter is clicking, and the bosses are waiting… tick, tick, tick.