‘He that will not apply new remedies must expect new evils, for time is the greatest innovator’
“The Essays” by Francis Bacon (1561–1626)
I often wonder whether ‘Software Engineering’ can really be called “engineering,” given the obvious immaturity of the science. The term ‘engineering’ implies a deterministic process, whereas software development seldom progresses beyond the chaotic.
When I was in university, I taught in an engineering faculty and took many engineering courses. Each engineering discipline that I encountered was generally governed by some set of fundamental rules. For example, structural engineers will be familiar with “statics” (the rules governing the forces on objects at rest) and “dynamics” (the rules governing the forces on objects in motion). These rules are all based in fundamental mathematics, more complex than 2+2=4, but mostly deterministic in their results all the same.
After thirty years or so in the business of developing software, I’ve found that the results of software “engineering” are rarely deterministic. In fact, the results, in many cases, could almost be characterized as random, and have been termed “chaotic” – as in Capability Maturity Model (CMM) level one. Higher levels of the CMM emphasize repeatability as a goal, and it is an important aspect of the discipline of software engineering to be able to produce software systems with a satisfactory and predictable outcome, in much the same way that the structural engineer seeks to build bridges that remain standing, and to do so repeatedly.
Some Well-researched Facts and Fallacies
I recently reread the book “Facts and Fallacies of Software Engineering” by Robert L. Glass. If you haven’t read that book, and you have any level of involvement in the management of software projects, I suggest that you’ll benefit from reading it. Note that it has also been listed as one of the top 10 books and resources to become a great programmer. Even though it was published in 2002, it remains solidly relevant in my experience. I’m not saying that you’ll enjoy reading it, because it may just shatter some of your perceptions about what you know of software engineering. I can promise you, though, that it will formalize many things that in your heart you know are true, but were perhaps too scared to admit.
There are many nuggets of wisdom that he summarizes from the hard-earned experience of the sixty or so years that software has been developed. I particularly liked
- His definition of what constitutes quality in a software development context.
- His take on research in the software domain, particularly as I’ve always been what he calls a “practitioner.”
- His clarity on software defects, and particularly his re-designation of “the testing cycle” to “error correction.” I’ve always thought that “testing phase” or “testing cycle” never really properly emphasized the goal of that step.
- His strong belief in code inspection.
- His observations on measurements and metrics.
He also has this incredible ability to express his wonderment at how we keep forgetting all of these hard-learned lessons, and repeating the same mistakes.
Perhaps most of all, I like his lack of fear of slaying sacred cows.
There is a common theme pervading much of that book: we fail to learn from experience. As human, thinking machines, we possess the phenomenal capacity to learn from our mistakes. Why does this adaptive behavior fail to kick in with software engineering?
In the book, Glass presents fifty-five facts, bolstered by references to other writing. These facts cover topics that range across management, the software life cycle, quality, and research and education. Typical of these facts is ‘For every 25 percent increase in problem complexity, there is a 100 percent increase in complexity of the software solution’. Some of these facts are truisms familiar to any seasoned professional programmer, such as ‘Adding people to a late project makes it later’, but many are unfamiliar and often thought-provoking, such as ‘Understanding the existing product consumes roughly 30 percent of the total maintenance time’.
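Taken at face value, Glass’s complexity fact implies a power-law relationship between problem and solution complexity. A quick back-of-the-envelope sketch (my own extrapolation, not something from the book):

```python
import math

# Glass's fact: a 25% increase in problem complexity (a factor of 1.25)
# yields a 100% increase in solution complexity (a factor of 2).
# If solution complexity grows as problem_complexity ** k, then
# 1.25 ** k == 2, so:
k = math.log(2) / math.log(1.25)
print(f"exponent k = {k:.2f}")  # about 3.1

# Extrapolating under that assumption: doubling the problem complexity
# multiplies the solution complexity by 2 ** k, i.e. roughly 8.6x.
growth = 2 ** k
print(f"doubling the problem multiplies the solution by about {growth:.1f}")
```

In other words, if the fact holds even approximately, solution complexity runs away from problem complexity far faster than intuition suggests, which is one reason seemingly modest scope changes can wreck a schedule.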
Some “Facts” of my Own
Instead of reiterating the facts and fallacies in that book, I will now presume to mention a few “facts” from my own experience.
Fact 1: Software engineering is immature as a discipline because it lacks fundamental rules to govern the activities of its practitioners.
This is probably one of those facts that you’d prefer to ignore, and it is one that goes against the grain for anyone who proudly wears the “software engineer” title. You may not accept this fact because the community already has the CMM, and what is that if not a set of fundamental rules for developing software? Let’s just say that the CMM also has its detractors, perhaps best represented by its parody, the Capability Immaturity Model (CIMM). Or perhaps it is just that the CMM itself is immature (and while this link too is a bit old, it is not at all dated, as the CMM has not significantly changed since the critique was written).
If you’ve read Facts and Fallacies of Software Engineering, you’ll realize quite early on that many of its facts implicitly lead you to this conclusion, which I consider to be my much more general “fact.” Many of those facts are telling us precisely what things we are doing wrong, and precisely why these wrongs persist. Perhaps the problem is that failure is less obvious than in traditional engineering professions. If aeronautical engineers lacked fundamental rules, planes would drop out of the sky like autumn leaves, and aeronautical engineering would be declared an immature discipline. Because failures in software engineering can be much more subtle, and because those failures may often not be immediately detected, it is no wonder that software projects often struggle and too often fail. For example, hidden requirements are often a subtle point of failure in a delivered software system; while that doesn’t necessarily mean the entire project was an utter failure, it is likely to cause enough rework to push at least some of the project metrics into the failure range.
Throughout the sixty-odd years that programming (later rebranded as software engineering) has been around, many academics and industry heavyweights have attempted to add maturity by introducing or updating software engineering tools and processes. As a result, tools and methodologies abound. Since Glass makes a clear case that tools tend to represent incremental improvements, one can argue that this represents overall improvement. In my opinion, since any tool or methodology introduced at this stage of the game is unlikely to be revolutionary, these incremental improvements may just be muddying the waters: for example, when they require you to increase the skills inventory within your development teams, decreasing the level of specialization of those same teams. And yet, because software engineers and technology companies love shiny new tools, they get adopted, used, and too often discarded (as Glass pointed out) when the next shinier tool comes along. Not to mention the folks who make their livings by selling these tools; they have a vested interest in seeing new stuff come out so that it gets purchased!
Fortunately, since I’m not the only person saying this, and some of the people saying it are a lot smarter than I am, there may be a light at the end of a very long and discouraging tunnel. Enter Software Engineering Method and Theory (SEMAT). Here we have an organized attempt by a serious group of professionals (academics and practitioners alike), who really have a handle on what discipline currently exists in software engineering, to formalize the method and theory that should guide it but is often seriously lacking in day-to-day project work. Let us sincerely hope that their recent release of the fundamentals (mentioned in the conclusion of this article) offers the community a fresh and revolutionary approach, one that doesn’t suffer the failings of its predecessors.
Fact 2: The boundaries of the scope of any software project will always be consistently under-defined at a project’s inception, and will expand to include the project’s minimum requirements by the time the project is accepted.
This is better known as “scope creep” and is indeed the bane of every project manager. In Glass’s book, he is quite specific that project estimates are being done at the wrong time (at the start of the project), so my fact is closely related to his. You either:
- Embrace this fact (as in Agile software development methodologies),
- Absorb it and end up with cost/schedule overruns,
- Contain/manage it through change requests (which may also result in cost/schedule overruns but at least you don’t look like such a crappy project manager),
- Or you can end up with a runaway project wherein requirements never stabilize (as described by Glass).
To state the blatantly obvious, engineers solve problems and software engineers do this through software. Since you need to define any problem (“the requirements”) before solving it, one has to wonder why as software engineers we are collectively not so great at initially defining the problems we are trying to solve. Is this not a measure of immaturity, leading us back to my first fact?
Normally at the start of a project, the intrepid software engineer must provide an estimate and a scope of work. Glass’s facts about estimation suggest that estimates are overly optimistic at this point. I would say the same is true of project scope, and undoubtedly for nearly all of the same reasons. Many of these reasons are political in nature, but others are due to the fact that the people asking for the work to be done have merely an inkling of what it is they want, or of which of their many business problems the software system is to solve. Scope evolves (changes) along with the expectations of the system’s ultimate end users.
Sometimes the scope creep occurs because a project runs late and the business environment changes. Ultimately it doesn’t matter why it changes, only that it does, and that this impacts the project’s likelihood of success. Whoever is measuring that success will take the amount of scope creep into account, and this may affect the measurement negatively or (counterintuitively) positively.
The statement that at acceptance the project meets its minimum requirements is almost true by definition: we assume that the system is only accepted by the users who commissioned its development if it does.
Fact 3: The most important activity of software testing is unit testing. If unit-testing was done better (more thoroughly), overall testing effort by an independent testing team would be reduced.
There are those who will probably argue with my opinion here, and rightly so, as there are many important types of software testing. Some are important because of the niche they apply to (not all types of software testing apply to all types of projects). Others are important but too broad to expect that all of them can be accomplished within unit testing. But perhaps the naysayers will at least agree that unit testing is required in all software development projects, and so is more universally applicable than most of the others.
Just like the waterfall software development methodology, software testing has a series of phases that it goes through, while testing different aspects of a system’s suitability to its purpose. Often, if software fails in an early stage of that testing, the later stages cannot be executed with any reasonable degree of effectiveness.
Unit testing is arguably the earliest of the software testing stages. The software engineers that are writing the code (developers) are usually the ones doing this unit testing, although I have seen instances of independent testing teams doing unit testing as well. Unit testing is the foundation of any software testing project. Code that has been poorly unit-tested will never get through later stages of testing.
I’m sure you’ve heard it said that developers are the least-well suited to doing software testing. Overall I would agree with this; however in the unit-testing stage it is imperative that the developers perform this function as thoroughly and completely as they possibly can, mainly so they don’t waste the time of the testing engineers that will follow.
So why shouldn’t software developers be responsible for testing? As the argument goes, they are biased towards testing things that they know will work, and ignoring things that they didn’t code for. Only someone independent to the coding step is unbiased enough to look for the latter.
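To make that bias concrete, here is a minimal sketch using Python’s built-in unittest module. The word_count function and its tests are hypothetical; the point is the gap between the “happy path” test a developer naturally writes and the edge cases an independent tester goes looking for:

```python
import unittest

def word_count(text):
    """Count the whitespace-separated words in a string (hypothetical example)."""
    return len(text.split())

class TestWordCount(unittest.TestCase):
    # The "happy path" test the developer who wrote the code tends to stop at:
    def test_simple_sentence(self):
        self.assertEqual(word_count("the quick brown fox"), 4)

    # The inputs the developer "didn't code for" -- the kind an
    # unbiased tester would probe first:
    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

    def test_extra_whitespace(self):
        self.assertEqual(word_count("  spaced   out  "), 2)
```

Run with `python -m unittest` in the file’s directory. Thorough unit testing simply means the second kind of test case gets written by the developer, before the code ever reaches an independent tester.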
If an engineer is only as good as the problems he or she solves, and we assume that every engineer strives to be the best that they can be, why are they considered unable to cast loose the chains of their own biases and thoroughly test the software that they produce? Could this be a training issue? Is it possible to instill the discipline necessary in developers, such that they can be relied upon to thoroughly test the software that they produce?
Since both coding and testing are learned skills, shouldn’t it be possible to train both facets into a developer’s (or tester’s) skill set so that they are sufficiently adept at both, and thus improve the results of unit testing? While there may be university courses that focus on unit testing, I’d be surprised if they’re common.
This then brings us back around to our first fact. How can software engineering be considered mature if professional software engineers cannot be trained to thoroughly test the software that they produce?
I will leave my valued readers to ponder this line of thinking, and decide for yourselves whether my arguments have merit, or are simply circular and used to justify my original thesis. Keep in mind that I am not saying that independent testers aren’t important to the proper testing of software. What I am saying is that developers should be better at proving that their work is ready for the independents to take a crack at. And they probably would be if the discipline of software engineering were a little bit more mature.
In Glass’s book, he suggests that unit testing often suffers from schedule pressures, i.e., that developers have pressure on them to complete their coding task(s). I won’t argue with that. I will suggest that there’s another factor at play here too: ego.
Let’s try a thought experiment. How would you rate your driving skills? If you said “better than average,” you’d be among the 95% of drivers who think their driving skills are better than average. Yet it is statistically impossible (assuming driving skills fall into a normal distribution) for more than 50% of drivers to be better than average. I would argue that if you ask developers this question (not about their driving skills, of course, but about their development skills), you’d see a statistically impossible number rate themselves as better than average. This means that developers probably believe that once they’ve coded something, because they’re better than average, it probably works, so they don’t even do the minimum unit testing required to confirm it. How immature is that?
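The arithmetic behind this claim is easy to check. A small simulation (a sketch, not a survey; the numbers are purely illustrative) shows that for a normal distribution of skill, almost exactly half the population sits above the mean, so 95% self-ratings of “better than average” cannot all be right:

```python
import random

random.seed(42)  # reproducible sketch

# Simulate "skill" for 100,000 people, normally distributed
# (mean 100, standard deviation 15 -- arbitrary illustrative values).
skills = [random.gauss(100, 15) for _ in range(100_000)]
mean_skill = sum(skills) / len(skills)

above_average = sum(1 for s in skills if s > mean_skill) / len(skills)
print(f"fraction above average: {above_average:.3f}")  # close to 0.500, nowhere near 0.95
```

The same holds for any symmetric skill distribution, which is all the argument needs.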
And in shops where developers are distinct from testers, I’d say the division can exacerbate the problem, because “developers develop” and “testers test.” Less unit testing certainly makes the testing team look more productive when their metric is “number of bugs identified,” because they’re identifying bugs that should have been caught during unit testing. How many times have you seen a software system fly through its testing cycle, and then zoom even more quickly through its User Acceptance Testing (UAT) because there are very few issues left to find? I’ve seen it at times, and let me tell you, there is no better way to relieve schedule pressures than to get through both of those phases quickly!
Fact 4: Process cannot replace initiative, innovation or technical expertise.
The CMM is mostly about process, and since it is arguably the most widely accepted current example of “fundamental rules and practices” in the software engineering discipline, I’m taking a direct shot at it. Fundamentally, process is a way of applying the hard-learned lessons and experience of what works and what doesn’t across the great unwashed masses. OK, perhaps that was neither fair nor nice. But coming into the profession there has always been a huge glut of fresh-faced, inexperienced engineers straight out of university, who have all the academic knowledge and none of the seasoning of the experienced professionals. So process is meant to guide them, and help them to avoid mistakes and pitfalls.
But is that all it is? Remember what I said about developers’ skills and the bell curve above. If half of all engineers right out of university (or seasoned engineers, for that matter) are of less than average skill, perhaps much of this process stuff is meant to uplift them or make up for shortfalls in their skill sets. There’s nothing wrong with that, although it seems like the elephant in the room that nobody wants to talk about.
I would say that there’s a corollary to this fact:
Great software is not developed by average developers.
I would expect that the best you can hope for with a team of all average developers might be slightly better than average software, assuming that enough process is included to allow the work effort to proceed without too many glitches.
Technical expertise usually comes with experience. Who do you go to in your company when you need to solve a particularly challenging technical problem that you’re struggling with? (I know, your ego says that never happens!) If it does, it will likely be a pretty senior person on your team, and that person is probably senior because he or she has been around the block.
Sometimes seniors become seniors early because they exhibit the traits of innovation and initiative. These are the movers and shakers. Everybody who’s been around for a while knows at least one (and maybe it’s you). It is highly unlikely that anyone becomes a senior person without developing deep technical expertise in something; it certainly won’t happen if their sole redeeming quality is that they know how to follow process.
Justification and the Fallacies I Chose to Omit
Unlike Glass, I have not done extensive literature searches to back up my “facts.” Instead I am relying on lessons learned through many of the projects I have personally managed, or in others that I’ve been simply involved in. I do however encourage my readers to engage in a bit of informal peer review by posting comments to this article.
I have omitted listing any specific fallacies because most of my facts can be restated in the negative as a fallacy (although that is not what Glass did for his fallacies).
I have also omitted many other facts, some of them elucidated by Glass and others that have simply become very well known to me after many years in the industry. Many of these facts might also lead one to the conclusion that software engineering is inherently immature, which raises the question of whether I selected my facts to back up my conclusions. Since that is the general thesis of this article, it would seem inevitable that I have done so.
However I will also say that I cannot think of a single fact that I’ve come to believe after years of experience that would back up the alternative argument: that software engineering is a mature discipline. I’ve seen tools, methodologies and trends that are attempts to improve maturity. But what I have not seen is any panacea that works in 100% of the cases 100% of the time. While that is probably an unattainable goal, ask yourself if you’ve ever seen something that will work in a significant fraction of the cases, even most of the time.
2+2 always equals 4. That is something that works 100% of the time. Addition is a rule that works in 100% of the cases. Arguably (and Glass makes a great case for this), software development is significantly harder than addition. Johann Wolfgang von Goethe said “everything is hard before it is easy.” Perhaps this can also be construed as a measure of maturity.
Conclusions and Recommended Reading
I’m not really expecting anyone to come along and tell me I’ve proven my case that software engineering is immature. I’ll be pretty happy if all I have are a handful of people nodding their heads in agreement as they read through my “facts” and then deciding maybe they should read the facts and fallacies of a true champion in the software research field.
Another thing that would make me extraordinarily happy is to learn that SEMAT has taken a strong foothold, especially if it begins to really address, within my lifetime, some of the challenges that are keeping software engineering immature.
If perchance you do agree with me that software engineering is not as mature as people would like to think, your next step (if you haven’t already done so) is to visit the SEMAT web site and download the Essence Kernel. At this point, I believe you’ll find that some of it is a bit on the abstract side. However, you do need to start somewhere, and perhaps this is it. By all means, read some of the background and vision espoused by the organization’s members. You can even see the list of members, and I’d bet there are a few names there you’ll recognize (for example, Robert L. Glass).
After you visit SEMAT and read the Essence Kernel, decide for yourself whether the goals expressed by that organization are worthy and whether this is a good first step.
Then you should come back here and express your opinion of my opening question.