In the July 2001 issue of ACM Software Engineering Notes, J. P. Lewis published an article that claimed there are hard limits on our ability to estimate software development time.
The article has generated a fair amount of discussion within the software engineering community, because its conclusion is so disturbing.
Lewis claims there simply cannot be any objective method for arriving at a good estimate of the complexity of a software development task prior to completing the task.
He uses objective to mean a formal, mechanical method that does not rely on human intuition.
He backs up this assertion by providing a mathematical proof for it. (Here is the article, Large Limits to Software Estimation, and here is a companion paper Lewis wrote in response to the discussion.)
Lewis’s argument is based on the notion of algorithmic complexity — the length of the shortest program that will produce a given string.
Algorithmic complexity, also known as Kolmogorov complexity, is a well-established and accepted area of information theory and computer science.
Lewis observes that any software program can be thought of as a table that maps program inputs to their corresponding correct outputs.
This characterization of programming assumes a finite number of finite inputs, which is fair for practical purposes.
Lewis then notes that a table can be represented as a string consisting of its rows.
Thus any programming task has an associated algorithmic complexity — the length of the shortest program that will produce the table that maps the project’s data inputs to their outputs. (For more information see An Introduction to Kolmogorov Complexity and Its Applications.)
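To make this characterization concrete, here is a small sketch (my own illustration, not from Lewis's paper) of a toy "programming task" — squaring small integers — viewed as a finite input/output table, serialized as the string of its rows:

```python
def build_table(f, inputs):
    """Map each input to its correct output: the table view of a program."""
    return {x: f(x) for x in inputs}

# A hypothetical, trivially small task: squaring the integers 0..9.
table = build_table(lambda x: x * x, range(10))

# Represent the table as a string consisting of its rows.
table_string = "\n".join(f"{x} -> {y}" for x, y in table.items())
print(table_string.splitlines()[0])   # first row: "0 -> 0"

# The algorithmic complexity of this task is the length of the shortest
# program that prints table_string. Because this table is highly regular,
# that program is far shorter than the table itself.
```

A real project's table would be astronomically large, of course; the point is only that the mapping, and hence its algorithmic complexity, is well defined.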
Next, Lewis states a standard result from complexity theory: There is no algorithm for computing the algorithmic complexity of an arbitrary string.
In other words, there can be no program that tells us the length of the shortest program that solves an arbitrary software development problem.
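Although the exact algorithmic complexity of a string is uncomputable, any compressor gives a computable *upper bound* on it: the compressed size plus the fixed size of the decompressor. The sketch below uses this standard proxy (my illustration, not a method from Lewis's paper) to show the flavor of what we can and cannot compute:

```python
import zlib

def complexity_upper_bound(s: bytes) -> int:
    """Size of one particular encoding of s; the true K(s) is at most
    this plus a constant, but no algorithm can compute K(s) exactly."""
    return len(zlib.compress(s, 9))

# A highly patterned string compresses far below its raw length,
# proving its algorithmic complexity is low.
regular = b"0123456789" * 100
print(complexity_upper_bound(regular) < len(regular))   # True

# No algorithm can tighten such bounds to the true minimum for arbitrary
# strings; a program that computed K(s) exactly would contradict the
# standard uncomputability result Lewis cites.
```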
Lewis asserts software development effort and time are strongly linked to the algorithmic complexity of the project.
Putting all these results together, he concludes there is no rigorous method for calculating the development time of a software project before actually doing the project.
Lewis also extends this result, using previous work by others, to cover approximate estimates of development time within a fixed error range. (For more information, see Computability and Complexity Theory.)
So, is Lewis correct? Should we throw up our hands in despair about ever having a clue how long a software project will take?
Lewis is mathematically correct, but he is correct only within a very narrow definition of the software estimation problem.
The definition used in his article is so narrow, in fact, that it is unnatural. Real software engineers, and researchers in the software process field, are not trying to solve the problem that Lewis lays out.
Despite Lewis’s claims to the contrary, no serious researcher in software engineering is trying to find a guaranteed method for producing estimates of time and effort that are certain to be correct.
No one is even trying to find methods that produce estimates guaranteed to be correct within a known error range.
The real-world problem of software estimation is much less strict than Lewis states. We are just trying to get somewhere close a reasonable percentage of the time!
Suppose someone discovers a formal, mechanical method to estimate software effort that is 80% correct 80% of the time.
For the other 20% of its estimates, the method produces a result that is wildly wrong.
The software development community would be ecstatically happy about this remarkable new method.
Everyone would flock to use it because 4 out of 5 times it estimates software tasks within a 20% error.
We probably would be overjoyed even to find an estimating technique that is 50% correct 50% of the time.
Lewis’s results do not disallow such methods, since they place no bound on the error in the cases where the method fails.
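A small Monte Carlo simulation makes the point concrete. The numbers here are hypothetical — this models the imagined estimator described above, not any real estimation method:

```python
import random

def simulate(trials: int, seed: int = 0) -> float:
    """Fraction of estimates landing within 20% of the true effort, for a
    hypothetical method that is accurate 80% of the time and wildly wrong
    the other 20%."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        actual = rng.uniform(10, 1000)                   # true effort (arbitrary units)
        if rng.random() < 0.8:
            estimate = actual * rng.uniform(0.8, 1.2)    # within 20% of actual
        else:
            estimate = actual * rng.uniform(3, 10)       # wildly wrong
        if abs(estimate - actual) / actual <= 0.2:
            hits += 1
    return hits / trials

print(simulate(100_000))   # ≈ 0.8 by construction
```

Nothing in Lewis's proof forbids a method with this statistical profile, because the proof concerns guarantees on every instance, not behavior on most instances.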
In fact, these kinds of algorithms, which are efficient and accurate on many (but not all) problem instances, are well studied by the computer science community.
They are accepted as useful in important areas, including prime number theory and solving complex simultaneous equations. (For more information see Average-Case Complexity Forum.)
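One well-known example from prime number theory is the Miller–Rabin primality test: it runs fast and, for each random witness, errs with probability at most 1/4, so a handful of rounds makes mistakes vanishingly unlikely — useful in practice despite offering no certainty. A minimal sketch (seeded here for reproducibility):

```python
import random

def is_probably_prime(n: int, rounds: int = 20, seed: int = 0) -> bool:
    """Miller-Rabin test: False means definitely composite; True means
    prime with probability at least 1 - (1/4)**rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    rng = random.Random(seed)
    for _ in range(rounds):
        a = rng.randrange(2, n - 1)          # random witness
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                     # witness proves n composite
    return True                              # no witness found: probably prime

print(is_probably_prime(2**61 - 1))   # True (a known Mersenne prime)
print(is_probably_prime(2**61 + 1))   # False (divisible by 3)
```

An estimation method that was merely this reliable — almost always right, occasionally wrong, with no per-instance guarantee — would be a triumph, and Lewis's proof says nothing against it.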
On a related point, Lewis is on shaky ground when he asserts programming time and effort are strongly linked to the algorithmic complexity of the programming task.
This might be true, and it is intuitively appealing, but Lewis does not establish the direct relationship.
Is it possible there is a software project with an associated short program to create the proper input/output mapping table, but where this program is hard for a human to discover?
Such a project would have low algorithmic complexity but high human complexity (time and effort).
Or, perhaps there are software projects with a large mapping program, but a human can see clearly how the program should be written.
Lewis assumes such software projects do not exist, or are rare, but does not give a concrete reason for this belief.
Lewis also is incorrect in his criticism of the Software Capability Maturity Model (SW-CMM) from the Software Engineering Institute at Carnegie Mellon.
This method, while far from perfect, has helped many organizations improve their software processes.
Lewis provides, and then refutes, a quote from SW-CMM that seems to indicate it promises guaranteed accuracy of software estimates to organizations that follow its model.
The quote is taken out of context and reflects a lack of understanding about SW-CMM. Each SW-CMM “maturity level” is designed to provide more accurate time estimates than the preceding level, but none are expected to be perfect.
Level 5 (their best level) promises software development time estimates that, over many projects, are tightly clustered around the actual development times.
But even at Level 5, the authors of SW-CMM expect some estimates to be far off the mark.
The primary SW-CMM paper even shows a bell-shaped curve to demonstrate this relationship between planned and actual time, and all such curves include values that are far from the center. (For more information see the SW-CMM web site and their primary paper.)
I do agree with Lewis on one of the major points he makes in his companion paper.
He says that, as a consequence of his results, we should place more faith in people when creating software time estimates.
He states, “Good estimation … requires experience and judgment. The software industry should value human experience, intuition, and wisdom….”
I cannot agree more. Good people who know what they are doing are invaluable to the software process.
In addition to relying on human skills, we also should continue to pursue formal methods of software estimation, which hopefully will provide reasonably accurate time estimates for most development tasks.
The existence of such methods is not excluded by Lewis’s results. As Lewis demonstrates, we can apply mathematical analysis to our formal methods so that we understand the limits of our estimating techniques.
We just need to be careful about interpreting those mathematical results.
My thanks to Steve Homer at the Boston University computer science department for a helpful discussion about the mathematical results in Lewis’s paper and my assertions about them. Any errors in this article, of course, remain mine.