Justify Agile Based On Productivity!

In a recent article, Ken Judy takes the position that agile software development should not be adopted on the grounds of higher productivity. The reason, Judy claims, is that there are better ways to justify adopting agile than hard numbers.

I can sympathize, because I have worked on my share of software development projects where the measurements did more harm than good. Nevertheless, I believe Judy is wrong in this instance. Most organizations measure the wrong thing, but that does not imply that measuring is bad in itself.

Judy is correct in stating that measurements drive behavior. He is also correct that in most software development projects, measurements have unintended side effects, and in many cases these side effects are quite nasty.

The problem is not with the idea of measuring. The problem is that how to design measurement systems, and how to use them effectively, is poorly understood.

To begin with, it is useless to measure unless we know the purpose of our measurements, and for that we need a clear picture of what we are trying to accomplish. In other words, we must know the goal of the system under consideration. (The system here is the project team and other stakeholders, not the software.)

If we do not know why we are measuring something, we are likely to get the unintended side effects that Judy describes. We must also be aware of the assumptions we make, or we may be misled into measuring something we should not.

Take the infamous Lines Of Code (LOC) measure. It rests on several assumptions:

* There is a linear relationship between LOC and productivity, where productivity is the amount of functionality delivered per unit of time.

* There is a linear relationship between productivity and Throughput. (Throughput is revenue minus totally variable cost.)

* Different programmers will use the same number of lines of code to implement a specific piece of functionality.

* What one programmer does does not affect any other programmer. For example, when one programmer gets a high LOC measure by skipping documentation, or by writing long, convoluted spaghetti code, this has no measurable effect on the productivity of other programmers.

All of the assumptions above are wrong, and can be proven wrong quite easily. To prove it, though, you do need to measure.
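To see how shaky the third assumption is, here is a contrived Python sketch (the Order class and both functions are made up for illustration): two functionally identical implementations, one of which racks up several times the LOC of the other.

```python
from dataclasses import dataclass

@dataclass
class Order:
    price: float
    quantity: int

# Implementation A: one line of logic.
def total(orders):
    return sum(o.price * o.quantity for o in orders)

# Implementation B: identical functionality, several times the LOC.
def total_verbose(orders):
    result = 0.0
    for o in orders:
        line_total = o.price * o.quantity
        result += line_total
    return result

orders = [Order(10.0, 2), Order(5.0, 3)]
assert total(orders) == total_verbose(orders) == 35.0
```

Measured by LOC, the author of implementation B is several times as "productive" as the author of implementation A, even though A is the better code.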

The LOC measure is the result of a flawed idea of how software development works. Attacking the LOC measure is not very useful, unless the root causes are also addressed. Otherwise, all that happens is that we make the same mistake again, either with some other measurement, or by not measuring at all.

For example, Judy lists several reasons for using agile:

"We sought improved customer satisfaction, reduced risk, improved quality, incremental delivery, and innovation. We obtained other benefits including: great recruiting and retention, rapid professional development, high employee engagement."

This raises a couple of questions:

* Do these objectives bring the company closer to its goal?

* Are these objectives sufficient?

* Is anything in the list subject to misinterpretation?

* Are there any conflicts between these objectives?

* How do you know you are moving closer to the objectives unless you measure them?

Let's assume we are dealing with a commercial venture. The ultimate goal is to make as much money as possible, now and in the future.

Obviously, customer satisfaction is important. Should we always strive to improve it? Most of the time, yes. (Especially in the software industry, where most products compete by sucking less, rather than being better.) Not always though. Beyond a certain point, increased customer satisfaction will not increase sales. Something else will limit the organization's ability to sell its software.

Microsoft is a good example. Windows sales are limited by the number of personal computers in the world. Yes, other systems, like Linux and MacOS, do have a market share. However, the number of people using Linux and MacOS is considerably smaller than the number of people who do not own a personal computer at all.

Quality improvement is also double-edged. In at least one Toyota plant, improvement efforts have backfired. Employees have a quota of problems to find and fix each month. There are very few real problems left to find, so they resort to minor acts of sabotage in order to fill their quota. The improvement system has become the problem.

The point is that even though a high degree of customer satisfaction and high quality are very good goals to have, even they can backfire if taken out of context.

Next question, are the objectives listed sufficient, or are other things required? For example, is innovation a good thing in itself? I'd say not. Commodore was an innovative company. Commodore killed itself, partly because management did not understand how to take advantage of their innovative capability. To make use of innovative capability, a company must be good at strategic planning, tactical planning, and execution. Commodore sucked at all three.

Likewise, getting and retaining highly skilled people is not enough. An agile company must also invest in maintaining and developing the skills of its employees. Skills become less valuable over time. COBOL programmers know this.

Misinterpretation? Quality stands out. What high quality is depends on whom you ask. Ask a developer, and it probably has to do with code quality. (Code quality, by the way, is measurable.) Ask a user, and quality has to do with how the software enables the user to achieve her goals. These goals may extend far beyond the actual use of the software.
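To make "code quality is measurable" concrete, here is one crude but popular proxy, McCabe's cyclomatic complexity, approximated in a few lines of Python. (A sketch only; the set of branch nodes below is a simplification of the real metric.)

```python
import ast

# Node types that open a new decision path (a simplified reading
# of McCabe's definition, good enough for illustration).
BRANCH_NODES = (ast.If, ast.IfExp, ast.For, ast.While,
                ast.ExceptHandler, ast.BoolOp)

def cyclomatic_complexity(source: str) -> int:
    # Complexity = 1 + the number of decision points in the code.
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES)
                   for node in ast.walk(tree))

print(cyclomatic_complexity("if x > 0:\n    y = 1\nelse:\n    y = 2\n"))  # 2
```

The point is not that this particular number captures quality (it doesn't; the user's perspective above makes that clear), but that quality in the developer's sense can be expressed as a number and tracked over time.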

While on the subject of misinterpretation, what does "great recruiting" mean? A company may very well find exactly the kind of people it is looking for, but unless they are the kind of people who further the company's goal, the company will be in a worse situation than ever. If you do the wrong thing right, you become wronger. (See Martin Fowler's post about questionable recruitment strategies. See also his follow-up. By the way, Fowler is wrong about it not being possible to measure programmer productivity. It is possible, and it has been done. A much better question to consider is this: is measuring individual productivity useful? In the vast majority of cases, it isn't. That would be a topic for another post though.)

Are there any conflicting goals in the list? How about "reduced risk" and "innovation"? When you innovate, you do new, untried things. This increases risk.

Risk can be reduced by doing only what has worked before, and sticking to solving the same kind of problem over and over. That is the antithesis of innovation.

The answer is not to reduce risk, but to manage risk. Risk management and innovation are compatible. Risk reduction and innovation are not. (No, I'll resist the temptation to delve deeply into risk management, and the difference between managing risks and reducing them.)

And this seems rather obvious: setting different objectives does not obviate the need to measure. If you do not measure, how do you know you are moving closer to your objectives? You don't!

So, if you set customer satisfaction as an objective, but do not measure it, how do you know how satisfied your customers are?

Lest I forget, Judy is rightly concerned about uncertainty in measurements. For example, Function Point and Story Point measurements carry a great deal of uncertainty. However, this does not make such measurements useless. Measurements are always imprecise to some degree. Try measuring exactly when a train arrives at a station. You can't. If you can come up with an exact number, you aren't measuring, you are counting.

For a measurement to be useful, there must be two values: the mean value, and the degree to which individual measurement points differ from the mean. (Eli Schragenheim made this point very well in a Clarke Ching podcast recently.)

Consequently, for velocity or productivity measurements to be useful, it is necessary to know the boundaries of variation, i.e. the upper and lower control limits. Six Sigma people know this. Agilists need to learn it. (Me, I'm working on it. Slow going, but necessary.)
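As a minimal sketch of what that means in practice, assuming made-up sprint velocities and the common three-sigma convention (real control charts, such as XmR charts, estimate the limits more carefully):

```python
from statistics import mean, stdev

# Hypothetical story points completed in each of eight sprints.
velocities = [21, 25, 19, 23, 27, 22, 20, 24]

m = mean(velocities)
s = stdev(velocities)

# Three-sigma control limits: variation inside these bounds is
# ordinary noise; only points outside them signal a real change.
ucl = m + 3 * s
lcl = m - 3 * s

print(f"mean velocity: {m:.1f}")
print(f"control limits: {lcl:.1f} to {ucl:.1f}")
```

The practical consequence: a single sprint above or below the mean says nothing. Only a point outside the control limits, or a sustained shift, indicates that a process change actually changed the process.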

Learning is the essence of agile. Remember the manifesto: "We are uncovering better ways..."

In conclusion, the intermediate objectives of agile do lead to improved return on investment. What we need to do is prove it. To do that, we need to measure.

Comments

Anonymous said…
Hi,

Thanks for commenting on my post.

Just to be clear, my objection is not that agile should not be justified by hard numbers, but that I haven't seen a metric for productivity gain specifically that both stood up to systematic scrutiny and was economically feasible for the average business to collect.

I agree that the ultimate measure of success in business is profit. I understand that any business decision should somehow influence revenue gains or hard dollar cost savings.

The problem with justifying an agile adoption based on revenue gains is that there are so many other considerations that attempts to credit any single factor become dubious.

Cost savings need to be real, not theoretical. Who did you lay off? How much budget did you give back based on agile adoption?

Jeff Sutherland has a paper that does show significant cost savings using Scrum. The subject company was a CMMI Level 5 organization, and their commitment and rigor around measurement were higher than most companies would support, cost-justified by the government regulations they operate under. http://tinyurl.com/2hw75s

As you describe, in my experience, measures that are tied to some non-revenue-related abstract concept of "productivity", such as lines of code or story points, are more problematic than helpful.

If someone can propose a relevant metric that is economical for a small to medium size business to collect, that can be measured over time in small enough units to show increased performance due to specific process changes, and doesn't create more problems than it solves, I will be happy to consider it.
