Predicting future maintenance cost, and how we're doing it wrong

I believe (with Loyal Opposition passion!) that the software engineering field harbors many fallacies. Here I look at one with interesting implications: the belief that the way to predict future software maintenance cost, and to make software product replacement decisions, is to look at past cost data. I've seen this fallacy engaged in by almost everyone in the field, particularly managers of legacy software, and even more especially researchers looking into maintenance issues.

Why do I think this is a fallacy? We humans tend to predict the future on the basis of the past. After all, you can't predict the future by looking at the future. So we assume that what is about to happen will be similar to what has already happened. Sometimes that approach works; in fact, it works fairly often. But sometimes it doesn't work at all.

A tale of a tub

Two interesting questions come up fairly frequently during software maintenance:

- What will it cost us to keep maintaining this product?
- Is it time to consider replacing this product with a newer version?

Those are important questions. So it's not surprising that our old predictive friend, "let's base our beliefs about the future on the past," raises its head in this context. But does that method of prediction work for software maintenance?

To answer that, let's briefly consider how maintenance occurs. It consists largely, as we now know well, of enhancements. Therefore, looking at a legacy software product's repair rates will do us little good. What we should look at, if this predictive approach is to work at all, is the product's enhancement rate.

So, are there logical trends in enhancement rates? Not much data exists to help us answer this question, but there are some facts we can consider. Those who write about software maintenance have long said that a product's maintenance costs have a "bathtub" shape. When a product first goes into production, there's a lot of maintenance.
One reason is that the users have fresh insight into the problem they're trying to solve and discover a host of new related problems they'd like worked out. Another reason is that heavy initial use tends to flush out many errors. Time passes, and we descend into the stable, low-maintenance middle of the maintenance process. Enhancement interest drops off, the bugs are pretty well under control, and the …