Multidimensional quantified goals should direct software design processes

o A numerical measure of performance, such as the period of a random number generator, the average number of tries to find a key in a table, or the waiting time in a queue.

o A comparative measure of performance, such as the comparison of table search methods or the comparison of paging algorithms.

The choice of an algorithm can dramatically affect the performance and dependability of a system. The designer can very profitably use analytical tools to guide design decisions, and then use the numerical results to support performance claims to users or clients. We see more and more users who demand more than just "hand-waving" arguments to justify the investment in a system. The days of "There, that should do it" algorithm design are drawing to a close. We need to design algorithms which are analyzable, and then use the available tools to analyze them.

The April SEN provokes me to contribute. I feel that the really central software engineering issues are not being attacked directly enough, and that a lot of space is being spent on low-priority issues such as program correctness proofs and word wars between academics. Let me attempt to report my opinion of what we need to do. I have been practicing it for several years and find a highly positive response and interest among clients and course participants in the US and Europe.

The first step is to make use of formal, quantified, and easily measurable "quality" goals for a project. The technology for doing this is explained in (1) and (2). The "functional" specifications (what the system is going to do) are taken for granted as being specified in some manner. The quality specifications (how well, when, how much, how fast, how easily), and in particular the "critical" ones (ones that could result in a failed
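The numerical and comparative measures of performance mentioned earlier can be made concrete with a small counting experiment. The sketch below is illustrative only: it compares the average number of comparisons ("tries") needed to find a key in a sorted table by linear search versus binary search; the table size and contents are assumptions, not taken from the text.

```python
# Illustrative sketch: count comparisons for two table search methods.
# Table size (1024) and keys are arbitrary choices for the example.

def linear_probes(table, key):
    """Number of comparisons linear search makes to locate key."""
    for i, item in enumerate(table, start=1):
        if item == key:
            return i
    return len(table)

def binary_probes(table, key):
    """Number of comparisons binary search makes to locate key."""
    lo, hi, probes = 0, len(table) - 1, 0
    while lo <= hi:
        probes += 1
        mid = (lo + hi) // 2
        if table[mid] == key:
            return probes
        elif table[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return probes

table = list(range(1024))  # sorted table of 1024 keys
avg_lin = sum(linear_probes(table, k) for k in table) / len(table)
avg_bin = sum(binary_probes(table, k) for k in table) / len(table)
print(f"average probes, linear search: {avg_lin:.1f}")  # about n/2
print(f"average probes, binary search: {avg_bin:.1f}")  # about log2(n)
```

Numbers like these, rather than "hand-waving," are exactly the kind of analytical result a designer can put in front of a client to justify the choice of one table search method over another.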
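The idea of formal, quantified, and easily measurable quality goals can be sketched in code. This is a minimal illustration, not the notation of references (1) and (2): it assumes each quality attribute carries a measuring scale, a worst-acceptable level (the "critical" boundary), and a planned level, so that a measured value can be judged objectively. All field names and values here are hypothetical.

```python
# Minimal sketch of a quantified quality goal; all names and values
# are illustrative assumptions, not the referenced notation.
from dataclasses import dataclass

@dataclass
class QualityGoal:
    attribute: str           # which quality we mean
    scale: str               # the units we measure it in
    worst_acceptable: float  # beyond this level, the project has failed
    planned: float           # the level we are designing toward

    def evaluate(self, measured: float) -> str:
        """Judge a measured value against the goal.
        Assumes lower values are better (e.g. response time)."""
        if measured > self.worst_acceptable:
            return "FAIL: critical goal missed"
        if measured <= self.planned:
            return "OK: planned level met"
        return "WARN: acceptable, but short of plan"

goal = QualityGoal(
    attribute="mean response time",
    scale="seconds from ENTER to first screen character",
    worst_acceptable=5.0,
    planned=2.0,
)
print(goal.evaluate(1.8))  # OK: planned level met
print(goal.evaluate(3.5))  # WARN: acceptable, but short of plan
print(goal.evaluate(6.0))  # FAIL: critical goal missed
```

The point of the exercise is that "how well" and "how fast" become testable statements: a measurement either meets the planned level, falls in the tolerated band, or trips the critical limit.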