An empirical study of software project bidding

This paper reports on a real-life bidding process in which 35 companies bid for the same contract. The process consisted of two separate phases: a prestudy phase and a bidding phase. In the prestudy phase, 17 of the 35 companies provided rough price indications based on a brief, incomplete description of the user requirements. In the bidding phase, all 35 companies submitted bids based on a more complete requirements specification, which described a software system with substantially more functionality than the system indicated in the prestudy phase. The main result of the study is that the 17 companies involved in the prestudy phase submitted bids that were, on average, about 70 percent higher than those of the other companies, even though all companies based their bids on the same requirements specification. We propose an explanation for this difference that is consistent with prospect theory and the precautionary bidding effect. A possible implication of our findings is that software clients should not request early price indications based on limited and uncertain information when the final bids can be based on more complete and reliable information.
