Towards the improved treatment of generalization of knowledge claims in IS research: drawing general conclusions from samples

This paper presents a framework for justifying generalization in information systems (IS) research. First, drawing on evidence from an analysis of two leading IS journals, we show that the treatment of generalization in many empirical papers in those journals is unsatisfactory. Many quantitative studies need clearer definitions of their populations and fuller discussion of how ‘significant’ statistics and the use of non-probability sampling affect support for their knowledge claims. Many qualitative studies need fuller discussion of the boundary conditions for their sample-based general knowledge claims. Second, we present a new framework that defines eight alternative logical pathways for justifying generalizations in IS research. Three key concepts underpin the framework: the need for researcher judgment when making any claim about the likely truth of sample-based knowledge claims in other settings; the importance of sample representativeness, assessed with respect to the knowledge claim of interest; and the desirability of integrating a study's general knowledge claims with those from prior research. Finally, we show how the framework may be applied by researchers and reviewers. Following the pathways in the framework has the potential to improve both the rigour and the practical relevance of IS research.
