NCGIA (National Center for Geographic Information and Analysis)

Testing Technology Transfer Hypotheses in GIS Environments Using a Case Study Approach

Although they are perhaps the most commonly used and popular research methods, case studies and other qualitative forms of social science research have long been criticized for their lack of generalizability to the larger population and their lack of sampling controls. These criticisms may be addressed, and solutions constructed, by applying the rules of scientific method within case research environments. By using more logically consistent, rigorous, and systematic approaches, some of the shortcomings of case study methods may be overcome. This article draws from the management information systems (MIS) and organization behavior (OB) literature to make suggestions on how to conduct and evaluate GIS case study research. It reviews the requirements of natural science research models, particularly as described by Lee (1989), and provides examples of how the substance of those requirements may be met in the context of GIS case studies.

Introduction

Case study methodologies have been suggested within the GIS community as appropriate for researching a range of GIS implementation, utilization, and diffusion issues (Zwart 1986, Niemann et al. 1988, NCGIA 1989, Craig 1989, Azad 1990). These issues include: identifying the determinants of adoption outcomes; isolating critical adoption factors and processes for particular classes of users; investigating the stages at which change agent, opinion leader, and champion influences are most critical; assessing use success; determining the levels in the organizational structure at which GIS products are used, and to what extent; identifying the forms of decision making that have utilized GIS; identifying factors and processes leading to rejection of previously embraced GIS; and identifying organizational and societal consequences of GIS (Onsrud and Pinto 1991). Case studies examine phenomena in their natural settings and typically involve the collection of data by several different means from a range of sources.
When used as the sole research heuristic, case studies have been criticized for their limitations in terms of generalizability to the larger population and their lack of sampling controls (Piore 1979, Bariff and Ginzberg 1982, Bonoma 1985). It is generally acknowledged in the social science research community that no single research methodology is most appropriate for all research applications (Williams, Rice & Rogers 1988). It is also generally agreed that using multiple forms of research to investigate an issue leads to better and more reliable results than using a single methodology (Yin and Heald 1975, Cook and Campbell 1979, McClintock, Brannon & Maynard-Moody 1979, Kaplan and Duchon 1988). Case studies are often included, and occupy a lead position, in the suite of methods used by researchers to evaluate intervention strategies within organizations. However, there is a need to clarify the explicit methodological means by which case studies within GIS application environments should be carried out.

For purposes of this paper, a case study is an examination of a phenomenon in which the primary purpose of the observer has been to carry out research rather than to implement a system or improve an operational environment. That is, since the techniques suggested in this article are intended for individuals testing theories relating to the efficacy of intervention strategies, they will be less useful to practitioners who are implementing systems. Of course, the overall intent of investigating various case study techniques is to aid researchers in building a relevant body of knowledge which eventually will aid practitioners in their system implementation and improvement efforts.

Let us assume that the general GIS practitioner community is confronted with an implementation issue for which the most appropriate intervention strategies are difficult to determine.
For instance, organizations throughout the general GIS community currently are involved in determining which processes they should follow and which factors they should consider to ensure that their GIS systems are used to their greatest benefit over time. Another example is determining which policies agencies should pursue in regard to public access to their GIS databases and which organizational and legal tools should be used in carrying out those policies. When confronted with these and similar problems, the practitioner community engages in trial and error processes in attempting to find out "what works." The brainstorming and testing engaged in by the practitioner community result in a substantial body of valuable knowledge. This knowledge base of successes and dead ends is communicated by various methods throughout and among the community. However, as the experience base grows and system implementations become more diverse and complex, large numbers of conflicting and competing messages may be received on what works and what does not. At this point in the diffusion of a technology (i.e., when the appropriate decision routes for addressing an issue are no longer intuitively obvious), social science researchers can play an important role in providing direction to the user community. Consequently, there is a need for social science researchers to construct research questions from the existing experience base and to develop a body of research that builds falsifiable hypotheses and tests them through rigorous methods consistent with scientific canons.

1 This article is based upon work partially supported by the National Science Foundation under Grant No. SES-88-10917. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
Utility of Case Study Approaches

The traditional phases of accruing knowledge within "learned" settings are often expressed as exploration, hypothesis generation, and hypothesis testing (Glaser & Strauss 1967). In the exploration phase, researchers descriptively study how organizations have dealt with the constraints imposed upon them. These knowledge capture studies form the basis for developing theories regarding phenomena and for hypothesizing prescriptive strategies. After deriving the theories and generating hypotheses in support of them, the researcher proceeds to the hypothesis testing phase.

Conventional thinking in the MIS literature holds that case study approaches are highly appropriate for the exploration and hypothesis generation phases but are generally ill-suited to the hypothesis testing phase (Roethlisberger 1977, Bonoma 1983, Benbasat 1984). The argument made is that, although disconfirmation of a hypothesis might be shown by a single case, reasonable confirmation of a hypothesis requires analytic deductive testing of a representative and substantial sample (Benbasat, Goldstein & Mead 1987). By this reasoning, data must be gathered in a form suitable for quantitative processing, and for a significant number of cases, in order to evidence confirmation of a hypothesis reliably (Bariff and Ginzberg 1982, Dickinson, Benbasat & King 1982, Kauber 1986).

A recent article by Lee (1989) challenges the conventional wisdom with regard to the use of case studies in the hypothesis testing stage. He argues that the data or results generated from case studies need not be quantitative, statistical, or mathematical in order to be analytically rigorous or "scientific." We believe the application of his approach to the evaluation of GIS implementations could provide a useful and practical means for the GIS community to better isolate those factors and processes which are critical for inclusion in prescriptive implementation and improvement strategies.
Croswell (1989) lists numerous obstacles to successful GIS implementation and categorizes them into eleven major groups: apathy/fear of change; funding availability or justification; planning/management support; organization coordination and conflicts; training/understanding of technology; staffing availability/recruitment; software complexity/maturity of technology; data communication/networking; data structure and source materials; data and software standards/data integration; and miscellaneous. The obstacles listed in each group were identified from a wide range of GIS literature extending from 1985 through 1988. From the diffusion of innovations literature and GIS tracer case studies, Onsrud and Pinto (1991) developed a similar list of factors potentially critical to successful adoption of GIS.

Typical questions which arise in regard to such lists include the following: Which of the factors cited may be most critical for a particular class of users or for particular forms of organizational structure? To what extent is it necessary to consider each item? Are there items on the list which might be "perpetuated myth" rather than actual requirements for success? Are there items on the list which may improve the likelihood of initial acquisition but be counterproductive to long term success, or vice versa? Appropriate application of case study approaches could aid in finding the answers to these and similar questions.

Lee's Scientific Case Study Methodology

"Few propositions in science are directly verifiable as true and none of the important ones are" (Copi 1986, as reported in Lee 1989). Thus, most theories, whether in the natural or social sciences, may be tested only indirectly. The indirect testing of a theory consists of deriving from the theory one or more suppositions capable of being tested directly and then comparing the actual outcomes against those which were predicted.
In addition to making this comparison, the scientist must also provide evidence that the outcomes are attributable to the supposition being tested and not to other causes. Lee argues that a single case study meets the requirements of scientific method if it adequately addresses four methodological problems.

[1] Jeffrey K. Pinto, et al. Diffusion of geographic information innovations, 1991, Int. J. Geogr. Inf. Sci.

[2] S. Dubin. How many subjects? Statistical power analysis in research, 1990.

[3] Magid Igbaria, et al. Correlates of user satisfaction with end user computing: An exploratory study, 1990, Inf. Manag.

[4] Nancy J. Obermeyer, et al. Bureaucratic factors in the adoption of GIS by public organizations: Preliminary evidence from public administrators and planners, 1990.

[5] Donald R. Chand, et al. Diffusing software-engineering methods, 1989, IEEE Software.

[6] K. Kearns. Communication Networks Among Municipal Administrators, 1989.

[7] Allen S. Lee. A Scientific Methodology for MIS Case Studies, 1989, MIS Q.

[8] Allen S. Lee. Case Studies as Natural Experiments, 1989.

[9] Steven P. French, et al. Computer Adoption and Use in California Planning Agencies: Implications for Education, 1989.

[10] Michael F. Goodchild, et al. National Center for Geographic Information and Analysis, 1998.

[11] Bonnie Kaplan, et al. Combining Qualitative and Quantitative Methods in Information Systems Research: A Case Study, 1988, MIS Q.

[12] E. Rogers, et al. Innovations and Organizations, 1988.

[13] Claudette M. Peterson, et al. The dark side of office automation: how people resist the introduction of office automation technology, 1988.

[14] Jane M. Carey, et al. Human factors in management information systems, 1988.

[15] D. Slevin, et al. The Influence of Organization Structure on the Utility of an Entrepreneurial Top Management Style, 1988.

[16] Suzanne Rivard, et al. Factors of success for end-user computing, 1988, CACM.

[17] Wanda J. Orlikowski, et al. A Short Form Measure of User Information Satisfaction: A Psychometric Evaluation and Notes on Use, 1987, J. Manag. Inf. Syst.

[18] E. Rogers, et al. Research methods and the new media, 1988.

[19] Hugh W. Calkins, et al. The economic evaluation of implementing a GIS, 1988, Int. J. Geogr. Inf. Sci.

[20] Izak Benbasat, et al. The Case Research Strategy in Studies of Information Systems, 1987, MIS Q.

[21] Suzanne Rivard, et al. Successful Implementation of End-User Computing, 1987.

[22] Dorothy Leonard-Barton, et al. Implementing Structured Software Methodologies: A Case of Innovation in Process Technology, 1987.

[23] Louis Raymond, et al. Validating and applying user satisfaction as a measure of MIS success in small organizations, 1987, Inf. Manag.

[24] P. Zwart. User requirements in land information system design: Some research issues, 1986.

[25] K. McCardle. Information Acquisition and the Adoption of New Technology, 1985.

[26] M. Saxton, et al. Gaining Control of the Corporate Culture, 1985.

[27] T. Bonoma. Case Research in Marketing: Opportunities, Problems, and a Process, 1985.

[28] Kenneth L. Kraemer, et al. People and computers: the impacts of computing on end users in organizations, 1985.

[29] William I. Gorden. Corporate Cultures: The Rites and Rituals of Corporate Life, 1984.

[30] Blake Ives, et al. The measurement of user information satisfaction, 1983, CACM.

[31] Izak Benbasat, et al. The MIS area: problems, challenges, and opportunities, 1982, DATB.

[32] Marshall W. Meyer, et al. Power in Organizations, 1982.

[33] A. Raub. Correlates of Computer Anxiety in College Students, 1981.

[34] Lawrence B. Mohr, et al. Toward a Theory of Innovation, 1979.

[35] T. Mitchell, et al. The Effects of Different Organizational Environments Upon Decisions About Organizational Structure, 1978.

[36] Jay R. Galbraith, et al. Strategy implementation: the role of structure and process, 1978.

[37] G. Huber, et al. Relations Among Perceived Environmental Uncertainty, Organization Structure, and Boundary-Spanning Behavior, 1977.

[38] A. Strauss, et al. The discovery of grounded theory: strategies for qualitative research, Aldine de Gruyter, 1968.