Paradigm-Independent Meta-Criteria for Social & Behavioural Research

In this paper, I flesh out a network of inter-related meta-criteria for evaluating the quality, coherence and value of social and behavioural science research, independent of the paradigm(s) guiding the research. The network emerges, in part, from principles of complexity science and considerations of multimethodology. I argue that the ultimate test of research quality and contribution lies in its power to convince those who stand to be influenced by it. ‘Convincingness’ emerges from the confluence of the research act and the telling of the story about that act and therefore forms the central meta-criterion to be satisfied. Judgments of ‘Convincingness’ are influenced by a set of 12 inter-connected meta-criteria, each targeting an aspect of research act or story. The network is displayed in both matrix and mindmap formats to assist researchers and reviewers in its application. The network is explicitly designed to move researchers’ thinking beyond the boundaries of specific research traditions or paradigms and their localised assumptions and definitions of research ‘validity’. I argue that focused consideration of the meta-criteria can greatly assist both the planning and the evaluation of social and behavioural science research. In the social and behavioural sciences, there is a continuing debate about the criteria one should use to judge the research quality, impact and contribution. The crux of this debate has centred on the different meanings held for the criteria of ‘validity’ and ‘generalisability’ within various research traditions or paradigms (see, for example, the discussions in Beer 1993, Crotty 1998 and Thomas 2006). The debate has yet to arrive at a consensus view. Furthermore, in emerging conceptualisations of social and behavioural research in terms of multiple paradigms, multimethodology and triangulation (see, for example, Brewer & Hunter 2006, and Onwuegbuzie & Teddlie 2003), lines of distinction are becoming increasingly blurred. Meanings for ‘internal validity’ and ‘external validity’, two of the dominant criteria in the positivistic or ‘normative’ paradigm, have been borrowed, distorted and recast to fit different expectations and paradigm assumptions. LeCompte and Goetz (1982) and Healy and Perry (2000), for example, demonstrated this reshaping process when they generated meanings for internal and external validity to fit the contexts of interpretive ethnographic research and qualitative marketing approaches within the realism paradigm. Cooksey (2001) showed that it was beneficial to apply complexity science considerations and principles to social and behavioural science research. Research, from a complexity science perspective, is an emergent activity evolving from the dynamic and contextualised intersection of the researcher and the researched in the context of one or more sets of guiding assumptions (i.e., paradigms). This encourages a 'multi-' mindset for research: multidisciplinary, multi-paradigm, multi-methodology (see Cresswell & Plano-Clark 2007 and Mingers & Gill 1997). An important question that emerges from this complexity perspective is how should the quality, coherence and value of research evaluated and judged? Criteria typically applied within one

[1] E. Guba et al., Paradigmatic Controversies, Contradictions, and Emerging Confluences, 2005.

[2] M. LeCompte et al., Problems of Reliability and Validity in Ethnographic Research, 1982.

[3] J. Scott, Peer Review for Journals: Evidence on Quality Control, Fairness, and Innovation, 1997.

[4] M. Hojat et al., Impartial Judgment by the “Gatekeepers” of Science: Fallibility and Accountability in the Peer Review Process, Advances in Health Sciences Education: Theory and Practice, 2003.

[5] R. Cooksey, What Is Complexity Science? A Contextually Grounded Tapestry of Systemic Dynamism, Paradigm Diversity, Theoretical Eclecticism, 2001.

[6] Nahid Golafshani et al., Understanding Reliability and Validity in Qualitative Research, 2003.

[7] Francis A. Beer, Validities: A Political Science Perspective, 1993.

[8] A. Hunter et al., Foundations of Multimethod Research: Synthesizing Styles, 2006.

[9] Kevin Brazil et al., A Strategy to Identify Critical Appraisal Criteria for Primary Mixed-Method Studies, Quality & Quantity, 2004.

[10] M. Crotty, The Foundations of Social Research: Meaning and Perspective in the Research Process, 1998.

[11] John Mingers et al., Multimethodology: The Theory and Practice of Combining Management Science Methodologies, 1997.

[12] George R. Franke et al., Handbook of Mixed Methods in Social & Behavioral Research (Book), 2004.

[13] Ann Taket et al., Diversity Management: Triple Loop Learning, Journal of the Operational Research Society, 1996.

[14] A. Onwuegbuzie et al., Mixed Methods Research: A Research Paradigm Whose Time Has Come, 2004.

[15] C. Seale et al., Quality in Qualitative Research, 1999.

[16] C. Perry et al., Comprehensive Criteria to Judge Validity and Reliability of Qualitative Research within the Realism Paradigm, 2000.