Modeling technology innovation: How science, engineering, and industry methods can combine to generate beneficial socioeconomic impacts

Background: Government-sponsored science, technology, and innovation (STI) programs support the socioeconomic aims of public policies, in addition to expanding the knowledge base. For example, investments in research and development (R&D) programs are expected to yield beneficial healthcare services and devices, on the assumption of a causal link between R&D and commercial innovation. Such programs are increasingly held accountable for evidence of impact: innovative goods and services resulting from R&D activity. However, the absence of comprehensive models and metrics skews evidence gathering toward bibliometrics on research outputs (published discoveries), with less attention to transfer metrics on development outputs (patented prototypes) and almost none to econometrics on production outputs (commercial innovations). This imbalance is particularly problematic given the stated intent of such programs, since most measurable socioeconomic benefits flow from the last category of outputs.

Methods: This paper proposes a conceptual framework that integrates all three knowledge-generating methods into a single logic model, useful for planning, obtaining, and measuring the intended beneficial impacts through the implementation of knowledge in practice. Additionally, integrating the Context-Input-Process-Product (CIPP) model of evaluation proactively builds relevance into STI policies and programs while sustaining rigor.

Results: The resulting logic-model framework explicitly traces the progress of knowledge from inputs, through the three knowledge-generating processes and their respective knowledge outputs (discovery, invention, innovation), to the intended socially beneficial impacts. It is a hybrid model for generating technology-based innovations, merging best practices in new product development with a widely accepted knowledge-translation approach. Given the emphasis on evidence-based practice in the medical and health fields and “bench to bedside” expectations for knowledge transfer, sponsors and grantees alike should find the model useful for planning, implementing, and evaluating innovation processes.

Conclusions: High-cost, high-risk industries such as healthcare require the market deployment of technology-based innovations to benefit domestic society in a global economy. An appropriate balance of relevance and rigor across research, development, and production is crucial to optimize the return on public investment in such programs. The technology-innovation process needs a comprehensive operational model so that public funds can be allocated effectively and socioeconomic benefits accomplished deliberately and systematically.
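As a purely illustrative sketch (not part of the paper itself), the logic model's flow from R&D inputs through the three knowledge-generating processes to their respective outputs can be expressed as a simple data structure; the class, field, and function names below are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical sketch of the three-stage logic model described above.
# Stage names follow the abstract (discovery, invention, innovation);
# everything else here is an illustrative assumption.

@dataclass
class Stage:
    method: str  # knowledge-generating method
    output: str  # knowledge output it produces

LOGIC_MODEL = [
    Stage(method="scientific research", output="discovery"),
    Stage(method="engineering development", output="invention"),
    Stage(method="industrial production", output="innovation"),
]

def trace(inputs: str) -> list:
    """Trace inputs through each stage to its knowledge output."""
    trail = [inputs]
    for stage in LOGIC_MODEL:
        trail.append(stage.output)
    return trail

print(trace("R&D inputs"))
```

The point of the sketch is only that the model is sequential and explicit: each stage's output is the next stage's input, which is what makes progress toward socioeconomic impact traceable and measurable.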
