Discussions about decision-making, and about the kinds of knowledge that could and should be used for this purpose within the workplace, have become prominent in current organisational literature. These issues are high on the agenda of both the business community and not-for-profit agencies. Research and observation suggest that a good deal of organisational decision-making has, in the past, been based on the judgements of authority figures. It was long assumed that senior managers had the sole right to make decisions, relying on a combination of experience, political know-how and the advice of trusted others in making choices about the present and future of the agencies for which they were responsible. This situation is changing. Most organisations now recognise that important executive-level decisions should involve others besides senior managers, and I have previously argued that evaluators should work cooperatively in providing ‘just in time’ information for leaders (Owen & Lambert 1998). This implies that evidence and empirically based knowledge have the capacity to enhance decision-making and the effectiveness of organisations, for example by making managers and other practitioners more aware of the context in which they operate, helping them understand the needs of their clients, determine the effects of major initiatives, and remain accountable to funding sources. In this paper, I argue that the creation of an evaluation culture leads to a change and improvement in the ‘mix’ of ‘working knowledge’ used by those responsible for applying information to solve organisational problems. An evaluation culture can be regarded as a commitment to roles for evaluation in decision-making within an organisation (Owen & McDonald 1999). Such a culture entails systematic enquiry that is initiated and controlled by members of the organisation and carried out with the explicit purpose of contributing to the stock of its working knowledge. Enquiry of this nature is not undertaken routinely, but in response to the need for empirically based knowledge to contribute to issues regarded as strategic.
[1] Lorna Earl et al. The Case for Participatory Evaluation, 1992.
[2] J. Owen et al. Acquiring Knowledge of Implementation and Change, 1994.
[3] E. Ziegel et al. Balanced Scorecard, 2019, Encyclopedia of Public Administration and Public Policy, Third Edition.
[4] Michael Huberman et al. Research utilization: The state of the art, 1994.
[5] T. Guskey. Staff Development and the Process of Teacher Change, 1986.
[6] Matthias Wingens et al. Toward a General Utilization Theory, 1990.
[7] Robert O. Brinkerhoff. Using evaluation to transform training, 1989.
[8] Arnold J. Love et al. Internal Evaluation: Building Organizations from Within, 1991.
[9] R. Havelock. Planning for innovation through dissemination and utilization of knowledge, 1969.
[10] Robert E. Stake et al. Program Evaluation, Particularly Responsive Evaluation, 1983.
[11] John M. Watkins et al. A postmodern critical theory of research use, 1994.
[12] K. Louis. Reconnecting Knowledge Utilization and School Improvement: Two Steps Forward, One Step Back, 2005.
[13] J. Owen et al. Evaluation and the Information Needs of Organizational Leaders, 1998.
[14] E. Rogers et al. Diffusion of innovations, 1964, Encyclopedia of Sport Management.
[15] Donald W. Compton et al. The art, craft, and science of evaluation capacity building, 2002.
[16] Brian J. Caldwell et al. Leading the self-managing school, 1992.
[17] J. Owen et al. Creating an Evaluation Culture in International Development Cooperation Agencies (special issue: International Educational Cooperation, Perspectives of Overseas Experts [in English]), 1999.