Realist Evaluation

Page history last edited by Alexandra Pittman 9 years, 7 months ago

 

Ray Pawson and Nick Tilley. 2004. “Realist Evaluation.”

 

Realist evaluations are theory-based evaluations that articulate how program or policy interventions foster change. Instead of asking ‘What works?’ or ‘Does this program work?’, they aim to answer: ‘What works for whom, in what circumstances, in what respects, and how?’ Four principles underpin this approach's understanding of the complexity of change: programs are theories developed to address some social issue; programs are embedded in social systems and aim to change relationships; programs require active shifts on the part of beneficiaries; and programs are by nature open systems that interact with, and are affected by, a variety of external factors. The aim of the evaluation is to gain a better idea of how the intervention works to produce diverse and multiple outcomes, in order to strengthen programs and learning. 

 

The findings are pragmatic: they do not seek to explain everything or to attribute change solely to the intervention. Rather, the aim is to make sense of the particular outcome patterns observed by testing as many alternative explanations as possible.

“Realism also means pragmatism, of course, and this is another feature of the perspective. Having a subject matter composed of double and triple doses of complexity does not mean that we are forever blind to the consequences of interventions. It does mean, however, that our understanding will always be partial and provisional and, consequently, that we must be somewhat modest in our ambitions for evaluation.” (p. 16)

 

There are four concepts that form the basis for understanding and assessing programs in a realist evaluation:

  1. Mechanisms describe what actually produces the program effects; they are the pivots or levers that make change happen and outline the logic of the program theory.
  2. Context describes the elements, conditions, or social relationships that have a bearing on the program mechanisms.
  3. Outcome patterns include the intended and unintended program results as stimulated through the different mechanisms and contexts. For example, different outcome patterns may emerge from analysis of different implementations, regional variations, socio-demographic variations, etc., allowing for a better understanding of complex interventions.
  4. Context-mechanism-outcome (CMO) pattern configurations allow for the testing of different interventions under different combinations of conditions. This ‘configurational’ approach to causality aims to show that particular outcomes will likely result from the alignment of a combination of attributes. Realist evaluation uses standard scientific methods to test hypotheses about the program theories, with the aim of understanding the outcome patterns, successes, and failures across different subgroups.
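The CMO configuration described above can be sketched as a simple data structure. This is only an illustrative sketch, not part of Pawson and Tilley's method; the program, contexts, mechanisms, and outcomes below are hypothetical.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class CMOConfiguration:
    """A context-mechanism-outcome (CMO) configuration: a hypothesis that,
    in a given context, a mechanism produces a particular outcome pattern."""
    context: str    # conditions under which the program operates
    mechanism: str  # what actually produces the effect
    outcome: str    # the expected (intended or unintended) result


# Hypothetical CMO hypotheses for a neighborhood-watch program, contrasting
# how the same intervention may work differently in different contexts.
configurations = [
    CMOConfiguration(
        context="high-trust neighborhood with stable residency",
        mechanism="residents increase informal surveillance",
        outcome="burglary rates fall",
    ),
    CMOConfiguration(
        context="transient neighborhood with low social cohesion",
        mechanism="signage alone, with little resident engagement",
        outcome="little or no change in burglary rates",
    ),
]

for c in configurations:
    print(f"IF {c.context} THEN {c.mechanism} -> {c.outcome}")
```

Framing the evaluation as a set of such configurations makes explicit that each CMO pairing is a testable hypothesis, rather than a single overall claim that the program "works".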

 

In general, Pawson and Tilley (2004:19) suggest that the outcome of the evaluation should indicate:

 

  • that a particular intervention works in quite separate ways
  • that it gets implemented in different ways
  • that it is more effective with some groups than others
  • that it will find more use in one location rather than another
  • that it has intended and unintended consequences
  • that its effects are likely to be sustained or taper off.
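The bullet points above amount to disaggregating results by subgroup, site, and implementation instead of reporting one overall effect. A minimal sketch of that kind of breakdown, using entirely hypothetical participant records:

```python
from collections import defaultdict

# Hypothetical per-participant records from a program evaluation.
records = [
    {"subgroup": "youth", "site": "urban", "improved": True},
    {"subgroup": "youth", "site": "urban", "improved": True},
    {"subgroup": "youth", "site": "rural", "improved": False},
    {"subgroup": "adult", "site": "urban", "improved": False},
    {"subgroup": "adult", "site": "rural", "improved": True},
    {"subgroup": "adult", "site": "rural", "improved": True},
]

# Tally outcomes by (subgroup, site) rather than overall, in the spirit
# of asking "what works for whom, in what circumstances".
tallies = defaultdict(lambda: [0, 0])  # key -> [improved, total]
for r in records:
    key = (r["subgroup"], r["site"])
    tallies[key][1] += 1
    if r["improved"]:
        tallies[key][0] += 1

for (subgroup, site), (wins, total) in sorted(tallies.items()):
    print(f"{subgroup} / {site}: {wins}/{total} improved")
```

Even this toy breakdown surfaces the pattern a single aggregate rate would hide: the intervention appears effective for some subgroup-site combinations and not others.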

 

Since the evaluation should help institutions tailor, adapt, or implement programs, findings should:[1] 

 

  • Show how combinations of attributes need to be in place for a program to be effective. Optimal alignments of implementation schedules, target groups and program sites should be suggested.
  • Have the potential for transferability by using concepts that link to other program theories and thus rest on further bodies of findings. Conclusions should evoke similarities and differences with existing evidence.
  • Bring alternative explanations to the fore in order to sift and sort them. Program building involves making choices between limited alternatives, and findings should speak to, and help refine, those choices.

 

The realist approach challenges typical policy inquiries; the boxes on p. 21 of the article illustrate how.

 

 

The article also has a useful tool to guide thinking and analysis on if and how an intervention could be applied in a different context. 

 

Strengths:

 

  • Realist evaluations are concerned with how change happens and set out specific guidelines for understanding under what conditions and with whom interventions make a difference, adding greater nuance for capturing complex social phenomena.
  • Realist evaluations aim to carve out a middle ground between making universal claims of effectiveness and only speaking to localized knowledge specific to one intervention. 
  • There is deep attention to context in its many forms and exploration of the way in which different contextual factors may interact to influence outcomes in a variety of ways. 
  • The focus on pragmatism, on the true complexity of social change, and on the theories that underlie interventions challenges the field to think differently about measurement and about the most useful and relevant information to gather.
  • There is an underlying focus on the importance of assessment for learning purposes. 

 

Weaknesses (or not designed for):

 

  • The system for conducting realist evaluations is complex and has multiple steps. The approach requires a high level of knowledge regarding theory development and social scientific hypothesis testing. It would not be suitable for contexts or organizations without high levels of research capacity. 
  • The approach does not necessarily aim to attribute change to the intervention, but neither does it address the importance of measuring or assessing contributions to change. 
  • There is not an embedded form of stakeholder participation in the evaluation design, although the approach does not necessarily preclude such participation. 
  • Due to the specificity of evaluation findings and the outcome patterns, comparisons across contexts and over time may be more difficult. 

 


[1] Reproduced from p.19.

 

 
