Initial Research Abstracts from SHAREHD

The realist evaluation team have presented 2 oral and 2 poster presentations at the European Evaluation Society and Health Service Research UK conferences. See the abstracts and posters below. For more information contact Dr Steven M Ariss or Dr Joanna Blackburn.

ABSTRACT TITLE: Combining Realist with Developmental and Utilization-Focused Evaluation Methodology: conflict or symbiosis?

  • Rethinking evaluation methods, design and criteria
  • Combining methods in evaluation

The central theme of this paper is the combination of popular evaluation approaches that have incorporated complexity concepts. It will reflect on more than ten years of applying and teaching complex evaluation methods in a wide range of settings. This work was largely inspired by an early experience of producing a final report for a two-year programme evaluation, a report that was treated as inconvenient and comprehensively ignored; an experience that I am sure is shared by many evaluators. During this evaluation, some approaches proposed by Michael Quinn Patton in his (1986) book 'Utilization-Focused Evaluation' (UFE) were adopted, but this was 'too little, too late' to make a noticeable impact on the usefulness of this piece of work.

In the following decade, an intensive programme of complex evaluations was carried out, in which UFE approaches were adopted alongside Realist Evaluation methods to increase the focus on 'usefulness'. The development of these combined methodological approaches, and their application in complex settings, was accelerated by the publication, in 2011 and 2013 respectively, of Patton's 'Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use' and Pawson's 'The Science of Evaluation: A Realist Manifesto'.

There are numerous differences between the approaches of Pawson and Patton, perhaps most notably the former's emphasis on scientific methodology and the latter's focus on pragmatics and usefulness. Key similarities are the central importance of developing theories of action and of conceptualising causal linkages in complex settings.

The combination of Realist Evaluation approaches with other approaches has been a contentious issue, which has often divided the community. This is particularly the case when epistemological or ontological foundations appear to be incompatible. For instance, the ongoing debate concerning the combination of Realist methods with trials-based methodology has been a rich source of controversy.

This paper suggests that, far from being incompatible, these approaches offer greatly beneficial synergies when combined. Many years of experience in applying and teaching Realist Evaluation methods have highlighted issues around prioritisation, boundary-setting and decision-making that can potentially be resolved with this proposed combination of approaches. Perhaps the greatest advantage is the potential for a systematic focus on usefulness to underlie the theoretical development of CMOCs (Context, Mechanism, Outcome Conjectures), thereby helping to define the role of the evaluator, providing focus and boundaries for CMOC development, and preventing 'theory-creep' and evaluator bias.

ABSTRACT TITLE: Using social media for programme theory development: the role of emerging technology in complex programme evaluation

  • Rethinking evaluation methods, design and criteria
  • Data collection, analysis and reporting issues particularly in challenging contexts
  • Role of ICT and emerging technologies including social media, big data, location systems, cyber ethnography, etc.

Crises in areas of society where resilience begins to break down lead to situations in which innovative approaches to improving efficiency are increasingly sought and applied, whilst the resources available for engagement in evaluation come under increasing pressure. It is therefore necessary for evaluators to join the drive for greater efficiency, by developing more efficient research methods and streamlining engagement with respondents.

Evaluators frequently find themselves operating in environments, and dealing with topics, of which they have limited prior knowledge or experience. It is therefore critical to engage with experts in the field to co-produce the evaluation design and to begin constructing the rationale for activities and expected outcomes. This is often accomplished through the development of comprehensive programme theories or theories of change; in the case of a Realist Evaluation, for instance, through the construction of Context Mechanism Outcome Conjectures (CMOCs).

Co-producing and validating these programme theories with a wide range of key stakeholders helps to ensure that evaluation findings resonate with people's lived experiences, which in turn improves the quality and subsequent uptake of findings and recommendations. However, complex evaluations require input at several stages from many participants for the co-production of programme theories. Traditional methods of engagement, such as discussion groups, can be time-consuming. This can be a practical barrier to the successful conduct of an evaluation, particularly in areas such as health-service innovation, where organisational resilience is increasingly being tested, resources are at a premium, the focus is on direct service delivery, and members of staff have very little time for anything else.

This paper describes such an evaluation and how the practical problem of the time-poverty of key stakeholders was addressed through the use of social media. A combination of social media platforms was used to engage with different stakeholder groups at different levels of specificity. The problem and the development of the solution will be described. Examples will be given of the types of prompts and questions that were asked of different groups, and of how these contributed to the development and refinement of the CMOCs that guided the evaluation.