Designing impact tracking systems with people at the heart

By Prof Mark Reed

Just motivating people to report their impacts is a huge challenge, given the administrative pressures that most researchers are already under. Added to this, some researchers would gladly report their impacts but don’t realise that anyone is interested in what they are doing. And then there’s the small minority who want to shout about each and every minor career success, whether anyone is interested or not.

There are lots of good reasons why institutions are spending so much time and money trying to work out how they can track impact. Tracking impact is often required by funders and governments who want to see the impact that their money is having. But the same information can also be used for marketing purposes, and to target resources and support to the researchers and teams who need it most. By sharing good practice across an institution, it becomes possible to celebrate individual and collective success and learn how everyone can work together to make a bigger difference.

Many websites now offer systems to help track impacts, and many institutions are developing their own systems to keep tabs on the impacts that their researchers are having. But is this the answer?

Why is it so difficult to track impact?

The challenge of generating useful information about the impact of research, and whether researchers are progressing towards likely impacts or not, is not trivial. In particular, there is significant variation in the way impacts are conceptualised, operationalised and assessed in different academic disciplines, and certain kinds of impact evaluation can distort perceptions of impact and may funnel future effort into narrowly conceived types of impact generation activities.

We recently reviewed 135 evaluations of knowledge exchange and impact across diverse disciplines, and found strong associations between the way knowledge, knowledge exchange and impact are understood or framed, the methods used for evaluation, and the types of outcomes evaluated. Positivist perspectives tended to result in one-way knowledge transfer to achieve impacts, and in quantitative evaluation methods. Subjectivist perspectives, in contrast, were more likely to lead to knowledge exchange activities that encouraged mutual learning through multi-stakeholder interactions, and to methodologies that captured the diverse experiences of those involved and considered wider impacts and the factors that may have contributed to them. We recently blogged about some of the most common activities that researchers claimed had given them a pathway to impacts on policy under the UK’s 2014 Research Excellence Framework.

It is important, therefore, to recognise how differently impacts may be perceived, and hence evaluated, in different disciplines and traditions. This creates challenges for tracking impact across institutions that host many different disciplines, such as universities, but these challenges are not insurmountable. The most important thing to remember is that we’re tracking what real people are doing, and so we need to be people-focussed rather than solely impact-focussed if we want to find out what people are doing to make a difference.

The danger of impact tracking systems: two very different approaches

Broadly speaking, there are two types of approach to tracking impact:

·      Summative tracking and evaluation of impacts after they have occurred with minimal participation of researchers or beneficiaries, to provide ex-post measures of reach and significance

·      Formative tracking and evaluation of knowledge exchange and impact in collaboration with researchers and beneficiaries, to provide ongoing feedback on reach and significance, so that impacts can be enhanced during the research cycle

The majority of impact tracking and evaluation is done in the first, summative, ex-post mode, by research funders to evaluate the impact of their investments or to distribute quality-rated funding to the best research institutions. However, our knowledge exchange principles would suggest that it is also worth investing in formative tracking of impacts as they arise, including an on-going evaluation of the knowledge exchange activities that are meant to deliver those impacts.

The danger of developing any system to track and evaluate impact is that it may alienate the researchers it should be serving, creating unwelcome new administrative burdens on academic staff, and making people feel judged or undervalued. For this reason, it is essential to practise all of the principles of effective knowledge exchange in the development and implementation of such systems.

Design impact evaluation collaboratively with the people who generate the impacts

In particular, it is important to design such systems in collaboration with the people who will use and benefit from them, and to emphasise the role of people in the system, rather than focussing on engagement with a website. A range of people can play pivotal roles in the running of an effective knowledge exchange and impact tracking and evaluation system, from line managers and research administrators who can share activities and good practice, to academic impact champions who can inspire culture change and work with colleagues to build capacity over time. To really understand the significance of an impact, it is often necessary to engage directly with beneficiaries, and this can create a rich source of additional detail and testimony that lends credibility and brings an impact case study to life.

It is possible to design systems for tracking and evaluating impact that combine both summative and formative approaches. Impact evaluation systems need to have people at their heart, rather than numbers or software. There are many useful websites that can help us track and evaluate impact, but unless we design a people-centred system around these tools, we won’t get the engagement we need for those tools to actually work:

1.     We’ve blogged about the range of online impact tracking systems, whether existing platforms or systems developed in-house, which can capture and share knowledge exchange activities and impacts as they arise. However, these systems typically rely on researchers inputting data, which can result in patchy coverage of activities, depending on the extent to which researchers engage with the system

2.     It is therefore important to have a number of support mechanisms to incentivise researchers to input data themselves, and where necessary to enable line managers, research administrators and “impact champions” to collect data from researchers to input to the system, for example:

a.     Including an assessment of impact-related activities in applications for promotion and in annual performance reviews, which can be used to collect data directly from researchers. With appropriate training, those carrying out performance reviews may be able to identify, through dialogue with the colleagues they are reviewing, impacts that the researchers may not have recognised as impacts themselves

b.     Regular interviews with key researchers (identified using criteria such as being Principal Investigator on a research project) by research administrators to identify impacts, which can then be recorded by the administrator without the researcher having to interact with the system itself

c.     There may be an argument for appointing “impact champions” to perform this role, where such people can be found within the academic community. Peers may be able to perform an inspirational role, sharing good practice within their discipline with greater credibility than administrators. They may be able to demonstrate any online system to colleagues and increase engagement. Such people may also be more likely than administrators to be given appointments to discuss impact with sceptical colleagues, and may be able to help lead a culture change where necessary, to promote engagement with the impact agenda. For this to be successful, champions ideally need to be well-respected opinion leaders within their field. Where relevant, champions may be able to play a mentoring role with key researchers, and identify capacity, funding and training needs within a particular research field. Champions may also be able to provide early intelligence about potential impact case studies that could be put forward for later evaluation in assessments of research quality such as REF. In this way, it may be possible to prioritise resources early to support impacts that are likely to meet assessment criteria (for example, being underpinned by research of sufficient quality is a key criterion for REF)

d.     Champions may be able to bring researchers together to share good practice in knowledge exchange and inspire one another with examples of the impacts they have achieved, for example in workshops or training events

3.     It is important to design mechanisms to provide feedback to researchers whose impacts are being tracked, to celebrate impacts that have been achieved and motivate further achievement, and to provide formative feedback to improve practice. Given that most impacts take place over long time-frames, it is as important to provide feedback on the quality of knowledge exchange as it is to provide feedback on impacts. A good way to evaluate the quality of knowledge exchange in any research project or programme of work is to evaluate the extent to which the five principles from our research on knowledge exchange are being implemented

4.     On a more detailed level, it should be possible to evaluate progress towards impacts, and the quality of knowledge exchange activities, by evaluating the implementation of the knowledge exchange strategy devised at the start of the research. Where such strategies are absent, a logical first step is to work with teams to devise knowledge exchange strategies that link each of the intended impacts to a series of specific activities, with associated indicators to show whether the activities were successful and the impacts are likely to be realised (see the sketch after this list)

5.     With strong engagement and effective data input by researchers and support staff, it then becomes possible to use this data to provide a summative evaluation of impact across an institution or disciplinary area, highlighting particularly strong cases, and providing summary statistics that can inform future strategy. 
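
To make this more concrete, here is a minimal sketch of the kind of record such a system might hold, written in Python with entirely hypothetical names and fields rather than the schema of any real product. It links each intended impact to specific activities and indicators (point 4), records who reported the data (point 2), and supports both the formative feedback of point 3 and the summative roll-up of point 5:

from dataclasses import dataclass, field
from datetime import date

@dataclass
class Activity:
    """A knowledge exchange activity linked to an intended impact."""
    description: str     # e.g. "policy briefing for government officials"
    indicator: str       # evidence that the activity worked
    completed: bool = False

@dataclass
class ImpactRecord:
    """One intended impact, the activities meant to deliver it, and its provenance."""
    intended_impact: str             # the change the research aims to make
    researcher: str                  # who is generating the impact
    reported_by: str                 # researcher, administrator or impact champion
    date_recorded: date
    activities: list[Activity] = field(default_factory=list)
    beneficiary_testimony: str = ""  # direct evidence of significance

    def progress(self) -> float:
        """Formative feedback: the share of planned activities completed so far."""
        if not self.activities:
            return 0.0
        return sum(1 for a in self.activities if a.completed) / len(self.activities)

def summarise(records: list[ImpactRecord]) -> dict[str, float]:
    """Summative view: average progress per researcher across a unit or institution."""
    by_researcher: dict[str, list[float]] = {}
    for r in records:
        by_researcher.setdefault(r.researcher, []).append(r.progress())
    return {name: sum(vals) / len(vals) for name, vals in by_researcher.items()}

A record might then be created by an administrator or champion after an interview, rather than by the researcher (again, the details here are purely illustrative):

record = ImpactRecord(
    intended_impact="Uptake of land management guidance in national policy",
    researcher="Dr A. Example",
    reported_by="impact champion",
    date_recorded=date.today(),
    activities=[
        Activity("Stakeholder workshop with land managers",
                 "attendance and follow-up requests", completed=True),
        Activity("Policy brief submitted to a public consultation",
                 "brief cited in the consultation response"),
    ],
)
print(f"{record.progress():.0%} of planned activities complete")

A real system would of course need far more than this, not least the people described above to populate and act on it, but even a simple shared structure makes it possible to see at a glance where knowledge exchange is on track and where support is needed.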

Talk to the people who benefit from your research to understand what they regard as impact

In addition to engaging with researchers, it can be useful to engage directly with those who have benefited from the research, to better understand the significance of impacts in their operational contexts. It can also be useful to work with beneficiaries to identify indicators that could represent progress towards impacts, and to work with stakeholders to collect and interpret data. This may provide opportunities for sharing of perspectives between researchers and beneficiaries, challenging or reducing the dominance of particular knowledge types and flattening hierarchies that may constrain knowledge production and learning. Formative and participatory approaches that involve stakeholders in evaluation can therefore be part of a knowledge exchange strategy, contributing to increased ownership and motivation for delivering impact in collaboration with researchers.

People-focused impact tracking

It bears repeating: we are tracking what real people are doing, and so we need to be people-focussed rather than solely impact-focussed if we want to find out what people are doing to make a difference. We need to create impact tracking systems with people at their heart. Once we’ve got this right, we can start thinking about online systems to help those people track their impacts. But you have to get this in the right order: start with an online system and you’ll quickly find that very little is being input to it. Tracking impact starts with the people who make the impact.