A blog by Dave Thomas, Volunteering Development Officer with the Volunteer Centre at Nottingham CVS.
Whenever I go to network meetings or conferences, I enjoy a game of Buzzword Bingo as much as anyone. But there’s a word that’s been creeping into the vocabulary of volunteering that deserves rather more than a tick on the bingo card. That word is “Impact”.
It’s become very popular, although its meaning is sometimes a little blurred. James Noble is the Impact Management Lead at New Philanthropy Capital, whose blogs on this topic make for interesting reading. His very readable summary sets out the view that impact measurement is:
- Long-term. If ‘impact’ means something sustained, then we have to have a way of getting data from people after our work is done—potentially long into the future. In practice this means longitudinal research, which is the hardest and most costly type of research to do.
- Comparative. Even if we can get longitudinal data we still have the problem of attribution: how do we know our programme has been the thing that made the difference? To test this formally we need a counterfactual or control group.
- Robust. To actually ‘measure’ impact we need to do both of the first two with enough rigour and at enough scale to be confident in the results.
In trying to measure the difference our volunteer programme makes, very few of us have the resources or the time to carry out a study that meets all three of James’ criteria. But this doesn’t remove our need to produce some evidence. No matter what we call it, this is something that we (and our funders) really need (and want) to know.
Before we can decide how to measure a difference, we need to know what difference we want our volunteers to make. This is pretty obvious, but unless we have a clear idea at the start of the process, how can we know what and who we should be monitoring and measuring?
The planning stage of the Impact cycle includes:
- Developing Policies
- Developing Role Descriptions
- Recruiting Volunteers
- Induction and Training
- … what else would you add here?
In order to make a change, we need to know where we are starting from. Even if your volunteer project has been running for years, it will still be seeking to make improvements to the service, to the lives of service users, or to whatever its “cause” may be. Wherever you are going, you start from here, so where are you now?
“Mapping Exercise” is another tick on the “Buzzword Bingo” card; but being clear about the situation that you want to change is a prerequisite for measuring how much change you achieved.
As a Leader of Volunteers, you will probably count the number of volunteer hours, how many times activities have taken place, how much money has been raised, and so on. Things like: the team planted 250 trees and gave 2,000 hours of time, worth well over £20,000.
But… so what? What difference will all that make?
We collect stories by asking questions. But make sure that you record and store them securely, especially if they contain personal data.
- Volunteer Support and Supervision Records
- Beneficiary feedback, including informal feedback
- Feedback from referrers, partners, parents, carers, other organisations, statutory services … and anyone else
- Evaluations. Especially if they have been carried out by someone independent of the volunteer programme.
- Comments about your service on social media
- Coverage in local media
- … … how else can you collect stories?
Another useful measure is that of “Distance Travelled”. This tries to turn stories and “soft outcomes” into numbers. Some people like the straight lines of the Rickter Scale, but the Outcomes Star is also widely used.
Analysing the data
You now have a set of numbers and a collection of stories. In New Philanthropy Capital’s Well-Being Measure, John Copps highlights:
‘No stories without numbers, and no numbers without stories’
Getting to grips with an unstructured data collection can be very daunting; however, there are some proven approaches. Achievability is an organisation that works with researchers in the university sector. Their blog tells us that there is no right or wrong way to analyse your data, but it offers us advice and tips.
The key message in analysis is to be organised and consistent. I would also suggest that this analysis should start to take place while the data is still being gathered.
Reporting the findings
Traditionally, evaluation reports are written, but why not think more creatively? Could your volunteers produce a video showing the difference that they have made? How else could you tell their story?
But most of us will stick to the written report – if only because it’s what managers and funders understand. NCVO’s KnowHow Non Profit site has this helpful information to set us on the right path.
Things to consider are Outcomes, Outputs and Processes (all explained on that web page). Use your analysis to describe your findings in straightforward language, in line with the needs and expectations of your audience. Remember to include those numbers as well as the stories.
Using the evaluation
This is the stage that can be far too easy to overlook. Our volunteers, service users and other stakeholders have invested a great deal of time and energy in monitoring and reporting. They have shared their experiences and stories with you. You owe it to them to make good use of all this data.
Now it’s over to you. At 8pm UK time on Thursday 9th August 2018, we’re going to get together for a Twitter chat where we’ll be discussing five questions around Impact. What practical steps are you taking towards measuring your team’s impact? Please join in and share your experience, thoughts and reactions by following #TTVolMgrs
If you can’t make that time please share your thoughts anyway on this blog or on Twitter. Please remember to include #TTVolMgrs in your tweets.
In the Twitter chat, we’ll be asking these questions:
- What is the difference that your volunteers make?
- How do you know this?
- How do you measure this difference?
- What tools do you use?
- What problems / difficulties do you face?