Learning from our Covid Recovery Fund
Alex van Vliet - Research and Learning Manager
27 July 2021
One year ago, we opened applications for a new grant programme, one specifically designed to help small and local charities emerge from the social and economic crisis of the first national lockdown.
In designing this programme we wanted to leverage the distinctive characteristics of our existing grantmaking: unrestricted core costs, a relationship-based approach and a comprehensive programme of capacity building support. By offering two-year grants, we wanted to provide some stability at a time when most available funding was short term. The volatility and uncertainty that characterised the first stages of the pandemic have endured: around the changing needs of individuals and communities facing the greatest risks, the operating, regulatory and financial landscape for charities, and the welfare and morale of leaders, staff and volunteers. That meant the pace and timescales of the monitoring, evaluation and learning we relied on for our legacy programmes needed to speed up significantly.
Working with the independent consultancy BrightPurpose, we set out to design and build an evaluation and learning framework for the Covid Recovery Fund. A critical early decision was that the evaluation should be delivered internally by our research team. While this meant sacrificing the independence of an external evaluator, we would benefit from tighter feedback loops and proximity to the work, helping us gather learning faster.
We are now six months into the evaluation of the fund. Over the coming weeks, the Foundation’s research and learning team will be sharing short outputs on what we’ve learned so far.
The Covid Recovery Fund builds on many elements of our pre-Covid grantmaking practice, but has a number of distinctive features that we want to learn from to inform our future grantmaking strategy.
The first step in our evaluation design was to work with colleagues from across the organisation to generate ideas and prioritise the strategic evaluation questions for the fund. These questions shape the focus, methods and tools of the evaluation, and help us think through the decisions the work should inform in future.
To answer these questions properly, we need to recognise that different groups of grant holders are likely to have different experiences, and we want to pay particular attention to these differences by segmenting all our analysis by grant holder group.
The design of the framework has been strongly influenced by Michael Quinn Patton’s developmental evaluation approach, in which evaluators work alongside decision-makers and programme delivery staff, using a mix of process, formative and (eventually) summative data collection, both qualitative and quantitative.
Alongside our primary research, we’re convening Development Partners and the grantmaking team every two months for a half-day learning call. We’re using these sessions to gather insight and identify lessons learned, share emerging findings from the evaluation, sense-check our analysis and build a community of practice around our development support.
For a small internal team, delivering an in-house evaluation on this scale for the first time brings risks, challenges and opportunities. We are providing inputs to and supporting the learning of colleagues, while also holding an evaluative role that requires us to make judgements about the value of their work.
As we share outputs from the work, we are keen to gather feedback from others in the sector: researchers and evaluators, funders, infrastructure organisations and frontline charities. If you’d like to share your thoughts, please get in touch with Alex and Tom.