How to boost accountability and learning in aid for COVID-19 - Marvin Taylor-Dormond and Stoyan Tenev
Op-Ed / Opinion | 28 April 2020
The world is experiencing what some may consider a "typical" black swan event: rare, extremely impactful, and predictable only in retrospect. It may also be argued that what we are witnessing is instead the consequence of a lack of prevention and preparation in the face of imminent danger.
At any rate, even as we watch the virus reverberate across the world, it still catches us by surprise when it washes onto our shores. This crisis is particularly insidious as it attacks and employs our main defense and coping mechanisms — health systems and social contacts — to propagate itself.
With all hands on deck, those in the business of designing and deciding on the current massive support to countries against COVID-19 — staff, management, and boards — as well as those in charge of examining the effectiveness of this support — the evaluators — need to engage to help improve the results of the interventions being deployed.
Here are four areas in which designers, policymakers, and evaluators can engage during the current emergency work of international financial and development institutions:
1. Blueprint or framework of interventions
Everyone's no. 1 priority should be to bring lessons from past crisis episodes to bear on the policy responses now being formulated. While helping, evaluators of course need to watch for misleading analogies in a world in which nothing repeats itself in exactly the same way.
Take, for instance, the World Bank Group's response to COVID-19. Past evaluations of the Group's responses to the 2008 economic crisis, the food crisis, and natural disasters have proven useful in informing the current response. A similar experience is taking place at the Asian Development Bank, where evaluators have made themselves available to contribute to the specification of intervention blueprints, reviewing their results frameworks and the incorporation of past lessons, while maintaining their independence.
2. Real-time assessment
Beyond informing design, the development community and evaluators need to work in real time to gauge the likely effectiveness of responses while they are being implemented. Just as emergency and long-term care coexist in health treatment, evaluators must be capable of assessing both the short- and long-term results of interventions.
In times of emergency, there is a premium on immediate results, coping with uncertainty, and quick adaptation. There is no point in waiting for late feedback. The focus should be on rapid assessments addressing adaptation, learning, and short-term results, which in this context essentially mean stabilization and a return to conditions under which countries can pursue a long-term development path.
For example, the real-time evaluation of the World Bank Group's response to the 2008 global economic crisis looked at the timeliness, speed, and quality of the response; at readiness factors such as the adequacy of the administrative budget, prior analytic work and knowledge, and, notably, the Bank's weakened financial sector capacity in the period leading up to the crisis; and at the response's attention to poverty issues and to debt and fiscal sustainability. Ongoing or early outcomes and risks were assessed against the stated objectives of protecting vulnerable groups, maintaining infrastructure, and sustaining private sector growth.
3. Results of interventions
Regarding results, in order to extract lessons and contribute to institutional accountability for the massive resources the COVID-19 response will ultimately require, an assessment of results should be conducted as soon as the crisis has subsided. To make this possible, decisions, policies, and interventions should be carefully documented as they are deployed, creating organized repositories of information. The focus of these early evaluations should be on the intended short-term results and on the creation of conditions for the resumption of a long-term development path.
The 2011 "Real-time Evaluation of Asian Development Bank's Response to the Global Economic Crisis of 2008–2009" is a good example. This immediate assessment, designed to provide feedback to the bank's management and board of directors, addressed the relevance of the bank's assistance, its responsiveness, and the short-term results of interventions and their sustainability.
Likewise, the immediate assessment produced several lessons on what worked and what didn't with respect to the role of the bank in times of crisis, several of which are applicable to the current one.
4. Prevention and preparedness
Finally, the enormous costs of interventions to soften the impact of the crisis highlight once again the value of investing in prevention and preparedness.
It may seem that prevention and preparedness are long-term issues to be dealt with separately from, and after, the crisis response, but this crisis suggests otherwise. It comes in waves, giving us the chance to anticipate and prepare, and it may combine with other genuine black swan events, such as the unprecedented oil shock, a financial crisis, or natural disasters, possibly pushing systems past the breaking point. Forward-thinking, prospective analysis may therefore need to be part of the crisis response itself.
This point applies to the international development community at large; we must all get better at contributing to prevention and preparedness. The following can help.
First, it is methodologically convenient and expedient to parcel out the world when designing development programs and evaluations. When the scope of programs and evaluations is narrowed to cut costs, or out of short-sightedness, what is typically sacrificed are interdependencies. But we live in an interdependent world. This crisis can hopefully teach us that a global perspective in international aid and its evaluation is becoming a methodological imperative.
Second, development agencies and their evaluators tend to be preoccupied with the "representative" and the typical; we often do not pay enough attention to outliers. This leaves us blind to unexpected events, both positive and negative. Incorporating a perspective on rare, highly impactful, and unlikely events is inherently difficult, but a focus on learning from outliers, together with greater attention to how institutions deal with risk and uncertainty, will make us better at this and increase the versatility and value added of our support and evaluation functions.
Third, in assessing results, development agencies tend to be excessively constrained by what already exists, by programs' conceptual frameworks and objectives, and by alignment with institutional strategic objectives and priorities. This leaves little room to look for what is missing and what could happen. A broader vision in assessing results, and particularly prospective evaluation, should become part of the repertoire of development assistance.
For improved results of the massive support to countries against COVID-19, several fundamental ingredients will be needed: staff and management of development agencies with a listening stance, willing to incorporate lessons and sound results frameworks along the way; committed evaluators, ready to contribute and engage in times of emergency with timely inputs while protecting their independence and impartiality; and demanding boards and policymaking bodies, determined to help but expecting full institutional accountability and learning.