
Passing the Baton on Data and Evidence


Since the founding of the United States Agency for International Development (USAID) in 1961, scholars, practitioners, and politicians have debated how much, if any, foreign assistance the United States should provide in low- and middle-income countries.  While we do not claim to have a definitive answer, we do believe that the foreign aid the United States provides should be spent in the most effective way possible.  Regardless of political stripe, everyone can agree that, for the good of the taxpayer and of the world's most vulnerable people the Agency strives to help, USAID should focus on maximizing the impact per dollar of its funding.[1]

We firmly believe that better use of data and evidence, in every sector, is the key to making foreign aid as effective as possible.  We were pleased, therefore, to hear Administrator Samantha Power emphasize this idea: "I will work tirelessly with Members on both sides of the aisle to ensure that taxpayer dollars are well spent. Guided by evidence, I will work with you to adapt or replace programs that are not delivering."  This clear commitment to evidence and effectiveness echoes similar statements from former USAID Administrators on both sides of the aisle.[2], [3], [4]  The Foundations for Evidence-Based Policymaking Act of 2018, among other recent legislation, put wind in the sails of efforts to use data and evidence to improve the value for money of U.S. foreign-assistance spending.  It is clear, therefore, that maximizing the value for money of foreign assistance is a bipartisan, even a non-partisan, aim.

Two of us came to USAID as political appointees under Administrator Mark Green and one of us came to USAID as a career Foreign Service Officer during Administrator Raj Shah鈥檚 tenure.  All of us left USAID proud of our contributions to help the Agency improve its funding decisions to achieve more impact for the money it spends.   And we all are heartened to see the Biden Administration appoint talented leadership at the top of the Agency, including Administrator Power and the two nominees for Deputy Administrator, Paloma Adams-Allen and Amb. Isobel Coleman. As new leadership comes on board, we offer some reflections on the data-and-evidence agenda, what went well during our terms of service, and where more progress is needed.  In particular, we discuss four elements: 1) The Journey to Self-Reliance Country Roadmaps; 2) USAID鈥檚 guidance on designing and evaluating its investments; 3) Emphasizing more and better impact evaluations; and 4) Using cash benchmarking to promote effectiveness.

The Journey to Self-Reliance Country Roadmaps

During Administrator Mark Green鈥檚 tenure, USAID focused on building commitment and capacity in partner countries to plan, finance, and implement solutions to solve their own development challenges. 

Data and evidence formed the backbone of USAID's entire strategic approach under Green, called the "Journey to Self-Reliance."  Rather than being sector-driven, the Journey to Self-Reliance is country-centric, and aims at developing partnerships that meet countries wherever they are in their development journey.  In all cases, the goal is the same:  making sure all USAID-funded programming is moving each country closer to the point where foreign assistance will no longer be necessary, whether that day is decades away, or closer at hand.

Data and evidence are critical for USAID and its partners to understand exactly what is helping and hindering a given country's broader development journey, and thus where to focus the bilateral partnership and U.S. assistance.  It is for this reason that USAID created its annual "Country Roadmaps," which use 17 third-party, publicly available, and regularly updated objective indicators. The Roadmaps provide a quantitative framework anchored along the two axes of "Commitment" (the degree to which a country's laws, policies, actions, and informal governance mechanisms support progress towards self-reliance) and "Capacity" (how far a country has come in its ability to manage its own development journey across the dimensions of political, social, and economic development).

Since 2018, USAID has produced the Roadmaps annually for all low-, lower-middle-, and upper-middle-income countries, and released them publicly each October, at the beginning of the U.S. Government's Fiscal Year.  Users from all over the world have viewed the Roadmaps thousands of times over the last three years.  A far more nuanced and analytically rigorous way to measure and frame the development spectrum, the Roadmaps ultimately provided the strong, evidence-based foundation that now underpins USAID's country-level strategic planning.  The Biden Administration should continue to produce the annual Roadmaps, and promote their use as an objective, analytical tool and discussion framework in planning and resource allocation.

Reforming and streamlining USAID鈥檚 guidance on designing and evaluating its investments

In October 2019, USAID hosted a worldwide meeting of its Program Officers, the staff embedded throughout the Agency's more than 80 country Missions and Offices to manage the Program Cycle.  This is the institutional process of turning Congressionally appropriated foreign-aid dollars into activities that should produce measurable results.  At that conference, the career staff made clear that the required steps to design, approve, and amend USAID-funded programs had become too burdensome.  In addition, a few weeks before, USAID's Inspector General (IG) had issued a report that correctly pointed to lapses and weaknesses in the Agency's procurement of new programs and management of its ongoing awards.

In response, Administrator Green called for working groups to reform the process of turning funds into programs, overseeing them, and evaluating their results.  The upshot was a clearing away of internal hoops through which staff had to jump to get programs off the ground, and again if they needed to change the direction of an investment.  The expectation was that once staff were freed from the gauntlet of low value-add internal-facing processes, they would have the bandwidth to spend time on higher-value work such as understanding and using research evidence, talking to local partners and host governments, and visiting and assessing existing programs and beneficiaries.  In other words, focusing on data and evidence rather than bureaucracy, and doing the things that justify the cost of stationing thousands of staff overseas.

In addition, to answer the criticisms of the IG, the Agency instituted a series of reforms to clarify accountability for performance, prevent obligation of funds if an award's documentation is not complete, enforce requirements that implementers have targets and monitoring plans in place before they begin work, and increase training for staff who manage grants and contracts.  Executive Messages from Administrator Green and Letters of Instruction to USAID's Mission Directors emphasized the need for the Agency's Agreement and Contracting Officers "to act based on independent judgment, without inappropriate influence on award or award-administration decisions," including to terminate poor-performing awards.

Finally, USAID gave new momentum to the push to make geospatial location data an integral part of assessing the progress of U.S. development initiatives.  Following on the revolution in performance monitoring brought about by the President's Emergency Plan for AIDS Relief, the Agency's first Digital Strategy mandates the digital collection of programmatic data and calls for "[p]rioritizing investments through geospatial analysis."  While the resulting policy published under the Biden Administration (Chapter 579 of USAID's Automated Directives System [ADS]) does not require the submission of site-level geospatial coordinates by implementers that are working in specific locations (such as clinics, schools, or farms), it expresses the Agency's clear preference and expectation that they do so.  This is important because such data are crucial for designing and evaluating programs that can have a measurable impact on specific populations.  The next step is for the Agency to undertake notice-and-comment rulemaking as soon as possible to apply the new policy on geospatial data to its contractors.

The new Administration should continue to build upon these changes to introduce even more rigor and process efficiency into USAID鈥檚 work.  The Agency鈥檚 job is to issue and oversee grants and contracts, and improving the performance and integrity of those awards should be every employee鈥檚 primary concern.

Emphasizing more and better impact evaluations

The IG's report also implied that USAID often did not know how its awards in the field were performing.  Therefore, the Office of Learning, Evaluation, and Research within USAID's Bureau for Policy, Planning, and Learning (PPL) undertook a multi-faceted stocktaking of the state of play of the Agency's evaluations of grants and contracts.  Historically, only a small share of USAID's evaluations have been impact evaluations, the type that rigorously estimates the degree to which a USAID-funded intervention actually improved its target outcome.  Impact evaluations do this by comparing outcomes among those reached by an intervention with outcomes among a comparison group that represents what would have happened without it.  The stocktaking found this pattern has continued, as impact evaluations remain under 10 percent of all evaluations.  As one review rightly explains, this is a problem: "While impact evaluations are not inherently more rigorous than performance evaluations, they do add distinct value to an overall evaluation portfolio. Impact evaluations allow donors to test unproven 'theories of change' before devoting large amounts of resources in interventions."  Adding to the problem of relatively few impact evaluations, the stocktaking found that a very large number of the impact evaluations that USAID does undertake have serious issues with their quality and credibility: only three percent of impact evaluations met the highest standards of quality.  In other words, fewer than five percent of the Agency's evaluations are methodologically credible impact evaluations.  Of these, only a handful attempted any analysis of cost.  This means that even when a well-done impact evaluation can tell USAID how much change in a target outcome a program caused, it remains unclear how much it cost to achieve that change.  Without the ability to compare the expected impact from spending the same amount of money on different types of interventions, activity-design staff are left without critical, perhaps the critical, information needed to be good stewards of taxpayer money.

According to the stocktaking, a major cause of the lack of high-quality impact evaluations has been confusion among field staff over when an impact evaluation is needed and when a performance evaluation is suitable to answer the question at hand.

All of this means that USAID is failing to generate rigorous evidence on which of its programs do or do not work.  This might not be such a problem if most USAID funding went towards interventions known to be highly successful.  Unfortunately, PPL鈥檚 internal stocktaking also showed that the few valid impact evaluations the Agency has conducted have often found that the intervention studied was ineffective.    

To remedy the situation, USAID took several steps as part of a broader effort to improve its Program Cycle guidance.  Of note, the revised guidance on the Program Cycle:

  • States that cost-effectiveness is a key consideration in programming (ADS Chapter 201.3.1.2.a);
  • Clearly states the difference between performance and impact evaluation (ADS 201.3.6.4);
  • Expresses a clear preference for impact evaluations when the Agency is trying to answer whether an intervention is achieving a specific outcome (ADS 201.3.4.10);
  • Requires cost analysis in impact evaluations to allow for estimations of the cost-effectiveness of evaluated programs (ADS 201.3.6.4); and
  • Mandates more statistical justification for comparison groups in quasi-experimental evaluations (ADS 201.3.6.4 and 201.3.6.9).

In addition, the Administrator published an Executive Message to encourage more long-term impact evaluations of the cost-effectiveness of USAID's traditional activities.[5]

Left in process were efforts to improve guidance on developing scopes of work for evaluations, and on the information needed in their final reports.  In addition, an important conversation is still ongoing about the balance between building the capacity of large numbers of field staff to take the lead on evaluation, and empowering a smaller, yet specialized, cadre of Washington-based experts to provide internal evaluation services to the field.  Given the findings of PPL's stocktaking on evaluations, and the reality that most of USAID's staff will work on only a few evaluations over the course of their entire careers, we believe the Agency should consider a finer division of labor that takes something off the plate of overstretched field officers.  This could happen by forming a new centralized evaluation unit to provide services to the field.  In addition, USAID drafted, but did not finalize, updated evaluation guidance.  The new Administration should follow these efforts to their conclusion.

Using cash benchmarking to measure effectiveness

A final example of bipartisan efforts to improve the impact per dollar of USAID-funded programs is "cash benchmarking."  Begun under Administrator Shah, cash benchmarking seeks to establish a rigorous minimum standard for USAID's programs.  It asks a simple question: Does USAID's system for designing, procuring, and managing awards through implementing organizations deliver better value for money than simply passing on Congressionally appropriated funds as quickly and as simply as possible to beneficiaries, in cash,[6] and letting them decide for themselves how to invest the funds to improve their lives?  In other words, "Can the bureaucracy of USAID and its implementers do more for the poor than the poor can do for themselves?"  This is an important question to answer, both in terms of good financial management and as part of the conversation around localizing aid.

The first study of cash benchmarking was published in late 2018.  While it generated some consternation, Administrator Green was willing to take the results seriously.  Cash benchmarking is a prime example of what USAID's evaluation policy, developed under Administrator Green's tenure, refers to when it says, "We recognize that...commitment to transparency includes accepting the risk of possible criticism brought because data show our activities fall short of their objectives.  We will incentivize and foster a culture of learning by openly discussing and disseminating lessons learned to enable continuous improvement and enhance our credibility.  This will mean at times identifying mistakes or errors that could affect our reputation."  USAID should embrace cash benchmarking as a decision-making framework across its portfolio of awards designed to produce outcomes at the individual, household, and community levels, including in health, nutrition, education, and agriculture/food security.

* * *

We encourage the new leadership of USAID to continue the work of using data and evidence to find what is working, end what is not, and do everything in their power to deliver the highest possible returns for the money entrusted to USAID by the American taxpayer.  We wish them well and hope they build on the efforts of past administrations to increase the rigorous use of data and evidence, including by maintaining and promoting the Journey to Self-Reliance Country Roadmaps; implementing fully the Agency's guidance on designing and evaluating its investments while requiring the submission of geospatial location data; undertaking more and better impact evaluations; and expanding the use of cash benchmarking in funding decisions.

Dr. Bill Steiger is a Public Policy Fellow at the Wilson Center.  From May 2017 to January 2021, he was Chief of Staff at USAID.

Chris Maloney is a Global Fellow at the Wilson Center, and Senior Director for Strategy and Business Development at the Digital Impact Alliance (DIAL) at the United Nations Foundation.  Previously, he served as Assistant to the Administrator for Policy, Planning, and Learning and Acting Assistant Administrator for Africa at USAID.

Daniel Handel leads external engagement for the International Initiative for Impact Evaluation (3ie) and served 11 years as a USAID Foreign Service Officer, including from 2019-2021 as the lead of the Agency's Evaluation Team.


[1] For the purposes of this article "value for money," "cost-effectiveness," "bang for the buck," and "impact per dollar" all describe trying to get the highest level of improvement possible from the budget that USAID has to spend.

[2] "I will make sure that our programs respect taxpayers...We owe it to them to use [foreign assistance funds] as efficiently and effectively as possible.  I will focus our limited resources on what is working and end what is not.  I will scrutinize every program and every expenditure to ensure that we are maximizing value…"

[3] "The Agency must ensure that American taxpayer dollars are spent responsibly. It must identify successful programs, learn from prior mistakes, apply lessons learned."

[4] "USAID is committed to maximizing the value of every dollar.  We have made tough choices so that we are working where we will have greatest impact, and shifting resources towards programs that will achieve the most meaningful results."

[5] Executive Message, "Investing in More Long-Term Impact Evaluations of Cost-Effectiveness," January 14, 2021

[6] Usually through secure mobile-money bank accounts rather than physical cash.
