Preface for Advocacy and Policy Change Evaluation
Several factors have fueled the need for skilled evaluators who can design appropriate evaluations to meet diverse stakeholder needs: increased foundation interest in supporting advocacy and policy change (APC) initiatives to achieve systems change; evaluation of democracy-building initiatives worldwide; and diffusion of advocacy capacity beyond the traditional advocacy community (such as service providers). Evaluators have met these needs with great success, building a new field of evaluation practice, adapting and creating evaluation concepts and methods, and shaping advocate, funder, and evaluator thinking on advocacy and policy change in all its diverse manifestations. The field will only continue to grow and evolve.
This book is designed to build on this groundswell of evaluation thought and practice and to be insightful and instructive. We combine the wealth of concepts, definitions, designs, tools, empirical findings, and lessons learned thus far into one practice-focused and easy-to-use resource. This book addresses the varied evaluation needs of stakeholders by presenting a wide array of options specific to evaluating advocacy and policy change initiatives. It also addresses the challenges of evaluation practice, such as the complex and constantly shifting context in which advocacy activities occur and the difficulty of attributing outcomes and identifying causal factors.
There are several academic and practical reasons for developing this book. Current advocacy and policy change evaluation practice lacks a deep understanding of the existing research and models from the political science, public policy, and nonprofit management disciplines, including organized interests, influence, agenda setting, media, and models of the policy process. Consequently, evaluators often do not incorporate a robust, theory-based foundation into their evaluation practice, limiting their effectiveness in designing advocacy and policy change evaluations and in informing stakeholder learning. An increased understanding of core principles and scholarly research will enable evaluators to make themselves heard more broadly and to contribute to the knowledge base on political representation, influence, and systems change.
At the practical level, this book provides useful, real-world examples of developing appropriate evaluation designs and applying the findings to advocacy practice and decision-making. Our review of available resources is broad and deep and includes an examination of relevant evaluation strategies, as well as an analysis of the findings from the 2014 Aspen/UCSF APC Evaluation Survey of tested evaluation designs and data-collection instruments. The survey, completed by 106 members of the American Evaluation Association (AEA)1 who evaluate advocacy and policy change initiatives of all types, greatly expanded our understanding of actual APC evaluation practice, including the advocacy tactics evaluated, the evaluation strategies used, and detailed information about gaps in the APC evaluation field.
Additionally, throughout the book, we describe and compare six evaluation cases that speak to the diversity of advocacy and policy change evaluations, including a range of evaluation designs, conventional and unique evaluation methods, and approaches to informing advocate and funder strategy. They were identified by individual Aspen/UCSF APC Evaluation Survey respondents as evaluations conducted in the past five years that featured an interesting methodology or a significant lesson. A primary reason for developing the six cases of evaluation practice was to surface design models in a variety of advocacy and policy contexts. It is helpful to see how evaluators of advocacy and policy change initiatives mix and match different methods and link them to evaluation questions and a theory of change and/or logic model while remaining mindful of resource constraints and a quickly evolving context. How evaluators balance stakeholder information needs that may go beyond strategic learning early on, while addressing challenges to validity such as an evolving initiative, small sample sizes, and limited resources, may be the "art" of advocacy and policy change evaluation.
All six initiatives were sponsored by philanthropic organizations or nonprofit public charities and speak to funders' willingness to invest in different strategies for achieving policy change, as well as their commitment to long-term systems change. They include: (1) the Initiative to Promote Equitable and Sustainable Transportation (2008–2013), funded by the Rockefeller Foundation Board to support the adoption of policies for equitable and sustainable transportation options, largely through the reauthorization of the Federal Surface Transportation Bill in 2009 and through support of commensurate state policies in key, influential states; (2) the Let Girls Lead program (2009–present), supported by the United Nations Foundation to create a global movement of leaders and organizations advocating for adolescent girls' rights; the initiative strengthens the capacity of civil society leaders, girl advocates, and local organizations to promote girl-friendly laws, policies, programs, and funding in Guatemala, Honduras, Liberia, Malawi, and Ethiopia; (3) the GROW Campaign (2012–present), funded by Oxfam, a multinational campaign to tackle food injustice and build a better food system that sustainably feeds a growing population, which included a six-month campaign targeting World Bank policy on large-scale land acquisition; (4) the Pew Charitable Trusts' campaigns in Canada and Australia targeting regional and locally based land-use planning processes, launched as part of its International Lands Conservation Program (1999–present) to conserve old-growth forests and extend wilderness areas; (5) the Tribal Tobacco Education and Policy (TTEP) Initiative (2008–2013), funded by ClearWay Minnesota, which provided resources and assistance to five tribal communities to pass or expand formal and informal smoke-free policies while increasing community awareness of secondhand smoke; and (6) Project Health Colorado (2011–2013), a public-will-building campaign launched by the Colorado Trust to engage individuals and organizations in a statewide discussion about health care and how it can be improved. By encouraging people across the state to be part of the solution, Project Health Colorado aimed to make a difference in how decisions are made about health care. (See Appendix A for detailed descriptions of these six cases.)
To examine the similarities and differences in designs, methods, and data-collection instruments, we compare two different evaluation cases in each of Chapters 3, 4, and 5. Our pairing of cases is intentional: we compare evaluations conducted at about the same point in the life of an initiative. In Chapter 3, we compare the designs of two end-point evaluations, the Initiative to Promote Equitable and Sustainable Transportation and the Let Girls Lead program. In Chapter 4, we compare two midpoint evaluations, the GROW Campaign and the International Lands Conservation Program. Last, in Chapter 5, we compare two multiyear evaluations, the Tribal Tobacco Education and Policy (TTEP) Initiative and Project Health Colorado. While there is significant diversity in the six evaluation cases' policy objectives, advocacy tactics, and contexts, there are similarities in purpose, design, methods, conventional and unique instruments, the evaluator's role, and the use of evaluation findings.
A three-part "pracademic" framework is used to increase the book's utility for evaluators, advocates, and funders. The first two chapters tilt toward the academic, describing concepts and models from the policy sciences and nonprofit scholarship that can help evaluators navigate the deep and often turbulent waters of public policy and develop a theory of change. The remaining five chapters focus on the "meat" of evaluation design, applicable methods, and recommendations for advancing individual and collective evaluation practice. The "pracademic" approach also applies within each chapter: we lay out concepts and models in the first half of the chapter and finish with a discussion of actual evaluation practice, specifically the findings from the Aspen/UCSF APC Evaluation Survey and the six evaluation cases. The three parts are: (1) useful theories and conceptual models; (2) appropriate designs, methods, and measures; and (3) getting to wisdom and advancing individual and collective advocacy and policy change evaluation practice. Each chapter builds on the previous one, although each is designed to stand alone and address specific evaluation needs. For example, evaluators who are new to advocacy capacity and/or policymaking will find Chapters 1 and 2, on theoretical underpinnings, useful in developing sound evaluation questions.
That being said, while this book is intended to expand on prior advocacy and policy change evaluation guides and to serve as a comprehensive resource for evaluators, advocates, and funders, it is not intended to be an evaluation textbook for beginning evaluators. A basic understanding of evaluation is assumed. It should also be noted at the outset that this book does not promote one framework or evaluation design over another. Instead, it is intended to be a “cookbook,” providing a variety of strategies and measures that have been used in the field and applied to a wide array of advocacy and policy change evaluation issues.
Part 1: Useful Theories and Conceptual Models
Evaluators will benefit from grounding their practice in a robust understanding of advocacy and policy change, including scholarly research on what we know and do not know about the policymaking process and individual and collective action. The primary goal of Part 1 is to expand evaluators' capacity to use applicable concepts and models, such as the policy stage model of policymaking, to frame evaluation designs. We also look across disciplines, seeking commonalities as well as gaps in knowledge that challenge evaluation, such as the lack of a single definition of advocacy. Evaluators who ignore these foundational components risk overlooking critical but less visible aspects of advocacy and policy change initiatives, such as the advocacy activities that continue after a policy's passage. They also risk having a limited understanding of the perspectives and strategies of the advocates, decision-makers, and funders who plan and implement advocacy and policy change initiatives.
In Chapter 1, we review the public policy concepts and definitions important to evaluation practice, including models of the policymaking process and the venues where policy is made. In Chapter 2, we describe advocacy in the broadest sense, particularly the myriad types of advocates—individuals, organizations, and groups—and their attributes, as well as the many strategies and tactics that advocates use to build a constituency for change and influence policymaker support. In both chapters, we try to strengthen the link between theory and practice, providing real-world examples as well as suggestions for incorporating a concept or model into an evaluation design. We also describe the policy and advocacy contexts of the six evaluation cases to illustrate the diverse scenarios that evaluators may encounter, spanning levels (international, national, state, regional, and local) and policy issues (health, transportation, land use, food security, human rights, and gender equity).
Part 2: Appropriate Designs, Outcomes, and Methods
In Part 2, we shift from the academic perspective to the design and implementation of advocacy and policy change evaluations. We use a macro-to-micro approach, starting with recommendations for developing an evaluation design, followed by suggestions for selecting and/or developing specific methods and outcomes. We also draw on findings from the Aspen/UCSF Survey of advocacy and policy change evaluation practices to illustrate evaluation designs at different points in an advocacy and policy change initiative, as well as the ways that evaluators mix and match their methods.
In Chapter 3, we review the evaluation strategies important for designing advocacy and policy change evaluations, including the evaluation purpose, knowledge of the context, rigor, and working with stakeholders. We also describe several challenges (and possible solutions) in advocacy and policy change evaluation design, some of which are contextual (such as lack of transparency) and some methodological (such as initiative complexity and uncertainty). In Chapter 4, we discuss conventional evaluation methods as well as unconventional or unique methods that have been developed specifically for advocacy and policy change initiatives. While we are mindful of the evolving and complicated nature of an advocacy and policy change initiative, we advocate developing and working with a program theory of change and/or logic model. In Chapter 5, we review the unique, off-the-shelf instruments that have been used by the field, such as those reported in the peer-reviewed literature and/or frequently mentioned by survey respondents, describing their intended focus, use, and limitations. In each chapter, we compare and contrast two evaluation cases to illustrate the points described in the narrative, as well as to provide useful designs, strategies, and tools.
Part 3: Leveraging Wisdom from the Field
In Part 3, we shift from evaluation practice to the opportunities and challenges of advancing the field of advocacy and policy change evaluation. Recognizing that APC evaluators are diverse, conduct other types of evaluation, and come from different backgrounds, we recommend leveraging the wisdom and knowledge of seasoned APC evaluators and creating a "community of practice" through continued sharing and networking.
In Chapter 6, we revisit partnership-based evaluation principles and describe the possible roles that advocacy and policy initiatives may afford evaluators: educator, strategist, and influencer. Unlike evaluators of stable programs with a specific intervention, evaluators of advocacy and policy change initiatives may find themselves informing decision-making and serving as a potent voice for change. We also discuss the Aspen/UCSF Survey findings on the key uses of recent evaluations and describe evaluation products and processes developed by the evaluators of the six evaluation cases. In Chapter 7, we identify gaps and discuss new frontiers in evaluation practice, including suggestions for strengthening individual evaluation practice, or what we call "mindful evaluation." With input from our partners at the Aspen Institute and other longtime advocacy and policy change evaluators and funders, we make recommendations for advancing the field, such as expanding the geographic focus of APC evaluation, continuing to build capacity among those evaluators committed to working in this arena, and supporting the sharing of designs, methods, and lessons learned, ensuring that evidence is used in the next generation of efforts to improve the lives of those most often left behind. Building a strong network among the APC evaluation community also helps ensure that evaluation techniques are incorporated sooner and more effectively, accelerating learning and wisdom across a variety of evaluation stakeholders. Finally, we believe there is a place for evaluators in the scholarship on advocacy, public policy, and nonprofits, and we describe areas and topics that would benefit from APC evaluation findings and methods.
In sum, this book taps into knowledge from other disciplines, relevant evaluation concepts, and the works of the APC evaluation community to strengthen advocacy and policy change evaluation practice. Our intention is to create an enriched understanding of advocacy and policy change that can be used to inform future evaluation practice. We reflect on individual and collective evaluation practice to address the current challenges raised by advocacy and policy change initiatives, as well as advance APC evaluation theory and design.
That said, we have been humbled by the enormity of our task: the advocacy and policy change evaluation arena is broad and deep, and in all likelihood we have overlooked a relevant model, idea, or perspective. While we have both worked in the international arena and with underserved populations, our understanding of policy, policymaking, and advocacy has been largely shaped by the U.S. context. So as not to be too U.S.-centric, we have used examples that are present in most settings, such as access to health care and human rights issues, as well as examples of specific policies that are more widely known.
The overall tone and philosophy of this book is to provide a supportive guide while taking a "critical friend" perspective, sharing information on specific strengths and gaps to advance the field of APC evaluation. We recognize that APC evaluation is an emerging field that learns from and builds upon a collective history of evaluation practice while also building its own identity and recognition. The results are well worth the effort. By focusing the evaluation lens on the increasing importance of advocacy in addressing disparities, equity, social justice, and the well-being of global communities, APC evaluation is becoming a prime vehicle for effective learning and for shaping future advocacy and policy change strategies and tactics.
Notes
1. The twenty-three-item survey was administered electronically in May 2014 by the Aspen Planning and Evaluation Program to the 585 members of the Advocacy and Policy Change (APC) Topical Interest Group (TIG) and 1,000 randomly selected members of the American Evaluation Association. The survey was completed by 106 evaluators, a 7 percent response rate. The response rate of APC TIG members was 9 percent. All respondents had been involved in evaluating advocacy and policy change initiatives within the last five years.