Doing Partner-Centered Evaluation
By positioning evaluation as a tool for organizational learning, we help partners improve their practices and advance their missions.
Evaluations are often constructed to serve donor interests. When conducted in this manner, evaluators may use a “tick-box approach” that yields little benefit for the organizations themselves. For example, in an evaluation of a new educational program, donors might ask questions like:
- How many students were recruited into the program?
- How many students completed the program?
- By how much did test scores improve?
Questions like these prioritize measurable outcomes over data that might help organizations improve the quality of a given program, curriculum, or intervention. Instead of reporting on completion rates or test scores, educators might like to know things like:
- Which kinds of recruitment strategies were most and least effective?
- Why did some students disengage midway through the program?
- What conditions helped students feel safe enough to take academic risks?
In other words, educators might be more interested in questions that promote organizational learning. They might favor a type of evaluation that opens the door to reflection, insight, and long-term improvement — not just accountability.
Our Approach to Evaluation
Our evaluations prioritize organizational learning. We always begin our evaluations by asking our partners what they (not just their donors) want to get out of the process. This approach allows us to expand from standard, boilerplate questions like “did you accomplish x, y, and z?” to more useful questions like “how can we learn from what you did to continue to improve your practice?”
When designed in this way, evaluations are bi-directional, allowing for back-and-forth exchange and the co-creation of an evaluation’s goals and processes. At the start of any evaluation, we like to facilitate conversation among project leaders, holding workshops that help organizations clarify their program goals and connect them to broader institutional missions. These initial meetings help partners determine what they’d like to get out of an evaluation.
For evidence of the value of these workshops, see examples from our partnerships with the Finca Cántaros Environmental Association, the Santa Cruz Public Library, the Clark Art Institute, and the Center for Brooklyn History.
Beyond facilitating, we also contribute our own expertise to these meetings — whether that’s in logic model design, survey development, data analysis, or understanding how similar efforts have succeeded or struggled elsewhere. We also bring our research directly into the early stages of the evaluation process — for example, by conducting literature reviews on the topics a given intervention is addressing (something that has been shown to enhance an evaluation’s quality). We help partners see patterns they may not have noticed, question assumptions, and make meaning from the data in ways that deepen both learning and strategy.
Perhaps most importantly, our partner-centered approach to evaluation builds relationships of trust. Having established this trust, we’re better able to serve as “critical friends”: evaluators who put themselves at a partner’s disposal “without denying [their] own expertise and capacity to influence the process in light of this.” Importantly, being a “critical friend” is not about imposing one’s own questions or views on another. “Critical” here means not criticism or negative judgment, but the ability to “stand back from the situation and view it through different lenses.” Critical friendships start and end with caring, listening, and understanding, and they require the evaluator to be a passionate advocate for the partner’s growth and success. Within an environment of trust, critical friends offer constructive suggestions: they ask provocative questions, help partners better understand the context of their work, challenge some of their preconceptions, and encourage them to examine data from different angles and theoretical perspectives.
In our experience, the ability to serve as “critical friends” is immensely beneficial to partners, because it supports evaluation activities that are maximally aligned with their values, their missions and visions, and their strategic goals. While this kind of evaluation requires more of a commitment from partners, the benefits it yields outweigh the costs involved. For evidence of this, check out the following examples:
- “Reducing Barriers in Informal STEMM Education” — Through evaluation of a program called “Science Journeys,” we’re helping our partners at Children’s National Hospital overcome the many obstacles hospital-based education efforts typically confront, while also ensuring that the program aligns with the needs, interests, and concerns of its target audience.
- “Making Museums Financially Accessible” — When commissioned to evaluate an initiative called “Museums for All,” we used data gathered through case studies and a questionnaire to build a theory of change — one designed to help participating institutions maximize the initiative’s benefits by thinking more strategically about their implementation actions.
- “What Library Patrons Have to Say About Accessibility” — In our role as evaluator of the American Library Association’s “Libraries Transforming Communities” project, we’ve helped small and rural libraries better understand the impact of their accessibility efforts by conducting interviews with library patrons.
- “Responding to Shifting Needs in Environmental Education” — As part of our evaluation of a wildlife curriculum developed by the New England Aquarium, we interviewed teachers to learn about how informal science organizations could better align their programming with current school priorities and directions.
Let’s Put It to Work
Interested in seeing how our partner-centered approach to evaluation could benefit your organization? Get in touch with us by filling out this form.
Photo by FortyTwo @ Unsplash