Working Along the Research-Evaluation Continuum

By moving back and forth between research and evaluation, we help organizations discover new and better ways to serve their communities.

by Knology
Jul 30, 2025

Research and evaluation are often said to be very different things. But as many have noted, the differences between these two activities are relative rather than absolute, and “the exact line between research and evaluation is contested.”

In our work, we find that it’s often useful to think of research and evaluation as a continuum — one that’s full of overlapping, intersecting “gray areas.” Those intersections lead to situations where research can inform evaluation, and where evaluation can inform research. By leveraging the synergies between research and evaluation, we’re able to help individual partners take their work to the next level and also improve practices on a field-wide basis.

In what follows, we share examples of how the research-evaluation continuum manifests in our work, while also detailing the benefits of this understanding of practical social science.

Research and Evaluation as Parallel Activities

As an organization dedicated to the production of practical social science, we consider research and evaluation core parts of our DNA. Some of our work is purely evaluative, and other aspects of it are purely research-oriented. But many of the projects we undertake have both a research and an evaluation component. As a result, we’re often simultaneously assessing the impacts of specific organizational initiatives and analyzing aggregate, field-wide data in an attempt to improve understanding of more general developments or phenomena.

For example, as part of our contribution to the American Library Association’s Libraries Transforming Communities project, we’re publishing evaluative case studies that describe the impact of libraries’ accessibility efforts and peer-reviewed research aimed at advancing knowledge and theory within the field of Library and Information Sciences (LIS). Similarly, as part of a four-year partnership with PBS NewsHour, we both evaluated an informal STEM education project and published an academic journal article contributing to the literature on STEM storytelling.

For a research-practice organization like ours, these experiences are not uncommon. Operating at different points along the research-evaluation continuum gives us a unique vantage point on each activity. The research we do outside of evaluations often yields insights and discoveries that sharpen our evaluative strategies and processes. And in our evaluative work, we often uncover results that have broad applicability, serving as the basis for peer-reviewed publication and further research.

Research Informing Evaluation

One reason people see research and evaluation as separate activities is that the questions they address can be very different. As one study puts it:

“Researchers choose their research questions according to their area of knowledge and the question’s interest and importance to the researcher. Evaluators choose them according to the probable usefulness of the answers in the project they are serving: in other words, according to their relevance to the project.”

Because of its highly practical, pragmatic nature, evaluation is not generally thought to involve considerations outside the scope of a specific program or intervention. It is often conceived of as “a cycle that begins and ends with the project.” But this need not be the case. Taking a more holistic approach to evaluation (for example, by conducting literature reviews to help contextualize a project) can enhance an evaluation’s quality.

As researchers, we have expertise that helps those we evaluate get a “big picture” view of their work. Our identity as a transdisciplinary social science organization gives us a broad knowledge of different social science traditions, fields, and literatures. That knowledge enables us to serve as “critical friends” — helping partners better understand the context of their work, rethink assumptions and preconceptions, and view their work through different lenses or perspectives. Informed by the research we conduct, our evaluative approach also helps partners broaden their horizons, ensuring that their decisions and actions solve urgent, in-the-moment problems while at the same time laying a foundation for long-term success and sustainability.

Evaluation Informing Research

Another reason a distinction is often made between research and evaluation is that these two activities may have different aims. As commonly understood, research is about contributing to generalizable knowledge and advancing theory, while evaluation seeks to advance and improve organizational practices and impacts. But that’s not always true. At times, causal theory emerges from an evaluation process. At other times, surprising findings from an evaluation can serve as a springboard for follow-up research that produces knowledge both in a project context and in a more general sense.

As these examples indicate, evaluation can inform research. Often, evaluation unearths previously unexplored questions that demand investigation. For example, after evaluating an educational program carried out in a single library, we might ask, “What are the conditions that best support learning in the library setting?” Because researchers do not always approach questions like these from the standpoint of practitioners’ experiences and perspectives, there is often not a robust research literature tied to them. Investigating these questions creates an opportunity to benefit both practitioners and scholarly communities on a broader level.

A recent example of this is our NSF-funded project, “Research Infrastructure for Informal STEM Education” (RIISE). Emerging out of our long history of evaluating informal science learning (ISL) programs, this project was informed by the realization that the ISL field would benefit from a new research infrastructure – one that could support the creation of practitioner-centered research agendas aimed at mapping the existing ISL terrain, and that could help build a community of ISL researchers, practitioners, and participants. By bringing together a group of 25 professionals with expertise in many different fields, we published a position paper explaining how such a research infrastructure could be built.

Let’s Put It To Work

The idea that research and evaluation are distinct things has merit. And that distinction is important to observe in practice. For example, when the two are treated as the same thing, there’s a danger that the findings generated from an evaluation will be too general (and thus not useful) for those behind a particular program, project, policy, or product.

Yet while the distinctions between research and evaluation are real and important, it’s also true that the two are not as diametrically opposed as some believe. Between the activities people generally call “research” and “evaluation,” there’s a huge gray area: a boundary-blurring middle space where practices associated with each concept overlap and inform each other. That space between research and evaluation is a poorly defined one, but it’s where we tend to thrive. By positioning research and evaluation along a continuum, we’re able to create knowledge that is applicable on many different scales, for many different audiences.

Our work along this continuum is a constant source of creativity and inspiration – one that helps us develop new ways of thinking and new frameworks for social science that are relevant both for individual institutions and the broader environments of knowledge and practice they operate within.

Photo courtesy of Ivelin Radkov at iStock
