Why Your Hard Work Sits on the Shelf — and What to Do About It

We’ve all been there. The time when the client seems to forget the project ever happened as soon as the final check is cut. The time when your report stuffed full of creative recommendations gets buried by risk-averse leadership. The time when the stakeholder group really does seem engaged by the findings, has lots of conversations, and then…nothing changes.

These stories happen with remarkable frequency. In fact, based on the evidence, there’s ample reason to believe they are the norm rather than the exception. Among more than 120 evaluation and program executives surveyed at private foundations in the U.S. and Canada, more than three-quarters reported difficulty commissioning evaluations that result in meaningful insights for the field, their grantees, or the foundation itself, and 70 percent found it challenging to incorporate evaluation results into the foundation’s future work. A survey of more than 1,600 civil servants in Pakistan and India found that “simply presenting evidence to policymakers doesn’t necessarily improve their decision-making,” with respondents indicating “that they had to make decisions too quickly to consult evidence and that they weren’t rewarded when they did.” No wonder Deloitte’s Reimagining Measurement initiative, which asked more than 125 social sector leaders what changes they most hoped to see in the next decade, identified “more effectively putting decision-making at the center” as the sector’s top priority.

This problem affects everyone working to make the world a better place, but it’s especially salient for those I call “knowledge providers”: researchers, evaluators, data scientists, forecasters, policy analysts, strategic planners, and more. It’s relevant not only to external consultants but also to internal staff whose primary role is analytical in nature. And if the trend continues, it will eventually catch up to professionals throughout the social sector. Why spend precious money and time seeking information, after all, if it’s unlikely to deliver any value to us?

Frustrating as this phenomenon may be, the reason for it is simple. All too often, we dive deep into a benchmarking report, evidence review, or policy analysis with only a shallow understanding of how the resulting information will be used. It’s not enough merely to have a general understanding of the stakeholder’s motivations for commissioning such a project. If we want this work to be useful, we have to anticipate the most important dilemmas they will face, determine what information would be most helpful in resolving those dilemmas, and then explicitly design the analysis strategy around meeting those information needs. And if we really want our work to be useful, we have to continue supporting decision makers after the final report is delivered, working hand in hand with them to ensure that the choices they make take into account not only the newly available information but also other important considerations such as their values, goals, perceived obligations, and existing assets.

In short, knowledge providers need to be problem solvers first, analysts second.

Decision-Driven Data Making

So what should you do if, say, an impact investor wants to set up an impact tracking platform but can’t articulate how the results will inform its future plans? Or if you’re tasked with creating a data dashboard for your social enterprise on faith that the included metrics will someday, somehow, inform something?

The first and most important step is to interrupt the cycle as soon as possible. The further you get toward completing a knowledge product without discussing how it will be used, the more likely you and your collaborators are to find yourselves, at the end, struggling to justify the value of your work. It’s much better to establish clarity about decision applications from the very beginning: not only because it helps solidify the connection between the information and the decision in the decision maker’s mind, but also because it will help you design the information-gathering process so that it is maximally relevant and useful.

For instance, in the case of the evaluation example above, there are probably a couple of decisions the evaluation could inform. Should the program be renewed, revised, expanded, or sunset? Does the strategy underlying the program’s design need to be adjusted? And if so, how? It’s important to recognize that these decisions, like all decisions, must be made in light of not only what’s happened to date but also what we should expect to happen going forward, and what other options may be available to accomplish the organization’s goals.

A good decision consultant will bring these and similar questions to the surface of the conversation before any analytical work gets off the ground. A great decision consultant will go further and engage the client in “rehearsing” the decision-making process at the outset of the project, while leaving room for judgments to change in response to new information. The framework created through this process can then be used to make the actual decision once the evaluation is complete and the findings have been presented to the client.
