Let’s chat about a specific methodology: Outcome Harvesting. It’s been a hot topic for a few years now. As one of only a few complexity-aware approaches to monitoring and evaluation named by USAID, you’ve likely heard it mentioned.
I love the name of it — Outcome Harvesting. Why a harvest?
Harvests are something you wait for. You nurture. You notice. You count, quantify, and use. They also nourish and sustain. Can evaluation be so lovely? You bet!
(If you know what this is and want to cut to the chase, skip down to the tips below.)
In practical terms, what is it?
A method of data collection, inclusive of analysis and use.
But really, what is it?
A step-by-step process done throughout the life of a project or initiative.
Why would I use it?
Complexity-aware. Instead of looking for and measuring outcomes pre-determined by a logic model or theory of change, it takes an exploratory approach: find and harvest the outcomes that have actually happened, then make sense of them.
Participatory and embedded. The process necessarily involves people beyond the designated M&E folks. It also necessarily takes place throughout the life of a project, not just at the end. These two features are desirable because learning, reflecting, and using data become part of the culture of a project — generally leading to better outcomes and more responsiveness to real-time happenings.
Shifts past activities to outcomes. This data collection method unabashedly skips over what the project did (activities) and cuts straight to answering: What happened as a result? That alone is enticing, but it gets more exciting: Outcome Harvesting seeks to capture changes that the program may have only influenced or contributed to, not just straight cause and effect. You get a chance to see your program in relation to larger systems and happenings.
What do I get out of it?
- A robust database of outcomes that your project/initiative contributed to
- High-quality data points for use in analysis of change patterns
- Insight into change processes throughout the life of the project/initiative so adjustments can be made to maximize impact
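Outcome Harvesting prescribes what an outcome should capture, not how to store it. As a loose sketch (the `Outcome` class and its field names are my own invention, not part of the method; a spreadsheet with the same columns works just as well), a harvested outcome record might look like:

```python
from dataclasses import dataclass, field

# Illustrative only: these fields are one possible shape for an outcome
# record, not a prescribed schema from the Outcome Harvesting method.
@dataclass
class Outcome:
    description: str             # who changed what, when, and where
    contribution: str            # how the project plausibly contributed
    significance: str            # why the change matters
    source: str                  # the "human source" who reported it
    substantiated: bool = False  # flipped after step 4 (substantiation)
    themes: list = field(default_factory=list)  # emergent codes from analysis

harvest = [
    Outcome(
        description="A municipal office reassigned staff to family-based care",
        contribution="Project trainings and advocacy with municipal leadership",
        significance="Signals local buy-in to systems change",
        source="Country-team reflection session",
    )
]
```

However you store it, the point is that each outcome carries its own contribution and significance alongside the change itself, ready for later substantiation and analysis.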
Quick review of the steps of Outcome Harvesting
Here is a post on the AEA365 blog that discusses the six steps of Outcome Harvesting. Keep those steps in mind for easy reference as I discuss tips below.
Speaking from Experience
We recently had the pleasure of conducting a mid-initiative review using the data from an Outcome Harvest. Our role was to complete steps 4, 5, and 6 of the Outcome Harvesting method (substantiate, analyze, support use). Our client, Changing the Way We Care, is a truly global initiative combining advocacy, organizing, and systems transformation with specific action taking place in spotlight countries.
The complexity of their undertaking merits the use of a method like Outcome Harvesting. The results of conducting regular harvesting within their staff were evident to us as outside consultants — they were grounded in the impact of their actions and simultaneously brimming with curiosity and further questions about their work. Diving deep into their database of outcomes, seeking substantiation and answers with accompanying interviews and surveys was a rewarding experience for all involved. We were able to verify their theory of change and offer focused strategic advice for moving forward.
Anytime you get real personal with a specific method, you are reminded of its precise fit and drape and hopefully come away with new ideas for wear.
Tips for Use
If you are considering using Outcome Harvesting or if you are in the midst of it, here are some tips we know we’ll be keeping in mind for the future:
- The magic is the harvest process
- Make the harvest process more human
- Use the harvest process to explore other places
- Substantiate within a frame of utility
- Substantiate with other humans (not just documents)
- Analyze with curiosity
The magic is the harvest process
A substantial portion of the utility of Outcome Harvesting comes from conducting the harvest itself — i.e., steps 1, 2, and 3. By the time (4) substantiation is underway, those involved have already benefited tremendously from the group reflection, the interaction, the categorization of potential outcome statements within a theory of change, and the careful documentation gathering. I think it helps teams better understand themselves and stay focused on seeing the direct desired results of the many interlocking strategies they are shepherding forward.
Consider what needs to be in place to conduct (1) design the harvest, (2) draft outcomes, and (3) engage human sources:
- clear learning questions
- framing for “human sources” (read: staff) about
  - complexity-aware evaluation — staying open to unintended consequences
  - thinking in terms of outcomes, not activities
- regular times for group discussion, pause, and reflection
- engagement of multiple voices, not just those with M&E titles or those in power
- ways to scan the environment, stakeholders, clients, etc. for what has changed
- great clarity and intention regarding the contribution from the project/initiative/program
Wow! Does that ever set up a team for a rich learning culture and adaptive action!
Make the harvest process more human
When designing your harvest in step one, resist the urge to get sucked into the lure of six decisive steps. While it is a specific method, it need not be a technical morass. It can be tempting to become rigid and overly thorough, whether to do it “right,” to build external credibility, to soothe the anxiety of trying something new, or for other reasons. Resist!
Yes, having a lovely and thorough database of outcomes can be satisfying. But if you make your spreadsheet too complicated, or pull precious team reflection time into building the database of outcomes rather than pulling the team’s insight and experience into a collective space for reflection, then you’ve lost your way. Let the M&E team attend to whatever technical details are needed.
If Outcome Harvesting is to feel sustainable, not burdensome, and even enriching, then designing it to fit within natural workstreams, to complement natural timelines, and to address real-life decisions is essential.
Use the harvest process to explore other places
As you build your database of outcomes through regular harvesting reflection times and/or asynchronous capture methods, leverage that built-in staff time to broaden thinking and spur conversation. The naming of outcomes and the description of your contribution lend themselves well to going further. Here are some places you might choose to go:
- Process. How are these outcomes happening? What about how the team practiced or intervened made a difference?
- Principles. What guided the project/program decisions that led up to these outcomes? What guidance would you offer others who want to contribute to similar outcomes? What are the principles behind your work?
- Theory of change. Are these expected outcomes? Surprising? Do they align with what you thought your theory of change was? How so?
- Landscape. Who else is regularly contributing to these outcomes? What other factors are influencing — enabling or hindering — outcomes?
- Level of effort. How much effort is it taking to produce certain outcomes? Are your strategies and/or activities effective? Why or why not?
- Missing. What isn’t being reported as outcomes? Where are there holes? Are they vital? Did you expect to touch these places?
Substantiate within a frame of utility
Step 4, substantiation, is valuable but can also become a red herring. Substantiation is similar to cleaning a quantitative data set — it needs to be done. It helps ensure data quality so that the analysis and its results are the best they can be. This is important. AND it can suck up a lot of resources (time, money, and social capital) beyond its utility. Right-size the substantiation effort so that you don’t run out of resources for the final two steps!
Three tips for making substantiation manageable within means:
- Do this at regular intervals. Don’t wait for it to pile up.
- Conduct substantiation internally (if your audience(s) will tolerate this). Create a procedure through which your M&E staff can conduct substantiation efforts transparently and with good documentation.
- Remember, substantiation can be done via multiple data sources and data collection methods. Use what suits you. We used Google Forms to great effect: a very high response rate, easy 3–4 question forms, and lovely additional input. This made responses easy to track and gave precise data.
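If your substantiation responses come back as a spreadsheet export (Google Forms and most short-form tools can export CSV), tracking who has confirmed what takes only a few lines of scripting. This is a hypothetical sketch: the column names, answer values, and outcome IDs are invented, not from our actual forms.

```python
import csv
import io

# Invented stand-in for a CSV exported from a short substantiation form.
csv_text = """outcome_id,accurate,contribution_confirmed,comment
OC-01,yes,yes,Also seen in partner reports
OC-02,yes,partially,Other NGOs contributed too
OC-03,no,,Outcome seems overstated
"""

responses = list(csv.DictReader(io.StringIO(csv_text)))

# An outcome counts as substantiated here if the respondent confirmed
# the statement's accuracy; adjust the rule to your own procedure.
substantiated = [r["outcome_id"] for r in responses if r["accurate"] == "yes"]
rate = len(substantiated) / len(responses)

print(f"Substantiated {len(substantiated)}/{len(responses)} ({rate:.0%})")
# prints: Substantiated 2/3 (67%)
```

Even a tiny summary like this, run at regular intervals, keeps the substantiation backlog visible so it never piles up.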
Substantiate with other humans (not just documents)
If you use key informants for parts of your substantiation process, we have one technical piece of advice and a second recommendation aligned with Picture Impact’s bent for exploration and systems thinking.
First, on a technical level: when you substantiate an outcome, you are checking both the outcome statement’s accuracy and the project/program’s contribution to that outcome. We found that contribution statements tend to be project/program-centered. That is fine for internal documents, but as you shift outside this inner circle and ask someone else to verify your claims, the jargon and perspective used are often confusing or irrelevant to your outside stakeholder. Consider reworking contribution statements carefully for a broader audience, not as a replacement, but for the specific purpose of substantiation. You want to make the stakeholder’s participation in your pursuit of substantiation as painless as possible; it also helps them give more meaningful feedback and input.
Second, if you are going to the trouble to engage outside stakeholders, take advantage of your effort! This is a relationship-building opportunity. The interaction should be useful to your respondent, not just for your benefit. It is a chance for meaningful feedback and information gathering beyond substantiation, and a moment to gather the nuance and detail that help you understand change processes better.
This could mean a full-blown interview, veering into all areas: outcomes they see, the reputation of the program/project, strategic advice, updating a landscape analysis of additional connectors and actors, etc. Or it could be just 1–2 additional questions beyond a yes/no “Is this accurate?” For instance: Why do you think this outcome happened? Or, has this continued to happen?
Analyze with curiosity
Because Outcome Harvesting is meant to stay open to the outcomes that emerge — not predetermined ones — it makes sense to follow suit in your analysis. Taking an open, grounded-theory approach to analyzing outcomes — picture outcomes on index cards being grouped and re-grouped on the floor according to potential themes — yields wonderful insights. Grounded theory allows emergent codes, patterns, and subgroups to take shape independent of any particular framework or expectation.
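To make the index-card picture concrete, here is a minimal sketch of the regrouping step, assuming reviewers have already assigned an emergent code to each outcome. The sites, outcome statements, and codes below are invented for illustration, not drawn from any real harvest.

```python
from collections import defaultdict

# Invented examples: (site, outcome statement, emergent code from reviewers)
coded_outcomes = [
    ("Guatemala", "Municipal office reassigned case workers", "workforce"),
    ("Guatemala", "Families reunified ahead of schedule", "reintegration"),
    ("Site B", "National care standards drafted", "policy"),
    ("Global", "Donor adopted care-reform language", "policy"),
]

# Regroup by code: the digital equivalent of shuffling index cards
# on the floor until themes emerge.
by_theme = defaultdict(list)
for site, outcome, code in coded_outcomes:
    by_theme[code].append((site, outcome))

for code in sorted(by_theme):
    print(code, "->", [site for site, _ in by_theme[code]])
```

Running the same regrouping on a subset (say, only one country’s outcomes) is how context-specific themes surface alongside the global pattern.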
I’ll share two lovely results of using grounded theory on this most recent project. The first: when we analyzed all of the outcomes this way — outcomes from all demonstration-country sites and global efforts — we accidentally re-created their theory of change. It was wonderfully confirmatory! Second, when subgroups of outcomes were analyzed this way (for instance, just Guatemala’s), context-specific nuance became quite clear, as the themes from certain subsets of outcomes were distinctly different.
For as specific as Outcome Harvesting is, it is still an art, not a science. Each use of the methodology will have its own learnings and flavor. I think sharing not only how and when to use it, but also the nitty-gritty specifics, can get us thinking creatively and broadly within the context of Outcome Harvesting. I recognize that much of what I suggest strays from the minimum application of the methodology, but I think that’s what’s exciting — a reflection of my commitment to use and to being complexity-sensitive. What have you tried?