Clearing the Air

An Evaluator and Her Client Finally Spill Their Guts

*This is the fifth post in our “Ask the Evaluators!” series, exploring real-world assessment of recent media projects to help you make informed decisions about what can be measured, and good questions to ask along the way. Click here to see all of the articles.*

A 10-minute read (for you “measurement and relationship” geeks out there…)

If you’re like us — media strategists and/or social change agents who use storytelling to fuel our work — you can’t afford to keep making the same mistakes over and over. That’s why Active Voice Lab is a staunch practitioner of outside assessment. Recently I asked our go-to evaluator, Dr. Kien Lee, to give me her candid feedback on how we worked together on a project several years ago. To keep things on track, we asked Debika Shome, another member of the HowDoWeKnow/Learn Network, to referee.

I can feel the social media experts frowning at the length of this post, but taking shortcuts happens to be one of the obstacles we discovered in this deep, informal dive! That said, if you don’t have time to read the entire conversation, just keep these questions in mind:

“Are We a Good Fit?” 
Four Prompts for Evaluators and Media Practitioners Before They Agree to Work Together

1. What does the client REALLY want to know?
“It’s not too hard to find an evaluator who will tell you what you want to hear. But we actually needed to know if we were accomplishing what we claimed.”

2. What will the client do with the information?
“Sometimes I ask clients, ‘What happens if the evaluation shows this instead of that?’”

3. What makes each of us tick?
“Talk a bit about each other’s worlds, expectations, and possible tensions that might come up.”

4. What time and resources do we need?
“Not just the client’s and evaluator’s, but also the people on the ground who will need to spend time talking to the evaluator. Make sure they’re aware of this in advance!”

 
THE CHALLENGE

Debika: How did you decide to work together?

Ellen: In 2003, we were looking for an evaluator whose background was in social change, and not in media evaluation specifically. I don’t remember precisely how we found Community Science, but I DO remember that Kien’s then-boss, David Chavis, made me really mad during our first phone call! He questioned whether we could legitimately claim that using film would really make a difference. That pushback led to one of the most important conversations we had; I expected you guys to just tell me what I wanted to hear, but Community Science wasn’t interested in hollow claims about media and social change; they are dedicated to actual results.

“I expected you guys to just tell me what I wanted to hear, but Community Science wasn’t interested in hollow claims about media and social change; they are dedicated to actual results.”
– Ellen Schneider, Active Voice Lab


Kien: I think what attracted us to the work that Active Voice Lab was doing was the intersection of story and documentary with a social issue near and dear to our hearts. If it had been just about story and documentary, we would not have been as attracted to the work.

[For the most recent project,] the attraction was the topics — immigration, day labor, racism, intergroup relations. These are hard topics for the public to talk about because they are so emotional. Thus, it was an opportunity to study and understand how film could crack the door open for discussions among people who don’t typically sit down to talk about these issues. Also, I want to add that Active Voice Lab under Ellen’s leadership was very open to learning. Her conversation with David made her mad, but she remained open to how evaluation could help her be better, even if it wasn’t what she wanted to hear.

Debika: Ellen, why a firm focus on social change and not media evaluation? What’s the difference in approach?

“Why a firm focus on social change and not media evaluation? What’s the difference in approach?” 
– Debika Shome, TCC Group


Ellen: When I started at POV, impact evaluations of public television programming were, well, fairly gentle. Interviews with viewers, teachers, or nonprofits were useful but often lacked deeper analysis. Tough questions about how a film could really help people and communities change their actions, particularly in the policy arena…those were fairly rare.

Debika: Ellen, what were you most apprehensive about with this [most recent] evaluation?

Ellen: I thought that our Ecosystem of Change (EoC) model, while central to our approach, might be tough to implement. We already knew that we were asking a lot from our partners. We knew that the individuals who came to the table representing sectors, organizations, approaches, etc., wouldn’t have consistent understanding or skills in working with media. (Why should they?) Yet we felt that if people experienced a narrative collectively, conversations and common needs might flow. That turned out to be accurate, but admittedly difficult to pin down and standardize.

Debika: Both of you, what did you hope to get out of this evaluation?

Ellen: For me, I was becoming concerned that Active Voice Lab’s process was getting too granular. It seemed enormously labor-intensive: all that staff time, all the coaching, all the accommodating. It wasn’t at all clear how we might take this to scale. But the EoC concept seemed central. We needed smart outsiders to step back and give it to us straight. Also, it became clear that our partners spoke very differently to Kien than they did to staff. Staff reported — and I witnessed — deep affection and connection to the partners. So they were taken aback when Community Science would report, for example, that the same partners didn’t understand how to use the film, or what the EoC was about, etc. It was a parallel universe!

“[Our staff was] taken aback when Community Science would report, for example, that the partners didn’t really understand how to use the film, or what the EoC was about, etc. It was a parallel universe!” – Ellen

Kien: For me, it was about documenting the outcomes that Active Voice Lab set out to achieve and the lessons learned to support their continuous improvement of this work.

Debika: Ellen, the MacArthur Foundation was funding this — what pressure did you feel to show positive results?

Ellen: Happy to report that MacArthur thought of Active Voice Lab as a learning partner. They were not deeply in the outreach/impact game, but they also wanted to know how social change groups could take advantage of documentaries, especially for civic engagement. I think they appreciated the intersectional nature of our approach.

DIFFERENT WORLDS

Debika: Kien, what were the challenges of designing this evaluation?

Kien: First, the two worlds from which we come. The Community Science team comes from a world of precision and measurement; we spoke a certain language. The Active Voice Lab team comes from a world of art and culture and had its own language. (And, as Ellen puts it so well, a more fluid one because of the “lab” context in which they were working.) In retrospect, I wish our two teams had spent more time understanding each other’s worlds and the apprehension that came with our work together. Perhaps because of my comfort and familiarity with Ellen, I didn’t pay sufficient attention to this tension, which I know comes up all the time when we are asked to evaluate something.

“In retrospect, I wish our two teams had spent more time understanding each other’s worlds and the apprehension that came with our work together.”
– Kien Lee, Community Science


Second, we were “dropping in” to evaluate work that was unfolding in real time on the ground. Active Voice Lab staff were continually adjusting their strategies and language in response to the challenges and issues they were observing and hearing from the local partners. We were not able to keep up with those adjustments. Consequently, we would report something we had heard and synthesized from interviews, only to find that staff had since arrived at a slightly different interpretation because we hadn’t talked to the local partners recently. The limited budget did not always allow us to stay in close contact with the Active Voice Lab staff as they adjusted their strategies, implementation processes, and relationships along the way.

Debika: Kien, in retrospect, any thoughts on how evaluation can adapt to the real-time nature of this work?

Kien: I think one of the strategies is to schedule frequent, quality time with Active Voice Lab staff, focusing our inquiry on the key levers, pressures, and decisions that may have affected their strategies and implementation processes and contributed to their adjustments along the way. I would have made a deliberate attempt to help the staff understand that our evaluation team’s pushing and questioning — which could be perceived as not understanding, or even as being difficult — was meant to ensure that we were not making any assumptions about their work. This is such an important step. I now tell any group I work with to be prepared for the pushing and the questions, precisely because we don’t want to make any assumptions. Of course, we have to do this in a respectful and non-judgmental way.

“I would have made a deliberate attempt to help the staff understand that our evaluation team’s pushing and questioning — which could be perceived as not understanding, or even as being difficult — was meant to ensure that we were not making any assumptions about their work.” – Kien

Ellen: Kien, that’s been true for us too. Since neither of us is a robot or an algorithm, the need to invest in trust-building and transparency is paramount, especially when the stakes are high. This might be a good place to introduce our new initiative, “What Would It Take,” which we hope will be a philanthropic model for incentivizing creatives and other social change leaders to be more candid about what works and why.

For me, I knew I could only learn from Community Science’s findings and recommendations. But for staff, it was possible that critical observations might reflect negatively on them, or might not fully appreciate what they had to do to get to those outcomes, even if the outcomes changed along the way. In a sense, this gets to the power dynamics that influence any encounter. I had little to lose; Community Science had its own credibility to protect; and staff wanted to be understood (even if it didn’t show up on a logic model).

Kien: I would also add that there was tension between the data we collected and the Active Voice Lab staff’s perceptions. The data we collected were themselves perceptions, those of the local partners, but when they reflected a pattern based on reports from several people, we reported that pattern as a finding. The Active Voice Lab staff, on the other hand, heard [something else] from the local project director…and felt that should have been the finding, or that our finding was inaccurate. It’s not that one is right and one is wrong; it’s the difference between what is understood as data and what counts as a finding.

Having said this, I found the discussions our team had with the Active Voice Lab staff to review the findings and address any inaccuracies extremely helpful and productive. This step in the interpretation process is key to any good evaluation, no matter how hard it can sometimes be for everyone.

Ellen: Certainly true! And this was the problem of being a “lab” where everything is fluid and opportunistic. If we learned that a partner was having trouble, we’d find a new way for them to intersect with our project. That was the ingenuity of the staff; very partner-centric. So while the staff might feel very proud of their troubleshooting, very protective of their relationships, and very creative about the new opportunities, it was virtually impossible for Community Science to keep up with these shifts. Tricky.

Also, right before this project, we’d hired an in-house evaluator who was simultaneously creating her own system. She drew on what Community Science had contributed in the past, but there were times when her immersion in our projects clashed with Community Science’s more distant and, I’d say, less forgiving approach. She explained that she had the same frustrations with me as Community Science might have had: that I came up with impromptu ideas and expected her to deal with them, or that I made adjustments and didn’t communicate them properly, so she had to either confront me or fill in as best she could. In this sense, there’s an inherent conflict between experimentation and assessment.

Kien: I’m not sure about that inherent conflict. I think experimentation can be assessed, but there must be a clear understanding from the beginning of everyone’s expectations for the process and the anticipated outcomes. There should be more emphasis on the implementation process and less on the outcomes.

WHAT DO STORIES ACTUALLY CONTRIBUTE AND HOW LONG DOES IT TAKE TO FIND OUT? 

Debika: Kien, what were the hardest things for you to measure? Why?

Kien: The impact of the film. We could measure how the film changed people’s hearts and minds and affected change strategies. What was hard to measure was whether any larger changes in organizations, systems, and communities could be attributed to the film. While funders and others understood this, they sometimes still talked as if they wanted to know if the film caused the change, rather than the role of the film in contributing to the change. I think this is especially so when we are talking about a film that deals with hard social issues. This is why the EoC concept was attractive and made sense: it took more than a film and a few partners to advance the issue.

“While funders and others understood [the difficulty of attribution], they sometimes still talked as if they wanted to know if the film caused the change, rather than the role of the film in contributing to the change.” – Kien

Ellen: I think we’ve been aware of the limitations of attribution. That’s why the EoC is important to us: it highlights the unique contributions of stories. Our [early] assumptions were a little grandiose: that we as outsiders could parachute in and expect to alter relationships, increase communication, and strengthen or even build collaborations. Today, I am a lot more realistic about what stories can contribute to. In fact, since working with Community Science, we talk about the EoC this way: “Social movements need research, money, policy actions, organizers, AND stories…” This helps us understand how and why stories are required. I think if we had been clearer about that earlier on, the evaluation questions might have been more specific.

Debika: Kien, what are your thoughts on whether evaluation can measure the intangibles related to storytelling, art, emotion?

Kien: Our evaluation did not focus on that because we were clear — I thought — about the tangibles desired. The intangibles were not the focus. Having said this, the Active Voice Lab staff documented the emotions by working with local partners to hand out surveys after participants watched the film.

Ellen: True about the intangibles. We were definitely looking for tangible shifts, and, by the way, we prided ourselves on this harder-core assessment. At a time when many creatives were railing against the very notion of measurable impact of stories, we asserted that there WERE valid indicators, if only you had the right roadmap. We told funders that they should expect those identified outcomes, and that the outcomes would help them make more sense of why stories were essential to their philanthropy. From that perspective, if, say, you wanted communities to become more welcoming, a strategic screening accompanied by values-based dialogue, realistic calls to action, etc., SHOULD be a detectable factor. We knew through experience that the story itself was resonant. We were focused on how that power could be a catalyst.

Debika: Often the impact we want to see is over the long term, but it’s not always feasible to do an evaluation that will cover the long tail of impact. How do you address this?

Kien: That is a good question for any evaluation of tough issues that take time to change, and it is no different for the evaluation of film. First of all, we need to be clear about whether we expect the film to make a lasting impact. If not, then we are searching for unintended impact, which is important to capture; however, we must have clear expectations about what it means to relate that impact back to the film. Second, it’s a question of time and resources: not just for a group like Active Voice Lab and an evaluator like Community Science, but also for the people on the ground who will need to spend time talking to the evaluator.

Debika: Kien, what role does anecdotal information play in measuring impact? Does it have a place? Where does it fit in relation to quantitative and qualitative data?

Kien: It’s about putting it in context. We can’t generalize from an anecdote unless we see a pattern of similar responses. It’s not better or worse than other data; it’s just a matter of how you interpret and report it.

LEARNING TOGETHER

Debika: How did your work with Community Science, or with evaluation in general, change what Active Voice Lab was doing? Did its approach change?

Ellen: For example, I’d never seen a logic model when we started on this project. It hadn’t occurred to me to sync up how a story could be used with how an organization or a movement was moving forward. They brought their methods for measuring community change to our attention, and together we modified some of the steps. We had to become more rigorous.

Kien: At the same time, I believe it was more than just evaluation. Because we were knowledgeable about community change, we understood the process of making change beyond evaluation. So we also looked at Active Voice Lab’s implementation processes, such as the facilitation of groups convened to watch a film and the development of Active Voice Lab’s partnerships with local organizations, and how these processes affected the outcomes associated with a film’s use. One of the recommendations I recall from several experiences with Active Voice Lab staff was the importance of being prepared to answer the resource questions that a film was likely to raise. For example, during The New Americans [2000-2004], an issue that repeatedly came up was immigration law, and many people, especially American citizens, were not informed about the laws. It became important for Active Voice Lab staff to be able to point them to resources, or have the information at hand, to seize that opportunity to educate the participants. And, if Active Voice Lab did not have the capacity, to identify what strategic partnerships were necessary to provide a stronger system of support to communities grappling with such tough issues.

Ellen: You also correctly identified some “resource creep” during that project, Kien. You went on the road with our staff to observe how they conducted the ecosystem planning process. You called me one day and said, “Did you know that your staff have become community organizers?” Apparently, in some sites, they were a bit ahead of the partners’ organizing skills and were being asked questions like, “What’s a steering committee? Can you help us put one together?” And since the staff were so committed to helping those groups, they often crossed the line. I would NEVER have understood the consequences of that had Kien not witnessed the encounters and thought deeply about our optimal role. We also came to understand that we were part of someone else’s (our partners’) process, and had to grapple with our capacity, the appropriateness of our content, as well as the other variables that we would never control.

 

Debika: Both of you, looking back, knowing what you now know, what makes for a good partnership between media creators, impact campaigns, and evaluators?

Ellen: I think that EVERY partnership with an evaluator should start with this: What does the client REALLY want to know? I can’t tell you how many times I’ve been asked by other nonprofits or producers for an introduction to an evaluator, and that’s what I always encourage them to think through. As I said at the beginning, it’s not too difficult to find an evaluator who will tell you what you want to hear. That happens. I’ve also heard about situations where the client wanted to limit the scope of the assessment to a particular activity or task. We wanted to know if our EoC process was helpful, and even though we were surprised to find out that we had overrated it, that discovery subsequently improved our services to the field.

“I think that EVERY partnership with an evaluator should start with this: What does the client REALLY want to know?” – Ellen 

Kien: I agree with Ellen: “What do you want to know?” Followed by, “What will you do with that information?” Then we can home in on the best design and the trade-offs, because there will always be trade-offs! Sometimes I ask clients, “What happens if the evaluation shows this instead of that? Not saying that it would, but hypothetically, what are the implications for you?” We don’t want to be in a situation where we know from the beginning that you may not see the change you want but don’t tell you until the end — that would be irresponsible.

Ellen: Community Science really taught us about becoming a learning organization. We turned to them repeatedly to ask, is this BS? Can we expect this, and how would we know if we were on the right track? Our organization was too small, and our staff too passionate, to do work that wasn’t meaningful.