Our informal focus groups and research suggest that, when it comes to measuring impact, people have a lot to say.
But they are unlikely to have expressed their opinions to those outside their own community. We’re sharing some perspectives below, paraphrased from all of our focus groups and conversations—and by no means comprehensive—to jump-start an important conversation between grantmakers and media makers.
What did we miss? Contribute your opinion here.
When we consider funding media, we need to be able to prove who it influences. We have to ask: Is this the best use of our money? Are there cheaper, more cost-effective ways to get where we’re trying to go?
Our policy is NOT to ask media makers about their measurable objectives at the beginning of a project. How could they possibly know, that early on? (Besides, you’re setting yourself up for a bull#&$% answer…)
We support media only when our NGO grantees make a strong argument for it. That argument must include how, why, and to what ends the media will help them reach their objectives.
We know we can’t necessarily apply the methods we use for program evaluation, but we need some metrics to know whether funding media makes sense for us.
I wonder how to measure the occasional counter-effects of a snappy piece of media designed to rally people to a cause; that is, you’ve fired up your base with strong content, but you’ve also managed to light up the opposition by igniting their emotions.
Media is a 21st century must. We’re dedicated to measuring impact because we want to know if our media grantmaking strategies are working and how to advance the field.
We already know that powerful media is essential for influencing public will and opening hearts and minds. We don’t need a lot of data to prove it.
I understand that investing in impact evaluation can help make the case for other funders to invest in media. But I worry that, with so much activity, we might be diverting funds that could be supporting distribution.
We create media in order to have lasting impact, but our measurement criteria are likely a bit more nimble than those of a funder that might use a prescriptive scorecard (views, attitudes, behavior change). We would care about those things, but I think we might hold them a bit more loosely than others.
Why not measure to see if we’re right, to learn more about what’s working and what’s not? Bring it on.
It’s hard when foundations are interested in outreach goals and objectives but don’t want to contribute to the production of the film. This limits impact because filmmakers are always struggling, and so it can be difficult to reach long-term objectives.
I make films about deep issues where it takes a long time to see change. What can short-term evaluation tell us?
I’m an activist. When organizers use my films, that’s an outcome. When people donate to a cause because they saw my work on the web, that’s another outcome.
People come up to me after screenings in tears saying, ‘It changed the way I think.’ The evaluation tools don’t capture what’s really great about films: their nuances, the way they make people feel. How do you put metrics on that?
Funders aren’t looking at proposals that don’t promise ‘measurable change.’ I don’t even submit projects anymore that aren’t trying to have a specific impact. So yes, we’re censoring ourselves.
We started assessing our impact because our funders expected it, but now it’s a standard practice. We’re looking to see how our films impact/support/deepen our partners’ work over time.
It’s the funders who want evaluation. Period.
I’m interested in measuring impact, but I’m a filmmaker, not a social scientist. If I were to evaluate my own work, I’d need the training, resources, and time to make it happen.
Are we compromising the quality and artistic vision when we are producing primarily for a very targeted audience?
I don’t want to do gimmicks and make apps just for the sake of it. I need help to make the impact and chart the metrics grounded in real work and real life.
Powerful, and sometimes the most impactful, media may not truly change hearts and minds in real time, which makes measurement even more complex. Oftentimes a powerful film begins a conversation that may take decades to mature… but without that first spark, none of the movement would have begun. So real-time, or near-time, measurement of media impact ought to be balanced with the long view.
Looking at a single film without looking at the environment in which it is produced is absurd. What is needed is a different frame altogether—one that is dynamic, allows for many variables in a networked environment, and provides realistic and nuanced evaluation measures.
…we believe ‘measuring impact’ is not a good description of what [Fledgling is] talking about, as it seems to imply a comparison between other film projects/campaigns or against arbitrary benchmarks. When we think about data collection and assessment, we are seeking to understand the success a project has in meeting its stated goals.
Lead image by Marc Wathieu. Licensed under CC BY-NC-ND 2.0.