By Sheba Laser Lux
The past two editions of The Grant Sage focused on local and regional grantors, grant-seeking strategies, and funding opportunities. One of the most significant aspects of the grant-seeking process actually happens after grant funding has been secured: grant reporting is much more than a chance to demonstrate to a funder what was accomplished with their funding; it is the first step in obtaining future funding.
One of the most mystifying aspects of the grant process, exceptional grant reporting is often elusive. Nonprofit organizations understandably struggle with measuring ‘what good was done’ with funding from grantors. A study on grant impact by the University of Michigan notes, “A key challenge in quantification and attribution lies in addressing the thorny issue of causality.”
Impact reporting arose, in part, out of increasing governmental scrutiny and funder pressure to demonstrate grant results to board members. Yet the pressure to provide outcome measurements has misguided nonprofit organizations into focusing on outcomes even when causal links are unclear. Impact reporting, the long-time standard, “reflects more of an obsession with upward accountability to funders than an interest in actually finding ways of improving services and results,” according to the Independent Sector, an online philanthropy-focused publication.
Impact is likely to be affected by multiple factors that may or may not be easily discernible. One can easily count the number of people who were provided with food or shelter, but determining funding outcomes becomes more difficult with complex programs such as those targeting social justice, human rights, or the educational process.
In the past, an organization’s effectiveness was evaluated on the percentage of its budget spent on overhead. If a nonprofit was ‘doing good work’ and spending 20% or less on administrative costs, it was more likely to receive funding. “This destructive way of evaluating nonprofit organizations has been losing favor over the last few years as rating agencies like Charity Navigator have recognized the need for a broader evaluation of nonprofit effectiveness. New measures have started to include outcome and impact elements,” notes another sector publication, Social Velocity.
The Wharton Foundation, based in Santa Barbara, funds educational programs that improve the academic performance of low-income and underserved populations and that can demonstrate impact. Wharton’s Board of Directors, and in particular the foundation’s President, Jean Pettitt, are at the helm of the local charge toward better grant reporting.
“Reports can make or break future funding,” says Pettitt, whose Board declined to re-fund an organization after fourteen consecutive years of funding because “their grant reporting was so inadequate.”
The foundation has rejected a number of grant requests based on reporting of prior-year grants that contained “irrelevant” and “incomprehensible” information. “Platitudes, often reported, are ridiculous! ‘We’ve changed lives’ is not an impact statement!” admonishes Pettitt.
There is no doubt that measuring impact is difficult. Philanthropic sector debates regarding grant impact and reporting are now front and center. The resulting changes in reporting emphasis are pushing nonprofit organizations to think more strategically about their reports and, often in doing so, about their programming.
Concurrently, funders are evaluating their own reporting requirements. One might recall the Santa Barbara Foundation’s old column-based program reporting model: Of the (# of individuals) targeted by our program in the coming year, (subset of above #) will (change) as verified by (measurement). Thankfully, that model is no longer used, and grant reporting questions are now more open-ended.
The Wharton Foundation recently revamped its reporting format as well, hoping to gather more compelling and realistic data. “We understand that the organizations we fund are not going to solve problems that are endemic overnight.”
Pettitt works with fundees to refocus their reporting on “progress”. “We want organizations to distinguish between what they are ultimately striving for versus what they plan to accomplish during a grant year. And we want to know what they’re learning in the process that actually helps move the bar in the right direction. If strategies they try in a given year have virtually no effect, we want to know that, as well, so that we can work together to reallocate future funding.
“We don’t want to read in a report that in one year an organization solved a problem that society has had for ages. Rather, we want to know how far a specific population was moved toward an intended goal and what strategies were used to produce that movement,” describes Pettitt.
In the past, Wharton based its reporting questions on the Common Grant Application. That changed when the foundation’s board discovered that their grant recipients were more “honest about what their funded programs are lacking when they’re asked to focus on program design and progress rather than quantification.
“Many times the positive results of our funding are not necessarily quantifiable. I do recognize that a big part of the problem is the emphasis funders place on outcome funding, which causes nonprofits to report on things that nobody cares about.” An example cited by Pettitt is a grant report she received regarding an after-school science education program. “One of the measurements they reported on was the number of teenage participants who finished the program with the ability to recite ten careers in science. Yes, that was something that could be measured. But who cares!”
The Ford Foundation has also looked at funding outcomes and of course acknowledges that solving societal problems is not a one-year process, although one year is the length of an average grant. “So what we can observe or measure will necessarily be only part of the picture. In many cases, then, our assessments tell us not whether broad social change has finally been achieved, but rather, whether we are on the right path to change.”
The James Irvine Foundation’s recent effort to assist nonprofit agencies in California in improving systems for gathering and assessing data on performance outcomes concluded that, “In the end, a project’s success has less to do with measurement systems and more to do with whether an organization can create a culture that values the process of self-evaluation.”
Report on progress, says Pettitt, not on outcomes. “This should not be difficult if an organization has taken the time to think about what it is working to accomplish, and to develop specific strategies it will utilize in hopes of some day getting there.”