Making Social Investment Decisions

Thursday, 27 March, 2008 - 10:46

This article is an attempt to chronicle the emergence of a new generation of concepts, tools, platforms and organisations designed to measure quality social change work. To give away the end of the story, it turns out that the trick to getting impact measurement right at ‘the organisation’ is to move out of the organisation into the larger ecosystem in which the organisation lives.

When you do this, you can set measurement and reporting standards that carry incentives to collect and share impact measurement data in a way that maximises learning for donors, ultimate beneficiaries, and everyone in between. The explanation of why this is so is not a short story, but I think it is one worth the telling.

Unlike in the business world, our ‘bottom line’ is well nigh impossible to measure with precision. As the articles in this issue show, measuring social impact (such as lasting improvements in opportunity, well-being or environmental sustainability) is outrageously complicated by difficulties associated with time lags, causality and attribution, aggregation and comparison. Just because it is difficult, however, does not mean we should not try to understand performance and impact. But we do need to accept that we are dealing with a high level of complexity, and that in order to come up with feasible ways of measuring, we are going to have to reject a host of inappropriate concepts and tools.

The good news is that much of this clearing out of inappropriate intellectual undergrowth has been done in the last ten years. Business concepts and tools still find their way uncritically into our impact measurement practices, but we are getting better at adapting them for our purposes. Whereas a decade ago there was a great clamour for ‘generally accepted principles’ of impact measurement based on a unified and quantifiable approach, current thinking favours a much friendlier pluralistic model in which qualitative, quantitative, perceptual and empirical data can be assembled into a comprehensible whole that still honours the complexity of social change. We are indeed beginning to find ways of seeing the simplicity on the other side of complexity. This is observable in the debates of relevant communities of practice such as GAN-Net’s Impact Community of Practice, BOND’s quality group, and the Outcome Mapping Learning Community of Practice.

A pluralistic approach

The pluralistic approach begins from the premise that the main purpose of measurement is to advance intended outcomes. It gives us a way to distinguish between two very different kinds of impact. One, commonly referred to as ‘outcomes’, can be measured directly in the shorter term: how many children were immunised under a programme, what changes occurred in community usage of primary health care facilities, or whether the things changed that must change if the most vulnerable are to realise opportunities or exercise real choices.

The other kind of impact has been widely neglected because it is more difficult to measure. It has to do with changes in the system around the problem being addressed. To understand system impact we need a means of assessing how the things we achieve directly are likely to lead to it. Many people now use a theory of change for this purpose. At Keystone, we promote a participatory approach to building a theory of change in which those who are meant to benefit are central to creating the vision of success. We have found that by engaging with constituents in a structured way organisations can avoid the rigidities of mechanistic planning and measurement models in favour of simple, direct feedback from the ordinary people who experience the organisation every day. We call this constituency voice.

In addition to tracking evidence of what we influence directly, we try to understand the quality of an organisation’s relationships with its constituents. Learning – and sharing what we learn – is the categorical imperative of our social change purpose. It defines how social investors differ from commercial investors. Venture capitalists look for ‘winners’, for companies that command growing ‘market share’. A social investor is looking not for market dominators but for organisations that can bring diverse actors together to have a greater impact on a problem – ‘winners’ of a different kind. In sustainable social change, the real winners are the learners. Social change practitioners measure not to prove but to improve.

More and more organisations get this and are developing new impact assessment practices. The December 2007 issue of Alliance Magazine exemplifies a clear trend among donors in this regard. Perhaps the most interesting finding in the impact assessment survey (reported on p36) is that donors and CSOs see this pretty much the same way. Both say that strengthening impact assessment is a growing priority. What the survey also shows, however, is that despite the general recognition of the value of evaluation, and the willingness of grantees to do it, donors are not funding it adequately or making effective use of results, nor are they investing in a larger infrastructure to support it.

Creating a philanthropy marketplace

This brings us to the big hairy unresolved problem – converting the positive steps at the organisational level into generally accessible information for society at large. We do not have ecosystem-level mechanisms that capture, analyse, aggregate, compare and freely publish organisational results for all. To take impact seriously, individual organisations must not only measure better and smarter, but also create a public marketplace of information for effective philanthropic decision-making.

Let me illustrate with a story. In 1987, after I had spent four years funding human rights organisations in South Africa and Namibia for the Ford Foundation, I pretty much knew who was generating the influential ideas, who delivered effectively, who was getting along with whom (and who wasn’t), and who was smug, or lazy, or even corrupt. A few colleagues in the sector shared this knowledge, but they amounted to perhaps half a dozen people in the world of international philanthropy. Twenty years later, the situation is the same. My successors at the Ford Foundation are still almost uniquely well informed about the human rights field in South Africa.

I believe this experience can be generalised. The data in possession of a handful of individuals and organisations is nowhere in the public domain. There is no database, no website, no directory, nothing that interested third parties can rely on to identify the best performers. This is essentially the situation with respect to our most important social problems everywhere.

Until we remedy this, the entire field of philanthropy cannot move from the reigning model of one-to-one relationships between grantseekers and grantmakers to a one-to-many model in which a marketplace for social change decides how to invest its funds on the basis of publicly available information about performance and impact.

The good news is that two things necessary to create an effective information marketplace are reaching maturity: first, there is a growing consensus about the types of information that we need; second, people are beginning to demonstrate how this information can be generated. The bad news is that very few foundations invest in further developing and scaling these emerging solutions.

A consensus view of the information basis for social investment

In his introductory essay to the Hewlett Foundation’s most recent annual report, President Paul Brest presents what may become the basis for a long-needed consensus about the information necessary for solving important social problems.[1]

  • Basic organisational and financial information of the type required by legal and tax authorities in most countries.
  • A description of the organisation’s goals and strategies for achieving them. To take an example with a very simple strategy, an organisation dedicated to eliminating polio in a developing country would describe the scope of the problem and how it plans to tackle it.
  • Indicators to track the organisation’s progress towards its goals and a description of what progress has been achieved. (How many vaccinations have actually been administered?)
  • Evidence of actual impact, where available, and lessons learned. (In the long run, did polio decline in the country? In the short run, what obstacles were encountered, and how were they surmounted?)
  • Reviews of the organisation by its beneficiaries and other constituents and interested parties. (How do families, communities, governments, and others view the vaccination programme?)

The pivotal role of constituency feedback

In my own work, I pay particular attention to the final category – constituency feedback. We have found that there are user-friendly ways for organisations to embed constituency feedback into their planning, monitoring, learning and reporting. When all of an organisation’s constituents, especially those most affected, participate meaningfully in defining success, planning activities and evaluating results, their views on the organisation and its purported results can be elicited to validate the integrity and enrich the quality of data.

When you cultivate the views of all primary stakeholders across the other four types of information, you reinforce learning-based relationships. By publishing stakeholders’ views, everyone can see the kind of picture that I (and too few others) had of the human rights sector in South Africa in the 1980s.

When great leaders from other walks of life apply their minds to the world of philanthropy, they are often able to articulate the underlying structural problem of our profession. In his commencement address at Harvard University in June 2007, Bill Gates recalled how he and his wife Melinda asked how it could be that millions of children died each year for want of basic medical care. ‘The answer is simple, and harsh. The market did not reward saving the lives of these children, and governments did not subsidise it. So the children died because their mothers and their fathers had no power in the market and no voice in the system.’ He could have added that they also have no real voice in the organisations whose purpose is to help them.

Nelson Mandela put it succinctly when he said, ‘I have found that those who enjoy the most power and influence – even with the best of intentions – tend to over-rely on their own counsel. We see in most anti-poverty programmes, for example, a lack of accountability by donors and NGOs to the people who are meant to benefit from them.’

Our work with organisations around the world over the past three years suggests that the quality of an organisation’s relationships with its beneficiaries and other constituents is highly predictive of its effectiveness and impact. We need empirical validation for this hypothesis, but the reason may be similar to the reason that successful companies listen to their customers.

Seeds of an enriched informational basis

We need not look further than the December 2007 issue of Alliance Magazine and recent contributions to Alliance Online to see the evidence that we are approaching a step change in measurement and evaluation. In her interview in the November 2007 issue of Alliance Online, Gates Foundation head of Impact Planning and Improvement Fay Twersky highlights the leading trend when she says that her aim ‘… is to blend planning with measurement, learning and improvement…in partnership with … grantees to set up systems of measurement that provide us with answers to both our short-term and our long-term questions. It’s very important for us to have systems of measurement that we have evolved collaboratively with our partners and also to use that data to inform our decision-making and course direction.’

Our profession is getting serious and smarter about what we measure, and we are beginning to understand that we need to do this by being inclusive in who does the measuring. In this way we enable the diverse parties that affect and are affected by a problem to articulate and test their assumptions about each other. When we link constituency voice to public reporting, we enable society to learn how to solve important problems. We enable an ecosystem-level point of view to emerge for all to see.

The new tools

Searchable databases 
There are a growing number of searchable internet databases that aggregate basic information about CSOs. GuideStar International is now making the model developed by GuideStar in the US available across the world. It aims to provide a free website for every CSO that would like it. Directly and indirectly it is stimulating efforts to create open organisation directories in a number of countries.

Online markets
Other online actors use more performance-related information. GiveIndia, an online giving marketplace in India, is beginning to take performance reporting seriously and the two dozen or so other online giving marketplaces around the world are likely to follow suit. Keystone has just published a study of how online giving markets are utilising performance data of the NGOs listed on their markets.

Taxonomies of success indicators
There are a number of efforts to create online taxonomies of success indicators by answering the question, ‘What are the commonly used indicators to measure success in this field?’ As more organisations adopt common indicators, it becomes possible to aggregate data from the bottom up and begin to benchmark performance across organisations. The Annie E Casey Foundation and Lisbeth Schorr, one of the leading figures in the field of evaluation, pioneered this idea in the US. Others are now developing it, such as the Center for What Works in partnership with the Urban Institute, and the Success Measures Data System.

Certification and codes of conduct
Most countries have codes of conduct and basic certification of management standards for NGOs. The most promising of these generate a virtuous spiral of self-regulation and rely importantly on peer review and learning (see, for example, the work of the Philippine Council for NGO Certification).

Independent rating agencies
These continue to spring up, but have two fundamental flaws that so far have resulted in their doing more harm than good. Because of the absence of publicly available information on impact, the raters tend to rely on subjective proxies that bear no actual relationship to effectiveness. The consequence, regrettably, is to incentivise the wrong behaviour. One recent effort in the UK, Intelligent Giving, seeks to get around this problem by rating NGOs mainly on the extent to which they are transparent about their impact.

Capturing constituency feedback
Most promising are the systematic efforts to collect and learn from constituency feedback. Notable among them is the Center for Effective Philanthropy’s path-breaking Grantee Perception Report. Keystone is now working with the Center, the Alliance for Children and Families, and a consortium of US foundations to extend this proven methodology to feedback from the constituents of grantees.

While soliciting constituent feedback through rigorous survey methodologies is often an appropriate way to get needed data, another powerful expression of constituency feedback – particularly where internet usage is highest – is for those who know about an organisation to publish their reviews on a transparent platform. GreatNonprofits is now pioneering this approach in the US, while One Economy is experimenting with constituency feedback on its Beehive website. Dalberg Global Advisors has published a set of ratings of the partnership qualities of 85 CSOs and UN agencies on the basis of reviews from 20,000 businesses.[2]

Providers of tools and support
Most CSOs want tools and training support as they embark on more rigorous impact assessment, and there are growing numbers of high-quality providers, including several who have described their work in this issue. The internet offers an enormous opportunity to integrate knowledge management systems – encompassing the first four types of information listed by Paul Brest – with constituency feedback (through surveys or self-reporting) and public reporting, fundamentally transforming the quality and utility of information about impact and performance.

The impetus here will come from the social responsibility efforts of innovative companies as well as the open source community. There is a technical problem to solve: how to find and aggregate information on the internet. Corporate reporting is developing an internet standard taxonomy, and it may well be that the groups involved can extend it to support an application for social outcome indicators.

Towards a performance measurement commons

Bill and Melinda Gates exemplify a new generation of philanthropists that is taking measurement seriously, and increasingly thoughtfully. But the big question for the field is what will come of the concurrent blooming of ‘a hundred flowers’ of communities of practice, tools, curricula, online platforms, standards, certification mechanisms, ratings, indexes, impact measure taxonomies, software, and stakeholder reviews? We can see an increasing sophistication about what it is important to measure, and by whom. But we do not yet have a clear strategy to develop an effective information marketplace for giving.

The first step might be to agree a framework for bringing together the most important elements of the work of the different innovators in this area. This should seek to realise a higher-level synthesis without constraining the current diversity and creativity. It would identify how different sources of information generation could be linked up and aggregated. It would identify the organisational and business models that could take the most promising innovations to scale, while steering those pursuing less helpful paths towards more effective strategies.

It seems to me that this fundamental structural problem in the field of philanthropy and social change is now ripe for solution. Some hard choices need to be made. Funders need to get behind some models. The innovators in this space need to work together and be less territorial than they often are. The Center for Effective Philanthropy’s unselfish support of Keystone’s efforts to run with the Grantee Perception Report model in Africa and Asia is a great example.

As we move ahead with this exercise in pruning and integrating, it bears remembering that while a bit of top-down is now called for, it is ultimately bottom-up that we want to support. During his time as the president of the World Bank, Jim Wolfensohn convened the big public aid donors three times to try to get them to agree to a common set of reporting requirements. He did not succeed. The trick may be to invest in the creation of infrastructure such as open communities of practice and performance measurement commons that accelerate bottom-up processes of joint venturing, merging, imitating, and above all learning.

1 I have excerpted, with small modification, Paul Brest’s typology, which may be found in the original in his essay, Creating an Online Information Marketplace for Giving.

2 The Business Guide to Partnering with NGOs and the United Nations. This first edition includes business reviews of CSOs; future editions will include CSO reviews of companies.

(This article was originally published in the December 2007 edition of Alliance Magazine as part of a special feature on “Measuring impact - who counts?”)
