A number of issues came to mind as I read David Bonbright’s article, “Making Social Investment Decisions - What do we need to know?”, published on the SANGONeT Portal last week. The issue of measurement has become a new industry in the non-profit sector. While measuring output, outcomes and impact is critical for every organisation, as well as for those who support them, it has become an over-complicated process, and the assumption that certain measurements are “good” for social change needs to be challenged at various levels.
His comment that we will see a “concurrent blooming of ‘a hundred flowers’ of communities of practice, tools, curricula, online platforms, standards, certification mechanisms, ratings indexes, impact measure taxonomies, software and stakeholder reviews” is enough to put anyone off working in the development sector.
Before I go further, I would ask the question - why do we work in this sector? Is it just a job, with outcomes and impacts likened to a toothpaste factory where we can measure tubes produced, profit made and the impact on dental caries, or do we do it because we have a passion for the cause and a passion to change the world? If it is the latter (and it probably is for many of us), then the hundred flowers is a bouquet that contains a myriad of stinging insects, and is enough to drive us out into the wilderness.
My response could be as long as David’s article, but I will deal with specific issues and try to keep this short. Firstly, I have no problem with measurement if it is done for the right reasons. These reasons will include that it comes from within the organisation, that it is done to assess the effectiveness of the organisation and that it is used as an organisational learning tool. The problem is, however, who is measuring, why are they measuring, what do they want to achieve by measuring and finally, can they effectively measure what they claim to want to measure?
David talks about creating a philanthropy market by providing data in a public marketplace of information for effective philanthropic decision-making. This suggestion goes against good development practice: the one-to-one relationships that he implies are inadequate, or perhaps old-fashioned, are actually the basis for trust between developmental partners, i.e. the grantmaker and the grantee. Shopping online for the organisations that produce the greatest impact (and whose values determine that?) does not build relationships. If there are strong relationships in the partnership, then the demand for measurement can be tempered.
The current obsession with independent, external measurement implies a lack of trust. It is ironic that David continues later in his paper to refer to how important the relationships are between development practitioners (i.e. the development organisations) and their beneficiaries, but doesn’t recognise that the creation of strong relationships between the organisation and its donor/supporter is just as critical.
We don’t shop for our beneficiaries. Why should grantmakers shop for the organisations that they want to partner with? It does offer a shortcut, but, as we know, mail-order brides are not always what they appear to be on paper. Unfortunately, there is no shortcut. Getting to know organisations well enough to feel confident that funding will be well spent, with appropriate levels of accountability, is critical to the success of philanthropy.
I would also query what he meant by “best performers”. This is values-based and involves a judgemental take on what is best for a context with which a donor may not be fully au fait. The donor may be interested in output and impact, but more important than these measurables (some not easily measurable) is the process involved. The list provided by Hewlett Foundation President Paul Brest in that foundation’s annual report did not include a key element of good development practice - process. How we do things is as important as what we do. Are we doing the right things? The list takes no note of timeframes - impact in the social arena can take years. For example, the green movement began in the 1960s and we are only seeing its impact now. Would those organisations have been bounced out of the philanthropy market at the time? The online philanthropy market will seek immediate results and will erode the patience required to effect social change.
David also talks about the advantages of searchable databases. Yes, these are probably a useful start for a new donor going into an area into which they have never ventured before. However, there seems to be an absolute lack of sensitivity to our current global political context - do people outside the USA want GuideStar, for example? In whose interest is it to map global civil society? This may be of some use to donors in a simplistic way - they can check tax returns, diversity statistics and budgets - but there are many other agencies that could use this information in ways civil society organisations would not be happy about.
The online markets will use standardised performance data - set by whom? Standardisation is essentially reductionist - everything is based on the lowest common denominator. How do you measure the passion for the cause, the thinking, strategising and considering that goes into every move in the social sector? In addition, this standardisation is usually developed from the “reality of the measurer and not the measured” (James Taylor and Sue Soal, CDRA). Benchmarking performance across organisations might be a clever tool, but how do the organisations concerned feel? Measurement should not be imposed on organisations, or it becomes a threatening event rather than a learning opportunity. If imposed, you will find that people are dishonest about their achievements and the data is then flawed. Organisations that voluntarily seek measurement should be encouraged, but the tone of the article implies that it is time for some top-down muscle in this regard.
We then look at the tools, the training support, the rigorous impact assessments, the service providers, the knowledge management systems, the constituency feedback, the surveys, etc. Here is the industry! How much is being invested in all this? If donors do their sums - what is the impact of these evaluations, tools, etc, versus the money put into them? How much is invested? How much is learned? How much change results? Are we spending a fortune measuring something that we all know anyway and could write up in a simple report? Let’s not get swept along by the tide of a new industry without asking these questions.
The brilliant paper produced by James Taylor and Sue Soal of the CDRA gives a wonderful critique of “Measurement in Developmental Practice”. They point out that we all measure - it is intrinsic to our survival and daily decision-making, and measurement promotes accountability in development practice. However, it is critical not to allow the process of measurement to undermine our purpose.
They also point out that measurement can become an end in itself, and that it begins to stifle creativity, adaptability and spontaneity. These are the things that enable an organisation to succeed - the ability to turn around quickly in a fast-changing society. Measurement creates bureaucracy, and bureaucracy can be a threat to productivity. Taylor and Soal point out that different organisations, at different stages of development, in different contexts and cultures, will require different measurements. How can they be compared in the philanthropy marketplace? Imposed systems of measurement are controlling and run counter to the instincts of a developmental practitioner.
So what am I calling for? Measurement, yes! It is a critical part of development practice but, as Taylor and Soal say, it needs to be put in perspective. Evaluations are an exciting process. Finding out how beneficiaries and other stakeholders, including donors and other organisations in the field, view our progress and their experience of working with us is stimulating. This includes process as well as output and impact. Even more exciting is exploring the long-term impact - have we sparked any systemic change? Have we contributed to the existing discourse, or changed the discourse? These questions and the lessons we learn are critically important to our success; they are part of our strategic thinking and planning, and they are motivating and thought-provoking.
Most important, however, is that we define the questions we want to ask in a self-critical way and we own the process. An independent evaluator will take instructions from us and we will pay him/her. Each evaluation is unique to its own programme, context and needs. Standardisation for us is anathema. We will continue with our one-on-one relationships with our donors, creating trust, being accountable and being confident that we can do what we set out to do.
- Written by Shelagh Gastrow, Executive Director of Inyathelo - The South African Institute for Advancement