Reporting impact should be EASY – why do so many struggle with it?

I think the work of the United States Agency for International Development (USAID) is some of the most important work my country, the USA, does.

I think foreign aid by the USA, or any other country, is vital to world economic stability and security. I believe foreign aid prevents wars and reduces human migration fueled by violence and poverty. I also believe foreign aid is just the right thing to do, to help people and our world.

Because I think USAID is so important, it’s difficult to see it stumble so badly, especially in a country I dearly love, Afghanistan. And that seems to be the case with Promote, an Afghanistan-based initiative that is USAID’s largest women’s empowerment program in the agency’s entire history. The Promote web site says:

The aim is to advance opportunities for Afghan women to become political, private sector, and civil society leaders and to build upon existing and previous programs for women and girls.

Three years after it launched, a USA government watchdog agency has reviewed the program and cannot find any concrete data showing that it has helped any women become political, private sector or civil society leaders.

The Special Inspector General for Afghan Reconstruction (SIGAR) was established by Congress to monitor spending by the USA in Afghanistan. In its report released last week, SIGAR cites a letter from USAID saying that the Promote program had “directly benefited 50,000 Afghan women with the training and support they need to engage in advocacy for women’s issues, enter the work force and start their own businesses.” The letter added that Promote had helped women “raise their voices and contribute to the peace and prosperity of their country.”

But the SIGAR report notes that these USAID claims for the program are not backed up by any measurable data, such as actual jobs, internships or additional trainings made possible because of Promote’s work.

The SIGAR report notes that:

  • The Promote program changed its performance indicators substantially in its first two years, greatly reducing the number of people it committed to serve.
  • Because it did not complete a baseline study early in its implementation, Promote lacks a starting point from which to monitor and evaluate the program’s progress over its first 2 years and to measure its overall impact in Afghanistan. In other words, evaluation was not baked in right from the beginning.
  • The Promote program delivers much of its programming through contractors, and SIGAR found that USAID/Afghanistan’s records on the contractors’ required deliverables were incomplete and inaccurate because management did not give contractors enough guidance on keeping records and tracking important information about deliverables in a consistent manner. Such records are absolutely fundamental to being able to evaluate impact, and the report notes that complete and accurate records are also critical to documenting and maintaining institutional knowledge in a mission that experiences high staff turnover.
  • The report also notes that the program did not gather feedback from contractors on the potential negative impacts of the proposed programming.

In some cases, attendance at a single gender empowerment class organized by Promote was counted as a woman benefiting from the program. One target was to help 20 women find leadership positions in the Civil Service, but none have so far, according to the SIGAR report. One of the few concrete results cited in a study of the Promote project was the promotion of 55 women to better jobs, but the SIGAR report says it is unclear whether the Promote program could be credited for those promotions.

Two people associated with the program, whom I have encountered on social media, have been very upset about the SIGAR report and the article in The New York Times about it. They say the data IS there – but neither could give me any links to it, say where the data is, or explain how it was collected. One said that the kind of data SIGAR is asking for is impossible to gather because of two things outside the program’s control: the security situation in Afghanistan and the conservative nature of the country. To which I say: NONSENSE. Neither of those factors is a reason not to have the data necessary to evaluate this program – if those issues didn’t prevent the program’s activities, they would not prevent gathering data about those activities.

Program results are not meetings, not trainings, not events, and not the number of people who participated in any of them. Those are activities, and mere activities can rarely be reported as program results. What happened because of the meeting or training or event? What changed? What awareness or skill was gained? What happened to the participant at the meeting, or because of the meeting, that met the program’s goals?

Here is just how easy it can be to evaluate a program: create a survey to be delivered before or at the start of a meeting, training or event for attendees. You can get answers to that survey as one big group exercise, as a series of small group exercises, or in one-on-one interviews if it’s a low-literacy group or if you don’t believe the target audience will fill out a paper survey. Ask about their perceptions of various issues and challenges they are facing in relation to the issues you want to address. Ask their expectations of your meeting, training or event. Then conduct a similar survey weeks or months later with the same group, and compare the results. TA DA: YOU HAVE DATA FOR EVALUATION OF YOUR RESULTS. This is a very simplistic approach and just scratches the surface of all that the Promote program should have been gathering, but even just this would have been something. It would have given some indication as to whether or not the program was working.
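
To make that concrete, here is a minimal sketch, in Python, of the pre/post comparison described above. The file names and the confidence_score column are hypothetical stand-ins, not anything from the Promote program; the point is only how little it takes to turn two survey rounds into evaluation data.

```python
import csv
from statistics import mean

def load_scores(path):
    """Map participant_id -> numeric score from one survey round."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row["participant_id"]: float(row["confidence_score"])
                for row in csv.DictReader(f)}

# Hypothetical files: one survey before the training, one weeks later.
baseline = load_scores("baseline.csv")
followup = load_scores("followup.csv")

# Only compare participants who answered both rounds.
paired = [(pid, baseline[pid], followup[pid])
          for pid in baseline if pid in followup]

changes = [after - before for _, before, after in paired]
print(f"Participants surveyed in both rounds: {len(paired)}")
print(f"Average change in self-rated confidence: {mean(changes):+.2f}")
print(f"Participants reporting improvement: {sum(c > 0 for c in changes)}")
```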

Now, let’s be clear: this SIGAR report does NOT say the Promote program isn’t doing anything and should be ended. Rather, as the report itself says:

after 3 years and $89.7 million spent, USAID/Afghanistan has not fully assessed the extent to which Promote is meeting its overarching goal of improving the status of more than 75,000 young women in Afghanistan’s public, private, and civil society sectors. 

And then it makes recommendations to the USAID Administrator “to ensure that Promote will meet its goal in light of the program’s extensive changes and its mixed performance to date.” Those recommendations are:

1. Conduct an overall assessment of Promote and use the results to adjust the program and measure future program performance.

2. Provide written guidance and training to contracting officer’s representatives on maintaining records in a consistent, accurate manner.

3. Conduct a new sustainability analysis for the program.

Here are some tips regarding recommendation number 2:

  • give the representatives examples of what data should look like (see the sketch after this list)
  • explain the importance of reporting data that shows an activity has NOT worked in the way that was hoped for, and how reporting this data will not reflect poorly on the representative but, rather, show that the representative is being detailed, realistic and transparent, all key qualities for a program to actually work
  • engage the representatives in role-playing around gathering data. Have staff members do simple skits showing various data-gathering scenarios, common challenges that come up when interviewing someone, and how to address them. Then have representatives engage in exercises where they try these techniques, with staff playing the roles of government officials, NGO representatives, community leaders hostile to the program, women participating in the program, etc.
  • emphasize over and over that evaluation isn’t a separate activity from program delivery, done at the end of a project, and provide plenty of examples and demonstrations of what evaluation activities “baked in” to program delivery really look like.
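
To make the first tip concrete, here is a minimal sketch, in Python, of what a consistent deliverable record and a basic completeness check could look like. The field names are entirely hypothetical – this is not USAID’s actual schema – but an example like this is what “what data should look like” means in practice.

```python
# Field names here are illustrative, not USAID's actual schema.
REQUIRED_FIELDS = (
    "deliverable_id",   # unique ID from the contract
    "contractor",
    "due_date",         # ISO format, e.g. "2018-03-31"
    "date_received",    # blank until the deliverable arrives
    "status",           # received / late / missing / waived
    "reviewer",         # who verified it, for institutional memory
    "notes",            # include what did NOT work as hoped
)

def validate(record):
    """Return a list of problems, so gaps are caught at entry time
    rather than years later during an audit."""
    problems = [f"missing field: {name}"
                for name in REQUIRED_FIELDS if name not in record]
    if record.get("status") == "received" and not record.get("date_received"):
        problems.append("status is 'received' but date_received is blank")
    return problems
```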

With a colleague in Afghanistan, I developed this comprehensive list of questions to answer in preparation for reporting to donors, the media and the general public. It was meant to help the local staff at the government ministry where we worked know what information donors and UN agencies regularly asked for, and what we anticipated they might start asking for; what subjects the media regularly asked about or reported on, and what we anticipated they might start asking about or reporting on; and what information could be used for evaluation purposes later. It was part of our many efforts to build public sector staff communications capacities in countries where I’ve served. We needed a way to rapidly bring staff up to speed on reporting – on EVALUATION – needs, and I think we did that with these kinds of efforts. I hope Promote will develop something similar for those delivering its services, and make sure the lists are understood.

Also see:

Measuring the Impact of Volunteers: book announcement

Want to make me cranky? Suggest that the best way to measure volunteer engagement is to count how many volunteers have been involved in a set period, how many hours they’ve given, and a monetary value for those hours. Such thinking manifests itself in statements like this, taken from a nonprofit in Oregon:

Volunteers play a huge role in everything we do. In 2010, 870 volunteers contributed 10,824 hours of service, the equivalent of 5.5 additional full-time employees!

Yes, that’s right: this nonprofit is proud to say that volunteer engagement allowed this organization to keep 5.5 people from being employed!

Another cringe-worthy statement about the value of volunteers – yes, someone really said this, although I’ve edited a few words to hide their identity:

[[Organization-name-redacted]] volunteers in [[name-of-city redacted]] put in $700,000 worth of free man hours last year… It means each of its 7,000 volunteers here contributed about $100 – the amount their time would have been worth had they been paid.

I have a web page talking about the dire consequences of this kind of thinking, as well as a range of blogs, listed at the end of this one. That same web page talks about much better ways to talk about the value of volunteers – but it really takes more than a web page to explain how an organization can measure the true value of volunteers.

That’s why I was very happy to get an alert from Energize, Inc. about a new book, Measuring the Impact of Volunteers: A Balanced and Strategic Approach, by Christine Burych, Alison Caird, Joanne Fine Schwebel, Michael Fliess and Heather Hardie. This book is an in-depth planning tool, evaluation tool and reporting tool. How refreshing to see volunteer value talked about in depth – not just as an add-on to yet another book on volunteer management.

But the book’s importance goes even further: it will not only be helpful to the person responsible for volunteer engagement at an organization; it will also push senior management to look at volunteer engagement as much, much more than “free labor” (which it isn’t, of course). Marketing managers need to read this book. The Executive Director needs to read this book. Program managers need to read this book. The book is yet another justification for thinking of the person responsible for the volunteer engagement program at any agency as a volunteerism specialist – a person who needs ongoing training and support (including MONEY) to do her (or his) job. This book shows why the position – whether it’s called volunteer manager, community engagement director, coordinator of volunteers, whatever – is essential, not just nice, and why that person needs to be at the senior management table.

I really hope this book will also push the Independent Sector, the United Nations, other organizations and other consultants to, at last, abandon their push of a dollar value as the best measurement of volunteer engagement.

For more on the subject of the value of volunteer or community engagement, here are my blogs on the subject (yeah, it’s a big deal with me):

Judging volunteers by their # of hours? No thanks.

I would never judge the quality of an employee by how many hours he or she worked. When I see someone regularly working overtime, week after week, here are my thoughts:

  • That person’s job might be too much for one person; that job might need to be broken up into two positions.
  • That person might be doing things he or she shouldn’t be doing, and ignoring what should be priorities. I wonder what isn’t getting done?
  • That person may not be qualified for this position.
  • That person may have personal problems that aren’t allowing him or her to get this job done.

So, if I wouldn’t think the number of hours worked by an employee is a good indicator of their job performance, why would I judge a volunteer by the number of hours he or she contributes?

When judging volunteer performance, I look at:

  • What did he or she accomplish as a volunteer for this organization?
  • How did this person’s volunteering – specifically this person’s time and effort – have a positive effect?
  • How did volunteering have a positive effect on him or her?

Which is actually how I judge paid employees as well…

I gather that data by:

  • surveying volunteers, employees, clients and the public, through online and printed surveys and through formal and informal interviews
  • reading through feedback that comes through emails, memos and online discussion groups
  • listening to and writing down comments I hear
  • observing their work for myself

What about you? Is your organization still giving out volunteer recognition based on number of hours provided to an organization? Is the person who donated 100 hours to your organization last year really more valuable than the person who donated 20?