Evaluating community programs to maximize impact and efficiency
Around the world, thousands of organizations run programs aimed at helping people: development agencies try to lift communities out of poverty; rehabilitation programs support addicts in reclaiming control of their lives; youth programs want at-risk kids to get the best start in life.
Whatever the mission, community programs are trying to make a difference.
Governments, foundations, and people like you and me invest money in these programs because we want to see them succeed. But do they? Are we sure they work?
For years, across all areas of social work, there has been growing pressure to evaluate community programs and ensure they are effective. Randomized controlled trials help us see whether positive change is occurring over time by comparing people who participate in programs with those who do not.
Such research tells us if people taking part in the program are, as a result, more able to feed their children, less likely to relapse into drug abuse, or more likely to stay in school.
Evaluations that tell us whether a program works are essential. They give donors confidence and ensure people receive help that actually helps. "Black box" evaluations measure variables of interest—like malnutrition rates, drug use, or school attendance—before and after an intervention, to see if it makes any difference. And many of them do.
A new question is therefore arising: how exactly do they work?
Researchers and practitioners now want to measure aspects within an intervention to understand how it works, and to see whether it works the way it should.
With some colleagues in New Zealand, and in partnership with the Graeme Dingle Foundation, we used data from an evaluation of Project K, a youth development program, to ask just this kind of question.
Project K consists of three components over the course of 14 months—an outdoor adventure experience where young people learn skills, teamwork, and leadership; a community service project to address a need within their local community; and a mentoring partnership with a supportive adult. A previous controlled trial evaluation of Project K showed that participants significantly improve in social resources (connectedness with others, sense of belonging in their community, and their social skills and self-beliefs) because of the program. Now we wanted to know how the program achieved such social gains.
Our analysis showed that adolescents who had positive experiences in the outdoor adventure and the mentoring components of Project K showed the greatest progress. Their experiences in the community service aspect did not contribute to their social development.
Such results highlight the value of evaluating not just whether community programs work but how they work. Project K staff gained valuable knowledge from the research that allowed them to change the program to ensure it was as effective as possible for the young people they serve, and as efficient as possible with the resources their funders entrust to them.
This is the value of effective research: knowledge gleaned can help you do more good for more people with less money.
- Cassandra Chapman
* * *
Chapman, C.M., Deane, K.L., Harré, N., Courtney, M.G.R., & Moore, J. (2017). Engagement and mentor support as drivers of social development in the Project K youth development program. Journal of Youth and Adolescence, 1-12. doi:10.1007/s10964-017-0640-5.
Read the full research report online: http://rdcu.be/oWIW
All researchers in the Social Change Lab contribute to the "Do Good" blog. Click the author's name at the bottom of any post to learn more about their research or get in touch.