Among other things this week, I’ve been ruminating on the call to action in this post by 3ie’s Executive Director, Howard White. The International Initiative for Impact Evaluation’s website says that “3ie funds impact evaluations and systematic reviews that generate high quality evidence on what works in development and why. Evidence on development effectiveness can inform policy and improve the lives of poor people.”
White says (or said, rather, several months ago):
Researchers and funders need to address the moral issue of how those being researched will benefit from the research. Drawing on participatory methods and sharing research findings with communities can make research empowering rather than extractive.
This is just the tip of the iceberg of a rich and interesting discussion regarding evaluation theory and methods, one focused on advancing social justice and empowerment for those involved in the programs we evaluate. I could babble on about what I’ve been learning about this since the keynote Donna Mertens (one among many thought leaders in this work) graced us with at the 2011 OPEN conference, but instead I’ll make a connection to my current work in higher ed assessment.
Students should benefit as directly from learning assessment work as is feasible. This usually means embedding assessment work in courses and programs, so that feedback on student work in context (which, done well, helps students reflect and deepen their learning) serves a secondary purpose of capturing data that contributes to program assessment. Sounds easy, but it requires careful planning at the very least. I think it also means sharing results of program assessment with students more broadly, something we’re not always great about doing in higher ed. The student survey video I produced in cooperation with the Student Leadership Council at Marylhurst is one attempt to direct ‘reporting’ of results at the ‘subjects’ of a study, and better yet (I hope) to engage those ‘subjects’ in further conversation about those results.
How does this connect to working to make evaluation results more actionable? I think if we keep the subjects of our work in mind, treat them as stakeholders on par with program or organization administrators, funders, and the like, and work to share results with the clients, customers, or students supported by the programs we’re evaluating, our results will naturally be more actionable. After all, who better to empower to hold programs accountable in responding to results than those clients, customers, and students themselves?