I’ve been meaning to write about “closing the loop” for a while, since it’s one of my favorite concepts from higher education assessment. And I believe it could easily be translated to program evaluation as well. Closing the loop usually refers to one or both of these (interrelated) steps in the assessment cycle:
- the use of assessment findings to make changes (usually improvements) to courses, programs, policies, practices (including assessment practices or tools themselves), and so on; and
- the connection between the end of one assessment cycle and the beginning of the next: how the next round is built upon the last round (planning → gathering data → analyzing data → reporting findings → using data).

For more visualizations of this cycle, which should look familiar to program evaluators, see the results of my Google search.
Both of these (interrelated) steps are easily left out of the assessment cycle in practice, just as they can be in program evaluation. Michael Quinn Patton, in speaking about Utilization-Focused Evaluation, stresses the importance of evaluators spending time with stakeholders after a report is complete to help them put the findings to use. Likewise, his Developmental Evaluation calls for building intentional feedback mechanisms into the evaluation process in an ongoing way, so that the evaluation is constantly informing the program, and the program the evaluation.
Because “closing the loop” was new to me when I began working in higher education, I thought it might be of interest to others to hear what this can look like:
Department Q regularly reviews senior portfolios to evaluate whether those students have successfully demonstrated a selection of program outcomes of interest (i.e., do students’ work samples display the skills and knowledge they are expected to gain by the end of the program?). To do so, faculty convene and discuss what they are seeing as they review the portfolios. This allows for “norming”: ensuring that the evaluation tool being used is interpreted consistently, that challenges in the process and tools are surfaced, and that space is provided for rich discussion about instructional approaches and program intentions.
As a result, Department Q has a sense of the strengths and weaknesses of their students in a given cohort (e.g., many exceeded expectations on outcome X but struggled with outcome Y) and of the strengths and weaknesses of their assessment process (e.g., the sample of work being reviewed isn’t actually the best way to see whether students met outcome Z, or the evaluation tool’s definitions of what ‘met’ and ‘approaching’ the outcome look like seem to overlap and need to be revised). The faculty then work to improve their courses and instruction to address potential gaps, and they adjust their assessment tool and plan for the next year so that they can more accurately gauge students’ success. AND they plan for next year’s assessment work given what they’ve learned through this year’s efforts (e.g., next year they want to focus on assessing outcome Y because they’re going to attend to it differently and want to see whether that paid off for students in a demonstrable way).
Is this ‘example’ helpful? Do you see the connections too, or am I getting loopy? What do you think: is it helpful to frame evaluation use as closing the loop?
See more ponderings about Patton’s types of use here: Is all data potentially actionable? And more on assessment and evaluation in What can program evaluation learn from deliberative assessment? (a thinking ‘out loud’ post).
Reminds me a little of an action research spiral – reflect, plan, act, observe…. Yes, the example is quite helpful. And anytime you write about Patton, you have my attention. I’m a big fan of his work.
Hi Kim, Thanks for sharing this terminology and your example. I’m also a fan of using results throughout the process, not just waiting until the end (or not using them at all!).
Thanks Ann and Sheila — Yay for using results! And yes, totally reminiscent of the action research spiral, and your posts about evaluation cycles, Ann :-).
My supervisor asked the other day if developmental evaluation is at all connected to Kolb’s learning cycle… which seems also related here… are either of you more familiar with that cycle than I am?