Assessment and Improvement Science

(NOTE: This blog post was also posted on http://www.cas.edu/blog_home.asp?display=73)

Recently, I attended the Assessment Institute in Indianapolis. During a discussion with a friend, she introduced me to the concept of improvement science. I had no idea there was such a field of study, so I immediately searched Google to learn more. After reading a few online articles, I was intrigued enough to purchase a copy of New Directions for Evaluation (2017), number 153, for my flight, as the issue was dedicated to improvement science.

To those implementing assessment, the concept is familiar. According to Moen, Nolan, and Provost (as cited in Christie, Lemire, & Inkelas, 2017, p. 11), “improvement science is an approach to increasing knowledge that leads to an improvement of a product, process, or system.” That certainly sounds like the continuous improvement purpose of assessment. What is a little different is that improvement science explicitly focuses on systems thinking. Because of this systems approach, a central principle of improvement science is change management, which is critical for improvement (Christie, Lemire, & Inkelas, 2017).

The journal issue goes into detail about the components of improvement science, an operational model, and case studies that illustrate what the authors are describing. But one discussion point jumped out at me: a cycle of testing and learning is foundational to improvement science (Lemire, Christie, & Inkelas, 2017). This cycle includes four steps: Plan, Do, Study, Act (PDSA).

[Figure: The Plan, Do, Study, Act (PDSA) cycle]

In this cycle, improvement occurs in small steps so that practitioners can see how effective implementation is and better understand concomitant issues. The PDSA cycle is critical to change management because many issues impact the implementation of an improvement, including user buy-in, resources, and sometimes even politics.

The PDSA cycle can easily be applied to assessment in higher education. Once the assessment is complete and recommendations for change are made, those changes should be implemented in small steps, starting with planning out the execution. After the planning takes place, a small-scale version of the improvement is implemented. Third, that small-scale improvement is assessed. With this information, the improvement is scaled up.

Here is a basic example of how this might look in assessment practice. The orientation office at a small college completed a CAS self-study and learned that its sexual assault prevention program did not achieve the intended learning outcomes. A recommendation in the self-study report is to contract with a professional organization that has developed a program called “Consent4U.” However, the program costs almost half of the entire orientation budget. While the benefits could be great, the orientation director wants to test the program before making the significant financial investment, so the director decided to pilot the program with one of the residence hall learning communities.

To understand the change in learning, the director developed a pre- and post-test regarding sexual assault. In the first step of the cycle, “plan,” the director collaborated with the residence hall director to schedule a time and partnered with a faculty member in sociology to create the pre- and post-test. The program was implemented as part of step two, “do.” After the program ended, the director administered the post-test and “studied” the results. Based on the data, the director determined that, given the amount of learning students gained from the program, the office would “act” by implementing Consent4U and requesting additional funding from the provost.

Some of the tenets of improvement science mirror those of assessment. However, the Plan, Do, Study, Act model may provide a way to manage the change that comes with making an improvement.

References
Christie, C., Lemire, S., & Inkelas, M. (2017). Understanding the similarities and distinctions between improvement science and evaluation. New Directions for Evaluation, 153, 11-22.

Lemire, S., Christie, C., & Inkelas, M. (2017). The methods and tools of improvement science. New Directions for Evaluation, 153, 23-34.

