Organized Anarchies: 13 Steps to Building a Learning Organization

This post was originally published in the New England Journal of Higher Education

In many ways, higher education has not changed in the nearly 1,000 years since the first university was founded in Bologna, Italy in 1088. Many courses still have professors or “masters” lecturing in front of students, with exams requiring the reproduction of facts learned in those lectures. But in other ways, higher education changes daily. A brief perusal of headlines from the Chronicle of Higher Education or Inside Higher Ed demonstrates the changing landscape of colleges and universities. Given this perpetual state of chaos, colleges and universities not only need to withstand and manage change, they need to leverage that environmental change, and sometimes even foster change, to better meet the needs of diverse stakeholders. To achieve their goals, colleges and universities need to become “learning organizations.”

Continue reading “Organized Anarchies: 13 Steps to Building a Learning Organization”

Assessment as Learning

NOTE: This was originally posted here.

About 16 months ago our institution started an institutional assessment steering committee (IASC) as a vehicle to improve assessment on campus. Composed of representatives from administrative and academic units across campus, the committee has been engaging in a number of activities to help increase confidence and capacity in assessment and to begin building a culture of assessment.

Continue reading “Assessment as Learning”

Managing Change to Close the Loop

While there are variations, the basic assessment cycle comprises four steps: identify outcomes, develop strategies to achieve the outcomes, gather and interpret data related to outcome achievement, and close the loop. A great deal of emphasis is placed on the first three steps, but less on the final and perhaps most important step: closing the loop. Assessment is meaningless without putting the findings into action.

Continue reading “Managing Change to Close the Loop”

Assessment and Improvement Science

(NOTE: This blog post was also posted on http://www.cas.edu/blog_home.asp?display=73)

Recently, I attended the Assessment Institute in Indianapolis. During a discussion with a friend, she introduced me to the concept of improvement science. I had no idea there was such a field of study, so I immediately searched Google to learn more. After reading a few online articles, I was intrigued enough to purchase, for my flight home, a copy of New Directions for Evaluation (2017), number 153, as the issue was dedicated to improvement science.

To those implementing assessment, the concept is familiar. According to Moen, Nolan, and Provost (as cited in Christie, Lemire, & Inkelas, 2017, p. 11), “improvement science is an approach to increasing knowledge that leads to an improvement of a product, process, or system.” That certainly sounds like the continuous improvement purpose of assessment. What is a little different is that improvement science explicitly focuses on systems thinking. Because of this systems approach, change management is a central principle, as it is critical for improvement (Christie, Lemire, & Inkelas, 2017).

The journal issue goes into detail regarding the components of improvement science, an operational model, and case studies to illustrate what the authors are describing. But one discussion point jumped out at me. A cycle of testing and learning is foundational to improvement science (Lemire, Christie, & Inkelas, 2017). This cycle includes four steps: Plan, Do, Study, Act (PDSA).

[Figure: the Plan, Do, Study, Act (PDSA) cycle]

In this cycle, improvement occurs in small steps to see how effective implementation is and to better understand concomitant issues. The PDSA cycle is critical to change management, as there are many issues that impact the implementation of an improvement, including user buy-in, resources, and sometimes even politics.

The PDSA cycle can easily be applied to assessment in higher education. Once the assessment is complete and recommendations for change are made, those changes should be implemented in small steps, starting with planning the execution. After the planning takes place, a small-scale version of the improvement is implemented. Third, that small-scale improvement is itself assessed. With this information, the improvement is scaled up.
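For the programming-minded, the four steps can be sketched as a simple loop. This is a minimal, hypothetical illustration only: the function names, data, and the passing threshold are all invented, not part of any real assessment system.

```python
# A minimal, hypothetical sketch of the Plan-Do-Study-Act (PDSA) cycle
# applied to a small-scale improvement. All names and numbers are illustrative.

def pdsa_cycle(plan, do, study, act, max_iterations=3):
    """Run PDSA iterations until the improvement is judged ready to scale."""
    for iteration in range(1, max_iterations + 1):
        design = plan(iteration)      # Plan: design a small-scale test
        results = do(design)          # Do: run the pilot
        findings = study(results)     # Study: interpret the data
        if act(findings):             # Act: scale up, or revise and repeat
            return f"scaled after iteration {iteration}"
    return "revised plan needed"

# Illustrative stand-ins for each step: a growing pilot whose post-test
# mean improves with pilot size, scaled up once it reaches a target of 85.
outcome = pdsa_cycle(
    plan=lambda i: {"pilot_size": 25 * i},
    do=lambda design: {"post_test_mean": 70 + (10 * design["pilot_size"]) // 25},
    study=lambda results: results["post_test_mean"] >= 85,
    act=lambda meets_target: meets_target,
)
print(outcome)  # → scaled after iteration 2
```

The point of the sketch is simply that each pass through the loop is small and cheap, and scaling up happens only after the “study” step supports it.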

Here is a basic example of how this might look in assessment practice. The orientation office at a small college completed a CAS self-study and learned that its program on sexual assault prevention did not achieve the intended learning outcomes. A recommendation in the self-study report is to contract with a professional organization that has developed a program called “Consent4U.” However, the program costs almost half of the entire orientation budget. While the benefits could be great, the orientation director wants to test the program before making the significant financial investment. The director ran a pilot of the program with one of the residence hall learning communities. To understand the change in learning, the director developed a pre- and post-test regarding sexual assault. In the first step of the cycle, “plan,” the director collaborated with the residence hall director to schedule a time and partnered with a faculty member in sociology to create the pre- and post-test. The program was implemented as part of step 2, “do.” After the program was done, the director administered the post-test and “studied” the results. Based on the data, and given the amount of learning students gained from the program, the director decided to implement Consent4U and requested additional funding from the provost.
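In a pilot like this, the “study” step might boil down to comparing mean pre- and post-test scores. The scores below are invented purely for illustration; they are not data from any real program.

```python
# Hypothetical pre-/post-test scores from a small pilot group.
# The "study" step compares the mean score before and after the program.
pre_scores = [55, 60, 48, 70, 62]
post_scores = [78, 85, 70, 90, 80]

pre_mean = sum(pre_scores) / len(pre_scores)
post_mean = sum(post_scores) / len(post_scores)
mean_gain = post_mean - pre_mean

print(f"pre mean: {pre_mean:.1f}, post mean: {post_mean:.1f}, gain: {mean_gain:.1f}")
# → pre mean: 59.0, post mean: 80.6, gain: 21.6
```

A real analysis would of course want matched pairs and a larger sample, but even this simple comparison gives the director something concrete to act on.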

Some of the tenets of improvement science mirror those of assessment. However, the Plan, Do, Study, Act model may provide a way to manage the change that comes with making an improvement.

References
Christie, C., Lemire, S., & Inkelas, M. (2017). Understanding the similarities and distinctions between improvement science and evaluation. New Directions for Evaluation, 153, 11-22.

Lemire, S., Christie, C., & Inkelas, M. (2017). The methods and tools of improvement science. New Directions for Evaluation, 153, 23-34.

 

Assessment as a Tool for Organizational Change

NOTE: This blog post is inspired by Margaret Leary. During the ACPA16 Convention she and I discussed ideas she had regarding how assessment can foster organizational change. That conversation led me to learn more about organizational change and how assessment might be related to it. This blog post is a result of that research and contemplation.

Continue reading “Assessment as a Tool for Organizational Change”

Four Stages of Assessment Competence

Originally published November 3, 2015 on The Student Affairs Collective

Assessment isn’t an activity. It’s a state of mind.

The statement above has been my mantra for the past five years. Too often, assessment is seen as an afterthought rather than an integral part of the program or service planning and implementation process.

Interestingly, this statement didn’t occur to me when I was in the midst of an assessment project, teaching class, facilitating a workshop, or consulting with a division of student affairs. It occurred to me in a grocery store. Yes, in the deli aisle waiting on maple and brown sugar ham, to be exact. I won’t bore you with the details here, but you can read more regarding this “evaluative epiphany” here: http://bit.ly/1MSqT2x.

In this post, I don’t want to talk about the statement itself, but about what it represents. Recently, I echoed this refrain during a webinar sponsored by ACPA’s Commission for Assessment and Evaluation. The topic was “Assessing Cultures of Assessment.” (You can access the webinar here by entering your name and email.) At the outset, I suggested that this statement, “Assessment isn’t an activity. It’s a state of mind,” was a definitive sign that a culture of assessment existed in an organization. When assessment is a state of mind, it is infused into every aspect of individual or organizational practice, including planning, implementation, and – of course – evaluation. At this stage, assessment becomes an unconscious, embedded element of everyday work.

So, how does one arrive at this destination where assessment is a state of mind?

I think the four stages of competence, originally described as the four stages of learning and outlined by Linda Adams (2011), can be a helpful guide. These stages are:

  • Unconscious incompetence
  • Conscious incompetence
  • Conscious competence
  • Unconscious competence

In the unconscious incompetence stage, people are not engaged in assessment and don’t know why it’s important. They are unable to articulate its value or purpose. This stage described most of our field 10-15 years ago. The prevailing attitude was that assessment was simply a fad and we just needed to “wait it out.”

Once awakened to both the necessity and the benefit of engaging in assessment, people realize they need to do it but are not sure how. This is conscious incompetence. The first step is to identify the actual skills and knowledge needed to perform assessment. The 2nd Edition of the ACPA/NASPA Professional Competencies (released in August 2015) provides a framework for skill and knowledge development regarding assessment, evaluation, and research. A great way to develop these skills is to seek out books, workshops, and other resources. Fortunately, more and more resources are available. There are four new student affairs assessment books coming out this academic year. How exciting! One centers on leading assessment for student success and another focuses on coordinating divisional assessment. Two more covering more general assessment practice will come out in early March. Workshops and conferences are another way to build assessment competence. Each summer, ACPA holds its Student Affairs Assessment Institute and NASPA sponsors its Assessment and Persistence Conference. Both associations have special interest groups regarding assessment for networking and professional development (ACPA’s Commission for Assessment and Evaluation and NASPA’s Assessment, Evaluation, and Research Knowledge Community). In addition, many master’s level preparation programs have assessment courses to help students move from this stage of competence to the next.

As people gain skill and knowledge, they become consciously competent. They value the assessment process and are continually attentive when performing it. Assessment takes effort and concentration at this stage but continues to become more comfortable and more frequent.

At the apex of this competence hierarchy is unconscious competence. At this stage, assessment has been performed so often that it becomes habit or second nature and is integrated into daily practice and processes. It is important to note that learning should still take place. Even in assessment, lifelong learning is important as new scholarship is constantly being created.

As I mentioned at the beginning of this post, the statement Assessment isn’t an activity. It’s a state of mind came to me in a grocery store. In the deli aisle, I recognized that I was shopping in a way that mirrored the assessment process. My goal was to be both effective and efficient, and those goals shaped my actions from planning to execution. Assessment had become so unconscious that it was integrated into other parts of my life. While you may not be as much of a geek as I am and want assessment to infiltrate your personal life, I do hope you develop your assessment skill and knowledge to the point that you are unconsciously competent.

At what stage are you? What can you do to get to the next level?

References
Adams, L. (2011). Learning a new skill is easier said than done. [Blog]. Retrieved from http://www.gordontraining.com/free-workplace-articles/learning-a-new-skill-is-easier-said-than-done/

The Tinkertoy Postulate

I entered the Indianapolis Marriott Ballroom on Tuesday evening, April 1st to watch the Pecha Kucha sessions at the 2014 ACPA Annual Convention. The room was packed and everyone was enjoying the fast-paced presentations, each 6 minutes and 40 seconds long, with 20 slides advancing every 20 seconds. These presentations were fast, fun, and engaging. I, too, was captivated.

But this isn’t the focus of my story. What was going on around me is.

In between presentations I observed the crowd. People were talking and laughing with those sitting or standing close by – old friends, new friends, and ACPA acquaintances alike. With a backdrop of “edutainment,” people were connecting.

In a row near the back, I spotted two colleagues I knew sitting next to each other. They weren’t talking to each other like other folks were. I then realized that while I knew each of them, they likely didn’t know one another. One of these friends was Kristin Carpenter, a colleague from the University of New Hampshire Department of Residential Life. The other friend was Kristin Skarie from Teamworks who I have known through ACPA for a number of years.

The moment I saw them sitting together, I knew I had to connect them. Not only because they had the same first name (even spelled the same!) but also because they have a common interest. Kristin C. loves the outdoors, the environment, and intentional living. Kristin S. recently wrote a book entitled A Year of Nothing New: Tools for Living Lean and Green where she discusses how her life changed when she decided to stop shopping as a hobby, which resulted in a renewed focus on deliberate, responsible, local living (check out the book here: http://www.betterteams.com/store.htm). Based on these interests, I figured Kristin and Kristin would hit it off right away. I went up to them, said hi, introduced them to each other, and suggested that Kristin S. tell Kristin C. about her book.

As I headed to another event later that night, I was reflecting on the connection that I made between Kristin S. and Kristin C. That connection reminded me of a concept I learned many years ago as a hall director at UNH. Ruth Abelmann, Associate Director of Residential Life, talked about residential life staff as Tinkertoys because they connect people to each other. The analogy is perfect for any student affairs professional.

If you aren’t familiar with Tinkertoys or Fiddlesticks (their generic kin), they are sets of wooden hubs and spokes that allow the user to create a variety of constructions. The hub has many holes around its circumference into which spokes can be inserted. In the Tinkertoy Postulate, each of us is a hub and we have connections to other hubs via spokes. Thus, I had a connection to both Kristins individually. But they didn’t have a connection to each other.


When I connected Kristin with Kristin, they created a bond. In this case people are similar to molecules. A molecule is stronger if the atoms that comprise it have many bonds to the other atoms. People are also stronger if they have more connections. They feel integrated and that they matter.


Now, imagine if everyone were to be a hub making intentional connections between other people. We would have a constellation of connections that were integrated and interconnected.
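For the programming-minded, the hub-and-spoke idea can be modeled as a tiny graph, with people as hubs and introductions as new bonds. This is a playful, illustrative sketch only; the names come from the story above and the structure is invented.

```python
# People as hubs, relationships as spokes: a minimal undirected graph.
# Before the introduction, both Kristins connect only through "Me".
network = {
    "Me": {"Kristin C.", "Kristin S."},
    "Kristin C.": {"Me"},
    "Kristin S.": {"Me"},
}

def total_bonds(graph):
    """Count each undirected connection once (each edge appears twice)."""
    return sum(len(neighbors) for neighbors in graph.values()) // 2

def connect(graph, a, b):
    """Introduce two people: add an undirected edge between them."""
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

print(total_bonds(network))  # → 2 bonds before the introduction
connect(network, "Kristin C.", "Kristin S.")
print(total_bonds(network))  # → 3 bonds after
```

One intentional introduction adds a bond the network didn’t have before, and every new bond makes the whole “constellation” harder to break.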


That would be an almost unbreakable net of relationships. Be the hub and connect others.

The Argument for Competency-Based Higher Education

There has been recent buzz regarding the awarding of higher education degrees based on demonstrated competence of knowledge and skills rather than the traditional acquisition of a set number of course-based credits. In April 2013, the U.S. Department of Education approved the eligibility of Southern New Hampshire University to receive federal financial aid for students enrolled in a new, self-paced program (http://bit.ly/17IHbJ2). Then in May, the U.S. Department of Education notified colleges and universities that they could apply to provide federal student aid to students in competency-based programs and identified a process for that application (http://bit.ly/HCZvGW). Later this year (2013), Wisconsin’s extension system will start a competency-based program where students with experience and program-specific skills may be able to test out of courses (http://bit.ly/17cV4Js).

I am enthusiastic and optimistic regarding the possibility of competency-based education. There are benefits for all constituency groups involved. Here are a few of the benefits I envision. What benefits do you see?

  1. The focus of the degree is truly on skills and knowledge attainment not credits or seat time.

Currently, colleges and universities award a degree essentially based on seat time. A student satisfactorily completes 120 credits and receives a diploma. While there is an assumption that satisfactory completion of coursework suggests learning has occurred, the degree itself is not awarded based on demonstrated skill or knowledge. Aren’t the knowledge and skills what colleges and universities should be focusing on?

  2. Graduates are better prepared.

If the focus shifts from completed credits to demonstrated skill and knowledge, then it seems logical that college students will be better prepared than they currently are as they transition from these institutions. Federal reports, international rankings, and books such as Academically Adrift decry the academic preparedness of today’s U.S. college students. Competency-based higher education can re-center degree attainment on what really matters to everyone – skill and knowledge.

  3. There is a clear delineation of acquired skills and knowledge for employers/grad schools.

As I talk with colleagues working in career development, they describe the inability of seniors to articulate what they have learned during their undergraduate careers. Yes, students can list off all 1.3 million items on their resume (that they actually started developing in kindergarten). However, they cannot explain what skills and knowledge they acquired from these experiences nor how they can apply what they learned to different situations. Developing a competency-based educational program would require clearly defined sets of skills and knowledge that would have to be demonstrated to graduate. This delineation would make it easier for students to describe their knowledge and skills. It would also make it easier for employers and graduate schools to decipher resumes and determine what students know.

  4. Alternate college journeys are validated.

Competency-based higher education is student-centered. Rather than making students conform to an antiquated mode of education most appropriate for the industrial age, this model focuses solely on competencies and acknowledges that students can acquire these competencies in multiple ways. This model honors the multiple journeys students take to achieve their degree. Students can swirl between institutions to acquire the skills and knowledge required to graduate. They can also double-dip by attending two institutions at the same time. Students may even acquire skills and knowledge when they stop out of college because they are developing skills on the job or in other settings. As the number of diverse paths to a college degree increases, a model for degree completion is needed that aligns with these myriad journeys.

  5. College will be cheaper for students, colleges, and the federal government.

Competency-based education would be cheaper for most higher education stakeholders. If the focus is competence, students wouldn’t need to take courses at the same institution, and articulation agreements wouldn’t be needed. In addition to coursework, students could also acquire skills in a variety of ways, including working a job, volunteering, or serving in the military. All of these options could decrease the cost of degree attainment for students. With decreased costs for a college education comes a reduced need for financial aid. A reduction in the need for financial aid would ease fiscal burdens for individual institutions as well as federal aid programs. It is important to note that a shift to a competency-based model would include an initial investment at the institutional, and possibly federal and state, level for development and implementation.

  6. Assessment will be easier.

It also seems that assessment would be easier in a competency-based system, or at least much more clearly focused. Right now, it seems challenging for many academic departments and institutions to identify learning goals and outcomes and find ways to document their achievement. In this new model, goals and learning outcomes would have to be clearly articulated (which would take time, of course). Competency milestones on the path to degree would need to be developed to help a student know if she was on track. The assessment process wouldn’t be easy. However, the end result for student learning would be much clearer than it currently seems to be.

While there are benefits to competency-based higher education, the process to implement this model nationally would be a long, challenging road. It would require agreement that this model is the best for students and the U.S. higher education system. Once that understanding was reached, the conversation regarding which skills and knowledge would need to be demonstrated for each discipline would begin. This discussion couldn’t be resolved during a weekend retreat. It would take longer. But that conversation has started and I am interested to see where it leads.

What benefits and challenges do you see to competency-based higher education?