Aligning Assessment with Outcomes
2. Collecting data
As we've discussed, one of the main goals of assessment and evaluation is to measure learning and/or abilities.
Unfortunately, the actual learning is unseen; it happens in the minds of your students. We can only measure and judge what is observed as a result of that learning. To do this, we put tasks in front of students so they have a chance to show or prove their learning or ability. Sometimes the task itself closely aligns with the outcome you are trying to measure. For example, if you want to see whether students can reposition a patient in bed, you can assess their technique in the lab or on practicum. If you are teaching business communication and require your students to edit and format a document for distribution, you can set this task as an assignment or exam and easily judge the result.
But what if you want your students to better understand something complex, like the impact of the Indian Act on today's governance of unceded territory in BC? How would you assess that? What would be the best tool? A multiple-choice quiz? An essay? A presentation? A reflective journal? These would all be imperfect representations of the understanding; in fact, they are just hints of potential understanding.
The problem with much assessment is that you often end up measuring the task itself (e.g., writing an essay, giving an oral presentation) rather than the learning or ability it is supposed to reveal. As shown above, this is fine if the task is closely aligned to the outcome, but often it isn't. Some students struggle to write well, which can mask their deep and nuanced understanding of a concept. Conversely, some students are super test takers and can score highly on a multiple-choice exam without deep understanding, relying instead on memory, perceptual fluency, and deductive logic.
Choosing the best way to assess learning is more art than science.
The instrument or task you choose to assess learning should be:
- Valid - the degree to which it measures what it is supposed to measure. A well-designed rubric can improve the validity of any task.
- Reliable - the degree to which the marking criteria elicit consistent results regardless of the assessor or context.
- Practical - the amount of resources (time, effort, money) required to deliver, complete, and evaluate it.
- Fair - the degree to which the students are equipped and enabled to be successful if they choose to be.