For now, just a quick note. A rubric is basically a two-dimensional structure, a table, a matrix. The rows represent categories of observable or measurable phenomena (such as, for grading an essay, "statement of topic," "grammar," "argument," and "conclusion") and the columns represent levels of achievement (e.g., "elementary," "intermediate," "advanced"). The cells of the table then describe what, say, "grammar" looks like at each of those levels of performance.
A rubric is, we could say, just a series of scales that use the same values, with something like an operationalization of each value specified.
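Concretely, the structure amounts to nothing more than a small table of descriptors. Here is a minimal sketch (the criteria and level names are the essay-grading examples above; the descriptor strings are invented placeholders, not taken from any actual rubric):

```python
# A rubric as a matrix: rows are criteria, columns are shared achievement
# levels, and each cell holds the descriptor that operationalizes that
# level for that criterion. Descriptor texts below are placeholders.

LEVELS = ("elementary", "intermediate", "advanced")

rubric = {
    "statement of topic": {
        "elementary": "Topic is named but not framed.",
        "intermediate": "Topic is framed with some context.",
        "advanced": "Topic is framed, motivated, and scoped.",
    },
    "grammar": {
        "elementary": "Frequent errors impede reading.",
        "intermediate": "Occasional errors; meaning stays clear.",
        "advanced": "Essentially error-free prose.",
    },
    # "argument" and "conclusion" rows would follow the same pattern.
}

def describe(criterion: str, level: str) -> str:
    """Return the cell text for a given row (criterion) and column (level)."""
    return rubric[criterion][level]

print(describe("grammar", "intermediate"))
```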
Rubrics are, in other words, nothing new. Why then, our first question must be, do assessment fanatics act as if rubrics are new, something they have discovered and delivered to higher education?
I would submit that the answer is ignorance and naivete. They just don't know.
A second question is why their rubrics are so often so unsophisticated. Most rubrics you find on assessment websites, for example, suggest no appreciation for something as elementary as the difference between ordinal, interval, and ratio measurements. Take this one, which is a meta-rubric (an assessment rubric for rating efforts at assessment using rubrics). (Source: WASC)
| Criterion | Initial | Emerging | Developed | Highly Developed |
|---|---|---|---|---|
| Comprehensive List | | | | |
| Assessable Outcomes | | | | |
| Alignment | | | | |
| Assessment Planning | | | | |
| The Student Experience | | | | |
Looks orderly enough, eh? Let's examine what's in one of the boxes. Here's the text for "Assessment Planning" at the "Developed" level:
> The program has a reasonable, multi-year assessment plan that identifies when each outcome will be assessed. The plan may explicitly include analysis and implementation of improvements.

It looks like we need another rubric, because we've got lots going on here:
- What makes a "reasonable, multi-year plan"?
- Mainly what we need here are dates: when will each outcome be assessed?
- How should the assessor rate the "may-ness" of analysis and implementation? Apparently these do not make the plan better or worse since they may or may not be present.
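To return to the earlier point about ordinal, interval, and ratio measurement: column labels like these support ranking and nothing more. A quick sketch (the numeric codes are assumed purely for illustration, not part of the rubric):

```python
# The WASC column labels form at best an ordinal scale: they can be ranked,
# but nothing licenses treating the gaps between them as equal.

levels = ["Initial", "Emerging", "Developed", "Highly Developed"]
codes = {name: rank for rank, name in enumerate(levels, start=1)}

ratings = [codes["Initial"], codes["Developed"], codes["Highly Developed"]]

# Legitimate on an ordinal scale: ordering and comparison.
print(max(ratings))                 # 4, i.e., "Highly Developed"

# Not legitimate: arithmetic. Averaging assumes equal intervals between
# levels, which the rubric nowhere defines.
print(sum(ratings) / len(ratings))  # about 2.67 -- a number with no defined meaning
```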
In the case of assessment, though, these are a priori categories made up by small minds who like to put things in boxes. And the analogy at work when they make them up is very much to child development. An assessment rubric is a grown-up version of the kindergarten report card.