Friday, October 30, 2009

The Fetishization of Rubrics I

The one thing you see over and over and over in the assessment literature is the "rubric."  Never mind, for now, the history of the concept -- that's an interesting story but it's for another time.

For now, just a quick note.  A rubric is basically a two-dimensional structure, a table, a matrix.  The rows represent categories of observable or measurable phenomena (such as, for grading an essay, "statement of topic," "grammar," "argument," and "conclusion") and the columns represent levels of achievement (e.g., "elementary," "intermediate," "advanced").  Each cell of the table then describes what performance in a given category (say, "grammar") looks like at a given level.

A rubric is, we could say, just a set of scales that share the same values, with something like an operationalization of each value spelled out.
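
To make the structure concrete, here is a minimal sketch in Python, reusing the essay-grading example above; the descriptor strings are invented for illustration:

```python
# A rubric as a plain mapping: (criterion, level) -> cell descriptor.
# Criteria and levels come from the essay example above; the descriptor
# strings are hypothetical.
LEVELS = ("elementary", "intermediate", "advanced")
CRITERIA = ("statement of topic", "grammar", "argument", "conclusion")

rubric = {
    ("grammar", "elementary"): "Frequent errors obscure the meaning.",
    ("grammar", "intermediate"): "Occasional errors; meaning stays clear.",
    ("grammar", "advanced"): "Error-free prose with varied structure.",
    # ... one entry per (criterion, level) pair
}

def describe(criterion: str, level: str) -> str:
    """Return the performance descriptor for one cell of the table."""
    return rubric[(criterion, level)]
```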

Rubrics are, in other words, nothing new.  Why then, our first question must be, do assessment fanatics act as if rubrics are new, something they have discovered and delivered to higher education?

I would submit that the answer is ignorance and naivete.  They just don't know.

A second question is why their rubrics are so often so unsophisticated.  Most rubrics you find on assessment websites, for example, suggest no appreciation for something as elementary as the difference between ordinal, interval, and ratio scales of measurement.  Take this one, which is a meta-rubric (an assessment rubric for rating efforts at assessment using rubrics). (Source: WASC)

Criterion              | Initial | Emerging | Developed | Highly Developed
-----------------------+---------+----------+-----------+-----------------
Comprehensive List     |         |          |           |
Assessable Outcomes    |         |          |           |
Alignment              |         |          |           |
Assessment Planning    |         |          |           |
The Student Experience |         |          |           |

(Cell text omitted here; one cell is quoted below.)
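
Note, before going further, what those column labels are in measurement terms: an ordinal scale.  A small Python sketch of the error this invites, with hypothetical ratings coded 1 through 4:

```python
# The column labels form an ordinal scale: the levels are ordered, but
# nothing says the "distance" from Initial to Emerging equals the
# distance from Developed to Highly Developed.
LEVELS = ["Initial", "Emerging", "Developed", "Highly Developed"]

# Hypothetical ratings of four programs on one criterion.
ratings = ["Initial", "Highly Developed", "Initial", "Highly Developed"]
codes = [LEVELS.index(r) + 1 for r in ratings]     # ordinal -> 1..4

# Interval-style arithmetic: the mean is 2.5, "halfway between Emerging
# and Developed" -- a level no rater assigned, resting on a distance
# assumption the scale never licensed.
print(sum(codes) / len(codes))                     # 2.5

# Order-based statistics are all an ordinal scale actually supports:
print(LEVELS[sorted(codes)[len(codes) // 2] - 1])  # a median level
```

Averaging level codes across programs or criteria, a tempting move on assessment dashboards, quietly upgrades an ordinal scale to an interval one.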

The table looks orderly enough, eh? Let's examine what's in one of the boxes. Here's the text for "Assessment Planning" at the "Developed" level:
The program has a reasonable, multi-year assessment plan that identifies when each outcome will be assessed. The plan may explicitly include analysis and implementation of improvements.
It looks like we need another rubric because we've got lots going on here:
  1. What makes a "reasonable, multi-year plan"? The cell never says.
  2. Mainly what we need here are dates: when will each outcome be assessed?
  3. How should the assessor rate the "may-ness" of analysis and implementation? Apparently these make a plan neither better nor worse, since they may or may not be present (see the sketch after this list).
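
Point 3 is easy to make concrete.  Here is a sketch of what the cell actually lets us verify; the field names are hypothetical:

```python
from dataclasses import dataclass
from datetime import date

# A hypothetical operationalization of the "Developed" cell. The only
# concretely verifiable content is a schedule: one date per outcome.
@dataclass
class AssessmentPlan:
    outcome_schedule: dict[str, date]  # outcome -> when it will be assessed
    spans_multiple_years: bool
    includes_analysis: bool  # the cell says "may" include -- scored how?

def is_developed(plan: AssessmentPlan) -> bool:
    # "Reasonable" is left as undefined here as it is in the rubric cell.
    return plan.spans_multiple_years and bool(plan.outcome_schedule)
    # Note that includes_analysis cannot affect the result: the plan
    # "may" include analysis, so its presence makes the plan neither
    # better nor worse.
```
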
Our next analytical step might be to look at what varies between the different levels of "Assessment Planning," but first let's ask what conceptual model lies behind this approach.  It's very much that of developmental studies, especially psychology.  The columns are, thus, stages of development.  In psychology or child development the stages have some integrity: they correspond to phases a person naturally passes through.  The dimensions may be measured independently yet be highly correlated (typically through chronology), and so "stages" emerge naturally from the data.
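
A toy simulation makes that last point visible; all numbers here are illustrative, not data about any real program:

```python
import random

random.seed(0)

# Toy model: one latent developmental variable ("age"), and three ordinal
# dimensions that each track it noisily. The dimensions are measured
# independently but end up highly correlated through the latent variable.
def observe(latent: float) -> list[int]:
    return [min(3, max(0, int(latent + random.gauss(0, 0.5))))
            for _ in range(3)]

sample = [observe(latent) for latent in
          [random.uniform(0, 4) for _ in range(1000)]]

# Because every dimension tracks the same latent variable, most profiles
# are nearly flat (e.g. [2, 2, 2]); those flat profiles are the "stages"
# that emerge from the data. An a priori rubric just asserts the grid.
flat = sum(1 for dims in sample if max(dims) - min(dims) <= 1)
print(f"{flat / len(sample):.0%} of profiles are near-flat")
```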

In the case of assessment, though, these are a priori categories made up by small minds who like to put things in boxes.  And the analogy they draw when they make them up is very much to child development.  An assessment rubric is a grown-up version of the kindergarten report card.