Sunday, August 14, 2011

What Will "Assessment 2.0" Look Like? A Proposal

The most serious flaw in assessment as now practiced is the premise that it is something that teachers are not interested in, do not want to do, have not been doing, etc.  A word that comes up a lot in connection with assessment is "accountability," but most folks who use the word don't take the time to be explicit about just who is supposed to be accountable to whom for what.  When someone does get beyond just parroting the word, the most common interpretation seems to be "we need to hold teachers accountable."

We have some news for those who have discovered assessment.  Teachers -- lecturers, instructors, professors -- have long been interested in what works and what doesn't in the classroom.  Those who would appoint themselves guardians of learning have a nasty habit of trotting out stereotypes of the worst professor ever and, in a classic example of question begging, concluding that such figures dominate the academy and represent a threat to the future of higher education.

But rather than argue about that, here's a proposal for what the next stage in assessment might look like.

Given that most professors and most departments are actually interested in student learning and in how to maximize it -- this is, after all, the vocation these folks have chosen -- the resources that have been pumped into assessment projects should be put at the service of the faculty.  Throughout Assessment 1.0, the dominant pattern has been for an office of assessment to sit in the driver's seat, more or less dictating to faculty (generally relaying what had been dictated to them by accreditation agencies) how and when assessment would happen.  Many faculty found the methods wanting and the tasks tedious and pointless, but most went along -- more willingly at some institutions than at others.  The interaction between faculty and assessment offices generally came down to the latter making work for the former, without the former seeing much in the way of benefits.

That's unfortunate, because there are lots of potential benefits for us as instructors.  But to realize them, we need to turn the tables.  The basic premise of Assessment 2.0 should be (1) that it be faculty driven and (2) that assessment offices work for the faculty, rather than the other way round.  Assessment offices should think of themselves as a support service for the academic program rather than a support service for a regulatory body that oversees the academic program from the outside.  The main job of assessment offices should be to make easier a part of the work that faculty already do as professionals practicing their craft.  Part of what professionals do is self-monitor and mutually monitor outcomes.  As faculty, we need to think about what information will help us make micro-, meso-, and macro-adjustments in our practice that will improve the outcomes we are collectively trying to achieve.

And the services of our assessment offices should be available to us to obtain it.  We need to put the focus back on this side of the operation and shift away from the idea that the primary motivation behind assessment is to prove something to outsiders.  Even the rhetoric from the accreditation agencies, if you slow the tape down and listen, resonates with this: they demand evidence that assessment is happening, that program adjustments happen in response to it, and so on.  Where they are wrong is in their ignorant insistence that such things were not already happening.

The assessment industry did not invent assessment -- they simply codified it and figured out how to make a living off of doing it instead of being involved directly in educating.

Thursday, August 11, 2011

Too Bad Higher Education "Experts" and Vendors Aren't Graded

I was inspired by a TeachSoc post from Kathe Lowney today to have a look at two articles in the Chronicle of Higher Education on computer essay grading.

The articles are "Professors Cede Grading Power to Outsiders—Even Computers" and "Can Software Make the Grade?"

My Review: A typical Chronicle hack job, to my mind.  Articles like this remind me of the National Enquirer.  The author makes little attempt to critically assess comments from his sources and gives little weight to contrary information (failing to infer, for example, anything from the reported fact that in six years of marketing, almost no one has bought into the computer grading product mentioned).  He jumps on the grade-inflation bandwagon instead of offering an analytic take on it.  In typical COHE fashion, he sets up false dichotomies and stages debates between advocates and defenders, as if there were a big divide down the middle of higher education.  In effect, articles like this are just product placement -- hopefully without kickbacks -- and "if someone says it, then it's a usable quote" journalism.  As with many COHE articles, it reflects journalism that's more in touch with the higher education industry than with higher education itself.  It's mediocre work such as this that makes me let my subscription lapse every year or so.  It's telling that COHE seems to have no qualms at all about trashing educators and educational institutions, yet only ever so rarely takes even a gently critical look at education vendors.

On the accompanying "compare yourself to the computer" article: I think I'd fire a TA who graded like that -- the words "capitalism" and "rationality" merely showing up constitute "concepts related to him," and an answer on Marx where "expelled for advocating revolution" = "significance for social science"?  I scored them 4 and 2, and that was generous.  I'd be mighty disappointed if I were the makers of that software and this were how my product placement in COHE turned out -- would anyone buy it based on this portrayal?!