Friday, August 28, 2009

Why Do We Need a Faculty Assessment Committee?

Any time we create a committee we should stop and ask why.  The baseline for answering that question should be the world (or institution) without the committee.  How was it?  How would it be?

When I think about that in this case, here's what I come up with.  With no functioning assessment committee (it was appointed but never met last year):
  1. Faculty have felt little opportunity for real input into assessment.
  2. The process has, in fact, been dominated over the years by non-faculty and non-academics.
  3. Many faculty members are unimpressed with the process, and substantive missteps have been frequent.  Faculty members' assessments of assessment range from feeling insulted by the unprofessional and intellectually demeaning tone in which assessment has frequently been conveyed, to serious criticism of the validity of the methods used in assessment, to real concern about how it is consistently ignored or dismissed.  And much in between.
[I suspect that from the "other side" it looks like this
  1. Faculty have been slow to adopt a culture and practice of assessment
  2. Our job is to get the institution to comply with WASC enough to get us re-certified]
So how to make the world different WITH an assessment committee?  If I were an administrator, I'd think that the committee could help me bring the faculty along.  I could co-opt its members as fellow champions of assessment as currently practiced, and they'd be the vanguard of the movement.

Uh, I don't think so.  The problem with assessment is not lack of faculty buy-in.  Let's repeat that: THE PROBLEM WITH ASSESSMENT IS NOT LACK OF FACULTY BUY-IN.  The problems with assessment are these:
  • its methods are dubious
  • its logic model (observation > analysis > change) is vague, rarely made explicit, and more wishful thinking than realistic
  • it dishonestly or naively hides its political values behind a veil of "objective measurement"
  • it is dominated by self-serving educational entrepreneurs who live off, not for, assessment
  • it is evangelized in the absence of hard thinking about institutional inputs and outputs, the very things it purports to be sensitive to
  • it enters the academy as a fait accompli, more based on conviction and belief than theory, analysis, and argument, and exempts itself from the critical examination and culture of evidence that it champions
So, what can an assessment committee do if even some of the above is in fact the case?  Mainly, I think, hold assessment accountable to normal standards of intellectual integrity and professionalism.  If we do that, I predict, there would be changes in how assessment is implemented, changes that would allow the process to capitalize on its virtues and avoid some of its vices.  And in reaction to THAT you would get more buy-in.  The giant flaw in how it's been handled so far is that assessment is blind to closing its own loop.  When faculty don't fall in line, it's not necessarily because they are resistant to change, unwilling to give up their comfortable sinecures, or too arrogant to think about students.  Sometimes it's because they have looked at something and, smart people that they are, found it wanting.

It may even be that the resistance to change and feedback, the comfortable sinecures, and the arrogance that deflects all criticism may lie in the assessment industry itself.  The rest is, as they say, projection.

Sunday, August 23, 2009

Let's Take It Seriously

Let's take assessment and accountability seriously AS AN INSTITUTION. There is a tendency to equate assessment with measuring what professors do to/with students. The buzzword is "accountability," and there's an unspoken assumption that the locus of lack of accountability in higher education is the faculty. I think that assumption is wrong.

We should broaden the concept of assessment to the whole institution. Course instructors get feedback on an almost daily basis -- students do or don't show up for class; instructors face 20 to 100 faces projecting boredom or engagement several times per week; students write papers and exams that speak volumes about whether they are learning anything; advisees tell faculty about how good their colleagues are. By contrast, the rest of the institution has little, if any, opportunity for feedback. And it matters: a single substandard administrative act can affect the entire faculty, so even small things can have a big negative effect on learning outcomes.

In the name of accountability throughout the institution I propose something simple, but concrete: every form or memo should have a "feedback button" on it. Clicking on this button will allow "users" anonymously to offer suggestions or criticism. These should be recorded in a blog format -- that is, they accumulate and are open to view. At the end of each year, the accountable officer would be required in her or his annual report to tally these comments and respond to them, indicating what was learned, what changes have been made or why changes were not made.

The important component of this is that the comments are PUBLIC so that constituents can see what others are saying. Each "user" can see whether her ideas are commonly held or idiosyncratic and the community can know what kind of feedback an office is receiving and judge its responsiveness accordingly.

Why anonymous? This is feedback, not evaluation. This information cannot be used to penalize or injure anyone. The office has the opportunity to respond either immediately or in an annual report. Crank comments will be weeded out by sheer numbers and by users who contradict them. In the other direction, it is clear that honest feedback can be compromised by concerns about retribution, formal or informal. Analysis along these lines only strengthens the case that comments should be (at least optionally) anonymous.
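To make the proposal concrete, here is a minimal sketch of what such a feedback log might look like. It is only an illustration under my own assumptions; the names (FeedbackLog, add_comment, annual_tally, the "clarity" tag) are hypothetical, not a spec for any actual campus system.

```python
from collections import Counter
from dataclasses import dataclass, field
from datetime import date

# Hypothetical sketch: an anonymous, public comment log attached to one form or office.
@dataclass
class FeedbackLog:
    office: str
    comments: list = field(default_factory=list)   # accumulates, blog-style, open to view

    def add_comment(self, text: str, tag: str = "other") -> None:
        # Deliberately no author field: this is feedback, not evaluation.
        self.comments.append({"date": date.today(), "tag": tag, "text": text})

    def public_view(self) -> list:
        # Anyone can read the full, accumulated record.
        return list(self.comments)

    def annual_tally(self, year: int) -> Counter:
        # What the accountable officer would summarize and respond to in an annual report.
        return Counter(c["tag"] for c in self.comments if c["date"].year == year)

log = FeedbackLog("Registrar: add/drop form")
log.add_comment("The deadline isn't stated anywhere on the form.", tag="clarity")
print(log.annual_tally(date.today().year))   # Counter({'clarity': 1})
```

The point of the sketch is the shape of the thing: no author field, a fully public record, and a yearly tally that somebody accountable has to answer to.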

We should note that we already do all of this in principle -- many offices around campus have some version of a "suggestion box." What is missing is (1) systematic and consistent implementation so that users get accustomed to the process of providing feedback, and (2) a protocol for using the feedback to enrich the community knowledge pool and to build it into an actual accountability structure.

The last paragraph makes the connection to a sociology of information. Information asymmetries (as when the recipient knows what the aggregate opinion is, but the "public" does not) and the atomization of polities (this is what happens when opinion collection is done in a way that minimizes interactions among the opinion holders -- cf. Walmart not wanting employees to discuss working conditions -- preventing the formation of open, collective knowledge*) are a genuine obstacle to organizational improvement. Many, many private organizations have learned this; it's not entirely surprising that colleges and universities are the last to get on board.

* as opposed, say, to things that might be called "open secrets"

Friday, August 21, 2009

The Bannerization of Assessment

I met recently, along with Andy, Alice, and Kiem, with two reps from the Blackboard company. They were here to tell us about a Blackboard add-on product called, I think, "the assessment module." Herewith, some observations.

The product incorporates some of the functionality that most of us have seen recently in the CARP software. It allows folks at different levels of the instructional process -- from instructors up to deans and assessment staff -- to input, collate, tally, analyze, query, and report on all manner of information related to assessment. It has the advantage of using the same overall interface and design logic that we are familiar with from our use of Blackboard for classes and it's "flexible" and can be integrated with BANNER.

The mere fact of investing in the software would probably send a positive signal to WASC that we are an institution that is taking assessment seriously. It would also greatly simplify the work of the office of institutional research by organizing assessment data in one place and one format. Nobody at the meeting was prepared to give actual numbers but it seems logical that it could save lots and lots of hours of work in that office (and probably in other offices that have to prepare materials for WASC).

Much of the labor saving derives from the fact that the system assumes that instructors will use it to collect and assess at least some of the work students do in their courses. At a minimum, the system allows students to submit papers, essays, etc. in electronic form, and the assessment group can then process these using rubrics we've developed so as to arrive at some measure of student achievement in our programs. In most cases we'd rise above that minimum: instructors would simply use the system itself to do the grading and feedback on papers and exams, and so this information would be "automatically" recorded and tallied up for use in assessment. This would make faculty life easier because we would not have to submit separate assessment information. Ideally, most, if not all, of the work we assign for evaluation in courses would be associated with a rubric stored in the system. Students would submit work electronically; we would open the student's work and the rubric in side-by-side windows, rate the work on each measure, and add comments. The student would then receive the feedback in electronic form, and the aggregate results for the class would be automatically recorded and forwarded "up the chain" to department heads, the office of assessment, etc., as appropriate.
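As a thought experiment, here is a rough sketch of the grading path the reps described. The rubric criteria and the function names (grade, class_aggregate) are my own inventions; the actual product's interface and data model are surely different.

```python
# Hypothetical sketch of the grading path: the instructor scores a submission
# against a rubric, the student gets the feedback, and the class aggregate is
# what travels "up the chain."
from statistics import mean

RUBRIC = ["thesis", "evidence", "organization", "prose"]   # assumed example criteria

def grade(submission_text: str, ratings: dict, comments: str) -> dict:
    # The per-student record: ratings on each rubric item plus written feedback.
    assert set(ratings) == set(RUBRIC)
    return {"ratings": ratings, "comments": comments}

def class_aggregate(graded_submissions: list) -> dict:
    # What gets recorded and forwarded to the chair / assessment office.
    return {item: mean(g["ratings"][item] for g in graded_submissions)
            for item in RUBRIC}

graded = [
    grade("paper one...", {"thesis": 4, "evidence": 3, "organization": 4, "prose": 3}, "Good start."),
    grade("paper two...", {"thesis": 2, "evidence": 3, "organization": 3, "prose": 4}, "Sharpen the claim."),
]
print(class_aggregate(graded))   # e.g. {'thesis': 3, 'evidence': 3, ...}
```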

MY TAKE-AWAY

After several hours listening to the Blackboard reps (sales and technical folks) here are a few observations:
  1. These folks do not understand how education happens in a liberal arts college, in particular:

    1. what is valuable for students
    2. how departments work
    3. how decisions get made
    4. what value added professors actually bring to the mix

    Instead the software is designed to resonate with an auditor's fantasy of higher education, as might be manifest in the CFO of a large, for-profit, online university.

  2. Feature after feature of the software is perfect for online correspondence courses as offered by, say, University of Phoenix.

  3. While the company representatives repeatedly touted the system's "flexibility," in fact it imposes dozens upon dozens of assumptions about teaching and learning on the process without any self-consciousness. The whole thing derives from a particular view of academic assessment (itself a refugee from peer review), and its purveyors appeared to have zero sense that its epistemological status was different from, say, the law of gravity.

  4. Totally absent from their pitch was any sense at all that there was an educational problem that this product could help you solve.

    1. What it does address is the fact that institutions like Mills have been told "you must do something" and this is clearly a something and spending a lot of money on it would be a great demonstration of institutional commitment.

    2. The company appears to have done zero assessment of the temporal impact of the processes the software would require. "Eventually, instructors would get really good at entering this stuff and so the time involved would drop over time..." "The information could be viewed and sliced in many different ways..." (by whom?) Is there a net gain in productivity? No idea. Is there a net positive for student learning? No idea. Will more parents want to pay our tuition because we use this system? No idea. What should instructors stop doing to make time to use this system? No idea.

  5. What they are selling is "a license and consulting." In order to figure out how to use the software and adapt it (remember, it's very flexible) you have to hire them as consultants. Remember too, that these consultants, as far as I can tell, have very little fundamental appreciation for how a liberal arts college works. Either they will mislead us because they don't understand us or we will pay for them to learn something about how a college like Mills works.

  6. It is telling that, as potential customers, we were hard-pressed to come up with things we want to do that this product would make easier (usually at these demos users' imaginations get going and they start saying "hey, could I use it to do X?"); instead we sat there realizing that the software would make us do things.

SOFTWARE DESIGNED TO CONNECT THINGS UP

Two aspects of the software are key (from a software design point of view) -- "the hierarchy" and "links."

A core concept in the software design is "the hierarchy," by which they seemed to mean the managerial hierarchy that oversees the delivery of education. At the bottom of this structure are instructors and their students. Instructors implement courses, which are at the next level above them -- overseen "by a department chair or dean" who might then be under another dean. Above this we have "the assessment operation" -- as the discussion went on, it seemed that this means some combination of Institutional Research and the Assessment Committee. Then above this you might have other levels of college administration, and above this, outside mandaters such as WASC. The genius of the software is that each institution can build in the hierarchy that is appropriate to itself. The data in the system -- the descriptions of goals, standards, and such -- are carefully protected so that only the appropriate people at the appropriate level of the hierarchy can see or change them.
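Here is my reading of "the hierarchy," sketched as code purely to make the visibility rule concrete. The level names and the up-the-chain-only rule are my assumptions about what the reps meant, not documented behavior of the product.

```python
# Hypothetical sketch of "the hierarchy": each node is a level (instructor,
# department, dean, assessment office, ...). Data attached at a node is visible
# only to that node and to the nodes above it.
class Level:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.records = []          # goals, standards, results stored at this level

    def ancestors(self):
        # This level plus everything above it, up to the outside mandater.
        node, chain = self, []
        while node:
            chain.append(node)
            node = node.parent
        return chain

    def can_view(self, viewer):
        # A viewer sees this level's records only from this level or somewhere above it.
        return viewer in self.ancestors()

# Each campus builds in its own chain:
wasc = Level("WASC")
provost = Level("Provost", wasc)
assessment = Level("Assessment Office", provost)
dept = Level("Department Chair", assessment)
course = Level("Course / Instructor", dept)

print(course.can_view(dept))    # True: the chair can see course-level data
print(dept.can_view(course))    # False: the data does not flow back down
```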

The other part of the system is the links. It allows you to build a rubric for, say, reading lab assignments, and each item in the rubric can be linked back to course learning objectives, which are in turn linked back to program goals, and these back to institutional goals or to requirements set forth by external agencies. This means that when you evaluate 25 lab reports, the system automatically gets information on how well the institution is doing in its effort to inculcate a culture of experimentation AND it also automatically gets information about the fact that the institution is monitoring whether or not such learning is occurring. And all this simply by clicking on a radio button in a web-based report evaluation rubric!
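Again purely as an illustration of the idea of the links: one rubric item chained to a course objective, a program goal, and an institutional goal, so that a single click rolls all the way up. The Goal class and the goal names here are hypothetical, not the product's actual schema.

```python
# Hypothetical sketch of the "links": a rubric item points at a course objective,
# which points at a program goal, which points at an institutional goal. Scoring
# one lab report pushes the score up every linked level automatically.
class Goal:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.scores = []

    def record(self, score):
        node = self
        while node:                 # roll the single click up the whole chain
            node.scores.append(score)
            node = node.parent

institutional = Goal("Culture of experimentation")
program       = Goal("Biology program: design experiments", institutional)
objective     = Goal("BIO 101: write a lab report", program)
rubric_item   = Goal("States a testable hypothesis", objective)

for report_score in (3, 4, 2):      # evaluating three lab reports
    rubric_item.record(report_score)

print(len(institutional.scores))    # 3 -- the institution "knows" it is monitoring
```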

All of these things are, of course, changeable. In theory. In practice, the system allows for the creation of extremely high levels of opaque complexity. To ensure system integrity, new procedures will need to be invented so that faculty who want to make a change can confer with departmental colleagues and get the department head to make a request to the office of assessment, and then maybe the gen ed committee or the EPC has to get involved, etc. Or, even more likely, once stuff is in the system it just stays there until it causes a major problem.

The designers of the system seem totally oriented toward (1) the capacity to output what an entity like WASC wants and (2) changing the way instructors teach via a logic of "it's easier to join than fight" and "why duplicate your efforts?"

DISHONESTY AND ANTI-INTELLECTUALISM

A system like this is championed for its flexibility, but that flexibility exists only relative to how rigid it could be. Neither its designers nor its purveyors struck me as having even a hint of a nuanced view of what education is, how it happens, and how real educational organizations work. That's too bad because these are not mysterious topics -- a lot of people DO know a lot about them. What the talk of "flexibility" represents is marketing-speak. A common complaint about course management software and student systems software is that it is inflexible and "doesn't fit how we have kept records before," and so the folks who write it add more options to mix and match the pieces (the presenters seemed to want to impress us by the fact that on a particular screen we could have two tabs or four: "you can set it up so it's exactly right for your process!"). But that's not really flexibility, that's customization. System software is, pretty much by definition, not flexible. It's especially true that system software almost never adapts to an organization; organizations adapt to system software. We've seen this plenty over the years when we're told "Banner can't do that" or "we need this change because of Banner."

A second moment of dishonesty happens because the designers and sales force have clearly bought into the ideology of the professor/instructor as problem. They have talked for so long to assessment aficionados and heads of assessment who get blowback from faculty that they "know" that individual professors don't like this stuff and that part of the challenge of their job is just to soft-pedal around that. They are not selling this stuff to instructors. They are selling it to the instructors' managers or über-managers, folks who themselves have uncritically bought into the idea of there being a crisis of accountability in higher education. The intellectual dishonesty lies in the fact that these folks are neither willing nor able to actually have a critical conversation about any of this. They simply think of people who do not swallow it hook, line, and sinker as "unsaved."

AN IMPORTANT ASIDE

Sociologically, what's interesting is that this is an example of the "for-profit" side of education rubbing up against the not-for-profit side. Blackboard and its competitors, as well as the folks who are on the hustings about assessment, are entrepreneurs. They don't live for assessment, they live off assessment. And we know that that's an arrangement that makes intellectually honest discussions hard to come by.

Thursday, August 6, 2009

It IS an Industry...

Recently got a copy of an email sent out by McGraw-Hill's "Assessment Research Project." In part it said:
As a quick reminder, we are conducting this nation-wide study to learn about assessment practices that are actually being used by professors of Introductory Sociology. We are seeking a copy of your syllabus, mid-term and final exams. If you do not assess cumulatively, please submit your mid-year and end-of-year exams along with your syllabus.
The recipient was assured that the material would be kept confidential and not published in any of their teaching materials (interesting that they even need to say this) and was asked to "please be sure to let me know if you would like to receive a Certificate of Participation and/or an honorary mention in our research"! Maybe you could include that in your tenure file.

There's money in them thar hills, folks, and even though the accreditation agencies, think tanks, washed-up academics who've become assessment entrepreneurs, and standardized testing organizations have a head start (since they are the ones who get to set the agenda), textbook publishers are starting to realize that if we are up against the wall we'll happily order textbooks that come with "integrated assessment plans" of some kind.

Onward and upward!

See also...

PRESS RELEASE: McGraw-Hill Education Forms New Assessment and Reporting Unit to Meet Growing Global Demand: Combines CTB/McGraw-Hill, The Grow Network/McGraw-Hill and McGraw-Hill Digital Learning

You can earn over $100k and live in Monterey, CA: Director, International Research and Development

Wednesday, August 5, 2009

Validity and Such

An AACU blog post referred me to the National Institute for Learning Outcomes Assessment website, which referred me to an ETS website about the Measure of Academic Proficiency and Progress (MAPP), where I would be able to read an article titled "Validity of the Measure of Academic Proficiency and Progress."

And here's the upshot of that article: The MAPP is basically the same as the test it replaced (the Academic Profile), and research on that test showed
...that the higher scores of juniors and seniors could be explained almost entirely by their completion of more of the core curriculum, and that completion of advanced courses beyond the core curriculum had relatively little impact on Academic Profile scores. An earlier study (ETS, 1990) showed that Academic Profile scores increased as grade point average, class level and amount of core curriculum completed increased.
In other words, the test is a good measure of whether students took more GenEd courses. And we suppose that in GenEd courses students are acquiring GenEd skills. And so these tests are measures of the GenEd skills we want students to learn.

A tad circular? What exactly is the information value added by this test?

Introduction

This is intended to be a critical discussion of assessment in higher education.