The problem is that in most activities, measuring and monitoring service standards is incredibly difficult, and getting it wrong can lead to serious distortions. And even where one aspect of service quality can be measured very precisely, problems in other areas can prevent the standard from being achieved. While I mostly want to comment on measuring quality in universities here, I shall start by illustrating the latter point with a recent personal experience in the telecommunications sector.
On January 27th the BT phone line coming to my house broke - no idea how or why, but obviously I needed it fixed as rapidly as possible. Clearly, no phone service can be delivered without a working line, so this is an easily measurable dimension of the service standard. Moreover, BT stated, both on its website and in personal communication, that it normally expects to repair such line faults within three working days. That doesn't strike me as a terribly impressive standard to aim for, but it was nowhere near achieved in our case. In fact our line was not restored until the late afternoon of February 10th, day 15 of our loss of service.
So what was the problem? As I see it there were several: (a) a dreadful faults helpline; (b) inaccurate diagnosis of our fault (very surprising, since we had reported the broken wire); (c) failure to find our address (yes, really); (d) engineers not showing up as promised, or showing up and doing nothing. Basically, we experienced a whole series of mistakes and miscommunications from BT, and it took great pressure and persistence to get the line fixed at last. In the end I was so exasperated by the whole experience that I wrote to the BT chief executive to tell our sorry tale, something I rarely do. The response was pleasingly quick, but otherwise pretty pathetic: a verbal apology, and an offer of one month's free line rental by way of compensation - absurdly, the recent snow was also blamed, though it clearly had nothing to do with my problems.
Now, this shows just how much can go wrong even when the service standard is easily measured. But in universities the situation is far more difficult, because it is far from clear what can or should be measured. Yet in the news recently I have seen several articles suggesting some very specific measures, most notably to do with the typical weekly contact hours (lectures, tutorials, labs, etc.) that a student can expect during his/her course. Indeed, it is even being proposed that universities should be required to publish information about contact hours for each of their courses, as part of the information made available to potential applicants. This seems to me a seriously bad idea. Let me explain.
Of course, contact hours have always differed a good deal between subjects, for quite understandable reasons. Science students often have to spend time in labs conducting experiments as part of their courses, while history students are expected to spend more time in the library (or nowadays, online) gathering material for the various essays they have to write. This is all quite normal. For a given subject, there will probably be some variation in contact hours between universities, depending on how each institution chooses to structure the degree course and deliver the material. In their advertising material and course brochures, institutions might or might not choose to say how a typical student week will be structured, but I don't see why they should be compelled to reveal this or that detail, including exact contact hours.
Requiring institutions to publish their contact hours implies - to most people studying the information and comparing courses and institutions - that more is better. But we simply don't know whether that is true for university education. Zero contact hours is probably not great, for why be at university in that case? And a very large number is probably not good either, for then university education becomes far too much like spoon-feeding, with insufficient room for independent learning. Somewhere in between, then, there is no doubt a happy medium, located at different points for different subjects. But I'm not aware of any evidence that more contact hours are systematically correlated with a 'better' university education. So I would let universities decide for themselves what contact hours to offer, and what information to publish about their courses. This is surely not an area that needs regulation.
Moreover, while in my BT example the broken line meant 'zero service' (and should therefore have called for priority attention), the number of contact hours in a university course has little or nothing to do with the quality of the higher education being delivered. Do people want to go to the university offering the most contact hours, or to the one with the best academic reputation? I would have thought the latter.
The quality of service offered by a university is a highly subjective notion, hard to pin down in precise numerical indicators - one of those puzzling things that we can't define all that well, but know when we see it. Contact hours may be an easy indicator to measure, but that is not a good reason for drawing so much attention to them. In practice, too, if contact hours were used in the way proposed, they would quickly prove subject to all the distortions that afflict quantitative indicators of institutional performance - universities would find ways of inflating the figures to make themselves look better, for instance. In the end, what matters most for universities is the quality and number of the graduates they produce, their 'output', not a whole lot of intermediate inputs such as contact hours.