Tuesday, 24 July 2012

How do we measure an individual's research impact?

This is a subject that I thought I already understood fairly well, having expressed on this blog and elsewhere various sceptical views about the whole notion of impact, including some of the commentary about it that has come out in connection with REF discussions and the like.

Some of the debate about impact has certainly been pitched at a fairly low level, and has often not taken us a great deal further than the well-worn territory of either assessing 'research quality' - of an individual or a department (or, more formally, a unit of assessment) - in quite subjective terms; or making the assessment on the basis of publication numbers, journal rankings and citation counts. Not only has this not advanced matters much beyond the old RAE system, but it is open to all the familiar arguments: how many articles to count; how far back to go; how to judge the quality of an individual article, and so on. Like a lot of important things in life, research quality seems to be one of those things where 'we know it when we see it'.

However, a few days ago I was intrigued to find that I didn't really understand this topic of measuring research impact as well as I thought. It happened like this. I had been approached by a couple of academic bodies in Italy to help them with two tasks: (i) assessing the research quality of individual academics, by reading and ranking a selection of their papers; (ii) helping Italian universities make new appointments at professorial level. I presume my name was put forward by a colleague in Italy, but in order to take things forward the Italian academic bodies naturally asked me to provide my CV and to complete a couple of simple forms. Whether the Italians will, in the end, ask me to do these academic tasks remains to be seen, but completing their forms was unexpectedly challenging.

For one of the forms asked me for the h-index and g-index of my research, something that left me totally baffled as I'd never heard of either. Just to deal with the forms I did a quick Google search using the string 'Paul Hare h-index'. To my surprise this came back with h = 5 and g = 7. I still had no idea what these numbers meant, but I put them on the forms and sent them off as requested.

Once that was done, though, I had to investigate further.

It then turned out that the h-index originated in a paper by Hirsch published in a physics journal in 2005. His idea is a really simple one. If a researcher has an h-index of 22, say, it means they have 22 papers with 22 or more citations each; the h-index is the largest number for which this property holds. Thus someone might have published 80 or so papers, most of which will only have a handful of citations, or even none. With 80 papers, the h-index could turn out to be 25, if that researcher - among all his/her output - has 25 papers with at least 25 citations (but not 26 papers with at least 26 citations).
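The rule just described is easy to mechanise. Here is a minimal sketch in Python; the citation counts in the example are made up for illustration:

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    # Rank papers from most to least cited, then find the last rank
    # at which the citation count still matches or exceeds the rank.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# A researcher with papers cited 10, 8, 5, 4, 3 and 0 times has h = 4:
print(h_index([10, 8, 5, 4, 3, 0]))  # 4
```

Note that only the most-cited papers matter: adding dozens of uncited papers to the list leaves h unchanged, which is exactly the point of the measure.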

The g-index, by the way, is a modification of the h-index that gives more weight to the most highly cited articles: it is the largest number g such that a researcher's top g papers have, between them, at least g² citations.
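Under that definition (Egghe's usual formulation, which the form I filled in did not explain), the g-index can be computed much like the h-index, using cumulative rather than individual citation counts:

```python
def g_index(citations):
    """Largest g such that the top g papers together have at least g*g citations."""
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, count in enumerate(ranked, start=1):
        total += count  # cumulative citations of the top `rank` papers
        if total >= rank * rank:
            g = rank
    return g

# Same made-up researcher as before: h = 4, but g = 5,
# because the top papers' citations accumulate quickly.
print(g_index([10, 8, 5, 4, 3, 0]))  # 5
```

Since cumulative counts grow at least as fast as individual ones, g is always at least as large as h for the same set of papers, which fits the h = 5, g = 7 values the Google search reported.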

What you realise when you think about all this is that anyone's measure of h (or g) will depend on the specific database that is examined - what papers are included in it, what citations are counted, etc. This means it's not easy to get a wholly reliable measure of h, because available research publication and citation databases are nowhere near good enough. Whatever database is used, for any given researcher their h value should rise over time, implying that it's probably not a great indicator for early-career researchers. Moreover, if different databases are used to investigate a given individual, one can expect to find different h values. All one can say, then, is that the person's true h-value is at least as large as the largest value one manages to find.

To illustrate the effect of using an alternative database, I downloaded an amusing bit of free software called Publish or Perish (Harzing, A.W., 2007, Publish or Perish, available at this link). This searches through the Google Scholar database, and using this approach I found the following values for my research: h = 20 and g = 33. I still don't really know whether these values are good, bad or indifferent, as I haven't checked the corresponding values of any colleagues and haven't seen these research performance indicators reported before.

Should we try to use indicators like this in the forthcoming REF? It seems to me worth thinking about, despite the practical difficulties I have mentioned above.

Monday, 23 July 2012

More on publishing

Academics, even retired ones, are supposed to keep up a steady pace of publishing their research output, though in my case it can't really matter very much as I don't expect to be part of the forthcoming REF (Research Excellence Framework), for which the data are due to be collected during 2013, and the results published in late 2014.

Anyway, regardless of the REF, I guess I've been doing my bit recently. At the weekend I was proofreading a paper on North Korea that I thought was due to be published in the International Economic Journal early next year. It now seems the paper will come out in September this year, in just two months' time.

And a few days ago, my co-editor (Gerard Turley) and I finally submitted to Routledge our edited volume, Handbook of the Economics and Political Economy of Transition, which should be published in March 2013; see the publisher's web-pages for this book. We've been working on this volume for just over two years. It has involved us working out a structure for the volume, then commissioning a large number of contributions from experts in the field, mostly academics, but a few who have also served as ministers in or advisors to various Eastern European governments. This process went surprisingly well, with the submitted volume comprising our editors' introduction plus 37 contributions. Clearly it'll be a large book, so anyone wishing to buy it should get saving up right away.

The submission process itself was quite novel for me, as it was the first time I've submitted a book entirely electronically. In fact as co-editor I've never even printed out the whole thing, so I guess we've saved a tree or two. To make their life easier, the publisher wanted the text of each chapter in a separate file, with each table and figure also in a separate file. Hence our submission amounted to 186 files in all, bundled up in a single, large zipped file. Needless to say, for this to work, we had to be really careful with our file names.

The next step, I imagine, will be a steady stream of queries from Routledge as they work through the files and prepare everything for publication, but our main work as editors is done, and these final stages should not be too hard to fit in with whatever else we are doing. It's been an amazingly interesting project to work on, and I'm really impressed that so many contributors delivered their contributions to a very high standard, and more or less on time. I can say that now, though I must admit that the final weeks were marked by a certain amount of anxiety as we awaited the last few contributions.

Thursday, 12 July 2012

The joys of publishing....

My latest venture has required me to learn a great deal about the publishing business, especially the more technical aspects of layout and design, and setting up an e-book. It's been a fascinating few weeks. Let me explain.

What happened was this. Back in 2010 I had a book published about my travels in Central and Eastern Europe since 1970, recounting my impressions of the hugely interesting places I've been lucky enough to visit over the years, and talking about some of the projects I've worked on as an economist. So the book was a curious mix of travel tale, a history of the region, and an account of what economists actually do when they do research or give policy advice in Eastern Europe.

At first the book was written just as a record for myself, and to some extent for friends and family. But I also wrote it because I realised that fewer and fewer of my students had any idea that Eastern Europe was under communist rule for several decades. Many had not heard of the Berlin Wall or the Iron Curtain, so not surprisingly they had no idea what transition - the process of shifting from a centrally planned economy to a more 'normal' market type economy - might entail. This is part of what my book sought to explain.

The book was published under the title: Vodka and Pickled Cabbage - The Eastern European Travels of a Professional Economist. Since I was never that bothered about making money from the book, I made the pdf-file of the book available on my own website, and it was also on the website of a US-based area studies association. Through these channels I gather that several thousand copies were distributed, and I still get e-mails giving comments on the book.

Unfortunately, in early 2011 I discovered that my publisher had gone into administration. At first it seemed that some other publisher might take over the rights to keep the book in print, but that fell through. Then a couple of months ago I finally wrote to the liquidators to find out what was happening. I never received a reply. This is why I finally decided to reissue the book myself, with a new cover (nicer than the original one, I think), using free software available through the Amazon company, CreateSpace. Using this, I set up the book for a print-on-demand paperback, in the process of which I learned a lot about formatting, style, layout, editing and the like. But I got there in the end and the book is now on Amazon - see link in right panel.

At the same time, since the original book had never been made available as an e-book, I decided to set it up for the Kindle. The formatting needed to do that was a bit fiddly, as I discovered: my original book file for the paperback version turned out not to convert very well. In fact it's still not perfect, and when I have some time I may go back and tidy it up. One of the big advantages of this modern technology is that I can do so whenever I want, quickly and easily.

So now, of course, I hope someone will come along and buy the book...............

Saturday, 7 July 2012

LIBOR: Bankers are not Gentlemen!

The revelations that have surfaced in the past couple of weeks over the setting of LIBOR (London inter-bank offered rate) have been nothing short of amazing, especially for a fairly simple-minded and even sometimes naive economist such as myself.

When we teach economics we spend a lot of time talking about markets and how they work, and we usually discuss markets in terms of the supply side, the demand side, and the (equilibrium) price. Curiously, though, when it comes to the details of price setting, we are often a bit vague, relying on the 'market mechanism' to set proper prices without thinking terribly much about what exactly the price should be. If we do give prices some thought, it is usually to argue that competition between firms will make sure any price distortions cannot last long - if a price is too high, new firms will enter the market; if it is too low, firms will fail and leave the market. And mostly this simple story seems to work quite well.

However, in markets where there are not multiple price setters, the story can break down badly, as we discovered in the case of LIBOR. LIBOR is actually a really fundamental 'price' in the financial markets, since massive volumes of contracts (including some mortgages) pay an interest rate that is expressed as so many basis points (100 basis points equal one percentage point) above LIBOR. So if the way LIBOR is set is wrong, lots of folk - both firms and individuals - will end up paying the wrong price (interest rate) on their borrowing. Mostly, I imagine, they would end up paying too much (thereby raising bank profits), but sometimes the error could go the other way, and they would pay too little. Thus it's quite important that LIBOR is set correctly and fairly.
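The basis-point arithmetic is simple enough to show in a line or two of Python; the loan spread and LIBOR value here are invented purely for illustration:

```python
def contract_rate(libor_pct, spread_bps):
    """Interest rate on a contract priced as LIBOR plus a spread in basis points."""
    # 100 basis points equal one percentage point.
    return libor_pct + spread_bps / 100.0

# A hypothetical loan priced at LIBOR + 150 bps, with LIBOR at 0.85%:
print(contract_rate(0.85, 150))  # 2.35 (per cent)
```

With trillions of pounds of contracts priced this way, even a distortion of a few basis points in the published LIBOR shifts large sums between borrowers and banks.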

Yet despite its importance, LIBOR is one of those prices to which most economists have paid absolutely no attention. Hence it was interesting, and quite a revelation (to me), to discover that it is set daily by the British Bankers' Association (BBA) on the basis of submissions from their member banks. Each bank is supposed to submit data on the interest rate it has to pay that morning for inter-bank borrowings on the wholesale market for funds. The BBA apparently drops the four highest and the four lowest submissions, then averages the rest to set the LIBOR for that day. The whole process seems to me surprisingly informal, and it relies heavily on the truthful submission of the relevant data by the member banks. In other words, setting LIBOR is based on an assumption that bankers are 'gentlemen' in the old-fashioned sense of being completely honest, faithful to the official 'rules of the game', and not at all out to play the system for personal gain.
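The fixing procedure as described - drop the four highest and four lowest quotes, then average what remains - is a trimmed mean, and can be sketched as follows. The panel of sixteen banks and the quoted rates are my own invented illustration, not actual submissions:

```python
def bba_libor(submissions):
    """Average the submitted rates after dropping the four highest
    and the four lowest, as described for the BBA's daily fixing."""
    if len(submissions) <= 8:
        raise ValueError("need more than 8 submissions to trim four from each end")
    trimmed = sorted(submissions)[4:-4]  # discard the extremes
    return sum(trimmed) / len(trimmed)

# With a hypothetical panel of 16 banks, only the middle 8 quotes count:
quotes = [0.51, 0.52, 0.53, 0.53, 0.54, 0.54, 0.55, 0.55,
          0.56, 0.56, 0.57, 0.57, 0.58, 0.59, 0.60, 0.62]
print(bba_libor(quotes))
```

The trimming is presumably meant to blunt the effect of any single outlandish quote, but as the scandal showed, it offers no protection when several banks shade their submissions in the same direction.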

Recent events revealed what one might have guessed had anyone investigated the LIBOR price-setting process carefully - which apparently no one did, though we gathered that the regulators had at least started to ask the right questions. The bankers are not, in fact, the gentlemen they were naively assumed to be: over an extended period, they systematically strove to distort LIBOR by providing incorrect data to the BBA. In other words, they lied.

Why might banks do this? Well, there are two reasons that come to mind. First, the interest rate a bank has to pay to get funds in the wholesale market is sometimes viewed as an indicator of the health of the bank. If a bank can borrow cheaply, it is considered healthy; if it has to pay a high interest rate, this is deemed to reflect a market judgement that the bank is in poor shape. So this line of thinking might make a bank try to achieve a lower value for LIBOR than is really correct, by claiming that it doesn't have to pay much to get wholesale funds. Second, the bank might want to get LIBOR higher than it should be so that its customers then have to pay more for their loans and the banks make higher profits.

These factors push in different directions, but either way one can see that banks might have incentives not to be totally truthful. And with LIBOR set in such an informal and non-transparent way, it wasn't easy for anyone to verify what was really going on, not even the BBA itself. As a result, it was all too easy for this price-setting process, based on the regular and truthful submission of information, to break down.

Two lessons can be drawn from this sad and sorry episode. One is that economists need to think more carefully about the specific ways in which important prices in the economy are set, and the associated incentives that affect the various players in the relevant markets. Doing this could lead to a significant programme of rather important and interesting economics research. The other lesson is simply that setting LIBOR surely cannot continue to be done as it has been hitherto.