Wednesday, December 5, 2007

Hope springs eternal

The Norwegian University of Science and Technology (NTNU) has plummeted from second place (behind the University of Oslo) to fourth place (now also behind the University of Bergen and the University of Tromsø) among Norwegian universities, according to a recent ranking. Overall, the Norwegian universities have also dropped a number of places on the list. Not so good.

For NTNU, this is quite a wet blanket, as its stated mission, according to its web pages, is to become one of the top ten technical universities in Europe by 2020. This recent development is hardly a step in the right direction.

This annual ranking from The Times is also somewhat at odds with the more famous (and some say more accurate) Shanghai ranking, where Norwegian universities in general do a little better. I've got no particular opinion on which ranking is more accurate, but I wonder how objective these rankings are, and exactly what criteria must be met (and how they are weighted) in order to yield a high score.

According to the article, the Times ranking is based on evaluations from 1500 industry leaders and 5000 researchers (whatever the hell that means - the wording made it impossible to decipher), the percentage of international faculty members, the fraction of international students, the number of students per faculty member, and the number of citations per faculty member. There's no mention of how these factors are weighted, and other rankings use other criteria besides. So: what do these criteria mean, and how can they be affected?
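Just to make the weighting problem concrete, here's a minimal sketch in Python of how a composite score like this could be computed. The weights and the two universities below are entirely made up, since the article doesn't disclose the actual numbers; the point is only that shuffling the weights around can reorder institutions without any underlying change in quality.

```python
# Hypothetical weights -- the Times does not state how the factors
# are weighted, so these numbers are made up for illustration.
WEIGHTS = {
    "peer_review": 0.40,            # survey of researchers
    "employer_review": 0.10,        # survey of industry leaders
    "intl_faculty": 0.05,           # percentage of international faculty
    "intl_students": 0.05,          # fraction of international students
    "faculty_student_ratio": 0.20,  # students per faculty member (scaled)
    "citations_per_faculty": 0.20,
}

def composite_score(indicators, weights=WEIGHTS):
    """Weighted sum of indicators, each normalized to a 0-100 scale."""
    return sum(weights[key] * indicators[key] for key in weights)

# Two fictional universities: A is citation-heavy, B recruits abroad.
uni_a = {"peer_review": 70, "employer_review": 60, "intl_faculty": 30,
         "intl_students": 25, "faculty_student_ratio": 50,
         "citations_per_faculty": 90}
uni_b = {"peer_review": 70, "employer_review": 60, "intl_faculty": 90,
         "intl_students": 95, "faculty_student_ratio": 50,
         "citations_per_faculty": 40}

print(composite_score(uni_a))  # 64.75
print(composite_score(uni_b))  # 61.25 -- shift enough weight toward the
                               # international factors and B overtakes A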

The percentage of international faculty members appears to be a good measure of a university's status, as it means people are willing to relocate to the country and university in question, presumably because getting a job there is good news for ye olde resume. However, it could also mean that people relocate because they are unable to get faculty positions in their home country, either because the jobs there are extremely hard to come by and the quality bar is very high, or because the system caps the number of senior faculty (Professors), as in Germany, among other countries. Without saying something about how the criteria for hiring international candidates compare to the same criteria elsewhere, I'm not so sure how meaningful this factor is. On the other hand, it provides a good measure of the level of institutional inbreeding, but that can also be accomplished by measuring the fraction or percentage of faculty members educated at a university other than the one where they now hold a position.

The next criterion is the fraction or percentage of international students. No offense, but what the hell kind of criterion is that? In this day and age, where the push for internationalization has prompted large subsidies of international exchange programmes, having international students is simply a way of getting outside funding for the universities involved. Technically, all that's needed is for someone - like the government - to allocate funding for exchange between two universities, either within a region, like Scandinavia, or between continents. In other words, having lots of exchange students is hardly a measure of quality, unless some other specifications are included.

The number of students per faculty member tells you just that, and it can be manipulated without any concern for quality. Fire half of the faculty and watch this number double while saving money. Again, not necessarily a measure of quality, as it does not in any way, shape or form pertain to the quality of either the students or the faculty members.

The number of citations per faculty member, however, is in my opinion one of the few objective measures of the quality of an academic - as quantified through the Hirsch factor (the h-index) and other such measures. The number of citations, although tied to the individual researcher's field of study, says a great deal when averaged over the entire institution, as it reflects the number of publications, where the work is published, and how the articles stack up within the target publications.
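For the record, the Hirsch factor is simple to compute: a researcher has index h if h of their papers have at least h citations each. A minimal sketch, using made-up citation counts:

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Made-up example: five papers cited 10, 8, 5, 2, and 1 times.
print(h_index([10, 8, 5, 2, 1]))  # 3 -- three papers have >= 3 citations
```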

But still - how can one rank academic institutions without leaving the criteria wide open for influence?

4 comments:

Anders said...

Of course there will always be a way of manipulating such a rating. However, I totally agree that the number of citations per faculty member is one of the best parameters out there.

Wilhelm said...

What other factors would you include, A-team?

Anders said...

I don't know.

The thing is, universities are also educational institutions, not only research institutions. Do you want to include that in the rating? Undergraduates don't publish much, and just counting citations doesn't say much about the learning environment for students in general and undergraduates in particular.

Also, if you collaborate closely with industry, you may not be allowed to publish much of the results. And at a small university, the publication rate of a single staff member might have a noticeable impact on the numbers for the whole faculty.

So, citations are a good parameter, but alone they're not enough if you want the whole picture.

Wilhelm said...

You're right - you obviously need several parameters. The average Hirsch factor says squat about pedagogical qualities.

Money brought in from external sources would be significant, as it means people outside the department care about the research being done.

Pedagogical quality is notoriously difficult to measure, because grades are easily tampered with, unless you institute a world-wide standard for certain subjects. However, I shudder at the number of pencil-necked paper-pushers such an administration would require.