The perils of ranking

The take-away

  • The most popular lists base their rankings on quantifiable data, including research output, income, and student and faculty numbers.
  • At least one list is now adding a more subjective element: teaching quality.

“The scientific strength of an institution does not necessarily measure its total strength.” That prescient sentence could have come from any commentary on today’s university ranking systems, but it was penned by US psychologist James McKeen Cattell in 1910. Cattell produced the first university ranking in “American Men of Science”, a compendium that listed American universities in descending order according to the number of eminent scientists they employed.

Though the world has changed beyond recognition in the intervening years, what has not changed is the difficulty of measuring a university’s excellence. Today five global ranking systems – ARWU World University Rankings, CWTS Leiden Ranking, QS World University Rankings, THE World University Rankings and U-Multirank – attempt to do so (see infographic).

These rankings are considerably more sophisticated than Cattell’s. The Times Higher Education (THE) World University Rankings collects data from three different sources: an annual survey of at least 10,000 academics; analysis of around 13 million separate research outputs (journal articles, books, book chapters, conference papers); and direct data from universities, such as size, income, student body and faculty numbers.

This then feeds into 13 indicators balanced across five broad areas of performance, which are then combined into a single overall score. Unlike systems such as QS or ARWU, which rely on public sources or data scraped from websites, THE partners on a voluntary basis with any university that wants to be ranked. “It’s a huge global undertaking,” says Phil Baty, editorial director of the THE global rankings. “We’re working with close to 2,000 individual universities across continents and in multiple languages.”
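
As a rough illustration of how indicator scores spread across several performance areas might be reduced to one overall figure, the sketch below computes a simple weighted average. The pillar names and weights are hypothetical, chosen only for demonstration; they are not THE’s published methodology.

```python
# Illustrative sketch: combining per-pillar scores into a single overall score.
# Pillar names and weights below are assumptions for demonstration only.

# Hypothetical normalised scores (0-100) for one university, grouped by pillar.
pillar_scores = {
    "teaching": 78.0,
    "research": 85.5,
    "citations": 90.2,
    "industry_income": 60.0,
    "international_outlook": 72.4,
}

# Hypothetical weights; they must sum to 1.0.
pillar_weights = {
    "teaching": 0.30,
    "research": 0.30,
    "citations": 0.30,
    "industry_income": 0.05,
    "international_outlook": 0.05,
}


def overall_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-pillar scores into one weighted overall score."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[pillar] * weights[pillar] for pillar in weights)


print(f"Overall score: {overall_score(pillar_scores, pillar_weights):.1f}")
```

In practice each ranking system normalises and weights its indicators differently, which is one reason the same university can land in very different positions across lists.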

Boosting the ranking position

Global rankings were first introduced in 2003 by Shanghai Jiao Tong University to gauge the standing of top Chinese universities. As competitors emerged, rankings were used primarily to help students and parents choose schools. Given the vast amounts of data each ranking produces, it is no surprise that they have become hugely influential to the schools themselves. The logic goes that increasing a university’s ranking has a snowball effect, attracting better scientists, smarter students and more public funding and donations, producing world-class research, exceptional students and all the benefits that come with a top international reputation.

“We measure and analyse rankings, and of course do benchmarking and share experiences with other universities,” says Eva Hildebrandt of the Technical University of Munich. Similarly, Merle Rodenburg of the Eindhoven University of Technology says that her school benchmarks its ranking against “a select group of partner universities,” while also using a specific indicator in the CWTS Leiden Ranking to measure collaboration with industry.

Both, however, are worried about the rising influence of rankings. Aside from questioning the reliability and comparability of the data, Hildebrandt and Rodenburg recount stories of universities using rankings to decide whether departments should be merged, whether specific research topics should receive bigger budgets, or even whom to hire.

“Universities that optimise their structure and strategy only to perform well in rankings will soon have other problems,” says Hildebrandt. For example, a school keen to climb the rankings may allocate more resources to its science departments – which have high publication and citation rates – at the expense of subjects it may be renowned for, such as the arts, humanities or social sciences.

THE’s Baty voices another concern: “If universities are focused just on climbing up the ranking they may chase prestige and research excellence, but then there is an issue about wealth inequality”. Newer and smaller universities, particularly in developing countries, do not have the resources to compete on these fronts with their top-ranked peers.

One tactic is to merge with other universities. Paris’s Pierre and Marie Curie University was ranked 123rd in THE’s 2018 rankings, while Paris-Sorbonne University – Paris 4 was 196th. Now that the two have merged, their combined research output and reputation have pushed the new Sorbonne University to 73rd position.

Aiming to recognise a wider range of excellence, THE is diversifying its metrics. One of the most important for students, but also one of the hardest to measure, is teaching quality. Recently, THE introduced US and European rankings based on an annual poll of thousands of students. And THE is developing another ranking to measure universities’ success in delivering on the UN’s sustainability goals. “We reject the idea that the only great universities are Stanford, Harvard, Oxford and Cambridge, and that everyone should copy them,” says Baty. “The reason we’re investing so heavily in teaching and impact metrics is to recognise that the sector is strong through its diversity.”

