Which MBA? Ups and downs

Rankings have been a good thing for business education. The intense scrutiny from, in particular, the three main international rankings—The Economist, the Financial Times and Business Week—has helped schools to raise their game over the past decade. When a school fails to place enough of its graduates in high-paying jobs, it is no longer just a concern for that particular school’s students. The numbers are fed into the various rankings and the school’s worldwide reputation takes a hit. If a faculty’s research output tails off, or if its members become half-hearted in the lecture theatre, this too quickly affects a school’s standing.

In an age in which a prospective Indian MBA student might easily be choosing between schools in Australia, America and Singapore, the rankings matter. Institutions that let their game slip will soon feel the effect: fewer applications, unhappy stakeholders and less generous alumni. As any MBA student will be able to tell you, real, transparent competition can only be good for consumers.

But rankings are not without controversy. They may have been a good thing for business education, but only on balance. Many of the accusations made about rankings are of genuine concern.

Perhaps the most important issue is whether schools sometimes play the ranking game to the detriment of their students. In all three international rankings, there are two or three individual criteria that, when focused on exclusively, can make a big difference to a school’s position.

The GMAT entrance exam is a prime example. Rankings use GMAT scores as a proxy for the intellectual capability of students at a school. Indeed, it is a good indicator of future performance on a programme—particularly in the number-crunching disciplines—and all three rankings weight it heavily (it accounts for 6.5% of the total score in The Economist). But, as a result, the last decade has seen GMAT inflation on an alarming scale. The average score at a top-ten school in 1999 was 688 (out of a possible 800); a decade later it is 716.

It is unlikely that students have simply become more intelligent over the past ten years. Two related explanations are more likely: schools have raised their minimum requirements, and students have got better at taking the test—spending more time studying sample questions or paying for GMAT prep classes. But if schools are, overtly or otherwise, raising their requirements because they fear a drop in the rankings, they are in danger of neglecting lower-scoring students who would be assets to the programme: those with leadership potential, say, or a fascinating work history.

It is a similar case with salary. How much schools’ graduates get paid is the highest-weighted criterion in the rankings. In The Economist it accounts for 20%; in the Financial Times it is 40%. It might not seem a problem that schools obsess over making sure that MBA graduates leave programmes earning six-figure salaries. However, in most industries remuneration like that is hard to come by—but, until recently at least, not in financial services. Careers officers, therefore, spent a disproportionate amount of time chasing jobs at banks and, to a lesser extent, high-paying management consultancies. This was not good news for those looking for a job in marketing or a not-for-profit organisation.
Caveat emptor

There is also concern about the way in which students use the rankings. In theory, MBA students should be among the most quantitatively literate people on the planet. They would not get far in their finance classes if this were a weakness. But schools worry that students are paying too much attention to the top-level numbers when they choose where to apply.

The difference between, say, Henley Business School (21st in our ranking) and the University of Michigan (25th) is, in almost every aspect, much more than just four places. The first has an intake of 40, the second 400. Henley has strength in finance, Michigan in strategy. At Henley, students’ average age is 37, compared with Michigan’s 29. Despite one school being marginally higher in the rankings, it is obvious that each school could be the right choice for completely different kinds of student.

For schools there is a further problem. There are now so many rankings—not just the three international ones, but also a plethora of regional and country-based lists, ratings of programmes’ ethical qualities and lists focusing on particular industries—that they can be a serious drain on resources. Some schools have dedicated staff simply to deal with the mass of questionnaires.

But one thing seems certain: rankings will not be disappearing any time soon. An attempt by Harvard and Wharton to lead a boycott of the rankings in 2005 did not, in the end, muster much support. Most schools now accept—albeit sometimes grudgingly—that they have been broadly beneficial. And the marketing departments can’t get enough. After all, the more rankings there are, the more chance there is to be number one at something.