Data Envelopment Analysis in University Rankings


1.0        Introduction

Data Envelopment Analysis (DEA) is a means of producing a performance measure for sets of organisational decision-making units (DMUs) that have multiple inputs and outputs (Dyson, 2000, OR Insight, Vol. 13, Issue 4). DEA considers each unit in turn and, through linear programming, selects the most favourable weights for it. In this investigation the DMUs are universities, and the outputs are the different ranking categories, e.g. ‘Completion Rate’.
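For reference, the general form of the model (a standard CCR-style multiplier formulation, not quoted from the coursework; here $y_{rj}$ is output $r$ of DMU $j$, $x_{ij}$ is input $i$, and $\varepsilon$ is the minimum weight) is:

$$
\max_{u,v}\; h_o \;=\; \frac{\sum_r u_r\, y_{ro}}{\sum_i v_i\, x_{io}}
\quad\text{s.t.}\quad
\frac{\sum_r u_r\, y_{rj}}{\sum_i v_i\, x_{ij}} \;\le\; 1 \;\;\forall j,
\qquad u_r,\, v_i \;\ge\; \varepsilon .
$$

Fixing $\sum_i v_i x_{io} = 1$ linearises the ratio, giving one linear programme that is solved separately for each DMU $o$ with the weights chosen most favourably for that unit.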

1.1        Data Set Chosen / Assumptions

From the Times Top 100 universities, a list of 12 was made of what were considered to be the top “Computer Science” universities listed on the Times website (Appendix d). “Computer Science” was chosen solely as a means of selecting 12 universities. These 12 universities were then investigated using Data Envelopment Analysis (DEA) in Xpress-IVE.

In the appendices the rankings were altered so that the selected universities were also ranked 1 to 12 in the order they appeared in the Times league table. This facilitated comparisons.

The weightings of the variables were chosen by me and are therefore subjective, reflecting how I perceived their importance. A minimum of 4 decimal places was used to ensure as great an accuracy as possible.

  1. Calculating efficiency scores for each university & comparison with Times listing

(Appendix a)

The criteria used in the Times Top 100 University DEA model were all assigned a minimum weighting (0.0001). The item number (representing where the university was placed in the league table) was then altered to the required university before compiling, and the model was run and re-run for each university. The efficiencies and rankings are listed in Appendix a.
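As an illustration of this per-university procedure (the original model was built in Xpress-IVE and is not reproduced here), the sketch below shows the same idea in Python with scipy, assuming a pure-output formulation in which each university's weighted output is capped at 1 and every weight carries the 0.0001 lower bound; the data values are placeholders, not the Times figures.

```python
import numpy as np
from scipy.optimize import linprog

# Placeholder output matrix: one row per university, one column per
# criterion (e.g. teaching assessment, completion, facilities spend).
# These are illustrative numbers only, not the Times figures.
outputs = np.array([
    [92.0, 98.6, 1200.0],
    [90.5, 97.2, 1100.0],
    [88.0, 95.0, 1350.0],
])
n_dmus, n_criteria = outputs.shape
MIN_WEIGHT = 0.0001  # the minimum weighting used in the coursework

def efficiency(target: int) -> float:
    """Maximise the weighted output of `target`, subject to every
    university's weighted output being at most 1, with all weights
    bounded below by MIN_WEIGHT (a CCR-style model with a unit input)."""
    c = -outputs[target]              # linprog minimises, so negate
    A_ub = outputs                    # weighted output of each DMU <= 1
    b_ub = np.ones(n_dmus)
    bounds = [(MIN_WEIGHT, None)] * n_criteria
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return -res.fun

# One solve per university, mirroring the run/re-run procedure.
for j in range(n_dmus):
    print(f"University {j + 1}: efficiency = {efficiency(j):.4f}")
```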

From this table it can be seen that, broadly, the universities have kept the same ranking relative to each other. However, it should be noted that 3 main changes have occurred. Firstly, Bristol has swapped positions with Bath, so that Bath now ranks 9th instead of 10th.

When looking at the weightings automatically placed on them, it can be seen that Bath’s strengths appear to lie in ‘facilities spend’ and ‘completion’, whereas Bristol’s strengths appear to lie in ‘graduate prospects’ and ‘completion’. Considering the 2 ‘completion’ weightings listed, 0.0030 is assigned to Bristol but a greater 0.0057 is assigned to Bath, indicating that Bath might be considerably stronger in this area. A glance at the Top 100 list confirms this to be true, Bath’s value being almost twice that of Bristol.

Another main change is that Edinburgh has moved down 1 place, from 4th to 5th. The full weighting of 0.0325 is placed on ‘teaching assessment’, as this is considered Edinburgh’s greatest strength. However, it is not the ‘best’ in that criterion and therefore cannot be assigned an efficiency of 1, unlike the 4 universities ranking above it, which all have efficiencies of 1.

UCL has now moved from 5th to 1st equal, as it too has a criterion at which it is “best”, namely ‘student-staff ratio’. This explains why it has an efficiency of 1, and also why 3 other universities have an efficiency of 1: each tops the league in a particular criterion. There is therefore no choice but to rank all 4 of them 1st equal, though it might be expected that UCL would rank bottom of the 4 given its prior ranking (5th). This is possibly a criticism of using DEA to investigate university rankings, as it shows that more than one university can often achieve an efficiency of 1 (an indication that we have too many outputs). It would be unreasonable for any one of these universities to claim to be 100% efficient by virtue of a single output, yet without further analysis they are forced to be ranked 1st equal.


In conclusion, it can be seen that these rankings closely resemble the rankings listed in The Times.

  2. Experimentation to find suitable values of the minimum weights

        (Appendix b)

The aim of this table was to achieve a “non-biased” ranking that kept all weightings at a minimum level bar one, in order to ascertain the maximum efficiency a university could achieve if maximum weighting was placed solely on that one variable. Initially, the maximum weighting for each category had to be calculated. This weighting was increased until the program would not run. A table was then constructed illustrating ...
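A minimal sketch of that trial-and-error search, again in Python with placeholder data: the lower bound on one criterion’s weight is raised in small steps until the LP becomes infeasible (“the program would not run”), which gives an approximate maximum weighting for that criterion.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative output matrix (placeholder values only).
outputs = np.array([
    [92.0, 98.6, 1200.0],
    [90.5, 97.2, 1100.0],
    [88.0, 95.0, 1350.0],
])
n_dmus, n_criteria = outputs.shape
MIN_WEIGHT = 0.0001

def solvable(target: int, criterion: int, weight: float) -> bool:
    """True if the LP still has a feasible solution when `criterion`'s
    weight is forced up to `weight` (all other weights at MIN_WEIGHT)."""
    bounds = [(MIN_WEIGHT, None)] * n_criteria
    bounds[criterion] = (weight, None)
    res = linprog(-outputs[target], A_ub=outputs, b_ub=np.ones(n_dmus),
                  bounds=bounds, method="highs")
    return res.status == 0  # status 0 means an optimal solution was found

# Raise the weight on criterion 0 in small steps until the program
# no longer runs; the last workable value approximates the maximum.
w = MIN_WEIGHT
step = 0.0001
while solvable(target=0, criterion=0, weight=w + step):
    w += step
print(f"Approximate maximum weighting for criterion 0: {w:.4f}")
```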
