However, one might consider the U.S. News rankings more misleading than useful. Can a meaningful assessment of a university be distilled into a single line of statistics in a magazine? There are qualitative differences between universities, but the differences that really matter to a student are not reflected in the ranking data. Students want to know the quality of the engineering school, the class size in the history department, or the amount of scholarship money available for the business program. A ranking that lumps all departments together obscures these distinctions. Does a ranking actually tell you anything about the quality of the education you will receive? A student seeking a degree in a field at which a low-ranked university happens to excel may overlook that university simply because it appears at the bottom of the list.
Ehrenberg (1999) questions the validity of numerical scores used to rank academic institutions. His studies reveal a number of problems:

- It is virtually impossible to quantify the quality of education; what is important to one college applicant may be meaningless to another.
- The existence of a ranking system encourages colleges to boost their scores by providing misleading, exaggerated, or downright incorrect information to the ratings services. (Almost all of the "hard" data used by college guides and rating sources is supplied by the schools themselves, without independent confirmation.)
- It is not meaningful to evaluate an entire institution with a single numerical score. Even individual departments vary from year to year in faculty and funding, let alone the college as a whole.
- Ranking services often change their methodology, so a college ranked first last year can dive to tenth place this year, and vice versa. Cynics have noted that jumbling the rankings this way boosts sales of the ratings publication compared to a list that stays relatively static from year to year.
- Differences between ranked positions may be statistically insignificant, but forcing schools onto an ordinal scale exaggerates them. There may be little real difference in quality between the schools ranked #2 and #20, yet the uninformed consumer naturally assumes otherwise.
- Lists of "best" schools are worthless unless the judging criteria are specific, non-arbitrary, and clearly spelled out for the reader, and unless the underlying data are accurate and independently verified.

Together, these criticisms point to serious problems with current ratings methodology.
How would you reform the current system?
Based on the preceding information, it is clear that the popular ranking systems need improving. How does one create a ranking system that conveys quality without causing unfavorable distortions in college behavior? The goal must be to have schools provide evidence of their contribution to learning. In a lecture, Long (2000) suggested two proposals. The first involves a shift from rankings based on inputs (reputation, student test scores, resources) to rankings based on outputs (the returns to attending that college). The second is a "National Survey of Student Engagement" measure of "good practices." Such a ranking system would address questions such as: How often are students required to make a class presentation? Have students had conversations with professors outside of class? How many students are required to write papers at least twenty pages long? Would students attend the same college again?
Any reform proposal should consider college quality on measures such as a clear mission, attention to students, a planned yet flexible curriculum, the classroom climate, resources devoted to learning, and the campus culture. Therefore, I prefer a system of college evaluation that is not based on ranking schools but instead offers comparative information about them, along with guidance on how to assess the quality of schools and of the ranking services themselves. One example is Peterson's "Considering College Quality," a discussion of assessing institutional quality; Peterson's also explains why it does not believe in rankings.
Many faculty and college administrators have a love-hate relationship with rankings and ratings. Colleges routinely disparage rankings but are quick to trumpet their high standing and paste the U.S. News best-college graphic on their web sites. Schools are known to tinker with their admissions policies, alumni files, and other ranking factors in order to maintain or boost their U.S. News rating. Like them or not, colleges will devote no less attention to rankings as we proceed into the next decade.