Simply click on the most frequently asked questions regarding the ranking to get the answers.
Which subjects were examined and when?
The data for the individual subjects are updated in a three-year cycle: natural sciences, mathematics, computer science, medicine, nursing, sports: 2012; law, economics, social sciences, social work: 2014; humanities, psychology, pedagogy: 2013; engineering: 2013.
Why is the data not updated every year?
The enormous amount of work necessary for such a complex ranking makes a yearly update of the data for all subjects impossible. This would be beyond our capacities and the capacities of the universities, which assist with a considerable amount of preparatory work for the ranking project. Except for a few individual cases, changes at universities do not take place at a speed that would make an annual update necessary. In addition, most indicators are not single-year values but averages across several years (e.g. publications, duration of studies), and these are unlikely to change substantially within one year. The time comparison in the different subjects, which became possible for the first time with the 2003 ranking, also shows that important changes, ones that actually result in a change of ranking group, are the exception rather than the rule.
Why is my subject not included?
The subjects included in the ranking cover the degree courses of around 80% of first-year students. The subjects that are not included are, for the most part, subjects that are only offered at a few universities, or for which only a relatively small number of students per university have enrolled.
When will the new ranking be available?
The CHE UniversityRanking 2015/16, for which the natural sciences, mathematics, computer science, medicine and sports were examined, will be published in May 2015.
What do Research Reputation and Reputation for Academic Studies and Teaching tell us?
The CHE Ranking also includes the indicators "Research Reputation" and "Reputation for Academic Studies and Teaching", which reflect the faculties' reputation among the professors of the subject. The CHE asks the professors which universities they would consider best for studying their subject if only the quality of the education mattered. For the Research Reputation, the professors are asked which universities they consider "leading" in their subject. Within the subject community there is, as a rule, a clear picture of the standing or reputation of the individual faculties. Even if the professors do not know all the faculties of their subject in detail, this reputation hierarchy nevertheless exists in their heads. The indicator reflects the opinion of the professors; it is not an indicator of a university's performance! A faculty's reputation may, but does not have to, match its actual achievements in research and teaching. There may be faculties that still live off past achievements; conversely, there are also faculties whose achievements are not yet recognised among the professors. Nevertheless, this indicator can be meaningful information, not least because the reputation of a university also attaches to its graduates.
Does the ranking only cover subjective opinions about the universities?
University rankings are often accused of including only subjective opinions and judgements about universities, which cannot paint a "real" picture of the situation there. This reproach applies to some rankings in which, for example, only students, professors or employers have been asked for their assessments. It does not, however, apply to the CHE/ZEIT OnlineRanking. The approach of our ranking is to obtain a precise and nuanced picture of the study conditions and achievements of the universities from different perspectives. This includes the assessments and opinions of students about studying at their own university as well as facts. University is about students: as users or "customers" of the universities, they can competently assess study conditions and teaching. Yet the ranking covers a great deal more. We have collected a number of facts about the universities from different data sources. Depending on the subject, the ranked indicators include, for example, the average duration of studies, final marks, failure rates, the ratio of students to professors, and also indicators of research activity, such as the number of PhDs, publications, or the amount of research funds acquired.
Why does the ranking not include any survey of employers?
In contrast to some other rankings, the CHE has deliberately refrained from asking employers about the universities. There are several reasons for this decision. For one thing, the reputation of the university where a student graduated is not as important for job applications as is sometimes suggested. There are also methodological arguments against surveying employers: such a survey would in many cases only reinforce existing prejudices, since the surveyed persons frequently name the university where they themselves studied. It has happened more than once that employers described a university as a top institution in rankings even though it did not even offer the subject in question.
Which indicators are preselected for the ranking?
The initial view of the ranking shows four to six selected criteria. This is the same selection of indicators printed in the "ZEIT Studienführer" magazine. Up to 34 different criteria are ranked for each subject, referring, for example, to the make-up of the student body, study success, international orientation, equipment, or research activities at the faculty. As the values of different criteria are not combined in the CHE ranking, there are consequently up to 34 different ranking lists for each subject. For an initial orientation, we have therefore selected criteria that we assume to be of special interest. The selected indicators vary between the individual subjects, but generally comprise the overall opinion of the students, the professors' opinion, a research indicator (such as the number of PhDs, publications or third-party funds), an equipment indicator (e.g. the students' opinion of the library or the number of therapy rooms), and one further fact (e.g. the staff-student ratio) or opinion (e.g. the students' opinion of the organisation of studies or of support by teaching staff). On the Internet, the Ranking Overview takes the user to the detail pages, which show the indicators individually with the corresponding values and ranking groups.
Are students' opinions soft indicators?
In addition to facts about various areas, the higher education ranking contains a number of student opinions. In contrast to the frequently voiced view that these are purely "feel-good" indicators, they are perfectly meaningful, provided the questions are asked in sufficient detail. On the one hand, previous years have shown that, although students and professors grade at different levels (professors usually around one grade more favourably than students), the relative order of faculties is not very different: the correlation of opinions is, depending on the indicator and subject, around 0.5 to 0.8. A library regarded as poor by students is therefore usually also assessed poorly by professors. On the other hand, repeated examinations of the same subject show that students' opinions hardly change at most universities. Actual improvements, however, such as a faculty moving into new premises, are clearly reflected; a sign that students do assess according to objective criteria.
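As an illustration, the agreement between student and professor assessments described above can be quantified with a Pearson correlation. The grade values below are invented, not CHE data; note that the professors grade roughly one grade more favourably, yet the ordering of faculties is similar:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equally long sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented average grades for four faculties (lower = better):
student_grades   = [1.8, 2.4, 2.9, 3.3]
professor_grades = [1.3, 1.0, 2.2, 2.1]   # about one grade more favourable

print(round(pearson(student_grades, professor_grades), 2))
```

With these invented figures the correlation lands in the 0.5 to 0.8 range cited above, even though the two groups grade at different levels.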
What are bibliometrics and how does CHE do it?
In science, research results are made public primarily via publications. Bibliometric analyses are used to show the publication activity and, where appropriate, the publication impact of scientists at universities in the subjects examined. In principle, the publication analysis carried out for the CHE higher education ranking is based not on a full survey of all publications in the period considered, but on queries in subject databases whose contents meet certain quality standards. For a relative ranking, uniform coverage of the publications of all faculties involved is more important than completeness. Publications by professors and other scientific staff are queried (in medicine, only publications by professors); the resulting indicator is "Publications per Scientist", "Publications per Professor" or "Publications per Year". If the database used is the (Social) Science Citation Index, the impact of a paper can also be measured on the basis of citations and shown as "citations per publication".
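A minimal sketch of how such per-capita indicators can be derived from database query results. All faculty names and counts below are invented for illustration; this is not CHE's actual pipeline:

```python
# Hypothetical query results per faculty: publication count, citation
# count, and number of scientists in the period considered.
faculties = {
    "Faculty A": {"publications": 240, "citations": 960, "scientists": 30},
    "Faculty B": {"publications": 90,  "citations": 450, "scientists": 12},
}

def bibliometric_indicators(data):
    """Derive 'publications per scientist' and 'citations per publication'."""
    result = {}
    for name, d in data.items():
        result[name] = {
            "publications_per_scientist": d["publications"] / d["scientists"],
            "citations_per_publication": d["citations"] / d["publications"],
        }
    return result

for name, indicators in bibliometric_indicators(faculties).items():
    print(name, indicators)
```

Note that "citations per publication" is independent of faculty size, whereas "publications per scientist" normalises output by staff numbers, which is what makes faculties of different sizes comparable.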
Can research achievement be measured?
To compare the research activity of faculties, it is possible to find quantitatively measurable figures that enable meaningful comparisons. In science, for example, research results are made public primarily via publications. With the aid of bibliometric analyses, figures such as "publications per professor" can be determined for publication activity and, if required, "citations per publication" for publication impact. Although it has to be considered an input figure, acquired research funding can also provide information about the research achievement of a faculty, in particular in engineering but also in the natural sciences, as the sponsor attaches to the allocation the expectation of workable results. Since several researchers compete for a limited amount of money, the most promising competitor will receive the funding. The faculties most active in research can be identified by considering several of these indicators together.
Where in Germany can I study which course?
In the ranking, all universities offering a course in a certain subject are listed. You may also use the search function.
Courses and subjects not listed can be found in the Higher Education Compass, published in cooperation with the German Rectors' Conference.
Can conclusions be drawn from the results of the CHE UniversityRanking student questionnaires regarding the quality of the graduates from a HEI?
The CHE UniversityRanking primarily serves potential students. It is designed to help them choose a suitable HEI and to simplify their review of the higher education landscape. To achieve this, the CHE collects assessments (by students, graduates and professors) and data (e.g. doctoral theses, research funds or publications). The combination of these assessments and facts gives a differentiated picture of the performance of HEIs in teaching and research. Student assessments in the CHE UniversityRanking state how students currently assess the study situation at their HEI (e.g. rooms, libraries, opportunities for study trips abroad, mentoring and tutoring). However, the student surveys do not measure the performance of individual students. This means that it is not possible to draw conclusions about the quality of a department's graduates from its results in the student surveys. We can merely assume that students in a "well-ranked" department found better study conditions there. The student surveys expose weaknesses in the study conditions of individual HEIs, which might trigger a process of change and improvement from which students will ultimately benefit. The CHE supports HEIs in analysing their need for improvement by making the relevant detailed analyses from the student surveys available to them free of charge.
Why is there no ranking of master programmes?
The CHE Ranking concentrates on undergraduate studies. In addition, the number of students in master programmes is smaller than in bachelor programmes, making it harder to reach enough students for a reliable student judgement. Nevertheless, rankings of master programmes are available for selected subjects, e.g. Computer Science and Business Studies.
Why does the CHE not calculate a league table (with individual numeric positions)?
The CHE UniversityRanking assigns ranking groups rather than individual numeric positions in a league table: HEIs are allocated to a top group, a middle group and a bottom group. This is done because assigning individual ranking positions risks misinterpreting minor differences in the numerical values of indicators as real differences in quality or performance. League tables suggest that every difference in ranking position marks a real difference, and hence they tend to exaggerate the differences between institutions. By contrast, the group ranking method ensures that the top and bottom groups can be clearly distinguished statistically from the overall average value. Differences within the groups, however, can be considered insignificant, and for this reason the HEIs within one group are listed in alphabetical order.
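The group assignment described above can be sketched as follows. This is a simplified illustration that assumes a confidence-interval test against the overall mean; it is not CHE's actual statistical procedure, and all scores are invented:

```python
import math

def assign_groups(faculties, z=1.96):
    """Assign top/middle/bottom groups.

    faculties: {name: (mean_score, std_dev, n_respondents)}, higher = better.
    A faculty is 'top' only if its whole 95% confidence interval lies above
    the overall mean, 'bottom' only if it lies entirely below; everything
    else falls into the middle group.
    """
    overall = sum(m for m, _, _ in faculties.values()) / len(faculties)
    groups = {}
    for name, (mean, sd, n) in faculties.items():
        half_width = z * sd / math.sqrt(n)   # half-width of the 95% CI
        if mean - half_width > overall:
            groups[name] = "top"
        elif mean + half_width < overall:
            groups[name] = "bottom"
        else:
            groups[name] = "middle"
    return groups

scores = {  # invented example data: (mean, std dev, respondents)
    "Uni A": (2.9, 0.5, 100),
    "Uni B": (2.0, 0.6, 80),
    "Uni C": (1.2, 0.5, 90),
}
print(assign_groups(scores))
```

The point of the interval test is exactly the one made above: a faculty whose score is indistinguishable from the average, given the sampling uncertainty, stays in the middle group instead of being assigned a misleading numeric position.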