Simply click on the most frequently asked questions regarding the ranking to get the answers.
The data for the individual subjects are updated in a three-year cycle: humanities, psychology, pedagogy and engineering in 2016; natural sciences, mathematics, computer science, medicine, nursing and sports in 2015; law, economics, social sciences and social work in 2014.
The enormous amount of work necessary for such a complex ranking makes a yearly update of the data for all subjects impossible. This would be beyond our capacities and the capacities of the universities, which assist with a considerable amount of preparatory work for the ranking project. Except for a few individual cases, changes at universities do not take place at a speed that would make an annual update necessary. In addition, most indicators do not refer to a single year but are averages across several years (e.g. publications, duration of studies), and it is extremely unlikely that such values would change substantially within one year.
The subjects included in the ranking cover the degree courses of around 80% of first-year students. The subjects that are not included are, for the most part, subjects that are only offered at a few universities, or in which only a relatively small number of students per university are enrolled.
The CHE UniversityRanking 2016/17, for which the humanities, psychology, pedagogy and engineering were examined, was published in spring 2016. For further information on this project, see www.che-ranking.de.
Currently, the CHE Ranking includes the indicator "Research Reputation" only for a few subjects at universities. The indicator is displayed for the following subjects: Business Administration, Economics, Law, Medicine and Dentistry. It reflects the faculties' reputation among the professors of the subject: for the Research Reputation, professors are asked which universities they consider "leading" in their subject. Within a subject community there is, as a rule, a clear picture of the standing or reputation of the individual faculties. Even if the professors do not know all the faculties of their subject in detail, such a reputation hierarchy nevertheless exists in their minds. This indicator reflects the opinion of the professors; it is not an indicator of a university's performance. A faculty's reputation may, but does not have to, match its actual achievements in research. Some faculties may still live off past achievements; conversely, there are faculties whose achievements are not yet recognised among the professors. Nevertheless, this indicator can be meaningful information, not least because the reputation of a university also attaches to its graduates.
Higher education rankings are often accused of only including subjective opinions and judgements about universities, which cannot paint a "real" picture of the situation there. This reproach applies to some rankings in which, for example, only students, professors or employers have been asked for their assessments. It does not, however, apply to the CHE/ZEIT OnlineRanking. The approach of our ranking is to obtain a precise and nuanced picture of the study conditions and achievements of the universities from different perspectives. This includes both facts and the assessments and opinions of students about studying at their own university. Universities exist for students, and as users or "customers" of the universities they can competently assess study conditions and teaching. Yet the ranking covers a great deal more. We have collected a number of facts about the universities from different data sources: depending on the subject, the ranked indicators include, for example, the average duration of studies, final marks, failure rates, the ratio of students to professors, and also indicators of research activity, such as the number of PhDs, publications or the amount of acquired research funds.
The CHE has, in contrast to some other rankings, deliberately refrained from asking employers about the universities. There are several reasons for this decision. For one thing, the reputation of the university where a student has graduated is not as important for job applications as is sometimes suggested. There are also methodological reasons against surveying employers. Such a survey would in many cases only reinforce existing prejudices: frequently, the surveyed persons name the university where they themselves have studied. It has happened more than once that employers have described a university as a top institution in rankings even though it did not even offer the subject in question.
The "Compact Ranking" shows four to five selected criteria, both on the Internet and in the ZEIT study guide. Up to 34 different criteria are ranked for each subject, relating, for example, to the composition of the student body, study success, international orientation, facilities or research activities at the faculty. As the values of different criteria are not combined in the CHE ranking, there are consequently up to 34 different ranking lists for each subject. For an initial orientation, we have therefore selected criteria that we assume to be of special interest. The selected indicators vary between subjects, but are generally composed of the students' overall opinion, the professors' opinion, a research indicator (such as the number of PhDs, publications or third-party funds), an indicator of facilities (e.g. the students' opinion of the library or the number of therapy rooms), and one further fact (such as the staff/student ratio) or opinion (e.g. the students' opinion of the organisation of studies or of support by teaching staff). On the Internet, the Ranking Overview takes the user to the detail pages, which show the indicators individually with the corresponding values and ranking groups.
In addition to facts about various areas, the higher education ranking contains a number of student opinions. Contrary to the frequently voiced view that these are purely "well-being indicators", they are perfectly meaningful, provided the questions are asked in sufficient detail. On the one hand, previous years have shown that students and professors grade on different levels (professors usually rate around one grade better than students), but the relative order of faculties differs little: depending on the indicator and subject, the correlation of opinions is around 0.5 to 0.8. A library regarded as poor by students is therefore usually also given a poor assessment by professors. On the other hand, repeated examinations of the same subject show that students' opinions hardly change at most universities. Actual improvements, however, such as a faculty moving into new premises, are clearly reflected; a sign that students do assess according to objective criteria.
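To illustrate what a correlation between the two groups' assessments means, the following sketch computes a Pearson correlation over invented faculty grades. The data and the grading scale (1 = best, 6 = worst) are hypothetical and do not reproduce the CHE's actual survey data or method.

```python
# Illustrative sketch with hypothetical data: correlating student and
# professor assessments of the same faculties. Professors tend to grade
# about one step better, but the ordering of faculties is similar.

def pearson_correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical average library grades for five faculties (1 = best, 6 = worst).
student_grades   = [2.1, 3.4, 2.8, 1.9, 3.0]
professor_grades = [1.3, 2.5, 2.2, 1.1, 1.9]

r = pearson_correlation(student_grades, professor_grades)
```

A high positive value of `r` indicates that, despite the systematic offset in grade level, the two groups rank the faculties in much the same order.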
In science, research results are made public above all via publications. Bibliometric analyses are used to show the publication activity and, where possible, the publication impact of scientists at universities in the subjects examined. In principle, the publication analysis carried out for the CHE higher education ranking is not based on a full survey of all publications of the period considered, but on queries in subject databases whose contents meet certain minimum quality standards. For a relative ranking, uniform coverage of the publications of all faculties involved is more important than completeness. Publications by professors and other scientific staff are queried (in medicine and dentistry, only publications by professors); the resulting indicator is "Publications per Scientist", "Publications per Professor" or "Publications per Year". If the database used is the (Social) Science Citation Index, the impact of a paper can also be measured on the basis of citations and shown as "Citations per Publication".
To compare the research activity of faculties, it is possible to find quantitatively measurable figures that enable meaningful comparisons. In science, for example, research results are made public above all via publications. With the aid of bibliometric analyses, figures such as "Publications per Professor" can be determined for the publication activity and, if required, "Citations per Publication" for the publication impact. Although it has to be considered an input figure, acquired research funding can also provide information about the research achievement of a faculty, particularly in engineering but also in the natural sciences, as the funder links the allocation to the expectation of usable results. Since several researchers apply for a limited amount of money, the most promising competitor will receive the funding. The faculties that are most active in research can be identified by considering several of these indicators together.
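The arithmetic behind such per-head indicators is straightforward; the sketch below shows it with invented figures. The counts and the faculty are hypothetical, chosen only to make the ratios concrete.

```python
# Illustrative sketch with hypothetical figures: deriving ratio indicators
# such as "Publications per Professor" and "Citations per Publication"
# from counted database query results.

def publications_per_professor(publication_count, professor_count):
    """Average number of publications per professor over the period considered."""
    if professor_count == 0:
        raise ValueError("faculty has no professors recorded")
    return publication_count / professor_count

def citations_per_publication(citation_count, publication_count):
    """Average impact of a paper, measurable when the database records citations."""
    if publication_count == 0:
        raise ValueError("no publications recorded")
    return citation_count / publication_count

# Hypothetical faculty: 240 publications by 20 professors, cited 960 times in total.
ppp = publications_per_professor(240, 20)
cpp = citations_per_publication(960, 240)
```

Normalising by headcount (or by year) is what makes faculties of different sizes comparable at all; the raw publication count alone would simply favour large faculties.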
In the Compact Ranking, all universities offering a course in a certain subject are listed. You may also use the search function.
Courses and subjects not listed can be found in the DAAD University Guide, published in cooperation with the German Rectors' Conference.
The CHE UniversityRanking primarily serves potential students. It is designed to help them choose a suitable HEI and to simplify their review of the higher education landscape. To achieve this, the CHE collects assessments (by students, graduates and professors) and facts (e.g. doctoral theses, research funds or publications). The combination of these assessments and facts gives a differentiated picture of the performance of HEIs in teaching and research. Student assessments in the CHE UniversityRanking state how students currently assess the study situation at their HEI (e.g. rooms, libraries, opportunities for study trips abroad, mentoring and tutoring). However, student surveys do not measure the performance of individual students, so it is not possible to draw conclusions about the quality of the graduates from a department's results in the student surveys. They simply mean that we can assume that students in a "well-ranked" department found better study conditions there. Student surveys expose weaknesses in the study conditions of individual HEIs, which may trigger a process of change and improvement from which students ultimately benefit. The CHE supports HEIs in analysing their need for improvement by making the relevant detailed analyses from the student surveys available to them free of charge.
In Master's programmes, the number of enrolled students is often too small to achieve an adequate return rate. Nevertheless, it is possible to survey the Master's students in some fields. Master's students in Business Administration and Economic Sciences were surveyed in 2014, and students in Computer Science in 2015. The results of the Master's student survey in Psychology and Engineering will be published in autumn 2016.
The CHE UniversityRanking gives ranking groups rather than individual numeric positions in a league table. HEIs are allocated to a top group, a middle group and a bottom group. This is done because assigning individual ranking positions carries the danger of misinterpreting minor differences in the numerical values of indicators as real differences in quality or performance. League tables suggest that each difference in ranking position marks a real difference, and hence they tend to exaggerate the differences between institutions. By contrast, the group ranking method ensures that the top and bottom groups can be clearly distinguished statistically from the overall average value. Differences within a group, however, can be considered insignificant, and for this reason the HEIs within one group are listed in alphabetical order.
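One common way to make such a grouping statistically grounded can be sketched as follows: a faculty is placed in the top group if the 95% confidence interval of its mean rating lies entirely above the overall mean, in the bottom group if it lies entirely below, and in the middle group otherwise. This is a didactic simplification with invented data and assumes "higher rating = better"; it is not the CHE's exact procedure.

```python
# Simplified, illustrative group assignment: compare a faculty's 95%
# confidence interval with the overall mean across all faculties.
# Assumption (not from the source): higher rating = better.

import statistics

def ranking_group(values, overall_mean):
    """Classify one faculty's ratings as 'top', 'middle' or 'bottom'."""
    n = len(values)
    mean = statistics.mean(values)
    sem = statistics.stdev(values) / n ** 0.5   # standard error of the mean
    lower = mean - 1.96 * sem                   # 95% confidence interval
    upper = mean + 1.96 * sem
    if lower > overall_mean:
        return "top"      # clearly above the overall average
    if upper < overall_mean:
        return "bottom"   # clearly below the overall average
    return "middle"       # not statistically distinguishable
```

Because membership in a group only requires being distinguishable from the overall mean, differences between two faculties inside the same group carry no statistical weight, which is exactly why they are listed alphabetically rather than ranked.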