<p>The Quacquarelli Symonds (QS) ranking of world universities has just been published for the year 2022. There is much jubilation in India at the Indian Institute of Science (IISc) being ranked the top research university in the world. I am delighted that IISc is ranked at the top, as one can now look at the rankings critically without being subjected to the criticism that this is just a case of sour grapes.</p>
<p>So, we should take a critical look at the ranking process and ask ourselves how justified IISc's rank is vis-à-vis those ranked at par with or below IISc in research. These other universities include well-known centres of education and research such as MIT, Harvard University, Caltech, Stanford University, the University of California, Berkeley, and Cambridge University. Therefore, it is vital to analyse the validity of research rankings and not place so much importance on IISc's rank that we become complacent about our research performance.</p>
<p>It is quite ludicrous to suggest that the performance of an entire institute can be quantified and reduced to a single number, much as an individual's abilities are reduced to her or his IQ. After all, education and research involve the interplay of creativity, innovation and mentoring, and are much too complex to be captured by a single performance metric. It is human nature to compare and quantify, and as long as people are willing to value rankings, there will be companies to do the task. But it is for people in decision-making capacities to view these critically and not make major decisions based only on the rankings.</p>
<p>The QS ranking system works by considering six criteria. These are: 1) academic reputation, 2) reputation as perceived by employers, 3) the faculty-to-student ratio, 4) citations per faculty, 5) the ratio of international to national faculty, and 6) the ratio of international to national students.</p>
<p>The first criterion – academic reputation – is decided by a survey conducted among about 100,000 experts in teaching and research. The subjective perception of these experts governs the score for academic standing, yet this criterion carries the largest weight, 40 per cent, in determining rankings. A similar survey, drawing about 50,000 responses from employers, decides the second criterion and contributes 10 per cent of the weight to the ranking system. Thus, these intangibles decide half of the ranking points.</p>
<p>The remaining four criteria can be quantified from the data provided by universities. The faculty-to-student ratio of a university carries a weight of 20 per cent. However, a larger faculty-to-student ratio does not always translate into better instruction for students. In many research institutes, support faculty such as scientific and technical officers are counted as teaching faculty, although they are only marginally involved in teaching. In many universities and institutes, well-established senior faculty are less accessible to students; they are often less involved in the institution's academic activities as they tend to be busy working on various committees, nationally and internationally.</p>
<p><strong>Also read: <a href="https://www.deccanherald.com/opinion/second-edit/iisc-and-its-will-to-excel-997191.html" target="_blank">IISc and its will to excel</a></strong></p>
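<p>To make the weighting concrete, the sketch below combines the six criteria into a single score as a plain weighted sum. It is only an illustration of the arithmetic described in this article, not QS's actual methodology or code: the 40, 10 and 20 per cent figures are those quoted here (citations per faculty, discussed next, also carries 20 per cent), while the assumption that the two international ratios split the remaining 10 per cent equally is mine.</p>
<pre>
# A minimal sketch of a weighted composite score, assuming each criterion has
# already been scaled to 0-100. The first four weights are those quoted in the
# article; the two international ratios are assumed to share the rest equally.

WEIGHTS = {
    "academic_reputation": 0.40,     # survey of ~100,000 academics
    "employer_reputation": 0.10,     # survey of ~50,000 employer responses
    "faculty_student_ratio": 0.20,
    "citations_per_faculty": 0.20,   # weight given in the next paragraph
    "international_faculty": 0.05,   # assumed split of the remaining weight
    "international_students": 0.05,  # assumed split of the remaining weight
}

def composite_score(indicator_scores: dict) -> float:
    """Weighted sum of the six indicator scores."""
    return sum(WEIGHTS[name] * indicator_scores[name] for name in WEIGHTS)

# Hypothetical institute: a perfect citations score cannot offset middling
# reputation scores, because the two surveys together carry half the weight.
example = {
    "academic_reputation": 60.0,
    "employer_reputation": 55.0,
    "faculty_student_ratio": 70.0,
    "citations_per_faculty": 100.0,
    "international_faculty": 30.0,
    "international_students": 20.0,
}
print(round(composite_score(example), 1))  # 66.0
</pre>
<p>Even with a perfect citations score, the hypothetical institute above lands in the mid-sixties, because the two reputation surveys together control half of the composite.</p>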
<p>Another criterion is citations per faculty, and this parameter constitutes 20 per cent of the ranking points. A citation is counted each time another publication cites a published paper. However, in scientific circles, boosting the citations of one's papers requires circulating among scientists by attending conferences, inviting other scientists for seminars and personally socialising one's ideas.</p>
<p>So, the number of citations may scale more with the visibility of a paper than with the true scientific impact of the work. It is also not uncommon for a paper to be cited only to discredit it; this nevertheless counts as a citation. Notwithstanding these well-known drawbacks, the number of citations remains one of the leading metrics used by the research community and funding agencies to judge research performance. For the QS rankings, the citations received per faculty over a five-year period starting seven years before the assessment are considered. The citations that count towards the ranking exclude self-citations, that is, instances where authors cite their own papers. The citation count is also normalised against the total citations in the field, as not all fields have the same number of researchers or total publications.</p>
<p>Citations can sometimes lead to incorrect conclusions. An example is Panjab University, which was ranked the top Indian university/institute in the Times Higher Education ranking of Asian institutes in 2013, upstaging institutes that had consistently outperformed it on almost all metrics. The anomaly was attributed to the large number of citations received by a few faculty and their students who were joint authors on papers reporting the discovery of the Higgs boson at the Large Hadron Collider (LHC) run by CERN in Switzerland. These papers were authored by a few thousand researchers from a few hundred universities and institutes.</p>
<p>Such anomalies aside, the impact of a paper is usually not realised within a few years of its publication; it is better judged by the longevity of its citations. Thus, restricting the count to citations of recent publications alone is flawed. A better metric may be the total citations received over the previous five years by all the papers an institute or university has published since its inception. Although I am generally opposed to the idea of ranking, this way of reckoning the citation score would give due weight to pioneering publications that have remained important over long periods. In another ranking system, the number of papers published by an institute was used as a parameter in deciding rankings. This led to an anomaly in the 2010 rankings involving the University of Alexandria, which was surprisingly placed very high that year. The anomaly was traced to a single professor who misused his position as the editor of a journal to publish a large number of articles in it.</p>
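<p>The sketch below contrasts, in simplified form, the two ways of counting citations discussed above: the windowed citations-per-faculty score, which drops self-citations and normalises by the field's total citations, and the alternative suggested here, which counts recent citations to every paper an institute has ever published. The data structures and the normalisation step are illustrative assumptions, not a reproduction of QS's actual methodology.</p>
<pre>
# A minimal sketch, under simplified assumptions, of the two citation counts
# discussed above; it is illustrative only and does not reproduce QS's method.
from dataclasses import dataclass

@dataclass
class Paper:
    authors: set     # names of the paper's authors
    year: int        # year of publication
    subject: str     # field of the paper, used for normalisation
    citations: list  # (citing_authors, citation_year) pairs received so far

def windowed_score(papers, faculty_count, pub_years, field_totals):
    """Citations per faculty for papers published within a fixed window,
    excluding self-citations and normalising by total citations in the field."""
    score = 0.0
    for p in papers:
        if p.year not in pub_years:
            continue  # papers published outside the window contribute nothing
        external = sum(1 for citing, _ in p.citations
                       if not (citing & p.authors))  # drop self-citations
        score += external / field_totals[p.subject]
    return score / faculty_count

def longevity_score(papers, faculty_count, recent_years):
    """The alternative: citations received in recent years by every paper the
    institute has published since its inception."""
    total = sum(1
                for p in papers  # no cutoff on publication year
                for citing, year in p.citations
                if year in recent_years and not (citing & p.authors))
    return total / faculty_count
</pre>
<p>Under the first count, a pioneering paper published before the window contributes nothing, however heavily it is still cited; under the second, it keeps contributing for as long as it keeps being cited.</p>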
<p>The remaining two metrics, namely the fractions of international faculty and international students, are driven by the sociological, financial and geographical circumstances of where an institute is located. Although these two criteria carry the least weight, they can unfairly favour one institute over another for largely non-academic reasons.</p>
<p>The question of whether it is necessary and desirable to rank universities and institutes at all is a moot point. Many universities resort to window-dressing their data to improve their rankings. Often, they have personnel entrusted with finding ways to improve the rankings and embellish the achievements of students and faculty. Many a time, scarce resources are frittered away in showcasing the institute.</p>
<p>Critical evaluation of the ranking system has become important as funding agencies find it an easy metric on which to base their decisions. The ranking process is further encouraged by the scientific publishing machinery (already a money-spinning business), which stands to profit even more when researchers increase their publication numbers to improve their institute's ranking, besides boosting their own individual metrics.</p>
<p>I believe that just as we should do away with marks and ranks for students in exams (which I wrote about in Deccan Herald on May 16, 2019), we should also do away with the ranking of universities. Instead, it is sufficient to divide universities into different tiers and let those who need to decide delve deeper to find what suits them. Such a tier-based division can also be done for specific categories such as best faculty, best campus, best peer group and so forth. It would certainly help prospective students and their parents make decisions based on the categories they care most about, instead of bestowing bragging rights on institutes and universities to advertise themselves.</p>
<p><em>(The author is an Emeritus Professor and an INSA Senior Scientist in the Solid State and Structural Chemistry Unit, Indian Institute of Science)</em></p>
<p><em>Disclaimer: The views expressed above are the author’s own. They do not necessarily reflect the views of DH.</em></p>