
How Dependable Are The Parameters For The NIRF Rankings?

Started in 2015, the National Institutional Ranking Framework (NIRF) was put in place to assess the standing and stature of institutes and universities across India on a range of parameters. These parameters, which have been criticised repeatedly for their non-inclusive nature, narrow approach and vagueness, are used to rank the colleges. The NIRF rankings compare colleges across streams like engineering, pharmacy, law, management, architecture, and more. Along with these categories, the rankings also grade institutes on their overall standing, which is calculated by assigning a weight to each parameter and then using these weights to attach a composite score to every institute.

Although the NIRF rankings use a range of parameters that include important and specific metrics, like the number of IPRs and patents or the number of PhD students graduated from an institute, they also rely on vague and over-simplified factors like teacher-student ratio and peer perception. In addition, the rankings require institutes to fill in the relevant details on an online data capturing system, which, combined with an online perception capturing system, is used to rate the institute. The list of parameters and the weight associated with each can be found on the official NIRF website.
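To make the weighting idea concrete, here is a minimal sketch of how such a composite score could be computed. The parameter-group names and weights below are illustrative assumptions chosen for the example, not the official NIRF figures, which are published in the framework's methodology documents.

```python
# Sketch: combining per-parameter scores into one weighted composite score.
# The group names and weights here are assumptions for illustration only;
# the real weights are listed in the NIRF methodology documents.

# Hypothetical weights for five broad parameter groups (they sum to 1.0).
WEIGHTS = {
    "teaching_learning_resources": 0.30,
    "research_professional_practice": 0.30,
    "graduation_outcomes": 0.20,
    "outreach_inclusivity": 0.10,
    "perception": 0.10,
}


def composite_score(parameter_scores: dict) -> float:
    """Combine per-parameter scores (each on a 0-100 scale) into a single
    weighted score, also on a 0-100 scale."""
    return sum(WEIGHTS[name] * parameter_scores[name] for name in WEIGHTS)


# Example: a hypothetical institute's scores on each parameter group.
example = {
    "teaching_learning_resources": 72.5,
    "research_professional_practice": 64.0,
    "graduation_outcomes": 81.0,
    "outreach_inclusivity": 58.0,
    "perception": 45.0,
}

print(round(composite_score(example), 2))  # 67.45
```

Institutes would then be sorted by this composite score to produce the rank list, which is why the choice of weights, and how the underlying data is verified, matters so much.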

For a framework that relies so strongly on data submitted by the institutes themselves, the verification process carries significant weight. The 2019 report, released on April 4, describes verification as a three-pronged mechanism comprising "Scouting for Outliers," "Communication with Nodal Officers," and "Verification of Data on Publications." Going through the description of these three processes, the flaws in the methodology stand out.

For starters, under the description of Scouting for Outliers, the report mentions committees of domain 'experts' that examined the submitted data on 'various parameters,' without disclosing anything about the experts employed or the parameters used. Moreover, the report describes how institutes whose data 'seemed exaggerated' were asked to confirm or correct it telephonically. Re-confirming data with the very source you originally got it from is funny, at best, and telling of the existing machinery's incompetence, at worst.

Moving on to Communication with Nodal Officers, the apple does not seem to have fallen far from the tree. The institutions themselves were asked to appoint one of their senior functionaries as a nodal officer, who was to be the point of contact between the NIRF and the institution. These nodal officers, who, mind you, are senior functionaries of the very organisations being ranked, were to respond to queries on anomalies in the submitted data and were responsible for its verification. If that does not surprise you enough, the report also states, "while significant efforts were made to authenticate the data, the final responsibility for the accuracy of the submitted data lies with the concerned institutions." This absence of any secondary means of verifying the data has always been a point of criticism of the rankings.

Although the NIRF has carried these shortcomings for a while, there is no denying that the rankings hold strong significance, both as the only reliable guide for students choosing between their college options and as an official source of information for statistical analysis in further studies and research. With April and May being crucial months for prospective college students, releasing the rankings at this time plays a huge role in swaying public opinion and influencing individual choices.

This year, the framework expanded to cover around 3,127 institutions, about 400 more than in 2018. Of these, around 970 were engineering institutes, 555 were management institutes, and 1,479 fell into the 'overall' category.

One of the least surprising results in the 2019 NIRF rankings was the declaration of Miranda House as the top institution in the 'colleges' category. For three straight years, since 2017, Miranda House has retained the first position. While this does speak to the college's competence, it also points to a monopoly over better facilities held by a select few institutions in the country.

At a time when dissent in educational spaces is being forced to die a silent death, it is rather ironic that the top five colleges in the country have all seen major protests and uprisings against their respective administrations in recent times. From the Pinjra Tod campaign at Miranda House, to the demand for elections at Chennai's Presidency College, to the uprising against sexist traditions at Delhi's Hindu College, to the protests at St. Stephen's College, the causal relationship between dissent and incompetence that our mainstream media so dearly propagates falls flat. This false portrayal of dissent as an evil force fails again when one finds the Indian Institute of Science, Bangalore, whose scholars protested against low stipends, and Delhi's infamous Jawaharlal Nehru University as the top two universities in the country.

In the engineering category, the Indian Institute of Technology, Madras, has bagged the first rank for four years in a row. The fact that the centre had initially overlooked IIT Madras as a candidate while naming Institutes of Eminence last year, despite its stellar performance in these rankings, is a stark contradiction.

Additionally, although the presence of eight IITs in the country's top ten makes sense, given the funds poured into them, the same monopoly we saw earlier is reflected here too, especially if one looks at the rankings from previous years.

One of the major shocks in the engineering category was the slide of the Birla Institute of Technology and Science (BITS), Pilani, another Institute of Eminence, from rank 17 in 2018 to rank 25 this year. This raises serious doubts about the centre's choice for the status, given that other private institutes, such as the Vellore Institute of Technology, ranked higher than BITS Pilani and were not given the IoE tag.

Unsurprisingly, the Indian Institutes of Management topped the management category, where the framework expanded the number of evaluated colleges from 50 in 2018 to 75 in 2019. In the pharmacy category, both Jamia Hamdard and Chandigarh's Panjab University moved up to dislodge Mohali's National Institute of Pharmaceutical Education and Research from the first rank. The streams of law and architecture saw an increase of five colleges each, taking their tallies to 15, a low number that reflects the lack of interest among colleges in these streams in taking part in the NIRF rankings.

In conclusion, although the NIRF rankings suffer from a range of shortcomings at present, the fact that they are referred to heavily by students and authorities alike says a lot about their worth. Alternative and more reliable classifications of educational institutes, like the QS and THE world university rankings, do rank Indian institutes, but they are neither as inclusive of local colleges as the NIRF nor designed in the context of the Indian education system. In light of this, the current framework needs to be revamped and expanded to provide an appropriate yardstick for the country's education system.

Featured image source: Miranda House/Website; Amit Roykaran/Wikimedia Commons; Nithinmohan/Wikimedia Commons.