Are university rankings useful to improve research? A systematic review

Marlo M. Vernon

1 Department of Clinical and Digital Health Sciences, College of Allied Health Sciences, Augusta University, Augusta, Georgia, United States of America

E. Andrew Balas

Shaher Momani

2 Department of Mathematics, Faculty of Science, The University of Jordan, Amman, Jordan

3 Nonlinear Analysis and Applied Mathematics (NAAM) Research Group, Faculty of Science, King Abdulaziz University, Jeddah, Kingdom of Saudi Arabia

Associated Data

All relevant data are within the paper and its Supporting Information files.

Abstract

Concerns about the reproducibility and impact of research urge improvement initiatives. Current university ranking systems evaluate and compare universities on measures of academic and research performance. Although often useful for marketing purposes, the value of ranking systems when examining quality and outcomes is unclear. The purpose of this study was to evaluate the usefulness of ranking systems and to identify opportunities to support research quality and performance improvement.

A systematic review of university ranking systems was conducted to investigate research performance and academic quality measures. To be eligible, a ranking system had to include at least 100 doctoral-granting institutions, be currently produced on an ongoing basis, include both global and US universities, publish its rank calculation methodology in English, and calculate ranks independently. Ranking systems also had to include some measures of research outcomes. Indicators were abstracted and contrasted with basic quality improvement requirements. Aggregation methods, the validity of research and academic quality indicators, and suitability for quality improvement within ranking systems were also explored.

A total of 24 ranking systems were identified and 13 eligible ranking systems were evaluated. Six of the 13 rankings are 100% focused on research performance. Among systems that report weighting, 76% of the total rank weight is attributed to research indicators and 24% to academic or teaching quality. Seven systems rely on reputation surveys and/or faculty and alumni awards. Rankings influence academic choice, yet research performance measures are the most heavily weighted indicators. There are no generally accepted academic quality indicators in ranking systems.

No single ranking system provides a comprehensive evaluation of research and academic quality. A combined approach using the Leiden, Thomson Reuters Most Innovative Universities, and SCImago ranking systems may provide institutions with more effective feedback for research improvement. Rankings that rely extensively on subjective reputation and “luxury” indicators, such as award-winning faculty or alumni who are high-ranking executives, are not well suited for academic or research performance improvement initiatives. Future efforts should better explore measurement of university research performance through comprehensive and standardized indicators. This paper could serve as a general literature citation when one or more university ranking systems are used in efforts to improve academic prominence and research performance.

Introduction

Considering the significance of university innovation, there is a pressing need for outcome studies and quality improvement initiatives in the research enterprise. Keupp et al. [1] point out that current innovation management is characterized by conflicting predictions, knowledge gaps and theoretical inconsistencies. These issues may negatively impact the translation of academic research into discovery and applicable societal benefit. Research quality issues exist within university research; in the last 10 years, several studies and commentaries have highlighted the need for improvement in transparency, replicability, and meaningful research outcome reporting [2–6].

Many university administrators rely on university ranking systems as indicators of improvement over time and in comparison to other institutions. Universities promote improvement in standings as evidence of progress in the academic and research environments when requesting funding from government sources [7]. Other universities use ranking systems as evidence of cost-benefit for previously funded initiatives and to support additional funding requests. Consumers use university rankings to evaluate higher education opportunities both nationally and internationally.

Previous reviews of university rankings found that emphasis on reputation and institutional resources may not truly represent university quality [8–12]. A review of five ranking systems by Dill & Soo [8] focused on the suitability of rankings as representations of academic quality. Their findings demonstrate that ranking system indicators are not sufficient for informing policy decisions or consumer choice. Suggested academic quality indicators include student entry criteria, program completion rates, the proportion of graduates entering employment upon graduation, professional training, higher degrees, and the average starting salaries of graduates. Frey and Rost [13] concluded that publications and citations were not suitable indicators of scientific institutional worth. Their results suggest that multiple criteria should be implemented when assessing institutions for quality or for career decisions.

Moed [ 12 ] most recently evaluated five world ranking systems and concluded that while ranking systems have improved in the last decade, the tendency to be one-dimensional hinders a more comprehensive university evaluation.

An evaluation of the Shanghai and Times Higher Education rankings conducted 70 simulations to replicate the published rankings; the results indicate that inaccurate weights were used to calculate the overall scores [10]. The lack of replicability emphasizes the need for ongoing research quality evaluation and improvement. Trustworthiness of research influences not only scientific credibility but also effective innovation.

Assessment of the validity of research and academic quality indicators in university rankings is often unexplored; only once in the literature were two ranking systems so evaluated [14]. Integrating the much-cited definitions of validity by Carmines and Hammersley, validity is the extent to which a measuring instrument accurately represents those features of a phenomenon that it is intended to describe [15, 16].

While academic institutions have a responsibility to ensure that research processes and outcomes make efficient and prudent use of resources, standardized research performance evaluation mechanisms for comparison across institutions do not currently exist. Academic institutions and administrators need reliable evaluation indicators of research and academic quality, and university ranking systems are often used for this purpose. The objective of this study is to evaluate the usefulness of ranking systems for both academic and research performance and quality improvement, through a systematic review of publicly available university ranking systems.

Methods

We conducted a systematic review of university ranking systems utilizing the PRISMA protocol and checklist, and researched relevant measures to ascertain commonly used indicators for evaluating research performance and innovation (Fig 1, S1 Table) [17]. The review protocol for this study is available from the authors.

Fig 1. PRISMA flow diagram.

Eligibility criteria

Ranking systems that include over 100 doctoral-granting universities in their sample were eligible. Rankings had to be currently produced on an ongoing basis and include US and global universities. Ranking systems also needed to publish their rank calculation methodology in English. Rankings were excluded if they were based solely on reputation surveys, did not include research outcome indicators, or ranked institutions only by subject area.

A search of publicly available ranking systems for universities was undertaken between January and March 2017, through internet search and qualitative literature review. Search terms included “university ranking,” “research productivity,” “measurement,” and “ranking university research.” Ranking system owners and vice presidents of research administration were also consulted. Our searches were not limited to a particular field. Search engines included PubMed (search strategy: "university ranking"[All Fields]), Web of Science (WOS), and Google Scholar. To reduce selection bias, additional internet searches were conducted broadly with the same search terms to identify any further ranking systems.
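As a rough sketch (ours, not the authors'), the eligibility and exclusion criteria above can be expressed as a simple filter over ranking-system records; all field names below are invented for illustration.

```python
# Illustrative encoding of the review's eligibility criteria.
from dataclasses import dataclass

@dataclass
class RankingSystem:
    name: str
    doctoral_institutions: int     # doctoral-granting universities covered
    currently_produced: bool       # still published on an ongoing basis
    includes_us_and_global: bool   # covers both US and global universities
    methodology_in_english: bool   # calculation methodology published in English
    independent_calculation: bool  # ranks calculated independently
    research_outcomes: bool        # includes research outcome indicators
    reputation_only: bool          # based solely on reputation surveys
    subject_only: bool             # ranks institutions only by subject area

def is_eligible(rs):
    """Apply the stated inclusion and exclusion criteria to one record."""
    return (rs.doctoral_institutions >= 100
            and rs.currently_produced
            and rs.includes_us_and_global
            and rs.methodology_in_english
            and rs.independent_calculation
            and rs.research_outcomes
            and not rs.reputation_only
            and not rs.subject_only)

systems = [RankingSystem("Example A", 500, True, True, True, True, True, False, False),
           RankingSystem("Example B", 80, True, True, True, True, True, False, False)]
print([rs.name for rs in systems if is_eligible(rs)])  # -> ['Example A']
```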

Processing/Abstraction

The purpose of each ranking system and its methodology for calculating ranks were pulled from published statements on the ranking system's website or from publicly available documentation. Terms such as “the objective of” or “purpose of” each ranking system were used to identify its stated purpose. All indicators that the ranking systems stated they use to evaluate research and academics were abstracted and compared across systems. The aggregation methodology was also abstracted from the publicly available methodologies and results and compared.

Ranking systems were also evaluated on their utility for institutional quality improvement, based on transparency of data and data analysis, consistency of the indicators used in rankings over time, and availability of institution-level data for others to replicate the ranking calculations.

In this study, validity of ranking was assessed on the following criteria: (i) content (comprehensiveness, i.e., inclusion of measures of both IP and publications, and reliance on empirical data); (ii) consistency (transparency of indicator calculation; transparency and availability of raw institutional data; transparency of data aggregation; consistency of measures over time; replicability of the ranking process); and (iii) resistance to bias (avoidance of self-reported data; no reliance on peer reputation surveys).

Transparency of data is evaluated on the availability of the raw institutional data used for comparison and whether the data can be used to analyze trends over time. Transparency of the data analysis algorithm is evaluated on whether indicator transformations are provided in sufficient detail for replication and whether the algorithms used for rankings are replicable by outside entities; disclosure of the percentage assigned to each subscale is included in this item. Subscales refer to the different components or indicators included in each ranking system's overall score, for example, the percent of the overall score attributed to publications in high-impact journals, total citations, or number of PhD graduates.

To evaluate the appropriateness of rankings for use in research quality improvement action plans, the consistency of indicators over time is roughly assessed using a binary rating of present or not present. Consistency of indicators over time is determined by publication of methodology or indicator changes prior to rankings release, the stated frequency of changes, and whether included measures have a life cycle of inclusion. Resistance to bias is assessed by whether data are self-reported to ranking systems and whether the system uses a stated validation process to confirm self-reported data. Resistance to bias is also assessed by the degree of reliance on empirical versus qualitative survey data (majority percent of total score), as reputation surveys are not factors that institutions can control or design.
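A minimal sketch of how this binary rubric could be encoded; the criterion names below are our own shorthand for the items listed above, and the ratings are present/not-present judgments rather than computed scores.

```python
# Hypothetical encoding of the three validity dimensions as a binary rubric.
VALIDITY_CRITERIA = {
    "content": ["covers_ip_and_publications", "relies_on_empirical_data"],
    "consistency": ["transparent_indicator_calculation",
                    "raw_institutional_data_available",
                    "transparent_aggregation",
                    "indicators_consistent_over_time",
                    "ranking_replicable"],
    "resistance_to_bias": ["avoids_self_reported_data",
                           "avoids_reputation_surveys"],
}

def rate_system(flags):
    """Collapse per-item booleans into a present/not-present rating per dimension."""
    return {dim: "present" if all(flags.get(item, False) for item in items)
                 else "not present"
            for dim, items in VALIDITY_CRITERIA.items()}

# Example: empirical, comprehensive indicators, but reliant on reputation surveys.
print(rate_system({"covers_ip_and_publications": True,
                   "relies_on_empirical_data": True,
                   "avoids_self_reported_data": True,
                   "avoids_reputation_surveys": False}))
# -> {'content': 'present', 'consistency': 'not present',
#     'resistance_to_bias': 'not present'}
```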

For the purposes of this study, the definition of research performance is based on standards for the NIH Research Performance Progress Report: publications, conference papers, and presentations; website(s) or other Internet site(s); technologies or techniques; inventions, patent applications, and/or licenses; other products, such as data or databases, physical collections, audio or video products, software, models, educational aids or curricula, instruments or equipment, research material, interventions (e.g., clinical or educational), or new business creation [18]. This review of university ranking systems looked for impact and products along these lines. Correspondingly, research performance indicators are interpreted as measures of publications, citations, and/or intellectual property.

Academic quality is defined as improvement in students' capabilities or knowledge as a consequence of their education at a particular college or university [19]. It is interpreted as measures pertaining to student progress or achievement, and teaching quality as defined by faculty credentials.

Results

A total of 24 ranking systems were initially identified through searches. Thirteen ranking systems that published rankings in 2015 or 2016 were included in the results (Table 1). Excluded ranking systems were either no longer published, did not include research performance indicators, or did not publish their ranking methodologies. The number of institutions evaluated ranges from 500 to 5,000. The oldest ranking system is the Carnegie Classification, established in 1973; all other ranking systems were first published between 2003 and 2015. Three ranking systems are run by universities, two by publications or news agencies, five by consulting or independent groups, and one by a government-established entity. While the US News and World Report ranking of American universities was not eligible due to a lack of research performance indicators, the US News and World Report Global Ranking is included.

Ranking System (abbreviation) | Initial Year | Sponsoring Organization | Total # of Indicators | Frequency of Publication | Participating Institutions | Version
Academic Ranking of World Universities (Shanghai) | 2003 | Shanghai Ranking Consultancy | 6 | Annually | 500 | 2016
Carnegie Classification (Carnegie) | 1973 | Carnegie Commission on Higher Education / Indiana U. | 8 | Approximately every five years | 4664 | 2015
Center for World University Rankings (CWUR) | 2012 | Center for World University Rankings | 8 | Annually | 1000 | 2016
Leiden Ranking (Leiden) | 2011 | Leiden University, Netherlands | 18 | Annually | 842 | 2016
QS World University Rankings (QS World) | 2013 | Quacquarelli Symonds Limited | 6 | Annually | 916 | 2016
Round University Ranking (RUR) | 2010 | RUR Ranking Agency | 20 | Annually | 761 | 2016
SCImago Institutions Rankings (SCImago) | 2009 | SCImago Lab | 12 | Annually | 5147 | 2016
Times Higher Education World University Rankings (Times) | 2004 | TES Global Ltd | 13 | Annually | 800 | 2016
Most Innovative Universities (CA) | 2015 | Reuters | 10 | Annually | 100 | 2016
U-Multirank (UMR) | 2014 | European Union and Advisory Board | 30 | Annually | 1200+ | 2016
US News Best Global Universities (USN&WR) | 2014 | US News and World Report | 12 | Annually | 1250 | 2016
University Ranking by Academic Performance (URAP) | 2010 | Middle East Technical University | 6 | Annually | 2000 | 2016
Webometrics (Web) | 2004 | Cybermetrics Lab, Spanish National Research Council | 4 | Biennial | 11,995 | 2016

The purpose of most ranking systems is to identify top institutions for consumers, to classify institutions by their research activity, and to compare institutions within countries and across the globe (Table 2). Some ranking systems state that they do not intend for the information to be used to compare institution to institution, but to provide a general interpretation of each institution's annual performance.

[Table 2: Stated purposes mapped to ranking systems. The purpose labels were not recoverable from the source; per the text, stated purposes include identifying top institutions for consumers, classifying institutions by research activity, comparing institutions within and across countries, evaluating research quality, and supporting government funding requests.]

Four ranking systems specifically state that their results are intended to evaluate research quality. The Shanghai and UMR highlight their use in government cost-benefit analysis; RUR, Shanghai, UMR, and Times state that their ranking systems may have use in supporting government funding requests.

The Carnegie Classification specifically states that their rankings are not intended to evaluate research performance. The Carnegie Classification System relies on R&D expenditure data in both STEM and non-STEM fields from the NSF Survey of Research and Development Expenditures at Universities and Colleges. Total staff working in science and engineering research are included from the NSF Survey of Graduate Students and Post-doctorates in Science and Engineering. No measures of research performance are assessed. The UMR system also provides indicators of quality, but leaves the definition of quality up to user preferences, by allowing a choice of indicators to be selected.

Tables 3 and 4 list the indicators utilized by the ranking systems to evaluate research performance or quality. Nine systems used total publications as an indicator; this is typically defined by the number of peer-reviewed articles included in either the Thomson Reuters Web of Science Core Collections database or SCOPUS, produced by Elsevier. On average, 33.8% of ranking scores are assigned to publications and citations or variants of these metrics. In most analyses, this is not dependent on first-author affiliation, meaning that articles can be counted more than once across different institutions in collaborative works. Peer evaluation of both academic and research reputation and cumulative faculty awards contribute on average 39.8% of the total ranking score among systems that report weighting.

[Tables 3 and 4: Research performance indicators by ranking system, with data sources and score weights. Table 3 covers publication and citation metrics (sources include WOS, SCOPUS, SCI/InCites, the SCImago Journal Rank, and Google Scholar); Table 4 covers intellectual property and related indicators (sources include the US PTO, WPO, Derwent patent indices, PATSTAT, NSF surveys, and Nobel Prize/Fields Medal counts). The individual row labels were not recoverable from the source; the bottom row reports each system's total weight on research indicators, with values shown ranging from 20% to 100%.]

Ranking systems which rely heavily on publication and citation metrics include the Leiden Ranking, Shanghai, SCImago, URAP, US News and World Report, and the EU U-Multirank systems. The Leiden Ranking provides size-dependent and size-independent variants of all indicators except publication output, and citation indicators are normalized for differences between scientific fields. Counting is conducted using both a full counting and a fractional counting method, wherein collaborative publications are given less weight than non-collaborative ones (Leiden indicators description, page 4). An algorithm is applied to calculate field-normalized impact indicators, described by Waltman and Van Eck [20]. In the Shanghai ranking system, publications in Nature/Science and Nobel or Fields awards comprise 50% of the score, indicating a reliance on highly selective indicators; rankings are created by scoring the highest institution as 100 and the rest as a percentage of 100. URAP rankings are entirely based on publication and citation metrics, with scores normalized according to field of study. CWUR is the only ranking system that incorporates the h-index developed by Hirsch [21] to indicate the broad impact of a university's research based on performance and citation impact: the h-index of an institution equals x if the institution has published x papers that have each been cited at least x times. For all but two ranking systems, Leiden and Carnegie, the data used in the calculations are not made available, making replication of the rankings impossible. Leiden and Carnegie both provide downloadable spreadsheets of their ranking indicator data.
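For concreteness, here is a small sketch of two metrics discussed above: the institutional h-index as defined in the text, and a generic fractional count in which collaborative papers carry less weight. This is an illustration only, not Leiden's exact algorithm, which additionally applies the field normalization of Waltman and Van Eck [20].

```python
# Sketch of the h-index definition given above and of generic fractional counting.

def h_index(citation_counts):
    """Largest x such that x papers each have at least x citations."""
    h = 0
    for rank, cites in enumerate(sorted(citation_counts, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def fractional_count(papers, institution):
    """Credit each paper 1/(number of contributing institutions), so
    collaborative publications weigh less than non-collaborative ones."""
    return sum(1.0 / len(insts) for insts in papers if institution in insts)

print(h_index([10, 8, 5, 4, 3]))                   # -> 4
print(fractional_count([["A"], ["A", "B"]], "A"))  # -> 1.5
```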

The percentage of scores attributed to intellectual property (IP) measures, such as patents, was only 3.5% across all systems. Four systems incorporated at least one such indicator: CWUR, SCImago, CA, and UMR. The Clarivate Analytics Most Innovative Universities is the only ranking system heavily focused on intellectual property indicators, and it includes indicators based on independent empirical data; a patent success ratio is calculated from patent awards per application, although raw data are not available for validation and replication. The UMR and CWUR include patent applications. The one indicator of IP performance in SCImago is based on citation metrics (publications cited in patent applications) and is heavily weighted in the summary score at 30%.

Academic quality indicators are presented in Table 5. Six systems incorporate academic quality through various indicators. The most common is a peer-to-peer survey, used by QS World, Times, US News and World Report, UMR, and RUR. Student/faculty ratio is employed by each of these systems except the US News and World Report. Carnegie, Times, and the UMR also use total doctoral degrees conferred when evaluating academic quality. Diversity of faculty and students is also used by QS World, Times, UMR, and RUR as an indicator of academic quality. CWUR attributes 25% of its ranking score to the number of alumni who are CEOs on the Forbes 100 list as its only measure of academic quality.

[Table 5: Academic quality indicators by ranking system, with data sources and weights. The individual row labels were not recoverable from the source; per the text, indicators include reputation and student surveys, student/faculty ratios, doctoral degrees conferred, faculty and student diversity, and (for CWUR) alumni who are CEOs of Forbes 100 companies. The bottom row reports each system's total weight on academic quality, with values shown ranging from 0% to 80%.]

In the SCImago ranking, web presence measured by Google metrics makes up 20% of the total score. Similarly, Webometrics includes all global universities that have a web presence; its goal is to encourage universities and staff to increase their visibility through the number of webpages and external networks originating at institution websites. Citations and publications make up 40% of the Webometrics score, based on the production of the most cited faculty.

Five ranking systems include reputation surveys as a significant component of the ranking calculation. The QS World ranking attributes 50% of the institution score to academic and employer reputation surveys. Research and academic reputation surveys contribute 33% of the Times ranking system.

An audit by PricewaterhouseCoopers was completed for this methodology, yet there is no independent validation of self-reported data or explanation of the weighting of the indicator percentages, and raw data are not provided for independent replication or validation. USN&WR Global Rankings incorporate surveys of global and regional research reputation (25% of the total score), the results of which are not publicly available. Round University Ranking, based out of Moscow, Russia, uses surveys for 16% of the overall score.

Standardization and aggregation methods are employed in various forms by the ranking systems (Table 6). All evaluated systems make efforts to normalize indicators by calculating ratios according to faculty numbers or research expenditures. Others normalize citations by field of study to lessen the advantage of highly cited disciplines. Z-scores, fractional counting, and weighted subscales are also used to standardize the ranking scores.

Method of Aggregation | Ranking Systems
Data standardized by ratio of total faculty or weighted by discipline | Carnegie, QS World, RUR, SCImago, Shanghai, Times, UMR
Scores normalized to rank between 0 and 100 | Carnegie, CWUR, QS World, RUR, SCImago, Shanghai, CA, USN&WR, Web
Raw data normalized by percentages according to field of study or year of publication | Leiden, Times, UMR, URAP, USN&WR, Web
Subscales normalized using Z-scores | CWUR, Times, USN&WR
Subscales assigned percentages when calculating the total scale | CWUR, QS World, RUR, SCImago, Shanghai, CA, Times, URAP, Web
Collaborative data weighted by ratio of total authors' participating institutions | Leiden, Shanghai
Classification into groups based on the distance of the indicator score from the median or group mean | Carnegie, UMR
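To make these devices concrete, the toy sketch below implements three of them: scaling so the top institution scores 100 (as in the Shanghai system), z-scoring a subscale, and combining weighted subscales into a total. All values and weights are invented for the example.

```python
# Toy illustration of three aggregation methods from Table 6; data are invented.
import statistics

def scale_to_100(scores):
    """Score the highest institution as 100 and the rest as a percentage of it."""
    top = max(scores)
    return [100.0 * s / top for s in scores]

def z_scores(scores):
    """Standardize a subscale to mean 0 and standard deviation 1."""
    mu, sd = statistics.mean(scores), statistics.stdev(scores)
    return [(s - mu) / sd for s in scores]

def weighted_total(subscales, weights):
    """Combine subscale scores using fixed percentage weights."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 100%"
    return sum(subscales[name] * w for name, w in weights.items())

print(scale_to_100([1200, 800, 400]))  # -> [100.0, 66.66..., 33.33...]
print(z_scores([1200, 800, 400]))      # -> [1.0, 0.0, -1.0]
print(round(weighted_total(
    {"publications": 66.7, "citations": 80.0, "reputation": 50.0},
    {"publications": 0.4, "citations": 0.4, "reputation": 0.2}), 1))  # -> 68.7
```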

The suitability of ranking systems for use in research performance improvement is reported in Table 7, which provides a rough binary assessment of the various ranking systems on the different dimensions. All ranking systems refine their analysis prior to each publication. No ranking system reports any specific measures or analysis of its indicator validity; Leiden provides a stability interval to support individual indicators.

[Table 7: Binary (present/absent) assessment of each ranking system against the quality improvement suitability criteria described in the Methods (comprehensiveness, transparency of data and analysis, consistency of indicators over time, replicability, and resistance to bias). The criterion labels for individual rows were not recoverable from the source.]

One research institution was compared across all ranking systems in Table 8, to demonstrate the variability of ranking systems.

Ranking System | Actual Rank | Relative Rank (% of total)
Carnegie | Highest Research Activity | n/a*
CWUR | 50 | 5.2
Leiden | n/a | n/a
QS World | 71 | 7.7
RUR | 66 | 8.6
SCImago | 60 | 2.1
Shanghai | 93 | 18.6
Times | 52 | 4.9
CA | 24 | 24.0
UMR | 125 | 6.2
URAP | n/a | n/a
USN&WR | 66 | 6.6
Web | 400 | 0.03

* There is no overall rank.

Discussion

Administrators, funders, and consumers should look for rankings which are consistent over time, cover multiple areas of measurement, and are less reliant on peer reputation. Based on our results, reputation surveys, self-reported and unvalidated data, and non-replicable analyses create an impractical foundation for research improvement assessment, and can lead to a wide range of institutional ranks. When rankings are used as support for budget requests, or as evidence of return on investment, indicators which provide a balanced approach have the best opportunity to be truly reflective.

When used in tandem, several ranking systems together may offer reasonable comprehensiveness and validity. Using the Leiden Ranking System, the Clarivate Analytics Innovation Ranking System, and the SCImago process for systematic evaluation and comparison may be a promising approach for research administrators. The U-Multirank is the broadest of the systems examined, but because it does not allow comparison of a university's performance over time, only within overall categories, trend analysis is difficult.

We found that current ranking systems rarely incorporate the promotion of an innovation culture through patents or intellectual property disclosures. Counts of research products such as publications and patents may be easily manipulated to increase rankings without actually increasing the contribution to science [22, 23].

In our sample, eight of the thirteen systems include indicators to measure academic quality. These are mainly focused on peer reputation, faculty achievement, student-to-faculty ratios, and the total number of awarded doctorates in both STEM and non-STEM fields. Valid measures of academic quality are not universally standardized [8]. Many ranking systems are marketed for academic choice and comparison, yet these indicators do not sufficiently reflect the teaching and learning environments of students.

Research expenditure is often used as an indicator of the strength and quality of an institution's research capabilities. However, no correlation has been found between higher research expenditure and better-quality research. A Canadian evaluation found a diminishing rate of return between the two factors, and in the US, NIH funding was significantly correlated with increased publications, but not with the development of novel therapeutics [24, 25].

University rankings tend to focus on bibliometric sources which are biased towards English-language journals and are therefore not comprehensive or fully accurate. Peer reputation surveys are not published, nor are the data made available, and bias towards larger, more well-known institutions may be inevitable. In addition, measures such as the number of Nobel Prize winners could be considered “luxury” indicators: accessible to elite universities but out of reach and unmotivating for most other universities.

In this review, we explore the validity and suitability of ranking systems for research performance improvement. Clearly, there is a need for improvement in ranking methodologies. Applying organizational management principles may improve the validity and reliability of university ranking systems and assist with appropriate indicator choices.

We propose that the ideal ranking system limits the weight of peer reputation to no more than 10%, and meets the comprehensiveness, transparency, and replicability criteria described above. Current approaches rely on easily accessible output data sources; reliance on these measures perpetuates the perspective that a few approaches adequately represent scientific value, quality improvement, and innovation performance. While we believe this represents a comprehensive analysis of appropriate ranking systems, other institutions may rely on different systems. Consultation with ranking system developers and research administrators has provided support for the included list.

Conclusions

There is a need for a credible quality improvement movement in research that develops new measures, and is useful for institutions to evaluate and improve performance and societal value. Quality over quantity should be emphasized to affirm research performance improvement initiatives and outcomes, which benefit society through scientific discovery, economic outcomes, and public health impact. Current indicators are inadequate to accurately evaluate research outcomes and should be supplemented and expanded to meet standardized criteria. We suggest that future research evaluate three dimensions of research outcomes: scientific impact, economic outcomes, and public health impact for evaluating research performance within an academic institutional environment.

Supporting information

Acknowledgments

The authors wish to thank Chris Brown and Jill Pipher for their reviews and advice on this manuscript, and Nadine Mansour for her assistance with data collection.

Funding Statement

The authors received no specific funding for this work.

Data Availability

All relevant data are within the paper and its Supporting Information files.

Learn more about university rankings and how research activities contribute

While rankings are not the sole indicator of an institution’s reputation and academic excellence, they help benchmark universities nationally, regionally, and globally. Many factors contribute to rankings, including research output and collaboration.

Explore these resources for a deeper look into how rankings work, and learn about analyzing and tracking research activities that are counted — so you can better support your rankings objectives.


Overview & perspectives

Learn the basics about university rankings, their purpose and methodologies. Explore multiple perspectives from universities across the globe on how they respond, track and participate in rankings and the data behind them.

Guide: A closer look at how university rankings work

Ranking systems have their strengths, limitations, and specific focus — with evolving methodologies. Learn who the major ranking organizations are, how they rank universities and key things to consider.

Read the guide


Podcast: Perspectives on rankings from a young university, featuring César Wazen

Take a moment to listen to Qatar University's approach to rankings. César Wazen, Director of International Affairs, explains the intricate requirements of rankings and how they may or may not align with the university's goals and aspirations.

Listen to the podcast


Article: Beyond university rankings: promoting transparency and accountability

Princeton University's President, Christopher L. Eisgruber, advocates for universities to embrace alternative metrics and information sources beyond the conventional rankings.

Read the article


Article: Measuring knowledge exploration and exploitation in universities

Read an open access article by Marta Peris-Ortiz, Dayanis García-Hurtado, and Alberto Prado Román, delving into the intricacies of global university rankings, their indicators, and their impact on university management.

Read the article


University Rankings: Dates to know

Track some of the key dates for rankings activities occurring throughout the year for five major ranking bodies: THE, QS, U.S. News & World Report, CWTS and Shanghai.

Download the infographic


"We have always believed these rankings are not designed to say that one university is better than another. It is more about being able to look at the institution through a variety of lenses, whether it’s teaching, research, knowledge transfer or international visibility. Universities are increasingly investing in an evidence-based approach to develop a clear understanding of their position and progress. They are increasingly using a basket of diverse metrics to understand their strengths, set goals, chart their progress, and make budgetary decisions. Rankings are just one of the tools they can use."

M’hamed El Aisati

VP Analytical and Data Services, Elsevier


The data and methodologies behind university rankings

Learn how different ranking organizations' methodologies work and how you can view, investigate and monitor the data points that may be contributing to your institution's rankings.

Quick guide to major global rankings

Get a comprehensive overview of seven ranking reports, including the organizations' stated objectives for each report, an explanation of their methodology, and the data source they use. The information is sourced from publicly available data. Last updated in September 2023.

Read the guide


Bibliometric data and THE University Rankings

To calculate their rankings, THE collects data from multiple external sources including bibliometric data. Discover the significance of this data, its contribution to their methodology and the valuable insights it offers.

Bibliometric data from SciVal

Bibliometric data and QS Rankings

The QS rankings, which consist of the World University Rankings and the Rankings by Subject, use bibliometric data to calculate their results. This year, QS made enhancements to its methodology, including three new metrics. Learn more about how bibliometric data influences QS rankings.

Understanding Scopus & SciVal & the QS World University Rank

About THE Impact Rankings, Scopus & SciVal

For their Impact Rankings, THE uses research-related metrics and metrics based on the university's own data and evidence supporting progress. Learn about their methodology and the insights Scopus and SciVal provide.

THE Impact Rankings, Scopus and SciVal

U.S. News & World Report: 2024 Best Colleges Rankings Methodology Quick Guide

Recognizing the importance of evolving formulas and indicators over time, U.S. News & World Report has made significant methodological changes to the 2024 Best Colleges ranking, modifying the weights of several factors and introducing a few new ones. This guide offers a concise overview of the national ranking and its new methodology.

Learn more


We can help you support your institution's ranking and reputation goals

Get quality bibliometric data and analytical tools to inform your university's ranking strategy.

Scopus and SciVal enable university staff to plan, progress and evaluate institutional research activities with bibliometric data and analytical tools that align with ranking organizations' methodologies.


U.S. News partners with Elsevier

U.S. News & World Report has chosen to leverage the quality and breadth of Elsevier's Scopus data in its Best Colleges — National Universities rankings and Best Graduate Schools rankings for Engineering Schools to accurately measure the research output of U.S. universities. These rankings assist prospective students in making informed decisions about their higher education choices.



Are university rankings useful to improve research? A systematic review

PMID: 29513762 | PMCID: PMC5841788 | DOI: 10.1371/journal.pone.0193762



Conflict of interest statement

Competing Interests: The authors have declared that no competing interests exist.


References

1. Keupp MM, Palmié M, Gassmann O (2012) The strategic management of innovation: A systematic review and paths for future research. International Journal of Management Reviews 14: 367–390.
2. Begley CG, Ellis LM (2012) Drug development: Raise standards for preclinical cancer research. Nature 483: 531–533. doi:10.1038/483531a
3. Valantine HA, Collins FS (2015) National Institutes of Health addresses the science of diversity. Proceedings of the National Academy of Sciences 112: 12240–12242.
4. Ioannidis JP (2005) Why most published research findings are false. PLoS Med 2: e124. doi:10.1371/journal.pmed.0020124
5. Ioannidis JP, Greenland S, Hlatky MA, Khoury MJ, Macleod MR, et al. (2014) Increasing value and reducing waste in research design, conduct, and analysis. Lancet 383: 166–175. doi:10.1016/S0140-6736(13)62227-8



Research: How Ranking Performance Can Hurt Women

Klarita Gërxhani


A study found that women’s performance suffered when they knew they were being evaluated against their peers.

When it comes to gender equity in the workplace, many organizations focus largely on hiring more women. But to achieve more equitable representation, it’s also critical to examine disparities in how employees are evaluated and promoted once they’re on board. In this piece, the authors discuss their recent research on this topic, which found that competitive evaluation systems in which employees are ranked against one another can cause men to perform better and women to perform worse (on a task for which their performance would otherwise be roughly the same). They suggest that this likely stems from deeply ingrained stereotypes that lead men to believe they are better than women in competitive environments, and that lead women to prioritize avoiding harming others. Based on these findings, the authors argue that organizations should build awareness of the potential harms of ranking employees, and should consider either adapting or totally overhauling existing performance evaluation systems to focus more on individual progress and less on social comparisons.

Much effort has been spent on improving gender equity in hiring. Unfortunately, while these initiatives can help organizations get more female candidates in the door, they often fall short when it comes to retention and development. Of course, there are many reasons for this — but one of the key, systemic factors driving this ongoing challenge lies in how companies approach performance assessments and promotions.


Klarita Gërxhani is professor of economic sociology at the European University Institute, Florence, Italy. Her main research interests relate to the micro-foundations of economic sociology, institutional theory, social status and gender inequalities, and tax evasion. She is the author of many articles published in internationally peer-reviewed journals in economics and sociology, including the Journal of Political Economy, the Annual Review of Sociology, Social Networks, the European Sociological Review, and Experimental Economics.



Mumbai-based Research & Ranking aims to help investors create wealth rather than manage it

Harshith Mallya

Tuesday March 14, 2017 , 6 min Read

It is estimated that only 2 percent of Indians invest in stocks, and Ashish Kumar Chauhan, CEO of the Bombay Stock Exchange (BSE), is quoted in a Quartz report as saying that Indians have a love-hate relationship with the stock market. The average Indian investor often finds himself working with limited information and knowledge, having to rely on the advice of possibly equally ill-informed acquaintances or opaque institutions. Mumbai-based Research & Ranking aims to change this scenario by busting the myths about equity investing and providing investors with tech-enabled solutions.

Story so far

Research & Ranking is a SEBI-registered robo-advisory venture that advises retail investors on how to create and sustain wealth through long term investments into equity markets. The beta version of the platform was launched in January 2016, and the venture finally went public in October 2016.

Manish Goel, Director at Research & Ranking, says that he and his team decided to enter this space looking at the overall market size and common problems faced by equity investors, and also to bust myths and wrong perceptions around equity investment. He adds,

Equity as an asset class is not looked at the right way in India. Long term performance data, across several asset classes, has proven that equity beats all other asset classes in the long run. But people are generally not aware of how to go about investing.


Headquartered in Mumbai, Research & Ranking is part of the Equentis Group, which has two main companies: Equentis Capital, started in 2009, and Equentis Wealth Advisory Services, started in 2015. The venture currently has a team of 30, with a core team consisting of Manish, Anju Prashar, Jeetendra Nair, Ritesh Jain, Gaurav Goel, Pankaj Jain, and Mayuri Yadav.

R&R took a few months after the beta launch to test its technology system and get the right team in place, and then in October 2016, started the sales process and onboarded clients. Says Manish,

We approached our clients personally to understand their feedback about our product and the work we are doing. The feedback we got was positive, and most of the clients appreciated our concept of C.E.R.T.A.I.N and our end-to-end wealth creation solution approach.


How Research & Ranking works

Research & Ranking has two main offerings:

1. 5*5 Wealth creation strategy

R&R helps investors identify stocks of well-run and growing businesses and create a personalised portfolio that has the potential to grow fourfold to fivefold or more in five years, based on the investor’s risk assessment. Manish noted that such companies typically exhibit two traits:

a. consistent above-average growth rates, and

b. management of exceptional pedigree.

As a part of this strategy, R&R’s research team and algorithms monitor portfolios of different stocks regularly to book profits in existing opportunities and replace underperformers with better opportunities. To churn out non-performers quickly, R&R sends recommendation alerts to customers through SMS and email, and updates their dashboards simultaneously.

The portfolio’s growth is tracked every quarter based on stock price and detailed quarterly results analyses. Based on the progress, portfolio rebalancing is advised, as required, to make sure that yearly growth is captured along with the long term growth in the businesses.

One may choose to stay invested for any period of one year and above, but the team encourages investors to subscribe to this strategy for the medium to long term, with the aim of wealth creation. Manish noted,

It is a general myth that equity investments can’t be done based on your personal goals, needs, and objectives. Investors usually think that for goal-based investments, fixed income options are the best way to go. Our strategy tracks business growth and has a fair idea of how much the portfolio can grow. Our singular focus is ‘Wealth Creation’, which we strongly differentiate from Wealth Management.

2. Customised research reports

R&R also offers investors access to customised reports about any company that they are interested in. Manish noted that, upon request, R&R does extensive market research for a few months and then presents a report that includes parameters like management pedigree, key financial parameters, business outlooks, and recommendations on whether to buy, sell, or hold a particular stock.

R&R offers these services on half-yearly or annual subscriptions, and also has a premium plan for those who don’t wish to subscribe.

Guiding principles of Warren Buffett and Paul Samuelson

Manish noted that their guiding principle is captured by Paul Samuelson – “Investing should be more like watching the paint dry or watching grass grow. If you want excitement, go to Las Vegas.”

He believes that value investment is the right way to create wealth, and that wealth creation beats wealth preservation. The key objective for wealth creation is to purchase sound businesses run by competent management at a price that represents a material discount to their long term intrinsic value. This then allows the business value to surface over time through “Investment Discipline” and the “Power of Compounding”. He added,

Our approach for finding value opportunities and staying invested in them for the long term, with a lot of patience, is broadly aligned with the philosophy of Warren Buffett and Benjamin Graham.
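As a back-of-the-envelope check (ours, not R&R's arithmetic), a portfolio that multiplies four to five times over five years implies an annualized growth rate of roughly 32 to 38 percent, which is the compounding the strategy relies on:

```python
# Annualized growth (CAGR) implied by a given multiple over a holding period.

def implied_cagr(multiple, years):
    return multiple ** (1.0 / years) - 1.0

for multiple in (4, 5):
    print(f"{multiple}x in 5 years -> {implied_cagr(multiple, 5):.1%} per year")
# 4x -> 32.0% per year; 5x -> 38.0% per year
```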

Sector overview and future plans

According to reports by KPMG, financial majors like Wells Fargo, Bank of America Merrill Lynch, and Fidelity Investments have discussed or announced automated financial advisory services.

India, however, is estimated to have among the lowest mutual fund investment rates (7 percent), with mutual funds accounting for only 3.4 percent of total investments by individual investors (including HNIs and retail). So, during the initial stages, incumbents (banks and larger financial institutions) will be the early adopters of robo-advisory solutions before the technology reaches ancillary markets. AdviseSure, a part-human, part-robot finance advisor, is a player in this space that offers counsel at Re 1 per day. Then there are players like Wixifi and TrakInvest that demystify stock trading. According to YourStory Research, there are close to 50 Indian early-stage startups flocking to the robo-advisory space.

Manish noted that Research and Ranking’s focus is on reaching the milestone of 20,000 customers in FY17-FY18. R&R also aims to come out with new tools and services, like a dashboard that compares one’s R&R portfolio to a mutual fund run-rate.

Website: Research & Ranking

  • stock market
  • Equentis Capital
  • Warren Buffett
  • wealth management
  • robo-advisory
  • Value investing
  • Research and Ranking
  • Wealth creation
  • Equity investments
  • Paul Samuelson
  • Equentis Group
  • robo-advisory solutions


Research Trends in University Rankings: A Scoping Review of the Top 100 Most Cited Articles in Academic Journals from 2017 to 2021

Published online by Cambridge University Press:  20 February 2024


The objective of this research is to perform a comprehensive evaluation of the top 100 articles concerning university rankings, with the highest number of citations, which were published in academic journals during a period of five years, specifically from 2017 to 2021. This article adheres to the guidelines established by the PRISMA extension for scoping reviews. The selection of the 100 most frequently cited articles on the subject of university rankings is carried out by initially identifying 684 articles and subsequently screening 537 of them. Through an examination of these articles, the prevailing research domains, methodologies, samples, data collection instruments, data analysis techniques, focused variables, and keywords are determined. The abstracts of these articles are subjected to content analysis, resulting in the identification of five key themes: rankings, methodology, analysis, approach, and education. This investigation stands out as one of the pioneering studies in the field of research articles on university rankings. By delineating the boundaries of such studies, it aims to illuminate the path for future researchers by highlighting the existing gaps in the current literature and areas that warrant further exploration.

University rankings assume paramount significance within the contemporary higher education landscape, exerting profound influence on the strategic planning of tertiary institutions, guiding the decision-making processes of diverse stakeholders, and having a palpable impact on both national and international higher education policies. These rankings are ardently embraced by universities on a global scale, serving as pivotal metrics to gauge institutional performance, assess reputation, and facilitate the identification of institutional strengths and weaknesses. In parallel, discerning students, encompassing both domestic and international cohorts, invariably deploy these rankings as a critical determinative factor when making their educational choices, engendering consequential ramifications on enrolment patterns and, consequently, the fiscal health of universities. Furthermore, rankings actively contribute to the formulation of national and international higher education policies, thereby influencing governmental funding mechanisms and necessitating university compliance with stringent criteria and objectives defined by regulatory bodies. The influence of these rankings pervades the academic, administrative, and policy echelons of the higher education sector, corroborating their indispensability in shaping the strategic trajectory and policy milieu of tertiary education institutions (Altbach and Hazelkorn 2019; Marginson 2014).

Since the beginning of the new millennium, ranking universities has become a worldwide phenomenon in higher education. Although the first rankings appeared in the United States as early as 1910, with McKeen Cattell (Hammarfelt et al. 2017), it has been with the help of the internet, and therefore the ease of reaching information, that rankings have attained the popularity they enjoy today. Not only have they included more and more institutions in their calculative metrics, but they have also come to be regarded as marketing assets and as one of the motives behind organizational policies that target a better standing in national and international systems. That has brought about a lot of criticism of the methodological validity and reliability of rankings, as well as of the determinants and implications of rankings in higher education institutions. The reason the matter has received so much response is partially because of how university rankings are used. First, they matter to education administrators, to students, and to parents and academicians alike. Second, we should not forget that part of the reason rankings matter is that some of the organizations that produce them work very hard to make them matter (Lim 2018; Ringel et al. 2020). It is even more than that. With the increasing attention to accountability and transparency in the management of institutions, universities (have to) openly and regularly declare a large amount of data on their performance, on a scope ranging from academic production to student enrolment and non-academic affairs. Therefore, whether they like it or not, higher education institutions are and will continue to be ranked by different ranking systems whose methodology, scope and implications will certainly continue to be criticized in the future.

It is, however, quite interesting that the big three ranking systems – Times Higher Education (THE), QS and ARWU – are still the most researched and most referenced ranking systems, and that they more or less continue with their original methodology despite all the criticism they have received so far. Normally, when there are profound discussions regarding the fundamentals of a design, that design is very likely to be transformed into, or replaced by, another that would, to begin with, receive more support in academic circles. Perhaps what Hazelkorn (2015) defined as 'the battle for world-class excellence' has already turned into 'a battle for prestige', and however unfavourable the term may sound, it surely is very appealing and deceitful.

One of the reasons why university ranking systems do not change (drastically) to address the concerns argued in the literature is perhaps the underlying problem of the effectiveness of knowledge sharing (Bejan 2007). In other words, although academic research is initiated by research questions, to what extent every research question addresses a gap in the literature is a different story. In order to fill those gaps, it is suggested that research gaps should be structured and characterized based on their functionality (Miles 2017). To do that, it is imperative that the scope of the literature, or at least some part of it, is determined. Likewise, scoping reviews are deemed fit when there is a body of literature that has not been comprehensively reviewed or exhibits an enormous and complex nature (Peters et al. 2015).

At this point, it would be beneficial to shed more light on potential gaps in the literature in relation to what researchers have been looking at for the past five years, and to provide more insight into the potential size and scope of research on university rankings to help the discussion progress (Romund 2023). This is in fact one of the main aims of scoping reviews and of this study. A scoping review is defined as a systematic approach to map evidence on a topic and identify the main concepts, theories, sources, and knowledge gaps (Tricco et al. 2018). Typically, a scoping review does not try to put together quantitative and qualitative data, but rather to pinpoint, exhibit and discuss relevant features of sources of evidence (Peters et al. 2021). It is indeed asserted in the literature that scoping reviews enable identification of research and systematic review topics (Lockwood et al. 2019).

Following the same rationale, this scoping review highlights that most of the research in this domain focuses on implications and determinants of current university ranking systems. Methodology, on the other hand, continues to be discussed comprehensively, specifically trying to answer such questions – whether university rankings indeed measure what they intend to measure, whether they provide similar results using similar data and whether their performance indicators can comparatively and effectively reflect institutional performance. On a different note, as shown by the findings of this research, only 10% of research focuses on alternative models.

Additionally, this study is the first analytical study, in any given period, of research trends in university rankings. By identifying the scope of research that is distributed across the topic of university rankings, gaps in current literature and areas that need to be investigated further, this study aims to help researchers plan their studies accordingly, identify more beneficial research methods in the light of what has been already mapped by this study and inspire them to expand the scope of research on university rankings. Therefore, the purpose of this article is essentially to capture the state of the art in the research on university rankings. However, based on the analyses of data in this study, an overarching purpose of all research on this topic may be ‘perfecting ranking methodologies’, that is also why the author has included remarks in this direction in the conclusion.

In terms of a scoping analysis of publication trends in higher education rankings, this study is one of the first (if not the first) analyses of the publication scope of academic articles on the topic. When thorough research is done on trends in university rankings, what comes up as a result is mostly articles discussing the current popular ranking systems, namely THE, QS and ARWU (Natalia 2020), and reports on emerging topics getting attention, such as internationalization (De Wit 2010). Although university rankings are mentioned as one of the trending topics in higher education in different publications (Altbach et al. 2009; Shin and Harman 2009), there has not been any scoping analysis of research publications so far.

Consequently, the purpose of this research is to conduct a scoping analysis of the top 100 most cited articles on university rankings published in academic journals in a five-year period, from the year 2017 to year 2021. In order to fulfil this purpose, the following research questions are sought to be answered:

RQ1. What are the most cited top 100 academic articles in university rankings?

RQ2. Of the articles included, what are the most frequent: (a) research areas, (b) author affiliations, (c) research designs, (d) sample ranking systems, (e) data collection instruments, (f) data analysis techniques, (g) focused variables, (h) keywords?

RQ3. What are the topics (themes) obtained as a result of the content analysis of the abstracts of these articles?

This article follows the guidance of the PRISMA extension for scoping reviews (Tricco et al. 2018). A detailed overview of the application of the protocol can be found in Table 2 of the Supplementary Material to the article. In order to answer the research questions, the articles are selected using the PRISMA protocol for identification, screening, eligibility and inclusion, as shown in Figure 1.


Figure 1. PRISMA flow of article selection process.

A number of databases are searched for academic articles: ABI/INFORM Collection, Academic Search Ultimate, Google Scholar, JSTOR Archive Collection A–Z Listing, SpringerLink Contemporary (1997–Present), International Bibliography of Social Sciences (IBSS), Wiley Online Library All Journals, SpringerNature Springer Journals All 2020, and Taylor & Francis Current Content Access. The search terms used in these databases are: university rankings, college rankings and higher education rankings. To illustrate, the following sample search strategies are used when conducting the initial research:

‘ranking’ OR ‘rankings’ AND university

higher education OR university AND ranking

The total number of citations is not a sum of citations from different databases such as Web of Science, Scopus or Google Scholar; total citation numbers are taken only from Google Scholar. The search is narrowed down to publications from 2017 to 2021; this time frame was chosen in order to better reflect current trends in the literature. Publishing journals, titles and abstracts are reviewed in order to ensure that each article is published in an academic journal and is related to the subject of university rankings. The articles are then ordered according to the total number of citations in the Google Scholar database, and the first 100 articles with the highest number of citations are included in this study. In other words, the researcher started from 684 articles that met the initial database search and was left with 100 articles that met all the criteria.

The inclusion criteria were: (a) the article is published in an academic journal that is indexed in the above-mentioned databases; (b) the article is published between 2017 and 2021; (c) the article's language is English; (d) the article is related to the topic of 'university rankings' either in the methodology or in the conclusion. This means an article is included on the basis that it contains one or more university ranking systems in its sample, either as an overall performance ranker or a subject/country-specific ranking system, or it draws conclusions from such ranking systems, either comparatively or descriptively.

Data extraction is handled manually by the researcher using an Excel workbook in which the articles' (a) titles and abstracts, (b) number of citations, (c) first author's affiliation, (d) name of the publishing journal, (e) publication year, (f) author names, (g) research areas, (h) research methodology, (i) samples, (j) data collection instruments, (k) data analysis techniques, (l) focused variables and (m) keywords are noted down. The researcher analysed the articles' abstracts and keywords using NVIVO software to obtain the keyword frequency table, topics and concept maps. A detailed formulation process of the concept map in Figure 4, later, is given in Table 3 of the Supplementary Material to this article. Creating a mind map offers value through its visual clarity and organization, aiding in consistency and stability. It allows for easy updates without disrupting the overall structure, traceability of information sources, and modularity for isolated changes. Revision control in digital tools enables the tracking of modifications and reversions when necessary. Effective communication and collaboration on a shared mind map help maintain data integrity and understanding among collaborators.
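The author's actual screening was manual, but the filter-and-rank step described above is easy to express programmatically. The sketch below is a minimal illustration; the CSV file and its columns ('year', 'language', 'journal_indexed', 'on_topic', 'gs_citations') are hypothetical stand-ins for the 684 identified records:

```python
# A minimal sketch of the filter-and-rank selection step described above.
# The study's actual screening was manual; the file name and columns here
# are hypothetical stand-ins for the 684 identified records.
import pandas as pd

records = pd.read_csv("identified_articles.csv")  # hypothetical export

eligible = records[
    records["year"].between(2017, 2021)
    & (records["language"] == "English")
    & records["journal_indexed"]   # published in an indexed academic journal
    & records["on_topic"]          # manually judged relevant to rankings
]

# Order by Google Scholar citations and keep the 100 most cited articles.
top100 = eligible.sort_values("gs_citations", ascending=False).head(100)
top100.to_csv("top100_university_rankings.csv", index=False)
```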

The top 100 most cited articles are given in Table 1 of the Supplementary Material to this article. The research areas of these articles are grouped according to the main purpose of research, in other words, what they intend to analyse using the methodology that is described in the study. Based on this rationale, the following areas are determined:

(1) Implications of university rankings

(a) Internationalization and competition

(b) Governance and autonomy

(c) Productivity and quality

(2) Determinants of university rankings

(a) Institutional determinants

(b) Regional or country-specific determinants

(c) Global determinants

(d) Other (e.g. multi-authoring)

(3) Alternative models

(a) Theoretical models (suggested model is given as a framework in theory)

(b) Practical models (suggested model is applied to HEIs and presented in conclusion)

(4) Methodology of university rankings

(a) Validity (measuring what it intends to measure)

(b) Reliability (similarity of results when the same methodology is repeated)

(c) Performance indicators (methodological, holistic, lexical, semantic problems)

(5) Role (function) of university rankings

Accordingly, the frequency of the research areas is given in Table  1 .

Table 1. Number of articles by research areas

  • Implications of university rankings: 39
  • Methodology of university rankings: 28
  • Determinants of university rankings: 21
  • Alternative models: 10
  • Role (function) of university rankings: 2

It is noted in the findings on research areas that implications of university rankings constitute 39% of all research in the top 100 most cited articles in academic journals, making this the most researched topic between 2017 and 2021. Matters related to how university rankings influence the governance and autonomy of higher education institutions receive the highest attention from scholars. Looking into the variables of such research, it is worth noticing that most of the research on implications has organizational policy (16; see footnote a), quality assurance (2) and resource allocation (1) among the variables examined.

The second most researched area is the methodology of university rankings, with 28% of all research on this topic. The most common variables analysed in this respect are performance indicators (22), rank differences (21), and validity and reliability (11) – that is, whether rankings measure what they intend to, and how similar the results are when the same dataset is fed into another ranking system. 'Rank differences' is the most common variable that methodology researchers look into. This type of research divides between a quantitative method with a comparative correlational approach (16), in which scholars use statistical information obtained from publicly declared ranking tables (35) and conduct analysis through correlations (Pearson or Spearman; 16), regressions (20), central tendency (mean-median-mode; 4) and variability (variance-standard deviation-range; 11); and a qualitative method with a content analysis approach (29), in which scholars collect data through document analysis (22) and conduct analysis through content analysis (11), discourse analysis (6) and thematic analysis (5).
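To illustrate the comparative correlational approach just described, here is a minimal sketch of correlating the positions of the same universities in two ranking systems; the rank lists are dummy values, not figures from any real ranking table:

```python
# Illustration of the comparative correlational approach described above:
# correlating the same universities' positions in two ranking systems.
# The rank lists below are dummy data, not figures from any real table.
from scipy.stats import pearsonr, spearmanr

the_ranks = [1, 2, 3, 4, 5, 6]  # e.g. positions in THE
qs_ranks = [2, 1, 4, 3, 6, 5]   # e.g. positions in QS

rho, p_rho = spearmanr(the_ranks, qs_ranks)
r, p_r = pearsonr(the_ranks, qs_ranks)
print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3f})")
print(f"Pearson r = {r:.2f} (p = {p_r:.3f})")
```

Spearman's rho is the natural choice when comparing ordinal rank positions, which is why it appears alongside Pearson correlations in the studies counted above.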

The third most researched area is determinants of university rankings. Research in this dimension takes 21% of all studies in the top 100. These studies focus mostly on institutional determinants (12), which are basically the performance indicators of the universities that are internally targeted through strategic planning and are thought to have a direct impact on the standing of the universities in national or international rankings.

In terms of first author affiliations, displayed in Figure 2, the United States holds first place with 19 articles whose first authors are affiliated with this country. These authors have studied the implications of university rankings (7), especially in terms of governance and autonomy (5) and productivity and quality (2), institutional determinants (4) and regional and country-specific determinants (1). There are also theoretical models (3) suggested by authors from the US. Looking at the research designs of these articles, comparative correlational and content analysis have five each, followed by causal comparative (2) and group comparisons (1) as the most preferred. The scope of these studies mainly focuses on international ranking systems (15). Following the US, authors from the United Kingdom take second place with 13 articles published in the top 100. Most of these studies (8) focus on implications of university rankings using qualitative data analysis techniques such as content analysis (4), thematic analysis (2) and discourse analysis (1). Organizational policy seems to be the most common variable (5) that these studies focus on, followed by methodology (4) and rank differences (2). The third country with the most articles in the top 100 is Spain, with 10 articles. These articles are mostly about the determinants of university rankings (5), implications (3) and methodology (2). Almost all publications (9) by these authors use quantitative methodologies, in which causal comparative (3) and comparative correlational designs (6) are dominant. Following Spain, Turkey has five articles in the top 100, followed by Canada, the Netherlands and Finland with four articles each. Brazil has three articles in the list, and Australia, Bulgaria, Hungary, Italy, Kazakhstan, Poland, Russia, Slovenia and Taiwan have two articles each. Other countries in the list have one article in the top 100. These are: Belgium, Chile, China, Denmark, Ecuador, France, Germany, Greece, India, Indonesia, Iran, Luxembourg, Malaysia, Mauritius, Mongolia, New Zealand, Serbia, South Africa, Venezuela, and Vietnam. It is also worth noticing that multi-authoring is quite common in the top 100: although articles with only one author have the highest count (29) compared with articles with two, three, four or five authors, this also means that 71 articles have a minimum of two authors.


Figure 2. First authors’ affiliation by country.

As for research designs, which can also be seen in Figure 3, comparative correlational studies make up 43 articles in the top 100. These quantitative studies fundamentally compare a minimum of two variables by looking at the correlations between them. In the context of higher education rankings, these correlations are analysed through Pearson correlations (14), regression analysis (13), variability (variance, standard deviation, range) (8) and central tendency (mean-median-mode) (4). Content analysis, on the other hand, is the second most popular research design with 31 articles in the top 100. These studies analyse data through content analysis (15), discourse analysis (6) and thematic analysis (8).


Figure 3. Research designs.


Figure 4. Topics obtained from the abstracts of top 100 most cited articles.

Ranking systems that are investigated in research articles are divided into three groups. The first is the international ranking systems with 82 articles in the top 100. Most commonly analysed ranking systems are: THE (54), QS (52), ARWU (48), USN&W (18), Leiden (12), URAP (7), Webometrics (6), U-Multirank (6) and NTU (5). It is also seen that in almost half of all the articles (46) QS, THE and ARWU are investigated together. In fact, most of the articles (91) have a minimum of two ranking systems to analyse. The number of articles analysing only one ranking system is 9, two ranking systems is 14, three ranking systems is 6, four is 8, five is 12, six is 5 and seven is 2. The second group of ranking systems are subject-specific ranking systems (4). Subjects in this group are health, engineering, and architecture. The third group is national ranking systems (13). This group covers a variety of national ranking systems that are specific to only one country, including the United States, India, Poland and the United Kingdom.

It is also seen that USN&W rankings are mainly studied by authors whose first affiliation is in the USA (18), whereas authors from the United Kingdom mostly research ARWU (8), either on its own (1) or with other ranking systems (7). Spanish authors tend to research QS more (6), and authors from Turkey do research mostly on THE, QS and ARWU together (5). In addition to these ranking systems, Turkish authors also add URAP (3), a ranking methodology that originated at a Turkish university. Besides Turkish authors, URAP is also researched by authors from New Zealand (1), Finland (1) and Serbia (1). Canadian authors mostly study THE (3), Dutch authors ARWU, THE and QS together (3), and Finnish authors are also interested in USN&W (2) in addition to the 'big three'.

Clearly, the USA and the UK have many articles among the top cited (32), given that the articles in the analysis are limited to those in English. This limitation should be taken into account, as a number of academicians have argued the possible implications of doing academic research in a language other than one's own (Turner 2004; Duszak and Lewkowicz 2008; Snow and Uccelli 2009).

Data collection instruments in the top 100 articles are: publicly announced reports containing statistical data (50), document analysis (46) and surveys (4). As can be seen in Table  2 , content analysis is the most common data analysis technique (24) that is used in the top 100 most cited articles, followed by regression analysis (20), thematic analysis (12), Pearson correlations (13) and variability (11). In terms of focused variables, performance indicators seem to be the most researched variable (22). Other popular variables are rank differences (21), organizational policy (16), and methodology (10). Other variables that are worth noting are quality assurance (2) and reputation (2).

Table 2. Data analysis techniques


The 35 most frequent keywords are given in Table 3. Notable results in the table include 'indicators', as in 'performance indicators', being one of the most mentioned keywords in research. Quality, policy, competition, sustainability, reputation and internationalisation in this list are worth noting, as they summarize the fundamental role of university rankings in the international arena.

Table 3. Most common keywords

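A frequency table of this kind can be reproduced with a simple count over the extracted author keywords. The sketch below is illustrative only; the keyword lists are invented, and the study's actual analysis was done in NVIVO:

```python
# A minimal sketch of the keyword-frequency count behind a table like
# Table 3, assuming author keywords were extracted per article. The
# keyword lists are invented; the study's actual analysis used NVIVO.
from collections import Counter

article_keywords = [
    ["university rankings", "indicators", "quality"],
    ["indicators", "policy", "reputation"],
    ["university rankings", "internationalisation", "quality"],
]

freq = Counter(kw.lower() for kws in article_keywords for kw in kws)
for keyword, count in freq.most_common(35):
    print(f"{keyword}: {count}")
```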

As for the topics (themes) obtained from the abstracts of these articles, five main themes emerge: rankings, methodology, analysis, approach and education. Overall, it is seen in the concept map that, in relation to the main topic of rankings, the top 100 most cited articles focus on methodology, analysis, education and approach. All of these topics interrelate with rankings in certain sections. Although there are intersections where two topics other than rankings meet, these intersections have extensional sub-topics that originate from the main topic of rankings. Additionally, there are intersections populated by three topics, such as rankings-methodology-analysis, rankings-analysis-education and rankings-education-approach. As shown in Figure 4, these themes relate to each other in terms of, for example, how rankings affect education in general, the quality of education, and the decisions of international students. There are also intersections in the concept map displaying relationships between institutional determinants and educational activities, and also with processes of analysis in the institution. Another cross-sectional relationship is between the educational approach and performance in rankings. In terms of the methodology of rankings, the indicators seem to be the focus of published articles in the top 100, as well as analyses of universities' positions, especially those in the top places of the results. Using two different codes for 'university' in singular and plural form likely serves to distinguish between individual universities (singular) and the concept or category of universities as a whole (plural). This differentiation helps the researcher organize and analyse information more effectively, especially when dealing with data that may involve both specific institutions and broader trends or characteristics within the entire category of universities. It allows for precise categorization and analysis of data in a research context.

Summary of Evidence

The scope of the top 100 most cited research articles published in academic journals in this period (2017–2021) primarily sheds light on the research topics that have reached popularity among academicians. These topics are divided into five categories, which can be summarized as what rankings affect and what affects rankings, methodological studies questioning validity, reliability and indicators of such measures, suggested frameworks that are both applied and conceptual, and the function of these ranking systems at a time when data about organizational performance are collected and shared more than ever.

In parallel to research topics, the variables researchers are looking at are mostly about the technical aspects of ranking systems. Variables such as performance indicators, rank differences, and methodology seem to be the result of a growing interest towards analysing and comparing the methods that are used in these measures. On a slightly different note, implications of ranking systems on universities becomes obvious with the selection of variables such as organizational policy, quality assurance and reputation.

The United States holds first place when it comes to the affiliation of the first authors of research articles on university rankings between 2017 and 2021. The United Kingdom follows the States with the second highest number of first authors in the list. Third and fourth places go to Spain and Turkey respectively. Canada, the Netherlands and Finland follow the forerunners in this category. Among the top 100 most cited articles, there are authors affiliated with countries ranging from Kazakhstan to Mongolia, Ecuador to Mauritius, and Iran to South Africa.

Qualitative research, specifically content analysis, and quantitative research, particularly comparative correlational studies, dominate the research designs. Qualitative studies mostly analyse the implications of rankings on organizational policies through content analysis, thematic analysis and discourse analysis whereas quantitative studies mostly analyse relationships among different methodologies through comparative correlational approaches. There are also qualitative studies in the top 100 in which theoretical and practical frameworks are suggested through grounded theory and quantitative studies where performance indicators in different ranking systems are examined with a causal comparative approach.

Data are collected mostly through detailed reports of rankings, documents that are shared by higher education institutions, and surveys. Qualitative studies choose to analyse data mostly by content analysis and thematic analysis. Quantitative studies analyse data mostly by regression analysis, correlation analysis and variability.

Topics in abstracts mostly fall under ‘rankings’, which acts as an umbrella category. In research articles, authors focused on issues regarding the role of educational activities and its relation to rankings, the analysis of organizational performance in reference to performance indicators in ranking systems, different approaches to measuring performance and the methodology of rankings mainly dealing with how they handle the large amount of data coming from higher education institutions. Therefore, the scope of this study does not necessarily limit the scope of academic articles in terms of having a direct connection to popular, global university ranking systems but takes a more comprehensive approach by screening the content of academic articles to make sure that the content is relevant to the topic of university rankings.

Limitations

This study is limited to the top 100 most cited research articles published in peer-reviewed academic journals in English.

Articles are selected by the total number of citations in selected databases that are mentioned in the methods section of this article.

Selected articles are manually evaluated to ensure that they are directly related to the topic of university rankings (alternative keywords are also taken into account).

Conclusions

It is worth noticing that, in research, one of the overarching topics is the concern regarding the methodology of rankings. Not only have the authors questioned the pros and cons of the existence of such ranking measures, but they have also technically examined whether such systems are valid and reliable. This is a big step towards perfecting ranking methodologies, and also food for thought on how to make the amount of big data more presentable and accessible to more institutions and to the public. It is also a sign of growing concern in academic circles in regard to what and who these rankings really represent.

A significant amount of the popularity of academic publications seems to stem from their focus on rank differences across different ranking systems. There is also interest in comparing performance indicators across those systems, a deeper analysis to determine an ideal way of measurement. This may lend itself to the development of a multidimensional, more collaborative ranking system and perhaps an interdisciplinary, multi-layered form of measurement in future studies.

The involvement of authors from different parts of the world provides a wider perspective on the region- and country-specific factors that both affect and are affected by the rankings. Moreover, maximizing the variety of institutions has let researchers look more closely into the fine-tuning of organizational policies in higher education institutions. This provides more insight into whether such policies are more productive when they come into existence as a consequence of standings in ranking systems, or when they are developed with the specific purpose of targeting desired positions and focus only on specific performance indicators rather than increasing the performance of the institution as a whole.

The discussion about the role of rankings seems to take a significant portion of the publications in the top 100 most cited articles. In particular, the question about who should give more importance to rankings appears to expand as more authors attempt to explain the rationale behind why and how these measures facilitate change, providing different and expanding perspectives. On the other hand, the fundamental issue about education administrators’ role in facilitating this change remains intact as they have always been accepted as guides of policy in terms of how the institution will handle its position (or lack of position) in rankings.

A significant number of articles in the top 100 appear to be in favour of an update in ranking methodologies. In a swiftly changing atmosphere, shaped by great forces such as the internet, social media and pandemics, rankings cannot stand still but only adapt and respond, not only as entities that are affected, but also as assets that affect the choices of students, academicians, institutions and even countries.

In such an atmosphere, whether rankings should focus on the international platform or, instead, be more localized has different benefits. After all, higher education institutions continue to be ranked by different organizations, and there is no escaping that. On the other hand, institutions are now part of the ranking ecosystem whether they are listed or not. This means that the lack of a standing in a particular ranking system does not stop an institution from targeting certain indicators in that system and acting accordingly. This, in a way, turns into a tailored, individual action plan for the institution, in which it is possible to see similar performance indicators from different ranking systems. It is already known that some universities increasingly take this approach while setting their strategic goals and listing them in their strategic plans.

Suggestions for Future Research

It is one of the aims of a scoping review to determine the gaps in literature on university rankings and give suggestions in regard to possible research areas that need to be investigated further. Therefore, suggestions for future research are drawn in the light of the evidence and conclusions provided in this study.

First, the function of university rankings might be studied more, as it comprises only 2% of all studies included in this research. It is argued that higher education institutions do not need university rankings to shape their decisions during strategic planning processes (Bornmann 2014). However, some scholars argue that university rankings affect the reputation of an institution (Sarupiciute and Druteikiene 2018) in terms of indicating world-class status and internationalization (Lo 2014). Authors need to be careful, as there is an argument that methodologies which use reputation as a benchmark for rankings are widely criticized for being overly subjective, self-referential and self-perpetuating, where participants' knowledge is limited to what they know and reputation is equated with quality or organizational age (Hazelkorn 2019).

Another potential topic for further research might be alternative models to existing ranking methodologies. In this scoping review, it is seen that academic articles focusing on alternative models, both theoretical and practical, make up only 10% of all research in the top 100. Especially during and after the pandemic, there have been numerous studies reflecting on how Covid-19 reshaped higher education. In order to respond to such changes at the institutional level, university rankings have declared an interest in adapting their methodology to those changes (Holmes 2020). It would therefore be very interesting to look into the factors that facilitate this change and evaluate how well ranking systems respond.

In addition, it is seen that most research in the top 100 focuses mainly on THE, QS and ARWU. Other ranking methodologies are investigated much less. This indicates that there is a lot of room for identifying superior aspects, if any, comparing their methodologies and perhaps proving why other ranking systems such as U-Multirank and URAP deserve more attention in research.

In terms of research designs, it is observed that case studies constitute 4% of all studies in the top 100 most cited articles. Case studies not only build knowledge of different and complicated cultural and social settings, but they also inform about how studies handle significant conceptual problems (Allen 2018; Singh-Peterson et al. 2019). In this sense, to better see how theory relates to application in different countries and institutions, increasing the number of case studies will definitely be helpful. It is in fact one of the criticisms of global university rankings that certain countries and institutions have the upper hand. Therefore, case studies will present researchers with new opportunities to compare different settings and actually prove or disprove whether such criticism is scientifically justified.

Another research design that might be employed more often could be meta-analysis. It is seen in the evidence that meta-analysis accounts for 5% of all studies in the top 100, whereas comparative correlational analysis accounts for 43%. This suggests that there may be similar correlational comparisons that could be combined into meta-analyses. The strengths of meta-analysis over conventional research methodologies would definitely expand the scope of studies on university rankings.

When it comes to data analysis techniques, it is shown by evidence that only 3% of all studies in the top 100 use factor analysis and structural equation modelling (SEM), while in the literature there is a considerable amount of discussion on how university rankings affect different aspects of higher education and vice versa. In this respect, future studies would benefit from clarifying those factors in the form of illustrating and validating relationships among such variables.

In terms of keywords, on the presumption that keywords give an educated guess regarding what the article is mainly about, some keywords are worth noticing. For instance, reputation, ethical and internationalization occurred far less than other keywords in the list. Researchers might benefit from looking into these concepts in relation to potential areas of interest for research. Needless to say, these keywords on their own might initiate new perspectives for university rankings research, and even an inferential test might be done between such keywords, such as whether there is a significant meaningful relationship between reputation and internationalization.

When research topics are analysed, social and cultural implications and determinants of university rankings emerge as an area that needs to be investigated further. For example, how university rankings affect relationships among higher education institutions and other stakeholders or certain industries; or how the society affects or responds to university rankings, might also be possible topics for research. These could shed light on clarifying whether such a relationship exists and, if it does, to what extent and what kind of nature. Eventually, this would help policymakers and academicians, as well as society, to understand whether these metrics have bonds to the social and cultural aspects of their immediate environment.

To view supplementary material for this article, please visit https://www.doi.org/10.1017/S1062798723000595 .

About the Author

İrfan Ayhan, PhD, is an instructor at Sabancı University, Turkey. His research areas cover university rankings, strategic planning, higher education and quality management.

a. Throughout the article, numbers in parentheses refer to the number of articles in the top 100.



NATURE INDEX | 29 April 2020

Leading research institutions 2020

Bec Crew, Senior Editor, Nature Index

A researcher at the University of California, San Diego, prepares to launch a 3D-printed rocket. Credit: Erik Jepsen/UC San Diego

The Chinese Academy of Sciences (CAS) in Beijing has topped the Nature Index 2020 Annual Tables list as the most prolific producer of research published in the 82 selected journals tracked by the Index (see Graphic).

CAS’s Share of 1805.22 in 2019 was almost twice that of Harvard University in Cambridge, Massachusetts, which came in second. Research institutions from China, the United States, France, Germany and the United Kingdom feature among the ten most prolific institutions in the Index. See the 2020 Annual Tables Top 100 research institutions for 2019 .

(Share, formerly referred to in the Nature Index as Fractional Count (FC), is a measure of an entity’s contribution to articles in the 82 journals tracked by the index, calculated according to the proportion of its affiliated authors on an article relative to all authors on the article. When comparing data over time, Share values are adjusted to 2019 levels to account for the small annual variation in the total number of articles in the Nature Index journals. The Nature Index is one indicator of institutional research performance. See Editor’s note below.)
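Based on the definition quoted above, an institution's Share can be sketched as a sum of per-article author fractions. The figures and the adjustment ratio in the following Python sketch are invented for illustration and are not Nature Index data:

```python
# Sketch of the Share (fractional count) definition quoted above: per
# article, an institution's fraction is its affiliated authors divided
# by all authors; Share sums these fractions across articles. Figures
# are invented; the ratio stands in for scaling to 2019 article volumes.

def share(articles, institution):
    """Sum of per-article author fractions for one institution."""
    return sum(art["authors_from"][institution] / art["total_authors"]
               for art in articles if institution in art["authors_from"])

papers = [
    {"total_authors": 4, "authors_from": {"CAS": 2, "Harvard": 1}},
    {"total_authors": 5, "authors_from": {"CAS": 5}},
]
raw = share(papers, "CAS")            # 2/4 + 5/5 = 1.5
adjusted = raw * (60_000 / 58_000)    # hypothetical article totals, 2019/2018
print(round(raw, 2), round(adjusted, 2))  # 1.5 1.55
```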

[Graphic omitted: leading institutions by Share, 2019. Source: Nature Index]

Here is a selection of institutions from the top 25 of the Nature Index 2020 Annual Tables .

University of Science and Technology of China

Share: 455.82; Count: 1,231; Change in adjusted Share (2018–19): +25.6%; Place: 8th

Established by the Chinese Academy of Sciences (CAS) in 1958 in Beijing (then known as Peking), the University of Science and Technology of China (USTC) moved to its current location in Hefei, the capital of the eastern Chinese province of Anhui, in 1970.

Today, it is home to about 16,000 students, including 1,900 PhD students, as well as 1,812 faculty members, 547 of whom are professors.


The institution’s strongest subjects in the Nature Index are chemistry and physical sciences. USTC is a global collaborator, counting the Max Planck Society in Munich, Germany, the University of Oxford, UK, and Stanford University in California among its close partners.

In 2019, USTC researchers were part of an international team that discovered a stellar black hole with a mass 70 times greater than that of the Sun. The findings, published in Nature, were mentioned in more than 300 tweets and nearly 200 news stories, according to Altmetric.

University of Michigan, United States

Share: 343.45; Count: 939; Change in adjusted Share (2018–19): −3.3%; Place: 19th

Placed first among public universities in the United States for research volume, according to the US National Science Foundation, the University of Michigan in Ann Arbor encompasses 260,000 square metres of lab space, which is accessed by students and staff in 227 centres and institutes across its campus.

With US$1.62 billion in research expenditure and more than 500 new invention reports in the fiscal year 2019, the University of Michigan is focused on innovative areas in research, including data science, precision health and bioscience. Its Global CO2 Initiative, launched in 2018, aims to identify and pursue commercially sustainable approaches that reduce atmospheric CO2 levels by 4 gigatons per year.

A 2019 study published in Science on honesty and selfishness across cultures, led by behavioural economist Alain Cohn, was covered by almost 300 online news outlets and reached more than 22 million people on Twitter, according to Altmetric. The study, which tested people’s willingness to return a dummy lost wallet, revealed a ‘high level’ of civic honesty.

University of California, San Diego, United States

Share: 340.85; Count: 1,048; Change in adjusted Share (2018–19): −1.2%; Place: 20th

With US$1.35 billion in annual research funding, the University of California, San Diego, is a force in natural-sciences research, particularly in oceanography and the life sciences.

Its health-sciences group, which includes the School of Medicine and Skaggs School of Pharmacy and Pharmaceutical Sciences, brought in US$761 million in research funding in the fiscal year 2019, and Scripps Oceanography, one of the world’s oldest and largest centres for research in ocean and Earth science, won $180 million in funding.

The university also has a focus on innovation, with more than 2,500 active inventions, 1,870 US and foreign patents, and 31 start-ups launched in 2018 by faculty members, students and staff. One such start-up was CavoGene LifeSciences, which aims to develop gene therapies to treat neurodegenerative disease.

Zhejiang University, China

Share: 329.82; Count: 815; Change in adjusted Share (2018–19): +10.5%; Place: 23rd

Zhejiang University in Hangzhou, China, is part of the Chinese government’s Double First Class Plan, which aims to develop several world-class universities by 2050. It employs 3,741 full-time faculty members and partners with nearly 200 institutions around the world.

Zhejiang’s total research funding reached 4.56 billion yuan (US$644 million) in 2018, with 926 projects supported by the Chinese National Natural Science Fund and 1,838 Chinese invention patents issued. The university is home to materials scientist Dawei Di, who was listed as a top innovator under 35 by MIT Technology Review in 2019 for his work on organic light-emitting diodes and perovskite light-emitting diodes.

In 2019, Zhejiang researchers published a Science paper with an international team that proposed a method for boosting plant growth while reducing water use, which could contribute to more sustainable agriculture practices.

Northwestern University, United States

Share: 317.12; Count: 762; Change in adjusted Share (2018–19): −7.6%; Place: 25th

Founded as a private research university in 1851, Northwestern University, based in Evanston, Illinois, now also has campuses in Chicago and Doha, Qatar, and employs 3,300 full-time research staff. It has an annual budget of US$2 billion and attracts more than US$700 million for sponsored research each year.

The fastest-rising institution in the United States in high-quality life-sciences research output, Northwestern University was also 14th in the world in chemistry in the Nature Index 2020 Annual Tables .

Its star researchers include mathematician Emmy Murphy, one of six recipients of the 2020 New Horizons Prize for her work in the field of topology — the study of geometric properties and relationships — and physicist John Joseph Carrasco and neuroscientist Andrew Miri, who in February were awarded prestigious Sloan Research Fellowships.

doi: https://doi.org/10.1038/d41586-020-01230-x

This article is part of Nature Index 2020 Annual Tables , an editorially independent supplement. Advertisers have no influence over the content.

Editor’s note: The Nature Index is one indicator of institutional research performance. The metrics of Count and Share used to order Nature Index listings are based on an institution’s or country’s publication output in 82 natural-science journals, selected on reputation by an independent panel of leading scientists in their fields. Nature Index recognizes that many other factors must be taken into account when considering research quality and institutional performance; Nature Index metrics alone should not be used to assess institutions or individuals. Nature Index data and methods are transparent and available under a Creative Commons licence at natureindex.com.


Do rankings drive better performance?

Global ranking is still only 13 years old but has already installed itself as a permanent part of international higher education and has deeply transformed the sector.

Global ranking is inevitable. People inside and outside the sector want to understand higher education, and ranking is the simplest way to do so. It maps the pecking order and underpins partnership strategies. It guides investors in research capacity. It shapes the life decisions of many thousands of cross-border students and faculty—despite the patchy quality of much of the data, and the perverse effects of all rankings, good or bad.

Global ranking has remade global higher education as a relational environment, magnifying some potentials in that environment, and blocking others. It has done so in three ways.

First, competition. Ranking has burned into the global consciousness the idea of higher education as a competitive market of universities and countries. This competition is about research performance, the main driver of ranking outcomes, and about reputation.

Second, hierarchy. Ranking is a core element of the system of valuation, whereby unequal weights are assigned to knowledge and to the credentials that graduates take into national and global labor markets. Through ranking universities become more tightly connected to the political economy, the labor markets and the unequal societies in which they sit.

Third, performance. Ranking has installed a performance economy that controls behavior, driving an often frenetic culture of continuous improvement in each institution.

Unequal competition

There are naturally competitive elements in research and in graduate labor markets. But ranking gives competition a more powerful and pristine form, embedding it in indicators and incentives. It makes competition the principal strategy for many university rectors, presidents and vice-chancellors. Solidarity and cooperation within systems are weakened.

We continue to cooperate, regardless of ranking. The metrics include intellectual collaboration in publishing, though this is often explained as self-interest (joint publication expands citation rates). But the point is that a large and increasing share of the remarkable collective resources in global higher education is allocated to mutual conflict.

Cooperation is further hampered by the hierarchy of value formed in ranking. Though research and learning flow freely across borders they are not equally valued. There is a clear status hierarchy. What defines this hierarchy is not a global system for valuing credentials or learning. There is no global system for credentials. We don’t measure learning on a comparative basis. What systematizes the global hierarchy is the process of codifying, rating and ranking knowledge, summarized and spread everywhere by global ranking.

Knowledge is ordered by journal metrics and hierarchies, publication metrics, citation metrics and hierarchies, and crowned by rankings, which are largely based on research. Research performance is the whole content of the Shanghai ARWU, the Leiden ranking and Scimago, and more than two-thirds of the Times Higher Education ranking. Rankings translate the status economy in research into an institutional hierarchy, determining the value of each knowledge producer and so determining the value of what they produce. Knowledge metrics and ranking recycle the dominance of the strongest universities.

Better performance?

What about performance improvement? This is the ultimate rationale for competition. If ranking is grounded in real university performance, and measures the important things about universities, then a better ranking means improved performance. If every university strives for a higher rank, all must be lifting performance. Is this what happens? Yes and no.

The potential is there, for a virtuous circle between ranking, strategy, efforts to improve, better performance, then back to better ranking, and so on. But there are problems.

Only some university activities are included in ranking. There is no virtuous circle for teaching and learning, a big gap in the performance driver. Many research metrics are inside the virtuous circle, but not in the humanities, the humanistic social sciences and most professional disciplines, and all scholarly work outside English is excluded.

What about science? There, some rankings drive performance; others do not. Rankings that rest on coherent metrics for publication and citation drive more and better research outputs, all else being equal (e.g. ARWU, Leiden, Scimago). Since 2003 research-based rankings have contributed to increased investment in university scientific capacity and elevated research outputs within institutional strategy.

The picture is more mixed with the Times Higher and QS ranking. To the extent they draw on strong research metrics, there is the potential for a virtuous circle. Taken alone, the QS indicator for citations per faculty, and the Times Higher indicators for citations and for research volume, potentially have this effect. ‘Potentially’, because the incentives are blunted: the research-based indicators are buried within combined multi-indicators.

The internationalisation indicators generate incentives to increase students and faculty from abroad, and joint publications, but these are minor within the total ranking, and again the performance incentive is buried within the other elements in the multi-indicators used.

Therefore a university may improve its citations per faculty performance, or improve its internationalisation numbers, yet watch its ranking go down because of what happened in the reputational surveys, which constitute a large slab of both the Times Higher and the QS ranking but are decoupled from real performance. Surveys contain data about opinions about performance, not data about performance. The link between effort, improvement and ranking, essential to the virtuous circle, is broken.

The same happens when the ranking position changes because of small shifts in methodology. Again, there is no coherent link between effort, performance and ranking.

Wait on, you might say, reputation matters to students. The value of degrees is affected by the pecking order. That’s right. And a reputational hierarchy based on surveys, by itself, uncontaminated by other factors, does tell us something important. But a reputational ranking alone, while interesting, cannot drive continually improving performance in real terms. It can only drive a position-and-marketing game. In the end, reputation must be grounded in real performance to consistently benefit stakeholders and the public good.

The point can be made by analogy. The winner of the World Cup in football is determined by who scores the most goals within the allotted time on the field. Now suppose FIFA changed the rules: instead of rewarding final performance alone, it gave 50% of the result to the most goals scored and 50% to the team believed to be the best, as measured by survey. We would all have less trust in the result, wouldn't we?

Multi-indicator rankings provide a large data set, but because the link between effort in each area and the ranking outcome is not transparent, they cannot coherently drive performance. The incentives pull in different directions and the effects are invisible. In ARWU the different indicators correlate fairly well; they pull in the same direction and share common performance drivers. But QS and Times Higher use heterogeneous indicators.

On the other hand, if the multi-indicator rankings were disaggregated, the individual indicators could effectively drive performance improvement. Then at least ranking competition would be directed towards better outcomes, not reputation for its own sake.
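To see why a buried indicator can fail to reward real improvement, consider a toy weighted composite. The weights and scores below are invented for illustration and do not correspond to any actual ranking's formula.

```python
# Toy composite: a genuine citation gain is swamped by a reputation-survey dip.
weights = {"reputation": 0.33, "citations": 0.30, "other": 0.37}  # invented

def composite(scores):
    return sum(weights[k] * scores[k] for k in weights)

year1 = {"reputation": 70.0, "citations": 60.0, "other": 65.0}
year2 = {"reputation": 64.0, "citations": 66.0, "other": 65.0}  # citations up 6

print(composite(year1))  # 65.15
print(composite(year2))  # 64.97: composite falls despite better real output
```

Published separately, the citations indicator would register the improvement; folded into the composite, the effort is invisible.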


Professor Simon Marginson

Centre for Global Higher Education
THE World University Rankings 2021: methodology

Institutions across the globe provide us with information that we scrutinise rigorously to construct the World University Rankings. Here we explain how we assess data on more than 1,500 institutions to produce the tables.

  • Share on linkedin
  • Share on mail

World University Rankings 2021

The Times Higher Education World University Rankings are the only global performance tables that judge research-intensive universities across all their core missions: teaching, research, knowledge transfer and international outlook. We use 13 carefully calibrated performance indicators to provide the most comprehensive and balanced comparisons, trusted by students, academics, university leaders, industry and governments.

The performance indicators are grouped into five areas: Teaching (the learning environment); Research (volume, income and reputation); Citations (research influence); International outlook (staff, students and research); and Industry income (knowledge transfer).

The full methodology is published in the file at the bottom of this page.


Teaching  (the learning environment): 30%

  • Reputation survey: 15%
  • Staff-to-student ratio: 4.5%
  • Doctorate-to-bachelor’s ratio: 2.25%
  • Doctorates-awarded-to-academic-staff ratio: 6%
  • Institutional income: 2.25%

The most recent Academic Reputation Survey (run annually) that underpins this category was carried out between November 2019 and February 2020. It examined the perceived prestige of institutions in teaching and research. The responses were statistically representative of the geographical and subject mix of academics globally. The 2020 data are combined with the results of the 2019 survey, giving more than 22,000 responses.

As well as giving a sense of how committed an institution is to nurturing the next generation of academics, a high proportion of postgraduate research students suggests teaching at the highest level, of a kind that attracts graduates and is effective at developing them. This indicator is normalised to take account of a university’s unique subject mix, reflecting that the volume of doctoral awards varies by discipline.

Institutional income is scaled against academic staff numbers and normalised for purchasing-power parity (PPP). It indicates an institution’s general status and gives a broad sense of the infrastructure and facilities available to students and staff.

Research  (volume, income and reputation):  30%

  • Reputation survey: 18%
  • Research income: 6%
  • Research productivity: 6%

The most prominent indicator in this category looks at a university’s reputation for research excellence among its peers, based on the responses to our annual Academic Reputation Survey (see above).

Research income is scaled against academic staff numbers and adjusted for purchasing-power parity (PPP). This is a controversial indicator because it can be influenced by national policy and economic circumstances. But income is crucial to the development of world-class research, and because much of it is subject to competition and judged by peer review, our experts suggested that it was a valid measure. This indicator is fully normalised to take account of each university’s distinct subject profile, reflecting the fact that research grants in science subjects are often bigger than those awarded for the highest-quality social science, arts and humanities research.

To measure productivity we count the number of publications published in the academic journals indexed by Elsevier’s Scopus database per scholar, scaled for institutional size and normalised for subject. This gives a sense of the university’s ability to get papers published in quality peer-reviewed journals. From the 2018 rankings, we devised a method to give credit for papers that are published in subjects where a university declares no staff.

Citations  (research influence):  30%

Our research influence indicator looks at universities’ role in spreading new knowledge and ideas.

We examine research influence by capturing the average number of times a university’s published work is cited by scholars globally. This year, our bibliometric data supplier Elsevier examined more than 86 million citations to 13.6 million journal articles, article reviews, conference proceedings, books and book chapters published over five years. The data include more than 24,000 academic journals indexed by Elsevier’s Scopus database and all indexed publications between 2015 and 2019. Citations to these publications made in the six years from 2015 to 2020 are also collected.

The citations help to show us how much each university is contributing to the sum of human knowledge: they tell us whose research has stood out, has been picked up and built on by other scholars and, most importantly, has been shared around the global scholarly community to expand the boundaries of our understanding, irrespective of discipline.

The data are normalised to reflect variations in citation volume between different subject areas. This means that institutions with high levels of research activity in subjects with traditionally high citation counts do not gain an unfair advantage.

We have blended equal measures of country-adjusted and non-country-adjusted citation scores.
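Read literally, that blend is a simple average of the two scores; a one-line sketch with invented values:

```python
# The stated 50/50 blend of country-adjusted and unadjusted citation scores.
def blended_citation_score(raw, country_adjusted):
    return 0.5 * raw + 0.5 * country_adjusted

print(blended_citation_score(72.0, 80.0))  # 76.0 (inputs are invented)
```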

In 2015-16, we excluded papers with more than 1,000 authors because they were having a disproportionate impact on the citation scores of a small number of universities. In 2016-17, we designed a method for reincorporating these papers. Working with Elsevier, we developed a fractional counting approach that ensures that all universities with academics who author these papers receive at least 5 per cent of the value of the paper, with those that provide the most contributors receiving a proportionately larger share.
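The sketch below is one plausible reading of that fractional-counting rule, not the exact Elsevier implementation: each institution's credit is proportional to its share of the paper's authors, lifted to a 5% floor. Note that, under this reading, floored credits can sum to slightly more than 1.

```python
# One plausible reading of the fractional-counting rule described above;
# not the exact Elsevier implementation.
def fractional_credit(author_counts, floor=0.05):
    """author_counts: institution -> number of authors on a >1,000-author paper."""
    total_authors = sum(author_counts.values())
    return {
        inst: max(n / total_authors, floor)
        for inst, n in author_counts.items()
    }

# 1,000-author paper: the 10-author institution is lifted from 1% to the 5% floor
print(fractional_credit({"A": 900, "B": 90, "C": 10}))
# {'A': 0.9, 'B': 0.09, 'C': 0.05}
```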

International outlook  (staff, students, research):  7.5%

  • Proportion of international students: 2.5%
  • Proportion of international staff: 2.5%
  • International collaboration: 2.5%

The ability of a university to attract undergraduates, postgraduates and faculty from all over the planet is key to its success on the world stage.

In the third international indicator, we calculate the proportion of a university’s total relevant publications that have at least one international co-author and reward higher volumes. This indicator is normalised to account for a university’s subject mix and uses the same five-year window as the “Citations: research influence” category.

Industry income  (knowledge transfer):  2.5%

A university’s ability to help industry with innovations, inventions and consultancy has become a core mission of the contemporary global academy. This category seeks to capture such knowledge-transfer activity by looking at how much research income an institution earns from industry (adjusted for PPP), scaled against the number of academic staff it employs.

The category suggests the extent to which businesses are willing to pay for research and a university’s ability to attract funding in the commercial marketplace – useful indicators of institutional quality.

Universities can be excluded from the World University Rankings if they do not teach undergraduates, or if their research output amounted to fewer than 1,000 relevant publications between 2015 and 2019 (with a minimum of 150 a year). Universities can also be excluded if 80 per cent or more of their research output is exclusively in one of our 11 subject areas.
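Restated as a rule, the eligibility criteria can be sketched as follows; the function simply encodes the thresholds above and is not THE's actual screening code.

```python
# Hedged sketch restating the stated inclusion rules; not THE's actual code.
def eligible(teaches_undergraduates, pubs_per_year, subject_shares):
    """pubs_per_year: relevant publications for each year 2015-2019.
    subject_shares: fraction of output in each of the 11 subject areas."""
    return (
        teaches_undergraduates
        and sum(pubs_per_year) >= 1000   # at least 1,000 over the window
        and min(pubs_per_year) >= 150    # at least 150 in every year
        and max(subject_shares) < 0.80   # under 80% in any one subject area
    )

print(eligible(True, [200, 220, 250, 240, 230], [0.40, 0.35, 0.25]))  # True
```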

Data collection

Institutions provide and sign off their institutional data for use in the rankings. On the rare occasions when a particular data point is not provided, we enter a conservative estimate for the affected metric. By doing this, we avoid penalising an institution too harshly with a “zero” value for data that it overlooks or does not provide, but we do not reward it for withholding them.

Getting to the final result

Moving from a series of specific data points to indicators, and finally to a total score for an institution, requires us to match values that represent fundamentally different data. To do this, we use a standardisation approach for each indicator, and then combine the indicators in the proportions listed above.

The standardisation approach we use is based on the distribution of data within a particular indicator, where we calculate a cumulative probability function, and evaluate where a particular institution’s indicator sits within that function.

For all indicators except for the Academic Reputation Survey, we calculate the cumulative probability function using a version of Z-scoring. The distribution of the data in the Academic Reputation Survey requires us to add an exponential component.
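A minimal sketch of this step, assuming a plain Z-score mapped through the normal cumulative distribution function and omitting the exponential component used for the reputation survey (the input values are invented):

```python
# Z-score each institution's indicator value, then map it through the normal
# CDF to get its position within the distribution (between 0 and 1).
import statistics
from math import erf, sqrt

def standardise(values):
    mu, sigma = statistics.mean(values), statistics.pstdev(values)
    return [0.5 * (1 + erf((v - mu) / (sigma * sqrt(2)))) for v in values]

research_income_per_staff = [120.0, 80.0, 200.0, 95.0]  # invented, PPP-adjusted
print(standardise(research_income_per_staff))
```

The standardised indicator scores would then be combined using the published weights (30/30/30/7.5/2.5) to give the overall score.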

Related files

The World University Rankings 2021 methodology (PDF)



Are university rankings useful to improve research? A systematic review

Research output: Contribution to journal › Review article › peer-review

Results: A total of 24 ranking systems were identified and 13 eligible ranking systems were evaluated. Six of the 13 rankings are focused entirely on research performance. For those reporting weighting, 76% of the total rank weight is attributed to research indicators, with 24% attributed to academic or teaching quality. Seven systems rely on reputation surveys and/or faculty and alumni awards. Rankings influence academic choice, yet research performance measures are the most heavily weighted indicators. There are no generally accepted academic quality indicators in ranking systems.

Discussion: No single ranking system provides a comprehensive evaluation of research and academic quality. Utilizing a combined approach of the Leiden, Thomson Reuters Most Innovative Universities, and SCImago ranking systems may provide institutions with more effective feedback for research improvement. Rankings that rely extensively on subjective reputation and “luxury” indicators, such as award-winning faculty or alumni who are high-ranking executives, are not well suited for academic or research performance improvement initiatives. Future efforts should better explore measurement of university research performance through comprehensive and standardized indicators. This paper could serve as a general literature citation when one or more university ranking systems are used in efforts to improve academic prominence and research performance.

Original language: English (US). Journal: PLoS ONE, Volume 13, Issue 3, Article e0193762. DOI: 10.1371/journal.pone.0193762. Published: March 2018.



Do Performance Rankings Actually Motivate Salespeople?

Molly Ahearne, Mohsen Pourmasoudi, Yashar Atefi and Son K. Lam

U.S. firms spend an estimated $3.6 billion annually on sales performance management (SPM) practices and tools. This figure is expected to rise to $6.4 billion by 2030, underscoring the growing importance of SPM practices within organizations.

One of the most commonly used SPM practices involves companies publishing the sales performance rankings of their salespeople on key performance metrics. The goal of publishing performance rankings is to provide feedback to all salespeople by disclosing their performance relative to their peers, thereby creating a competitive motive for performance improvement. However, despite widespread use, the effectiveness of these rankings has not been explored.

In a new Journal of Marketing study, we examine how the presentation of performance rankings influences critical outcomes, including salesperson quota attainment and employee turnover.

Questions around Performance Rankings

Our research poses four primary questions:

  • Do performance rankings effectively motivate salespeople to improve their performance?
  • Does this effectiveness vary by the type of information published alongside the ranking?
  • What are the conditions under which publishing certain information with performance rankings is more or less effective?
  • What are the long-term implications of performance rankings on salesperson turnover?

Our research team conducted two studies involving over 27,000 salespeople from more than 170 firms across 83 countries. These studies leveraged extensive field data to examine the effects of three distinct information conditions: anonymized performance rankings, identifiable performance rankings, and identifiable rankings with quotas.

Our findings reveal that while performance rankings can positively influence sales outcomes, their effectiveness—and, by extension, the value derived from the performance ranking dashboard—hinges significantly on the type of information disclosed within the dashboards.


For instance, anonymized rankings effectively motivate salespeople to increase their quota attainment, yet they also lead to higher turnover rates, which can result in substantial indirect costs related to recruitment, training, and loss of organizational knowledge. As a result, the costs associated with implementing and maintaining anonymized ranking systems may not be justified by the outcomes unless turnover can be effectively managed.

In contrast, identifiable performance rankings have the most substantial positive impact across our two studies, significantly enhancing quota attainment and reducing turnover. Our findings indicate that when salespeople know the identities of their peers in the rankings, they are motivated not only to improve their performance but also to maintain a positive social image. This dual motivation of self-improvement and self-presentation drives better performance and lowers turnover rates. However, when quotas are disclosed alongside identities and performance rankings, we fail to see performance-enhancing benefits.

Lessons for Chief Marketing Officers

Our study offers valuable lessons for managers and salespeople:

  • More information is not always better. Instead, the strategic selection and combination of performance data are crucial for achieving both immediate and enduring positive outcomes.
  • Managers should develop and implement identifiable ranking systems, ensuring transparency in how rankings are determined and communicated.
  • Managers should avoid including fixed or objective performance metrics (i.e., quotas) in ranking systems to focus on relative performance evaluations, which is essential for the effectiveness of these systems.

Implementing these recommendations can drive essential behavioral changes among sales managers and executive leadership within sales organizations. Sales managers will be able to adopt a more strategic approach to performance ranking disclosures, emphasizing transparency and leveraging the motivational benefits of identifiable rankings, which should lead to improvements in quota attainment and reduced turnover within their teams.

Furthermore, executive leaders can invest in performance ranking dashboards that are tailored to their organization’s unique characteristics, taking into account their sales force’s compensation structure and size. By doing so, they can ensure the investment in performance dashboards will justify the costs by achieving substantial performance gains and minimizing turnover, thereby enhancing the overall effectiveness of the sales force.

Our research highlights the critical role of transparency and information type in performance rankings. By implementing performance rankings and carefully selecting the information disclosed alongside them, managers can create a more motivated and loyal sales force. This approach will not only drive better performance outcomes but also contribute to a more sustainable organizational culture.

We urge scholars to build on our research and explore rankings on team goals and how they interact with individual salesperson rankings. Furthermore, it is important to study factors such as familiarity and social interactions between salespeople, office proximity and location, physical versus virtual contact between peers, and the extent of knowledge sharing. Future studies can also expand our understanding of how performance rankings may differ in effectiveness depending on the motivational orientation of salespeople.

Read the Full Study for Complete Details

Source: Molly Ahearne, Mohsen Pourmasoudi, Yashar Atefi, and Son K. Lam, “Sales Performance Rankings: Examining the Impact of the Type of Information Displayed on Sales Force Outcomes,” Journal of Marketing.



Molly Ahearne is a postdoctoral scholar, Vanderbilt University, USA.

Mohsen Pourmasoudi is Assistant Professor of Marketing, San Diego State University, USA.

Yashar Atefi is Evelyn & Jay G. Piccinati Associate Professor of Marketing, University of Denver, USA.

Son K. Lam is Professor of Marketing, Terry Dean’s Advisory Council Distinguished Professorship, University of Georgia, USA.




Zacks Mutual Fund Rank (MF: LAVGX)

This is our Mutual Fund rating system that serves as a timeliness indicator for Mutual Funds over the next 6 months:

Zacks Rank Definition
1 Strong Buy
2 Buy
3 Hold
4 Sell
5 Strong Sell


Latest Performance as of Aug 31, 2024 (Total Return %* / Percentile Rank in Objective)

YTD: 15.77 / 5
3 months: 4.70 / 32
6 months: 7.78 / 66
1 Year: 21.31 / 13
3 Year: -1.17 / 58
5 Year: NA / NA
10 Year: NA / NA

* Annualized for periods of three years and beyond.
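Per the footnote, the 3-, 5- and 10-year figures are compound annual rates rather than cumulative totals. A minimal sketch of that conversion, where the cumulative input is an invented figure chosen only to reproduce the 3-year value shown:

```python
# Convert a cumulative return over several years into a compound annual rate.
def annualized_return(cumulative_return_pct, years):
    growth = 1 + cumulative_return_pct / 100
    return (growth ** (1 / years) - 1) * 100

# An assumed cumulative 3-year return of -3.47% annualizes to about -1.17%/yr,
# matching the 3 Year figure in the table above.
print(round(annualized_return(-3.47, 3), 2))  # -1.17
```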


Portfolio Statistics

% Unrealized Gain: 18.70
% Yield: 0.62
% SEC Yield: NA
Net Assets (Mil $, 8/31/2024): 0.81
% Turnover (8/31/2024): 104.00

Risk statistics (3 Year / 5 Year / 10 Year):
Beta: 0.98 / NA / NA
Alpha: -9.29 / 0.00 / 0.00
R Squared: 0.81 / NA / NA
Std. Dev.: 19.38 / NA / NA
Sharpe: -0.14 / NA / NA
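For readers unfamiliar with the risk statistics above, here is a minimal sketch of how beta, alpha and the Sharpe ratio are conventionally computed from periodic returns. The return series and risk-free rate are invented, the figures are per period rather than annualized, and this is a textbook formulation, not Zacks' published procedure.

```python
# Textbook per-period beta, alpha and Sharpe ratio from monthly returns.
# Requires Python 3.10+ for statistics.covariance. Inputs are assumptions.
import statistics

def risk_stats(fund, bench, rf=0.0):
    beta = statistics.covariance(fund, bench) / statistics.variance(bench)
    alpha = statistics.mean(fund) - (rf + beta * (statistics.mean(bench) - rf))
    sharpe = (statistics.mean(fund) - rf) / statistics.stdev(fund)
    return beta, alpha, sharpe

fund = [0.02, -0.01, 0.03, 0.01, -0.02, 0.04]         # illustrative fund returns
bench = [0.015, -0.005, 0.025, 0.012, -0.015, 0.035]  # illustrative benchmark
print(risk_stats(fund, bench, rf=0.002))
```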

Sector / Country Weightings

As of 8/31/2024, % of portfolio:
Japan 11.95
France 10.20
United States 10.10
Netherlands 7.87
Taiwan 7.71
Germany 6.50
United Kingdom 5.98
Canada 5.76
India 4.81
Denmark 4.60

Portfolio Holdings

Top Equity Holdings (as of 4/30/2024), % of portfolio / value (Mil $):

4.51 / 0.13
NOVO NORDISK A S B: 4.60 / 0.13
ASML HOLDINGS NV: 3.66 / 0.10
TOTAL*OTHER: 3.02 / 0.08
2.65 / 0.07
SAP AG: 2.40 / 0.07
SCHNEIDER ELECTRIC SE: 2.42 / 0.07
SHOPIFY INC: 1.84 / 0.05
TOKYO ELECTRON LTD: 1.85 / 0.05

Stock Holding % of Net Assets (as of 4/30/2024)

Total Issues: NA
Avg. P/E: 29.79
Avg. P/Book: 5.86
Avg. EPS Growth: 10.86
Avg. Market Value (Mil $): 388,575

% of Portfolio by style:
Large Growth: 22.12
Large Value: 0.00
Small Growth: 8.59
Small Value: 0.00
Foreign Stock: 45.01
Emerging Market: 14.00
Precious Metal: 0.00
Intermediate Bond: 0.00
Foreign Bond: 0.00
High Yield Bond: 0.00

This file is used for Yahoo remarketing pixel add

research and ranking performance

Due to inactivity, you will be signed out in approximately:

IMAGES

  1. Research & Ranking's Model Portfolio clocks 79% gains

    research and ranking performance

  2. Bell Curve Forced Ranking Method

    research and ranking performance

  3. Ranking performance comparison

    research and ranking performance

  4. What are the advantages and disadvantages of performance appraisal

    research and ranking performance

  5. Performance Appraisal types and examples Performance appraisal Traditional

    research and ranking performance

  6. 5 Point Rating Scale For Performance Evaluation Infographic Template

    research and ranking performance

VIDEO

  1. Ranking of problem by forced ranking method

  2. Weak ringgit, targeted subsidies blamed for decline in world competitiveness ranking, says PM

  3. Equentis Research & Ranking

  4. Research in 3 Minutes: Peer Review

  5. Ranking Method of Performance Appraisal, one of the traditional methods

  6. Structured Data And How It Affects Your Website’s SEO

COMMENTS

  1. Welcome to Equentis Research and Ranking

    Welcome to Equentis Research and Ranking. We make equity investments easy for you ...

  2. SEBI Registered Investment Advisory in India: Share Market Advisory

    Some firms may use both methods like Equentis Research & Ranking -stock advisory company, which has been specializing in smart investment and long-term stocks since 2015. ... Registration granted by SEBI, enlistment as IA with Exchange and certification from NISM in no way guarantee performance of the intermediary or provide any assurance of ...

  3. Research and Ranking

    As part of Equentis Wealth Advisory, at Research & Ranking we recommend 20-25 stocks to you after understanding your goals and risk appetite. This personalised recommendation is enabled by over ...

  4. Research & Ranking Reviews

    Research & Ranking has an overall rating of 4.0 out of 5, based on over 44 reviews left anonymously by employees. 79% of employees would recommend working at Research & Ranking to a friend and 64% have a positive outlook for the business. This rating has decreased by 2% over the last 12 months.

  5. Are university rankings useful to improve research? A systematic review

    Methods. A systematic review of university ranking systems was conducted to investigate research performance and academic quality measures. Eligibility requirements included: inclusion of at least 100 doctoral granting institutions, be currently produced on an ongoing basis and include both global and US universities, publish rank calculation methodology in English and independently calculate ...

  6. 5 in 5 wealth creation strategy

    In-depth research reports. Comprehensive fundamental analysis of stocks, companies, industries, and trends. Personalized portfolio. Tailored portfolio of 20-25 high-return stocks crafted to meet your unique financial goals. Timely buy-hold-sell alerts. Get notified when to buy, hold and sell every recommendation. Ideal buying range.

  7. University Rankings: A Closer Look for Research Leaders

    Universities are ranked by several academic or research performance indicators, including: Alumni and staff winning Nobel Prizes and Fields Medals. Highly cited researchers. ... the best way of looking at rankings is best summarized by Cesar Wazen from Qatar University when he speaks about rankings and strategy on the Research 2030 podcast.

  8. University Rankings Data: A Closer Look for Research Leaders

    Ranking methodologies rely on data inputs from a range of external resources. These resources often include university and researcher data, relevant data on human resources, student administration, finances, and data from reputation surveys, each varying based on a ranking's niche and focus. In this guide, we focus on the bibliometrics used ...

  9. Learn more about university rankings and how research ...

    While rankings are not the sole indicator of an institution's reputation and academic excellence, they help benchmark universities nationally, regionally, and globally. Many factors contribute to rankings, including research output and collaboration. Explore these resources for a deeper look into how rankings work, and learn about analyzing ...

  10. Are university rankings useful to improve research? A ...

    Current university ranking systems evaluate and compare universities on measures of academic and research performance. Although often useful for marketing purposes, the value of ranking systems when examining quality and outcomes is unclear. ... Six of the 13 rankings are 100% focused on research performance. For those reporting weighting, 76% ...

  11. Business News Today: Read Latest Business News, Live India Share Market

  12. Research: How Ranking Performance Can Hurt Women

    When it comes to gender equity in the workplace, many organizations focus largely on hiring more women. But to achieve more equitable ...

  13. Mumbai-based Research & Ranking aims to help investors ...

    Mumbai-based Research & Ranking aims to change this scenario by busting the myths about equity investing and providing investors with tech-enabled solutions. ... Long term performance data, across ...

  14. Research Trends in University Rankings: A Scoping Review of the Top 100

    In research articles, authors focused on issues regarding the role of educational activities and their relation to rankings, the analysis of organizational performance in reference to performance indicators in ranking systems, different approaches to measuring performance, and the methodology of rankings, mainly dealing with how they handle the ...

  15. "Wealth Creation" Research Reports

    "Equentis - MultiplyRR", "Equentis - Research & Ranking" and "Equentis - Private Wealth" are the brands under which Equentis Wealth Advisory Services Limited offers its Investment Advisory Services. ... enlistment as IA with Exchange and certification from NISM in no way guarantee performance of the intermediary or provide any ...

  16. Leading research institutions 2020

    The Chinese Academy of Sciences (CAS) in Beijing has topped the Nature Index 2020 Annual Tables list as the most prolific producer of research published in the 82 ...

  17. Are university rankings useful to improve research? A systematic review

    The purpose of this study was to evaluate usefulness of ranking systems and identify opportunities to support research quality and performance improvement. Methods A systematic review of ...

  18. Do rankings drive better performance?

    First, competition. Ranking has burned into the global consciousness the idea of higher education as a competitive market of universities and countries. This competition is about research performance, the main driver of ranking outcomes, and about reputation. Second, hierarchy. Ranking is a core element of the system of valuation, whereby ...

  19. THE World University Rankings 2021: methodology

    The Times Higher Education World University Rankings are the only global performance tables that judge research-intensive universities across all their core missions: teaching, research, knowledge transfer and international outlook. We use 13 carefully calibrated performance indicators to provide the most comprehensive and balanced comparisons, trusted by students, academics, university ... (A sketch of this kind of weighted indicator aggregation appears after this list.)

  20. Journals ranking and impact factors: how the performance of journals is

    Another example of a ranking for journals can be found within the ERA initiative (Excellence in Research for Australia, 2008), announced in February 2008. ERA aims to assess research quality of the Australian higher education sector biennially, based on peer-review assessment of a number of performance measures, including bibliometric indicators. (The best-known such indicator, the two-year journal impact factor, is sketched after this list.)

  21. Are university rankings useful to improve research? A systematic review

    Future efforts should better explore measurement of university research performance through comprehensive and standardized indicators. This paper could serve as a general literature citation when one or more university ranking systems are used in efforts to improve academic prominence and research performance.

  22. Do Performance Rankings Actually Motivate Salespeople?

    Our research team conducted two studies involving over 27,000 salespeople from more than 170 firms across 83 countries. These studies leveraged extensive field data to examine the effects of three distinct information conditions: anonymized performance rankings, identifiable performance rankings, and identifiable rankings with quotas.

  23. Do performance rankings effectively motivate salespeople to improve

    The research highlights the critical role of transparency and information type in performance rankings. By implementing performance rankings and carefully selecting the information disclosed ...

  24. The Impact of Peer Performance and Relative Rank on ...

    Rafael P. Ribas, Breno Sampaio, Giuseppe Trevisan (2024) The Impact of Peer Performance and Relative Rank on Managerial Career Attainment: Evidence from College Students. Management Science 0(0). https://doi.org ...

  25. LAVGX

    Narrow down the universe of 18,000+ funds we rank with our robust, yet easy-to-use mutual fund screeners. Select from up to 50 different data points to find the mutual funds that best meet ...
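
NOTES

Item 19 above notes that the THE World University Rankings combine 13 performance indicators into a single comparison. As a minimal illustrative sketch of how such composite scores are typically built (the indicator count comes from the snippet; the normalization and weighting shown here are generic assumptions, not THE's published methodology), a university's overall score is a weighted sum of normalized indicator scores:

\[
S_u = \sum_{i=1}^{13} w_i \, n_i(u), \qquad \sum_{i=1}^{13} w_i = 1,
\]

where \(n_i(u)\) is university \(u\)'s normalized score on indicator \(i\), \(w_i\) is the weight assigned to that indicator, and universities are ranked by \(S_u\) in descending order.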
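
Item 20 above mentions bibliometric indicators as inputs to journal rankings. The best-known of these, the two-year journal impact factor for year \(Y\), is computed as:

\[
\mathrm{JIF}_Y = \frac{C_Y(Y-1) + C_Y(Y-2)}{P_{Y-1} + P_{Y-2}},
\]

where \(C_Y(t)\) is the number of citations received in year \(Y\) by items the journal published in year \(t\), and \(P_t\) is the number of citable items the journal published in year \(t\).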