
How can education systems improve? A systematic literature review

  • School of Education Culture & Society

Research output: Contribution to journal › Review Article › Research › peer-review

Understanding what contributes to improving a system will help us tackle the problems in education systems that usually fail disproportionately in providing quality education for all, especially for the most disadvantaged sectors of the population. This paper presents the results of a qualitative systematic literature review aimed at providing a comprehensive overview of what education research can say about the factors that promote education systems’ improvement. This literature is emerging as a topic of empirical research that merges comparative education and school effectiveness studies, as standardized assessments make it possible to compare results across systems and over time. To examine and synthesize the papers included in this review, we followed a thematic analysis approach, identifying, analyzing, and reporting patterns across the included papers. From the coding process, four drivers of system improvement emerged: (1) system-wide approaches; (2) human capital; (3) governance and macro–micro level bridges; and (4) availability of resources.

Original language: English
Pages (from-to): 479-499
Number of pages: 21
Journal: Journal of Educational Change
Volume: 24
DOI: 10.1007/s10833-022-09453-7
Publication status: Published - 2023
  • Comparative education
  • Educational change
  • International education
  • System-wide improvement


Access to Document

  • 10.1007/s10833-022-09453-7

Other files and links

  • Link to publication in Scopus

T1 - How can education systems improve? A systematic literature review

AU - Barrenechea, Ignacio

AU - Beech, Jason

AU - Rivas, Axel

N1 - Publisher Copyright: © 2022, The Author(s), under exclusive licence to Springer Nature B.V.


KW - Comparative education

KW - Educational change

KW - International education

KW - System-wide improvement

UR - http://www.scopus.com/inward/record.url?scp=85127669663&partnerID=8YFLogxK

U2 - 10.1007/s10833-022-09453-7

DO - 10.1007/s10833-022-09453-7

M3 - Review Article

AN - SCOPUS:85127669663

SN - 1389-2843

JO - Journal of Educational Change

JF - Journal of Educational Change


A systematic literature review on educational recommender systems for teaching and learning: research trends, limitations and opportunities

Felipe Leite da Silva

1 Centro de Estudos Interdisciplinares em Novas Tecnologias da Educação, Universidade Federal do Rio Grande do Sul, Porto Alegre, Rio Grande do Sul, Brazil

Bruna Kin Slodkowski

Ketia Kellen Araújo da Silva, Sílvio César Cazella

2 Departamento de Ciências Exatas e Sociais Aplicadas, Universidade Federal de Ciências da Saúde de Porto Alegre, Porto Alegre, Rio Grande do Sul, Brazil

Associated Data

The datasets generated during the current study correspond to the papers identified through the systematic literature review and the quality evaluation results (refer to Section 3.4 of the paper). They are available from the corresponding author on reasonable request.

Recommender systems have become one of the main tools for personalized content filtering in the educational domain. Those that support teaching and learning activities, in particular, have gained increasing attention in recent years. This growing interest has motivated the emergence of new approaches and models in the field; despite this, there is a gap in the literature regarding current trends in how recommendations are produced, how recommenders are evaluated, and what the research limitations and opportunities for advancement in the field are. In this regard, this paper reports the main findings of a systematic literature review covering these four dimensions. The study is based on the analysis of a set of primary studies (N = 16 out of 756, published from 2015 to 2020) included according to defined criteria. Results indicate that the hybrid approach has been the leading strategy for recommendation production. Concerning the purpose of evaluation, recommenders were assessed mainly with regard to accuracy, and only a reduced number of studies investigated their pedagogical effectiveness. This evidence points to a potential research opportunity: the development of multidimensional evaluation frameworks that effectively support verifying the impact of recommendations on the teaching and learning process. We also identify and discuss the main limitations to clarify current difficulties that demand attention in future research.

Supplementary Information

The online version contains supplementary material available at 10.1007/s10639-022-11341-9.

Introduction

Digital technologies are increasingly integrated into different application domains. In education, particularly, there is vast interest in using them as mediators of the teaching and learning process. In this task, computational tools serve as instruments to support human knowledge acquisition across different educational methodologies and pedagogical practices (Becker, 1993).

In this sense, Educational Recommender Systems (ERS) play an important role for both educators and students (Maria et al., 2019). For instructors, these systems can contribute to their pedagogical practices through recommendations that improve their planning and assist in filtering educational resources. As for learners, by recognizing their preferences and educational constraints, recommenders can contribute to their academic performance and motivation by suggesting personalized learning content (Garcia-Martinez & Hamou-Lhadj, 2013).

Despite these benefits, there are known issues with the use of recommender systems in the educational domain. One of the main challenges is to find an appropriate match between users’ expectations and the recommendations (Cazella et al., 2014). Difficulties arise from differences in learners’ educational interests and needs (Verbert et al., 2012). The variety of individual factors that can influence a student’s learning process (Buder & Schwind, 2012) makes this challenge complex to overcome. From a recommender’s standpoint, this variety represents a diversity of inputs with the potential to tune recommendations for users.

From another perspective, that of technology and artificial intelligence, ERS are likely to suffer from issues already observed in general-purpose recommenders, such as the cold start and data sparsity problems (Garcia-Martinez & Hamou-Lhadj, 2013). Further problems are related to the approach used to generate recommendations; for instance, overspecialization is inherently associated with the way content-based recommender systems handle data (Iaquinta et al., 2008; Khusro et al., 2016). These issues make it difficult to design recommenders that best suit users’ learning needs and avoid user dissatisfaction in the short and long term.
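The cold start and data sparsity problems mentioned above can be made concrete with a small sketch (illustrative only; the ratings matrix and values are invented, not drawn from any reviewed study): most cells of a user-item ratings matrix are empty, and a brand-new user with no ratings gives a similarity-based recommender nothing to work with.

```python
import numpy as np

# Toy user-item ratings matrix (0 = no rating). All values are invented.
ratings = np.array([
    [5, 0, 0, 3, 0],   # user A
    [0, 4, 0, 0, 0],   # user B
    [0, 0, 0, 0, 0],   # user C: cold start -- no ratings yet
])

# Data sparsity: fraction of missing entries in the matrix.
sparsity = (ratings == 0).mean()
print(f"sparsity = {sparsity:.0%}")  # prints: sparsity = 80%

def cosine(u, v):
    """Cosine similarity; 0.0 when either profile is empty (all zeros)."""
    nu, nv = np.linalg.norm(u), np.linalg.norm(v)
    return float(u @ v / (nu * nv)) if nu and nv else 0.0

# Cold start: user C has an all-zero profile, so a collaborative filter
# finds no usable neighbour signal to rank items with.
print(cosine(ratings[2], ratings[0]))  # prints: 0.0
```

Even this tiny matrix is 80% empty; real educational platforms are typically far sparser, which is why the problem dominates collaborative approaches.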

From an educational point of view, issues emerge regarding how to evaluate ERS effectiveness. A usual strategy for measuring the quality of educational recommenders is to apply traditional recommender evaluation methods (Erdt et al., 2015). This approach determines system quality based on performance properties, such as precision and prediction accuracy. Nevertheless, in the educational domain, system effectiveness also needs to take into account the students’ learning performance. This dimension brings new complexities to successfully evaluating ERS.

As the ERS topic has attracted growing interest from the scientific community (Zhong et al., 2019), extensive research has been carried out in recent years to address these issues (Manouselis et al., 2010; Manouselis et al., 2014; Tarus et al., 2018; George & Lal, 2019). ERS has become a field of application and combination of different computational techniques, such as data mining, information filtering and machine learning (Tarus et al., 2018). This scenario indicates a diversity in the design and evaluation of recommender systems that support teaching and learning activities. Nonetheless, research is dispersed in the literature, and no recent study encompasses the current scientific efforts in the field or reveals how such issues are addressed in current research. Reviewing the evidence and synthesizing findings on how ERS produce recommendations, how they are evaluated, and what the research limitations and opportunities are can provide a panoramic perspective of the topic and support practitioners and researchers with implementation guidance and future research directions.

From the aforementioned perspective, this work aims to investigate and summarize the main trends and research opportunities in the ERS topic through a Systematic Literature Review (SLR). The study was conducted based on the last six years of publications, particularly regarding recommenders that support the teaching and learning process.

Main trends refer to recent research directions in the ERS field. They are analyzed with regard to how recommender systems produce recommendations and how they are evaluated. As mentioned above, these are significant dimensions related to current issues in the area. Specifically for recommendation production, this paper provides a three-axis analysis centered on the systems’ underlying techniques, input data and results presentation.

Additionally, research opportunities in the field of ERS, as well as its main limitations, are highlighted. Because the current comprehension of these aspects is fragmented in the literature, such an analysis can shed light on directions for future studies.

The SLR was carried out following the guidelines of Kitchenham and Charters (2007). The SLR is the main method for summarizing evidence related to a topic or research question (Kitchenham et al., 2009). Kitchenham and Charters’ (2007) guidelines, in turn, are one of the leading orientations for reviews on information technology in education (Dermeval et al., 2020).

The remainder of this paper is structured as follows. Section 2 presents the related works. Section 3 details the methodology used in carrying out the SLR. Section 4 covers the SLR results and related discussion. Section 5 presents the conclusion.

Related works

In the field of education, there is growing interest in technologies that support teaching and learning activities. For this purpose, ERS are strategic solutions for providing a personalized educational experience. Research in this sense has attracted the attention of the scientific community, and there has been an effort to map and summarize different aspects of the field over the last six years.

In Drachsler et al. (2015), a comprehensive review of technology-enhanced learning recommender systems was carried out. The authors analyzed 82 papers published from 2000 to 2014 and provided an overview of the area. Different aspects of the recommenders’ approaches, sources of information and evaluation were analyzed. Additionally, a categorization framework is presented, and the selected papers are classified according to it.

Klašnja-Milićević et al. (2015) conducted a review of recommendation systems for e-learning environments. The study focuses on the requirements, challenges and (dis)advantages of techniques in the design of this type of ERS. An analysis of collaborative tagging systems and their integration into e-learning platform recommenders is also discussed.

Ferreira et al. (2017) investigated the particularities of research on ERS in Brazil; papers published between 2012 and 2016 in three Brazilian scientific venues were analyzed. Rivera et al. (2018) presented a big picture of the ERS area through a systematic mapping; the study covered a larger set of papers and aimed to detect global characteristics of ERS research. With the same focus, but with a different combination of questions and repositories, Pinho et al. (2019) performed a systematic review on ERS. These works share a common concern with providing insights into systems’ evaluation methods and the main techniques adopted in the recommendation process.

Nascimento et al. (2017) carried out a SLR covering learning object recommender systems based on users’ learning styles. Learning object metadata standards, learning style theoretical models, the e-learning systems used to provide recommendations and the techniques used by the ERS were investigated.

Tarus et al. (2018) and George and Lal (2019) concentrated their reviews on ontology-based ERS. Tarus et al. (2018) examined the distribution of research published from 2005 to 2014 by year of publication. Furthermore, the authors summarized the techniques, knowledge representation, ontology types and ontology representations covered in the papers. George and Lal (2019), in turn, updated the contributions of Tarus et al. (2018) by investigating papers published between 2010 and 2019. The authors also discuss how ontology-based ERS can address recommender systems’ traditional issues, such as the cold start problem and rating sparsity.

Ashraf et al. (2021) directed their attention to course recommendation systems. Through a comprehensive review, the study summarized the techniques and parameters used by this type of ERS. Additionally, a taxonomy of the factors taken into account in the course recommendation process was defined. Salazar et al. (2021), on the other hand, conducted a review of affectivity-based ERS. The authors presented a macro analysis, identifying the main authors and research trends, and summarized different aspects of the recommender systems, such as the techniques used in affectivity analysis, the sources of affectivity data and how emotions are modeled.

Khanal et al. (2019) reviewed e-learning recommendation systems based on machine learning algorithms. A total of 10 papers from two scientific venues, published between 2016 and 2018, were examined. The study’s focal point was four categories of recommenders: those based on collaborative filtering, content-based filtering, knowledge, and hybrid strategies. The dimensions analyzed were the machine learning algorithms used, the recommenders’ evaluation process, the characterization of inputs and outputs, and the recommender challenges addressed.

Related works gaps and contribution of this study

The studies presented in the previous section have a diversity of scopes and dimensions of analysis; in general, however, they can be classified into two distinct groups. The first focuses on specific subjects within the ERS field, such as similar recommendation methods (George & Lal, 2019; Khanal et al., 2019; Salazar et al., 2021; Tarus et al., 2018) or the same kind of recommendable resources (Ashraf et al., 2021; Nascimento et al., 2017). This type of research scrutinizes the particularities of the recommenders and highlights aspects that are difficult to identify in reviews with a broader scope. Despite that, most of these reviews concentrate on analyses of recommenders’ operational features and offer limited discussion of crosswise issues, such as ERS evaluation and presentation approaches. Khanal et al. (2019), specifically, make contributions regarding evaluation, but their analysis is limited to four types of recommender systems.

The second group is composed of wider-scope reviews that include recommendation models based on a diversity of methods, inputs and output strategies (Drachsler et al., 2015; Ferreira et al., 2017; Klašnja-Milićević et al., 2015; Pinho et al., 2019; Rivera et al., 2018). Due to the very nature of systematic mappings, the research conducted by Ferreira et al. (2017) and Rivera et al. (2018) does not cover some topics in depth; for example, the data synthesized on ERS evaluations are limited to indicating the methods used. Ferreira et al. (2017), in particular, investigated only Brazilian recommendation systems, offering partial contributions to an understanding of the state of the art of the area. The same limitation of systematic mappings is noted in Pinho et al. (2019): the review was reported in a restricted number of pages, making it difficult to detail the findings. On the other hand, Drachsler et al. (2015) and Klašnja-Milićević et al. (2015) carried out comprehensive reviews that summarize specific and macro dimensions of the area. However, the papers included in their reviews were published up to 2014, and there is a gap concerning advances and trends in the field in the last six years.

Given the above, as far as the authors are aware, there is no wide-scope secondary study that aggregates the research achievements on recommendation systems that support teaching and learning in recent years. Moreover, a review in this sense is necessary, since personalization has become an important feature in the teaching and learning context, and ERS are one of the main tools for dealing with the different educational needs and preferences that affect individuals’ learning processes.

In order to widen the frontiers of knowledge in this field of research, this review aims to contribute to the area by presenting a detailed analysis of the following dimensions: how recommendations are produced and presented, how recommender systems are evaluated, and what the studies’ limitations and research opportunities are. Specifically, to summarize the current knowledge, a SLR was conducted based on four research questions (Section 3.1). The review focused on papers published from 2015 to 2020 in scientific journals. A quality assessment was performed to select the most mature systems. The data found on the investigated topics are summarized and discussed in Section 4.

Methodology

This study is based on the SLR methodology for gathering evidence related to the research topic investigated. As stated by Kitchenham and Charters (2007) and Kitchenham et al. (2009), this method provides the means for aggregating evidence from current research while prioritizing the impartiality and reproducibility of the review. Therefore, a SLR is based on a process that entails the development of a review protocol, which guides the selection of relevant studies and the subsequent extraction of data for analysis.

Guidelines for SLR are widely described in the literature, and the method can be applied to gather evidence in different domains, such as medicine and social science (Khan et al., 2003; Pai et al., 2004; Petticrew & Roberts, 2006; Moher et al., 2015). Particularly in the informatics in education area, Kitchenham and Charters’ (2007) guidelines have been reported as one of the main orientations (Dermeval et al., 2020). Their approach appears in several studies (Petri & Gresse von Wangenheim, 2017; Medeiros et al., 2019; Herpich et al., 2019), including mappings and reviews in the ERS field (Rivera et al., 2018; Tarus et al., 2018).

As mentioned in Section 1, Kitchenham and Charters’ (2007) guidelines were used in the conducted SLR. They are based on three main stages: the first for planning the review, the second for conducting it and the last for reporting the results. Following these orientations, the review was structured in three phases, with seven main activities distributed among them, as depicted in Fig. 1.

Fig. 1. Systematic literature review phases and activities

The first was the planning phase. The identification of the need for a SLR about teaching and learning support recommenders and the development of the review protocol occurred at this stage. In activity 1, a search for SLRs with the intended scope of this study was performed. The search did not return papers compatible with this review’s scope; the papers identified are described in Section 2. In activity 2, the review process was defined. The protocol was elaborated through rounds of discussion by the authors until consensus was reached. The outputs of activity 2 were the research questions, the search strategy, the paper selection strategy and the data extraction method.

Next was the conducting phase. At this point, the activities for identifying relevant papers (activity 3) and selecting them (activity 4) were executed. In activity 3, searches were carried out in seven repositories indicated by Dermeval et al. (2020) as relevant to the informatics in education area. The authors applied the search string in these repositories’ search engines; however, due to the large number of returned results, they established a limit of 600 to 800 papers to be analyzed. Thus, three repositories whose combined search results were within the established limits were chosen. The list of repositories considered for this review, and the selected ones, is given in Section 3.1. The search string used is also shown in Section 3.1.

In activity 4, studies were selected in two steps. In the first, inclusion and exclusion criteria were applied to each identified paper. Accepted papers had their quality assessed in the second step. Parsifal was used to manage the planning and conducting phase data. Parsifal is a web system, adhering to Kitchenham and Charters’ (2007) guidelines, that supports the conduction of SLRs. At the end of this step, relevant data were extracted (activity 5) and registered in a spreadsheet. Finally, in the reporting phase, the extracted data were analyzed in order to answer the SLR research questions (activity 6) and the results were recorded in this paper (activity 7).

Research questions, search string and repositories

Teaching and learning support recommender systems have particularities in their configuration, design and evaluation methods. Therefore, the following research questions (Table 1) were elaborated in an effort to synthesize this knowledge, as well as the main limitations and research opportunities in the field, from the perspective of the most recent studies:

SLR research questions

RQ1: How do teaching and learning support recommender systems produce recommendations?
Rationale: There is a variety of input parameters and techniques used for building recommenders for teaching and learning (Drachsler et al.; Manouselis et al.). They have been proposed in an attempt to find the best match between users’ expectations and recommendations, and each strategy carries intrinsic limitations by design (Garcia-Martinez & Hamou-Lhadj). Currently, studies are dispersed in the literature and, as far as the authors are aware, there is no research synthesizing the knowledge about the techniques and inputs used to tackle the field’s issues. Analyzing the trends of the last six years should clarify the current state of the art in how this kind of recommender has been designed.

RQ2: How do teaching and learning support recommender systems present recommendations?
Rationale: Complementarily to RQ1, this research question leads to a broad analysis of the architecture of teaching and learning support recommender systems proposed by the scientific community. It adds to RQ1 and widens the insights into the current state of the art in how ERS have been designed.

RQ3: How are teaching and learning support recommender systems evaluated?
Rationale: There are distinct methods aimed at measuring the quality dimensions of an educational recommender, and a previous study has suggested growing awareness of the need for education-focused evaluations in the ERS research field (Erdt et al.). Analyzing the last six years of trends in the evaluation of teaching and learning support recommender systems will shed light on the current state of the art and uncover which evaluation goals have been prioritized, as well as how recommenders’ pedagogical effectiveness has been measured.

RQ4: What are the limitations and research opportunities related to the teaching and learning support recommender systems field?
Rationale: As the ERS research area has developed over the past years, research limitations that hinder advancements in the field have been reported or can be observed in current studies. An in-depth investigation can also reveal under-explored topics that need further study, given their potential to contribute to the advancement of the area. As far as the authors are aware, the literature lacks an identification of the current limitations and opportunities in teaching and learning support recommender system research. This research question is intended to reveal them from the perspective of the last six years of scientific production, clarifying the needs for future research on this topic.

Regarding the search strategy, papers were selected from three digital repositories (Table 2). For the search, “Education” and “Recommender system” were defined as the keywords, and synonyms were derived from them as secondary terms (Table 3). From these words, the following search string was elaborated:

  • ("Education" OR "Educational" OR "E-learning" OR "Learning" OR "Learn") AND ("Recommender system" OR "Recommender systems" OR "Recommendation system" OR "Recommendation systems" OR "Recommending system" OR "Recommending systems")
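The string above is two AND-joined groups of OR terms. As a hypothetical sketch (not part of the review's actual tooling), the following helper shows how such a check could be applied to a paper's metadata, mirroring the criterion that the string must match the title, abstract or keywords:

```python
import re

# Keyword groups mirroring the review's search string (Section 3.1).
EDUCATION_TERMS = ["education", "educational", "e-learning", "learning", "learn"]
RECSYS_TERMS = ["recommender system", "recommender systems",
                "recommendation system", "recommendation systems",
                "recommending system", "recommending systems"]

def matches_search_string(text):
    """True if the text contains at least one term from each AND group."""
    t = text.lower()
    def has(terms):
        return any(re.search(r"\b" + re.escape(term) + r"\b", t) for term in terms)
    return has(EDUCATION_TERMS) and has(RECSYS_TERMS)

# Hypothetical paper titles, for illustration only.
print(matches_search_string("A hybrid recommender system for e-learning content"))  # True
print(matches_search_string("Deep learning for image segmentation"))                # False
```

In practice each repository's search engine interprets the boolean string itself; a local check like this is only useful for re-verifying matches on exported metadata.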

Repositories considered for the SLR

Name | No. of papers | Selected? | Rationale
IEEE Xplore | 310 | yes | Within defined threshold
ACM Digital Library | 60 | yes | Within defined threshold
Science Direct | 386 | yes | Within defined threshold
Springer Link | 3,613 | no | Returned results exceeded defined threshold
Engineering Village | 1,205 | no | Returned results exceeded defined threshold
Scopus | 1,760 | no | Returned results exceeded defined threshold
Web of Science | 1,018 | no | Returned results exceeded defined threshold

Keywords and their synonyms used in the search string

Keyword | Synonyms
Educational | Education, E-learning, Learn, Learning
Recommender system | Recommender systems, Recommendation system, Recommendation systems, Recommending system, Recommending systems

Inclusion and exclusion criteria

The first step in the selection of papers was the application of objective criteria; thus, a set of inclusion and exclusion criteria was defined. The approved papers formed the group of primary studies with potential relevance to the scope of the SLR. Table 4 lists the defined criteria. In the description column of Table 4, the criteria are stated, and in the id column they are identified with a code. The code was defined by appending an abbreviation of the respective kind of criterion (IC for Inclusion Criteria and EC for Exclusion Criteria) to an index following the sequence of the list. The id is used to reference its corresponding criterion in the rest of this document.

Inclusion and exclusion criteria of the SLR

Inclusion criteria:
IC1: Paper published from 2015 to 2020
IC2: Paper published in a scientific journal
IC3: Paper must be in English
IC4: Paper must be a full paper
IC5: The search string must match at least one of the following paper metadata fields: title, abstract or keywords
IC6: Paper should focus on the development of a recommendation system and its application in the educational domain as a tool to support teaching or learning
IC7: Paper must present the recommendation system's evaluation

Exclusion criteria:
EC1: Paper published before 2015 or after 2020
EC2: Paper not published in a scientific journal (e.g., conference or workshop)
EC3: Paper not in English
EC4: Paper is not a full paper
EC5: The search string cannot be found in any of the following paper metadata fields: title, abstract or keywords
EC6: Paper does not focus on the development of a recommendation system and its application in the educational domain as a tool to support teaching or learning
EC7: Paper does not present the recommendation system's evaluation
EC8: Paper is not a primary study

Since the focus of this review is on the analysis of recent ERS publications, only studies from the past six years (2015–2020) were screened (see IC1). Targeting mature recommender systems, only full papers from scientific journals that present the recommendation system’s evaluation were considered (see IC2, IC4 and IC7). In addition, only works written in English were selected, because they are the most numerous and are within the reading ability of the authors (see IC3). The search string was verified against papers’ titles, abstracts and keywords to ensure that only studies related to the ERS field were screened (see IC5). IC6, specifically, delimited the subject of the selected papers and aligned it with the scope of the review; it also prevented the selection of secondary studies (e.g., other reviews or systematic mappings). Conversely, exclusion criteria were defined to clarify that papers contrasting with the inclusion criteria should be excluded from the review (see EC1 to EC8). Finally, duplicates were marked and, when all criteria were met, only the latest version was selected.
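The objective part of this screening step can be sketched as a predicate over paper metadata. This is a hypothetical illustration, not the authors' tooling; IC5 and IC6 require textual and subject analysis and are omitted here:

```python
from dataclasses import dataclass

@dataclass
class Paper:
    year: int
    venue_type: str    # "journal", "conference", ...
    language: str
    full_paper: bool
    has_evaluation: bool
    primary_study: bool

def include(p: Paper) -> bool:
    """Apply the review's objective criteria (IC1-IC4, IC7, EC8)."""
    return (2015 <= p.year <= 2020          # IC1 / EC1
            and p.venue_type == "journal"   # IC2 / EC2
            and p.language == "English"     # IC3 / EC3
            and p.full_paper                # IC4 / EC4
            and p.has_evaluation            # IC7 / EC7
            and p.primary_study)            # EC8

# Invented records: one passes, one fails IC1 (year), one fails IC2 (venue).
papers = [
    Paper(2018, "journal", "English", True, True, True),
    Paper(2014, "journal", "English", True, True, True),
    Paper(2019, "conference", "English", True, True, True),
]
print([include(p) for p in papers])  # [True, False, False]
```

Encoding each inclusion criterion as a paired boolean clause also makes the symmetry with the exclusion criteria explicit: every EC is simply the negation of its IC.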

Quality evaluation

The second step in the study selection activity was the quality evaluation of the papers. A set of questions was defined, with answers of different weights, to estimate the quality of the studies. The objective of this phase was to filter the studies with higher: (i) validity; (ii) detail about the context and implications of the research; and (iii) description of the proposed recommenders. Studies that detailed the configuration of the experiment and carried out an external validation of the ERS obtained a higher weight in the quality assessment. Hence, the questions related to recommender evaluation (QA8 and QA9) ranged from 0 to 3, while the others ranged from 0 to 2. The questions and their respective answers are presented in Table 7 (see Appendix). Each evaluated paper had a total weight calculated according to Formula 1:

Quality evaluation questions and answers

Id | Questions | Available answers (weight)

QA1: Does the paper clearly present the research contributions?

The paper explicitly lists or describes the research contributions. They are clearly connected with the presented results. (2pts)

The paper provides a general description of research contributions. They are clearly connected with the presented results. (1pt)

The paper provides a general description of research contributions. They are not clearly connected with the presented results. (0.5pts)

The paper does not clearly provide a research contribution. If presented, the contributions are not clearly connected with the results of the study. (0)

QA2: Does the paper clearly present how the research differs from other related works?

The research is compared in detail with related works. Authors provide the strengths and/or weaknesses of each related work and position their research granularly, stating contributions explicitly. (2pts)

The research is compared in detail with related works. Authors provide a general description of the related works and position their research, stating the contribution explicitly. (1pt)

The paper provides a general description of related works. A brief introduction to each study or group of studies is given without identifying strengths and/or weaknesses. The authors explain how their research stands out without explicitly comparing it with the related works. (0.5pts)

The paper does not position the research in relation to other works in the area. The unique contribution of the study is general or not presented explicitly. (0)

QA3: Does the paper clearly present the limitations of the research?

The paper lists or describes the limitations of the study. If the evaluation produces any results that are difficult to explain, the challenges are presented in detail. (2pts)

The paper presents the study limitations with a general description. If the evaluation produces any results that are difficult to explain, the paper describes the challenges in a general way. (1pt)

The paper does not explicitly present or list the limitations of the study. Nonetheless, the paper presents some research-related challenges when discussing the results of the experiments or in the conclusion. (0.5pts)

The paper does not present the limitations of the study. If the evaluation presents any results that are difficult to explain, the paper does not describe the challenges. (0)

QA4: Does the paper clearly present directions for future work?

The paper explicitly lists or describes directions for future work. These are based on experiment results or on explicitly discussed limitations. (2pts)

The paper explicitly lists or describes directions for future work. Yet, such directions are not linked to the experiment results, or the paper does not present their motivations or foundations. (1pt)

The paper presents directions for future work only in general terms. (0.5pts)

The paper does not present directions for future work. (0)

QA5: Does the paper clearly present the inputs for the recommendation generation process?

The paper explicitly presents the recommender system's input parameters and the way they are collected. When the recommender does not produce recommendations based on a user profile, the authors describe the input elements used. When such information cannot be understood directly, the authors describe in detail each element that composes it and how these elements are obtained. (2pts)

The paper presents the recommender input parameters through a general description, or a partial omission of some information is noticeable, for example through the use of "etc." at the end of a list. (1pt)

The paper does not present the recommender input parameters. (0)

QA6: Does the paper clearly present the techniques/methods for the recommendation generation process?

The paper describes the techniques and methods used for the recommendation generation process. They are presented in a continuous flow, beginning with an overview followed by specific details of the proposal. Authors may or may not use one or more illustrations to present the iterations and how the proposed ERS functions in detail. (2pts)

The paper describes the techniques and methods used for the recommendation generation process. Yet, these elements are not presented in a continuous flow beginning with an overview followed by specific details of the proposal. Authors may or may not use one or more illustrations to present the iterations and how the proposed ERS functions in detail. (1pt)

The paper presents the techniques and methods used for the recommendation generation process only in general terms. The presentation does not have a continuous flow that begins with an overview followed by the specific details of the proposal. The authors do not use illustrations to present the iterations and the functioning of the proposed recommender, or use illustrations that lack components crucial for their understanding. (0.5pts)

The paper does not present the techniques and methods used in the elaboration of the recommender. (0)

QA7: Does the paper clearly present the target audience of the recommendations?

The paper explicitly presents the recommender's target audience, contextualizes how the research addresses or minimizes a specific issue of that audience and, whenever possible, provides specific characteristics, such as age range, education level (e.g., students) or teaching level (e.g., professors). (2pts)

The paper explicitly presents the recommender's target audience and contextualizes how the research resolves or minimizes a specific problem of this audience. However, it does not present the specific characteristics of this audience. (1pt)

The paper presents a general description of the recommender's target audience. (0.5pts)

The paper does not specify the recommender's target audience (e.g., individuals that use the system are identified only as users). (0)

QA8: Does the paper clearly present the setting of the experiment?

The paper explicitly describes the settings of the experiment. All of these main elements are listed: 1) Number of users; 2) Number of items to recommend; 3) Kind of recommended items; 4) Source of the data used. (3pts)

The paper explicitly describes the settings of the experiment. Still, one of the following elements is not explained: 1) Number of users; 2) Number of items to be recommended; 3) Kind of recommended items; 4) Source of the data used. (1.5pts)

The paper provides a general description of the experiment settings. Yet, little detail is given regarding the experiment configuration, and more than one of the following key elements is not explained: 1) Number of users; 2) Number of items to be recommended; 3) Kind of recommended items; 4) Source of the data used. (0.75pts)

The authors do not describe the experiment settings. (0)

QA9: Does the paper clearly present how the recommender was evaluated?

The paper describes the evaluation steps and whether the experiment is based on methodologies and/or theories. It also explicitly presents how the evaluation carried out validates that the research offers direct contributions to the educational development of the users or those who relate to them. An external validation was conducted through an online evaluation or an experiment based on control/experimental groups. (3pts)

The paper describes the evaluation steps and whether the experiment is based on methodologies and/or theories. It also explicitly presents how the evaluation carried out validates that the research offers direct contributions to the educational development of the users or those who relate to them. An internal validation was conducted through an offline evaluation, followed or not by a questionnaire-based user study. (2.5pts)

The paper describes the evaluation steps, but does not explicitly justify the approach used. It also explicitly presents how the evaluation carried out validates that the research offers direct contributions to the educational development of the users or those who relate to them. Either an internal or an external validation was conducted. (2pts)

The paper describes the evaluation steps and whether the experiment is based on methodologies and/or theories. It does not explicitly present how the evaluation carried out validates that the research offers direct contributions to the educational development of the users or those who relate to them. (1pt)

The paper describes the evaluation steps; however, it does not explicitly justify the experiment approach. It also does not explicitly present how the evaluation carried out validates that the research offers direct contributions to the educational development of the users or those who relate to them. (0.75pts)

The paper presents the evaluation only in general terms and does not explicitly justify the experiment approach. It also does not explicitly present how the evaluation carried out validates that the research offers direct contributions to the educational development of the users or those who relate to them. (0.35pts)

The paper does not present the recommender evaluation. (0)

Each paper's total weight ranges from 0 to 10. Only works that reached a minimum weight of 7 were accepted.
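The acceptance rule above can be sketched in code. This is a minimal illustration, not the authors' Formula 1 (which is not reproduced in this excerpt): it assumes the total weight is simply the sum of the per-question answer weights, and the individual scores below are invented.

```python
# Hypothetical quality-assessment scores for one paper (QA1-QA9).
# QA8/QA9 range from 0 to 3; the other questions from 0 to 2 (per the rubric).
scores = {"QA1": 2, "QA2": 1, "QA3": 0.5, "QA4": 1,
          "QA5": 2, "QA6": 1, "QA7": 0.5, "QA8": 1.5, "QA9": 2.5}

total = sum(scores.values())   # assumed: the total weight is a plain sum
accepted = total >= 7          # acceptance threshold from the protocol
print(total, accepted)
```

Under this assumption the invented paper scores 12.0 and passes the threshold; the authors' actual Formula 1 may normalize or weight the sum differently.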

Screening process

The paper screening process occurred as shown in Fig. 2. Initially, three authors carried out the identification of the studies. In this activity, the search string was applied to the search engines of the repositories, along with the inclusion and exclusion criteria, through filtering settings. Two searches were undertaken on the three repositories at distinct moments, one in November 2020 and another in January 2021. The second one was performed to ensure that all papers published in 2020 were counted. In total, 756 preliminary primary studies were returned and their metadata were registered in Parsifal.


Flow of the paper search and selection process

Following the protocol, the selection activity was initiated. First, the duplicate-detection feature of Parsifal was used. A total of 5 duplicate papers were found and the oldest copies were discarded. Afterwards, papers were divided into groups and distributed among the authors. Inclusion and exclusion criteria were applied by reading titles and abstracts. In cases in which it was not possible to determine the eligibility of a paper based on these two fields, the body of the text was read until all criteria could be applied accurately. After this step, 41 studies remained. Once more, papers were divided into three groups and each set of works was evaluated by one author. Studies were read in full and weighted according to each quality assessment question. At any stage of this process, when questions arose, the authors reached a solution by consensus. As the final result of the selection activity, 16 papers were approved for data extraction.
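The duplicate-handling step can be sketched as follows; this is a minimal illustration in which the paper records and the "keep the record from the latest search" rule are hypothetical stand-ins for Parsifal's built-in feature.

```python
# Hypothetical search results: the same title may appear in both searches.
records = [
    {"title": "An ERS for informal learning", "search": "2020-11"},
    {"title": "An ERS for informal learning", "search": "2021-01"},
    {"title": "Course recommendation with ML", "search": "2021-01"},
]

# Keep only the record from the most recent search for each title.
latest = {}
for rec in records:
    key = rec["title"]
    if key not in latest or rec["search"] > latest[key]["search"]:
        latest[key] = rec

deduplicated = list(latest.values())
print(len(deduplicated))
```

Here the older copy of the duplicated title is discarded, leaving two unique records.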

Procedure for data analysis

Data from the selected papers were extracted into a data collection form that registered general and specific information. The general information extracted was: reviewer identification, date of data extraction, and the title, authors and origin of the paper. General information was used to manage the data extraction activity. The specific information was: recommendation approach, recommendation techniques, input parameters, data collection strategy, method for data collection, evaluation methodology, evaluation settings, evaluation approaches, and evaluation metrics. This information was used to answer the research questions. Tabulated records were interpreted and a descriptive summary of the findings was prepared.
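One row of the data-collection form described above can be represented as a simple record; the field names follow the text, while the values are invented placeholders, not data from an actual reviewed paper.

```python
# One row of the data-collection form (field values are invented placeholders).
extraction_record = {
    # General information (used to manage the extraction activity)
    "reviewer": "R1",
    "extraction_date": "2021-02-01",
    "title": "A hypothetical ERS paper",
    "authors": ["A. Author"],
    "origin": "Journal X",
    # Specific information (used to answer the research questions)
    "recommendation_approach": "hybrid",
    "recommendation_techniques": ["collaborative filtering", "fuzzy logic"],
    "input_parameters": ["learning style", "ratings"],
    "data_collection_strategy": "explicit",
    "data_collection_method": "questionnaire",
    "evaluation_methodology": "offline experiment",
    "evaluation_settings": "pre-collected dataset",
    "evaluation_approaches": "internal validation",
    "evaluation_metrics": ["precision", "recall"],
}
```

Tabulating one such record per selected paper yields the dataset that was later summarized descriptively.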

Results and discussion

In this section, the SLR results are presented. First, an overview of the selected papers is introduced. Next, the findings are analyzed from the perspective of each research question in a corresponding subsection.

Selected papers overview

Each selected paper presents a distinct recommendation approach that advances the ERS field. An overview of these studies follows.

Sergis and Sampson ( 2016 ) present a recommendation system that supports educators' teaching practices through the selection of learning objects from educational repositories. It generates recommendations based on the instructors' level of proficiency in ICT competences. In Tarus et al. ( 2017 ), the recommendations are targeted at students. The study proposes an e-learning resource recommender based on both user and item information mapped through ontologies.

Nafea et al. ( 2019 ) propose three recommendation approaches. They combine item ratings with students' learning styles for learning object recommendation. Klašnja-Milićević et al. ( 2018 ) present a recommender of learning materials based on tags defined by the learners. The recommender is incorporated in the Protus e-learning system.

In Wan and Niu ( 2016 ), a recommender based on mixed concept mapping and immunological algorithms is proposed. It produces sequences of learning objects for students. In a different approach, the same authors incorporate self-organization theory into ERS. Wan and Niu ( 2018 ) deal with the notion of self-organizing learning objects. In this research, resources behave as individuals that can move towards learners. This movement results in recommendations and is triggered based on students' learning attributes and actions. In Wan and Niu ( 2020 ), in turn, self-organization refers to the grouping of students motivated by their learning needs. The authors propose an ERS that recommends self-organized cliques of learners and, based on these, recommends learning objects.

Zapata et al. ( 2015 ) developed a learning object recommendation strategy for teachers. The study describes a collaborative methodology and voting aggregation strategies for the group recommendations. This approach is implemented in the Delphos recommender system. In a similar line of research, Rahman and Abdullah ( 2018 ) present an ERS that recommends Google results tailored to students' academic profiles. The proposed system classifies learners into groups and, according to the similarity of their members, indicates web pages related to shared interests.

Wu et al. ( 2015 ) propose a recommendation system for e-learning environments. In this study, the complexity and uncertainties related to user profile data and learning activities are modeled through tree structures combined with fuzzy logic. Recommendations are produced from matches between these structures. Ismail et al. ( 2019 ) developed a recommender to support informal learning. It suggests Wikipedia content taking into account the platform's unstructured textual data and user behavior.

Huang et al. ( 2019 ) present a system for recommending optional courses. The system's indications rely on the student's curriculum time constraints and on the similarity of academic performance between the student and senior students. The time that individuals dedicate to learning is also a relevant factor in Nabizadeh et al. ( 2020 ). In this research, a learning path recommender that includes lessons and learning objects is proposed. The system estimates the learner's expected performance score and, based on that, produces a learning path that satisfies the learner's time constraints. The recommendation approach also indicates auxiliary resources for those who do not reach the estimated performance.

Fernandez-Garcia et al. ( 2020 ) deal with recommendations of subjects based on a dataset with few instances and high sparsity. The authors developed a model based on several data mining and machine learning techniques to support students' decisions in choosing subjects. Wu et al. ( 2020 ) create a recommender that captures students' mastery of a topic and produces a list of exercises with a level of difficulty adapted to them. Yanes et al. ( 2020 ) developed a recommendation system, based on different machine learning algorithms, that suggests appropriate actions to help teachers improve the quality of their teaching strategies.

How do teaching and learning support recommender systems produce recommendations?

The process of generating recommendations is analyzed along two axes. The underlying techniques of the recommender systems are discussed first; then the input parameters are covered. Details of the studies are provided in Table 5.

Summary of ERS techniques and input parameters used in the selected papers

Research (citation) | Recommendation approach | Main techniques | Main input parameters | Type of data collection strategy | Method for data collection
Sergis and Sampson (2016) | Hybrid (collaborative filtering and fuzzy logic) | Neighbor users based on Euclidean distance and fuzzy sets | (i) ICT competency; (ii) Rating (users' preferences) | Hybrid | (i) Collection of users' usage data; (ii) User defined
Tarus et al. (2017) | Hybrid (collaborative filtering, sequential pattern mining and knowledge representation) | Neighbor users based on cosine similarity, Generalized Sequential Pattern algorithm and student/learning resource domain ontologies | (i) Learning style; (ii) Learning level; (iii) Item attributes; (iv) Rating (users' preferences) | Explicit | (i) Questionnaire; (ii) Online test; (iii) N/A; (iv) User defined
Nafea et al. (2019) | Collaborative filtering, content-based filtering and hybrid (combining the two) | Neighbor users based on Pearson correlation, neighbor items based on Pearson correlation and cosine similarity | (i) Learning style; (ii) Item attributes; (iii) Rating (users' preferences) | Explicit | (i) Questionnaire; (ii) Specialist defined
Wan and Niu (2018) | Self-organization based | Self-organization theory | (i) Learning style; (ii) Item attributes; (iii) Learning objectives; (iv) Learners' behaviors | Hybrid | (i) Questionnaire; (ii) Specialist defined / students' feedback; (iii) N/A; (iv) Collection of users' usage data
Rahman and Abdullah (2018) | Group based | Groupzation algorithm | (i) Academic information; (ii) Learners' behaviors; (iii) Contextual information | Implicit | (i) Learning management system records; (ii) Collection of users' usage data; (iii) Tracking changes in user academic records and behavior
Zapata et al. (2015) | Hybrid (techniques for group-based recommendation) | Collaborative methodology, voting aggregation strategies and meta-learning techniques | Rating (users' preferences) | Explicit | User defined
Wan and Niu (2016) | Hybrid (knowledge representation and heuristic methods) | Mixed concept mapping and immune algorithm | (i) Learning styles; (ii) Item attributes | Explicit | (i) Questionnaire; (ii) Specialist defined / students' feedback
Klašnja-Milićević et al. (2018) | Hybrid (social tagging and sequential pattern mining) | Most popular tags algorithms and weighted hybrid strategy | (i) Tags; (ii) Learners' behaviors | Hybrid | (i) User defined; (ii) Collection of users' usage data
Wu et al. (2015) | Hybrid (knowledge representation, collaborative filtering and fuzzy logic) | Fuzzy tree matching method, neighbor users based on cosine similarity and fuzzy set strategy | (i) Learning activities; (ii) Learning objectives; (iii) Academic information; (iv) Rating (users' preferences) | Hybrid | (i) Collection of users' usage data; (ii, iii, iv) User defined
Ismail et al. (2019) | Hybrid (graph based and fuzzy logic) | Structural topical graph analysis algorithms and fuzzy set | (i) Learning interests; (ii) Thesaurus | Implicit | (i) Collection of users' usage data; (ii) Data extraction from another system
Huang et al. (2019) | Cross-user-domain collaborative filtering | Neighbor users based on cosine similarity | Academic information | Explicit | Input file/dataset
Yanes et al. (2020) | Hybrid (machine learning algorithms) | One-vs-All, Binary Relevance, Classifier Chain, Label Powerset, Multi-Label k-Nearest Neighbors | Academic information | Explicit | Input file/dataset
Wan and Niu (2020) | Hybrid (fuzzy logic, self-organization and sequential pattern mining) | Intuitionistic fuzzy logic, self-organization theory and PrefixSpan algorithm | (i) Learning style; (ii) Learning objectives; (iii) Tags; (iv) Academic information; (v) Information from academic social relations | Hybrid | (i, ii) Questionnaire; (iii) Extracted from learners' learning profiles; (iv, v) Extracted from e-learning platform records
Fernandez-Garcia et al. (2020) | Hybrid (data mining and machine learning algorithms) | Encoding, feature engineering, scaling, resampling, Random Forest, Logistic Regression, Decision Tree, Support Vector Machine, K-Nearest Neighbors, Multilayer Perceptron and Gradient Boosting Classifier | Academic information | Explicit | Input file/dataset
Wu et al. (2020) | Hybrid (neural network techniques) | Recurrent Neural Networks and Deep Knowledge Tracing | Answer records | Explicit | Input file/dataset
Nabizadeh et al. (2020) | Hybrid (graph based, clustering technique and matrix factorization) | Depth-first search, k-means and matrix factorization | (i) Background knowledge; (ii) Users' available time; (iii) Learning score | Implicit | (i) Collection of users' usage data; (ii, iii) Estimated data

Techniques approaches

Analysis of the selected papers shows that hybrid recommendation systems are predominant. Such recommenders are characterized by computing predictions through a set of two or more algorithms in order to mitigate or avoid the limitations of pure recommendation systems (Isinkaye et al., 2015 ). Of the sixteen analyzed papers, thirteen (p = 81.25%) are based on hybridization. This tendency seems to be related to the support that the hybrid approach provides for developing recommender systems that must meet multiple educational needs of users. For example, Sergis and Sampson ( 2016 ) proposed a recommender based on two main techniques: fuzzy sets to deal with uncertainty about teacher competence level, and Collaborative Filtering (CF) to select learning objects based on neighbors who may have similar competences. In Tarus et al. ( 2017 ), student and learning resource profiles are represented as ontologies. The system calculates predictions based on them and recommends learning items through a mechanism that applies collaborative filtering followed by a sequential pattern mining algorithm.

Moreover, the hybrid approach that combines CF and Content-Based Filtering (CBF), although a traditional technique (Bobadilla, Ortega, Hernando and Gutiérrez, 2013), seems not to be popular in teaching and learning support recommender systems research. Among the selected papers, only Nafea et al. ( 2019 ) present a proposal in this regard. Additionally, the extracted data indicate that a significant number of hybrid recommendation systems (p = 53.85%, n = 7) have been built by combining methods for the treatment or representation of data, such as ontologies and fuzzy sets, with methods for generating recommendations. For example, Wu et al. ( 2015 ) structure user profile data and learning activities through fuzzy trees. In these structures, the values assigned to the nodes are represented by fuzzy sets. The fuzzy tree data model and users' ratings feed a tree-structured data matching method and a CF algorithm for similarity calculation.

The collaborative filtering recommendation paradigm, in turn, plays an important role in the research. Nearly a third of the studies (p = 30.77%, n = 4) that propose hybrid recommenders include a CF-based strategy. In fact, this is the most frequent pure technique in the research set. A total of 31.25% (n = 5) are based on an adapted version of CF or combine it with other approaches. CBF-based recommenders, in contrast, have not shared the same popularity. This technique is an established recommendation approach that produces results based on the similarity between items known to the user and other recommendable items (Bobadilla et al., 2013 ). Only Nafea et al. ( 2019 ) propose a CBF-based recommendation system.

The user-based CF variant is also widely used in the analyzed research. In this version, predictions are calculated from the similarity between users, as opposed to the item-based version, where predictions are based on item similarities (Isinkaye et al., 2015 ). All identified CF-based recommendation systems, whether pure or combined with other techniques, use this variant.
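As a concrete illustration of the user-based CF variant described above, the sketch below computes cosine similarity between users' rating vectors and predicts a score from the most similar neighbors; the rating matrix, user names and neighborhood size are invented for illustration and are not taken from any reviewed paper.

```python
import math

# Hypothetical user-item rating matrix (0 = not rated).
ratings = {
    "alice": [5, 3, 0, 1],
    "bob":   [4, 3, 0, 1],
    "carol": [1, 1, 0, 5],
}

def cosine(u, v):
    """Cosine similarity between two rating vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def predict(target, item, k=2):
    """Predict the target user's rating of `item` as a similarity-weighted
    average over the k most similar users who rated that item."""
    neighbours = sorted(
        ((cosine(ratings[target], vec), vec[item])
         for user, vec in ratings.items()
         if user != target and vec[item] > 0),
        reverse=True,
    )[:k]
    weight = sum(sim for sim, _ in neighbours)
    return sum(sim * r for sim, r in neighbours) / weight if weight else 0.0
```

In the item-based variant, the same similarity would instead be computed between item columns rather than user rows.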

The above findings seem to be related to the growing perception, in the education domain, of the relevance of a student-centered teaching and learning process (Krahenbuhl, 2016 ; Mccombs, 2013 ). Recommendation approaches based on users' profiles, such as their interests, needs, and capabilities, naturally fit this notion and are more widely used than those based on other information, such as the characteristics of the recommended items.

Input parameters approaches

Regarding the inputs consumed in the recommendation process, the collected data show that the main parameters are attributes related to users' educational profiles. Examples are ICT competences (Sergis & Sampson, 2016 ), learning objectives (Wan & Niu, 2018 ; Wu et al., 2015 ), learning styles (Nafea et al., 2019 ), learning levels (Tarus et al., 2017 ) and different kinds of academic data (Yanes et al., 2020 ; Fernández-García et al., 2020). Only 25% (n = 4) of the systems apply item-related information in the recommendation process. Furthermore, with the exception of the CBF-based recommendation of Nafea et al. ( 2019 ), these are based on a combination of item and user information. A complete list of the identified input parameters is provided in Table 5.

Academic information and learning styles, compared to other parameters, feature prominently in the research. They appear in 37.5% (n = 6) and 31.25% (n = 5) of the papers, respectively. Students' scores (Huang et al., 2019 ), academic background (Yanes et al., 2020 ), learning categories (Wu et al., 2015 ) and subjects taken (Fernández-García et al., 2020) are some of the academic data used. Learning styles, in turn, are predominantly based on the theory of Felder ( 1988 ). Wan and Niu ( 2016 ), exceptionally, combine Felder ( 1988 ), Kolb et al. ( 2001 ) and Betoret ( 2007 ) to build a specific notion of learning styles. This notion is also used in two other studies, carried out by the same authors, together with a questionnaire that they also developed themselves (Wan & Niu, 2018 , 2020 ).

Regarding the way inputs are captured, it was observed that explicit feedback is prioritized over other data collection strategies. In this approach, users have to directly provide the information that will be used in the process of preparing recommendations (Isinkaye et al., 2015 ). Half of the analyzed studies are based only on explicit feedback. The use of graphical interface components (Klašnja-Milićević et al., 2018 ), questionnaires (Wan & Niu, 2016 ) and manual entry of datasets (Wu et al., 2020 ; Yanes et al., 2020 ) are the main methods identified.

Only 18.75% (n = 3) of the ERS rely solely on gathering information through implicit feedback, that is, on inputs inferred by the system (Isinkaye et al., 2015 ). This type of data collection appears to be more popular when applied together with an explicit feedback method to enhance the prediction tasks. Recommenders that combine both approaches occur in 31.25% (n = 5) of the studies. The implicit data collection methods identified are tracking of users' usage data, such as access, browsing and rating history (Rahman & Abdullah, 2018 ; Sergis & Sampson, 2016 ; Wan & Niu, 2018 ), data extraction from another system (Ismail et al., 2019 ), monitoring of users' session data (Rahman & Abdullah, 2018 ) and data estimation (Nabizadeh et al., 2020 ).

The aforementioned results indicate that, in the context of teaching and learning support recommender systems, implicit data collection has usually been explored as a complement to explicit collection. A possible rationale is that inferred information is noisy and less accurate (Isinkaye et al., 2015 ) and, therefore, the recommendations produced from it are harder to adjust to users' expectations (Nichols, 1998 ). This makes it difficult to apply the strategy in isolation, and it may produce greater user dissatisfaction than the acquisition burden imposed by explicit input strategies.
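A hybrid data-collection strategy of the kind discussed above might combine the two signal types like this; the events, weights, and blending rule are illustrative assumptions, not a method taken from any of the reviewed papers.

```python
# Explicit feedback: ratings the user entered directly (1-5 scale).
explicit = {"item_a": 5, "item_c": 2}

# Implicit feedback: inferred from usage logs (here, view counts),
# mapped to pseudo-ratings on the same 1-5 scale.
view_counts = {"item_a": 10, "item_b": 7, "item_c": 1}
max_views = max(view_counts.values())
implicit = {item: 1 + 4 * n / max_views for item, n in view_counts.items()}

# Blend: trust the explicit rating when present, otherwise fall back to
# the (noisier) implicit estimate.
profile = {item: explicit.get(item, implicit[item]) for item in view_counts}
```

The fallback order reflects the cited observation that inferred signals are less accurate than user-provided ones.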

How do teaching and learning support recommender systems present recommendations?

From the analyzed papers, two approaches for presenting recommendations were identified. The majority of the proposed ERS are based on a list of items ranked according to a per-user prediction calculation (p = 87.5%, n = 14). This strategy is applied in all cases where the supported task is to find good items that assist users in teaching and learning tasks (Ricci et al., 2015 ; Drachsler et al., 2015 ). The second approach is based on the generation of a learning pathway. In this case, recommendations are displayed as a series of linked items tied together by prerequisites. Only two recommenders use this approach. In them, the sequence is established by learning object association attributes (Wan & Niu, 2016 ) or by a combination of the user's prior knowledge, the time they have available and a learning score (Nabizadeh et al., 2020 ). These ERS are associated with the item sequence recommendation task and are intended to guide users who wish to acquire specific knowledge (Drachsler et al., 2015 ).
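The "ranked list" presentation strategy described above reduces to sorting per-user prediction scores; a minimal sketch with invented item names and scores:

```python
# Hypothetical per-user prediction scores for candidate items.
predictions = {"video_1": 0.91, "quiz_3": 0.34, "article_7": 0.77, "slides_2": 0.58}

def top_n(scores, n=3):
    """Return the n highest-scoring items, best first ('find good items' task)."""
    return [item for item, _ in sorted(scores.items(), key=lambda kv: -kv[1])][:n]

print(top_n(predictions))
```

A learning-pathway recommender would instead order items by prerequisite links rather than by score alone.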

On further examination, it is observed that more than half of the studies (62.5%, n = 10) do not present details of how the recommendation list is shown to the end user. In Huang et al. ( 2019 ), for example, there is a vague description of the production of predicted scores for students and of a list of the top-n optional courses, but it is not specified how this list is displayed. This may be related to the fact that most of these recommenders do not report an integration into another system (e.g., learning management systems) or the intent to make them available as standalone tools (e.g., web or mobile recommendation systems). The absence of such requirements mitigates the need to develop a refined presentation interface. Only Tarus et al. ( 2017 ), Wan and Niu ( 2018 ) and Nafea et al. ( 2019 ) propose recommenders incorporated in an e-learning system yet do not detail the way in which results are exhibited. Of the six papers that provide insights about recommendation presentation, a few (33.33%, n = 2) have a graphical interface that explicitly seeks to capture the attention of a user who may be performing another task in the system. This approach highlights recommendations and is common in commercial systems (Beel, Langer and Genzmehr, 2013). In Rahman and Abdullah ( 2018 ), a panel entitled "recommendations for you" is used. In Ismail et al. ( 2019 ), a pop-up box with suggestions is displayed to the user. The remaining studies exhibit organic recommendations, i.e., items naturally arranged for user interaction (Beel et al., 2013 ).

In Zapata et al. ( 2015 ), after the user defines some parameters, a list of recommended learning objects is returned, similarly to a search engine result. As for aggregation methods, another item recommended by the system, only the strategy that best fits the interests of the group is recommended. The result is visualized through a five-star Likert scale that represents the users' consensus rating. In Klašnja-Milićević et al. ( 2018 ) and Wu et al. ( 2015 ), the recommenders' results are listed in the main area of the system. In Nabizadeh et al. ( 2020 ), the learning path occupies a panel on the screen and the items associated with it are displayed as the user progresses through the steps. The display of the auxiliary learning objects is not described in the paper. These last three recommenders do not include filtering settings and distance themselves from the archetype of a search engine.

In addition, a significant share of the studies focuses on recommending learning objects (56.25%, n = 9). The other recommendable items identified are learning activities (Wu et al., 2015), pedagogical actions (Yanes et al., 2020), web pages (Ismail et al., 2019; Rahman & Abdullah, 2018), exercises (Wu et al., 2020), aggregation methods (Zapata et al., 2015), lessons (Nabizadeh et al., 2020) and subjects (Fernández-García et al., 2020). None of the studies relates the way of displaying results to the type of item recommended. This topic needs further investigation to determine whether there are more appropriate ways to present specific types of items to the user.

How are teaching and learning support recommender systems evaluated?

In ERS, there are three main evaluation methodologies (Manouselis et al., 2013). One of them is the offline experiment, which uses pre-collected or simulated data to test recommenders’ prediction quality (Shani & Gunawardana, 2010). The user study is the second approach: it takes place in a controlled environment where information on real user interactions is collected, for example through questionnaires and A/B tests (Shani & Gunawardana, 2010). Finally, the online experiment, also called real-life testing, is one in which recommenders are used under real conditions by the intended users (Shani & Gunawardana, 2010).

In view of these definitions, the analyzed studies comprise only user studies and offline experiments. Each of these methods was identified in 68.75% (n = 11) of the papers. Note that the methods are not mutually exclusive, so the percentages sum to more than 100%. For example, Klašnja-Milićević et al. (2018) and Nafea et al. (2019) assessed the quality of ERS predictions through dataset analysis and also asked users to use the systems to investigate their attractiveness. Both evaluation methods are carried out jointly in 37.5% (n = 6) of the papers, while each is used exclusively in 31.25% (n = 5). The two methods therefore seem equally popular. Real-life tests, on the contrary, although they best demonstrate the quality of a recommender (Shani & Gunawardana, 2010), are the most avoided, probably due to the high cost and complexity of execution.

An interesting finding concerns the user study methods employed. When associated with offline experiments, user satisfaction assessment is the most common (p = 80%, n = 5). Of these, only Nabizadeh et al. (2020) performed an in-depth evaluation combining a satisfaction questionnaire with an experiment to verify the pedagogical effectiveness of their recommender. Wu et al. (2015), in particular, do not include a satisfaction survey; they conducted a qualitative investigation of user interactions and experiences.

Although questionnaires help elicit valuable information from users, they are sensitive to respondents’ intentions and can be biased by erroneous answers (Shani & Gunawardana, 2010). Papers that present only user studies, in contrast, have a higher rate of experiments that yield direct evidence of the recommender’s effectiveness for teaching and learning. All papers in this group include some investigation in this sense. Wan and Niu (2018), for example, verified whether the recommender influenced students’ academic scores and their time to reach a learning objective. Rahman and Abdullah (2018) investigated whether the recommender affected the time students took to complete a task.

Regarding the purpose of the evaluations, ten distinct research goals were identified. As Fig. 3 shows, the investigation of accuracy outnumbered the others: only one study did not carry out experiments in this regard. Different traditional metrics were identified for measuring the accuracy of recommenders, with the Mean Absolute Error (MAE) being the most frequent. Table 6 lists the main metrics identified.

Fig. 3  Evaluation purpose of recommender systems in selected papers

Table 6  Summary of ERS evaluation settings, approaches and metrics in selected papers

Research (citation) | Evaluation methodology | Dataset size / no. of subjects | No. of recommendable items | Highlights of evaluation approaches | Highlights of evaluation metrics
Sergis and Sampson ( ) | Offline | 2005 | 96.196 | Layered evaluation, dataset split (70% training, 30% test) and comparison with collaborative filtering variations | Jaccard coefficient (for user’s ICT profile elicitation accuracy) and RMSE
Tarus et al. ( ) | Offline and user study | 50 | 240 | Dataset split (70% training, 30% test), comparison with collaborative filtering variations and questionnaire survey | MAE, Precision, Recall and user’s satisfaction level
Nafea et al. ( ) | Offline and user study | 80 | At least 300 | Comparison between the proposed algorithms and questionnaire survey | MAE, RMSE and user’s satisfaction level
Wan and Niu ( ) | User study | 749 | 3043 | A/B test, comparison with instructors’ suggestions and with e-learning recommender systems based on genetic algorithm and Markov chain, and questionnaire survey | Average students’ score, learning time, learning objects utilization, fitness function, learning objects’ hit rate, proportions of learning objects marked with educational meaning tags, non-centralized distribution of learning objects, proportion of new recommended items, user’s satisfaction level and entropy (time to achieve a stable sequence of recommendations)
Rahman and Abdullah ( ) | User study | 60 | N/A | A/B test and questionnaire survey | Search time for educational materials, quantity of accesses to recommended items, user’s satisfaction level, level of ease of use and usefulness of the recommender
Zapata et al. ( ) | Offline and user study | 75 for offline experiment and 63 for questionnaire | N/A | Comparison between rating aggregation methods, analysis of appropriate aggregation method selection and questionnaire survey | RMSE, Top-1 Frequency, Average Ranking, Average Recommendation Error, Mean Reciprocal Rank, user’s satisfaction level, level of ease of use and usefulness of the recommender
Wan and Niu ( ) | User study | 250 | 235 | A/B test, comparison with instructors’ suggestions and with e-learning recommender systems based on genetic algorithm, particle swarm optimization and ant colony optimization, and questionnaire survey | Time spent on learning planning, quantity of recommended learning objects, average score, quantity of students who passed the final exam, average recommendation rounds, average total recommended learning objects per learner among all recommendation rounds, average time of learning object recommendation, average evolution time and user’s satisfaction level
Klašnja-Milićević et al. ( ) | Offline and user study | 120 for offline experiment and 65 for questionnaire | 62 | Dataset split (80% training, 20% test), comparison between tag recommendation methods and questionnaire survey | Precision, Recall, user’s satisfaction level, level of ease of use and usefulness
Wu et al. ( ) | Offline and user study | 2213 for offline experiment and 5 for case study | N/A | Dataset split (20%/40%/50% test set), comparison with the recommendation approach for e-learning recommender systems proposed by Bobadilla et al. ( ) and case study | MAE
Ismail et al. ( ) | User study | 100 for comparison and 80 for questionnaire | N/A | A/B test, comparison of the proposed recommenders with a control group and a baseline approach, and questionnaire survey | Mean Average Precision, knowledge level, visited articles, perceived relevance and user’s satisfaction level
Huang et al. ( ) | Offline | 1166 | 782 | Dataset split (training and testing datasets divided according to semesters) and comparison of recommender-predicted optional courses with the ground-truth optional courses the student enrolled on | Average hit rate, average accuracy
Yanes et al. ( ) | Offline | N/A | 9 | Dataset split (70% training, 30% test) and comparison of different machine learning algorithms | Precision, Recall, F1-measure, Hamming Loss
Wan and Niu ( ) | User study | 1119 | 2386 | A/B test, comparison with instructors’ suggestions and with variants of the proposed algorithm, and questionnaire survey | Students’ average scores, proportion of students who passed the exam, average learning time, proportion of resources visited out of the total number of resources, matching degree between learning objects and learners, diversity of learning object attributes, proportion of learner’s tags, user’s satisfaction level, level of usefulness and entropy (time to achieve a stable sequence of recommendations)
Fernández-García et al. ( ) | Offline | 323 | 45 | Sequential chain of steps with dataset transformations | Accuracy, F1-score
Wu et al. ( ) | Offline | 5126 | 19.136 | Dataset split (70% training, 10% validation, 20% test) and comparison with user-based and item-based collaborative filtering, content-based filtering, a hybrid recommendation model based on deep collaborative filtering and a knowledge-graph-embedding-based collaborative filtering | Accuracy, novelty and diversity
Nabizadeh et al. ( ) | Offline and user study | 205 for offline experiment and 32 for user study | 90 for offline experiment and 59 for user study | Dataset split (training and testing datasets divided according to a defined observed and unobserved LO classification), algorithm comparison, A/B test, control and experimental groups | AE, number of correctly completed learning objects/lessons by users, time users spend to reach their goals and user’s satisfaction level
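As a concrete illustration of the most frequent metrics in Table 6, the sketch below computes MAE, RMSE, precision and recall in Python. The ratings, item identifiers and top-n list are invented for illustration and do not come from any reviewed study.

```python
import math

def mae(actual, predicted):
    """Mean Absolute Error over paired rating lists."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root Mean Squared Error over paired rating lists."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def precision_recall(recommended, relevant):
    """Precision and recall of a top-n recommendation list."""
    hits = len(set(recommended) & set(relevant))
    return hits / len(recommended), hits / len(relevant)

# Toy data: actual vs. predicted ratings for five learning objects
actual = [4, 3, 5, 2, 4]
predicted = [3.5, 3, 4, 2.5, 5]
print(round(mae(actual, predicted), 2))   # average absolute rating error
print(round(rmse(actual, predicted), 2))  # penalizes large errors more heavily
p, r = precision_recall(["lo1", "lo2", "lo3"], ["lo2", "lo3", "lo4", "lo5"])
print(p, r)  # share of recommendations that are relevant; share of relevant items retrieved
```

MAE and RMSE measure rating-prediction error, while precision and recall assess the relevance of a recommended list, which is why studies in Table 6 often report them together.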

The analysis of system attractiveness, through the verification of user satisfaction, has the second highest occurrence, being present in 62.5% (n = 10) of the studies. The evaluation of the pedagogical effectiveness of the ERS has a reduced participation, occurring in only 37.5% (n = 6). Experiments examining recommendation diversity, user profile elicitation accuracy, evolution process, user experience and interactions, entropy, novelty, and perceived usefulness and ease of use were also identified, albeit to a lesser extent.

Also, 81.25% (n = 13) of the papers presented experiments with multiple purposes. For example, Wan and Niu (2020) carried out an evaluation investigating the recommender’s pedagogical effectiveness, student satisfaction, accuracy, diversity of recommendations and entropy. Only Huang et al. (2019), Fernández-García et al. (2020) and Yanes et al. (2020) evaluated a single recommender system dimension.

This evidence suggests an engagement of the scientific community in demonstrating the quality of the developed recommender systems through multidimensional analysis. However, offline experiments and user studies, particularly those based on questionnaires, are the most adopted and can lead to incomplete or biased interpretations. These data thus also signal the need for a greater effort to conduct real-life tests and experiments that lead to an understanding of the real impact of recommenders on the teaching and learning process. Studies that synthesize and discuss the empirical possibilities of evaluating the pedagogical effectiveness of ERS could help increase the popularity of such experiments.

The analysis of the papers also shows that the results of offline experiments are usually based on larger amounts of data than user studies. In this group, 63.64% (n = 7) of the evaluation datasets contain records from more than 100 users. In user studies, on the other hand, samples of up to 100 participants predominate (72.72%, n = 8). In general, the offline assessments with smaller datasets are those conducted in association with a user study, because the data for both experiments usually come from the same subjects (Nafea et al., 2019; Tarus et al., 2017). The cost (e.g., time and money) of recruiting participants for the experiment is possibly a determining factor in defining sample sizes.

Furthermore, it is also verified that most offline experiments adopt a 70/30% division of training and testing data. Nguyen et al. (2021) offer some insight in this sense, arguing that this is the most suitable ratio for training and validating machine learning models. Further details on the evaluation approaches and metrics of the recommendation systems are presented in Table 6.
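The 70/30% division described above can be sketched as follows. The interaction log, the field layout and the fixed seed are invented for illustration; the reviewed studies do not prescribe a particular implementation.

```python
import random

def split_dataset(records, train_ratio=0.7, seed=42):
    """Shuffle interaction records and split them into training and test sets."""
    rng = random.Random(seed)      # fixed seed so the split is reproducible
    shuffled = records[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

# Toy interaction log: (user, learning_object, rating) triples
log = [(u, f"lo{i}", (u + i) % 5 + 1) for u in range(10) for i in range(10)]
train, test = split_dataset(log)   # 70/30, as in most reviewed offline experiments
print(len(train), len(test))       # 70 30
```

Holding out the test portion before any model fitting is what makes the offline measurement of prediction quality meaningful.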

What are the limitations and research opportunities in the teaching and learning support recommender systems field?

The main limitations observed in the selected papers are presented below. They are based on the articles’ explicit statements and on the authors’ formulations; only those that cut across the majority of the studies are listed. Next, a set of research opportunities for future investigation is pointed out.

Research limitations

Research limitations are factors that hinder current progress in the ERS field. Knowing these factors can help researchers cope with them in their studies and mitigate the risk of stagnation in the area, that is, of newly proposed recommenders not truly generating better outcomes than the baselines (Anelli et al., 2021; Dacrema et al., 2021). As a result of this SLR, research limitations were identified in the three strands presented below.

Reproducibility restriction

The majority of the papers report a dataset collected specifically to evaluate the proposed ERS. The main reason for this is the scarcity of public datasets suited to the research’s needs, as highlighted by some authors (Nabizadeh et al., 2020; Tarus et al., 2017; Wan & Niu, 2018; Wu et al., 2015; Yanes et al., 2020). Such an approach restricts the feasibility of reproducing experiments and makes it difficult to compare recommenders. In fact, this is an old issue in the ERS field: Verbert et al. (2011) observed, at the beginning of the last decade, the need to improve reproducibility and comparison in ERS research in order to provide stronger conclusions about validity and generalizability. Although there was an effort in this direction in the following years based on broad educational dataset sharing, most of the known datasets (Çano & Morisio, 2015; Drachsler et al., 2015) are now retired, and the remaining ones have proved insufficient to meet current research demands. Of the analyzed studies, only Wu et al. (2020) use public educational datasets.

Because dataset sharing plays an important role in reproducing and comparing recommender models under the same conditions, this finding highlights the need for a research community effort to create means of supplying this need (e.g., the development of public repositories) in order to mitigate the current reproducibility limitation.

Dataset size / number of subjects

As can be observed in Table 6, few experimental results are based on a large amount of data. Only five studies have information from 1000 or more users. In particular, the offline evaluation conducted by Wu et al. (2015), despite having an extensive dataset, uses MovieLens records and is not based on real information related to teaching and learning. Another limitation concerns data provenance: the data usually come from a single origin (e.g., a class at a single college).

Although experiments based on small datasets can reveal the relevance of an ERS, an evaluation based on a large-scale dataset should provide stronger conclusions on recommendation effectiveness (Verbert et al., 2011). Experiments based on larger and more diverse data (e.g., users from different areas and domains) would contribute to more generalizable results. On the other hand, the scarcity of public datasets may be impairing the quantity and diversity of data used in scientific experiments in the ERS field. As reported by Nabizadeh et al. (2020), increasing the size of an experiment is costly in several respects. If more public datasets were available, researchers would be more likely to find ones aligned with their needs and could naturally increase the size of their experiments, benefiting from reduced difficulty and cost of data acquisition. Furthermore, the scientific community would gain access to user data from beyond its immediate context and could base experiments on more diversified data.

Lack of in-depth investigation of the impact of known issues in the recommendation system field

Cold start, overspecialization and sparsity are some known challenges in the field of recommender systems (Khusro et al., 2016). They are mainly related to a reduced and unevenly distributed amount of user feedback or item description used for generating recommendations (Kunaver & Požrl, 2017). These issues also permeate the ERS field. For instance, Cechinel et al. (2011) report that, in a sample of more than 6000 learning objects from the Merlot repository, a reduced number of user ratings per item was observed. Cechinel et al. (2013), in turn, observed in a dataset from the same repository a pattern in which few users rate many resources while the vast majority rate five or fewer. Since such issues directly impact the quality of recommendations, teaching and learning support recommenders should be evaluated with them in mind, to clarify to what extent they can be effective in real-life situations. Conversely, in this SLR we detected an expressive number of papers (43.75%, n = 7) that do not analyze or discuss how their recommenders behave in the face of these issues or handle them, at least partially. Studies that include experiments examining such aspects would elucidate more details of the quality of the proposed systems.
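The sparsity and cold-start situations described above can be quantified directly from a rating log, as in the minimal sketch below. The user and item identifiers, and the threshold of fewer than two ratings for a cold-start candidate, are invented for illustration.

```python
from collections import Counter

# Hypothetical (user, item) feedback pairs; identifiers invented for illustration
ratings = [("u1", "loA"), ("u1", "loB"), ("u1", "loC"), ("u1", "loD"),
           ("u2", "loA"), ("u3", "loB"), ("u4", "loA")]
users = {u for u, _ in ratings}
items = {i for _, i in ratings}

# Sparsity: share of the user-item matrix cells with no feedback at all
sparsity = 1 - len(ratings) / (len(users) * len(items))

# Cold-start candidates: users with very little feedback to learn from
per_user = Counter(u for u, _ in ratings)
cold_users = [u for u, n in per_user.items() if n < 2]

print(f"sparsity: {sparsity:.2f}")  # most cells are empty, as in the Merlot samples
print(sorted(cold_users))           # ['u2', 'u3', 'u4']
```

The same counts reproduce, in miniature, the skew Cechinel et al. describe: one user supplies most of the feedback while the rest barely rate anything.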

Research opportunities

From the analyzed papers, a set of research opportunities was identified. They are based on gaps related to the subjects explored through the research questions of this SLR. The identified opportunities provide insights into under-explored topics that need further investigation, given their potential to contribute to the advancement of the ERS field. Research opportunities were identified in the three strands presented below.

Study of the potential of overlooked user’s attributes

The papers examined present ERS based on a variety of inputs; preferences, prior knowledge, learning style and learning objectives are some examples (Table 5 has the complete list). As reported by Chen and Wang (2021), this aligns with a current research trend of investigating the relationships between individual differences and personalized learning. Nevertheless, evidence from this SLR also confirms that “some essential individual differences are neglected in existing works” (Chen & Wang, 2021). The paper sample suggests a lack of studies that incorporate other notably relevant information into the recommendation model, such as students’ emotional state and cultural context (Maravanyika & Dlodlo, 2018; Salazar et al., 2021; Yanes et al., 2020). This indicates that further investigation is needed to clarify the true contributions of these other parameters and the complexities involved in collecting, measuring and applying them. In this sense, an open research opportunity is the investigation of these other user attributes, exploring their impact on the quality of ERS results.

Increasing studies on the application of ERS in informal learning situations

Informal learning refers to a type of learning that typically occurs outside an educational institution (Pöntinen et al., 2017). In it, learners do not follow a structured curriculum or have a domain expert to guide them (Pöntinen et al., 2017; Santos & Ali, 2012). Such aspects influence how ERS can support users. For instance, in informal settings content can come from multiple providers and, as a consequence, may be delivered without a proper pedagogical sequence. ERS targeting this scenario should therefore concentrate on organizing and sequencing recommendations to guide the user’s learning process (Drachsler et al., 2009).

Although the literature highlights significant differences in the design of educational recommenders for formal versus informal learning circumstances (Drachsler et al., 2009; Okoye et al., 2012; Manouselis et al., 2013; Harrathi & Braham, 2021), this SLR observed that current studies tend not to report this characteristic explicitly. This makes it difficult to obtain a clear landscape of the current state of the field in this dimension. Nonetheless, from the characteristics of the proposed ERS, it was observed that current research seems concentrated on the formal learning context, since the recommenders in the analyzed papers usually use data maintained by institutional learning systems. Moreover, the recommendations predominantly do not provide pedagogical sequencing to support self-directed and self-paced learning (e.g., recommendations that build a learning path leading to specific knowledge). Conversely, informal learning has gained increasing attention from the scientific community since the emergence of the coronavirus pandemic (Watkins & Marsick, 2020).

In view of this, the lack of studies on ERS targeting informal learning settings opens a research opportunity. Specifically, further investigation focused on the design and evaluation of recommenders that take different contexts into consideration (e.g., location or device used) and that guide users through a learning sequence toward specific knowledge would figure prominently here, considering the less structured format of informal learning in terms of learning objectives and learning support.

Studies on the development of multidimensional evaluation frameworks

Evidence from this study shows that the main purpose of ERS evaluation has been to assess recommender accuracy and user satisfaction (Section 4.4). This result, connected with Erdt et al. (2015), reveals two decades of evaluation predominantly based on these two goals. Even though other evaluation purposes have had a reduced participation in research, they are also critical for measuring the success of ERS. Moubayed et al. (2018), for example, highlight two aspects of e-learning system evaluation: one concerns how to properly evaluate student performance, while the other refers to measuring learners’ learning gains through system usage. Tahereh et al. (2013) identify stakeholders and indicators associated with technological quality as relevant to consider in educational system assessment. From the perspective of the recommender systems field, there are also important aspects to analyze in the context of application to the educational domain, such as novelty and diversity (Pu et al., 2011; Cremonesi et al., 2013; Erdt et al., 2015).
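As one illustration of such additional dimensions, attribute-based intra-list diversity and a simple novelty ratio can be sketched as below. The tag sets, item names and user history are invented, and the literature cited above uses a range of alternative formulations (e.g., popularity-based novelty), so this is only one possible operationalization.

```python
def jaccard(a, b):
    """Similarity between two items described by attribute (tag) sets."""
    return len(a & b) / len(a | b)

def intra_list_diversity(rec_list, attrs):
    """Average pairwise dissimilarity of the items in a recommendation list."""
    pairs = [(x, y) for i, x in enumerate(rec_list) for y in rec_list[i + 1:]]
    return sum(1 - jaccard(attrs[x], attrs[y]) for x, y in pairs) / len(pairs)

def novelty(rec_list, seen):
    """Share of recommended items the user has not interacted with before."""
    return len([i for i in rec_list if i not in seen]) / len(rec_list)

# Invented item attributes (topic tags) and user history
attrs = {"lo1": {"python", "intro"}, "lo2": {"python", "oop"}, "lo3": {"statistics"}}
recs = ["lo1", "lo2", "lo3"]
print(round(intra_list_diversity(recs, attrs), 2))  # closer to 1 = more varied list
print(novelty(recs, seen={"lo1"}))                  # share of items new to the user
```

Metrics of this kind complement accuracy: a list can be highly accurate yet monotonous and entirely familiar, which is precisely the blind spot a multidimensional evaluation would expose.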

In this context, it is noted that, although evaluating recommender accuracy and user satisfaction gives insights into the value of an ERS, these measures are not sufficient to fully indicate the quality of the system in supporting the learning process. Other factors reported in the literature are also relevant to take into consideration. However, to the best of our knowledge, there is no framework that identifies and organizes the factors to be considered in an ERS evaluation, which makes it difficult for the scientific community to be aware of them and incorporate them into studies.

Because the evaluation of ERS needs to be a joint effort between computer scientists and experts from other domains (Erdt et al., 2015), further investigation should seek to develop a multidimensional evaluation framework that encompasses evaluation requirements from a multidisciplinary perspective. Such studies would clarify the different dimensions with the potential to contribute to better ERS evaluation and could even identify which should be prioritized to truly assess learning impact at reduced cost.

In recent years, there has been an extensive scientific effort to develop recommenders that meet different educational needs; however, this research is dispersed across the literature, and no recent study encompasses the current scientific efforts in the field.

Given this context, this paper presents an SLR that aims to analyze and synthesize the main trends, limitations and research opportunities in the teaching and learning support recommender systems area. Specifically, this study contributes to the field by providing a summary and analysis of the currently available information on the topic along four dimensions: (i) how the recommendations are produced; (ii) how the recommendations are presented to the users; (iii) how the recommender systems are evaluated; and (iv) what the limitations and opportunities for research in the area are.

The evidence is based on primary studies published from 2015 to 2020, retrieved from three repositories. This review provides an overarching perspective of current evidence-based practice in ERS to support practitioners and researchers in implementation decisions and future research directions. Research limitations and opportunities are also summarized in light of current studies.

In terms of current trends, the findings show that hybrid techniques are the most used in the teaching and learning support recommender systems field. Furthermore, approaches that naturally fit a user-centered design (e.g., techniques that can represent students’ educational constraints) have been prioritized over those based on other aspects, such as item characteristics (e.g., the CBF technique). The results show that these approaches have been recognized as the main means of supporting users with recommendations in their teaching and learning process, and they provide directions for practitioners and researchers who seek to base their activities and investigations on evidence from current studies. On the other hand, this study also reveals that techniques that feature prominently in the general recommender systems field, such as bandit-based and deep learning approaches (Barraza-Urbina & Glowacka, 2020; Zhang et al., 2020), have been underexplored, implying a mismatch between the areas. Therefore, the results of this systematic review indicate that greater scientific effort should be devoted to investigating the potential of these uncovered approaches.

With respect to recommendation presentation, the organic display is the most used strategy. However, most studies tend not to detail the approach used, making it difficult to understand the state of the art in this dimension. Furthermore, among other results, it is observed that the majority of ERS evaluations are based on recommender accuracy and user satisfaction analysis. This finding opens a research opportunity for the scientific community to develop multidimensional evaluation frameworks that effectively support verifying the impact of recommendations on the teaching and learning process.

Lastly, the limitations identified indicate that the difficulty of obtaining data to carry out ERS evaluations is a reality that has persisted for more than a decade (Verbert et al., 2011) and calls for the scientific community’s attention. Likewise, the lack of in-depth investigation of the impact of known issues in the recommender systems field, another limitation identified, points to aspects that must be considered in the design and evaluation of these systems in order to better elucidate their potential application in real scenarios.

With regard to research limitations and opportunities, some of this study’s findings indicate the need for a greater effort in conducting evaluations that provide direct evidence of systems’ pedagogical effectiveness, and the development of multidimensional evaluation frameworks for ERS is suggested as a research opportunity. A scarcity of public dataset usage was also observed in current studies, which limits the reproducibility and comparison of recommenders. This seems related to the restricted number of public datasets currently available, an aspect that may also be affecting the size of the experiments conducted by researchers.

In terms of the limitations of this study, the first refers to the number of data sources used for paper selection. Only the repositories mentioned in Section 3.1 were considered; thus, the scope of this work is restricted to evidence from publications indexed by those platforms. Furthermore, only publications written in English were examined, so the results of papers written in other languages are beyond the scope of this work. Also, the research limitations and opportunities presented in Section 4.5 were identified based on the data extracted to answer this SLR’s research questions and are therefore limited to that scope; limitations and opportunities of the ERS field beyond this context were neither identified nor discussed. Finally, the SLR was directed at papers published in scientific journals, so the results do not reflect the state of the area from the perspective of conference publications. Future research is intended to address these limitations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Table 7

Author contribution

Felipe Leite da Silva: Conceptualization, Methodology approach, Data curation, Writing – original draft. Bruna Kin Slodkowski: Data curation, Writing – original draft. Ketia Kellen Araújo da Silva: Data curation, Writing – original draft. Sílvio César Cazella: Supervision and Monitoring of the research; Writing – review & editing.

Data availability statement

Informed consent

This research does not involve human participation as research subject, therefore research subject consent does not apply.

The authors consent to the content presented in the submitted manuscript.

Financial and non-financial interests

The authors have no relevant financial or non-financial interests to disclose.

Research involving human participants and/or animals

This research does not involve an experiment with human or animal participation.

Competing interests

The authors have no competing interests to declare that are relevant to the content of this article.

1 http://parsif.al/

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.



Education Literature Review

What does this guide cover?

Writing the literature review is a long, complex process that requires you to use many different tools, resources, and skills.

This page provides links to the guides, tutorials, and webinars that can help you with all aspects of completing your literature review.

The Basic Process

These resources provide overviews of the entire literature review process. Start here if you are new to the literature review process.

  • Literature Reviews Overview : Writing Center
  • How to do a Literature Review : Library
  • Video: Common Errors Made When Conducting a Lit Review (YouTube)  

The Role of the Literature Review

Your literature review gives your readers an understanding of the evolution of scholarly research on your topic.

In your literature review you will:

  • survey the scholarly landscape
  • provide a synthesis of the issues, trends, and concepts
  • possibly provide some historical background

Review the literature in two ways:

  • Section 1: reviews the literature for the Problem
  • Section 3: reviews the literature for the Project

The literature review is NOT an annotated bibliography. Nor should it simply summarize the articles you've read. Literature reviews are organized thematically and demonstrate synthesis of the literature.

For more information, view the Library's short video on searching by themes:

Short Video: Research for the Literature Review

(4 min 10 sec) Recorded August 2019 Transcript 

Search for Literature

The iterative process of research:

  • Find an article.
  • Read the article and build new searches using keywords and names from the article.
  • Mine the bibliography for other works.
  • Use “cited by” searches to find more recent works that reference the article.
  • Repeat steps 2-4 with the new articles you find.
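The iterative process above amounts to a breadth-first traversal of the citation graph, following both bibliographies (backward) and "cited by" links (forward). As an illustration only (the article titles and citation data below are invented), a minimal Python sketch:

```python
from collections import deque

# Toy citation data. Each article maps to the works it cites (its
# bibliography) and the works that cite it ("cited by" results).
cites = {
    "seed article": ["older work A", "older work B"],
    "older work A": [],
    "older work B": ["older work C"],
    "older work C": [],
    "newer work X": ["seed article"],
}
cited_by = {
    "seed article": ["newer work X"],
    "older work A": [],
    "older work B": [],
    "older work C": [],
    "newer work X": [],
}

def snowball(seed):
    """Breadth-first search outward from one starting article."""
    found, queue = {seed}, deque([seed])
    while queue:
        article = queue.popleft()
        # Step 3 (mine the bibliography) and step 4 ("cited by" searches):
        for neighbour in cites.get(article, []) + cited_by.get(article, []):
            if neighbour not in found:  # step 5: repeat with each new find
                found.add(neighbour)
                queue.append(neighbour)
    return found

print(sorted(snowball("seed article")))
```

In practice you perform the same loop by hand in a database or Google Scholar; the sketch only makes explicit why one good starting article can surface an entire literature.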

These are the main skills and resources you will need in order to effectively search for literature on your topic:

  • Subject Research: Education
  • Keyword Searching: Finding Articles on Your Topic
  • Google Scholar
  • Quick Answer: How do I find books and articles that cite an article I already have?
  • Quick Answer: How do I find a measurement, test, survey or instrument?

Video: Education Databases and Doctoral Research Resources

(6 min 04 sec) Recorded April 2019 Transcript 

Staying Organized

The literature review requires organizing a variety of information. The following resources will help you develop the organizational systems you'll need to be successful.

  • Organize your research
  • Citation Management Software

You can make your search log as simple or complex as you would like. It can be a table in a Word document or an Excel spreadsheet. Here are two examples. The Word document is a basic table where you can keep track of databases, search terms, limiters, results, and comments. The Excel sheet is more complex and has additional sheets for notes, a Google Scholar log, a journal log, and questions to ask the librarian.

  • Search Log Example: sample search log in Excel
  • Search Log Example: sample search log set up as a table in a Word document
  • Literature Review Matrix with color coding: sample template for organizing and synthesizing your research
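If you are comfortable with a little scripting, the same basic table can also be kept as a plain CSV file that opens in Excel. A minimal Python sketch (the column names and the sample entry are hypothetical, mirroring the basic table described above):

```python
import csv
import io

# Hypothetical search-log columns: one row per search you run.
FIELDS = ["date", "database", "search terms", "limiters", "results", "comments"]

rows = [
    {
        "date": "2024-09-02",
        "database": "ERIC",
        "search terms": '"literature review" AND education',
        "limiters": "peer reviewed; 2014-2024",
        "results": "412",
        "comments": "too broad; add a methods term",
    },
]

# Write the log to an in-memory buffer; use open("search_log.csv", "w",
# newline="") instead to save a real file.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(rows)
print(buffer.getvalue())
```

Appending one row per search keeps the log complete with almost no extra effort, and the file can later be sorted or filtered by database or date.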

Writing the Literature Review

The following resources created by the Writing Center and the Academic Skills Center support the writing process for the dissertation/project study. 

  • Critical Reading
  • What is Synthesis 
  • Walden Templates
  • Quick Answer: How do I find Walden EdD (Doctor of Education) studies?
  • Quick Answer: How do I find Walden PhD dissertations?

Beyond the Literature Review

The literature review isn't the only portion of a dissertation/project study that requires searching. The following resources can help you identify and utilize a theory, methodology, measurement instruments, or statistics.

  • Education Theory
  • Tests and Measures in Education
  • Education Statistics
  • Office of Research and Doctoral Services

Books and Articles about the Lit Review

The following articles and books outline the purpose of the literature review and offer advice for successfully completing one.

  • Chen, D. T. V., Wang, Y. M., & Lee, W. C. (2016). Challenges confronting beginning researchers in conducting literature reviews. Studies in Continuing Education, 38(1), 47-60. https://doi.org/10.1080/0158037X.2015.1030335 Proposes a framework to conceptualize four types of challenges students face: linguistic, methodological, conceptual, and ontological.
  • Randolph, J.J. (2009). A guide to writing the dissertation literature review. Practical Assessment, Research & Evaluation 14(13), 1-13. Provides advice for writing a quantitative or qualitative literature review, by a Walden faculty member.
  • Torraco, R. J. (2016). Writing integrative literature reviews: Using the past and present to explore the future. Human Resource Development Review, 15(4), 404–428. https://doi.org/10.1177/1534484316671606 This article presents the integrative review of literature as a distinctive form of research that uses existing literature to create new knowledge.
  • Wee, B. V., & Banister, D. (2016). How to write a literature review paper? Transport Reviews, 36(2), 278-288. http://doi.org/10.1080/01441647.2015.1065456 Discusses how to write a literature review with a focus on adding value, and suggests structural and contextual aspects found in outstanding literature reviews.
  • Winchester, C. L., & Salji, M. (2016). Writing a literature review. Journal of Clinical Urology, 9(5), 308-312. https://doi.org/10.1177/2051415816650133 Reviews the use of different document types to add structure and enrich your literature review and the skill sets needed in writing the literature review.
  • Xiao, Y., & Watson, M. (2017). Guidance on conducting a systematic literature review. Journal of Planning Education and Research. https://doi.org/10.1177/0739456X17723971 Examines different types of literature reviews and the steps necessary to produce a systematic review in educational research.


A literature review on the student evaluation of teaching: An examination of the search, experience, and credence qualities of SET

Higher Education Evaluation and Development

ISSN : 2514-5789

Article publication date: 6 December 2018

Issue publication date: 22 January 2019

Purpose

Competition among higher education institutions has pushed universities to expand their competitive advantages. Based on the assumption that the core functions of universities are academic, understanding the teaching–learning process with the help of the student evaluation of teaching (SET) would seem to be a logical way of increasing competitiveness. The paper aims to discuss these issues.

Design/methodology/approach

The current paper presents a narrative literature review examining how SETs work within the concept of service marketing, focusing specifically on the search, experience, and credence qualities of the provider. A review of the various factors that affect the collection of SETs is also included.

Findings

Relevant findings show the influence of students' prior expectations on SET ratings. Therefore, teachers are advised to establish a psychological contract with the students at the start of the semester. Such an agreement should be negotiated, setting out the potential benefits of undertaking the course and a clear definition of acceptable performance within the class. Moreover, connections should be made between courses and subjects in order to provide an overall view of the entire program together with future career pathways.

Originality/value

Given the complex factors affecting SETs and the antecedents involved, there appears to be no single perfect tool to adequately reflect what is happening in the classroom. As different SETs may be needed for different courses and subjects, options such as faculty self-evaluation and peer-evaluation might be considered to augment current SETs.

  • Higher education
  • Student expectations
  • Service marketing
  • Teacher evaluation
  • Teaching and learning process

Ching, G. (2018), "A literature review on the student evaluation of teaching: An examination of the search, experience, and credence qualities of SET", Higher Education Evaluation and Development , Vol. 12 No. 2, pp. 63-84. https://doi.org/10.1108/HEED-04-2018-0009

Emerald Publishing Limited

Copyright © 2018, Gregory Ching

Published in Higher Education Evaluation and Development . Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode

1. Introduction

Over the past several years, the increasing number of degree-providing institutions has dramatically changed global higher education (Altbach et al., 2009; Usher, 2009). This rising number of higher education institutions has led to increased competition among universities (Naidoo, 2016). Furthermore, with cutbacks in government funding for higher education (Mitchell et al., 2016), differentiation is essential for universities to distinguish themselves and compete with other institutions (Staley and Trinkle, 2011). Such differentiation of higher education institutions has become commonplace, forcing universities to become more innovative, cost conscious, and entrepreneurial (Longanecker, 2016; MacGregor, 2015).

These global dilemmas are not new to Taiwan, where universities have to outperform each other for financial subsidies while also competing to recruit new students (Chou and Ching, 2012). The recruitment problem results from a serious decline in Taiwan's birth rate. The National Statistics Office of Taiwan (2018) reported that births declined from 346,208 in 1985 to 166,886 in 2010, a fall of roughly 50 percent. Projecting these numbers onto university entrants, a drop of around 20,000 incoming students can be noted for the academic year 2016/2017 (Chang, 2014). In fact, only 241,000 freshman students are recorded for the 2017/2018 academic year, and this number is expected to drop to around 157,000 in 2028 (Wu, 2018). This decline in student numbers has resulted in financial difficulties for academic institutions (Chen and Chang, 2010). In such difficult times, it is crucial for higher education institutions in Taiwan to differentiate themselves and develop their competitive advantages.

In the age of big data, differentiation can be achieved with the help of large data sets that give institutions the capacity to address complex institutional issues (Daniel, 2015; Norris and Baer, 2013). Many researchers have begun to collect and analyze institutional data sets to address various administrative and instructional issues faced by universities (Picciano, 2012). The results of these studies can provide school administrators and students with useful information (Castleman, 2016). In Taiwan, big data has provided institutions with information on topics such as trends in enrollment rates, students' online learning performance, and research output measured by the number of academic publications (Tseng, 2016). Another study reported on the advantages of collecting and understanding student learning experiences using big data (Lin and Chen, 2016). On the assumption that the core functions of higher education institutions remain academic (Altbach, 2011), i.e. teaching and learning, determining and understanding the quality of the teaching–learning process with the aid of big data can be extremely useful.

In order to understand the quality of the teaching and learning process, higher education institutions in Taiwan and elsewhere have long used the student evaluation of teaching (SET), which provides feedback on teaching performance and appraises faculty members ( Aleamoni and Hexner, 1980 ; Centra, 1979 ; Clayson, 2009 ; Curran and Rosen, 2006 ; Pozo-Muñoz et al. , 2000 ). Even though the practice of using SETs is well established in higher education institutions ( Rice, 1988 ; Wachtel, 2006 ) and is considered relatively reliable for evaluating courses and instructors ( Aleamoni, 1999 ; Marsh, 1987, 2007 ; Nasser and Fresko, 2002 ), their usefulness and effectiveness have been challenged ( Boring et al. , 2016 ).

Over time, numerous issues have arisen in research on SETs. It has been claimed that SETs are used by students as a tool to reward or punish their instructor ( Clayson et al. , 2006 ), that SET results differ across areas (course, subject, and discipline) ( Chen, 2006 ) and types of study (including course design and class size) ( Feldman, 1978 ; Marsh, 1980 ), and that the completion rate and background demographics of students significantly affect SETs ( Stark and Freishtat, 2014 ). Moreover, SETs can be biased with respect to the gender of the instructor and that of the students ( Boring et al. , 2016 ). Interestingly, recent research has found that effective teachers can receive low SET ratings ( Braga et al. , 2014 ; Kornell and Hausman, 2016 ). This has caused many institutions, including universities in Taiwan, to analyze and redesign their SETs ( Chen, 2006 ; Zhang, 2003 ).

In light of these issues, the current paper seeks to provide a better understanding of the inner workings of SETs. With a better understanding of SETs, more appropriate and effective evaluation tools can be developed. In addition, the categorization of education as a service ( WTO, 1998 ) has opened up new ways of looking at the entire academe. Anchored in the narrative literature review paradigm, this paper frames the discussion of SETs within the concept of service marketing. A common framework used to evaluate services is to determine the search, experience, and credence qualities of the provider ( Fisk et al. , 2014 , p. 151; Wilson et al. , 2012 , p. 29). In addition, the paper reviews the definitions of SET in the existing literature as well as the dimensions commonly used to measure the quality of teaching. Finally, the various factors that affect the collection of SETs are discussed.

2. Methodology

The current study is anchored in a literature review paradigm. For any study, a literature review is an integral part of the entire process ( Fink, 2005 ; Hart, 1998 ; Machi and McEvoy, 2016 ). In general, literature reviews involve database retrievals and searches defined by a specific topic ( Rother, 2007 ). To perform a comprehensive literature review, researchers adopt various approaches for organizing and synthesizing information, taking either a qualitative or quantitative perspective for data interpretation ( Baumeister and Leary, 1997 ; Cronin et al. , 2008 ; Fink, 2005 ; Hart, 1998 ; Lipsey and Wilson, 2001 ; Petticrew and Roberts, 2005 ; Rocco and Plakhotnik, 2009 ; Torraco, 2005 ).

For the current study, the researcher adopts a narrative literature review approach. A narrative review, more commonly referred to as a traditional literature review, is a comprehensive, critical, and objective analysis of the current knowledge on a topic ( Charles Stuart University Library, 2018 ). The review should be objective insofar as it has a specific focus, but it should also provide critiques of important issues ( Dudovskiy, 2018 ). More importantly, the results of a narrative review are qualitative in nature ( Rother, 2007 ).

The study follows the suggestions of Green et al. (2006) with regard to synthesizing search results retrieved from computer databases. For the current study, the researcher used Google Scholar as a starting point, followed by searches within ProQuest and PsycINFO. The keywords used were “student evaluation of teaching” and related terminologies (see the next section for more information on terms synonymous with SET). The selection of relevant articles is explicit and potentially biased insofar as the researcher focuses on the search, experience, and credence qualities of providers within SET studies. Data analysis consists of Miles and Huberman’s (1994) procedure for organizing information into specific themes and Glaser’s (1965, 1978) technique of constant comparison of previously gathered data.

3. Defining student evaluation of teaching

In relation to students’ college experience, determining whether a course or a teacher is good or bad can be equated to measuring service quality ( Curran and Rosen, 2006 ). This is especially the case with regard to SETs. The concepts behind SETs have been discussed since the early 1920s ( Otani et al. , 2012 ), and literally thousands of studies have been carried out on these various interrelated concepts ( Marsh, 2007 ). Furthermore, within the vast spectrum of literature on the topic, a variety of related terms are used interchangeably. Hence, an exhaustive literature review is nearly impossible.

SET is a relatively recent term that is used synonymously with several earlier terminologies such as Students’ Evaluations of Educational Quality (SEEQ) ( Coffey and Gibbs, 2001 ; Grammatikopoulos et al. , 2015 ; Lidice and Saglam, 2013 ), SET effectiveness ( Marsh, 1987, 2007 ), student evaluation of teacher performance ( Chuah and Hill, 2004 ; Coburn, 1984 ; Flood, 1970 ; Poonyakanok et al. , 1986 ), student evaluation of instruction ( Aleamoni, 1974 ; Aleamoni and Hexner, 1980 ; Clayson et al. , 2006 ; Powell, 1977 ), student course satisfaction ( Betoret, 2007 ; Bolliger, 2004 ; Rivera and Rice, 2002 ), or simply student course evaluation ( Anderson et al. , 2005 ; Babcock, 2010 ; Bembenutty, 2009 ; Chen, 2016 ; Duggan and Carlson-Bancroft, 2016 ; Huynh, 2015 ; Pravikoff and Nadasen, 2015 ; Stark and Freishtat, 2014 ). Despite the difference in terms, the core objectives of all of the above are similar.

[…] the process of using student inputs concerning the general activity and attitude of teachers. These observations allow the overall assessors to determine the degree of conformability between student expectations and the actual teaching approaches of teachers. Student evaluations are expected to offer insights regarding the attitude in class of a teacher and/or the abilities of a teacher […] ( Vlăsceanu et al. , 2004 , pp. 59-60).

This definition implies three main aspects, namely, the evaluation of the teacher (the teacher himself or herself), the teaching process (general activity and teaching approaches), and the learning outcomes as perceived by the students (student expectations). This is similar to the framework for evaluating service marketing, whereby the teacher corresponds to the “search” qualities, the teaching process to the “experience” qualities, and the learning outcomes to the “credence” qualities (see Figure 1 ).

3.1 Search qualities in SET

As previously mentioned, one of the first aspects of SET that focuses on the teacher is the student evaluation of the teacher, or rather the student’s perception of the teacher’s characteristics ( Fox et al. , 1983 ). As Tagiuri (1969) notes, in an early study, a person’s (in this case a teacher’s) personality, characteristics, qualities, and inner states (p. 395) matter significantly. Early research findings suggest that students sometimes interpret a teacher’s creativeness as a positive characteristic ( Costin et al. , 1971 ), while others note that a teacher’s personality traits affect their SET ratings ( Clayson and Sheffet, 2006 ; Mogan and Knox, 1987 ; Murray et al. , 1990 ). For instance, the interpersonal characteristics of teachers influence interactions between the students ( Mogan and Knox, 1987 ), which ultimately leads to better engagement and learning ( Hu and Ching, 2012 ; Hu et al. , 2015 ; Skinner and Belmont, 1993 ).

This focus on the teacher also leads to various biases in SET. For example, teachers’ physical appearance can have an effect on their SET ratings ( Bonds-Raacke and Raacke, 2007 ; Hultman and Oghazi, 2008 ). Felton et al. (2004) , in their study of the university teacher rating website ( www.ratemyprofessors.com/ ), analyzed 65,678 posts covering 3,190 faculty members and concluded that physically attractive teachers get higher ratings. In addition, a study by Buck and Tiene (1989) finds that attractive female teachers, even if they are considered authoritarian, tend to receive higher SET ratings than their less attractive female counterparts. Besides physical appearance, a teacher’s gender and age are also important ( Buck and Tiene, 1989 ; Sohr-Preston et al. , 2016 ). Younger male faculty members were found to receive higher ratings ( Boring et al. , 2016 ), while more senior teachers received lower SET ratings ( Clayson, 1999 ). Similarly, a teacher’s ethnicity is also a factor ( Dee, 2005 ; Ehrenberg et al. , 1995 ). For instance, students may consider female African American teachers more sympathetic ( Patton, 1999 ), which can affect their SET ratings. These biases in SETs are unfair since an individual’s demographics and personality traits are fixed and cannot be changed.

Drawing on the concept of service marketing, the aforementioned teacher factors can be considered the search qualities that students look for before enrolling in a particular course. Students sometimes look for easy teachers just to pass a subject ( Felton et al. , 2004 ). However, research shows that most students tend to search for competent teachers ( Feldman, 1984 ) and credible faculty members ( Patton, 1999 ; Pogue and Ahyun, 2006 ). This disproves the fallacy that easy teachers receive high SET ratings ( Beatty and Zahn, 1990 ; Marsh and Roche, 2000 ).

By definition, search qualities are the easily observable and most common physical attributes a product (or in this case a teacher or course) may possess ( Curran and Rosen, 2006 , p. 136). Moreover, these search qualities are apparent and can be judged relative to similar options ( Lubienski, 2007 ). What is most important is that students are able to judge these search qualities beforehand. This means that students have certain initial preferences with regard to aspects such as the type of teacher, the physical characteristics of the classroom, or even the schedule of the course. Students tend to compare various options before signing up for a class. In addition, social psychology literature has long demonstrated the influence of beauty on individual judgments ( Adams, 1977 ; Berscheid and Walster, 1974 ). Individuals tend to relate beauty to being good ( Eagly et al. , 1991 ). This halo effect explains why teachers’ attractiveness tends to influence their SET ratings ( Felton et al. , 2004 ). Furthermore, students may also have a preference with regard to the physical situation of the classroom ( Douglas and Gifford, 2001 ; Hill and Epps, 2010 ), which influences their overall level of satisfaction.

In summary, more emphasis should be placed on the perceived expectations of students, which can be discerned from their search qualities. As studies by Buck and Tiene (1989) and Patton (1999) find, students tend to associate physical attributes with certain types of behavior, such as expecting attractive female teachers to be more feminine and female African American teachers to be more sympathetic. Another important issue here is that students are expecting something, regardless of their reasons for having these expectations of their teachers and courses. These expectations, whether arising from the stereotyping of attributes or from hearsay among schoolmates, must be met to satisfy the students. However, this should not have to be the case: teachers should instead focus on building their professionalism and credibility ( Carter, 2016 ). In-class behaviors such as self-disclosure, humor, warmth, clarity, enthusiasm, verbal and nonverbal messages, teacher immediacy (nonverbal interactions that enhance closeness; Mehrabian, 1968 , p. 203), and affinity seeking (the creation of positive feelings toward oneself; Bell and Daly, 1984 ) are just a few examples of effective strategies that can positively affect how students view teachers ( Patton, 1999 ). These behaviors make for effective teaching and can also prevent students from stereotyping teachers because of their appearance or demographic features.

3.2 Experience qualities in SET

Besides the teacher, the second aspect of SET identified is the teaching process. In reality, this is what the majority of SETs currently in use measure ( Algozzine et al. , 2004 ; Wachtel, 2006 ). The goal of SETs is to determine teachers’ teaching effectiveness ( Marsh, 2007 ). Such instruments have been used throughout academia for a long time, but their validity, reliability, and usefulness are still being challenged ( Aleamoni, 1974, 1999 ; Aleamoni and Hexner, 1980 ; Arreola, 2007 ; Costin et al. , 1971 ; Feldman, 1978, 1984 ; Marsh, 1984, 2007 ; Marsh and Roche, 1997 ; Rodin and Rodin, 1972 ; Wright et al. , 1984 ). This makes sense, since teaching is a complex activity ( Shulman, 1987 ), so the factors used to measure a teacher’s effectiveness are multidimensional ( Marsh, 1991, 2007 ; Marsh and Bailey, 1993 ) and difficult to comprehend. Nonetheless, it has been shown that SETs contribute to faculty development by enhancing the teaching and learning experience ( Centra, 1979 ; Marsh, 1980, 1984, 1987, 1991, 2007 ; Marsh and Bailey, 1993 ; Marsh and Roche, 1997 ). This function is formative ( Shulman, 1987 ), meaning that SETs can provide evidence to support improvements that shape the overall quality of teaching ( Berk, 2005 , p. 48).

Evaluating the teaching process is a complex and complicated undertaking, requiring a full understanding of how students came to conclusions with regard to their teachers and courses ( Curran and Rosen, 2006 ). Typically, taking a university course would require the student to attend class every week, which corresponds to repeated service encounters that are critical to later evaluation ( Solomon et al. , 1985 ). Within the concept of service marketing, these repeated service encounters (which in this case are the repeated classroom encounters) correspond to the experience qualities that students perceive when taking the course. These experience qualities are not readily apparent and can only be judged when the entire service experience is over (generally at the end of the course) ( Curran and Rosen, 2006 ; Lubienski, 2007 ). However, because such experiences are repeated, it can be difficult to know whether the resulting SET ratings are based on an overall experience of the course or just on one single event that made a lasting impression ( Curran and Rosen, 2006 ). Furthermore, students attend class with their classmates, so there are other individuals partaking of the service delivery at the same time. Therefore, the interactions of these students within the class might either enhance or reduce the service quality, which might, in turn, affect an individual student’s experience ( Grove and Fisk, 1997 ).

Based on the above points, evidence shows that students can compare their teachers with others teaching the same course before actually signing up for the class. However, it is most likely that students would simply ask around, seeking out others who have already taken the course and asking for their comments. This is because students generally do not have access to SET results ( Marlin, 1987 ). Marsh (2007) notes that although a few universities do publish their SET summaries, this is solely for the purpose of course or subject selection. The publication of SET results is controversial ( Babad et al. , 1999 ; Perry et al. , 1979 ) and is generally regarded negatively by faculty members ( Howell and Symbaluk, 2001 ).

It is important to note that, based on asking around prior to taking a course, students might expect to receive a certain grade or a certain amount of classwork, or even have expectations with regard to how the lectures are conducted ( Nowell, 2007 ; Remedios and Lieberman, 2008 ; Sander et al. , 2000 ; Voss et al. , 2007 ). If teachers then behave contrary to the students’ expectations, students may be disappointed and SET ratings may be affected ( Bennett, 1982 ). Such student expectations can also contribute to the development of a psychological contract between the teacher and the students. These prior expectations, whether arising from the students’ desire to benefit from the course ( Voss et al. , 2007 ) or from hearsay, are found to contribute to such a psychological contract ( Roehling, 1997 ).

A psychological contract can be defined as any individual beliefs, shaped by the institution, regarding the terms of an exchange agreement between students and their teachers ( Kolb, Rubin, and McIntyre, 1984 ; Rousseau, 1995, 2001 ). Recent research finds that when the psychological contract between the teacher and the students is positive, learning motivation is enhanced ( Liao, 2013 ). Furthermore, these agreements might be made either implicitly or explicitly between the teachers and students. To make them more effective, the agreements should be negotiated at the start of the term and should constitute a shared psychological contract between the teacher and the students ( Pietersen, 2014 ). More importantly, Cohen (1980) notes that if SETs are administered in the middle of the semester, teachers are still able to improve their teaching pedagogy by re-aligning the previously agreed upon psychological contract. Hence, faculty members who received mid-semester feedback ended up with significantly higher SET ratings than their counterparts who did not have a mid-semester evaluation ( Cohen, 1980 ). Ultimately, mid-semester feedback provides ample opportunity for instructional improvement ( Overall and Marsh, 1979 ).

In summary, it has been noted in the literature that evaluating the teaching process, or rather the effectiveness of teaching, is a complex task. It is multidimensional and mostly concerns the experience qualities of the students who have taken the course. More importantly, in relation to the numerous biases associated with SETs discussed in the introduction of this paper, perceptions of course delivery and service quality are affected by a variety of issues, including peers, class size, and type of course. Given that students also have their own personal expectations of what a course should offer, it is difficult to satisfy every student. Pietersen (2014) suggests establishing a psychological contract between the teacher and the students to provide clear study goals and remove false expectations. In addition, an evaluation can be conducted in the middle of the semester, giving the teacher ample opportunity to address students’ doubts and to re-adjust the shared contract based on students’ abilities. Furthermore, as the goal of SETs is to provide formative suggestions for teachers to improve their teaching, it is also prudent to include statements on the provision of formative lessons and on how course designs contribute to student learning ( Brownell and Swaner, 2009 ; Kuh, 2008 ; Kuh et al. , 2013 ).

3.3 Credence qualities and SETs

The last component of SETs identified is the evaluation of learning outcomes, more specifically, the accomplishment of goals. It has long been accepted that goals are important predictors of educationally relevant learning outcomes ( Ames and Archer, 1988 ), with attention also given to the motivational aspects driven by mastery and performance-approach goals ( Harackiewicz et al. , 2002 ). In simple terms, if students clearly understand the skills necessary for future employment, and also understand that taking a certain course will enable them to master those skills, they should be motivated to do well in that course. Research shows that students are more engaged with their academic classwork when future career consequences are clearly understood ( Greene et al. , 2004 ; Miller et al. , 1996 ). However, in reality many students are uncertain of their study goals and are at risk of dropping out ( Mäkinen et al. , 2004 ).

A university education is characterized by high credence properties ( Curran and Rosen, 2006 ). Credence qualities are those properties that are not apparent, can never be fully known or appreciated by students ( Lubienski, 2007 ), and might, therefore, be impossible to evaluate ( Curran and Rosen, 2006 ). Credence properties are generally found in goods and services that are characterized by high levels of complexity ( Darby and Karni, 1973 ), such as the teaching and learning process. More importantly, even after the service has been used (in this case, when a student graduates from the university), the consumer (student) may still find it difficult to evaluate the service ( Zeithaml, 1981 ). A famous example of credence qualities in a product can be found in vitamin pills, for which there is little verification of the alleged effectiveness and quality of the product, even after it has been tried by the consumer ( Galetzka et al. , 2006 ). In higher education, the true value of a course may be realized by a student only after the skills and knowledge learned are used in employment, which might be several months or even years after the service has ceased ( Curran and Rosen, 2006 ).

The credence qualities of higher education are related to the concept of academic delay of gratification ( Bembenutty, 2009 ). Academic delay of gratification is a term used to describe the “postponement of immediately available opportunities to satisfy impulses in favor of pursuing chosen important academic rewards or goals that are temporally remote but ostensibly more valuable” ( Bembenutty and Karabenick, 1998 , p. 329). Similar to what is described by achievement goal theory, students are motivated when they clearly perceive benefits that lead to future success ( Bembenutty, 1999 ). In addition, students who adhere to the academic delay of gratification principle tend to become autonomous learners ( Bembenutty and Karabenick, 2004 ). If students know the usefulness of the course subject, they are more willing to delay gratification, participate in class, and complete academic tasks, and are ultimately more satisfied and hence give high SET ratings ( Bembenutty, 2009 ).

In summary, the literature shows that besides formative evaluations, SETs also include summative evaluations ( Kuzmanovic et al. , 2012 ; Mortelmans and Spooren, 2009 ; Otani et al. , 2012 ; Spooren and Van Loon, 2012 ), which involve summing up the overall performance of teachers ( Berk, 2005 ). Summative SETs generally contribute to teacher audits and evaluations that may lead to the hiring, tenure, and even promotion of faculty members ( Arthur, 2009 ; Berk, 2005 ; Marks, 2000 ; Stark and Freishtat, 2014 ). The literature suggests that school administrators should be careful in using SET results containing many summative evaluations ( Spooren et al. , 2013 ) because, with respect to the credence properties of education, students might be unable to grasp the full and actual benefits of certain courses. For effective learning to occur, the potential benefits of the course and an outline of acceptable performance should be defined in advance ( Otter, 1995 ). Moreover, connections should be made between previous, current, and future courses, thus providing an overview of the entire program together with a clear outline of future career pathways.

4. Dimensions of SET

As has been noted, SETs are complex and involve multiple interrelated dimensions. In his early meta-analysis, Feldman (1978) shows that most studies focus on the overall rating of the instructor. However, SETs that focus only on summative evaluations and that use global measures (a few summary items) are highly discouraged ( Cashin and Downey, 1992 ; Marks, 2000 ; Sproule, 2000 ). The majority of SETs aim at a more comprehensive rating of teachers and, as Marsh (2007) notes, are mostly constructed around the concept of effective teaching. The usefulness and effectiveness of an SET depend on how well it captures the concepts it measures. Hence, careful design is essential ( Aleamoni, 1974, 1999 ; Aleamoni and Hexner, 1980 ; Arreola, 2007 ).

One of the early syntheses of SETs is conducted by analyzing students’ views of the characteristics of a superior teacher ( Feldman, 1976 ). For the study, three categories are identified: presentation, which includes teachers’ enthusiasm for teaching and for the subject matter, their ability to motivate students’, their knowledge of the subject matter, clarity of presentation, and organization of the course; facilitation, which denotes teachers’ availability for consultation (helpfulness), their ability to show concern and respect for students (friendliness), and their capacity to encourage learners through class interactions and discussions (openness); and regulation, which includes the teachers’ ability to set clear objectives and requirements, appropriateness of course materials (including supplementary learning resources) and coursework (with regard to difficulty and workload), fairness in evaluating students and providing feedback, and classroom management skills ( Feldman, 1976 ).

Another early analysis of SETs, conducted by Hildebrand (1973) , Hildebrand et al. (1971) , and their associates, identifies five constructs for measuring the effectiveness of teaching: analytic/synthetic skills, which includes the depth of a teacher’s scholarship and his or her analytic ability and conceptual understanding of the course content; organization/clarity, denoting the teacher’s presentation skills in the course subject area; instructor–group interaction, which describes the teacher’s ability to actively interact with the class, his or her overall rapport with the class, sensitivity to class responses, and ability to maintain active class participation; instructor–individual student interaction, which includes the teacher’s ability to establish mutual respect and rapport with individual students; and dynamism/enthusiasm, which relates to the teacher’s enthusiasm for teaching and includes confidence, excitement about the subject, and pleasure in teaching ( Hildebrand et al. , 1971 , p. 18).

More recently, the SEEQ has been frequently used by many higher education institutions. The SEEQ measures nine factors that constitute quality instruction ( Marsh, 1982, 1987 ; Marsh and Dunkin, 1997 ; Richardson, 2005 ): assignments and readings, breadth of coverage, examinations and grading, group interaction, individual rapport, instructor enthusiasm, learning and academic value, organization and clarity, and workload and difficulty ( Marsh, 2007 , p. 323). Some SEEQ studies include an overall summative evaluation of the course subject as an additional factor ( Schellhase, 2010 ). The similarities with the criteria of effective teaching of Hildebrand (1973) , Hildebrand et al. (1971) , and Feldman (1976) are apparent.

In a series of studies conducted at the University of Hawaii, SET was first analyzed from the perspective of faculty members, identifying important factors such as evaluation information from students, information from peers (colleagues), student performance and grades, and external performance evaluations of teachers ( Meredith, 1977 ). A study that included apprentice teachers (practice teachers) found that students preferred instructors who exhibited classroom organizational skills, who focused on students’ learning outcomes, and who interacted well with students ( Meredith and Bub, 1977 ). A set of evaluation criteria was developed based on a study of both faculty members and students in the School of Law at the University of Hawaii, which included dimensions such as knowledge of subject matter, ability to stimulate interest and motivate students, organization of the course, preparation for the course, concern for students, quality of course materials, and an overall summative evaluation of the teacher ( Meredith, 1978 ). Other studies measured teaching excellence by subject mastery, teaching skills, and personal qualities of the teacher ( Meredith, 1985b ), while an overall analysis of student satisfaction used the criteria of social interaction, teaching quality, campus environment, employment opportunities, and classroom facilities ( Meredith, 1985a ), all of which contribute to SET ratings.

In summary, it is noted that SETs can vary depending on whether the evaluations are designed from the perspective of faculty members (how teachers teach) or of students (how students learn). However, although several variations of SETs exist, comparisons suggest that as long as the overall objective is to evaluate effective teaching, the dimensions within these SETs are interrelated and may overlap ( Marsh, 1984, 2007 ; Marsh and Bailey, 1993 ; Marsh and Dunkin, 1997 ). A study conducted by the American Association of University Professors involving 9,000 faculty members found that SETs are generally beset by controversial biases and issues ( Flaherty, 2015 ). The more important issue is establishing the objectives for SET implementation within the university and deciding carefully who should participate in developing such an evaluation instrument.

5. Antecedents of SET

Within the vast literature on SETs, analysis of their validity and reliability has identified various antecedents affecting effective evaluation. SET ratings are dependent on several issues, including the various biases already discussed. The first obvious antecedent is the instructor, as can be discerned from the previous discussions. Besides personality issues, gender plays an important role. Boring et al. (2016) find that SETs are statistically biased against female faculty, and that such biases can cause effective teachers to get lower SET ratings than less effective ones. MacNell et al. (2015) conducted an experiment in which students were blind to the gender of their online course instructors. For the experiment, two online course instructors were selected, one male and one female, and each was given two classes to teach. Later in the course, each instructor presented as one gender to one class and the opposite gender to the other class. The SET results gathered at the end of the semester are interesting. Regardless of the instructor’s real gender, students gave the teacher they thought was male and the actual male teacher higher SET ratings than the teacher they perceived as female. This experiment clearly shows that the rating difference results from gender bias ( Marcotte, 2014 ).

Previous studies also show that the timing of the SET evaluation matters. As discussed, when SET evaluations are administered in the middle of the semester, the results can assist teachers in re-evaluating their course design to better fit students’ needs and capabilities. However, such mid-semester evaluations are rare. SETs are mostly given before the end of the term or during final examinations, and studies have shown that ratings taken at this time tend to be lower than those from evaluations conducted a few weeks before final exams ( Braskamp et al. , 1984 ). Interestingly, no significant differences were found when comparing SET ratings gathered before the end of the semester with those taken in the first week of the succeeding term ( Frey, 1976 ). This debunks the fallacy that students tend to seek revenge on teachers because of the grades received ( Clayson et al. , 2006 ; Skinner and Belmont, 1993 ). In fact, studies have shown that students who received poor grades were less likely to care enough to complete the SET ( Liegle and McDonald, 2005 ).

In terms of the students themselves, as previously mentioned, students' background demographics significantly affect SETs (Stark and Freishtat, 2014). Although some associations between student gender and SET ratings have been reported (Boring et al., 2016; Feldman, 1977), there is still no consistent evidence that such differences exist (Wachtel, 2006). For instance, different studies have variously found male or female students to give higher ratings than their peers of the opposite gender (Tatro, 1995), and in some instances students rate teachers of their own gender higher than teachers of the opposite gender (Centra, 1993a, b). With regard to ethnicity, Marsh et al. (1997) translated the SEEQ instrument into Chinese and found no significant differences from the results reported in studies conducted in the USA. Other Chinese studies, besides finding significant differences in SET ratings between students across disciplines (Chen and Watkins, 2010; Liu et al., 2016), note that linguistics or foreign language teachers tend to receive higher evaluations than faculty in other disciplines (Chen and Watkins, 2010).

Administration conditions, or the way SETs are administered, also matter. Currently, SETs are mostly collected through online course evaluations (Spooren and Van Loon, 2012). However, the literature shows that online SETs result in lower participation (Anderson et al., 2005; Avery et al., 2006), although reminders do increase the response rate (Norris and Conn, 2005). With paper-and-pen SETs, the person administering the evaluation also contributes to inconsistencies in the ratings. This holds true even if the teacher leaves the room during the SET administration and the forms are anonymous, as students may still be reluctant to provide an objective evaluation (Pulich, 1984). Many researchers agree that SET administration should be entrusted to a third party for effective collection (Braskamp et al., 1984; Centra, 1979).

The characteristics of the course subject also matter. Wachtel (2006) notes that the nature of the course, such as whether it is required or elective, affects how students rate its importance. Students sometimes give higher ratings to elective courses because of a prior interest in the subject (Feldman, 1978). Class schedule can also affect ratings: odd schedules such as early morning or late afternoon classes have been found to receive the lowest SET ratings (Koushki and Kuhn, 1982), although inconsistencies have been reported in several other studies (Aleamoni, 1999; Centra, 1979; Feldman, 1978; Wachtel, 2006). The level of the course has also been suggested as a relevant factor. The year or level of the course is closely related to students' age; as students continue with their studies, they become more mature and more aware that their opinions are taken seriously by the school administration (Spooren and Van Loon, 2012). Class size has also been found to have an impact (Feldman, 1978; Marsh, 2007), since bigger classes tend to present fewer opportunities for interaction between the teacher and individual students, which can affect ratings (Meredith and Ogasawara, 1982). Finally, the subject area and discipline also greatly influence SET ratings. Since the discipline affects how classes are held (e.g. laboratory classes compared to lecture-intensive courses), comparisons between colleges are not advisable (Wachtel, 2006). For instance, task-oriented subjects such as mathematics and science offer less interaction than the social sciences (Centra, 1993a, b).

In summary, apart from the student-related issues affecting SETs discussed in the "Experience Qualities" section of this paper, including gender, learning motivations, and grade expectations (Boring et al., 2016), many more factors have been added to the discussion. Having examined the various antecedents of SETs, it is apparent that one model is not suitable for all instances. More specifically, a single type of SET cannot and should not be used to collect students' perceptions across all courses and subjects. This is the main reason why some higher education institutions choose to use global measures to collect summative evaluations of a class. In practice, separate SETs should be used for different course types. Since this can place a significant burden on institutions, careful analysis and research is necessary.

6. Conclusion

To sum up, the literature shows that the use of SETs to collect information about the teaching–learning process is commonplace. However, given the complex nature of academic processes, the data resulting from SETs are questionable and limited. The current paper has presented a review of the literature on SETs, focusing on the concept of service marketing evaluation. The framework's three criteria are used to examine SETs, whereby the teacher represents the "search" qualities, the teaching process the "experience" qualities, and the learning outcomes the "credence" qualities.

The search qualities of SETs are the easily observable attributes of teachers, which may include faculty members' appearance, gender, age, ethnicity, and personality traits. In practice, course selections are made prior to enrollment, so students can compare faculty members when deciding with whom to enroll. Hence, students' expectations are important. It has been noted that stereotyping faculty members according to demographic factors such as gender and age is unfair, since these features are fixed and impossible to change. Students should look beyond these obvious factors and focus more on teachers' credibility and competencies.

Beyond initial search preferences, students place much importance on evaluating their learning experiences. As the literature suggests, for the sake of simplicity, many SETs include only global summative evaluations of the teaching–learning process. However, given that the learning experience is complex and multidimensional, evidence to support student development should take the form of formative judgments. Furthermore, the actual teaching–learning process is composed of repeated service encounters (a semester in Taiwan typically lasts around 18 weeks). It is, therefore, difficult to determine whether a single class experience or the collective sum of the semester's learning encounters contributes to the SET ratings. Considering the influence of prior expectations on SET ratings, teachers are advised to establish a psychological contract with their students. To make these agreements effective, they should be negotiated at the start of the term, so that they become shared contracts between the teacher and the students.

Finally, accepting that university education is characterized by high credence qualities, students must be aware of the concept of academic delay of gratification, so that they understand and accept that the benefits of undertaking a course are not immediate. Combining this with the importance of students’ expectations and the usefulness of creating a psychological contract, clear definitions of the potential benefits and acceptable performance should be provided during the first class. Moreover, connections should be made between previous, current, and future courses, thus providing an overview of the entire program together with career pathways.

In summary, since SETs are frequently used to collect information on effective teaching, it is important for higher education institutions to establish what kinds of SETs are effective. Given the complex factors involved and the various antecedents of SETs, it appears that no one perfect tool exists to accurately measure what happens in the classroom. As different SETs may be necessary for different courses and subjects, options such as faculty members’ self-evaluation and/or faculty members’ peer-evaluation might be considered to provide what is lacking in SETs. It is hoped that as technology advances, an innovative way of collecting SETs might be found to make the process more productive.

6.1 Recommendations for further research

Having analyzed the above issues, several recommendations for further research are proposed:

Develop and validate an SET

The development of the SET is an important part of the ongoing dialogue within the literature. As the literature shows, SETs are only useful if they appropriately capture what they are being used to measure. Hence, in order to develop a relevant and constructive SET, the participation of significant stakeholders, such as school administrators, faculty, and students, is essential. A constructive SET would be capable of providing formative recommendations to improve the performance of both faculty members and students. More importantly, an effective SET should consider the service attributes (the search, experience, and credence qualities) that students want.

Develop an SET software program

In the current age of technological advances and big data, students are adept at using mobile devices (Gikas and Grant, 2013). Therefore, an app designed to collect SET ratings – either directly after each class, twice a semester (after midterm exams and before the end of the semester), or once before the end of the semester – could be made available to students for easy and convenient data collection. This could initiate a new strand of SET literature. Combining technology with pedagogy can provide a more accurate evaluation of teaching by facilitating the collection of real-time SET results.
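As an illustration only, the scheduling logic behind such an app could be sketched as follows. All names here (`Policy`, `CourseEvaluation`, `window_open`) are hypothetical, not an existing system; the 18-week semester length is taken from the discussion of Taiwanese semesters earlier in the paper.

```python
from dataclasses import dataclass, field
from enum import Enum

class Policy(Enum):
    """The three collection schedules proposed above."""
    PER_CLASS = "after each class meeting"
    TWICE = "after midterms and before the end of the semester"
    END_ONLY = "once before the end of the semester"

@dataclass
class CourseEvaluation:
    """Collects 1-5 SET ratings for one course under a given policy."""
    course_id: str
    policy: Policy
    weeks: int = 18          # typical Taiwanese semester length
    ratings: dict = field(default_factory=dict)  # week -> list of ratings

    def window_open(self, week: int) -> bool:
        """Is the rating window open in the given week (1-based)?"""
        if self.policy is Policy.PER_CLASS:
            return 1 <= week <= self.weeks
        if self.policy is Policy.TWICE:
            return week in (self.weeks // 2, self.weeks - 1)
        return week == self.weeks - 1  # END_ONLY

    def submit(self, week: int, rating: int) -> bool:
        """Record a rating only if the window is open; report success."""
        if not self.window_open(week) or not 1 <= rating <= 5:
            return False
        self.ratings.setdefault(week, []).append(rating)
        return True

course = CourseEvaluation("MKT101", Policy.TWICE)
course.submit(9, 4)    # mid-semester window (week 18 // 2)
course.submit(17, 5)   # pre-finals window
course.submit(3, 2)    # rejected: no window open in week 3
```

A real implementation would of course add authentication, anonymization, and persistent storage; the sketch only shows how the proposed collection schedules could be encoded.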

SET concepts

Adams, G.R. (1977), "Physical attractiveness research: toward a developmental social psychology of beauty", Human Development, Vol. 20 No. 4, pp. 217-239, available at: https://doi.org/10.1159/000271558

Aleamoni, L.M. (1974), "Typical faculty concerns about student evaluation of instruction", NACTA, Vol. 20 No. 1, pp. 16-21.

Aleamoni, L.M. (1999), "Student rating myths versus research facts from 1924 to 1998", Journal of Personnel Evaluation in Education, Vol. 13 No. 2, pp. 153-166, available at: https://doi.org/10.1023/A:1008168421283

Aleamoni, L.M. and Hexner, P.Z. (1980), "A review of the research on student evaluation and a report on the effect of different sets of instructions on student course and instructor evaluation", Instructional Science, Vol. 9 No. 1, pp. 67-84.

Algozzine, B., Gretes, J., Flowers, C., Howley, L., Beattie, J., Spooner, F., Mohanty, G. and Bray, M. (2004), "Student evaluation of college teaching: a practice in search of principles", College Teaching, Vol. 52 No. 4, pp. 134-141, available at: https://doi.org/10.3200/CTCH.52.4.134-141

Altbach, P.G. (2011), "Introduction", in Altbach, P.G. (Ed.), Leadership for World-Class Universities: Challenges for Developing Countries, Routledge, New York, NY, pp. 1-7.

Altbach, P.G., Reisberg, L. and Rumbley, L.E. (2009), Trends in Global Higher Education: Tracking an Academic Revolution, UNESCO, Paris.

Ames, C. and Archer, J. (1988), "Achievement goals in the classroom: students' learning strategies and motivation processes", Journal of Educational Psychology, Vol. 80 No. 3, pp. 260-267.

Anderson, H.M., Cain, J. and Bird, E. (2005), "Online student course evaluations: review of literature and a pilot study", American Journal of Pharmaceutical Education, Vol. 69 No. 1, pp. 34-43.

Arreola, R.A. (2007), Developing a Comprehensive Faculty Evaluation System: A Guide to Designing, Building, and Operating Large-Scale Faculty Evaluation Systems, Jossey-Bass, San Francisco, CA.

Arthur, L. (2009), "From performativity to professionalism: lecturers' responses to student feedback", Teaching in Higher Education, Vol. 14 No. 4, pp. 441-454, available at: https://doi.org/10.1080/13562510903050228

Avery, R.J., Bryant, W.K., Mathios, A., Kang, H. and Bell, D. (2006), "Electronic course evaluations: does an online delivery system influence student evaluations?", The Journal of Economic Education, Vol. 37 No. 1, pp. 21-37, available at: https://doi.org/10.3200/JECE.37.1.21-37

Babad, E., Darley, J.M. and Kaplowitz, H. (1999), "Developmental aspects in students' course selection", Journal of Educational Psychology, Vol. 91 No. 1, pp. 157-168.

Babcock, P. (2010), "Real costs of nominal grade inflation? New evidence from student course evaluations", Economic Inquiry, Vol. 48 No. 4, pp. 983-996, available at: https://doi.org/10.1111/j.1465-7295.2009.00245.x

Baumeister, R.F. and Leary, M.R. (1997), "Writing narrative literature reviews", Review of General Psychology, Vol. 1 No. 3, pp. 311-320.

Beatty, M.J. and Zahn, C.J. (1990), "Are student ratings of communication instructors due to 'easy' grading practices? An analysis of teacher credibility and student-reported performance levels", Communication Education, Vol. 39 No. 4, pp. 275-282, available at: https://doi.org/10.1080/03634529009378809

Bell, R.A. and Daly, J.A. (1984), "The affinity-seeking function of communication", Communication Monographs, Vol. 51 No. 2, pp. 91-115, available at: https://doi.org/10.1080/03637758409390188

Bembenutty, H. (1999), "Sustaining motivation and academic goals: the role of academic delay of gratification", Learning and Individual Differences, Vol. 11 No. 3, pp. 233-257, available at: https://doi.org/10.1016/S1041-6080(99)80002-8

Bembenutty, H. (2009), "Teaching effectiveness, course evaluation, and academic performance: the role of academic delay of gratification", Journal of Advanced Academics, Vol. 20 No. 2, pp. 326-355.

Bembenutty, H. and Karabenick, S.A. (1998), "Academic delay of gratification", Learning and Individual Differences, Vol. 10 No. 4, pp. 329-346, available at: https://doi.org/10.1016/S1041-6080(99)80126-5

Bembenutty, H. and Karabenick, S.A. (2004), "Inherent association between academic delay of gratification, future time perspective, and self-regulated learning", Educational Psychology Review, Vol. 16 No. 1, pp. 35-57, available at: https://doi.org/10.1023/B:EDPR.0000012344.34008.5c

Bennett, S.K. (1982), "Student perceptions of and expectations for male and female instructors: evidence relating to the question of gender bias in teaching evaluation", Journal of Educational Psychology, Vol. 74 No. 2, pp. 170-179, available at: https://doi.org/10.1037/0022-0663.74.2.170

Berk, R.A. (2005), "Survey of 12 strategies to measure teaching effectiveness", International Journal of Teaching and Learning in Higher Education, Vol. 17 No. 1, pp. 48-62.

Berscheid, E. and Walster, E. (1974), "Physical attractiveness", Advances in Experimental Social Psychology, Vol. 7 No. 1, pp. 157-215, available at: https://doi.org/10.1016/S0065-2601(08)60037-4

Betoret, F.D. (2007), "The influence of students' and teachers' thinking styles on student course satisfaction and on their learning process", Educational Psychology: An International Journal of Experimental Educational Psychology, Vol. 27 No. 2, pp. 219-234, available at: https://doi.org/10.1080/01443410601066701

Bolliger, D.U. (2004), "Key factors for determining student satisfaction in online courses", International Journal on E-Learning, Vol. 3 No. 1, pp. 61-67.

Bonds-Raacke, J. and Raacke, J.D. (2007), "The relationship between physical attractiveness of professors and students' ratings of professor quality", Journal of Psychiatry, Psychology and Mental Health, Vol. 1 No. 2, pp. 1-7.

Boring, A., Ottoboni, K. and Stark, P.B. (2016), "Student evaluations of teaching (mostly) do not measure teaching effectiveness", ScienceOpen Research, available at: https://doi.org/10.14293/S2199-1006.1.SOR-EDU.AETBZC.v1 (accessed March 30, 2018).

Braga, M., Paccagnella, M. and Pellizzari, M. (2014), "Evaluating students' evaluations of professors", Economics of Education Review, Vol. 41 No. 1, pp. 71-88, available at: https://doi.org/10.1016/j.econedurev.2014.04.002

Braskamp, L.A., Brandenburg, D.C. and Ory, J.C. (1984), Evaluating Teaching Effectiveness, Sage, Newbury Park, CA.

Brownell, J.E. and Swaner, L.E. (2009), "High-impact practices: applying the learning outcomes literature to the development of successful campus programs", Peer Review, Vol. 11 No. 2, pp. 26-30.

Buck, S. and Tiene, D. (1989), "The impact of physical attractiveness, gender, and teaching philosophy on teacher evaluations", The Journal of Educational Research, Vol. 82 No. 3, pp. 172-177, available at: https://doi.org/10.1080/00220671.1989.10885887

Carter, R.E. (2016), "Faculty scholarship has a profound positive association with student evaluations of teaching: except when it doesn't", Journal of Marketing Education, Vol. 38 No. 1, pp. 18-36.

Cashin, W.E. and Downey, R.G. (1992), "Using global student rating items for summative evaluation", Journal of Educational Psychology, Vol. 84 No. 4, pp. 563-572.

Castleman, B. (2016), "Data-driven behavioral nudges: a low-cost strategy to improve postsecondary education", paper presented at the Annual Conference of the Association for Institutional Research, New Orleans, LA.

Centra, J.A. (1979), Determining Faculty Effectiveness: Assessing Teaching, Research, and Service for Personnel Decisions and Improvement, Jossey-Bass, San Francisco, CA.

Centra, J.A. (1993a), Determining Faculty Effectiveness, Jossey-Bass, San Francisco, CA.

Centra, J.A. (1993b), Reflective Faculty Evaluation, Jossey-Bass, San Francisco, CA.

Chang, J. (2014), "Number of universities should be reduced: education minister", available at: www.chinapost.nownews.com/20140925-50233 (accessed March 30, 2018).

Charles Sturt University Library (2018), "Literature review: traditional or narrative literature reviews", available at: http://libguides.csu.edu.au/c.php?g=476545&p=3997199 (accessed March 30, 2018).

Chen, C.Y. (2006), "A study on teaching evaluation in public universities in Taiwan (Wǒguó gōnglì dàxué jiàoshī jiàoxué píngjiàn zhī yánjiū)", unpublished doctoral dissertation, National Chengchi University, Taipei.

Chen, D.-S. and Chang, M.-K. (2010), "Higher education in Taiwan: the crisis of rapid expansion", available at: www.isa-sociology.org/universities-in-crisis/?p=417 (accessed March 30, 2018).

Chen, G.-H. and Watkins, D. (2010), "Stability and correlates of student evaluations of teaching at a Chinese university", Assessment and Evaluation in Higher Education, Vol. 35 No. 6, pp. 675-685, available at: https://doi.org/10.1080/02602930902977715

Chen, L. (2016), "Do student characteristics affect course evaluation completion?", paper presented at the 2016 Annual Conference of the Association for Institutional Research, New Orleans, LA.

Chou, C.P. and Ching, G.S. (2012), Taiwan Education at the Crossroad: When Globalization Meets Localization, Palgrave Macmillan, New York, NY.

Chuah, K.L. and Hill, C. (2004), "Student evaluation of teacher performance: random pre-destination", Journal of College Teaching & Learning, Vol. 1 No. 6, pp. 109-114.

Clayson, D.E. (1999), "Students' evaluation of teaching effectiveness: some implications of stability", Journal of Marketing Education, Vol. 21 No. 1, pp. 68-75.

Clayson, D.E. (2009), "Student evaluations of teaching: are they related to what students learn? A meta-analysis and review of the literature", Journal of Marketing Education, Vol. 31 No. 1, pp. 16-30.

Clayson, D.E. and Sheffet, M.J. (2006), "Personality and the student evaluation of teaching", Journal of Marketing Education, Vol. 28 No. 2, pp. 149-160.

Clayson, D.E., Frost, T.F. and Sheffet, M.J. (2006), "Grades and the student evaluation of instruction: a test of the reciprocity effect", Academy of Management Learning & Education, Vol. 5 No. 1, pp. 52-65, available at: https://doi.org/10.5465/AMLE.2006.20388384

Coburn, L. (1984), "Student evaluation of teacher performance", ERIC Document Reproduction Service No. ED289887, National Institute of Education, Washington, DC.

Coffey, M. and Gibbs, G. (2001), "The evaluation of the Student Evaluation of Educational Quality Questionnaire (SEEQ) in UK higher education", Assessment & Evaluation in Higher Education, Vol. 26 No. 1, pp. 89-93, available at: https://doi.org/10.1080/02602930020022318

Cohen, P.A. (1980), "Effectiveness of student-rating feedback for improving college instruction: a meta-analysis of findings", Research in Higher Education, Vol. 13 No. 4, pp. 321-341, available at: https://doi.org/10.1007/bf00976252

Costin, F., Greenough, W.T. and Menges, R.J. (1971), "Student ratings of college teaching: reliability, validity, and usefulness", Review of Educational Research, Vol. 41 No. 5, pp. 511-535.

Cronin, P., Ryan, F. and Coughlan, M. (2008), "Undertaking a literature review: a step-by-step approach", British Journal of Nursing, Vol. 17 No. 1, pp. 38-43.

Curran, J.M. and Rosen, D.E. (2006), "Student attitudes toward college courses: an examination of influences and intentions", Journal of Marketing Education, Vol. 28 No. 2, pp. 135-148, available at: https://doi.org/10.1177/0273475306288401

Daniel, B. (2015), "Big data and analytics in higher education: opportunities and challenges", British Journal of Educational Technology, Vol. 46 No. 5, pp. 904-920, available at: https://doi.org/10.1111/bjet.12230

Darby, M.R. and Karni, E. (1973), "Free competition and the optimal amount of fraud", The Journal of Law & Economics, Vol. 16 No. 1, pp. 67-88.

Dee, T.S. (2005), "A teacher like me: does race, ethnicity, or gender matter?", The American Economic Review, Vol. 95 No. 2, pp. 158-165.

Douglas, D. and Gifford, R. (2001), "Evaluation of the physical classroom by students and professors: a lens model approach", Educational Research, Vol. 43 No. 3, pp. 295-309, available at: https://doi.org/10.1080/00131880110081053

Dudovskiy, J. (2018), "The ultimate guide to writing a dissertation in business studies: a step-by-step assistance", Research Methodology.

Duggan, M. and Carlson-Bancroft, A. (2016), "How Emerson College increased participation rates in course evaluations and NSSE", paper presented at the Annual Conference of the Association for Institutional Research, New Orleans, LA.

Eagly, A.H., Ashmore, R.D., Makhijani, M.G. and Longo, L.C. (1991), "What is beautiful is good, but …: a meta-analytic review of research on the physical attractiveness stereotype", Psychological Bulletin, Vol. 110 No. 1, pp. 109-128, available at: https://doi.org/10.1037/0033-2909.110.1.109

Ehrenberg, R.G., Goldhaber, D.D. and Brewer, D.J. (1995), "Do teachers' race, gender, and ethnicity matter? Evidence from the National Education Longitudinal Study of 1988", Industrial and Labor Relations Review, Vol. 48 No. 3, pp. 547-561.

Feldman, K.A. (1976), "The superior college teacher from the students' view", Research in Higher Education, Vol. 5 No. 3, pp. 243-288, available at: https://doi.org/10.1007/BF00991967

Feldman, K.A. (1977), "Consistency and variability among college students in rating their teachers and courses: a review and analysis", Research in Higher Education, Vol. 6 No. 3, pp. 223-274.

Feldman, K.A. (1978), "Course characteristics and college students' ratings of their teachers: what we know and what we don't", Research in Higher Education, Vol. 9 No. 3, pp. 199-242, available at: https://doi.org/10.1007/BF00976997

Feldman, K.A. (1984), "Class size and college students' evaluations of teachers and courses: a closer look", Research in Higher Education, Vol. 21 No. 1, pp. 45-116, available at: https://doi.org/10.1007/BF00975035

Felton, J., Mitchell, J. and Stinson, M. (2004), "Web-based student evaluations of professors: the relations between perceived quality, easiness, and sexiness", Assessment & Evaluation in Higher Education, Vol. 29 No. 1, pp. 91-108, available at: https://doi.org/10.1080/0260293032000158180

Fink, A. (2005), Conducting Research Literature Reviews: From the Internet to Paper, 2nd ed., Sage, Thousand Oaks, CA.

Fisk, R.P., Grove, S.J. and John, J. (2014), Services Marketing: An Interactive Approach, 4th ed., Cengage Learning, Mason, OH.

Flaherty, C. (2015), "Flawed evaluations", available at: www.insidehighered.com/news/2015/06/10/aaup-committee-survey-data-raise-questions-effectiveness-student-teaching (accessed March 30, 2018).

Flood, B. (1970), "Student evaluation of teacher performance", Journal of Education for Librarianship, Vol. 10 No. 4, pp. 283-285, available at: https://doi.org/10.2307/40322085

Fox, R., Peck, R.F., Blattstein, A. and Blattstein, D. (1983), "Student evaluation of teacher as a measure of teacher behavior and teacher impact on students", The Journal of Educational Research, Vol. 77 No. 1, pp. 16-21.

Frey, P.W. (1976), "Validity of student instructional ratings: does timing matter?", The Journal of Higher Education, Vol. 47 No. 3, pp. 327-336.

Galetzka, M., Verhoeven, J.W.M. and Pruyn, A.T.H. (2006), "Service validity and service reliability of search, experience and credence services: a scenario study", International Journal of Service Industry Management, Vol. 17 No. 3, pp. 271-283, available at: https://doi.org/10.1108/09564230610667113

Gikas, J. and Grant, M.M. (2013), "Mobile computing devices in higher education: student perspectives on learning with cellphones, smartphones & social media", The Internet and Higher Education, Vol. 19 No. 1, pp. 18-26.

Glaser, B.G. (1965), "The constant comparative method of qualitative analysis", Social Problems, Vol. 12 No. 4, pp. 436-445.

Glaser, B.G. (1978), Theoretical Sensitivity: Advances in the Methodology of Grounded Theory, Sociology Press, Mill Valley, CA.

Grammatikopoulos, V., Linardakis, M., Gregoriadis, A. and Oikonomidis, V. (2015), "Assessing the Students' Evaluations of Educational Quality (SEEQ) questionnaire in Greek higher education", Higher Education, Vol. 70 No. 3, pp. 395-408, available at: https://doi.org/10.1007/s10734-014-9837-7

Green, B.N., Johnson, C.D. and Adams, A. (2006), "Writing narrative literature reviews for peer-reviewed journals: secrets of the trade", Journal of Chiropractic Medicine, Vol. 5 No. 3, pp. 101-117.

Greene, B.A., Miller, R.B., Crowson, H.M., Duke, B.L. and Akey, K.L. (2004), "Predicting high school students' cognitive engagement and achievement: contributions of classroom perceptions and motivation", Contemporary Educational Psychology, Vol. 29 No. 4, pp. 462-482, available at: https://doi.org/10.1016/j.cedpsych.2004.01.006

Grove, S.J. and Fisk, R.P. (1997), "The impact of other customers on service experiences: a critical incident examination of 'getting along'", Journal of Retailing, Vol. 73 No. 1, pp. 63-85.

Harackiewicz, J.M., Barron, K.E., Pintrich, P.R., Elliot, A.J. and Thrash, T.M. (2002), "Revision of achievement goal theory: necessary and illuminating", Journal of Educational Psychology, Vol. 94 No. 3, pp. 638-645, available at: https://doi.org/10.1037/0022-0663.94.3.638

Hart, C. (1998), Doing a Literature Review: Releasing the Social Science Research Imagination, Sage, Thousand Oaks, CA.

Hildebrand, M. (1973), "The character and skills of the effective professor", The Journal of Higher Education, Vol. 44 No. 1, pp. 41-50.

Hildebrand, M., Wilson, R.C. and Dienst, E.R. (1971), Evaluating University Teaching, Center for Research and Development in Higher Education, Berkeley, CA.

Hill, M.C. and Epps, K.K. (2010), "The impact of physical classroom environment on student satisfaction and student evaluation of teaching in the university environment", Academy of Educational Leadership Journal, Vol. 14 No. 4, pp. 65-79.

Howell, A.J. and Symbaluk, D.G. (2001), "Published student ratings of instruction: revealing and reconciling the views of students and faculty", Journal of Educational Psychology, Vol. 93 No. 4, pp. 790-796, available at: https://doi.org/10.1037/0022-0663.93.4.790

Hu, Y.-L. and Ching, G.S. (2012), "Factors affecting student engagement: an analysis on how and why students learn", Conference on Creative Education, Scientific Research Publishing, Irvine, CA, pp. 989-992.

Hu, Y.-L., Hung, C.-H. and Ching, G.S. (2015), "Student-faculty interaction: mediating between student engagement factors and educational outcome gains", International Journal of Research Studies in Education, Vol. 4 No. 1, pp. 43-53, available at: https://doi.org/10.5861/ijrse.2014.800

Hultman, M. and Oghazi, P. (2008), "Good looks - good courses: the link between physical attractiveness and perceived performance in higher educational services", in Thyne, M., Deans, K.R. and Gnoth, J. (Eds), Australian and New Zealand Marketing Academy Conference, University of Otago, Dunedin, pp. 2588-2597.

Huynh, P. (2015), "Overcoming low response rates for online course evaluations", paper presented at the Annual Conference of the Association for Institutional Research, Denver, CO.

Kolb, D.A., Rubin, I.M. and McIntyre, J.M. (1984), Organizational Psychology: An Experimental Approach to Organizational Behavior, Prentice-Hall, Englewood Cliffs, NJ.

Kornell, N. and Hausman, H. (2016), "Do the best teachers get the best ratings?", Frontiers in Psychology, Vol. 7 No. 570, pp. 1-8, available at: https://doi.org/10.3389/fpsyg.2016.00570

Koushki, P.A. and Kuhn, H.A.J. (1982), "How reliable are student evaluations of teachers?", Engineering Education, Vol. 72 No. 3, pp. 362-367.

Kuh, G.D. (2008), High-Impact Educational Practices: What They Are, Who Has Access to Them, and Why They Matter, AACU, Washington, DC.

Kuh, G.D., O'Donnell, K. and Reed, S. (2013), Ensuring Quality and Taking High-Impact Practices to Scale, AACU, Washington, DC.

Kuzmanovic, M., Savic, G., Popovic, M. and Martic, M. (2012), "A new approach to evaluation of university teaching considering heterogeneity of students' preferences", Procedia – Social and Behavioral Sciences, Vol. 64 No. 1, pp. 402-411, available at: https://doi.org/10.1016/j.sbspro.2012.11.047

Liao, S. (2013), "Psychological contract between teacher and student improves teaching process in the network courses of college: a study based on the network course of psychology in Shaoguan University", in Luo, X. (Ed.), International Conference on Education Technology and Management Science, Atlantis Press, Amsterdam, pp. 885-887.

Lidice, A. and Saglam, G. (2013), "Using students' evaluations to measure educational quality", Procedia – Social and Behavioral Sciences, Vol. 70 No. 25, pp. 1009-1015, available at: https://doi.org/10.1016/j.sbspro.2013.01.152

Liegle, J.O. and McDonald, D.S. (2005), "Lessons learned from online vs paper-based computer information students evaluation system", Information Systems Education Journal, Vol. 3 No. 37, pp. 1-14, available at: http://isedj.org/3/37/

Lin, J.-H. and Chen, J.-H. (2016), "SELF-COLA: assessing students' learning experiences and first-year outcomes", paper presented at the International Conference: Higher Education Institutional Research, New Orleans, LA.

Lipsey, M.W. and Wilson, D.B. (2001), Practical Meta-Analysis, Vol. 49, Sage, Thousand Oaks, CA.

Liu , S. , Keeley , J. and Buskist , W. ( 2016 ), “ Chinese college students’ perceptions of excellent teachers across three disciplines ”, Psychology, Chemical Engineering, and Education , Vol. 43 No. 1 , pp. 70 - 74 , available at: https://doi.org/10.1177/0098628315620888

Longanecker , D. ( 2016 ), “ Higher education in the new normal of the 21st century: an era of evidence based change ”, paper presented at the Annual Conference of the Association for Institutional Research, New Orleans, LA .

Lubienski , C. ( 2007 ), “ Marketing schools: consumer goods and competitive incentives for consumer information ”, Education and Urban Society , Vol. 40 No. 1 , pp. 118 - 141 .

MacGregor , K. ( 2015 ), “ Six key elements of an entrepreneurial university ”, University World News, available at: www.universityworldnews.com/article.php?story=20151106141848199 (accessed March 30, 2018 ).

Machi , L.A. and McEvoy , B.T. ( 2016 ), The Literature Review: Six Steps to Success , 3rd ed. , Sage , Thousand Oaks, CA .

MacNell , L. , Driscoll , A. and Hunt , A.N. ( 2015 ), “ What’s in a name: exposing gender bias in student ratings of teaching ”, Innovative Higher Education , Vol. 40 No. 4 , pp. 291 - 303 , available at: https://doi,org/10.1007/s10755-014-9313-4

Mäkinen , J. , Olkinuora , E. and Lonka , K. ( 2004 ), “ Students at risk: students’ general study orientations and abandoning/prolonging the course of studies ”, Higher Education , Vol. 48 No. 2 , pp. 173 - 188 , available at: https://doi.org/10.1023/B:HIGH.0000034312.79289.ab

Marcotte , A. ( 2014 ), “ Best way for professors to get good student evaluations? Be male ”, available at: www.slate.com/blogs/xx_factor/2014/12/09/gender_bias_in_student_evaluations_professors_of_online_courses_who_present.html (accessed March 30, 2018 ).

Marks , R.B. ( 2000 ), “ Determinants of student evaluations of global measures of instructor and course value ”, Journal of Marketing Education , Vol. 22 No. 2 , pp. 108 - 119 , available at: https://doi.org/10.1177/0273475300222005

Marlin , J.W. Jr ( 1987 ), “ Student perceptions of end-of-course evaluations ”, The Journal of Higher Education , Vol. 58 No. 6 , pp. 704 - 716 .

Marsh , H.W. ( 1980 ), “ The influence of student, course, and instructor characteristics in evaluations of university teaching ”, American Educational Research Journal , Vol. 17 No. 2 , pp. 219 - 237 .

Marsh , H.W. ( 1982 ), “ SEEQ: a reliable, valid, and useful instrument for collecting students’ evaluations of university teaching ”, British Journal of Educational Psychology , Vol. 52 No. 1 , pp. 77 - 95 , available at: https://doi,org/10.1111/j.2044-8279.1982.tb02505.x

Marsh , H.W. ( 1984 ), “ Students’ evaluations of university teaching: dimensionality, reliability, validity, potential baises, and utility ”, Journal of Educational Psychology , Vol. 76 No. 5 , pp. 707 - 754 , available at: https://doi.org/10.1037/0022-0663.76.5.707

Marsh , H.W. ( 1987 ), “ Students’ evaluations of university teaching: research findings, methodological issues, and directions for future research ”, International Journal of Educational Research , Vol. 11 No. 3 , pp. 253 - 388 , available at: https://doi.org/10.1016/0883-0355(87)90001-2

Marsh , H.W. ( 1991 ), “ Multidimensional students’ evaluations of teaching effectiveness: a test of alternative higher-order structures ”, Journal of Educational Psychology , Vol. 83 No. 2 , pp. 285 - 296 , available at: https://doi.org/10.1037/0022-0663.83.2.285

Marsh , H.W. ( 2007 ), “ Students’ evaluations of university teaching: dimensionality, reliability, validity, potential biases and usefulness ”, in Perry , R.P. and Smart , J.C. (Eds), The Scholarship of Teaching and Learning in Higher Education: An Evidence-Based Perspective , Springer , Dordrecht , pp. 319 - 383 .

Marsh , H.W. and Bailey , M. ( 1993 ), “ Multidimensional students’ evaluations of teaching effectiveness: a profile analysis ”, The Journal of Higher Education , Vol. 64 No. 1 , pp. 1 - 18 , available at: https://doi.org/10.2307/2959975

Marsh , H.W. and Dunkin , M.J. ( 1997 ), “ Students’ evaluations of university teaching: a multidimensional perspective ”, in Perry , R.P. and Smart , J.C. (Eds), Effective Teaching in Higher Education: Research and Practice , Agathon , New York, NY , pp. 241 - 320 .

Marsh , H.W. and Roche , L.A. ( 1997 ), “ Making students’ evaluations of teaching effectiveness effective: the critical issues of validity, bias, and utility ”, American Psychologist , Vol. 52 No. 11 , pp. 1187 - 1197 , available at: https://doi.org/10.1037/0003-066X.52.11.1187

Marsh , H.W. and Roche , L.A. ( 2000 ), “ Effects of grading leniency and low workload on students’ evaluations of teaching: popular myth, bias, validity, or innocent bystanders? ”, Journal of Educational Psychology , Vol. 92 No. 1 , pp. 202 - 228 , available at: https://doi.org/10.1037/0022-0663.92.1.202

Marsh , H.W. , Hau , K.-T. , Chung , C.-M. and Siu , T.L.P. ( 1997 ), “ Students’ evaluations of university teaching: Chinese version of the students’ evaluations of educational quality instrument ”, Journal of Educational Psychology , Vol. 89 No. 3 , pp. 568 - 572 , available at: https://doi.org/10.1037/0022-0663.89.3.568

Mehrabian , A. ( 1968 ), “ Some referents and measures of nonverbal behavior ”, Behavior Research Methods & Instrumentation , Vol. 1 No. 6 , pp. 203 - 207 , available at: https://doi.org/10.3758/BF03208096

Meredith , G.M. ( 1977 ), “ Faculty-based indicators of teaching effectiveness in higher education ”, Psychological Reports , Vol. 41 No. 2 , pp. 675 - 676 , available at: https://doi.org/10.2466/pr0.1977.41.2.675

Meredith , G.M. ( 1978 ), “ Student-based ratings of teaching effectiveness in legal education ”, Psychological Reports , Vol. 43 No. 3 , pp. 953 - 954 , available at: https://doi.org/10.2466/pr0.1978.43.3.953

Meredith , G.M. ( 1985a ), “ Student-based indicators of campus satisfaction as an outcome of higher education ”, Psychological Reports , Vol. 56 No. 2 , pp. 597 - 598 , available at: https://doi.org/10.2466/pr0.1985.56.2.597

Meredith , G.M. ( 1985b ), “ Two rating indicators of excellence in teaching in lecture format courses ”, Psychological Reports , Vol. 56 No. 1 , pp. 52 - 54 , available at: https://doi,org/10.2466/pr0.1985.56.1.52

Meredith , G.M. and Bub , D.N. ( 1977 ), “ Evaluation of apprenticeship teaching in higher education ”, Psychological Reports , Vol. 40 No. 3 , pp. 1123 - 1126 , available at: https://doi.org/10.2466/pr0.1977.40.3c.1123

Meredith , G.M. and Ogasawara , T.H. ( 1982 ), “ Preference for class size in lecture-format courses among college students ”, Psychological Reports , Vol. 51 No. 3 , pp. 961 - 962 , available at: https://doi.org/10.2466/pr0.1982.51.3.961

Miles , M. and Huberman , M. ( 1994 ), Qualitative Data Analysis , 2nd ed. , Sage , Beverly Hills, CA .

Miller , R.B. , Greene , B.A. , Montalvo , G.P. , Ravindran , B. and Nichols , J.D. ( 1996 ), “ Engagement in academic work: the role of learning goals, future consequences, pleasing others, and perceived ability ”, Contemporary Educational Psychology , Vol. 21 No. 4 , pp. 388 - 422 , available at: https://doi.org/10.1006/ceps.1996.0028

Mitchell , M. , Leachman , M. and Masterson , K. ( 2016 ), “ Funding down, tuition up ”, available at: www.cbpp.org/research/state-budget-and-tax/funding-down-tuition-up (accessed March 30, 2018 ).

Mogan , J. and Knox , J.E. ( 1987 ), “ Characteristics of ‘best’ and ‘worst’ clinical teachers as perceived by university nursing faculty and students ”, Journal of Advanced Nursing , Vol. 12 No. 3 , pp. 331 - 337 , available at: https://doi.org/10.1111/j.1365-2648.1987.tb01339.x

Mortelmans , D. and Spooren , P. ( 2009 ), “ A revalidation of the SET37 questionnaire for student evaluations of teaching ”, Educational Studies , Vol. 35 No. 5 , pp. 547 - 552 , available at: https://doi.org/10.1080/03055690902880299

Murray , H.G. , Rushton , J.P. and Paunonen , S.V. ( 1990 ), “ Teacher personality traits and student instructional ratings in six types of university courses ”, Journal of Educational Psychology , Vol. 82 No. 2 , pp. 250 - 261 , available at: https://doi.org/10.1037/0022-0663.82.2.250

Naidoo , R. ( 2016 ), “ Higher education is trapped in a competition fetish ”, University World News, available at: www.universityworldnews.com/article.php?story=20160413131355443 (accessed March 30, 2018 ).

Nasser , F. and Fresko , B. ( 2002 ), “ Faculty views of student evaluation of college teaching ”, Assessment & Evaluation in Higher Education , Vol. 27 No. 2 , pp. 187 - 198 , available at: https://doi.org/10.1080/02602930220128751

National Statistics office of Taiwan ( 2018 ), “ Statistical tables ”, available at: https://eng.stat.gov.tw/lp.asp?ctNode=1629&CtUnit=779&BaseDSD=7&mp=5 (accessed January 1, 2018 ).

Norris , D. and Baer , L. ( 2013 ), Building Organizational Capacity for Analytics , Educause , Louisville, CO .

Norris , J. and Conn , C. ( 2005 ), “ Investigating strategies for increasing student response rates to online delivered course evaluations ”, Quarterly Review of Distance Education , Vol. 6 No. 1 , pp. 13 - 29 .

Nowell , C. ( 2007 ), “ The impact of relative grade expectations on student evaluation of teaching ”, International Review of Economics Education , Vol. 6 No. 2 , pp. 42 - 56 , available at: https://doi.org/10.1016/S1477-3880(15)30104-3

Otani , K. , Kim , B.J. and Cho , J.-I. ( 2012 ), “ Student evaluation of teaching (SET) in higher education: how to use SET more effectively and efficiently in public affairs education ”, Journal of Public Affairs Education , Vol. 18 No. 3 , pp. 531 - 544 .

Otter , S. ( 1995 ), “ Learning outcomes in higher education ”, in Burke , J. (Ed.), Outcomes, Learning and the Curriculum: Implications for NVQ’s, GNVQ’s and Other Qualifications , Falmer Press , Bristol, PA , pp. 273 - 284 .

Overall , J.U. and Marsh , H.W. ( 1979 ), “ Midterm feedback from students: Its relationship to instructional improvement and students’ cognitive and affective outcomes ”, Journal of Educational Psychology , Vol. 71 No. 6 , pp. 856 - 865 .

Patton , T.O. ( 1999 ), “ Ethnicity and gender: an examination of its impact on instructor credibility in the university classroom ”, The Howard Journal of Communications , Vol. 10 No. 2 , pp. 123 - 144 , available at: https://doi.org/10.1080/106461799246852

Perry , R.P. , Abrami , P.C. , Leventhal , L. and Check , J. ( 1979 ), “ Instructor reputation: an expectancy relationship involving student ratings and achievement ”, Journal of Educational Psychology , Vol. 71 No. 6 , pp. 776 - 787 , available at: https://doi.org/10.1037/0022-0663.71.6.776

Petticrew , M. and Roberts , H. ( 2005 ), Systematic Reviews in the Social Sciences: A Practical Guide , Blackwell Publishers , Malden, MA .

Picciano , A.G. ( 2012 ), “ The evolution of big data and learning analytics in American higher education ”, Journal of Asynchronous Learning Networks , Vol. 16 No. 3 , pp. 9 - 20 .

Pietersen , C. ( 2014 ), “ Negotiating a shared psychological contract with students ”, Mediterranean Journal of Social Sciences , Vol. 5 No. 7 , pp. 25 - 33 , available at: https://doi.org/10.5901/mjss.2014.v5n7p25

Pogue , L.L. and Ahyun , K. ( 2006 ), “ The effect of teacher nonverbal immediacy and credibility on student motivation and affective learning ”, Communication Education , Vol. 55 No. 3 , pp. 331 - 344 , available at: https://doi.org/10.1080/03634520600748623

Poonyakanok , P. , Thisayakorn , N. and Digby , P.W. ( 1986 ), “ Student evaluation of teacher performance: some initial research findings from Thailand ”, Teaching and Teacher Education , Vol. 2 No. 2 , pp. 145 - 154 , available at: https://doi.org/10.1016/0742-051X(86)90013-2

Powell , R.W. ( 1977 ), “ Grades, learning, and student evaluation of instruction ”, Research in Higher Education , Vol. 7 No. 3 , pp. 193 - 205 , available at: https://doi.org/10.1007/BF00991986

Pozo-Muñoz , C. , Rebolloso-Pacheco , E. and Fernández-Ramírez , B. ( 2000 ), “ The ‘ideal teacher’: implications for student evaluation of teacher effectiveness ”, Assessment & Evaluation in Higher Education , Vol. 25 No. 3 , pp. 253 - 263 , available at: https://doi.org/10.1080/02602930050135121

Pravikoff , P. and Nadasen , D. ( 2015 ), “ Course evaluations simplified: the largest US public university did it and you can too ”, paper presented at the Annual Conference of the Association for Institutional Research, Denver, CO .

Pulich , M.A. ( 1984 ), “ Better use of student evaluations for teaching effectiveness ”, Improving College and University Teaching , Vol. 32 No. 2 , pp. 91 - 94 .

Remedios , R. and Lieberman , D.A. ( 2008 ), “ I liked your course because you taught me well: the influence of grades, workload, expectations and goals on students’ evaluations of teaching ”, British Educational Research Journal , Vol. 34 No. 1 , pp. 91 - 115 , available at: https://doi.org/10.1080/01411920701492043

Rice , L.C. ( 1988 ), “ Student evaluation of teaching: problems and prospects ”, Teaching Philosophy , Vol. 11 No. 4 , pp. 329 - 344 , available at: https://doi.org/10.5840/teachphil198811484

Richardson , J.T.E. ( 2005 ), “ Instruments for obtaining student feedback: a review of the literature ”, Assessment & Evaluation in Higher Education , Vol. 30 No. 4 , pp. 387 - 415 , available at: https://doi.org/10.1080/02602930500099193

Rivera , J.C. and Rice , M.L. ( 2002 ), “ A comparison of student outcomes and satisfaction between traditional and web based course offerings ”, Online Journal of Distance Learning Administration , Vol. 5 No. 3 , pp. 1 - 11 , available at: www.westga.edu/~distance/ojdla/fall53/rivera53.html (accessed March 30, 2018 ).

Rocco , T.S. and Plakhotnik , M.S. ( 2009 ), “ Literature reviews, conceptual frameworks, and theoretical frameworks: terms, functions, and distinctions ”, Human Resource Development Review , Vol. 8 No. 1 , pp. 120 - 130 , available at: https://doi.org/10.1177/1534484309332617

Rodin , M. and Rodin , B. ( 1972 ), “ Student evaluations of teachers ”, Science , Vol. 177 No. 4055 , pp. 1164 - 1166 , available at: https://doi.org/10.1126/science.177.4055.1164

Roehling , M.V. ( 1997 ), “ The origins and early development of the psychological contract construct ”, Journal of Management History , Vol. 3 No. 2 , pp. 204 - 217 .

Rother , E.T. ( 2007 ), “ Systematic literature review×narrative review ”, Acta Paulista de Enfermagem , Vol. 20 No. 2 , pp. vii - viii .

Rousseau , D.M. ( 1995 ), Psychological Contracts in Organizations , Sage , Thousand Oaks, CA .

Rousseau , D.M. ( 2001 ), “ Schema, promise and mutuality: the building blocks of the psychological contract ”, Journal of Occupational and Organizational Psychology , Vol. 74 No. 4 , pp. 511 - 541 , available at: https://doi.org/10.1348/096317901167505

Sander , P. , Stevenson , K. , King , M. and Coates , D. ( 2000 ), “ University students’ expectations of teaching ”, Studies in Higher Education , Vol. 25 No. 3 , pp. 309 - 323 , available at: https://doi.org/10.1080/03075070050193433

Schellhase , K.C. ( 2010 ), “ The relationship between student evaluation of instruction scores and faculty formal educational coursework ”, Athletic Training Education Journal , Vol. 5 No. 4 , pp. 156 - 164 .

Shulman , L. ( 1987 ), “ Knowledge and teaching: Foundations of the new reform ”, Harvard Educational Review , Vol. 57 No. 1 , pp. 1 - 23 , available at: https://doi.org/10.17763/haer.57.1.j463w79r56455411

Skinner , E.A. and Belmont , M.J. ( 1993 ), “ Motivation in the classroom: reciprocal effects of teacher behavior and student engagement across the school year ”, Journal of Educational Psychology , Vol. 85 No. 4 , pp. 571 - 581 , available at: https://doi.org/10.1037/0022-0663.85.4.571

Sohr-Preston , S.L. , Boswell , S.S. , McCaleb , K. and Robertson , D. ( 2016 ), “ Professor gender, age, and ‘hotness’ in influencing college students’ generation and interpretation of professor ratings ”, Higher Learning Research Communications , Vol. 6 No. 3 , pp. 1 - 23 , available at: https://doi.org/10.18870/hlrc.v6i3.328

Solomon , M.R. , Surprenant , C.F. , Czepiel , J.A. and Gutman , E.G. ( 1985 ), “ A role theory perspective on dyadic interactions: the service encounter ”, Journal of Marketing , Vol. 49 No. 1 , pp. 99 - 111 .

Spooren , P. and Van Loon , F. ( 2012 ), “ Who participates (not)? A non-response analysis on students’ evaluations of teaching ”, Procedia – Social and Behavioral Sciences , Vol. 69 No. 1 , pp. 990 - 996 .

Spooren , P. , Brockx , B. and Mortelmans , D. ( 2013 ), “ On the validity of student evaluation of teaching: the state of the art ”, Review of Educational Research , Vol. 83 No. 4 , pp. 598 - 642 , available at: https://doi.org/10.3102/0034654313496870

Sproule , R. ( 2000 ), “ Student evaluation of teaching: methodological critique ”, Education Policy Analysis Archives , Vol. 8 No. 50 , pp. 1 - 23 , available at: https://doi.org/10.14507/epaa.v8n50.2000

Staley , D.J. and Trinkle , D.A. ( 2011 ), “ The changing landscape of higher education ”, Educause, pp. 16-32, available at: http://er.educause.edu/articles/2011/2/the-changing-landscape-of-higher-education (accessed March 30, 2018 ).

Stark , P. and Freishtat , R. ( 2014 ), “ An evaluation of course evaluations ”, ScienceOpen Research, available at: https://doi.org/10.14293/S2199-1006.1.SOR-EDU.AOFRQA.v1 (accessed March 30, 2018 ).

Tagiuri , R. ( 1969 ), “ Person perception ”, in Lindzey , G. and Aronson , E. (Eds), The Handbook of Social Psychology: The Individual in a Social Context , 2nd ed. , Vol. 3 , Addison-Wesley , Reading, MA , pp. 395 - 449 .

Tatro , C.N. ( 1995 ), “ Gender effects on student evaluations of faculty ”, Journal of Research and Development in Education , Vol. 28 No. 3 , pp. 169 - 173 .

Torraco , R.J. ( 2005 ), “ Writing integrative literature reviews: guidelines and examples ”, Human Resource Development Review , Vol. 4 No. 3 , pp. 356 - 367 .

Tseng , Y.-H. ( 2016 ), “ Development and application of databases for institutional research and analysis (Xiàowù yánjiū zīliàokù de jiàngòu yfēnxī yìngyòng) ”, Contemporary Educational Research Quarterly , Vol. 24 No. 1 , pp. 107 - 134 , available at: https://doi.org/10.6151/CERQ.2016.2401.04

Usher , A. ( 2009 ), “ Ten years back and ten years forward: developments and trends in higher education in Europe region ”, paper presented at the UNESCO Forum on Higher Education in the Europe Region, Bucharest .

Vlăsceanu , L. , Grünberg , L. and Pârlea , D. ( 2004 ), Quality Assurance and Accreditation: A Glossary of Basic Terms and Definitions , United Nations Educational, Scientific and Cultural Organization , Bucharest .

Voss , R. , Gruber , T. and Szmigin , I. ( 2007 ), “ Service quality in higher education: the role of student expectations ”, Journal of Business Research , Vol. 60 No. 9 , pp. 949 - 959 , available at: https://doi.org/10.1016/j.jbusres.2007.01.020

Wachtel , H.K. ( 2006 ), “ Student evaluation of college teaching effectiveness: a brief review ”, Assessment & Evaluation in Higher Education , Vol. 23 No. 2 , pp. 191 - 212 , available at: https://doi.org/10.1080/0260293980230207

Wilson , A. , Zeithaml , V.A. , Bitner , M.J. and Gremler , D.D. ( 2012 ), Services Marketing: Integrating Customer Focus Across the Firm , 2nd European ed. , McGraw-Hill Education , Berkshire .

WTO ( 1998 ), “ Education services ”, Document No. S/C/W/49 98-3691, World Trade Organization, Geneva, available at: www.wto.org/english/tratop_e/serv_e/w49.doc (accessed March 30, 2018 ).

Wright , P. , Whittington , R. and Whittenburg , G.E. ( 1984 ), “ Student ratings of teaching effectiveness: what the research reveals ”, Journal of Accounting Education , Vol. 2 No. 2 , pp. 5 - 30 , available at: https://doi.org/10.1016/0748-5751(84)90002-2

Wu , P.-M. ( 2018 ), “ The declining birthrate threatens. The number of college freshmen is reduced to only 100,000 after 10 years (shǎo znhuà fā wēi dà zhuān xiào yuàn xīn shēng 10 nián hòu jiǎn jìn 10 wàn rén) ”, available at: https://udn.com/news/story/7266/3156797 (accessed March 30, 2018 ).

Zeithaml , V.A. ( 1981 ), “ How consumer evaluation processes differ between goods and services ”, in Donnelly , J.H. and George , W.R. (Eds), Marketing of Services , American Marketing Association , Chicago, IL , pp. 186 - 190 .

Zhang , Y.-W. ( 2003 ), “ Development of student instructional rating scale (Dàxuéshēng jiàoxuépíngjiànliàngbiǎo zhī fāzhǎnyánjiū) ”, Journal of Education and Psychology , Vol. 26 No. 2 , pp. 227 - 239 .


  • Open access
  • Published: 09 September 2024

Navigating post-pandemic challenges through institutional research networks and talent management

  • Muhammad Zada   ORCID: orcid.org/0000-0003-0466-4229 1 , 2 ,
  • Imran Saeed 3 ,
  • Jawad Khan   ORCID: orcid.org/0000-0002-6673-7617 4 &
  • Shagufta Zada 5 , 6  

Humanities and Social Sciences Communications, volume 11, Article number: 1164 (2024)


  • Business and management

Institutions actively seek global talent to foster innovation in the contemporary landscape of scientific research, education, and technological progress. The COVID-19 pandemic underscored the importance of international collaboration as researchers and academics faced limits on accessing laboratories and conducting experiments. This study uses a research collaboration system to examine the relationship between organizational intellectual capital (human and structural capital) and team scientific and technological performance, and it examines the moderating role of top management support. Using a time-lagged study design, data were collected from 363 participants in academic and research institutions. The results show a positive relationship between organizational intellectual capital and team scientific and technological performance through a research collaboration system, and top management support positively moderates the study's hypothesized relationships. The findings contribute to existing knowledge in this field, with implications for academia, researchers, and governments focused on technology transfer, talent management, creative research collaboration, supporting innovation, scientific research, technological progress, and preparing for future challenges.


Introduction

Global talent management and the talent hunt within research and educational institutions have become extensively discussed topics in international human resource management (HRM) (Al et al., 2022). Global talent management is intricately connected to finding and managing talent, facilitating the exchange of research, skills, techniques, and knowledge among team members, and advancing education and technology (Kwok, 2022; Sommer et al., 2017). The topic assumes a greater importance when viewed through the lens of research, academics, and educational institutions as a means of achieving scientific and technological advancement and performance (Kaliannan et al., 2023; Patnaik et al., 2022). Effective knowledge management and transfer occur between teams engaged in cross-border research collaborations (Davenport et al., 2002; Fasi, 2022). Effective team management, global talent recruitment, and the exchange of scientific knowledge across national boundaries face challenges from the swift growth of economic and political fanaticism, particularly in advanced economies that rely heavily on knowledge-based industries (Vaiman et al., 2018). Research and educational sectors encountered significant challenges in hunting and managing international talent, particularly in the aftermath of the COVID-19 pandemic, during which approximately half of the global workforce faced the possibility of job loss (Almeida et al., 2020; Radhamani et al., 2021). Because governments implemented lockdown measures, many research institutions faced significant disruption: work stalled, and scientists around the globe began preparing for such situations, which is possible through scientific research collaboration platforms.
These platforms serve as a means to exchange research and knowledge, which is crucial for talent hunting and management (Haak-Saheem, 2020). Where limitations exist on the exchange of research and knowledge within institutions, it becomes imperative for top management to incentivize employees to engage their teams actively in knowledge sharing and to pursue team-level scientific and technological advancement. This can be achieved by implementing a research collaboration system that facilitates knowledge exchange and contributes to effective talent hunting and management (Haider et al., 2022; Xu et al., 2024).

A research collaboration network is a tool for scientific and technological advancement and talent management, encompassing processes and practices that facilitate the sharing, integration, translation, and transformation of scientific knowledge (Biondi & Russo, 2022). During and after the COVID-19 era of travel restrictions, research networking platforms served as valuable tools for students and researchers in different regions to exchange research knowledge and achieve team-level scientific and technological advancement (Yang et al., 2024). Enhancing intellectual capital (IC) within organizations is imperative in this framework (Pellegrini et al., 2022; Vătămănescu et al., 2023). Intellectual capital is the set of intangible assets owned by an organization that have the potential to generate value (Stewart, 1991); it includes human and structural capital (Marinelli et al., 2022). According to Vătămănescu et al. (2023), an organization can effectively manage the skills and abilities of its team members across different countries by properly utilizing both human and structural capital and establishing a strong research collaboration system with the support of top management. This capability remained intact even during and after the COVID-19 pandemic. This study emphasizes the importance of talent hunting and management within research and educational institutions after the COVID-19 pandemic, given the lockdown measures implemented in many countries, and focuses on facilitating the exchange of research, knowledge, and techniques among team members during and after this period. An effective way to share research expertise and techniques in such a scenario is a research collaboration network (O’Dwyer et al., 2023).

While previous research has extensively explored talent management in various industries (Al Ariss, Cascio, & Paauwe, 2014; Susanto, Sawitri, Ali, & Rony, 2023), a noticeable gap exists regarding global talent acquisition and management within research and academic institutions, particularly in volatile environments and in relation to scientific and technological advancement (Harsch & Festing, 2020). This research aims to fill that gap through three objectives: (1) to investigate how research and educational institutions hunt and manage global talent; (2) to analyze the impact of human and structural capital on team scientific and technological performance through a research collaboration system; and (3) to examine the moderating effect of top management support on the relationship between intellectual capital, use of the research collaboration network among institutional research teams, and scientific and technological performance.
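The third objective, a moderation test, is typically operationalized as an interaction term in a regression model: top management support moderates the effect of intellectual capital if the coefficient on their product is significant. The sketch below is illustrative only; the variable names are hypothetical and the data are simulated (this is not the study's dataset), assuming standardized predictors and a sample size matching the study's 363 participants.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 363  # sample size matching the study's participant count

# Hypothetical standardized predictors: intellectual capital (ic) and
# top management support (tms); moderation enters as the ic * tms product.
ic = rng.normal(size=n)
tms = rng.normal(size=n)

# Simulate team performance with main effects and a positive
# interaction effect (true coefficient 0.3) plus noise.
perf = 0.5 * ic + 0.2 * tms + 0.3 * ic * tms + rng.normal(scale=0.5, size=n)

# Design matrix: intercept, main effects, interaction term.
X = np.column_stack([np.ones(n), ic, tms, ic * tms])

# Ordinary least squares estimate of all four coefficients.
beta, *_ = np.linalg.lstsq(X, perf, rcond=None)

print(f"interaction coefficient estimate: {beta[3]:.2f}")
```

A positive, reliably estimated interaction coefficient is what "top management support amplifies the benefits of intellectual capital" means in regression terms; in practice the predictors would be survey scales, mean-centered before forming the product.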

In addition, the current research contributes to the literature by elucidating the pivotal role of organizational intellectual capital in strengthening scientific and technological performance through research collaboration networks. By empirically demonstrating the positive association between human and structural capital and team-level scientific and technological performance, the study advances our understanding of how internal resources drive innovation and research outcomes. It further highlights the moderating effect of top management support, suggesting that management commitment can amplify the benefits of intellectual capital. These results offer a nuanced perspective on how organizations can leverage their intellectual assets to foster higher levels of productivity and innovation. The study's theoretical contribution lies in integrating the resource-based view and organizational theory with performance metrics, while its practical implications provide actionable insights for institutions aiming to optimize their intellectual resources and management practices. This research also sets the stage for future inquiry into the dynamics of intellectual capital and management support in varied collaborative contexts.

Research theories, literature review, and hypotheses development

Research theories

The current study addresses the challenges surrounding talent management within institutions during and after the COVID-19 pandemic (Fernandes et al., 2023). Global talent management is closely linked to the objective of enhancing an organization's intellectual capital (Zada et al., 2023). With the pandemic drawing increased attention to scientific and technological advancement, the academic sector has shifted noticeably towards research collaboration platforms for sharing scientific knowledge effectively and achieving scientific and technological performance. Intellectual capital encompasses five distinct resource categories, as identified by Roos and Roos (1997): three intangible resources (human capital, structural capital, and customer capital) and two tangible resources (monetary and physical assets). Global talent management encompasses the management of human and structural capital (Felin & Hesterly, 2007). An institution can enhance its talent management capabilities by cultivating institution-specific competencies in both human and structural capital (Al Ariss et al., 2014). This concept aligns with the resource-based view (RBV) presented by Barney (1991), according to which organizations should examine their core resources to identify valuable assets, competencies, and capabilities that can contribute to a sustainable competitive advantage.

During and after COVID-19, institutions have used virtual platforms to engage students and staff abroad in research and knowledge exchange as part of global talent management. Staff possessing adequate knowledge repositories are likely to participate in knowledge exchange activities. Organizations must therefore improve their internal resources to enhance talent management, in line with the fundamental principle of RBV theory (Barney, 1991). Enhancing internal resources entails strengthening an organization's human capital, that is, its staff's scientific research and technical skills and knowledge, as well as its structural capital. Strengthening these two resources can help an institution share knowledge effectively through a research collaboration platform, thereby enhancing its global talent management efforts and contributing to the team's scientific and technological performance.

In this research, we also utilize institutional theory (Oliver, 1997; Scott, 2008) as a framework to examine faculty members’ use of research collaboration social platforms. Our focus is on the exchange of research and technical knowledge in a climate of global talent management during and after the COVID-19 pandemic. According to Scott (2008, p. 78), “Institutional theory is a widely recognized theoretical framework emphasizing rational myths, isomorphism, and legitimacy.” The theory has been applied in technology adoption research on electronic data interchange (Damsgaard & Lyytinen, 2001) and in educational institutes (J. et al., 2007). In the pandemic situation, institutional theory provides researchers with a framework for analyzing what motivates employees within institutions to engage in teams to achieve team-level scientific and technological performance through a research collaboration system. According to institutional theory, organizations should utilize a research collaboration network in a way that does not require staff to compromise their established norms, values, and expectations. During the COVID-19 pandemic, numerous countries restricted international movement as a preventive measure. Consequently, the potential importance of an institutional research collaboration platform for facilitating the online exchange of knowledge, skills, research techniques, and global talent management among employees of institutions operating across various countries has been increasingly recognized. Active support of staff by an institution’s top management can play a key role in expediting the implementation of social networks for research collaboration within the institution (Zada et al., 2023).

Literature review

An institution’s scientific and technological advancement is contingent upon optimal resource utilization (Muñoz et al., 2022). Global talent acquisition and management encompasses utilizing information and communication technologies (ICT) to provide a channel for exchanging research knowledge and techniques, thereby enabling the implementation of knowledge-based strategies (Muñoz et al., 2022). In a turbulent, research-intensive environment, it becomes imperative to manage human capital (HUC) effectively to facilitate the appropriate exchange of research knowledge and techniques (Salamzadeh, Tajpour, Hosseini, & Brahmi, 2023). Research shows that transferring research knowledge and techniques across national boundaries, exchanging best practices, and cultivating faculty skills are crucial factors in maintaining competitiveness (Farahian, Parhamnia, & Maleki, 2022; Shao & Ariss, 2020).

It is widely acknowledged in the scholarly literature that talent is regarded as mobile and that research knowledge and techniques can be readily transferred (Bakhsh et al., 2022; Council, 2012). However, the matter is more complex than it may initially appear (Biondi & Russo, 2022). The proliferation of political and economic nationalism in developed knowledge-based economies poses a significant risk to the exchange of research knowledge and techniques among faculty members in research and educational institutions worldwide (Arocena & Sutz, 2021). During and after COVID-19, knowledge transfer can be effectively facilitated through a research collaboration network platform (Duan & Li, 2023; Sulaiman et al., 2022). This is especially noticeable in international research and development, where academic professionals can use research collaboration platforms to disseminate valuable research knowledge and techniques to their counterparts in other nations (Jain et al., 2022).

The scientific and technological advancement of institutions depends on the quality of the institution’s research, knowledge, and management (Anshari & Hamdan, 2022). However, there is a need to enhance research teams’ capacity to learn and transfer research knowledge and techniques effectively. Research suggests that institutional human capital (HUC) is critical for managing existing resources and attracting international talent, particularly after the COVID-19 pandemic (Sigala, Ren, Li, & Dioko, 2023). Human capital refers to the combined tacit and explicit knowledge of employees within an institution, together with their techniques and capabilities for applying this knowledge effectively to achieve scientific and technological advancements (Al-Tit et al., 2022). According to Baron and Armstrong (2007), human capital comprises the abilities, knowledge, techniques, skills, and expertise of individuals, particularly research team members, that are relevant to the task at hand.

Furthermore, HUC encompasses the individuals who can contribute to this reservoir of research knowledge, techniques, and expertise through individual learning. As the literature shows, the concept of IC also includes structural capital (STC), which must be fortified through a proper global talent acquisition and management system (Pak et al., 2023; Phan et al., 2020). STC encompasses the various mechanisms an institution uses to enhance its performance and productivity (Barpanda, 2021). STC is widely acknowledged as a supporting framework for HUC, as discussed by Bontis (1998) and further explored by Gogan, Duran, and Draghici (2015). During and after the COVID-19 pandemic, a practical approach to global talent management involves leveraging research collaboration network platforms to facilitate knowledge exchange among research teams (Arslan et al., 2021). However, top management support is imperative for managing talent effectively through research collaboration network platforms for knowledge transfer (Zada et al., 2023). Nevertheless, the existing body of knowledge has yet to adequately explore talent management in relation to knowledge transfer on research collaboration platforms, particularly in the context of active institutional management support (Tan & Md. Noor, 2013).

Conceptual model and research hypothesis

By analyzing pertinent literature and theoretical frameworks, we have identified the factors that influence research and academic institution staff’s intention to utilize research collaboration networks after the COVID-19 pandemic and to achieve scientific and technical performance. This study aims to explain these determinants. Additionally, this study considers the potential moderating influence of top management support on the associations between these predictors and education and research institution staff’s intention, grounded in IC, to utilize research collaboration platforms in the post-COVID-19 era. Through this discourse, we generate several hypotheses that serve as the basis for constructing a conceptual model (see Fig. 1).

figure 1

Relationships between study variables: human capital, structural capital, top management support, and team scientific and technological performance. Source: authors’ development.

Human capital and team scientific and technological performance

According to Dess and Picken (2000), HUC encompasses the capabilities, knowledge, skills, research techniques, and experience of individuals, including staff and supervisors, that are relevant to the specific task. Human capital also refers to the capacity to add to this reservoir of knowledge, techniques, and expertise through individual learning (Dess & Picken, 2000). HUC denotes the combination of characteristics staff possess, including but not limited to research proficiency, technical aptitude, business acumen, process comprehension, and other similar competencies (Kallmuenzer et al., 2021). HUC is considered an institutional repository of knowledge, as Bontis and Fitz‐enz (2002) indicated, with employees serving as its representatives. The concept refers to the combined abilities, research proficiency, and competencies that individuals possess for addressing and resolving operational challenges within an institutional setting (Barpanda, 2021; Yang & Xiangming, 2024). The human capital possessed by institutions includes crucial attributes that allow organizations to acquire significant internal resources that are valuable, difficult to replicate, scarce, and non-substitutable, in line with the theoretical framework of the RBV theory (Barney, 1991). IC is widely recognized as a key factor in revitalizing organizational strategy and promoting creativity and innovation. It is crucial for enabling organizations to acquire and effectively disseminate knowledge among their employees, supporting talent management endeavors, and achieving scientific and technological performance (Alrowwad et al., 2020; He et al., 2023). Human capital is linked to intrinsic aptitude, cognitive capability, creative problem-solving, exceptional talent, and the capacity for originality (Bontis & Fitz‐enz, 2002). In talent management, the focus is on enhancing scientific and technological performance and development.
According to Shao and Ariss (2020), HUC is expected to strengthen employee motivation to utilize research collaboration networks for scientific knowledge-sharing endeavors. Based on these arguments, we propose the following hypothesis.

Hypothesis 1: Human capital (HUC) positively impacts team scientific and technological performance using a research collaboration system.

Structural capital and team scientific and technological performance

According to Mehralian, Nazari, and Ghasemzadeh (2018), structural capital (STC) encompasses an organization’s formalized knowledge assets. It consists of the structures and mechanisms employed by the institution to enhance its talent management endeavors. STC is embedded within institutions’ programs, laboratory settings, and databases (Cavicchi & Vagnoni, 2017). The significance of an organization’s structural capital as an internal asset that bolsters its human capital has been recognized by scholars such as Secundo, Massaro, Dumay, and Bagnoli (2018), and this concept also aligns with the RBV theory (Barney, 1991). The strategic assets of an organization encompass its capabilities, organizational culture, patents, and trademarks (Gogan et al., 2015).

Furthermore, Birasnav, Mittal, and Dalpati (2019) suggested that these strategic assets promote high-level organizational performance, commonly called STC. The literature shows that STC encompasses an organization’s collective expertise and essential knowledge that remain intact even when employees depart (Alrowwad et al., 2020; Mehralian et al., 2018; Sarwar & Mustafa, 2023). The institution’s socialization, training, and development processes facilitate the transfer of scientific research knowledge, skills, and expertise to its teams (Arocena & Sutz, 2021; Marchiori et al., 2022). STC is broadly recognized as holding important potential and as a highly productive resource for generating great value. STC motivates team members to share expertise with their counterparts at subordinate organizations by utilizing an institution’s research collaboration network, thereby achieving team-level scientific and technological performance. This approach remains effective even in challenging environments where traditional means of data collection, face-to-face meetings, and travel are not feasible (Secundo et al., 2016). In light of the above literature and theory, we propose the following hypothesis.

Hypothesis 2: Structural capital (STC) positively impacts team scientific and technological performance using a research collaboration system.

Top management support as a moderator

If the relationship between two constructs is not constant, a third construct can affect this relationship by enhancing or diminishing its strength. In certain cases, a third construct can even change the direction of the relationship between two variables. Such a variable is commonly called a “moderating variable.” According to Zada et al. (2023), top management support efficiently encourages team members within institutions to share scientific research knowledge with their counterparts in different countries through international research collaboration systems. Similarly, other studies show that active endorsement by top management significantly affects the development of direct associations, thereby influencing team and overall organizational performance (Biondi & Russo, 2022; Phuong et al., 2024). Several studies have confirmed that top management support is crucial in fostering a conducive knowledge-sharing environment by offering the necessary resources (Ali et al., 2021; Lee et al., 2016; Zada et al., 2023). During and after the COVID-19 pandemic, numerous nations implemented nonessential travel restrictions and lockdown measures. In this context, a research collaboration system can effectively facilitate the exchange of research, skills, and knowledge among staff across an institution’s various subsidiaries (Rådberg & Löfsten, 2024; Rasheed et al., 2024). However, researchers commonly resist adopting a novel research technique, citing various justifications for their reluctance. To address the initial hesitance of employees at subsidiary institutes towards utilizing research collaborative networking within the institute, top management must employ strategies that foster motivation, encouragement, and incentives. These measures help create an atmosphere in which team members feel empowered to engage freely with the new system.
Institutional theory asserts that top management support is crucial for aligning talent management with institutional norms. Human and structural capital, pivotal within the institutional framework, contribute to an institution’s capacity to attract and retain talent, enhancing legitimacy. As institutional theory suggests, adaptation to scientific and technological advancement is imperative for international institutional competitiveness (Oliver, 1997). Based on the above discussion, we hypothesize the following.

Hypothesis 3a : Top management support moderates the relationship between human capital (HUC) and team scientific and technological performance. Specifically, this relationship will be stronger for those with higher top management support and weaker for those with lower top management support.

Hypothesis 3b : Top management support moderates the relationship between structural capital (STC) and team scientific and technological performance through the use of research collaboration network platforms. Specifically, this relationship will be stronger for those with higher top management support and weaker for those with lower top management support.

Methods, data, and sample

Sample and procedures

To test the proposed model, we collected data from respondents in China’s research and academic sector in three phases to mitigate common method variance (Podsakoff, MacKenzie, Lee, & Podsakoff, 2003). In the first phase (T1), respondents rated human capital, structural capital, and demographic information. After one month, respondents rated the team’s scientific and technological performance in the second phase (T2). Following another one-month interval, respondents rated top management support in the third phase (T3). In the first phase, after contacting 450 respondents, we received 417 usable questionnaires (92.66%). In the second phase, we received 403 usable questionnaires. In the third phase, we received 363 usable questionnaires (90.07%), constituting our final sample for interpreting the results. The sample comprised 63.4% male and 36.6% female respondents. The age distribution of the final sample was as follows: 25–30 years (6.6%), 31–35 years (57%), 36–40 years (19.8%), and above 40 years (16.5%). Regarding experience, 45.7% had 1–5 years, 39.4% had 6–10 years, 11.3% had 11–15 years, and 3.6% had over 16 years. Regarding education, 4.1% held bachelor’s degrees, 11.6% held master’s degrees, 78.8% were doctoral (PhD) scholars, and 5.5% were postdoctoral or above.

Measurement

To measure the variables, the current study adopted questionnaires from previous literature; age, gender, education, and experience were used as control variables. A five-point Likert scale was used (1 = strongly disagree to 5 = strongly agree). Human capital (HUC) was measured with the eight-item scale of Kim, Atwater, Patel, and Smither (2016). A sample item is “The extent to which the human capital of the research and development department is competitive regarding team performance.” The self-reported seven-item scale developed by Nezam, Ataffar, Isfahani, and Shahin (2013) was adopted to measure structural capital. A sample item is “My organization emphasizes IT investment.” To measure top management support, the six-item scale developed by Singh, Gupta, Busso, and Kamboj (2021) was adopted; a sample item is “Sufficient incentives were provided by top management (TM) for achieving scientific and technological performance.” Finally, the self-reported four-item scale developed by Gonzalez-Mulé, Courtright, DeGeest, Seong, and Hong (2016) was adopted to gauge team scientific and technological performance. A sample item is “This team achieves its goals.”

Assessment of measurement model

In employing AMOS for analysis, the initial step is an assessment of the measurement model to determine the reliability and validity of the study variables. The evaluation of variable reliability conventionally revolves around two key aspects: indicator reliability and internal consistency reliability. More precisely, indicator reliability is established when factor loadings exceed the threshold of 0.60. In parallel, internal consistency reliability is substantiated by values exceeding 0.70 for both Cronbach’s alpha and composite reliability, in line with well-established guidelines (Ringle et al., 2020).

To gauge the reliability of construct indicators, we utilized two key metrics: composite reliability (CR) and average variance extracted (AVE). The CR values for all variables were notably high, exceeding 0.70 and falling within the range of 0.882 to 0.955, signifying robust reliability of the indicators within each construct. Furthermore, the AVE values, which indicate convergent validity, exceeded the minimum threshold of 0.50, varying from 0.608 to 0.653 across constructs, thus affirming adequate convergent validity.
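The CR and AVE criteria above follow directly from the standardized factor loadings. The sketch below, using hypothetical loadings (not the study's actual estimates), shows how both statistics are computed and checked against the 0.70 and 0.50 thresholds:

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    where each indicator's error variance is 1 - loading^2 (standardized)."""
    lam = np.asarray(loadings)
    errors = 1 - lam**2
    return lam.sum()**2 / (lam.sum()**2 + errors.sum())

def average_variance_extracted(loadings):
    """AVE = mean of squared standardized loadings."""
    lam = np.asarray(loadings)
    return np.mean(lam**2)

# Hypothetical standardized loadings for one construct (all above the 0.60 cut-off)
loadings = [0.78, 0.81, 0.75, 0.83, 0.79]
cr = composite_reliability(loadings)
ave = average_variance_extracted(loadings)
print(round(cr, 3), round(ave, 3))  # CR > 0.70 and AVE > 0.50, so both criteria hold
```

With these illustrative loadings, CR is about 0.894 and AVE about 0.628, comfortably inside the ranges the study reports.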

In addition to assessing convergent validity, we also examined discriminant validity by scrutinizing the cross-loadings of indicators on the corresponding variables and the squared correlations between constructs and AVE values. Our findings indicated that all measures exhibited notably stronger loadings on their intended constructs, thereby underscoring the measurement model’s discriminant validity.

Discriminant validity was established by observing that the average variance extracted (AVE) values exceeded the squared correlations between constructs, as indicated in Table 1. In conjunction with the composite reliability (CR) and AVE values, an additional discriminant validity assessment was conducted through a Heterotrait-Monotrait ratio (HTMT) analysis, comparing inter-construct correlations against a predefined upper threshold of 0.85. All HTMT values remained well below this threshold, affirming satisfactory discriminant validity for each variable (Henseler et al., 2015). In summary, the results of the outer model assessment indicate that the variables showed commendable levels of reliability and validity, with discriminant validity suitably and convincingly established.
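The HTMT criterion compares the average correlation between items of different constructs with the average correlations among items of the same construct. A minimal sketch, using a toy four-item correlation matrix (purely illustrative, not the study's data):

```python
import numpy as np

def htmt(R, idx_a, idx_b):
    """Heterotrait-Monotrait ratio for two constructs, given an item
    correlation matrix R and the indicator indices of each construct."""
    R = np.asarray(R)
    # mean correlation between items of different constructs (heterotrait)
    hetero = R[np.ix_(idx_a, idx_b)].mean()
    # mean off-diagonal correlation among items of one construct (monotrait)
    def mono(idx):
        sub = R[np.ix_(idx, idx)]
        mask = ~np.eye(len(idx), dtype=bool)
        return sub[mask].mean()
    return hetero / np.sqrt(mono(idx_a) * mono(idx_b))

# Toy item correlation matrix: items 0-1 load on construct A, items 2-3 on construct B
R = [[1.00, 0.70, 0.40, 0.35],
     [0.70, 1.00, 0.45, 0.40],
     [0.40, 0.45, 1.00, 0.65],
     [0.35, 0.40, 0.65, 1.00]]
print(round(htmt(R, [0, 1], [2, 3]), 3))  # well below the 0.85 cut-off
```

Here the ratio comes out to roughly 0.593, which would pass the 0.85 threshold used in the study.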

Moreover, the correlations in Table 2 show that human capital is significantly and positively correlated with structural capital (r = 0.594**), TMS (r = 0.456**), and STP (r = 0.517**). Structural capital is also significantly and positively correlated with TMS (r = 0.893**) and STP (r = 0.853**). Furthermore, TMS is significantly and positively correlated with STP (r = 0.859**).

Confirmatory factor analysis (CFA)

A comprehensive confirmatory factor analysis was estimated in AMOS version 24 to validate the distinctiveness of the variables. As delineated in Table 3, the CFA shows that the hypothesized four-factor model, comprising human capital, structural capital, top management support, and team scientific and technological performance, fits the data well and outperforms the alternative models. Consequently, the study variables demonstrate validity and reliability, making the measurement model appropriate for structural path analysis, as advocated by Hair, Page, and Brunsveld (2019).

Hypotheses testing

This study used a bootstrapping approach with 5,000 bootstrap samples to test the proposed model and assess the significance and strength of the structural relationships. Using this approach, bias-corrected confidence intervals and p-values were generated in accordance with Streukens and Leroi-Werelds’ (2016) guidelines. First, we examined the path coefficients and their significance. The findings, shown in Table 4, support Hypothesis 1, revealing a positive relationship between HUC and STP (β = 0.476, p < 0.001). The findings also support Hypothesis 2, highlighting a positive association between structural capital and STP (β = 0.877, p < 0.001). For the moderation analysis, we used confidence intervals that do not encompass zero, per the guidelines recommended by Preacher and Hayes (2008).
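The bootstrap logic, resampling respondents with replacement and judging a path by whether its confidence interval excludes zero, can be sketched on simulated data. The slope of 0.5, the noise level, and the simple OLS stand-in for a structural path are all illustrative assumptions, not the study's estimates; only the sample size (n = 363) and the 5,000 resamples mirror the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated standardized scores standing in for one structural path (e.g. HUC -> STP)
n = 363  # final sample size reported in the study
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(scale=0.8, size=n)  # assumed true slope of 0.5

def slope(x, y):
    """OLS slope of y on x: a simple stand-in for a single path coefficient."""
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / (xc @ xc)

# Percentile bootstrap with 5,000 resamples, mirroring the study's procedure
boot = np.empty(5000)
for i in range(5000):
    idx = rng.integers(0, n, size=n)  # resample respondents with replacement
    boot[i] = slope(x[idx], y[idx])

ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
print(f"beta = {slope(x, y):.3f}, 95% CI [{ci_low:.3f}, {ci_high:.3f}]")
# A path is supported when its confidence interval excludes zero.
```

The study's software computes bias-corrected intervals; the plain percentile interval above conveys the same decision rule in its simplest form.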

In our analysis, we found support for Hypothesis 3a, which posited that top management support (TMS) moderates the relationship between human capital (HUC) and team scientific and technological performance (STP). The results in Table 4 show that the interaction between HUC and TMS was significant (β = −0.131, p = 0.001). These results suggest that TMS strengthens the association between HUC and STP, as shown in Fig. 2. Consequently, we conclude that our data substantiate Hypothesis 3a. Furthermore, Hypothesis 3b posited that TMS moderates the relationship between STC and STP. The results indicate that TMS moderates the association between STC and STP (β = −0.141, p = 0.001), as presented in Table 4 and Fig. 3.

figure 2

The moderating effect of top management support (TMS) on the relationship between human capital (HUC) and team scientific and technological performance (STP). Source: authors’ development.

figure 3

The moderating effect of top management support (TMS) on the relationship between structural capital (STC) and team scientific and technological performance (STP). Source: authors’ development.

Discussion

The current study highlights the importance of research and academic institutions effectively enhancing their scientific and technological capabilities to manage their global talent within an international research collaboration framework and meet future challenges. It also underscores the need for these institutions to facilitate scientific knowledge exchange between their employees and counterparts in different countries. Enhancing talent management through the exchange of scientific research knowledge can be most effectively accomplished via a collaborative research system between educational and research institutions (Shofiyyah et al., 2023), particularly in the context of the COVID-19 landscape. This study confirms that enhancing the human capital (HUC) and structural capital (STC) of higher education and research institutions can help attract and retain global talent and lead to more effective scientific and technological progress. The findings indicate that human capital (HUC) has a significant and positive effect on team scientific and technological performance (STP) (Hypothesis 1), consistent with previous research (Habert & Huc, 2010). This study has additionally demonstrated that structural capital (STC) has a significant and positive effect on team scientific and technological performance (STP) (Hypothesis 2), which is also supported, in different ways, by the findings of previous studies (Sobaih et al., 2022). This study has also shown that top management support moderates the association between human capital (HUC) and team scientific and technological performance (Hypothesis 3a) and the association between structural capital (STC) and team scientific and technological performance (Hypothesis 3b). These hypotheses have garnered support from previous studies’ findings in different domains (Chatterjee et al., 2022).
The study’s empirical findings also confirm the substantial moderating influence exerted by top management support on the relationships between HUC and STP (Hypothesis 3a) and between STC and STP (Hypothesis 3b), as evidenced by the results presented in Table 4. Additionally, graphical representations were produced to investigate the effects of high versus low top management support (TMS) on Hypotheses 3a and 3b.

The effect of high versus low top management support (TMS) on Hypothesis 3a is depicted in Fig. 2. The solid line illustrates the effect of robust TMS, while the dashed line shows the effect of weak TMS. The graph confirms that, as human capital (HUC) increases, the rise in team scientific and technological performance (STP) is more pronounced under robust TMS than under weak TMS, as evidenced by the steeper slope of the solid line compared with the dashed line. This finding suggests that employees within the research and academic sectors are more likely to utilize research collaboration networks when they possess strong HUC and receive strong support from the organization’s top management.
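Simple-slopes plots of this kind come from fitting a regression with a mean-centered interaction term and evaluating the predictor's slope at one standard deviation above and below the moderator's mean. The sketch below illustrates the procedure on simulated data; the variable names echo the study's constructs, but the coefficients and the positive interaction are illustrative assumptions, not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 363  # matches the study's final sample size

# Hypothetical mean-centered scores (illustrative only)
huc = rng.normal(size=n)                 # human capital (predictor)
tms = rng.normal(size=n)                 # top management support (moderator)
# Simulated outcome with a positive interaction: TMS strengthens the HUC -> STP slope
stp = 0.4 * huc + 0.3 * tms + 0.2 * huc * tms + rng.normal(scale=0.7, size=n)

# OLS with a mean-centered interaction term
X = np.column_stack([np.ones(n), huc, tms, huc * tms])
b = np.linalg.lstsq(X, stp, rcond=None)[0]  # [intercept, b_huc, b_tms, b_interaction]

# Simple slopes of HUC at +/- 1 SD of the moderator
sd = tms.std()
slope_high = b[1] + b[3] * sd   # slope under strong TMS (solid line)
slope_low = b[1] - b[3] * sd    # slope under weak TMS (dashed line)
print(f"HUC slope at high TMS: {slope_high:.3f}; at low TMS: {slope_low:.3f}")
```

A steeper slope at high TMS than at low TMS is exactly the pattern the figure conveys; plotting the two fitted lines over the range of the predictor reproduces the solid/dashed display.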

The graph in Fig. 3 shows the effect of strong versus weak top management support (TMS) on Hypothesis 3b. The continuous and dotted lines correspond to the effects of robust and weak TMS, respectively. Figure 3 illustrates that, as structural capital increases, the rise in scientific and technological performance (STP) is more pronounced under robust TMS than under weak TMS, as is evident from the steeper slope of the continuous line compared with the dotted line. This finding suggests that employees within universities and institutes are more likely to engage with research collaboration systems when enhanced structural capital is accompanied by strong support from top management.

Theoretical contribution

The current study makes significant contributions to the existing body of knowledge by exploring the intricate dynamics between organizational intellectual capital and team performance within scientific and technological research, especially during the unprecedented times brought about by the COVID-19 pandemic. Through its detailed examination of human and structural capital, alongside the moderating impact of top management support, the study provides a multi-faceted understanding of how these factors interact to enhance team outcomes.

This research enriches the literature on intellectual capital by providing empirical evidence on the positive association between HUC and STC and team performance. HUC, which includes employees’ skills, knowledge, and expertise, is a critical driver of innovation and productivity (Lenihan et al., 2019). The study highlights how a team’s collective intelligence and capabilities can lead to superior scientific and technological outputs. This finding aligns with and extends previous research underscoring the importance of skilled human resources in achieving organizational success (Luo et al., 2023; Salamzadeh et al., 2023). Structural capital, encompassing organizational processes, databases, and intellectual property, contributes significantly to team performance (Ling, 2013). The study illustrates how well-established structures and systems facilitate knowledge sharing, streamline research processes, and ultimately boost the efficiency and effectiveness of research teams. This aspect of the findings adds depth to the existing literature by demonstrating the tangible benefits of investing in robust organizational infrastructure to support research activities.

Another essential contribution of this study is integrating a research collaboration network as a facilitating factor. This network, including digital platforms and tools that enable seamless communication and collaboration among researchers, has become increasingly vital in remote work and global collaboration (Mitchell, 2023 ). By examining how these systems leverage HUC and STC to enhance team performance, the study provides a practical understanding of the mechanisms through which technology can facilitate team scientific and technological performance.

One of the most novel contributions of this study is its emphasis on the moderating role of top management support. The findings suggest that when top management actively supports research initiatives, provides required resources, and fosters innovation, the positive effects of human and structural capital on team performance are amplified (Zada et al., 2023 ). This aspect of the study addresses a gap in the literature by highlighting the critical influence of top management on the success of intellectual capital investments. It underscores the importance of managerial involvement and strategic vision in driving research excellence and team scientific and technological performance.

Practical implications

The practical implications of the current study carry considerable weight for organizations aiming to enhance their research and innovation capabilities and boost their scientific and technical progress. Organizations should prioritize recruiting, training, and retaining highly skilled researchers and professionals globally. This can be achieved through targeted hiring practices, competitive compensation and retention packages, continuous professional development opportunities, and proper research collaboration networks. By nurturing a global, talented workforce, organizations can leverage this expertise to drive innovative research and technological advancement. Investing in robust organizational structures, processes, and systems is critical (Joseph & Gaba, 2020). This includes developing comprehensive databases, implementing efficient research processes, securing intellectual property, and strengthening collaborations. These factors support efficient knowledge sharing and streamline research activities, leading to higher productivity and quality research outcomes (Azeem et al., 2021). Organizations should ensure that their infrastructure is adaptable and can support remote and collaborative work environments.

The current study emphasizes the importance of digital platforms and tools for facilitating research collaboration. Organizations should adopt advanced research collaboration networks that enable seamless communication, data sharing, and talent management. These systems are particularly crucial in a globalized research environment where team members may be geographically dispersed, and investing in such technology can significantly enhance research productivity in a sustainable way (Susanto et al., 2023). Top management plays a vital role in the success of research initiatives and contributes to scientific and technological performance. Top management should actively support research teams by providing the required resources, setting clear strategic directions, and fostering a culture of innovation. This includes allocating budgets for research and development, encouraging cross-border collaboration, recognizing and rewarding research achievements, and enhancing overall performance. Effective management ensures that the intellectual capital within the organization is fully utilized and aligned with organizational development goals (Paoloni et al., 2020). Organizations should also create a working atmosphere that encourages research, creativity, and innovation, for instance by establishing innovation labs, promoting interdisciplinary research, recruiting international talent, sharing research scholars, and encouraging the exchange of ideas across departments globally. A research-oriented culture that supports innovation can inspire researchers to pursue groundbreaking work and contribute to the organization’s competitive edge.

Limitations and future research directions

The research presents numerous theoretical and practical implications; however, it also has limitations. A potential limitation is common method bias, which could affect the findings: the data for the study variables were obtained from a single source and relied on self-report measures (Podsakoff et al., 2003). Future studies should therefore adopt a longitudinal design to gain additional insight into how organizations can enhance efficiency. Furthermore, the sample was limited to 363 usable responses, drawn from only ten research and academic institutions in the education and research sector.

Consequently, this restricted sample may limit the generalizability of the findings. Future researchers could employ a larger sample and a more systematic sampling approach across organizations to enhance the comprehensiveness and generalizability of findings on global talent management and scientific and technological advancement. Future investigations could also explore alternative boundary conditions to determine whether additional factors enhance the model's efficacy.
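The common-method-bias concern raised above is often screened with Harman's single-factor test: all self-report items are factored together, and if a single factor accounts for the majority of the shared variance (commonly, more than half), method bias is a plausible worry. The following is a minimal, self-contained Python sketch on synthetic questionnaire data; the item count, loadings, and the `harman_single_factor_share` helper are invented for illustration and are not the authors' analysis.

```python
import math
import random

def harman_single_factor_share(items):
    """Share of total variance captured by the largest eigenvalue of the
    item correlation matrix (Harman's single-factor screen)."""
    p, n = len(items), len(items[0])
    # Standardize each item (z-scores, sample SD).
    z = []
    for col in items:
        mu = sum(col) / n
        sd = math.sqrt(sum((v - mu) ** 2 for v in col) / (n - 1))
        z.append([(v - mu) / sd for v in col])
    # Item correlation matrix.
    R = [[sum(z[i][t] * z[j][t] for t in range(n)) / (n - 1)
          for j in range(p)] for i in range(p)]
    # Power iteration for the dominant eigenvalue (R is PSD).
    v = [1.0] * p
    lam = 0.0
    for _ in range(200):
        w = [sum(R[i][j] * v[j] for j in range(p)) for i in range(p)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
        lam = sum(v[i] * sum(R[i][j] * v[j] for j in range(p))
                  for i in range(p))
    return lam / p  # trace of a correlation matrix equals p

# Synthetic data: six items, each loading modestly on a shared
# "method" factor plus its own trait (loadings are made up).
random.seed(1)
n = 400
method = [random.gauss(0, 1) for _ in range(n)]
items = []
for _ in range(6):
    trait = [random.gauss(0, 1) for _ in range(n)]
    items.append([0.4 * method[t] + trait[t] for t in range(n)])

share = harman_single_factor_share(items)
print(f"first factor explains {share:.0%} of total variance")
```

With these modest simulated method loadings, the first factor's share should come out well under the 50% threshold; in an actual screen the test would be run on the real questionnaire items.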

Numerous academic studies have emphasized the significance of examining talent management outcomes in global human resource management (HRM). The continuous international movement of highly qualified individuals is viewed as a driving force behind the development of new technologies, the dissemination of scientific findings, and collaboration between institutions worldwide. Every organization strives to build a qualified and well-trained team, and personnel departments focus on ways to transfer knowledge from experienced workers to new hires. This study used a research collaboration system to examine the relationship between organizational intellectual capital (human and structural capital) and team scientific and technological performance, and it underscores the moderating role of top management support. The findings offer a nuanced perspective on how organizations can leverage their intellectual assets to foster higher productivity and innovation, especially during emergencies.
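The moderating role of top management support reported above corresponds, statistically, to an interaction term: the slope of intellectual capital on performance is allowed to vary with the level of support. The study's reference list suggests a PLS-SEM analysis (e.g., Ringle et al., 2020); purely to illustrate the interaction logic, the sketch below simulates standardized data with a known interaction effect and recovers it with a hand-rolled ordinary-least-squares solver. The `ols` helper and all coefficients are invented for this example and are not the paper's model.

```python
import random

def ols(X, y):
    """Solve the normal equations (X'X) beta = X'y by Gauss-Jordan
    elimination with partial pivoting."""
    k = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    A = [XtX[i] + [Xty[i]] for i in range(k)]  # augmented matrix
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(k):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [A[i][k] / A[i][i] for i in range(k)]

# Simulate standardized predictors: x ~ intellectual capital,
# m ~ top management support; true interaction coefficient 0.25.
random.seed(42)
rows, y = [], []
for _ in range(500):
    x = random.gauss(0, 1)
    m = random.gauss(0, 1)
    e = random.gauss(0, 0.5)
    y.append(1.0 + 0.4 * x + 0.3 * m + 0.25 * x * m + e)
    rows.append([1.0, x, m, x * m])  # intercept, main effects, interaction

b0, b_x, b_m, b_int = ols(rows, y)
print(f"interaction coefficient: {b_int:.2f}")  # close to the simulated 0.25
```

A non-zero interaction coefficient is the regression counterpart of "the effect of intellectual capital on performance strengthens as top management support increases".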

Data availability

Due to respondents’ privacy concerns, the data are not publicly available; however, they can be obtained from the corresponding author upon reasonable request.

Al-Tit AA, Al-Ayed S, Alhammadi A, Hunitie M, Alsarayreh A, Albassam W (2022) The impact of employee development practices on human capital and social capital: the mediating contribution of knowledge management. J Open Innov 8(4):218

Al Ariss A, Cascio WF, Paauwe J (2014) Talent management: current theories and future research directions. J World Bus 49(2):173–179

Al Jawali H, Darwish TK, Scullion H, Haak-Saheem W (2022) Talent management in the public sector: empirical evidence from the Emerging Economy of Dubai. Int J Hum Resour Manag 33(11):2256–2284

Ali M, Li Z, Khan S, Shah SJ, Ullah R (2021) Linking humble leadership and project success: the moderating role of top management support with mediation of team-building. Int J Manag Proj Bus 14(3):545–562

Almeida F, Santos JD, Monteiro JA (2020) The challenges and opportunities in the digitalization of companies in a post-COVID-19 World. IEEE Eng Manag Rev 48(3):97–103

Alrowwad AA, Abualoush SH, Masa’deh RE (2020) Innovation and intellectual capital as intermediary variables among transformational leadership, transactional leadership, and organizational performance. J Manag Dev 39(2):196–222

Anshari M, Hamdan M (2022) Understanding knowledge management and upskilling in Fourth Industrial Revolution: transformational shift and SECI model. VINE J Inf Knowl Manag Syst 52(3):373–393

Arocena R, Sutz J (2021) Universities and social innovation for global sustainable development as seen from the south. Technol Forecast Soc change 162:120399

Arslan A, Golgeci I, Khan Z, Al-Tabbaa O, Hurmelinna-Laukkanen P (2021) Adaptive learning in cross-sector collaboration during global emergency: conceptual insights in the context of COVID-19 pandemic. Multinatl Bus Rev 29(1):21–42

Azeem M, Ahmed M, Haider S, Sajjad M (2021) Expanding competitive advantage through organizational culture, knowledge sharing and organizational innovation. Technol Soc 66:101635

Bakhsh K, Hafeez M, Shahzad S, Naureen B, Faisal Farid M (2022) Effectiveness of digital game based learning strategy in higher educational perspectives. J Educ e-Learn Res 9(4):258–268

Barney J (1991) Firm resources and sustained competitive advantage. J Manag 17(1):99–120

Barney JB, Clark DN (2007) Resource-based theory: creating and sustaining competitive advantage. Oxford University Press

Baron A, Armstrong M (2007) Human capital management: achieving added value through people. Kogan Page Publishers

Barpanda S (2021) Role of human and structural capital on performance through human resource practices in Indian microfinance institutions: a mediated moderation approach. Knowl Process Manag 28(2):165–180

Biondi L, Russo S (2022) Integrating strategic planning and performance management in universities: a multiple case-study analysis. J Manag Gov 26(2):417–448

Birasnav M, Mittal R, Dalpati A (2019) Integrating theories of strategic leadership, social exchange, and structural capital in the context of buyer–supplier relationship: an empirical study. Glob J Flex Syst Manag 20:219–236

Bontis N (1998) Intellectual capital: an exploratory study that develops measures and models. Manag Decis 36(2):63–76

Bontis N, Fitz‐enz J (2002) Intellectual capital ROI: a causal map of human capital antecedents and consequents. J Intellect Cap 3(3):223–247

Cavicchi C, Vagnoni E (2017) Does intellectual capital promote the shift of healthcare organizations towards sustainable development? Evidence from Italy. J Clean Prod 153:275–286

Chatterjee S, Chaudhuri R, Vrontis D (2022) Does remote work flexibility enhance organization performance? Moderating role of organization policy and top management support. J Bus Res 139:1501–1512

National Research Council (2012) Education for life and work: developing transferable knowledge and skills in the 21st century. National Academies Press

Davenport S, Carr A, Bibby D (2002) Leveraging talent: spin–off strategy at industrial research. RD Manag 32(3):241–254

Dess GG, Picken JC (2000) Changing roles: Leadership in the 21st century. Organ Dyn 28(3):18–34

Duan W, Li C (2023) Be alert to dangers: collapse and avoidance strategies of platform ecosystems. J Bus Res 162:113869

Farahian M, Parhamnia F, Maleki N (2022) The mediating effect of knowledge sharing in the relationship between factors affecting knowledge sharing and reflective thinking: the case of English literature students during the COVID-19 crisis. Res Pract Technol Enhanc Learn 17(1):1–25

Fasi MA (2022) An overview on patenting trends and technology commercialization practices in the university Technology Transfer Offices in USA and China. World Pat Inf 68:102097

Felin T, Hesterly WS (2007) The knowledge-based view, nested heterogeneity, and new value creation: philosophical considerations on the locus of knowledge. Acad Manag Rev 32(1):195–218

Fernandes C, Veiga PM, Lobo CA, Raposo M (2023) Global talent management during the COVID‐19 pandemic? The Gods must be crazy! Thunderbird Int Bus Rev 65(1):9–19

Gogan LM, Duran DC, Draghici A (2015) Structural capital—a proposed measurement model. Procedia Econ Financ 23:1139–1146

Gonzalez-Mulé E, Courtright SH, DeGeest D, Seong J-Y, Hong D-S (2016) Channeled autonomy: the joint effects of autonomy and feedback on team performance through organizational goal clarity. J Manag 42(7):2018–2033

Haak-Saheem W (2020) Talent management in Covid-19 crisis: how Dubai manages and sustains its global talent pool. Asian Bus Manag 19:298–301

Habert B, Huc C (2010) Building together digital archives for research in social sciences and humanities. Soc Sci Inf 49(3):415–443

Haider SA, Akbar A, Tehseen S, Poulova P, Jaleel F (2022) The impact of responsible leadership on knowledge sharing behavior through the mediating role of person–organization fit and moderating role of higher educational institute culture. J Innov Knowl 7(4):100265

Hair JF, Page M, Brunsveld N (2019) Essentials of business research methods. Routledge

Harsch K, Festing M (2020) Dynamic talent management capabilities and organizational agility—a qualitative exploration. Hum Resour Manag 59(1):43–61

He S, Chen W, Wang K, Luo H, Wang F, Jiang W, Ding H (2023) Region generation and assessment network for occluded person re-identification. IEEE Trans Inf Forensic Secur 19:120–132

Henseler J, Ringle CM, Sarstedt M (2015) A new criterion for assessing discriminant validity in variance-based structural equation modeling. J Acad Mark Sci 43:115–135

Jain N, Thomas A, Gupta V, Ossorio M, Porcheddu D (2022) Stimulating CSR learning collaboration by the mentor universities with digital tools and technologies—an empirical study during the COVID-19 pandemic. Manag Decis 60(10):2824–2848

Joseph J, Gaba V (2020) Organizational structure, information processing, and decision-making: a retrospective and road map for research. Acad Manag Ann 14(1):267–302

Kaliannan M, Darmalinggam D, Dorasamy M, Abraham M (2023) Inclusive talent development as a key talent management approach: a systematic literature review. Hum Resour Manag Rev 33(1):100926

Kallmuenzer A, Baptista R, Kraus S, Ribeiro AS, Cheng C-F, Westhead P (2021) Entrepreneurs’ human capital resources and tourism firm sales growth: a fuzzy-set qualitative comparative analysis. Tour Manag Perspect 38:100801

Kim KY, Atwater L, Patel PC, Smither JW (2016) Multisource feedback, human capital, and the financial performance of organizations. J Appl Psychol 101(11):1569

Kwok L (2022) Labor shortage: a critical reflection and a call for industry-academia collaboration. Int J Contemp Hosp Manag 34(11):3929–3943

Lee J-C, Shiue Y-C, Chen C-Y (2016) Examining the impacts of organizational culture and top management support of knowledge sharing on the success of software process improvement. Comput Hum Behav 54:462–474

Lenihan H, McGuirk H, Murphy KR (2019) Driving innovation: public policy and human capital. Res policy 48(9):103791

Ling Y-H (2013) The influence of intellectual capital on organizational performance—knowledge management as moderator. Asia Pac J Manag 30(3):937–964

Luo J, Zhuo W, Xu B (2023) The bigger, the better? Optimal NGO size of human resources and governance quality of entrepreneurship in circular economy. Manag Decis (ahead-of-print) https://doi.org/10.1108/MD-03-2023-0325

Damsgaard J, Lyytinen K (2001) The role of intermediating institutions in the diffusion of electronic data interchange (EDI): how industry associations intervened in Denmark, Finland, and Hong Kong. Inf Soc 17(3):195–210

Marchiori DM, Rodrigues RG, Popadiuk S, Mainardes EW (2022) The relationship between human capital, information technology capability, innovativeness and organizational performance: an integrated approach. Technol Forecast Soc Change 177:121526

Marinelli L, Bartoloni S, Pascucci F, Gregori GL, Briamonte MF (2022) Genesis of an innovation-based entrepreneurial ecosystem: exploring the role of intellectual capital. J Intellect Cap 24(1):10–34

Mehralian G, Nazari JA, Ghasemzadeh P (2018) The effects of knowledge creation process on organizational performance using the BSC approach: the mediating role of intellectual capital. J Knowl Manag 22(4):802–823

Mitchell A (2023) Collaboration technology affordances from virtual collaboration in the time of COVID-19 and post-pandemic strategies. Inf Technol People 36(5):1982–2008

Muñoz JLR, Ojeda FM, Jurado DLA, Peña PFP, Carranza CPM, Berríos HQ, Vasquez-Pauca MJ (2022) Systematic review of adaptive learning technology for learning in higher education. Eurasia J Educ Res 98(98):221–233

Nezam MHK, Ataffar A, Isfahani AN, Shahin A (2013) The impact of structural capital on new product development performance effectiveness—-the mediating role of new product vision and competitive advantage. Int J Hum Resour Stud 3(4):281

O’Dwyer M, Filieri R, O’Malley L (2023) Establishing successful university–industry collaborations: barriers and enablers deconstructed. J Technol Transf 48(3):900–931

Oliver C (1997) Sustainable competitive advantage: combining institutional and resource‐based views. Strateg Manag J 18(9):697–713

Pak J, Heidarian Ghaleh H, Mehralian G (2023) How does human resource management balance exploration and exploitation? The differential effects of intellectual capital‐enhancing HR practices on ambidexterity and firm innovation. Human Resource Manag https://doi.org/10.1002/hrm.22180

Paoloni M, Coluccia D, Fontana S, Solimene S (2020) Knowledge management, intellectual capital and entrepreneurship: a structured literature review. J Knowl Manag 24(8):1797–1818

Patnaik S, Munjal S, Varma A, Sinha S (2022) Extending the resource-based view through the lens of the institution-based view: a longitudinal case study of an Indian higher educational institution. J Bus Res 147:124–141

Pellegrini L, Aloini D, Latronico L (2022) Open innovation and intellectual capital during emergency: evidence from a case study in telemedicine. Knowl Manag Res Pract 21(4), 765–776

Phan LT, Nguyen TV, Luong QC, Nguyen TV, Nguyen HT, Le HQ, Pham QD (2020) Importation and human-to-human transmission of a novel coronavirus in Vietnam. N Engl J Med 382(9):872–874

Phuong QN, Le Ngoc M, Dong HT, Thao TLT, Tran T, Cac T (2024) Enhancing employment opportunities for people with disabilities in Vietnam: the role of vocational training and job placement centers. J Chin Hum Resour Manag 15(3):64–75

Podsakoff PM, MacKenzie SB, Lee J-Y, Podsakoff NP (2003) Common method biases in behavioral research: a critical review of the literature and recommended remedies. J Appl Psychol 88(5):879

Preacher KJ, Hayes AF (2008) Asymptotic and resampling strategies for assessing and comparing indirect effects inmultiple mediator models. Behav Res Methods 40(3):879–891

Rådberg KK, Löfsten H (2024) The entrepreneurial university and development of large-scale research infrastructure: Exploring the emerging university function of collaboration and leadership. J Technol Transf 49(1):334–366

Radhamani R, Kumar D, Nizar N, Achuthan K, Nair B, Diwakar S (2021) What virtual laboratory usage tells us about laboratory skill education pre-and post-COVID-19: Focus on usage, behavior, intention and adoption. Educ Inf Technol 26(6):7477–7495

Rasheed MH, Khalid J, Ali A, Rasheed MS, Ali K (2024) Human resource analytics in the era of artificial intelligence: Leveraging knowledge towards organizational success in Pakistan. J Chin Hum Resour Manag 15:3–20

Ringle CM, Sarstedt M, Mitchell R, Gudergan SP (2020) Partial least squares structural equation modeling in HRM research. Int J Hum Resour Manag 31(12):1617–1643

Roos G, Roos J (1997) Measuring your company’s intellectual performance. Long Range Plan 30(3):413–426

Salamzadeh A, Tajpour M, Hosseini E, Brahmi MS (2023) Human capital and the performance of Iranian Digital Startups: the moderating role of knowledge sharing behaviour. Int J Public Sect Perform Manag 12(1-2):171–186

Sarwar A, Mustafa A (2023) Analysing the impact of green intellectual capital on environmental performance: the mediating role of green training and development. Technol Anal Strateg Manag 1–14. https://doi.org/10.1080/09537325.2023.2209205

Scott WR (2008) Institutions and organizations: ideas and interests. Sage

Secundo G, Dumay J, Schiuma G, Passiante G (2016) Managing intellectual capital through a collective intelligence approach: an integrated framework for universities. J Intellect Cap 17(2):298–319

Secundo G, Massaro M, Dumay J, Bagnoli C (2018) Intellectual capital management in the fourth stage of IC research: a critical case study in university settings. J Intellect Cap 19(1):157–177

Shao JJ, Ariss AA (2020) Knowledge transfer between self-initiated expatriates and their organizations: research propositions for managing SIEs. Int Bus Rev 29(1):101634

Shofiyyah NA, Komarudin TS, Hasan MSR (2023) Innovations in Islamic Education Management within the University Context: addressing challenges and exploring future prospects. Nidhomul Haq 8(2):193–209

Sigala M, Ren L, Li Z, Dioko LA (2023) Talent management in hospitality during the COVID-19 pandemic in Macao: a contingency approach. Int J Contemp Hosp Manag 35(8):2773–2792

Singh SK, Gupta S, Busso D, Kamboj S (2021) Top management knowledge value, knowledge sharing practices, open innovation and organizational performance. J Bus Res 128:788–798

Sobaih AEE, Hasanein A, Elshaer IA (2022) Higher education in and after COVID-19: the impact of using social network applications for e-learning on students’ academic performance. Sustainability 14(9):5195

Sommer LP, Heidenreich S, Handrich M (2017) War for talents—how perceived organizational innovativeness affects employer attractiveness. RD Manag 47(2):299–310

Stewart T (1991) Brainpower: how intellectual capital is becoming America’s most valuable asset. Fortune

Streukens S, Leroi-Werelds S (2016) Bootstrapping and PLS-SEM: A step-by-step guide to get more out of your bootstrap results. Eur Manage J 34(6):618–632

Sulaiman F, Uden L, Eldy EF (2022) Online Learning in Higher Education Institution During COVID-19: A Review and the Way Forward. Paper presented at the International Workshop on Learning Technology for Education Challenges

Susanto P, Sawitri NN, Ali H, Rony ZT (2023) Employee performance and talent management impact increasing construction company productivity. Int J Psychol Health Sci 1(4):144–152

Tan CN-L, Md. Noor S (2013) Knowledge management enablers, knowledge sharing and research collaboration: a study of knowledge management at research universities in Malaysia. Asian J Technol Innov 21(2):251–276

Vaiman V, Sparrow P, Schuler R, Collings DG (2018) Macro talent management: a global perspective on managing talent in developed markets. Routledge

Vătămănescu E-M, Cegarra-Navarro J-G, Martínez-Martínez A, Dincă V-M, Dabija D-C (2023) Revisiting online academic networks within the COVID-19 pandemic–From the intellectual capital of knowledge networks towards institutional knowledge capitalization. J Intellect Cap 24(4):948–973

Wang Y, Lee L-H, Braud T, Hui P (2022) Re-shaping Post-COVID-19 teaching and learning: A blueprint of virtual-physical blended classrooms in the metaverse era. Paper presented at the 2022 IEEE 42nd International Conference on Distributed Computing Systems Workshops (ICDCSW)

Wang Z, Wang N, Liang H (2014) Knowledge sharing, intellectual capital and firm performance. Manag Decis 52(2):230–258

Xu A, Li Y, Donta PK (2024) Marketing decision model and consumer behavior prediction with deep learning. J Organ End Use Comput (JOEUC) 36(1):1–25

Yang G, Xiangming L (2024) Graduate socialization and anxiety: insights via hierarchical regression analysis and beyond. Stud High Educ 1–17. https://doi.org/10.1080/03075079.2024.2375563

Zada M, Khan J, Saeed I, Zada S, Jun ZY (2023) Linking public leadership with project management effectiveness: mediating role of goal clarity and moderating role of top management support. Heliyon 9(5)

Author information

Authors and affiliations

School of Economics and Management, Hanjiang Normal University, Shiyan, 442000, China

Muhammad Zada

Facultad de Administración y Negocios, Universidad Autónoma de Chile, Santiago, 8320000, Chile

School of Law, Huazhong University of Science and Technology, Wuhan, Hubei, China

Imran Saeed

College of Management, Shenzhen University, Shenzhen, China

Department of Business Administration, Faculty of Management Sciences, Ilma University, Karachi, Pakistan

Shagufta Zada

Business School Henan University, Kaifeng, Henan, China

Contributions

Conceptualization: Muhammad Zada and Imran Saeed. Methodology: Jawad Khan. Software: Shagufta Zada. Data collection: Muhammad Zada, Shagufta Zada and Jawad Khan. Formal analysis: Imran Saeed and Jawad Khan. Resources: Muhammad Zada. Writing original draft preparation: Muhammad Zada and Imran Saeed. Writing review and editing: Jawad Khan, Shagufta Zada. All authors have read and agreed to the published version of the paper.

Corresponding author

Correspondence to Muhammad Zada .

Ethics declarations

Competing interests

The authors declare no competing interests.

Ethical approval

The authors sought and received ethical approval from the Research Ethics Committee of the School of Economics and Management at Hanjiang Normal University, China (approval number 2023REC001), and the study complied with ethical standards.

Informed consent statement

Informed consent was obtained from all subjects involved in the study. All participants were accessed with the support of the HR departments of organizations in China’s research and academic sector. Participants were provided with comprehensive information regarding the study’s purpose and procedures. Confidentiality and privacy were strictly maintained throughout the research process. Using a time-lagged data collection approach, we collected responses from 393 employees in China’s research and academic sector.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .

About this article

Cite this article

Zada, M., Saeed, I., Khan, J. et al. Navigating post-pandemic challenges through institutional research networks and talent management. Humanit Soc Sci Commun 11, 1164 (2024). https://doi.org/10.1057/s41599-024-03697-9

Received : 28 February 2024

Accepted : 30 August 2024

Published : 09 September 2024

DOI : https://doi.org/10.1057/s41599-024-03697-9

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

Quick links

  • Explore articles by subject
  • Guide to authors
  • Editorial policies

review of literature in education system

IEEE Account

  • Change Username/Password
  • Update Address

Purchase Details

  • Payment Options
  • Order History
  • View Purchased Documents

Profile Information

  • Communications Preferences
  • Profession and Education
  • Technical Interests
  • US & Canada: +1 800 678 4333
  • Worldwide: +1 732 981 0060
  • Contact & Support
  • About IEEE Xplore
  • Accessibility
  • Terms of Use
  • Nondiscrimination Policy
  • Privacy & Opting Out of Cookies

A not-for-profit organization, IEEE is the world's largest technical professional organization dedicated to advancing technology for the benefit of humanity. © Copyright 2024 IEEE - All rights reserved. Use of this web site signifies your agreement to the terms and conditions.

review of literature in education system

  • The Open University
  • Accessibility hub
  • Guest user / Sign out
  • Study with The Open University

My OpenLearn Profile

Personalise your OpenLearn profile, save your favourite content and get recognition for your learning

Approaching language, literature and childhood

Approaching language, literature and childhood

Course description

Course content, course reviews.

Studying children's literature allows us to learn not just about the books that children are reading, but also about what role reading plays in childhood, and how our ideas about childhood affect the books that they read. In this free course, you will be introduced to some of the key questions that the study of children's literature raises, such as: how do children acquire and use languages and literacies? Why (and how) is language important in children’s literature? Why (and how) is literature important for children and young adults? How is childhood socially constructed? And how is the child represented in literature?

This OpenLearn course is an adapted extract from the Open University course  L310 Language, literature and childhood .

Course learning outcomes

After studying this course, you should be able to:

discuss some of the different theories, approaches and debates in the interdisciplinary field of children's literature

reflect on your own and others' memories of children's literature

consider how the academic disciplines of literature and childhood intersect, bringing different perspectives to the field

describe how texts for children and young people convey and challenge ideas around diversity through an exploration of ethnic diversity.

First Published: 13/09/2024

Updated: 13/09/2024

Rate and Review

Rate this course, review this course.

Log into OpenLearn to leave reviews and join in the conversation.

Create an account to get more

Track your progress.

Review and track your learning through your OpenLearn Profile.

Statement of Participation

On completion of a course you will earn a Statement of Participation.

Access all course activities

Take course quizzes and access all learning.

Review the course

When you have finished a course leave a review and tell others what you think.

For further information, take a look at our frequently asked questions which may give you the support you need.

About this free course

Become an ou student, download this course, share this free course.

logo

  • Aims & Scopes
  • Editorial Board
  • Young Academic Editors
  • Subscription
  • Current Issue
  • Most Downloaded
  • Advance Search
  • Submission & Review
  • Guide for Authors
  • Publishing Process
  • Publishing Ethics
  • Peer View Process
  • Article Publish Charge
  • Submission Template

search

Isaac Oyeyemi Olayode, Bo Du, Alessandro Severino, Tiziana Campisi, Frimpong Justice Alex. 2023: Systematic literature review on the applications, impacts, and public perceptions of autonomous vehicles in road transportation system. Journal of Traffic and Transportation Engineering (English Edition), 10(6): 1037-1060. DOI: 10.1016/j.jtte.2023.07.006

Systematic literature review on the applications, impacts, and public perceptions of autonomous vehicles in road transportation system

  • Isaac Oyeyemi Olayode , 
  • Bo Du , 
  • Alessandro Severino , 
  • Tiziana Campisi , 
  • Frimpong Justice Alex

 alt=

Export File

You can copy and paste references from this page.

IMAGES

  1. 14 Types Of Literature Review

    review of literature in education system

  2. (PDF) Blended learning approaches at higher education institutions to

    review of literature in education system

  3. (PDF) ICT in education: A critical literature review and its implications

    review of literature in education system

  4. (PDF) A Review of Literature on E-Learning Systems in Higher Education

    review of literature in education system

  5. Four steps to write literature review [Critical Analysis of the previous knowledge about your topic]

    review of literature in education system

  6. (PDF) A Review of Literature in Mobile Learning: A New Paradigm in

    review of literature in education system

VIDEO

  1. Systematic Literature Review: An Introduction [Urdu/Hindi]

  2. what is Literature Review?

  3. Literature || Definition || types of literature || Function of Literature || #englishliterature

  4. पिता का कर्ज अदा करने के लिए पूरे दुनिया की दौलत भी कुछ नहीं || Capt. Zile Singh Academy

  5. Literature Review (الجزء الأول)

  6. Studying English Literature in Context: Study Guide & Critical Reflections

COMMENTS

  1. How can education systems improve? A systematic literature review

    Understanding what contributes to improving a system will help us tackle the problems in education systems that usually fail disproportionately in providing quality education for all, especially for the most disadvantage sectors of the population. This paper presents the results of a qualitative systematic literature review aimed at providing a comprehensive overview of what education research ...

  2. Systematic Reviews in Educational Research: Methodology, Perspectives

    A literature review is a scholarly paper which provides an overview of current knowledge about a topic. It will typically include substantive findings, as well as theoretical and methodological contributions to a particular topic (Hart 2018, p. xiii).Traditionally in education 'reviewing the literature' and 'doing research' have been viewed as distinct activities.

  3. Artificial intelligence in education: A systematic literature review

    1. Introduction. Information technologies, particularly artificial intelligence (AI), are revolutionizing modern education. AI algorithms and educational robots are now integral to learning management and training systems, providing support for a wide array of teaching and learning activities (Costa et al., 2017, García et al., 2007).Numerous applications of AI in education (AIED) have emerged.

  4. Guidance on Conducting a Systematic Literature Review

    Literature reviews establish the foundation of academic inquires. However, in the planning field, we lack rigorous systematic reviews. In this article, through a systematic search on the methodology of literature review, we categorize a typology of literature reviews, discuss steps in conducting a systematic literature review, and provide suggestions on how to enhance rigor in literature ...

  5. Resilience in educational system: A systematic review and directions

    These studies have highlighted the role of the environment, individual experiences and background, and educational institution's programs as enablers of resilience-building. This review contributes to the extant literature by proposing recommendations for research potentials toward building a more resilient educational system.

  6. Digital transformation in education: A systematic review of education 4

    The education system is changing with the development of technology. • Education 4.0 represents both the change in context and technology for learning. • This article reviews the state-of-the-art of existing Education 4.0 literature. • It determines the most used approaches and examines recent trends in Education 4.0. •

  7. Factors Contributing to School Effectiveness: A Systematic Literature

    The studies examined in this systematic literature review also focused on online education and distance teaching and learning, because the research period coincided with the COVID-19 pandemic; accordingly, we identified some issues related to education during the pandemic [71,72,73,74]. The experience of online distance learning (ODL ...

  9. A systematic literature review of barriers and supports: initiating

    This backdrop sets the stage for the present study, which involved a large-scale systematic review of the literature on factors that support or act as barriers to introducing educational change at the system level. Whilst acknowledging that change is messy and non-linear in its implementation, this paper considers factors that system-level leaders and ...

  10. A Literature Review on Impact of COVID-19 Pandemic on Teaching and

    Education systems across the world, including Bhutan's, need to invest in the professional development of teachers, especially in ICT and effective pedagogy, considering the present scenario. Making online teaching creative, innovative and interactive through user-friendly tools is another area for research and development.

  11. Review of Education

    The literature review presented by Scheerens as part of the integrated multi-level model of education helped categorise the broad range of system-level factors identified. Although the factors fit well with Scheerens' proposed framework, our synthesis of effect sizes shows that not all themes, and not all factors within each theme, were ...

  12. Review of Education

    The aim of this systematic literature review was to introduce a comprehensive model of trust in the multi-level educational system which interconnects various domains of trust. It is based on a systematic literature review of 183 recent research articles on trust in different educational settings.

  13. Full article: Learning management systems: a review of the research

    No prior study was found that has looked at the research designs employed by empirical studies of the impact of LMSs on learner communities, particularly those that compare research from two or more educational partners. This literature review compares and contrasts 23 empirical studies (fourteen with an Australian focus, nine with a Chinese ...

  14. Data Envelopment Analysis and Higher Education: A Systematic Review of

    The interest in Data Envelopment Analysis (DEA) has grown since it was first put forward in 1978. In response to the overwhelming interest, systematic literature reviews, as well as bibliometric studies, have been performed to describe the state-of-the-art and to offer quantitative outlines of the high-impact papers on global applications of DEA in the higher education system (DEA-HE).

  15. Review of Education

    An official journal of the British Educational Research Association, Review of Education (RoE) is a focal point for the publication of educational research from throughout the world, and on topics of international interest. We specialise in publishing substantial papers (8,000-20,000 words) in order to give authors the opportunity to describe major projects or ideas more fully than would ...

  16. A systematic literature review on educational recommender systems for

    Despite the benefits, there are known issues with the use of recommender systems in the educational domain. ... How Mobile augmented reality is applied in education? A systematic literature review. Creative Education. 2019; 10:1589-1627. doi: 10.4236/ce.2019.107115. Huang L, Wang C-D, Chao H-Y, Lai J-H, Yu PS. A score ...

  17. Chapter 1: Introduction

    At the graduate or doctoral level, the literature review is an essential feature of thesis and dissertation writing, as well as grant proposal writing. That is to say, "A substantive, thorough, sophisticated literature review is a precondition for doing substantive, thorough, sophisticated research…A researcher cannot perform significant research ...

  18. PDF Review of literature cover 10-11-05

    Equity concerns arise in relation to many groups' full participation in education of good quality. This includes groups defined by socioeconomic status, location and proximity to schools, special needs, health status, religion, and gender. This review briefly examines only one of these critical equity areas: gender.

  19. Education Literature Review

    The Role of the Literature Review. Your literature review gives your readers an understanding of the evolution of scholarly research on your topic. In your literature review you will: Review the literature in two ways: The literature review is NOT an annotated bibliography. Nor should it simply summarize the articles you've read.

  20. A literature review on the student evaluation of teaching: An

    The current study is anchored on a literature review paradigm. For any study, a literature review is an integral part of the entire process (Fink, 2005; Hart, 1998; Machi and McEvoy, 2016). In general, literature reviews involve database retrievals and searches defined by a specific topic (Rother, 2007).

  21. Literature on School Education, Quality, and Outcomes: A Review

    The literature review indicates that educational performance and the structure of educational systems differ strongly between countries and economic levels. Although few studies have tried to estimate the relationship between these variables, particularly in the case of India, it shows that the extent of the impact varies from region to ...

  22. A Review on Indian Education System with Issues and Challenges

    After an extensive review of the literature and an analysis of the scientific works of outstanding scientists and thinkers, the authors conclude that higher education has become larger and more important to society ...

  23. Navigating post-pandemic challenges through institutional research

    Literature review. An institution's ... exchange of scientific research knowledge can be most effectively accomplished by utilizing a collaborative research system between educational and ...

  24. (PDF) Review of Indian education system

    Abstract — In today's world of globalization, the Indian education system is to be upgraded. The paper focuses on the recent literature available related to the teaching-learning approach. The ...
