Performance Management System. A Literature Review


Chiara Demartini

Part of the book series: Contributions to Management Science


This chapter proposes a broad systematic review of PMS design. It describes the evolution of approaches to PMS design based on the application of theories; introduces the concepts and frameworks that characterise the field and clearly call for more research on a comprehensive PMS framework; and shows how PMS mechanisms should relate to each other in order to develop both efficiency and innovation, which result in long-term survival. From the review of PMS design we can argue that the effective design of a PMS is contingent on both external and internal variables; that financial performance measures are increasingly assessed together with non-financial performance measures; that the link between PMS and strategy should be enacted through different kinds of PM mechanisms; and that a PMS is a dynamic package of PM mechanisms, which should be considered as a whole in order to assess its overall effectiveness. Finally, since analysing the effect of single mechanisms on overall effectiveness is partial and problematic, there is a call for more loosely coupled PMSs, which develop both control and flexibility.

The transition from measurement to management of performance has been called the second wave of knowledge management, since in the first wave “knowledge management – in particular in Nonaka’s view – concerns the single individual’s personal tacit knowledge and the subsequent problem of distributing such knowledge to other individuals in the organisation”, while in the second wave “knowledge management is about management control where managers combine, apply and develop a corporate body of knowledge resources to produce and use value around the company’s services” (Mouritsen and Larsen 2005 : 388).

Anne Huff defined the systematic literature review as the “explicit procedures to identify, select, and critically appraise research relevant to a clearly formulated question” (Huff 2009 : 148).

Although the review is focused on ‘performance management’ and ‘performance management system’, the search terms included other concepts, which are closely related to the main research question.

The sophistication of the management accounting systems has been defined as the “capability of an MAS to provide a broad spectrum of information relevant for planning, controlling, and decision-making all in the aim of creating or enhancing value” (Abdel-Kader and Luther 2008 : 3).

Previous studies on leadership style analysed the effect of this variable on budgetary participation, and the results were statistically significant (Brownell 1983 ).

Tolerance for ambiguity measures “the extent to which one feels threatened by ambiguity or ambiguous situations” (Chong 1998 : 332).

TCE develops the idea that controlling complex economic transactions by “hard contracting” is expensive and an optimal choice between firm and market governance should be taken according to asset specificity. “If assets are non-specific, markets enjoy advantages in both production cost and governance cost respects […]. As assets become more specific, however, the aggregation benefits of markets […] are reduced and exchange takes on a progressively stronger bilateral character” (Williamson 1981 : 558).

Even though the first framework developed four perspectives (financial, internal business, customer, and innovation and improvement), Kaplan and Norton specified that each firm, or unit, using the BSC should adjust the number and focus of the perspectives and their measures to the specific case under analysis. Therefore, the number of perspectives can be higher than four, and the perspective captions can be changed according to the strategic issues that the firm has to monitor in order to be successful.

Together with the BSC, other performance measurement systems based on both financial and non-financial performance measures have been developed, such as the Results and Determinants (Fitzgerald et al. 1991 ), the Performance Pyramid (Lynch and Cross 1995 ), and the PISCI (Azofra et al. 2003 ).

According to Kim and Oh, the performance measures related to R&D departments should be based on behavioural and qualitative measures, such as “leadership and mentoring for younger researchers”, and appraised by a “bottom up (e.g., R&D researchers’ evaluation of their own bosses say, R&D managers) as well as horizontal (e.g., peers and/or colleagues)” evaluation scheme (Kim and Oh 2002 : 19).

Simons described the old management control philosophy as a “command-and-control” one, in which strategy setting follows a top-down direction, a lot of emphasis is put on standardization and efficiency, results are compared to and should be aligned to plan, and much effort is devoted to keeping things on track and minimizing the number of “surprises”. On the other hand, he pointed out that the new management control philosophy is more concerned with “creativity […], new organizational forms, […] the importance of knowledge as a competitive asset”, which has resulted in “market-driven strategy, customization, continuous improvement, meeting customer needs, and empowerment” (Simons 1995 : 3).

Mission statement, vision and corporate credo are all examples of “organizational definitions”.

However, Simons also warned about setting boundaries that could inhibit adaptive change and survival (1995: 53–55).

Benefits from managerial creativity relate to all the new alternatives and solutions that managers can invent in trying to either create value for the organization or solve problems (Christenson 1983 ; Nelson and Winter 1982 ), while dysfunctionalities refer to research activities that are either too risky or too vague, and thus not value creating.

Argyris and Schon also called the intended strategy an “espoused theory” in contrast to “theory-in-use” (Argyris and Schon 1978 : 10–11).

Simons argued that critical performance variables are “those factors that must be achieved or implemented successfully for the intended strategy of the business to succeed” (p. 63); they can be identified through effectiveness and efficiency criteria (Anthony 1965). He also agreed with Lawler and Rhode (1976) that critical performance variables should be related to objective rather than subjective measures, complete rather than incomplete measures, and responsive rather than unresponsive measures. Simons also posited that all three features rarely occur together in diagnostic control systems (Simons 1995: 76).

Simons asserted that in “normal competitive conditions, senior managers with a clear sense of strategic vision choose very few – usually only one – management control system at any point in time” (Simons 1991). The reasons for this limited choice are economic and cognitive as well as strategic. Since the interactive use of a control system requires managerial attention, which day-to-day operations already absorb, managers can handle only one system interactively at a time. From a cognitive perspective, individuals can cope with, and make decisions on, only a limited amount of information at once; otherwise they will be overwhelmed by data. From a strategic standpoint, “the primary reason for using a control system interactively is to activate learning and experimentation” (Simons 1995: 116); therefore it is better to avoid the poor analysis, or decision paralysis, that comes from having too many projects under analysis.

Nonetheless, Collier acknowledges the implementation of the beliefs system lever of control (Collier 2005 ).

She also stressed that investigating “how differences in interpretation of strategic contingencies shape management control systems would enrich Simons’ model” (Gray 1990 : 146).

The portfolio of management control mechanisms is made up of “standard operating procedures, position descriptions, personal supervision, budgets, performance measurement, reward systems and internal governance, and accountability arrangements [as well as …] less obtrusive forms of control, such as personnel selection, training and socialization processes” (Abernethy and Chua 1996 : 573).

An example of such frameworks is the value based management tool introduced by Ittner and Larcker ( 2001 ).

Mission has been defined as the “overriding purpose of the organization in line with the values or expectations of stakeholders”, while the vision develops the “desired future state: the aspiration of the organization” (Johnson et al. 2005 : 13).

In their work, Malmi and Brown specified that, although their framework represents a broad typology, it is also a parsimonious one, since it encompasses only five types of control (Malmi and Brown 2008 : 291).

Merchant and Van der Stede’s framework develops different forms of control according to the different objects under control, which are culture, personnel, action and results controls (Merchant and Van der Stede 2007 ).

Nonetheless, the authors acknowledged that culture may sometimes be beyond managerial control.

On the issue of a tentative framework, the authors call for “further research [that] should reveal the missing and unnecessary elements in it” (Malmi and Brown 2008 : 295).

Although a budget can cover a shorter or longer period, it is usually based on a 12-month period.

Giorgio Brunetti stressed that both purposes should be accomplished by the management control system, although one of the two may be “stressed” (Brunetti 1979: 69) a little further. Indeed, he argued that a control system that is uncoupled from the rewarding system results in information aimed at sustaining, rather than coordinating, operations (p. 70).

In line with the contingency approach, effective design, according to Brunetti, lies in the “congruency”, or “fit”, of the management control system’s variables with both the system’s inputs and outputs (Brunetti 1979: 98).

Other limitations to the cybernetic approach to the design of management control system can be found elsewhere in this work (§ 2.3).

To Mella, a system of transformation is an “‘entity’ able to transform certain ‘objects’ that enter the system into different ‘objects’ which leave the system” (Mella 1992 : 456).

Abdel-Kader M, Luther R (2008) The impact of firm characteristics on management accounting practices: a UK-based empirical analysis. Br Account Rev 40:2–27

Abernethy MA, Brownell P (1997) Management control systems in research and development organizations: the role of accounting, behavior and personnel controls. Account Organ Soc 22(3–4):233–248

Abernethy MA, Brownell P (1999) The role of budgets in organizations facing strategic change: an exploratory study. Account Organ Soc 24(3):189–205

Abernethy MA, Chua WF (1996) A field study of control system “Redesign”: the impact of institutional processes on strategic choice. Contemp Account Res 13(2):569–606

Abernethy MA, Bouwens J, van Lent L (2010) Leadership and control system design. Manag Account Res 21:2–16

Ackoff RL (1977) National development planning revisited. Oper Res 25:207–218

Amigoni F (ed) (1995) Misurazioni d’azienda. Programmazione e controllo. Giuffré, Milano

Anthony RN (1965) Planning and control systems: a framework for analysis. Harvard Business School Division of Research, Boston

Argyris C, Schon D (1978) Organisational learning: a theory of action perspective. Addison Wesley, Reading

Azofra V, Prieto B, Santidrián A (2003) The usefulness of a performance measurement system in the daily life of an organization: a note on a case study. Br Account Rev 35:367–384

Beekun RI, Glick WH (2001) Organization structure from a loose coupling perspective: a multidimensional approach. Decis Sci 32(2):227–250

Beer SA (1979) The heart of enterprise. Wiley, London/New York

Beer SA (1981) Brain of the firm, 2nd edn. Wiley, London/New York

Berry AJ, Coad AF, Harris EP, Otley DT, Stringer C (2009) Emerging themes in management control: a review of recent literature. Br Account Rev 41:2–20

Bisbe J, Otley D (2004) The effects of an interactive use of control systems on product innovation. Account Organ Soc 29:709–737

Bisbe J, Batista-Foguet J-M, Chenhall R (2007) Defining management accounting constructs: a methodological note on the risks of conceptual misspecification. Account Organ Soc 32:789–820

Brignall S, Ballantine J (2004) Strategic enterprise management systems: new directions from research. Manag Account Res 15(2):225–240

Broadbent J, Laughlin R (2009) Performance management systems: a conceptual model. Manag Account Res 20:283–295

Brownell P (1983) The motivational impact of management-by-exception in a budgetary context. J Account Res 21:456–472

Brunetti G (1979) Il controllo di gestione in condizioni ambientali perturbate. FrancoAngeli, Milano

Brusoni S, Prencipe A, Pavitt K (2001) Knowledge specialisation, organizational coupling and the boundaries of the firm: why do firms know more than they make? Adm Sci Q 46(4):597–621

Burns T, Stalker GM (1961) The management of innovation. Tavistock, London

Chandler AD Jr (1962) Strategy and structure: chapters in the history of the American industrial enterprise. MIT Press, Cambridge, MA

Chenhall RH (2003) Management control system design within its organizational context: findings from contingency-based research and directions for the future. Account Organ Soc 28(2–3):127–168

Chenhall R (2005) Integrative strategic performance measurement systems, strategic alignment of manufacturing, learning and strategic outcomes: an exploratory study. Account Organ Soc 30(5):395–422

Chenhall RH (2008) Accounting for the horizontal organization: a review essay. Account Organ Soc 33(4–5):517–550

Choe J, Langfield-Smith K (2004) The effects of national culture on the design of management accounting information systems. J Comp Int Manage 7(1)

Choe JM (1998) The effects of user participation on the design of accounting information systems. Inf Manage 34(3):185–198

Chong VK (1998) Testing the contingency ‘fit’ on the relation between management accounting systems and managerial performance: a research note on the moderating role of tolerance for ambiguity. Br Account Rev 30:331–342

Christenson C (1983) The methodology of positive accounting. Account Rev 53(1):1–22

Collier P (2005) Entrepreneurial control and the construction of a relevant accounting. Manag Account Res 16:321–339

Daft RL (1978) A dual-core model of organizational innovation. Acad Manag J 21(2):193–210

Davila A (2000) An empirical study on the drivers of management control systems’ design in new product development. Account Organ Soc 25(4–5):383–409

Dossi A, Patelli L (2008) The decision-influencing use of performance measurement systems (PMS) in relationships between headquarters and subsidiaries. Manag Account Res 19(1):126–148

Doty HD, Glick WH, Huber GP (1993) Fit, equifinality, and organizational effectiveness: a test of two configurational theories. Acad Manag J 36:1196–1250

Dubois A, Gadde L-E (2002) Systematic combining: an abductive approach to case research. J Bus Res 55(7):553–560

Duff A (1996) The literature search: a library-based model for information skills instruction. Libr Rev 45(4):14–18

Ferreira A, Otley D (2005) The design and use of management control systems: an extended framework for analysis. Social Science Research Network. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=682984

Ferreira A, Otley D (2009) The design and use of performance management systems: an extended framework for analysis. Manag Account Res 20:263–282

Fitzgerald L, Johnston R, Brignall TJ, Silvestro R, Voss C (1991) Performance measurement in service businesses. The Chartered Institute of Management Accountants, London

Galbraith JR (1973) Designing complex organizations. Addison-Wesley, Reading

Gerdin J (2005) Management accounting system design in manufacturing departments: an empirical investigation using a multiple contingencies approach. Account Organ Soc 30:99–126

Gietzmann MB (1996) Incomplete contracts and the make or buy decision: governance design and attainable flexibility. Account Organ Soc 21(6):611–626

Gray RH (1990) The greening of accountancy: the profession after pearce. ACCA, London

Green SG, Welsh MA (1988) Cybernetics and dependence: reframing the control concept. Acad Manag Rev 13(2):287–301

Gresov C, Drazin R (1997) Equifinality: functional equivalence in organization design. Acad Manag Rev 22(2):403–428

Gupta AK, Govindarajan V (1985) Linking control systems to business unit strategy: impact on performance. Account Organ Soc 10(1):51–66

Harrison GL (1993) Reliance on accounting performance measures in superior evaluative style. The influence of national culture and personality. Account Organ Soc 18:319–339

Harrison G, McKinnon J (1999) Cross-cultural research in management control systems design: a review of the current state. Account Organ Soc 24:483–506

Heider F (1959) The psychology of interpersonal relations. Wiley, New York

Henri J-F (2006) Management control systems and strategy: a resource-based perspective. Account Organ Soc 31:529–558

Huff AS (2009) Designing research for publication. Sage, Thousand Oaks

Ittner CD, Larcker DF (2001) Assessing empirical research in managerial accounting: a value-based management perspective. J Account Econ 32:349–410

Ittner CD, Larcker DF, Randall T (2003) Performance implications of strategic performance measurement in financial services firms. Account Organ Soc 28:715–741

Jensen M, Meckling W (1976) Theory of the firm: managerial behavior, agency costs, and ownership structure. J Financ Econ 3:305–360

Johnson G, Scholes K, Whittington R (2005) Exploring corporate strategy. FT Prentice-Hall, London

Johnson HT, Kaplan RS (1987) Relevance lost: the rise and fall of management accounting. Harvard Business School Press, Boston

Kald M, Nilsson F, Rapp B (2000) On the strategy and management control: the importance of classifying the strategy of the business. Br J Manag 11:197–212

Kaplan RS, Norton DP (1992) The balanced scorecard- measures that drive performance. Harv Bus Rev 70(1):71–79

Kaplan RS, Norton DP (1993) Putting the balanced scorecard to work. Harv Bus Rev 71(5):134–147

Kaplan RS, Norton DP (2000) Having trouble with your strategy? Then map it. Harv Bus Rev 78:167–176

Kaplan RS, Norton DP (2007) Using the balanced scorecard as a strategic management system. Harv Bus Rev 85(7/8):172–180

Kim B, Oh H (2002) An effective R&D performance measurement system: survey of Korean R&D researchers. Omega 30(1):19–31

Kominis G, Emmanuel CR (2007) The expectancy-valence theory revisited: developing an extended model of managerial motivation. Manag Account Res 18:49–75

Langfield-Smith K (1997) Management control systems and strategy: a critical review. Account Organ Soc 22:207–232

Langfield-Smith K (2006) Management accounting: information for managing and creating value. McGraw-Hill, Sydney

Langfield-Smith K (2008) Strategic management accounting: how far have we come in 25 years? Account Audit Account J 21(2):204–228

Langfield-Smith K, Smith D (2003) Management control systems and trust in outsourcing relationships. Manag Account Res 14:281–307

Lawler EE, Porter LW (1967) The effects of performance on job satisfaction. Ind Relat 7:20–28

Lawler EE, Rhode JG (1976) Information and control in organizations. Goodyear, Pacific Palisades

Lawrence PR, Dyer D (1983) Renewing American industry. Free Press, New York

Lax DA, Sebenius JK (1986) The manager as negotiator: bargaining for cooperation and competitive gain. Free Press, New York

Li P, Tang G (2009) Performance measurement design within its organisational context—evidence from China. Manag Account Res 20:193–207

Lynch RL, Cross KF (1995) Measure up!: how to measure corporate performance. Blackwell, Cambridge, MA

Macintosh NB, Daft RL (1987) Management control systems and departmental interdependencies: an empirical study. Account Organ Soc 12(1):49–61

Mahama H (2006) Management control systems, cooperation and performance in strategic supply relationships: a survey in the mines. Manag Account Res 17:315–339

Malina M, Selto F (2001) Controlling and communicating strategy: an empirical test of the effectiveness of the balanced scorecard. J Manag Account Res 13:47–90

Malina M, Selto F (2004) Choice and change of measures in performance measurement models. Manag Account Res 15(4):441–469

Malmi T, Brown DA (2008) Management control systems as a package – opportunities, challenges and research directions. Manag Account Res 19:287–300

Mella P (1992) Economia aziendale. UTET, Torino

Mella P (1997) Controllo di gestione. UTET, Torino

Mella P (2005) Performance indicators in business value-creating organizations. Econ Aziendale Online 2(2005):25–52

Mella P, Pellicelli M (2008) The origin of value based management: five interpretative models of an unavoidable evolution. Int J Knowl Cult Change Manag 8(2):23–32

Merchant K (1998) Modern management control systems. Prentice Hall, Upper Saddle River

Merchant KA, Otley DT (2007) A review of the literature on control and accountability. In: Chapman CS, Hopwood AG, Shields MD (eds) Handbook of management accounting research. Elsevier, Amsterdam, pp 785–804

Merchant KA, Van der Stede W (2007) Management control systems: performance measurement, evaluation and incentives. Financial Times Press, Harlow

Miles MB, Sullivan E, Gold BA, Taylor BL, Sieber SD, Wilder DE (1978) Designing and starting innovative schools: a field study of social architecture in education. Final report. Center for Policy Research, New York

Mouritsen J, Larsen HT (2005) The 2nd wave of knowledge management: the management control of knowledge resources through intellectual capital information. Manag Account Res 16(3):371–394

Nanni AJ, Dixon JR, Vollmann TE (1992) Integrated performance measurement: management accounting to support the new manufacturing realities. J Manag Account Res 4(Fall):1–19

Neely A (2008) Does the balanced scorecard work: an empirical investigation. Research paper series, no. 1/08. Available online at www.som.cranfield.ac.uk/som/research/researchpapers.asp

Nelson RR, Winter S (1982) An evolutionary theory of economic change. The Belknap Press of Harvard University, London

Nilsson F (2000) Parenting styles and value creation: a management control approach. Manag Account Res 11:89–112

Nilsson F (2002) Strategy and management control systems: a study of the design and use of management control systems following takeover. Account Finance 42(1):41–71

Nilsson F, Kald M (2002) Recent advances in performance management: the Nordic case. Eur Manag J 20(3):235–245

Norreklit H (2000) The balance on the balanced scorecard: a critical analysis of some of its assumptions. Manag Account Res 11:65–88

Orton JD, Weick KE (1990) Loosely coupled systems: a reconceptualization. Acad Manag Rev 15(2):203–233

Otley DT (1978) Budgetary use and managerial performance. J Account Res 16:122–149

Otley DT (1980) The contingency theory of management accounting: achievement and prognosis. Account Organ Soc 5:413–428

Otley DT (1999) Performance management: a framework for management control systems research. Manag Account Res 10:363–382

Otley DT (2008) Did Kaplan and Johnson get it right? Account Audit Account J 21(2):229–239

Otley D, Berry A (1980) Control, organisation and accounting. Account Organ Soc 5(2):231–244

Ouchi WG (1979) A conceptual framework for the design of organizational control mechanisms. Manag Sci 25:833–848

Perego PM, Hartmann FGH (2009) Aligning performance measurement systems with strategy: the case of environmental strategy. Abacus 45(4):397–428

Perrow C (1970) Organizational analysis: a sociological view. Tavistock Publications, London

Prahalad CK, Bettis RA (1986) The dominant logic: a new linkage between diversity and performance. Strateg Manag J 7(6):485–501

Sandelin M (2008) Operation of management control practices as a package – a case study on control system variety in a growth firm context. Manag Account Res 19:324–343

Senge PM (1990) The fifth discipline: the art and practice of the learning organization. Doubleday Currency, New York

Simons R (1991) Strategic orientation and top management attention to control systems. Strateg Manag J 12:49–62

Simons R (1995) Levers of control: how managers use innovative control systems to drive strategic renewal. Harvard Business School Press, Boston

Spekle RF (2001) Explaining management control structure variety: a transaction cost economics perspective. Account Organ Soc 26(4–5):419–441

Stringer ET (2007) Action research, 3rd edn. Sage, London

Sundbo J, Gallouj F (2000) Innovation as a loosely coupled system in services. Int J Serv Technol Manag 1(1):15–36

Tuomela T (2005) The interplay of different levers of control: a case study of introducing a new performance measurement system. Manag Account Res 16(3):293–320

Weick KE (1976) Education systems as loosely coupled systems. Adm Sci Q 21:1–19

Widener SK (2007) An empirical analysis of the levers of control framework. Account Organ Soc 32(7):757–788

Williamson OE (1975) Markets and hierarchies. Free Press, New York

Williamson OE (1979) Transaction cost economics: the governance of contractual relations. J Law Econ 22:233–261

Williamson OE (1981) The economics of organization: the transaction cost approach. Am J Sociol 87(3):548–577

Woodward J (1958) Industrial organization: theory and practice. London

Zappa G (1927) Tendenze nuove negli studi di ragioneria. IES, Milano

Author information

Chiara Demartini, Department of Economics and Management, University of Pavia, Pavia, Italy

Copyright information

© 2014 Springer-Verlag Berlin Heidelberg

About this chapter

Demartini, C. (2014). Performance Management System. A Literature Review. In: Performance Management Systems. Contributions to Management Science. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-36684-0_3

DOI: https://doi.org/10.1007/978-3-642-36684-0_3

Published: 20 June 2013

Publisher Name: Springer, Berlin, Heidelberg

Print ISBN: 978-3-642-36683-3

Online ISBN: 978-3-642-36684-0

  • Open access
  • Published: 18 October 2016

Business process performance measurement: a structured literature review of indicators, measures and metrics

Amy Van Looy (ORCID: orcid.org/0000-0002-7992-1528) and Aygun Shafagatova

SpringerPlus volume 5, Article number: 1797 (2016)

Measuring the performance of business processes has become a central issue in both academia and business, since organizations are challenged to achieve effective and efficient results. Applying performance measurement models to this purpose ensures alignment with a business strategy, which implies that the choice of performance indicators is organization-dependent. Nonetheless, such measurement models generally suffer from a lack of guidance regarding the performance indicators that exist and how they can be concretized in practice. To fill this gap, we conducted a structured literature review to find patterns or trends in the research on business process performance measurement. The study also documents an extended list of 140 process-related performance indicators in a systematic manner by further categorizing them into 11 performance perspectives in order to gain a holistic view. Managers and scholars can consult the provided list to choose the indicators that are of interest to them, considering each perspective. The structured literature review concludes with avenues for further research.

Since organizations endeavor to measure what they manage, performance measurement is a central issue in both the literature and in practice (Heckl and Moormann 2010 ; Neely 2005 ; Richard et al. 2009 ). Performance measurement is a multidisciplinary topic that is highly studied by both the management and information systems domains (business process management or BPM in particular). Different performance measurement models, systems and frameworks have been developed by academia and practitioners (Cross and Lynch 1988 ; Kaplan and Norton 1996 , 2001 ; EFQM 2010 ; Kueng 2000 ; Neely et al. 2000 ). While measurement models were initially limited to financial performance (e.g., traditional controlling models), a more balanced and integrated approach was needed beginning in the 1990s due to the challenges of the rapidly changing society and technology; this approach resulted in multi-dimensional models. Perhaps the best known multi-dimensional performance measurement model is the Balanced Scorecard (BSC) developed by Kaplan and Norton ( 1996 , 2001 ), which takes a four-dimensional approach to organizational performance: (1) financial perspective, (2) customer perspective, (3) internal business process perspective, and (4) “learning and growth” perspective. The BSC helps translate an organization’s strategy into operational performance indicators (also called performance measures or metrics) and objectives with targets for each of these performance perspectives. Even today, the BSC is by far the most used performance measurement approach in the business world (Bain Company 2015 ; Sullivan 2001 ; Ulfeder 2004 ).

Equally important for measuring an organization’s performance is process-oriented management or business process management (BPM), which is “about managing entire chains of events, activities and decisions that ultimately add value to the organization and its customers. These ‘chains of events, activities and decisions’ are called processes” (Dumas et al. 2013 : p. 1). In particular, an organization can do more with its current resources by boosting the effectiveness and efficiency of its way of working (i.e., its business processes) (Sullivan 2001 ). In this regard, academic research also suggests a strong link between business process performance and organizational performance, either in the sense of a causal relationship (Melville et al. 2004 ; Smith and Reece 1999 ) or as distinctive indicators that co-exist, as in the BSC (Kaplan and Norton 1996 , 2001 ).

Nonetheless, performance measurement models tend to give little guidance on how business (process) performance indicators can be chosen and operationalized (Shah et al. 2012 ). They are limited to mainly defining performance perspectives, possibly with some examples or steps to derive performance indicators (Neely et al. 2000 ), but without offering concrete indicators. Whereas fairly large bodies of research exist for both performance models and business processes, no structured literature review of (process) performance measurement has been carried out thus far. To the best of our knowledge, existing reviews cover one or another aspect of performance measurement; for instance, reviews on measurement models or evaluation criteria for performance indicators (Heckl and Moormann 2010 ; Neely 2005 ; Richard et al. 2009 ). Despite the considerable importance of a comprehensive and holistic approach to business (process) performance measurement, little is known regarding the state of the research on alternative performance indicators and their operationalization with respect to evaluating the performance of an organization’s work routines. To some extent, this lack of guidance can be explained by the fact that performance indicators are considered organization-dependent, given that strategic alignment is claimed by many measurement models such as the BSC (Kaplan and Norton 1996 , 2001 ). Although the selection of appropriate performance indicators is challenging for practitioners due to the lack of best practices, it is also highly relevant for performance measurement.

The gap that we are studying is the identification and, in particular, the concretization/operationalization of process-related performance indicators. This study enhances the information systems literature, which focuses on the design and development of measurement systems without paying much attention to essential indicators. To fill this gap, our study presents a structured literature review in order to describe the current state of business process performance measurement and related performance indicators. The choice to focus on the business process management (BPM) discipline is motivated by the close link between organizational performance and business process performance, as well as to ensure a clear scope (specifically targeting an organization’s way of working). Accordingly, the study addresses the following research questions.

RQ1. What is the current state of the research on business process performance measurement?

RQ2. Which indicators, measures and metrics are used or mentioned in the current literature related to business process performance?

The objective of RQ1 is to identify patterns in the current body of knowledge and to note weaknesses, whereas RQ2 mainly intends to develop an extended list of measurable process performance indicators, categorized into recognized performance perspectives, which can be tailored to diverse purposes. This list could, for instance, serve as a supplement to existing performance measurement models. Practitioners can use the list as a source for best practice indicators from academic research to find and select a subset of performance indicators that fit their strategy. The study will thus not address the development of specific measurement systems but rather the indicators to be used within such systems. To make our intended list system-independent, we will begin with the BSC approach and extend its performance perspectives. Given this generic approach, the research findings can also be used by scholars when building and testing theoretical models in which process performance is one of the factors that must be concretized.

The remainder of this article is structured as follows. “Theoretical background” section describes the theoretical background of performance measurement models and performance indicators. Next, the methodology for our structured literature review is detailed in “Methods” section. The subsequent sections present the results for RQ1 (“Results for RQ1” section) and RQ2 (“Results for RQ2” section). The discussion of the results is provided in “Discussion” section, followed by concluding comments (“Conclusion” section).

Theoretical background

This section addresses the concepts of performance measurement models and performance indicators separately in order to be able to differentiate them further in the study.

Performance measurement models

According to overviews in the performance literature (Heckl and Moormann 2010; Neely 2005 ; Richard et al. 2009 ), some of the most cited performance measurement models are the Balanced Scorecard (Kaplan and Norton 1996 , 2001 ), self-assessment excellence models such as the EFQM ( 2010 ), and the models by Cross and Lynch ( 1988 ), Kueng ( 2000 ) and Neely et al. ( 2000 ). A distinction should, however, be made between models focusing on the entire business (Kaplan and Norton 1996 , 2001 ; EFQM 2010 ; Cross and Lynch 1988 ) and models focusing on a single business process (Kueng 2000 ; Neely et al. 2000 ).

Organizational performance measurement models

Organizational performance measurement models typically intend to provide a holistic view of an organization’s performance by considering different performance perspectives. As mentioned earlier, the BSC provides four perspectives for which objectives and performance indicators ensure alignment between strategies and operations (Fig.  1 ) (Kaplan and Norton 1996 , 2001 ). Other organizational performance measurement models provide similar perspectives. For instance, Cross and Lynch ( 1988 ) offer a four-level performance pyramid: (1) a top level with a vision, (2) a second level with objectives per business unit in market and financial terms, (3) a third level with objectives per business operating system in terms of customer satisfaction, flexibility and productivity, and (4) a bottom level with operational objectives for quality, delivery, process time and costs. Another alternative view on organizational performance measurement is given in business excellence models, which focus on an evaluation through self-assessment rather than on strategic alignment, albeit by also offering performance perspectives. For instance, the EFQM ( 2010 ) distinguishes enablers [i.e., (1) leadership, (2) people, (3) strategy, (4) partnerships and resources, and (5) processes, products and services] from results [i.e., (1) people results, (2) customer results, (3) society results, and (4) key results], and a feedback loop for learning, creativity and innovation.

Fig. 1 An overview of the performance perspectives in Kaplan and Norton (1996, 2001)

Since the BSC is the most used performance measurement model, we have chosen it as a reference model to illustrate the function of an organizational performance measurement model (Kaplan and Norton 1996 , 2001 ). The BSC is designed to find a balance between financial and non-financial performance indicators, between the interests of internal and external stakeholders, and between presenting past performance and predicting future performance. The BSC encourages organizations to directly derive (strategic) long-term objectives from the overall strategy and to link them to (operational) short-term targets. Concrete performance measures or indicators should be defined to periodically measure the objectives. These indicators are located on one of the four performance perspectives in Fig.  1 (i.e., ideally with a maximum of five indicators per perspective).

Table  1 illustrates how an organizational strategy can be translated into operational terms using the BSC.

During periodical measurements using the BSC, managers can assign color-coded labels according to actual performance on short-term targets: (1) a green label if the organization has achieved the target, (2) an orange label if it is almost achieved, or (3) a red label if it is not achieved. Orange and red labels thus indicate areas for improvement.
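
A minimal sketch of this traffic-light evaluation may make the review cycle concrete. The 90 % threshold for an "almost achieved" orange label and the example indicator are assumptions for illustration; the BSC sources cited here do not prescribe a numeric cut-off, and the sketch presumes a higher-is-better indicator.

```python
from dataclasses import dataclass

@dataclass
class ScorecardEntry:
    """One BSC indicator with a short-term target and the measured actual value."""
    perspective: str  # "financial", "customer", "internal business process", or "learning and growth"
    indicator: str
    target: float
    actual: float

def traffic_light(entry: ScorecardEntry, almost_achieved: float = 0.9) -> str:
    """Assign the colour-coded label used in periodical BSC reviews.

    `almost_achieved` is a hypothetical threshold (>= 90 % of the target counts
    as orange); it assumes a higher-is-better indicator.
    """
    ratio = entry.actual / entry.target
    if ratio >= 1.0:
        return "green"   # target achieved
    if ratio >= almost_achieved:
        return "orange"  # almost achieved -> area for improvement
    return "red"         # not achieved -> area for improvement

# Illustrative customer-perspective indicator measured against its target
entry = ScorecardEntry("customer", "customer satisfaction score", target=8.0, actual=7.4)
print(traffic_light(entry))  # -> "orange"
```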

Furthermore, the BSC assumes a causal or logical relationship between the four performance perspectives. An increase in the competences of employees (i.e., performance related to “learning and growth”) is expected to positively affect the quality of products and services (i.e., internal business process performance), which in turn will lead to improved customer perceptions (i.e., customer performance). The results for the previous perspectives will then contribute to financial performance to ultimately realize the organization’s strategy, mission and vision (Kaplan and Norton 1996 , 2001 ). Hence, indicators belonging to the financial and customer perspectives are assumed to measure performance outcomes, whereas indicators from the perspectives of internal business processes and “learning and growth” are considered as typical performance drivers (Kaplan and Norton 2004 ).

Despite its widespread use and acceptance, the BSC is also criticized for appearing too general by managers who are challenged to adapt it to the culture of their organization (Butler et al. 1997 ) or find suitable indicators to capture the various aspects of their organization’s strategy (Shah et al. 2012 ; Vaivio 1999 ). Additionally, researchers question the choice of four distinct performance perspectives (i.e., which do not include perspectives related to inter-organizational performance or sustainability issues) (EFQM 2010 ; Hubbard 2009 , Kueng 2000 ). Further, the causal relationship among the BSC perspectives has been questioned (Norreklit 2000 ). To some degree, Kaplan and Norton ( 2004 ) responded to this criticism by introducing strategy maps that focus more on the causal relationships and the alignment of intangible assets.

Business process performance measurement models

In addition to organizational models, performance measurement can also focus on a single business process, such as statistical process control, workflow-based monitoring or process performance measurement systems (Kueng 2000 ; Neely et al. 2000 ). The approach taken in business process performance measurement is generally less holistic than the BSC. For instance, in an established BPM handbook, Dumas et al. ( 2013 ) position time, cost, quality and flexibility as the typical performance perspectives of business process performance measurement (Fig.  2 ). Similar to organizational performance measurement, concrete performance measures or indicators should be defined for each process performance perspective. In this sense, the established perspectives of Dumas et al. ( 2013 ) seem to further refine the internal business process performance perspective of the BSC.

Fig. 2 An overview of the performance perspectives in Dumas et al. (2013)

Neely et al. (2000), on the other hand, present ten steps to develop or define process performance indicators. Also of high importance is the process performance measurement system of Kueng (2000), which is visualized as a “goal and performance indicator tree” with five process performance perspectives: (1) financial view, (2) customer view, (3) employee view, (4) societal view, and (5) innovation view. Kueng (2000) thus suggests a more holistic approach towards process performance, similar to organizational performance, given the central role of business processes in an organization. He does so by focusing more on the different stakeholders involved in certain business processes.

Performance indicators

Section “ Performance measurement models ” explained that performance measurement models typically distinguish different performance perspectives for which performance indicators should be further defined. We must, however, note that we consider performance measures, performance metrics and (key) performance indicators as synonyms (Dumas et al. 2013 ). For reasons of conciseness, this work will mainly refer to performance indicators without mentioning the synonyms. In addition to a name, each performance indicator should also have a concretization or operationalization that describes exactly how it is measured and that can result in a value to be compared against a target. For instance, regarding the example in Table  1 , the qualitative statements to measure customer satisfaction constitute an operationalization. Nonetheless, different ways of operationalization can be applied to measure the same performance indicator. Since organizations can profit from reusing existing performance indicators and the related operationalization instead of inventing new ones (i.e., to facilitate benchmarking and save time), this work investigates which performance indicators are used or mentioned in the literature on business process performance and how they are operationalized.

Neely et al. ( 2000 ) and Richard et al. ( 2009 ) both present evaluation criteria for performance indicators (i.e., in the sense of desirable characteristics or review implications), which summarize the general consensus in the performance literature. First, the literature strongly agrees that performance indicators are organization-dependent and should be derived from an organization’s objectives, strategy, mission and vision. Secondly, consensus in the literature also exists regarding the need to combine financial and non-financial performance indicators. Nonetheless, disagreement still seems to exist in terms of whether objective and subjective indicators need to be combined, with objective indicators preferred by most advocates. Although subjective (or quasi-objective) indicators face challenges from bias, their use has some advantages; for instance, to include stakeholders in an assessment, to address latent constructs or to facilitate benchmarking when a fixed reference point is missing (Hubbard 2009 ; Richard et al. 2009 ). Moreover, empirical research has shown that subjective (or quasi-objective) indicators are more or less correlated with objective indicators, depending on the level of detail of the subjective question (Richard et al. 2009 ). For instance, a subjective question can be made more objective by using clear definitions or by selecting only well-informed respondents to reduce bias.

Methods

We conducted a structured literature review (SLR) to find papers dealing with performance measurement in the business process literature. SLR can be defined as “a means of evaluating and interpreting all available research relevant to a particular research question, topic area, or phenomenon of interest” (Kitchenham 2007: p. vi). An SLR is a meta study that identifies and summarizes evidence from earlier research (King and He 2005) or a way to address a potentially large number of identified sources based on a strict protocol used to search and appraise the literature (Boell and Cecez-Kecmanovic 2015). It is systematic in the sense of a systematic approach to finding relevant papers and a systematic way of classifying the papers. Hence, according to Boell and Cecez-Kecmanovic (2015), SLR as a specific type of literature review can only be used when two conditions are met. First, the topic should be well-specified and closely formulated (i.e., limited to performance measurement in the context of business processes) to potentially identify all relevant literature based on inclusion and exclusion criteria. Secondly, the research questions should be answered by extracting and aggregating evidence from the identified literature based on a high-level summary or bibliometric-type of content analysis. Furthermore, King and He (2005) also refer to a statistical analysis of existing literature.

Informed by the established guidelines proposed by Kitchenham ( 2007 ), we undertook the review in distinct stages: (1) formulating the research questions and the search strategy, (2) filtering and extracting data based on inclusion and exclusion criteria, and (3) synthesizing the findings. The remainder of this section describes the details of each stage.

Formulating the research questions and search strategy

A comprehensive and unbiased search is one of the fundamental factors that distinguish a systematic review from a traditional literature review (Kitchenham 2007). For this purpose, a systematic search begins with the identification of keywords and search terms that are derived from the research questions. Based on the research questions stipulated in the introduction, the SLR protocol (Boell and Cecez-Kecmanovic 2015) for our study was defined, as shown in Table 2.

The ISI Web of Science (WoS) database was searched using predetermined search terms in November 2015. This database was selected because it is widely used by universities and indexes leading publications, thus increasing the quality of our findings. An important requirement was that the papers focus on “business process*” (BP). This keyword was used in combination with at least one of the following: (1) “performance indicator*”, (2) “performance metric*”, (3) “performance measur*”. All combinations of “keyword in topic” (TO) and “keyword in title” (TI) were used.

Table  3 shows the degree to which the initial sample sizes varied, with 433 resulting papers for the most permissive search query (TOxTO) and 19 papers for the most restrictive one (TIxTI). The next stage started with the most permissive search query in an effort to select and assess as many relevant publications as possible.
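
The four query variants can be reproduced schematically as below. The field labels TO (topic) and TI (title) follow the text; the exact Web of Science query syntax shown here is an illustrative assumption rather than the search strings actually submitted.

```python
from itertools import product

BP_TERM = '"business process*"'
PERF_TERMS = ['"performance indicator*"', '"performance metric*"', '"performance measur*"']
FIELDS = ["TO", "TI"]  # keyword in topic vs. keyword in title

def build_query(bp_field: str, perf_field: str) -> str:
    # Compose an illustrative query: the BP keyword in one field AND at least
    # one of the performance keywords in the other field.
    perf_clause = " OR ".join(f"{perf_field}={term}" for term in PERF_TERMS)
    return f"{bp_field}={BP_TERM} AND ({perf_clause})"

# Enumerate the four combinations reported in Table 3 (TOxTO ... TIxTI)
for bp_field, perf_field in product(FIELDS, FIELDS):
    print(f"{bp_field}x{perf_field}: {build_query(bp_field, perf_field)}")
```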

Filtering and extracting data

Figure  3 summarizes the procedure for searching and selecting the literature to be reviewed. The list of papers found in the previous stage was filtered by deleting 35 duplicates, and the remaining 398 papers were further narrowed to 153 papers by evaluating their title and abstract. After screening the body of the texts, 76 full-text papers were considered relevant for our scope and constituted the final sample (“Appendix 1 ”).

Fig. 3 Exclusion of papers and number of primary studies

More specifically, studies were excluded if their main focus was not business process performance measurement or if they did not refer to indicators, measures or metrics for business performance. The inclusion of studies was not restricted to any specific type of intervention or outcome. The SLR thus included all types of research studies that were written in English and published up to and including November 2015. Furthermore, publication by peer-reviewed publication outlets (e.g., journals or conference proceedings) was considered as a quality criterion to ensure the academic level of the research papers.
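
The selection funnel described above can be summarised as a simple tally, reproducing the counts reported in the text (the screening steps themselves were, of course, performed manually):

```python
# Counts taken from the selection procedure described above.
funnel = [
    ("Hits from the most permissive query (TOxTO)", 433),
    ("After removing 35 duplicates", 398),
    ("After title/abstract screening", 153),
    ("After full-text screening (final sample)", 76),
]

previous = None
for stage, count in funnel:
    excluded = "" if previous is None else f"  (excluded: {previous - count})"
    print(f"{stage:<45}{count:>4}{excluded}")
    previous = count
```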

Synthesizing the findings

The analysis of the final sample was performed by means of narrative and descriptive analysis techniques. For RQ1, the 76 papers were analyzed on the basis of bibliometric data (e.g., publication type, publication year, geography) and general performance measurement issues by paying attention to the methodology and focus of the study. Details are provided in “Appendix 2 ”.

For RQ2, all the selected papers were screened to identify concrete performance indicators in order to generate a comprehensive list or checklist. The latter was done in different phases. In the first phase, the structured literature review allowed us to analyze which performance indicators are mainly used in the process literature and how they are concretized (e.g., in a question or mathematical formulation), resulting in an unstructured list of potential performance indicators. The indicators were also synthesized by combining similar indicators and rephrasing them into more generic terms.

The next phase was a comparative study to categorize the output of phase 1 into the commonly used measurement models in the performance literature (see “ Theoretical background ” section). For the purpose of this study, we specifically looked for those organizational performance models, mentioned in “ Theoretical background ” section, that are cited the most and that suggest categories, dimensions or performance perspectives that can be re-used (Kaplan and Norton 1996 , 2001 ; EFQM 2010 ; Cross and Lynch 1988 ; Kueng 2000 ). Since the BSC (Kaplan and Norton 1996 , 2001 ) is the most commonly used of these measurement models, we began with the BSC as the overall framework to categorize the observed indicators related to business (process) performance, supplemented with an established view on process performance from the process literature (Dumas et al. 2013 ). Subsequently, a structured list of potential performance indicators was obtained.
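The two-level categorization described in this phase can be sketched as follows: each observed indicator is assigned to a BSC perspective and, where it concerns the internal process perspective, additionally to one of the time/cost/quality/flexibility dimensions of Dumas et al. (2013). The indicator names and assignments below are illustrative examples, not the study's actual coding.

```python
# Sketch of categorizing observed indicators by BSC perspective and, for the
# process perspective, by the established process performance dimensions.
from collections import defaultdict

raw_indicators = [  # (indicator, BSC perspective, process dimension or None)
    ("process cycle time", "business processes", "time"),
    ("process cost", "business processes", "cost"),
    ("customer complaints", "customer", None),
    ("return on investment", "financial", None),
    ("employee satisfaction", "learning and growth", None),
]

categorized = defaultdict(list)
for name, bsc_perspective, process_dimension in raw_indicators:
    categorized[(bsc_perspective, process_dimension)].append(name)

for (perspective, dimension), names in sorted(categorized.items(), key=str):
    label = perspective if dimension is None else f"{perspective} / {dimension}"
    print(f"{label}: {', '.join(names)}")
```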

In the third and final phase, an evaluation study was performed to validate whether the output of phase 2 is sufficiently comprehensive according to other performance measurement models, i.e., models not included in our sample and differing from the most commonly used performance measurement models. Therefore, we investigated the degree to which our structured list covers the items in two variants or concretizations of the BSC. Hence, a validation by other theoretical models is provided. We note that a validation by subject-matter experts is out of scope for a structured literature review but represents an opportunity for further research.

Results for RQ1

The final sample of 76 papers consists of 46 journal papers and 30 conference papers (Fig. 4), indicating a wide variety of outlets; the journal publications appear in operations- and production-related journals in particular or in lower-ranked (Recker 2013) information systems journals.

Fig. 4 The distribution of the sampled papers per publication type (N = 76)

When considering the chronological distribution of the sampled papers, Fig.  5 indicates an increase in the uptake of the topic in recent years, particularly for conference papers but also for journal publications since 2005.

Fig. 5 The chronological distribution of the sampled papers per publication type (N = 76)

This uptake seems particularly situated in the Western world and Asia (Fig. 6). The countries with five or more papers in our sample are Germany (12 papers), the US (6 papers), Spain (5 papers), Croatia (5 papers) and China (5 papers). Figure 6 shows that business process performance measurement is a worldwide topic, with papers across the different continents. Nonetheless, a possible explanation for the higher coverage in the Western world is its long tradition of measuring work (i.e., the origins of the BSC).

Fig. 6 The geographical distribution of the sampled papers per continent, based on a paper’s first author (N = 76)

The vast majority of the sampled papers address artifacts related to business (process) performance measurement. When looking at the research paradigm in which the papers are situated (Fig.  7 ), 71 % address design-science research, whereas 17 % conduct research in behavioral science and 12 % present a literature review. This could be another explanation for the increasing uptake in the Western world, as many design-science researchers are from Europe or North America (March and Smith 1995 ; Peffers et al. 2012 ).

Fig. 7 The distribution of the sampled papers per research paradigm (N = 76)

Figure  8 supplements Fig.  7 by specifying the research methods used in the papers. For the behavioral-science papers, case studies and surveys are equally used. The 54 papers that are situated within the design-science paradigm explicitly refer to models, meta-models, frameworks, methods and/or tools. When mapping these 54 papers to the four artifact types of March and Smith ( 1995 ), the vast majority present (1) methods in the sense of steps to perform a task (e.g., algorithms or guidelines for performance measurement) and/or (2) models to describe solutions for the topic. The number of papers dealing with (3) constructs or a vocabulary and/or (4) instantiations or tools is much more limited, with 14 construct-related papers and 9 instantiations in our sample. We also looked at which evaluation methods, defined by Peffers et al. ( 2012 ), are typically used in the sampled design-science papers. While 7 of the 54 design-science papers do not seem to report on any evaluation effort, our sample confirms that most papers apply one or another evaluation method. Case studies and illustrative scenarios appear to be the most frequently used methods to evaluate design-science research on business (process) performance measurement.

Fig. 8 The distribution of the sampled papers per research method (N = 76)

The sampled design-science research papers typically build and test performance measurement frameworks, systems or models or suggest meta-models and generic templates to integrate performance indicators into the process models of an organization. Such papers can focus on the process level, organizational level or even cross-organizational level. Nonetheless, the indicators mentioned in those papers are illustrative rather than comprehensive. An all-inclusive list of generic performance indicators seems to be missing. Some authors propose a set of indicators, but those indicators are specific to a certain domain or sector instead of being generic. For instance, Table  4 shows that 36 of the 76 sampled papers are dedicated to a specific domain or sector, such as technology-related aspects or supply chain management.

Furthermore, the reviewed literature was analyzed with regard to its (1) scope, (2) functionalities, (3) terminology, and (4) foundations.

Starting with scope, it is observed that nearly two-thirds of the sampled papers can be categorized as dealing with process-oriented performance measurement, whereas one-third focuses more on general performance measurement and management issues. Nonetheless, most of the studies of process performance also include general performance measurement as a supporting concept. A minor cluster of eight research papers specifically focuses on business process reengineering and measurement systems to evaluate the results of reengineering efforts. Furthermore, other researchers focus on the measurement and assessment of interoperability issues and supply chain management measurements.

Secondly, while analyzing the literature, two groups of papers were identified based on their functionalities: (1) focusing on performance measurement systems or frameworks, and (2) focusing on certain performance indicators and their categorization. Regarding the first group, it should be mentioned that while the process of building or developing a performance measurement system (PMS) or framework is well-researched, only a small number of papers explicitly address process performance measurement systems (PPMS). The papers in this first group typically suggest concrete steps or stages to be followed by particular organizations or discuss the conceptual characteristics and design of a performance measurement system. Regarding the second group of performance indicators, we can differentiate two sub-groups. Some authors focus on the process of defining performance indicators by listing requirements or quality characteristics that an indicator should meet. However, many more authors are interested in integrating performance indicators into the process models or the whole architecture of an organization, and they suggest concrete solutions to do so. Compared to the first group of papers, this second group deals more with the categorization of performance indicators into domains (financial/non-financial, lag/lead, external/internal, BSC dimensions) or levels (strategic, tactical, operational).

Thirdly, regarding terminology, different terms are used by different authors to discuss performance measurement. Performance “indicator” is the most commonly used term among the reviewed papers. For instance, it is frequently used in reference to a key performance indicator (KPI), a KPI area or a performance indicator (PI). The concept of a process performance indicator (PPI) is also used, mainly in the process-oriented literature. Performance “measure” is another prevalent term in the papers. The least-used term is performance “metric” (i.e., in only nine papers). Although the concepts of performance indicators, measures and metrics are used interchangeably throughout most of the papers, the concepts are sometimes defined in different ways. For instance, paper 17 defines a performance indicator as a metric, and paper 49 defines a performance measure as an indicator. On the other hand, paper 7 defines a performance indicator as a set of measures. Yet another perspective is taken in paper 74, which defines a performance measure as “a description of something that can be directly measured (e.g., number of reworks per day)”, while defining a performance indicator as “a description of something that is calculated from performance measures (e.g., percentage reworks per day per direct employee)” (p. 386). Inconsistencies exist not only in defining indicators but also in describing performance goals. For instance, some authors include a sign (e.g., minus or plus) or a verb (e.g., decrease or increase) in front of an indicator. Other authors attempt to describe performance goals in a SMART way—for instance, by including a time indication (e.g., “within a certain period”) and/or a target (e.g., “5 % of all orders”)—whereas most of the authors are less precise. Hence, a great degree of ambiguity exists in the formulation of performance objectives among the reviewed papers.
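The distinction quoted from paper 74 (a measure is observed directly, an indicator is calculated from measures) can be illustrated with a small calculation. Paper 74 does not spell out the formula, so the inputs and the calculation below are one plausible operationalization chosen for the example, and the figures are made up.

```python
# A performance measure is directly observable (e.g., number of reworks per
# day); a performance indicator is derived from measures (e.g., percentage
# of reworks per day per direct employee).
def percentage_reworks_per_day_per_employee(reworks_per_day: int,
                                            items_produced_per_day: int,
                                            direct_employees: int) -> float:
    """Indicator derived from three directly observable measures."""
    rework_percentage = 100.0 * reworks_per_day / items_produced_per_day
    return rework_percentage / direct_employees

# Measures (directly observed, hypothetical values)
reworks_per_day = 12
items_produced_per_day = 400
direct_employees = 5

# Indicator (calculated): 3 % reworks spread over 5 direct employees -> 0.6
print(percentage_reworks_per_day_per_employee(
    reworks_per_day, items_produced_per_day, direct_employees))
```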

Finally, regarding the papers’ foundations, “ Performance measurement models ” section already indicated that the BSC plays an important role in the general literature on performance management systems (PMS), while Kueng ( 2000 ) also offers influential arguments on process performance measurement systems (PPMS). In our literature review, we observed that the BSC was mentioned in 43 of the 76 papers and that the results of 19 papers were mainly based on the BSC (Fig.  9 ). This finding provides additional evidence that the BSC can be considered the most frequently used performance model in academia as well. However, the measurement model of Kueng ( 2000 ) was also mentioned in the sampled papers on PPMS, though less frequently (i.e., in six papers).

Fig. 9 The importance of the BSC according to the sampled papers (N = 76)

Interestingly, the BSC is also criticized by the sampled papers for not being comprehensive; for instance, due to the exclusion of environmental aspects, supply chain management aspects or cross-organizational processes. In response, some of the sampled papers also define sector-specific BSC indicators or suggest additional steps or indicators to make the process or business more sustainable (see Table  4 ). Nonetheless, the majority of the papers agree on the need for integrated and multidimensional measurement systems, such as the BSC, and on the importance of directly linking performance measurement to an organization’s strategy. However, while these papers mention the required link with strategy, the prioritization of indicators according to their strategic importance has been studied very little thus far.

Results for RQ2

For RQ2, the sampled papers were reviewed to distinguish papers with performance indicators from papers without performance indicators. A further distinction was made between indicators found with operationalization (i.e., concretization by means of a question or formula) and those without operationalization. We note that for many indicators, no operationalization was available. We discovered that only 30 of the 76 sampled papers contained some type of performance indicator (namely 3, 5, 6, 7, 11, 16, 17, 18, 20, 22, 26, 27, 30, 35, 37, 40, 43, 46, 49, 51, 52, 53, 55, 57, 58, 59, 60, 66, 71, 73). In total, approximately 380 individual indicators were found throughout all the sampled papers (including duplicates), which were combined based on similarities and modified to use more generic terms. This resulted in 87 indicators with operationalization (“Appendix 3 ”) and 48 indicators without operationalization (“Appendix 4 ”).

The 87 indicators with operationalization were then categorized according to the four perspectives of the BSC (i.e., financial, customer, business processes, and “learning and growth”) (Kaplan and Norton 1996, 2001) and the four established dimensions of process performance (i.e., time, cost, quality, and flexibility) (Dumas et al. 2013). In particular, based on the identified indicators, we derived 11 sub-perspectives within the initial BSC perspectives to better emphasize the focus of the indicators and the different target groups (Table 5): (1) financial performance for shareholders and top management, (2) customer-related performance, (3) supplier-related performance, (4) society-related performance, (5) general process performance, (6) time-related process performance, (7) cost-related process performance, (8) process performance related to internal quality, (9) flexibility-related process performance, (10) (digital) innovation performance, and (11) employee-related performance.
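When coding indicators against these categories, the 11 sub-perspectives act as the available labels. The sketch below enumerates them and adds a naive keyword-based pre-classification; the actual assignment in the study was done manually against the definitions of Table 5, so the matching rules here are purely illustrative.

```python
# The 11 observed performance (sub-)perspectives, plus a naive keyword-based
# pre-classification of indicator names (illustrative only).
PERSPECTIVES = (
    "financial performance for shareholders and top management",
    "customer-related performance",
    "supplier-related performance",
    "society-related performance",
    "general process performance",
    "time-related process performance",
    "cost-related process performance",
    "process performance related to internal quality",
    "flexibility-related process performance",
    "(digital) innovation performance",
    "employee-related performance",
)

KEYWORD_HINTS = {
    "customer": "customer-related performance",
    "supplier": "supplier-related performance",
    "cycle time": "time-related process performance",
    "cost": "cost-related process performance",
    "employee": "employee-related performance",
}
assert set(KEYWORD_HINTS.values()) <= set(PERSPECTIVES)

def suggest_perspective(indicator_name: str) -> str:
    """Return a candidate perspective for an indicator name, or a fallback."""
    lowered = indicator_name.lower()
    for keyword, perspective in KEYWORD_HINTS.items():
        if keyword in lowered:
            return perspective
    return "general process performance"  # fallback; to be reviewed manually

print(suggest_perspective("Average process cycle time"))
print(suggest_perspective("Perceived customer satisfaction"))
```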

For reasons of objectivity, the observed performance indicators were assigned to a single perspective starting from recognized frameworks (Kaplan and Norton 1996 , 2001 ; Dumas et al. 2013 ). Bias was further reduced by following the definitions of Table  5 . Furthermore, the authors of this article first classified the indicators individually and then reached consensus to obtain a more objective categorization.

Additional rationale for the identification of 11 performance perspectives is presented in Table  6 , which compares our observations with the perspectives adopted by the most commonly used performance measurement models (see “ Theoretical background ” section). This comparison allows us to highlight similarities and differences with other respected models. In particular, Table  6 shows that we did not observe a dedicated perspective for strategy (EFQM 2010 ) and that we did not differentiate between financial indicators and market indicators (Cross and Lynch 1988 ). Nonetheless, the similarities in Table  6 prevail. For instance, Cross and Lynch ( 1988 ) also acknowledge different process dimensions. Further, Kueng ( 2000 ) and the EFQM ( 2010 ) also differentiate employee performance from innovation performance, and they both add a separate perspective for results related to the entire society.

Figure  10 summarizes the number of performance indicators that we identified in the process literature per observed performance perspective. Not surprisingly, the initial BSC perspective of internal business process performance contains most of the performance indicators: 29 of 87 indicators. However, the other initial BSC perspectives are also covered by a relatively high number of indicators: 16 indicators for both financial performance and customer-related performance and 26 indicators for “learning and growth”. This result confirms the close link between process performance and organizational performance, as mentioned in the introduction.

Fig. 10 The number of performance indicators with operationalization per performance perspective

A more detailed comparison of the perspectives provides interesting refinements to the state of the research. More specifically, Fig. 10 shows that five performance perspectives have more than ten indicators in the sample, indicating that academic research focuses more on financial performance for shareholders and top management and on performance related to customers, process time, innovation and employees. On the other hand, fewer than five performance indicators were found in the sample for the perspectives related to suppliers, society, process costs and process flexibility, indicating that the literature focuses less on those perspectives. These perspectives remain largely overlooked by academic research, possibly due to their newly emerging character.

We must, however, note that the majority of the performance indicators are mentioned in only a few papers. For instance, 59 of the 87 indicators were cited in a single paper, whereas the remainder are mentioned in more than one paper. Eleven performance indicators are frequently mentioned in the process literature (i.e., by five or more papers). These indicators include four indicators of customer-related performance (i.e., customer complaints, perceived customer satisfaction, query time, and delivery reliability), three indicators of time-related process performance (i.e., process cycle time, sub-process turnaround time, and process waiting time), one cost-related performance indicator (i.e., process cost), two indicators of process performance related to internal quality (i.e., quality of internal outputs and deadline adherence), and one indicator of employee performance (i.e., perceived employee satisfaction).
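Two of the most frequently cited time-related indicators, process cycle time and process waiting time, are typically operationalized from event timestamps. The sketch below shows one such operationalization; the event schema (case id, activity, start and end timestamps) and the sample values are assumptions made for the example, not something prescribed by the reviewed papers.

```python
# One way to compute process cycle time and process waiting time per case
# from activity-level start/end timestamps.
from datetime import datetime

events = [  # (case_id, activity, start, end)
    ("c1", "register order", datetime(2016, 5, 2, 9, 0), datetime(2016, 5, 2, 9, 20)),
    ("c1", "approve order", datetime(2016, 5, 2, 11, 0), datetime(2016, 5, 2, 11, 15)),
    ("c1", "ship order", datetime(2016, 5, 3, 8, 0), datetime(2016, 5, 3, 8, 30)),
]

def cycle_time_hours(case_events):
    """Elapsed time from the first start to the last end of a case."""
    start = min(e[2] for e in case_events)
    end = max(e[3] for e in case_events)
    return (end - start).total_seconds() / 3600

def waiting_time_hours(case_events):
    """Cycle time minus the time actually spent processing activities."""
    processing = sum((e[3] - e[2]).total_seconds() for e in case_events) / 3600
    return cycle_time_hours(case_events) - processing

print(f"cycle time:   {cycle_time_hours(events):.2f} h")   # 23.50 h
print(f"waiting time: {waiting_time_hours(events):.2f} h")  # 22.42 h
```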

Consistent with the “ Performance indicators ” section, the different performance perspectives combine financial or cost-related indicators with non-financial indicators, and the latter take the upper hand in our sample. Furthermore, the sample includes a combination of objective and subjective indicators, with the vast majority being objective. Only eight indicators explicitly refer to qualitative scales; for instance, to measure the degree of satisfaction of the different stakeholder groups. For all the other performance indicators, a quantifiable alternative is provided.

It is important to remember that a distinction was made between the indicators with operationalization and those without operationalization. The list of 87 performance indicators, as given in “Appendix 3 ”, can thus be extended with those indicators for which operationalization is missing in the reviewed literature. Specifically, we found 48 additional performance indicators (“Appendix 4 ”) that mainly address supplier performance, process performance related to costs and flexibility, and the employee-related aspects of digital innovation. Consequently, this structured literature review uncovered a total of 135 performance indicators that are directly or indirectly linked to business process performance.

Finally, the total list of 135 performance indicators was evaluated for its comprehensiveness by comparing the identified indicators with other BSC variants that were not included in our sample. More specifically, based on a random search, we looked for two BSC variants in the Web of Science that did not fit the search strategy of this structured literature review: one that did not fit the search term of “business process*” (Hubbard 2009 ) and another that did not fit any of the performance-related search terms of “performance indicator*”, “performance metric*” or “performance measur*” (Bronzo et al. 2013 ). These two BSC variants cover 30 and 17 performance indicators, respectively, and are thus less comprehensive than the extended list presented in this study. Most of the performance indicators suggested by the two BSC variants are either directly covered in our findings or could be derived after recalculations. Only five performance indicators could not be linked to our list of 135 indicators, and these suggest possible refinements regarding (1) the growth potential of employees, (2) new markets, (3) the social performance of suppliers, (4) philanthropy, or (5) industry-specific events.
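At its core, this comprehensiveness check is a set-coverage comparison between our extended list and the indicators of an external BSC variant. The sketch below illustrates the idea with made-up indicator names; the real comparison involved the 30 and 17 indicators of the two variants and left only five items uncovered.

```python
# Minimal sketch of the coverage check against an external BSC variant:
# which of its indicators are already covered by the extended list?
extended_list = {"process cycle time", "customer complaints", "process cost",
                 "perceived employee satisfaction"}
bsc_variant = {"customer complaints", "process cost", "philanthropy"}

covered = bsc_variant & extended_list
uncovered = bsc_variant - extended_list

print(f"covered:   {sorted(covered)}")
print(f"uncovered: {sorted(uncovered)}")  # candidates for refining the list
```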

This structured literature review culminated in an extended list of 140 performance indicators: 87 indicators with operationalization, 48 indicators without operationalization and 5 refinements derived from two other BSC variants. The evaluation of our findings against two BSC variants validated our work in the sense that we present a more exhaustive list of performance indicators, with operationalization for most, and that only minor refinements could be added. However, the comprehensiveness of our findings can be claimed only to a certain extent given the limitations of our predefined search strategy and the lack of empirical validation by subject-matter experts or organizations. Notwithstanding these limitations, conclusions can be drawn from the large sample of 76 papers to respond to the research questions (RQs).

Regarding RQ1 on the state of the research on business process performance measurement, the literature review provided additional evidence for the omnipresence of the BSC. Most of the sampled papers mentioned or used the BSC as a starting point and basis for their research and analysis. The literature study also showed a variety of research topics, ranging from behavioral-science to design-science research and from a focus on performance measurement models to a focus on performance indicators. In addition to inconsistencies in the terminology used to describe performance indicators and targets, the main weakness uncovered in this literature review deals with the concretization of performance indicators supplementing performance measurement systems. The SLR results suggest that none of the reviewed papers offers a comprehensive measurement framework, specifically one that includes and extends the BSC perspectives, is process-driven and encompasses as many concrete performance indicators as possible. Such a comprehensive framework could be used as a checklist or a best practice for reference when defining specific performance indicators. Hence, the current literature review offers a first step towards such a comprehensive framework by means of an extended list of possible performance indicators bundled in 11 performance perspectives (RQ2).

Regarding RQ2 on process performance indicators, the literature study revealed that scholars measure performance in many different ways and without sharing much detail regarding the operationalization of the measurement instruments, which makes a comparison of research results more difficult. As such, the extended list of performance indicators is our main contribution and fills a gap in the literature by providing a detailed overview of performance indicators mentioned or used in the literature on business process performance. Another novel aspect is that we responded to the criticism of missing perspectives in the original BSC (EFQM 2010 ; Hubbard 2009 ; Kueng 2000 ) and identified the narrow view of performance typically taken in the process literature (Dumas et al. 2013 ). Figures  1 and 2 are now combined and extended in a more exhaustive way, namely by means of more perspectives than are offered by other attempts (Table  6 ), by explicitly differentiating between performance drivers (or lead indicators) and performance outcomes (or lag indicators), and by considering concrete performance indicators.

Our work also demonstrated that all perspectives in the BSC (Kaplan and Norton 1996 , 2001 ) relate to business process performance to some degree. In other words, while the BSC is a strategic tool for organizational performance measurement, it is actually based on indicators that originate from business processes. More specifically, in addition to the perspective of internal business processes, the financial performance perspective typically refers to sales or revenues gained while doing business, particularly after executing business processes. The customer perspective relates to the implications of product or service delivery, specifically to the interactions throughout business processes, whereas the “learning and growth” perspective relates to innovations in the way of working (i.e., business processes) and the degree to which employees are prepared to conduct and innovate business processes. The BSC, however, does not present sub-perspectives and thus takes a more high-level view of performance. Hence, the BSC can be extended based on other categorizations made in the reviewed literature; for instance, related to internal/external, strategic/operational, financial/non-financial, or cost/time/quality/flexibility.

Therefore, this study refined the initial BSC perspectives into eleven performance perspectives (Fig.  11 ) by applying three other performance measurement models (Cross and Lynch 1988 ; EFQM 2010 ; Kueng 2000 ) and the respected Devil’s quadrangle for process performance (Dumas et al. 2013 ). Additionally, a more holistic view of business process performance can be obtained by measuring each performance perspective of Fig.  11 than can be achieved by using the established dimensions of time, cost, quality and flexibility as commonly proposed in the process literature (Dumas et al. 2013 ). As such, this study demonstrated a highly relevant synergy between the disciplines of process management, organization management and performance management.

Fig. 11 An overview of the observed performance perspectives in the business process literature

We also found that not all the performance perspectives in Fig. 11 are equally represented in the studied literature. In particular, the perspectives related to suppliers, society, process costs and process flexibility seem under-researched thus far.

The eleven performance perspectives (Fig.  11 ) can be used by organizations and scholars to measure the performance of business processes in a more holistic way, considering the implications for different target groups. For each perspective, performance indicators can be selected that fit particular needs. Thus, we do not assert that every indicator in the extended list of 140 performance indicators should always be measured, since “ Theoretical background ” section emphasized the need for organization-dependent indicators aligned with an organization’s strategy. Instead, our extended list can be a starting point for finding and using appropriate indicators for each performance perspective, without losing much time reflecting on possible indicators or ways to concretize those indicators. Similarly, the list can be used by scholars, since many studies in both the process literature and management literature intend to measure the performance outcomes of theoretical constructs or developed artifacts.

Consistent with the above, we acknowledge that the observed performance indicators originate from different models and paradigms or can be specific to certain processes or sectors. Since our intention is to provide an exhaustive list of indicators that can be applied to measure business process performance, the indicators are not necessarily fully compatible. Instead, our findings allow the recognition of the role of a business context (i.e., the peculiarities of a business activity, an organization or other circumstances). For instance, a manufacturing organization might choose different indicators from our list than a service or non-profit organization (e.g., manufacturing lead time versus friendliness, or carbon dioxide emission versus stakeholder satisfaction).

Another point of discussion is dedicated to the difference between the performance of specific processes (known as “process performance”) and the performance of the entire process portfolio (also called “BPM performance”). While some indicators in our extended list clearly go beyond a single process (e.g., competence-related indicators or employee absenteeism), it is our opinion that the actual performance of multiple processes can be aggregated to obtain BPM performance (e.g., the sum of process waiting times). This distinction between (actual) process performance and BPM performance is useful; for instance, for supplementing models that try to predict the (expected) performance based on capability development, such as process maturity models (e.g., CMMI) and BPM maturity models (Hammer 2007 ; McCormack and Johnson 2001 ). Nonetheless, since this study has shown a close link between process performance, BPM performance, and organizational performance, it seems better to refer to different performance perspectives than to differentiate between such performance types.
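The aggregation from (actual) process performance to portfolio-level BPM performance can be sketched as below. Which aggregation function fits which indicator (e.g., summing waiting times, averaging satisfaction scores) is a modelling choice that the reviewed literature does not prescribe; the process names and values are made up.

```python
# Aggregating per-process indicator values into portfolio-level BPM
# performance, using an indicator-specific aggregation function.
from statistics import mean

process_performance = {
    "order-to-cash": {"waiting_time_h": 22.4, "employee_satisfaction": 3.8},
    "procure-to-pay": {"waiting_time_h": 10.1, "employee_satisfaction": 4.2},
    "issue-to-resolution": {"waiting_time_h": 5.5, "employee_satisfaction": 4.0},
}

AGGREGATIONS = {
    "waiting_time_h": sum,          # additive indicator: total waiting time
    "employee_satisfaction": mean,  # level indicator: portfolio average
}

bpm_performance = {
    indicator: aggregate(p[indicator] for p in process_performance.values())
    for indicator, aggregate in AGGREGATIONS.items()
}

print(bpm_performance)  # {'waiting_time_h': 38.0, 'employee_satisfaction': 4.0}
```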

In future research, the comprehensiveness of the extended list of performance indicators can be empirically validated by subject-matter experts. Additionally, case studies can be conducted in which organizations apply the list as a supplement to performance measurement models in order to facilitate the selection of indicators for their specific business context. The least covered perspectives in the academic research also seem to be those that are newly emerging (namely, the perspectives related to close collaboration with suppliers, society/sustainability and process flexibility or agility), and these need more attention in future research. Another research avenue is to elaborate on the notion of a business context; for instance, by investigating what it means to have a strategic fit (Venkatraman 1989 ) in terms of performance measurement and which strategies (Miller and Friesen 1986 ; Porter 2008 ; Treacy and Wiersema 1993 ) are typically associated with which performance indicators. Additionally, the impact of environmental aspects, such as market velocity (Eisenhardt and Martin 2000 ), on the choice of performance indicators can be taken into account in future research.

Business quotes such as “If you cannot measure it, you cannot manage it” or “What is measured improves” (P. Drucker) are sometimes criticized because not all important things seem measurable (Ryan 2014 ). Nonetheless, given the perceived need of managers to measure their business and the wide variety of performance indicators (i.e., ranging from quantitative to qualitative and from financial to non-financial), this structured literature review has presented the status of the research on business process performance measurement. This structured approach allowed us to detect weaknesses or inadequacies in the current literature, particularly regarding the definition and concretization of possible performance indicators. We continued by taking a holistic view of the categorization of the observed performance indicators (i.e., measures or metrics) into 11 performance perspectives based on relevant performance measurement models and established process performance dimensions.

The identified performance indicators within the 11 perspectives constitute an extended list from which practitioners and researchers can select appropriate indicators depending on their needs. In total, the structured literature review resulted in 140 possible performance indicators: 87 indicators with operationalization, 48 additional indicators that need further concretization, and 5 refinements based on other Balanced Scorecard (BSC) variants. As such, the 11 performance perspectives with related indicators can be considered a conceptual framework that was derived from the current process literature and theoretically validated by established measurement approaches in organization management.

Future research can empirically validate the conceptual framework by involving subject-matter experts to assess the comprehensiveness of the extended list and refine the missing concretizations, and by undertaking case studies in which the extended list can be applied by specific organizations. Other research avenues exist to investigate the link between actual process performance and expected process performance (as measured in maturity models) or the impact of certain strategic or environmental aspects on the choice of specific performance indicators. Such findings are needed to supplement and enrich existing performance measurement systems.

Abbreviations

BS: behavioral science

BPM: business process management

BSC: balanced scorecard

DS: design-science

RQ: research question

SLR: structured literature review

TO: keyword in topic

TI: keyword in title

Bain & Company (2015) Management tools and trends 2015. http://www.bain.com/publications/articles/management-tools-and-trends-2015.aspx. Accessed Apr 2016

Boell SK, Cecez-Kecmanovic D (2015) On being ‘systematic’ in literature reviews in IS. J Inf Technol 30:161–173


Bronzo M, de Resende PTV, de Oliveira MP, McCormack KP, de Sousa PR, Ferreira RL (2013) Improving performance aligning business analytics with process orientation. Int J Inf Manag 33(2):300–307

Butler A, Letza SR, Neale B (1997) Linking the balanced scorecard to strategy. Long Range Plann 30(2):242–253

Cross KF, Lynch RL (1988) The “SMART” way to define and sustain success. Natl Product Rev 8(1):1–23

Dumas M, La Rosa M, Mendling J, Reijers HA (2013) Fundamentals of business process management. Springer, Berlin


EFQM (2010) EFQM—the official website. http://www.efqm.org . Accessed Apr 2015

Eisenhardt KM, Martin JA (2000) Dynamic capabilities: what are they? Strateg Manag J 21(10–11):1105–1121

Hammer M (2007) The process audit. Harv Bus Rev 4:111–123


Heckl D, Moormann J (2010) Process performance management. In: Rosemann M, vom Brocke J (eds) Handbook on business process management 2. Springer, Berlin, pp 115–135


Hubbard G (2009) Measuring organizational performance: beyond the triple bottom line. Bus Strateg Environ 18(3):177–191


Kaplan RS, Norton DP (1996) The balanced scorecard. Translating strategy into action. Harvard Business School Press, Boston

Kaplan RS, Norton DP (2001) The strategy-focused organization. How balanced scorecard companies thrive in the new business environment. Harvard Business School Press, Boston

Kaplan RS, Norton DP (2004) Strategy maps. Converting intangible assets into tangible outcomes. Harvard Business Press, Massachusetts

King WR, He J (2005) Understanding the role and methods of meta-analysis in IS research. Commun Assoc Inform Sys 16:665–686

Kitchenham B (2007) Guidelines for performing systematic literature reviews in software engineering (version 2.3) (technical report EBSE-2007-01). Keele University and University of Durham

Kueng P (2000) Process performance measurement system: a tool to support process-based organizations. Total Qual Manag 11(1):67–85

March ST, Smith GF (1995) Design and natural science research on information technology. Decis Support Syst 15(4):251–266

McCormack K, Johnson WC (2001) Business process orientation. St. Lucie Press, Florida

Melville N, Kraemer K, Gurbaxani V (2004) Review: information technology and organizational performance: an integrative model of IT business value. MIS Q 28(2):283–322

Miller D, Friesen PH (1986) Porter’s (1980) generic strategies and performance: an empirical examination with American data. Part I: testing Porter. Organ Stud 7(1):37–55

Neely A (2005) The evolution of performance measurement research. Int J Oper Prod Manag 25(12):1264–1277

Neely A, Mills J, Platts K, Richards H, Gregory M, Bourne M, Kennerley M (2000) Performance measurement system design: developing and testing a process-based approach. Int J Oper Prod Manag 20(10):1119–1145

Norreklit H (2000) The balance on the balanced scorecard. A critical analysis of some of its assumptions. Manag Accoun Res 11(1):65–88

Peffers K, Rothenberger M, Tuunanen T, Vaezi R (2012) Design science research evaluation. In: Peffers K, Rothenberger M, Kuechler B (eds) DESRIST 2012. LNCS 7286. Springer, Berlin, pp 398–410

Porter ME (2008) The five competitive forces that shape strategy. Harv Bus Rev 86(1):78–93


Recker J (2013) Scientific research in information systems. A beginner’s guide. Springer, Berlin

Richard PJ, Devinney TM, Yip GS, Johnson G (2009) Measuring organizational performance: towards methodological best practice. J Manag 35(3):718–804

Ryan L (2014) ‘If you can’t measure it, you can’t manage it’: not true. http://www.forbes.com/sites/lizryan/2014/02/10/if-you-cant-measure-it-you-cant-manage-it-is-bs/#aca27e3faeda . Accessed Apr 2015

Shah L, Etienne A, Siadat A, Vernadat F (2012) (Value, Risk)-Based performance evaluation of manufacturing processes. In: INCOM proceedings of the 14th symposium on information control problems in manufacturing, 23–25 May 2012. Bucharest, Romania, pp 1586–1591

Smith TM, Reece JS (1999) The relationship of strategy, fit, productivity, and business performance in a services setting. J Oper Manag 17(2):145–161

Sullivan T (2001) Scorecards ease businesses’ balance act. Infoworld, 8 Jan, p 32

Treacy M, Wiersema F (1993) Customer intimacy and other value disciplines. Harv Bus Rev 71(1):84–93

Ulfeder S (2004) The new imperative. Enterprise leadership. CIO advertising supplements, 15 Feb, p S5

Vaivio J (1999) Exploring a non-financial management accounting change. Manag Acc Res 10(4):409–437

Venkatraman N (1989) The concept of fit in strategy research: toward verbal and statistical correspondence. Acad Manag Rev 14(3):423–444


Authors’ contributions

AVL initiated the conception and design of the study, while AS was responsible for the collection of data (sampling) and identification of performance indicators. The analysis and interpretation of the data was conducted by both authors. AVL was involved in drafting and coordinating the manuscript, and AS in reviewing it critically. Both authors read and approved the final manuscript.

Acknowledgements

We thank American Journal Experts (AJE) for English language editing.

Competing interests

The authors declare that they have no competing interests.

Availability of data and materials

The datasets supporting the conclusions of this article are included within the article (and its additional files).

Consent for publication

Not applicable.

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Author information

Authors and Affiliations

Faculty of Economics and Business Administration – Department of Business Informatics and Operations Management, Ghent University, Tweekerkenstraat 2, 9000, Ghent, Belgium

Amy Van Looy & Aygun Shafagatova


Corresponding author

Correspondence to Amy Van Looy .

Appendix 1: The final sample of papers

See Table 7.

Appendix 2: The mapping of the structured literature review

The mapping details per sampled paper can be found here.

https://drive.google.com/file/d/0B_2VpjwsRLrlRHhfRHJ4ZFBWdEE/view?usp=sharing .

Appendix 3: Performance indicators with operationalization

See Table 8.

Appendix 4: Performance indicators without operationalization

See Table 9.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article

Cite this article.

Van Looy, A., Shafagatova, A. Business process performance measurement: a structured literature review of indicators, measures and metrics. SpringerPlus 5 , 1797 (2016). https://doi.org/10.1186/s40064-016-3498-1


Received : 17 June 2016

Accepted : 10 October 2016

Published : 18 October 2016

DOI : https://doi.org/10.1186/s40064-016-3498-1

Keywords

  • Business process
  • Performance measurement
  • Structured literature review
  • Systematic literature review

Measuring the performance of business processes has become a central issue in both academia and business, since organizations are challenged to achieve effective and efficient results. Applying performance measurement models to this purpose ensures alignment with a business strategy, which implies that the choice of performance indicators is organization-dependent. Nonetheless, such measurement models generally suffer from a lack of guidance regarding the performance indicators that exist and how they can be concretized in practice. To fill this gap, we conducted a structured literature review to find patterns or trends in the research on business process performance measurement. The study also documents an extended list of 140 process-related performance indicators in a systematic manner by further categorizing them into 11 performance perspectives in order to gain a holistic view. Managers and scholars can consult the provided list to choose the indicators that are of interest to them, considering each perspective. The structured literature review concludes with avenues for further research.

Since organizations endeavor to measure what they manage, performance measurement is a central issue in both the literature and in practice (Heckl and Moormann 2010 ; Neely 2005 ; Richard et al. 2009 ). Performance measurement is a multidisciplinary topic that is highly studied by both the management and information systems domains (business process management or BPM in particular). Different performance measurement models, systems and frameworks have been developed by academia and practitioners (Cross and Lynch 1988 ; Kaplan and Norton 1996 , 2001 ; EFQM 2010 ; Kueng 2000 ; Neely et al. 2000 ). While measurement models were initially limited to financial performance (e.g., traditional controlling models), a more balanced and integrated approach was needed beginning in the 1990s due to the challenges of the rapidly changing society and technology; this approach resulted in multi-dimensional models. Perhaps the best known multi-dimensional performance measurement model is the Balanced Scorecard (BSC) developed by Kaplan and Norton ( 1996 , 2001 ), which takes a four-dimensional approach to organizational performance: (1) financial perspective, (2) customer perspective, (3) internal business process perspective, and (4) “learning and growth” perspective. The BSC helps translate an organization’s strategy into operational performance indicators (also called performance measures or metrics) and objectives with targets for each of these performance perspectives. Even today, the BSC is by far the most used performance measurement approach in the business world (Bain Company 2015 ; Sullivan 2001 ; Ulfeder 2004 ).

Equally important for measuring an organization’s performance is process-oriented management or business process management (BPM), which is “about managing entire chains of events, activities and decisions that ultimately add value to the organization and its customers. These ‘chains of events, activities and decisions’ are called processes” (Dumas et al. 2013 : p. 1). In particular, an organization can do more with its current resources by boosting the effectiveness and efficiency of its way of working (i.e., its business processes) (Sullivan 2001 ). In this regard, academic research also suggests a strong link between business process performance and organizational performance, either in the sense of a causal relationship (Melville et al. 2004 ; Smith and Reece 1999 ) or as distinctive indicators that co-exist, as in the BSC (Kaplan and Norton 1996 , 2001 ).

Nonetheless, performance measurement models tend to give little guidance on how business (process) performance indicators can be chosen and operationalized (Shah et al. 2012 ). They are limited to mainly defining performance perspectives, possibly with some examples or steps to derive performance indicators (Neely et al. 2000 ), but without offering concrete indicators. Whereas fairly large bodies of research exist for both performance models and business processes, no structured literature review of (process) performance measurement has been carried out thus far. To the best of our knowledge, existing reviews cover one or another aspect of performance measurement; for instance, reviews on measurement models or evaluation criteria for performance indicators (Heckl and Moormann 2010 ; Neely 2005 ; Richard et al. 2009 ). Despite the considerable importance of a comprehensive and holistic approach to business (process) performance measurement, little is known regarding the state of the research on alternative performance indicators and their operationalization with respect to evaluating the performance of an organization’s work routines. To some extent, this lack of guidance can be explained by the fact that performance indicators are considered organization-dependent, given that strategic alignment is claimed by many measurement models such as the BSC (Kaplan and Norton 1996 , 2001 ). Although the selection of appropriate performance indicators is challenging for practitioners due to the lack of best practices, it is also highly relevant for performance measurement.

The gap that we are studying is the identification and, in particular, the concretization/operationalization of process-related performance indicators. This study enhances the information systems literature, which focuses on the design and development of measurement systems without paying much attention to essential indicators. To fill this gap, our study presents a structured literature review in order to describe the current state of business process performance measurement and related performance indicators. The choice to focus on the business process management (BPM) discipline is motivated by the close link between organizational performance and business process performance, as well as to ensure a clear scope (specifically targeting an organization’s way of working). Accordingly, the study addresses the following research questions.

  • RQ1. What is the current state of the research on business process performance measurement?
  • RQ2. Which indicators, measures and metrics are used or mentioned in the current literature related to business process performance?

The objective of RQ1 is to identify patterns in the current body of knowledge and to note weaknesses, whereas RQ2 mainly intends to develop an extended list of measurable process performance indicators, categorized into recognized performance perspectives, which can be tailored to diverse purposes. This list could, for instance, serve as a supplement to existing performance measurement models. Practitioners can use the list as a source for best practice indicators from academic research to find and select a subset of performance indicators that fit their strategy. The study will thus not address the development of specific measurement systems but rather the indicators to be used within such systems. To make our intended list system-independent, we will begin with the BSC approach and extend its performance perspectives. Given this generic approach, the research findings can also be used by scholars when building and testing theoretical models in which process performance is one of the factors that must be concretized.

The remainder of this article is structured as follows. “ Theoretical background ” section describes the theoretical background of performance measurement models and performance indicators. Next, the methodology for our structured literature review is detailed in “ Methods ” section. The subsequent sections present the results for RQ1 (“ Results for RQ1 ” section) and RQ2 (“ Results for RQ2 ” section). The discussion of the results in provided in “ Discussion ” section, followed by concluding comments (“ Conclusion ” section).

Theoretical background

This section addresses the concepts of performance measurement models and performance indicators separately in order to be able to differentiate them further in the study.

Performance measurement models

According to overviews in the performance literature (Heckl and Moormann 2010; Neely 2005 ; Richard et al. 2009 ), some of the most cited performance measurement models are the Balanced Scorecard (Kaplan and Norton 1996 , 2001 ), self-assessment excellence models such as the EFQM ( 2010 ), and the models by Cross and Lynch ( 1988 ), Kueng ( 2000 ) and Neely et al. ( 2000 ). A distinction should, however, be made between models focusing on the entire business (Kaplan and Norton 1996 , 2001 ; EFQM 2010 ; Cross and Lynch 1988 ) and models focusing on a single business process (Kueng 2000 ; Neely et al. 2000 ).

Organizational performance measurement models

Organizational performance measurement models typically intend to provide a holistic view of an organization’s performance by considering different performance perspectives. As mentioned earlier, the BSC provides four perspectives for which objectives and performance indicators ensure alignment between strategies and operations (Fig.  1 ) (Kaplan and Norton 1996 , 2001 ). Other organizational performance measurement models provide similar perspectives. For instance, Cross and Lynch ( 1988 ) offer a four-level performance pyramid: (1) a top level with a vision, (2) a second level with objectives per business unit in market and financial terms, (3) a third level with objectives per business operating system in terms of customer satisfaction, flexibility and productivity, and (4) a bottom level with operational objectives for quality, delivery, process time and costs. Another alternative view on organizational performance measurement is given in business excellence models, which focus on an evaluation through self-assessment rather than on strategic alignment, albeit by also offering performance perspectives. For instance, the EFQM ( 2010 ) distinguishes enablers [i.e., (1) leadership, (2) people, (3) strategy, (4) partnerships and resources, and (5) processes, products and services] from results [i.e., (1) people results, (2) customer results, (3) society results, and (4) key results], and a feedback loop for learning, creativity and innovation.

An external file that holds a picture, illustration, etc.
Object name is 40064_2016_3498_Fig1_HTML.jpg

An overview of the performance perspectives in Kaplan and Norton ( 1996 , 2001 )

Since the BSC is the most used performance measurement model, we have chosen it as a reference model to illustrate the function of an organizational performance measurement model (Kaplan and Norton 1996 , 2001 ). The BSC is designed to find a balance between financial and non-financial performance indicators, between the interests of internal and external stakeholders, and between presenting past performance and predicting future performance. The BSC encourages organizations to directly derive (strategic) long-term objectives from the overall strategy and to link them to (operational) short-term targets. Concrete performance measures or indicators should be defined to periodically measure the objectives. These indicators are located on one of the four performance perspectives in Fig.  1 (i.e., ideally with a maximum of five indicators per perspective).

Table  1 illustrates how an organizational strategy can be translated into operational terms using the BSC.

Table 1

An example of translating an organizational strategy into operational terms using the BSC

PerspectiveStrategyObjectiveIndicator, measure or metricTargetInitiative
Year 1 (%)Year 2 (%)Year 3 (%)
CustomerOperational excellenceIndustry-leading customer loyaltyCustomer satisfaction rating808590Mystery shopper program
Customer loyalty program

During periodical measurements using the BSC, managers can assign color-coded labels according to actual performance on short-term targets: (1) a green label if the organization has achieved the target, (2) an orange label if it is almost achieved, or (3) a red label if it is not achieved. Orange and red labels thus indicate areas for improvement.

Furthermore, the BSC assumes a causal or logical relationship between the four performance perspectives. An increase in the competences of employees (i.e., performance related to “learning and growth”) is expected to positively affect the quality of products and services (i.e., internal business process performance), which in turn will lead to improved customer perceptions (i.e., customer performance). The results for the previous perspectives will then contribute to financial performance to ultimately realize the organization’s strategy, mission and vision (Kaplan and Norton 1996 , 2001 ). Hence, indicators belonging to the financial and customer perspectives are assumed to measure performance outcomes, whereas indicators from the perspectives of internal business processes and “learning and growth” are considered as typical performance drivers (Kaplan and Norton 2004 ).

Despite its widespread use and acceptance, the BSC is also criticized for appearing too general by managers who are challenged to adapt it to the culture of their organization (Butler et al. 1997 ) or find suitable indicators to capture the various aspects of their organization’s strategy (Shah et al. 2012 ; Vaivio 1999 ). Additionally, researchers question the choice of four distinct performance perspectives (i.e., which do not include perspectives related to inter-organizational performance or sustainability issues) (EFQM 2010 ; Hubbard 2009 , Kueng 2000 ). Further, the causal relationship among the BSC perspectives has been questioned (Norreklit 2000 ). To some degree, Kaplan and Norton ( 2004 ) responded to this criticism by introducing strategy maps that focus more on the causal relationships and the alignment of intangible assets.

Business process performance measurement models

In addition to organizational models, performance measurement can also focus on a single business process, such as statistical process control, workflow-based monitoring or process performance measurement systems (Kueng 2000 ; Neely et al. 2000 ). The approach taken in business process performance measurement is generally less holistic than the BSC. For instance, in an established BPM handbook, Dumas et al. ( 2013 ) position time, cost, quality and flexibility as the typical performance perspectives of business process performance measurement (Fig.  2 ). Similar to organizational performance measurement, concrete performance measures or indicators should be defined for each process performance perspective. In this sense, the established perspectives of Dumas et al. ( 2013 ) seem to further refine the internal business process performance perspective of the BSC.

An external file that holds a picture, illustration, etc.
Object name is 40064_2016_3498_Fig2_HTML.jpg

An overview of the performance perspectives in Dumas et al. ( 2013 )

Neely et al. ( 2000 ), on the other hand, present ten steps to develop or define process performance indicators. The process performance measurement system of Kueng ( 2000 ) is also of high importance, which is visualized as a “goal and performance indicator tree” with five process performance perspectives: (1) financial view, (2) customer view, (3) employee view, (4) societal view, and (5) innovation view. Kueng ( 2000 ) thus suggests a more holistic approach towards process performance, similar to organizational performance, given the central role of business processes in an organization. He does so by focusing more on the different stakeholders involved in certain business processes.

Performance indicators

Section “ Performance measurement models ” explained that performance measurement models typically distinguish different performance perspectives for which performance indicators should be further defined. We must, however, note that we consider performance measures, performance metrics and (key) performance indicators as synonyms (Dumas et al. 2013 ). For reasons of conciseness, this work will mainly refer to performance indicators without mentioning the synonyms. In addition to a name, each performance indicator should also have a concretization or operationalization that describes exactly how it is measured and that can result in a value to be compared against a target. For instance, regarding the example in Table  1 , the qualitative statements to measure customer satisfaction constitute an operationalization. Nonetheless, different ways of operationalization can be applied to measure the same performance indicator. Since organizations can profit from reusing existing performance indicators and the related operationalization instead of inventing new ones (i.e., to facilitate benchmarking and save time), this work investigates which performance indicators are used or mentioned in the literature on business process performance and how they are operationalized.

Neely et al. (2000) and Richard et al. (2009) both present evaluation criteria for performance indicators (i.e., in the sense of desirable characteristics or review implications), which summarize the general consensus in the performance literature. First, the literature strongly agrees that performance indicators are organization-dependent and should be derived from an organization's objectives, strategy, mission and vision. Secondly, consensus also exists regarding the need to combine financial and non-financial performance indicators. Nonetheless, disagreement remains about whether objective and subjective indicators need to be combined, with most authors preferring objective indicators. Although subjective (or quasi-objective) indicators are prone to bias, their use has some advantages; for instance, to include stakeholders in an assessment, to address latent constructs or to facilitate benchmarking when a fixed reference point is missing (Hubbard 2009; Richard et al. 2009). Moreover, empirical research has shown that subjective (or quasi-objective) indicators correlate with objective indicators to a greater or lesser degree, depending on the level of detail of the subjective question (Richard et al. 2009). For instance, a subjective question can be made more objective by using clear definitions or by selecting only well-informed respondents to reduce bias.

We conducted a structured literature review (SLR) to find papers dealing with performance measurement in the business process literature. An SLR can be defined as "a means of evaluating and interpreting all available research relevant to a particular research question, topic area, or phenomenon of interest" (Kitchenham 2007: p. vi). An SLR is a meta study that identifies and summarizes evidence from earlier research (King and He 2005), or a way to address a potentially large number of identified sources based on a strict protocol used to search and appraise the literature (Boell and Cecez-Kecmanovic 2015). It is systematic in the sense that relevant papers are found and classified in a systematic way. Hence, according to Boell and Cecez-Kecmanovic (2015), an SLR as a specific type of literature review can only be used when two conditions are met. First, the topic should be well-specified and narrowly formulated (i.e., limited to performance measurement in the context of business processes) so that all relevant literature can potentially be identified based on inclusion and exclusion criteria. Secondly, the research questions should be answerable by extracting and aggregating evidence from the identified literature based on a high-level summary or a bibliometric type of content analysis. Furthermore, King and He (2005) also refer to a statistical analysis of the existing literature.

Informed by the established guidelines proposed by Kitchenham ( 2007 ), we undertook the review in distinct stages: (1) formulating the research questions and the search strategy, (2) filtering and extracting data based on inclusion and exclusion criteria, and (3) synthesizing the findings. The remainder of this section describes the details of each stage.

Formulating the research questions and search strategy

A comprehensive and unbiased search is one of the fundamental factors that distinguish a systematic review from a traditional literature review (Kitchenham 2007). For this purpose, a systematic search begins with the identification of keywords and search terms that are derived from the research questions. Based on the research questions stipulated in the introduction, the SLR protocol (Boell and Cecez-Kecmanovic 2015) for our study was defined, as shown in Table 2.

Table 2

The structured literature review protocol for this study, based on Boell and Cecez-Kecmanovic (2015)

Protocol elements | Translation to this study
1/ Research question | RQ1. What is the current state of the research on business process performance measurement? RQ2. Which indicators, measures and metrics are used or mentioned in the current literature related to business process performance?
2/ Sources searched | Web of Science database (until November 2015)
3/ Search terms | Combining "business process*" and "performance indicator*"/"performance metric*"/"performance measur*"
4/ Search strategy | Different search queries, with keywords in topic and title (Table 3)
5/ Inclusion criteria | Include only papers containing a combination of search terms, defined in the search queries; include only papers indexed in the Web of Science from all periods until November 2015; include only papers written in English
6/ Exclusion criteria | Exclude unrelated papers, i.e., papers that do not explicitly claim to address the measurement of business process performance
7/ Quality criteria | Only peer-reviewed papers are indexed in the Web of Science database

The ISI Web of Science (WoS) database was searched using predetermined search terms in November 2015. This database was selected because it is widely used by universities and indexes high-quality, peer-reviewed publications, which strengthens the quality of our findings. An important requirement was that the papers focus on "business process*" (BP). This keyword was used in combination with at least one of the following: (1) "performance indicator*", (2) "performance metric*", (3) "performance measur*". All combinations of "keyword in topic" (TO) and "keyword in title" (TI) were used.
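By way of illustration, the following Python snippet enumerates these search-term combinations as Web of Science-style query strings (using TS for a topic search and TI for a title search); the exact field tags and query syntax are an assumption on our part and may differ from the queries actually issued.

```python
from itertools import product

bp_term = '"business process*"'
perf_terms = ['"performance indicator*"', '"performance metric*"', '"performance measur*"']
fields = {"TO": "TS", "TI": "TI"}  # keyword in topic vs. keyword in title

# Every combination of BP position x performance-term position x performance term.
queries = []
for (bp_pos, perf_pos), perf in product(product(fields, fields), perf_terms):
    query = f"{fields[bp_pos]}=({bp_term}) AND {fields[perf_pos]}=({perf})"
    queries.append(((bp_pos, perf_pos), query))

for (bp_pos, perf_pos), q in queries:
    print(f"BP-{bp_pos} x {perf_pos}: {q}")
# 2 BP positions x 2 performance-term positions x 3 terms = 12 queries,
# matching the 12 cells of Table 3.
```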

Table  3 shows the degree to which the initial sample sizes varied, with 433 resulting papers for the most permissive search query (TOxTO) and 19 papers for the most restrictive one (TIxTI). The next stage started with the most permissive search query in an effort to select and assess as many relevant publications as possible.

Table 3

The number of papers in the Web of Science per search query (until November 2015)

Search query (BP keyword position x performance keyword position) | (1) "Performance indicator*" | (2) "Performance metric*" | (3) "Performance measur*" | Total
BP-TO x TO | 153 | 30 | 250 | 433
BP-TI x TO | 31 | 4 | 64 | 99
BP-TO x TI | 19 | 2 | 62 | 83
BP-TI x TI | 5 | 0 | 14 | 19

Filtering and extracting data

Figure  3 summarizes the procedure for searching and selecting the literature to be reviewed. The list of papers found in the previous stage was filtered by deleting 35 duplicates, and the remaining 398 papers were further narrowed to 153 papers by evaluating their title and abstract. After screening the body of the texts, 76 full-text papers were considered relevant for our scope and constituted the final sample (“Appendix 1 ”).

Fig. 3 Exclusion of papers and number of primary studies

More specifically, studies were excluded if their main focus was not business process performance measurement or if they did not refer to indicators, measures or metrics for business performance. The inclusion of studies was not restricted to any specific type of intervention or outcome. The SLR thus included all types of research studies that were written in English and published up to and including November 2015. Furthermore, publication in a peer-reviewed outlet (e.g., a journal or conference proceedings) was considered a quality criterion to ensure the academic level of the research papers.
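The screening funnel described above boils down to simple arithmetic; the short sketch below (our own illustration, using the counts reported in this section) makes the successive reductions explicit.

```python
# Screening funnel of the structured literature review (counts as reported above).
initial_hits = 433                 # most permissive search query (TO x TO)
duplicates_removed = 35
after_deduplication = initial_hits - duplicates_removed   # 398 papers
after_title_abstract_screening = 153                      # kept after reading title and abstract
final_sample = 76                                          # kept after full-text screening

assert after_deduplication == 398
print(initial_hits, "->", after_deduplication, "->",
      after_title_abstract_screening, "->", final_sample)
```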

Synthesizing the findings

The analysis of the final sample was performed by means of narrative and descriptive analysis techniques. For RQ1, the 76 papers were analyzed on the basis of bibliometric data (e.g., publication type, publication year, geography) and general performance measurement issues by paying attention to the methodology and focus of the study. Details are provided in “Appendix 2 ”.

For RQ2, all the selected papers were screened to identify concrete performance indicators in order to generate a comprehensive list or checklist. The latter was done in different phases. In the first phase, the structured literature review allowed us to analyze which performance indicators are mainly used in the process literature and how they are concretized (e.g., in a question or mathematical formulation), resulting in an unstructured list of potential performance indicators. The indicators were also synthesized by combining similar indicators and rephrasing them into more generic terms.

The next phase was a comparative study to categorize the output of phase 1 into the commonly used measurement models in the performance literature (see “ Theoretical background ” section). For the purpose of this study, we specifically looked for those organizational performance models, mentioned in “ Theoretical background ” section, that are cited the most and that suggest categories, dimensions or performance perspectives that can be re-used (Kaplan and Norton 1996 , 2001 ; EFQM 2010 ; Cross and Lynch 1988 ; Kueng 2000 ). Since the BSC (Kaplan and Norton 1996 , 2001 ) is the most commonly used of these measurement models, we began with the BSC as the overall framework to categorize the observed indicators related to business (process) performance, supplemented with an established view on process performance from the process literature (Dumas et al. 2013 ). Subsequently, a structured list of potential performance indicators was obtained.
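As a rough sketch of this phase-2 categorization, the snippet below maps raw indicator names onto BSC perspectives refined with the process dimensions of Dumas et al. (2013); the keyword rules and example indicators are a simplification of what was in reality a manual, consensus-based classification, so treat them as illustrative assumptions.

```python
# A simplified, keyword-based stand-in for the manual phase-2 categorization.
PERSPECTIVE_KEYWORDS = {
    "financial": ["sales", "revenue", "earnings", "share"],
    "customer": ["customer", "complaint", "delivery"],
    "internal process - time": ["cycle time", "waiting", "turnaround"],
    "internal process - cost": ["process cost"],
    "internal process - quality": ["error", "rework", "deadline"],
    "internal process - flexibility": ["variant", "change"],
    "learning and growth": ["employee", "training", "innovation"],
}

def categorize(indicator: str) -> str:
    """Assign an observed indicator to a single perspective (first match wins)."""
    lowered = indicator.lower()
    for perspective, keywords in PERSPECTIVE_KEYWORDS.items():
        if any(k in lowered for k in keywords):
            return perspective
    return "uncategorized"

for ind in ["Process cycle time", "Customer complaints", "Perceived employee satisfaction"]:
    print(ind, "->", categorize(ind))
```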

In the third and final phase, an evaluation study was performed to validate whether the output of phase 2 is sufficiently comprehensive according to other performance measurement models, i.e., models not included in our sample and differing from the most commonly used performance measurement models. To this end, we investigated the degree to which our structured list covers the items in two variants or concretizations of the BSC, thereby providing a validation by other theoretical models. We note that a validation by subject-matter experts is out of scope for a structured literature review but constitutes an opportunity for further research.

Results for RQ1

The final sample of 76 papers consists of 46 journal papers and 30 conference papers (Fig. 4). The journal papers appear in a wide variety of outlets, particularly in operations and production-related journals or in lower-ranked (Recker 2013) information systems journals.

Fig. 4 The distribution of the sampled papers per publication type (N = 76)

When considering the chronological distribution of the sampled papers, Fig.  5 indicates an increase in the uptake of the topic in recent years, particularly for conference papers but also for journal publications since 2005.

Fig. 5 The chronological distribution of the sampled papers per publication type (N = 76)

This uptake seems particularly situated in the Western world and Asia (Fig. 6). The countries with five or more papers in our sample are Germany (12 papers), the US (6 papers), Spain (5 papers), Croatia (5 papers) and China (5 papers). Figure 6 shows that business process performance measurement is a worldwide topic, with papers across the different continents. Nonetheless, a possible explanation for the higher coverage in the Western world is its long tradition of measuring work (i.e., the origins of the BSC).

Fig. 6 The geographical distribution of the sampled papers per continent, based on a paper's first author (N = 76)

The vast majority of the sampled papers address artifacts related to business (process) performance measurement. When looking at the research paradigm in which the papers are situated (Fig.  7 ), 71 % address design-science research, whereas 17 % conduct research in behavioral science and 12 % present a literature review. This could be another explanation for the increasing uptake in the Western world, as many design-science researchers are from Europe or North America (March and Smith 1995 ; Peffers et al. 2012 ).

Fig. 7 The distribution of the sampled journal papers per research paradigm (N = 76)

Figure  8 supplements Fig.  7 by specifying the research methods used in the papers. For the behavioral-science papers, case studies and surveys are equally used. The 54 papers that are situated within the design-science paradigm explicitly refer to models, meta-models, frameworks, methods and/or tools. When mapping these 54 papers to the four artifact types of March and Smith ( 1995 ), the vast majority present (1) methods in the sense of steps to perform a task (e.g., algorithms or guidelines for performance measurement) and/or (2) models to describe solutions for the topic. The number of papers dealing with (3) constructs or a vocabulary and/or (4) instantiations or tools is much more limited, with 14 construct-related papers and 9 instantiations in our sample. We also looked at which evaluation methods, defined by Peffers et al. ( 2012 ), are typically used in the sampled design-science papers. While 7 of the 54 design-science papers do not seem to report on any evaluation effort, our sample confirms that most papers apply one or another evaluation method. Case studies and illustrative scenarios appear to be the most frequently used methods to evaluate design-science research on business (process) performance measurement.

Fig. 8 The distribution of the sampled journal papers per research method (N = 76)

The sampled design-science research papers typically build and test performance measurement frameworks, systems or models or suggest meta-models and generic templates to integrate performance indicators into the process models of an organization. Such papers can focus on the process level, organizational level or even cross-organizational level. Nonetheless, the indicators mentioned in those papers are illustrative rather than comprehensive. An all-inclusive list of generic performance indicators seems to be missing. Some authors propose a set of indicators, but those indicators are specific to a certain domain or sector instead of being generic. For instance, Table  4 shows that 36 of the 76 sampled papers are dedicated to a specific domain or sector, such as technology-related aspects or supply chain management.

Table 4

The number of sampled papers dedicated to a specific domain or sector (N = 76)

Domain or sector | Number of papers
IS/IT | 7
Supply chain | 5
Business network | 3
Manufacturing | 3
Services | 3
Automobile | 2
Banking/financial | 2
Government | 2
Health | 2
Helpdesk/maintenance | 2
Construction | 1
HR | 1
SME | 1
Strategic planning | 1
Telecom | 1
Total | 36

Furthermore, the reviewed literature was analyzed with regard to its (1) scope, (2) functionalities, (3) terminology, and (4) foundations.

Starting with scope, it is observed that nearly two-thirds of the sampled papers can be categorized as dealing with process-oriented performance measurement, whereas one-third focuses more on general performance measurement and management issues. Nonetheless, most of the studies of process performance also include general performance measurement as a supporting concept. A minor cluster of eight research papers specifically focuses on business process reengineering and measurement systems to evaluate the results of reengineering efforts. Furthermore, other researchers focus on the measurement and assessment of interoperability issues and supply chain management measurements.

Secondly, while analyzing the literature, two groups of papers were identified based on their functionalities: (1) focusing on performance measurement systems or frameworks, and (2) focusing on certain performance indicators and their categorization. Regarding the first group, it should be mentioned that while the process of building or developing a performance measurement system (PMS) or framework is well-researched, only a small number of papers explicitly address process performance measurement systems (PPMS). The papers in this first group typically suggest concrete steps or stages to be followed by particular organizations or discuss the conceptual characteristics and design of a performance measurement system. Regarding the second group of performance indicators, we can differentiate two sub-groups. Some authors focus on the process of defining performance indicators by listing requirements or quality characteristics that an indicator should meet. However, many more authors are interested in integrating performance indicators into the process models or the whole architecture of an organization, and they suggest concrete solutions to do so. Compared to the first group of papers, this second group deals more with the categorization of performance indicators into domains (financial/non-financial, lag/lead, external/internal, BSC dimensions) or levels (strategic, tactical, operational).

Thirdly, regarding terminology, different terms are used by different authors to discuss performance measurement. Performance "indicator" is the most commonly used term among the reviewed papers. For instance, it is frequently used in reference to a key performance indicator (KPI), a KPI area or a performance indicator (PI). The concept of a process performance indicator (PPI) is also used, mainly in the process-oriented literature. Performance "measure" is another prevalent term in the papers. The least-used term is performance "metric" (i.e., in only nine papers). Although the concepts of performance indicators, measures and metrics are used interchangeably throughout most of the papers, the concepts are sometimes defined in different ways. For instance, paper 17 defines a performance indicator as a metric, and paper 49 defines a performance measure as an indicator. On the other hand, paper 7 defines a performance indicator as a set of measures. Yet another perspective is taken in paper 74, which defines a performance measure as "a description of something that can be directly measured (e.g., number of reworks per day)", while defining a performance indicator as "a description of something that is calculated from performance measures (e.g., percentage reworks per day per direct employee)" (p. 386). Inconsistencies exist not only in defining indicators but also in describing performance goals. For instance, some authors include a sign (e.g., minus or plus) or a verb (e.g., decrease or increase) in front of an indicator. Other authors attempt to describe performance goals in a SMART way, for instance by including a time indication (e.g., "within a certain period") and/or a target (e.g., "5 % of all orders"), whereas most of the authors are less precise. Hence, a great degree of ambiguity exists in the formulation of performance objectives among the reviewed papers.
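The distinction drawn in paper 74 between a directly measured quantity and an indicator calculated from such measures can be illustrated as follows; the concrete formula and numbers are hypothetical, since the paper only provides the example names.

```python
# Directly measured performance measures (paper 74's sense of "measure").
reworks_per_day = 6
direct_employees = 40

# A performance indicator calculated from measures (paper 74's sense of "indicator"),
# e.g., percentage of reworks per day per direct employee (formula is our assumption).
def reworks_percentage_per_employee(reworks: int, employees: int) -> float:
    return (reworks / employees) * 100

print(f"{reworks_percentage_per_employee(reworks_per_day, direct_employees):.1f} %")  # 15.0 %
```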

Finally, regarding the papers’ foundations, “ Performance measurement models ” section already indicated that the BSC plays an important role in the general literature on performance management systems (PMS), while Kueng ( 2000 ) also offers influential arguments on process performance measurement systems (PPMS). In our literature review, we observed that the BSC was mentioned in 43 of the 76 papers and that the results of 19 papers were mainly based on the BSC (Fig.  9 ). This finding provides additional evidence that the BSC can be considered the most frequently used performance model in academia as well. However, the measurement model of Kueng ( 2000 ) was also mentioned in the sampled papers on PPMS, though less frequently (i.e., in six papers).

Fig. 9 The importance of the BSC according to the sampled papers (N = 76)

Interestingly, the BSC is also criticized by the sampled papers for not being comprehensive; for instance, due to the exclusion of environmental aspects, supply chain management aspects or cross-organizational processes. In response, some of the sampled papers also define sector-specific BSC indicators or suggest additional steps or indicators to make the process or business more sustainable (see Table  4 ). Nonetheless, the majority of the papers agree on the need for integrated and multidimensional measurement systems, such as the BSC, and on the importance of directly linking performance measurement to an organization’s strategy. However, while these papers mention the required link with strategy, the prioritization of indicators according to their strategic importance has been studied very little thus far.

Results for RQ2

For RQ2, the sampled papers were reviewed to distinguish papers with performance indicators from papers without performance indicators. A further distinction was made between indicators found with operationalization (i.e., concretization by means of a question or formula) and those without operationalization. We note that for many indicators, no operationalization was available. We discovered that only 30 of the 76 sampled papers contained some type of performance indicator (namely 3, 5, 6, 7, 11, 16, 17, 18, 20, 22, 26, 27, 30, 35, 37, 40, 43, 46, 49, 51, 52, 53, 55, 57, 58, 59, 60, 66, 71, 73). In total, approximately 380 individual indicators were found throughout all the sampled papers (including duplicates), which were combined based on similarities and modified to use more generic terms. This resulted in 87 indicators with operationalization (“Appendix 3 ”) and 48 indicators without operationalization (“Appendix 4 ”).

The 87 indicators with operationalization were then categorized according to the four perspectives of the BSC (i.e., financial, customer, business processes, and "learning and growth") (Kaplan and Norton 1996, 2001) and the four established dimensions of process performance (i.e., time, cost, quality, and flexibility) (Dumas et al. 2013). In particular, based on the identified indicators, we derived 11 sub-perspectives within the initial BSC perspectives to better emphasize the focus of the indicators and the different target groups (Table 5): (1) financial performance for shareholders and top management, (2) customer-related performance, (3) supplier-related performance, (4) society-related performance, (5) general process performance, (6) time-related process performance, (7) cost-related process performance, (8) process performance related to internal quality, (9) flexibility-related process performance, (10) (digital) innovation performance, and (11) employee-related performance.

Table 5

A description of the observed performance perspectives, linked to the Balanced scorecard (Kaplan and Norton 1996 , 2001 )

Initial BSC perspectives | Observed perspectives based on target groups and focus | Scope of the performance indicators
1. Financial performance | 1.1 Financial performance for shareholders and top management | Strategic financial data
2. Customer-related performance | 2.1 Customer performance | Outcomes of external quality or meeting end user needs
2. Customer-related performance | 2.2 Supplier performance | External collaboration and process dependencies
2. Customer-related performance | 2.3 Society performance | Outcomes for other stakeholders and the environment during process work
3. Internal business process performance | 3.1 General process performance | Descriptive data of process work, not related to time, costs, quality or flexibility
3. Internal business process performance | 3.2 Time-related process performance | Time-related data of process work
3. Internal business process performance | 3.3 Cost-related process performance | Operational financial data
3. Internal business process performance | 3.4 Process performance related to internal quality | Capability of meeting end user needs and internal user needs
3. Internal business process performance | 3.5 Flexibility-related process performance | Data of changes or variants in process work
4. Performance related to "learning and growth" | 4.1 (Digital) innovation performance | Innovation of processes and innovation projects
4. Performance related to "learning and growth" | 4.2 Employee performance | Staff contributions to process work and personal development

For reasons of objectivity, the observed performance indicators were assigned to a single perspective starting from recognized frameworks (Kaplan and Norton 1996 , 2001 ; Dumas et al. 2013 ). Bias was further reduced by following the definitions of Table  5 . Furthermore, the authors of this article first classified the indicators individually and then reached consensus to obtain a more objective categorization.

Additional rationale for the identification of 11 performance perspectives is presented in Table  6 , which compares our observations with the perspectives adopted by the most commonly used performance measurement models (see “ Theoretical background ” section). This comparison allows us to highlight similarities and differences with other respected models. In particular, Table  6 shows that we did not observe a dedicated perspective for strategy (EFQM 2010 ) and that we did not differentiate between financial indicators and market indicators (Cross and Lynch 1988 ). Nonetheless, the similarities in Table  6 prevail. For instance, Cross and Lynch ( 1988 ) also acknowledge different process dimensions. Further, Kueng ( 2000 ) and the EFQM ( 2010 ) also differentiate employee performance from innovation performance, and they both add a separate perspective for results related to the entire society.

Table 6

The comparison of our observed performance perspectives with the perspectives taken in the most commonly used performance measurement models in the literature (Kaplan and Norton 1996 , 2001 ; EFQM 2010 ; Kueng 2000 ; Cross and Lynch 1988 )

Balanced scorecard (Kaplan and Norton 1996, 2001) | EFQM (2010) | Kueng (2000) | Cross and Lynch (1988) | Our observed performance perspectives
Financial perspective | Key results | Financial view | Financial measures; market measures | Financial performance for shareholders and top management
Customer perspective | Customer results | Customer view | Customer satisfaction | Customer performance; supplier performance; society performance
Internal business processes perspective | Enablers (processes/products/services, people, strategy, partnerships/resources, leadership) | Overall process performance based on the other views as driving forces | Flexibility; productivity; quality; delivery; process time; cost | General process performance; time-related process performance; cost-related process performance; process performance related to internal quality; flexibility-related process performance
"Learning and growth" perspective | People results; learning, creativity and innovation | Employee view; innovation view | - | (Digital) innovation performance; employee performance
- | Society results | Societal view | - | Society performance as a sub-perspective of customer performance (see above)

Figure  10 summarizes the number of performance indicators that we identified in the process literature per observed performance perspective. Not surprisingly, the initial BSC perspective of internal business process performance contains most of the performance indicators: 29 of 87 indicators. However, the other initial BSC perspectives are also covered by a relatively high number of indicators: 16 indicators for both financial performance and customer-related performance and 26 indicators for “learning and growth”. This result confirms the close link between process performance and organizational performance, as mentioned in the introduction.

Fig. 10 The number of performance indicators with operationalization per performance perspective

A more detailed comparison of the perspectives provides interesting refinements to the state of the research. More specifically, Fig. 10 shows that five performance perspectives have more than ten indicators in the sample, indicating that academic research focuses more on financial performance for shareholders and top management and on performance related to customers, process time, innovation and employees. On the other hand, fewer than five performance indicators were found in the sample for the perspectives related to suppliers, society, process costs and process flexibility, indicating that the literature focuses less on those perspectives; they remain largely overlooked, possibly due to their newly emerging character.

We must, however, note that the majority of the performance indicators are mentioned in only a few papers. For instance, 59 of the 87 indicators were cited in a single paper, whereas the remainder are mentioned in more than one paper. Eleven performance indicators are frequently mentioned in the process literature (i.e., by five or more papers). These indicators include four indicators of customer-related performance (i.e., customer complaints, perceived customer satisfaction, query time, and delivery reliability), three indicators of time-related process performance (i.e., process cycle time, sub-process turnaround time, and process waiting time), one cost-related performance indicator (i.e., process cost), two indicators of process performance related to internal quality (i.e., quality of internal outputs and deadline adherence), and one indicator of employee performance (i.e., perceived employee satisfaction).

Consistent with the "Performance indicators" section, the different performance perspectives combine financial or cost-related indicators with non-financial data, and the non-financial indicators predominate in our sample. Furthermore, the sample includes a combination of objective and subjective indicators, with the vast majority being objective. Only eight indicators explicitly refer to qualitative scales; for instance, to measure the degree of satisfaction of the different stakeholder groups. For all the other performance indicators, a quantifiable alternative is provided.

It is important to remember that a distinction was made between the indicators with operationalization and those without operationalization. The list of 87 performance indicators, as given in “Appendix 3 ”, can thus be extended with those indicators for which operationalization is missing in the reviewed literature. Specifically, we found 48 additional performance indicators (“Appendix 4 ”) that mainly address supplier performance, process performance related to costs and flexibility, and the employee-related aspects of digital innovation. Consequently, this structured literature review uncovered a total of 135 performance indicators that are directly or indirectly linked to business process performance.

Finally, the total list of 135 performance indicators was evaluated for its comprehensiveness by comparing the identified indicators with other BSC variants that were not included in our sample. More specifically, based on a random search, we looked for two BSC variants in the Web of Science that did not fit the search strategy of this structured literature review: one that did not fit the search term of “business process*” (Hubbard 2009 ) and another that did not fit any of the performance-related search terms of “performance indicator*”, “performance metric*” or “performance measur*” (Bronzo et al. 2013 ). These two BSC variants cover 30 and 17 performance indicators, respectively, and are thus less comprehensive than the extended list presented in this study. Most of the performance indicators suggested by the two BSC variants are either directly covered in our findings or could be derived after recalculations. Only five performance indicators could not be linked to our list of 135 indicators, and these suggest possible refinements regarding (1) the growth potential of employees, (2) new markets, (3) the social performance of suppliers, (4) philanthropy, or (5) industry-specific events.
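Conceptually, this comprehensiveness check is a simple coverage comparison between two sets of indicators, as sketched below with made-up indicator names; the actual comparison also allowed indicators that could be derived after recalculations.

```python
# Hypothetical indicator names, for illustration only.
our_indicators = {"process cycle time", "customer complaints", "process cost",
                  "perceived employee satisfaction"}
bsc_variant = {"customer complaints", "process cost", "philanthropy"}

covered = bsc_variant & our_indicators     # already present in our extended list
missing = bsc_variant - our_indicators     # candidate refinements (cf. the five refinements found)

print("covered:", sorted(covered))
print("missing:", sorted(missing))
```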

This structured literature review culminated in an extended list of 140 performance indicators: 87 indicators with operationalization, 48 indicators without operationalization and 5 refinements derived from two other BSC variants. The evaluation of our findings against two BSC variants validated our work in the sense that we present a more exhaustive list of performance indicators, with operationalization for most, and that only minor refinements could be added. However, the comprehensiveness of our findings can be claimed only to a certain extent given the limitations of our predefined search strategy and the lack of empirical validation by subject-matter experts or organizations. Notwithstanding these limitations, conclusions can be drawn from the large sample of 76 papers to respond to the research questions (RQs).

Regarding RQ1 on the state of the research on business process performance measurement, the literature review provided additional evidence for the omnipresence of the BSC. Most of the sampled papers mentioned or used the BSC as a starting point and basis for their research and analysis. The literature study also showed a variety of research topics, ranging from behavioral-science to design-science research and from a focus on performance measurement models to a focus on performance indicators. In addition to inconsistencies in the terminology used to describe performance indicators and targets, the main weakness uncovered in this literature review deals with the concretization of performance indicators supplementing performance measurement systems. The SLR results suggest that none of the reviewed papers offers a comprehensive measurement framework, specifically one that includes and extends the BSC perspectives, is process-driven and encompasses as many concrete performance indicators as possible. Such a comprehensive framework could be used as a checklist or a best practice for reference when defining specific performance indicators. Hence, the current literature review offers a first step towards such a comprehensive framework by means of an extended list of possible performance indicators bundled in 11 performance perspectives (RQ2).

Regarding RQ2 on process performance indicators, the literature study revealed that scholars measure performance in many different ways and without sharing much detail regarding the operationalization of the measurement instruments, which makes a comparison of research results more difficult. As such, the extended list of performance indicators is our main contribution and fills a gap in the literature by providing a detailed overview of performance indicators mentioned or used in the literature on business process performance. Another novel aspect is that we responded to the criticism of missing perspectives in the original BSC (EFQM 2010; Hubbard 2009; Kueng 2000) and identified the narrow view of performance typically taken in the process literature (Dumas et al. 2013). Figures 1 and 2 are now combined and extended in a more exhaustive way, namely by means of more perspectives than are offered by other attempts (Table 6), by explicitly differentiating between performance drivers (or lead indicators) and performance outcomes (or lag indicators), and by considering concrete performance indicators.

Our work also demonstrated that all perspectives in the BSC (Kaplan and Norton 1996 , 2001 ) relate to business process performance to some degree. In other words, while the BSC is a strategic tool for organizational performance measurement, it is actually based on indicators that originate from business processes. More specifically, in addition to the perspective of internal business processes, the financial performance perspective typically refers to sales or revenues gained while doing business, particularly after executing business processes. The customer perspective relates to the implications of product or service delivery, specifically to the interactions throughout business processes, whereas the “learning and growth” perspective relates to innovations in the way of working (i.e., business processes) and the degree to which employees are prepared to conduct and innovate business processes. The BSC, however, does not present sub-perspectives and thus takes a more high-level view of performance. Hence, the BSC can be extended based on other categorizations made in the reviewed literature; for instance, related to internal/external, strategic/operational, financial/non-financial, or cost/time/quality/flexibility.

Therefore, this study refined the initial BSC perspectives into eleven performance perspectives (Fig.  11 ) by applying three other performance measurement models (Cross and Lynch 1988 ; EFQM 2010 ; Kueng 2000 ) and the respected Devil’s quadrangle for process performance (Dumas et al. 2013 ). Additionally, a more holistic view of business process performance can be obtained by measuring each performance perspective of Fig.  11 than can be achieved by using the established dimensions of time, cost, quality and flexibility as commonly proposed in the process literature (Dumas et al. 2013 ). As such, this study demonstrated a highly relevant synergy between the disciplines of process management, organization management and performance management.

Fig. 11 An overview of the observed performance perspectives in the business process literature

We also found that not all the performance perspectives in Fig. 11 are equally represented in the studied literature. In particular, the perspectives related to suppliers, society, process costs and process flexibility seem under-researched thus far.

The eleven performance perspectives (Fig.  11 ) can be used by organizations and scholars to measure the performance of business processes in a more holistic way, considering the implications for different target groups. For each perspective, performance indicators can be selected that fit particular needs. Thus, we do not assert that every indicator in the extended list of 140 performance indicators should always be measured, since “ Theoretical background ” section emphasized the need for organization-dependent indicators aligned with an organization’s strategy. Instead, our extended list can be a starting point for finding and using appropriate indicators for each performance perspective, without losing much time reflecting on possible indicators or ways to concretize those indicators. Similarly, the list can be used by scholars, since many studies in both the process literature and management literature intend to measure the performance outcomes of theoretical constructs or developed artifacts.

Consistent with the above, we acknowledge that the observed performance indicators originate from different models and paradigms or can be specific to certain processes or sectors. Since our intention is to provide an exhaustive list of indicators that can be applied to measure business process performance, the indicators are not necessarily fully compatible. Instead, our findings allow the recognition of the role of a business context (i.e., the peculiarities of a business activity, an organization or other circumstances). For instance, a manufacturing organization might choose different indicators from our list than a service or non-profit organization (e.g., manufacturing lead time versus friendliness, or carbon dioxide emission versus stakeholder satisfaction).

Another point of discussion is dedicated to the difference between the performance of specific processes (known as “process performance”) and the performance of the entire process portfolio (also called “BPM performance”). While some indicators in our extended list clearly go beyond a single process (e.g., competence-related indicators or employee absenteeism), it is our opinion that the actual performance of multiple processes can be aggregated to obtain BPM performance (e.g., the sum of process waiting times). This distinction between (actual) process performance and BPM performance is useful; for instance, for supplementing models that try to predict the (expected) performance based on capability development, such as process maturity models (e.g., CMMI) and BPM maturity models (Hammer 2007 ; McCormack and Johnson 2001 ). Nonetheless, since this study has shown a close link between process performance, BPM performance, and organizational performance, it seems better to refer to different performance perspectives than to differentiate between such performance types.
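A minimal sketch of how actual process-level measurements could be rolled up into a BPM-level figure, in line with the aggregation suggested above (e.g., summing process waiting times); the data structure and numbers are illustrative assumptions.

```python
from statistics import mean

# Actual waiting times (in hours) measured per business process in the portfolio.
process_waiting_times = {
    "order handling": [2.0, 3.5, 1.5],
    "invoicing": [0.5, 1.0],
    "complaint handling": [4.0, 6.0, 5.0],
}

# Process performance: one value per process.
per_process = {name: mean(times) for name, times in process_waiting_times.items()}

# BPM performance: an aggregate over the whole process portfolio,
# here the sum of the average waiting times per process.
bpm_waiting_time = sum(per_process.values())

print(per_process)       # e.g., {'order handling': 2.33..., 'invoicing': 0.75, 'complaint handling': 5.0}
print(bpm_waiting_time)  # approx. 8.08
```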

In future research, the comprehensiveness of the extended list of performance indicators can be empirically validated by subject-matter experts. Additionally, case studies can be conducted in which organizations apply the list as a supplement to performance measurement models in order to facilitate the selection of indicators for their specific business context. The least covered perspectives in the academic research also seem to be those that are newly emerging (namely, the perspectives related to close collaboration with suppliers, society/sustainability and process flexibility or agility), and these need more attention in future research. Another research avenue is to elaborate on the notion of a business context; for instance, by investigating what it means to have a strategic fit (Venkatraman 1989 ) in terms of performance measurement and which strategies (Miller and Friesen 1986 ; Porter 2008 ; Treacy and Wiersema 1993 ) are typically associated with which performance indicators. Additionally, the impact of environmental aspects, such as market velocity (Eisenhardt and Martin 2000 ), on the choice of performance indicators can be taken into account in future research.

Business quotes such as “If you cannot measure it, you cannot manage it” or “What is measured improves” (P. Drucker) are sometimes criticized because not all important things seem measurable (Ryan 2014 ). Nonetheless, given the perceived need of managers to measure their business and the wide variety of performance indicators (i.e., ranging from quantitative to qualitative and from financial to non-financial), this structured literature review has presented the status of the research on business process performance measurement. This structured approach allowed us to detect weaknesses or inadequacies in the current literature, particularly regarding the definition and concretization of possible performance indicators. We continued by taking a holistic view of the categorization of the observed performance indicators (i.e., measures or metrics) into 11 performance perspectives based on relevant performance measurement models and established process performance dimensions.

The identified performance indicators within the 11 perspectives constitute an extended list from which practitioners and researchers can select appropriate indicators depending on their needs. In total, the structured literature review resulted in 140 possible performance indicators: 87 indicators with operationalization, 48 additional indicators that need further concretization, and 5 refinements based on other Balanced Scorecard (BSC) variants. As such, the 11 performance perspectives with related indicators can be considered a conceptual framework that was derived from the current process literature and theoretically validated by established measurement approaches in organization management.

Future research can empirically validate the conceptual framework by involving subject-matter experts to assess the comprehensiveness of the extended list and refine the missing concretizations, and by undertaking case studies in which the extended list can be applied by specific organizations. Other research avenues exist to investigate the link between actual process performance and expected process performance (as measured in maturity models) or the impact of certain strategic or environmental aspects on the choice of specific performance indicators. Such findings are needed to supplement and enrich existing performance measurement systems.

Authors’ contributions

AVL initiated the conception and design of the study, while AS was responsible for the collection of data (sampling) and identification of performance indicators. The analysis and interpretation of the data was conducted by both authors. AVL was involved in drafting and coordinating the manuscript, and AS in reviewing it critically. Both authors read and approved the final manuscript.

Acknowledgements

We thank American Journal Experts (AJE) for English language editing.

Competing interests

The authors declare that they have no competing interests.

Availability of data and materials

Consent for publication

Not applicable.

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Abbreviations

BH: behavioral science
BPM: business process management
BSC: balanced scorecard
DS: design-science
RQ: research question
SLR: structured literature review
TO: keyword in topic
TI: keyword in title

Appendix 1: The final list of sampled papers

See Table 7.

Table 7

The final list of sampled papers (N = 76)

1. Huang SY, Lee CH, Chiu AA, Yen DC (2015) How business process reengineering affects information technology investment and employee performance under different performance measurement. Inf Syst Front 17(5):1133–1144. doi: 10.1007/s10796-014-9487-4
2. Padua SID, Jabbour CJC (2015) Promotion and evolution of sustainability performance measurement systems from a perspective of business process management: From a literature review to a pentagonal proposal. Bus Process Manag J 21(2):403–418. doi:10.1108/BPMJ-10-2013-0139
3. Rinaldi M, Montanari R, Bottani E (2015) Improving the efficiency of public administrations through business process reengineering and simulation: A case study. Bus Process Manag J 21(2):419–462. doi:10.1108/BPMJ-06-2014-0054
4. Camara MS, Ducq Y, Dupas R (2014) A methodology for the evaluation of interoperability improvements in inter-enterprises collaboration based on causal performance measurement models. Int J Comput Integr Manuf 27(2):103–119
5. Lehnert M, Linhart A, Röglinger M (2014) Chopping down trees versus sharpening the axe—Balancing the development of BPM capabilities with process improvement. In: Sadiq S, Soffer P, Völzer H (Eds) BPM 2014. LNCS 8659. Springer, Switzerland, pp 151–167
6. del-Rio-Ortega A, Resinas M, Cabanillas C, Ruiz-Cortes A (2013) On the definition and design-time analysis of process performance indicators. Inf Syst 38(4):470–490
7. Balaban N, Belic K, Gudelj M (2011) Business process performance management: theoretical and methodological approach and implementation. Manag Inf Syst 6(4):003–009
8. Glykas M (2013) Fuzzy cognitive strategic maps in business process performance measurement. Expert Syst Appl 40(1):1–14. doi:10.1016/j.eswa.2012.01.078
9. Hernaus T, Bach MP, Bosilj-Vuksic V (2012) Influence of strategic approach to BPM on financial and non-financial performance. Balt J Manag 7(4):376–396. doi:10.1108/17465261211272148
10. Akyuz GA, Erkan TE (2010) Supply chain performance measurement: a literature review. Int J Prod Res 48(17):5137–5155. doi:10.1080/00207540903089536
11. Han KH, Choi SH, Kang JG, Lee G (2010) Performance-centric business activity monitoring framework for continuous process improvement. AIKED Proceedings of WSEAS, pp 40–45. Available via . Accessed Apr 2016
12. Han KH, Kang JG, Song M (2009) Two-stage process analysis using the process-based performance measurement framework and business process simulation. Expert Syst Appl 36(3):7080–7086. doi:10.1016/j.eswa.2008.08.035
13. Cheng MY, Tsai HC, Lai YY (2009) Construction management process reengineering performance measurements. Autom Constr 18(2):183–193. doi:10.1016/j.autcon.2008.07.005
14. Alfaro JJ, Rodriguez-Rodriguez R, Verdecho MJ, Ortiz A (2009) Business process interoperability and collaborative performance measurement. Int J Comput Integr Manuf 22(9):877–889. doi:10.1080/09511920902866112
15. Pakseresht M, Seyyedi MA, Zade MM, Gardesh H (2009) Business process measurement model based on the fuzzy multi agent systems. AIKED Proceedings of WSEAS, pp 501–506
16. Bosilj-Vuksic V, Milanovic L, Skrinjar R, Indihar-Stemberger M (2008) Organizational performance measures for business process management: A performance measurement guideline. Tenth International Conference on Computer Modeling and Simulation (UKSIM Proceedings), pp 94–99. doi:10.1109/UKSIM.2008.114
17. Wetzstein B, Ma Z, Leymann F (2008) Towards measuring key performance indicators of semantic business processes. In: Abramowicz W, Fensel D (Eds) BIS 2008, LNBIP vol 7. Springer, Berlin Heidelberg, pp 227–238. doi:10.1007/978-3-540-79396-0_20
18. Glavan LM (2012) Understanding process performance measurement systems. Bus Sys. Res J 2(2):25–38. doi:10.2478/v10305-012-0014-0
19. vom Brocke J (2007) Service portfolio measurement: evaluating financial performance of service-oriented business processes. Int J Web Serv Res 4(2):1–33
20. Korherr B, List B (2007a) Extending the EPC with performance measures. ACM Symposium on Applied Computing, pp 1265–1266
21. Korherr B, List B (2007b) Extending the EPC and the BPMN with business process goals and performance measures. ICEIS Proceedings, pp 287–294
22. Herzog NV, Polajnar A, Pizmoht P (2006) Performance measurement in business process re-engineering. J Mech Eng 52(4):210–224
23. Korherr B, List B (2006) Extending the UML 2 activity diagram with business process goals and performance measures and the mapping to BPEL. In: Roddick JF et al. (Eds) ER Workshops 2006. LNCS, vol 4231. Springer, Berlin Heidelberg, pp 7–18. doi:10.1007/11908883_4
24. Lenz K, Mevius M, Oberweis A (2005) Process-oriented business performance management with Petri nets. IEEE Proceedings, pp 89–92
25. Kuwaiti ME (2004) Performance measurement process: definition and ownership. Int J Oper Prod Manag 24(1):55–78
26. Kutucuoglu KY, Hamali J, Sharp JM, Irani Z (2002) Enabling BPR in maintenance through a performance measurement system framework. Int J Oper Prod Manag 14(1):33–52. doi:10.1023/A:1013870802492
27. Jagdev H, Bradley P, Molloy O (1997) A QFD based performance measurement tool. Comput Ind 33(2–3):357–366. doi:10.1016/S0166-3615(97)00041-9
28. Bititci US, Carrie AS, McDevitt L (1997) Performance management: A business process view. IFIP WG 5.7 Proceedings, pp 284–297
29. del-Rio-Ortega A, Cabanillas C, Resinas M, Ruiz-Cortes A (2013) PPINOT tool suite: a performance management solution for process-oriented organisations. In: Basu S et al. (Eds) ICSOC Proceedings. LNCS, vol 8274. Springer, Berlin Heidelberg, pp 675–678. doi:10.1007/978-3-642-45005-1_58
30. Mirsu DB (2013) Monitoring help desk process using KPI. In: Balas VE et al. (Eds) Soft Comput Appl 195:637–647
31. Koetter F, Kochanowski M (2012) Goal-oriented model-driven business process monitoring using ProGoalML. In: Abramowicz W et al. (Eds) BIS 2012. LNBIP, vol 117. Springer, Berlin Heidelberg, pp 72–83. doi:10.1007/978-3-642-30359-3_7
32. del-Rio-Ortega A, Resinas M, Duran A, Ruiz-Cortes A (2012) Defining process performance indicators by using templates and patterns. In: Barros A, Gal A, Kindler E (Eds) BPM 2012. LNCS, vol 7481. Springer, Berlin Heidelberg, pp 223–228. doi:10.1007/978-3-642-32885-5_18
33. Arigliano F, Bianchini D, Cappiello C, Corallo A, Ceravolo P, Damiani E, De Antonellis V, Pernici B, Plebani P, Storelli D, Vicari C (2012) Monitoring business processes in the networked enterprise. In: Aberer K, Damiani E, Dillon T (Eds) SIMPDA 2011. LNBIP, vol 116. Springer, Berlin Heidelberg, pp 21–38
34. Wetzstein B, Leitner P, Rosenberg F, Dustdar S, Leymann F (2011) Identifying influential factors of business process performance using dependency analysis. Enterp Inf Syst 5(1):79–98. doi:10.1080/17517575.2010.493956
35. Shamsaei A, Pourshahid A, Amyot D (2011) Business process compliance tracking using key performance indicators. In: zur Muehlen M, Su J (Eds) BPM 2010 Workshops. LNBIP, vol 66. Springer, Berlin Heidelberg, pp 73–84
36. del-Rio-Ortega A, Resinas M, Ruiz-Cortes A (2010) Defining process performance indicators: An ontological approach. In: Meersman R et al. (Eds) OTM 2010, Part 1. LNCS, vol 6426. Springer, Berlin Heidelberg, pp 555–572
37. Pourshahid A, Amyot D, Peyton L, Ghanavati S, Chen P, Weiss M, Forster AJ (2009) Business process management with the user requirements notation. Electron Commer Res 9(4):269–316. doi:10.1007/s10660-009-9039-z
38. Wetzstein B, Leitner P, Rosenberg F, Brandic I, Dustdar S, Leymann F (2009) Monitoring and analyzing influential factors of business process performance. IEEE EDOC Proceedings, pp 141–150. doi:10.1109/EDOC.2009.18
39. Liu B, Fan Y, Huang S (2008) A service-oriented business performance evaluation model and the performance-aware service selection method. Concurr Comput Pract Exp 20(15):1821–1836
40. Longo A, Motta G (2006) Design processes for sustainable performances: a model and a method. In: Bussler C et al. (Eds) BPM 2005 Workshops. LNCS, vol 3812. Springer, Berlin Heidelberg, pp 399–407
41. Zakarian A, Wickett P, Siradeghyan Y (2006) Quantitative model for evaluating the quality of an automotive business process. Int J Prod Res 44(6):1055–1074. doi:10.1080/00207540500371949
42. Wieland U, Fischer M, Pfitzner M, Hilbert A (2015) Process performance measurement system—towards a customer-oriented solution. Bus Process Manag J 21(2):312–331. doi:10.1108/BPMJ-04-2014-0032
43. Vernadat F, Shah L, Etienne A, Siadat A (2013) VR-PMS: a new approach for performance measurement and management of industrial systems. Int J Prod Res 51(23–24):7420–7438
44. Zutshi A, Grilo A, Jardim-Goncalves R (2012) The business interoperability quotient measurement model. Comput Ind 63(5):389–404. doi:10.1016/j.compind.2012.01.002
45. Ciemleja G, Lace N (2011) The model of sustainable performance of small and medium-sized enterprise. Eng Econ 22(5):501–509. doi: 10.5755/j01.ee.22.5.968
46. Chimhamhiwa D, van der Molen P, Mutanga O, Rugege D (2009) Towards a framework for measuring end to end performance of land administration business processes—A case study. Comput Environ Urban Syst 33(4):293–301. doi: 10.1016/j.compenvurbsys.2009.04.001
47. Albayrak CA, Gadatsch A, Olufs D (2009) Life cycle model for IT performance measurement: a reference model for small and medium enterprises (SME). In: Dhillon G, Stahl BC, Baskerville R (Eds) CreativeSME 2009. IFIP AICT, vol 301, pp 180–191. Available via
48. Hinrichs N, Barke E (2008) Applying performance management on semiconductor design processes. IEEE IEEM Proceedings, pp 278–281. doi:10.1109/IEEM.2008.4737874
49. Adams TM, Danijarsa M, Martinelli T, Stanuch G, Vonderohe A (2003) Performance measures for winter operations. Transp Res Rec J Transp Res Board 1824:87–97. doi: 10.3141/1824-10
50. Kueng P (2000) Process performance measurement system: a tool to support process-based organizations. Total Qual Manag 11(1):67–85. doi: 10.1080/0954412007035
51. Kueng P, Krahn AJW (1999) Process performance measurement system: some early experiences. J Scien Ind Res 58(3–4):149–159
52. Walsh P (1996) Finding key performance drivers: some new tools. Total Qual Manag 7(5):509–519. doi: 10.1080/09544129610612
53. Fogarty DW (1992) Work in process: performance measures. Int J Prod Econ 26(1–3):169–172. doi:10.1016/0925-5273(92)90059-G
54. Gunasekaran A, Patel C, McGaughey RE (2004) A framework for supply chain performance measurement. Int J Prod Econ 87(3):333–347. doi:10.1016/j.ijpe.2003.08.003
55. Gunasekaran A, Kobu B (2007) Performance measures and metrics in logistics and supply chain management: a review of recent literature (1995–2004) for research and applications. Int J Prod Res 45(12):37–41. doi:10.1080/00207540600806513
56. Wang CH, Lu IY, Chen CB (2010) Integrating hierarchical balanced scorecard with non-additive fuzzy integral for evaluating high technology firm performance. Int J Prod Econ 128(1):413–426. doi:10.1016/j.ijpe.2010.07.042
57. Wu HY (2012) Constructing a strategy map for banking institutions with key performance indicators of the balanced scorecard. Eval Program Plann 35(3):303–320. doi:10.1016/j.evalprogplan.2011.11.009
58. Martinsons M, Davison R, Tse D (1999) The balanced scorecard: a foundation for the strategic management of information systems. Decis Support Syst 25(1):71–88. doi: 10.1016/S0167-9236(98)00086-4
59. Grigoroudis E, Orfanoudaki E, Zopounidis C (2012) Strategic performance measurement in a healthcare organisation: A multiple criteria approach based on balanced scorecard. Omega 40(1):104–119. doi:10.1016/j.omega.2011.04.001
60. Bhagwat R, Sharma MK (2007) Performance measurement of supply chain management: a balanced scorecard approach. Comput Ind Eng 53(1):43–62. doi:10.1016/j.cie.2007.04.001
61. Al-Mashari M, Al-Mudimigh A, Zairi M (2003) Enterprise resource planning: a taxonomy of critical factors. Eur J Oper Res 146(2):52–364. doi:10.1016/S0377-2217(02)00554-4
62. Jalali NSG, Aliahmadi AR, Jafari EM (2011) Designing a mixed performance measurement system for environmental supply chain management using evolutionary game theory and balanced scorecard: a case study of an auto industry supply chain. Resour Conserv Recycl 55(6):593–603. doi: 10.1016/j.resconrec.2010.10.008
63. Huang HC (2009) Designing a knowledge-based system for strategic planning: a balanced scorecard perspective. Expert Syst Appl 36(1):209–218. doi:10.1016/j.eswa.2007.09.046
64. Bosilj-Vuksic V, Glavan LM, Susa D (2015) The role of process performance measurement in BPM adoption outcomes in Croatia. Econ Bus Rev 17(1):117–143. Available via
65. Jahankhani H, Ekeigwe JI (2005) Adaptation of the balanced scorecard model to the IT functions. IEEE ICITA Proceedings, pp 784–787. doi:10.1109/ICITA.2005.52
66. Spremic M, Zmirak Z, Kraljevic K (2008) IT and business process performance management: case study of ITIL implementation in finance service industry. ITI Proceedings, pp 243–250. doi:10.1109/ITI.2008.4588415
67. Li S, Zhu H (2008) Generalized stochastic workflow net-based quantitative analysis of business process performance. IEEE ICINFA Proceedings, pp 1040–1044. doi:10.1109/ICINFA.2008.4608152
68. Cardoso ECS (2013) Towards a methodology for goal-oriented enterprise management. IEEE EDOC Proceedings, pp 94–103. doi:10.1109/EDOCW.2013.17
69. Tung A, Baird K, Schoch HP (2011) Factors influencing the effectiveness of performance measurement systems. Int J Oper Prod Manag 31(12):1287–1310. doi:10.1108/01443571111187457
70. Koetter F, Kochanowski M (2015) A model-driven approach for event-based business process monitoring. Inf Syst E-bus Manag 13(1):5–36. doi:10.1007/s10257-014-0233-8
71. Banker RD, Chang H, Janakiraman SN, Konstans C (2004) A balanced scorecard analysis of performance metrics. Eur J Oper Res 154(2):423–436. doi:10.1016/S0377-2217(03)00179-6
72. Peng Y, Zhou L (2011) A performance measurement system based on BSC. In: Zhu M (Ed) ICCIC 2011, Part V. CCIS, vol 235. Springer, Berlin Heidelberg, pp 309–315
73. van Heck G, van den Berg J, Davarynejad M, van Duin R, Roskott B (2010) Improving inventory management performance using a process-oriented measurement framework. In: Quintela Varajao JE et al. (Eds) CENTERIS 2010, Part I. CCIS, vol 109. Springer, Berlin Heidelberg, pp 279–288
74. Caputo E, Corallo A, Damiani E, Passiante G (2010) KPI modeling in MDA Perspective. In: Meersman R et al. (Eds) OTM 2010 Workshops. LNCS, vol 6428. Springer, Berlin Heidelberg, pp 384–393. doi:10.1007/978-3-642-16961-8_59
75. Behrouzi F, Shaharoun AM, Ma'aram A (2014) Applications of the balanced scorecard for strategic management and performance measurement in the health sector. Aust Heal Rev 38(2):208–217. doi:10.1071/AH13170
76. Skrinjar R, Indihar-Stemberger M (2009) Improving organizational performance by raising the level of business process orientation maturity: empirical test and case study. In: Barry C et al. (Eds) Information Systems Development: Challenges in Practice, Theory and Education. Springer, Heidelberg, pp 723–740. doi:10.1007/978-0-387-78578-3_11

Appendix 2: The mapping of the structured literature review

The mapping details per sampled paper can be found here.

https://drive.google.com/file/d/0B_2VpjwsRLrlRHhfRHJ4ZFBWdEE/view?usp=sharing .

See Table  8 .

Table 8

The list of performance indicators with operationalization

Perspectives | Indicators/measures/metrics | Operationalization | Papers
1/Financial performance
Sales performance | [Achieved total sales]/[planned sales] * 100 | 7
Inventory turnover | [Annual total sales]/[average inventory] * 100 | 59
Market share | % of growth in the last years; [Sales volumes of products and services]/[total market demands] * 100 | 16, 57
Earnings per share (EPS) | [After-tax net earnings − preferred share dividends]/[weighted average nr of shares outstanding] | 57
Average order value | [Aggregated monthly sales]/[monthly nr of orders] | 7
Order growth | [Number of orders in the current month]/[total nr of orders] | 7
Revenue growth | [Revenue from new sources]/[total revenue] * 100 | 16
Operating revenue | Sales revenues | 57
Return on investment (ROI) | [After-tax profit or loss]/[total costs]; [Revenue − cost]/[cost] | 57, 55
Return on assets (ROA) | [After-tax profit or loss]/[average total assets] | 57, 16
Circulation of assets | [Operating revenues]/[assets] * 100 | 59
Current ratio | [Current assets]/[current liabilities] * 100 | 59
Net profit margin | [After-tax profit or loss]/[total operating revenues]; [Total operating revenues − operating expenses − non-operating expenses]/[total operating revenues] | 16, 57, 59
Profit per customer | [After-tax earnings]/[total nr of online, offline or all customers] | 57
Management efficiency | [Operating expenses]/[operating revenues] * 100 | 59
Debt ratio, leverage level | [Debts]/[assets] | 57, 59
2/Customer performance
2.1/Customer performance
Customer complaints, return rate | Nr of complaints, criticisms or notifications due to dissatisfaction about or non-compliance of orders, products and services; Nr or % of orders returned, rework or services to be redone (e.g., incorrect deliveries, incorrect documentation) | 27, 30, 37, 40, 51, 57, 59
Perceived customer satisfaction | Qualitative scale on general satisfaction (e.g., Likert), possibly indexed as the weighted sum of judgements on satisfaction dimensions (e.g., satisfaction with products and services, perceived value, satisfying end-user needs, being the preferred suppliers for products or services, responsiveness, appearance, cleanliness, comfort, friendliness, communication, courtesy, competence, availability, security) | 5, 16, 22, 40, 46, 11, 55, 57, 59, 58, 60
Perceived customer easiness | Qualitative scale (e.g., Likert) on the degree of easiness to find information and regulations, to fill out applications, and to understand the presentation of bureaucratic language | 40
Customer retention | Nr of returning customers | 57
Customer growth | Nr of new customers | 57
Customer query time, resolution time, response time | Average time between issuing and addressing a customer problem or inquiry for information | 30, 40, 46, 58, 59, 60
Customer waiting time | [Time for information about a product or service] + [time for following status updates] + [time for receiving the product or service]; Max nr of customers in the queue or waiting room; [Handled requests]/[total requests] | 3, 40, 52, 59
Punctuality, delivery reliability | [Late deliveries or requests]/[total nr of deliveries or requests]; % of On-time deliveries according to the planning or schedule | 16, 18, 26, 27, 40, 51, 55, 60, 73
Payment reliability | [Nr of collected orders paid within due date]/[total nr of orders] * 100 | 7
Information access cost, information availability | Information provided/not provided; Time spent in asking for information about a product or service (in days); Time required to get updated about the status of a product or service; Cost of information (euro) | 40
Customer cost | Product cost or the cost of using a service (euro) | 40
2.2/Supplier performance
External delays | Nr of delayed deliveries due to outage or delays of third-party suppliers | 26, 73
External mistakes | % of Incorrect orders received | 27
Transfers, partnerships | % of Cases transferred to a partner | 59
2.3/Society performance
Perceived society satisfaction | Qualitative scale on general satisfaction (e.g., Likert), possibly indexed as the weighted sum of judgements on satisfaction dimensions; % of Society satisfied with the organization’s outcomes | 46
Societal responsibility, sustainability, ecology, green | Number of realized ecology measures (e.g., waste, carbon dioxide, energy, water); Quantity of carbon dioxide emitted per man month | 51
3/Business process performance
3.1/General process performance
Process complexity | Number of elementary operations to complete the task | 40
General process information | Nr of orders received or shipped per time unit; Nr of incoming calls per time unit; Nr of process instances | 6, 27, 52
Order execution | [Nr of executed orders]/[total nr of orders] * 100 | 7
Perceived sales performance | Qualitative scale (e.g., Likert) on the successful promotion of both efficiency and effectiveness of sales | 57
Perceived management performance | Qualitative scale (e.g., Likert) on the improvement of effectiveness, efficiency, and quality of each objective and routine tasks | 57
Surplus inventory | % of current assets; Value of surplus inventory (e.g., pharmaceutical material) to total assets ratio | 59
Occupancy rate | Average % occupancy, e.g., of hospital beds | 59
3.2/Time-related process performance
Throughput | Nr of processed requests/time unit | 46
Process duration, efficiency | [Σ(finish date − start date) of all finished business objects]/[number of all finished business objects] | 17
Process cycle time, order cycle time, process duration, average lifetime, completion time, process lead time | Time for handling a process instance end-to-end; Aggregated time of all activities associated with a process (per instance); [Application submission time] − [application response time] | 5, 6, 11, 37, 40, 43, 46, 60, 73
Average sub-process turnaround time, task time, activity time | [Sub-process start time] − [Sub-process finish time] | 6, 37, 40, 52, 60
Processing time | Time that actual work is performed on a request | 46
Average order execution time, order fulfillment time, order lead time | [Σ(Dispatch time − creation time)]/[total number of orders]; [order entry time] + [order planning time] + [order sourcing, assembly and follow-up time] + [finished goods delivery time] | 7, 46, 60, 73
Average order collection time | [Σ(Collection time − creation time)]/[number of collected orders] | 7
Average order loading time | [Σ(Final distribution time − distribution creation time)]/[number of loaded orders] | 7
Process waiting time, set-up time | Average time lag between sub-processes, when a process instance is waiting for further processing; Time between the arrival of a request and the start of work on it (=time spent on hold); Average waiting time for all products and services | 3, 5, 20, 37, 46, 52
Manufacturing cycle efficiency | [setup time + (nr of parts * operation time)]/[manufacturing lead time] | 53
Manufacturing lead time | [setup time + (nr of parts * operation time) + queue time + wait time + movement time] | 18, 53, 55
Value added efficiency | [Operation time]/[manufacturing lead time] | 53
3.3/Cost-related process performance
Activity cost | Cost of carrying out an activity | 46
Process cost, cost of quality, cost of producing, customer order fulfilment cost | Sum of all activity costs associated with a process (per instance) | 5, 11, 16, 18, 20, 22, 26, 27, 40, 43, 46
Unit cost | Nr of employees (headcount) per application, product or service | 40
Information sharing cost | [Time for system data entry] + [time for system delivery output] | 40
3.4/Process performance related to internal quality
Quality of internal outputs, external versus internal quality, error prevention | % of instance documents processed free of error; Number of mistakes; [Nr of tasks with errors]/[Total nr of tasks per process]; Nr of syntactic errors; Nr of repeated problems; Presence of non-technical anomaly management (yes/no) | 5, 16, 18, 20, 22, 37, 40, 43, 46, 55, 60, 66
Deadline adherence, schedule compliance, due date performance effectiveness, responsiveness | % of Activity cycle times realized according to the planning or schedule; [Number of finished business objects on time]/[number of all finished business objects] * 100 | 16, 17, 18, 26, 43
Process yield | Multiply the yield per process steps, e.g., (1 − scrap parts/total parts) * (1 − scrap parts/total parts) | 43
Rework time, transaction efficiency | Time to redo work for an incident that was solved partially or totally incorrect the first time; Average time spent on solving problems occurring during transactions | 30, 43, 57
Integration capability | Time to access and integrate information | 40
3.5/Process performance related to flexibility
Special requests | Nr of special cases or requests | 40
4/“Learning and growth”-performance
4.1/(Digital) innovation performance
Degree of digitalization | % Reduction in processing time due to computerization; [Nr of process steps replaced by computer systems]/[Total nr of steps in the entire process]; Nr of digital products or services | 40, 46, 71
Degree of rationalization | % of Procedures and processes systemized by documentation, computer software, etc. | 57
Time for training on the procedure | Measured in hours | 40
Novelty in output | Nr of new product or service items | 57
Customer response | Nr of suggestions provided by customers about products and services | 57
Third-party collaboration | Nr of innovation projects conducted with external parties | 59
Innovation projects | Nr of innovations proposed per quarter year; Nr of innovations implemented per quarter year | 51
IS development efficiency | Nr of change requests (+per type of change or per project); Time spent to repair bugs and finetune new applications; Time required to develop a standard-sized new application; % of Application programming with re-used code | 6, 58, 66
Relative IT/IS budget | [Total IT/IS budget]/[Total revenue of the organization] * 100 | 58
Budget for buying IT/IS | [Budget of IT/IS bought]/[Total budget of the organization] * 100 | 59
Budget for IS training | [IS training budget]/[overall IS budget] * 100 | 58
Budget for IS research | [IS research budget]/[overall IS budget] * 100 | 58
Perceived management competence | Qualitative scale (e.g., Likert) on the improvement in project management, organizational capability, and management by objectives (MBO) | 57
Perceived relationship between IT management and top management | Qualitative scale (e.g., Likert) on the perceived relationship, time spent in meetings between IT and top management, and satisfaction of top management with the reporting on how emerging technologies may be applicable to the organization | 58
4.2/Employee performance
Perceived employee satisfaction | Qualitative scale on general satisfaction (e.g., Likert), possibly indexed as the weighted sum of judgements on satisfaction dimensions; Qualitative scale (e.g., Likert) on satisfaction about hardware and software provided by the organization | 16, 43, 11, 57, 58, 59
Average employee saturation, resource utilization for process work | [Time spent daily on working activities]/[total working time] * 100; [Work time]/[available time]; % of operational time that a resource is busy | 3, 40, 46
Resource utilization for (digital) innovation | IS expenses per employee; % of Resources devoted to IS development; % of Resources devoted to strategic projects | 58
Process users | Nr of employees involved in a process | 37
Working time | Actual time a business process instance is being executed by a role | 20
Workload | Nr of products or services handled per employee | 71
Staff turnover | % of Employees discontinuing to work and replaced, compared to the previous year | 16, 57, 58
Employee retention, employee stability | % of Employees continuing to work in the organization, compared to the previous year | 16, 57, 58, 59
Employee absenteeism | [Total days of absence]/[total working days for all staff] * 100 | 59
Motivation of employees | Average number of overtime hours per employee | 16
Professional training, promotion and personal development | % of Employees trained; % of Employees participated in a training program per year; Nr of professional certifications or training programs per employee | 57, 59, 22
Professional conferences | % of Employees participating in conferences | 59
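
Most of the operationalizations in Table 8 are simple ratio formulas, so they translate directly into code. The minimal sketch below is illustrative only; the figures are invented and the function names are not taken from the sampled papers.

```python
# Illustrative sketch of a few financial operationalizations from Table 8.
# All figures below are made up; the paper numbers in comments refer to Table 8.

def sales_performance(achieved_total_sales: float, planned_sales: float) -> float:
    # [Achieved total sales] / [planned sales] * 100   (paper 7)
    return achieved_total_sales / planned_sales * 100

def inventory_turnover(annual_total_sales: float, average_inventory: float) -> float:
    # [Annual total sales] / [average inventory] * 100   (paper 59)
    return annual_total_sales / average_inventory * 100

def return_on_assets(after_tax_profit: float, average_total_assets: float) -> float:
    # [After-tax profit or loss] / [average total assets]   (papers 57, 16)
    return after_tax_profit / average_total_assets

def management_efficiency(operating_expenses: float, operating_revenues: float) -> float:
    # [Operating expenses] / [operating revenues] * 100   (paper 59)
    return operating_expenses / operating_revenues * 100

print(sales_performance(1_150_000, 1_000_000))   # 115.0 -> target exceeded by 15%
print(return_on_assets(250_000, 2_000_000))      # 0.125
```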

See Table  9 .

Table 9

Additional list of performance indicators without operationalization

Perspectives | Performance indicators/measures/metrics | Papers
1/Financial performance
Selling price | 18, 55
Cash flow | 22
2/Customer performance
2.1/Customer performance
Customer relationship management, direct customer cooperation, efficiency of customer cooperation, establishing and maintaining relationships with the user community | 11, 22, 58
Warranty cost | 55
Delivery cost | 27
Delivery frequency | 18, 60, 73
2.2/Supplier performance
Efficiency of cooperation with vendors, buyer–supplier partnership level, degree of collaboration and mutual assistance, nr of supplier contracts | 11, 60, 73
Information carrying costs, level and degree of information sharing | 60
Supplier rejection rate | 60
Buyer-vendor cost saving initiatives | 60
Delivery frequency | 60
Supplier ability to respond to quality problems | 60
Supplier’s booking in procedures | 60
Supplier lead time against industry norms | 60
3/Business process performance
3.3/Cost-related process performance
Cost of risks | 58
Cost per operating hour, running cost | 18, 60
Material cost | 22
Service cost | 18, 22
Inventory cost (e.g., incoming stock level, work-in-progress, scrap value, finished goods in transit) | 22, 55, 60
Overhead cost | 55
Obsolescence cost | 55
Transportation cost | 55
Maintenance cost | 26
3.4/Process performance related to internal quality
Conformance to specifications | 55
Compliance with regulation | 18, 43, 55
Verification mismatches | 73
Forecasting accuracy, accuracy of scheduling | 55, 60, 73
3.5/Process performance related to flexibility
Process flexibility | 22, 58
General flexibility | 5, 22, 40
Product or service variety | 55
Range of products or services | 60
Modification of products or services, volume mix, resource mix | 18, 22, 55
Flexibility of service systems to meet particular customer needs | 60
Effectiveness of delivery invoice methods | 60
Payment methods | 52
Order entry methods | 60
Responsiveness to urgent deliveries | 60
4/“Learning and growth”-performance
4.1/(Digital) innovation performance
R&D performance, investment in R&D and innovations | 11, 16
New product or service development costs | 22
Knowledge base | 16
4.2/Employee performance
Productivity | 11, 22, 40
Labor efficiency | 55
Labor cost | 22
Employee availability | 22, 26, 40, 52
Expertise with specific existing technologies | 58
Expertise with specific emerging technologies | 58
% of multi-skilled workforce | 26
Age distribution of IS staff | 58

Contributor Information

Amy Van Looy, Phone: +32 9 264 95 36, Email: [email protected] .

Aygun Shafagatova, Email: [email protected] .


Understanding the features of performance measurement system: a literature review

Measuring Business Excellence

ISSN: 1368-3047

Article publication date: 11 November 2013

Purpose

The purpose of this paper is to identify the factors that can be considered necessary in conceptualizing the features of an efficient and effective performance measurement system (PMS) appropriate to the modern organizational setting. The field of PMS is heavily researched, and yet certain fundamentals of PMS remain unclear, in particular the precise meaning and application of its features: data; measuring attributes consisting of measures, metrics and indicators; and methods of measurement.

Design/methodology/approach

The paper uses a systematic approach in reviewing and examining existing PMS and non-PMS articles that focus on the features for measurement, from January 1990 to November 2012. Citation analysis was used to support the review, since well-cited articles are considered widely read by researchers and should therefore have sufficient impact in the academic community.

Findings

The outcomes of this review contribute to and update the existing literature on PMS in three ways: identification of gaps in terms of practical usefulness and academic research; suggestion of solutions, in the form of a conceptual framework, to improve measurement and performance measurement using the correct features of PMS; and recommendation of a direction for future research with regard to the features of PMS.

Research limitations/implications

Although the proposed concept has many merits, the conceptualized features have not (as yet) progressed beyond normative reasoning.

Practical implications

The paper defines a measure, a metric and an indicator as they relate to the use of accounting and non-accounting data, and suggests some of the non-accounting methods of measurement and performance measurement that can be used generally in various organizations. Using an illustration, it suggests appropriate ways to construct and implement the measuring attributes, using appropriate measuring methods, in a car maker. The paper also warns users to construct and use the features of PMS judiciously, as incorrect uses of these attributes can easily be misconstrued and thus cause incorrect inferences in decision-making.

Originality/value

This is a seminal paper that defines the measuring attributes of PMS for the conceptualization of a workable framework of the features of PMS.

Keywords

  • Performance measurement
  • Measurement
  • Measuring attribute
  • Measuring methods

Keong Choong, K. (2013), "Understanding the features of performance measurement system: a literature review", Measuring Business Excellence , Vol. 17 No. 4, pp. 102-121. https://doi.org/10.1108/MBE-05-2012-0031

Emerald Group Publishing Limited

Copyright © 2013, Emerald Group Publishing Limited


Research Article

Approach in inputs & outputs selection of Data Envelopment Analysis (DEA) efficiency measurement in hospitals: A systematic review

Author contributions (CRediT roles) included conceptualization, data curation, formal analysis, funding acquisition, investigation, methodology, project administration, resources, software, supervision, validation, visualization, and writing (original draft, review and editing).

Affiliations: Medical Development Division, Ministry of Health Malaysia, Putrajaya, Malaysia; Department of Public Health Medicine, Faculty of Medicine, Universiti Kebangsaan Malaysia, Kuala Lumpur, Malaysia; Occupational and Aviation Medicine Department, University of Otago Wellington, Wellington, New Zealand; Department of Public Health, Faculty of Medicine, Universiti Sultan Zainal Abidin, Terengganu, Malaysia; Medical Practice Division, Ministry of Health Malaysia, Putrajaya, Malaysia.

Corresponding author e-mail: [email protected]

  • M. Zulfakhar Zubir, 
  • A. Azimatun Noor, 
  • A. M. Mohd Rizal, 
  • A. Aziz Harith, 
  • M. Ihsanuddin Abas, 
  • Zuriyati Zakaria, 
  • Anwar Fazal A. Bakar


  • Published: August 14, 2024
  • https://doi.org/10.1371/journal.pone.0293694


The efficiency and productivity evaluation process commonly employs Data Envelopment Analysis (DEA) as a performance tool in numerous fields, such as the healthcare industry (hospitals). Therefore, this review examined various hospital-based DEA articles involving input and output variable selection approaches and the recent DEA developments. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology was utilised to extract 89 English articles containing empirical data between 2014 and 2022 from various databases (Web of Science, Scopus, PubMed, ScienceDirect, Springer Link, and Google Scholar). Furthermore, the DEA model parameters were determined using information from previous studies, while the approaches were identified narratively. This review grouped the approaches into four sections: literature review, data availability, systematic method, and expert judgement. An independent single strategy or a combination with other methods was then applied to these approaches. Consequently, the focus of this review on various methodologies employed in hospitals could limit its findings. Alternative approaches or techniques could be utilised to determine the input and output variables for a DEA analysis in a distinct area or based on different perspectives. The DEA application trend was also significantly similar to that of previous studies. Meanwhile, insufficient data was observed to support the usability of any DEA model in terms of fitting all model parameters. Therefore, several recommendations and methodological principles for DEA were proposed after analysing the existing literature.

Citation: Zubir MZ, Noor AA, Mohd Rizal AM, Harith AA, Abas MI, Zakaria Z, et al. (2024) Approach in inputs & outputs selection of Data Envelopment Analysis (DEA) efficiency measurement in hospitals: A systematic review. PLoS ONE 19(8): e0293694. https://doi.org/10.1371/journal.pone.0293694

Editor: André Ramalho, FMUP: Universidade do Porto Faculdade de Medicina, PORTUGAL

Received: October 29, 2023; Accepted: June 26, 2024; Published: August 14, 2024

Copyright: © 2024 Zubir et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: All relevant data are within the manuscript and its Supporting information files.

Funding: The author(s) received no specific funding for this work.

Competing interests: The authors have declared that no competing interests exist.

1. Introduction

Efficiency is a well-established concept in the field of economics. Farrell proposed that efficiency measurement should consider all inputs and outputs, avoiding index number issues and providing practical calculation methods. Efficiency-related articles have therefore attracted significant attention from various fields, including statistics, economics, healthcare, and medicine [ 1 ]. Nevertheless, there is insufficient agreement on the optimal method for measuring efficiency. For example, various methodologies are used in efficiency-related articles on health facilities, including data envelopment analysis (DEA), stochastic frontier analysis (SFA), Pabon Lasso, and ratio analysis [ 2 – 4 ]. The World Health Organization (WHO) also introduced a unique approach to evaluating the effectiveness of healthcare systems in the Global Programme on Evidence for Health Policy Discussion Paper Series. Compared to a previous study in this field [ 5 ], this approach introduced numerous objectives of the healthcare system, including responsiveness (level and distribution), fair finance, health inequality, and the more traditional goal of improving population health.

Another report by the WHO evaluated the performance of the health system by assessing how well national health systems achieved three main objectives: good health, responsiveness to the expectations of the population, and fairness of financial contribution [ 6 ]. Despite the acknowledgement of this method, disagreement and criticism have arisen over the methodology employed. Conversely, a consensus has emerged regarding the importance of accurately directing these assessments, performing a more critical analysis, adopting a more constructive approach, and facilitating a crucial dialogue among stakeholders in the healthcare system [ 7 – 9 ]. Recently, the DEA has been used to compute the effectiveness of healthcare systems in 180 countries. This assessment is based on six key dimensions: clinical outcomes, health-adjusted life years, access, equity, safety, and resources [ 10 ]. Stakeholders must recognise that no universally applicable efficiency metric exists for all healthcare systems. Therefore, a comprehensive understanding of the institutional arrangements, data, and measurements is necessary to select suitable measures, resources, and other health system components.

A framework is required to guide the analysis process. The optimal approach to implementing performance measurement is not to treat it as a minor adjustment playing a supporting role in enhancing one aspect of health system outcomes. Instead, it should be utilised as a general strategy for gauging performance across the various system components [ 11 , 12 ]. Numerous indicators, such as activity and expense comparison measures, are also available to assess whether limited health resources are utilised most efficiently. The primary focus of these indicators is on quantitative metrics for evaluating hospital performance. Furthermore, the quality of hospital services can be examined using various indicators [ 13 , 14 ]. Efficiency comparisons can also be assessed objectively using techniques grounded in solid economic theory. Currently, the DEA and SFA approaches are frequently applied to measure the efficiency of the healthcare industry [ 11 ]. Since the publication of Nunamaker’s study, these strategies have been widely used in healthcare settings over the past 40 years [ 15 – 19 ]. Although theoretical and methodological limitations have been acknowledged in DEA, the method has attracted interest from researchers who aim to address these limitations. Hence, these studies have developed multiple methods integrating DEA with other statistical techniques and methodologies to improve efficiency evaluation [ 20 , 21 ].

1.1 DEA as an efficiency analysis tool in hospital

The DEA is a mathematical technique for assessing the relative efficiency of homogenous decision-making units (DMUs) with many input and output variables. Initially, this method was developed within operations research and econometrics. The effectiveness of a DMU is evaluated relative to the effectiveness of the other members of the group. Nevertheless, one drawback of the DEA is its non-parametric and deterministic nature, meaning that it is easily affected by outliers. Meanwhile, an efficient DMU usually produces the maximum output while utilising the same input levels as all other DMUs [ 17 , 22 ]. In various DEA-related articles, this formulation is denoted as the Charnes, Cooper, and Rhodes (CCR) model or the constant returns to scale (CRS) assumption. It examines the input–output correlation without considering any congestion effects, so the outputs are assumed to vary linearly with the inputs [ 10 , 23 ].

Banker expanded the CCR model and the CRS assumption into the Banker, Charnes, and Cooper (BCC) model with a variable returns to scale (VRS) assumption. This assumption allows economies of scale to shift with DMU size [ 23 , 24 ]. The DEA approach also considers the model orientation (input- or output-oriented) alongside the model type and returns to scale assumption. For example, under the input orientation a DMU is assumed to have more control over its inputs than over its outputs. Nonetheless, it can be argued that organisations can improve their outputs by utilising inputs efficiently [ 23 , 25 ]. Hence, the input and output variables should be carefully considered when using the DEA to measure the effectiveness of a DMU or an organisation. A precise, thorough, pertinent, and appropriate selection and combination of the input and output variables is necessary to effectively portray the functionality of a hospital while meeting the stakeholders’ expectations and assessing its efficiency [ 18 , 21 ]. Numerous advanced analyses have also been incorporated into DEA, such as the advanced CCR and BCC models, longitudinal or window analysis (Malmquist index), and statistical analysis (regression and bootstrapping methods) [ 20 , 23 , 25 – 27 ].
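
To make the CCR/BCC distinction concrete, the sketch below solves the standard input-oriented envelopment form of both models as a linear program. It is a minimal illustration, not code from any of the reviewed studies; the hospital data are invented.

```python
# Minimal sketch of input-oriented DEA (CCR under CRS, BCC under VRS) via linear programming.
# X (m x n) holds m inputs and Y (s x n) holds s outputs, one column per DMU (e.g., hospital).
import numpy as np
from scipy.optimize import linprog

def dea_input_oriented(X, Y, vrs=False):
    """Return the efficiency score theta for every DMU (CRS if vrs=False, VRS if vrs=True)."""
    m, n = X.shape
    s, _ = Y.shape
    scores = []
    for o in range(n):                                   # evaluate each DMU in turn
        # decision variables: [theta, lambda_1, ..., lambda_n]
        c = np.r_[1.0, np.zeros(n)]                      # minimise theta
        # input constraints: sum_j lambda_j * x_ij - theta * x_io <= 0
        A_in = np.hstack([-X[:, [o]], X])
        b_in = np.zeros(m)
        # output constraints: -sum_j lambda_j * y_rj <= -y_ro
        A_out = np.hstack([np.zeros((s, 1)), -Y])
        b_out = -Y[:, o]
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[b_in, b_out]
        # convexity constraint sum_j lambda_j = 1 only under VRS (BCC model)
        A_eq = np.hstack([[[0.0]], np.ones((1, n))]) if vrs else None
        b_eq = np.array([1.0]) if vrs else None
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * (n + 1), method="highs")
        scores.append(res.x[0])
    return np.array(scores)

# toy data: 2 inputs (beds, doctors) and 1 output (outpatient visits) for 4 hospitals
X = np.array([[100.0, 150.0, 120.0, 200.0],
              [ 20.0,  25.0,  18.0,  40.0]])
Y = np.array([[5000.0, 6500.0, 5200.0, 7000.0]])
print("CCR (CRS) scores:", dea_input_oriented(X, Y).round(3))
print("BCC (VRS) scores:", dea_input_oriented(X, Y, vrs=True).round(3))
```

Under VRS the scores are never lower than under CRS, because the added convexity constraint shrinks the reference set each DMU is compared against.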

1.2 Input and output selections for hospital-based DEA applications

Multiple articles have demonstrated the practicality and potential of DEA in evaluating hospital efficiency [ 28 – 31 ]. Although comparisons of DEA ratings across several hospital-based articles produce helpful hypotheses, significant drawbacks are observed as follows:

  • The input and output metrics vary across different timeframes.
  • The DEA score distribution is highly skewed, rendering it inaccurate to rely on standard measures of central tendency.
  • The output metrics in the articles present significant divergence from each other.
  • The hospital production models and types possess substantial differences.

Certain hospital-based articles have reported that innovative strategies can provide valuable insights to decision-makers [ 23 , 32 ]. These articles have also included the DEA for hospital-based applications. Generally, DEA-based applications involve health care performance measurement [ 15 , 16 , 18 ], categorisation or clustering of DEA techniques [ 20 , 33 ], DEA comparison with other methods, countries or durations [ 28 , 29 , 30 , 34 ], and development of novel knowledge and approaches concerning DEA assessment [ 17 , 21 ]. Likewise, each stage in a systematic literature review (SLR) employs organised, transparent, and reproducible techniques to identify and comprehensively integrate the articles relevant to a particular topic. The reviewer’s methodology is meticulously recorded, allowing readers to track and evaluate the decisions and actions taken [ 35 ]. Although numerous hospital-based DEA articles have been recorded, few complete analyses of them have been conducted. Consequently, further investigation is required, pointing to a research gap involving hospital-based DEA articles.

This review investigated various hospital-based DEA articles for selecting the most suitable input and output variables. Notably, hospital institutions were chosen due to the significant challenges in assessing their efficiency. This limitation was further complicated by the dynamic nature of service production and variation across several providers [ 25 , 36 , 37 ]. To the authors’ knowledge, no reviews regarding hospital-based DEA articles involving optimal input and output variable selections were reported. Thus, this review addressed this research gap by observing the current trends in hospital-based DEA analyses. The remainder of this review is structured as follows: Section 2 describes the methodology used and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statements. Section 3 presents the literature review of the relevant articles and their corresponding discussions concerning the input and output variable selections for hospital-based DEA applications. Finally, Section 5 highlights the limitations and conclusions of this review.

2. Methodology

This section discusses the methodology used to obtain relevant hospital-based DEA articles. The PRISMA methodology involved the Web of Science, Scopus, PubMed, ScienceDirect, Springer Link, and Google Scholar databases. The process covered the SLR, the eligibility and exclusion criteria, the review stages (identification, screening, and eligibility), and data abstraction with analysis.

The PRISMA methodology concisely collects the components for documenting SLRs and meta-analyses based on supporting evidence. Even though this approach typically focuses on reporting reviews evaluating intervention effects, it can also be used as a basis for publishing SLRs with objectives other than assessing interventions (Appendices A and B) [ 38 ]. Hence, a comprehensive manual on the SLR methodological approach is required for future researchers. This SLR starts with developing and verifying the review method, publication standard, and reporting standard or guidance. These articles can then provide a systematic guideline for researchers, outlining the factors that need to be considered during the review process [ 39 ].

2.2 Journal databases

Various articles published from 2014 until 2022 were obtained on 5 April 2023 using six databases: Web of Science, Scopus, PubMed, ScienceDirect, Springer Link, and Google Scholar. The analysis of the search engines revealed significant performance discrepancies, indicating the absence of a single optimal search approach. Therefore, searchers must be well trained, capable of evaluating the strengths and weaknesses of a system, and able to determine where and how to search based on that information in order to use these databases effectively. The six databases were selected based on their potential to provide a meticulously curated medical database with recall-enhancing features, tools, and other alternatives to optimise precision [ 40 , 41 ].

2.3 Identification

This systematic review process comprised four stages (identification, screening, quality appraisal, and analysis). Several search terms were identified during the first stage, which involved searching previous articles using various terms: “efficiency*”, “performance*”, “productivity*”, “benchmark*”, “hospital*”, “data envelopment analysis”, and “DEA”. The search string was modified according to the requirements of each database. The records were exported from the databases into a Microsoft Excel sheet for screening. The final query string is as follows:

  • (((("hospital") AND ("efficiency")) OR (("hospital") AND ("performance")) OR (("hospital") AND ("benchmark")) OR (("hospital") AND ("productivity"))) AND (("data envelopment analysis") OR ("DEA")))

2.4 Screening

The inclusion and exclusion criteria were established in this review. The titles and abstracts were independently screened by three reviewers. Only articles containing empirical data were initially selected, and this process excluded review articles (SLR and SR), book series, books, book chapters, and conference proceedings. Non-English articles were then excluded from the search, avoiding any ambiguity or difficulty in translation. Subsequently, a nine-year duration was chosen for the chronology (2014–2022) to observe significant developments in research and relevant articles. This duration also functioned as a continuation of previous studies by O’Neill et al. (1984–2004), Cantor and Poh (1994–2017), and Kohl et al. (2005–2016). Consequently, 89 articles were finalised for the quality appraisal stage. Fig 1 depicts the PRISMA diagram, which provides a detailed description of the entire search procedure.

Fig 1. PRISMA flow diagram of the search procedure. https://doi.org/10.1371/journal.pone.0293694.g001
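
As an illustration of this screening step, the following sketch applies the review's inclusion and exclusion criteria to records exported into Excel. It is an assumed workflow, not the authors' script; the file name and column names are hypothetical.

```python
# Hypothetical screening of exported database records against the inclusion criteria:
# published 2014-2022, English language, empirical articles only.
import pandas as pd

records = pd.read_excel("exported_records.xlsx")   # hypothetical export file

is_in_period = records["year"].between(2014, 2022)
is_english = records["language"].str.lower().eq("english")
excluded_types = {"review", "book", "book chapter", "book series", "conference proceeding"}
is_empirical = ~records["document_type"].str.lower().isin(excluded_types)

screened = records[is_in_period & is_english & is_empirical]
screened = screened.drop_duplicates(subset=["doi"])   # remove cross-database duplicates
screened.to_excel("records_for_quality_appraisal.xlsx", index=False)
print(f"{len(screened)} records retained for quality appraisal")
```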

2.5 Quality appraisal

A quality appraisal stage was conducted to ensure that the methodology and analysis of the selected articles were performed satisfactorily. This process contained two quality appraisal tools: knowledge transfer [ 29 , 42 ] and economic evaluations and efficiency measurement [ 43 , 44 ]. Mitton et al . developed a 15-point scale that covered several topics: literature evaluation, research gap identification, question, design, validity and reliability, data collection, population, sampling, and result analysis and report. These criteria were evaluated using a score range between 0 and 3: 0 for not being present or reported, 1 for being present but of low quality, 2 for being present and mid-range quality, or 3 for being present and of high quality [ 42 ].

Another checklist by Varabyova and Müller employed four dimensions: reporting, external validity, bias, and power. All items on the quality assessment checklist were assigned a score of either 0 (indicating no or unclear) or 1 (indicating yes). One item in the checklist also focused on conducting a second-stage analysis to investigate potential sources of bias in the study. The articles with and without second-stage analysis received maximum scores of 14 and 13, respectively. This checklist assessed the article from an economic perspective to ensure the findings could be used in policy analysis and managerial decisions. Only the items relevant to the design of the article were utilised to establish the maximum score (100%) for each study [ 44 ]. Overall, no recognised standards for assessing the planning or implementation of research on healthcare efficiency indicators were recorded. Thus, the scientific soundness of the chosen article was investigated using two tools to improve robustness and minimise bias. Two co-authors from different institutions evaluated each selected article separately using both tools to enhance reliability. A third reviewer was then requested to assess an article if a disagreement occurred.
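
A small sketch of how the two appraisal tools could be turned into percentage scores is given below. The helper functions are hypothetical, not code from the paper: Mitton et al.'s fifteen criteria are scored 0–3, and Varabyova and Müller's checklist is scored 0/1 over only the items applicable to a study's design.

```python
# Hypothetical scoring helpers for the two quality appraisal tools described above.
from typing import Optional, Sequence

def mitton_score(item_scores: Sequence[int]) -> float:
    """Percentage of the maximum 45 points (15 items scored 0-3)."""
    assert len(item_scores) == 15 and all(0 <= s <= 3 for s in item_scores)
    return 100.0 * sum(item_scores) / (15 * 3)

def varabyova_mueller_score(item_scores: Sequence[Optional[int]]) -> float:
    """Percentage of applicable items (scored 0/1); None marks a non-applicable item."""
    applicable = [s for s in item_scores if s is not None]
    assert all(s in (0, 1) for s in applicable)
    return 100.0 * sum(applicable) / len(applicable)

# e.g. a study with a second-stage analysis has 14 applicable checklist items
print(mitton_score([3, 2, 2, 3, 1, 2, 2, 3, 2, 1, 2, 3, 2, 2, 3]))
print(varabyova_mueller_score([1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1]))
```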

2.6 Data extraction and analysis

The selected articles were subjected to examination and analysis. Specific articles were also prioritised to meet the objectives directly. The data extraction process was performed by reading the abstracts and the entire article. Meanwhile, content analysis and quantitative and qualitative analyses were used to determine the input and output selection approaches for the hospital-based DEA articles. Four reviewers extracted the data independently using a standardized data extraction form organised in Microsoft Excel. The information in this form included publication year, country of study, studied hospital type, number of hospitals, number of observations (DMUs), model type, returns to scale, model orientation, measured efficiency type, input, output, number of models, application of second-stage analysis, and approaches used in selecting input or output variables.

2.7 Statistical analysis

In evaluating the studies, the intra-class correlation (ICC) was used to measure the agreement between the two raters (co-authors). This process examined the dependability of the ratings by comparing the variability between evaluations of the same subject with the overall variation observed across all ratings and subjects. Each of the evaluation processes was quantitative. The ICC coefficient values for Mitton et al.’s tool (15-point scale) and Varabyova and Müller’s tool (economic evaluation and efficiency measurement) were 0.956 and 0.984, respectively. No articles were excluded at this stage, as the review encompassed qualitative and quantitative aspects. Nonetheless, highly rated articles were given greater weight in the data analysis and result interpretation processes.
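
The paper does not state which ICC form was used; the sketch below computes a two-way random-effects, absolute-agreement, single-rater ICC(2,1), one common choice for two quantitative raters. The formula is standard; the example rating matrix is invented.

```python
# ICC(2,1): two-way random effects, absolute agreement, single rater (Shrout & Fleiss).
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ratings: n_subjects x k_raters matrix of quantitative scores."""
    n, k = ratings.shape
    grand = ratings.mean()
    ms_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # between-subject MS
    ms_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # between-rater MS
    ss_err = ((ratings - ratings.mean(axis=1, keepdims=True)
                        - ratings.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    ms_err = ss_err / ((n - 1) * (k - 1))                                  # residual MS
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                 + k * (ms_cols - ms_err) / n)

# hypothetical appraisal percentages from two co-author raters for five articles
scores = np.array([[80.0, 82.0], [65.0, 60.0], [90.0, 92.0], [70.0, 68.0], [85.0, 88.0]])
print(round(icc_2_1(scores), 3))
```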

All the 89 articles included in this analysis were retrospective studies published between 2014 and 2022. Appendices C and D contain a comprehensive summary of all the selected articles.

3.1 Efficiency analysis

The efficiency analysis in DEA primarily focuses on quantifying the performance of a set of DMUs. Given that the definition of a DMU is generic and broad, this review focused on the “hospital”. Typically, the four main efficiency concepts are technical, scale, pricing, and allocative efficiencies [ 25 ]. Certain studies also described efficiency as technical, pure, scale, allocative, cost, and congestion efficiency. The DEA can perform efficiency analysis at a single point in time and over time [ 26 ]. Thus, the data can be categorised as cross-sectional (single period) or longitudinal (panel data). Specifically, the longitudinal analysis of DEA utilises two approaches to quantify efficiency: the Malmquist Productivity Index (MPI) and Window Analysis (WA).
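
The review only names the MPI; for reference, the standard Färe et al. formulation of the index between periods $t$ and $t+1$, written with distance functions $D$, is

$$
M\left(x^{t+1}, y^{t+1}, x^{t}, y^{t}\right)
= \left[\frac{D^{t}\left(x^{t+1}, y^{t+1}\right)}{D^{t}\left(x^{t}, y^{t}\right)}
\cdot \frac{D^{t+1}\left(x^{t+1}, y^{t+1}\right)}{D^{t+1}\left(x^{t}, y^{t}\right)}\right]^{1/2},
$$

where a value above 1 indicates productivity growth between the two periods; the index is conventionally decomposed into an efficiency-change and a technical-change component.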


Fig 2 provides a geometric representation of the concepts involved in efficiency measurement using the DEA. Of the 89 articles, 26 (29.21%) measured the overall technical efficiency (TE) [ 76 – 101 ]. Additionally, 24 (26.97%) computed the TE, pure technical efficiency (PTE), and scale efficiency (SE) [ 74 , 75 , 102 – 125 ]. Even though the remaining articles combined TE and PTE, certain articles did not explicitly specify the tested efficiency type (see S5 Appendix).

Fig 2. Geometric representation of the concepts involved in efficiency measurement using the DEA. https://doi.org/10.1371/journal.pone.0293694.g002

3.2 Model parameters

The DEA was applied using four considerations specified by the researcher: model type, technological assumption of the delivery process, model orientation, and input-output combination [ 112 , 121 ]. This model could be further analysed or extended through a second stage or integrated with other statistical methods. Consequently, this process could improve efficiency measurement, understanding of the variation or difference in organisational performance, and evaluation of the productivity of the organisation over a specific period [ 91 , 110 , 119 ]. Considering that the performance was analysed over a certain period, the data type was also essential.

3.2.1 Model type.

The DEA has been utilised to assess the performance of various entities involved in diverse activities under different circumstances. This process leads to numerous models and extensions explaining the intricate and frequently unpredictable correlations between multiple inputs and outputs in organisation activities or productions [ 106 , 124 ]. Hence, these models can be described as basic DEA and extension models. Certain articles have also denoted the model as Radial, Non-radial and Oriented, Non-radial and Non-oriented, and Radial and Non-radial [ 23 , 125 ]. Most articles in this review (80.90%, 72 of 89) used Radial DEA models [ 45 , 47 – 51 , 53 – 61 , 64 – 76 , 78 , 79 , 81 – 85 , 88 – 93 , 95 – 99 , 101 – 115 , 117 – 123 , 126 – 129 ].

The BCC, CCR, or a mixture of both models were used to measure efficiency by examining the radial changes in input and output values. Nevertheless, only 7.87% (7 of 89) [ 46 , 52 , 62 , 63 , 94 , 100 , 130 ] or 4.49% (4 of 89) [ 80 , 86 , 87 , 124 ] employed Non-radial and Oriented or Non-radial and Non-oriented models, respectively. The Non-radial model deviated from the conventional approach of proportional input or output changes and instead focused on addressing slacks directly. Only one article was observed using the Radial and Non-radial models [ 77 ], while one combined Radial, Non-radial, and Oriented models to measure efficiency [ 116 ]. The remaining four articles did not explicitly specify the model employed in the study (see S6 Appendix ) [ 131 – 134 ].

3.2.2 Model orientation.

Orientation refers to the specific direction in which input or output is measured to determine efficiency. The primary evaluation objective is to either increase output or decrease input. Most articles in this review (55.06%, 49 of 89) applied input-orientated DEA models [ 45 – 47 , 49 – 53 , 57 , 59 , 63 – 68 , 70 – 72 , 74 – 76 , 79 , 90 – 93 , 95 , 97 , 98 , 102 – 106 , 108 – 110 , 112 – 115 , 117 , 120 , 123 , 126 , 127 , 129 , 134 ]. These articles selected the input orientation to align with the standard practice in healthcare facilities of minimising inputs while achieving a desired output level. Thus, the organisation acquired minimal or non-existent authority over the output [ 45 , 50 , 109 ].

Approximately 25.84% (23 of 89) of the articles took the opposite approach and applied output-orientated DEA models [ 48 , 54 – 56 , 58 , 60 , 61 , 69 , 73 , 78 , 81 – 85 , 101 , 107 , 111 , 116 , 118 , 121 , 122 , 133 ]. Given the fixed and non-flexible nature of the inputs in these settings, the organisation should strive to raise its output, implying that output-orientated DEA models were more appropriate [ 78 , 82 , 83 ]. Meanwhile, only 5.62% (5 of 89) [ 80 , 86 , 87 , 99 , 124 ] and 3.37% (3 of 89) [ 94 , 100 , 128 ] employed non-orientated or combined input- and output-orientated DEA models, respectively. The remaining articles did not clearly specify the orientation used in their measurements (see S7 Appendix) [ 62 , 77 , 88 , 89 , 96 , 119 , 130 – 132 ].

3.2.3 Returns to scale assumption.

Approximately one-third of the articles (35.96%, 32 of 89) combined the CRS and VRS assumptions in evaluating efficiency [ 74 , 75 , 78 , 91 , 101 – 124 , 126 – 129 ]. These articles compared the efficiency scores to acquire a more comprehensive understanding of the organisation, and they provided additional knowledge on how each assumption might be utilised to enhance hospital services [ 101 , 108 , 113 ]. Another one-third of the articles (32.58%, 29 of 89) used only the VRS assumption, allowing the outputs of the organisations (DMUs) to increase or decrease disproportionately to the inputs [ 45 – 73 ]. Likewise, 20.22% (18 of 89) [ 76 , 79 , 81 – 85 , 88 – 90 , 92 , 93 , 95 – 100 ] assumed that the outputs of their organisations (DMUs) varied (increase or decrease) proportionally with the inputs, i.e., the CRS assumption (see S8 Appendix).
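
The usual relation behind this combined use of CRS and VRS scores (standard DEA theory rather than a result reported by this review) is

$$
TE_{CRS} = PTE_{VRS} \times SE, \qquad SE = \frac{TE_{CRS}}{TE_{VRS}},
$$

so a scale efficiency below 1 signals that a hospital is operating away from its most productive scale size even if it is purely technically efficient under VRS.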

3.2.4 Input and output selections.

Appropriate input and output selections are necessary for conducting a comprehensive efficiency evaluation. Therefore, identifying the key attributes depicting the investigated process or output is critical. This process implies that all relevant resources should be incorporated into the inputs, while the administrative objectives of the organisations (DMUs) should be outlined in the outputs [ 52 , 104 ]. Nonetheless, suitable inputs and outputs can present varying features depending on the situation. Data availability also requires significant consideration alongside appropriate input and output selections. Hence, various recommendations have been presented involving locating suitable measures [ 76 , 131 , 133 ]. This process is further discussed as the main objective of this review.

Several articles employed input and output classifications for measuring efficiency, including capacity, labour, and expenses-related or capital investment, labour, and operating expenses. Specific articles also further delineated this classification process into sub-categories. For example, the outputs were classified as inpatient with outpatient services and effectiveness (quality). Other outputs were classified into two categories: activity (inpatient and outpatient) and quality-related (effectiveness dimension) [ 20 , 21 , 25 , 32 ]. Table 1 lists the input and output classification and sub-classification processes in this review. Tables 2 and 3 summarise the details of each sub-classification frequency distribution and percentages.

Table 1. Input and output classifications and sub-classifications. https://doi.org/10.1371/journal.pone.0293694.t001

Table 2. Sub-classification frequency distributions and percentages. https://doi.org/10.1371/journal.pone.0293694.t002

Table 3. Sub-classification frequency distributions and percentages. https://doi.org/10.1371/journal.pone.0293694.t003

3.2.4.1 Capacity-related inputs. The size, capacity, and functioning of a hospital as a health service are determined mainly by its number of fully staffed and operating beds. Out of the 89 articles, 75 (84.27%) considered the number of beds (general, intensive care unit and special) as inputs in their analyses [ 47 – 54 , 56 – 61 , 63 – 73 , 75 , 77 – 92 , 94 – 100 , 102 – 108 , 111 – 115 , 117 , 118 , 120 – 124 , 127 – 130 , 132 – 134 ]. Only seven (9.33%) of the 75 articles used bed-related data as their inputs (bed type, cost, or ratio) [ 60 , 65 , 70 , 77 , 97 , 100 , 128 ]. Another 12 (16.00%) of the 75 articles combined beds and capital assets as capacity-related inputs [ 69 , 77 , 84 , 88 , 89 , 100 , 102 , 105 , 108 , 121 , 124 , 132 ]. Only one article employed capital assets as its input, combined with cost-related assets, which makes it apparent why this article, unlike the others, did not include beds as part of its input [ 55 ]. Overall, the primary capacity-related input in these articles was the number of general beds. This input was followed by the number of facility types and the amount of medical equipment.

3.2.4.2 Cost-related inputs. Cost-related inputs were the least utilised across the articles; of the 89 articles, only 31 (34.83%) applied them. Three of the 31 articles used cost-related inputs exclusively [62, 101, 119]. Interestingly, most of these 31 articles (90.32%) combined cost-related inputs with capacity- and staff-related inputs in their analyses [45–47, 50, 55, 61–63, 69, 74–78, 82, 90, 93, 94, 100, 101, 108, 110, 111, 113, 115, 119, 124, 128, 129, 131, 132]. Overall, the primary cost-related input utilised in these articles was total operational cost, followed by fixed costs and then service and consumable costs.

3.2.4.3 Staff-related inputs. Most articles (93.26%, 83 of 89) employed staff-related inputs [45–54, 56–61, 63–89, 91–118, 120–124, 126–130, 133, 134]. Two of the 83 articles combined the number of staff (staff-related) and labour cost (cost-related) as their inputs [74, 77]. Alternatively, cost-related inputs (labour or operating costs) substituted for staff-related inputs as proxies in six articles [55, 62, 90, 119, 131, 132]. The units used for staff-related inputs varied across the articles: most employed actual headcounts, followed by full-time equivalents and ratios of specific values. Overall, the number of doctors was the most common staff-related input, followed by the numbers of nurses and clinical staff.

3.2.4.4 Production-related outputs. A significant portion (98.88%, 88 of 89) of the articles employed production-related outputs [45–91, 93–124, 126–134]. Only eight articles (8 of 89) combined production- and quality-related outputs [60, 72, 83, 86, 94, 97, 127, 132]. The most prevalent production-related output in these articles was the number of outpatients, followed by the number of inpatients (admissions and discharges) and the total number of operations.

3.2.4.5 Quality-related outputs. Quality-related outputs were less prominent than production-related outputs; only nine articles (9 of 89) employed them [60, 72, 83, 86, 92, 94, 97, 127, 132]. Notably, one article focused exclusively on a quality-related output, in line with its study objectives [92]. Overall, the mortality rate (infant, adult, and disease-specific) was the most frequently applied quality-related output, followed by revisit rates (outpatient and emergency) and the number of students.

3.2.5 Extended analysis and data type.

Eighty articles (89.89%, 80 of 89) conducted an extended analysis [45–55, 57–63, 65–68, 70–76, 78–93, 95–97, 100–115, 117–124, 127–134]. The data types applied were fairly evenly distributed: 51 articles (57.30%, 51 of 89) [46, 48, 51, 57–59, 66, 69–72, 74–76, 78, 79, 81–84, 87, 90, 91, 93, 95, 96, 99, 100, 102, 104–108, 110, 111, 113–115, 119–121, 123, 124, 126, 128–134] used panel data, while 38 articles (42.70%, 38 of 89) [45, 47, 49, 50, 52–56, 60–65, 67, 68, 73, 77, 80, 81, 85, 86, 88, 89, 92, 94, 97, 98, 101, 103, 109, 112, 116–118, 122, 127] employed cross-sectional data in their investigations.

Forty types of extended analysis were identified within the included articles; one or more extended analyses (sometimes referred to as "stages") were used in each study. Of the 80 articles with an extended analysis, 46 integrated two or more extended analyses into their DEA measurements [45, 46, 50–53, 57, 59, 62, 63, 66, 67, 70, 72–74, 76, 78, 79, 82–85, 87, 90–93, 95, 100, 101, 106–108, 113–115, 118–120, 127, 129, 130, 132–134]. The maximum number of extended analyses in any of the 80 articles was five [66, 91, 93]. Regression analysis (29.11%, 46 of 158) was the analysis most frequently used for assessing hospital efficiency, followed by production function analysis (16.46%, 26 of 158), statistical analysis (15.82%, 25 of 158) and resampling methods (15.82%, 25 of 158). Table 4 tabulates the complete list of the specific analyses for each classification (see S4 Appendix).
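
To illustrate the most common of these extended analyses, the sketch below regresses first-stage DEA scores on environmental variables in a second stage. It is a simplified illustration only: the hospital data, variable names, and use of plain OLS are our assumptions (the reviewed studies more often use Tobit or the Simar-Wilson bootstrap truncated regression for bounded scores).

```python
import numpy as np
import statsmodels.api as sm

# hypothetical second-stage data: DEA scores from the first stage plus
# environmental variables for 8 hospitals (ownership: 1 = public, 0 = private)
dea_scores = np.array([0.82, 0.91, 1.00, 0.76, 0.88, 0.95, 0.70, 1.00])
ownership  = np.array([1, 0, 0, 1, 1, 0, 1, 0])
bed_count  = np.array([120, 200, 90, 150, 175, 110, 250, 95])

X = sm.add_constant(np.column_stack([ownership, bed_count]))
# OLS as a simplified stand-in for the Tobit / truncated-regression second stage
model = sm.OLS(dea_scores, X).fit()
print(model.summary())   # coefficients indicate how context relates to efficiency
```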


https://doi.org/10.1371/journal.pone.0293694.t004

3.3 Input and output selection approaches

Various approaches or methods were adopted by the 89 articles when selecting inputs and outputs for hospital-based DEA. Each article relied on previous studies or literature reviews as the main or partial component of its methodology for selecting input and output variables. Only a few articles explicitly indicated using a local DEA efficiency study from their own country as the reference for input and output selections, and certain articles employed a combination of methodologies. Sixty-three of the 89 articles (70.79%) utilised only a literature review to determine the input and output variables [45–47, 49–54, 58, 59, 61, 63–77, 80, 81, 84, 85, 88–91, 93, 95–97, 100, 102, 104, 107, 109, 110, 112, 114–121, 123, 124, 126, 127, 129, 131–134]. The remaining articles employed a literature review in combination with other approaches, in diverse combinations. The most prevalent combination was a literature review with data availability (13.48%, 12 of 89) [55, 56, 78, 82, 103, 105, 106, 111, 113, 122, 128, 130], followed by a literature review with a systematic method (5.62%, 5 of 89) [57, 60, 83, 101, 108] and a literature review with DMU limitation (5.62%, 5 of 89) [48, 79, 86, 94, 98]. A maximum of four combined approaches was observed in one article [87]. Table 5 provides the complete list of the specific approaches.


https://doi.org/10.1371/journal.pone.0293694.t005

4. Discussions

The size of the DEA literature can be intimidating to newcomers. Even when restricted to healthcare applications, reading every previous study to learn from its experience is exceedingly challenging. Consequently, the 89 articles were examined meticulously to accomplish the objectives of this review. Nunamaker was the first to publish a health application of DEA, examining nursing services, and Sherman subsequently released a second DEA article evaluating the medical and surgical departments of seven hospitals [135, 136]. From these beginnings, DEA applications in the healthcare industry have evolved over the past four decades, and the quality of the articles has advanced with improved access to resources and information technology [23, 25, 137, 138].

4.1 Researchers’ input and output selection approaches involving DEA for hospital efficiency measurement

The relative efficiency of various institutions, including businesses, hospitals, universities, and government agencies, is frequently assessed using DEA. However, these assessments differ from traditional analyses. Providing healthcare services in a hospital is also distinct from a manufacturing process: in a conventional factory, raw materials undergo a physical transformation into final commodities, and participation and co-production are absent because the customer is excluded from the process. Identifying the appropriate variables is therefore difficult, owing to the involvement of patients in the process. Effectiveness (the quality component) in healthcare is also equally crucial alongside performance and efficiency [86, 92, 127]. Even though DEA studies have not established a standard set of inputs and outputs, several guidelines based on analytic procedures or principles have been recommended to aid the optimal variable selection process [139–143].

4.1.1 Literature review.

A literature review remains a commonly employed method and is often regarded as one of the most effective techniques for situating a study within the body of knowledge. It encompasses numerous review types (narrative, rapid, scoping, or systematic reviews) that function as foundations or building blocks for knowledge advancement, theory development, and the identification of areas for improvement [144–146]. In the articles examined here, the literature review was also the most prevalent approach for selecting inputs and outputs in hospital efficiency-based DEA analyses; all the articles utilised a literature review either as the primary method or as part of a combined approach.

None of the articles provided detailed information about their literature review approaches. Nevertheless, a few articles explicitly mentioned selecting literature from their own country in order to compare their findings with previous local studies [49, 85, 104, 124]. DEA is a non-parametric technique that relies entirely on the observed input-output combinations of the sampled units and does not require any assumptions regarding the functional form of the relationship between inputs and outputs [76, 114]. Given that DEA measures efficiency relative to the chosen objectives (even if the resulting value is less meaningful), selecting inputs and outputs from the literature offered these articles an advantage [139, 142]. Even though this method remains valid for academics, other assessors (managers, economists, or policymakers) could perceive it as contradicting their practical perspectives. Hence, several factors must be considered from these individuals' standpoints, including different indicators, production objectives, and policies.

4.1.2 Data availability.

DEA relies on the homogeneity of the assessed units: each DMU is presumed to produce comparable activities or products using similar resources and technology in the same environment, so that a common set of inputs and outputs can be established. Certain factors also require consideration when large hospital-related datasets are involved, including data quality, availability, scale, and type [139, 141]. Although the examined articles used literature reviews to select the input and output variables, the selection ultimately depended on data availability, and the articles developed only a few workarounds for this limitation. For example, a few articles gathered only the data available within their scope [87, 103, 130]. Certain articles omitted DMUs with incomplete data and focused their analyses on DMUs with complete data [78, 105]; DEA then measures only the relative efficiency, or the production frontier, of the units included in the analysis. Other articles applied DEA over a defined period of data availability to ensure all necessary input and output variables were complete [55, 113]. Although approaches exist for handling DEA with missing or incomplete data, none of the reviewed articles attempted to use them [147, 148].
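
As a small illustration of the DMU-exclusion workaround described above, the sketch below drops hospitals with incomplete records before running DEA; the data frame, column names, and values are hypothetical and not taken from the reviewed articles.

```python
import pandas as pd

# hypothetical hospital records with gaps in the selected inputs and outputs
df = pd.DataFrame({
    "hospital":    ["A", "B", "C", "D"],
    "beds":        [120, 200, None, 150],
    "doctors":     [30, 55, 22, None],
    "outpatients": [5000, 7000, 4200, 6000],
})

# keep only DMUs with complete data for every selected input and output,
# mirroring the workaround used by some of the reviewed articles
complete = df.dropna(subset=["beds", "doctors", "outpatients"])
print(complete)   # hospitals C and D are excluded from the DEA sample
```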

4.1.3 Systematic method.

Many possible factors can be listed when determining the input and output variables. However, this can produce two significant issues: an excessively long list of inputs and outputs, and a negative impact on DEA's ability to measure efficiency accurately when only a limited number of DMUs is observed [139, 142]. Selecting the significant variables while still measuring efficiency accurately is therefore essential, and various articles have addressed this by incorporating systematic procedures. This review identified four systematic approaches: Delphi, the Preference Ranking Organisation Method for Enrichment Evaluation (PROMETHEE), bibliometric analysis, and the variance filter [48, 57, 60, 101, 108]. These systematic approaches can formalise the judgemental process behind stakeholder viewpoints (managers, economists, and policymakers). The Delphi method focuses on gathering the most reliable consensus of expert opinion for challenging situations. This forecasting method was initially presented in the 1950s by Olaf Helmer and Norman Dalkey of the RAND Corporation and is based on the responses to several iterations of questionnaires distributed to a panel of experts [149, 150]. The Delphi method is now widely recognised in DEA efficiency assessment across various areas [151–153].

The PROMETHEE method was initially developed in 1982 and underwent more advancements in 1985 [ 154 – 156 ]. This method is recognised as highly utilised and practical for multiple criteria decision aid (MCDA), including its application with DEA [ 157 – 160 ]. Compared to other MCDA methods, the PROMETHEE is considered a straightforward and computationally simple ranking system. The system incorporates weights indicating the relative significance of each criterion alongside the preference function associated with each criterion. One of the critical applications of PROMETHEE involves its capability to assist decision-makers in choosing the optimal options for evaluating hospital performance. This process enables investigations to include PROMETHEE in DEA-based applications [ 161 – 164 ].
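
To make the mechanics concrete, below is a minimal sketch of PROMETHEE II with the "usual" preference function, applied to a hypothetical short-list of candidate output variables; the criteria, weights, and evaluation table are invented for illustration and do not come from the reviewed articles.

```python
import numpy as np

def promethee_ii(matrix, weights, maximise):
    """Minimal PROMETHEE II with the 'usual' preference function.

    matrix: (n_alternatives, n_criteria) evaluation table
    weights: criterion weights summing to 1
    maximise: boolean per criterion (True = larger is better)
    Returns net outranking flows (higher = preferred).
    """
    n, _ = matrix.shape
    m = matrix.astype(float).copy()
    m[:, ~np.array(maximise)] *= -1        # flip cost criteria so larger is better
    phi_plus = np.zeros(n)
    phi_minus = np.zeros(n)
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            # usual preference function: 1 if a strictly beats b on the criterion
            pref_ab = (m[a] > m[b]).astype(float)
            pref_ba = (m[b] > m[a]).astype(float)
            phi_plus[a] += np.dot(weights, pref_ab) / (n - 1)
            phi_minus[a] += np.dot(weights, pref_ba) / (n - 1)
    return phi_plus - phi_minus

# hypothetical ranking of candidate output variables for a hospital DEA model
# criteria: data completeness, relevance to objectives, collection cost (minimise)
table = np.array([
    [0.95, 0.9, 0.2],   # outpatient visits
    [0.90, 0.8, 0.3],   # inpatient discharges
    [0.60, 0.7, 0.8],   # risk-adjusted mortality
])
flows = promethee_ii(table, weights=np.array([0.4, 0.4, 0.2]),
                     maximise=[True, True, False])
print(flows)   # net flows; the highest-ranked variables are the strongest candidates
```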

Pritchard is credited with coining the term "bibliometric" in 1969. This method is defined as an “application of mathematics and statistical methods to books and other media of communication”. Therefore, bibliometric analysis evaluates the bibliographic information or metadata properties from a database or collection of documents to enhance understanding of the topic under investigation [ 165 , 166 ]. Numerous articles have applied bibliometric analysis for various objectives as follows:

  • To identify new trends in journal performance, collaborative styles, and research components
  • To lay the groundwork for new and significant advancements in a field
  • To systematically understand the massive amounts of unstructured data to interpret and map the cumulative scientific knowledge and evolutionary nuances of established domains [ 167 , 168 ]

Hence, bibliometric analysis can determine the input and output variables in DEA studies involving the healthcare industry, such as hospitals.

The variance filter is a mechanism used in feature selection: it identifies and retains the most informative variables, which helps to reduce noise, lower the computational expense of a model, and occasionally boost model performance [169, 170]. In this review, certain studies first listed the candidate input and output variables and then used the variance filter (feature selection) to eliminate variables (inputs or outputs) with minimal or negligible impact on the efficiency measurement of the DMUs. This method is well accepted and commonly used in DEA-based articles [101, 171–173].
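
A minimal sketch of this idea, using scikit-learn's VarianceThreshold on a hypothetical candidate-input table; the hospitals, column names, and threshold are assumptions for illustration, not values drawn from the reviewed studies.

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold

# hypothetical candidate inputs for 6 hospitals
# columns: beds, doctors, nurses, burn_unit_beds (none of the sampled
# hospitals operates a burn unit, so the last column never varies)
candidates = np.array([
    [120, 30,  85, 0],
    [200, 55, 140, 0],
    [ 90, 22,  60, 0],
    [150, 40, 110, 0],
    [175, 48, 120, 0],
    [110, 28,  75, 0],
], dtype=float)

# min-max scale so a single variance threshold is comparable across units
rng = candidates.max(axis=0) - candidates.min(axis=0)
rng[rng == 0] = 1.0                            # avoid division by zero for constant columns
scaled = (candidates - candidates.min(axis=0)) / rng

selector = VarianceThreshold(threshold=0.01)   # assumed cut-off for illustration
kept = selector.fit_transform(scaled)
print(selector.get_support())                  # -> [ True  True  True False ]
```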

4.1.4 Expert judgement.

Expert judgement can be utilised by researchers, stakeholders, or decision-makers to refine the selection of input and output variables. Value judgements are "logical constructs used in efficiency assessment research that reflect the decision makers' preferences during the efficiency assessment procedure". They include the decision to exclude a variable or to assign it a zero weight [142, 174], and they depend on DEA's capacity to measure efficiency according to the necessities and requirements of the decision-makers. Nonetheless, various problems can arise, such as selection bias, exclusions of inputs and outputs that significantly affect the efficiency measurement, or incorrect input and output weights. Researchers are therefore encouraged to incorporate expert or value judgement to achieve their objectives. Managers, economists, government policymakers, and academics also have different motivations: although all are committed to improving productivity, they make different judgements when selecting variables. Considering that the application of DEA possesses both benefits and drawbacks, understanding the managerial and statistical implications of employing value judgement in input and output selections is crucial [175].

4.2 Managerial and economic implications in the input and output selection processes

This review empirically documented the approaches used to select input and output variables in hospital DEA evaluations. The healthcare sector encounters daily challenges from public policy, resulting in new organisations, laws, and technology. Managers must address these concerns by implementing practical performance evaluation and decision-making strategies, evaluated carefully from both economic and managerial perspectives. Generally, the input and output selections in DEA comprise two components: the selected methods and the selected variables, and this selection process can significantly affect the DEA outcomes. However, this review indicated that little consideration has been devoted to input and output variable selection in real-world scenarios. A DEA study often begins with an extensive list of potential variables, since in principle each resource that a DMU uses should be considered as an input variable. The assessment made by a manager or economist to justify the selection process therefore holds significant importance in practical situations.

In practice, the selection process is further complicated by the objectives of production economics, which can include profit, quality control, and customer satisfaction. Evaluating these competing factors is difficult because multiple decision-makers are involved. For example, a profit-oriented DEA assessment can conflict with customer satisfaction, and the results may not represent the production objective if the manager combines these measurements simultaneously. Based on this review, the following judgements are proposed from management and economic perspectives:

  • Establish the production objective of the analysis and ensure that all stakeholders can easily understand it
  • Utilise the existing selection approaches to the greatest extent possible
  • Introduce a managerial-level committee to evaluate the variables before deciding on the final model
  • Recognise that physical units and managerial or economic perspectives differ according to the production objective of the analysis (for example, salaries in dollars versus the number of employees)

4.3 Common DEA model parameters in hospital efficiency evaluation

4.3.1 Model type.

Significant advancements and transformations have been observed in DEA applications over time, and numerous models are now available to assess efficiency, ranging from general models to more specialised uses of DEA. Approximately 80.90% (72 of 89) of the examined articles in this review applied a radial DEA model; the models included the BCC, the CCR, and combined BCC and CCR models. These findings align with other healthcare-based reviews [17, 21, 33]. The focus of a radial DEA model is typically on the proportionate change in input or output values; slacks (excess inputs or output shortfalls still present in the model) are therefore ignored or treated as optional. Even though the radial model has various limitations, it is still commonly employed because of its fundamental nature, simplicity, and ease of application (minimal requirements on the production criteria of the DMUs) [47, 79, 176].
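
For orientation, a textbook statement (not drawn from the reviewed articles) of the input-oriented radial envelopment problem solved for each DMU o is sketched below; x_ij and y_rj denote the inputs and outputs of DMU j, theta the radial contraction factor, and lambda_j the intensity weights.

```latex
\begin{aligned}
\min_{\theta,\,\lambda}\;\; & \theta \\
\text{s.t. } & \sum_{j=1}^{n} \lambda_j x_{ij} \le \theta\, x_{io}, \qquad i = 1,\dots,m \\
             & \sum_{j=1}^{n} \lambda_j y_{rj} \ge y_{ro}, \qquad\; r = 1,\dots,s \\
             & \lambda_j \ge 0, \qquad j = 1,\dots,n \quad \text{(CCR); adding } \textstyle\sum_{j}\lambda_j = 1 \text{ gives the BCC model.}
\end{aligned}
```

The leftover input excesses, theta*x_io minus the weighted reference inputs, and the output shortfalls beyond y_ro are the slacks that a purely radial score does not capture.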

4.3.2 Model orientation.

Several factors or arguments can influence the choice of DEA orientation, such as the decision maker's level of control, the nature of production, and the researcher's purpose for the model [25, 32, 177]. Healthcare organisations or hospitals generally possess limited control over their outputs. Nevertheless, this does not imply that a DEA efficiency evaluation in a hospital must focus solely on the input orientation. This review found that 55.06% (49 of 89) of the articles used input-oriented DEA models, and previous articles have highlighted similar findings with varying proportions [17, 20, 21]. Researchers and hospital managers viewed reducing inputs while achieving a desired output level as the more appropriate measure of hospital efficiency, an outcome attributed to the limited control hospitals possess over their outputs.

4.3.3 Returns to scale assumption.

There is ongoing discussion concerning which of the two fundamental models (CRS or VRS) is superior, and hospital managers are actively searching for the most effective evaluation methods to assess the efficiency impact of various inputs and outputs on their organisations. The choice of returns-to-scale assumption for a hospital is determined by the size of the hospital [64], organisational factors [75, 90], the input and output process flow [110, 126], and technological involvement [81, 121]. Adopting an inappropriate returns-to-scale assumption can result in an excessively constrained search region for effective DMUs, so both assumptions should be examined to understand the implications of using either one [178]. This review found that most articles applied the CRS and VRS assumptions together for comparison (35.96%), followed by only the VRS assumption (32.58%). Previous articles have also demonstrated a trend towards replacing the CRS with the VRS assumption in DEA-based applications [20, 21, 33]. Specifically, most hospital efficiency assessments focused on economies of scale and considered the non-proportional relationship between inputs and outputs in the healthcare production function.
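
As a rough sketch of how the two assumptions can be compared in practice, the code below solves the input-oriented envelopment problem for each DMU under CRS (CCR) and VRS (BCC) with a generic linear-programming solver and takes the ratio of the two scores as scale efficiency. The function name and hospital data are hypothetical and are not drawn from the reviewed articles.

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_oriented(X, Y, vrs=False):
    """Input-oriented radial DEA scores for DMUs with inputs X and outputs Y.

    X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs).
    vrs=False solves the CCR (constant returns) envelopment model;
    vrs=True adds the BCC convexity constraint sum(lambda) = 1.
    """
    n, m = X.shape
    s = Y.shape[1]
    scores = np.empty(n)
    for o in range(n):
        # decision vector z = [theta, lambda_1, ..., lambda_n]
        c = np.zeros(n + 1)
        c[0] = 1.0                                    # minimise theta
        A_in = np.hstack([-X[o][:, None], X.T])       # sum_j lam_j x_ij <= theta x_io
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])   # sum_j lam_j y_rj >= y_ro
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.concatenate([np.zeros(m), -Y[o]])
        A_eq = b_eq = None
        if vrs:
            A_eq = np.hstack([[0.0], np.ones(n)])[None, :]
            b_eq = [1.0]
        bounds = [(None, None)] + [(0, None)] * n     # theta free, lambdas >= 0
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=bounds, method="highs")
        scores[o] = res.x[0]
    return scores

# hypothetical data: 5 hospitals, inputs = [beds, doctors], outputs = [outpatients, inpatients]
X = np.array([[100, 20], [150, 35], [90, 15], [200, 60], [120, 25]], dtype=float)
Y = np.array([[5000, 800], [7000, 1200], [4200, 700], [8000, 1500], [6000, 950]], dtype=float)

crs_scores = dea_input_oriented(X, Y, vrs=False)   # CCR efficiency
vrs_scores = dea_input_oriented(X, Y, vrs=True)    # BCC efficiency
scale_efficiency = crs_scores / vrs_scores         # SE = CRS score / VRS score
print(np.round(np.column_stack([crs_scores, vrs_scores, scale_efficiency]), 3))
```

Comparing the two columns, plus their ratio, is exactly the kind of CRS/VRS comparison that roughly a third of the reviewed articles report.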

4.3.4 Input and output selections.

The methodologies for input and output selection were covered in detail in this review, fulfilling the main objective of this study. Appropriate input and output selections are crucial for DEA analysis, and many previous articles have noted that the effectiveness of a DEA efficiency analysis is heavily influenced by the quality and quantity of these indicators [15–18, 20, 21, 30, 33]. Most articles (52.76%) used a staff-related input as one of the variables in the efficiency measurement. Given that human resources are significant in any organisation (including hospitals), this outcome was not surprising, and it is consistent with previous DEA-related and performance-based articles on healthcare services [2, 13].

Typically, the unit of analysis for staff-related factors is contingent on the operational dynamics of the organisation. This review observed two staff-related measures: the actual number of staff and the full-time equivalent, with various staff types ranging from clinical to non-clinical (see S3 and S4 Appendices). Among the input sub-types, the number of general beds was the most frequent (74.49%), as hospital beds are a fundamental capital input for a hospital; this factor is a key indicator for assessing hospital performance, capacity, and competency and for comparing healthcare services across countries [179–181]. Meanwhile, most articles applied production-related outputs rather than quality-related outputs, a pattern attributable to the fact that production-based data are simpler to quantify and give stakeholders a clear objective to improve upon. Healthcare managers also tend not to prioritise effectiveness (quality) over efficiency [25, 182]. Overall, this review indicated that the most common outputs were the numbers of inpatients, outpatients, and operations. Given that these outputs are the fundamental components of hospital services, their extensive utilisation was not unexpected.

4.3.5 Extended analysis.

The classic DEA model is considered insufficient on its own because of the complexity of hospital processes and the continuous efforts of researchers and practitioners to enhance healthcare efficiency assessment. The majority of recent research on healthcare efficiency assessment therefore integrates DEA with various approaches and techniques in order to address DEA's weaknesses and offer a comprehensive and accurate picture of healthcare efficiency.

Forty types of extended analysis were observed within the reviewed articles. Although each study had a different rationale for performing an extended analysis, consistent themes were found:

  • To ascertain how contextual or environmental factors affect the efficiency scores [ 86 , 108 ]
  • To quantitatively compare efficiency scores [ 52 , 63 ]
  • To resolve the issues with serially linked estimates and produce bias-corrected efficiency estimates by utilising simulated distributions to compute the indices’ standard errors and confidence ranges [ 82 , 129 ]
  • To assess healthcare facilities’ long-term performance using panel data analysis [ 76 , 133 ]
  • To ascertain the relationship between the indicators that were to be included in the DEA model for input and output [ 45 , 53 ]
  • To forecast, following the consideration of exogenous elements in the efficiency assessment, whether or not a healthcare unit should be deemed efficient [ 67 , 132 ]

Recognising these approaches, and how they were applied, is therefore necessary to understand how they helped the various researchers achieve their goals.

5. Limitations and conclusion

This systematic review comprehensively investigated the methods used to identify input and output variables for measuring hospital efficiency using DEA; to the authors' knowledge, no prior studies have been conducted on this topic. The primary objective was to offer an overview of the existing approaches, and the review also provides an update on the current application of DEA models for evaluating hospital efficiency. In total, 89 articles were reviewed and assessed thoroughly against the specified objectives, and the literature review was the method most frequently employed for selecting input and output variables in DEA. The articles utilised the literature review either as a single method or in combination with other approaches to enhance the robustness and rigour of the selection process. Considering that the selection of variables in DEA can lead to varying efficiency measurement outcomes, this process is crucial [139]. Nevertheless, no definitive approach or methodology could be identified for selecting variables (inputs and outputs) in DEA, which is at once an advantage and a disadvantage of the method [183–185].

Researchers and stakeholders should use DEA to assess the efficiency of their organisations according to their preferences, while remaining aware of the limitations and potential constraints of DEA [139, 142]. Because this review specifically examined methodologies employed in hospital settings, the scope of the findings may be restricted; alternative procedures or methods could be utilised to select input and output variables for DEA studies in different fields or from other perspectives [186]. Given that researchers and healthcare professionals aim to improve healthcare efficiency assessment, an optimal input-output selection approach should be identified, and examining past, present, and potential developments in the DEA literature is essential because of its significant impact on DEA studies. The reviewed DEA model parameters also provided no evidence of an optimal or universally fitting model, as almost all models were utilised multiple times (see S3 and S4 Appendices). This review therefore offers guidelines and methodological principles for conducting DEA studies based on established research, providing insights for hospital managers, healthcare workers, policy officials, and students on efficiency evaluation using DEA.

5.1 Registration and protocol

This study was registered at OSF Registries ( https://osf.io/registries ). All information regarding the registration and study protocol can be accessed at https://osf.io/nby9m or https://osf.io/e7mj9/?view_only=53deec8e6c6946eeaf0ea6fe2f0f212a .

Supporting information

S1 Appendix. PRISMA abstract checklist.

https://doi.org/10.1371/journal.pone.0293694.s001

S2 Appendix. PRISMA 2020 main checklist.

https://doi.org/10.1371/journal.pone.0293694.s002

S3 Appendix. Table 6 summary of 89 reviewed publications.

https://doi.org/10.1371/journal.pone.0293694.s003

S4 Appendix. Table 7 summary of 89 reviewed publications.

https://doi.org/10.1371/journal.pone.0293694.s004

S5 Appendix. Table 8 types of efficiency studied.

https://doi.org/10.1371/journal.pone.0293694.s005

S6 Appendix. Table 9 model types applied in the studies.

https://doi.org/10.1371/journal.pone.0293694.s006

S7 Appendix. Table 10 model orientation applied in the studies.

https://doi.org/10.1371/journal.pone.0293694.s007

S8 Appendix. Table 11 return to scale assumption applied in the studies.

https://doi.org/10.1371/journal.pone.0293694.s008

  • 5. Tandon A, Murray C, Lauer J, Evans DB. Measuring Overall Health System Performance for 191 Countries. 2000. https://www.who.int/publications
  • 6. World Health Organisation. The World Health Report 2000 Health Systems: Improving Performance. 2000. https://www.who.int/publications/i/item/924156198X
  • 10. Benneyan J, Ceyhan M, Sunnetci A. Data Envelopment Analysis of National Healthcare Systems and Their Relative Efficiencies. The 37th International Conference on Computers and Industrial Engineering, pp251–261. 2007.
  • 11. Cylus J, Papanicolas I, Smith PC. Health Systems Efficiency: How to make measurement matter for policy and management. World Health Organization; 2016. https://www.ncbi.nlm.nih.gov/books/NBK436888/
  • 12. Papanicolas I, Rajan D, Karanikolos M, Soucat A, Figueras J. Health system performance assessment: A framework for policy analysis. 2022. https://www.who.int/publications/i/item/9789240042476
  • 17. Giancotti M, Pipitone V, Mauro M, Guglielmo A. 20 Years of Studies on Technical and Scale efficiency in the Hospital Sector: a Review of Methodological Approaches. 2016.
  • 23. Cooper WW, Seiford LM, Zhu J. Handbook on Data Envelopment Analysis. Cooper WW, Seiford LM, Zhu J, editors. Boston, MA: Springer US; 2011.
  • 25. Ozcan YA, Tone K. Health Care Benchmarking and Performance Evaluation. Boston, MA: Springer US; 2014.
  • 27. Aljunid S, Moshiri H, Ahmed Z. Measuring Hospital Efficiency: Theory and Methods. Casemix Solutions Sdn Bhd, Kuala Lumpur; 2013.
  • 30. Eklom B, Callander E. A Systematic Review of Hospital Efficiency and Productivity Studies: Lessons from Australia, UK and Canada. 2020 [cited 3 Nov 2022].
  • 31. Fitriana, Hendrawan H. Analisis Efisiensi dengan Data Envelopment (DEA) di Rumah Sakit dan PUSKESMAS. CV. Amerta Media; 2021.
  • 32. Irwandy. Efisiensi dan Produktifitas Rumah Sakit: Teori dan Aplikasi Pengukuran dengan Pendekatan Data Envelopment Analysis. CV. Social Politic Genius (SIGn); 2019.
  • 65. Klangrahad C. Evaluation of Thailand’s regional hospital efficiency: An application of data envelopment analysis. ACM International Conference Proceeding Series. 2017;Part F131202: 104–109.
  • 97. Hung SY, Wu TH. Healthcare quality and efficiency in Taiwan. ACM International Conference Proceeding Series. 2018; 83–87.
  • 98. Ho CC, Jiang YB, Chen MS. The healthcare quality and performance evaluation of hospitals with different ownerships-demonstrated by Taiwan hospitals. Proceedings - 2017 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics, CISP-BMEI 2017. 2018;2018-January: 1–4.
  • 125. DEA-Solver-Pro. DEA-Solver-Pro Newsletter No. 20. In: https://saitech.capoo.jp/en/profile-of-dea-solverpro/ [Internet]. 2022 [cited 26 Jun 2023].
  • 138. Emrouznejad A, Cabanda E. Managing Service Productivity Using Frontier Efficiency Methodologies and Multicriteria Decision Making for Improving Service Performance. Emrouznejad A, Cabanda E, editors. Berlin, Heidelberg: Springer Berlin Heidelberg; 2014. https://doi.org/10.1007/978-3-662-43437-6
  • 145. Webster J, Watson RT. Analyzing the Past to Prepare for the Future: Writing a Literature Review. Management Information Systems Research Center, University of Minnesota. 2002;26: 1–11. https://www.jstor.org/stable/4132319
  • 149. Dalkey NC, Helmer-Hirschberg O. An Experimental Application of the Delphi Method to the Use of Experts. 1962 [cited 17 Jul 2023]. https://www.rand.org/pubs/research_memoranda/RM727z1.html
  • 154. Brans J-P. L’ingenierie de la decision, l’laboration d’instruments d’aidea la decision. In Proceedings of the Colloque sur l’Aidea la Decision. Faculte des Sciences de l’Administration, Universite Laval, Québec, QC, Canada. 1982. https://books.google.com.my/books?hl=en&lr=&id=y4rp7dDqgZcC&oi=fnd&pg=PA183&ots=pdHrJn-_W4&sig=8nuZkvcobLpQP97dewmppGobopE&redir_esc=y#v=onepage&q&f=false
  • 165. Ahmi A. Bibliometric Analysis for Beginners. UUM Press, Universiti Utara Malaysia; 2022. https://aidi-ahmi.com/index.php/bibliometric-analysis-for-beginners
  • 166. Pritchard A. Statistical bibliography or bibliometrics. Journal of Documentation. 1969.
  • 169. Agarwal R. The 5 Feature Selection Algorithms every Data Scientist should know. In: https://towardsdatascience.com/the-5-feature-selection-algorithms-every-data-scientist-need-to-know-3a6b566efd2 [Internet]. 2019 [cited 18 Jul 2023].
  • 170. Rasulov Z. Feature Selection—Filter Method. In: https://medium.com/analytics-vidhya/feature-selection-filter-method-43f7369cd2a5 [Internet]. 2021 [cited 18 Jul 2023].

REVIEW article

Positive psychology and employee adaptive performance: systematic literature review.

Guihong Tang

  • Department of Management and Marketing, Faculty of Business and Economics, University of Malaya, Kuala Lumpur, Malaysia

Adaptive performance is increasingly confronted with new challenges as today's society changes constantly. This raises questions about which factors affect employees' adaptive performance and what their inner psychological mechanisms are. Positive psychology and adaptive performance are important concepts in the domains of organizational behavior and human resource development, yet the literature lacks a systematic review of their relationship. Our research seeks to explore the inner mechanisms of employee adaptive performance through the prism of positive psychology, including Psychological Capital and PERMA (Positive Emotions, Engagement, Relationships, Meaning and Accomplishment). We selected 27 papers out of 382 retrieved from the Web of Science and Scopus databases using keywords associated with the two concepts, and used the 2020 PRISMA flow for paper screening. By analyzing the underpinning theories, the causation, and the measurements, we found that there is a complex and nuanced relationship between positive psychology and adaptive performance, and most of the research to date suggests that positive psychology components improve employee adaptive performance. This study maps the current knowledge at the nexus of positive psychology and adaptive performance to identify existing gaps and potential for further investigation.

1 Introduction

The current global and technical environment has become more complex, confusing, and dynamic. Working in an era of complex demands, professionals need to be prepared to use their extensive experience bases, develop new knowledge (Mylopoulos et al., 2018) and quickly acquire new skills when required. In most areas, people will have to keep coming up with new ideas and changing the way things are seen, and people who can handle these changes are known as "adaptive experts" in the literature (van Tartwijk et al., 2023). For example, in the food and beverage industry, it is imperative for enterprises and their employees to respond swiftly to the dynamic nature of customer demands and preferences in order to optimize customer satisfaction and gain a competitive edge (Reig-Botella et al., 2024). Additionally, since the COVID-19 pandemic crisis has made the workplace more uncertain and unpredictable, it is essential to examine potential approaches to enhance employees' motivation and adaptive performance (Junça-Silva and Menino, 2022).

Adaptive performance, defined as "employees' ability to adapt to fast-changing work conditions" (Ilgen and Pulakos, 1999), has therefore attracted research aimed at better understanding the capabilities and performance of employees in the face of ever-evolving circumstances (Jundt et al., 2015; Park and Park, 2019). This line of research is anticipated to offer guidance to employers on how to foster the employee expertise and capacity development most appropriate for new work environments (Jundt et al., 2015).

Many factors influence adaptive performance. In their review article, Park and Park (2019) break these factors down into four categories: individual, job, group and organization. However, as talent becomes increasingly crucial in today's business environment, the importance of the individual is growing, since innovation, productivity and customer satisfaction all depend on talent (Sondhi and Nirmal, 2013). In numerous disciplines, the effect of experts' motivation is becoming more widely acknowledged (Wang and Wang, 2021), as it is crucial and cost-effective. Employees' motivation is significantly determined by their psychological states (Chintalapti, 2021); therefore, improving employees' positive psychology is essential in today's workplace.

More than two decades ago, in a paper titled "The Future of Positive Psychology," Seligman and Csikszentmihalyi (2000) called for positive psychology to include the study of human well-being, happiness, excellence and optimal human functioning. There is no denying the importance of positive psychology in improving organizational performance, and it has become increasingly popular in recent years (Seligman et al., 2005). Positive psychology emphasizes human qualities, such as positive attributes and individual strengths, and is widely accepted as having a beneficial effect in fostering an organizational culture that appreciates the potential of individuals (Peterson and Spiker, 2005). Although positive psychology has seen a surge in popularity and has been utilized at a variety of levels in recent years, its application to the workplace and its impact on talent's work performance have not been as widely explored. It would be highly intriguing to investigate the potential and the numerous advantages that positive psychology can bring to adaptive performance. Additionally, there is a lack of consensus regarding the methodology used, the theoretical frameworks adopted, and the location and identity of the investigations' subjects (Vada and Prentice, 2022). Therefore, this research conducts a systematic literature review (SLR) of research on adaptive performance through the prism of positive psychology and identifies literature gaps for future studies.

We extract and analyze relevant findings from existing papers, a straightforward and well-organized process for searching for and locating peer-reviewed publications on connected research issues in the same field of study (Kraus et al., 2022). An overview is employed to lay out the evidence that is currently available and to pinpoint literature gaps (Lunny et al., 2018). Our overview summarizes the research on employee adaptive performance in the area of positive psychology.

The research questions are:

RQ1. What are the theories underlying positive psychology’s application to adaptive performance?

RQ2. What is the causality between positive psychology and adaptive performance?

RQ3. What positive psychology factors and adaptive performance measurements are used in this type of research?

In order to evaluate employee adaptive performance research within the context of positive psychology, we structure our study in a systematic manner. RQ1 is essential for comprehending the underlying presumptions utilized to create the adaptive performance conceptualization as it sheds light on the theoretical underpinnings of positive psychology and the methods employed by researchers to establish these linkages. RQ2 is important because determining the cause-effect relationship is an essential part of method development for employee adaptive performance research. RQ3 is a critical factor in determining the optimal selection of positive psychology constructs for adaptive performance. This research utilizes a systematic literature review methodology to address these questions. The remaining portions of this study are organized as follows: The research methods used in the literature review are described in Section 2, the bibliographic commentary of the prior literature is presented in Section 3, the content of the literature is analyzed in Section 4, the study is concluded with a summary of the findings in Section 5, and the limitations, identified gaps in the literature, and areas for future research are listed in Section 6.

2 Research method

2.1 Inclusion and exclusion criteria of the study

By employing a systematic literature review, this research builds on the work of Tranfield et al. (2003) and Xiao and Watson (2019). "Adaptive performance" here refers to the capacity to adapt and modify one's behavior as a result of changes in circumstances or information; it is characterized by adaptability, the ability to learn from experiences, and the capacity to adjust to new situations (Zheng et al., 2020; Liu et al., 2021). We consider "positive psychology" to encompass all facets of one's inner resources, such as virtues, psychological strengths, self-discipline, resilience, self-efficacy, optimism, hope and self-confidence (Chintalapti, 2021). Numerous contexts, such as psycho-oncology, education, and the workplace, can benefit from the use of positive psychology (Galanakis and Tsitouri, 2022). In line with the research objectives, the inclusion criteria of this study are papers that investigate the relation between positive psychology and adaptive performance in the workplace, including the keywords "psychological capital" ("Optimism", "Resilience", "Hope", "Self-efficacy") and "PERMA" ("positive emotion", "engagement", "relationship", "meaning", "accomplishment") and their dimensions. Duplicates, papers not written in English, conference reviews, papers with unrelated substance, and so forth are excluded.

2.2 Search strategy

This research uses the Web of Science and Scopus databases to compile articles on positive psychology and adaptive performance. These two databases are reputable platforms for this study since they allow researchers to map high-quality research papers from many fields. In addition to their user-friendly nature, they provide access to a comprehensive set of research article profiles through discipline-specific keywords and search terms (Tranfield et al., 2003). The search string ("positive psychology" OR "PERMA" OR "positive emotion" OR "engagement" OR "relationship" OR "meaning" OR "accomplishment" OR "psychological capital" OR "PsyCap" OR "Optimism" OR "Resilience" OR "Hope" OR "Self-efficacy") AND ("work adaptive performance" OR "employee adaptive performance" OR "adaptive performance") was applied to titles and abstracts and covers all the terms linked to positive psychology and adaptive performance. The papers were collected on 6 July 2024. To find the most relevant publications, we do not employ additional search parameters such as the publication date. Furthermore, in accordance with the 2020 version of the PRISMA framework (Haddaway et al., 2022), we use the same keywords to search for pertinent publications in prior studies and registers.

2.3 Study selection process

With the search string set, we obtain 185 papers from Web of Science and 197 papers from Scopus, so the selection process starts with a total of 382 publications. In step 1, 131 papers are found to be duplicates; after their removal, 251 remain. In step 2, records are screened by title and abstract; the publications that most closely match the research goals and have a high probability of contributing to the RQs are identified at this stage. The terms "adaptive performance" and "positive psychology," as well as their synonyms, bring up 47 papers after this initial screening. Upon closer examination, 204 documents are excluded because no abstract could be retrieved, or because they are conference papers or conference reviews, or address physics and engineering, medical research, migrants' adaptation, leadership, training or sports; none of these apply to this study, as they do not concern individual adaptive performance in the workplace. Table 1 summarizes the 204 publications categorized as inappropriate:


Table 1. Category of inappropriate papers.

After the abstract filtering, we select the 47 articles most relevant to this study. The search results are then filtered based on the inclusion and exclusion criteria. We restrict our selection to complete English-language articles set in the workplace. This implies excluding studies without full texts, not written in English, not set in the workplace, or not published in a ranked journal (that is, a journal that cannot be found in the SCImago Journal and Country Rank). Papers whose content explains the relationships between positive psychology and adaptive performance, including their dimensions, and answers the research questions are considered to meet the inclusion criteria. Following this last filter, we obtain the 27 most pertinent articles, published between 1989 and 2024 (Table 2).


Table 2. Inclusion and exclusion process for the full-text paper selection.

In accordance with the requirements of the 2020 version of PRISMA, 382 records were identified from the databases; prior studies and registers were screened and checked as well, and 43 papers were considered related after keyword searching, but none of them applied to this research after full-content examination. Six papers identified through citations appeared related to the topic, but none were selected, owing to duplication or lower-than-expected relevance. Table 2 details the whole inclusion and exclusion process, and Figure 1 shows the decision-making process of paper selection in PRISMA.
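
As a quick sanity check on the screening counts reported above, the short sketch below reproduces the arithmetic of the flow; the variable names are ours, and the figures are simply those stated in the text.

```python
# screening counts as reported in the text
identified = 185 + 197                 # Web of Science + Scopus records
deduplicated = identified - 131        # duplicates removed in step 1
screened_in = deduplicated - 204       # records left after title/abstract screening
included = 27                          # final set of articles

assert identified == 382 and deduplicated == 251 and screened_in == 47
print(screened_in - included)          # -> 20 full-text articles excluded in the last filter
```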


Figure 1. Publication PRISMA flow diagram (Haddaway et al., 2022).

3 Bibliographic analysis

3.1 The positive psychology and adaptive performance publication trends

We find that the existing research on positive psychology and adaptive performance was conducted in China (11.11%, n = 3), India (11.11%, n = 3), Malaysia (7.41%, n = 2), the United States (7.41%, n = 2), Turkey (7.41%, n = 2) and Korea (7.41%, n = 2), while the rest (48.15%, n = 13) come from Finland, France, Dubai, Indonesia, the Netherlands, Pakistan, Saudi Arabia, the United Kingdom, Spain, South Africa and elsewhere. This reveals that the topic has attracted attention around the world, although the number of publications is still small. With the majority of studies appearing between 2020 and 2022, the focus on positive psychology and adaptive performance has been continuously increasing since 1989 (Figure 2).


Figure 2. Distribution of papers based on publication year.

Our results are based on the previously mentioned Web of Science and Scopus search date of 6 July 2024; consequently, we interpret the publication frequencies presented in this work with caution. Based on the most recent ranking supplied by the SJR (SCImago Journal and Country Rank) Best Quartile for 2023, we summarize the journals in Table 3. According to the SJR Best Quartile report, the majority of the articles we review (51.85%, n = 14) are published in journals with a WOS-Q2 index, while the remaining papers are published in journals with a WOS-Q1 index (25.93%, n = 7) or a Scopus-Q4 index (11.11%, n = 3), plus 1 paper each with a WOS-Q3, WOS-Q4 and Scopus-Q3 index.

The 27 peer-reviewed studies (Table 4) are published in 22 different journals. The majority of the articles have been published in top management journals, including Human Performance, the Journal of Hospitality and Tourism Management and the Journal of Managerial Psychology. As anticipated, research with a global setting is common. This indicates that change is becoming more widespread around the world and that motivating employees to adapt to these challenges is increasingly essential (Yang et al., 2022; Akyürek et al., 2023). It often involves the use of engagement, self-efficacy, meaning and other factors to achieve objectives, leading to adaptive performance among professionals, bringing cost efficiency to organizations (Kossek and Perrigino, 2016), and eventually boosting the development of companies (Reig-Botella et al., 2024).


Table 3. Percentage of journal sources.


Table 4. Existing articles and their ranking.

3.2 Research design distribution

The primary objective of the majority of these papers is to investigate and evaluate adaptive performance via positive psychology. Consequently, it is to be expected that quantitative research techniques predominate in the literature: 92.59% of the 27 studies examine adaptive performance using a quantitative approach (n = 25), including a variety of techniques such as explanatory and predictive models. The related research is largely based on archival methods, with a few experiments. One paper (3.70%) uses qualitative methods, and the remaining paper (3.70%) uses a mixed-method analysis. These papers focus on the mechanism of positive factors and how they affect employee behavior and adaptive performance.

3.3 Distribution based on dimensions

Both psychological capital and PERMA are positive psychological resources, conceptualized as multidimensional and including various psychological aspects (Martínez et al., 2019). In this study, "hope, resilience, self-efficacy and optimism" are the four dimensions of psychological capital, and "positive emotions, engagement, relationships, meaning and accomplishment" are the dimensions of PERMA. We identify 7 positive psychology factors discussed 30 times in the selected articles (see Figure 3); some papers use more than one element to conduct their research. The most commonly used elements are engagement (n = 11), self-efficacy (n = 8), meaning (n = 4), resilience (n = 3) and psychological capital (n = 2), with another 2 papers each on optimism and positive emotion. The hospitality industry, digital technology-based industries, aerospace, banks and travel agencies, healthcare, railways, IT and military organizations are the industries separately addressed in different papers. We take this approach to understand adaptive performance in various working contexts more thoroughly, and the majority of the previous studies confirm that positive psychology has a significant effect on employees' behavior and adaptive performance. This study finds that individuals who possess high engagement, self-efficacy or other facets related to positive psychology can achieve better adaptive performance and work outcomes in their careers. Adaptive performance has multiple dimensions; according to Pulakos et al. (2002), it has been specified into 8 dimensions. Of the selected 27 records, 26 articles adopt a quantitative methodology (including 1 mixed-method analysis); 73.08% of these (n = 19) analyze adaptive performance as uni-dimensional, while the remaining 26.92% (n = 7) treat it as multi-dimensional.


Figure 3. Positive psychology factors discussed in the papers.

4 Content analysis

4.1 Underpinning theories

There are 18 theories or models that are either cited or used in the 27 peer-reviewed studies (Table 5). Not all of these theories arise from the field of positive psychology; for example, the most frequently used theory is conservation of resources theory, which describes the driving mechanisms responsible for a variety of stress-related responses and coping strategies (Liao et al., 2022). The other theories frequently employed in positive psychology are self-determination theory, social exchange theory, job demands-resources theory and self-efficacy theory.


Table 5. Theories used in the peer-reviewed articles.

Conservation of resources (COR) theory accounts for 18% of the theories used in the peer-reviewed articles. It elucidates the human psychological motivation to protect, acquire, and utilize resources through the continual alteration of a resource's internal mechanism, which opens up new possibilities for resource depletion and provides a novel perspective for addressing and recognizing stress-related and psychological issues (Tang et al., 2022). COR theory promotes the development of psychological capital to serve as a conduit for, or enrichment of, the development of other important resources (Al-zyoud and Mert, 2019). Drawing on COR, Luo et al. (2021) explore the formation mechanism of adaptive performance and demonstrate that psychological capital has a positive effect on employees' adaptive performance. Another study, conducted by Van den Heuvel et al. (2020) and building on conservation of resources theory, predicts that work engagement trajectories during change are crucial for successful adaptation.

Three of the selected papers in this review use self-determination theory, accounting for 9% of the total. According to this theory, various goal-directed behavioral norms that reflect psychological states influence motivation, and motivation may be intrinsic or extrinsic (Diener, 2009). Extrinsic motivation seems to be less helpful than intrinsic motivation when it comes to an individual's optimal functioning, such as happiness and performance. Abdul Hamid (2022) explains that, based on self-determination theory, when an individual's intrinsic drive and well-being are encouraged, inherent needs such as competence, autonomy, and relatedness can be addressed, helping people find meaning in their job and positively affecting their work and adaptive performance. Additionally, self-determination theory suggests that although behavioral restrictions are distinct, they are arranged along a single continuum of self-determination (Junça-Silva and Menino, 2022).

The broaden-and-build theory of positive emotions (Vakola et al., 2021) is another important theory used in positive psychology analysis. It states that certain positive emotions can expand a person's ability to think and act in the present moment (Bhambri, 2022). This broadened perspective leads to the building of personal resources such as resilience, optimism, and social connections (Xiang and Yuan, 2021) and increases flexibility, helping people approach challenges from different angles and find innovative solutions. Empirical studies have also demonstrated that positive emotions can assist individuals in managing difficult situations (Sriwidharmanely et al., 2021).

Other theories, including career motivation theory (Kossek and Perrigino, 2016), self-regulation theory (Bruch et al., 1989) and self-efficacy theory (Şahin and Gürbüz, 2014), focus on how people motivate and regulate their own behavior in order to achieve their goals. In addition, other essential concepts employed to construct research frameworks are person-environment fit theory, job demands-resources theory, social exchange theory (Elshaer and Saad, 2022) and the Minnesota theory of work adjustment (Griffin and Hesketh, 2003; Lowmiller, 2022). Six of the 27 papers were found to apply no specific theory in their studies.

4.2 Direction of causality between positive psychology and adaptive performance

In workplace adaptive performance research, researchers employ several methods based on theoretical underpinnings to describe the impact of positive psychology. Research has examined the correlation between adaptive performance and work-related psychological states and has demonstrated a positive correlation between work-related psychological health and adaptive performance (Rowe et al., 2023). First, psychological capital is the state of mind that motivates and encourages people to reach their full potential; employees who experience positive emotions are able to expand their cognitive abilities, resulting in more imaginative and exploratory thought and action (Luo et al., 2021). Second, employees with high engagement are more likely to remain motivated despite a decrease in resources, are willing to go above and beyond their duties to meet the objectives of their organization, and are able to compensate for temporary shortages of resources by drawing from larger resources (Bakker and Oerlemans, 2016; Vakola et al., 2021; Kaltiainen and Hakanen, 2022). Vakola et al. (2021) reveal that individuals who are highly engaged in their work have an increased likelihood of adapting to organizational changes, as opposed to those who are more ambivalent. Third, individuals who view themselves as highly efficacious tend to put in more effort, which, when directed correctly, leads to successful results (Şahin and Gürbüz, 2014; Mujeeb et al., 2021). In contrast, individuals with a low level of self-efficacy are more likely to give up in challenging circumstances and to restrict their participation in similar activities (Bruch et al., 1989). Last but not least, job meaningfulness is based on the notion that individuals experience a positive sense of purpose in their work; such individuals perceive work as a primary source of meaning and believe that their work contributes to a greater purpose. People search for meaning in their work based on their experience, such as acknowledgement of their presence, their sense of belonging, their relationships, who they are, and their worth and contribution to the work (Van den Heuvel et al., 2020; Abdul Hamid, 2022; Budhiraja and Rathi, 2022; Junça-Silva and Menino, 2022). Hence, job meaningfulness increases employees' sense of purpose and value, enabling them to rise to the challenge and fostering adaptive performance.

4.3 Positive psychology and adaptive performance measurements

4.3.1 Measurement of positive psychology variables

A total of seven positive psychology facets are mentioned and analyzed 34 times across the 27 peer-reviewed articles ( Table 6 ), most frequently engagement ( n  = 13), self-efficacy ( n  = 9), and positive psychology in general ( n  = 2). Work engagement is a state of contentment and satisfaction associated with work, characterized by three dimensions: vigor, dedication, and absorption ( Schaufeli et al., 2002 ). In a study conducted in Indonesia ( Nandini et al., 2022 ), participants answered a series of questions using the nine-item Utrecht Work Engagement Scale (UWES-9). Among the selected papers, Van den Heuvel et al. (2020) also adapt the Utrecht Work Engagement Scale, but measure it with six items (two per subscale). Another frequently used facet is self-efficacy: an empirical assessment in France ( Joie-La Marle et al., 2023 ) uses a 10-item scale designed to measure adaptation and coping abilities, particularly in relation to unforeseen situations ( Luszczynska et al., 2005 ). The original English version of the scale was translated into French through a back-translation process because of difficulties in understanding the existing French version, and participants rated the items on a scale from 1 (absolutely false) to 4 (absolutely true) ( Joie-La Marle et al., 2023 ). Griffin and Hesketh (2003) adapted a 14-item scale to measure participants' self-efficacy for adaptive behavior; participants indicated their level of confidence in achieving each of the behaviors at work, ranging from 1 (no confidence) to 5 (very confident). In their review, Jundt et al. (2015) summarize that, according to Griffin and Hesketh (2003) , role-wide self-efficacy was positively associated with self-reported adaptation frequency in the preceding month. Mujeeb et al. (2021) adapted a 7-item scale to measure self-efficacy and confirmed that adaptive performance and task performance are not directly affected by servant leadership; rather, self-efficacy has a beneficial effect and acts as a mediator in their relationship. However, according to Pulakos et al. (2002) and Griffin and Hesketh (2003) , self-efficacy for each of the eight dimensions was positively correlated with supervisor ratings of overall adaptive performance but did not demonstrate incremental validity over cognitive ability and personality ( Jundt et al., 2015 ).

Table 6. Positive psychology facets mentioned in the papers.
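To make the scoring of these engagement scales concrete, the following minimal sketch (not drawn from any of the reviewed studies) shows how Likert-type item responses can be aggregated into the three engagement subscales named by Schaufeli et al. (2002) and a total score. The response matrix, the rating range, and the item-to-subscale assignment are hypothetical and purely illustrative.

import numpy as np

# Hypothetical responses: 5 participants x 9 items, Likert-type ratings.
# The item-to-subscale mapping below is illustrative only.
responses = np.array([
    [4, 5, 4, 3, 4, 5, 2, 3, 4],
    [5, 5, 5, 4, 4, 4, 4, 5, 4],
    [2, 3, 2, 3, 2, 3, 1, 2, 2],
    [4, 4, 3, 5, 5, 4, 3, 4, 3],
    [3, 3, 4, 4, 3, 3, 3, 3, 4],
])

subscales = {
    "vigor":      [0, 1, 2],
    "dedication": [3, 4, 5],
    "absorption": [6, 7, 8],
}

# Per-participant subscale scores = mean of the items in each subscale.
scores = {name: responses[:, idx].mean(axis=1) for name, idx in subscales.items()}

# Total engagement score = mean of all nine items.
total = responses.mean(axis=1)

for name, vals in scores.items():
    print(f"{name:>10}: {np.round(vals, 2)}")
print(f"{'total':>10}: {np.round(total, 2)}")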

4.3.2 Measurement of adaptive performance variables

Pulakos et al. (2002) studied 1,000 significant incidents from 25 job classifications in the U.S. Army and derived a scale of adaptive work performance with eight dimensions. Among the 27 reviewed articles, Luo et al. (2021 , 2022 ) adapted measurement items from previous studies into a preliminary questionnaire consisting of 56 statements rated from 1 to 5 (1 indicating strong disagreement and 5 indicating strong agreement), plus participant demographics. The scale was translated using the translation/back-translation method ( McGorry, 2000 ): the English version was translated into Chinese with the help of independent bilingual experts and then re-translated into English to guarantee the quality of the translation. To assess the readiness of the preliminary instrument, it was pretested with four hotel human resources directors and over 50 frontline hotel employees ( Luo et al., 2022 ). Van den Heuvel et al. (2020) use two behavioral constructs to measure adaptive performance, based on adaptive work role performance and extra-role performance. Korean researchers ( Park et al., 2020 ) adapted the shorter version proposed by Charbonnier-Voirin et al.: the original scale consists of 19 items covering five adaptive performance domains, and they selected three items from each of the five subscales, resulting in a total of 15 items with a scale reliability (Cronbach's α) of 0.886. Turkish scholars used the original eight-dimension scale to measure adaptive performance ( Şahin and Gürbüz, 2014 ), and two-thirds of the articles in this study measure adaptive performance as unidimensional ( Bruch et al., 1989 ; Pradhan et al., 2017 ; Murali and Aggarwal, 2020 ; Abdullahi et al., 2021 ; Elshaer and Saad, 2022 ).
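Scale reliabilities like the Cronbach's α of 0.886 reported above can be reproduced for any item-response matrix with the standard formula α = k/(k − 1) · (1 − Σ s_i² / s_total²). The snippet below is a generic sketch using synthetic data, not the data of any reviewed study.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)        # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of the scale total
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 15-item adaptive performance responses for 6 participants.
rng = np.random.default_rng(0)
base = rng.integers(2, 5, size=(6, 1))                     # shared "true" level per person
data = np.clip(base + rng.integers(-1, 2, size=(6, 15)), 1, 5)

print(round(cronbach_alpha(data), 3))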

5 Conclusion

5.1 Overall outcomes

In this study, we provide an SLR to evaluate and synthesize the research stream on adaptive performance in a workplace context. Descriptive and substantive results are presented in the bibliographic analysis and the content analysis, respectively. The papers come from the Web of Science and Scopus databases; the only restriction was the workplace context, with no other filters, in order to maximize the search results. We screened the papers and examined them across the full publication history of this research stream.
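As an illustration of the retrieval and deduplication step, the sketch below shows one common way to merge Web of Science and Scopus exports before manual screening. The file names and column names are assumptions for illustration; this is not the exact procedure used in this review.

import pandas as pd

# Hypothetical database exports; file and column names are assumptions.
wos = pd.read_csv("wos_export.csv")        # expected columns: Title, DOI, Year, Source
scopus = pd.read_csv("scopus_export.csv")  # expected columns: Title, DOI, Year, Source

records = pd.concat([wos.assign(db="WoS"), scopus.assign(db="Scopus")], ignore_index=True)

# Normalize matching keys so the same paper is recognized across databases.
records["title_key"] = (
    records["Title"].str.lower().str.replace(r"[^a-z0-9 ]", "", regex=True).str.strip()
)
records["doi_key"] = records["DOI"].str.lower().str.strip()
# Fall back to the normalized title when a record has no DOI.
records["match_key"] = records["doi_key"].fillna(records["title_key"])

deduped = records.drop_duplicates(subset="match_key")
print(f"{len(records)} records retrieved, {len(deduped)} unique after deduplication")
deduped.to_csv("records_for_screening.csv", index=False)

In practice, the deduplicated list would then go through title/abstract and full-text screening against the inclusion criteria.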

As a result, 27 papers published from 1989 to 2024 were selected for the systematic literature review as the most appropriate papers to address our research questions. This study draws upon existing research findings regarding the relationship between positive psychology and employees' adaptive performance at work. Researchers have used conservation of resources theory, self-determination theory, person-environment fit theory, and other theories to explain the relationship between positive psychology facets and adaptive performance. In addition, this study analyzes the causal relationships between the constructs and reveals the underlying logic of why an individual's positive psychology can affect their adaptive performance and work outcomes. Measurements of the variables are collected and compared across contexts. The study finds a significant relationship between positive psychology and employees' adaptive performance: specifically, the antecedents positive emotion, engagement, meaning, psychological capital, resilience, optimism, and self-efficacy improve employees' adaptive performance, with empirical evidence drawn from the banking, IT, hotel, and food and beverage sectors, among others.

5.2 The implications for management practice

The practical implication of this study is that individual positive psychology can play an important role in helping employees adapt effectively to a changing working environment. Employees who are engaged, optimistic, or self-efficacious tend to be gregarious, determined, and committed, which may give them the positive energy they need to adapt to change ( Van den Heuvel et al., 2020 ). Recognizing that positive psychology is a critical approach, and that work engagement and self-efficacy develop through a prolonged and continuous process, managers must grasp their pivotal motivational role in fostering positive psychological states and thereby influencing performance outcomes. It is important for employers to proactively provide employees with training, career opportunities, and rewards, fostering a sense of obligation that prompts higher levels of adaptive performance ( Isah Leontes and Hoole, 2024 ).

In conclusion, the connection between positive psychology and adaptive performance is evident and multifaceted. The majority of the existing research indicates that positive psychology elements have a beneficial effect on employee adaptive performance; however, further empirical research is necessary to determine the extent to which single or multiple settings affect individual adaptive performance. It is critical to consider each of these elements, and cross-disciplinary research is essential to further understand the relationship between positive psychology and adaptive performance.

6 Limitations and future research

Despite the increasing prevalence of positive psychology and research on employee performance, there is still a lack of research on some aspects of positive psychology, such as “relationships,” “hope” and “accomplishment.” In addition, further empirical studies may be necessary to develop more reliable scales for certain components of the construct, such as self-efficacy, optimism, hope and resilience.

The selection of search terms and the scope of the search are limitations of the systematic literature review approach. In this review, only psychological capital, PERMA, and their subsidiary characteristics identified in prior research were searched for. Only peer-reviewed publications in academic journals published in English are included in this assessment, which might have excluded pertinent material published in other languages or sources.

In light of the limitations and conclusions of this review, the following areas of future research are proposed. Additional positive psychology concepts should be covered in further reviews, such as well-being, happiness, wellness, and peace of mind. In this study, we consider adaptive performance as the dependent variable; researchers may examine in subsequent studies whether adaptive performance has a reciprocal effect on positive psychology. Negative emotions could also be employed as an inverse construct to investigate the topic from a different perspective. Academic journals published in languages other than English, and a wider range of sources, may also be included in the inclusion criteria. Future research could also search for relevant articles whose full text could not be obtained at this time. Finally, investigations are encouraged in different contexts, such as different countries and regions, diverse cultures, and various industries.

Author contributions

GT: Writing – original draft, Data curation. RA: Writing – review & editing, Supervision, Formal analysis. SO: Writing – review & editing, Supervision, Methodology.

Funding

The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Abdul Hamid, R. (2022). The role of employees’ technology readiness, job meaningfulness and proactive personality in adaptive performance. Sustain. For. 14:15696. doi: 10.3390/su142315696

Abdullahi, M., Raman, K., and Solarin, S. (2021). Effect of organizational culture on employee performance: a mediating role of employee engagement in Malaysia educational sector. Int J Supply Operat Manage 8, 232–246. doi: 10.22034/IJSOM.2021.3.1

Akyürek, S., Can, Ü., and Kiliçalp, M. (2023). The mediating role of work engagement on the relationship between interpersonal adaptability and dealing with uncertain and unpredictable work situations. Eur J Tour Hospital Recreat 13, 142–153. doi: 10.2478/ejthr-2023-0012

Al-zyoud, M. F., and Mert, I. S. (2019). Does employees’ psychological capital buffer the negative effects of incivility? Euro Med J Bus. 14, 239–250. doi: 10.1108/EMJB-03-2018-0021

Bakker, A. B., and Oerlemans, W. G. (2016). Momentary work happiness as a function of enduring burnout and work engagement. J. Psychol. 150, 755–778. doi: 10.1080/00223980.2016.1182888

Bhambri, S. (2022). Effect of positive emotions on mental health .

Bruch, M., Chesser, E. S., and Meyer, V. (1989). The role of evaluative self-schemata in cognitive processing and performance: the impact on self-efficacy, self-evaluation and task outcome. Cogn. Behav. Ther. 18, 71–84. doi: 10.1080/16506078909455847

Budhiraja, S., and Rathi, N. (2022). Continuous learning during crises: achieving change-efficacy, meaningful work and adaptive performance. Int. J. Product. Perform. Manag. 72, 2317–2334.

Chintalapti, N. R. (2021). Impact of employee motivation on work performance. ANUSANDHAN–NDIM's J Bus Manage Res 3, 24–33. doi: 10.56411/anusandhan.2021.v3i2.24-33

Diener, E. (2009). Subjective well-being: the science of well-being. Soc Indic Res Series 37, 11–58. doi: 10.1007/978-90-481-2350-6_2

Elshaer, I. A., and Saad, S. K. (2022). Learning from failure: building resilience in small-and medium-sized tourism enterprises, the role of servant leadership and transparent communication. Sustain. For. 14:15199. doi: 10.3390/su142215199

Galanakis, M. D., and Tsitouri, E. (2022). Positive psychology in the working environment. Job demands-resources theory, work engagement and burnout: a systematic literature review. Front. Psychol. 13:1022102. doi: 10.3389/fpsyg.2022.1022102

Griffin, B., and Hesketh, B. (2003). Adaptable behaviours for successful work and career adjustment. Aust. J. Psychol. 55, 65–73. doi: 10.1080/00049530412331312914

Haddaway, N. R., Page, M. J., Pritchard, C. C., and McGuinness, L. A. (2022). PRISMA2020: an R package and shiny app for producing PRISMA 2020-compliant flow diagrams, with interactivity for optimised digital transparency and open synthesis. Campbell Syst. Rev. 18:e1230. doi: 10.1002/cl2.1230

Ilgen, D. R., and Pulakos, E. D. (1999). The changing nature of performance: Implications for staffing, motivation, and development . San Francisco, CA: Jossey-Bass Inc., Publishers.

Isah Leontes, N., and Hoole, C. (2024). Bridging the gap: Exploring the impact of human capital management on employee performance through work engagement. Admin Sci 14:129. doi: 10.3390/admsci14060129

Joie-La Marle, C., Parmentier, F., Weiss, P.-L., Storme, M., Lubart, T., and Borteyrou, X. (2023). Effects of a new soft skills metacognition training program on self-efficacy and adaptive performance. Behav. Sci. 13:202. doi: 10.3390/bs13030202

Junça-Silva, A., and Menino, C. (2022). How job characteristics influence healthcare workers’ happiness: a serial mediation path based on autonomous motivation and adaptive performance. Sustain. For. 14:14251. doi: 10.3390/su142114251

Jundt, D. K., Shoss, M. K., and Huang, J. L. (2015). Individual adaptive performance in organizations: a review. J. Organ. Behav. 36, S53–S71. doi: 10.1002/job.1955

Kaltiainen, J., and Hakanen, J. (2022). Fostering task and adaptive performance through employee well-being: the role of servant leadership. BRQ Bus. Res. Q. 25, 28–43. doi: 10.1177/2340944420981599

Kossek, E. E., and Perrigino, M. B. (2016). Resilience: a review using a grounded integrated occupational approach. Acad. Manag. Ann. 10:878. doi: 10.5465/19416520.2016.1159878

Kraus, S., Breier, M., Lim, W. M., Dabić, M., Kumar, S., Kanbach, D., et al. (2022). Literature reviews as independent studies: guidelines for academic practice. Rev. Manag. Sci. 16, 2577–2595. doi: 10.1007/s11846-022-00588-8

Liao, H., Huang, L., and Hu, B. (2022). Conservation of resources theory in the organizational behavior context: Theoretical evolution and challenges. Adv Psychol Sci 30, 449–463. doi: 10.3724/SP.J.1042.2022.00449

Liu, Y., Yao, D., Li, H., and Lu, R. (2021). Distributed cooperative compound tracking control for a platoon of vehicles with adaptive NN. IEEE Trans Cybernetics 52, 7039–7048. doi: 10.1109/TCYB.2020.3044883

Lowmiller, H. (2022). Exploring satisfaction among research administrators at their current place of employment . Doctoral dissertation, Johns Hopkins University.

Lunny, C., Brennan, S. E., McDonald, S., and McKenzie, J. E. (2018). Toward a comprehensive evidence map of overview of systematic review methods: paper 2—risk of bias assessment; synthesis, presentation and summary of the findings; and assessment of the certainty of the evidence. Syst. Rev. 7, 1–31. doi: 10.1186/s13643-018-0784-8

Luo, C.-Y., Tsai, C.-H., Chen, M.-H., and Gao, J.-L. (2021). The effects of psychological capital and internal social capital on frontline hotel employees’ adaptive performance. Sustain. For. 13:5430. doi: 10.3390/su13105430

Luo, C.-Y., Tsai, C.-H. K., Su, C.-H. J., Kim, H. J., Gao, J.-L., and Chen, M.-H. (2022). How does hotel employees’ psychological capital promote adaptive performance? The role of change readiness. J. Hosp. Tour. Manag. 51, 491–501. doi: 10.1016/j.jhtm.2022.05.006

Luszczynska, A., Scholz, U., and Schwarzer, R. (2005). The general self-efficacy scale: multicultural validation studies. J. Psychol. 139, 439–457. doi: 10.3200/JRLP.139.5.439-457

Martínez, I. M., Youssef-Morgan, C. M., Chambel, M. J., and Marques-Pinto, A. (2019). Antecedents of academic performance of university students: academic engagement and psychological capital resources. Educ. Psychol. 39, 1047–1067. doi: 10.1080/01443410.2019.1623382

McGorry, S. Y. (2000). Measurement in a cross-cultural environment: survey translation issues. Qual. Mark. Res. Int. J. 3, 74–81. doi: 10.1108/13522750010322070

Mujeeb, T., Khan, N. U., Obaid, A., Yue, G., Bazkiaei, H. A., and Samsudin, N. A. (2021). Do servant leadership self-efficacy and benevolence values predict employee performance within the banking industry in the post-COVID-19 era: using a serial mediation approach. Admin Sci 11:114. doi: 10.3390/admsci11040114

Murali, S. R., and Aggarwal, D. M. (2020). A study on the impact of transformational leadership style on employee engagement and employee performance in ICT industry–(a study with reference to the ICT industry in United Arab Emirates). Int. J. Manag. 11.

Mylopoulos, M., Kulasegaram, K., and Woods, N. N. (2018). Developing the experts we need: fostering adaptive expertise through education. J. Eval. Clin. Pract. 24, 674–677. doi: 10.1111/jep.12905

Nandini, W., Gustomo, A., and Sushandoyo, D. (2022). The mechanism of an Individual’s internal process of work engagement, Active Learning and Adaptive Performance. Economies 10:165. doi: 10.3390/economies10070165

Park, Y., Lim, D. H., Kim, W., and Kang, H. (2020). Organizational support and adaptive performance: the revolving structural relationships between job crafting, work engagement, and adaptive performance. Sustain. For. 12:4872. doi: 10.3390/su12124872

Park, S., and Park, S. (2019). Employee adaptive performance and its antecedents: review and synthesis. Hum. Resour. Dev. Rev. 18, 294–324. doi: 10.1177/1534484319836315

Peterson, S. J., and Spiker, B. K. (2005). Establishing the positive contributory value of older workers. Organ. Dyn. 34, 153–167. doi: 10.1016/j.orgdyn.2005.03.002

Pradhan, R. K., Panda, P., and Jena, L. K. (2017). Purpose, passion, and performance at the workplace: Exploring the nature, structure, and relationship. Psychol Manager J 20, 222–245. doi: 10.1037/mgr0000059

Pulakos, E. D., Schmitt, N., Dorsey, D. W., Arad, S., Borman, W. C., and Hedge, J. W. (2002). Predicting adaptive performance: further tests of a model of adaptability. Hum. Perform. 15, 299–323. doi: 10.1207/S15327043HUP1504_01

Reig-Botella, A., Fernández-del Río, E., Ramos-Villagrasa, P. J., and Clemente, M. (2024). Don’t curb your enthusiasm! The role of work engagement in predicting job performance. J. Work Organ. Psychol. 40, 51–60. doi: 10.5093/jwop2024a5

Rowe, S. W., Arghode, V., and Bhattacharyya, S. S. (2023). A study on adaptive performance, work-related psychological health and demographics in episcopal church bishops. J Work Appl Manage. 16:15. doi: 10.1108/JWAM-02-2023-0015

Şahin, F., and Gürbüz, S. (2014). Cultural intelligence as a predictor of individuals’ adaptive performance: a study in a multicultural environment. Int Area Stud Rev 17, 394–413. doi: 10.1177/2233865914550727

Schaufeli, W. B., Salanova, M., González-Romá, V., and Bakker, A. B. (2002). The measurement of engagement and burnout: a two sample confirmatory factor analytic approach. J. Happiness Stud. 3, 71–92. doi: 10.1023/A:1015630930326

Seligman, M. E., and Csikszentmihalyi, M. (2000). Positive psychology: An introduction. Am Psychol Assoc 55, 5–14. doi: 10.1037/0003-066X.55.1.5

Seligman, M. E., Steen, T. A., Park, N., and Peterson, C. (2005). Positive psychology progress: empirical validation of interventions. Am. Psychol. 60, 410–421. doi: 10.1037/0003-066X.60.5.410

Sondhi, V., and Nirmal, P. S. (2013). Strategic human resource management: a reality check. Rev Manage 3:4. doi: 10.4324/9780429490217

Sriwidharmanely, S., Sumiyana, S., Mustakini, J. H., and Nahartyo, E. (2021). Encouraging positive emotions to cope with technostress’s adverse effects: insights into the broaden-and-build theory. Behav. Inform. Technol. 41, 2201–2214. doi: 10.1080/0144929X.2021.1955008

Tang, Z., Hu, H., and Xu, C. (2022). A federated learning method for network intrusion detection. Concurr Comput Pract Exp 34:e 6812. doi: 10.1002/cpe.6812

Tranfield, D., Denyer, D., and Smart, P. (2003). Towards a methodology for developing evidence-informed management knowledge by means of systematic review. Br. J. Manag. 14, 207–222. doi: 10.1111/1467-8551.00375

Vada, S., and Prentice, C. (2022). “Tourist well-being, experience and behaviours: a positive psychological perspective” in Handbook on the Tourist Experience . Edward Elgar Publishing, 176–194.

Vakola, M., Petrou, P., and Katsaros, K. (2021). Work engagement and job crafting as conditions of ambivalent employees’ adaptation to organizational change. J. Appl. Behav. Sci. 57, 57–79. doi: 10.1177/0021886320967173

Van den Heuvel, M., Demerouti, E., Bakker, A. B., Hetland, J., and Schaufeli, W. B. (2020). How do employees adapt to organizational change? The role of meaning-making and work engagement. Span. J. Psychol. 23:e56. doi: 10.1017/SJP.2020.55

van Tartwijk, J., van Dijk, E. E., Geertsema, J., Kluijtmans, M., and van der Schaaf, M. (2023). Teacher expertise and how it develops during teachers' professional lives .

Wang, B., and Wang, Y. (2021). Job burnout among safety professionals: a Chinese survey. Int. J. Environ. Res. Public Health 18:8343. doi: 10.3390/ijerph18168343

Xiang, Y., and Yuan, R. (2021). Why do people with high dispositional gratitude tend to experience high life satisfaction? A broaden-and-build theory perspective. J. Happiness Stud. 22, 2485–2498.

Xiao, Y., and Watson, M. (2019). Guidance on conducting a systematic literature review. J. Plan. Educ. Res. 39, 93–112. doi: 10.1177/0739456X17723971

Yang, H., Weng, Q., Li, J., and Wu, S. (2022). Exploring the relationship between trait emotional intelligence and adaptive performance: the role of situational strength and self-efficacy. Personal. Individ. Differ. 196:111711. doi: 10.1016/j.paid.2022.111711

Zheng, Z., Xie, D., Pu, J., and Wang, F. (2020). MELODY: Adaptive task definition of COP prediction with metadata for HVAC control and electricity saving. In: Proceedings of the Eleventh ACM International Conference on Future Energy Systems .

Keywords: positive psychology, psychological capital, PERMA, engagement, self-efficacy, adaptive performance

Citation: Tang G, Abu Bakar R and Omar S (2024) Positive psychology and employee adaptive performance: systematic literature review. Front. Psychol. 15:1417260. doi: 10.3389/fpsyg.2024.1417260

Received: 14 April 2024; Accepted: 18 July 2024; Published: 14 August 2024.

Copyright © 2024 Tang, Abu Bakar and Omar. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Raida Abu Bakar, [email protected]

Gen AI’s next inflection point: From employee experimentation to organizational transformation

After nearly two years of debate, the verdict is in: generative AI (gen AI) is here to stay, and its business potential is massive. We’ve already witnessed an exponential rate of gen-AI-related innovation, which promises to accelerate automation and enhance productivity, innovation, and the quality of work, as well as the employee and customer experience. The companies that fail to act and adapt now will likely struggle to catch up in the future.

Despite all the buzz, most companies have yet to scratch the surface of gen AI’s promise. A recent McKinsey Global Survey reveals that employees are far ahead of their organizations in using gen AI, as companies have been slow to adopt in ways that could realize gen AI’s trillion-dollar opportunity. (The online survey was in the field from February 27 to March 8, 2024, and garnered responses from 592 participants representing the full range of regions, industries, company sizes, functional specialties, and tenures. Of those respondents, 127 say they are using publicly available or internal gen AI tools almost always at work; 51 say they use public tools never or rarely and that they never use internal tools, or that internal tools are not available to them; and the other 414 say they use either internal or public tools sometimes, often, or at varying frequencies by the type of tool.) To harness employees’ enthusiasm and stay ahead, companies need a holistic approach to transforming how the whole organization works with gen AI; the technology alone won’t create value. This means applying gen AI in ways that enable the business strategy: by reinventing operating models and entire domains (that is, specific workflows, processes, journeys, and even functions), by reimagining talent and skilling, and by reinforcing changes through robust governance and infrastructure.

To harness employees’ enthusiasm and stay ahead, companies need a holistic approach to transforming how the whole organization works with gen AI; the technology alone won’t create value.

Employee use is at an inflection point, while their organizations lag behind

According to our research, employees are forging ahead with gen AI, a broadly accessible technology that puts AI’s potential at everyone’s fingertips. Nearly all respondents (91 percent) say they use gen AI for work and the vast majority are enthusiastic about it (Exhibit 1). Nine in ten also believe the tools could positively impact their work experience and most believe gen AI will help with a range of skills, from critical thinking to creativity.

In this respect, most companies are lagging behind their employees. As high as employee usage is, organizational maturity with gen AI is strikingly low. In our survey, only 13 percent of respondents’ companies have implemented multiple use cases, a group we call “early adopters” (Exhibit 2); we define early adopters as companies that, according to respondents, have implemented six or more gen AI use cases to date. Among them, there’s a larger share of heavy users: that is, employees who use either public or internal gen AI tools every day or two. Compared with others, this group is likelier to use gen AI for a range of work activities and to report greater productivity gains. (Survey respondents were asked to rate the frequency of their use of gen AI tools, both publicly available and internally developed, at work, as well as their use of public tools for nonwork purposes. Potential responses were “never,” “rarely (that is, once per month),” “sometimes (that is, once per week),” “often (that is, two to three times per week),” and “almost always (that is, every day).”)

The chief information officer of a global heavy industry company sees these trends at his own organization. Employees are experimenting with gen AI through publicly available and embedded tools, such as OpenAI’s ChatGPT and Microsoft’s Copilot, which is increasing curiosity and encouraging greater openness to experimentation. Yet he notes that there’s no easy-to-prove business case for employee-driven adoption and the piecemeal implementation of use cases.

The next inflection point: Moving from individual experimentation to strategic value capture

Technology adoption for its own sake has never created value, which is also true with gen AI. Whether technology is itself the core strategy (for example, developing gen-AI-based products) or supports other business strategies, its deployment should link to value creation opportunities and measurable outcomes (for more, see sidebar “‘People led, tech powered’: Walmart’s vision for gen AI”). Our survey findings suggest that early adopters are on track: 63 percent of early-adopter respondents say their organizations’ AI and gen AI strategies align greatly with their business strategies, compared with only 17 percent of respondents at “experimenter” companies, defined as those that have implemented one to five gen AI use cases.

“People led, tech powered”: Walmart’s vision for gen AI

At Walmart, leaders have created a technology vision and strategy that aligns with its strategic focus on customer and employee experience, the two domains the company targeted with its generative AI (gen AI) implementation. For customers, Walmart introduced gen-AI-based features such as autogenerated shopping lists, “Shop with Friends” (a social shopping app), and “InHome” (an automated delivery service). For associates, Walmart invested in tools such as My Assistant, which minimizes time spent on administrative and HR tasks, and the Me@Walmart app, which includes a reality-powered feature for real-time inventory management.

To capture gen AI’s full potential, companies must consider how the technology can redefine the way the organization works. Our experience and research point to three steps to prepare for gen AI’s next inflection point: reinvent the operating model by translating vision into value, domain by domain; reimagine the talent and skilling strategy; and reinforce changes through formal and informal mechanisms that ensure continuous adaptation.

Reinvent domains by translating vision into value

Companies can only reap gen AI’s full benefits, which range from faster innovation and enhanced productivity to improved employee and customer experience, when they use technology to make transformative changes. More specifically, this means embracing holistic changes to the operating model, including key processes, ways of working, capabilities, and culture. Because anyone can use gen AI, these tools can act as a gateway technology for all other digital and tech transformations.

To start, companies should prioritize the right unit of transformation by focusing on specific domains, such as product development, marketing, and customer service. This domain-based approach  allows for end-to-end, technology-led transformation that integrates multiple use cases within a single value-creating workflow, process, journey, or, occasionally, entire function. Since domains often span organizational boundaries, implementing gen AI and other technologies at the domain level can deliver greater value than one-off solutions.

Here are examples of what’s possible with a domain-based transformation, and the implications for roles and day-to-day work:

  • In software development, gen AI can revolutionize work by delivering higher-quality, resilient products much faster; think days instead of months. This will require changes across the product life cycle and closer collaboration between product and engineering teams. Comprehensive product data, prompt-based proofs of concept, and automated requirements can shorten ideation-to-prototyping timelines, allowing for a greater number of iterations. The use of self-writing code, autogenerated user guidance, and continuous code testing would also transform engineers from task completers to systems designers.
  • In marketing, gen AI could (finally) enable the vision of true personalization at scale. Companies such as Netflix and Spotify have started on this path with hyperpersonalized video previews and personalized user playlists. These types of practices can enhance engagement and loyalty, allow brands to integrate seamlessly into customers’ lives, boost productivity of content creation, and improve ROI across the sales and marketing funnel. By doing so, the marketing organization’s silos could break down, especially between the creative and analytics teams.
  • In customer service, gen AI can transform teams into centers of customer delight by proactively addressing issues and offering new products, all at reduced costs. AI-empowered humans will work with gen AI agents, using real-time trends and customer insights to become empathetic problem solvers and supervisors of customer experience. In the process, customer service agents and supervisors will make more use of technology, apply systems thinking, balance empathetic and commercial mindsets, and work more closely with the customer experience and product teams.
  • Gen AI is also set to revolutionize cross-cutting domains, such as performance management and team management. For the latter, gen AI can put coaching prompts at managers’ fingertips and make it easier to access employee resources. This can meaningfully shift the time managers spend on certain tasks: less on administrative to-dos, more on checking in with team members and developing soft skills.

Reimagine talent and skilling by putting people at the center

As the examples above highlight, gen AI’s implications for talent and skill needs are massive. The technology’s potential to accelerate automation and transform operating models will significantly affect the roles and skills that organizations need. According to other McKinsey research, half of today’s work activities could be automated between 2030 and 2060, accelerating previous, pre-gen-AI projections by a decade. This puts pressure on organizations to understand their talent and skill needs quickly, adopt various strategies to close skill gaps, and invest in upskilling and reskilling. A gen-AI-based talent transformation isn’t something companies can simply hire their way out of, as it affects the entire organization and its ways of working.

Our research shows that early adopters prioritize talent and the human side of gen AI  more than other companies (Exhibit 3). Our survey shows that nearly two-thirds of them have a clear view of their talent gaps and a strategy to close them, compared with just 25 percent of the experimenters. Early adopters focus heavily on upskilling and reskilling as a critical part of their talent strategies, as hiring alone isn’t enough to close gaps and outsourcing can hinder strategic-skills development. Finally, 40 percent of early-adopter respondents say their organizations provide extensive support to encourage employee adoption, versus 9 percent of experimenter respondents.

Companies can capitalize on employees’ enthusiasm for gen AI by investing in both technology adoption and skills (for more, see sidebar “Taking the granular view on gen AI’s workforce implications”). As previous McKinsey research  shows, macroeconomic investments in both enable productivity gains that organizations can also see. This will require a tailored approach to reskilling and upskilling and close collaboration between business and tech leaders and HR. Given the criticality of people topics, HR plays an especially important role in gen AI and technology transformations, both by transforming the people domain and by acting as a gen AI copilot for all employees. One executive noted that for every $1 spent on technology, $5 should be spent on people.

Taking the granular view on gen AI’s workforce implications

A collection of Asian financial institutions did a thorough assessment of generative AI’s (gen AI’s) implications for their roles and skills. They first analyzed the potential capacity that could be freed up across all roles. Then, based on gen AI’s potential impact on certain roles and cohorts, the institutions determined their upskilling, reskilling, and employee redeployment needs. Using this comprehensive fact base, they defined specific interventions to prepare each cohort for gen AI’s effects on its work, for example, shifting skill proficiencies toward technical areas (such as application development and integration) and designing learning journeys for technical team members in areas such as large language model operations and responsible AI policy.

With gen AI, building capabilities across the entire enterprise is crucial. As it’s a rapidly evolving, widely accessible technology, employees must adapt to the new skills (such as prompt writing, contextualization, and data-driven decision making) that gen AI demands. While specific skills shifts will vary greatly by company, all organizations will need to take a dynamic approach to talent development, based on their operating-model transformations; building skills is an ongoing process. As gen AI and automation reshape roles, employees will also need strong cognitive, strategic thinking and social and emotional skills to handle more complex tasks that complement AI.

Within specific roles, the tech talent who are scaling gen AI and future technologies will need to build, train, and fine-tune AI models. These newer skills will require immersive learning in areas such as software development, cloud integration, and security. Tech talent must also be able to contextualize and apply their judgment when translating business needs into technology solutions. Furthermore, companies will need tech-adjacent roles to manage the governance, operational, HR, and legal aspects of AI. Some roles, such as chief AI officers, will be brand new.

For domain-based talent, many will need intensive upskilling as their roles evolve. This will include different types of on-the-job learning and formal training opportunities. For example, healthcare professionals might take courses on personalized treatment planning and AI-driven diagnostics that are supplemented with mentoring and real-world projects.

And for all employees, including leaders and managers, it’s vital that everyone learns to use gen AI effectively and safely. Examples include comprehensive learning programs that cover responsible use and effective interaction with AI, as well as more augmentation-focused trainings, such as using gen AI coaching that allows managers to practice giving feedback.

A European telecommunications company put tailor-made reskilling into practice by implementing an AI coach for its customer service agents. By analyzing call transcripts from frontline employees, the AI coach assessed people across 20 different soft and hard skills. Both team members and leaders could access a dashboard that tracked progress on these skills and delivered real-time feedback using customer quotes and examples. The AI coach also suggested improvements and learning content based on agents’ performance and behavior, creating a hyperpersonalized learning experience. This tool resulted in a 10 percent reduction in average handling time, a 20 percent increase in customer satisfaction, and a 15 percent increase in the rate of first-time-right responses.

Reinforce the changes to continue transforming

How, exactly, should organizations tackle these massive transformational changes? Real success with gen AI requires a comprehensive, integrated approach to creating value. Our survey indicates that the most useful enabler of future adoption is better integration of gen AI into existing systems, cited by 60 percent of respondents. To make gen AI changes stick, organizations need the right infrastructure to support continuous change and win over hearts and minds.

The first step is establishing the right governance for gen AI (for more, see sidebar “Good gen AI governance at work”). In our experience, this means creating a centralized structure that oversees the organization’s AI adoption, sometimes with a chief AI officer leading these efforts. Nearly all early-adopter respondents (91 percent) say they have implemented some governance structure for gen AI, compared with a smaller share (77 percent) of experimenters. A centralized model with a gen-AI-dedicated center of excellence helps align AI vision with execution. This model also facilitates the implementation of strategy, continuous measurement, adaptation to new insights, and further experimentation—specifically, which experiments to scale or to stop, based on priorities and risks.

Good gen AI governance at work

To enhance its productivity with generative AI (gen AI), a leading multinational bank identified the processes with the highest potential for improvement. This exercise enabled the development of a clear strategy and road map and of a business-led center of excellence, including experts in technology, AI, and risk management. The center of excellence evaluates use cases, implements AI guardrails, tracks metrics, and shares knowledge across the organization. What’s more, the bank integrated active use of gen AI into performance evaluations, ensuring a formal commitment to AI integration.

The second step is treating these changes like a true transformation. This means defining the transformation’s infrastructure, roles, and measurement criteria; ensuring accountability within business units; and implementing a regular cadence to monitor progress—and adjusting as needed.

Third is addressing employee mindsets and behaviors across the organization. We know from extensive transformation research  and countless conversations with executives that changing mindsets and behaviors is vital to any successful transformation. Indeed, in our survey, early adopters focus more than others on the four tenets of the influence model  that enables such changes: role modeling, fostering understanding and conviction, building capabilities, and reinforcing new ways of working (Exhibit 4).

In the gen AI context, this means:

  • Role modeling. Leaders should visibly adopt generative AI in their own ways of working. For instance, using AI tools to generate insights and make data-driven decisions showcases the technology’s benefits. It sets a strong example when a CEO uses AI to streamline workflows or a senior executive uses AI-driven analytics for business reviews, encouraging others to follow suit.
  • Fostering understanding and conviction. Organizations should communicate the reasons behind implementing gen-AI-related changes through internal communications, town hall meetings, and training sessions. Highlighting AI’s potential to improve efficiency, accuracy, and decision making aligns the team with the new direction. Informative content such as video tutorials and success stories can build collective conviction in AI’s advantages.
  • Building capabilities. Successful AI adoption requires comprehensive training programs. This includes training on data analysis, machine learning algorithms, and understanding AI-generated outputs. Collaborating with online education platforms to provide courses and setting up internal AI boot camps for hands-on experience ensures proficiency in AI technologies.
  • Reinforcing new ways of working. Companies should integrate AI goals into performance metrics and evaluation processes. They can set targets related to AI adoption, measure AI’s impact on key performance indicators, and recognize employees who effectively incorporate AI into their work. For instance, sales teams could set targets that include leveraging AI for customer segmentation and lead generation, with bonuses tied to successful AI-driven strategies. Tracking and celebrating milestones such as efficiency gains or innovative AI applications embeds these practices within the organization’s fabric.

No matter where an organization is on its gen AI journey, the time for making transformational change is now. Employees are already asking their organizations for more, and some companies have begun moving from experimentation to value capture. By gen AI’s next inflection point, the downside of lagging behind—and missing out on gen AI’s potential benefits—may be even greater. With employees’ embrace of gen AI and the technology’s rapid evolution, companies can capitalize on the current momentum by addressing organizational barriers to adoption, which requires no less than fundamentally transforming the company’s operations and preparing people for continuous change.

Charlotte Relyea is a senior partner in McKinsey’s New York office, Dana Maor is a senior partner in the Tel Aviv office, Sandra Durth is a partner in the Cologne office, and Jan Bouly is an associate partner in the Brussels office.

The authors wish to thank Alex Sukharevsky, Ariel Cohen Codar, Bryan Hancock, Cleo De Laet, Esther Wang, Federico Marafante, Joachim Talloen, Julian Raabe, Julie Goran, Kiera Jones, Michael Chui, Nina Gandhi, Rita Calvão, and Sanjna Parasrampuria for their contributions to this article.

This article was edited by Daniella Seiler, an executive editor in the Washington, DC, office.

ACC/AHA Add Nine New Performance and Quality Measures to Updated 2024 Heart Failure Measure Set

Aug 08, 2024

ACC News Story

Three new performance measures and six new quality measures are included as part of the updated "Clinical Performance and Quality Measures for Adults With Heart Failure" released by the ACC and the American Heart Association (AHA) in collaboration with the Heart Failure Society of America (HFSA) on Aug. 8. The new document is a focused update of the previous performance and quality measures released in 2020 and reflects the strongest recommendations from the "2022 AHA/ACC/HFSA Guideline for the Management of Heart Failure."

Developed by the ACC/AHA Task Force on Performance Measures and led by Chair Michelle M. Kittleson, MD, PhD, FACC, and Vice Chair Khadijah Breathett, MD, MS, FACC, the document describes performance measures for heart failure that are appropriate for public reporting or pay-for-performance programs.

Of note, the three new performance measures address optimal blood pressure control in patients with heart failure with preserved ejection fraction; the use of SGLT2 inhibitors for patients with heart failure with reduced ejection fraction; and the use of guideline-directed medical therapy in hospitalized patients.

The six new quality measures focus on:

  • Use of SGLT2 inhibitors in patients with heart failure with mildly reduced and preserved ejection fraction
  • Optimization of guideline-directed medical therapy prior to intervention for chronic secondary severe mitral regurgitation
  • Continuation of guideline-directed medical therapy for patients with heart failure with improved ejection fraction
  • Assessment of social determinants of health and known cardiovascular risk
  • Counseling regarding contraception and pregnancy risks for individuals with cardiomyopathy
  • Monoclonal protein screening when interpreting a bone scintigraphy scan assessing for suspected transthyretin cardiac amyloidosis

According to the Writing Committee, the quality measures "are not yet ready for public reporting or pay-for-performance but might be useful to clinicians and health care organizations for quality improvement." In addition, they add: "For all measures, if the clinician determines the care is not appropriate for the patient based on objective evidence to support decision-making, or if the patient declines treatment, that patient is excluded from the measure."

Aside from the additional new measures, no measures were retired from the original 2020 ACC/AHA Heart Failure Measure Set, nor were any revised following careful review by the committee.

Read the full document.

Clinical Topics: Heart Failure and Cardiomyopathies, Acute Heart Failure

Keywords: Heart Failure, Sodium-Glucose Transporter 2 Inhibitors, Cardiac Amyloidosis, Diuretics, Hospitalization

American Psychological Association

Title Page Setup

A title page is required for all APA Style papers. There are both student and professional versions of the title page. Students should use the student version of the title page unless their instructor or institution has requested they use the professional version. APA provides a student title page guide (PDF, 199KB) to assist students in creating their title pages.

Student title page

The student title page includes the paper title, author names (the byline), author affiliation, course number and name for which the paper is being submitted, instructor name, assignment due date, and page number, as shown in this example.

[Figure: diagram of a student title page]

Title page setup is covered in the seventh edition APA Style manuals in the Publication Manual Section 2.3 and the Concise Guide Section 1.6

Related handouts

  • Student Title Page Guide (PDF, 263KB)
  • Student Paper Setup Guide (PDF, 3MB)

Student papers do not include a running head unless requested by the instructor or institution.

Follow the guidelines described next to format each element of the student title page.

Paper title

Place the title three to four lines down from the top of the title page. Center it and type it in bold font. Capitalize major words of the title. Place the main title and any subtitle on separate double-spaced lines if desired. There is no maximum length for titles; however, keep titles focused and include key terms.

Author names

Place one double-spaced blank line between the paper title and the author names. Center author names on their own line. If there are two authors, use the word “and” between authors; if there are three or more authors, place a comma between author names and use the word “and” before the final author name.

Cecily J. Sinclair and Adam Gonzaga

Author affiliation

For a student paper, the affiliation is the institution where the student attends school. Include both the name of any department and the name of the college, university, or other institution, separated by a comma. Center the affiliation on the next double-spaced line after the author name(s).

Department of Psychology, University of Georgia

Course number and name

Provide the course number as shown on instructional materials, followed by a colon and the course name. Center the course number and name on the next double-spaced line after the author affiliation.

PSY 201: Introduction to Psychology

Instructor name

Provide the name of the instructor for the course using the format shown on instructional materials. Center the instructor name on the next double-spaced line after the course number and name.

Dr. Rowan J. Estes

Assignment due date

Provide the due date for the assignment. Center the due date on the next double-spaced line after the instructor name. Use the date format commonly used in your country.

October 18, 2020
18 October 2020

Use the page number 1 on the title page. Use the automatic page-numbering function of your word processing program to insert page numbers in the top right corner of the page header.

1
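For readers who assemble their papers programmatically, the sketch below shows how the student title page elements described above could be generated with the python-docx library. This is an illustrative example only, not an official APA tool: the paper title is a placeholder, the author, affiliation, course, instructor, and date reuse the examples shown above, and the header page number is omitted because inserting header field codes goes beyond this short sketch.

from docx import Document
from docx.enum.text import WD_ALIGN_PARAGRAPH

doc = Document()

def centered(text: str, bold: bool = False):
    """Add a centered, double-spaced paragraph and return it."""
    p = doc.add_paragraph()
    p.alignment = WD_ALIGN_PARAGRAPH.CENTER
    p.paragraph_format.line_spacing = 2  # APA papers are double-spaced
    run = p.add_run(text)
    run.bold = bold
    return p

# Three blank double-spaced lines so the title sits three to four lines down.
for _ in range(3):
    centered("")

centered("Sample Paper Title", bold=True)                    # paper title (placeholder)
centered("")                                                 # one blank line after the title
centered("Cecily J. Sinclair and Adam Gonzaga")              # author names (the byline)
centered("Department of Psychology, University of Georgia")  # author affiliation
centered("PSY 201: Introduction to Psychology")              # course number and name
centered("Dr. Rowan J. Estes")                               # instructor name
centered("October 18, 2020")                                 # assignment due date

doc.save("student_title_page.docx")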

Professional title page

The professional title page includes the paper title, author names (the byline), author affiliation(s), author note, running head, and page number, as shown in the following example.

[Figure: diagram of a professional title page]

Follow the guidelines described next to format each element of the professional title page.

Paper title

Place the title three to four lines down from the top of the title page. Center it and type it in bold font. Capitalize major words of the title. Place the main title and any subtitle on separate double-spaced lines if desired. There is no maximum length for titles; however, keep titles focused and include key terms.

Author names

 

Place one double-spaced blank line between the paper title and the author names. Center author names on their own line. If there are two authors, use the word “and” between authors; if there are three or more authors, place a comma between author names and use the word “and” before the final author name.

Francesca Humboldt

When different authors have different affiliations, use superscript numerals after author names to connect the names to the appropriate affiliation(s). If all authors have the same affiliation, superscript numerals are not used (see Section 2.3 of the Publication Manual for more on how to set up bylines and affiliations).

Tracy Reuter¹, Arielle Borovsky², and Casey Lew-Williams¹

Author affiliation

For a professional paper, the affiliation is the institution at which the research was conducted. Include both the name of any department and the name of the college, university, or other institution, separated by a comma. Center the affiliation on the next double-spaced line after the author names; when there are multiple affiliations, center each affiliation on its own line.

Department of Nursing, Morrigan University

When different authors have different affiliations, use superscript numerals before affiliations to connect the affiliations to the appropriate author(s). Do not use superscript numerals if all authors share the same affiliations (see Section 2.3 of the Publication Manual for more).

¹Department of Psychology, Princeton University
²Department of Speech, Language, and Hearing Sciences, Purdue University
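
Where superscript numerals are needed, the same assumed python-docx approach can flag individual runs as superscript; the sketch below reuses the byline and affiliations from the example above.

# Sketch only: each (text, is_superscript) pair becomes one run in a
# centered paragraph, so numerals sit after names and before affiliations.
from docx import Document
from docx.enum.text import WD_ALIGN_PARAGRAPH

doc = Document()

def centered_runs(parts):
    paragraph = doc.add_paragraph()
    paragraph.alignment = WD_ALIGN_PARAGRAPH.CENTER
    for text, is_superscript in parts:
        run = paragraph.add_run(text)
        run.font.superscript = is_superscript
    return paragraph

# Byline with superscript numerals after the author names.
centered_runs([("Tracy Reuter", False), ("1", True),
               (", Arielle Borovsky", False), ("2", True),
               (", and Casey Lew-Williams", False), ("1", True)])

# Affiliations with superscript numerals before each affiliation.
centered_runs([("1", True), ("Department of Psychology, Princeton University", False)])
centered_runs([("2", True), ("Department of Speech, Language, and Hearing Sciences, Purdue University", False)])

doc.save("professional_byline.docx")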

Author note

Place the author note in the bottom half of the title page. Center and bold the label “Author Note.” Align the paragraphs of the author note to the left. For further information on the contents of the author note, see Section 2.7 of the Publication Manual.

Running head

The running head appears in all-capital letters in the page header of all pages, including the title page. Align the running head to the left margin. Do not use the label “Running head:” before the running head.

PREDICTION ERRORS SUPPORT CHILDREN’S WORD LEARNING
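
A minimal sketch of deriving the header text from a paper title; the 50-character cap is an assumption drawn from common APA practice, not stated above.

def running_head(title, max_length=50):
    """Return the title in all-capital letters for the page header.

    The 50-character limit is an assumption based on common APA practice;
    drop or adjust it if your instructor or journal specifies otherwise."""
    head = title.upper()
    if len(head) > max_length:
        raise ValueError(f"Running head is {len(head)} characters; shorten it to {max_length} or fewer.")
    return head

print(running_head("Prediction errors support children's word learning"))
# PREDICTION ERRORS SUPPORT CHILDREN'S WORD LEARNING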

Page number

Use the page number 1 on the title page. Use the automatic page-numbering function of your word processing program to insert page numbers in the top right corner of the page header.

1
