Ethics of AI: A systematic literature review of principles and challenges

Ethics in AI has become a global topic of interest for both policymakers and academic researchers. In the last few years, various research organizations, lawyers, think tanks and regulatory bodies have become involved in developing AI ethics guidelines and principles. However, there is still debate about the implications of these principles. We conducted a systematic literature review (SLR) study to investigate the agreement on the significance of AI principles and identify the challenging factors that could negatively impact the adoption of AI ethics principles. The results reveal that the global convergence set consists of 22 ethical principles and 15 challenges. Transparency, privacy, accountability and fairness are identified as the most common AI ethics principles. Similarly, lack of ethical knowledge and vague principles are reported as the most significant challenges to considering ethics in AI. The findings of this study are preliminary inputs for proposing a maturity model that assesses the ethical capabilities of AI systems and provides best practices for further improvements.

1 Introduction

Artificial intelligence (AI) technologies are considered important across a vast array of industries, including health, manufacturing, banking and retail [1]. However, the promises of AI systems, such as improved productivity, reduced costs, and increased safety, are now weighed against worries that these complex systems might bring more ethical harm than economic good [1].

Artificial intelligence (AI) and autonomous systems have a significant effect on the development of humanity [2]. The autonomous decision-making nature of these systems raises fundamental questions: what are the potential risks involved in these systems, how should they perform, how can they be controlled, and what should be done with AI-based systems? [2]. Autonomous systems go beyond mere automation in that they are characterised by decision-making capabilities. The development of autonomous system components, such as intelligent awareness and self-directed decision making, is based on AI concepts.

There is a political and ethical discussion about developing policies for different technologies, including nuclear power and manufacturing, to control the ethical damage they could bring. The same potential for ethical harm also exists in AI systems; more specifically, they might end human control [2]. Real-world failure and misuse incidents of AI systems have created demand for, and discussion of, AI ethics [3]. Ethical studies of AI technologies reveal that AI and autonomous systems should not be considered purely technological efforts. There is broad agreement that the design and use of AI-based systems are culturally and ethically embedded [4]. Developing AI-based systems requires not only technical effort but also attention to economic, political, societal, intellectual and legal aspects [4]. These systems significantly impact the cultural norms and values of people [4]. The AI industry, and specifically its practitioners, should have a deep understanding of ethics in this domain. Recently, AI ethics has received press coverage and public attention, which supports significant related research [3]. However, the topic is still not sufficiently investigated, either academically or in real-world environments [4]. Very few academic studies have been conducted on this topic, and it remains largely unknown to AI practitioners. The Ethically Aligned Design (EAD) guidelines of IEEE [6] mention that ethics in AI is still far from mature in industrial settings [5]. The limited or absent knowledge of ethics in the AI industry creates a gap, which indicates the need for further academic and practitioner research.

The aim of this study is to conduct a systematic literature review (SLR) and explore the available literature to identify the AI ethics principles. Moreover, the SLR study uncovers the key challenging factors that demotivate consideration of ethics in AI. The following research questions are developed to achieve these core objectives:

RQ1: What are the key principles of AI ethics?

RQ2: What are the challenges of adopting ethics in AI?

The remainder of the paper is structured as follows: Section 2 presents the background of the study, and the research methodology is reported in Section 3. The SLR data are provided in Section 4, and the results and analysis are discussed in Section 5. Finally, Section 6 provides an overview of threats to the validity of the study, and Section 7 concludes the findings with future directions.

2 Background

The implementation of AI or machine intelligence concepts has brought a technological revolution that is changing both science and society. The transfer of power from humans to machines has sparked important societal debate about the principles and policies that should guide the use and deployment of AI systems [7]. Various organizations have formed ad hoc committees to draft policy documents for AI ethics, and these organizations have reportedly developed AI policies and guidance documents [7]. In 2018, technology corporations such as SAP and Google publicly introduced guidelines and policies for AI-based systems [7]. Similarly, Amnesty International, the Association for Computing Machinery (ACM) and Access Now have issued principles and recommendations for AI technologies. The European Commission's Trustworthy AI guidelines were developed with the aim of promoting lawful, ethically sound and robust AI systems [8]. The report "Preparing for the Future of Artificial Intelligence" prepared by the Obama administration presents a thorough survey of current AI research, its applications and its impact on society [9]. The report further presents recommendations for future AI-related actions. The "Beijing AI Principles" [10] propose various principles for AI research, development, use and governance. These principles present a framework focused on AI ethics.

IEEE, the world's largest technical professional organization, launched the Ethically Aligned Design (EAD) guidelines [6], which provide a framework for addressing the ethical and technical values of AI systems based on a set of principles and recommendations. The EAD framework consists of the following eight general principles to guide the development and implementation of AI-based systems: human rights, well-being, data agency, effectiveness, transparency, accountability, awareness of misuse and competence. Organizations such as ISO and IEC have also embarked on developing standards for AI [11]. ISO/IEC JTC 1/SC 42 is a joint ISO/IEC international standards committee that focuses on the entire AI ecosystem, including ethical and social concerns, standardization, AI governance, AI computational approaches and trustworthiness [11]. The efforts of different organizations to shape AI ethics demonstrate not only the need for guidelines, tools and techniques, but also the interest of these organizations in managing ethics in ways that meet their respective priorities.

However, recently published studies report that the existing guidelines developed for the ethics of AI are neither effective nor adopted in practice [12]. This is evident from the empirical study conducted by McNamara et al. [13], which tested the influence of the ACM code of ethics on decision making in software development. The results of the study revealed that the ACM code of ethics has no observable impact on ethical decision making. The lack of effective techniques makes it challenging to successfully scale the available guidelines into practice [12]. Vakkuri et al. [12] used the accountability, responsibility, and transparency (ART) framework [14] to develop a conceptual model for exploring ethical considerations in AI environments. The conceptual model was empirically validated through multiple case studies. The empirical results highlight that AI ethics principles are still not in practice, although some common concepts, such as documentation, are considered. Moreover, the study findings revealed that practitioners do consider the social impact of AI systems [12].

There are no tools, methods or frameworks that fill the gap between AI principles and their implementation in practice. Further studies should be conducted in this area that explicitly discuss AI ethics principles and challenges and provide evaluation standards or models to guide the AI industry in considering ethics in practice.

3 Research Method

The systematic literature review (SLR) approach is used to explore the available primary studies. SLR is a widely adopted literature survey method in the evidence-based software engineering domain, defined as "a means of evaluating and interpreting all available research relevant to a particular research question, topic area, or phenomenon of interest" [15]. The SLR guidelines of Kitchenham and Charters [15] are used to conduct this study and systematically address the research questions. The SLR process plan is provided in Fig. 1 and thoroughly discussed in the following sections.

[Fig. 1. SLR process plan]

3.1 Research questions (RQs)

Developing the research questions is the most significant phase of an SLR study [15]. It requires a deep understanding of the research area in general and the research problem in particular. We primarily studied relevant articles [3-9] to better understand the problem and develop the questions of interest. The questions were finalised based on the research concepts discussed in the mentioned research sources [3-9]. The details of the research questions are provided in Section 1.

3.2 Data sources

The authors held a series of team discussions to identify the list of digital data sources. The selected digital repositories were explored to extract the relevant data needed to address the given research questions (see Section 1). The following digital libraries were ultimately selected based on the authors' SLR experience, the discussions, and the guidelines provided by Chen et al. [16]: Springer Link, Science Direct, IEEE Xplore, Wiley Online Library and ACM Digital Library. These are world-leading digital data sources that collect a large number of original information and communication technology studies [16].

3.3 Search strategy

The research questions were analysed by the second and third authors to extract the terms and keywords used for the search process. All the authors participated in a group discussion to finalise the search terms used to retrieve the relevant data from the selected repositories. Pilot search terms and strings were developed, which finally contributed to the following agreed search string:

("artificial intelligence ethics" OR "AI ethics" OR "machine learning ethics" OR "software ethics") AND (“resistance” OR “barriers” OR “limitations” OR “challenges”)

The terms "principles" and "guidelines" were excluded from the final search string because they returned irrelevant data from other domains. During the pilot attempts, the given search string was specifically tested for its ability to retrieve data related to AI "principles" and "guidelines", and we observed that it precisely returned the desired results for RQ1 even without those terms.

The search terms are concatenated using "AND" and "OR" operators to develop the search string. Each selected digital repository provides its own customised search mechanism, and the search string was executed using the personalised search facility of each electronic data source. A minimal sketch of how such a string can be assembled is shown below.
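The following Python snippet illustrates how the final search string can be composed from the two term groups; it is an illustrative sketch, not part of the study's actual tooling, and the helper function is hypothetical.

# Illustrative sketch: composing the Boolean search string from the two
# term groups described above (not the authors' actual tooling).
population_terms = ["artificial intelligence ethics", "AI ethics",
                    "machine learning ethics", "software ethics"]
outcome_terms = ["resistance", "barriers", "limitations", "challenges"]

def build_query(group_a, group_b):
    """Join each group's quoted terms with OR, then combine the groups with AND."""
    a = " OR ".join(f'"{t}"' for t in group_a)
    b = " OR ".join(f'"{t}"' for t in group_b)
    return f"({a}) AND ({b})"

print(build_query(population_terms, outcome_terms))
# -> ("artificial intelligence ethics" OR "AI ethics" OR ...) AND ("resistance" OR ...)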

3.4 Inclusion/Exclusion criteria

The inclusion/exclusion criteria were developed to filter the search results and remove irrelevant, inaccessible, redundant and low-quality studies. The criteria were drafted by the first and fifth authors and finalised by all the authors in regular consensus meetings (see Table 1).

3.5 Study selection

The search string discussed in Section 3.3 was used to explore the selected digital repositories. The search process started on 23rd December 2020 and ended on 5th February 2021. The search string retrieved a total of 811 studies in the first phase, which were filtered to 60 studies based on title, abstract and keywords (see Fig. 2). In the second phase of the selection process, inclusion/exclusion of these 60 studies was performed based on full-text review. Finally, 24 primary studies were shortlisted using the SLR approach. Moreover, backward snowballing [17] was performed on the references of the selected 24 studies. Backward snowballing was previously used by Tingting et al. [18] to explore text analysis techniques in software architecture; we used it to examine the reference lists of the selected primary studies and identify relevant studies missed during the SLR process. This yielded an additional 5 studies, which were further filtered using the inclusion/exclusion criteria (see Fig. 2). Eventually, only 3 of these studies fulfilled the selection criteria, so the final data set consists of a total of 27 primary studies (24 SLR + 3 backward snowballing). The final set of selected studies is provided in Appendix A, where each study is labeled [Sn] to differentiate it from the general list of references.

[Fig. 2. Study selection process]

3.6 Quality assessment (QA)

The assessment criteria were developed to evaluate the quality of the selected primary studies and reduce research bias. The quality assessment phase interprets the significance and completeness of each selected primary study [15]. The QA criteria checklist provided by Kitchenham and Charters [15] was analysed to design the QA questions provided in Table 2. Each selected primary study was evaluated against the quality assessment questions (QA1-QA6). A score of 1 was assigned if the study comprehensively addressed a quality assessment question (see Table 2). Similarly, 0.5 points were assigned to studies that partially addressed a question, and studies with no evidence of addressing a QA question were assigned 0 points.
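To make the scoring scheme concrete, the short sketch below computes a total QA score per study; the study identifiers and individual scores are hypothetical examples, not the actual assessment data.

# Illustrative QA scoring sketch (hypothetical data): per question,
# 1 = comprehensively addressed, 0.5 = partially addressed, 0 = not addressed.
qa_scores = {
    "S1": [1, 1, 0.5, 1, 0, 0.5],  # one entry per question QA1-QA6
    "S2": [0.5, 1, 1, 1, 1, 1],
}

for study, scores in qa_scores.items():
    print(f"{study}: total QA score = {sum(scores)} / {len(scores)}")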

3.7 Data extraction

The relevant data for addressing the RQs were collected by thoroughly reading the selected primary studies and extracting the AI ethics principles (RQ1) and challenges (RQ2). The extracted data were recorded in Excel sheets. Most of the data were collected by the second and third authors, who also assessed the quality of the primary studies based on the criteria discussed in Section 3.6. Moreover, the first, fourth and fifth authors participated in review meetings to finalize the QA score of each study (Appendix A).

4 Reporting the review

The data collected from the selected 27 primary studies are analyzed and discussed in the following sections.

4.1 Temporal distribution

The year-wise distribution of the primary studies is shown in Fig. 3. Of the 27 studies, 2, 19, 4 and 2 were published in 2021 (up to 5th February), 2020, 2019 and 2018, respectively. The first relevant study appeared in 2018, and since then there has been a gradual increase in the number of research publications. The SLR string was last executed on 5th February 2021, so the given results cover only the first two months of 2021. The increasing number of publications indicates that AI ethics is a significant, state-of-the-art research direction, and substantial research work is still needed to explore ethics in AI.

[Fig. 3. Year-wise distribution of the primary studies]

4.2 Publication type

The selected primary studies are classified across four major publication types: journal, conference (including workshop), book chapter and magazine. Fig. 4 shows that 19 (70%) studies were published in journals, 3 (11%) in conferences, 4 (15%) as book chapters and 1 (4%) as a magazine article. We noticed that journals are the most active venues for publishing relevant studies.

[Fig. 4. Publication types of the selected primary studies]

5 Detailed results and analysis

The detailed results addressing RQ1 and RQ2 are discussed in the following sections.

5.1 RQ1 (AI Ethics Principles)

The final set of primary studies consists of 27 articles, from which a total of 22 AI ethics principles were extracted. The identified principles, along with their respective references, are provided in Table 3. Moreover, a word cloud was generated to graphically represent the significance of the reported principles (see Fig. 5). Of the 22 principles, transparency (n=17) is the most frequently mentioned, followed by privacy (n=16). The third and fourth most common principles are accountability (n=15) and fairness (n=14), respectively.

[Fig. 5. Word cloud of the identified AI ethics principles]
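As an illustration of how a frequency-weighted word cloud such as Fig. 5 can be produced, the sketch below uses the third-party Python wordcloud package (an assumption; the paper does not state its tooling) with only the four most frequent principles reported above, whereas the full study covers all 22.

# Illustrative sketch: rendering a frequency-weighted word cloud like Fig. 5.
# Assumes the third-party "wordcloud" package is installed; the frequencies
# are the four reported above, not the complete set of 22 principles.
from wordcloud import WordCloud

frequencies = {"transparency": 17, "privacy": 16,
               "accountability": 15, "fairness": 14}

wc = WordCloud(width=800, height=400, background_color="white")
wc.generate_from_frequencies(frequencies)
wc.to_file("principles_wordcloud.png")  # larger counts render in larger type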

5.1.1 Transparency

Transparency of operations is a major concern in AI/autonomous systems [S5]. It answers how and why a specific decision is made by the system and underpins related constructs such as interpretability and explainability. Transparency should not only be considered for AI system operations but must also be part of the technical process [S5], making decision-making actions more transparent and trustworthy. Both operational and technical transparency could be achieved by developing standards and models that measure and verify levels of transparency. Such standards could assist AI system development organizations in assessing their level of transparency and provide best practices for further improvements. Moreover, transparency should be considered for a wide range of system stakeholders; however, the appropriate level of transparency may vary among them [S4].

5.1.2 Privacy

AI/autonomous systems must assure user and data privacy throughout the system lifecycle. Privacy can broadly be defined as "the right to control information about oneself" [S22]. Regulatory institutions are consistently involved in establishing legislation for data privacy and protection [S7]. However, privacy becomes more challenging in data-driven AI environments, where the system continually processes user data, including cleaning, merging and interpretation [S7]. Data access in self-governing AI systems raises the primary concern of data privacy, which is closely related to security and transparency [S21]. It is worth noting that AI technologies bring complex challenges associated with data privacy and integrity, which demand more future research [S22].

5.1.3 Accountability

Accountability is the third most frequently reported principle and specifically focuses on liability issues [S5]. It refers to safeguarding justice by assigning responsibility and preventing harm [S3]. The stakeholders must be accountable for the system's decisions and actions in order to minimize culpability problems [S4, S5]. Both technical and social accountability must be ensured before and after system development, implementation and operation [S5]. Accountability is closely linked with transparency, because the system must be understood before liability decisions can be made [S5].

5.1.4 Fairness

Fairness is considered a significant principle of AI ethics. Discrimination between individuals or groups by decision-making systems leads to ethical fairness problems, which affect public values including dignity and justice [S11]. Avoiding unfair biases in AI systems could foster social fairness. AI and autonomous systems should not deceive people or impair their autonomy [S4]. This could be achieved by explicitly making the decision-making process more transparent and identifying the accountable entities.

Analysis. Based on the SLR findings, we identified that the above principles received significant attention, which is compatible with the widely adopted accountability, responsibility and transparency (ART) framework [14] for ethics in AI. Responsibility is not a highly cited principle in the selected primary studies, perhaps because it is considered closely associated with accountability [8]. Moreover, Vakkuri et al. [S5] developed a relational framework based on the key ART constructs with an additional fairness principle. The framework was empirically evaluated to capture the opinions and perceptions of practitioners [S5]. However, the findings of their study are based only on the five major principles and do not consider the other significant principles reported in Table 3.

5.2 RQ2 (Challenges)

The systematic review of the 27 primary studies returned a total of 15 challenging factors (see Table 4). The frequencies of the identified challenging factors are provided in Fig. 6; moreover, a word cloud was generated to demonstrate the significance of the reported factors (see Fig. 7). The following subsections detail the most frequently cited challenges:

[Fig. 6. Frequencies of the identified challenging factors]

5.2.1 Lack of ethical knowledge

Lack of ethical knowledge is one of the main reasons that AI ethics in practice is still far from mature [S14]. Some AI system development organizations believe that government institutions are not in a position to provide experts in this emerging area, while others opine that establishing ethics in AI is not possible without a political approach [S15]. Similarly, management and technical staff are often unaware of the moral and ethical complexity of AI systems. AI ethics is in its infancy, and not enough ethical standards and frameworks are available that provide detailed guidelines to the AI industry.

5.2.2 Vague principles

There are various AI ethics principles, as discussed in Section 5.1. In practice, however, the majority of organizations are reluctant to adopt these principles because they are highly vague in their definitions [S23]. For example, it is not clear how, specifically, to consider "fairness" and "human dignity" in AI ethics [S17]. It is very challenging to apply AI ethics in real-world settings using such vaguely formulated principles [S3].

[Fig. 7. Word cloud of the identified challenging factors]

5.2.3 Highly general

The available principles are too general and broad in concept to be applied specifically in the AI industry [S18]. They are subjective in nature and are used in various domains other than AI. Policymakers involved in drafting AI ethics principles might not have a strong technical understanding of AI system development processes, which makes the principles more general and ambiguous.

5.2.4 Conflict in practice

Organizations, committees and groups involved in developing AI ethics guidelines and principles have conflicting opinions regarding the real-world implementation of AI ethics [S13, S16]. For example, the UK House of Lords suggested that robots should not operate entirely on their own but should be guided by human beings [S10]; on the other hand, in various hospitals robots make autonomous decisions in diagnostic and surgical endeavours. This shows a conflict of interpretation and understanding regarding AI ethics in practice.

5.2.5 Interpret principles differently

AI ethics principles are widely considered ambiguous and general by the majority of organizations [S20]. It has been found that tech firms involved in the development of AI and autonomous systems follow ethical guidelines based on their own understanding [S27]. There are no universally agreed ethical principles that can bring all institutions onto the same page.

5.2.6 Lack of technical understanding

Policymakers often lack technical knowledge, which makes putting AI ethics into practice a challenging effort [S10, S13]. They are not aware of the technical aspects of AI systems, the advancements in AI technologies, or their limitations. This lack of technical understanding widens the gap between system design and ethical thinking [S10]. Ethicists must have the skill to grasp technical knowledge within their ethical frameworks [S10].

Analysis. The challenges reported above provide an overview of the most common and frequently cited factors that could be potential barriers to scaling ethics in AI. Lack of ethical knowledge is identified as the most common challenge of AI ethics. Major ethical mistakes are made because of a lack of moral awareness of specific problems [S14]. Practitioners regard software development activities as their main responsibility and have limited interest in considering ethical aspects [S5]. The ethical uncertainty in AI systems can only be diminished by acquiring ethical knowledge. Continuous awareness of ethical policies, codes and regulations assists in properly managing the ethical values in AI and autonomous systems.

We noticed that very few studies have been published in which the barriers to AI ethics are directly or indirectly mentioned, as is evident from the frequency distribution of the challenging factors given in Table 4. This finding reveals that the study of AI ethics challenges is a very young field and requires considerable research effort from diverse disciplines to mature. The significance of AI technologies in various sectors calls for urgent research to uncover the challenges that hinder the process of considering ethics in AI.

Moreover, the challenging factors with low frequency are not discussed in detail because of page limitations. However, the complete list of the identified factors is provided in Table 4.

The long-term plan of this research is to propose a maturity model that can be used to evaluate the ethical capabilities of organizations involved in developing AI systems. The findings of this systematic review are the initial inputs for the development of the proposed model. Figure 8 shows the preliminary structure of the model and demonstrates how the findings of this review contribute to the development of the principles and challenges components. The identified principles and challenges will be classified across capability and maturity levels. Moreover, best practices will be provided to tackle the identified challenges and implement the AI ethics principles. The given model is a proposed idea that will be systematically developed based on industrial empirical studies and the concepts of the widely adopted CMMI process model [19]. A case study approach has been selected to evaluate the real-world significance of the model.

[Fig. 8. Preliminary structure of the proposed maturity model]

6 Threats to validity

6.1 Construct validity

The primary study selection process might affect the quality of the data collected for synthesis. However, we defined a formal search strategy and constantly revised it during regular consensus meetings. Moreover, the given search string might not cover all the relevant articles and might have missed quality studies. We tried to mitigate this threat by conducting a pilot search using multiple strings; the final string was developed based on the results returned by the pilot strings. Finally, backward snowballing was performed to identify any additional primary studies missed during the SLR process.

6.2 Internal validity

In SLR studies, internal validity refers to the rigorousness of the review process including the development of research questions, data sources, search strategy, study selection, string development etc. This study is conducted by following the formal SLR process guidelines proposed by Kitchenham and Charters [15]. The step-by-step flow of the SLR phases is methodically discussed in Section 3 .

6.3 External validity

External validity is related to the generalizability of the study findings. The results are summarised from 27 primary studies; because of the novelty of the research topic, very few studies have been published in this domain. The sample size (n=27) might not be large enough to generalize the study findings; however, we plan to extend this study by conducting an industrial study to evaluate the SLR findings and capture the perceptions of practitioners.

7 Conclusions and future directions

Ethics in AI has received significant attention in the last couple of years, and there is a need for a systematic literature study that discusses the principles and uncovers the key challenges of AI ethics. This study was conducted to fill that research gap by following the SLR approach. We identified a total of 27 relevant primary studies, and the systematic review of the selected studies returned 22 principles and 15 challenging factors. We noticed that most of the studies focus on four major principles, i.e., transparency, privacy, accountability and fairness, which should be considered by AI system designers. Moreover, decision-making systems should also be aware of the ethical principles so as to know the implications of their actions.

The challenges of ethics in AI were identified to provide an understanding of the factors that hinder the implementation of ethical principles. The most frequently reported challenging factors are lack of ethical knowledge and vague principles. Knowledge and understanding of ethics are important for both management and technical teams, and further help remove the vagueness of AI principles. Lack of ethical knowledge could undermine the significance of decision-making systems.

We plan to extend this study by conducting an industrial survey to investigate the understanding of AI ethics in practice and identify best practices for tackling the given challenging factors and managing the reported principles. Moreover, industrial case studies will be conducted in the AI industry to assess the effectiveness of the proposed maturity model in practice.

8 References

  • (1) Christina Pazzanese. 2020. Ethical concerns mount as AI takes bigger decision-making role in more industries. Retrieved January 15, 2021 from https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/
  • (2) Vincent C. Müller. 2020. Ethics of Artificial Intelligence and Robotics. The Stanford Encyclopedia of Philosophy. Retrieved January 15, 2021 from https://plato.stanford.edu/archives/win2020/entries/ethics-ai/
  • (3) Ville Vakkuri, Kai-Kristian Kemell and Pekka Abrahamsson. 2019. Implementing Ethics in AI: Initial Results of an Industrial Multiple Case Study. In International Conference on Product-Focused Software Process Improvement, Lecture Notes in Computer Science, vol 11915. Springer, Cham, 331-338. https://doi.org/10.1007/978-3-030-35333-9_24
  • (4) Jaana Leikas, Raija Koivisto, and Nadezhda Gotcheva. 2019. Ethical framework for designing autonomous intelligent systems. Journal of Open Innovation: Technology, Market, and Complexity 5, 1 (2019), 18. https://doi.org/10.3390/joitmc5010018
  • (5) Ville Vakkuri, Kai-Kristian Kemell and Pekka Abrahamsson. 2019. AI ethics in industry: a research framework. arXiv preprint arXiv:1910.12695.
  • (6) IEEE. (2019). Ethically aligned design: A vision for prioritizing human well-being with autonomous and intelligent systems, first edition. Retrieved January 17, 2021 from https://tinyurl.com/yah4jzb6
  • (7) Anna Jobin, Marcello Ienca, and Effy Vayena. 2019. The global landscape of AI ethics guidelines. Nature Machine Intelligence 1, no. 9 (2019), 389-399. https://doi.org/10.1038/s42256-019-0088-2
  • (8) Pekka Ala-Pietilä, Wilhelm Bauer, Urs Bergmann, Mária Bieliková, Cecilia Bonefeld-Dahl, Yann Bonnet, Loubna Bouarfa et al. (2018). The European Commission’s high-level expert group on artificial intelligence: Ethics guidelines for trustworthy AI. Working Document for stakeholders’ consultation. Retrieved January 17, 2021 from https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
  • (9) Alan Bundy. 2016. Preparing for the future of artificial intelligence. Executive Office of the President National Science and Technology Council Committee on Technology Washington, D.C, USA. Retrieved January 23, 2021 from https://cra.org/ccc/wp-content/uploads/sites/2/2016/11/NSTC_preparing_for_the_future_of_ai.pdf
  • (10) Beijing Academy of Artificial Intelligence. 2019. Beijing AI principles. Retrieved January 23, 2021 from https://www.baai.ac.cn/news/beijing-ai-principles-en.html
  • (11) ISO/IEC. ISO/IEC JTC 1/SC 42 Artificial intelligence. Retrieved January 25, 2021 from https://www.iso.org/committee/6794475.html
  • (12) Ville Vakkuri, Kai-Kristian Kemell, Marianna Jantunen, and Pekka Abrahamsson. 2020. “This is Just a Prototype”: How Ethics Are Ignored in Software Startup-Like Environments. In International Conference on Agile Software Development, Lecture Notes in Business Information Processing, vol 383. Springer, Cham. 195-210. https://doi.org/10.1007/978-3-030-49392-9_13
  • (13) Andrew McNamara, Justin Smith, and Emerson Murphy-Hill. 2018. Does ACM’s code of ethics change ethical decision making in software development? In Proceedings of the 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2018). Association for Computing Machinery, New York, NY, USA, 729–733. DOI:https://doi.org/10.1145/3236024.3264833
  • (14) Virginia Dignum. 2017. Responsible autonomy. arXiv preprint arXiv:1706.02513.
  • (15) Barbara Kitchenham and Stuart Charters. 2007. Guidelines for performing systematic literature reviews in software engineering. Technical report, Ver. 2.3 EBSE Technical Report. School of Computer Science and Mathematics, Keele University, UK.
  • (16) Lianping Chen, Muhammad Ali Babar, and He Zhang. 2010. Towards an evidence-based understanding of electronic data sources. In Proceedings of the 14th International Conference on Evaluation and Assessment in Software Engineering (EASE'10). BCS Learning & Development Ltd., Swindon, GBR, 135–138.
  • (17) Claes Wohlin. 2014. Guidelines for snowballing in systematic literature studies and a replication in software engineering. In Proceedings of the 18th International Conference on Evaluation and Assessment in Software Engineering (EASE ’14). Association for Computing Machinery, New York, USA, 1–10. DOI:https://doi.org/10.1145/2601248.2601268
  • (18) Tingting Bi, Peng Liang, Antony Tang, and Chen Yang. 2018. A systematic mapping study on text analysis techniques in software architecture. Journal of Systems and Software 144 (2018), 533-558. https://doi.org/10.1016/j.jss.2018.07.055
  • (19) CMMI Product Team. 2002. Capability maturity model® integration (CMMI SM), version 1.1. CMMI for systems engineering, software engineering, integrated product and process development, and supplier sourcing (CMMI-SE/SW/IPPD/SS, V1. 1) 2 (2002).

9 Appendices

Appendix A: Selected primary studies


The Ethics of AI Ethics: An Evaluation of Guidelines

Open access · Published: 01 February 2020 · Volume 30, pages 99–120 (2020)


Thilo Hagendorff


Current advances in research, development and application of artificial intelligence (AI) systems have yielded a far-reaching discourse on AI ethics. In consequence, a number of ethics guidelines have been released in recent years. These guidelines comprise normative principles and recommendations aimed to harness the “disruptive” potentials of new AI technologies. Designed as a semi-systematic evaluation, this paper analyzes and compares 22 guidelines, highlighting overlaps but also omissions. As a result, I give a detailed overview of the field of AI ethics. Finally, I also examine to what extent the respective ethical principles and values are implemented in the practice of research, development and application of AI systems—and how the effectiveness in the demands of AI ethics can be improved.


1 Introduction

The current AI boom is accompanied by constant calls for applied ethics, which are meant to harness the “disruptive” potentials of new AI technologies. As a result, a whole body of ethical guidelines has been developed in recent years collecting principles, which technology developers should adhere to as far as possible. However, the critical question arises: Do those ethical guidelines have an actual impact on human decision-making in the field of AI and machine learning? The short answer is: No, most often not. This paper analyzes 22 of the major AI ethics guidelines and issues recommendations on how to overcome the relative ineffectiveness of these guidelines.

AI ethics—or ethics in general—lacks mechanisms to reinforce its own normative claims. Of course, the enforcement of ethical principles may involve reputational losses in the case of misconduct, or restrictions on memberships in certain professional bodies. Yet altogether, these mechanisms are rather weak and pose no imminent threat. Researchers, politicians, consultants, managers and activists have to deal with this essential weakness of ethics. However, it is also a reason why ethics is so appealing to many AI companies and institutions. When companies or research institutes formulate their own ethical guidelines, regularly incorporate ethical considerations into their public relations work, or adopt ethically motivated "self-commitments", efforts to create a truly binding legal framework are continuously discouraged. Ethics guidelines of the AI industry serve to suggest to legislators that internal self-governance in science and industry is sufficient, and that no specific laws are necessary to mitigate possible technological risks and to eliminate scenarios of abuse (Calo 2017). And even when more concrete laws concerning AI systems are demanded, as recently done by Google (2019), these demands remain relatively vague and superficial.

Science- or industry-led ethics guidelines, as well as other concepts of self-governance, may serve to pretend that accountability can be devolved from state authorities and democratic institutions upon the respective sectors of science or industry. Moreover, ethics can also simply serve the purpose of calming critical voices from the public, while simultaneously the criticized practices are maintained within the organization. The association "Partnership on AI" (2018), which brings together companies such as Amazon, Apple, Baidu, Facebook, Google, IBM and Intel, is exemplary in this context. Companies can highlight their membership in such associations whenever the notion of serious commitment to legal regulation of business activities needs to be stifled.

This prompts the question as to what extent ethical objectives are actually implemented and embedded in the development and application of AI, or whether merely good intentions are deployed. So far, some papers have been published on the subject of teaching ethics to data scientists (Garzcarek and Steuer 2019; Burton et al. 2017; Goldsmith and Burton 2017; Johnson 2017), but by and large very little to nothing has been written about the tangible implementation of ethical goals and values. In this paper, I address this question from a theoretical perspective. In a first step, 22 of the major guidelines of AI ethics will be analyzed and compared. I will also describe which issues they omit to mention. In a second step, I compare the principles formulated in the guidelines with the concrete practice of research and development of AI systems. In particular, I critically examine to what extent the principles have an effect. In a third and final step, I will work out ideas on how AI ethics can be transformed from a merely discursive phenomenon into concrete directions for action.

2 Guidelines in AI Ethics

Research in the field of AI ethics ranges from reflections on how ethical principles can be implemented in decision routines of autonomous machines (Anderson and Anderson 2015; Etzioni and Etzioni 2017; Yu et al. 2018), through meta-studies about AI ethics (Vakkuri and Abrahamsson 2018; Prates et al. 2018; Boddington 2017; Greene et al. 2019; Goldsmith and Burton 2017) or the empirical analysis of how trolley problems are solved (Awad et al. 2018), to reflections on specific problems (Eckersley 2018) and comprehensive AI guidelines (The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems 2019). This paper mainly deals with the latter issue. The list of ethics guidelines considered in this article therefore includes compilations that cover the field of AI ethics as comprehensively as possible. To the best of my knowledge, a few preprints and papers are currently available which also deal with the comparison of different ethical guidelines (Zeng et al. 2018; Fjeld et al. 2019; Jobin et al. 2019). While especially the paper from Jobin et al. (2019) is a systematic scoping review of all the existing literature on AI ethics, this paper does not aim at a full analysis of every available soft-law or non-legal norm document on AI, algorithm, robot, or data ethics, but rather a semi-systematic overview of issues and normative stances in the field, demonstrating how the details of AI ethics relate to a bigger picture.

The selection and compilation of 22 major ethical guidelines were based on a literature analysis. This selection was undertaken in two phases. In the first phase, I searched different databases, namely Google, Google Scholar, Web of Science, ACM Digital Library, arXiv, and SSRN, for hits or articles on "AI ethics", "artificial intelligence ethics", "AI principles", "artificial intelligence principles", "AI guidelines", and "artificial intelligence guidelines", following every link in the first 25 search results, while at the same time ignoring duplicates in the search process. During the analysis of the search results, I also sifted through the references in order to manually find further relevant guidelines. Furthermore, I used Algorithm Watch's AI Ethics Guidelines Global Inventory, a crowdsourced, comprehensive list of ethics guidelines, to check whether I had missed relevant guidelines. Via the list, I found three further guidelines that meet the criteria for the selection. In this context, a shortcoming one has to consider is that my selection is biased towards documents which are western/northern in nature, excluding guidelines which are not written in English.

I rejected all documents older than 5 years in order to only take guidelines into account that are relatively new. Documents that only refer to a national context—such as, for instance, position papers of national interest groups (Smart Dubai 2018), the report of the British House of Lords (Bakewell et al. 2018), or the Nordic engineers' stand on Artificial Intelligence and Ethics (Podgaiska and Shklovski)—were excluded from the compilation. Nevertheless, I included the European Commission's "Ethics Guidelines for Trustworthy AI" (Pekka et al. 2018), the Obama administration's "Report on the Future of Artificial Intelligence" (Holdren et al. 2016), and the "Beijing AI Principles" (Beijing Academy of Artificial Intelligence 2019), which are backed by the Chinese Ministry of Science and Technology. I have included these three guidelines because they represent the three largest AI "superpowers". Furthermore, I included the "OECD Principles on AI" (Organisation for Economic Co-operation and Development 2019) due to their supranational character. Scientific papers or texts that fall into the category of AI ethics but focus on one or more specific aspects of the topic were not considered either. The same applies to guidelines or toolkits which are not specifically about AI but rather about big data, algorithms or robotics (Anderson et al. 2018; Anderson and Anderson 2011). I further excluded corporate policies, with the exception of the "Information Technology Industry AI Policy Principles" (2017), the principles of the "Partnership on AI" (2018), the IEEE's first and second versions of the document on "Ethically Aligned Design" (The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems 2016, 2019), as well as the brief principle lists of Google (2018), Microsoft (2019), DeepMind (DeepMind), OpenAI (2018), and IBM (Cutler et al. 2018), which have become well-known through media coverage. Other large companies such as Facebook or Twitter have not yet published any systematic AI guidelines, but only isolated statements of good conduct. Paula Boddington's book on ethical guidelines (2017), funded by the Future of Life Institute, was also not considered, as it merely repeats the Asilomar principles (2017).

The decisive factor for the selection of ethics guidelines was not the depth of detail of the individual document, but the discernible intention of a comprehensive mapping and categorization of normative claims with regard to the field of AI ethics. In Table 1, I only inserted green markers if the corresponding issues were explicitly discussed in one or more paragraphs. Isolated mentions without further explanations were not considered, unless the analyzed guideline is so short that it consists entirely of brief mentions altogether.

2.2 Multiple Entries

As shown in Table 1, several issues are unsurprisingly recurring across various guidelines. Especially the aspects of accountability, privacy or fairness appear altogether in about 80% of all guidelines and seem to provide the minimal requirements for building and using an "ethically sound" AI system. What is striking here is the fact that the most frequently mentioned aspects are those for which technical fixes can be or have already been developed. Enormous technical efforts are undertaken to meet ethical targets in the fields of accountability and explainable AI (Mittelstadt et al. 2019), fairness and discrimination-aware data mining (Gebru et al. 2018), as well as privacy (Baron and Musolesi 2017). Many of those endeavors are unified under the FAT ML or XAI community (Veale and Binns 2017; Selbst et al. 2018). Several tech companies already offer tools for bias mitigation and fairness in machine learning. In this context, IBM, Google, Microsoft and Facebook have issued the "AI Fairness 360" tool kit, the "What-If Tool" and "Facets", "fairlearn.py", and "Fairness Flow", respectively (Whittaker et al. 2018).

Accountability, explainability, privacy, justice, but also other values such as robustness or safety are most easily operationalized mathematically and thus tend to be implemented in terms of technical solutions. With reference to the findings of psychologist Carol Gilligan, one could argue at this point that the way AI ethics is performed and structured constitutes a typical instantiation of a male-dominated justice ethics (Gilligan 1982). In the 1980s, Gilligan demonstrated in empirical studies that women do not, as men typically do, address moral problems primarily through a "calculating", "rational", "logic-oriented" ethics of justice, but rather interpret them within a wider framework of an "empathic", "emotion-oriented" ethics of care. In fact, no different from other parts of AI research, the discourse on AI ethics is also primarily shaped by men. My analysis of the distribution of female and male authors of the guidelines, as far as authors were indicated in the documents, showed that the proportion of women was 41.7%. This ratio appears to be close to balance. However, it should be considered that the ratio of female to male authors is reduced to a less balanced 31.3% if the four AI Now reports are discarded, which come from an organization that is deliberately led by women. The proportion of women is lowest, at 7.7%, in the FAT ML community's guidelines, which are focused predominantly on technical solutions (Diakopoulos et al.). Accordingly, the "male way" of thinking about ethical problems is reflected in almost all ethical guidelines by way of mentioning aspects such as accountability, privacy or fairness. In contrast, almost no guideline talks about AI in contexts of care, nurture, help, welfare, social responsibility or ecological networks. In AI ethics, technical artefacts are primarily seen as isolated entities that can be optimized by experts so as to find technical solutions for technical problems. What is often lacking is a consideration of the wider contexts and the comprehensive relationship networks in which technical systems are embedded. In accordance with that, it turns out that precisely the reports of AI Now (Crawford et al. 2016, 2019; Whittaker et al. 2018; Campolo et al. 2017), an organization primarily led by women, do not conceive AI applications in isolation, but within a larger network of social and ecological dependencies and relationships (Crawford and Joler 2018), corresponding most closely with the ideas and tenets of an ethics of care (Held 2013).

What are further insights from my analysis of the ethics guidelines, as summarized in Table 1? On the one hand, it is noticeable that guidelines from industrial contexts name on average 9.1 distinctly separated ethical aspects, whereas the average for ethics codes from science is 10.8. The principles of Microsoft's AI ethics are the most brief and minimalistic (Microsoft Corporation 2019). The OpenAI Charta names only four points and is thus situated at the bottom of the list (OpenAI 2018). Conversely, the IEEE guideline contains the largest volume, with more than 100,000 words (The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems 2019). Finally, yet importantly, it is noteworthy that almost all guidelines suggest that technical solutions exist for many of the problems described. Nevertheless, there are only two guidelines which contain genuinely technical explanations at all—albeit only very sparsely. The authors of the guideline on the "Malicious Use of AI" provide the most extensive commentary here (Brundage et al. 2018).

2.3 Omissions

Despite the fact that the guidelines contain various parallels and several recurring topics, what are the issues the guidelines do not discuss at all, or only very occasionally? Here, I want to give a (non-exhaustive) overview of issues that are missing. Two things should be considered in this context. First, the sampling method used to select the AI ethics guidelines has an effect on the list of issues and omissions. When deliberately excluding, for instance, robot ethics guidelines, this has the effect that the list of entries lacks issues that are connected with robotics. Second, not all omissions can be treated equally. There are omissions which are missing or severely underrepresented without any good reason—for instance the aspect of political abuse or the "hidden" social and ecological costs of AI systems—and omissions that can be justified—for instance deliberations on artificial general intelligence or machine consciousness, since those technologies are purely speculative.

Nevertheless, in view of the fact that significant parts of the AI community see the emergence of artificial general intelligence as well as associated dangers for humanity or existential threats as a likely scenario (Müller and Bostrom 2016; Bostrom 2014; Tegmark 2017; Omohundro 2014), one could argue that those topics could be discussed in ethics guidelines under the umbrella of potential prohibitions to pursue certain research strands in this area (Hagendorff 2019). The fact that artificial general intelligence is not discussed in the guidelines may be due to the fact that most of the guidelines are not written by research groups from philosophy or other speculative disciplines, but by researchers with a background directly in computer science or its application. In this context, it is noteworthy that the fear of the emergence of superintelligence is more frequently expressed by people who lack technical experience in the field of AI—one just has to think of people like Stephen Hawking, Elon Musk or Bill Gates—while "real" experts generally regard the idea of a strong AI as rather absurd (Calo 2017, 26). Perhaps the same holds true for the question of machine consciousness and the ethical problems associated with it (Lyons 2018), as this topic is also omitted from all examined ethical guidelines. What is also striking is the fact that only the Montréal Declaration for Responsible Development of Artificial Intelligence (2018) as well as the AI Now 2019 Report (2019) explicitly address the aspect of democratic control, governance and political deliberation of AI systems. The mentioned documents are also the only guidelines that explicitly prohibit imposing certain lifestyles or concepts of "good living" on people by AI systems, as is demonstrated, for example, in the Chinese scoring system (Engelmann et al. 2019). The former document further criticizes the application of AI systems for the reduction of social cohesion, for example by isolating people in echo chambers (Flaxman et al. 2016). In addition, hardly any guideline discusses the possibility of political abuse of AI systems in the context of automated propaganda, bots, fake news, deepfakes, micro-targeting, election fraud, and the like. What is also largely absent from most guidelines is the issue of a lack of diversity within the AI community. This lack of diversity is prevailing in the field of artificial intelligence research and development, as well as in the workplace cultures shaping the technology industry. In the end, a relatively small group of predominantly white men determines how AI systems are designed, for what purposes they are optimized, what is attempted to realize technically, etc. The famous AI startup "nnaisense" run by Jürgen Schmidhuber, which aims at generating an artificial general intelligence, to name just one example, employs only two women—one scientist and one office manager—in its team, but 21 men. Another matter, which is not covered at all or only very rarely mentioned in the guidelines, are aspects of robot ethics. As mentioned in the methods chapter, specific guidelines for robot ethics exist, most prominently represented by Asimov's three laws of robotics (Asimov 2004), but those guidelines were intentionally excluded from the analysis. Nonetheless, advances in AI research contribute, for instance, to increasingly anthropomorphized technical devices. The ethical question that arises in this context echoes Immanuel Kant's "brutalization argument" and states that the abuse of anthropomorphized agents—as, for example, is the case with language assistants (Brahnam 2006)—also promotes the likelihood of violent actions between people (Darling 2016). Apart from that, the examined ethics guidelines pay little attention to the rather popular trolley problems (Awad et al. 2018) and their alleged relation to ethical questions surrounding self-driving cars or other autonomous vehicles. In connection to this, no guideline deals in detail with the obvious question of where systems of algorithmic decision making are superior or inferior, respectively, to human decision routines. And finally, virtually no guideline deals with the "hidden" social and ecological costs of AI systems. At several points in the guidelines, the importance of AI systems for approaching a sustainable society is emphasized (Rolnick et al. 2019). However, it is omitted—with the exception of the AI Now 2019 Report (2019)—that producer and consumer practices in the context of AI technologies may in themselves contradict sustainability goals. Issues such as lithium mining, e-waste, the one-way use of rare earth minerals, energy consumption, and low-wage "clickworkers" creating labels for data sets or doing content moderation are of relevance here (Crawford and Joler 2018; Irani 2016; Veglis 2014; Fang 2019; Casilli 2017). Although "clickwork" is a necessary prerequisite for the application of methods of supervised machine learning, it is associated with numerous social problems (Silberman et al. 2018; Irani 2015; Graham et al. 2017), such as low wages, poor work conditions and adverse psychological consequences, which tend to be ignored by the AI community. Finally, yet importantly, not a single guideline raises the issue of public–private partnerships and industry-funded research in the field of AI. Despite the massive lack of transparency regarding the allocation of research funds, it is no secret that large parts of university AI research are financed by corporate partners. In light of this, it remains questionable to what extent the ideal of freedom of research can be upheld—or whether there will be a gradual "buyout" of research institutes.

3 AI in Practice

3.1 Business Versus Ethics

The close link between business and science is not only revealed by the fact that all of the major AI conferences are sponsored by industry partners. It is also well illustrated by the AI Index 2018 (Shoham et al. 2018). Statistics show that, for example, the number of corporate-affiliated AI papers has grown significantly in recent years. Furthermore, there is huge growth in the number of active AI startups, each supported by large amounts of annual funding from venture capital firms. Tens of thousands of AI-related patents are registered each year. Different industries are incorporating AI applications in a broad variety of fields, ranging from manufacturing, supply-chain management, and service development to marketing and risk assessment. All in all, the global AI market comprises more than 7 billion dollars (Wiggers 2019).

A critical look at this global AI market and at the use of AI systems in the economy and other social systems sheds light primarily on unwanted side effects of AI use, as well as on directly malevolent contexts of use, which occur in various areas (Pistono and Yampolskiy 2016; Amodei et al. 2017). Leading, of course, is the military use of AI in cyber warfare or in weaponized unmanned vehicles and drones (Ernest and Carroll 2016; Anderson and Waxman 2013). According to media reports, the US government alone intends to invest two billion dollars in military AI projects over the next five years (Fryer-Biggs 2018). Moreover, governments can use AI applications for automated propaganda and disinformation campaigns (Lazer et al. 2018), social control (Engelmann et al. 2019), surveillance (Helbing 2019), face recognition and sentiment analysis (Introna and Wood 2004), social sorting (Lyon 2003), or improved interrogation techniques (McAllister 2017). Companies, in turn, can cause massive job losses through AI implementation (Frey and Osborne 2013), conduct unmonitored AI experiments on society without informed consent (Kramer et al. 2014), suffer from data breaches (Schneier 2018), use unfair, biased algorithms (Eubanks 2018), provide unsafe AI products (Sitawarin et al. 2018), use trade secrets to disguise harmful or flawed AI functionalities (Whittaker et al. 2018), or rush immature AI applications to market, among many other things. Furthermore, criminal or black-hat hackers can use AI to tailor cyberattacks, steal information, attack IT infrastructures, rig elections, spread misinformation, for example through deepfakes, use voice synthesis technologies for fraud or social engineering (Bendel 2017), or disclose personal traits that are actually secret or private via machine learning applications (Kosinski and Wang 2018; Kosinski et al. 2013, 2015). All in all, only a very small number of papers are published about the misuse of AI systems, even though they impressively show what massive damage can be done with those systems (Brundage et al. 2018; King et al. 2019; O'Neil 2016).

3.2 AI Race

While the United States currently has the largest number of AI startups, China aims to be the "world leader in AI" by 2030 (Abacus 2018). This ambition is backed by the sheer amount of data China has at its disposal to train its own AI systems, as well as by the large data-labeling companies that handle the manual preparation of data sets for supervised machine learning (Yuan 2018). Conversely, China is seen to have a weakness vis-à-vis the USA in that the investments of the market leaders Baidu, Alibaba and Tencent are too application-oriented, concentrating on areas such as autonomous driving, finance or home appliances, while important basic research on algorithm development, chip production or sensor technology is neglected (Hao 2019). The constant comparison between China, the USA and Europe makes the fear of falling behind one another an essential motive for efforts in the research and development of artificial intelligence.

Another justification for competitive thinking is provided by the military context. If one's own "team", framed in nationalist terms, does not keep pace, so the reasoning goes, it will simply be overrun by the opposing "team" with superior AI military technology. In fact, potential risks emerge from the AI race narrative, as well as from an actual competitive race to develop AI systems for technological superiority (Cave and ÓhÉigeartaigh 2018). One risk of this rhetoric is that "impediments" in the form of ethical considerations will be eliminated completely from research, development and implementation. AI research is then framed not as a cooperative global project, but as a fierce competition. This competition affects the actions of individuals and promotes a climate of recklessness, repression, and thinking in hierarchies, victory and defeat. The race for the best AI, whether a mere narrative or a harsh reality, reduces the likelihood of establishing technical precautionary measures, of developing benevolent AI systems, and of cooperation and dialogue between research groups and companies. Thus, the AI race stands in stark contrast to the idea of developing an "AI4people" (Floridi et al. 2018). The same holds true for the idea of an "AI for Global Good", as proposed at the 2017 ITU summit, and for the large number of leading AI researchers who signed the open letter of the Future of Life Institute, embracing the norm that AI should be used for prosocial purposes.

Despite these downsides, in less public discourses and in concrete practice, an AI race has long since established itself. Along with that development, in- and outgroup thinking has intensified. Competitors are seen more or less as enemies, or at least as threats against which one has to defend oneself. Ethics, on the other hand, in its considerations and theories, always stresses the danger of an artificial differentiation between in- and outgroups (Derrida 1997). Constructed outgroups are subject to devaluation, are perceived as de-individualized, and in the worst case can become victims of violence simply because of their status as "others" (Mullen and Hu 1989; Vaes et al. 2014). I argue that only by abandoning such in- and outgroup thinking can the AI race be reframed as a global cooperation for beneficial and safe AI.

3.3 Ethics in Practice

Do ethical guidelines bring about a change in individual decision-making, regardless of the larger social context? In a recent controlled study, researchers critically examined the idea that ethical guidelines serve as a basis for ethical decision-making by software engineers (McNamara et al. 2018). In brief, their main finding was that the effectiveness of guidelines or ethical codes is almost zero and that they do not change the behavior of professionals in the tech community. In the study, 63 software engineering students and 105 professional software developers were presented with eleven software-related ethical decision scenarios, testing whether the ethics code of the Association for Computing Machinery (ACM) (Gotterbarn et al. 2018) in fact influences ethical decision-making in six vignettes, ranging from responsibility to report, user data collection, intellectual property, code quality and honesty to customers, to time and personnel management. The results are disillusioning: "No statistically significant difference in the responses for any vignette were found across individuals who did and did not see the code of ethics, either for students or for professionals." (McNamara et al. 2018, 4).

Irrespective of such considerations on the microsociological level, the relative ineffectiveness of ethics can also be explained at the macrosociological level. Countless companies are eager to monetize AI in a huge variety of applications. This striving for the profitable use of machine learning systems is not primarily framed by value- or principle-based ethics, but by an economic logic. Engineers and developers are neither systematically educated about ethical issues nor empowered, for example by organizational structures, to raise ethical concerns. In business contexts, speed is often everything, and skipping ethical considerations is the path of least resistance. Thus, the practice of developing, implementing and using AI applications very often has little to do with the values and principles postulated by ethics. The German sociologist Ulrich Beck once stated that ethics nowadays "plays the role of a bicycle brake on an intercontinental airplane" (Beck 1988, 194). This metaphor proves particularly true in the context of AI, where huge sums of money are invested in the development and commercial utilization of systems based on machine learning (Rosenberg 2017), while ethical considerations are mainly used for public relations purposes (Boddington 2017, 56).

In their AI Now 2017 Report, Kate Crawford and her team state that ethics and forms of soft governance "face real challenges" (Campolo et al. 2017, 5). This is mainly because ethics has no enforcement mechanisms reaching beyond voluntary and non-binding cooperation between ethicists and individuals working in research and industry. What happens instead is that AI research and development takes place in "closed-door industry settings", where "user consent, privacy and transparency are often overlooked in favor of frictionless functionality that supports profit-driven business models" (Campolo et al. 2017, 31 f.). Despite this disregard for ethical principles, AI systems are used in areas of high societal significance such as health, policing, mobility and education. Accordingly, the AI Now Report 2018 repeats that the AI industry "urgently needs new approaches to governance", since "internal governance structures at most technology companies are failing to ensure accountability for AI systems" (Whittaker et al. 2018, 4). Ethics guidelines thus often fall into the category of a "'trust us' form of [non-binding] corporate self-governance" (Whittaker et al. 2018, 30), and people should "be wary of relying on companies to implement ethical practices voluntarily" (Whittaker et al. 2018, 32).

The tension between ethical principles and wider societal interests on the one hand, and research, industry and business objectives on the other can be explained with recourse to sociological theories. On the basis of systems theory, in particular, it can be shown that modern societies differentiate into social systems, each working with its own codes and communication media (Luhmann 1984, 1997, 1988). Structural couplings can cause decisions in one social system to influence other social systems. Such couplings, however, are limited and do not change the overall autonomy of social systems. This autonomy, which must be understood as an exclusive, functionalist orientation towards the system's own codes, also manifests itself in the AI industry, business and science. All these systems have their own codes, their own target values, and their own types of economic or symbolic capital via which they are structured and on the basis of which decisions are made (Bourdieu 1984). Ethical intervention in those systems is possible only to a very limited extent (Hagendorff 2016). A certain hesitance exists towards every kind of intervention that lies beyond the functional laws of the respective systems. That said, unethical behavior or unethical intentions are not caused solely by economic incentives. Individual character traits such as cognitive moral development, idealism or job satisfaction also play a role, as do organizational characteristics such as an egoistic work climate or (non-existent) mechanisms for the enforcement of ethical codes (Kish-Gephart et al. 2010). Nevertheless, many of these factors are heavily influenced by the overall economic system logic. Ethics is then, so to speak, "operationally effectless" (Luhmann 2008).

And yet, such system-theoretical considerations apply only at a macro level of observation and must not be overgeneralized. Deviations from purely economic behavioral logics do occur in the tech industry, for example when Google withdrew from the military project "Maven" after protests from employees (Statt 2018), or when people at Microsoft protested against the company's cooperation with Immigration and Customs Enforcement (ICE) (Lecher 2018). Nevertheless, it must be kept in mind that, in addition to genuine ethical motives, economically relevant reputational losses may have played a significant role in these cases. Hence, the protest against unethical AI projects can in turn be interpreted within an economic logic, too.

3.4 Loyalty to Guidelines

As indicated in the previous sections, the practice of using AI systems is poor in terms of compliance with the principles set out in the various ethical guidelines. Nevertheless, great progress has been made in the areas of privacy, fairness and explainability. For example, many privacy-friendly techniques for the use of data sets and learning algorithms have been developed, using methods where an AI system's "sight" is "darkened" via cryptography, differential privacy or stochastic privacy (Ekstrand et al. 2018; Baron and Musolesi 2017; Duchi et al. 2013; Singla et al. 2014). At the same time, this stands in tension with the observation that AI has been making such massive progress for several years precisely because of the large amounts of (personal) data available. Those data are collected by privacy-invasive social media platforms, smartphone apps, and Internet of Things devices with their countless sensors. In the end, I would argue that the current AI boom coincides with the emergence of a post-privacy society. In many respects, however, this post-privacy society is also a black box society (Pasquale 2015) in which, despite technical and organizational efforts to improve explainability, transparency and accountability, massive zones of non-transparency remain, caused both by the sheer complexity of technological systems and by strategic organizational decisions.
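
To make the notion of a "darkened" view on personal data more concrete, the following is a minimal sketch of the Laplace mechanism, a standard building block of differential privacy. The function name, the toy data and the parameter values are illustrative assumptions of this presentation and are not taken from any of the works cited above.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release a noisy query answer satisfying epsilon-differential privacy.

    `sensitivity` bounds how much the true answer can change when a single
    individual's record is added to or removed from the data set.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# Example: privately release the mean age from a toy user table.
ages = np.array([23, 35, 31, 48, 52, 29, 41])
true_mean = ages.mean()
# With ages clipped to [0, 100], a mean over n records changes by at most
# 100 / n when one record changes, so that bound serves as the sensitivity.
sensitivity = 100 / len(ages)
private_mean = laplace_mechanism(true_mean, sensitivity, epsilon=0.5)
print(f"true mean: {true_mean:.2f}, private release: {private_mean:.2f}")
```

Smaller values of epsilon yield stronger privacy and noisier answers; the approaches cited above differ considerably in their details, but all of them trade utility against a formal guarantee of this kind.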

For many of the issues mentioned in the guidelines, it is difficult to assess the extent to which efforts to meet the set objectives are succeeding or whether conflicting trends prevail. This is the case in the areas of safety and cybersecurity, the science-policy link, the future of employment, public awareness of AI risks, and human oversight. In other areas, including hidden costs and sustainability, the protection of whistleblowers, diversity in the field of AI, the fostering of solidarity and social cohesion, respect for human autonomy, the use of AI for the common good, and the military AI arms race, it can certainly be stated that the ethical goals are being massively underachieved.

One only has to think of gender diversity: even though ethical guidelines clearly demand its improvement, the state of affairs is that on average 80% of the professors at the world's leading universities such as Stanford, Oxford, Berkeley or ETH are male (Shoham et al. 2018); furthermore, men make up more than 70% of applicants for AI jobs in the U.S. (Shoham et al. 2018). Alternatively, one can take human autonomy: as repeatedly demanded in various ethical guidelines, people should not be treated as mere data subjects, but as individuals. In fact, however, countless examples show that computer decisions, regardless of their susceptibility to error, are ascribed a strong authority, which results in individual circumstances and fates being ignored (Eubanks 2018). Furthermore, countless companies strive for the opposite of human autonomy, employing ever more subtle techniques for manipulating user behavior via micro-targeting, nudging, UX design and so on (Fogg 2003; Matz et al. 2017). Another example is social cohesion: many of the major scandals of recent years would have been unthinkable without the use of AI. From echo chamber effects (Pariser 2011) to the use of propaganda bots (Howard and Kollanyi 2016) and the spread of fake news (Vosoughi et al. 2018), AI has repeatedly played a key role in diminishing social cohesion, fostering instead radicalization, the decline of reason in public discourse, and social divides (Tufekci 2018; Brady et al. 2017).

4 Advances in AI Ethics

4.1 Technical Instructions

Given the relative lack of tangible impact of the normative objectives set out in the guidelines, the question arises as to how the guidelines could be improved to make them more effective. At first glance, the most obvious potential for improvement probably lies in supplementing them with more detailed technical explanations, insofar as such explanations can be found. Ultimately, it is a major problem to deduce concrete technological implementations from very abstract ethical values and principles. What does it mean to implement justice or transparency in AI systems? What does a "human-centered" AI look like? How can human oversight be ensured? The list of questions could easily be continued.

The ethics guidelines examined refer exclusively to the term "AI". They never, or only very seldom, use more specific terminology, although "AI" is just a collective term for a wide range of technologies, or an abstract large-scale phenomenon. The fact that not a single prominent ethical guideline goes into greater technical detail shows how deep the gap is between concrete contexts of research, development and application on the one side, and ethical thinking on the other. Ethicists must be at least partly capable of grasping technical details within their intellectual framework. That means reflecting on the ways data are generated, recorded, curated, processed, disseminated, shared and used (Bruin and Floridi 2017), on the ways algorithms and code are designed (Kitchin 2017; Kitchin and Dodge 2011), and on the ways training data sets are selected (Gebru et al. 2018). In order to analyze all this in sufficient depth, ethics has to partially transform into "microethics". At certain points, a substantial change in the level of abstraction has to happen if ethics is to have an impact and influence in the technical disciplines and in the practice of research and development of artificial intelligence (Morley et al. 2019). On the way to "microethics", a transformation from ethics to technology ethics, machine ethics, computer ethics, information ethics and data ethics has to take place. As long as ethicists refrain from doing so, they will remain visible to the general public, but not in professional communities.

A good example of such microethical work, which can be implemented easily and concretely in practice, is the paper by Gebru et al. (2018). The researchers propose the introduction of standardized datasheets listing the properties of different training data sets, so that machine learning practitioners can check to what extent certain data sets are suitable for their purposes, what the original intention was when the data set was created, what data the set is composed of, how the data were collected and pre-processed, and so on. The paper by Gebru et al. thus allows practitioners to make a more informed decision about the selection of particular training data sets, so that supervised machine learning ultimately becomes fairer and more transparent, and cases of algorithmic discrimination are avoided (Buolamwini and Gebru 2018). Such work is, however, an exception.
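
As a rough illustration of how such datasheets could be made machine-readable, consider the following sketch. The field names and the toy content are assumptions made for this presentation; they do not reproduce the actual question catalogue proposed by Gebru et al. (2018).

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class Datasheet:
    """Summary of a training data set, loosely inspired by the kinds of
    questions discussed in Gebru et al. (2018)."""
    name: str
    original_purpose: str        # why the data set was created
    composition: str             # what the instances represent
    collection_process: str      # how and when the data was gathered
    preprocessing: str           # cleaning and labeling steps applied
    known_limitations: list = field(default_factory=list)

sheet = Datasheet(
    name="toy-faces-v1",
    original_purpose="Benchmark for face detection research",
    composition="10,000 face crops scraped from public photo sites",
    collection_process="Automated web scraping, 2016-2017",
    preprocessing="Deduplicated; labels assigned by crowdworkers",
    known_limitations=[
        "Skewed towards lighter skin tones",
        "No consent obtained from depicted persons",
    ],
)

# A practitioner can inspect the sheet before committing to the data set.
print(json.dumps(asdict(sheet), indent=2))
```

A datasheet of this kind does not resolve ethical questions by itself, but it moves the relevant information to the point where the selection decision is actually made.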

In general, ethical guidelines postulate very broad, overarching principles which are then supposed to be implemented in a widely diversified set of scientific, technical and economic practices, and by sometimes geographically dispersed groups of researchers and developers with different priorities, tasks and fragmented responsibilities. Ethics thus operates at a maximum distance from the practices it actually seeks to govern. Of course, this does not go unnoticed among technology developers. In consequence, the generality and superficiality of ethical guidelines in many cases not only prevents actors from bringing their own practice into line with them, but rather encourages the devolution of ethical responsibility to others.

4.2 Virtue Ethics

Regardless of the fact that normative guidelines should be accompanied by in-depth technical instructions, as far as these can reasonably be identified, the question still arises of how the precarious situation regarding the application and fulfillment of AI ethics guidelines can be improved. To address this question, one needs to take a step back and look at ethical theories in general. In ethics, several major strands of theory have been created and shaped by various philosophical traditions, ranging from deontological to contractualist, utilitarian and virtue-ethical approaches (Kant 1827; Rawls 1975; Bentham 1838; Hursthouse 2001). In the following, two of these approaches, deontology and virtue ethics, are selected to illustrate different ways of doing AI ethics. The deontological approach is based on strict rules, duties or imperatives. The virtue ethics approach, on the other hand, is based on character dispositions, moral intuitions or virtues, especially "technomoral virtues" (Vallor 2016). In light of these two approaches, the traditional type of AI ethics can be assigned to the deontological concept (Mittelstadt 2019): ethics guidelines postulate a fixed set of universal principles and maxims which technology developers should adhere to (Ananny 2016). The virtue ethics approach, by contrast, focuses more on "deeper-lying" structures and situation-specific deliberations, addressing personality traits and behavioral dispositions on the part of technology developers (Leonelli 2016). Virtue ethics does not define codes of conduct but focuses on the individual level: the technologists or software engineers and their social context are the primary addressees of such an ethics (Ananny 2016), not technology itself.

I argue that the prevalent approach of deontological AI ethics should be augmented with an approach oriented towards virtue ethics, aiming at values and character dispositions. Ethics is then no longer understood as a deontologically inspired tick-box exercise, but as a project of advancing personalities, changing attitudes, strengthening responsibilities and gaining the courage to refrain from actions deemed unethical. When following the path of virtue ethics, ethics as a scientific discipline must refrain from wanting to limit, control or steer (Luke 1995). Very often, ethics or ethical guidelines are perceived as something whose purpose is to stop or prohibit activity, to hamper valuable research and economic endeavors (Boddington 2017, 8). I want to reject this negative notion of ethics. It should not be the objective of ethics to stifle activity, but to do the exact opposite: to broaden the scope of action, uncover blind spots, promote autonomy and freedom, and foster self-responsibility.

With regard to AI ethics, approaches that focus on virtues aim at cultivating a moral character, expressing technomoral virtues such as honesty, justice, courage, empathy, care, civility or magnanimity, to name just a few (Vallor 2016). These virtues are supposed to raise the likelihood of ethical decision-making practices in organizations that develop and deploy AI applications. Cultivating a moral character, in terms of virtue ethics, means educating virtues in families, schools and communities, as well as in companies. At best, every member of a society should encourage this cultivation by generating the motivation to adopt and habituate practices that influence technology development and use in a positive manner. The problem of responsibility diffusion, in particular, can only be circumvented when virtue ethics is adopted on a broad and collective level in communities of tech professionals. Every person involved in data science, data engineering and data economies related to applications of AI has to take at least some responsibility for the implications of their actions (Leonelli 2016). This is why researchers such as Floridi argue that every actor who is causally relevant for bringing about the collective consequences or impacts in question has to be held accountable (Floridi 2016). Interestingly, Floridi uses the backpropagation method known from deep learning to describe the way in which responsibilities can be assigned, except that here backpropagation operates on networks of distributed responsibility. When working in groups, actions that at first glance appear morally neutral can nevertheless have consequences or impacts, intended or unintended, that are morally wrong. This means that practitioners from AI communities always need to discern the overarching, short- and long-term consequences of the technical artefacts they are building or maintaining, as well as explore alternative ways of developing software or using data, including the option of completely refraining from carrying out particular tasks that are considered unethical.
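
Read strictly as a toy model, Floridi's analogy might be pictured as follows. The graph, the weights and the very idea of quantifying responsibility as a single number are illustrative assumptions of this sketch, not Floridi's (2016) own formalism.

```python
# Toy reading of the backpropagation analogy: a harm "signal" observed at
# the deployed system is distributed backwards through a graph of
# contributing actors, in proportion to their (invented) causal weights.
contributions = {
    "deployed_system": {"integration_team": 0.5, "model_team": 0.5},
    "model_team": {"data_engineer": 0.6, "label_vendor": 0.4},
}

def backpropagate_responsibility(node, signal, shares=None):
    """Recursively split `signal` among a node's upstream contributors."""
    shares = {} if shares is None else shares
    upstream = contributions.get(node)
    if not upstream:  # leaf actor: absorbs its share of the responsibility
        shares[node] = shares.get(node, 0.0) + signal
        return shares
    for contributor, weight in upstream.items():
        backpropagate_responsibility(contributor, signal * weight, shares)
    return shares

print(backpropagate_responsibility("deployed_system", signal=1.0))
# {'integration_team': 0.5, 'data_engineer': 0.3, 'label_vendor': 0.2}
```

The point of the analogy is structural rather than numerical: responsibility does not pool at the last actor in the chain, but flows back to every actor who is causally relevant for the outcome.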

In addition to the endorsement of virtue ethics in tech communities, several institutional changes should take place. These include the adoption of legal frameworks, the establishment of mechanisms for independent auditing of technologies, the establishment of institutions for complaints that also compensate for harms caused by AI systems, and the expansion of university curricula, in particular with content from the ethics of technology, media and information (Floridi et al. 2018; Cowls and Floridi 2018; Eaton et al. 2017; Goldsmith and Burton 2017). So far, however, hardly any of these demands have been met.

5 Conclusion

Currently, AI ethics is failing in many cases. Ethics lacks an enforcement mechanism. Deviations from the various codes of ethics have no consequences. And in cases where ethics is integrated into institutions, it mainly serves as a marketing strategy. Furthermore, empirical experiments show that reading ethics guidelines has no significant influence on the decision-making of software developers. In practice, AI ethics is often considered extraneous, a surplus or some kind of "add-on" to technical concerns, a non-binding framework imposed by institutions "outside" the technical community. Distributed responsibility, in conjunction with a lack of knowledge about long-term or broader societal consequences of technology, causes software developers to lack a feeling of accountability or a view of the moral significance of their work. Economic incentives, especially, easily override commitments to ethical principles and values. This implies that the purposes for which AI systems are developed and applied are not in accordance with societal values or fundamental principles such as beneficence, non-maleficence, justice and explicability (Taddeo and Floridi 2018; Pekka et al. 2018).

Nevertheless, in several areas, ethically motivated efforts are undertaken to improve AI systems. This is particularly the case in fields where technical "fixes" can be found for specific problems, such as accountability, privacy protection, anti-discrimination, safety or explainability. However, there is also a wide range of ethical aspects that are significantly related to the research, development and application of AI systems but are not, or only very seldom, mentioned in the guidelines. These omissions range from the danger of a malevolent artificial general intelligence, machine consciousness, the reduction of social cohesion by AI ranking and filtering systems on social networking sites, the political abuse of AI systems, the lack of diversity in the AI community, links to robot ethics, the handling of trolley problems, and the weighing of algorithmic against human decision routines, to the "hidden" social and ecological costs of AI and the problem of public-private partnerships and industry-funded research. Again, as mentioned earlier, the list of omissions is not exhaustive, and not all omissions can be justified equally. Some, like deliberations on artificial general intelligence, can be justified by pointing to their purely speculative nature, while others are less defensible and should prompt an update or improvement of existing and upcoming guidelines.

Checkbox guidelines must not be the only "instruments" of AI ethics. A transition is required from a deontologically oriented, action-restricting ethics based on universal adherence to principles and rules, to a situation-sensitive ethical approach based on virtues and personality dispositions, knowledge expansion, responsible autonomy and freedom of action. Such an AI ethics does not seek to subsume as many cases as possible under individual principles in an overgeneralizing way, but is sensitive towards individual situations and specific technical assemblages. Further, AI ethics should not try to discipline moral actors into adhering to normative principles, but emancipate them from potential inabilities to act self-responsibly, on the basis of comprehensive knowledge as well as empathy in situations where morally relevant decisions have to be made.

These considerations have two consequences for AI ethics. On the one hand, a stronger focus on the technological details of the various methods and technologies in the field of AI and machine learning is required. This should ultimately serve to close the gap between ethics and technical discourses: it is necessary to build tangible bridges between abstract values and technical implementations, as long as these bridges can reasonably be constructed. On the other hand, the presented considerations imply that AI ethics should, conversely, turn away from the description of purely technological phenomena in order to focus more strongly on genuinely social and personality-related aspects. AI ethics then deals less with AI as such than with ways of deviating or distancing oneself from problematic routines of action, with uncovering blind spots in knowledge, and with gaining individual self-responsibility. Future AI ethics faces the challenge of achieving this balancing act between the two approaches.

Change history

28 July 2020

In the original publication of this article, Table 1 was published in low resolution. A larger version of Table 1 is published in this correction. The publisher apologizes for the error made during production.

Abacus. (2018). China internet report 2018. Retrieved July 13, 2018. https://www.abacusnews.com/china-internet-report/china-internet-2018.pdf .

Abrassart, C., Bengio, Y., Chicoisne, G., de Marcellis-Warin, N., Dilhac, M.-A., Gambs, S., Gautrais, V., et al. (2018). Montréal declaration for responsible development of artificial intelligence (pp. 1–21).

Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., Mané, D. (2017). Concrete problems in AI safety. arXiv (pp. 1–29).

Ananny, M. (2016). Toward an ethics of algorithms: Convening, observation, probability, and timeliness. Science, Technology, & Human Values, 41 (1), 93–117.

Anderson, M., & Anderson, S. L. (Eds.). (2011). Machine ethics . Cambridge: Cambridge University Press.

Anderson, M., Anderson, S. L. (2015). Towards ensuring ethical behavior from autonomous systems: A case-supported principle-based paradigm. In Artificial intelligence and ethics: Papers from the 2015 AAAI Workshop (pp. 1–10).

Anderson, D., Bonaguro, J., McKinney, M., Nicklin, A., Wiseman, J. (2018). Ethics & algorithms toolkit . Retrieved February 01, 2019. https://ethicstoolkit.ai/ .

Anderson, K., Waxman, M. C. (2013). Law and ethics for autonomous weapon systems: Why a ban won’t work and how the laws of WAR can. SSRN Journal , 1–32.

Asimov, I. (2004). I, Robot . New York: Random House LLC.

Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., et al. (2018). The moral machine experiment. Nature, 563 (7729), 59–64. https://doi.org/10.1038/s41586-018-0637-6 .

Bakewell, J. D., Clement-Jones, T. F., Giddens, A., Grender, R. M., Hollick, C. R., Holmes, C., Levene, P. K. et al. (2018). AI in the UK: Ready, willing and able?. Select committee on artificial intelligence (pp. 1–183).

Baron, B., Musolesi, M. (2017). Interpretable machine learning for privacy-preserving pervasive systems. arXiv (pp. 1–10).

Beck, U. (1988). Gegengifte: Die organisierte Unverantwortlichkeit . Frankfurt am Main: Suhrkamp.

Beijing Academy of Artificial Intelligence. (2019). Beijing AI principles . Retrieved June 18, 2019. https://www.baai.ac.cn/blog/beijing-ai-principles .

Bendel, O. (2017). The synthetization of human voices. AI & SOCIETY - Journal of Knowledge, Culture and Communication, 82, 737.

Bentham, J. (1838). The works of Jeremy Bentham (J. Bowring, Ed., Vol. 1 of 11). Edinburgh: William Tait.

Boddington, P. (2017). Towards a code of ethics for artificial intelligence . Cham: Springer.

Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies . Oxford: Oxford University Press.

Bourdieu, P. (1984). Distinction: A social critique of the judgement of taste . Cambridge: Harvard University Press.

Brady, W. J., Wills, J. A., Jost, J. T., Tucker, J. A., & van Bavel, J. J. (2017). Emotion shapes the diffusion of moralized content in social networks. Proc Natl Acad Sci USA, 114 (28), 7313–7318.

Brahnam, S. (2006). Gendered bots and bot abuse. In Antonella de Angeli, Sheryl Brahnam, Peter Wallis, & Peter Dix (Eds.), Misuse and abuse of interactive technologies (pp. 1–4). Montreal: ACM.

Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., Dafoe, A. et al. (2018). The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. arXiv (pp. 1–101).

Buolamwini, J., Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification . In Sorelle and Wilson 2018 (pp. 1–15).

Burton, E., Goldsmith, J., Koenig, S., Kuipers, B., Mattei, N., & Walsh, T. (2017). Ethical considerations in artificial intelligence courses. AI Magazine, 38(2), 22–36.

Calo, R. (2017). Artificial intelligence policy: a primer and roadmap. SSRN Journal , 1–28.

Campolo, A., Sanfilippo, M., Whittaker, M., Crawford, K. (2017). AI now 2017 report . Retrieved October 02, 2018. https://assets.ctfassets.net/8wprhhvnpfc0/1A9c3ZTCZa2KEYM64Wsc2a/8636557c5fb14f2b74b2be64c3ce0c78/_AI_Now_Institute_2017_Report_.pdf .

Casilli, A. A. (2017). Digital labor studies go global: Toward a digital decolonial turn. International Journal of Communication, 11, 1934–3954.

Cave, S., ÓhÉigeartaigh, S. S. (2018). An AI race for strategic advantage: Rhetoric and risks (pp. 1–5).

Cowls, J., Floridi, L. (2018). Prolegomena to a white paper on an ethical framework for a good AI society. SSRN Journal, 1–14.

Crawford, K., Dobbe, R., Dryer, T., Fried, G., Green, B., Kaziunas, E., Kak, A. et al. (2019). AI now 2019 report . Retrieved December 18, 2019. https://ainowinstitute.org/AI_Now_2019_Report.pdf .

Crawford, K., Joler, V. (2018). Anatomy of an AI system . Retrieved February 06, 2019. https://anatomyof.ai/ .

Crawford, K., Whittaker, M., Clare Elish, M., Barocas, S., Plasek, A., Ferryman, K. (2016). The AI now report: The social and economic implications of artificial intelligence technologies in the near-term .

Cutler, A., Pribić, M., Humphrey, L. (2018). Everyday ethics for artificial intelligence: A practical guide for designers & developers . Retrieved February 04, 2019. https://www.ibm.com/watson/assets/duo/pdf/everydayethics.pdf : 1–18.

Darling, K. (2016). Extending legal protection to social robots: The effect of anthropomorphism, empathy, and violent behavior towards robotic objects. In R. Calo, A. M. Froomkin, & I. Kerr (Eds.), Robot law (pp. 213–234). Cheltenham: Edward Elgar.

de Bruin, B., & Floridi, L. (2017). The ethics of cloud computing. Science and Engineering Ethics, 23 (1), 21–39.

DeepMind. DeepMind ethics & society principles. Retrieved July 17, 2019. https://deepmind.com/applied/deepmind-ethics-society/principles/ .

Derrida, J. (1997). Of grammatology . Baltimore: Johns Hopkins Univ. Press.

Diakopoulos, N., Friedler, S. A., Arenas, M., Barocas, S., Hay, M., Howe, B., Jagadish, H. V. et al. Principles for accountable algorithms and a social impact statement for algorithms. Retrieved July 31, 2019. https://www.fatml.org/resources/principles-for-accountable-algorithms .

Duchi, J. C., Jordan, M. I., Wainwright, M. J. (2013). Privacy aware learning. arXiv (pp. 1–60).

Eaton, E., Koenig, S., Schulz, C., Maurelli, F., Lee, J., Eckroth, J., Crowley, M. et al. (2017). Blue sky ideas in artificial intelligence education from the EAAI 2017 new and future AI educator program. arXiv (pp. 1–5).

Eckersley, P. (2018). Impossibility and uncertainty theorems in AI value alignment or why your AGI should not have a utility function. arXiv (pp. 1–13).

Ekstrand, M. D., Joshaghani, R., Mehrpouyan, H. (2018). Privacy for all: Ensuring fair and equitable privacy protections. In Sorelle and Wilson 2018 (pp. 1–13).

Engelmann, S., Chen, M., Fischer, F., Kao, C., Grossklags, J. (2019). Clear sanctions, vague rewards: How China’s social credit system currently defines “Good” and “Bad” behavior. In Proceedings of the conference on fairness, accountability, and transparency—FAT* ‘19 (pp. 69–78).

Ernest, N., & Carroll, D. (2016). Genetic fuzzy based artificial intelligence for unmanned combat aerial vehicle control in simulated air combat missions. Journal of Defense Management . https://doi.org/10.4172/2167-0374.1000144 .

Etzioni, A., & Etzioni, O. (2017). Incorporating ethics into artificial intelligence. The Journal of Ethics, 21 (4), 403–418.

Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor . New York: St. Marting’s Press.

Fang, L. (2019). Google hired gig economy workers to improve artificial intelligence in controversial drone-targeting project . Retrieved February 13, 2019. https://theintercept.com/2019/02/04/google-ai-project-maven-figure-eight/ .

Fjeld, J., Hilligoss, H., Achten, N., Daniel, M. L., Feldman, J., Kagay, S. (2019). Principled artificial intelligence: A map of ethical and rights-based approaches . Retrieved July 17, 2019. https://ai-hr.cyber.harvard.edu/primp-viz.html .

Flaxman, S., Goel, S., & Rao, J. M. (2016). Filter bubbles, echo chambers, and online news consumption. PUBOPQ, 80 (S1), 298–320.

Floridi, L. (2016). Faultless responsibility: On the nature and allocation of moral responsibility for distributed moral actions. Philosophical Transactions. Series A, Mathematical, Physical, and Engineering Sciences, 374 (2083), 1–13.

Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., et al. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28 (4), 689–707.

Fogg, B. J. (2003). Persuasive technology: Using computers to change what we think and do . San Francisco: Morgan Kaufmann Publishers.

Frey, C. B., Osborne, M. A. (2013). The future of employment: How susceptible are jobs to computerisation: Oxford Martin Programme on Technology and Employment (pp. 1–78).

Fryer-Biggs, Z. (2018). The pentagon plans to spend $2 billion to put more artificial intelligence into its weaponry. Retrieved January 25, 2019. https://www.theverge.com/2018/9/8/17833160/pentagon-darpa-artificial-intelligence-ai-investment .

Future of Life Institute. (2017). Asilomar AI principles. Retrieved October 23, 2018. https://futureoflife.org/ai-principles/ .

Garzcarek, U., Steuer, D. (2019). Approaching ethical guidelines for data scientists. arXiv (pp. 1–18).

Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumeé, III, H., Crawford, K. (2018). Datasheets for datasets. arXiv (pp. 1–17).

Gilligan, C. (1982). In a different voice: Psychological theory and women’s development . Cambridge: Harvard University Press.

Goldsmith, J., Burton, E. (2017). Why teaching ethics to AI practitioners is important. ACM SIGCAS Computers and Society (pp. 110–114).

Google. (2018). Artificial intelligence at Google: Our principles. Retrieved January 24, 2019. https://ai.google/principles/ .

Google. (2019). Perspectives on issues in AI governance (pp. 1–34). Retrieved February 11, 2019. https://ai.google/static/documents/perspectives-on-issues-in-ai-governance.pdf .

Gotterbarn, D., Brinkman, B., Flick, C., Kirkpatrick, M. S., Miller, K., Vazansky, K., Wolf, M. J. (2018). ACM code of ethics and professional conduct: Affirming our obligation to use our skills to benefit society (pp. 1–28). Retrieved February 01, 2019. https://www.acm.org/binaries/content/assets/about/acm-code-of-ethics-booklet.pdf .

Graham, M., Hjorth, I., & Lehdonvirta, V. (2017). Digital labour and development: Impacts of global digital labour platforms and the gig economy on worker livelihoods. Transfer: European Review of Labour and Research, 23 (2), 135–162.

Greene, D., Hoffman, A. L., Stark, L. (2019). Better, nicer, clearer, fairer: A critical assessment of the movement for ethical artificial intelligence and machine learning. In Hawaii international conference on system sciences (pp. 1–10).

Hagendorff, T. (2016). Wirksamkeitssteigerungen Gesellschaftskritischer Diskurse. Soziale Probleme. Zeitschrift für soziale Probleme und soziale Kontrolle, 27 (1), 1–16.

Hagendorff, T. (2019). Forbidden knowledge in machine learning: Reflections on the limits of research and publication. arXiv (pp. 1–24).

Hao, K. (2019). Three charts show how China’s AI Industry is propped up by three companies. Retrieved January 25, 2019. https://www.technologyreview.com/s/612813/the-future-of-chinas-ai-industry-is-in-the-hands-of-just-three-companies/?utm_campaign=Artificial%2BIntelligence%2BWeekly&utm_medium=email&utm_source=Artificial_Intelligence_Weekly_95 .

Helbing, D. (Ed.). (2019). Towards digital enlightenment: Essays on the dark and light sides of the digital revolution. Cham: Springer.

Held, V. (2013). Non-contractual society: A feminist view. Canadian Journal of Philosophy, 17 (Supplementary Volume 13), 111–137.

Holdren, J. P., Bruce, A., Felten, E., Lyons, T., & Garris, M. (2016). Preparing for the future of artificial intelligence (pp. 1–58). Washington, D.C.

Howard, P. N., Kollanyi, B. (2016). Bots, #StrongerIn, and #Brexit: Computational propaganda during the UK-EU Referendum. arXiv (pp. 1–6).

Hursthouse, R. (2001). On virtue ethics . Oxford: Oxford University Press.

Information Technology Industry Council. (2017). ITI AI policy principles . Retrieved January 29, 2019. https://www.itic.org/public-policy/ITIAIPolicyPrinciplesFINAL.pdf .

Introna, L. D., & Wood, D. (2004). Picturing algorithmic surveillance: The politics of facial recognition systems. Surveillance & Society, 2 (2/3), 177–198.

Irani, L. (2015). The cultural work of microwork. New Media & Society, 17 (5), 720–739.

Irani, L. (2016). The hidden faces of automation. XRDS, 23 (2), 34–37.

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1 (9), 389–399.

Johnson, D. G. (2017). Can engineering ethics be taught? The Bridge, 47 (1), 59–64.

Kant, I. (1827). Kritik Der Praktischen Vernunft . Leipzig: Hartknoch.

King, T. C., Aggarwal, N., Taddeo, M., & Floridi, L. (2019). Artificial intelligence crime: An interdisciplinary analysis of foreseeable threats and solutions. Science and Engineering Ethics, 26, 89–120.

Kish-Gephart, J. J., Harrison, D. A., & Treviño, L. K. (2010). Bad apples, bad cases, and bad barrels: Meta-analytic evidence about sources of unethical decisions at work. The Journal of Applied Psychology, 95 (1), 1–31.

Kitchin, R. (2017). Thinking critically about and researching algorithms. Information, Communication & Society, 20 (1), 14–29.

Kitchin, R., & Dodge, M. (2011). Code/space: Software and everyday life . Cambridge: The MIT Press.

Kosinski, M., Matz, S. C., Gosling, S. D., Popov, V., & Stillwell, D. (2015). Facebook as a research tool for the social sciences: Opportunities, challenges, ethical considerations, and practical guidelines. American Psychologist, 70 (6), 543–556.

Kosinski, M., Stillwell, D., & Graepel, T. (2013). Private traits and attributes are predictable from digital records of human behavior. Proceedings of the National Academy of Sciences of the United States of America, 110 (15), 5802–5805.

Kosinski, M., & Wang, Y. (2018). Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. Journal of Personality and Social Psychology, 114 (2), 246–257.

Kramer, A. D. I., Guillory, J. E., & Hancock, J. T. (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences of the United States of America, 111 (24), 8788–8790.

Lazer, D. M. J., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., et al. (2018). The science of fake news. Science, 359 (6380), 1094–1096.

Lecher, C. (2018). The employee letter denouncing Microsoft’s ICE contract now has over 300 signatures. Retrieved February 11, 2019. https://www.theverge.com/2018/6/21/17488328/microsoft-ice-employees-signatures-protest .

Leonelli, S. (2016). Locating ethics in data science: Responsibility and accountability in global and distributed knowledge production systems. Philosophical Transactions. Series A, Mathematical, Physical, and Engineering Sciences, 374 (2083), 1–12.

Luhmann, N. (1984). Soziale Systeme: Grundriß einer allgemeinen Theorie . Frankfurt A.M: Suhrkamp.

Luhmann, N. (1988). Die Wirtschaft der Gesellschaft . Frankfurt A.M: Suhrkamp.

Luhmann, N. (1997). Die Gesellschaft der Gesellschaft . Frankfurt am Main: Suhrkamp.

Luhmann, N. (2008). Die Moral der Gesellschaft . Frankfurt AM: Suhrkamp.

Luke, B. (1995). Taming ourselves or going Feral? Toward a nonpatriarchal metaethic of animal liberation. In Carol J. Adams & Josephine Donovan (Eds.), Animals & women: Feminist theoretical explorations (pp. 290–319). Durham: Duke University Press.

Lyon, D. (2003). Surveillance as social sorting: Computer codes and mobile bodies. In David Lyon (Ed.), Surveillance as social sorting: Privacy, risk, and digital discrimination (pp. 13–30). London: Routledge.

Lyons, S. (2018). Death and the machine . Singapore: Palgrave Pivot.

Matz, S. C., Kosinski, M., Nave, G., & Stillwell, D. (2017). Psychological targeting as an effective approach to digital mass persuasion. Proceedings of the National Academy of Sciences of the United States of America, 114, 12714–12719.

McAllister, A. (2017). Stranger than science fiction: The rise of A.I. interrogation in the dawn of autonomous robots and the need for an additional protocol to the U.N. convention against torture. Minnesota Law Review, 101, 2527–2573.

McNamara, A., Smith, J., Murphy-Hill, E. (2018). Does ACM's code of ethics change ethical decision making in software development? In G. T. Leavens, A. Garcia, & C. S. Păsăreanu (Eds.), Proceedings of the 2018 26th ACM joint meeting on European software engineering conference and symposium on the foundations of software engineering—ESEC/FSE 2018 (pp. 1–7). New York: ACM Press.

Microsoft Corporation. (2019). Microsoft AI principles. Retrieved February 01, 2019. https://www.microsoft.com/en-us/ai/our-approach-to-ai .

Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1 (11), 501–507.

Mittelstadt, B., Russell, C., Wachter, S. (2019). Explaining explanations in AI. In Proceedings of the conference on fairness, accountability, and transparency—FAT* ‘19 (pp. 1–10).

Morley, J., Floridi, L., Kinsey, L., Elhalal, A. (2019). From what to how. An overview of AI ethics tools, methods and research to translate principles into practices. arXiv (pp. 1–21).

Mullen, B., & Hu, L.-T. (1989). Perceptions of ingroup and outgroup variability: A meta-analytic integration. Basic and Applied Social Psychology, 10 (3), 233–252.

Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In Vincent C. Müller (Ed.), Fundamental issues of artificial intelligence (pp. 555–572). Cham: Springer International Publishing.

Omohundro, S. (2014). Autonomous technology and the greater human good. Journal of Experimental & Theoretical Artificial Intelligence, 26 (3), 303–315.

O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy . New York: Crown Publishers.

OpenAI. (2018). OpenAI Charter . Retrieved July 17, 2019. https://openai.com/charter/ .

Organisation for Economic Co-operation and Development. (2019). Recommendation of the Council on Artificial Intelligence (pp. 1–12). Retrieved June 18, 2019. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449 .

Pariser, E. (2011). The filter bubble: What the internet is hiding from you . New York: The Penguin Press.

Partnership on AI. (2018). About us . Retrieved January 25, 2019. https://www.partnershiponai.org/about/ .

Pasquale, F. (2015). The black box society: The secret algorithms that control money and information . Cambridge: Harvard University Press.

Pekka, A.-P., Bauer, W., Bergmann, U., Bieliková, M., Bonefeld-Dahl, C., Bonnet, Y., Bouarfa, L. et al. (2018). The European Commission’s high-level expert group on artificial intelligence: Ethics guidelines for trustworthy ai . Working Document for stakeholders’ consultation. Brussels (pp. 1–37).

Pistono, F., Yampolskiy, R. (2016). Unethical research: How to create a malevolent artificial intelligence. arXiv (pp. 1–6).

Podgaiska, I., Shklovski, I. Nordic engineers’ stand on artificial intelligence and ethics: Policy recommendations and guidelines (pp. 1–40).

Prates, M., Avelar, P., Lamb, L. C. (2018). On quantifying and understanding the role of ethics in AI research: A historical account of flagship conferences and journals. arXiv (pp. 1–13).

Rawls, J. (1975). Eine Theorie Der Gerechtigkeit . Frankfurt am Main: Suhrkamp.

Rolnick, D., Donti, P. L., Kaack, L. H., Kochanski, K., Lacoste, A., Sankaran, K., Ross, A. S. et al. (2019). Tackling climate change with machine learning. arXiv (pp. 1–97).

Rosenberg, S. (2017). Why AI is still waiting for its ethics transplant. Retrieved January 16, 2018. https://www.wired.com/story/why-ai-is-still-waiting-for-its-ethics-transplant/ .

Schneier, B. (2018). Click here to kill everybody . New York: W. W. Norton & Company.

Selbst, A. D., Boyd, D., Friedler, S. A., Venkatasubramanian, S., Vertesi, J. (2018). Fairness and abstraction in Sociotechnical Systems. In ACT conference on fairness, accountability, and transparency (FAT) (vol. 1, No. 1, pp. 1–17).

Shoham, Y., Perrault, R., Brynjolfsson, E., Clark, J., Manyika, J., Niebles, J. C., Lyons, T., Etchemendy, J., Grosz, B., Bauer, Z. (2018). The AI index 2018 annual report. Stanford, Kalifornien (pp. 1–94).

Silberman, M. S., Tomlinson, B., LaPlante, R., Ross, J., Irani, L., & Zaldivar, A. (2018). Responsible research with crowds. Communications of the ACM, 61 (3), 39–41.

Singla, A., Horvitz, E., Kamar, E., White, R. W. (2014). Stochastic Privacy. arXiv (pp. 1–10).

Sitawarin, C., Bhagoji, A. N., Mosenia, A., Chiang, M., Mittal, P. (2018). DARTS: Deceiving autonomous cars with toxic signs. arXiv (pp. 1–27).

Smart Dubai. (2018). AI ethics principles & guidelines. Retrieved February 01, 2019. https://smartdubai.ae/pdfviewer/web/viewer.html?file=https://smartdubai.ae/docs/default-source/ai-principles-resources/ai-ethics.pdf?Status=Master&sfvrsn=d4184f8d_6 .

Statt, N. (2018). Google reportedly leaving project maven military AI program after 2019. Retrieved February 11, 2019. https://www.theverge.com/2018/6/1/17418406/google-maven-drone-imagery-ai-contract-expire .

Taddeo, M., & Floridi, L. (2018). How AI can be a force for good. Science, 361 (6404), 751–752.

Tegmark, M. (2017). Life 3.0: Being human in the age of artificial intelligence. New York: Alfred A. Knopf.

The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. (2016). Ethically aligned design: A vision for prioritizing human well-being with artificial intelligence and autonomous systems (pp. 1–138).

The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. (2019). Ethically aligned design: A vision for prioritizing human well-being with autonomous and intelligent systems (pp. 1–294).

Tufekci, Z. (2018). YouTube, the great Radicalizer. Retrieved March 19, 2018. https://www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html .

Vaes, J., Bain, P. G., & Bastian, B. (2014). Embracing humanity in the face of death: why do existential concerns moderate ingroup humanization? The Journal of Social Psychology, 154 (6), 537–545.

Vakkuri, V., Abrahamsson, P. (2018). The key concepts of ethics of artificial intelligence. In Proceedings of the 2018 IEEE international conference on engineering, technology and innovation (pp. 1–6).

Vallor, S. (2016). Technology and the virtues: A philosophical guide to a future worth wanting . New York: Oxford University Press.

Veale, M., & Binns, R. (2017). Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data. Big Data & Society, 4 (2), 1–17.

Veglis, A. (2014). Moderation techniques for social media content. In D. Hutchison, T. Kanade, J. Kittler, J. M. Kleinberg, A. Kobsa, F. Mattern, J. C. Mitchell, et al. (Eds.), Social computing and social media (pp. 137–148). Cham: Springer International Publishing.

Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359 (6380), 1146–1151.

Whittaker, M., Crawford, K., Dobbe, R., Fried, G., Kaziunas, E., Mathur, V., West, S. M., Richardson, R., Schultz, J., Schwartz, O. (2018). AI now report 2018 (pp. 1–62).

Wiggers, K. (2019). CB insights: Here are the top 100 AI companies in the world. Retrieved February 11, 2019. https://venturebeat.com/2019/02/06/cb-insights-here-are-the-top-100-ai-companies-in-the-world/ .

Yu, H., Shen, Z., Miao, C., Leung, C., Lesser, V. R., Yang, Q. (2018). Building ethics into artificial intelligence. arXiv (pp. 1–8).

Yuan, L. (2018). How cheap labor drives China’s A.I. ambitions . Retrieved November 30, 2018. https://www.nytimes.com/2018/11/25/business/china-artificial-intelligence-labeling.html .

Zeng, Y., Lu, E., Huangfu, C. (2018). Linking artificial intelligence principles. arXiv (pp. 1–4).

Acknowledgements

Open Access funding provided by Projekt DEAL.

This research was supported by the Cluster of Excellence “Machine Learning – New Perspectives for Science” funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy – Reference Number EXC 2064/1 – Project ID 390727645.

Author information

Authors and affiliations

Cluster of Excellence “Machine Learning: New Perspectives for Science”, University of Tuebingen, Tübingen, Germany

Thilo Hagendorff

Corresponding author

Correspondence to Thilo Hagendorff.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Hagendorff, T. The Ethics of AI Ethics: An Evaluation of Guidelines. Minds & Machines 30, 99–120 (2020). https://doi.org/10.1007/s11023-020-09517-8

Received: 01 October 2019

Accepted: 21 January 2020

Published: 01 February 2020

Issue Date: March 2020

DOI: https://doi.org/10.1007/s11023-020-09517-8

Keywords

  • Artificial intelligence
  • Machine learning
  • Implementation

  10. Ethics of AI: A Systematic Literature Review of Principles ...

    We conducted a systematic literature review (SLR) study to investigate the agreement on the significance of AI principles and identify the challenging factors that could negatively impact the adoption of AI ethics principles.