From computer ethics and the ethics of AI towards an ethics of digital ecosystems
Open access | Published: 31 July 2021 | Volume 2, pages 65–77 (2022)
Bernd Carsten Stahl (ORCID: 0000-0002-4058-4456)
Abstract

Ethical, social and human rights aspects of computing technologies have been discussed since the inception of these technologies. In the 1980s, this led to the development of a discourse often referred to as computer ethics. More recently, since the middle of the 2010s, a highly visible discourse on the ethics of artificial intelligence (AI) has developed. This paper discusses the relationship between these two discourses and compares their scopes, the topics and issues they cover, their theoretical basis and reference disciplines, the solutions and mitigation options they propose and their societal impact. The paper argues that an understanding of the similarities and differences of the discourses can benefit the respective discourses individually. More importantly, by reviewing them, one can draw conclusions about relevant features of the next discourse, the one we can reasonably expect to follow after the ethics of AI. The paper suggests that instead of focusing on a technical artefact such as computers or AI, one should focus on the fact that ethical and related issues arise in the context of socio-technical systems. Drawing on the metaphor of ecosystems, which is widely applied to digital technologies, it suggests preparing for a discussion of the ethics of digital ecosystems. Such a discussion can build on and benefit from a more detailed understanding of its predecessors in computer ethics and the ethics of AI.
1 Introduction
The development, deployment and use of digital technologies has long been recognised as having ethical implications. Based on initial reflections of those implications by seminal scholars such as Wiener [ 122 ], [ 123 ] and Weizenbaum [ 121 ], a stream of research and reflection on ethics and computers emerged. The academic field arising from this work, typically called computer ethics, was and remains a thriving but nevertheless relatively small field that managed to establish a body of knowledge, dedicated conferences, journals and research groups.
While computer ethics continues to be a topic of discussion, the dynamics of the ethical reflection of digital technology changed dramatically from approximately the middle of the 2010s when the concept of artificial intelligence (AI) (re-)gained international prominence. The assumption that AI was in the process of fundamentally changing many societal and business processes with manifest implications for most individuals, organisations and societies led to a plethora of research and policy initiatives aimed at understanding ethical issues of AI and finding ways of addressing them.
The assumption underlying this paper is that one can reasonably and transparently distinguish between the discourses on computer ethics and the one focusing on the ethics of AI. If this is the case, then it would be advantageous to participants in both discourses to better understand the differences and similarities between these two discourses. This paper, therefore, asks the research question: how and to what extent do the discourses of computer ethics and the ethics of AI differ from one another?
The paper is furthermore motivated by a second assumption, which is that ethical reflection of digital technologies will continue to develop and that there will be future discourses, based on novel technologies and their applications that go beyond both computer ethics and the ethics of AI. If this turns out to be true, then an understanding of the commonalities and persistent features of computer ethics and the ethics of AI may well provide insights into likely ethical concerns that can be expected to arise in the next generation of digital technologies and their applications. The second question that the paper seeks to answer is, therefore: what can be deduced about a general ethics of digital technologies by investigating computer ethics and the ethics of AI?
These are important questions for several reasons. Answering them facilitates or improves mutual awareness and understanding of computer ethics and the ethics of AI. Such an understanding can help both discourses identify gaps in existing ideas. For computer ethics scholars, this may be an avenue to contribute their work to the broader societal discourse on AI. For scholars involved in the ethics of AI debate, it may help to avoid repeating settled discussions. But even more importantly, by comparing computer ethics and the ethics of AI, the paper can look beyond current discussions. A key contribution of the paper is the argument that an analysis of computer ethics and the ethics of AI allows for the identification of those aspects of the discourse that remain constant and are independent from specific technologies. The paper suggests that a weakness of both computer ethics and the ethics of AI is their focus on a particular technology or artefact, i.e. computers or AI. It argues that a better understanding of ethical issues can be achieved by taking seriously the systems nature of digital technologies. One stream of research that has not been prominent in the ethics-related debate is that of digital (innovation) ecosystems. By moving away from an artefact and looking at the ethics of digital ecosystems, it may be possible to proactively engage with novel and emerging technologies while the exact terminology to describe them is still being developed. This would allow for paying attention early to the ethical aspects of such technologies.
The paper proceeds as follows. The next section summarises the discourses on computer ethics and on the ethics of AI with a view to identifying both changing and constant aspects between these two. This includes a justification of the approach and a more detailed description of aspects and components of the discourses to be compared. This provides the basis for the description and critical comparison of the two discourses. The identification of overlaps and continuity provides the starting point for a discussion of a future-proof digital ethics.
2 Computer ethics and the ethics of AI
The argument of the paper rests on the assumption that one can reasonably distinguish between computer ethics and the ethics of AI. This assumption is somewhat problematic. A plausible reading is that the ethics of AI is simply a part or an extension of computer ethics. This paper therefore does not propose any categorical difference between computer ethics and the ethics of AI but simply suggests that it is an empirical phenomenon that these two discourses differ to some degree.
One argument that supports a distinction between computer ethics and the ethics of AI is the level of attention they receive. While many of the topics of interest to computer ethics, such as privacy, data protection or intellectual property, have raised societal and, thus, political interests, this has never led to the inclusion of computer ethics terminology into a public policy discourse. This is very different for the ethics of AI, which is not just a thriving topic of academic debate, but which is explicitly dealt with by numerous policy proposals [ 104 ]. A related aspect of the distinction refers to the participants in the discourse. Where computer ethics is to a large extent an academic topic, the ethics of AI draws much more on contributions from industry, media and policy.
This may suffice as a justification for the chosen approach. The validity of these observations is discussed in more detail below. Figure 1 represents the logic of the research described in this paper.

Fig. 1: Representation of the research logic of the paper
The two blue ellipses on the left represent the currently existing discourses on computer ethics and the ethics of AI. The differences and similarities between these two are explored later in this section. From the insights thus generated, the paper progresses to the question of what can be learned from these existing discourses to prepare the future discussion of the ethics of emerging digital technologies.
2.1 Methodology
The methodological basis of this paper is that of a literature review, more specifically of a comparison of two bodies of literature. Literature reviews are a key ingredient across all academic disciplines [ 42 ] and form at least part of most publications. There are numerous approaches to reviewing various bodies of literature that serve different purposes [ 115 ]. Rowe [ 106 ] suggests a typology for literature reviews along four different dimensions (goal with respect to theory, breadth, systematicity, argumentative strategy).
A central challenge for this paper is that the distinction between computer ethics and the ethics of AI is not clear-cut, but rather a matter of degree and emphasis. This is exacerbated by the fact that the terminology is ambiguous. So far, I have talked about computer ethics and the ethics of AI. Neither of these terms is used consistently. While the term computer ethics is well established, it is closely linked with others such as the ethics of ICT [ 105 ], information technology ethics [ 110 ] or cyberethics [ 111 ]. Computer ethics is closely related to information ethics, to the point where several publications include both terms in the title [ 56 , 120 ]. The link between computer ethics and information ethics is discussed in more detail in the treatment of scope below.
Just as there are different terms that overlap with computer ethics, there are related terms describing the ethics of AI, such as responsible AI [ 15 , 38 , 45 , 118 ] or AI for good [ 17 , 69 ]. In addition, the term ethics is used inconsistently. It sometimes refers to ethics as a philosophical discipline with references to ethical theories. However, it often covers ad hoc concerns about particular situations or developments that are perceived as morally problematic. Many such issues could equally well be described as social concerns. Many of them also have a legal aspect, in particular where they pertain to established bodies of law, notably human rights law. The use of the term 'ethics' in this paper, therefore, is a shorthand for all these uses in the discourse.
The comparison of the discourses on computer ethics and the ethics of AI, thus, requires criteria that allow one to determine the content of the two discourses. An important starting point for the delimitation of the computer ethics discourse is the fact that there are several published accounts that review and classify this discourse. These notably include work undertaken by Terry Bynum [ 27 , 28 , 29 ] but also other reflective accounts of the field (H. T. [ 117 ]). There are several seminal publications that deserve to be mentioned as defining the discourse of computer ethics. Jim Moor [ 93 ] notably asked the question "what is computer ethics?", and Deborah Johnson [ 73 ] provided an answer in the first textbook on the topic, a work that was also initially published in 1985. The description of computer ethics in this paper takes its point of departure from these defining publications. It also takes into account other sources, which include a number of edited volumes, work published in relevant conferences (notably Computer Ethics Philosophical Enquiry (CEPE), Computers and Philosophy (CAP) and ETHICOMP) but also published accounts of the ethics of computing in adjacent fields, such as information systems or computing [ 113 ].
The debate on the ethics of AI is probably more difficult to delineate than the one on computer ethics. However, there are some foundational texts and review articles that can help with the task. Müller's recent entry in the Stanford Encyclopedia of Philosophy [ 97 ] provides a good overview. There are several review and overview papers, in particular of ethical principles [ 54 , 72 ]. In addition, there is a quickly growing literature covering several recent monographs [ 41 , 45 ] and several new journals, including the new Springer journal AI and Ethics [ 84 ]. These documents can serve as the starting point to delineate the discourse, which also covers many publications from neighbouring disciplines as well as policy and general media contributions. It should be clear that these criteria do not constitute a clear delineation; there will be many contributions that could count under both headings and some that may fit neither. However, despite the fuzziness of the demarcation line, this paper submits that a distinction between these discourses is possible to the point where it allows a meaningful comparison.
For such a comparison to be interesting, one needs to clarify which aspects can be expected to differ; this is the subject of the following section.
2.2 Differences between computer ethics and the ethics of AI
This section starts with an overview of the aspects that are expected to differ between the two discourses and then discusses each of these in more detail. The obvious starting point for a comparison of the discourses on computer ethics and the ethics of AI is the scope of the discourse, in particular the technologies covered by it. This leads to the topics that are covered and the issues that are of defining interest to the discourse. The next area is the theoretical basis that informs the discourse and the reference disciplines that it draws from. Computer ethics and the ethics of AI may also differ on the solutions to these issues and the mitigation strategies they propose. Finally, there is the question of the broader importance and impact of the discourses.
Figure 2 represents the different aspects of the two discourses that will now be analysed in more detail.

Fig. 2: Characteristics of the discourse
2.2.1 Scope: technology and its features
The question of the exact scope of both discourses has been the subject of reflection within the discourse itself and has varied over time. The early roots of computer ethics, as represented by Wiener's [ 122 ] work, were inspired by the initial developments of digital computing and informed by his experience of contributing to these during the Second World War. Wiener observed characteristics of these devices, such as an increased measure of autonomy and independence from humans, which he saw as problematic. Similarly, Weizenbaum's [ 121 ] experience of natural language processing (an area that forms part of AI) led him to voice concerns about the potential social uses of technology (such as the ELIZA conversational system).
By the time the term "computer ethics" was coined in the 1980s, mainframe computers were already well established in businesses and organisations, and initial indications of personal computer use could be detected. The Apple II was launched in 1977, and the BBC Micro and the IBM 5150 came to market in 1981, paving the way for the widespread adoption of PCs and home computers. At this time, it was reasonably clear what constituted a computer, and the discourse, therefore, spent little time on definitions of the underlying technology and instead focused on the ethically problematic characteristics of the technology.
The initial clarity of the debate faded because of technical developments. Further miniaturisation of computer chips, progress in networking, the development of the smartphone as well as the arrival of new applications such as social media and electronic commerce radically changed the landscape. At some point in the 2000s, so many consumer devices had integrated computing devices and capabilities that describing something as a computer was no longer useful. This may explain the shift in emphasis from the term "computer ethics" to "information ethics", which can be seen, for example, in the change of the title of Terry Bynum's [ 29 ] entry in the Stanford Encyclopedia of Philosophy, which started out in 2001 as "Computer Ethics: Basic Concepts and Historical Overview" and was changed in 2008 to "Computer and Information Ethics". The difference between computer ethics and information ethics goes deeper than the question of technology and we return to it below, but Bynum's changed title is indicative of the problem of delimiting the scope of computer ethics in the light of the rapid development of computing technologies.
The challenges of delimiting computer ethics are mirrored by the challenge of defining the scope of the ethics of AI. The concept of AI was coined in 1956 [ 88 ] in a funding proposal that was based on the conjecture that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it". It set out to explore "how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves." These ambitions remain largely intact for current AI research, but they do not explain why the ethics of AI became a pervasive discourse from the mid-2010s.
The history of AI (cf. [ 19 ]) includes a history of philosophical and ethical questions [ 31 ]. AI is a field of research, generally accepted to be a sub-field of computer science, that has developed several themes and bodies of theory, which point to different concepts of AI. Shneiderman [ 107 ] suggests a simple distinction between two goals of AI that is helpful for understanding the conceptual challenge faced by the ethics of AI discourse. The two goals that Shneiderman sees for AI are, first, emulation to understand human abilities and then improve on them and, second, the application of technical methods to develop products and services. This distinction of goals aligns well with the well-established distinction between narrow and strong or general AI. Narrow AI aims to fulfil specifically described goals. In recent years, it has been hugely successful in the rapidly developing sub-field of machine learning [ 10 ], based on the implementation of deep learning through artificial neural networks and related technologies [ 114 ]. Narrow AI, in particular as realised in machine learning using neural networks to analyse and learn from large datasets, has roots going back decades. However, it is widely believed that these well-known technologies came to the fore because of advances in computing power, the development of algorithms and the availability of large datasets [ 21 , 65 ].
In addition to this narrow AI aimed at solving practical problems, there is the long-standing aim to develop technologies with human-like abilities. These systems would be able to transfer learning across domains and are sometimes called artificial general intelligence [ 41 ]. Artificial general intelligence forms part of the earliest attempts to model intelligent behaviour through symbolic representations of reality [ 94 ], sometimes referred to as good old-fashioned AI or GOFAI [ 55 ]. It remains contested whether artificial general intelligence is achievable and, even if so, whether it could be done using current technological principles (i.e. digital computers and Turing machines) [ 56 ].
There are attempts to interpret the difference between narrow and general AI as a difference in temporal horizon, with narrow AI focusing on short-term goals, whereas general AI is seen as a long-term endeavour [ 13 , 32 ]. Whatever the validity of this interpretation, the inclusion of narrow and general AI in the discussion means that its technical scope is large. It includes well-understood current technologies of machine learning with ethically relevant properties (e.g. the need for large datasets, the opacity of neural networks) as well as less determined future technologies that would display human-like properties. This breadth of the technical scope has important consequences for the possible issues arising from the technology, as will be discussed below.
2.2.2 Topics and issues
The topics and issues discussed by both discourses cover all aspects of life where computers or AI have consequences for individuals and groups. It is, therefore, beyond the scope of this paper to provide a comprehensive overview of all topics discussed. Instead, this section provides an indication of some key topics, showing which of them have changed over time and which have remained stable.
In the introduction to the 1985 special issue on computer ethics of the journal Metaphilosophy, the editor [ 46 ] stated that the central issue of computer ethics would be the replacement of humans by computers, in particular in tasks requiring judgment. It was clear at the time, however, that other issues were relevant as well, notably invasions of privacy, computer crime and topics related to the way computer professionals deal with clients and society, including ownership of programmes, responsibility for computer errors and the structure of professional codes of ethics. This structure is retained in the 2001 version of Bynum's [ 29 ] encyclopaedia entry, which lists the following issues: computers in the workplace, computer crime, privacy and anonymity, intellectual property, professional responsibility, globalisation and the metaethics of computer ethics. Picking up the discussion of the ethics of computing in the neighbouring discipline of information systems, Mason [ 87 ] proposed the acronym PAPA to point to key issues: privacy, accuracy, property and accessibility.
A more recent survey of the computing-oriented literature suggests that the topics discussed remain largely stable [ 113 ]. It may, therefore, not be surprising that there is much continuity from computer ethics in the ethics of AI debate. One way to look at this discussion is to distinguish between issues directly related to narrow AI, broader socio-technical concerns and longer-term questions. Current machine learning approaches require large datasets for training and validation and they are opaque, i.e. it is difficult to understand how input gets translated into output. This combination leads to concerns about privacy and data protection [ 26 , 47 ] as well as the widely discussed and interrelated questions of lack of transparency [ 3 , 109 ], accountability, bias [ 34 ] and discrimination [ 96 ]. In addition, current machine learning systems raise questions of reliability, security [ 7 , 10 , 25 ] and safety [ 45 ].
The impact of AI-enabled socio-technical systems on society and communities is covered well in the discourse. AI is a key enabler of the digital transformation of organisations and society, which may have significant impact with ethical relevance. This includes economic concerns, notably questions of employment [ 77 , 124 ] and labour relationships including worker surveillance [ 97 ], as well as concerns about justice and distribution [ 96 ]. Digital transformation can affect political power constellations [ 98 ] and support as well as weaken citizen participation. Possible consequences of the use of AI include changes to the nature of warfare [ 103 ] and environmental impacts [ 99 ]. Concerns are raised about how machines may enhance or limit human agency [ 18 , 40 ].
Two concepts that figure prominently in the AI ethics discourse are those of trust and trustworthiness. The AI HLEG [ 6 ] structured its findings and recommendations in a way that seems to suggest that ethics is considered a means to strengthen the trustworthiness of AI technologies, which then engenders trust and, thus, acceptance and use. This functional use of ethics is philosophically highly problematic but seems to be driven by a policy agenda that sees the desirability of AI as an axiom and ethics as a means to achieve targets for uptake.
Finally, there is some debate about the long-term issues related to artificial general intelligence. Given the open question of whether current types of technology can achieve this [ 108 ], it is contested how much attention should be given to questions such as the singularity [ 80 ], superintelligence [ 22 ], etc. These questions do not figure prominently in current policy-oriented discussions, but they continue to attract interest in the scientific community and beyond.
The topics and issues discussed in computer ethics and the ethics of AI show a high level of consistency. Many of the discussions of computer ethics are continued or echoed in the ethics of AI. This includes questions of privacy and data protection and security, but also the wider societal consequences of technical developments. At the same time, some topics are less visible, have morphed or have moved into different discourses. The computer ethics discourse, for example, had a strong stream of discussion of the ownership of data and computer code, with a heavy emphasis on the communal nature of intellectual property. This discussion has changed deeply, with some aspects appearing to be settled practice. The ownership of content, for instance, is now administered through business models that emerged taking into account the competing views on intellectual property: Netflix, iTunes, etc. employ a distribution service and subscription model that appears to satisfy consumers, producers and intermediaries. Other aspects of ownership remain highly contested, such as the right to benefit from the secondary use of process data, which underpins what Zuboff [ 126 ] calls surveillance capitalism.
2.2.3 Theoretical basis and reference disciplines
While there is a high level of continuity in terms of issues and topics, the theoretical positions vary greatly between computer ethics and the ethics of AI. This may have to do with the reference disciplines [ 11 , 14 , 78 ], i.e. the academic disciplines in which the contributors to the discourses were originally trained or from which they adopt theoretical positions they apply to computing and AI [ 85 ].
Both computer ethics and the ethics of AI are highly interdisciplinary and draw from a range of reference disciplines. In both cases there is a strong influence of philosophy, which is not surprising given that ethics is a discipline of philosophy. Similarly, there is a strong presence of contributors from technical disciplines. While the computer ethics discourse draws on contributions from computer scientists, the ethics of AI has attracted attention from more specialised communities that work on AI, notably at present the machine learning community. The most prominent manifestation of this is the FAT/FAccT community that focuses on fairness, accountability and transparency ( https://facctconference.org/ ). There are also contributions from other academic fields, such as technology law and the social sciences, including science and technology studies. Some fields, such as information systems, are less visible than one could expect them to be in the current discourse [ 112 ].
While the details of the disciplinary nature of the contributions to both discourses are difficult to assess, there are notable changes in the use of foundational concepts. In computer ethics, there is a strong emphasis on well-established ethical theories, notably duty-based theories [ 75 , 76 ], theories focusing on the consequences of actions [ 16 , 89 ] as well as theories focusing on individual character and virtue [ 9 ] (A. C. [ 83 ]). Ethical theorising has of course not been confined to these and there are examples of other ethical theories applied to computing, such as the ethics of care [ 4 , 60 ] or discourse ethics [ 91 ]. In addition, there have been proposals for ethical approaches uniquely suited to computing technologies, such as disclosive ethics [ 23 , 70 ].
The ethics of AI discourse also uses a rich array of ethical theories [ 82 ], but it displays an undeniable focus on principle-based ethical guidelines [ 72 ]. This approach is dominant in biomedical ethics [ 37 ] and its adoption by the ethics of AI discourse may be explained by the well-established biomedical ethics procedures, which promise practical ways of dealing with ethical issues, as well as an increasing interest of the biomedical ethics community in computing and AI technologies. However, it should be noted that this reliance on principlism [ 39 ] is contested within the biomedical community [ 79 ] and has been questioned in the AI field [ 67 , 92 ], but at present it remains dominant.
A further significant difference between computer ethics and the ethics of AI is that the latter has a much stronger emphasis on the law. One aspect of this legal emphasis is the recognition that many of the issues discussed in the ethics of AI are well-established issues of human rights, e.g. privacy or the avoidance of discrimination and physical harm. There are, thus, numerous vocal contributors to the discourse that emphasise human rights as a source of normativity in the ethics of AI as well as a way to address issues [ 2 , 43 , 81 , 96 , 102 ]. This legal emphasis translates into a focus on legislation and regulation as a way of dealing with these issues, as discussed in the next section.
2.2.4 Solutions and mitigation
One can similarly observe some consistency and continuity, but also some discontinuity, with regard to proposals for addressing these issues. This is clearly a complex set of questions that depends on the issue in question and on the individual, group or organisation that is to deal with it. While it is, thus, not possible to provide a comprehensive overview of the different ways in which the issues can be resolved or mitigated, it is possible to highlight some differences between the two discourses [ 120 ].
One proposal that figured heavily in the computer ethics discourse but is less visible in the ethics of AI is that of professionalism [ 8 , 30 , 74 ]. While it was and remains contested whether and to what degree computer experts are, should be or would want to be professionals, the idea of institutionalising professionalism as a way to deal with ethical issues has driven the development of organisations that portray themselves as professional bodies for computing [ 24 , 62 ]. The uncertain status of computing as a profession is reflected in the status of AI, which can probably at best be regarded as a sub-profession.
Both discourses underline the importance of knowledge, learning and education as conditions for successfully navigating ethical questions [ 20 ]. Both ask what help can be provided to people working in the design and development of technology and aim to develop suitable methodologies [ 68 ]. This is the basis of the various "by design" approaches [ 33 , 64 , 86 ] that build on the principles of value-sensitive design [ 58 , 85 ]. Specific methodologies for incorporating ethical considerations in organisational practice can be found both in the computer ethics debate [ 63 , 66 ] and in the ethics of AI discourse [ 7 , 45 , 48 ].
One area where the ethics of AI debate appears to be much more visible and impactful than computer ethics is that of legislation and regulation. This does not imply that the ethics of AI has a greater fundamental affinity to legislation; rather, it is based on the empirical observation that ethical (and other) issues of AI are perceived to be in need of legislation due to their potential impact (see next section). Rodrigues [ 104 ] provides an overview of recent legislative agendas. The most prominent example is probably the European Commission's proposed Regulation for AI [ 50 ], which would bring in sweeping changes to the AI field, mostly based on earlier ethical discussion. In addition to proposed legislation in various jurisdictions, there are proposals for the creation of new regulatory bodies [ 44 , 51 ] and international structures to govern AI [ 71 , 119 ]. It is probably not surprising that some actors in the AI field actively campaign against legislation; industry associations such as the Partnership on AI, but also company codes of conduct, etc., can be seen as ways of heading off legislation.
Computer ethics, on the other hand, also touched on and influenced legislative processes concerning topics in its field of interest, notably data protection and intellectual property. However, the attention paid to AI by legislators is much higher than it ever was to computers in general.
2.2.5 Importance and impact
One reason for the high prevalence of legislation and regulation with regard to AI is the apparent importance and impact of the technology. AI is generally described as having unprecedented impact on most aspects of life, which calls for ethical attention. Notwithstanding the accuracy of this narrative, it is broadly accepted across academia, policy and broader societal discourse. It is also the mostly unquestioned driver for the engagement with ethics. Questions about the nature of AI, its characteristics, and its likely and certain consequences are dealt with under the implicit assumption that they must be dealt with due to the importance of the technology.
The computer ethics debate does not share this unquestioned assumption of the importance of its subject matter. In fact, it was a recurrent theme of computer ethics to ask whether the field was needed at all [ 57 ] (H. [ 116 ]). This is, of course, a reasonable question to ask. There are a number of fields of applied ethics, e.g. medical ethics, business ethics or environmental ethics. But there are few, if any, that focus on a particular artefact, such as a computer. So, why would computer ethics be called for? Moor [ 93 ] famously proposed that it is logical malleability, the fact that intended uses are not even foreseen by the designer, that sets computers apart from other artefacts, such as cars or airplanes. This remains a strong argument that also applies to current AI. With the growing spread of computers, first in organisations, then through personal and mobile computing which facilitated everyday applications including electronic commerce and social media, computer ethics could point to the undeniable impact of computing technology, which paved the way for the now ubiquitous reference to the impact of AI.
3 Towards an ethics of digital ecosystems
So far, this article has suggested that computer ethics and the ethics of AI can be read as two related but distinct discourses, and it has endeavoured to elucidate the differences and similarities between the two. While this should have convinced the reader that such a distinction is possible and helpful in understanding both discourses, it is also clear that other interpretations are possible. The ethics of AI can be seen as a continuation of the computer ethics discourse that has attracted new participants and led to a shift of topics, positions and impact. Both interpretations allow for a critical analysis of the two discourses with a view to identifying their shared strengths and weaknesses, and for an exploration of what can be learned from them to prepare the next discourse that can be expected to arise.
This question is motivated by the assumption that the ethics of AI discourse is not the final step in the discussion. AI is many things, but it is also currently a hype and an academic fashion. This is not to deny its importance but to recognise that academia, like policy and general discussion, follows the technology hype cycle [ 52 ], and that attention to technologies, management models and research approaches has characteristics of fashion cycles [ 1 , 12 ]. It is, therefore, reasonable to expect that the current focus on AI will peak and be replaced by another topic of debate. The purpose of this section is to discuss what may emerge from and follow the ethics of AI discourse and how this next stage of the debate can best profit from the insights generated by the computer ethics and the ethics of AI discourses.
The term "computer ethics" lost some of its appeal when computing technologies became pervasive and integrated into many other devices. When a computer is in every phone, car and even most washing machines and refrigerators, then the term "computer ethics" becomes too fuzzy to be useful. A similar fate is likely to befall AI, or may already have done so. On the one hand, "AI" as a term is already too broad, as it covers everything from specific machine learning techniques to fictional artificial general intelligence. On the other hand, it is too narrow, given that it excludes many of the current and emerging technologies that anchor part of its future impact, such as quantum computing, neuromorphic technologies, the Internet of Things, edge computing, etc. And we can of course expect new technologies and terminology to emerge to add to this complexity.
One weakness that both computer ethics and the ethics of AI share is their apparent focus on a particular piece of technology. Ethical, social, human rights and other issues never arise from a technology per se, however, but result from the use of technologies by humans in societal, organisational and other settings. This is not to suggest that technologies are value neutral, but that the affordances they possess [ 59 , 100 ] can play out differently in different environments.
To prepare for the next wave of the ethics of technology discussion that will succeed the ethics of AI, it may, therefore, be advisable to take a slightly different perspective, one that reduces the focus on particular technologies. One family of such perspectives is based on systems theory [ 99 ]. A number of such theories have been applied to computing technologies, such as complex adaptive systems [ 90 ] or soft systems [ 35 , 36 ].
A possible use of the systems concept to understand the way technology and social environments interact is that of an ecosystem. The metaphor of ecosystems to describe AI and its broader social and ethical consequences has already been employed widely by scholars [ 53 ] as well as policymakers. The European Commission, for example, in its White Paper [ 49 ] that prepared the proposed Regulation [ 50 ], framed European AI policy in terms of an ecosystem of excellence and an ecosystem of trust, with the latter representing ethical, social and legal concerns. The OECD [ 101 ] similarly proposes the development of a digital ecosystem for AI. The World Economic Forum [ 125 ] underlines the logic of this terminology when it emphasises the importance of a systems-wide approach if responses to the ethics of AI are to be successful.
From a scholarly perspective, it is interesting to observe that a body of research has developed since the mid-1990s that uses the concept of an ecosystem to describe how technologies are used in the economic system [ 5 , 61 , 95 ]. This discourse is of interest to this paper because it has developed a rich set of theoretical positions, substantive insights and methodologies that can be used to understand specific socio-technical systems. At the same time, there has been very little emphasis in this discourse on the ethical and normative aspects of these ecosystems. There is not space in this paper to pursue this argument in more detail, but it can be suggested that combining these different perspectives and looking at the ethics of digital (innovation) ecosystems can provide a helpful new perspective.
The benefit of such a digital ecosystems-based approach is that it moves away from a particular technology and opens the view to the way in which technical developments interact with social developments, broadening the view to encompass application areas, social structures and societal environments as well as technical affordances. Actual ethical concerns are affected by all of these different factors and the dynamics of their relationships.
The proposal arising from this insight is, thus, that, to prepare the next wave of the ethics and technology discussion, the focus should not be on predicting the next big technology, but rather on exploring how ethical issues arise in socio-technical (innovation) ecosystems. This is a perspective that can be employed right now and used to better understand the ethics of AI or of computing more generally. It invites detailed empirical observation of the social realities of the development, deployment and use of current and past technology. It is similarly open to sophisticated ethical and theoretical investigations. This understanding can then be the baseline for exploring the consequences of technological and social change. Making use of this perspective for the current ethics of AI debate would have the great benefit that the question of adequately defining AI loses its urgency. The key question then becomes how socio-technical innovation ecosystems develop, which is a question that is open to the inclusion of other types of technology, from quantum computing to well-established computational and other technological artefacts.
Taking this perspective, which might be called the "ethics of digital ecosystems", moves beyond individual technologies and allows keeping track of and continuing established ethical discussions. An ethical analysis of digital ecosystems will need to delineate these systems, which is required in order to determine their capabilities. The capabilities, in turn, will be what gives rise to possible social applications and the resulting benefits and concerns. Whatever the next technological hype will be, it is a safe bet that it will continue at least some trends from the past and that the corresponding ethical debates will remain valid. For example, it is plausible that future digital technologies will make use of, analyse and produce personal data, hence continuing the need for considerations of privacy and data protection. The security, safety and reliability of any future socio-technical system are similarly a good bet in terms of future relevance.
The focus on the broader innovation ecosystem furthermore means that many of the currently discussed topics can be framed more convincingly as relevant topics of discussion. Questions of political participation, economic justice or human autonomy are much more easily understood as aspects of socio-technical systems than as intrinsically linked to a particular technology. The change of perspective towards digital ecosystems can, thus, strengthen the plausibility and relevance of some of the current topics of debate.
The same can be said for the discussion of possible mitigations. By focusing on digital innovation ecosystems, the breadth of possible mitigation strategies automatically increases. In computer ethics and the ethics of AI, the focus is on technical artefacts and there is a temptation to link ethical issues, as well as responses to these issues, to the artefacts themselves. This is where approaches such as value-sensitive design or ethics by design derive their legitimacy. The move away from the artefact focus towards the socio-technical ecosystem does not invalidate such approaches, but it clearly shows that the broader context needs to be included, thus opening up the discussion to regulation, legislation, democratic participation and societal debate as means to shape innovation ecosystems.
The move beyond the ethics of AI towards an ethics of digital innovation ecosystems will further broaden the disciplines and stakeholder groups involved in the discussion. Those groups who have undertaken research on computer ethics will remain important, as will the additional groups that have developed or moved to exploring the ethics of AI. However, the move towards digital innovation ecosystems makes it clear that additional perspectives will be required to gain a full understanding of potential problems and solutions. Innovation ecosystem research is done in fields like business studies and information systems, which have much to contribute but have traditionally had limited visibility in computer ethics and the ethics of AI. Such a broadening of the disciplines and fields involved suggests that the range of theoretical perspectives is also likely to increase. Traditional theories of philosophical ethics will doubtlessly remain relevant, and the focus on mid-level principles that the ethics of AI has promoted is similarly likely to remain important for guiding ethical reflection. However, a broader range of theories is likely to be applied, including systems theories, theories from business and organisational studies as well as the literature on innovation ecosystems.
4 Conclusion
This paper started from the intuition that there is a noticeable difference between the discourses on computer ethics and the ethics of AI. It explored this difference with a view to examining how understanding it can help us prepare for the inevitable next discourse, which will follow the current discussion of the ethics of AI. The analysis of the two discourses has shown that there are notable differences in terms of the scope of the discussion, topics and issues, theoretical basis and reference disciplines, solutions and mitigations, and expected impacts. It is, thus, legitimate to draw a dividing line between the two discourses. However, it has also become clear that there is much continuity and overlap and that, to a significant degree, the ethics of AI discourse is a continuation and extension of the computer ethics discourse. This part of the analysis presented in the paper should help participants in both discourses to see similarities and discontinuities more clearly and to appreciate where research has already been done that can benefit the respective other discourse.
The exact insights to be gained from the review of the two discourses clearly depend on the prior knowledge of the observer. Individuals who are intimately familiar with both discourses may be aware of all the various angles. However, participants in the computer ethics discourse who have not followed the ethics of AI debate can find insights with regard to current topics and issues, e.g. the broader socio-economic debates that surround AI. They can similarly benefit from an understanding of how biomedical principlism is being applied to AI, which may offer avenues of impact, solutions and mitigations that computer ethics tended to struggle with. Similarly, a new entrant to the ethics of AI debate may benefit from an appreciation of computer ethics by realising that many of the topics have a decades-long history and that there are numerous ethical positions and mitigation structures that are well established and do not need to be reinvented.
Following from these insights, the paper then moved to the question of what the next discourse is likely to be. This part of the paper is driven by the recognition that the emphasis on a particular technology or family of technologies, be it computers or AI, is not particularly helpful. Technologies unfold their ethical benefits and problems when deployed and used in the context of socio-technical systems. It is less the affordances of a technology per se than the way in which those affordances evolve in practical contexts that is of interest to ethical reflection. There are numerous ways in which these socio-technical systems can be described, and this paper has proposed that the concept of innovation ecosystems may offer one suitable approach.
The outcome of the paper is, thus, the suggestion to start preparing the discourse of the ethics of digital innovation ecosystems. This will again be a somewhat different discourse from the ones on computer ethics and the ethics of AI, but it can also count as a continuation of the former two. The shift of the topic away from computing or AI gives this discourse the flexibility to accommodate existing and emerging technologies, from quantum computing to the IoT, without requiring a major shift of the debate. Maybe more importantly, it will require more focused attention to the social side of innovation ecosystems, which means that aspects like the application area and the local and cultural context of use will figure prominently.
By calling for this shift of the debate, the paper provides the basis for such a shift and can help shape current debates in this direction. This is particularly necessary with regard to the ethics of AI, which otherwise may be locked into mitigation strategies, ranging from legislation and regulation to standardisation and organisational practice, that focus on the concept of AI and may misdirect efforts away from the areas of greatest need.
This shift of the debate and the attention to the ethics of innovation ecosystems will not be a panacea. The need for a delimitation of the subject of debate will remain, which means that the exact content and membership of an innovation ecosystem that raises ethical questions will remain open questions. Systems-based approaches raise questions of individual agency and the locus of ethics, which the dominant ethical theories may find difficult to answer. The innovation ecosystems construct is also just an umbrella term underneath which there will be many specific innovation ecosystems, which means that attention to the empirical realisation of such systems will need to grow.
Despite the fact that this shift of the debate will require significant additional efforts, it is still worth considering. The currently ubiquitous discussion of the ethics of AI will continue for the foreseeable future. At the same time it is already visibly reaching its limitations, for example by including numerous ethical issues that are not unique to AI. In order for the discussion to remain specific and allow the flexibility to react to future developments, it will need to reconsider its underpinnings. This paper suggests that this can be achieved by refocusing its scope and explicitly embracing digital innovation ecosystems as the subject of ethical reflection. Doing so will ensure that many of the lessons that have been learned over years and decades of working on the ethics of computing and AI will remain present and relevant, and that there is a well-established starting point from which we can engage with the next generations of digital technologies to ensure that their creation and use benefit humanity.
References

Abrahamson, E.: Management fashion. Acad. Manag. Rev. 21 (1), 254–285 (1996)
Access Now.: Human Rights in the Age of Artificial Intelligence. Access Now. https://www.accessnow.org/cms/assets/uploads/2018/11/AI-and-Human-Rights.pdf (2018)
Access Now Policy Team.: The Toronto Declaration: Protecting the right to equality and non-discrimination in machine learning systems. Access Now. https://www.accessnow.org/cms/assets/uploads/2018/08/The-Toronto-Declaration_ENG_08-2018.pdf (2018)
Adam, A.: Computer ethics in a different voice. Inf. Organ. 11 (4), 235–261 (2001)
Adner, R.: Match your innovation strategy to your innovation ecosystem. Harv. Bus. Rev. 84 (4), 98–107 (2006)
AI HLEG.: Ethics Guidelines for Trustworthy AI. European Commission - Directorate-General for Communication. https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai (2019)
AIEI Group.: From Principles to Practice—An Interdisciplinary framework to operationalise AI ethics (p. 56). VDE / Bertelsmann Stiftung. https://www.ai-ethics-impact.org/resource/blob/1961130/c6db9894ee73aefa489d6249f5ee2b9f/aieig---report---download-hb-data.pdf (2020)
Albrecht, B., Christensen, K., Dasigi, V., Huggins, J., Paul, J.: The Pledge of the computing professional: recognizing and promoting ethics in the computing professions. SIGCAS Comput. Soc. 42 (1), 6–8 (2012). https://doi.org/10.1145/2422512.2422513
Aristotle.: The Nicomachean Ethics. Filiquarian Publishing, LLC (2007)
Babuta, A., Oswald, M., & Janjeva, A.: Artificial Intelligence and UK National Security—Policy Considerations [Occasional Paper]. Royal United Services Institute for Defence and Security Studies. https://rusi.org/sites/default/files/ai_national_security_final_web_version.pdf (2020)
Baskerville, R.L., Myers, M.D.: Information systems as a reference discipline. MIS Q. 26 (1), 1–14 (2002)
Baskerville, R.L., Myers, M.D.: Fashion waves in information systems research and practice. MIS Quarterly, 647–662 (2009)
Baum, S.D.: Reconciliation between factions focused on near-term and long-term artificial intelligence. AI & Soc. 33 (4), 565–572 (2018). https://doi.org/10.1007/s00146-017-0734-3
Benbasat, I., Weber, R.: Research commentary: Rethinking" diversity" in information systems research. Inf. Syst. Res. 7 (4), 389 (1996)
Benjamins, R.: A choices framework for the responsible use of AI. AI and Ethics (2020). https://doi.org/10.1007/s43681-020-00012-5
Bentham, J.: An Introduction to the Principles of Morals and Legislation. Dover Publications Inc (1789)
Berendt, B.: AI for the Common Good?! Pitfalls, challenges, and ethics pen-testing. Paladyn, Journal of Behavioral Robotics 10 (1), 44–65 (2019). https://doi.org/10.1515/pjbr-2019-0004
Boddington, P.: AI and moral thinking: How can we live well with machines to enhance our moral agency? AI and Ethics (2020). https://doi.org/10.1007/s43681-020-00017-0
Boden, M. A.: Artificial Intelligence: A Very Short Introduction (Reprint edition). OUP Oxford (2018)
Borenstein, J., Howard, A.: Emerging challenges in AI and the need for AI ethics education. AI and Ethics (2020). https://doi.org/10.1007/s43681-020-00002-7
Borges, A.F.S., Laurindo, F.J.B., Spínola, M.M., Gonçalves, R.F., Mattos, C.A.: The strategic use of artificial intelligence in the digital era: Systematic literature review and future research directions. Int. J. Inf. Manage. (2020). https://doi.org/10.1016/j.ijinfomgt.2020.102225
Bostrom, N.: Superintelligence: Paths, Dangers, Strategies (Reprint edition). OUP Oxford (2016)
Brey, P.: Values in technology and disclosive computer ethics. In L. Floridi (Ed.), The Cambridge Handbook of Information and Computer Ethics (pp. 41–58). Cambridge University Press (2010)
Brinkman, B., Flick, C., Gotterbarn, D., Miller, K., Vazansky, K., Wolf, M.J.: Listening to Professional Voices: Draft 2 of the ACM Code of Ethics and Professional Conduct. Commun. ACM 60 (5), 105–111 (2017). https://doi.org/10.1145/3072528
Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., Dafoe, A., Scharre, P., Zeitzoff, T., Filar, B., Anderson, H., Roff, H., Allen, G. C., Steinhardt, J., Flynn, C., Héigeartaigh, S. Ó., Beard, S., Belfield, H., Farquhar, S., Amodei, D.: The malicious use of artificial intelligence: forecasting, prevention, and mitigation. http://arxiv.org/abs/1802.07228 (2018)
Buttarelli, G.: Choose Humanity: Putting Dignity back into Digital [Opening Speech]. 40th Edition of the International Conference of Data Protection Commissioners, Brussels. https://www.privacyconference2018.org/system/files/2018-10/Choose%20Humanity%20speech_0.pdf (2018)
Bynum, T.W.: Computer ethics: Its birth and its future. Ethics Inf. Technol. 3 (2), 109–112 (2001). https://doi.org/10.1023/A:1011893925319
Bynum, T. W.: The historical roots of information and computer ethics. In L. Floridi (Ed.), The Cambridge Handbook of Information and Computer Ethics (pp. 20–38). Cambridge University Press (2010)
Bynum, T. W.: Computer and Information Ethics. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/archives/win2015/entries/ethics-computer (2018)
Bynum, T. W., Rogerson, S.: Computer ethics and professional responsibility: introductory text and readings. Wiley-Blackwell (2003)
Capurro, R.: The Age of Artificial Intelligences: A Personal Reflection. International Review of Information Ethics, 28. https://informationethics.ca/index.php/irie/article/view/388 (2020)
Cave, S., ÓhÉigeartaigh, S.S.: Bridging near- and long-term concerns about AI. Nature Machine Intelligence 1 (1), 5–6 (2019). https://doi.org/10.1038/s42256-018-0003-2
Cavoukian, A.: Privacy by design: The 7 foundational principles. Information and privacy commissioner of Ontario, Canada. http://dataprotection.industries/wp-content/uploads/2017/10/privacy-by-design.pdf (2009)
CDEI.: Interim report: Review into bias in algorithmic decision-making. Centre for Data Ethics and Innovation. https://www.gov.uk/government/publications/interim-reports-from-the-centre-for-data-ethics-and-innovation/interim-report-review-into-bias-in-algorithmic-decision-making (2019)
Checkland, P., Poulter, J.: Learning for action: A short definitive account of soft systems methodology and its use for practitioner, teachers, and students. Wiley (2006)
Checkland, P., & Poulter, J.: Soft systems methodology. In Systems approaches to managing change: A practical guide (pp. 191–242). Springer (2010)
Childress, J.F., Beauchamp, T.L.: Principles of biomedical ethics. Oxford University Press (1979)
Clarke, R.: Principles and Business Processes for Responsible AI. Comput. Law Secur. Rev. 35 (4), 410–422 (2019)
Clouser, K.D., Gert, B.: A Critique of Principlism. J. Med. Philos. 15 (2), 219–236 (1990). https://doi.org/10.1093/jmp/15.2.219
Coeckelbergh, M.: Technology, Narrative and Performance in the Social Theatre. In D. Kreps (Ed.), Understanding Digital Events: Bergson, Whitehead, and the Experience of the Digital (1 edition, pp. 13–27). Routledge (2019)
Coeckelbergh, M.: AI Ethics. The MIT Press (2020)
Cooper, H. M.: Synthesizing research: A guide for literature reviews. Sage (1998)
Council of Europe.: Unboxing artificial intelligence: 10 steps to protect human rights. https://www.coe.int/en/web/commissioner/view/-/asset_publisher/ugj3i6qSEkhZ/content/unboxing-artificial-intelligence-10-steps-to-protect-human-rights (2019)
Council of Europe.: CAHAI - Ad hoc Committee on Artificial Intelligence. Artificial Intelligence. https://www.coe.int/en/web/artificial-intelligence/cahai (2020)
Dignum, V.: Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way (1st ed. 2019 edition). Springer (2019)
Editor.: Editor’s Introduction. Metaphilosophy, 16(4), 263–265 (1985)
EDPS.: EDPS Opinion on the European Commission’s White Paper on Artificial Intelligence – A European approach to excellence and trust (Opinion 4/2020) (Opinion No. 4/2020). EDPS. https://edps.europa.eu/sites/edp/files/publication/20-06-19_opinion_ai_white_paper_en.pdf (2020)
Eitel-Porter, R.: Beyond the promise: Implementing ethical AI. AI and Ethics (2020). https://doi.org/10.1007/s43681-020-00011-6
European Commission.: White Paper on Artificial Intelligence: A European approach to excellence and trust (White Paper COM(2020) 65 final). https://ec.europa.eu/info/files/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en (2020)
European Commission.: Proposal for a Regulation on a European approach for Artificial Intelligence (COM(2021) 206 final). European Commission. https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-european-approach-artificial-intelligence (2021)
European Parliament.: DRAFT REPORT with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics and related technologies (2020/2012(INL)). European Parliament, Committee on Legal Affairs. https://www.europarl.europa.eu/meetdocs/2014_2019/plmrep/COMMITTEES/JURI/PR/2020/05-12/1203395EN.pdf (2020)
Fenn, J., & Lehong, H.: Hype Cycle for Emerging Technologies. Gartner. http://www.gartner.com/technology/research/hype-cycles/index.jsp (2011)
Findlay, M., & Seah, J.: An Ecosystem Approach to Ethical AI and Data Use: Experimental Reflections. 2020 IEEE / ITU International Conference on Artificial Intelligence for Good (AI4G), 192–197 (2020). https://doi.org/10.1109/AI4G50087.2020.9311069
Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M.: Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI. https://dash.harvard.edu/handle/1/42160420 (2020)
Floridi, L.: Information ethics: On the philosophical foundation of computer ethics. Ethics Inf. Technol. 1 (1), 33–52 (1999)
Floridi, L. (ed.): The Cambridge Handbook of Information and Computer Ethics. Cambridge University Press (2010)
Floridi, L., Sanders, J.W.: Mapping the foundationalist debate in computer ethics. Ethics Inf. Technol. 4 (1), 1–9 (2002)
Friedman, B., Kahn, P., & Borning, A.: Value Sensitive Design and Information Systems. In P. Zhang & D. Galletta (eds.), Human-Computer Interaction in Management Information Systems: Foundations. M.E Sharpe, Inc (2006)
Gibson, J. J.: The theory of affordances. In R. E. Shaw & J. D. Bransford (Eds.), Perceiving, acting and knowing (pp. 67–82). Lawrence Erlbaum Associates (1977)
Gilligan, C.: In a Different Voice: Psychological Theory and Women’s Development (Reissue). Harvard University Press (1990)
Gomes, L. A. de V., Facin, A. L. F., Salerno, M. S., & Ikenami, R. K.: Unpacking the innovation ecosystem construct: Evolution, gaps and trends. Technological Forecasting and Social Change, 136, 30–48 (2018). https://doi.org/10.1016/j.techfore.2016.11.009
Gotterbarn, D., Miller, K., Rogerson, S.: Computer society and ACM approve software engineering code of ethics. Computer 32 (10), 84–88 (1999)
Gotterbarn, D., & Rogerson, S.: Responsible risk analysis for software development: Creating the software development impact statement. Communications of AIS, 15 , 730–750 (2005). https://doi.org/10.17705/1CAIS.01540
Gürses, S., Troncoso, C., & Diaz, C.: Engineering Privacy by Design. Conference on Computers, Privacy & Data Protection (CPDP) (2011)
Hall, W., & Pesenti, J.: Growing the artificial intelligence industry in the UK. Department for Digital, Culture, Media & Sport and Department for Business, Energy & Industrial Strategy. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/652097/Growing_the_artificial_intelligence_industry_in_the_UK.pdf (2017)
Harris, I., Jennings, R.C., Pullinger, D., Rogerson, S., Duquenoy, P.: Ethical assessment of new technologies: A meta-methodology. J. Inf. Commun. Ethics Soc. 9 (1), 49–64 (2011). https://doi.org/10.1108/14779961111123223
Hickok, M.: Lessons learned from AI ethics principles for future actions. AI and Ethics (2020). https://doi.org/10.1007/s43681-020-00008-1
Huff, C., Martin, C.D.: Computing consequences: A framework for teaching ethical computing. Commun. ACM 38 (12), 75–84 (1995)
International Telecommunication Union.: AI for Good Global Summit Report 2017. International Telecommunication Union. https://www.itu.int/en/ITU-T/AI/Documents/Report/AI_for_Good_Global_Summit_Report_2017.pdf (2017)
Introna, L.D.: Disclosive Ethics and Information Technology: Disclosing Facial Recognition Systems. Ethics Inf. Technol. 7 (2), 75–86 (2005)
Jelinek, T., Wallach, W., Kerimi, D.: Policy brief: The creation of a G20 coordinating committee for the governance of artificial intelligence. AI and Ethics (2020). https://doi.org/10.1007/s43681-020-00019-y
Jobin, A., Ienca, M., Vayena, E.: The global landscape of AI ethics guidelines. Nature Machine Intelligence 1 (9), 389–399 (2019). https://doi.org/10.1038/s42256-019-0088-2
Johnson, D. G.: Computer Ethics (3rd ed.). Prentice Hall (2001)
Johnson, D.G.: Computer experts: Guns-for-hire or professionals? Commun. ACM 51 (10), 24–26 (2008)
Kant, I.: Kritik der praktischen Vernunft. Reclam, Ditzingen (1788)
Kant, I.: Grundlegung zur Metaphysik der Sitten. Reclam, Ditzingen (1797)
Kaplan, A., Haenlein, M.: Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Bus. Horiz. 62 (1), 15–25 (2019)
Keen, P.: MIS research: Reference disciplines and a cumulative tradition. Proceedings of the First International Conference on Information Systems (1980)
Klitzman, R.: The Ethics Police?: The Struggle to Make Human Research Safe (1 edition). OUP USA (2015)
Kurzweil, R.: The Singularity is Near. Gerald Duckworth & Co Ltd (2006)
Latonero, M.: Governing artificial intelligence: Upholding human rights & dignity. Data & Society. https://datasociety.net/wp-content/uploads/2018/10/DataSociety_Governing_Artificial_Intelligence_Upholding_Human_Rights.pdf (2018)
Lauer, D.: You cannot have AI ethics without ethics. AI and Ethics (2020). https://doi.org/10.1007/s43681-020-00013-4
MacIntyre, A. C.: After virtue: A study in moral theory. University of Notre Dame Press (2007)
MacIntyre, J., Medsker, L., Moriarty, R.: Past the tipping point? AI and Ethics (2020). https://doi.org/10.1007/s43681-020-00016-1
Manders-Huits, N., & van den Hoven, J.: The Need for a Value-Sensitive Design of Communication Infrastructures. In P. Sollie & M. Düwell (Eds.), Evaluating New Technologies: Methodological Problems for the Ethical Assessment of Technology Developments (pp. 51–62). Springer (2009)
Martin, C.D., Makoundou, T.T.: Taking the high road ethics by design in AI. ACM Inroads 8 (4), 35–37 (2017)
Mason, R.O.: Four ethical issues of the information age. MIS Q. 10 (1), 5–12 (1986)
McCarthy, J., Minsky, M.L., Rochester, N., Shannon, C.E.: A proposal for the Dartmouth summer research project on artificial intelligence, august 31, 1955. AI Mag. 27 (4), 12–12 (2006)
Mill, J. S.: Utilitarianism (2nd Revised edition). Hackett Publishing Co, Inc (1861)
Miller, J. H., & Page, S. E.: Complex Adaptive Systems: An Introduction to Computational Models of Social Life. Princeton University Press (2007)
Mingers, J., Walsham, G.: Towards ethical information systems: The contribution of discourse ethics. MIS Q. 34 (4), 833–854 (2010)
Mittelstadt, B.: Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, (2019) https://doi.org/10.1038/s42256-019-0114-4
Moor, J.H.: What is computer ethics? Metaphilosophy 16 (4), 266–275 (1985)
Moor, J.H., Bynum, T.W.: Introduction to cyberphilosophy. Metaphilosophy 33 (1/2), 4–10 (2002)
Moore, J.F.: Predators and prey: A new ecology of competition. Harv. Bus. Rev. 71 (3), 75–86 (1993)
Muller, C.: The Impact of Artificial Intelligence on Human Rights, Democracy and the Rule of Law (CAHAI (2020)06-fin). Council of Europe, Ad Hoc Committee on Artificial Intelligence (CAHAI) (2020). https://rm.coe.int/cahai-2020-06-fin-c-muller-the-impact-of-ai-on-human-rights-democracy-/16809ed6da
Müller, V. C.: Ethics of Artificial Intelligence and Robotics. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2020). Metaphysics Research Lab, Stanford University (2020) https://plato.stanford.edu/archives/fall2020/entries/ethics-ai/
Nemitz, P.: Constitutional democracy and technology in the age of artificial intelligence. Phil. Trans. R. Soc. A 376 (2133), 20180089 (2018). https://doi.org/10.1098/rsta.2018.0089
Nishant, R., Kennedy, M., Corbett, J.: Artificial intelligence for sustainability: Challenges, opportunities, and a research agenda. Int. J. Inf. Manage. 53 , 102104 (2020). https://doi.org/10.1016/j.ijinfomgt.2020.102104
Norman, D.A.: Affordance, conventions, and design. Interactions 6 (3), 38–43 (1999). https://doi.org/10.1145/301153.301168
OECD.: Recommendation of the Council on Artificial Intelligence [OECD Legal Instruments]. OECD (2019). https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449
Raso, F. A., Hilligoss, H., Krishnamurthy, V., Bavitz, C., & Kim, L.: Artificial Intelligence & Human Rights: Opportunities & Risks (SSRN Scholarly Paper ID 3259344). Social Science Research Network (2018). https://papers.ssrn.com/abstract=3259344
Richards, L., Brockmann, K., Boulanin, V.: Responsible Artificial Intelligence Research and Innovation for International Peace and Security. Stockholm International Peace Research Institute (2020). https://reliefweb.int/sites/reliefweb.int/files/resources/sipri_report_responsible_artificial_intelligence_research_and_innovation_for_international_peace_and_security_2011.pdf
Rodrigues, R.: Legal and human rights issues of AI: Gaps, challenges and vulnerabilities. Journal of Responsible Technology 4 , 100005 (2020). https://doi.org/10.1016/j.jrt.2020.100005
Rogerson, S.: Ethics and ICT. In R. D. Galliers & W. Currie (Eds.), The Oxford Handbook of Management Information Systems: Critical Perspectives and New Directions (pp. 601–622). OUP Oxford (2011)
Rowe, F.: What literature review is not: Diversity, boundaries and recommendations. European Journal of Information Systems 23 (3), 241–255 (2014). https://doi.org/10.1057/ejis.2014.7
Shneiderman, B.: Design Lessons From AI’s Two Grand Goals: Human Emulation and Useful Applications. IEEE Transactions on Technology and Society 1 (2), 73–82 (2020). https://doi.org/10.1109/TTS.2020.2992669
Smith, B. C.: The Promise of Artificial Intelligence: Reckoning and Judgment. The MIT Press (2019)
Spiegelhalter, D.: Should We Trust Algorithms? Harvard Data Science Review (2020). https://doi.org/10.1162/99608f92.cb91a35a
Spinello, R. A.: Case Studies in Information Technology Ethics (2nd edition). Pearson (2002)
Spinello, R. A., & Tavani, H. T.: Readings in CyberEthics. Jones and Bartlett Publishers, Inc (2001)
Stahl, B. C., & Markus, M. L.: Let’s claim the authority to speak out on the ethics of smart information systems. MIS Quarterly, 45(1), 33–36 (2021). https://doi.org/10.25300/MISQ/2021/15434.1.6
Stahl, B. C., Timmermans, J., & Mittelstadt, B. D.: The Ethics of Computing: A Survey of the Computing-Oriented Literature. ACM Comput. Surv. 48(4), 55:1–55:38 (2016). https://doi.org/10.1145/2871196
Stone, P., Brooks, R., Brynjolfsson, E., Calo, R., Etzioni, O., Hager, G., Hirschberg, J., Kalyanakrishnan, S., Kamar, E., Kraus, S.: Artificial Intelligence and Life in 2030. One Hundred Year Study on Artificial Intelligence: Report of the 2015–2016 Study Panel. Stanford University, Stanford, CA. http://ai100.stanford.edu/2016-report (2016). Accessed 6 September 2016
Tate, M., Furtmueller, E., Evermann, J., & Bandara, W.: Introduction to the Special Issue: The Literature Review in Information Systems. Communications of the Association for Information Systems, 37(1) (2015). http://aisel.aisnet.org/cais/vol37/iss1/5
Tavani, H.: The foundationalist debate in computer ethics. In L. Floridi (Ed.), The Cambridge Handbook of Information and Computer Ethics (pp. 251–270). Cambridge University Press (2010)
Tavani, H.T.: The uniqueness debate in computer ethics: What exactly is at issue, and why does it matter? Ethics and Inf. Technol. 4 (1), 37–54 (2002)
Tigard, D.W.: Responsible AI and moral responsibility: A common appreciation. AI and Ethics (2020). https://doi.org/10.1007/s43681-020-00009-0
Wallach, W., Marchant, G.: Toward the Agile and Comprehensive International Governance of AI and Robotics [point of view]. Proc. IEEE 107 (3), 505–508 (2019). https://doi.org/10.1109/JPROC.2019.2899422
Weckert, J., & Adeney, D. (Eds.). Computer and Information Ethics. Greenwood Press (1997)
Weizenbaum, J.: Computer Power and Human Reason: From Judgement to Calculation (New edition). W. H. Freeman & Co Ltd (1977)
Wiener, N.: The human use of human beings. Doubleday (1954)
Wiener, N.: God and Golem, Inc.: A comment on certain points where cybernetics impinges on religion. MIT Press (1964)
Willcocks, L.: Robo-Apocalypse cancelled? Reframing the automation and future of work debate. J. Inf. Technol. 35 (4), 286–302 (2020). https://doi.org/10.1177/0268396220925830
World Economic Forum.: Responsible Use of Technology [White paper]. WEF (2019). http://www3.weforum.org/docs/WEF_Responsible_Use_of_Technology.pdf
Zuboff, S.: The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. Profile Books (2019)
Funding
This research has received funding from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 945539 (Human Brain Project SGA3) and Grant Agreement No. 786641 (SHERPA).
Author information
Authors and Affiliations
Centre for Computing and Social Responsibility, De Montfort University, The Gateway, Leicester, LE1 9BH, UK
Bernd Carsten Stahl
Corresponding author
Correspondence to Bernd Carsten Stahl.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .
About this article
Stahl, B.C.: From computer ethics and the ethics of AI towards an ethics of digital ecosystems. AI Ethics 2, 65–77 (2022). https://doi.org/10.1007/s43681-021-00080-1
Received: 03 May 2021
Accepted: 01 July 2021
Published: 31 July 2021
Issue Date: February 2022
DOI: https://doi.org/10.1007/s43681-021-00080-1
Keywords
- Computer ethics
- Ethics of AI
- Artificial intelligence
- Digital ethics
Perspectives on computing ethics: a multi-stakeholder analysis
Journal of Information, Communication and Ethics in Society
ISSN : 1477-996X
Article publication date: 23 September 2021
Issue publication date: 7 February 2022
Purpose
Computing ethics represents a long-established, yet rapidly evolving, discipline that grows in complexity and scope on a near-daily basis. Therefore, to help understand some of that scope, it is essential to incorporate a range of perspectives, from a range of stakeholders, on current and emerging ethical challenges associated with computer technology. This study aims to achieve this through a three-pronged stakeholder analysis of Computer Science academics, ICT industry professionals and citizen groups, undertaken to explore what they consider to be crucial computing ethics concerns. The overlaps between these stakeholder groups are explored, as well as whether their concerns are reflected in the existing literature.
Design/methodology/approach
Data collection was performed using focus groups, and the data was analysed using a thematic analysis. The data was also analysed to determine if there were overlaps between the literature and the stakeholders’ concerns and attitudes towards computing ethics.
Findings
The results of the focus group analysis show a mixture of overlapping concerns between the different groups, as well as some concerns that are unique to each of the specific groups. All groups stressed the importance of data as a key topic in computing ethics. This includes concerns around the accuracy, completeness and representativeness of data sets used to develop computing applications. Academics were concerned with the best ways to teach computing ethics to university students. Industry professionals believed that a lack of diversity in software teams resulted in important questions not being asked during design and development. Citizens discussed at length the negative and unexpected impacts of social media applications. These are all topics that have gained broad coverage in the literature.
Social implications
In recent years, the impact of ICT on society and the environment at large has grown tremendously. From this fast-paced growth, a myriad of ethical concerns have arisen. The analysis aims to shed light on what a diverse group of stakeholders consider the most important social impacts of technology and whether these concerns are reflected in the literature on computing ethics. The outcomes of this analysis will form the basis for new teaching content that will be developed in future to help illuminate and address these concerns.
Originality/value
The multi-stakeholder analysis provides individual and differing perspectives on the issues related to the rapidly evolving discipline of computing ethics.
- Social impact of ICT
- Information communication technology (ICT)
- Computing ethics
- Ethical concerns in ICT
Gordon, D., Stavrakakis, I., Gibson, J.P., Tierney, B., Becevel, A., Curley, A., Collins, M., O’Mahony, W. and O’Sullivan, D. (2022), "Perspectives on computing ethics: a multi-stakeholder analysis", Journal of Information, Communication and Ethics in Society, Vol. 20 No. 1, pp. 72-90. https://doi.org/10.1108/JICES-12-2020-0127
Emerald Publishing Limited
Copyright © 2021, Damian Gordon, Ioannis Stavrakakis, J. Paul Gibson, Brendan Tierney, Anna Becevel, Andrea Curley, Michael Collins, William O’Mahony and Dympna O’Sullivan.
Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) license. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this license may be seen at http://creativecommons.org/licences/by/4.0/legalcode
1. Introduction
Computers and technological applications are now central to many aspects of life and society, from industry, commerce, government, research, education, medicine and communication to entertainment systems. The last decade has seen rapid technological growth and innovation, with the realities of artificial intelligence (AI) technology coming to fruition. Topics including privacy, algorithmic decision-making, pervasive technology, surveillance applications and the automation of human intelligence for robotics and autonomous vehicles frequently undergo scrutiny in the media and are increasingly entering public discourse. These technologies have wide-ranging impacts on society; those impacts can be beneficial but may at times be negative. There is a sense that some technology development and innovation is happening at a more rapid pace than the relevant ethical and moral debates.
The history of computing ethics (or computer ethics) goes hand-in-hand with the history of computers themselves; since the early days of the development of digital computers, pioneering computer scientists, such as Turing, Wiener and Weizenbaum, spoke of the ethical challenges inherent in computer technology ( Weizenbaum, 1976 ; Bynum, 1999 , 2000 , 2006 , 2018 ), but it was not until 1985 that computing ethics began to emerge as a separate field. This was the year that two seminal publications were produced, Deborah Johnson’s book Computer Ethics ( Johnson, 1985 ) and James Moor’s paper, “What Is Computer Ethics?” ( Moor, 1985 ).
Deborah Johnson’s Computer Ethics (Johnson, 1985) was the first major book to concentrate on the ethical obligations of computer professionals, and it thoughtfully identifies those ethical issues that are unique to computers, as opposed to business ethics or legal ethics. She also notes how common moral norms can prove deficient when applied to new and unfamiliar computer-related moral problems and dilemmas.
In his paper, James Moor (1985) defined computer ethics as "the analysis of the nature and social impact of computer technology and the corresponding formulation and justification of policies for the ethical use of such technology". He argues that computer technology makes it possible for people to do a vast number of things that were not possible before; and since no one could do them before, the question may never have arisen as to whether one ought to do them.
The field of computing ethics continued to evolve in the 1990s, when the concept of "value-sensitive computer design" emerged, based on the insight that potential computing ethics problems can be avoided, while new technology is still under development, by anticipating possible harm to human values and designing the technology from the very beginning in ways that prevent such harm (Flanagan et al., 2008; Brey, 2012). At the same time, others, including Donald Gotterbarn (Gotterbarn, 1991), theorised that computing ethics should be seen as a professional code of conduct devoted to the development and advancement of standards of good practice for computing professionals. This resulted in the development of a number of codes of ethics and codes of conduct for computing professionals. One important example is the ACM code, first established in 1966 under the title "Guidelines for Professional Conduct" with the aim of upholding ethical conduct in the computing profession (Gotterbarn et al., 2018). The code has gone through various updates while keeping ethics and social impact as its main purpose. One of its most important updates was in 1992, when it was renamed the "ACM Code of Ethics and Professional Conduct" and was made up of 25 ethical principles for professionals to follow (ACM, 1992). The most recent update of the ACM code was in 2018.
Professional bodies continue to play a very important role in producing and disseminating ethical guidelines and standards for ICT professionals; for example, the IEEE Ethically Aligned Design guidelines provide guidance for ICT professionals (Shahriari and Shahriari, 2017). It should be noted that, in contrast to other professions such as medicine or law, which have codes of ethics and possible penalties in place for non-compliance, the ICT profession still lacks a coherent umbrella ethical framework (Thornley et al., 2018).
In 1996, the "Górniak Hypothesis" predicted that a global ethical theory would emerge over time because of the global nature of the internet. Developments since then appear to confirm Górniak’s hypothesis and have resulted in the metaphysical information ethics theory of Luciano Floridi (Floridi, 1999, 2014; Floridi and Sanders, 2005). These new theories make explicit the social and global change created by new technologies and call for an intercultural debate on computing ethics in order to critically discuss their impact on society.
In this paper, we present a literature review of recent work on computing ethics, to understand the pertinent topics currently under discussion in the literature, together with the results of a series of focus groups that explored computing ethics concerns with Computer Science academics, ICT industry professionals and citizens. The research was conducted as part of the Ethics4EU project’s report on European Values for Ethics in Technology (Ethics4EU, 2021). Ethics4EU is an Erasmus+ project that aims to develop a repository of open and accessible educational curricula, teaching and assessment resources relating to computing ethics. The rest of this paper is organised as follows. In Section 2, we discuss relevant recent literature on computing ethics. In Section 3, we describe our focus group sessions. In Section 4, we present a thematic analysis of the data gathered during the focus groups. We conclude with a discussion in Section 5.
2. Literature review
A systematic literature review approach was employed in selecting relevant literature from a number of key areas that represent some notable present-day computing ethics topics and challenges (similar reviews have been undertaken by researchers such as Braunack-Mayer et al., 2020; Saltz et al., 2019; Saltz and Dewar, 2019). These areas, also highlighted by Kumar et al. (2020), centre on the overlap between three domains in contemporary computing ethics: data science, AI and pervasive computing (including surveillance and privacy). The focus of this literature review is therefore to critically examine those three areas and to explore the themes that have emerged in each of them over the past five years.
2.1 Data ethics
Data ethics is a relatively new branch of computing ethics that studies moral problems related to data management (including generation, recording, curation, processing, dissemination, sharing and use) as well as algorithms (including those using AI, artificial agents, machine learning and robots), in order to formulate and support morally good solutions for data (Floridi and Taddeo, 2016). Data has become a key input for driving growth, enabling businesses to differentiate themselves and maintain a competitive edge. Value is particularly high from the mass collection and aggregation of data, particularly by companies with data-driven business models. However, the use of aggregated data poses risks to individuals’ privacy at a very fundamental level. It is therefore vital to highlight data management frameworks that promote the ethical collection, processing and aggregation of data. Three popular data management frameworks are the Data Management Association’s Data Management Body of Knowledge (DM-BOK) (DAMA, 2017), the Zachman Framework (Zachman, 2008) and the Ethical Enterprise Information Management Framework (E2IM) (O’Keefe and Brien, 2018). A broad overview of the concerns to be addressed by data-based businesses is given in Loi et al. (2019), who outline the structure and content of a code of ethics for companies engaged in data-based business, i.e. companies whose value propositions strongly depend on using data.

2.2 Artificial intelligence ethics
AI has emerged as one of the central issues in computing ethics. Pertinent issues in AI ethics include transparency, inclusion, responsibility, impartiality, reliability, security and privacy. While many researchers and technology experts are excited by AI’s potential, many others are unsettled by it. Authors balance the positive effects of AI (self-driving cars leading to better safety, digital assistants, robots for heavy physical work and powerful algorithms that extract helpful and important insights from large amounts of data) against the negatives (automation leading to job losses, rising inequalities between AI haves and have-nots, and threats to privacy) (Helbing et al., 2019).
Algorithmic transparency and fairness are key elements of ethical AI systems (Webb et al., 2019). Ethical AI systems should also work to eliminate bias, which can be achieved through a greater understanding of the data used to build them (Floridi and Taddeo, 2016; Mittelstadt et al., 2016; Müller, 2020). A key element for automated decision-making systems is accountability and auditing (Whittaker et al., 2018). If a system involves machine or, more recently, deep learning, it will typically be opaque even to the expert who developed it. Machine and robot ethics are another important subfield of AI ethics: if a robot acts, will it itself be responsible, liable or accountable for its actions? (EGE, 2018). "The effects of decisions or actions based on AI are often the result of countless interactions among many actors, including designers, developers, users, software, and hardware. […] With such distributed agency comes distributed responsibility" (Taddeo and Floridi, 2018, p. 751).
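To make the fairness concerns above concrete, the following minimal Python sketch computes a demographic-parity gap, one common check discussed in the fairness literature. The data frame, column names and the demographic_parity_gap helper are illustrative assumptions for this sketch, not taken from any system discussed in the text.

```python
# Minimal sketch of one common fairness check: demographic parity.
# Data, column names and threshold are invented for illustration.
import pandas as pd

def demographic_parity_gap(df, group_col, outcome_col):
    """Gap between the highest and lowest positive-outcome rates
    across the groups in group_col; 0.0 means equal rates."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Toy automated loan decisions: 1 = approved, 0 = declined.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1, 1, 0, 1, 0, 0],
})

gap = demographic_parity_gap(decisions, "group", "approved")
print(f"Demographic parity gap: {gap:.2f}")
# Prints 0.33: group A is approved twice as often as group B here.
```

A single summary number like this cannot establish that a system is fair, but it illustrates the kind of auditable, reportable quantity that accountability and auditing proposals call for.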
2.3 Pervasive computing ethics
The terms “pervasive computing,” “ubiquitous computing”, “ambient intelligence,” and “the Internet of Things” refer to technological visions that share one basic idea: to make computing resources available anytime and anywhere by embedding computational devices in everyday objects, freeing the user from the constraint of interacting with ICT devices explicitly via keyboards and screens.
One of the central tenets of pervasive computing is "Understanding and Changing Behavior", a topic that clearly has significant ethical considerations (Kranzberg, 2019). Related elements include surveillance technologies, effects on privacy and technological paternalism (Hilty, 2015; Macnish, 2017).
The ethics of surveillance considers the moral aspects of how surveillance, including facial recognition technology, is used (Macnish, 2017). One of the core arguments against surveillance is that it poses a threat to privacy, and in a world of ubiquitous automatic identification, the amount of personal data generated and circulated is expected to increase dramatically.
Privacy is an integral part of pervasive computing ethics and is defined as an individual condition of life characterised by exclusion from publicness (Rasmussen et al., 2000). In the context of computing, privacy is usually interpreted as "informational privacy", a state characterised "by controlling whether and how personal data can be gathered, stored, processed or selectively disseminated". There is a clear conflict between privacy and pervasive computing technologies, particularly those that deal with sensing and storage (Jacobs and Abowd, 2003). The resulting requirement to protect individual privacy against data misuse has entered many laws and international agreements under different terms, some focusing on the defensive aspect, such as "data protection", others emphasising individual autonomy, such as "informational self-determination".
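As a concrete illustration of "data protection" built into a system, the short Python sketch below pseudonymises direct identifiers before records are stored. The record fields, the pseudonymise helper and the salt handling are assumptions made for the example, not a prescribed design.

```python
# Sketch of data minimisation at the collection point: replace direct
# identifiers with salted hashes before storage. Field names and the
# salt-handling policy are illustrative assumptions only.
import hashlib

def pseudonymise(record, identifier_fields, salt):
    """Return a copy of record with identifier fields replaced by
    truncated salted SHA-256 digests; other fields are kept as-is."""
    out = {}
    for key, value in record.items():
        if key in identifier_fields:
            out[key] = hashlib.sha256(salt + str(value).encode()).hexdigest()[:16]
        else:
            out[key] = value
    return out

record = {"name": "Jane Doe", "email": "jane@example.com", "steps": 8421}
print(pseudonymise(record, {"name", "email"}, salt=b"store-me-separately"))
```

Note that salted hashing is pseudonymisation rather than anonymisation: whoever holds the salt can re-identify records, which is exactly the control-over-data question that informational privacy raises.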
3. Methodology
Data collection was conducted during the first Ethics4EU multiplier event in November 2019 on the campus of Technological University Dublin (TU Dublin) in Dublin, Ireland. Participants from academia and industry were recruited as part of a convenience sampling approach (Saumure and Given, 2008). TU Dublin’s School of Computer Science has extensive collaborations with the ICT industry; therefore, readily accessible participants from a list of contacts were invited according to their field of work in a range of ICT organisations (both large and small). Academic participants from a range of academic institutions were identified through their research areas and expressed interest in the topic. Citizen participants were recruited following a snowball sampling method (Morgan, 2008b): some participants were initially contacted by the researchers and were asked to spread the word to other citizens who were interested in participating in the focus groups.
A focus group approach was used to help identify digital ethical concerns from the stakeholder groups. Focus group interviews give researchers the opportunity to acquire a variety of perspectives on the same topic simultaneously (Gibbs, 1997). Zikmund (1997) identifies ten advantages of using focus groups in research, among them speed, stimulation (participants’ views emerge from the group process), security (because of the homogeneity of the group) and synergy (when the group process brings forward a wide range of information).
These advantages were aligned with our study, where focus groups were used for the following reasons. Firstly, ethics as a branch of philosophy is inherently about argumentation, debate, discussion and negotiation on matters such as practical moral issues, the morality of one’s actions and the definition of morality itself. As there are no black-and-white answers, communication and argumentation are fundamental properties of ethical reasoning. Therefore, in an attempt to approach ethics as a "live", interactive and ever-evolving process, focus groups were considered a good methodological tool for capturing this angle. To our knowledge, this has not been done before in the computing ethics literature.
Secondly, our study is exploratory in nature and our aim was to investigate the topic broadly rather than in depth (Stokes and Bergin, 2006); the aim is that our focus groups will be supplemented later in the research with more in-depth interviews. Thirdly, focus groups can be less expensive and faster for data collection than one-to-one interviews. Fourthly, compared to one-to-one interviews, where the researcher is at risk of biasing the interview (Vyakarnam, 1995), focus groups reduce this risk, mainly because of the more balanced social dynamic within the group. However, this depends on the style of the group moderators and on group homogeneity. Homogeneity (Morgan, 2008a) is based upon the shared characteristics of the group participants relevant to the research topic; in this case, for example, one common characteristic was the professional field, e.g. academics were placed in one group and people from industry in another.
Each group was asked to discuss its views on three open questions (see below) and spent approximately 30 minutes discussing each question. The number of participants in each focus group was kept at 10–12; this meant that there was one industry group (10 participants), but it necessitated creating two academic groups (11 and 12 participants) and two citizen groups (11 participants per group).
Each focus group began with an introduction to the main goals of the Ethics4EU project. Consent was obtained from each participant, with the clear understanding that they had the right to withdraw at any time. The participants were also informed that their privacy would be respected and that the data from this research would be secured in a protected location, in adherence to the GDPR. This introduction process took 10–15 minutes, depending on the number of questions from the individual groups.
Each focus group was assigned a moderator. In our study, the group moderators took a semi-directive approach, allowing members of the group to speak freely while encouraging all members to participate for each of the three questions. According to Morgan (2008a, p. 354), "[t]his strategy matches goals that emphasize exploration and discovery".
The moderator facilitated the introduction, introduced the questions and encouraged all participants to contribute. They also made sure no single person dominated the discussion. They did not attempt to steer the discussion in any direction, rather letting the topics emerge from the participants. In addition, moderators took detailed notes of the discussion using pen-and-paper. Audio recordings were not taken.
- What ethical concerns do you have about new technologies?
- What skills or training should people have to protect themselves in the online world?
- What ethical training should be given to persons designing and developing technology, and who do you think should give that training?
One reported downside of focus groups is the tendency of some participants to conform to a group consensus (Stokes and Bergin, 2006). Focus groups also cannot delve deeply into each participant’s range of opinions and beliefs on any given topic. However, these disadvantages were acknowledged from the start and were not deemed to interfere negatively with the scope of the current study.
3.1 Participant demographics
3.1.1 Industry participants.
The majority of the participants (80%) were aged 30–49, and 20% were aged 50–69. Half of the participants (50%) were female and 40% were male, with 10% preferring not to say. A total of 30% of industry participants had a bachelor’s degree, 60% had a master’s degree and 10% had a PhD.
3.1.2 Academic participants.
A majority of the participants (65%) were aged 30–49, 13% were aged 18–29 and 22% were aged 50–69. A total of 48% of the participants were female and 43% were male, with 9% preferring not to say. A total of 9% had a bachelor’s degree, 39% had a master’s degree and 52% had a PhD.
3.1.3 Citizen participants.
A majority of the citizens (82%) were aged 30–49, 14% were aged 18–29 and 4% were aged 50–69. A total of 64% of the participants were female and 32% were male, with 4% preferring not to say. A total of 18% had second-level education, 50% had a bachelor’s degree and 32% had a master’s degree. The professions of the citizen participants are shown in Table 1; where there was more than one participant in a given profession, the number is shown in brackets after the profession. A total of 17 professions are represented among the participants.
4. Thematic analysis
Define the coding categories: When the focus groups were completed, the researchers familiarised themselves with the data by reading the transcripts many times, looking for patterns and themes across the data. Initially, a colour-coding approach was used to identify the main themes emerging from the transcripts. This involved highlighting different parts of the transcripts in different colours to represent initial themes, giving the researchers the ability to look at emerging themes "at a glance" and to explore the balance of text relating to each theme.
Assign code labels to the categories: From this first step, a preliminary, tentative set of text codes was created to describe the computing ethics topics emerging from the transcripts. These codes replaced the coloured text. Examples of the initial codes included digital-literacy, where-law-overlaps-ethics and older-people-concerns.
Classify relevant information into the categories: Following this initial process, the transcripts were re-read and the initial codes were attached to all relevant text. It was found that in some cases these codes were too general, e.g. one of the early themes, data-ethics, was later deemed too broad; in other cases the codes were too specific, e.g. ACM-professional-code-of-ethics.
Refine the codes: Following the identification of codes that were not fully suitable, those that were too general were further refined, e.g. data-ethics became data-ethics-privacy, data-ethics-reliability, data-ethics-retention and data-ethics-misuse, and those that were too specific were merged, e.g. ACM-professional-code-of-ethics, employee-responsibilities and organizational-specific-guidelines became professional-ethics. Some other codes were renamed, e.g. relevant-European-laws became importance-of-GDPR. This was an iterative process that took three weeks to complete.
Test the reliability of the coding: The reliability of the coding was tested by asking an independent reviewer to code one of the transcripts without having access to our coding process (the independent-coder method). There was a strong overlap between the two coding processes, thereby validating the approach.
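The paper reports a strong overlap between the two coders but does not state how it was measured; for illustration only, one standard way to quantify inter-coder agreement is Cohen's kappa, sketched below in Python. The code labels reuse examples named above, but the segment-by-segment assignments are invented.

```python
# Illustrative only: quantifying inter-coder agreement with Cohen's kappa.
# Labels reuse codes named in the text; the assignments are invented.
from sklearn.metrics import cohen_kappa_score

researcher_codes = ["data-ethics-privacy", "professional-ethics",
                    "digital-literacy", "data-ethics-misuse",
                    "importance-of-GDPR"]
independent_codes = ["data-ethics-privacy", "professional-ethics",
                     "digital-literacy", "data-ethics-retention",
                     "importance-of-GDPR"]

kappa = cohen_kappa_score(researcher_codes, independent_codes)
print(f"Cohen's kappa: {kappa:.2f}")  # ~0.76 here; values above ~0.6
                                      # are usually read as substantial
                                      # agreement
```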
The final codes are the key themes of the transcripts, which are described in the subsections below. Each theme is highlighted in bold. Responses from the different groups are presented comparatively for each question. Where there were multiple groups of participants, we have combined the responses.
4.1 What ethical concerns do you have about new technologies?
4.1.1 Industry responses.
There was agreement amongst the participants that one of the major areas of concern was the ongoing automation of activities previously undertaken by human beings (a concern further exacerbated, for the group, when the automation is achieved through machine learning). Participants discussed what to do with people whose jobs are being replaced by machines: "we’re developing technology that is taking jobs away from people, and although they can reskill, it is not clear that enough new jobs will be created to replace those that are going to be lost".
One of the most discussed considerations was the challenge of bias in automated decision-making systems, and this issue was examined from the interrelated perspectives of bias in datasets and bias in machine learning algorithms. When discussing bias in datasets, participants highlighted the potential dangers of using open datasets, which may not have been analysed for completeness and "may exclude particular populations, for example, those on the margins", and yet the conclusions derived from the analysis of these datasets may be presented as fact. There was also a good deal of discussion of potential historical patterns of bias in datasets (including issues around gender and race) and how to prevent those historical issues from being propagated. Suggestions for solutions included "exploring patterns for bias in data, looking at statistical variances, examining who owns or controls the data and how the datasets were created". Participants suggested that this type of analysis "should look at composite biases that are more difficult to detect, for example, hiring women over a certain age in employment practices". It was also suggested that the "GDPR guidelines are useful for exploring bias". A sketch of the kind of representativeness check described here is given below.
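One simple form the check the participants describe could take is comparing each group's share of a dataset against an external baseline such as census figures. Both distributions, the function name and the 5-percentage-point flag in this Python sketch are invented for illustration.

```python
# Sketch of a dataset representativeness check: compare each group's
# share of the dataset with an external baseline. All numbers invented.
def representation_gaps(dataset_shares, baseline_shares):
    """Per-group difference between dataset share and baseline share."""
    return {group: dataset_shares.get(group, 0.0) - share
            for group, share in baseline_shares.items()}

dataset_shares  = {"women": 0.28, "men": 0.70, "other/unknown": 0.02}
baseline_shares = {"women": 0.51, "men": 0.48, "other/unknown": 0.01}

for group, gap in representation_gaps(dataset_shares, baseline_shares).items():
    flag = "  <- under-represented" if gap < -0.05 else ""
    print(f"{group}: {gap:+.2f}{flag}")
```

Checks like this catch only simple, single-attribute cases; the composite biases the participants mention (e.g. age combined with gender) require examining intersections of attributes rather than single columns.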
There were also discussions of environmental considerations in the collection of data, particularly in the context of the Internet of Things; one participant mentioned the role of data centres in contributing to the environmental impact.
One organisation highlighted for its excellence in ethics was the German software company SAP SE, which incorporates a great deal of ethics in its graduate training programmes. A representative from SAP described how their training "provided three examples of where there had been ethical compliance breaches, and one example looked at a senior female colleague working in Israel and Korea and how she was treated differently in different countries." This discussion led on to a conversation about regional and cultural differences in ethical standards.
Participants discussed IT professional standards and their relation to ethical standards, in particular whether IT professional standards can ensure that ethical standards are adhered to. Some participants felt it was important to note that employees very often do not have control over their work and their use of data and code, may not be aware of where their work will be used or how it can be repurposed, and that there is no clear guidance in IT professional standards for such cases.
Finally, all participants agreed that students should be educated about legal frameworks with relevant ethical aspects including Confidentiality Agreements, Non-Disclosure Agreements and Intellectual Property.
4.1.2 Academia responses.
The academic participants began by discussing their concerns about data and datasets, one participant remarking: "I worry about the amount of personal information that is being kept, I think we are keeping a lot more data than we need." Both groups felt that many organisations seem to be collecting as much data as possible with no clear purpose, other than a feeling that there is something of value in the data. They also expressed concerns about the completeness and representativeness of datasets (particularly open source datasets) and the potential bias that an AI system might embody in its decision-making if trained on such data sets. This led to a discussion on the ownership of decisions in an online context: for example, if a search engine is suggesting search phrases and suggesting sites to visit, who is really making the decisions? One participant asked, "where is the line between me and the application?" Another concern raised was the danger of technological or digital colonisation in developing countries and a lack of control of their data on the part of citizens in the developing world.
Another concern was that data sets might exclude certain people for privacy reasons, because they might be identifiable due to their specific characteristics, and such people could therefore be unrepresented in the overall data sets. This conversation about marginalisation led to a discussion of voice recognition technologies; for example, one participant remarked that voice assistants such as Alexa might not be as effective for people with speech impediments or strong regional accents.
On the theme of privacy, participants expressed concern over the possibility of governments or private organisations combining various data, "personal, legal and financial information", about an individual and the impact that might have. One example cited by a participant was the South Korean government limiting the number and size of gambling transactions that any one person can make per day when gambling online. Even though there was general agreement that too much gambling is a bad thing, the notion of the government being able to restrict an individual’s liberty was considered objectionable; participants felt that education and help are preferable to control. Participants remarked that a combination of control and educational measures has been used by governments to reduce the number of people who smoke.
Both groups highlighted the importance of cybersecurity as an ethical imperative, in the context of the significant amount of personal data being collected on individuals as well as the numerous high-profile data leaks that have occurred.
Both groups discussed research projects and the importance of ethics in research projects, particularly those funded by public monies. They discussed the importance of transparency in research projects and how this should be communicated to the public; for example, participants discussed how it is important to make clear "whether enticements or incentives were permitted". They discussed the importance of research communication more generally and how this is an ethical issue, for example how research findings are presented to the general public, who may not be familiar with the general area or the specific details. Finally, both groups agreed on the importance of teaching students about plagiarism, copyright and the honest presentation of results, and of outlining how such practices violate ethics, as it is unfair to take credit for another person’s intellectual property or to present misleading results or information.
4.1.3 Citizen responses.
Many of the concerns of the citizen participants related to data. Common themes were the collection and retention of data and the misuse of data, in particular how data collected on individuals may be used and misused. Data gathering for the purpose of creating profiles, for example voter profiles intended to influence democratic elections, was mentioned frequently by the participants.
Privacy was another topic discussed by all participants, in particular data appropriation by privately owned businesses, which do not reward their users for the acquisition of such data and use it to maximise their profits. Others raised concerns about third parties gaining access to data without the original user’s consent. Concerns were raised about applications (e.g. Facebook, Siri) that can listen to conversations through digital devices and use that information for targeted advertising. The balance between commercial gain and social benefit was also discussed.
Respondents cited a number of concerns related to social media, including the normalisation of unacceptable behaviours, inappropriate exposure to information and content, the widespread dissemination of misinformation, the fuelling of addictive tendencies and the preying on vulnerable individuals. Social media platforms can have a negative effect on individuals’ psyches, leading to mental health issues, and can reduce genuine human interaction and empathy. Particular concerns were raised about the influence of social media on teenagers and young people, for example cyberbullying and a lack of awareness on the part of young people about the longevity of data online. One participant said: "people who come into the public eye are likely to have their social media postings from decades earlier, where they were perhaps a younger and less-informed person, examined for any failures to be used against them". There were also concerns about misinformation online and the sharing of "fake news" via social media, and the detrimental impact that can have on younger, more impressionable people; one participant noted: "As a parent I wonder how safe my teenagers are online, they lack the experience and skills to distinguish reputable sources from fake news".
The participants also felt that the surveillance of individuals, in particular using facial recognition technology, was of great concern, with surveillance companies potentially storing images and information about people who are unaware they have been captured on camera, for example while walking on the street, going shopping or entering commercial buildings. One participant noted that there are cultural differences in how surveillance technologies are being used: "Surveillance is creeping up in prevalence but thus far I believe it is being used for legitimate and good purposes, for solving crimes, preventing crimes, and finding missing persons, in Ireland at least. I wouldn’t be quite so sure about other countries and other one-party states."
4.2 What skills or training should people have to protect themselves in the online world?
4.2.1 Industry responses.
The group analysed this question from the point of view of their work as software designers and developers. There was a discussion centred on the notion of consequence scanning, where designers and developers try to predict the consequences of the software they are asked to create. For example, designers and developers should consider what the software should and should not do, the worst possible negative consequence of the software and how the software would behave if repurposed for another system.
Generally, the participants reflected that a lot of employees in organisations do not get to see the "big picture" and therefore do not have the opportunity to evaluate the ethical implications of the processes they are involved in. On the other hand, managers who have the bigger picture, but do not know the exact details of how systems have been developed, might also miss some of the ethical implications of how the work of different designers and developers impacts each other from a moral point of view.
Another consideration discussed was the danger of using off-the-shelf code, particularly when used by naïve or novice designers and developers, who may not have thought through the full ethical implications of using that code, or may not have full information on how the off-the-shelf code works and therefore no awareness of the potential ethical issues. The conversation then moved to the important role that educators must play in exploring these issues.
The participants reflected that there is a need for more diversity in the IT profession, importantly among trainers, designers, developers and testers, so that "they can ask the ethical questions that others don’t think of".
4.2.2 Academia responses.
The groups looked at this question from the point of view of teaching students how to design and develop a computer system. They began by discussing the considerations that students should weigh before designing and developing a computer system; these considerations could broadly be characterised as consequence scanning. For example: what is the best outcome of this development? What is the worst unintentional outcome that could happen? How would I mitigate the worst outcome if it happened?
There was general agreement on the importance of "always keeping a human in the loop", and particularly on ensuring that there is consultation with persons with significant domain knowledge, as they will understand the context in which the technology will be used. There was also agreement that we have to encourage designers and developers to think more reflectively and "consider what the system they are developing is really for, and is really about". One participant suggested a possible scenario in which a developer was asked to create software following a specification, and it became evident that the system as a whole was designed to make the software addictive, even though this was not evident to the individual developers: what should they do?
There was also general agreement that explainability is vital in all automated decision-making processes. This explainability concept refers both to the ability to understand terminology such as "features" and "weights", and to the requirement that each individual decision the system takes can be explained. As part of this conversation, participants questioned whether or not it is appropriate to develop systems using partially correct data. Another participant mentioned the use of software libraries such as LIME for Python, which help explain how machine learning systems make specific predictions; a sketch of this kind of per-decision explanation is given below.
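A hedged sketch of such a per-decision explanation, using the LIME library the participant mentions, might look as follows. The model, feature names and data are stand-ins invented for the example, and the exact API should be checked against the lime documentation.

```python
# Sketch of a per-decision explanation with LIME (assumes the lime and
# scikit-learn packages are installed; model and data are stand-ins).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                   # toy feature matrix
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # toy binary target

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=["income", "age", "tenure"],  # illustrative names
    class_names=["declined", "approved"],
    mode="classification",
)

# Explain one individual decision as weighted feature contributions.
explanation = explainer.explain_instance(X[0], model.predict_proba,
                                         num_features=3)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

The output is a short list of human-readable feature conditions with signed weights, which is the kind of decision-level account the participants argue every automated system should be able to give.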
The groups agreed that designers and developers need to be aware of the law as it pertains to them, and of where the law overlaps with ethics. There was general agreement that ethical principles can often be of a higher standard than the law, but the groups wondered whether there is one set of ethical principles that should be followed by everyone. Further to this, there are different laws in different countries, and there may even be different ethical standards in different regions that developers should be aware of. The topic of outsourcing was discussed, where systems can be developed in one region but used in another, and different ethical standards can apply in the different regions.
The groups also discussed the nature of ethical standards, wondering where one can find (and find out about) standards and how ethical standards can be enforced. They also questioned whether ethical standards can keep up with the rapid development of software. Another issue discussed was the impact of unethical behaviour, which affects commercial activities but, more importantly, affects people and consumers. One participant highlighted the opaque terms and conditions that many users sign up to without reading, agreeing to things they either do not read or do not fully understand.
Another key issue discussed was accessibility and the importance of ensuring that as wide a range of people as possible can use the software being developed. One participant commented: "Sometimes the client might not be aware of, or concerned with, accessibility considerations, but does that mean the developers shouldn't consider it?"
Finally, there was some discussion of how well equipped academics are to teach this type of content, and what training or teaching materials academics require in order to become confident in teaching it. Both groups felt that such content should be made publicly available to private and public organisations.
4.2.3 Citizen responses.
Participants in this group took a broad view of the question. The issue of the longevity of digital information was raised as a concern, particularly social media posts that may be innocuous in the context in which they were created but could be misunderstood or misrepresented without that context and could prove detrimental in the future. As one participant phrased it: "For all groups they need to be made aware, understand implications of posting information to a world that is never deleted, follows them around forever, for example, years old tweets coming back to haunt people and impact on work opportunities".
Participants felt that people should understand how data can be obtained by others and should be taught about their digital security and about online platforms, including privacy settings. As one participant said: "People are unaware they are 'the product' in many cases. I think the phone companies and social media companies need to be much more transparent when people open accounts about ethical issues around obtaining". Another participant gave the example of digital assistants like Siri or Alexa "always listening to conversations in the home and using the information for marketing purposes". At a more fundamental level, digital literacy was considered extremely important; one participant remarked: "People should know the basics of digital literacy, cookies come to mind. I don't fully understand these, yet every website asks to allow them be used."
Both groups agreed that parents need specific training to help them navigate the online world and to understand the implications of having a digital presence. They suggested that training should include using parental controls (including monitoring tools) and other ways of securing devices (in hardware and software), knowing some of the key social media applications, how to deal with cyberbullying, and how to talk to their children "about the positives and negatives of social networks, and ensuring they keep the channels open to enable children to discuss issues or bumps they encounter in their cyber journeys". Another participant suggested that parents and children "should be taught about risks, to have an awareness of strangers online and false accounts or information". They also felt it would be extremely helpful to learn about the addictive nature of social media applications and smartphones, and to receive advice on how to limit their children's use of these technologies. They stated it would be helpful to know what is legal and illegal in terms of sharing and downloading audio and video files.
Both groups also agreed that older people need training to help them navigate the online world, particularly if it could be tailored to their interests, including lifestyle applications (health, banking, shopping) and privacy settings on their devices and applications. Most participants felt that training about scams and fraud would also help, as well as general personal data protection online. All participants felt that some older people may not want to (or be able to) access digital services, and therefore offline services (government services, libraries, postal services) should be maintained for this age group if that is their preference. One participant commented that "the move to online services is regrettable particularly since some older people may not have a laptop or smartphone, may not have an internet connection (their area may not have coverage), or may not be comfortable using online banking applications, and may therefore be far more vulnerable to phone scams".
Participants also mentioned that "voluntary organizations, clubs, societies also need training on the do's and don'ts of social media and use of members' data". They also felt these groups should be trained on how to recognise and report inappropriate content. Finally, they felt that "all elements of security awareness from spamming, phishing, identifying secure sites, should be taught, with real life examples".
4.3 What ethical training should be given to persons designing and developing technology, and who do you think should give that training?
4.3.1 Industry responses.
The first topic the participants discussed in some detail was the importance of the GDPR (the General Data Protection Regulation, 2016/679) and of ensuring that, before graduation, students know how to perform data protection impact assessments, can handle data securely and appreciate the importance of handling sensitive data. There was also agreement that a work placement during a university course can be very beneficial for learning the importance of computing ethics and can teach students lessons that may be more difficult to teach in the classroom.
The discussion then turned to what other skills need to be taught, and participants suggested "how to develop empathy" and "respecting social norms" as fundamental skills. Additionally, there was agreement on the importance of giving students the ability to deal with situations where they are asked to do something unethical, including "the tools to ask further questions in situations where there appears to be unethical activities occurring to gain a deeper understanding of the situation". Participants also noted that another way to help students develop an appreciation of computing ethics is to help them understand why ethical breaches occur.
Participants felt that the most effective way to teach ethics would be to incorporate it into existing modules rather than creating a dedicated separate module. Something as simple as writing a short reflective piece on how data protection or computing ethics applies to a specific module might raise awareness, and it was suggested that someone external to the module could review the content of these pieces.
Finally, everyone agreed that ethics is a broad societal issue and that it is up to everyone to help develop their personal, and societal, understanding of ethical standards.
4.3.2 Academic responses.
The groups stressed the importance of communication, teamwork and, most especially, a sense of personal responsibility as key skills that need to be taught as part of computing ethics courses. One participant mentioned the importance of graduates understanding the concepts of "informed consent and voluntary consent". All participants agreed that "confidence is also a vital skill in graduates, including having the confidence to ask difficult questions of colleagues and of management", as well as the confidence to speak up when unethical issues arise and the confidence to adapt to a changing environment. A related discussion concerned how to equip graduates with the ability to think critically about their work once they are working in a company, and to appreciate that there may be conflict between ethics and profit.
Participants also felt that it was vitally important that graduates be equipped with a good understanding of data science, given the recent advances in the field. Participants felt it is important for students "to explore the power and dangers of aggregate data, including a discussion of how to make an individual's data private using techniques such as differential privacy". Another participant commented that students should be equipped with "an understanding of both bias and representativeness of datasets". A sketch of the differential privacy idea follows below.
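To make the differential privacy reference concrete, here is a minimal sketch of the Laplace mechanism, the textbook way of releasing a numeric statistic with epsilon-differential privacy. The query, sensitivity and epsilon values are hypothetical choices for illustration and are not part of the study.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Add Laplace noise with scale sensitivity/epsilon, the standard
    mechanism for an epsilon-differentially private numeric release."""
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# Hypothetical count query: adding or removing one person changes a count
# by at most 1, so the sensitivity is 1. Smaller epsilon = more privacy, more noise.
true_count = 412
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(round(private_count))
```

The design point for students is the privacy/utility trade-off: the noise scale grows as epsilon shrinks, so an analyst must decide how much accuracy to give up in exchange for a stronger privacy guarantee.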
The groups discussed the relationship between ethics and the law, and how knowledge of the GDPR is very important for all graduates. From there the discussion moved on to the potential conflict between legal issues and ethical issues, and what choices graduates should make in those circumstances. One participant commented that "it is important that every organisation should have a clear code of ethics, and promote that code, and promote ethical thinking in their organisation".
4.3.3 Citizen responses.
The participants felt that the key skills for people designing and developing technology are "sympathy and empathy to think about the people who will be using the technology", and that it is "important to realise that not everyone is a technology wizard". They felt that using new technology can be difficult for many end-users and that many of them may not even think about or understand the ethical implications of technology. The participants also noted that even if users fully understand how to use a system, "that doesn't mean I fully understand how it works, so there might be a whole layer of ethical issues that are not visible to me".
The groups felt that ethical training about "laws, codes, and policies" is also very important, and both groups mentioned that the people designing and developing technology must have the confidence and courage to ask questions of their organisations, and of themselves, to ensure that the highest ethical standards are being adhered to.
The majority of participants thought that universities should be responsible for the ethical training of persons designing and developing technology. Some suggested the responsibility lies with employers; a few thought it should be part of continuing professional development; others thought it should be an individual's personal responsibility.
4.3.4 Overlapping concerns among all participants.
The final codes from the focus group data were collated and common themes between participant groups were identified. There were clear thematic overlaps among all groups: privacy, consequence scanning and where the law overlaps with ethics. Overlapping themes between academics and industry professionals were bias in automated decision-making systems, regional and cultural differences in ethical standards and the importance of the GDPR. Overlapping themes between academics and citizens were misuse of data, widespread dissemination of misinformation, surveillance of individuals (including facial recognition technology) and accessibility (Figure 1).
Common themes mentioned by industry professionals and citizens were automation replacing human beings and the longevity of digital information. Themes discussed by only one group are also shown. Notably, academics alone were concerned about the completeness and representativeness of datasets, the impact of technology in the developing world and the explainability of automated decision-making; industry professionals alone discussed environmental considerations in the collection of data, the dangers of using off-the-shelf code and the lack of diversity in the ICT industry; and citizens alone spoke about third-party access to data, digital literacy and how older people may need help navigating the online world.
It should be noted that, although the sample size was reasonably large, we did not investigate why some issues were mentioned by one group and not by another. We must therefore be tentative in our conclusions and avoid generalising beyond our sample.
5. Discussion and conclusions
In this paper we have presented a review of the pertinent computing ethics literature from 2014 to 2019 and the results of a multi-stakeholder analysis in which we examined the computing ethics issues deemed most pertinent by computer science academics, ICT professionals and citizens. The focus groups revealed a combination of overlapping concerns as well as concerns unique to each group. All groups expressed concerns around data: privacy, data collection and secure storage, bias in datasets and data misuse. All groups also expressed concern that developers often lack empathy and do not fully understand the user groups for whom they are developing technology. Academics were concerned about how computing ethics is taught to computer science students, often as a standalone course or module that does not reflect the distributed and interrelated nature of computing ethics concerns, which cut across many computer science topics. Industry participants highlighted legal aspects, including the importance of the GDPR and the legitimate use of data. Citizens expressed a broad range of concerns about social media applications, including concerns that social media technology has led to the normalisation of unacceptable behaviours, inappropriate exposure and the preying on of vulnerable individuals. Citizens also highlighted the need for training for an online world, for example, how to deal with cyberbullying and how to identify possible scams and fraud.
The topics discussed by the focus groups overlap well with our findings from the computing ethics literature: concerns around data ethics and automated decision-making systems, as well as about privacy and the influence of social media, were voiced by participants. Participants also discussed topics less well developed in the literature, including the environmental impact of computing, the enforcement of ethical standards, the role of personal responsibility in developing technologies and the training needs of specific groups.
It is clear from the analysis that there is a broad range of computing ethics concerns and that all stakeholders are considering the associated dilemmas, pitfalls and solutions. The focus groups considered contemporary topics, many of which have only fully emerged in the last decade, and it is very likely that new technologies with new sets of ethical dilemmas will emerge soon. For example, this work was conducted in 2019, prior to the COVID-19 pandemic, about which much has since been written concerning the ethics of contact-tracing technologies and the associated privacy concerns.
The goal of this research was to collect the key computing ethics issues that are of concern to three stakeholder groups, in order to help develop teaching content for students on computer science programmes. These three stakeholder groups represent the transition that many undergraduate students undergo: before their programmes they are Citizens (knowing little about the detailed mechanics of how computers work or the ethical issues associated with them); during their programmes they become Academics (knowing more about how computers work and discussing ethical issues from an academic perspective); and when they graduate they become members of the Industry group (learning how computers are used in professional environments and which ethical issues come to the fore in practice).
The Venn diagram can therefore be seen as a set of themes or motifs that can be incorporated into a computer science programme to add substantive ethical content. The exact sequence in which the ethical content is taught will depend on the overall nature of the programme, but as students transition from novice to expert learners, the depth and complexity of the ethical discussions they can have can continuously grow and evolve.
Researchers such as Moore (2020) advocate that a computer ethics curriculum needs to be dynamic, evolving and relevant to students' lives and beliefs. Topics of genuine concern to students, and in particular the political nature of computing technologies (including many of the topics highlighted by the stakeholder groups), can therefore be used as a powerful source of motivation in educating future generations of computing students.
As stated earlier, in our experience the use of focus groups for investigating computing ethics is novel, but it can prove particularly useful because it provides qualitative insights into how academics, industry professionals and citizens reason about the moral issues raised by rapidly evolving digital technologies.
While technological innovation has many positive aspects, we should not be blind to ethical and moral imperatives, and we should strive to develop responsible technology as a first principle; otherwise, the consequences can prove difficult to reverse. Universities have a responsibility to ensure these blind spots do not exist and that students who design and develop technology are taught to consider both the expected and unexpected, and the positive and negative, consequences of any systems they implement. We will address the concerns uncovered via the literature and the analysis of the insights from our focus groups as part of our research in the Ethics4EU project. More specifically, the project consortium is developing educational material in the form of lessons that address topics such as bias in AI; the creation, use, longevity and environmental impact of datasets; programming errors; accessibility; privacy and facial recognition; and the ethics of smart devices and pervasive computing. Initial educational materials are available via the Ethics4EU website at http://ethics4eu.eu/bricks.
Figure 1. Overlapping concerns between participants from focus groups
Figure 2. Professions of citizen participants
ACM (1992), "ACM code of ethics and professional conduct", Code of Ethics.
Braunack-Mayer, A.J., Street, J.M., Tooher, R., Feng, X. and Scharling-Gamba, K. (2020), "Student and staff perspectives on the use of big data in the tertiary education sector: a scoping review and reflection on the ethical issues", Review of Educational Research, Vol. 90 No. 6, pp. 788-823.
Brey, P.A.E. (2012), "Anticipatory ethics for emerging technologies", NanoEthics, Vol. 6 No. 1, pp. 1-13.
Bynum, T.W. (1999), "The development of computer ethics as a philosophical field of study", The Australian Journal of Professional and Applied Ethics, Vol. 1 No. 1, pp. 1-29.
Bynum, T.W. (2000), "The foundation of computer ethics", ACM SIGCAS Computers and Society, Vol. 30 No. 2, pp. 6-13, doi: 10.1145/572230.572231.
Bynum, T.W. (2006), "Flourishing ethics", Ethics and Information Technology, Vol. 8 No. 4, pp. 157-173.
Bynum, T.W. (2018), "Computer and information ethics", in Zalta, E.N. (Ed.), The Stanford Encyclopedia of Philosophy (Summer 2018 Edition), Metaphysics Research Lab, Stanford University.
DAMA International (2017), DAMA-DMBOK: Data Management Body of Knowledge, Technics Publications, LLC.
Ethics4EU (2021), "Research report on European values for ethics in technology", Erasmus+ Project, available at: http://ethics4eu.eu/european-values-for-ethics-in-technology-research-report/
Flanagan, M., Howe, D.C. and Nissenbaum, H. (2008), "Embodying values in technology: theory and practice", Information Technology and Moral Philosophy, Vol. 322.
Floridi, L. (1999), "Information ethics: on the philosophical foundation of computer ethics", Ethics and Information Technology, Vol. 1 No. 1, pp. 33-52.
Floridi, L. (2014), The Fourth Revolution: How the Infosphere is Reshaping Human Reality, OUP, Oxford.
Floridi, L. and Sanders, J.W. (2005), "Internet ethics: the constructionist values of homo poieticus", The Impact of the Internet on Our Moral Lives, pp. 195-214.
Floridi, L. and Taddeo, M. (2016), "What is data ethics?", Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, Vol. 374 No. 2083, p. 20160360, doi: 10.1098/rsta.2016.0360.
Gibbs, A. (1997), "Focus groups", Social Research Update, Vol. 19 No. 8, pp. 1-8.
Gorden, R. (1992), Basic Interviewing Skills, F.E. Peacock.
Gotterbarn, D. (1991), "Computer ethics: responsibility regained", National Forum, Vol. 71 No. 3, p. 26, available at: http://search.proquest.com/openview/fdd917c9e0dbb6018e73d2e11d53229f/1?pq-origsite=gscholar&cbl=1820941
Gotterbarn, D., Wolf, M.J., Flick, C. and Miller, K. (2018), "Thinking professionally: the continual evolution of interest in computing ethics", ACM Inroads, Vol. 9 No. 2, pp. 10-12, doi: 10.1145/3204466.
Helbing, D., Frey, B.S., Gigerenzer, G., Hafen, E., Hagner, M., Hofstetter, Y., Van Den Hoven, J., Zicari, R.V. and Zwitter, A. (2019), "Will democracy survive big data and artificial intelligence?", Towards Digital Enlightenment, Springer, pp. 73-98.
Hilty, L.M. (2015), "Ethical issues in ubiquitous computing – three technology assessment studies revisited", Ubiquitous Computing in the Workplace, Springer, pp. 45-60.
Jacobs, A.R. and Abowd, G.D. (2003), "A framework for comparing perspectives on privacy and pervasive technologies", IEEE Pervasive Computing, Vol. 2 No. 4, pp. 78-84.
Johnson, D. (1985), Computer Ethics, Prentice-Hall, Englewood Cliffs, NJ.
Kranzberg, M. (2019), Ethics in an Age of Pervasive Technology, Routledge.
Kumar, A., Braud, T., Tarkoma, S. and Hui, P. (2020), "Trustworthy AI in the age of pervasive computing and big data", 2020 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), pp. 1-6.
Loi, M., Heitz, C., Ferrario, A., Schmid, A. and Christen, M. (2019), "Towards an ethical code for data-based business", 2019 6th Swiss Conference on Data Science (SDS), pp. 6-12.
Macnish, K. (2017), The Ethics of Surveillance: An Introduction, Routledge.
Mittelstadt, B.D., Allo, P., Taddeo, M., Wachter, S. and Floridi, L. (2016), "The ethics of algorithms: mapping the debate", Big Data and Society, Vol. 3 No. 2, doi: 10.1177/2053951716679679.
Moor, J.H. (1985), "What is computer ethics?", Metaphilosophy, Vol. 16 No. 4, pp. 266-275.
Moore, J. (2020), "Towards a more representative politics in the ethics of computer science", Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 414-424.
Morgan, D.L. (2008a), "Focus groups", The SAGE Encyclopedia of Qualitative Research Methods, Sage Publications, pp. 352-354.
Morgan, D.L. (2008b), "Snowball sampling", The SAGE Encyclopedia of Qualitative Research Methods, Vol. 2, pp. 815-816.
Müller, V.C. (2020), "Ethics of artificial intelligence and robotics", in Zalta, E.N. (Ed.), The Stanford Encyclopedia of Philosophy, Stanford University, Palo Alto, CA, available at: https://plato.stanford.edu/archives/win2020/entries/ethics-ai/
O'Keefe, K. and O Brien, D. (2018), Ethical Data and Information Management: Concepts, Tools and Methods, Kogan Page Publishers.
Rasmussen, L.B., Beardon, C. and Munari, S. (2000), Computers and Networks in the Age of Globalization: IFIP TC9 Fifth World Conference on Human Choice and Computers, August 25-28, 1998, Vol. 57, Springer Science and Business Media, Geneva, Switzerland.
Saltz, J.S. and Dewar, N. (2019), "Data science ethical considerations: a systematic literature review and proposed project framework", Ethics and Information Technology, Vol. 21 No. 3, pp. 197-208.
Saltz, J., Skirpan, M., Fiesler, C., Gorelick, M., Yeh, T., Heckman, R., Dewar, N. and Beard, N. (2019), "Integrating ethics within machine learning courses", ACM Transactions on Computing Education (TOCE), Vol. 19 No. 4, pp. 1-26.
Saumure, K. and Given, L.M. (2008), "Convenience sample", The SAGE Encyclopedia of Qualitative Research Methods, Vol. 125.
Shahriari, K. and Shahriari, M. (2017), "IEEE standard review – ethically aligned design: a vision for prioritizing human wellbeing with artificial intelligence and autonomous systems", 2017 IEEE Canada International Humanitarian Technology Conference (IHTC), pp. 197-201.
Stokes, D. and Bergin, R. (2006), "Methodology or 'methodolatry'? An evaluation of focus groups and depth interviews", Qualitative Market Research: An International Journal, Vol. 9 No. 1, pp. 26-37, doi: 10.1108/13522750610640530.
Taddeo, M. and Floridi, L. (2018), "How AI can be a force for good", Science, Vol. 361 No. 6404, pp. 751-752, doi: 10.1126/science.aat5991.
Thornley, C.V., Murnane, S., McLoughlin, S., Carcary, M., Doherty, E. and Veling, L. (2018), "The role of ethics in developing professionalism within the global ICT community", International Journal of Human Capital and Information Technology Professionals (IJHCITP), Vol. 9 No. 4, pp. 56-71.
Vyakarnam, S. (1995), "FOCUS: focus groups: are they viable in ethics research?", Business Ethics: A European Review, Vol. 4 No. 1, pp. 24-29.
Webb, H., Patel, M., Rovatsos, M., Davoust, A., Ceppi, S., Koene, A., Dowthwaite, L., Portillo, V., Jirotka, M. and Cano, M. (2019), "It would be pretty immoral to choose a random algorithm", Journal of Information, Communication and Ethics in Society, Vol. 17 No. 2.
Weizenbaum, J. (1976), Computer Power and Human Reason: From Judgment to Calculation, W.H. Freeman & Co., San Francisco.
Whittaker, M., Crawford, K., Dobbe, R., Fried, G., Kaziunas, E., Mathur, V., West, S.M., Richardson, R., Schultz, J. and Schwartz, O. (2018), AI Now Report 2018, AI Now Institute at New York University, New York, NY.
Zachman, J.A. (2008), The Zachman Framework: The Official Concise Definition, Zachman International.
Zikmund, W.G. (1997), Exploring Marketing Research, Dryden Press, Fort Worth, TX.
Further reading
Davies, N. (2013), "Ethics in pervasive computing research", IEEE Pervasive Computing, Vol. 12 No. 3, pp. 2-4.
Acknowledgements
Disclaimer: All authors were involved in the data collection process, reviewing the content of the paper and contributing to the authorship. Funding disclaimer: This paper is part of the Ethics4EU project, which is co-funded by the Erasmus+ Programme of the European Union under Grant Agreement No. 2019-1-IE02-KA203-000665. The European Commission's support for the production of this publication does not constitute an endorsement of the contents, which reflect the views only of the authors, and the Commission cannot be held responsible for any use which may be made of the information contained therein. Conflicts of interest/competing interests: The authors have no conflicts of interest to declare that are relevant to the content of this article.