Report Taha
This topic discusses the ethical issues raised by the development, deployment,
and use of AI. It starts with a review of the (ethical) benefits of AI and then
presents the findings of different studies on what people perceive to be
ethical issues. The use of various AI technologies can lead to unintended but
harmful consequences, such as privacy intrusion; discrimination based on gender,
race/ethnicity, sexual orientation, or gender identity; and opaque decision-making,
among other issues. Addressing existing ethical challenges and building
responsible, fair AI innovations before they are deployed has never been more
important. These issues are discussed using the categorization of AI technologies
introduced earlier. Detailed accounts are given of ethical issues arising from
machine learning, from artificial general intelligence, and from broader socio-
technical systems that incorporate AI.
Table of contents
1. Introduction
1.1. Background of the field
1.2. Definition of Morality and Ethics
1.3. How It Relates to AI
2. Motivation
2.1. Automation and Employment
2.2. Singularity and Superintelligence
2.3. Existential Risk from Superintelligence
3. Discussion
4. Practical applications
4.1. Applications of AI for Health Research
4.2. Artificial Intelligence in Drug Development
5. Conclusion
6. Recommendations / Future Work
7. List of Abbreviations and Key Terms
8. Appendix / Attachments / Additional Information
9. References
1. Introduction:
Some technologies, like nuclear power, cars, or plastics, have caused ethical
and political discussion and significant policy efforts to control the trajectory
of these technologies, usually only once some damage is done. In addition to such
“ethical concerns”, new technologies challenge current norms and conceptual
systems, which is of particular interest to philosophy. Finally, once we have
understood a technology in its context, we need to shape our societal response,
including regulation and law. All these features also exist in the case of new AI
and Robotics technologies—plus the more fundamental fear that they may end
the era of human control on Earth.
The ethics of AI and robotics has seen significant press coverage in recent
years, which supports related research, but also may end up undermining it: the
press often talks as if the issues under discussion were just predictions of what
future technology will bring, and as though we already know what would be
most ethical and how to achieve that. Press coverage thus focuses on risk,
security (Brundage et al. 2018, in the Other Internet Resources section below,
hereafter [OIR]), and prediction of impact (e.g., on the job market). The result
is a discussion of essentially technical problems that focus on how to achieve a
desired outcome. Current discussions in policy and industry are also motivated
by image and public relations, where the label “ethical” is really not much more
than the new “green”, perhaps used for “ethics washing”. For a problem to
qualify as a problem for AI ethics would require that we do not readily know
what the right thing to do is. In this sense, job loss, theft, or killing with AI is
not a problem in ethics, but whether these are permissible under certain
circumstances is a problem. This article focuses on the genuine problems of
ethics where we do not readily know what the answers are.
1.1. Definition of Morality and Ethics and How It Relates to AI
Ethics are moral principles that govern a person's behavior or the conduct of an
activity. As a practical example, one ethical principle is to treat everyone with
respect.
Robots and artificial intelligence (AI) come in various forms, as outlined above,
each of which raises a different range of ethical concerns.
Social impacts: this section considers the potential impact of AI on the labor
market and economy and how different demographic groups might be affected.
It addresses questions of inequality and the risk that AI will further concentrate
power and wealth in the hands of the few. Issues related to privacy, human
rights and dignity are addressed as are risks that AI will perpetuate the biases,
intended or otherwise, of existing social systems or their creators. This section
also raises questions about the impact of AI technologies on democracy,
suggesting that these technologies may operate for the benefit of state-
controlled economies.
Psychological impacts: what impacts might arise from human-robot
relationships? How might we address dependency and deception? Should we
consider whether robots deserve to be given the status of 'personhood' and what
are the legal and moral implications of doing so?
Financial system impacts: potential impacts of AI on financial systems are
considered, including risks of manipulation and collusion and the need to build
in accountability.
Legal system impacts: there are a number of ways in which AI could affect the
legal system, including: questions relating to crime, such as liability if an AI is
used for criminal activities, and the extent to which AI might support criminal
activities such as drug trafficking. In situations where an AI is involved in
personal injury, such as a collision involving an autonomous vehicle, questions
arise around the legal approach to claims (whether the case is one of
negligence, which is usually the basis for claims involving vehicular accidents,
or product liability).
Environmental impacts: increasing use of AIs comes with increased use of
natural resources, increased energy demands and waste disposal issues.
However, AIs could improve the way we manage waste and resources, leading
to environmental benefits.
Impacts on trust: society relies on trust. For AI to take on tasks, such as surgery,
the public will need to trust the technology. Trust includes aspects such as
fairness (that AI will be impartial), transparency (that we will be able to
understand how an AI arrived at a particular decision), accountability (someone
can be held accountable for mistakes made by AI) and control (how we might
'shut down' an AI that becomes too powerful).
2. Motivation:
Thinking about superintelligence in the long term raises the question whether
superintelligence may lead to the extinction of the human species, which is called
an “existential risk” (or XRisk): superintelligent systems may well have
preferences that conflict with the existence of humans on Earth, and may thus
decide to end that existence—and given their superior intelligence, they will have
the power to do so (or they may happen to end it because they do not really care).
Perhaps there is even an astronomical pattern such that an intelligent species is
bound to discover AI at some point, and thus bring about its own demise. Such a
“great filter” would contribute to the explanation of the “Fermi paradox”: why there
is no sign of life in the known universe despite the high probability of it emerging.
It would be bad news if we found out that the “great filter” is ahead of us, rather
than an obstacle that Earth has already passed. These issues are sometimes taken
more narrowly to be about human extinction (Bostrom 2013), or more broadly as
concerning any large risk for the species (Rees 2018)—of which AI is only one.
3. Discussion:
The digital sphere has widened greatly: All data collection and storage is now
digital, our lives are increasingly digital, most digital data is connected to a
single Internet, and there is more and more sensor technology in use that
generates data about non-digital aspects of our lives. AI increases both the
possibilities of intelligent data collection and the possibilities for data analysis.
This applies to blanket surveillance of whole populations as well as to classic
targeted surveillance. In addition, much of the data is traded between agents,
usually for a fee.
At the same time, controlling who collects which data, and who has access, is
much harder in the digital world than it was in the analogue world of paper and
telephone calls. Many new AI technologies amplify the known issues. For
example, face recognition in photos and videos allows identification and thus
profiling and searching for individuals (Whittaker et al. 2018: 15ff). This
adds to other identification techniques, e.g., “device
fingerprinting”, which are commonplace on the Internet (sometimes revealed
in the “privacy policy”). The result is that “In this vast ocean of data, there is a
frighteningly complete picture of us” (Smolan 2016: 1:01). The result is
arguably a scandal that still has not received due public attention.
The data trail we leave behind is how our “free” services are paid for—but we
are not told about that data collection and the value of this new raw material,
and we are manipulated into leaving ever more such data. For the “big 5”
companies (Amazon, Google/Alphabet, Microsoft, Apple, Facebook), the main
data-collection part of their business appears to be based on deception,
exploiting human weaknesses, furthering procrastination, generating addiction,
and manipulation (Harris 2016 [OIR]). The primary focus of social media,
gaming, and most of the Internet in this “surveillance economy” is to gain,
maintain, and direct attention—and thus data supply. “Surveillance is the
business model of the Internet” (Schneier 2015). This surveillance and
attention economy is sometimes called “surveillance capitalism” (Zuboff
2019). It has caused many attempts to escape from the grasp of these
corporations, e.g., in exercises of “minimalism” (Newport 2019), sometimes
through the open source movement, but it appears that present-day citizens
have lost the degree of autonomy needed to escape while fully continuing with
their life and work. We have lost ownership of our data, if “ownership” is the
right relation here. Arguably, we have lost control of our data.
These systems will often reveal facts about us that we ourselves wish to
suppress or are not aware of: they know more about us than we know
ourselves.
Studies of what people perceive to be the ethical issues of AI have identified a long list of concerns, including:
Cost to innovation
Harm to physical integrity
Lack of access to public services
Lack of trust
“Awakening” of AI
Security problems
Lack of quality data
Disappearance of jobs
Power asymmetries
Negative impact on health
Problems of integrity
Lack of accuracy of data
Lack of privacy
Lack of transparency
Potential for military use
Lack of informed consent
Bias and discrimination
Unfairness
Unequal power relations
Misuse of personal data
Negative impact on justice system
Negative impact on democracy
Potential for criminal and malicious use
Loss of freedom and individual autonomy
Contested ownership of data
Reduction of human contact
Problems of control and use of data and systems
Lack of accuracy of predictive recommendations
Lack of accuracy of non-individual recommendations
Concentration of economic power
Violation of fundamental human rights in supply chain
Violation of fundamental human rights of end users
Unintended, unforeseeable adverse impacts
Prioritization of the “wrong” problems
Negative impact on vulnerable groups
Lack of accountability and liability
Negative impact on environment
Loss of human decision-making
Lack of access to and freedom of information
Advantages of AI
1. Zero Risks
Another big advantage of AI is that humans can avoid many risks by letting
AI robots take on dangerous tasks for us. Whether it is defusing a bomb,
going to space, or exploring the deepest parts of the oceans, machines with
metal bodies are naturally resilient and can survive unfriendly atmospheres.
Moreover, they can carry out risky work accurately and reliably, and they do
not wear out easily.
2. 24x7 Availability
Many studies show that humans are productive for only about 3 to 4 hours
a day. Humans also need breaks and time off to balance their work life and
personal life, but AI can work endlessly without breaks. AI systems think
much faster than humans, perform multiple tasks at a time with accurate
results, and can handle tedious, repetitive jobs with ease.
3. Digital Assistance
Almost all big organizations these days use digital assistants to interact
with their customers, which significantly reduces the need for human
support staff. You can chat with a chatbot and ask it exactly what you
need. Some chatbots have become so intelligent these days that you would
struggle to tell whether you are chatting with a chatbot or a human being.
4. New Inventions
AI has contributed to new inventions in almost every domain to solve
complex problems. For example, advanced AI-based technologies have
recently helped doctors predict early stages of breast cancer in women.
5. Unbiased Decisions
Human beings are driven by emotions, whether we like it or not. AI, on
the other hand, is devoid of emotions and highly practical and rational in
its approach. A potential advantage of artificial intelligence is that it need
not hold emotionally driven views, which can support more consistent
decision-making (although, as discussed below, AI systems can still
inherit bias from their training data).
Disadvantages of AI
1. High Costs
The ability to create a machine that can simulate human intelligence is no
small feat. It requires plenty of time and resources and can cost a great
deal of money. AI also needs to run on the latest hardware and software
to stay current and meet the latest requirements, which makes it quite
costly.
2. No Creativity
A big disadvantage of AI is that it cannot learn to think outside the box.
AI is capable of learning over time from pre-fed data and past
experiences, but it cannot be creative in its approach. A classic example is
the bot Quill, which can write Forbes earnings reports. These reports
contain only data and facts already provided to the bot. Although it is
impressive that a bot can write an article on its own, it lacks the human
touch present in other Forbes articles.
3. Increase in Unemployment
Perhaps one of the biggest disadvantages of artificial intelligence is that
AI is slowly replacing a number of repetitive tasks with bots. The
reduced need for human involvement has led to the disappearance of
many job opportunities. A simple example is the chatbot, which is a big
advantage for organizations but a nightmare for employees. A study by
McKinsey predicts that AI will replace at least 30 percent of human labor
by 2030.
4. No Ethics
Ethics and morality are important human features that can be difficult to
incorporate into an AI. The rapid progress of AI has raised concerns that
one day AI will grow uncontrollably and eventually wipe out humanity.
This moment is referred to as the AI singularity.
Bias and Discrimination
Bias typically surfaces when unfair judgments are made because the
individual making the judgment is influenced by a characteristic that is
actually irrelevant to the matter at hand, typically a discriminatory
preconception about members of a group. So, one form of bias is a
learned cognitive feature of a person, often not made explicit. The person
concerned may not be aware of having that bias—they may even be
honestly and explicitly opposed to a bias they are found to have (e.g.,
through priming, cf. Graham and Lowery 2004). On fairness vs. bias in
machine learning, see Binns (2018).
Apart from the social phenomenon of learned bias, the human cognitive
system is generally prone to have various kinds of “cognitive biases”,
e.g., the “confirmation bias”: humans tend to interpret information as
confirming what they already believe. This second form of bias is often
said to impede performance in rational judgment (Kahneman 2011)—
though at least some cognitive biases generate an evolutionary advantage,
e.g., economical use of resources for intuitive judgment. There is a
question whether AI systems could or should have such cognitive bias.
A third form of bias is present in data when it exhibits systematic error,
e.g., “statistical bias”. Strictly, any given dataset will only be unbiased
for a single kind of issue, so the mere creation of a dataset involves the
danger that it may be used for a different kind of issue, and then turn out
to be biased for that kind. Machine learning on the basis of such data
would then not only fail to recognize the bias, but codify and automate
the “historical bias”. Such historical bias was discovered in an automated
recruitment screening system at Amazon (discontinued early 2017) that
discriminated against women—presumably because the company had a
history of discriminating against women in the hiring process. The
“Correctional Offender Management Profiling for Alternative Sanctions”
(COMPAS), a system to predict whether a defendant would re-offend,
was found to be as successful (65.2% accuracy) as a group of random
humans (Dressel and Farid 2018) and to produce more false positives and
fewer false negatives for black defendants. The problem with such systems
is thus bias plus humans placing excessive trust in the systems. The
political dimensions of such automated systems in the USA are
investigated in Eubanks (2018).
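To make the disparity concrete, here is a minimal sketch, in plain Python with made-up toy data (not the COMPAS data), of how such an audit can be run: for each group, it computes the accuracy, false positive rate, and false negative rate of a set of binary predictions. Large gaps between groups in these error rates are the kind of disparity Dressel and Farid report.

# Minimal group-wise error audit (hypothetical toy data, not COMPAS).
def audit(y_true, y_pred, groups):
    stats = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        tp = sum(1 for i in idx if y_pred[i] == 1 and y_true[i] == 1)
        fp = sum(1 for i in idx if y_pred[i] == 1 and y_true[i] == 0)
        tn = sum(1 for i in idx if y_pred[i] == 0 and y_true[i] == 0)
        fn = sum(1 for i in idx if y_pred[i] == 0 and y_true[i] == 1)
        stats[g] = {
            "accuracy": (tp + tn) / len(idx),
            # false positive rate: share of true negatives wrongly flagged
            "fpr": fp / (fp + tn) if (fp + tn) else 0.0,
            # false negative rate: share of true positives missed
            "fnr": fn / (fn + tp) if (fn + tp) else 0.0,
        }
    return stats

# Toy example with two groups, "A" and "B".
y_true = [1, 0, 0, 1, 0, 1, 0, 0]
y_pred = [1, 1, 0, 0, 1, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(audit(y_true, y_pred, groups))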
There are significant technical efforts to detect and remove bias from AI
systems, but it is fair to say that these are in early stages: see UK Institute
for Ethical AI & Machine Learning (Brownsword, Scotford, and Yeung
2017; Yeung and Lodge 2019). It appears that technological fixes have
their limits in that they need a mathematical notion of fairness, which is
hard to come by (Whittaker et al. 2018: 24ff; Selbst et al. 2019), as is a
formal notion of “race” (see Benthall and Haynes 2019). An institutional
proposal is made in Veale and Binns (2017).
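One reason a mathematical notion of fairness is hard to come by is that the standard candidate definitions conflict with one another. As an illustration (these are well-known criteria from the fairness literature, not ones proposed in the works cited above), two common formalizations for a classifier \hat{Y}, protected attribute A, and true outcome Y are:

P(\hat{Y} = 1 \mid A = a) = P(\hat{Y} = 1 \mid A = b) \quad \text{for all groups } a, b \quad \text{(demographic parity)}

P(\hat{Y} = 1 \mid A = a, Y = y) = P(\hat{Y} = 1 \mid A = b, Y = y) \quad \text{for } y \in \{0, 1\} \quad \text{(equalized odds)}

When the base rate P(Y = 1 \mid A = a) differs between groups, a classifier can satisfy both criteria at once only if its predictions are statistically independent of the true outcome, which is one formal reason why “fairness” resists a single definition.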
4. Practical applications:
Care Robots
The use of robots in health care for humans is currently at the level of concept
studies in real environments, but it may become a usable technology in a few
years, and has raised a number of concerns for a dystopian future of de-
humanized care (A. Sharkey and N. Sharkey 2011; Robert Sparrow 2016).
Current systems include robots that support human carers/caregivers (e.g., in
lifting patients, or transporting material), robots that enable patients to do
certain things by themselves (e.g., eat with a robotic arm), but also robots that
are given to patients as company and comfort (e.g., the “Paro” robot seal). For
an overview, see van Wynsberghe (2016); Nørskov (2017); Fosch-Villaronga
and Albo-Canals (2019), for a survey of users Draper et al. (2014).
One reason why the issue of care has come to the fore is that people have
argued that we will need robots in ageing societies. This argument makes
problematic assumptions, namely that with longer lifespan people will need
more care, and that it will not be possible to attract more humans to caring
professions. It may also show a bias about age (Jecker forthcoming). Most
importantly, it ignores the nature of automation, which is not simply about
replacing humans, but about allowing humans to work more efficiently. It is
not very clear that there really is an issue here since the discussion mostly
focuses on the fear of robots de-humanizing care, but the actual and
foreseeable robots in care are assistive robots for classic automation of
technical tasks. They are thus “care robots” only in a behavioral sense of
performing tasks in care environments, not in the sense that a human “cares”
for the patients. It appears that the success of “being cared for” relies on this
intentional sense of “care”, which foreseeable robots cannot provide. If
anything, the risk of robots in care is the absence of such intentional care—
because fewer human carers may be needed. Interestingly, caring for something,
even a virtual agent, can be good for the carer themselves (Lee et al. 2019). A
system that pretends to care would be deceptive and thus problematic—unless
the deception is countered by sufficiently large utility gain (Coeckelbergh
2016). Some robots that pretend to “care” on a basic level are available (Paro
seal) and others are in the making. Perhaps feeling cared for by a machine, to
some extent, is progress for some patients.
Applications of AI for Health Research
The use of data from electronic health records (EHRs) is an important
field of AI-based health research. Such data may be difficult to use if the
underlying information technology system and database do not prevent the
spread of heterogeneous or low-quality data. Nonetheless, AI in electronic
health records can be used for scientific study, quality improvement, and
clinical care optimization. Before going down the typical path of scientific
publishing, guideline formation, and clinical support tools, AI that is correctly
created and trained with enough data can help uncover clinical best practices
from electronic health records. By analyzing clinical practice trends acquired
from electronic health data, AI can also assist in developing new clinical
practice models of healthcare delivery.
AI in Health Care:
AI is going to be increasingly used in healthcare and hence needs to be morally
accountable. Data bias needs to be avoided by using appropriate algorithms based
on unbiased real-time data. Diverse and inclusive programming groups and
frequent audits of the algorithm, including its implementation in a system, need to
be carried out. While AI may not be able to completely replace clinical judgment,
it can help clinicians make better decisions. If there is a lack of medical
competence in a context with limited resources, AI could be utilized to conduct
screening and evaluation. In contrast to human decision making, all AI judgments,
even the quickest, are systematic since algorithms are involved. As a result, even if
activities don't have legal repercussions (because efficient legal frameworks
haven't been developed yet), they always lead to accountability, not by the
machine, but by the people who built it and the people who utilize it. While there
are moral dilemmas in the use of AI, it is likely to merge with, coexist with, or
replace current systems, ushering in the healthcare age of artificial intelligence,
and not using AI is itself possibly unscientific and unethical.
5. Conclusion:
Claims about AI range from “AI will solve all problems” (Kurzweil 1999) to “AI
may kill us all” (Bostrom 2014). This has created media attention and public
relations efforts, but it also raises the problem of how much of this “philosophy
and ethics of AI” is really about AI rather than about an imagined technology. As
we said at the outset, AI and robotics have raised fundamental questions about
what we should do with these systems, what the systems themselves should do,
and what risks they pose in the long term. They also challenge the human view of
humanity as the intelligent and dominant species on Earth. We have seen issues
that have been raised, and we will have to watch technological and social
developments closely to catch the new issues early on, develop a philosophical
analysis, and learn from traditional problems of philosophy.
9. References:
a. https://www.frontiersin.org/articles/10.3389/fsurg.2022.862322/full
b. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7968615/
c. https://www.simplilearn.com/advantages-and-disadvantages-of-artificial-intelligence-article
d. https://www.thinkautomation.com/automation-ethics/is-automation-ethical/
e. https://plato.stanford.edu/entries/ethics-ai/