
Introduction to Futures Studies

Artificial Intelligence and Ethical Issues

Student name: Muhammad Taha
Roll number: 14283
Date: 10/06/2022
Submitted to: Ma’am Ghazala Shafi
ABSTRACT

This report discusses the ethical issues raised by the development, deployment and use of AI. It starts with a review of the (ethical) benefits of AI and then presents findings from different studies identifying what people perceive to be ethical issues. The use of various AI technologies can lead to unintended but harmful consequences, such as privacy intrusion; discrimination based on gender, race/ethnicity, sexual orientation, or gender identity; and opaque decision-making, among other issues. Addressing existing ethical challenges and building responsible, fair AI innovations before they are deployed has never been more important. These issues are discussed using the categorization of AI technologies introduced in the report. Detailed accounts are given of ethical issues arising from machine learning, from artificial general intelligence and from broader socio-technical systems that incorporate AI.
Table of Contents

1. Introduction
1.1. Background of the Field
1.2. Definition of Morality and Ethics and How They Relate to AI

2. Motivation
2.1. Automation and Employment
2.2. Singularity and Superintelligence
2.3. Existential Risk from Superintelligence

3. Discussion
3.1. Privacy & Surveillance
3.2. Why Is AI an Ethical Issue?
3.3. Ethical Issues with AI
3.4. Advantages of AI
3.5. Disadvantages of AI
3.6. Bias in Decision Systems

4. Practical applications
4.1. Care Robots
4.2. Applications of AI for Health Research
4.3. Artificial Intelligence in Drug Development

5. Conclusion
6. Recommendation / future work
7. List of abbreviations and key terms
8. Appendix / attachments / additional information
9. References
1. Introduction:

1.1. Background of the Field


The ethics of AI and robotics is often focused on “concerns” of various sorts,
which is a typical response to new technologies. Many such concerns turn out
to be rather quaint (trains are too fast for souls); some are predictably wrong
when they suggest that the technology will fundamentally change humans
(telephones will destroy personal communication, writing will destroy memory,
video cassettes will make going out redundant); some are broadly correct but
moderately relevant (digital technology will destroy industries that make
photographic film, cassette tapes, or vinyl records); but some are broadly
correct and deeply relevant (cars will kill children and fundamentally change
the landscape). The task of an article such as this is to analyse the issues and to
deflate the non-issues.

Some technologies, like nuclear power, cars, or plastics, have caused ethical and political discussion and significant policy efforts to control the trajectory of these technologies, usually only once some damage has been done. In addition to such
“ethical concerns”, new technologies challenge current norms and conceptual
systems, which is of particular interest to philosophy. Finally, once we have
understood a technology in its context, we need to shape our societal response,
including regulation and law. All these features also exist in the case of new AI
and Robotics technologies—plus the more fundamental fear that they may end
the era of human control on Earth.

The ethics of AI and robotics has seen significant press coverage in recent
years, which supports related research, but also may end up undermining it: the
press often talks as if the issues under discussion were just predictions of what
future technology will bring, and as though we already know what would be
most ethical and how to achieve that. Press coverage thus focuses on risk,
security (Brundage et al. 2018, in the Other Internet Resources section below,
hereafter [OIR]), and prediction of impact (e.g., on the job market). The result
is a discussion of essentially technical problems that focus on how to achieve a
desired outcome. Current discussions in policy and industry are also motivated
by image and public relations, where the label “ethical” is really not much more
than the new “green”, perhaps used for “ethics washing”. For a problem to
qualify as a problem for AI ethics would require that we do not readily know
what the right thing to do is. In this sense, job loss, theft, or killing with AI is
not a problem in ethics, but whether these are permissible under certain
circumstances is a problem. This article focuses on the genuine problems of
ethics where we do not readily know what the answers are.

1.2. Definition of Morality and Ethics and How They Relate to AI

Ethics are moral principles that govern a person's behavior or the conduct of an
activity. As a practical example, one ethical principle is to treat everyone with
respect.
Robots and artificial intelligence (AI) come in various forms, as outlined above,
each of which raises a different range of ethical concerns.
Social impacts: this section considers the potential impact of AI on the labor
market and economy and how different demographic groups might be affected.
It addresses questions of inequality and the risk that AI will further concentrate
power and wealth in the hands of the few. Issues related to privacy, human
rights and dignity are addressed as are risks that AI will perpetuate the biases,
intended or otherwise, of existing social systems or their creators. This section
also raises questions about the impact of AI technologies on democracy,
suggesting that these technologies may operate for the benefit of state-
controlled economies.
Psychological impacts: what impacts might arise from human-robot
relationships? How might we address dependency and deception? Should we
consider whether robots deserve to be given the status of 'personhood' and what
are the legal and moral implications of doing so?
Financial system impacts: potential impacts of AI on financial systems are
considered, including risks of manipulation and collusion and the need to build
in accountability.
Legal system impacts: there are a number of ways in which AI could affect the
legal system, including: questions relating to crime, such as liability if an AI is
used for criminal activities, and the extent to which AI might support criminal
activities such as drug trafficking. Where an AI is involved in personal injury, such as in a collision involving an autonomous vehicle, questions arise around the legal approach to claims (whether it is a case of negligence, which is usually the basis for claims involving vehicular accidents, or product liability).
Environmental impacts: increasing use of AIs comes with increased use of
natural resources, increased energy demands and waste disposal issues.
However, AIs could improve the way we manage waste and resources, leading
to environmental benefits.
Impacts on trust: society relies on trust. For AI to take on tasks, such as surgery,
the public will need to trust the technology. Trust includes aspects such as
fairness (that AI will be impartial), transparency (that we will be able to
understand how an AI arrived at a particular decision), accountability (someone
can be held accountable for mistakes made by AI) and control (how we might
'shut down' an AI that becomes too powerful).
2. Motivation:

2.1. Automation and Employment

It seems clear that AI and robotics will lead to significant gains in productivity and thus overall wealth. The attempt to increase productivity
has often been a feature of the economy, though the emphasis on
“growth” is a modern phenomenon (Harari 2016: 240). However,
productivity gains through automation typically mean that fewer humans
are required for the same output. This does not necessarily imply a loss of
overall employment, however, because available wealth increases and
that can increase demand sufficiently to counteract the productivity gain.
In the long run, higher productivity in industrial societies has led to more
wealth overall. Major labour market disruptions have occurred in the
past, e.g., farming employed over 60% of the workforce in Europe and
North-America in 1800, while by 2010 it employed ca. 5% in the EU,
and even less in the wealthiest countries (European Commission 2013).
In the 20 years between 1950 and 1970 the number of hired agricultural
workers in the UK was reduced by 50% (Zayed and Loft 2019). Some of these disruptions led to more labour-intensive industries moving to places with lower labour costs. This is an ongoing process.
Classic automation replaced human muscle, whereas digital automation
replaces human thought or information-processing—and unlike physical
machines, digital automation is very cheap to duplicate (Bostrom and
Yudkowsky 2014). It may thus mean a more radical change in the labour market. So, the main question is: will the effects be different this time?
Will the creation of new jobs and wealth keep up with the destruction of
jobs? And even if it is not different, what are the transition costs, and
who bears them? Do we need to make societal adjustments for a fair
distribution of costs and benefits of digital automation?
2.2. Singularity and Superintelligence

In some quarters, the aim of current AI is thought to be an “artificial general intelligence” (AGI), contrasted with technical or “narrow” AI. AGI is usually distinguished from traditional notions of AI as a general-purpose system, and from Searle’s notion of “strong AI”:

Computers given the right programs can be literally said to understand and have other cognitive states. (Searle 1980: 417)
The idea of the singularity is that if the trajectory of artificial intelligence reaches systems that have a human level of intelligence, then these systems would themselves have the ability to develop AI systems that surpass the human level of intelligence, i.e., they are “superintelligent” (see below). Such superintelligent AI systems would quickly self-improve or develop even more intelligent systems. This sharp turn of events after reaching superintelligent AI is the “singularity”, from which point the development of AI is out of human control and hard to predict.

2.3. Existential Risk from Superintelligence

Thinking about superintelligence in the long term raises the question of whether superintelligence may lead to the extinction of the human species, which is called an “existential risk”: superintelligent systems may well have preferences that conflict with the existence of humans on Earth, and may thus decide to end that existence—and given their superior intelligence, they will have the power to do so (or they may happen to end it because they do not really care).
Perhaps there is even an astronomical pattern such that an intelligent species is
bound to discover AI at some point, and thus bring about its own demise. Such a
“great filter” would contribute to the explanation of the “Fermi paradox” why there
is no sign of life in the known universe despite the high probability of it emerging.
It would be bad news if we found out that the “great filter” is ahead of us, rather
than an obstacle that Earth has already passed. These issues are sometimes taken
more narrowly to be about human extinction (Bostrom 2013), or more broadly as
concerning any large risk for the species (Rees 2018)—of which AI is only one.

3. Discussion:

3.1. Privacy & Surveillance

There is a general discussion about privacy and surveillance in information technology (e.g., Macnish 2017; Roessler 2017), which mainly concerns the
access to private data and data that is personally identifiable. Privacy has
several well recognized aspects, e.g., “the right to be let alone”, information
privacy, privacy as an aspect of personhood, control over information about
oneself, and the right to secrecy (Bennett and Raab 2006). Privacy studies have
historically focused on state surveillance by secret services but now include
surveillance by other state agents, businesses, and even individuals. The
technology has changed significantly in the last decades while regulation has
been slow to respond (though there is the Regulation (EU) 2016/679)—the
result is a certain anarchy that is exploited by the most powerful players,
sometimes in plain sight, sometimes in hiding.

The digital sphere has widened greatly: All data collection and storage is now
digital, our lives are increasingly digital, most digital data is connected to a
single Internet, and there is more and more sensor technology in use that
generates data about non-digital aspects of our lives. AI increases both the
possibilities of intelligent data collection and the possibilities for data analysis.
This applies to blanket surveillance of whole populations as well as to classic
targeted surveillance. In addition, much of the data is traded between agents,
usually for a fee.

At the same time, controlling who collects which data, and who has access, is
much harder in the digital world than it was in the analogue world of paper and
telephone calls. Many new AI technologies amplify the known issues. For
example, face recognition in photos and videos allows identification and thus
profiling and searching for individuals (Whittaker et al. 2018: 15ff). This adds to other techniques for identification, e.g., “device
fingerprinting”, which are commonplace on the Internet (sometimes revealed
in the “privacy policy”). The result is that “In this vast ocean of data, there is a
frighteningly complete picture of us” (Smolan 2016: 1:01). The result is
arguably a scandal that still has not received due public attention.

The data trail we leave behind is how our “free” services are paid for—but we
are not told about that data collection and the value of this new raw material,
and we are manipulated into leaving ever more such data. For the “big 5”
companies (Amazon, Google/Alphabet, Microsoft, Apple, Facebook), the main
data-collection part of their business appears to be based on deception,
exploiting human weaknesses, furthering procrastination, generating addiction,
and manipulation (Harris 2016 [OIR]). The primary focus of social media,
gaming, and most of the Internet in this “surveillance economy” is to gain,
maintain, and direct attention—and thus data supply. “Surveillance is the
business model of the Internet” (Schneier 2015). This surveillance and
attention economy is sometimes called “surveillance capitalism” (Zuboff
2019). It has caused many attempts to escape from the grasp of these
corporations, e.g., in exercises of “minimalism” (Newport 2019), sometimes
through the open source movement, but it appears that present-day citizens
have lost the degree of autonomy needed to escape while fully continuing with
their life and work. We have lost ownership of our data, if “ownership” is the
right relation here. Arguably, we have lost control of our data.

These systems will often reveal facts about us that we ourselves wish to
suppress or are not aware of: they know more about us than we know
ourselves.

3.2. Why Is AI an Ethical Issue?

Lack of transparency makes it more difficult to recognize and address questions of bias and discrimination. Bias is a much-cited ethical concern related to AI (CDEI 2019). One key challenge is that machine learning systems can, intentionally or inadvertently, reproduce already existing biases.
3.3. What Are Some Ethical Issues with AI?

• Cost to innovation
• Harm to physical integrity
• Lack of access to public services
• Lack of trust
• “Awakening” of AI
• Security problems
• Lack of quality data
• Disappearance of jobs
• Power asymmetries
• Negative impact on health
• Problems of integrity
• Lack of accuracy of data
• Lack of privacy
• Lack of transparency
• Potential for military use
• Lack of informed consent
• Bias and discrimination
• Unfairness
• Unequal power relations
• Misuse of personal data
• Negative impact on justice system
• Negative impact on democracy
• Potential for criminal and malicious use
• Loss of freedom and individual autonomy
• Contested ownership of data
• Reduction of human contact
• Problems of control and use of data and systems
• Lack of accuracy of predictive recommendations
• Lack of accuracy of non-individual recommendations
• Concentration of economic power
• Violation of fundamental human rights in supply chain
• Violation of fundamental human rights of end users
• Unintended, unforeseeable adverse impacts
• Prioritization of the “wrong” problems
• Negative impact on vulnerable groups
• Lack of accountability and liability
• Negative impact on environment
• Loss of human decision-making
• Lack of access to and freedom of information

3.4. Advantages of Artificial Intelligence:

1. Reduction in Human Error
One of the biggest advantages of Artificial Intelligence is that it can significantly reduce errors and increase accuracy and precision. Every decision taken by an AI is based on previously gathered information and a certain set of algorithms. When programmed properly, errors can be reduced to nearly zero.

2. Zero Risks
Another big advantage of AI is that humans can avoid many risks by letting AI robots take on dangerous tasks for us. Whether it is defusing a bomb, going to space, or exploring the deepest parts of the ocean, machines with metal bodies are resistant by nature and can survive unfriendly atmospheres. Moreover, they can work accurately and reliably and do not wear out easily.

3. 24x7 Availability
Many studies show that humans are productive for only about 3 to 4 hours a day. Humans also need breaks and time off to balance their work and personal lives. AI, by contrast, can work endlessly without breaks. AI systems process information much faster than humans, perform multiple tasks at a time with accurate results, and can handle tedious, repetitive jobs easily.

4. Digital Assistance
Almost all big organizations these days use digital assistants to interact with their customers, which significantly reduces the need for human support staff. You can chat with a chatbot and ask it exactly what you need. Some chatbots have become so intelligent that you would struggle to tell whether you are chatting with a chatbot or a human being.

5. New Inventions
AI has helped in coming up with new inventions in almost every domain to solve complex problems. For example, recent advanced AI-based technologies help doctors predict early stages of breast cancer in women.

6. Unbiased Decisions
Human beings are driven by emotions, whether we like it or not. AI, on the other hand, is devoid of emotions and is highly practical and rational in its approach. A major advantage of Artificial Intelligence is that it has no emotional bias of its own, which can support more accurate decision-making (though, as Section 3.6 discusses, bias inherited from training data remains a risk).

3.5. Disadvantages of Artificial Intelligence:

1. High Costs
The ability to create a machine that can simulate human intelligence is no
small feat. It requires plenty of time and resources and can cost a huge
deal of money. AI also needs to operate on the latest hardware and
software to stay updated and meet the latest requirements, thus making it
quite costly.

2. No Creativity
A big disadvantage of AI is that it cannot learn to think outside the box. AI is capable of learning over time from pre-fed data and past experiences, but it cannot be creative in its approach. A classic example is the bot Quill, which can write Forbes earnings reports. These reports contain only data and facts already provided to the bot. Although it is impressive that a bot can write an article on its own, such articles lack the human touch present in other Forbes pieces.

3. Increase in Unemployment
Perhaps one of the biggest disadvantages of artificial intelligence is that AI is slowly replacing humans in a number of repetitive tasks. The reduced need for human involvement has resulted in the loss of many job opportunities. A simple example is the chatbot, which is a big advantage to organizations but a nightmare for employees. A study by McKinsey predicts that AI will replace at least 30 percent of human labor by 2030.

4. Makes Humans Lazy
AI applications automate the majority of tedious and repetitive tasks. Since we do not have to memorize things or solve puzzles to get the job done, we tend to use our brains less and less. This addiction to AI can cause problems for future generations.

5. No Ethics
Ethics and morality are important human features that can be difficult to
incorporate into an AI. The rapid progress of AI has raised a number of
concerns that one day, AI will grow uncontrollably, and eventually wipe
out humanity. This moment is referred to as the AI singularity.

3.6. Bias in Decision Systems

Automated AI decision support systems and “predictive analytics” operate on data and produce a decision as “output”. This output may
range from the relatively trivial to the highly significant: “this restaurant
matches your preferences”, “the patient in this X-ray has completed bone
growth”, “application to credit card declined”, “donor organ will be given
to another patient”, “bail is denied”, or “target identified and engaged”.
Data analysis is often used in “predictive analytics” in business, healthcare, and other fields to foresee future developments—and as prediction becomes easier, it also becomes a cheaper commodity. One use of
prediction is in “predictive policing” (NIJ 2014 [OIR]), which many fear
might lead to an erosion of public liberties (Ferguson 2017) because it
can take away power from the people whose behaviour is predicted. It
appears, however, that many of the worries about policing depend on
futuristic scenarios where law enforcement foresees and punishes
planned actions, rather than waiting until a crime has been committed
(like in the 2002 film “Minority Report”). One concern is that these
systems might perpetuate bias that was already in the data used to set up
the system, e.g., by increasing police patrols in an area and discovering
more crime in that area. Actual “predictive policing” or “intelligence-led policing” techniques mainly concern the question of where and when
police forces will be needed most. Also, police officers can be provided
with more data, offering them more control and facilitating better
decisions, in workflow support software (e.g., “ArcGIS”). Whether this is
problematic depends on the appropriate level of trust in the technical
quality of these systems, and on the evaluation of aims of the police work
itself. Perhaps a recent paper title points in the right direction here: “AI
ethics in predictive policing: From models of threat to an ethics of care”
(Asaro 2019).
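The feedback concern mentioned above (more patrols in an area lead to more recorded crime there, which in turn justifies more patrols) can be made concrete with a toy simulation. The Python sketch below uses entirely made-up numbers and a deliberately simplified model of patrol allocation; it is not a model of any real policing system, but it illustrates how a single early fluctuation in recorded crime can harden into a permanent disparity.

# Toy simulation of a patrol-allocation feedback loop.
# All numbers are hypothetical and the model is deliberately simplified.

true_rate = [0.5, 0.5]      # both districts have identical underlying crime
patrols = [10.0, 10.0]      # the 20 patrol units start out split equally
discovered = [0.0, 0.0]     # cumulative recorded crime per district

for step in range(20):
    # Recorded crime grows with patrol presence, not with true crime alone.
    for d in range(2):
        discovered[d] += true_rate[d] * patrols[d]
    if step == 0:
        discovered[0] += 1  # one-off fluctuation: a single extra recorded incident
    # Reallocate the 20 patrol units in proportion to recorded crime so far.
    total = sum(discovered)
    patrols = [20.0 * discovered[d] / total for d in range(2)]

# The early blip never washes out: patrols remain permanently skewed
# (roughly 10.9 vs 9.1) even though the true crime rates are identical.
print(patrols)

Here the data the system learns from is itself a product of the system’s earlier decisions, which is exactly why bias in the data cannot simply be read off as bias in the world.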

Bias typically surfaces when unfair judgments are made because the
individual making the judgment is influenced by a characteristic that is
actually irrelevant to the matter at hand, typically a discriminatory
preconception about members of a group. So, one form of bias is a
learned cognitive feature of a person, often not made explicit. The person
concerned may not be aware of having that bias—they may even be
honestly and explicitly opposed to a bias they are found to have (e.g.,
through priming, cf. Graham and Lowery 2004). On fairness vs. bias in
machine learning, see Binns (2018).

Apart from the social phenomenon of learned bias, the human cognitive
system is generally prone to have various kinds of “cognitive biases”,
e.g., the “confirmation bias”: humans tend to interpret information as
confirming what they already believe. This second form of bias is often
said to impede performance in rational judgment (Kahneman 2011)—
though at least some cognitive biases generate an evolutionary advantage,
e.g., economical use of resources for intuitive judgment. There is a
question whether AI systems could or should have such cognitive bias.
A third form of bias is present in data when it exhibits systematic error,
e.g., “statistical bias”. Strictly, any given dataset will only be unbiased
for a single kind of issue, so the mere creation of a dataset involves the
danger that it may be used for a different kind of issue, and then turn out
to be biased for that kind. Machine learning on the basis of such data
would then not only fail to recognize the bias, but codify and automate
the “historical bias”. Such historical bias was discovered in an automated
recruitment screening system at Amazon (discontinued early 2017) that
discriminated against women—presumably because the company had a
history of discriminating against women in the hiring process. The
“Correctional Offender Management Profiling for Alternative Sanctions”
(COMPAS), a system to predict whether a defendant would re-offend,
was found to be as successful (65.2% accuracy) as a group of random
humans (Dressel and Farid 2018) and to produce more false positives and
fewer false negatives for black defendants. The problem with such systems
is thus bias plus humans placing excessive trust in the systems. The
political dimensions of such automated systems in the USA are
investigated in Eubanks (2018).
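To see how such a disparity is detected, consider the following minimal sketch of a fairness audit, again in Python. The predictions, outcomes, and group labels are entirely invented for illustration (this is not the COMPAS data); the point is only that two groups can show the same overall accuracy while their false positive and false negative rates differ sharply.

# Minimal sketch of a fairness audit for a binary decision system.
# All data below is invented for illustration; it is NOT the COMPAS dataset.

def error_rates(preds, outcomes):
    """Return (false positive rate, false negative rate) for one group."""
    fp = sum(1 for p, y in zip(preds, outcomes) if p == 1 and y == 0)
    fn = sum(1 for p, y in zip(preds, outcomes) if p == 0 and y == 1)
    negatives = sum(1 for y in outcomes if y == 0)
    positives = sum(1 for y in outcomes if y == 1)
    return fp / negatives, fn / positives

# Hypothetical predictions (1 = "will re-offend") and actual outcomes.
groups = {
    "group_a": ([1, 1, 1, 0, 1, 0, 0, 1], [1, 0, 1, 0, 1, 0, 1, 0]),
    "group_b": ([0, 0, 1, 0, 1, 0, 0, 1], [1, 0, 1, 0, 0, 1, 0, 1]),
}

for name, (preds, actual) in groups.items():
    fpr, fnr = error_rates(preds, actual)
    accuracy = sum(p == y for p, y in zip(preds, actual)) / len(actual)
    print(f"{name}: accuracy={accuracy:.2f}, FPR={fpr:.2f}, FNR={fnr:.2f}")

In this toy example both groups see the same overall accuracy (0.62), yet group_a faces twice the false positive rate of group_b, the same shape of disparity that was reported for COMPAS.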

There are significant technical efforts to detect and remove bias from AI
systems, but it is fair to say that these are in early stages: see UK Institute
for Ethical AI & Machine Learning (Brownsword, Scotford, and Yeung
2017; Yeung and Lodge 2019). It appears that technological fixes have
their limits in that they need a mathematical notion of fairness, which is
hard to come by (Whittaker et al. 2018: 24ff; Selbst et al. 2019), as is a
formal notion of “race” (see Benthall and Haynes 2019). An institutional
proposal is in (Veale and Binns 2017).
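Part of why a workable mathematical notion of fairness is hard to come by is that plausible criteria can conflict with one another. The sketch below (made-up numbers, not drawn from the cited literature) checks two common criteria on the same set of decisions: demographic parity, which asks for equal rates of positive decisions across groups, and equal opportunity, which asks for equal true positive rates. A classifier can satisfy one while violating the other.

# Two fairness criteria applied to the same hypothetical decisions.

def positive_rate(preds):
    # Demographic parity compares this share across groups.
    return sum(preds) / len(preds)

def true_positive_rate(preds, outcomes):
    # Equal opportunity compares this share across groups.
    tp = sum(1 for p, y in zip(preds, outcomes) if p == 1 and y == 1)
    return tp / sum(outcomes)

# Invented decisions (1 = positive decision) and actual outcomes per group.
preds_a, actual_a = [1, 1, 1, 0, 0], [1, 1, 1, 0, 0]
preds_b, actual_b = [1, 1, 1, 0, 0], [0, 1, 0, 0, 1]

# Demographic parity holds: both groups receive 60% positive decisions.
print(positive_rate(preds_a), positive_rate(preds_b))    # 0.6 0.6

# Equal opportunity fails: TPR is 1.0 for group A but 0.5 for group B.
print(true_positive_rate(preds_a, actual_a),
      true_positive_rate(preds_b, actual_b))             # 1.0 0.5

Impossibility results in the fairness literature show that, outside of degenerate cases, several such criteria cannot all hold at once when base rates differ between groups, which is one reason purely technical fixes have their limits.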
4. Practical applications:

4.1. Care Robots

The use of robots in health care for humans is currently at the level of concept
studies in real environments, but it may become a usable technology in a few
years, and has raised a number of concerns for a dystopian future of de-
humanized care (A. Sharkey and N. Sharkey 2011; Robert Sparrow 2016).
Current systems include robots that support human carers/caregivers (e.g., in
lifting patients, or transporting material), robots that enable patients to do
certain things by themselves (e.g., eat with a robotic arm), but also robots that
are given to patients as company and comfort (e.g., the “Paro” robot seal). For
an overview, see van Wynsberghe (2016); Nørskov (2017); Fosch-Villaronga
and Albo-Canals (2019), for a survey of users Draper et al. (2014).

One reason why the issue of care has come to the fore is that people have
argued that we will need robots in ageing societies. This argument makes
problematic assumptions, namely that with longer lifespan people will need
more care, and that it will not be possible to attract more humans to caring
professions. It may also show a bias about age (Jecker forthcoming). Most
importantly, it ignores the nature of automation, which is not simply about
replacing humans, but about allowing humans to work more efficiently. It is
not very clear that there really is an issue here since the discussion mostly
focuses on the fear of robots de-humanizing care, but the actual and
foreseeable robots in care are assistive robots for classic automation of
technical tasks. They are thus “care robots” only in a behavioral sense of
performing tasks in care environments, not in the sense that a human “cares”
for the patients. It appears that the success of “being cared for” relies on this
intentional sense of “care”, which foreseeable robots cannot provide. If
anything, the risk of robots in care is the absence of such intentional care—
because fewer human carers may be needed. Interestingly, caring for something,
even a virtual agent, can be good for the carer themselves (Lee et al. 2019). A
system that pretends to care would be deceptive and thus problematic—unless
the deception is countered by sufficiently large utility gain (Coeckelbergh
2016). Some robots that pretend to “care” on a basic level are available (Paro
seal) and others are in the making. Perhaps feeling cared for by a machine, to
some extent, is progress for some patients.
4.2. Applications of AI for Health Research

The use of data created for electronic health records (EHR) is an important
field of AI-based health research. Such data may be difficult to use if the
underlying information technology system and database do not prevent the
spread of heterogeneous or low-quality data. Nonetheless, AI in electronic
health records can be used for scientific study, quality improvement, and
clinical care optimization. Before going down the typical path of scientific
publishing, guideline formation, and clinical support tools, AI that is correctly
created and trained with enough data can help uncover clinical best practices
from electronic health records. By analyzing clinical practice trends acquired
from electronic health data, AI can also assist in developing new clinical
practice models of healthcare delivery.

4.3. Artificial Intelligence in Drug Development

In the future, AI is expected to simplify and accelerate pharmaceutical development. AI can convert drug discovery from a labor-intensive to a capital- and data-intensive process by utilizing robotics and models of genetic targets, drugs, organs, diseases and their progression, pharmacokinetics, safety and efficacy. AI can be used in the drug discovery and development process to speed it up and make it more cost-effective and efficient.
AI has previously been used to identify potential Ebola virus medicines, although, as with any drug study, identifying a lead molecule does not guarantee the development of a safe and successful therapy.
5. Conclusion:
The technology may be here now, but the ethical rules for managing AI will take
time to develop.
In General:
Now that you know both the advantages and disadvantages of Artificial Intelligence, one thing is for sure: AI has massive potential for creating a better world to live in. The most important role for humans will be to ensure that the rise of AI doesn’t get out of hand. Although there are both debatable pros and cons of artificial intelligence, its impact on the global industry is undeniable. It continues to grow every single day, driving sustainability for businesses. This certainly calls for AI literacy and upskilling to help people prosper in many new-age jobs.

AI in Health Care:
AI is going to be increasingly used in healthcare and hence needs to be morally accountable. Data bias needs to be avoided by using appropriate algorithms based on unbiased real-time data. Diverse and inclusive programming groups and frequent audits of the algorithm, including its implementation in a system, need to be carried out. While AI may not be able to completely replace clinical judgment, it can help clinicians make better decisions. If there is a lack of medical competence in a context with limited resources, AI could be utilized to conduct screening and evaluation. In contrast to human decision making, all AI judgments, even the quickest, are systematic, since algorithms are involved. As a result, even if activities don't have legal repercussions (because efficient legal frameworks haven't been developed yet), they always lead to accountability, not by the machine, but by the people who built it and the people who utilize it. While there are moral dilemmas in the use of AI, it is likely to merge with, co-exist with, or replace current systems, ushering in the healthcare age of artificial intelligence; not using AI is also possibly unscientific and unethical.
It has been claimed both that “AI will solve all problems” (Kurzweil 1999) and that “AI may kill us all” (Bostrom 2014). This has created media attention and public relations efforts, but it also raises the problem of how much of this “philosophy and ethics of AI” is really about AI rather than about an imagined technology. As we said at the outset, AI and robotics have raised fundamental questions about what we should do with these systems, what the systems themselves should do, and what risks they have in the long term. They also challenge the human view of humanity as the intelligent and dominant species on Earth. We have seen the issues that have been raised and will have to watch technological and social developments closely to catch the new issues early on, develop a philosophical analysis, and learn from traditional problems of philosophy.

6. Recommendation / future work:


People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world. (Domingos 2015)

Some questions arise when we talk about using Artificial Intelligence in the future, which we might seek answers to in further research:
1. What is the purpose of our job, and what AI do we need to achieve it?
2. Do we understand how these systems work? Are we in control of this
technology?
3. What are the risks of its usage? Who benefits and who carries the risks
related to the adoption of the new technology?
4. Who bears the costs for it? Would it be considered fair if it became widely
known?
5. What are the ethical dimensions, and what values are at stake?
6. What might be the unexpected consequences?
7. Do we have other options that are less risky?
8. What is the governance process for introducing AI?
9. Who is responsible for AI? Because machines are not moral agents, who is
responsible for the outcome of the decision-making process of an artificial
agent?
10. How is the impact of AI to be monitored?
7. List of abbreviations and key terms:
8. Appendix / attachments/ additional information:
9. References:

a. https://www.frontiersin.org/articles/10.3389/fsurg.2022.862322/full
b. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7968615/
c. https://www.simplilearn.com/advantages-and-disadvantages-of-artificial-intelligence-article
d. https://www.thinkautomation.com/automation-ethics/is-automation-ethical/
e. https://plato.stanford.edu/entries/ethics-ai/
