
Amela Bekiri
CST 300 Writing Lab
31 January 2023
Concerns as AI is Advancing

As artificial intelligence (AI) evolves, it offers enormous technological and potential benefits. At the same time, its creation raises topics of concern, such as the risks and consequences artificial intelligence imposes on society. As AI advances and becomes more capable, the possibilities for misuse grow, and concern sharpens as the public becomes aware of the risks that new technology and AI pose for them. Chief among these issues are privacy and the misuse of data, which can be used for profit or other purposes without regard for the consumer, all while carrying a constant risk of unwanted exposure.

Taking an ethical approach to AI systems is difficult, and in some instances impossible. The ethics surrounding these topics are hard to define and enforce, and turning desired ethical rules into regulations, then finding ways to implement them, can be messy. "The experts who expressed worries also invoked governance concerns. They asked: Whose ethical systems should be applied? Who gets to make that decision? Who has responsibility to care about implementing ethical AI? Who might enforce ethical regimes once they are established?" (Atske, 2021). It is hard to establish what counts as ethical behavior, and the evolution of artificial intelligence keeps raising further questions and concerns. "The problem with the application of ethical principles to artificial intelligence is that there is no common agreement about what those are" (Atske, 2021). Artificial intelligence is already part of our existence, so these ethical issues are judged largely through each individual's own ethical views. As societal norms are increasingly shaped by the new technology being implemented, there is room for more ethical AI, even if that ethical contribution comes from artificial intelligence itself.

The issues surrounding artificial intelligence have become more sophisticated, and ethical frameworks should be put in place to ensure AI is used appropriately. In this rapidly growing industry, the four biggest ethical issues with artificial intelligence are privacy and surveillance, bias and discrimination, the role of human judgment, and how companies obtain data, with the common thread being how each affects society. The greatest ethical concern is privacy, and the central difficulty for privacy regulation is establishing an ethical standard for how personal information is given by and taken from the consumer. AI can also introduce bias into everyday life: because machines learn from the data sets they are fed, the chances are "pretty high" that they will replicate the biases in that data (Pazzanese, 2020). Advances in AI are inevitable and will be applied in many ways, but will they be applied ethically or in questionable ways? Steven Miller, a professor of information systems at Singapore Management University, argues, "We have to move beyond the current mindset of AI being this special thing - an almost mystical thing. I wish we could stop using the term AI, and just refer to it for what it is - pattern-recognition systems, statistical analysis systems that learn from data, logical reasoning, goal-seeking systems" (Atske, 2021). Yet even when artificial intelligence is understood in terms of its applications, there is very little consensus on how these rising concerns could be dealt with and resolved. Firms now use AI to manage the sourcing of materials and products from suppliers and to integrate vast amounts of data to aid in strategic decision-making, tasks at which AI is particularly effective because it processes large quantities of data quickly (Pazzanese, 2020).

Because artificial intelligence is still in its early stages of development, its critical events are only beginning to emerge, and useful frameworks for historical analysis of the field are fairly new. Based on what is known from the evolution of AI so far, its history could be traced through the main concerns it has raised or written in terms of the tasks AI systems perform. Although artificial intelligence is changing drastically with new developments, these technologies have started to shape what is considered justifiably right or wrong. Data privacy and surveillance, to start, became a concern only as the public gained awareness and education, and the issue would not have surfaced without mainstream cases of data exploitation. These types of issues exist because the human race constantly moves forward and improves as a society, driven by the need to improve quality of life, and that level of innovation brings an abundance of ethical issues that must in turn be resolved. Advancements continue to gain attention, and new, complex innovations are presented at a pace that outstrips human intelligence itself; these issues arose alongside the prospect of artificial intelligence surpassing human intelligence. That raises questions about how AI will affect human life where privacy is concerned, making it harder to distinguish personal data from non-personal data.

Two sets of stakeholders, the company and the consumer, oppose one another on the issues at hand. Whether or not it benefits the consumer, companies pursue their own gains and values through the advancements AI can bring. AI systems need large amounts of accumulated data to imitate human behavior. "When they have it, those systems can extract useful insights from previous customer behaviors that can be highly valuable to business" (Usercentrics, 2022). The data collected in this way can help determine big decisions that affect people. These companies operate on many different platforms that continuously obtain data from their users: information provided directly, such as contact details and other personal information, as well as data gathered through tracking technologies. On a larger scale, artificial intelligence keeps building on this information the longer the technologies are used and the more data they have, so getting the full benefit of AI requires continuous data collection by these companies.

The claim of policy supports the companies' position that AI advancement serves beneficial purposes across the varying factors related to artificial intelligence. A claim of policy argues that a particular course of action should be taken once its conditions are met. To win over consumers, companies need to show that their course of action is made in good faith and is the right one, ultimately convincing consumers that the approach benefits them and should be supported. The companies back this claim with evidence of the growth AI has brought to their businesses, while asserting that the more data a business needs, the more assurance there must be about what it is used for.

When the ethical values of AI are addressed, there tends to be an assumption that whatever is being discussed is morally bad, as newer technologies have become a new source of inaccuracy and potential data breaches. The consumer, meaning the public, values the right to privacy and the right to take part in policymaking; human rights are at stake because innovation in AI has outpaced privacy protection, leaving behind a growing digital footprint. This leads consumers to question further how much personal information is being collected and used without their knowledge. With growing sensitivity around privacy and the continuing uncertainty over how personal information is exploited, the public has settled on its position. "As such, it makes sense to spend time considering what we want these systems to do and make sure we address ethical questions now so that we build these systems with the common good of humanity in mind" (Walch, 2019).

The claim that reflects what the public wants is the claim of value. A claim of value addresses the inherent goodness or morality of something, as well as the value systems that should guide decision-making. "The more data a business needs, the more they need to ensure transparency about what the data is to be used for, that they have a legitimate and legal basis to use it" (Usercentrics, 2022). This, in turn, helps bring about regulation of data and privacy.

Should past and current artificial intelligence (AI) advancements be subject to enforcement? Given the range of topics that concern the majority of consumers, the issue deserves consideration, and any decision relates directly to the issue at hand. Because the end goals of the two stakeholders are not aligned, one contradicts the other, and since AI technology is not easy to explain, deciding what should be done is itself a challenge. One thing that can be worked with is the benefit a company stands to gain by addressing customer concerns; without the constant collection and use of this data, companies would not have access to all of this information. The simplest approach is to give the consumer, the public, what they want, especially as "concerns were also raised regarding a number of more insidious scenarios" (Medium, 2018). It would be easier to implement enforcement and regulation in a way that serves both stakeholders' interests, "which raises the legal and ethical question of how to establish and maintain user privacy while still obtaining and processing the data companies need to power AI usage" (Usercentrics, 2022).

The ethical framework that represents the stakeholder in favor of this position, the companies, is ethical egoism. Under ethical egoism, an act is judged by how much it benefits the party performing it, in this case the companies. Artificial intelligence can mean a variety of things depending on the audience the term is presented to, but the core business use of AI lies in its ability to adjust to human behavior and eventually accommodate it. The position the companies take on the question being argued is: how will this affect and benefit the company and its growth through new advancements? That position corresponds with the ethical egoism framework. It is also important to note that, for these companies, good data is better than more data.

Early on, artificial intelligence was not expected to change so drastically; the public assumed AI would involve little more than simple, repetitive tasks requiring low-level decision-making (Pazzanese, 2020). With rapid advancement, companies began improving their business operations. As the Harvard Gazette notes, "firms now use AI to manage sourcing of materials and products from suppliers and to integrate vast troves of information to aid in strategic decision-making" (Pazzanese, 2020). Because there is little history of problems to draw on, there is no ethical guide or set of rules that companies are obligated to follow, leaving these stakeholders in a predicament shaped by self-interest. It is important for this stakeholder to stay aware of changes in how the government plans to regulate AI (Pazzanese, 2020).

The other stakeholder, the consumer or general public, opposes the position taken by the companies and is represented by the framework of care ethics. Put simply, care ethics places a person's obligations to others above self-interest. The position taken by this stakeholder is, "how can companies use AI to improve their business operations while also prioritizing user privacy and data protection" (Usercentrics, 2022). Care ethics captures the concerns presented to the public regarding data and privacy issues: technology should prioritize and protect customer privacy and data and support compliance with privacy laws (Usercentrics, 2022). The public's main concern is whether there will ever be any grounds for responsible AI use. This could potentially go both ways, but more is expected of companies when it comes to incorporating these concerns into their businesses. What the public has at stake is the unknown. There is a need to restrict companies from obtaining this data and to regulate privacy, which naturally raises the question of what regulations or enforcement should be applied. "The more technology evolves, the less likely the average person is to understand it deeply, or what it can do and how that can affect them" (Usercentrics, 2022), although it is not the public's place to be the experts.

Until there is proper regulation of the amount of information these companies harvest from the public, ethical issues remain, including the issue of privacy. The position taken here aligns with the stakeholder opposed to current practice, in this instance the public. It would be easy to reach a conclusion about how the issue should be addressed without considering both the benefits to the companies and society's dilemma; as a whole, current societal knowledge of AI and how it affects the consumer is vast only to a certain degree. Addressing the issues that arise with these technological advancements, while taking the concerns into account, is important for longevity. In this instance, the problems lie in the quantity of data being collected and the way it is taken, which eventually compromises privacy. "When AI is involved in data collection or storage, it's imperative to educate your users or customers about how their data is stored, what it is used for, and the benefits they derive from sharing that data" (Pasanen, 2022). This stands as a recommendation for solving the issues because it creates an ethical AI framework that is viewed positively rather than serving only a company's own intentions. Companies should also engage in developing their own guidelines for using AI ethically. The best course of action is to add regulations and enforce expected standards of conduct for companies, all while weighing the benefits.

As long as artificial intelligence exists, data will be collected constantly, and companies will determine the value of that data. The main concern is how these companies handle this information and whether they prevent misuse across their platforms, which is what gives rise to the ethical concerns. As society keeps innovating, these issues remain a topic of concern, and managing them, particularly where personal information is involved, is what sparks the need for regulation. With technology advancing constantly, the average consumer is not aware of the scale at which it could affect them. This is where companies should step in, becoming responsible with their technologies while considering strategies for handling privacy and data security.

References

Atske, S. (2021, June 25). 1. Worries about developments in AI. Pew Research Center: Internet, Science & Tech. Retrieved from https://www.pewresearch.org/internet/2021/06/16/1-worries-about-developments-in-ai/

Pasanen, J. (2022, April 28). AI ethics are a concern. Learn how you can stay ethical. Retrieved from https://learn.g2.com/ai-ethics?hs_amp=true

Pazzanese, C. (2020, December 4). Ethical concerns mount as AI takes bigger decision-making role. Harvard Gazette. Retrieved from https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/

S. (2018, May 7). Scientists and stakeholders in Geneva for good artificial intelligence. Medium. Retrieved from https://medium.com/syncedreview/scientists-and-stakeholders-in-geneva-for-good-artificial-intelligence-5e09e7dcafa9

Usercentrics. (2022, April 12). Artificial intelligence (AI) and data privacy. Consent Management Platform (CMP) Usercentrics. Retrieved from https://usercentrics.com/knowledge-hub/data-privacy-artificial-intelligence/

Walch, K. (2019, December 29). Ethical concerns of AI. Forbes. Retrieved from https://www.forbes.com/sites/cognitiveworld/2020/12/29/ethical-concerns-of-ai/?sh=c09392723a8f