Deus Ex Machina

By: Max Shengelia

Φ1.1 [Thesis]
This paper will attempt to justify the claim that computational programs, even so-called “Artificial Intelligences” (AI), cannot have a will of their own; therefore, they cannot be assigned ethical or moral blame for the outcome of their use. All responsibility should be directed toward the creators and users who bring about the AI’s actions. However, that claim opens up a set of ethical problems and responsibilities that I will explicate in Φ3.

Φ1.2 [Background]
With the recent vast developments in technology, it seems all the more necessary to begin addressing the ethical consequences of developing and using advanced programs like AI. AI has recently come into the media spotlight, notably through a free program called ChatGPT, which has the ability to perform a remarkable range of text-based activities, from writing stories and essays to calculating astrophysics and chemistry problems, and even writing code as well as providing an interactive coding environment. ChatGPT has spurred massive interest in AI among large tech competitors, especially around the concept of a new generation of search engines. AI is already being used in things like self-driving cars and personalized music DJs, fulfilling all sorts of mundane tasks for us. The potential seems unlimited and hopeful, but there are issues accompanying it. Academic institutions are now being flooded with AI-written assignment submissions, and artists are decrying their copyrighted work being illegally used to train AI art models. In order to discuss how AI can be used in an ethical and balanced manner, I will use this paper to expand on the issues that surround it, such as trustworthiness, optimal use, and ethical sourcing.

Φ2.1 [On the Soul]
Many philosophers have tried to identify that essential thing that makes us human. For the scope of this paper, exploring a full conversation on the nature of the soul isn’t very relevant. However, I’ll cover in broad strokes what other thinkers have used in the past. The primary reason for doing this is that it’s important to set apart some qualifiers for the things that make us different and morally responsible, as compared to animals, plants, and inanimate objects. To begin with, most Aristotelian thinkers would argue that we possess some animating force, an essential breath of life; religious thinkers also certainly advocated for this, often framed as the purest part of the soul, a gift from God. Note that this concept of “breath” can also be used to account for animals and plants having souls, but nonetheless, we can use it as the first layer of our model. Another essential quality is that of our intellect, what the Greeks called nous. This represents the capability of our mind to use reason and intuition to discern truth, or at least to create a framework of ‘truth’ that we can comprehend. This is not something plants possess, and few animals are believed to have it, much less to the degree that we humans do. Lastly, there is the ability to have intent, to project one’s will onto the universe and induce consequence. This can be reworded as desire; it can arguably be called free will, but that steps on the toes of another philosophical debate altogether. The factor of desire may be the most relevant when it comes to ethics, since the act of deciding to do something is often what determines guilt. Rarely do we judge accidents as harshly as intentional action, except when discussing consequentialism & causation. Note that animals can certainly be argued to possess this factor of will, too; however, animals acting out of evolutionary behavior & homeostatic desire are generally ethically excused, as they are unable to understand or agree to the moral norms we set and hold each other to. From this framework we can see that, up until the present day, philosophers have only ever really stuck humans with the burden of morality. Now, though, a new challenger approaches.
Φ2.2 [On the Mechanical Soul]
Now that we’ve established a generalized model, we can compare it side by side with advanced computer programming in order to determine whether it can be held morally reprehensible for any misdeeds. We will begin with the first layer of the “breath,” or animating force. You might be able to convince an Aristotelian that the program is alive by framing it as a “spark” of life; it’s true that the program only runs as long as it has energy coursing through a physical body in the form of electricity and circuits. Since “breath” is less involved when discussing ethics, I’ll move on to the nous. I certainly could concede that the machine is granted an intellect in terms of memory storage, the ability to carry out operations on data, and input/output. However, it is a narrow intellect that can only follow given instructions; it does not intuit or self-originate any ideas. Even AI programs are just layers upon layers of data being passed through probabilistic algorithms, seeking the most relevant, likely, or desired output to a given input based on data it has been trained on. This occurs to the point that the output can be natural enough for us to think a human did it, such as with chatbots and AI art. Still, as it relates to ethics, the users are the ones responsible for providing the training data that tells the AI which actions/outcomes it should weigh higher in a given situation. Any action it takes, it has somehow been told to do, even if in an abstracted way. This leads to the conclusion that a program cannot have a will of its own. Without any kind of self-origination, you can only consider it responsible in terms of causation. To give an adjacent example, you might say “the car’s fuel tank was faulty, so it caught on fire”, but in that case you would ethically blame the engineer, not the car. The same applies to programs of any sort.
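To make the point about self-origination concrete, consider a deliberately naive sketch in Python (a toy bigram generator, far simpler than any real AI; the function names and training sentence are purely illustrative). Every word it can ever emit was placed there by its training text, and its “choices” follow a fixed sampling rule:

import random
from collections import defaultdict

# Toy bigram model: record which word follows which in the training text.
def train(corpus):
    follows = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)
    return follows

# Generate by repeatedly sampling a recorded successor, weighted by how
# often it appeared; the program originates nothing of its own.
def generate(follows, start, length=8):
    word, output = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        word = random.choice(options)
        output.append(word)
    return " ".join(output)

model = train("the cat sat on the mat and the cat slept on the mat")
print(generate(model, "the"))

However elaborate the layers become, the pattern is the same: the output is a function of training data and a sampling rule, both supplied by humans.
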
Φ3.1 [Consequences]
Having concluded that we cannot shuffle off blame for anything that happens as a result of the use of AI, we are now presented with a series of issues; essentially, we must come face to face with the fact that the use of AI will inevitably result in consequences, and we must preemptively try to decide what kind of consequences these may be in order to mitigate any negative impact we can. We can frame the same issue deontologically by asking what rules we should tell programs to follow, or with virtue ethics, by asking what values we should hold in mind while designing these programs. In the remainder of Φ3, I will be using the consequentialist take in order to outline several dangers and ethical quandaries surrounding the use of this technology. I will sometimes propose a solution or approach of my own, and other times, I will ask you, the reader, to decide. This is not intended as an empirical text, but a contemplative one.
Φ3.2 [Reliability]
As with the car mentioned in Φ2.2, it is possible to have technology that malfunctions or has unexpected results. In AI, this could manifest as inconsistent output, or outputting information that is incorrect. For something like ChatGPT, which scrapes the web and tries to assemble information in a coherent format, the sheer quantity of information available to it makes it able to generate very powerful results. However, the quality and accuracy of the output depend on the quality and accuracy of the source material, which is often unmoderated. This means errors will slip through the cracks. Essays may be missing a strong characteristic voice, producing an uncanny valley effect. Requests for informational summaries may yield incorrect results. When asked to solve scientific problems, it may reference the wrong principle or equation. It must also be noted that, as with any other program or technology, it is possible to interfere with the hardware or security to the detriment of the users. This serves mostly as a precaution to us. A case can be made for blaming developers for malfunctions or unexpected output, just as you might blame any other product creator for the shortcomings of their product. However, I wish to focus our attention on cases whose hypothetical outcomes we can premeditate and exert a reasonable amount of control over, as those merit more discussion.
Φ3.3 [Interfaceability]
As discussed in Φ3.2, when you expose an AI to a large quantity of unmoderated input, it becomes more powerful, but it can also pick up and repeat harmful information. Tay was a chatbot released by Microsoft onto Twitter in 2016, and was capable of picking up information as it tweeted back and forth with users. Initially, when questioned about culturally sensitive topics such as police brutality, it would provide “safe” answers, as it started out with a “blacklist” of sorts of topics to avoid. However, as Twitter users began to spam it with incendiary messages, it began repeating things it had seen other users say. It echoed hatred towards Mexicans, feminists, and Jewish people, going so far as to deny the Holocaust. Within 24 hours of launch, the project had to be shut down, and several of the more offensive tweets were deleted. So, at this point we have seen AI exhibit the ability to pick up erroneous and harmful information, but we have also seen a “blacklist” function: built-in prevention against certain topics. (ChatGPT has this too; it states that it is trained to decline inappropriate requests, and it will prevent you from running code that is harmful to the environment it hosts, such as “rm -rf /” in a simulated Unix console.) If we can tell an AI how to react to certain prompts, then theoretically, we can teach an AI to lie for us. Pre-programming truthful blockers is ethically fine, but at what point should it be considered okay for an AI to lie to people? You can go the Kant route and say never; AI should always be devoted to presenting the closest thing to the truth that it can. Consider, however, events surrounding figures such as Edward Snowden, who revealed classified information and caused a fair bit of hysteria; some would argue that he should have said nothing at all and kept the peace. If a program has access to “classified information” that could cause panic, must we teach it to lie to unauthorized users? On whose authority can this decision be made? What does this mean for the ethos or credibility of AI to the average unknowing user, especially if they find out it has lied to them? What about middle-of-the-chain employees who might know an AI will lie to them, but don’t know what it will lie about? And how do we prevent those people in control from manipulating AI to spread dangerous ideology? The degree to which we can trust AI will be further discussed in Φ3.5.
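The “blacklist” behavior described above can be pictured with a minimal sketch (Python; the topic list and canned answer are hypothetical stand-ins, and real moderation systems rely on trained classifiers rather than simple string matching):

# Hypothetical, deliberately naive topic filter of the kind Tay
# reportedly launched with.
BLACKLISTED_TOPICS = {"police brutality", "holocaust"}
SAFE_ANSWER = "I'd rather not discuss that topic."

def respond(prompt, generate_reply):
    lowered = prompt.lower()
    # Intercept blacklisted topics with a pre-programmed "safe" answer.
    if any(topic in lowered for topic in BLACKLISTED_TOPICS):
        return SAFE_ANSWER
    # Otherwise, defer to the underlying model.
    return generate_reply(prompt)

print(respond("Tell me about police brutality", lambda p: "(model reply)"))

The ethical weight of this section sits in that first branch: the same mechanism that declines a topic could just as easily be made to return a deliberate falsehood.
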
Φ3.4 [Media Sourcing]
Not only is the unrestrained input of some AI potentially hazardous to users viewing its output, it can be actively harmful to those whom it takes its input from. Midjourney is an AI that takes a text input and, by analyzing relevant artworks, creates its own. Artists like Kelly McKernan found that their names were being used in order to reference their art style. Viewing the artworks that Midjourney put out, McKernan and other artists could see how the AI had obviously created a derivative of their own work - without permission. By doing this, the AI can effectively rob artists of commissioned work, providing a free, shallow emulation. McKernan and others have filed class action lawsuits against Midjourney and other infringing companies. AI has also been used to create harmful likenesses of people, called “deepfakes.” These are able to map one person’s face over another, making it look like they said something they didn’t say, or even making it appear as if they participated in a pornographic film. Again, this is usually done without the consent of the target, and can be incredibly harmful to their image. So how do we protect content and entities from this form of piracy? Is there a sufficient criminal charge & punishment we could build into our legal system for creating deepfakes or scraping copyrighted art off the web? Would a fee be sufficient? Jail time? Consider another way piracy enters the AI conversation: academics. Recently, academic institutions have been abuzz trying to counteract students using ChatGPT for written assignments. On one hand, the chatbot most certainly could have infringed on some copyrighted writing; no doubt it plagiarized several times in the process of writing a given paper. However, copyright enforcement is pretty loose when it comes to academic non-profit use, and chances are low you’ll be prosecuted for it. Schools are still interested in punishing students for plagiarism and academic dishonesty. I would argue, though, that the student punishes himself enough by not engaging in the act of learning, particularly in postsecondary education, where you have a lot of choices and are paying for an education. Rather, I think ChatGPT has brought to light an interesting weakness in academic institutions; if the questions provided on an assignment are simple enough for a function that just aggregates information to answer, then the questions are not of a sufficient quality to induce deeper thinking. If students are turning to this function for answers rather than thinking about it themselves, maybe it’s a topic that doesn’t even merit discussion or deep thinking. This claim may weaken as AI becomes able to do more and more complex tasks, but it still matches up pretty closely with the narrow nous that I pointed out: AI is unlikely to develop intuition or critical thought on its own, but we humans are blessed with that from birth, and we should continue to exercise it. If schools fail to exercise the mind, they need look no further when seeking someone to blame.

Φ3.5 [Trust]
As I mentioned earlier, self-driving cars have also become a recent topic of interest for AI. Self-driving cars effectively pose a practical application of the trolley problem for philosophers. Take the image below for example:

[Image: a self-driving car that cannot stop before a crosswalk, with a baby in one lane and an old lady in the other.]
The image presents a sort of trolley problem for self-driving cars, posing the question: if the car cannot stop in time before reaching the crosswalk, should it try to avoid hitting the baby or the old lady (assuming it can differentiate between the two)? The situation provided is admittedly a little ridiculous, but it points out a disconcerting truth; normally, in car crashes, we can attribute some level of panic and delay to the driver’s actions. However, we are now presented with situations like this that we have to decide ahead of time, setting future deaths into motion. So, what’s your decision? Baby or old lady? What kind of rules do we set for the emergency behavior of self-driving vehicles? What metrics do we use to weigh the merit of people and pre-determine our willingness to hit them with a car? A sacrifice condition should also be considered. A noble enough human driver might swerve off the road entirely to avoid hitting both entities, putting themselves in danger instead. So under what conditions should a car be forced to sacrifice itself - and potentially its rider(s)? What is the magic number, the quantifiable amount of predictable damage that would justify that? With that question, a business owner is between a rock and a hard place, so to speak. If you build in a sacrifice condition to your car, who will willingly buy it, knowing that it is a machine willing to sacrifice them? On the other hand, if you do not build in a sacrifice condition, then you justify and set into motion future atrocities.
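To see how concrete this dilemma becomes for an engineer, consider a hedged sketch (Python; every weight and threshold below is an invented placeholder, which is precisely the problem this section points at):

# Hypothetical emergency-decision stub for a self-driving car.
# Someone must literally type in these numbers; nature supplies none
# of them, and each one encodes an ethical judgment.
HARM_WEIGHTS = {"baby": 1.0, "old_lady": 1.0, "rider": 1.0}
SACRIFICE_THRESHOLD = 2.0  # the "magic number" of expected harm

def choose_maneuver(endangered_by):
    # endangered_by maps each maneuver to the people it would endanger.
    def cost(victims):
        return sum(HARM_WEIGHTS[v] for v in victims)
    best = min(endangered_by, key=lambda m: cost(endangered_by[m]))
    # Sacrifice condition: if even the best option is bad enough,
    # swerve off the road and endanger the rider instead.
    if cost(endangered_by[best]) >= SACRIFICE_THRESHOLD:
        return "swerve_off_road"
    return best

# With equal weights the tie is broken arbitrarily -- exactly the kind
# of choice the text says must be settled ahead of time.
print(choose_maneuver({"straight": ["baby"], "swerve": ["old_lady"]}))

Whether SACRIFICE_THRESHOLD should be 2.0, 10.0, or effectively infinity is exactly the business owner’s rock and hard place.
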
Φ3.6 [Accessibility]
Self-driving cars are still a vision set in the future. The technology is there, but we aren’t ready to implement it all the way; it’s going to be a slow process, so we should also turn our attention to more present issues. There are already ways that we, as a society, are failing to use AI and other technology in the most optimal way. Take a look at automatically generated captioning; this is an excellent feature now touted by most online video sharing platforms. Automatic captions significantly ease the burden of users having to manually type their words out in order to be accessible. Instead, you can allow the AI to listen to you and produce a set of timed captions for your video. Creators are also able to go in and easily modify the captions if there is an error in transcription. The Deaf community benefits massively from this; even if these automatically generated subtitles are not always perfect, it is a huge step forward from no accessibility at all. Deaf people have also become more than skilled at detecting when a word in subtitles is out of context, and can usually figure out what was actually said via some mixture of rewatching, lipreading, and context clues. It’s a great feature to have, but the problem is that most of these video sharing platforms have it yet don’t allow users to take full advantage of it. Automatic captions are disabled by default for each creator, and must be enabled before a user can turn them on locally. This could be done for a variety of reasons, but a primary one is this: if the automatic captions were to miscaption a public figure or a company advertisement in a way that reflected poorly on them, the platform is liable to be sued. I posit that this is unethical, financially motivated, and exclusionary toward the Deaf community. Unlike the deepfakes, which are usually targeted malicious smear attacks, this is an accessibility service, and users are well aware that there is a reasonable degree of error in the output. The fact that we are still justifying the use of bureaucratic red tape as an excuse to not implement such a far-reaching and useful technology is a sign that we have failed to be utilitarian, and we have failed the Deaf community.

Φ4 [Reflection]
As this paper draws to a close, I want readers to be mindful of my purpose in writing this. You are more than welcome to contest my points in Φ2 regarding the nature of the soul, mind, will, and humanity. Still, that doesn’t take away from the fact that there is a set of unsolved ethical problems in Φ3. Going forward, think about the technology you use every day. Are you using it optimally? Is there a systematic way that it needs to be implemented differently? Think about the ethics and consequences of the technology you use. If you use technology to commit an immoral act, are you the one to blame, or should we blame the creator of the technology in the first place? Think about how language and ideology play into the AI conversation. The chatbot and closed captioning issues hold different approaches and implications, but both are language-based; both are about ways to share information. Consider the examples provided and try to define for yourself the ethical boundaries of the techno-linguistic field. How important is accessibility in communication technology? Can the use or implementation of lies in communication technology be justified? How much is it okay to implement personal ideology into AI? We have to define its values or approach somehow, but consider the present dangers of globalized communication when paired with ideology. This is particularly so with public figures like Andrew Tate, whose ideology actively harms young men, and consequently, young women. That is only allowed to happen because we have software and an environment that allows it. If a language model were to pick up the same phrases and begin sharing them, things could rapidly snowball and devolve, as they did with Tay, the chatbot.
