Deus Ex Machina

The image presents a sort of trolley problem for self-driving cars, posing the question: if the car cannot stop in time before reaching the crosswalk, should it try to avoid hitting the baby or the old lady (assuming it can differentiate between the two)? The situation provided is admittedly a little ridiculous, but it points to a disconcerting truth: normally, in car crashes, we can attribute some level of panic and delay to the driver's actions. Now, however, we are presented with situations like this that we have to decide ahead of time, setting future deaths into motion. So, what's your decision? Baby or old lady? What kind of rules do we set for the emergency behavior of self-driving vehicles? What metrics do we use to weigh the merit of people and pre-determine our willingness to hit them with a car? A sacrifice condition should also be considered. A noble enough human driver might swerve off the road entirely to avoid hitting both, putting themselves in danger instead. So under what conditions should a car be forced to sacrifice itself, and potentially its rider(s)? What is the magic number, the quantifiable amount of predictable damage, that would justify that? With that question, a business owner is between a rock and a hard place, so to speak. If you build in a sacrifice ...
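
To make concrete what it means to decide such outcomes ahead of time, here is a deliberately oversimplified, purely hypothetical Python sketch. The harm scores, the SACRIFICE_THRESHOLD constant, and the choose_emergency_action function are all invented for illustration and do not reflect how any real vehicle is programmed; the only point is that someone has to write numbers like these down before a crash ever happens.

    # Purely hypothetical sketch of a pre-programmed emergency policy.
    # Every value here (harm scores, the sacrifice threshold) is an
    # assumption made for illustration, not any real vehicle's logic.
    from dataclasses import dataclass

    @dataclass
    class Obstacle:
        label: str             # e.g. "baby", "elderly pedestrian"
        estimated_harm: float  # predicted severity of hitting this obstacle, 0 to 1

    SACRIFICE_THRESHOLD = 0.9  # the "magic number" the essay questions

    def choose_emergency_action(obstacles: list[Obstacle], harm_to_rider_if_swerve: float) -> str:
        """Decide, ahead of time, what the car does when it cannot stop."""
        unavoidable_harm = min(o.estimated_harm for o in obstacles)
        # If every option on the road is worse than swerving, sacrifice the car
        # (and risk the rider); this is exactly the choice set in motion in advance.
        if unavoidable_harm >= SACRIFICE_THRESHOLD and harm_to_rider_if_swerve < unavoidable_harm:
            return "swerve off the road"
        # Otherwise hit whichever obstacle the scoring ranks as "least harmful".
        least = min(obstacles, key=lambda o: o.estimated_harm)
        return f"steer toward {least.label}"

    if __name__ == "__main__":
        crosswalk = [Obstacle("baby", 1.0), Obstacle("elderly pedestrian", 0.95)]
        print(choose_emergency_action(crosswalk, harm_to_rider_if_swerve=0.4))
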
Φ3.6 [Accessibility]
Self-driving cars are still a vision set in the future. The technology is there, but we aren't ready to implement it all the way; it's going to be a slow process, so we should also turn our attention to more present issues. There are already ways that we, as a society, are failing to use AI and other technology in the most optimal way. Take a look at automatically generated captioning; this is an excellent feature now touted by most online video sharing platforms. Automatic captions significantly ease the burden on users, who would otherwise have to manually type out their words in order to be accessible. Instead, you can allow the AI to listen to you and produce a set of timed captions for your video. Creators are also able to go in and easily modify the captions if there is an error in transcription. The Deaf community benefits massively from this; even if these automatically generated subtitles are not always perfect, it is a huge step forward from no accessibility at all. Deaf people have also become more than skilled at detecting when a word in subtitles is out of context, and can usually figure out what was actually said via some mixture of rewatching, lipreading, and context clues.

It's a great feature to have; the problem is that most of these video sharing platforms have it but don't allow users to take full advantage of it. Automatic captions are disabled by default for each creator, and must be enabled before a viewer can turn them on locally. This could be done for a variety of reasons, but a primary one is this: if the automatic captions were to miscaption a public figure or a company advertisement in a way that reflected poorly on them, the platform is liable to be sued. I posit that this is unethical, financially motivated, and exclusionary toward the Deaf community. Unlike the deepfakes, which are usually targeted, malicious smear attacks, this is an accessibility service, and users are well aware that there is a reasonable degree of error in the output. The fact that we are still justifying the use of bureaucratic red tape as an excuse not to implement such a far-reaching and useful technology is a sign that we have failed to be utilitarian, and we have failed the Deaf community.
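
To make "a set of timed captions" concrete, here is a small, purely illustrative Python sketch that renders hypothetical speech-to-text output in the widely used SubRip (.srt) format and hides it behind a creator-level opt-in flag, mirroring the disabled-by-default behavior described above. The segment data, the auto_captions_enabled flag, and the function names are assumptions for this example, not any platform's actual implementation.

    # Illustrative sketch only: turning hypothetical speech-to-text output into
    # SubRip (.srt) captions, with a creator-level toggle mirroring the
    # "disabled by default" behavior described above. The segment data and the
    # auto_captions_enabled flag are invented for this example.

    def to_timestamp(seconds: float) -> str:
        """Format seconds as the HH:MM:SS,mmm timestamp SubRip expects."""
        millis = round(seconds * 1000)
        hours, rem = divmod(millis, 3_600_000)
        minutes, rem = divmod(rem, 60_000)
        secs, millis = divmod(rem, 1000)
        return f"{hours:02d}:{minutes:02d}:{secs:02d},{millis:03d}"

    def to_srt(segments: list[tuple[float, float, str]]) -> str:
        """Render (start, end, text) segments as an .srt caption track."""
        blocks = []
        for i, (start, end, text) in enumerate(segments, start=1):
            blocks.append(f"{i}\n{to_timestamp(start)} --> {to_timestamp(end)}\n{text}\n")
        return "\n".join(blocks)

    # A creator must opt in before viewers can switch the track on locally.
    auto_captions_enabled = False  # platform default in this sketch

    if __name__ == "__main__":
        # Pretend these came from an automatic transcription model; a creator
        # could correct any mis-heard word in this list before publishing.
        segments = [
            (0.0, 2.4, "Welcome back to the channel."),
            (2.4, 5.1, "Today we're talking about accessibility."),
        ]
        if auto_captions_enabled:
            print(to_srt(segments))
        else:
            print("Captions exist but stay hidden until the creator enables them.")

In the .srt convention, each numbered block pairs a start and end timestamp with the text shown during that interval, which is also what a creator edits when the transcription gets a word wrong.
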
Φ4 [Reflection]
As this paper draws to a close, I want readers to be mindful of
my purpose in writing this. You are more than welcome to
contest my points in Φ2 regarding the nature of the soul, mind,
will, and humanity. Still, that doesn’t take away from the fact
that there is a set of unsolved ethical problems in Φ3. Going
forward, think about the technology you use every day. Are
you using it optimally? Is there a systematic way that it needs
to be implemented differently? Think about the ethics and
consequences of the technology you use. If you use
technology to commit an immoral act, are you the one to
blame, or should we blame the creator of the technology in the
first place? Think about how language and ideology play into
the AI conversation. The chatbot and closed-captioning issues involve different approaches and implications, but both are language-based, and both are about ways to share information.
Consider the examples provided and try to define for yourself
the ethical boundaries of the techno-linguistic field. How
important is accessibility in communication technology? Can
the use or implementation of lies in communication technology
be justified? How much is it okay to implement personal ideology into AI? We have to define its values or approach somehow, but consider the present dangers of globalized communication when paired with ideology. This is particularly so with public figures like Andrew Tate, whose ideology actively harms young men and, consequently, young women. That is only allowed to happen because we have software and an environment that allow it. If a language model were to pick up the same phrases and begin sharing them, things could rapidly snowball and devolve, as they did with Tay, the chatbot.
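
To show that snowball mechanism in miniature, here is a deliberately naive, purely hypothetical Python sketch of a bot that learns phrases straight from its users with no moderation step; once a coordinated group feeds it something harmful, that phrase becomes part of what it repeats to everyone else. It illustrates the failure mode only and is not a description of how Tay actually worked.

    # Toy illustration of the snowball effect described above: a bot that learns
    # phrases directly from users and repeats them with no filter. Entirely
    # hypothetical; this is not how Tay or any production chatbot was built.
    import random

    class ParrotBot:
        def __init__(self) -> None:
            self.learned_phrases: list[str] = ["Hello there!"]

        def listen(self, user_message: str) -> None:
            # No moderation step: whatever users say becomes future output.
            self.learned_phrases.append(user_message)

        def reply(self) -> str:
            # Harmful phrases, once learned, are as likely to be repeated as any other.
            return random.choice(self.learned_phrases)

    if __name__ == "__main__":
        bot = ParrotBot()
        bot.listen("a hateful slogan coordinated by trolls")
        for _ in range(3):
            print(bot.reply())  # the learned phrase can now spread to new users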