Final Essay
Bora Toker
Faculty of Engineering
David O’Regan
12 January 2024
In today’s world, disinformation is a serious, everyday problem. Its sources are many: social media, journalists with unethical intentions, propaganda, and so on. Now that AI has spread worldwide, the working logic of AI makes it inevitable that disinformation affects AI as well. As a result, an AI system can hold wrong information and fail to answer even simple questions correctly. For instance, when a user asked ChatGPT whether any country name starts with the letter “V”, ChatGPT said no, even though such countries exist, for example Vietnam and the Vatican. Moreover, information is constantly changing, so there is a possibility of outdated information in an AI’s database; ChatGPT’s data, for instance, only extends to September 2021. But how does the working logic of AI affect the accuracy of its data, and why does it affect the accuracy of its answers? Despite the complexity of AI algorithms, the working logic of AI is in fact quite simple: when a user gives an input, the AI retrieves information and data from its database and writes the statistically most likely output. Lanier (2023) explains AI with the analogy of a “new version of Wikipedia”, but with bigger data, merged together with statistics, writing sentences in order. Yet when AI receives data, there is always a possibility of receiving wrong or outdated information, since it collects data from across the world. This can lead to negative outcomes such as manipulation and propaganda. In such cases, the developer would be equally responsible with the person who used the false information to manipulate or propagandize, because paying attention to disinformation and preventing it is the computer scientists’ responsibility. Douglas et al. (2021) likewise argue that since the human developer is responsible for designing and implementing these aspects of the system, the developer clearly bears the responsibility for any flaws in the resulting design (p. 276). In summary, disinformation is one of the most vital issues for AI, and it has the potential to create adverse consequences; if such a consequence occurs, scientists are responsible for the outcome, so they definitely should not ignore this issue.
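To make the working logic described above concrete, the following is a minimal, purely illustrative sketch in Python (an assumption for illustration, not ChatGPT’s actual implementation) of a statistical next-word generator. It counts which word most often follows each word in its “training” text and then greedily emits the most likely continuation; if the data contains a false claim more often than the truth, the model reproduces the false claim.

```python
from collections import Counter, defaultdict

# Toy "training data": whatever text the model ingests, true or false.
# Here the false claim (no country starts with "v") appears more often,
# mirroring widespread disinformation.
corpus = ("no country starts with v . " * 2
          + "vietnam is a country . ").split()

# Count bigram statistics: for each word, how often each following word occurs.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def generate(start: str, length: int = 5) -> str:
    """Greedily emit the statistically most likely continuation."""
    words = [start]
    for _ in range(length):
        counts = next_word_counts[words[-1]]
        if not counts:
            break
        words.append(counts.most_common(1)[0][0])
    return " ".join(words)

# The model simply mirrors the statistics of its data, false claim included.
print(generate("no"))  # -> "no country starts with v ."
```

This sketch is of course a caricature of a large language model, but it illustrates the point above: a purely statistical generator has no notion of truth and can only echo whatever its data contains.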
Transparency, privacy, and honesty are already heated, everyday topics of debate, especially regarding social media. It is well known that huge software companies such as Google, Microsoft, Twitter, and Meta track people’s private information and leak it to third parties; furthermore, companies generally do not tend to reveal their work, and they can even act against their own policies and stated purposes. Unfortunately, AI companies are doing these things too. For instance, OpenAI was founded in 2015 as a non-profit organization aiming to research the possible dangers of AI. Yet in 2023, OpenAI released GPT-4 behind a $20 monthly subscription, and in the GPT-4 report no technical information was given to the public, which the company admitted openly in that report. Moreover, ChatGPT shares users’ data with third parties for various purposes, such as training the AI. These attitudes clearly undermine the recognition of the significance of privacy. Brey (2010) similarly notes that using software that includes spyware or discloses personal data to external parties negatively impacts the appreciation of privacy (p. 46). With behaviors like these, it is almost impossible to gain people’s trust, especially for a newly introduced and doubted technology like AI. Without trust, the development of AI will slow drastically and will not last long, because the public is, after all, the main resource of that development. Wolpe (2006) argues that society entrusts scientists with public trust and privileges, providing them with access to funding, materials, and public institutions, and even offering their bodies as subjects for research (p. 1025). To summarize, maintaining the sustainable development of AI is incompatible with ignoring the moral values of transparency, privacy, and honesty; thus scientists absolutely must not omit these values.
References

Ammanath, B. (2021). Thinking through the ethics of new tech…before there’s a problem.

Brey, P. (2010). Values in technology and disclosive computer ethics.

Douglas, D. M., Howard, D., & Lacey, J. (2021). Moral responsibility for computationally designed products.

Lanier, J. (2023). There is no A.I.: There are ways of controlling the new technology, but first we have to stop mythologizing it. The New Yorker.

Steen, M. (2023). Ethical perspectives on ChatGPT.

Wolpe, P. R. (2006). Reasons scientists avoid thinking about ethics.