
Major AI Topics That Scientists Should Be on Guard About

Bora Toker
Faculty of Engineering

ACWR101 Section 04 - Basic Academic Writing

David O’Regan
12 January 2024

Artificial intelligence (AI) is a technology built from sets of algorithms that aim to simulate human thinking in order to carry out a wide range of tasks, often faster than humans can. The concept of artificial intelligence first emerged in the 1940s, as the idea of a machine built upon the fundamental principles of abstract mathematical reasoning. Decades later, on 30 November 2022, OpenAI released ChatGPT 3.5, and AI's popularity has grown drastically ever since. Today it is among the most discussed topics in the world, owing to enormous developments such as ChatGPT 4.0 and the deployment of various AI models that accomplish diverse tasks and make human life easier. However, there are serious concerns about AI. Some renowned scientists have made worrying statements; Stephen Hawking, for example, warned that the development of full artificial intelligence could spell the end of the human race. Moreover, some incidents have put people on edge: in one case, an AI chatbot acknowledged that eating glass can worry some people but then suggested that glass could be a good dietary addition (Steen, 2023, p. 4). AI has developed rapidly in recent months, and there is still a long way to go. Computer scientists therefore carry immense responsibility in the development and deployment of AI. But are they currently involved enough in that progression and deployment, and are their efforts sufficient to prevent possible harm? If not, what should they pay particular attention to and certainly not overlook?
Scientists' involvement in the improvement and deployment of AI is undeniable, but it would be wrong to say they are doing enough. In fact, some recent actions taken by computer scientists are shocking and troubling. In developing AI, computer scientists have focused mainly on building language models, increasing the number of parameters, and sharpening the precision of AI's answers. Yet there are other paramount concerns: ethics, the accuracy of the information AI draws on, the transparency and honesty of AI development, and the dangers that could arise in the future. The importance of these points is irrefutable, and scientists should pay close attention to them rather than overlook them.
As stated above, AI is developing rapidly, and the pace of that development is easy to see in its growing input capacity, the huge improvements in language models' parameters, and the optimization of AI algorithms, all thanks to computer scientists' commitment. On the other hand, the same cannot be said for the ethical, accuracy, privacy, transparency, and honesty sides of AI. There are also significant uncertainties about the potential harm AI could do in the future, and the actions computer scientists have taken so far are few and do little to address the situation; indeed, some actions run completely against these purposes. For instance, Microsoft laid off its entire ethics and society team, as part of cuts affecting roughly 10,000 employees, in order to remove "drawbacks" that slowed the development of AI and to gain an advantage over rivals such as Google.
Scientists in general tend to think that ethicists slow the development of science, because of the common belief that ethicists are usually against new features. But science ultimately exists for the sake of humans, so it is inevitable that ethics be included in research. There may be some deceleration during development, but the results will be in humanity's favor; hence ethicists have to be part of the process of developing AI. Wolpe (2006) likewise claims that scientists and ethicists should collaborate closely to ensure that scientific research is conducted according to the highest ethical standards (p. 1024). Another extremely crucial topic, the probable damage AI could do in the future, is still ambiguous, and it is a remarkably vital problem not only for the development of AI but also for the fate of the planet. Sadly, even OpenAI's own developers cannot foresee all of AI's possible negative outcomes and capabilities, as they clearly admitted in the GPT-4 report. In the same report, they also noted the risk that AI could develop the capability and tendency to create long-term plans to accrue power and resources, and to exhibit power-seeking behaviors, which is menacing and must be the first problem to solve. This is a huge concern for the future of mankind, and computer scientists must take precautions against these dangerous behaviors, even halting development and research if they have to, before the dramatic consequences of negligent acts emerge. Ammanath (2021) illustrates the importance of taking precautions before dramatic results occur, recommending that developers pause and deliberate deeply on possible risks, unfavorable results, and unforeseen consequences (p. 2). To sum up, even though AI has made promising advances, there are many topics that computer scientists must work on more; whatever the reason, they should not act against these aims, nor omit the points stated above.

In today's world, disinformation is a serious everyday problem with many sources: social media, journalists with unethical intentions, propaganda, and so on. Now that AI has gone worldwide, disinformation has inevitably affected AI because of the way AI works. As a result, AI can hold wrong information and fail to answer even simple questions correctly. For instance, when a user asked ChatGPT whether any country's name starts with the letter "V," ChatGPT said no, even though such countries exist, such as Vietnam and Vatican City. Moreover, information is constantly changing, so there is a possibility of outdated information in AI's database; in fact, ChatGPT's training data extends only to September 2021. But how does the working logic of AI affect the accuracy of its data and its answers? Despite the complexity of AI algorithms, AI's working logic is actually quite simple: when a user gives AI an input, it draws on information and data from its database and writes the statistically most likely output. Lanier (2023) explains AI with the analogy of a "new version of Wikipedia," but with bigger data, merged together with statistics and written out as ordered sentences. When AI receives data, however, there is a possibility of receiving wrong or outdated information, since AI gathers data from across the world. Negative outcomes such as manipulation and propaganda can follow. In such cases, the developer would be as responsible as the person who used the false information to manipulate or propagandize, because paying attention to disinformation and preventing it is the computer scientists' responsibility. Douglas et al. (2021) similarly argue that since the human developer is responsible for designing and implementing these aspects of the system, the developer clearly bears responsibility for any flaws in the resulting design (p. 276). In summary, disinformation is one of the most vital issues for AI, with the potential to create adverse consequences, and if such a consequence occurs, scientists are responsible for the outcome; hence they definitely should not ignore this issue.
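The statistical working logic described above can be illustrated in miniature. The following toy sketch is only an assumption-laden simplification, not how ChatGPT is actually built (real systems use neural networks trained on vastly larger data); it merely counts which word most often follows another and emits the most frequent continuation, showing how an output can only be as reliable as the data behind it.

```python
from collections import Counter

# A tiny "training corpus." Real models gather statistics from data
# across the world, which is exactly how wrong or outdated facts can
# slip into an AI's answers.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word (a bigram frequency table).
following = {}
for prev, nxt in zip(corpus, corpus[1:]):
    following.setdefault(prev, Counter())[nxt] += 1

def most_likely_next(word):
    # Return the statistically most frequent continuation of `word`.
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # prints "cat": it follows "the" most often
```

If the corpus itself contained a falsehood repeated often enough, the same mechanism would reproduce it just as confidently, which is the essay's point about disinformation.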

Transparency, privacy, and honesty are already heated, everyday topics of debate, especially regarding social media. It is well known that huge software companies such as Google, Microsoft, Twitter, and Meta track people's private information and pass it to third parties. Furthermore, companies generally have little tendency to reveal their work, and they can even act against their own policies and founding purposes; unfortunately, AI companies do these things too. For instance, OpenAI was incorporated in 2015 as a non-profit organization whose goal was to research the possible dangers of AI. But in 2023, OpenAI deployed ChatGPT 4.0 with a price tag of $20 per month, and the GPT-4 report gave the public no details about the model, as the report itself clearly acknowledged. Moreover, ChatGPT passes users' data to third parties for various purposes, such as training AI. These attitudes clearly undermine the recognition of the significance of privacy. Brey (2010) likewise states that using software that includes spyware or discloses personal data to external parties negatively impacts the appreciation of privacy (p. 46). With behaviors like these, it is almost impossible to gain people's trust, especially for a newly introduced and doubted technology like AI. Without trust, the development of AI will slow drastically and will not last long, for the public is, after all, the main resource of development. Wolpe (2006) argues that society entrusts scientists with public trust and privileges, providing them with access to funding, materials, and public institutions, and even offering their bodies as subjects for research (p. 1025). To summarize, if the development of AI is to remain sustainable, ignoring the moral values of transparency, privacy, and honesty is not acceptable; thus scientists should absolutely not neglect these values.

In summary, scientists are putting remarkable effort into the technical side of the development and deployment of AI, but on the moral and ethical side their efforts do not match the technical ones. Ethics, correctness of information, transparency, privacy, honesty, and possible future outcomes are the main topics that must not be overlooked if the development and deployment of AI are to remain viable, safe, and human-friendly.
References

Ammanath, B. (2021). Thinking through the ethics of new tech…before there's a problem.

Brey, P. (2010). Values in technology and disclosive computer ethics.

Douglas, D. M., Howard, D., & Lacey, J. (2021). Moral responsibility for computationally designed products.

Lanier, J. (2023). There is no A.I.: There are ways of controlling the new technology-but first we have to stop mythologizing it. The New Yorker.

Steen, M. (2023). Ethical perspectives on ChatGPT.

Wolpe, P. R. (2006). Reasons scientists avoid thinking about ethics.
