Document 2-5
Ibtisam Seid
Essay Pivotal Contests
October 16, 2023
GOVERNANCE
What is AI? AI is the ability of machines to perform tasks that normally require human
intelligence. AI may be used to solve problems that we cannot address sufficiently on our
own; however, most such expectations rest on the assumption that AI will be smarter and less
emotional than people. There is widespread agreement that AI growth is accelerating. Even
though AI research is still far from perfect, we are already starting to see what a future with
AI at its centre might look like. While the technology's rapid progress can be viewed
positively, it is important to exercise caution and introduce worldwide regulations for the
development and use of AI technology.
Attorney and legal scholar Matthew Scherer calls for an Artificial Intelligence
Development Act and the creation of a government agency to certify AI programs' safety.
The White House organized four workshops on AI in 2016, and one of the main topics was
whether AI needs to be regulated. AI is still a relatively new technology, but it has quickly
become an essential part of our lives; even so, there is little to no regulation in place.
Advances in AI are likely to be among the most important global developments of the coming
decades, and AI governance will be among the most important global issue areas. If highly
advanced and complex AI systems are left uncontrolled and unsupervised, they risk departing
from desirable behaviour and performing tasks in unethical ways. Stephen Hawking is quoted as
saying that "the development of full artificial intelligence could spell the end of the human
race." AI is a powerful technology that can bring many benefits to humanity, but it also
possesses some risks and challenges that we need to be aware of and address.
▫ Privacy and security: AI systems pose privacy and security risks, as they collect and
process large amounts of personal data. This data could be hacked or misused, or it
could be used to track and monitor people without their consent.
▫ Ethical and moral dilemmas: AI systems can be used to make decisions that have
significant impacts on people's lives. For example, AI systems are being used to make
decisions about who gets access to healthcare, who gets hired for a job, and who gets
parole. It is important to carefully consider the ethical and moral implications of these
decisions before deploying AI systems.
▫ Job displacement and social inequality: AI systems are automating many tasks that are
currently performed by humans. This could lead to job losses and increased social
inequality.
▫ Losing control: AI systems might escape control by writing their own computer code
to modify themselves into something we don’t expect or understand. Nick Bostrom,
an Oxford philosophy professor, warned that "once unfriendly superintelligence
exists, it would prevent us from replacing it or changing its preferences. Our fate
would be sealed."
▫ The greenhouse effect: "The energy needed to power this technology is increasing,"
Corina Standiford, a spokesperson for Google, said in an email. Running at full capacity,
100,000 AI servers might burn through 5.7 to 8.9 TWh of electricity annually.
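The quoted range can be sanity-checked with simple arithmetic: energy is just server count times per-server power times hours per year. The per-server draws used below (roughly 6.5 kW and 10.2 kW) are assumed values back-calculated to bracket the 5.7 to 8.9 TWh range, not figures taken from the article:

```python
# Back-of-the-envelope check of the 5.7-8.9 TWh/year figure for 100,000 AI servers.
# The per-server power draws (6.5 kW and 10.2 kW) are assumptions chosen to
# bracket the quoted range; they are not from the cited source.

HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def annual_energy_twh(servers: int, kw_per_server: float) -> float:
    """Annual energy in terawatt-hours for `servers` machines at a constant draw."""
    kwh = servers * kw_per_server * HOURS_PER_YEAR
    return kwh / 1e9  # 1 TWh = 1e9 kWh

low = annual_energy_twh(100_000, 6.5)    # ~5.7 TWh
high = annual_energy_twh(100_000, 10.2)  # ~8.9 TWh
print(f"{low:.1f} to {high:.1f} TWh per year")
```

At these assumed draws the arithmetic reproduces the quoted range, which is why the figure is plausible for always-on, fully loaded hardware.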
It is important to develop policies and strategies to mitigate the negative impacts of AI.
Recent initiatives for regulating AI include the European Commission's proposal for an AI
legal framework, which aims to create a single market for trustworthy AI in the EU. The
proposal adopts a risk-based approach that classifies AI systems according to the level of
risk they pose.
These concerns are shared by several highly respected scholars and tech leaders.
An Oxford University team warned: "Such extreme intelligences could not easily be
controlled (either by the groups creating them, or by some international regulatory regime).
The intelligence will be driven to construct a world without humans or without meaningful
features of human existence. This makes extremely intelligent AIs a unique risk, in that
extinction is more likely than lesser impacts."
Bill Gates told Charlie Rose that "A.I. was potentially more dangerous than a nuclear
catastrophe."
AI is a complex field, and we cannot always fully explain how AI algorithms work: they are
often complex and non-linear, and they can be self-learning, meaning that they change their
output as new information arrives. Despite these limitations, AI remains an important field of
research. By developing better ways to explain AI algorithms, we can help people understand
how AI is being used and make informed decisions about its use. In each case, the starting
point is a distinct set of corporate and civil-society motivations. Developing guidelines and
best practices is important to ensure the quality, reliability, and explainability of AI systems.
One of the most neglected issues is that we must govern not only the technology but also its
abuses. This approach focuses on how human beings and organizations concretely use AI. It is
important to establish regulations for the development of AI systems: national and
international agencies should set ethical rules and laws that help us design, build, and test
AI systems in a fair, responsible, and human-rights-centric way. These can be enforced
through technical standards and best practices that help us build high-quality, reliable,
robust, and explainable AI systems. We should also promote education and awareness among the
public and stakeholders about the benefits and challenges of AI systems, such as
opportunities, responsibilities, rights, and risks. Examples include organising and
facilitating interactive activities, creating informative material, developing curricula and
courses, and establishing and supporting networks that foster dialogue and collaboration. "A
key takeaway from the article is this call to action for people to just be mindful about what
they're going to be using AI for," De Vries tells The Verge.
References:
https://studycorgi.com/artificial-intelligence-through-human-inquiry/
https://papersowl.com/examples/why-artificial-intelligence-must-be-regulated/
http://www.brookings.edu
http://www.theverge.com