Artificial Intelligence Skill Based Learning: Team AISV1


Module 12
AI Ethics & Bias-II
Learning Objectives
• Critically think about the costs and benefits of AI technology
• Gain awareness about AI bias and AI access
We have seen how AI is impacting all aspects of our lives and how the world economy and societies are being transformed by AI. With this deep level of impact, we also need to be aware of the challenges if anything goes wrong with AI, and of how we can mitigate the risks thus posed.
Many people are worried that more AI in their lives will lead to loss of jobs, lack of privacy, and governments watching their every move. Imagine if such systems got hacked: there are massive risks of undermining everyone's privacy and putting lives in danger. There is also an ongoing debate around companies collecting user data through their AI devices and apps, and then using this data to display ads and generate profit without the users fully understanding it.
When we look at these issues with a legal frame of mind, we should realize that our current legal system rests on human judgement, and human judgement is prone to errors, biases and prejudices. While we explore AI systems, we also need to think about the laws governing AI and the risks around data privacy.
The only way to build trust in AI systems is explainability. Let us understand what it exactly means. DARPA (the US Defense Advanced Research Projects Agency) initiated the Explainable AI (XAI) program with the following aims (a small sketch follows the list):
• Producing more explainable AI models while maintaining a high level of learning performance
• Augmenting human intelligence and decision-making, enabling humans to understand and appropriately trust AI systems
• Building machine learning algorithms with the inbuilt capability to explain their logic, describe their strengths and weaknesses, and convey a clear understanding of their future behaviour
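
To make the first aim concrete, here is a minimal sketch in Python. It assumes scikit-learn, and the loan-approval features and toy data are hypothetical, invented purely for illustration; the point is that some models, such as shallow decision trees, can print their decision logic as human-readable rules.

# A minimal sketch of one route to explainability: a shallow decision tree
# whose learned logic can be printed as human-readable if/else rules.
# The loan-approval features and toy data are hypothetical.
from sklearn.tree import DecisionTreeClassifier, export_text

features = ["income", "credit_score", "existing_loans"]
X = [
    [30000, 600, 2],
    [80000, 750, 0],
    [45000, 700, 1],
    [25000, 550, 3],
]
y = [0, 1, 1, 0]  # 1 = loan approved, 0 = loan rejected

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)

# export_text renders the learned tree as readable rules, so a human
# reviewer can inspect exactly why each decision would be made.
print(export_text(model, feature_names=features))

A deeper neural network might score better on many tasks but could not be printed out this way; that trade-off between performance and explainability is exactly what the XAI program targets.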
Security in AI
AI systems need to be secure and reliable if their usage is to grow. We need to understand that every new technology goes through a transition, and it takes time before it becomes reliable. Think of airplanes when they first appeared versus now: their reliability has increased many times over the years. How? Every time there was an issue or an accident, we analyzed it, fixed it, and improved the design so that it would not repeat. By going through this process again and again, we were able to build trust and reliability. Now, no one questions the safety of planes before taking a flight, right?
The key solutions around reliability could be:
• Testing guidelines for AI systems before they are used by humans
• Liability of AI systems for any consequences they cause
• Defining AI consequences and comparing them with what a human would do in the same scenario, to identify gaps. Don't expect miracles from AI!
The United Kingdom is taking a lead in this direction by investing 9 million pounds to establish the Centre for Data Ethics and Innovation. This centre will ensure ethical, safe and innovative uses of data. This may include the possibility of establishing data trusts to facilitate easy and secure sharing of data.
A consortium or an Ethics Council may be formed to define standard practice. It would be expected that all Centres of Excellence adhere to standard good ethical practices while developing AI technology and products.
So, if AI and machines are to take over in the future, how do we make their decision making reliable? How do we make their decisions predictable and certain under all circumstances? What will make these autonomous, self-improving, independent machines and software trustworthy?
This is where ethics comes into the picture. Ethics is loosely defined as a set of moral principles that guide the actions of an individual or a group, helping to determine what is good or right. Since technology itself does not possess moral or ethical qualities, it needs to be fed with human ethics. When designed and tested well, it arrives at predictable outputs for predictable inputs via such a set of rules or decision paths.
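
To illustrate what "feeding human ethics" as rules or decision paths can look like, here is a minimal, hypothetical sketch in Python. The scenario (an autonomous delivery robot) and the rules are invented for illustration only.

# A hypothetical set of hand-written ethical rules for a delivery robot.
def choose_action(pedestrian_ahead: bool, battery_low: bool) -> str:
    # Rule 1: human safety always comes first.
    if pedestrian_ahead:
        return "stop"
    # Rule 2: protect the machine only when no human is at risk.
    if battery_low:
        return "return_to_base"
    # Default: carry on with the task.
    return "continue_delivery"

# Fixed rules give predictable outputs for predictable inputs,
# so the behaviour can be tested and audited in advance.
assert choose_action(True, True) == "stop"
assert choose_action(False, True) == "return_to_base"
assert choose_action(False, False) == "continue_delivery"

The value of such explicit decision paths is that they can be reviewed and tested before deployment; the two challenges below show why real AI systems are rarely this simple.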
But there are two challenges:
• First, how does the team of developers determine what is a good or right outcome, and for whom? Is this outcome universally good, or is it good only for some? Is this outcome good under certain contexts or situations and not under others? Is it good against certain standards but not against others? These discussions, and the questions and answers "chosen" by the team, are critical.
• Second, AI is an autonomous, self-learning and self-improving technology. This means ethics cannot simply be fed into it once; it does most of its decision-making itself, based on its own analysis of data.
There is a fundamental debate now that AI will change the way our society works, and it is very important to plan for such a society in advance, before AI becomes too deeply involved in our lives.
At the core of these two challenges is the problem of existing human biases, which enter AI systems through both developers and data. In the very process of its creation, technology becomes inherently biased by the people who create it. It exhibits the opinions, understanding and ethical stance of its creators. Thus, the ethics of a technology starts with the ethics of its creators.
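
To see how bias enters through data, here is a minimal, hypothetical sketch in Python (assuming scikit-learn). The toy hiring dataset is invented and deliberately skewed, so that the historical prejudice it contains resurfaces in the trained model.

# Each row: [years_of_experience, group], where group is encoded 0 or 1.
from sklearn.linear_model import LogisticRegression

X = [[5, 0], [6, 0], [4, 0], [5, 1], [6, 1], [4, 1]]
y = [1, 1, 1, 0, 0, 1]  # past hiring decisions, biased against group 1

model = LogisticRegression().fit(X, y)

# Two equally qualified candidates, differing only in group membership:
prob_group0 = model.predict_proba([[5, 0]])[0][1]
prob_group1 = model.predict_proba([[5, 1]])[0][1]
print(f"hire probability, group 0: {prob_group0:.2f}")
print(f"hire probability, group 1: {prob_group1:.2f}")
# The model reproduces the historical prejudice: group 1 scores lower
# despite identical experience, even though no one "programmed" the bias in.

No developer wrote a biased rule here; the bias arrived silently through the training data, which is exactly why both the data and the people who collect it must be examined.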
Recap Quiz: AI Ethics & Bias-II
https://forms.office.com/r/dgyv4gm4j1
