
Privacy and Artificial Intelligence: Problems and Opportunities

This resource introduces the topic of information privacy and artificial
intelligence in a broad context. It is intended for a non-technical readership
and does not provide legal advice. It should be emphasized that this paper
does not cover all of the ethical, technical, or legal challenges surrounding
artificial intelligence.

This resource aims to:

• Give a high-level overview of artificial intelligence (AI) and its applications in the public sector; and

• Draw attention to some of the opportunities and problems that AI poses for data privacy.

Introduction

In its most basic form, artificial intelligence (AI) is a branch of computer
science that aims to build systems capable of carrying out tasks typically
performed by humans. These tasks, which include perceiving sounds and images,
learning and adapting, reasoning, recognizing patterns, and making decisions,
can be characterized as intelligent. The term artificial intelligence (AI)
is used to refer to a broad range of related methods and tools, such as
robotics, machine learning, predictive analytics, and natural language processing.
The idea of artificial intelligence as we know it today dates back to the early
1940s and gained prominence with the introduction of the "Turing test" in 1950,
although the theory behind the technology has been debated since at least
Leibniz in the early 18th century.
Three factors have driven the recent rapid growth in the field of artificial
intelligence (AI): enhanced algorithms, greater networked computing power, and
the capacity to collect and store unprecedented volumes of data.1 In addition
to these technological advancements, many current developments are made
possible by a dramatic shift in the way we think about intelligent machines
that occurred during the 1960s.
Although many people are unaware of it, AI technologies have already found
practical applications in our daily lives. One of the features of artificial
intelligence (AI) is that, once developed, the technology ceases to be called AI
and becomes mainstream computing.2 Commonplace AI technology includes
things like having an artificial voice answer the phone or making movie
recommendations based on your tastes. Since these technologies are already
commonplace in our daily lives, people frequently forget that artificial
intelligence (AI) capabilities, such as speech recognition, natural language
processing, and predictive analytics, are at work.
AI has a vast array of benefits that can improve our lives. Among the many
benefits of artificial intelligence (AI) are reduced costs and increased efficiency,
significant advancements in healthcare and research, enhanced vehicle safety,
and overall convenience. However, the opportunities presented by AI also
present a number of legal and societal difficulties, as is the case with any new
technology.3
Terminology
AI is a field with a lot of technical jargon and terminology that is frequently
used interchangeably, which can be confusing, especially for people without a
technical background. The basic definitions of the terms used in this text are
provided below to help the general reader comprehend some of the jargon
related to artificial intelligence and the topics covered in this paper. This is
by no means a comprehensive list, nor is it meant to be technically deep.
Narrow, general and super artificial intelligence

The majority of AI that exists today is "narrow." This indicates that it has been
purposefully designed to be proficient in a single field. To emphasize its
capacity to supplement human intelligence rather than entirely replace it, it is
also occasionally referred to as "augmented intelligence." For instance, IBM's
Deep Blue computer from the 1990s could play chess better than humans, a
significant achievement in the history of artificial intelligence; but that is
where its intelligence ends. Artificial general intelligence (AGI), on the
other hand, is a level of intelligence that spans several domains.
The natural world already demonstrates the difference between narrow and
general intelligence. For example, bees and ants exhibit narrow intelligence:
bees know how to construct hives and ants know how to build nests. But this
intelligence is limited to a single domain; ants cannot construct a hive, and
bees cannot build a nest. In contrast, humans possess the ability to be
intelligent in a variety of domains and can pick up intelligence in new ones
through observation and experience. Building on the concept of artificial
general intelligence (AGI), artificial superintelligence is often defined as AI
that is both more general and more intelligent than humans.
Prominent author Nick Bostrom defines superintelligence as "an intellect that
is much smarter than the best human brains in practically every field,
including scientific creativity, general wisdom, and social skills".4

Ex Machina and Her are two examples of popular culture representations of AI
that depict AI as superintelligence. While artificial intelligence is a popular
concept in science fiction, there is substantial dispute about the likelihood,
imminence, and consequences of ever producing such technology. This kind of
portrayal can add to the hype and/or anxiety surrounding AI. The remainder of
this paper is restricted to narrow artificial intelligence, referred to simply
as AI from now on.

Big data

Big data and AI have a reciprocal relationship. Although big data analytics
processes already exist, a large portion of big data's real value can only be
realized through the use of AI techniques. Conversely, big data provides AI
with a vast reservoir of input data to grow and learn from. Although the term
"big data" has no set definition, it is typically used to refer to the vast
volumes of data that are generated and gathered in a range of ways.5
It is impossible to overstate the variety and volume of information that fall
under the umbrella of "big data." Virtually everything people do, from browsing
the internet to sharing and exchanging daily information with the government,
businesses, and social media to simply using a smartphone while out and about,
produces copious amounts of data about people, whether intentionally or not.
The breadth of data generated, gathered, and fed into AI systems is expected to
expand further into our personal lives as the Internet of Things (IoT) extends
the network into our physical surroundings and private spaces.6
Machine learning

Machine learning is a computer science technique that enables computers to
"learn" on their own. It is frequently described as AI, though it is really
just one aspect of AI. What sets machine learning apart from earlier types of
artificial intelligence is its dynamic ability to adapt to new data sets: the
machine teaches itself by absorbing data and developing its own logic from the
data it has examined.

There are two primary categories of machine learning: supervised and
unsupervised. Supervised learning requires a human to supply both the data and
the answer, so that the machine can learn the relationship between the two. In
unsupervised learning, the machine is fed large amounts of data, often big
data, and learns more autonomously, repeatedly identifying patterns and
insights on its own.

Suppose, for instance, you were interested in forecasting the price of a house.
Using a supervised learning technique, you might instruct the machine to
examine a number of attributes, such as the number of rooms and the presence of
a garden, and you would also supply the past prices of comparable houses so
that the algorithm can build a model of the relationship between those
attributes and price, and so reasonably forecast a house price from its
features. In an unsupervised learning setting, the machine would identify
patterns on its own, without being given past house prices or instructions
about which features matter. These methods are applied in different situations
and for different goals. Neither requires explicit programming about what to
search for, which gives the system some autonomy to develop its own logic and
spot trends that humans might otherwise have missed.7
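
To make the distinction concrete, here is a minimal sketch of both styles
applied to the house-price example above. It assumes the Python scikit-learn
library, and the rooms, gardens, and prices are invented toy figures; it is
illustrative only, not a production model.

```python
# A minimal sketch of the two learning styles described above, using
# scikit-learn and invented toy data purely for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

# Each row is a house: [number of rooms, has a garden (0/1)]
features = np.array([[2, 0], [3, 1], [4, 1], [5, 1], [2, 1], [4, 0]])

# Supervised learning: we also supply the "answer" -- past sale prices --
# so the model can learn the relationship between attributes and price.
prices = np.array([310_000, 420_000, 500_000, 610_000, 350_000, 460_000])
model = LinearRegression().fit(features, prices)
print("Predicted price for a 3-room house with a garden:",
      model.predict([[3, 1]])[0])

# Unsupervised learning: no prices and no instructions are given; the
# algorithm groups similar houses on its own, surfacing structure that a
# human did not specify in advance.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print("Cluster assigned to each house:", clusters)
```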
Machine learning algorithms are already widely employed in contemporary life.
Generating web search results, making recommendations on services like Netflix
and Pandora, and estimating a product's value based on the current market are a
few examples. The usefulness of machine learning depends on the quality of the
supplied data; big data has therefore been essential to its development.

Deep learning

Deep learning, which mostly refers to deep neural networks, is a subset of
machine learning.8 Broadly speaking, a neural network processes data in layers,
with each layer taking its input from the output of the layer before it. The
term "deep" refers to the number of layers in the network. As the output of one
layer feeds into the next, it becomes progressively harder to comprehend the
choices and inferences made at each level; this difficulty in fully
understanding and explaining the steps that lead to a specific outcome is known
as the "black box" effect.9
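
As an illustration of this layered flow, the short NumPy sketch below passes an
input through several layers, each feeding the next. The weights are random
placeholders rather than learned values, an assumption made purely so the
example runs.

```python
# A minimal sketch of the layered processing described above: each layer
# takes its input from the output of the layer before it. Weights are
# random placeholders -- in a real network they would be learned.
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [4, 8, 8, 2]   # a "deep" network: input, two hidden layers, output
weights = [rng.standard_normal((m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

x = rng.standard_normal(4)   # some input data
for i, w in enumerate(weights, start=1):
    x = np.tanh(x @ w)       # layer i's output becomes layer i+1's input
    # The intermediate values are hard to interpret -- the "black box" effect.
    print(f"output of layer {i}: {np.round(x, 2)}")
```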
Neural networks are sometimes explained using the human brain as an analogy.
This is not especially helpful, though, as it suggests that machines process
information in the same way as human thought, which is untrue. Deep learning is
a very powerful technique, and many people credit it with the recent surge in
AI. It has revolutionized computer vision, enabled computers to recognize
spoken words nearly as well as humans, and greatly enhanced machine
translation, abilities that would be extremely difficult to program into
machines by hand. However, the nature of the layered process hampers decision
transparency, since each layer of processing can make the reasoning harder for
a human to follow. Moreover, neural networks remain susceptible to bias.
A recurrent neural network (RNN), for example, takes account of material it has
already been exposed to; some describe RNNs as having memories that influence
their output, in a manner loosely comparable to humans. In 2016, Microsoft used
Twitter data to train an AI chatbot built on an RNN, illustrating the potential
for unforeseen outcomes from this type of learning.10
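
The "memory" described above can be sketched as a hidden state that is updated
at every step. The following toy Python example uses random placeholder weights
and invented inputs; it shows the mechanism only, not a trained network.

```python
# A toy illustration of recurrent "memory": the hidden state h carries
# information from earlier inputs forward, so each step's output depends
# on everything the network has already seen. Weights are random
# placeholders, not learned values.
import numpy as np

rng = np.random.default_rng(1)
W_in, W_h = rng.standard_normal((3, 5)), rng.standard_normal((5, 5))

h = np.zeros(5)                                       # the "memory" starts empty
for t, x in enumerate(rng.standard_normal((4, 3))):   # a sequence of 4 inputs
    h = np.tanh(x @ W_in + h @ W_h)                   # new state mixes input with old state
    print(f"step {t}: state now reflects inputs 0..{t}")
```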

Artificial intelligence in the public sector

While industry and university research are the primary forces behind the
development of AI technology, the public sector can also benefit from AI
applications and advancements. Although AI is now used by the government in
many contexts, more widespread usage of these technologies will be
advantageous. Furthermore, by enacting laws, establishing policies, and
exemplifying best practices, the government may significantly influence how AI
technologies affect the lives of its constituents. Government must keep up with
the private sector's rapid advancement; this calls for a proactive, adaptable, and
knowledgeable approach to technology and how it interacts with the law and
society.
Resources, technological capacity, and public trust continue to be the key
constraints on AI's present and future use in government. The public sector
stands to gain significantly from artificial intelligence (AI) in situations where it
may ease administrative workloads and assist in finding solutions to resource
allocation issues. Artificial intelligence (AI) applications have the potential to
significantly improve the effectiveness of current government processes,
including question answering, document filling and searching, request routing,
translation, and document authoring, in the near future.11 For instance, several of
the bigger Australian government agencies currently employ chatbots to offer
individuals guidance and customer care.
In the longer term, AI could completely change how government functions, rather
than merely improving current procedures. Organizations will probably need to
adjust to the changing demands and expectations of citizens as well as change
the legal and regulatory environment to accommodate new technological
applications.
Artificial Intelligence (AI) holds great potential for the public sector;
nevertheless, it cannot be viewed as a solution to every problem facing
government today. AI technology must be used and regulated carefully and
strategically, with special attention paid to information management, which
includes data security, privacy, and ethics in general.12

Privacy considerations

This section examines some of the most important information privacy questions
raised by AI. It is meant to serve as a summary and a jumping-off point for
further discussion of some of the more significant information privacy
concerns; it is not a thorough examination of every issue.
The 1980 OECD Guidelines on the Protection of Privacy and Transborder
Flows of Personal Data serve as the fundamental foundation for information
privacy law in Victoria and beyond. Eight fundamental principles found in these
guidelines are still upheld by privacy laws across the globe, including
Victoria's Privacy and Data Protection Act 2014 (PDP Act).
One advantage of principle-based law is that it acknowledges the complex and
subtle nature of privacy and permits some latitude in how privacy can be
safeguarded in different situations, in tandem with changing societal norms and
technological advancements. Although the OECD Guidelines have been extremely
effective in advancing information privacy laws all across the world,
artificial intelligence (AI) poses difficulties for the fundamental ideas that
underpin them.
While AI may put traditional ideas of privacy to the test, it is not inevitable
that AI will compromise privacy by design; in fact, it is possible to envisage
a future in which AI works to support privacy. For example, AI is likely to
mean that fewer people will actually need access to raw data in order to work
with it, which may reduce the possibility of privacy violations caused by human
error. It might also enable more meaningful consent, as people receive
customized services based on their learned privacy preferences. The growing use
of AI may require a review of the current privacy protection framework, but
this does not mean that privacy will disappear or become obsolete.
One of the key features of information privacy is the framework it offers for
deciding how to use new technology in an ethical manner. For artificial
intelligence to be successful in the long run, privacy concerns must be
addressed and technological ethics taken into account. Striking a balance
between privacy concerns and technical progress will encourage the development
of socially conscious AI, which can facilitate long-term public value creation.
Why is AI different?

Important privacy concerns almost always accompany emerging technology, but the
scale and application of AI present a set of difficulties never seen before.
The implications of artificial intelligence (AI) are somewhat similar to those
of big data, but AI technology offers the capacity not only to process enormous
volumes of data but also to use that data for learning, building adaptive
models, and making actionable predictions, all without the need for clear,
understandable procedures.
As AI technology advances, there is a serious risk that the presumptions and
biases of the people and organizations developing it will affect how the AI
performs. Government organizations looking to deploy neural networks for
decision-making face hurdles due to unintended repercussions brought on by
biases and opaque outcomes. Below is a discussion of the potential for
discrimination and how it relates to privacy.
The ability of AI to automate all of these processes sets it apart from current
analytics tools in another important way. Because AI is being used more
frequently, humans may no longer have the same level of control over data
processing as they once had. Additionally, the integration of AI with present
technologies has the potential to significantly change how they are currently
used and how privacy is handled. For instance, the deployment of CCTV
cameras for surveillance in public areas is a fairly common practice and is not
viewed as unduly intrusive in today's culture. However, a network of cameras
may become a considerably more intrusive tool when used in conjunction with
facial recognition software.
Additionally, AI may alter how people interact with technology; for example,
many AI systems already exhibit human traits. Anthropomorphic interfaces, such
as the human-sounding voices of assistants like Alexa and Siri, may give rise
to new privacy issues. According to social science research, individuals tend
to treat technology as they would a human being.13 As a result, individuals may
be more willing to disclose increasingly personal information to AI systems
that mimic human traits than to other types of technology that gather data
through conventional means, suggesting that people may find it easier to form
trusting relationships with AI systems.
A significant portion of the conversation on AI and privacy has failed to take
into consideration the increasing power imbalances between the organizations
that gather data and the people who produce it.14 Current models typically
treat data as a good that can be traded, yet it can be challenging for people
to make decisions about their data when they interact with systems they do not
fully understand, especially when the system knows them well and has learned
how to manipulate their preferences by ingesting their data. Furthermore, many
of AI's adaptive algorithms are so dynamic that their designers frequently find
it impossible to adequately explain the outcomes they produce.
The conventional understanding of information privacy is predicated on the
premise that humans are the primary information handlers; it was not designed
to confront the computational capabilities of AI, which defy conventional
notions of data collection and processing.15 AI poses a fundamental challenge
to the way we currently understand ideas like informed consent, notice, and
what it means to access or handle personal information. As previously
mentioned, building an ethical framework around privacy concerns could help
ensure that AI is developed in a way that preserves information privacy as
these ideas evolve.

Personal information

The PDP Act, like numerous other information privacy laws, protects only
personal information. In this way, the legal safeguards provided to individuals
are bounded by the definition of what counts as "personal information".
Jurisdictions may define personal information differently, and these
definitions change as society and the law do. New technologies may also alter
the reach of personal information as new types of information emerge. Fitness
trackers, for instance, generate data about people that did not previously
exist but may now be regarded as personal information.
The concept of personal information is often founded on the principle of
identifiability: whether or not a person's identity can reasonably be deduced
from that information. The growing capacity to link and match data to
individuals is challenging the boundaries of what is and is not deemed
"personal", even for data traditionally believed to be "de-identified" or
non-identifying to begin with. In this way, a collection of information that at
first glance appears to be non-personal might, upon analysis or correlation,
become personal information.

As processing and matching technologies advance and data volumes rise, it
becomes harder to determine whether a particular piece of data is
"identifiable". Looking at a piece of data in isolation is incompatible with AI
technology and no longer accurately reflects whether it can be considered
"personal information".
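
A simple, entirely invented illustration of this point: two datasets that each
appear non-identifying on their own can single out an individual once combined
on shared attributes.

```python
# An invented illustration of the linkage problem: two datasets that each
# look non-identifying can, once joined on shared attributes (so-called
# quasi-identifiers), single out an individual. All records are fictional.

health_records = [  # "de-identified": no names, just postcode and birth year
    {"postcode": "3000", "birth_year": 1985, "diagnosis": "asthma"},
    {"postcode": "3053", "birth_year": 1990, "diagnosis": "diabetes"},
]

public_roll = [  # a separate public dataset containing names
    {"name": "A. Citizen", "postcode": "3053", "birth_year": 1990},
    {"name": "B. Resident", "postcode": "3000", "birth_year": 1972},
]

# Matching on the shared attributes re-identifies a record that looked
# non-personal in isolation.
for health in health_records:
    for person in public_roll:
        if (health["postcode"], health["birth_year"]) == (
            person["postcode"], person["birth_year"]
        ):
            print(f'{person["name"]} can be linked to diagnosis: {health["diagnosis"]}')
```

Neither dataset contains a name alongside a diagnosis, but the combination
does, which is exactly the shift from non-personal to personal information
described above.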
Much of AI's usefulness lies in its capacity to see patterns that are invisible
to the human eye, to learn, and to predict the characteristics of people and
groups. In this way, artificial intelligence (AI) can produce data that would
otherwise be difficult to obtain or non-existent, meaning that data may be
gathered and used that goes beyond what a person initially volunteered.
Predictive technologies can draw inferences from other (ostensibly unrelated
and benign) data points. For instance, an artificial intelligence system
designed to streamline the hiring process might be able to infer an applicant's
political inclination from other data they have provided and then take that
into account when making decisions.
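
A minimal sketch of this kind of inference follows, using scikit-learn and
synthetic data; the features, weights, and the "undisclosed attribute" are all
invented. The idea is that a model trained on people who did disclose an
attribute can predict it for someone who chose not to.

```python
# An invented illustration of inference from "benign" data points: a model
# trained on people who disclosed an attribute predicts it for an applicant
# who did not share it. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
proxies = rng.standard_normal((300, 4))    # e.g. seemingly unrelated form answers
# Synthetic "ground truth": the hidden attribute happens to correlate
# with some of the proxy features.
undisclosed = (proxies @ np.array([1.0, -0.5, 0.0, 0.8]) > 0).astype(int)

model = LogisticRegression().fit(proxies, undisclosed)
applicant = rng.standard_normal(4)         # shared only the "benign" fields
print("Inferred probability of the undisclosed attribute:",
      round(model.predict_proba([applicant])[0, 1], 2))
```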
In addition to challenging the definition of personal information, this type of
inference raises concerns about the acceptability of disclosing personal
information about a person who has chosen not to share it.
Additional questions include who owns such data and whether it is governed by
information privacy principles, including the requirement to notify individuals
that data about them has been generated through inference. Mainstream
technologies are already challenging the existing binary notion of personal
information, but artificial intelligence (AI) further blurs this line, making
it harder to distinguish what is and is not "personal information". With the
continuing rise of AI, all information created by or connected to an individual
is likely to become identifiable in the future.

In this context, interpreting the definition of personal information to
determine what is or is not protected by privacy law is unlikely to be
workable, either technically or legally, nor very effective as a means of
protecting individuals' privacy. Many contend that, for privacy law to continue
safeguarding individuals' information privacy in an AI setting, attention needs
to shift away from the binary understanding of personal information.

Collection, purpose and use

Three enduring foundations of data privacy that originate from the OECD
Guidelines are:

• Collection limitation: personal data should only be gathered where necessary, using only lawful and fair means, and, where appropriate, with the subject's knowledge or consent.

• Purpose specification: when collecting personal data, the individual should be informed of the purpose for which the data is being gathered.

• Use limitation: unless there is consent or legal authority to do otherwise, personal information should only be used or disclosed for the purpose for which it was obtained.

The fundamental objective of these interconnected principles is to minimize the
quantity of data that a single organization holds about a person and to ensure
that the information is handled in a manner that aligns with the person's
expectations. AI challenges each of these three tenets.

Collection limitation

By their very nature, many AI techniques, especially machine learning, depend
on consuming large volumes of data to train and test algorithms. While
gathering this much data helps AI advance, it can also run against the
collection limitation principle. Owing to technological advancements in IoT
devices, smartphones, and online tracking, the data fed into AI systems is
frequently not obtained through a traditional transaction in which consumers
consciously disclose their personal information to someone asking for it.16
Indeed, many people are unaware of the volume of personal data that is gathered
from their devices and used as input for artificial intelligence (AI) systems.
This creates a degree of friction: restricting the collection of personal data
is incompatible with the way AI technologies, and the devices that feed them,
operate, yet collecting such large volumes of data brings privacy problems of
its own.
Purpose specification
Most organizations follow the purpose specification principle by explaining the
reason for a collection, usually through a collection notice. This principle is
seriously challenged by AI's capacity to derive meaning from data beyond the
original purposes for which it was gathered; in some situations, organizations
themselves may not know how AI will use the data in the future.
There is a risk that more data will be collected than is required "just in
case", accompanied by excessively general collection notices and privacy
policies drafted as a "catch-all". Such behavior enables organizations to
assert technical compliance with their privacy obligations, but it is
disingenuous, at odds with the fundamental purpose of the collection limitation
principle, and makes it harder for people to genuinely manage how their
personal information is used. On the other hand, AI may be used to improve
people's capacity to express their preferences about the use of their personal
data. It is conceivable, for example, that services could learn the privacy
preferences of their users and apply different restrictions to the data
acquired about each individual.
AI might thus play a key role in the development of personalized,
preference-based models that could satisfy information privacy law's goals of
consent, transparency, and reasonable expectations even more successfully than
the notice-and-consent model in place today.
Use limitation

The use limitation principle works to guarantee that personal data is only used
for the purposes for which it was gathered. In general, organizations are also
allowed to use personal data for a secondary use that the person would
"reasonably expect."

Given that the outcome would frequently be unknown to the individual, this
raises the question of whether using information as input data for an AI system
can be regarded as a "reasonably expected" secondary purpose. AI can identify
patterns and links in data that people would not have noticed, and it may also
suggest new applications for that data. Combined with the purpose specification
concerns above, this may make it challenging for organizations using AI
technologies to guarantee that personal data is used exclusively for its
intended purpose.
The presumption that people, especially young people or "digital natives", are
becoming less worried about their information privacy may prompt the notion
that a "reasonably expected" secondary purpose can be read rather broadly. This
is not necessarily the case, though. According to research by the Boston
Consulting Group, 75% of consumers in most countries continue to rank the
privacy of personal information as a top concern, and those in the 18 to 24 age
group are only marginally less cautious than older generations.17 This suggests
that people are not automatically growing less concerned about the use of their
personal information as technology becomes more pervasive. As a result,
individuals might not always view the use of their personal information by AI
as a reasonably expected secondary purpose.18
AI may make it more difficult to distinguish between primary and secondary
purposes, to the point where the usefulness of the use limitation principle may
need to be re-examined. When considered collectively, AI poses a serious threat
to the principles of use limitation, collection limitation, and purpose
specification. The current understanding of information privacy through these
principles may no longer be effective in light of mass data collection, frequently
through methods that are not obvious to individuals, vague or misleading
collection notices, and an assumption that people are more comfortable with the
secondary use of their information than they actually are. AI, however, also
offers the potential to completely transform how conventional privacy standards
are applied.
For example, data security may be enhanced by training a machine learning
algorithm on vast volumes of data in a secure setting before releasing only the
trained model. The widespread use of AI will require us to adapt how we apply
traditional privacy concepts; whether this raises or lowers the bar for privacy
protection remains to be seen. If organizations view privacy as a fundamental
component of an ethical framework for developing AI, they may be able to
improve collection notice procedures and give people the ability to engage with
organizations about the use, and secondary use, of their data in a more
sophisticated and informed manner.
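
The first point, training in a safe setting and releasing only the result,
might look something like the following sketch. The "secure enclave" here is
just a Python function standing in for a controlled environment, and the data
is invented.

```python
# An invented sketch of the pattern described above: raw personal data is
# used for training only inside a controlled environment, and only the
# learned parameters -- not the data itself -- leave that environment.
import numpy as np
from sklearn.linear_model import LinearRegression

def train_in_secure_enclave() -> np.ndarray:
    """Everything inside this function stands in for the 'safe setting'."""
    raw_personal_data = np.array([[2.0], [3.0], [4.0]])   # never leaves here
    outcomes = np.array([1.0, 2.0, 3.0])
    model = LinearRegression().fit(raw_personal_data, outcomes)
    return np.append(model.coef_, model.intercept_)       # parameters only

params = train_in_secure_enclave()
print("Released artefact is just numbers, not records:", params)
```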

Consent and transparency

Our present concept of information privacy rests on people's ability to control
what information is shared about them and how it is used. However, because AI
is so sophisticated, people whose information is being used may not understand
the processes involved, which makes it impossible to obtain truly informed and
meaningful consent. Deep learning methods, for example, can make transparency
hard to achieve, because it can be difficult to explain how conclusions are
reached even for the people who created the algorithms, let alone the general
public. If organizations cannot explain these processes to the public, they
will struggle to gain consent or to be transparent in their AI activities.

A "privacy paradox" has been extensively studied, wherein individuals claim to


be concerned about their privacy, yet in reality, they continue to voluntarily give
their information to the systems and services they use. 19 According to one
reading of this dilemma, people frequently have little choice but to sign a
"unconscionable contract" allowing the use of their data, even when they are
informed.20 In this way, rather than enthusiastically accepting the use of
personal data, many people can feel resigned to it because they believe there is
no other option.21
In the modern world, a binary yes/no consent given at the start of a
transaction becomes less and less meaningful as the networks and systems we use
grow more complex and the range of data gathering methods expands.22 Even
though AI technologies contribute to many of these problems, they also hold the
potential to be part of the answer: they offer new means of explaining what is
done with an individual's data at every stage of processing, and they could
enable customized platforms through which individuals exercise consent.
The "right to explanation" is being investigated as a possible means of
promoting transparency as well as of scrutinizing, contesting, and limiting
decision-making that has taken place without human input. People would be
able to challenge decisions that have been made solely based on algorithms if
they had this kind of right.23 Many influential members of the AI community
believe that "explainability," or the transparency of decisions, is essential to
fostering and preserving confidence in the dynamic connection between
intelligent computers and people, even in the face of existing technological
obstacles.24
Considerable effort is currently being invested in developing algorithms that
can explain how and why their output was produced.25 With the capacity to
screen for bias and provide a clear explanation of decisions, AI may be able to
promote transparency in ways that human decision-makers cannot always match.
From a legal and policy standpoint, Article 22 of the European Union's General
Data Protection Regulation addresses this right. The effectiveness of this
approach remains to be seen; some critics contend that the right has "serious
practical and conceptual flaws", since it is limited to decisions that are
fully automated, which is not always the case.26
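
As a simple illustration of what an "explanation" can look like, the sketch
below uses a linear model, whose learned weights can be read directly as the
influence of each input on a decision. This is one elementary technique among
many; the data and feature names are invented, and real explanation methods for
deep networks are considerably more involved.

```python
# A minimal illustration of "explainability": with a simple linear model,
# the influence of each input on a decision can be read off directly from
# its learned weights. Data and feature names are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 3))              # three input features
y = (X[:, 0] + 0.2 * X[:, 2] > 0).astype(int)  # outcome driven mainly by feature A

model = LogisticRegression().fit(X, y)
for name, weight in zip(["feature A", "feature B", "feature C"], model.coef_[0]):
    print(f"{name}: weight {weight:+.2f}")     # larger magnitude = more influence
```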
Discrimination
Information privacy is often seen as an enabling right: much of its value comes
from its capacity to make other human rights, like the freedoms of expression
and association, realizable. By imposing restrictions on the collection, use,
and disclosure of personal information, privacy protections can also help
prevent discrimination. For instance, privacy law gives stronger protection to
information about a person's sexual orientation or ethnic origin, because the
data is inherently sensitive and the law attempts to reduce the possibility of
harm arising from decisions based on it.
One of the most important ethical concerns about AI, with immediate
consequences, is its capacity to discriminate, reinforce prejudices, and worsen
existing inequities. Because algorithms are trained on pre-existing data, they
may unintentionally reproduce the unfair patterns contained in the data they
have consumed.27
Furthermore, the people who build these systems may unintentionally embed their
own biases into their operation. Information privacy is an enabling right that
protects against discrimination, but because AI challenges information
privacy's ability to function as it traditionally has, that protection risks
being undermined.
Interestingly, if AI technology is designed with these concerns in mind, it may
instead reduce prejudice, since removing or diminishing human involvement in
many decision-making processes means that inherent human biases can be avoided.
Governance and Accountability

Information privacy law calls for governance and oversight to ensure that the
right mechanisms are in place to prevent an imbalance of power between the
people and the government. This depends on regulators making sure that personal
data is managed properly. The obstacles to effective regulation of AI
technology are similar to the challenges to our understanding of information
privacy outlined in the preceding sections.

Although the challenge of regulating technology has been extensively covered
elsewhere, some factors that are especially pertinent to artificial
intelligence (AI) and information privacy are as follows:28

• Since AI technology is not limited to any one state or jurisdiction, it is challenging to establish and uphold effective privacy policies and governance internationally. Determining who owns the data, where it is held, and who is responsible for it is a complicated challenge for regulators. Good governance must be predicated on an understanding of the technology, yet the complexity and range of applications of AI are growing alongside its rapid development, widening the existing gap between technology and law.

• The degree to which AI should be regulated by government is itself at issue, keeping in mind that the absence of an AI regulatory framework for data protection is a regulatory decision in its own right. Effective governance frameworks can facilitate the development, organization, and supervision of AI technologies and their privacy-related aspects. In line with a Privacy by Design approach to privacy protection, regulation can encourage the creation of automated systems grounded in information privacy by establishing an environment that upholds fundamental rights and protections.

Privacy governance cannot be established through regulators' top-down oversight
alone; those responsible for data and technology development should also be
involved in the creation of privacy-enhancing systems.29

Conclusion

Big data already permeates our environment, and artificial intelligence (AI)
has the potential to significantly change the information privacy landscape.
AI-powered smart city and IoT technologies promise a connected life with many
potential advantages, such as more efficient and productive use of resources
and an improved standard of living. AI technology also has a plethora of
potential applications in government services, justice, and healthcare. But
like many technologies before it, artificial intelligence (AI) poses
technological, social, and legal challenges that affect how we perceive and
safeguard personal information.
This resource has walked through some of the most important information privacy
concerns related to AI and how it will force us to re-evaluate long-held
beliefs about personal information. But even if long-standing information
privacy concepts need to be rethought, the development of AI does not mean that
privacy will become obsolete. Privacy provides an ethical foundation for
developing, using, and regulating new technologies. It will also remain central
to the way we negotiate our identities, create a sense of self, and exercise
other fundamental rights like the freedoms of association and speech. The
long-term success of AI will depend on how privacy concerns are resolved.
Over time, the focus of our approach to AI and privacy may shift from
protecting information at the point of collection to ensuring that information
is used properly and ethically once collected. The ubiquity of data-gathering
technology is expected to make efforts to restrict or control data collection
more challenging. It has therefore been suggested that the focus shift from
data collection to "ethical data stewardship", which would necessitate a
genuine commitment to accountability and transparency through sound governance
practices.
Government must play a significant role in fostering an environment in which
technological advancement coexists with a commitment to producing safe and
equitable AI.30 Striking the right balance requires an interdisciplinary,
consultative approach, since overly restrictive, incorrect, or misguided
regulation may hinder the advancement of artificial intelligence or fail to
address its real problems. Building, using, and regulating AI will rely heavily
on reimagining conventional ideas and making use of existing information
privacy frameworks.

References

1. Alex Campolo, Madelyn Sanfilippo, Meredith Whittaker & Kate Crawford, 'AI Now 2017 Report', AI Now, 2017, available at: https://ainowinstitute.org/AI_Now_2017_Report.pdf, p 3.
2. Toby Walsh, It's Alive! Artificial Intelligence from the Logic Piano to Killer Robots, La Trobe University Press, 2017, p 60.
3. For example, Samuel Warren and Louis Brandeis wrote on the impact of the portable camera on the right to be let alone in the 19th century. See Samuel D. Warren and Louis D. Brandeis, 'The Right to Privacy', Harvard Law Review, Vol. IV, No. 6, 15 December 1890.
4. Nick Bostrom, 'How long before superintelligence?', Linguistic and Philosophical Investigations, Vol. 5, No. 1, 2006, pp 11-30.
5. A thorough explanation of big data can be found in the Report of the Special Rapporteur on the right to privacy, prepared for the Human Rights Council, A/72/43103, October 2017.
6. UK Information Commissioner's Office (ICO), Big data, artificial intelligence, machine learning and data protection, 2017, p 8.
7. Will Knight, 'The Dark Secret at the Heart of AI', MIT Technology Review, 11 April 2017, available at: https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/.
8. It is also, less frequently, used to refer to deep reinforcement learning.
9. ICO, Big data, artificial intelligence, machine learning and data protection, 2017, p 11.
10. Elle Hunt, 'Tay, Microsoft's AI chatbot, gets a crash course in racism from Twitter', The Guardian, 24 March 2016, available at: https://www.theguardian.com/technology/2016/mar/24/tay-microsofts-ai-chatbot-gets-a-crash-course-in-racism-from-twitter.
11. Hila Mehr, 'Artificial Intelligence for Citizen Services and Government', Harvard Ash Center for Democratic Governance and Innovation, August 2017, available at: https://ash.harvard.edu/files/ash/files/artificial_intelligence_for_citizen_services.pdf.
12. Hila Mehr, 'Artificial Intelligence for Citizen Services and Government', Harvard Ash Center for Democratic Governance and Innovation, August 2017, available at: https://ash.harvard.edu/files/ash/files/artificial_intelligence_for_citizen_services.pdf, p 10.
13. Stanford University, 'Artificial Intelligence and Life in 2030', One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel, Section III: Prospects and Recommendations for Public Policy, September 2016, available at: http://ai100.stanford.edu/2016-report; Kate Darling, Extending legal protection to social robots: The effects of anthropomorphism, empathy, and violent behavior towards robotic objects, 2012.
14. Alex Campolo, Madelyn Sanfilippo, Meredith Whittaker & Kate Crawford, 'AI Now 2017 Report', AI Now, 2017, available at: https://ainowinstitute.org/AI_Now_2017_Report.pdf, p 28.
15. Alex Campolo, Madelyn Sanfilippo, Meredith Whittaker & Kate Crawford, 'AI Now 2017 Report', AI Now, 2017, available at: https://ainowinstitute.org/AI_Now_2017_Report.pdf, p 28.
16. Information Accountability Foundation, Artificial Intelligence, Ethics and Enhanced Data Stewardship, 20 September 2017, p 6.
17. John Rose, Christine Barton & Rob Souza, 'The Trust Advantage: How to Win with Big Data', Boston Consulting Group, November 2013, available at: https://www.bcg.com/publications/2013/marketing-sales-trust-advantage-win-with-big-data.aspx.
18. For instance, a Pew Research Center survey of 1,002 adult users conducted in 2013 found that 86% had taken steps online to remove or mask their digital footprints, and 68% believed that current laws were not good enough at protecting online privacy. See Anonymity, Privacy, and Security Online, Pew Research Center, 2013.
19. Patricia A. Norberg, Daniel R. Horne & David A. Horne, 'The privacy paradox: Personal information disclosure intentions versus behaviors', Journal of Consumer Affairs, Vol. 41, No. 1, 2007, pp 100-126; Bettina Berendt, Oliver Gunther & Sarah Spiekermann, 'Privacy in e-commerce: Stated preferences vs. actual behavior', Communications of the ACM, Vol. 48, No. 4, 2005, pp 101-106.
20. Sylvia Peacock, 'How web tracking changes user agency in the age of Big Data: the used user', Big Data & Society, Vol. 1, No. 2, 2014, available at: http://m.bds.sagepub.com/content/1/2/2053951714564228.
21. ICO, Big data, artificial intelligence, machine learning and data protection, 2017, p 24.
22. ICO, Big data, artificial intelligence, machine learning and data protection, 2017, p 30.
23. Toby Walsh, It's Alive! Artificial Intelligence from the Logic Piano to Killer Robots, La Trobe University Press, 2017, pp 150-151.
24. For example, Ruslan Salakhutdinov (Director of AI Research at Apple and Associate Professor at Carnegie Mellon University), in Will Knight, 'The Dark Secret at the Heart of AI', MIT Technology Review, 11 April 2017, available at: https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/.
25. For example, see the Privacy Preservation work done by Data61 and CSIRO at https://www.data61.csiro.au/en/Our-Work/Safety-and-Security/Privacy-Preservation.
26. Lilian Edwards & Michael Veale, 'Enslaving the Algorithm: From a "Right to an Explanation" to a "Right to Better Decisions"?', IEEE Security & Privacy, 2017, p 5.
27. Lilian Edwards & Michael Veale, 'Enslaving the Algorithm: From a "Right to an Explanation" to a "Right to Better Decisions"?', IEEE Security & Privacy, 2017, p 2.
28. See Michael Kirby, 'The fundamental problem of regulating technology', Indian Journal of Law & Technology, Vol. 5, 2009.
29. Information Accountability Foundation, Artificial Intelligence, Ethics and Enhanced Data Stewardship, 20 September 2017, p 15.
30. This approach is currently being explored in the European Union General Data Protection Regulation. See Article 35 in particular.
