Data Protection
Emerging areas like Internet of Things (IoT) and Artificial Intelligence (AI)
Abstract
The present article analyses how the contemporary problem of AI-generated deepfakes can be dealt
with through the application of the Digital Personal Data Protection Act, 2023. It identifies the
responsibility of data fiduciaries under the Act to protect personal data from being misused for such
purposes. The article also identifies certain gaps in the provisions of the Act and suggests a manner of
interpretation for them that can aid in developing a holistic framework to counter deepfakes.
Introduction
The threat of Artificial Intelligence (“AI”) tools becoming more sophisticated lies less in replacing humanity and more in disconcerting it. A recent example of this issue, which sparked a row among celebrities and the general public alike, was the widely circulated video of actress Rashmika Mandanna entering an elevator. The problem with this viral video was that it had been morphed: Rashmika’s face was edited onto the original footage, and it would be nearly impossible for the average person to question its authenticity. This incident is not only deeply concerning in itself but also raises a larger question about the ability of Indian data protection laws to counter such occurrences.
The present article attempts, firstly, to decipher deepfake technology, an application of Generative Artificial Intelligence. Secondly, it applies the relevant provisions of the Digital Personal Data Protection Act, 2023 (“DPDPA”) that can counter the presence of deepfakes on social media platforms. The
article elucidates the responsibility of Data Fiduciaries under the DPDPA to counter the breach of
personal data. Lastly, it identifies certain shortcomings of the DPDPA’s provisions and suggests a manner of interpretation through
which a holistic framework to counter deepfakes can be created.