Regulating Artificial Intelligence in Malaysia: The Two-Tier Approach
Nazura Abdul Manap & Azrol Abdullah
1 Corresponding author: [email protected]
ABSTRACT
INTRODUCTION
WHY REGULATE AI
The court’s decision in State v Loomis became the leading case illustrating the extent of the damage that AI can cause to human rights. In that case, the accused pleaded guilty to the criminal charges made against him. The court relied upon the COMPAS risk assessment score to deny the accused probation and sentenced him to six years’ imprisonment and five years of extended supervision. It was reported that COMPAS was prone to mistakenly labelling black offenders as likely to reoffend, flagging them as high risk at a rate of 45 per cent compared with 24 per cent for white offenders (Buranyi, 2017). The case went on appeal, but the appeal was dismissed by the Wisconsin Supreme Court. Despite the dismissal, the Supreme Court acknowledged that the risk assessment score generated by COMPAS did not explain how the underlying data were employed to generate the results (Liu et al., 2019). The court’s reliance on the conclusion reached by COMPAS was a flagrant breach of the accused’s right to be heard and his right to be treated equally before the law.
REGULATING NORMS
The regulating norms discussed above may not be able to deal with the emerging threats of AI. The adoption of a two-tier regulatory approach is therefore appealing. Under this approach, AI is regulated at two levels. The first level, or first tier, refers to the promulgation of hard law by Parliament, that is, a specific Act on AI. The second level, or second tier, refers to delegated legislation or policies issued by a specific ministry or ministries of the government. Each tier serves a different objective and purpose in regulating AI. The following discussion does not formulate a guideline or blueprint; instead, it highlights the crucial elements that must be incorporated in the legal provisions made under the two-tier approach.
The first tier, a specific Act of Parliament on AI, serves as the primary piece of legislation and governs only the crucial matters concerning AI, so that AI is not left unchecked, does not go out of control and remains safe. The intended Act on AI should incorporate the following fundamental elements:
Definition of AI
Certification Requirements
For certification under the Act, all AI intended for commercial and consumer purposes must first be placed in a special zone. A special zone is a controlled area designated within real society, in which general regulations on the use of AI are applied in a manner that allows for the presence of experimental AI that has proven to be safe in laboratories. Special precautions are taken to prevent serious accidents and undesired outcomes (Santoni de Sio, 2016).
Digital Peculium
CONCLUSION
AI needs its own set of regulations because existing laws may not be able to withstand the challenging legal issues that AI raises. Stretching the application of conventional laws to AI issues will eventually cause them to break at some point. AI therefore cannot continue to be left ungoverned by any legal framework, because nature abhors a vacuum. However, formulating a feasible AI regulatory framework is more challenging than advocating the rhetorical aspects of regulation. The classical method of drafting regulations based on a single legal theory is inappropriate for a regulatory framework on AI, because AI comes with a new set of potential risks which demand proactive regulatory intervention. The regulations to be introduced must not only be agile enough to keep pace with the exponential development of AI technology, but must also avoid constraining the future development of AI. The two-tier formulation
REFERENCES
Buranyi, S. (2017, August 8). Rise of the racist robots: How AI is learning all our worst impulses. The Guardian. Retrieved from https://www.theguardian.com/inequality/2017/aug/08/rise-of-the-racist-robots-how-ai-is-learning-all-our-worst-impulses
Calo, R. (2015). Robotics and the lessons of Cyberlaw. California
Law Review, 103, 513.
Carrigan, C., & Coglianese, C. (2016). George J. Stigler, ‘The
theory of economic regulation’. In M. Lodge, E. C. Page,
& S. J. Balla (Eds.), Oxford Handbook of Classics in Public
Policy and Administration (p. 287). United Kingdom: Oxford
University Press.
Čerka, P., Grigienė, J., & Sirbikytė, G. (2015). Liability for damages caused by Artificial Intelligence. Computer Law & Security Review, 2.
Chokshi, N. (2018, May 25). Alexa listening? Amazon echo sent
out recording of couple’s conversation. The New York Times.
Retrieved from https://www.nytimes.com/2018/05/25/
business/amazon-alexa-conversation-shared-echo.html
Clifford, C. (2017, November 8). Hundreds of A.I. experts echo Elon
Musk, Stephen Hawking in call for a ban on killer robots.
CNBC. Retrieved from https://www.cnbc.com/2017/11/08/
ai-experts-join-elon-musk-stephen-hawking-call-for-killer-
robot-ban.html
Delvaux, M., Mayer, G., & Boni, M. (2017). Report with recommendations to the Commission on civil law rules on robotics. European Parliament, A8-0005, 7.
Gerstner, M. E. (1993). Liability issues with Artificial Intelligence
software. Santa Clara Law Review, 33(1), 239.
Holley, P. (2015, January 29). Bill Gates on danger of artificial
intelligence: ‘I don’t understand why some people are not
concerned.’ The Washington Post. Retrieved from https://
www.washingtonpost.com/news/the-switch/wp/2015/01/28/
bill-gates-on-dangers-of-artificial-intelligence-dont-
understand-why-some-people-are-not-concerned/
Husain, A. (2017). The sentient machine: The coming age of Artificial Intelligence. New York, USA: Scribner, 3.
Inn, T. H. (2019, October 14). TPM set to house AI park. The Star
Online. Retrieved from https://www.thestar.com.my/business/
business-news/2019/10/14/tpm-set-to-house-ai-park
Kaplan, J. (2016). Artificial Intelligence: What everyone needs to know. USA: Oxford University Press, 5-13.
Pagallo, U. (2018). Vital, Sophia, and Co. - The quest for the legal personhood of robots. Information, 9(230), 8.
Palmerini, E., Bertolini, A., Battaglia, F., Koops, B-J., Carnevale, A., & Salvini, P. (2016). RoboLaw: Towards a European framework for robotics regulation. Robotics and Autonomous Systems, 78-85. Retrieved from http://dx.doi.org/10.1016/j.robot.2016.08.026
Parnas, D. L. (2017). Inside risks: The real risks of Artificial
Intelligence, incidents from the early days of AI research are
instructive in the current AI environment. Communications of
the ACM, 60(10), 27-31.
Petit, N. (2017). Law and regulation of Artificial Intelligence and
robots: Conceptual framework and normative implications.
SSRN. Retrieved from https://dx.doi.org/10.2139/
ssrn.2931339
Photong, J. (2017, October 31). Alexa plays music without command.
Amazon forum. Retrieved from https://www.amazonforum.
com/forums/devices/echo-alexa/2643-alexa-plays-music-
without-command
Poole, D. L., & Mackworth, A. K. (2010). Artificial Intelligence:
Foundations of computational agents. United Kingdom:
Cambridge University Press, 71.
Ravid, S. Y., & Hallisey, S. K. (2018). Equality and privacy by
design: Ensuring Artificial Intelligence (AI) is properly
trained and fed: A new model of AI data transparency &
certification as safe harbour procedures. SSRN Electronic
Journal. Retrieved from https://papers.ssrn.com/sol3/papers.
cfm?abstract_id=3278490
Reed, C. (2018). How should we regulate Artificial Intelligence? Philosophical Transactions of the Royal Society A, 2.
Russell, S. J., & Norvig, P. (2010). Artificial Intelligence: A modern approach (3rd ed.). New Jersey, USA: Prentice Hall, 2.
Santoni de Sio, F. (2016). Ethics and self-driving cars: A white paper on responsible innovation in automated driving systems. Department of Values, Technology and Innovation, Delft University of Technology, 19-20.
Scherer, M. U. (2016). Regulating Artificial Intelligence systems:
Risks, challenges, competencies and strategies. Harvard
Journal of Law and Technology, 29(2), 354, 356, 360.
Schirmer, S., Torens, C., Nikodem, F., & Dauer, J. (2018,
September 18). Considerations of Artificial Intelligence