Copyright: This is an open access article distributed under the Creative Commons
Attribution License which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Preprints.org (www.preprints.org) | NOT PEER-REVIEWED | Posted: 8 May 2024 doi:10.20944/preprints202405.0415.v1
Article
Abstract: Introduction: Artificial Intelligence (AI) tools and techniques are comparatively recent advances in the present-day globalized world. With their easy availability and user-friendly interfaces, AI has affected every sphere of human activity. These tools are becoming increasingly popular because of the numerous benefits they bring to society. However, these technologies have not only positive sides but also many negative ones. Along with the development and adoption of AI, cyber-attacks have increased and are disrupting information systems. Threat actors may misuse artificial intelligence. Hence, tool designers are working on how the capabilities of generative AI can tip the scales in cybersecurity. In the age of AI, there is a constant tension between cyber attackers and defenders over keeping the information system in balance. As a general rule, defenders must stay well ahead of their adversaries. The United Nations and several governments, for example the US and the UK, together with big AI firms such as Microsoft and OpenAI, are working on emerging cyberattacks, focusing on identifying unusual activities associated with both known and unknown threat actors. Purpose: The purpose of this paper is to investigate cybersecurity issues and the subsequent policies adopted by various agencies. The paper reviews the policy statements adopted by selected governments and the policy measures taken by big multinational corporations. It critically analyses “The Bletchley Declaration,” President Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, and the strategies adopted by large Information Technology (IT) firms such as Apple, Microsoft, and OpenAI. Design/Methodology/Approach: For this study, we have investigated the recently adopted policies of the United Nations and of the Chinese, Indian, US, and UK governments. We have also analysed the issues dealt with by two big technology giants, Microsoft and OpenAI. Originality: The paper presents a critical analysis of both government and corporate policies for dealing with cybersecurity issues. Globally, there is a growing trend of AI adoption in every sphere of human life. With this growing use comes a major apprehension about the safety, security, integrity, and robustness of information systems, so how information systems deal with these issues is a matter of concern. In this context, the paper sheds light on these pressing challenges based on the available literature. Conclusion: Information systems often face challenges related to data quality, robustness, and security. With the recent advances in AI and other related technologies, it is becoming easier to build quite resilient information systems. However, security issues can compromise the performance and reliability of information systems. This research paper explores the concept of information resilience in ML and AI systems, focusing on strategies to enhance their ability to withstand uncertainties, adversarial attacks, and data perturbations.
Keywords: Artificial Intelligence; Machine Learning; information resilience; robustness; data bias
1. Introduction
The term “Artificial Intelligence” (AI) was coined by Minsky and McCarthy [1] in 1956 to denote the theory of human intelligence being exhibited by machines [2]. In the current era of exponential growth of “big data” and rapid technical innovation, AI has made an unparalleled leap from theory to practical application. Machine learning (ML) and AI systems are all-pervasive in the present-day globalized world. AI influences every domain, ranging from healthcare to finance and from autonomous vehicles to cybersecurity [3]. However, these systems often face challenges related to data quality, robustness, and security, and together these issues can compromise their performance and reliability.
This research paper explores the concept of information resilience in ML and AI systems,
focusing on strategies to enhance their ability to withstand uncertainties, adversarial attacks, and data
perturbations. While doing so, the paper reviews the approaches taken by government agencies and the strategies adopted by several big multinational firms. It examines these policies and techniques for improving the resilience of these systems, thereby helping ensure their reliability and trustworthiness in real-world applications.
2. Artificial Intelligence
“AI leverages computers and machines to mimic the problem-solving and decision-making capabilities of
the human mind. AI lists theorem proving, game playing, pattern recognition, problem solving, adaptive
programming, decisions making, music composition by computer, learning networks, natural language data
processing and verbal and concept learning as suitable topics.” (ACM Curriculum committee) [4].
The first computational models intended to imitate human intellect appeared in the middle of
the 20th century, when artificial intelligence (AI) first emerged. However, it is the advent of
sophisticated algorithms, coupled with the exponential growth in computational power and data
availability, that has propelled ML and AI to the forefront of technological innovation in recent years
[5].
AI and its subfield ML have gained significant traction in recent years due to their ability to handle massive volumes of data. These systems can extract meaningful insights quickly and can make autonomous decisions without explicit programming. The significance of ML and AI systems lies in their potential to revolutionize numerous aspects of human endeavour, from improving efficiency and productivity to enhancing decision-making and problem-solving. Key areas where ML and AI are making a profound impact include healthcare, finance, autonomous systems, and cybersecurity [6].
3. Information Resilience
Information resilience refers to the ability of a system to maintain its functionality, integrity, and performance in the face of disturbances, uncertainties, adversarial attacks, and changes in the environment or operating conditions. The National Institute of Standards and Technology (NIST), US Department of Commerce, defines Information Resilience as “The ability to maintain required capability in the face of adversity” [7]. Further, NIST defines Information System Resilience as “The ability of an
information system to continue to: (i) operate under adverse conditions or stress, even if in a degraded or
debilitated state, while maintaining essential operational capabilities; and (ii) recover to an effective operational
posture in a time frame consistent with mission needs” [8].
In the context of AI and ML systems, information resilience encompasses the capacity to
withstand various challenges associated with data quality, security vulnerabilities, concept drift,
domain shift, and ethical considerations, while still achieving reliable and trustworthy outcomes.
The importance of information resilience in AI systems cannot be overstated, particularly in
today’s data-driven globalized world where these technologies play integral roles in decision-making
processes across various domains.
Resilient systems must be able to withstand disruptions and continue functioning effectively. In this context, the purpose of this paper is to capture the role of AI and Machine Learning in cybersecurity.
“Cybersecurity is the application of technologies, processes, and controls to protect systems,
networks, programs, devices, and data from cyber-attacks. It aims to reduce the risk of cyber-attacks
and protect against the unauthorised exploitation of systems, networks, and technologies” [9]. It
encompasses various measures to defend against cyber threats, including:
• Network Security: Securing networks to prevent unauthorized access and ensure data integrity.
• Data Security: Safeguarding sensitive information through encryption, access controls, and data loss prevention.
• Endpoint Security: Protecting individual devices such as computers, smartphones, and IoT devices from malware and other cyber threats.
AI and ML algorithms can analyze patterns in data to identify abnormal behavior that can point
to a security risk. This is particularly useful for detecting unusual patterns in network traffic, user
behavior, or system activities. Machine Learning models can forecast possible security threats based
on historical data, helping organizations to take proactive measures. AI can automate certain aspects
of incident response, enabling faster and more efficient reactions to security incidents. Automated
responses can include isolating affected systems, blocking malicious activities, and notifying relevant
personnel. Moreover, AI can contribute to the improvement of encryption algorithms and techniques,
making it more challenging for unauthorized entities to access sensitive information. Machine
learning models can be designed to process data while preserving individual privacy, using
techniques like federated learning or homomorphic encryption.
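To illustrate the anomaly-detection idea described above, the following minimal Python sketch flags unusual network-traffic records with an unsupervised model (scikit-learn's IsolationForest). The feature names, data, and thresholds are hypothetical illustrations, not taken from any system or policy discussed in this paper.

```python
# Minimal sketch: flagging anomalous network-traffic records.
# All features and values are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated historical traffic: columns = [bytes_sent, bytes_received, duration_s]
normal_traffic = rng.normal(loc=[500, 800, 1.0], scale=[50, 80, 0.2], size=(1000, 3))

# Train the detector on (assumed) mostly benign historical data.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# New observations: one ordinary record and one exfiltration-like spike.
new_records = np.array([
    [510, 790, 1.1],      # looks like normal traffic
    [50000, 120, 0.3],    # unusually large upload over a short connection
])

labels = detector.predict(new_records)  # +1 = normal, -1 = anomaly
for record, label in zip(new_records, labels):
    status = "ANOMALY - review" if label == -1 else "ok"
    print(record, status)
```

In practice such a detector would be retrained regularly on fresh traffic and combined with rule-based controls, but the sketch shows how historical data can be used to surface unusual activity for analysts to review.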
4. Selected Cases
Various governments and big multinational firms have adopted policies and guidelines and issued statements to mitigate cyberattacks and build information-resilient systems. The following subsections deal with a few major initiatives and policies related to cybersecurity.
The Bletchley Declaration was adopted by the countries attending the AI Safety Summit, held at Bletchley Park, England, on 1-2 November 2023 [23]. Representatives from 28 countries and the European Union, including the United States, China, and India, came together to sign this groundbreaking declaration [24]. This declaration
represents a high-level political consensus among major AI players worldwide. The declaration
represents a critical turning point in the global strategy to address the issues raised by cutting-edge
AI technologies. It recognizes the potential advantages of AI for improving human welfare.
Simultaneously, it recognizes the risks posed by AI, particularly frontier AI. Frontier AI is the term
for extremely powerful generative AI models that can generate realistic outputs (text, graphics, audio,
or video) whenever needed. The conference observed both the positive and the negative sides of AI.
Recommendations of Bletchley Park Declaration
The declaration highlights the necessity of worldwide cooperation to handle the inherent global
nature of AI-related risks. It calls for collaboration among all stakeholders, including companies, civil
society, and academia. The declaration also calls for a regular AI Safety Summit to facilitate dialogue and collaboration among various stakeholders on frontier AI safety and security.
In summary, the Bletchley Declaration aims to harness the positive potential of AI while addressing
risks, ensuring that AI benefits everyone and is used responsibly.
4.4. India
India has shifted from a position of not considering AI regulation to actively formulating
regulations based on a risk-based, user-harm approach. India advocates for a global framework to
expand the use of “ethical” AI tools, demonstrating commitment to responsible AI usage. India
expresses interest in establishing regulatory bodies at both domestic and international levels to
ensure responsible AI use.
The Digital India Act, 2023, which is yet to be implemented, is expected to introduce issue-
specific regulations for online intermediaries, including AI-based platforms [25].
India's National Cyber Security Policy 2013
The establishment of the National Critical Information Infrastructure Protection Centre (NCIIPC) under the National Cyber Security Policy 2013 was a significant step towards enhancing the protection and resilience of the nation's critical information infrastructure. The NCIIPC operates round the clock
(24x7) and is tasked with the responsibility of safeguarding critical information infrastructure (CII)
in India. It monitors, detects, and responds to cyber threats targeting critical sectors such as energy,
finance, transportation, telecommunications, and government services.
The National Cyber Security Policy 2013 also emphasized the establishment and operation of a 24x7 national-level Computer Emergency Response Team (CERT-In). CERT-In serves as the nodal agency for coordinating all cyber security emergency response and crisis management efforts in India, acting as the central coordinating authority for all cyber security-related activities within the country. It operates round the clock to respond promptly to cyber security incidents and crises, and it provides technical assistance and guidance to organizations facing cyber threats or attacks.
This Policy aims to ensure a comprehensive and coordinated effort to enhance cybersecurity
across various levels of governance and operation, acknowledging the diverse challenges posed by
cyberspace security [26].
4.5. China
Cybersecurity in China has been a significant focus for the government due to the country’s
growing reliance on technology and the internet. The Chinese government has implemented various
regulations and laws aimed at enhancing cybersecurity within the country. One of the most notable
is the Cybersecurity Law, which came into effect in 2017. This law regulates various aspects of
cybersecurity, including data protection, critical information infrastructure security, and
cybersecurity reviews for network products and services. China operates one of the most
sophisticated internet censorship systems in the world, often referred to as the ‘Great Firewall.’ This system controls and monitors internet traffic entering and
leaving China. The system blocks access to certain websites and content deemed politically sensitive
or harmful to national interest [27].
Cybersecurity Law (CSL) in China
The CSL, implemented in 2017, is one of the most comprehensive cybersecurity laws globally. It
imposes obligations on network operators to safeguard data, report security incidents, undergo
security assessments, and store data within China’s borders.
Key provisions include requirements for the protection of personal information, Critical
Information Infrastructure (CII) security, and the conduct of security reviews for network products
and services.
The CSL also grants the Chinese government broad powers to investigate cybersecurity incidents, enforce compliance, and punish non-compliant entities.
National Intelligence Law
Implemented in 2017, the National Intelligence Law authorizes Chinese intelligence agencies to
compel organizations and individuals to cooperate with intelligence work, including access to data
and network facilities.
While not explicitly focused on cybersecurity, this law has implications for data governance and cybersecurity because it grants the authorities broad powers to access and monitor information [28].
Data Security Law (DSL)
Enacted in 2021 and in effect since September of that year, the DSL focuses specifically on data security and aims to regulate the collection, storage, processing, transmission, and use of data within China. The DSL requires data handlers to secure sensitive and personal data, obtain permission before processing it, and put security measures in place to prevent unauthorized access or disclosure.
Additionally, the DSL introduces a data classification system and establishes mechanisms for
cross-border data transfers, with a requirement for security assessments and approval by authorities
for certain types of data [29].
• Notification to other AI service providers: when Microsoft detects a threat actor’s use of another provider’s AI, it notifies that provider and shares relevant data about the activity on its APIs, services, and/or systems. This lets the service provider follow their own policies and independently confirm Microsoft’s findings.
• Collaboration with other stakeholders: Microsoft works to identify threat actors’ use of AI. Its guiding philosophy is to work with other interested parties and to regularly exchange information on threat actors. In this way, Microsoft encourages coordinated, dependable, and effective responses to threats for an information-resilient ecosystem.
• Transparency: To maintain transparency across the whole ecosystem, Microsoft has decided to notify stakeholders and the public about actions taken against these threat actors. As part of its continuing efforts to promote the responsible use of AI, it shares information on the nature and scope of threat actors’ use of AI detected within its systems, as well as the countermeasures taken [31].
It is probable that OpenAI uses stringent access restrictions so that only authorized workers can access its systems and data. To ensure that only individuals who require access receive it, this involves putting multi-factor authentication (MFA), role-based access control (RBAC), and frequent access reviews in place.
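To make the idea of role-based access control concrete, the short Python sketch below shows how a request might be allowed or denied based on the requester's role and MFA status. The roles, permissions, and resource names are hypothetical and are not drawn from OpenAI's actual systems.

```python
# Minimal sketch of role-based access control (RBAC) with an MFA check.
# Roles, permissions, and resources are illustrative assumptions only.
ROLE_PERMISSIONS = {
    "researcher": {"read:model-weights"},
    "security-engineer": {"read:model-weights", "read:audit-logs", "rotate:api-keys"},
    "contractor": set(),  # no standing access by default
}

def is_allowed(role: str, permission: str, mfa_verified: bool) -> bool:
    """Allow the action only if the role grants it AND MFA has been completed."""
    return mfa_verified and permission in ROLE_PERMISSIONS.get(role, set())

# Example checks
print(is_allowed("researcher", "read:model-weights", mfa_verified=True))       # True
print(is_allowed("researcher", "read:audit-logs", mfa_verified=True))          # False: role lacks permission
print(is_allowed("security-engineer", "rotate:api-keys", mfa_verified=False))  # False: MFA required
```

Periodic access reviews would then consist of auditing and pruning the role-to-permission mapping so that entitlements do not accumulate over time.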
OpenAI’s cybersecurity grant program outlines project ideas intended, by applying AI and coordinating like-minded people working for collective safety, to shift the power dynamics of cybersecurity in partnership with defenders throughout the world. OpenAI is likely to coordinate cybersecurity efforts by addressing the following issues: empowering defenders, measuring capabilities, and enhancing discourse [33].
5. Concluding Remarks
Cybersecurity is an important issue in the present-day globalized world. The increasing number of cyber-attacks across various sectors is a matter of global concern. International agencies such as the United Nations, many governments around the world, and big firms are using digital technologies to improve security and maintain stability through information resilience. This can be done by adopting suitable policies and through collaborative efforts among various stakeholders to develop more sustainable products. Because of the variety of cyber threat actors, it is perhaps impossible for any single entity to deal with these issues alone. Hence, all stakeholders call for a unified approach to dealing with cybersecurity issues. Cyber-collaborative systems are a combination of physical and digital technologies that allow machines to communicate and work together. This might improve cybersecurity, prevent unexpected disruptions in information systems, and help build an information-resilient system.
References
1. McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (2006). A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955. AI Magazine, 27(4), 12-12.
2. Helm, J. M., Swiergosz, A. M., Haeberle, H. S., Karnuta, J. M., Schaffer, J. L., Krebs, V. E., ... & Ramkumar,
P. N. (2020). Machine learning and artificial intelligence: definitions, applications, and future directions.
Current reviews in musculoskeletal medicine, 13, 69-76.
3. Kühl, N., Schemmer, M., Goutier, M., & Satzger, G. (2022). Artificial intelligence and machine learning.
Electronic Markets, 32(4), 2235-2244.
4. Hunt, E. B. (2014). Artificial intelligence. Academic Press.
5. Rockwell Anyoha (2017, August 28). The History of Artificial Intelligence - Science in the News. Available at:
https://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/ accessed on 6th May 2024
6. Michalski, R. S., Carbonell, J. G., & Mitchell, T. M. (Eds.). (2013). Machine learning: An artificial intelligence
approach. Springer Science & Business Media.
7. Resilience - Glossary | CSRC. (n.d.). NIST Computer Security Resource Center | CSRC. Retrieved March 24,
2024, from https://csrc.nist.gov/glossary/term/resilience
8. Information system resilience - Glossary | CSRC. (n.d.). NIST Computer Security Resource Center | CSRC.
Retrieved March 24, 2024, from
https://csrc.nist.gov/glossary/term/information_system_resilience#:~:text=1%20under%20Information%20
System%20Resilience,%2C%20contingency%2C%20and%20continuity%20planning
9. What is Cyber Security? Definition & Best Practices. (n.d.). IT Governance - Governance, Risk Management
and Compliance for Information Technology. Retrieved March 24, 2024, from
https://www.itgovernance.co.uk/what-is-
cybersecurity#:~:text=Cyber%20security%20is%20the%20application,systems%2C%20networks%2C%20a
nd%20technologies.
10. Landoll, D. (2021). The security risk assessment handbook: A complete guide for performing security risk
assessments. CRC Press.
11. Linkov, I., & Kott, A. (2019). Fundamental concepts of cyber resilience: Introduction and overview. Cyber
resilience of systems and networks, 1-25.
12. van den Adel, M. J., de Vries, T. A., & van Donk, D. P. (2022). Resilience in interorganizational networks:
dealing with day-to-day disruptions in critical infrastructures. Supply Chain Management: An International
Journal, 27(7), 64-78.
13. Chang, V. (2015). Towards a big data system disaster recovery in a private cloud. Ad Hoc Networks, 35, 65-
82.
14. Fiksel, J., & Fiksel, J. R. (2015). Resilient by design: Creating businesses that adapt and flourish in a changing world.
Island Press.
15. Tan, L., Yu, K., Ming, F., Cheng, X., & Srivastava, G. (2021). Secure and resilient artificial intelligence of
things: a HoneyNet approach for threat detection and situational awareness. IEEE Consumer Electronics
Magazine, 11(3), 69-78.
16. Morabito, V. (2015). Big data and analytics. Strategic and organisational impacts.
17. Molden, D., Sharma, E., Shrestha, A. B., Chettri, N., Pradhan, N. S., & Kotru, R. (2017). Advancing regional
and transboundary cooperation in the conflict-prone Hindu Kush–Himalaya. Mountain Research and
Development, 37(4), 502-508.
18. Nyström, M., Jouffray, J. B., Norström, A. V., Crona, B., Søgaard Jørgensen, P., Carpenter, S. R., ... & Folke,
C. (2019). Anatomy and resilience of the global production ecosystem. Nature, 575(7781), 98-108.
19. United Nations (2024, March 21). General Assembly adopts landmark resolution on artificial intelligence. UN News. Retrieved March 24, 2024, from https://news.un.org/en/story/2024/03/1147831
20. The White House. (2023, October 30). FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. Retrieved from https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence
21. Roesener, A. G., Bottolfson, C., & Fernandez, G. (2014). Policy for US cybersecurity. Air & Space Power
Journal, 28(6), 38-54.
22. Souza, G. (2015). An Analysis of the United States Cybersecurity Strategy. Center for Development for Security
Excellence, Defense Security Service, 1-29.
23. Prime Minister's Office, 10 Downing Street. (2023, November 1). The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023. GOV.UK. https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023
24. The Bletchley Declaration. (n.d.). The Bletchley Declaration. Retrieved March 24, 2024, from
https://thebletchleydeclaration.com/
25. Ministry of Electronics and Information Technology (2023) Proposed Digital India Act, 2023 Available at:
https://www.meity.gov.in/writereaddata/files/DIA_Presentation%2009.03.2023%20Final.pdf accessed on
6th May 2024
26. National Cyber Security Policy 2013. Available at: https://static.investindia.gov.in/National%20Cyber%20Security%20Policy.pdf accessed on 2nd May 2024
27. China’s Data Governance and Cybersecurity Regime - ICAS. (n.d.). ICAS. Retrieved March 24, 2024, from
https://chinaus-icas.org/research/chinas-data-governance-and-cybersecurity-regime/
28. PricewaterhouseCoopers. (n.d.). A comparison of cybersecurity regulations: China. PwC. Retrieved March 24,
2024, from https://www.pwc.com/id/en/pwc-publications/services-publications/legal-publications/a-
comparison-of-cybersecurity-regulations/china.html
29. Data Security Law of the People’s Republic of China (n.d.) Retrieved March 24, 2024 from
http://www.npc.gov.cn/englishnpc/c2759/c23934/202112/t20211209_385109.html
30. Cybersecurity Framework & Policies | Microsoft Cybersecurity. (n.d.). Microsoft – Cloud, Computers, Apps &
Gaming. Retrieved March 24, 2024, from https://www.microsoft.com/en-
us/cybersecurity?activetab=cyber%3aprimaryr2
31. Microsoft Threat Intelligence. (2024, February 14). Staying ahead of threat actors in the age of AI. Microsoft Security Blog. https://www.microsoft.com/en-us/security/blog/2024/02/14/staying-ahead-of-threat-actors-in-the-age-of-ai/
32. Advancing iMessage security: iMessage Contact Key Verification. (n.d.). Apple Security Research Blog. Retrieved March 24, 2024, from https://security.apple.com/blog/imessage-contact-key-verification
33. OpenAI Cybersecurity Grant Program. (n.d.). OpenAI. Retrieved March 24, 2024, from
https://openai.com/blog/openai-cybersecurity-grant-program
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those
of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s)
disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or
products referred to in the content.