Regulating AI: The ICO's strategic approach
April 2024
Regulating AI: The ICO’s strategic approach
The Information Commissioner’s Office (ICO) sets out its strategic vision in the
ICO25 plan, which highlights promoting regulatory certainty, empowering
responsible innovation and safeguarding the public as key priorities.
Introduction
1. At the ICO, we support the government’s ambition to make sure that
artificial intelligence (AI) is adopted in ways that make the UK the
smartest, healthiest, safest and happiest place to live and work. We believe
that data protection is a crucial part of making this a reality.
5. Many of these risks derive from how data – and specifically personal data –
is used in the development and deployment of AI systems. Wherever
processing of personal data takes place, it falls under the ICO's purview as
the UK's data protection regulator. The ICO has the ability and
the tools to intervene right across the AI supply chain, from model
developers to deployers, depending on where risks may be greatest or
mitigations most effective.
[1] Letter from DSIT Secretary of State to the Information Commissioner's Office | GOV.UK
[2] A pro-innovation approach to AI regulation | GOV.UK
[3] Implementing the UK's AI Regulatory Principles | GOV.UK
Such models are typically developed using personal data, in order to enable
them to be deployed for a general set of purposes. While data protection law
does not explicitly reference foundation models or general-purpose AI, the
scope of the legislation enables the ICO to intervene wherever personal data
is processed.
Data protection law applies to every stage of the model lifecycle and every
actor within the supply chain where personal data is being processed,
enabling the ICO to act on concerns around matters such as fairness and
transparency both upstream and downstream. Fines for non-compliance can
be set at up to 4% of annual global turnover.
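For context, this 4% figure is the higher-tier maximum under the UK GDPR, applied as the greater of a fixed £17.5 million cap or 4% of total annual worldwide turnover. A minimal sketch of that ceiling (the function name is illustrative, not statutory language):

```python
def max_uk_gdpr_fine(annual_global_turnover_gbp: float) -> float:
    """Ceiling of the higher-tier UK GDPR fine: the greater of
    £17.5 million or 4% of total annual worldwide turnover."""
    STATUTORY_CAP_GBP = 17_500_000
    return max(STATUTORY_CAP_GBP, 0.04 * annual_global_turnover_gbp)

# A firm with £2bn global turnover faces a ceiling of £80m;
# a smaller firm is bounded by the fixed £17.5m cap instead.
print(max_uk_gdpr_fine(2_000_000_000))  # 80000000.0
print(max_uk_gdpr_fine(100_000_000))    # 17500000
```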
[4] A pro-innovation approach to AI regulation: government response | GOV.UK
10. The ICO welcomes the approach taken by the government to build on the
strengths of its existing regulators, who are well-placed to tackle the AI
risks that emerge in their context. We do not consider that the risks
relating to AI require new, extensive, cross-cutting legislation, but rather
appropriate resourcing of existing UK regulators and empowering them to
hold organisations to account.
Data protection law can mitigate many of the risks these initiatives are
seeking to address. For example, the fairness principle already requires
organisations not to undertake data processing that has unjustifiably
adverse effects on individuals. The ICO has already issued warnings
regarding emerging AI uses such as emotion recognition technology.[9]
[5] AI and data protection risk toolkit | ICO
[6] Guidance on AI and data protection | ICO
[7] Overview of Data Protection Harms and the ICO Taxonomy | ICO
[8] Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence | The White House
[9] 'Immature biometric technologies could be discriminating against people' says ICO in warning to organisations | ICO
11. While some risks derive from the specific contexts in which AI is deployed
(e.g. healthcare, law enforcement or education) others derive from the AI
development process. For example, facial recognition technology built on
inaccurate or unrepresentative datasets may have discriminatory outcomes
regardless of the context in which it will be applied, so due diligence and
operational testing are necessary.
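The kind of operational testing described above can be sketched as a per-group comparison of error rates in labelled evaluation data. The function and record format below are illustrative assumptions, not an ICO-prescribed method:

```python
from collections import defaultdict

def false_match_rates_by_group(records):
    """Compute the false-match rate per demographic group from labelled
    evaluation records: (group, predicted_match, actual_match) tuples.
    Large disparities between groups signal a discriminatory system."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        if not actual:                 # only non-matching pairs can false-match
            totals[group] += 1
            if predicted:
                errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical evaluation data: group B is misidentified twice as often.
results = false_match_rates_by_group([
    ("A", True, False), ("A", False, False), ("A", False, False), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", False, False), ("B", False, False),
])
print(results)  # {'A': 0.25, 'B': 0.5}
```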
12. Vulnerable groups, including children, are more exposed to risks, and
organisations using or deploying AI need to factor this into their
overarching risk management framework.
[10] The use of live facial recognition in public places | ICO
[11] The use of live facial recognition technology by law enforcement in public places | ICO
[12] Biometric data guidance: Biometric recognition | ICO
We have also taken action where we have concerns about potential harm to
children as a result of AI products and services. For example, we
investigated Snap's risk assessment process in relation to its 'My AI'
generative AI chatbot, with a particular focus on ensuring that risks to
children were appropriately identified and mitigated. We will continue to act
to ensure that children’s privacy is protected online.
13. Many AI risks will sit outside of data protection law, or be addressed more
effectively through other regulatory regimes. For example, data protection
law offers little protection against the use of AI to develop new biological or
chemical threats. It cannot tackle the threat to national security and
election integrity from development of synthetic media (‘deepfakes’) by
hostile states. The ICO is working with the AI Safety Institute on the risks
that fall within its remit.
[13] Protecting children's privacy online: Our Children's code strategy | ICO
[14] A guide to the data protection principles | ICO
17. The security of data is a pillar of other frameworks the ICO oversees, such
as the Network and Information Systems Regulations. We work closely with
stakeholders including the National Cyber Security Centre (NCSC) to tackle
security challenges. The ICO is a member of the NCSC AI Working Group
and provides input into the Cyber Regulators Forum, which has been
considering matters related to AI and cybersecurity.
[15] A pro-innovation approach to AI regulation | GOV.UK
[16] The ICO has produced guidance on security: A guide to data security | ICO, including as part of its Guidance on AI and Data Protection: How should we assess security and data minimisation in AI? | ICO
[17] How should we assess security and data minimisation in AI? | ICO
18. Transparency is also a data protection principle. This is about being clear,
open and honest with people from the start about who organisations are,
and how and why they use their personal data.
20. The ICO, in conjunction with the Alan Turing Institute (ATI), has produced
guidance on Explaining Decisions Made with AI[18] to support organisations in
explaining AI systems and their decisions to people.
Fairness
“AI systems should not undermine the legal rights of individuals or
organisations, discriminate unfairly against individuals or create unfair
market outcomes. Actors involved in all stages of the AI life cycle should
consider definitions of fairness that are appropriate to a system’s use,
outcomes and the application of relevant law.”
AI Regulation White Paper, UK Government, 2023
21. Fairness is a key data protection principle. Put simply, it means that
organisations should only handle personal data in ways people would
reasonably expect, and not in ways that have unjustified adverse effects on
them.
22. The concept of fairness in data protection law is more holistic than notions
of algorithmic fairness that focus on the distribution of outcomes among a
group, because it also accounts for the relationship between those groups
and the organisations processing their data. The fact that a system is
statistically accurate does not necessarily mean its use is fair under data
protection law; other factors also play a role.
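The gap between statistical accuracy and distributional fairness can be shown with a toy example; the model outputs and group labels below are hypothetical:

```python
def accuracy(preds, labels):
    """Fraction of predictions that match the true labels."""
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

def selection_rate(preds, groups, target):
    """Fraction of positive (e.g. 'approve') decisions within one group."""
    chosen = [p for p, g in zip(preds, groups) if g == target]
    return sum(chosen) / len(chosen)

# Hypothetical screening model: accurate overall, yet it approves
# members of group "A" far more often than members of group "B".
preds  = [1, 1, 1, 0, 0, 0, 0, 0]
labels = [1, 1, 1, 0, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(accuracy(preds, labels))             # 0.875
print(selection_rate(preds, groups, "A"))  # 0.75
print(selection_rate(preds, groups, "B"))  # 0.0
```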
[18] Explaining decisions made with AI | ICO
24. Our AI and Data Protection Guidance[19] provides a roadmap for how
organisations should evaluate their data protection fairness obligations. The
ICO continues its work on fairness in AI by supporting the Fairness
Innovation Challenge,[20] in partnership with the Department for Science,
Innovation and Technology (DSIT) and the Equality and Human Rights
Commission (EHRC). We have worked on concepts of fairness with
counterparts in the Digital Regulation Cooperation Forum (DRCF)[21] and
contributed to the ATI's 'AI Fairness in Practice' workbook.[22]
27. The ICO has developed an overarching Accountability Framework[24] and AI-
specific guidance on accountability[25] that we continue to update.
[19] How do we ensure fairness in AI? | ICO
[20] Fairness Innovation Challenge
[21] Fairness in AI: A View from the DRCF | DRCF
[22] AI Ethics and Governance in Practice: AI Fairness in Practice | The Alan Turing Institute
[23] Controllers and processors | ICO
[24] Accountability Framework | ICO
[25] What are the accountability and governance implications of AI? | ICO
28. We are also working to ensure that organisations procuring AI systems can
be assured that the supply chain actors providing their services and products
have undertaken appropriate due diligence. In conjunction with the EHRC,
the London Office of Technology and Innovation, and the Local Government
Association, we plan to develop guidance for local authorities procuring AI
products and services. This follows the work we have done with our DRCF
partners on transparency in AI procurement.[28]
29. The AI White Paper’s contestability and redress principle is not a principle of
data protection law but is instead reflected in a set of information rights
that individuals can exercise, such as the right of access to personal data
being processed about them. Of particular note are the rights in relation to
solely automated decision-making with legal or similarly significant effects
on individuals.[29]
30. A legal effect is something that affects someone's legal rights: for example,
their entitlement to child or housing benefit. A similarly significant effect is
generally something that has an equivalent impact on someone's
circumstances or choices: for example, an automated decision to offer
someone a job, or to approve or decline their mortgage application. These
effects can be positive or negative.
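The test this paragraph implies can be sketched as a simple conjunction: the solely-automated provisions bite only when a decision is both fully automated and has a legal or similarly significant effect. The type and field names below are hypothetical illustrations, not legal definitions:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    solely_automated: bool       # no meaningful human involvement
    legal_effect: bool           # e.g. a benefit entitlement
    similarly_significant: bool  # e.g. a job offer or mortgage refusal

def within_solely_automated_provisions(d: Decision) -> bool:
    """True when both limbs of the test are met."""
    return d.solely_automated and (d.legal_effect or d.similarly_significant)

# A human-reviewed mortgage decision falls outside the provisions...
print(within_solely_automated_provisions(Decision(False, False, True)))  # False
# ...but the same decision taken without human involvement falls inside.
print(within_solely_automated_provisions(Decision(True, False, True)))   # True
```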
31. These data protection provisions enable people to contest solely automated
decisions that they consider unfair. Where decision-making is assisted by AI
but not captured by these provisions, people can still exercise their
information rights, such as the rights of access and rectification and the
right to exercise control over data that relates to them and, by implication,
over any decisions this data led to. Facilitating the exercise of these rights
also aligns with the fairness principle.
[26] HMG_response_to_SPV_Digital_Tech_final.pdf (publishing.service.gov.uk)
[27] ICO consultation series on generative AI and data protection | ICO
[28] Transparency in the procurement of algorithmic systems: Findings from our workshops | DRCF
[29] See Article 22 of the UK GDPR and sections 49 and 50 of the DPA 2018. These provisions are subject to amendment by the Data Protection and Digital Information Bill.
[30] Big data, artificial intelligence, machine learning and data protection | ICO
[31] Guidance on AI and data protection | ICO
[32] Automated decision-making and profiling | ICO
[33] Explaining decisions made with AI | ICO
[34] AI and data protection risk toolkit | ICO
[35] Global Privacy and Data Protection Awards 2023 | Global Privacy Assembly
[36] Biometric data guidance: Biometric recognition | ICO
[37] Age assurance for the Children's code | ICO
38. Our Innovation Advice service aims to respond to regulatory questions from
innovators in 10-15 days, ensuring our advice is as rapid as developments
in the market. The service, which was named Best Innovative Privacy
Project in the 2023 PICCASO awards, has advised on topics from generative
AI to automated calling systems.
39. Our Innovation Hub partners with accelerators, incubators and other
agencies to mentor innovators as they engineer data protection into the
fabric of their new ideas. Our collaborations have included:
• working with DSIT, the Home Office and GCHQ on the SafetyTech
Innovation Challenges;[44]
• collaborating with DSIT, Innovate UK and the EHRC on the Fairness
Innovation Challenge;[45] and
• working with the Digital Catapult BridgeAI[46] cohorts.
[38] Generative AI: eight questions that developers and users need to ask | ICO
[39] ICO tech futures: neurotechnology | ICO
[40] ICO tech futures: biometrics | ICO
[41] Tech Horizons Report | ICO
[42] Onfido Regulatory Sandbox Final Report | ICO
[43] Good With Regulatory Sandbox Final Report | ICO
[44] Safety Tech Challenge
[45] Fairness Innovation Challenge
[46] BridgeAI | Digital Catapult
40. With our DRCF partners, we are currently piloting the AI and Digital Hub,[47]
which allows innovators to obtain answers to complex queries that span the
regulatory remits of DRCF member regulators.
[47] AI and Digital Hub | DRCF
[48] A Guide to ICO Artificial Intelligence (AI) Audits | ICO
[49] ICO fines facial recognition database company Clearview AI Inc more than £7.5m and orders UK data to be deleted | ICO
[50] Information Commissioner seeks permission to appeal Clearview AI Inc ruling | ICO
[51] ICO orders Serco Leisure to stop using facial recognition technology to monitor attendance of leisure centre employees | ICO
45. In this section, we set out some of the key developments that organisations
can expect over the coming months, complementing the ongoing work of
our advice services and our supervision of firms using personal data to
develop or deploy AI.
48. Further calls for evidence will focus on individual rights and controllership.
The ICO is seeking views from a range of stakeholders, including
developers and users of generative AI, civil society groups and other public
bodies with an interest in the technology.
[52] UK Information Commissioner issues preliminary enforcement notice against Snap | ICO
[53] The findings in the preliminary enforcement notice are provisional. No conclusion should be drawn at this stage that there has, in fact, been any breach of data protection law or that an enforcement notice will ultimately be issued. The ICO will carefully consider any representations from Snap before taking a final decision.
[54] Biometric data guidance: Biometric recognition | ICO
[55] Guidance on AI and data protection | ICO
[56] Automated decision-making and profiling | ICO
[57] Current projects | ICO
[58] Safety Tech Challenge
[59] Fairness Innovation Challenge
[60] BridgeAI | Digital Catapult
58. AI is a priority for the DRCF. With our fellow regulators, we have:
[61] The benefits and harms of algorithms: a shared perspective from the four digital regulators | DRCF
[62] Maximising the benefits of Generative AI for the digital economy | DRCF
[63] Fairness in AI: A View from the DRCF | DRCF
[64] Auditing algorithms: the existing landscape, role of regulators and future outlook | DRCF
60. In relation to the AI Regulation White Paper, we will host joint workshops to
explore how its principles interact across the four regulators, with a focus
this year on AI transparency and accountability. We will continue to work
with the government’s new central AI function and consider potential joint
regulator capability-building projects.
Bilateral partnerships
62. Beyond these multilateral forums, we work closely with other regulators on
a bilateral basis. For example, we continue to partner with EHRC and DSIT
to support the Fairness Innovation Challenge to address bias and
discrimination in AI systems, building on our earlier work together on
fairness. Later this year we will publish a joint statement with the
Competition and Markets Authority on foundation models, with the aim of
supporting coherence for businesses and promoting behaviours that benefit
consumers where our remits interact.
[65] Roundtable of G7 Data Protection and Privacy Authorities Statement on Generative AI | Personal Information Protection Commission of Japan
68. At the core of our regulatory operations is our AI and Data Science team,
which acts as a centre of excellence on AI, informing our policy, advisory
and enforcement interventions alike. This unit comprises 10 professionals
who are dedicated full-time to AI governance and is expected to grow in
the coming years.
69. In tandem with growing our capacity to regulate AI, we are also investing
in the technical capabilities of our staff. We will continue to develop our
people, ensuring that we have the right mix of skills to be an effective
regulator of AI.
Our technology
70. As AI presents new opportunities, we will role-model responsible use of AI
here at the ICO as part of our Enterprise Data Strategy.[66]
71. The use of AI and machine learning will help the organisation become more
data-led in its decision-making, improving the services and support it can
offer to customers. We currently use AI to support a customer service
chatbot and an algorithmic tool for email triage, and we are developing an
AI solution to help identify websites using non-compliant cookie banners.
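A minimal sketch of how an email-triage tool of this kind might route messages by keyword score. The ICO's actual tool is not public; the queues and keywords below are purely illustrative:

```python
# Hypothetical routing table: queue name -> trigger keywords.
ROUTES = {
    "data_breach": ["breach", "incident", "leak"],
    "subject_access": ["access request", "sar", "my data"],
    "general": [],
}

def triage(subject: str) -> str:
    """Return the first queue whose keywords appear in the subject line."""
    text = subject.lower()
    for queue, keywords in ROUTES.items():
        if any(k in text for k in keywords):
            return queue
    return "general"

print(triage("Reporting a personal data breach"))  # data_breach
print(triage("Question about cookie banners"))     # general
```

A production system would more likely use a trained classifier with human review of low-confidence cases, but the routing structure is the same.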
73. We will develop an internal data literacy initiative to empower data and
analytics skills across the organisation. We will also develop a suite of AI
training and resources to ensure AI adoption at the ICO aligns with our
regulatory expectations of others.
[66] ICO Enterprise Data Strategy | ICO
[67] Algorithmic Transparency Recording Standard Hub | GOV.UK