Introduction to AI Assurance

Department for Science, Innovation and Technology
February 2024
Contents

Ministerial foreword
1. Executive summary
2. AI assurance in context
3. The AI assurance toolkit
4. AI assurance in practice
5. Key actions for organisations
6. Additional resources
Ministerial foreword

Artificial Intelligence (AI) is increasingly impacting how we work, live, and engage with others. AI technologies underpin the digital services we use every day and are helping to make our public services more personalised and effective – from improving health services to supporting teachers, and driving scientific breakthroughs so we can tackle climate change and cure disease. However, to fully grasp its potential benefits, AI must be developed and deployed in a safe, responsible way.

The UK government is taking action to ensure that we can reap the benefits of AI while mitigating potential risks and harms. This includes acting to establish the right guardrails for AI through our agile approach to regulation; leading the world on AI safety by establishing the first state-backed organisation focused on advanced AI safety for the public interest; and – since 2021 – encouraging the development of a flourishing AI assurance ecosystem.

As highlighted in our AI regulation white paper in 2023, AI assurance is an important aspect of broader AI governance, and a key pillar of support for organisations to operationalise and implement our five cross-cutting regulatory principles in practice.

AI assurance can help to provide the basis for consumers to trust that the products they buy will work as intended; for industry to confidently invest in new products and services; and for regulators to monitor compliance while enabling industry to innovate at pace and manage risk. A thriving AI assurance ecosystem will also become an economic activity in its own right – the UK's cyber security industry, an example of a mature assurance ecosystem, is worth nearly £4 billion to the UK economy.

However, building a mature AI assurance ecosystem will require active and coordinated effort across the economy, and we know that the assurance landscape can be complex and difficult to navigate, particularly for small and medium enterprises. This Introduction to AI assurance is the first in a series of guidance to help organisations upskill on topics around AI assurance and governance. With developments in the regulatory landscape, significant advances in AI capabilities, and increased public awareness of AI, it is more important than ever for organisations to start engaging with the subject of AI assurance and leveraging its critical role in building and maintaining trust in AI technologies.

Viscount Camrose
Minister for Artificial Intelligence and Intellectual Property
01
Executive summary
Introduction
The Introduction to AI assurance provides a grounding in AI assurance for readers who are unfamiliar with the subject area. This guide introduces key AI assurance concepts and terms and situates them within the wider AI governance landscape. As an introductory guide, this document focuses on the underlying concepts of AI assurance rather than technical detail; however, it includes suggestions for further reading for those interested in learning more.

As AI becomes increasingly prevalent across all sectors of the economy, it's essential that we ensure it is well governed. AI governance refers to a range of mechanisms – including laws, regulations, policies, institutions, and norms – that can be used to outline processes for making decisions about AI. The goal of these governance measures is to maximise the benefits of AI technologies while mitigating potential risks and harms.

In March 2023, the government published its AI governance framework in the white paper A pro-innovation approach to AI regulation. This white paper set out a proportionate, principles-based approach to AI governance, with the framework underpinned by five cross-sectoral principles. These principles describe "what" outcomes AI systems must achieve, regardless of the sector in which they're deployed. The white paper also sets out a series of tools that can be used to help organisations understand "how" to achieve these outcomes in practice: tools for trustworthy AI, including assurance mechanisms and global technical standards.

This guidance aims to provide an accessible introduction to both assurance mechanisms and global technical standards, to help industry and regulators better understand how to build and deploy responsible AI systems. It will be a living document that we keep updated over time.

The guidance will cover:

• AI assurance in context: introduction to the background and conceptual underpinnings of AI assurance.

• The AI assurance toolkit: introduction to key AI assurance concepts and stakeholders.

• AI assurance in practice: overview of different AI assurance techniques and how to implement AI assurance within organisations.

• Key actions for organisations: a brief overview of key actions that organisations looking to embed AI assurance can take.
Why is AI assurance important?

Artificial intelligence (AI) offers transformative opportunities for the economy and society. The dramatic development of AI capabilities over recent years – particularly generative AI, including Large Language Models (LLMs) such as ChatGPT – has fuelled significant excitement around the potential applications for, and benefits of, AI systems.

Artificial intelligence has been used to support personalised cancer treatments, mitigate the worst effects of climate change and make transport more efficient. The potential economic benefits from AI are also extremely high. Recent research from McKinsey suggests that generative AI alone could add up to $4.4 trillion to the global economy.

However, there are also concerns about the risks and societal impacts associated with AI. There has been notable debate about the potential existential risks to humanity, but there are also significant, and more immediate, concerns relating to risks such as bias, a loss of privacy, and socio-economic impacts such as job losses.

When ensuring the effective deployment of AI systems, many organisations recognise that, to unlock the potential of AI systems, they will need to secure public trust and acceptance. This will require a multi-disciplinary and socio-technical approach to ensure that human values and ethical considerations are built in throughout the AI development lifecycle.

AI assurance is consequently a crucial component of wider organisational risk management frameworks for developing, procuring, and deploying AI systems, as well as demonstrating compliance with existing – and any relevant future – regulation. With developments in the regulatory landscape, significant advances in AI capabilities and increased public awareness of AI, it is more important than ever for organisations to start engaging with AI assurance.
02
AI assurance in context
The importance of trust

The term 'assurance' originally derived from accountancy but has since been adapted to cover areas including cyber security and quality management. Assurance is the process of measuring, evaluating and communicating something about a system or process, documentation, a product or an organisation. In the case of AI, assurance measures, evaluates and communicates the trustworthiness of AI systems.

When developing and deploying AI systems, many organisations recognise that, to unlock their potential, a range of actors – from internal teams to regulators to frontline users – will need to understand whether AI systems are trustworthy. Without trust in these systems, organisations may be less willing to adopt AI technologies because they don't have the confidence that an AI system will actually work or benefit them. They also might not adopt AI for fear of facing reputational damage or public backlash. Without trust, consumers will also be cautious about using these technologies. Although awareness of AI is very high amongst the public and has increased over the last year, the public's primary associations with AI typically reference uncertainty.

AI assurance processes can help to build confidence in AI systems by measuring and evaluating reliable, standardised, and accessible evidence about the capabilities of these systems: whether they will work as intended, what limitations they hold and what potential risks they pose, as well as how those risks are being mitigated to ensure that ethical considerations are built in throughout the AI development lifecycle.
AI assurance and governance

In March 2023, the UK government outlined its approach to AI governance through its white paper, A pro-innovation approach to AI regulation, which set out the key elements of the UK's proportionate and adaptable regulatory framework. It includes five cross-sectoral principles to guide and inform the responsible development and use of AI in all sectors of the economy.

AI assurance will play a critical role in the implementation and operationalisation of these principles. The principles identify specific goals – the "what" – that AI systems should achieve, regardless of the sector in which they are deployed. AI assurance techniques and standards (commonly referred to as "tools for trustworthy AI") can support industry and regulators to understand "how" to operationalise these principles in practice, by providing agreed-upon processes, metrics, and frameworks to support them to achieve these goals.

[Diagram: AI risks (e.g. privacy and fundamental rights) are addressed by AI governance (e.g. the UK AI regulation white paper and the UK regulatory framework), which is supported by SDO-developed standards (e.g. through ISO, IEC, IEEE, ETSI) and sector-specific rules and guidance; compliance is measured by assurance mechanisms (e.g. audits, performance testing).]
AI governance and regulation

Due to the unique challenges and opportunities raised by AI in particular contexts, the UK's approach to AI governance focuses on outcomes rather than the technology itself – acknowledging that potential risks posed by AI will depend on the context of its application. To deliver this outcomes-based approach, existing regulators will be responsible for interpreting and implementing the regulatory principles in their respective sectors and establishing clear guidelines on how to achieve these outcomes within a particular sector. By outlining processes for making and assessing verifiable claims to which organisations can be held accountable, AI assurance is a key aspect of broader AI governance and regulation.

Through AI assurance, organisations can measure whether systems are trustworthy and demonstrate this to government, regulators, and the market. They can also gain a competitive advantage, building customer trust and managing reputational risk. On one hand, using assurance techniques to evaluate AI systems can build trust in consumer-facing AI systems by demonstrating adherence to the principles of responsible AI (fairness, transparency etc.) and/or relevant regulation and legislation. On the other hand, assurance techniques can also help identify and mitigate AI-related risks, manage reputational risks and avoid negative publicity. This helps to mitigate greater commercial risks, in which high-profile failures could lead to reduced customer trust in, and adoption of, AI systems.

Outside of the UK context, supporting cross-border trade in AI will also require a well-developed ecosystem of AI assurance approaches, tools, systems, and technical standards which ensure international interoperability between differing regulatory regimes. UK firms need to demonstrate risk management and compliance in ways that are understood by trading partners and consumers in other jurisdictions.
Spotlight: AI assurance and frontier AI

AI assurance – the process of measuring, evaluating and communicating the trustworthiness of AI systems – is also relevant at the "frontier" of AI.

The Bletchley Declaration, signed by countries that attended the November 2023 UK AI Safety Summit, recommended that firms implement assurance measures. This includes safety testing, evaluations, and accountability and transparency mechanisms to measure, monitor and mitigate potentially harmful capabilities of frontier AI models.

The UK's AI Safety Institute (AISI) – the first state-backed organisation focused on advanced AI safety for the public interest – is developing the sociotechnical infrastructure needed to identify potential risks posed by advanced AI. It will offer novel tools and systems to mitigate these risks, and support wider governance and regulation, further expanding the AI assurance ecosystem in the UK.
03
The AI assurance toolkit
Measure, evaluate and communicate

1. Measure
Gathering qualitative and quantitative data on how an AI system functions, to ensure that it performs as intended. This might include information about performance, functionality, and potential impacts in different contexts. Additionally, you may need to ensure you have access to documentation about the system design and any management processes to ensure you can evaluate effectively.

2. Evaluate
Assessing the information gathered at the measurement stage to form a judgement about whether the system performs as intended and meets relevant requirements.

3. Communicate
A range of communication techniques can be applied to ensure effective communication both within an organisation and externally. This might include collating findings into reports or presenting information in a dashboard, as well as external communication to the public to set out the steps an organisation has taken to assure their AI systems. In the long term, this may include activities like certification.
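To make these three steps concrete, the sketch below shows one way a team might structure measurement evidence so it can be evaluated against agreed thresholds and collated into a report. It is a minimal illustration only: the metric names, threshold values and report fields are assumptions invented for the example, not a format this guidance prescribes.

```python
from dataclasses import dataclass

@dataclass
class AssuranceRecord:
    """One piece of measurement evidence about an AI system (step 1: measure)."""
    metric: str         # illustrative metric name, e.g. "accuracy"
    value: float        # measured value
    context: str        # deployment context the measurement applies to
    documentation: str  # pointer to system design or process documentation

def evaluate(records, thresholds):
    """Step 2: judge each measurement against an agreed threshold."""
    return {r.metric: r.value >= thresholds[r.metric] for r in records}

def communicate(records, results):
    """Step 3: collate findings into a simple internal report."""
    lines = ["AI assurance summary"]
    for r in records:
        status = "meets target" if results[r.metric] else "below target"
        lines.append(f"- {r.metric} = {r.value} ({r.context}): {status}; see {r.documentation}")
    return "\n".join(lines)

records = [
    AssuranceRecord("accuracy", 0.93, "pilot deployment", "docs/model_card.md"),
    AssuranceRecord("coverage", 0.88, "pilot deployment", "docs/data_sheet.md"),
]
print(communicate(records, evaluate(records, {"accuracy": 0.90, "coverage": 0.90})))
```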
AI assurance mechanisms

There is a spectrum of AI assurance mechanisms that can, and should, be used in combination with one another across the AI lifecycle. These range from qualitative assessments, which can be used where there is a high degree of uncertainty, ambiguity and subjectivity – for example, thinking about the potential risks and societal impacts of systems – to quantitative assessments for subjects that can be measured objectively and with a high degree of certainty, such as how well a system performs against a specific metric, or whether it conforms with a particular legal requirement. The table below details a sample of key assurance techniques that organisations should consider as part of the development and/or deployment of AI systems.

Risk assessment: Used to consider and identify a range of potential risks that might arise from the development and/or deployment of an AI product/system. These include bias, data protection and privacy risks, risks arising from the use of a technology (for example, the use of a technology for misinformation or other malicious purposes) and reputational risk to the organisation.

Impact assessment: Used to anticipate the wider effects of a system/product on the environment, equality, human rights, data protection, or other outcomes.

Bias audit: Assesses the inputs and outputs of algorithmic systems to determine whether there is unfair bias in the input data, or in the outcome of a decision or classification made by the system.

Compliance audit: Involves reviewing adherence to internal policies, external regulations and, where relevant, legal requirements.

Conformity assessment: Demonstrates whether a product or system meets relevant requirements, prior to being placed on the market. Often includes performance testing.

Formal verification: Establishes whether a system satisfies specific requirements, often using formal mathematical methods and proofs.
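At the quantitative end of this spectrum, some checks reduce to measuring a system against a specific, pre-agreed metric. The fragment below is a hedged sketch of that idea only: the labels, predictions and the 0.90 accuracy target are invented for illustration and carry no status in this guidance.

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions matching the reference labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Illustrative reference labels and model outputs for a small test set.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
y_pred = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

TARGET = 0.90  # assumed benchmark agreed before testing
score = accuracy(y_true, y_pred)
print(f"accuracy = {score:.2f}; meets target: {score >= TARGET}")
```

Qualitative mechanisms such as impact assessments cannot be reduced to a check like this, which is why the techniques above are designed to be used in combination across the lifecycle.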
AI assurance and standards

To provide a consistent baseline, and to increase their effectiveness and impact, AI assurance mechanisms should be underpinned by available global technical standards. These are consensus-based standards developed by global standards development organisations (SDOs) such as the International Organization for Standardization (ISO). Global technical standards are essentially agreed ways of doing things, designed to allow for shared and reliable expectations about a product, process, system or service. Global technical standards allow assurance users to trust the evidence and conclusions presented by assurance providers – without standards we have advice, not assurance. Different kinds of standards can support a range of assurance techniques.
Spotlight: AI Standards Hub

The AI Standards Hub is a joint initiative led by The Alan Turing Institute, the British Standards Institution (BSI), and the National Physical Laboratory (NPL), supported by the government. The Hub's mission is to advance trustworthy and responsible AI, with a focus on the role that global technical standards can play as governance tools and innovation mechanisms. The AI Standards Hub aims to help stakeholders navigate and actively participate in global AI standardisation efforts and champion global technical standards for AI.

Dedicated to knowledge sharing, community and capacity building, and strategic research, the Hub seeks to bring together industry, government, regulators, consumers, civil society and academia with a view to:

• Increasing awareness of, and contributions to, global technical AI standards in line with UK values.

• Increasing multi-stakeholder involvement in AI standards development.

To learn more, visit the AI Standards Hub website.
The AI assurance ecosystem

There is a growing market of AI assurance providers who supply the assurance systems and services required by organisations who either don't have in-house teams offering internal assurance capabilities, or who require additional capabilities on top of those they have internally. As with assurance techniques and mechanisms, there is no single 'type' of assurance provider: some third-party providers offer specific technical tools, whilst others offer holistic AI governance platforms. There are also diversified professional services firms who offer assurance 'as a service', supporting clients to embed good governance and assurance practices. Due to its relationship with wider organisational risk management, AI assurance is often seen as one part of an organisation's Environmental, Social and Corporate Governance (ESG) processes.

However, AI assurance isn't just limited to a selection of mechanisms and standards and the assurance teams and providers that use them. A range of actors need to check that AI systems are trustworthy and compliant, and to communicate evidence of this to others. These actors can each play several interdependent roles within an assurance ecosystem. The following examples set out key supporting stakeholders and their roles within the AI assurance ecosystem.
Government

Example: The DSIT AI assurance programme supports the development of a robust and sustainable AI assurance ecosystem. Work to date includes building knowledge of the AI assurance market, its drivers and barriers, highlighting emerging assurance techniques (through the Portfolio of Assurance Techniques) and supporting the development of novel assurance techniques, for example through the Fairness Innovation Challenge.

Regulators

Example: The ICO has already developed regulatory guidance and toolkits relating to how data protection regulation applies to AI. Regulators are also implementing a range of sandboxes and prize challenges to support regulatory innovation in AI.

Accreditation bodies

Example: UKAS is the UK's sole national accreditation body. It is appointed by the government to assess third-party organisations, known as conformity assessment bodies, who provide a range of services including AI assurance. Services offered by UKAS include certification, testing, inspection and calibration.

Standards bodies

Example: There are both international and national standards bodies. International standards bodies include the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), as well as national standards bodies. The British Standards Institution (BSI) is the UK's national standards body. BSI represents UK stakeholders at regional and international standards bodies that are part of the standards system. BSI, along with the National Physical Laboratory (NPL) and the UK Accreditation Service (UKAS), make up the UK's national quality infrastructure for standards.
Professional bodies

Role: To define, support, and improve the professionalisation of assurance standards and to promote information sharing, training, and good practice for professionals, which can be important both for developers and assurance service providers.

Example: There are not currently any professional bodies with chartered status focused on AI assurance. However, the UK Cyber Security Council has recently been created and made responsible for standards of practice for cyber security professionals: a model that could be adopted for AI assurance in the future. The International Association of Algorithmic Auditors (IAAA) is another recently formed body, hoping to professionalise AI auditing by creating a code of conduct for AI auditors, training curriculums and, eventually, a certification programme.

Audience: Regulators; organisations developing AI systems; organisations procuring AI systems.

Spotlight: Fairness Innovation Challenge

A range of the above stakeholders are already working together to grow the UK's AI assurance ecosystem through the Fairness Innovation Challenge. The Challenge, run by DSIT in partnership with Innovate UK, brings together government, regulators, academia, and the private sector to drive the development of novel socio-technical approaches to fairness and bias audit – a currently underdeveloped area of research, with most bias audits measuring purely technical or statistical notions of fairness. The Challenge will provide greater clarity about how different assurance techniques can be applied in practice, and work to ensure that different strategies to address bias and discrimination in AI systems comply with relevant regulation, including data protection and equalities law.
Spotlight: IAPP Governance Center

The International Association of Privacy Professionals (IAPP) is the largest global information privacy community and resource centre, with more than 80,000 members. It is a non-profit, policy-neutral professional association helping practitioners develop their capabilities and organisations to manage and protect their data.

In Spring 2023, the IAPP AI Governance Center was launched, in recognition of the need for professionals to establish the trust and safety measures that will ensure AI fulfils its potential to serve society in positive and productive ways.

Through the AI Governance Center, the IAPP provides professionals tasked with AI governance, risk, and compliance with the content, resources, networking, training and certification they need to manage the complex risks of AI. This allows AI governance professionals to share best practices, track trends, advance AI governance management issues, standardise practices and access the latest educational resources and guidance.

In December 2023, the IAPP AI Governance Center published a report, based on survey responses from over 500 individuals from around the world, on organisational governance issues relating to the professionalisation of AI governance. The report covered the use of AI within organisations, AI governance as a strategic priority, the AI governance function within organisations, the benefits of AI-enabled compliance, and AI governance implementation challenges. Notably, IAPP research has found that:

• 60% of respondents indicated their organisation has already established a dedicated AI governance function or is likely to in the next 12 months.

• 31% cited a complete lack of qualified AI governance professionals as a key challenge.

• 56% indicated they believe their organisation does not understand the benefits and risks of AI deployment.

To learn more, visit the IAPP AI Governance Center.
04
AI assurance in practice
AI assurance spectrum

Given the variety of use cases and contexts that AI assurance techniques can be deployed in, and the need for assurance across the AI development lifecycle, a broad range of assurance techniques beyond purely technical solutions will need to be applied.

[Diagram: assurance needs vary with a system's use case, deployment process and environment.]

AI assurance and governance in practice

Data: Without high-quality and well-sourced data, AI technologies will be unable to operate effectively or maintain trust. Organisations should put in place robust and standardised processes for handling data, including:

• an effective organisational data strategy;
• clear employee accountabilities for data; and
• robust, standardised and transparent processes for data collection, processing and sharing.

All organisations processing data must comply with existing legal requirements, in particular UK GDPR and the Data Protection Act 2018. All systems using personal data must carry out a data protection impact assessment (DPIA). A minimal sketch of such a standardised data-handling check is shown at the end of this section.

Models: Those developing AI models will work to ensure that they function as intended and that they produce beneficial outcomes. This includes ensuring that outputs are accurate, and minimising potential harmful outcomes such as unwanted bias. A range of assurance tools and techniques can be used to evaluate AI models and systems, including impact assessments, bias audits and performance testing. When setting metrics, teams should build in rigorous software testing and performance assessment methodologies, with comparisons to clear performance benchmarks.
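As flagged above, here is a minimal sketch of what a standardised data-handling check might look like in code: incoming records are validated against an agreed schema and a recorded lawful basis before they are used. The field names and accepted bases are assumptions for the example; the DPIA itself remains an organisational exercise rather than a code artefact.

```python
REQUIRED_FIELDS = {"record_id", "source", "collected_at", "lawful_basis"}
ACCEPTED_BASES = {"consent", "contract", "legal_obligation"}  # illustrative subset

def validate_record(record: dict) -> list:
    """Return a list of problems; an empty list means the record passes."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - record.keys())]
    if record.get("lawful_basis") not in ACCEPTED_BASES:
        problems.append("no recognised lawful basis recorded")
    return problems

batch = [
    {"record_id": 1, "source": "signup_form", "collected_at": "2024-01-10",
     "lawful_basis": "consent"},
    {"record_id": 2, "source": "scraped"},  # incomplete provenance: should fail
]

for record in batch:
    problems = validate_record(record)
    print(record.get("record_id"), "->", "ok" if not problems else "; ".join(problems))
```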
Risk assessment (Models, Tools)

Background: An online education platform is exploring ways of using AI to personalise video content presented to users. The company conducts a risk assessment to explore potential costs and benefits, including effects on reputation, safety, revenue, and users.

Process: Staff are encouraged to share their thoughts and concerns in workshops designed to capture and classify a range of potential risks. The workshop is followed by more detailed questionnaires. The company's internal audit staff then assess the risks to quantify and benchmark them, and present their findings in an internal report to inform future decision-making.

Outcomes: The risk assessment results in a clearer and shared understanding of potential risks, which is used to create mitigation strategies and build resilience to manage future change across the organisation.
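A hedged sketch of how the internal audit step might quantify and benchmark workshop-captured risks follows, assuming a simple likelihood × impact scoring scheme on 1–5 scales; the risks, scores and escalation threshold are invented for illustration and are not drawn from the case study.

```python
# Workshop-captured risks scored on assumed 1-5 likelihood and impact scales.
risks = [
    {"risk": "recommendations narrow the content users see", "likelihood": 4, "impact": 3},
    {"risk": "personal data used beyond its stated purpose",  "likelihood": 2, "impact": 5},
    {"risk": "reputational damage from poor suggestions",     "likelihood": 3, "impact": 3},
]

ESCALATE_AT = 10  # assumed benchmark: scores at or above this go in the internal report

for r in sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
    score = r["likelihood"] * r["impact"]
    action = "escalate" if score >= ESCALATE_AT else "monitor"
    print(f"{score:>2}  {action:<8}  {r['risk']}")
```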
Impact assessment (Models, Data, Tools, Governance)

Impact assessments are used to anticipate the wider effects of a system/product on the environment, equality, human rights, data protection, or other outcomes.

Background: A specialist waste collection company is conducting an impact assessment before deploying a new AI system to manage its waste sorting system more efficiently.

Process: The company uses a tool to assess different aspects of the potential outcomes of the new AI system. During the planning phase, the organisation has adapted a toolkit – including a detailed questionnaire – from a publicly available resource. The toolkit includes categories that cover environmental impacts, fairness, transparency and inclusivity, and guides assessors to map out the rationale for deploying the new system, as well as potential consequences. Results are published in a report alongside mitigations and reflections on the findings.

Outcomes: The impact assessment enables the company to develop a clearer understanding of the potential opportunities and costs from deploying a new AI system. The final report also provides a record of actions that could help mitigate harmful outcomes.
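The sketch below shows one assumed way a questionnaire-based toolkit like this could be tallied into the published report: each category aggregates assessor answers and flags where mitigations are needed. The categories mirror those named above; the answers and scoring rule are invented for illustration.

```python
# True = the questionnaire surfaced a concern for that question (illustrative data).
answers = {
    "environmental impacts": [False, True, False],
    "fairness":              [True, True],
    "transparency":          [False],
    "inclusivity":           [False, False],
}

print("Impact assessment summary")
for category, flags in answers.items():
    concerns = sum(flags)
    note = "mitigations required" if concerns else "no concerns recorded"
    print(f"- {category}: {concerns} concern(s); {note}")
```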
Bias audit (Models, Data, Tools, Governance)

Background: A growing company is using an AI system it has built and trained to sift candidates in a recruitment campaign, and undertakes a bias audit to ensure that this approach does not introduce unintended biases into the system.

Process: A third-party auditor collaborates with the system's developers to analyse findings and present recommendations. The auditor also supports the developers to implement mitigations and to ensure that systems are designed in a way that minimises unfair bias.

Outcomes: The audit reveals that the model is more likely to pass candidates who live in a certain area. This might introduce biases, as home addresses may correlate with protected characteristics, e.g. ethnicity, age or disability.
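As a sketch of the statistical side of such an audit, the code below compares sift pass rates across candidate home areas and flags large disparities. The data and area labels are invented, and the 0.8 ratio rule is borrowed from the common "four-fifths" heuristic rather than from this guidance; a real audit would also test correlations with protected characteristics directly.

```python
from collections import defaultdict

# Illustrative (home_area, passed_sift) outcomes from the recruitment model.
candidates = [
    ("area_a", True), ("area_a", True), ("area_a", True), ("area_a", False),
    ("area_b", True), ("area_b", False), ("area_b", False), ("area_b", False),
]

totals, passes = defaultdict(int), defaultdict(int)
for area, passed in candidates:
    totals[area] += 1
    passes[area] += passed

rates = {area: passes[area] / totals[area] for area in totals}
highest = max(rates.values())
for area, rate in sorted(rates.items()):
    flag = "flag for review" if rate < 0.8 * highest else "ok"
    print(f"{area}: pass rate {rate:.2f} ({flag})")
```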
Compliance audit (Governance)

Background: A UK-based company that uses AI in its manufacturing processes is undertaking work to ensure its products comply with regulatory requirements overseas. The company is planning to expand to sell a new product in international markets.

Process: The company starts with a compliance audit. It works with a third-party service provider to review the extent to which its products and processes adhere to internal policies and values, as well as to regulatory and legal requirements. The company uses a governance platform, purchased from a specialist third-party provider of AI assurance systems. This includes features that will update, inform and guide developers on relevant requirements that apply across different markets.

Outcomes: After applying these assurance techniques, the company has a clearer understanding of how well it complies with requirements and standards across different markets and regions.
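At its simplest, a governance platform of this kind tracks which requirements apply in each target market and what evidence the company holds against them. The sketch below is an assumed, much-simplified illustration; the market names and requirement labels are invented and do not describe any real platform.

```python
# Requirements assumed to apply in each target market (illustrative only).
requirements = {
    "UK": ["data_protection_assessment", "equality_impact_review"],
    "EU": ["data_protection_assessment", "conformity_marking"],
}

# Evidence the company currently holds against each requirement.
evidence = {"data_protection_assessment": "docs/dpia.pdf"}

for market, needed in requirements.items():
    missing = [req for req in needed if req not in evidence]
    status = "all requirements evidenced" if not missing else "gaps: " + ", ".join(missing)
    print(f"{market}: {status}")
```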
Conformity assessment (Tools)

Background: A large technology company is using a UKAS-accredited product certification body to demonstrate that its products conform with applicable product standards, initially and on an ongoing basis, to give customers and regulators confidence that normative requirements are being continuously achieved and maintained.
Formal verification (Models, Tools)

Background: A bank is using formal verification to test a newly updated AI model that will support the assessment of mortgage applications, to ensure the model is robust and any risks associated with its use are verified.

Process: The bank is working with a consultancy to provide a robust assessment of its software prior to deployment. The use of a third-party assurance provider helps to ensure that the assessment is impartial, and allows the bank to assess its algorithms thoroughly by making use of specialist expertise. The verification process uses formal mathematical methods to assess whether the system updates satisfy key requirements. Following the assessment, the consultants present results in a detailed report, which highlights any errors or risk factors the assessment has flagged.

Outcomes: The thorough scrutiny of the formal verification process ensures that any potential risks or errors are identified before the system is in use. This is particularly important in financial services, where errors could have severe consequences for users and for the bank's reputation and ability to meet regulation. The results also provide an objective and quantifiable measurement of the model's functionality, enhancing security and confidence from users and shareholders.
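To make "formal mathematical methods" concrete, here is a minimal sketch using the Z3 solver (the z3-solver Python package). The toy affordability rule and the property checked are assumptions invented for illustration; real mortgage models and the properties a consultancy would verify are far richer.

```python
from z3 import Real, Solver, And, unsat

income, debt = Real("income"), Real("debt")

# Toy decision rule (assumed): approve when the disposable margin is positive.
approved = income - 2 * debt > 0

# Requirement: the rule must never approve an applicant whose debt exceeds
# their income. Ask Z3 for a counterexample; "unsat" means none exists.
s = Solver()
s.add(And(income >= 0, debt >= 0))  # domain assumptions
s.add(debt > income)                # an applicant violating the requirement...
s.add(approved)                     # ...who is nonetheless approved

if s.check() == unsat:
    print("Verified: no applicant with debt greater than income is approved.")
else:
    print("Counterexample:", s.model())
```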
05
Key actions for organisations
Steps to build AI assurance

AI assurance is not a silver bullet for responsible and ethical AI, and whilst the ecosystem is still developing there remain limitations to, and challenges for, successfully assuring AI systems. However, early engagement and proactive consideration of likely future governance needs, skills and/or technical requirements can help to build your organisation's assurance capabilities.

If you are an organisation interested in further developing your AI assurance understanding and capability, you may want to consider the following steps:

1. Consider existing regulations

While there is not currently statutory AI regulation in the UK, there are existing regulations that are relevant for AI systems. For example, systems must adhere to existing regulation such as UK GDPR, the Equality Act 2010 and other industry-specific regulation.

2. Upskill within your organisation

Even whilst the ecosystem is still developing, organisations should be looking to develop their understanding of AI assurance and anticipate likely future requirements. The Alan Turing Institute has produced several training workbooks focused on the application of AI governance in practice, and the UK AI Standards Hub has a training platform with e-learning modules on AI assurance.

3. Review internal governance and risk management

Effective AI assurance is always underpinned by effective internal governance processes. It's crucial to consider how your internal governance processes ensure that risks and issues can be quickly escalated, and that effective decisions can be taken at an appropriate level. The US National Institute of Standards and Technology (NIST) has developed an in-depth Risk Management Framework (RMF) that can support the management of organisational risk.

4. Look out for new regulatory guidance

Over the coming years, regulators will be developing sector-specific guidance setting out how to operationalise and implement the proposed regulatory principles in each regulatory domain. For example, the ICO has developed guidance on AI and data protection for those in compliance-focused roles. The UK government has also published initial guidance to regulators as part of its response to the AI regulation white paper consultation.

5. Consider involvement in AI standardisation

Private sector engagement with SDOs is crucial for ensuring the development of robust and universally accepted standards protocols, particularly from SMEs, who are currently underrepresented. Consider engaging with standards bodies such as BSI, and visit the AI Standards Hub for information and support for implementing AI standards.
06
Additional resources
Additional resources for those interested:

• Department for Science, Innovation and Technology (2023): A pro-innovation approach to AI regulation
• UK AI Standards Hub: AI Standards Database
• UK AI Standards Hub: Upcoming Events