AI Ethics and Governance

The six ethical principles identified by the WHO Expert Group to guide the development and use of AI technology for health:

1. Protect autonomy;
2. Promote human well-being, human safety and the public interest;
3. Ensure transparency, explainability and intelligibility;
4. Foster responsibility and accountability;
5. Ensure inclusiveness and equity;
6. Promote artificial intelligence that is responsive and sustainable.
Ethical concerns relating to AI for health include:
• AI could reinforce the digital divide;
• AI design may suffer from a lack of good-quality data;
• Data collected may incorporate clinical biases;
• Data privacy and confidentiality risks;
• A lack of treatment options after diagnosis.
Black Box AI
• AI is sometimes referred to as a “black box” when the AI system’s inputs and/or processing are not visible or transparent. Black box AI therefore makes it difficult to understand how data are being used and how results are reached.
• In other words, with black box AI, one may (or may not) be aware of:
• The system input (the data used);
• The output (the end result or recommendations made);
• But not how the system reaches that output. This can raise ethical challenges, particularly in relation to data bias, as the sketch below illustrates.
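As a rough illustration (not taken from the slides – the features, labels and model here are invented), the snippet below trains a classifier whose input and output are fully visible while its internal decision logic is not:

```python
# Minimal sketch of the "black box" problem: input and output are visible,
# but the learned decision logic is not human-readable.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                 # hypothetical patient features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # hypothetical diagnosis labels

model = RandomForestClassifier(n_estimators=100).fit(X, y)

patient = X[:1]
print("input:", patient.round(2))         # visible: the data used
print("output:", model.predict(patient))  # visible: the recommendation
# Not visible: *why* hundreds of decision trees voted this way -- the
# reasoning is distributed across the whole ensemble.
```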
Data Bias
• Data bias is a concern that we will explore in more detail later in this module. With “black box” AI, there can be issues regarding the transparency of:
• The data used and/or the bias in those data;
• The influence of that bias on data processing to inform insights.
• It can be argued that the opacity of an AI system exacerbates concerns regarding the intelligibility and explainability of that system. A simple data-level audit follows.
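A minimal sketch, assuming a hypothetical dataset with a recorded demographic attribute, of an audit that can be run on the data before any model is trained:

```python
# Compare positive-label rates across demographic groups in the training
# data itself; a large gap may encode historical under-diagnosis or
# unequal access -- the clinical bias the slide warns about.
import pandas as pd

df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "A"],  # hypothetical attribute
    "label": [1, 1, 0, 0, 0, 1, 0, 1],                  # hypothetical diagnosis
})

rates = df.groupby("group")["label"].mean()
print(rates)
print("rate gap:", abs(rates["A"] - rates["B"]))
```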
Autonomous Decision-Making
• Autonomous decision-making uses a “data-driven” approach to augment
human decision-making.
• To do this, AI technologies identify patterns in data that reveal
correlations. These correlations provide medical professionals with new
insights and help inform decisions.
• Therefore, the power of AI in decision-making is its influence over human
decisions.
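As a rough illustration (the records and column names are invented), a simple correlation scan over patient data can surface candidate risk factors for clinicians to review:

```python
# Pattern-finding sketch: correlations flag candidate risk factors; they
# inform, but do not replace, the clinician's judgement.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
data = pd.DataFrame({
    "age": rng.integers(20, 90, size=200),
    "bmi": rng.normal(27, 4, size=200),
    "blood_pressure": rng.normal(130, 15, size=200),
})
# Hypothetical outcome loosely driven by age and blood pressure.
data["readmitted"] = ((data["age"] / 90 + data["blood_pressure"] / 200
                       + rng.normal(0, 0.2, size=200)) > 1.2).astype(int)

print(data.corr()["readmitted"].sort_values(ascending=False))
```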
Delegation of Clinical Judgement
• While AI is used to augment human decision-making in health care, in some
circumstances, AI systems are displacing humans from the centre of knowledge
production. The delegation of clinical judgement introduces concern about
whether full delegation is legal.
• Laws increasingly recognize the right of individuals not to be subject to solely automated decisions when such decisions would have a significant effect – for example, Article 22 of the GDPR.
• Full delegation also creates a risk of automation bias on the part of the provider.
If human judgement were replaced by machine-guided judgement, wider ethical
concern would arise concerning the loss of human control.
• However, it is unlikely that AI in medicine will ever achieve full autonomy. It may achieve only conditional automation or require human backup, as sketched below.
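A minimal sketch, not taken from the WHO guidance, of what conditional automation with human backup can look like: only high-confidence predictions are acted on automatically, and everything else is deferred to a clinician:

```python
# Human-in-the-loop gate: route low-confidence model outputs to a person.
def triage(probability: float, threshold: float = 0.95) -> str:
    """Map a model's predicted disease probability to an action."""
    if probability >= threshold or probability <= 1 - threshold:
        return "auto-generate report (clinician signs off)"
    return "defer to clinician for full manual review"

for p in (0.99, 0.60, 0.02):
    print(f"p={p:.2f} -> {triage(p)}")
```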
Bias and Discrimination with AI
• Societal bias and discrimination are often replicated by AI technologies. The design of AI technologies should take into consideration the different forms of discrimination and bias that people may suffer because of identities such as gender, race and sexual orientation.
• Bias can enter at each stage of an AI system’s life cycle (a simple deployment-stage check follows this list):
• Training data;
• Development;
• Deployment.
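A minimal sketch of a deployment-stage check (the predictions and group labels are invented): comparing a model’s positive-prediction rates across demographic groups, in the spirit of a demographic-parity audit:

```python
# A gap in positive-prediction rates between groups does not prove
# discrimination, but it signals that training data, development choices
# or deployment context deserve review.
import numpy as np

preds = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])  # hypothetical outputs
group = np.array(["F", "F", "M", "M", "F", "M", "F", "M", "F", "M"])

for g in np.unique(group):
    print(f"group {g}: positive-prediction rate = {preds[group == g].mean():.2f}")
```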
AI and Commercialization

• The use of AI for health has been promoted by companies – from small start-ups to large technology companies – through significant advocacy and investment.
• Those who support a growing role for these companies expect that they will be able to marshal their capital, in-house expertise, computing resources and data to identify and build novel applications to support health providers and systems.
• Ethical challenges include:
1. Lack of transparency;
2. Data collection and use;
3. The power of commercial companies.
Commercialization and Excess Data
• Data may be exploited commercially without the knowledge of those who supplied the data.
• For example, excess data could be shared with, and used by, companies to:
• Develop AI technologies for marketing goods and services;
• Or to create prediction-based products to be used, for example, by an insurance firm.
• The provision of health data to commercial entities has also resulted in the filing of legal
actions by individuals whose health data have been disclosed.
• Concerns about the commercialization of excess health data include:
• Individual loss of autonomy;
• Loss of control over the data (with no explicit consent to such secondary use);
• How data may be used;
• Whether companies are allowed to profit from the use of data;
• Whether companies meet the duty of confidentiality, either deliberately or inadvertently
(for example due to a data breach).
Liability Regimes for AI for Health
The performance of AI is improving, but errors can still occur – for example, when an algorithm has been trained with incomplete or inappropriate data. Even AI technologies designed with well-curated data and an appropriate algorithm could cause harm.
• Lawmakers and regulators should ensure that:
• Safety rules and frameworks are applicable to the use of AI technologies for health care;
• These frameworks are proactively integrated into the design and deployment of AI-
guided technologies.
• Liability rules for the use of AI in clinical care should include the same standards already
applied to health care. However, it is possible that reliance on AI technologies and the
risks they may pose require additional obligations and damages.
WHO recommendations for liability regimes
for AI for health care:
1. International agencies (and professional societies) should ensure that their clinical
guidelines keep pace with the rapid introduction of AI technologies, accounting for the
evolution of AI technologies by continuous learning.
2. WHO should support national regulatory agencies in assessing AI technologies for health.
3. WHO should support countries in evaluating the liability regimes that have been
introduced for the use of AI technologies for health and how such regimes should be
adapted to different health care systems and country contexts.
4. WHO and partner agencies should seek to establish international norms and legal
standards to ensure national accountability to protect patients from medical errors.
The Need for Consensus Principles
• The use of AI for health introduces several challenges that cannot be resolved by
ethical principles and existing laws and policies. This is because the risks and
opportunities of the use of AI are not yet well understood, or may change over
time.
• Furthermore, many principles, laws and standards were devised by and for high-
income countries. Therefore, low- and middle-income countries will face
additional challenges to introducing AI technologies, which will require both:
• Awareness of and adherence to ethical principles;
• And appropriate governance.
AI in Healthcare and Governance
Broad Consent and Governance: User-generated Content
• The use of the term “user-generated” can lead to an assumption that information is consciously provided by the user;
however, a lot of the information generated by devices can be produced unbeknownst to the user.
• User-generated health data can be obtained through:
• Digital devices and wearables;
• Chatbots;
• Social media;
• Online patient communities.
• Governance of user-generated data is complex because:
• Internet usage crosses international boundaries and jurisdictions;
• It is not consistently regulated by law;
• There is a lack of self-regulation across technology companies.
Recommendations: Consent and Benefit Sharing
1. Governments should have clear data protection laws and regulations governing the use of health
data and protecting individual rights, including the right to meaningful informed consent.
2. Governments should establish independent data protection authorities with adequate power
and resources to monitor and enforce the rules and regulations in data protection laws.
3. Governments should require entities that seek to use health data to be transparent about the
scope of the intended use of the data, including any onward transfers. These entities should
also commit to safeguarding human rights, in relation to their access and use of health data.
4. Mechanisms for community oversight of data should be supported. These include data
collectives and establishment of data sovereignty by indigenous communities and other
marginalized groups.
5. Data hubs should meet the highest standards of informed consent if their data might be used
by the private or public sector, should be transparent in their agreements with companies and
should ensure that the outcomes of data collaboration provide the widest possible public
benefit.
Recommendations: Control and Benefit Sharing
1. WHO should ensure clear understanding of which types of rights will apply to the
use of health data and the ownership, control, sharing and use of algorithms and AI
technologies for health.
2. Governments, research institutions and universities involved in the development of
AI technologies should maintain an ownership interest in the outcomes so that the
benefits are shared and are widely available and accessible, particularly to
populations that contributed their data for AI development.
3. Governments should consider alternative “push-and-pull” incentives instead of IP
rights, such as prizes or end-to-end push funding, to stimulate appropriate research
and development.
4. Transparency in regulatory procedures and in interoperability should be enhanced
and should be fostered by governments as deemed appropriate.
Governance of the Private Sector
The private sector plays a central role in the development and delivery of AI
for health care. Ranging from small start-ups to the world’s largest
technology companies, the private sector provides many of the inputs
necessary for AI, including:
• Data collection;
• Data aggregation;
• AI algorithms.
Recommendations: Governance of the Private Sector
1. Governments should regulate online platforms that provide health-related services to ensure patient safety and care.
2. Governments should encourage tech companies to voluntarily commit to the UN Guiding Principles on Business and
Human Rights (the “Ruggie principles”).
3. Governments should consider co-regulation of AI technologies with the private sector and build capacity to effectively
regulate companies that deploy AI technologies.
4. Governments should consider establishing dedicated teams to conduct objective peer reviews of software and system
implementation.
5. Governments should consider which aspects of health care private companies should supply and how to hold them
accountable.
6. Public-private partnerships that develop or deploy AI technologies for health should be transparent and should prioritize
protection of individual and community rights, with government ownership to ensure that outcomes are affordable and available to all.
7. Companies must adhere to national and international laws and regulations, and human rights principles, on the
development, commercialization and use of AI for health systems.
8. Companies should invest in measures to improve the design, oversight, reliability and self-regulation of their products.
Companies should also consider licensing requirements for developers of “high-risk” AI.
9. Companies should be transparent about their policies, practices, and how ethical principles are implemented and
regulated.
Governance of the Public Sector
Use of AI in the public sector is increasing. In 2019, the OECD identified 50 countries that had launched – or were
planning to launch – national AI strategies, of which 36 planned to issue, or had issued, separate strategies for
public-sector AI.
• In 2017, the United Arab Emirates was the first country in the world to have a designated minister for AI,
which has resulted in increased use of AI in the health-care system, with:
• “Pods” to detect early signs of illness;
• AI-enabled telemedicine;
• Use of AI to detect diabetic retinopathy.
The OECD identified six broad roles for governments in AI, as a:
• Financier or direct investor;
• “Smart buyer” and co-developer;
• Regulator or rule-maker;
• Convenor and standard setter;
• Data steward;
• User and services provider.
Recommendations: Governance of the Public Sector
1. Governments should conduct transparent, inclusive impact assessments before selecting or using any AI
technology for the health sector and regularly during deployment and use. This should consist of ethics,
human rights, safety, and data protection impact assessments. Governments should also define legal and
ethical standards for procurement of AI technologies and require public and private health-care providers to
integrate those standards into their procurement practices.
2. Governments should be transparent about the use of AI for health, including investment in use, partnerships
with companies and development of AI in state-owned enterprises or government agencies, and should also
be transparent about any harm caused by use of AI.
3. Governments and national health authorities should ensure that decisions about introducing an AI system for
health care and other purposes are taken not only by civil servants and companies but with the democratic
participation of a wide range of stakeholders and in response to needs identified by the public health sector
and patients. They should include representatives of public interest groups and leaders of marginalized
groups, who are often not considered in making such decisions.
4. Governments should develop and implement ethical, legally compliant principles for the collection, storage
and use of data in the health sector that are consistent with internationally recognized data protection
principles. In particular, governments should take steps to avoid risks of bias in data that are collected and
used for development and deployment of AI in the public sector.
5. Governments should ensure that any use of AI to facilitate access to health care is inclusive, such that uses of
AI do not exacerbate existing health and social inequities or create new ones.
Recommendations: Regulatory Considerations
1. Governments should introduce and enforce regulatory standards for new AI technologies to
promote responsible innovation and to avoid the use of harmful, unsafe or dangerous AI
technologies for health.
2. Government regulators should require the transparency of certain aspects of an AI
technology, while accounting for proprietary rights, to improve oversight and assurance of
safety and efficacy. This may include an AI technology’s source code, data inputs and
analytical approach.
3. Government regulators should require that an AI system’s performance be tested and sound
evidence obtained from prospective testing in randomized trials and not merely from
comparison of the system with existing datasets in a laboratory.
4. Government regulators should provide incentives to developers to identify, monitor and
address relevant safety- and human rights-related concerns during product design and
development and should integrate relevant guidelines into precertification programmes.
Regulators should also mandate or conduct robust post-market surveillance to identify biases.
Recommendations: Policy Observatory and
Model Legislation
1. WHO should work in a coordinated manner with appropriate
intergovernmental organizations to identify and formulate laws, policies
and best practices for ethical development, deployment and use of AI
technologies for health.
2. WHO should consider issuing model legislation to be used as a reference
for governments that wish to build an appropriate legal framework for
the use of AI for health.
Recommendations: Global Governance of Artificial
Intelligence
1. Governments should support global governance of AI for health to ensure that the development and
diffusion of AI technologies is in accordance with the full spectrum of ethical norms, human rights
protection and legal obligations.
2. Global health bodies such as WHO, Gavi, the Vaccine Alliance, the Global Fund to Fight AIDS,
Tuberculosis and Malaria, Unitaid and major foundations should commit themselves to ensuring that
adherence to human rights obligations, legal safeguards and ethical standards is a core obligation of
all strategies and guidance.
3. International agencies, such as the Council of Europe, OECD, UNESCO and WHO, should develop a
common plan to address the ethical challenges and the opportunities of using AI for health, for
example through the United Nations Interagency Committee on Bioethics. The plan should include
providing coherent legal and technical support to governments to comply with international ethical
guidelines, human rights obligations and the guiding principles established in this report.
4. Governments and international agencies should engage nongovernmental and community
organizations, particularly for marginalized groups, to provide diverse insights.
5. Civil society should participate in the design and use of AI technologies for health as early as possible
in their conceptualization.
