2023 MEG Report
The Multistakeholder Experts Group Annual Report (MEG) Report for 2023 was prepared by the MEG
Chair and supported by the Expert Support Centres.
This document, as well as any data and map included herein, are without prejudice to the status of or sovereignty over any territory, to the
delimitation of international frontiers and boundaries and to the name of any territory, city or area.
Multistakeholder Expert Group
Annual Report
Citation
GPAI 2023, “Multistakeholder Expert Group Annual Report 2023”, November 2023, Global
Partnership on AI.
Executive Summary
MEG Chair Welcome
Introducing the Multistakeholder Expert Group
Multistakeholder Expert Group - Progress Report
GPAI Priorities
Resilient Society
Climate Change
Human Rights
Global Health
Multistakeholder Expert Group - Strategic Planning
Forward Look
ANNEX
Executive Summary
The Global Partnership on AI (GPAI) is a multistakeholder initiative bringing together leading experts
from science, industry, civil society, international organizations and government that share values to
bridge the gap between theory and practice on AI by supporting cutting-edge research and applied
activities on AI-related priorities.
With the aim of promoting international collaboration, minimising duplication, serving as a global
reference point for particular AI issues, and ultimately fostering trust in and the adoption of
trustworthy AI, GPAI is uniquely placed globally through its mechanism for sharing
multidisciplinary research among AI practitioners and identifying key issues.
The Multistakeholder Expert Group - which we familiarly call the MEG - brings together all four
expert working groups, which work on the themes of responsible AI, data governance, the future of
work, and innovation and commercialization. The MEG assesses the scientific, technical, and
socio-economic information relevant to understanding advanced AI systems, including their impacts,
while encouraging their safe, responsible and ethical deployment.
All activities undertaken by the MEG aim to promote the ethical development of AI based on the
values of inclusiveness, diversity, creativity, and economic growth while advancing the UN
Sustainable Development Goals.
Inma Martinez
Chair of GPAI Multistakeholder Expert Group
Co-Chair of GPAI Steering Committee
Technology Innovator and AI Pioneer
Guest Lecturer at Imperial London School of Business, London, United Kingdom
Director of the Master's in Artificial Intelligence at Loyola University Spain
Government and Corporate Advisor on Digital Transformation and Artificial Intelligence
The year 2023 has created an enormous divide in the AI landscape. While the expansion of
traditional Artificial Intelligence has continued to bring forth extraordinary progress and the promise
of transformational rewards across many sectors and social scenarios, the rise of advanced AI has
sharply amplified the negative and threatening effects of Artificial Intelligence. Algorithmic
inequality, bias, poor inclusion of intercultural values, misinformation at super-scale and the
continued degradation of the mental health of young people on social platforms are rapidly growing
challenges that the AI community and governments must solve with approaches that are effective
and actionable at international and national levels.
In May this year, the G7 convened the GPAI at the launch of the Hiroshima Process to provide
specific advisory and strategic solutions to the advanced AI challenge. When our co-chairs looked at
the list of concerns detailed in the communiqué, we noticed that the MEG already had ongoing
initiatives for some of them, because since 2022 our project approaches had anticipated the rise of
Generative models. On 9 October 2023, at the United Nations Internet Governance Forum in Kyoto,
GPAI presented how our Experts have responded to all points in the Hiroshima Process list of
challenges with innovative solutions and strategic advice, some of which have also been presented
at the United States Congress and the European Commission and been incorporated into Executive
Orders and AI directives. The MEG works at all times towards the goal of supporting governments
in halting the spread of algorithmic misinformation threatening our democracies: this year it
suggested creating detection mechanisms for social media, elaborated protocols on how to create
sandboxes and algorithm repositories for responsible AI in public procurement, and worked on
protecting AI innovation whilst incorporating human-centric and design-led principles. A total of 18
solutions to the ten areas of concern listed are currently in our pipeline. In 2024 we remain
committed to deploying further ones and launching many of those already started in 2023.
The activities of the MEG in 2023 were impacted by the unexpected release of a generative AI
model in the Autumn of 2022. At the request of the GPAI Members from the United States, the chair
of the MEG was encouraged to pivot from the linearity of our activities and create a response to
what the GPAI members needed - a quick analysis of how Generative AI was going to increase the
AI challenges. The MEG chair convened the Expert Support Centres and the co-chairs of the
Working Groups in January 2023 and proposed organising a Townhall. It is important to explain
how the idea of the Townhall came to be: the MEG was able to react because the new MEG chair
had had the opportunity of a direct relationship with one of the GPAI Members.
The MEG wants to continue organising Townhalls in 2024, and invites the GPAI Members to reach
out when specific issues of concern affect their individual nations, continents or economic alliances.
The Experts keep a close eye on all AI events emerging, and we would like to have a protocol to
bring them to the attention of the Members with ease, and to convene as seamlessly as possible. AI
is a living thing, as the MEG chair mentioned in her opening speech at the Townhall, and we must
react swiftly and stay alert to any nuances in the marketplace that may come to affect our project
roadmaps, because advanced AI is today a competitive attribute both commercially and
geopolitically. These aspects, not just the scientific ones, will create tectonic forces in 2024 that
GPAI members will have to address in completely new ways.
The MEG Project Leaders have also organised workshops where public participants have been
invited to provide input and reactions to our work. The success of the workshops organised by the
IP Project Advisory Group of the Innovation & Commercialisation of AI Working Group on ‘Exploring
Pathways to the Standardization of Licenses for AI Data and Machine Learning Models’,
co-organized with the Max Planck Institute for Innovation and Competition in Munich, Germany, and
Duke University in the United States, merits particular mention. The relevance of this project - to
support the development of an informed and inclusive ecosystem that can advance efforts to
develop standard contract terms for responsible and efficient data and AI model sharing to help
unlock the promise of AI - and its vision of engaging market players in its formulation are a solid
example of the gravitas that our Experts possess and the vision that we have for GPAI MEG to
be an effective tool for “real solutions for real problems beyond policy”.
As part of the mandate undertaken as chair of the MEG, other initiatives that could not be fulfilled in
previous years were finally accomplished. This was the case of the Innovation Workshop, proposed
by CEIMIA in previous years. In both entrepreneurial and corporate environments, the
transformational effects that innovation methodologies bring to strategy, ideation and risk
assessment are core protocols that allow not just for the invigoration of product development, but
also for strategic pivoting and the creation of better-adjusted business models with optimised
competitive attributes. Is there room for governments to use innovation methodologies to
re-evaluate roadmaps, concerns and assumptions, and to re-calibrate needs? The answer is a
resounding “Yes”. The Innovation Workshop held in Montreal in late September this year happened
because the MEG and the ESCs fought hard for it, grounded in the firm belief that we could
demonstrate another way in which the MEG could create asset-value for the GPAI members. We
want to establish a protocol to re-evaluate our roadmaps each year by putting them through the
sieve of an innovation workshop to which all GPAI Members can send sector specialists, who,
together with the Experts and invited guests, immerse themselves in the task of fine-tuning our
assumptions and approaches, just as AI scientists do. In this report we would like to share not
only the statistics, value points and Key Performance Indicators that emerged from the
Innovation Workshop, so that in 2024 we increase the benefit for all Members, but also the
philosophy behind it and the purpose of empowering government representatives to actively learn
the tools of innovation.
At the time of writing this opening letter, I have been confirmed for one more year as a
Member-Nominated Expert and as chair of the MEG by acclamation. I accepted my appointment as
Expert in 2021 because, fundamentally, beyond our common mission to work relentlessly for the
economic progress and social welfare that the world needs, there is room for innovative approaches
to collaborating at the international level for the greater good that all citizens expect from their
governments and from the AI scientific community. Neither of these sides of the table has good
optics when it comes to AI: citizens blame governments for reacting late to AI’s destruction of and
threat to our human rights, democracy and the welfare of the most vulnerable, and the AI
community is blamed for developing something that could put the world into auto-destruction mode.
The GPAI is an incredibly powerful platform to prove that we both work for the greater good with
concrete and attainable solutions, and we hope to increase this level of awareness in 2024 with a
future AI Academy.
I encourage you to read this MEG report as an invitation to learn directly from the Experts how we
work and how on top of the real issues we are, and, hopefully, to support our efforts within a
collaborative environment between Members and Experts, allowing us to truly deliver on our great
potential.
Inma Martinez
_________________________________
Chair of GPAI Multistakeholder Expert Group
Co-Chair of GPAI Steering Committee
It is worth noting that the MEG brings together all four expert working groups, working on the
themes of responsible AI, data governance, the future of work, and innovation and
commercialization, to shape GPAI’s research agenda and to deliver practical projects,
including action-oriented recommendations, to the GPAI Members.
Currently, 33% of MEG Experts are women, a number which we will work to increase in the
future. Most of the MEG Experts (56%) come from the science sector, 21% come from
industry, 8% from civil society, 7% from international organisations, 6% from
government institutions and 2% are representatives of trade unions.
The MEG also represents an interesting diversity of countries, although more countries
should be represented, especially middle-low-income countries. A better balance should be
achieved in the coming months and years as the collaboration of all stakeholders is
necessary to ensure responsible development, governance, commercialization and the
future of artificial intelligence.
Following the viral launch of OpenAI’s ChatGPT on November 30, 2022, it became clear to AI
experts, casual connoisseurs of AI, and even the AI-unaware public alike just how impactful an
interactive generative AI tool could become in both the near and distant future. When ChatGPT
launched, it was the first opportunity the public had to interact with such a model
directly, ask it questions, and receive comprehensive and collaborative responses.
Noticing the immediate buzz around generative AI as a whole, the MEG was compelled to launch
our first MEG Town Hall meeting, which took place on May 15, 2023. The goal was to bring together
the Experts in an extraordinary meeting to address pressing issues raised by Members on
generative AI.
GPAI Members and Experts alike believe that generative AI affects society in unprecedented ways.
The short-term issues erode many of the pillars on which we founded our societies; the way in
which we work, how we educate people, share information, develop creative assets, and guarantee
the safety of citizens’ data has all been challenged by the recent release of generative AI tools to
the public.
The MEG suggested that we must democratise the development of AI by providing scientists and AI
developers with access to resources, in order to complement the current commercialization of
generative AI. This recommendation is founded on the rationale that it is in the best interest of
society as a whole that AI be developed by experts who have the tools and guidelines
required to ensure its responsible development and deployment.
Due to the success of the question-and-answer session that followed the presentations, the MEG
offered to briefly answer further questions after the Townhall. For some, inquiries centred on the
specific risks associated with generative AI, particularly those not adequately covered by existing
regulations. Emphasising gaps in regulatory frameworks, the focus extended beyond mere
application to encompass the entire lifecycle and developmental phases of AI systems. Concerns
were raised regarding economic regulations to address exclusion and discrimination and ensure
more socially just outcomes.
Risks associated with generative AI were deliberated upon by others, highlighting threats to
democratic principles, safety, integrity, and education. The discussion prompted contemplation on
whether broader requirements should be established for large language models (LLMs) like
ChatGPT, given their adaptable nature across diverse contexts.
Other Members highlighted the need for agile governance tools to ensure the ethical and safe use
of generative AI amidst its rapid growth. Further questioning emphasised the necessity of governing
the entire AI ecosystem cohesively, addressing systemic inequalities and suggesting the importance
of differentiating content generated by AI systems from human-generated content. Some stressed
the need for accountable engagement with AI providers, defining liabilities throughout the AI
lifecycle, and promoting open-source rules to protect coders’ work.
Questions posed by others centred on defining generative AI and understanding its associated
benefits and risks. Concerns were expressed regarding compliance with copyright and personal
data protection.
Overall, these global inquiries underscore the complexity surrounding generative AI and its
multifaceted implications across societal, ethical, and regulatory domains. The need for agile
governance, inclusive participation, ethical guidelines, accountability, and collaboration emerges as
essential components in navigating the evolving landscape of generative AI.
The discussions reveal a collective call for holistic and adaptive approaches, urging stakeholders to
collaborate globally, fostering equitable frameworks, ethical guidelines, and responsible adoption of
generative AI to harness its potential while mitigating associated risks. As nations grapple with the
challenges posed by this transformative technology, the need for continual dialogue, informed
regulation, and ethical guidance remains paramount in ensuring a balanced and beneficial
integration of generative AI into our societies.
Hiroshima AI Process
Following up on the GPAI 2023 Town Hall, the G7 met in Hiroshima on May 19-21, 2023, where
they agreed on the necessity of addressing the pressing challenges raised by advanced AI systems.
Under the Hiroshima Process for Generative AI, it was agreed that GPAI would conduct
practical projects supporting the need to tackle the opportunities and challenges of these
advanced AI systems.
Moreover, the idea of the MEG Town Hall was to further facilitate the open exchange between the
growing community of GPAI Members and GPAI’s Experts who dedicate their time to working on
GPAI’s projects. Fostering the GPAI community between the Members and Experts is a top priority,
which was further explored in our first Innovation Workshop held in Montreal in September of this
year.
• Track 3: Addressing Climate Change (this track was cancelled due to the limited number of
interested participants)
Innovation workshops aim to accelerate the identification of key trends and challenges from a group
of different stakeholders. These events usually last from one to three days to align and prioritise the
most strategic opportunities to be addressed. During these workshops participants collect and
integrate “out-of-the-box” ideas and solutions. These events are designed and organised to
purposefully engage participants, boost their creativity and benefit from the diverse mindset of the
crowd. Participants work in teams to tackle complex challenges together. As a result, innovation is
accelerated as it recharges the portfolio of ideas, as well as empowers teams to implement the
outcomes.
The Double Diamond methodology was applied to organise and execute the GPAI Innovation
Workshop. During day 1 the organising team kicked off the workshop by presenting the objectives of
the event, explaining the proposed tracks, and sharing the results of the survey which highlighted
Based on the outcomes of this first Innovation Workshop, we propose five recommendations:
Building on the outcomes of the innovation workshop, GPAI Experts have developed a proposal for
an inclusive project to raise awareness of best practices for the safe use and governance of
advanced AI systems: the “GPAI Academy” project. The objective of this project is to educate
society, through all its stakeholders, about AI, on two specific tracks: (a) raise awareness among the
general public of artificial intelligence and the conditions for its controlled development, and (b)
raise awareness among AI specialists of the conditions for deploying AI systems in a safe and
trustworthy manner.
Many of the Innovation Workshop's calls for initiatives addressed the need to create content in order
to reinforce AI literacy among various categories of public: young people, students, teachers, and
workers. However, it makes sense to start by highlighting the best existing content, in consultation
with its creators.
In order to fulfil its role of helping countries gain access to resources and tools to seize the full
potential of AI in a trustworthy manner, GPAI could take the lead at the international level on the
creation of pedagogical content (5-minute YouTube videos) that would raise awareness of emerging
issues related to AI. A starting point could be the creation of content based on insights from already
existing GPAI projects.
In order not to duplicate existing initiatives around the globe, the GPAI Academy project would be
conducted in close cooperation with individuals who already actively engage in knowledge sharing.
This initiative also seeks to foster collaboration among these individuals, urging them to collaborate
on addressing areas that are currently inadequately addressed in educational courses.
To pool efforts in a collaborative way, GPAI proposes organising events gathering a wide range of
specialists in AI training: MOOC creators, universities, AI institutes, teachers, NGOs, public
authorities, etc. At these events, such actors would get the opportunity to present their key materials
to four specific audiences: teachers, workers, youth, and citizens (TWYCs). To ensure a global
reach, these events would be promoted through social media and locally through educational
institutions.
Through the organization of such events, GPAI is seeking to identify topics that have not been
adequately addressed. To achieve this, GPAI proposes commissioning content producers and
engaging the Students Communities to develop content for one of the four targets mentioned.
Inadequately covered topics, such as implementing AI systems while considering diverse
trustworthiness factors, monitoring the energy usage of AI models, and comprehending the various
regulatory frameworks worldwide, are a few instances that could benefit from further examination
and attention.
Organizing thoughtful discussions during these events may yield two possible outcomes: (a)
encouraging producers to create, in their own channels, content on insufficiently covered
topics; (b) the creation of GPAI-branded content involving different kinds of content creators -
individuals, NGOs, universities, National AI Institutes, GPAI experts - that could be broadcast on
the official GPAI channels.
● Define and propose mechanisms to improve access to learning materials and models on AI
● Learning Accelerator which includes a learning portal with training, webinars, workshops,
train-the-trainers, curriculum compendium/recommendation
Needs:
● Team to map the already existing content, organise the events and disseminate the contents
● Funding to organize the events, pay multimedia services for video making
Steps:
● Map already existing content and their producers, classifying them following the TWYCs
approach
● Liaise with them to organize events on a dedicated AI topic and the related relevant target
(TWYCs)
● During the event, map the need for creation of new content that would be branded as GPAI
content
The MEG organises its work around four priority pillars identified by GPAI Members as follows:
(1) Ensuring a resilient society, knowing the challenges ahead and preparing for them;
(2) Continuing to strengthen our efforts to mitigate the effects of climate change across
humanity, including promoting the preservation of biodiversity;
(3) Making AI a powerful and effective tool while ensuring that future society is built
upon respect for human rights;
(4) Supporting the healthcare systems of all nations through AI, including addressing
future pandemics and threats to health that will require international
coordination and cooperation.
These transversal priorities provide orientation for the MEG to identify key issues that need
to be undertaken to address the specific challenges and opportunities presented by
advanced AI systems. Each priority theme involves fostering interdisciplinary dialogue,
engaging in evidence-based research, and formulating guidelines that promote safe,
responsible and ethical AI practices.
Further details on each priority theme including the projects undertaken in 2023 that fall into
one of the four priority themes are found below.
Resilient Society
Under the theme of resilient society, the MEG focuses on developing AI-driven solutions that
enhance societal resilience to various challenges. This includes the development of AI solutions for
disaster response, resource allocation, and infrastructure planning. The MEG works towards
establishing best practices and frameworks that contribute to the creation of resilient communities
and nations.
Harnessing AI’s potential for a resilient society helps that society overcome salient challenges
arising from rapid sociological change as well as internal and external shocks. However, it requires
AI literacy - a transversal theme identified at the Innovation Workshop - to help citizens realise
concrete benefits, increased investments in research and development, improved data-related
infrastructure and
Creating effective and mission-critical digital twins for resilience against disasters and climate
change requires not only AI but also sensor networks, satellite-based monitoring, and a series of
specific actions linked to AI systems, so that the status of the planet can be properly monitored,
perceived and analysed by AI systems that lead to specific countermeasures. Digital twins, with the
necessary digital security safeguards, can also result in efficiency gains from digitalisation. Making
the most of AI and digitalisation more broadly can also help address structural economic
challenges, such as shrinking and ageing populations and enable productivity gains, including in
small and medium-sized enterprises (SMEs).
The Big Unknown - A Journey into Generative AI's Transformative Effect on Professions,
starting with Medical Practitioners (FoW)*
* New proposals subject to Council approbation at the GPAI New Delhi Summit
Climate Change
AI technologies, particularly when fostering inclusion, offer novel solutions to help move towards a
low-carbon economy as well as to adapt to the impacts of climate change. In addressing climate
change, the MEG has been exploring the role of AI to support monitoring, mitigating, and adapting
to environmental challenges including how AI-driven solutions can preserve biodiversity. This work
conducted through the RAISE committee includes research into AI applications for sustainable
development, energy optimization, and climate modelling. By fostering collaborations between AI
experts and environmental scientists, the MEG aims to harness technology in the fight against
climate change, from high-income to low- and middle-income economies.
Human Rights
AI can potentially offer tools to defend human rights and democratic values around the world. AI has
the potential to automate tasks within government so that human rights cases can be heard and
addressed faster. Transparency, accountability and safeguards on how these systems are designed,
how they work and how they may change over time are therefore key. Poorly designed AI solutions
can lead to the reproduction and worsening of already existing biases. GPAI's work on the impact of
AI on human rights should be all-encompassing and include a strong focus on inclusion and gender
equality. Moreover, human rights, inclusion and gender equality should be regarded as a transversal
perspective across all GPAI work, thus avoiding niche conversations that only achieve local
optima.
GPAI Members and Experts are brought together to foster the responsible development of AI,
grounded in the principles of human rights, inclusion, diversity, innovation, and economic growth.
Since 2021, the topic of AI in human rights has been identified as one of GPAI’s priority topics. The
MEG is uniquely positioned to build on foundational work in this area, as it is able to bring together
academic, international organisation (including the OECD and UNESCO), industry, and civil society
expertise to address Member concerns and challenges in guiding the trustworthy development of AI.
Global Health
AI can identify patterns or irregularities in health data in a fraction of the time needed via more
traditional methods, thus improving the accuracy of administrative or clinical decision-making, better
allocating resources, and anticipating risks. AI-enabled automated systems are being used by
experts to help analyse X-rays, retina scans, and other diagnostic images, examine biopsy samples,
predict risks of unplanned hospitalisation, and conduct genetic analysis. However, the use of AI and
data to accelerate medical breakthroughs should be guided by rigorous safeguards to protect
patient privacy and safety and increase human interpretability.
The 2024 Work Plan for projects continues to focus on delivering practical projects which bridge the
gap between theory and practice and will expand in 2024 to address additional areas of recent
concern such as advanced AI systems including Generative AI.
The consolidated Work Plan presents a total of 25 projects. The current project list includes 12 new
projects, including some which have emerged as initiatives identified as priority at the Innovation
Workshop, and 13 others which are continuing projects from the 2023 Work Plan.
The MEG is eager to move these projects forward - as such, all of these new proposals are open for
immediate adoption by GPAI Members, as they require additional funding beyond the base annual
project funding envelope as currently provided by Canada and France. If a GPAI Member is
interested in adopting one of the listed Innovation Workshop Future Projects, for example, IW #1:
Coordinating Compute Access, GPAI would undertake the expedited process for project approval
that was recently approved by the GPAI Steering Committee and Executive Council, and seek to put
out an open call for support from both GPAI and External Experts to mobilise a project team.
Interested Members for these initiatives are encouraged to reach out to the Expert Support Centres.
We’re looking forward to starting 2024 with these upcoming projects in the pipeline. We’re hopeful
that the next months will be productive and that our future research agenda will guide the next steps
on opportunities to go further and deeper in advancing research and practice on responsible AI,
data governance, the future of work, and innovation and commercialization.
Participation across our Multistakeholder Expert Group is a big part of what makes these projects
true international collaborations. The MEG is always looking out for talent that can join us. Member
countries nominate experts to join the MEG, but the MEG also welcomes the participation of AI
specialists who can contribute to our projects with singular skills, new perspectives, and
blue-sky thinking. If you are one such individual, you can join the MEG as a self-nominated expert
by applying to us directly. We would like to invite those who are interested to contribute to
these projects by joining our Project Advisory Groups to help shape direction, give feedback, and
review research.
You can express your interest in contributing by connecting with the Expert Support Centres:
CEIMIA at [email protected] and INRIA at [email protected].