Nextgen Risk Management: How Do Machines Make Decisions?
Effective risk identification and monitoring are integral to an organisation's success and to improving strategic decision-making. Accurate and timely risk identification and assessment help drive efficiencies and improve customer experiences with business processes.
Consistent with its agile risk management philosophy, Protiviti presents its perspective
on establishing and sustaining leading practices for identifying, assessing, mitigating and
monitoring risks stemming from artificial intelligence (AI).
[Figure: Protiviti's Agile Risk Management — customer satisfaction through customer centricity, consistent experiences and agility]
Protiviti’s Agile Risk Management philosophy enables organisations to focus on growth, improve efficiency
and become more effective at managing risks while providing greater value to business partners.
Source: Protiviti Insights — Agile Risk Management: "As costs continue to increase, it is clear that the overly manual, reactive and siloed lines of
defence status quo is unsustainable and cannot continue. We believe risk capabilities must be agile, flexible and nimble in order to be effective and
efficient in responding to the changing environment. A better model is technology-enabled, proactive, aligned across all three lines of defence and
embedded into business processes. This is the solution we refer to as Agile Risk Management.”
Many organisations are quickly adopting AI based on the benefits it can create. AI technologies have the potential to advance established industries by improving the efficiency and accuracy of company operations and customer experiences. Additionally, AI is opening the door to entirely new operating models, ushering in a new set of competitive dynamics that rewards organisations focused on interpreting and extracting internal and external data quickly and accurately.1

Machine learning, a type of AI, utilises the fields of knowledge discovery and data mining. Machine learning algorithms study and react to data automatically, without human assistance or intervention, enabling systems to learn from experience and improve. However, using machine learning and AI increases complexity and creates new, more dynamic risks that may lead to unintended consequences.

To mitigate the new and changing risk environment, an organisation needs a properly established risk management foundation. Organisations can leverage existing risk management frameworks to create a framework that can identify and oversee the wide range of risks associated with AI. For instance, risk frameworks utilised to assess new products, services and activities can be applied as AI is developed, implemented and changed. Another useful framework is a model risk management (MRM) framework, which is based on identifying, measuring and monitoring all risks related to a model — generally a component of AI in the form of a machine learning algorithm.

MRM practices mitigate the risks of traditional econometric model lifecycles; however, they often fail to capture the risks presented by AI. While these frameworks can be leveraged, organisations may not currently be equipped and resourced to handle all the risks and ongoing monitoring needed in an AI environment. To account fully for risks posed by AI, organisations' existing frameworks and risk practices can be tailored with some well-targeted enhancements within the AI lifecycle, as discussed in detail below.

As use of AI continues to expand exponentially, risk and compliance functions will be challenged to rethink resourcing, traditional oversight and monitoring techniques, and how to leverage existing frameworks to ease implementation and fully manage risks.
"AI technologies have the potential to advance established industries by improving the efficiency and accuracy of company operations and customer experiences. Additionally, AI is opening the door to entirely new operating models, ushering in a new set of competitive dynamics that rewards organisations focused on the scale and sophistication of data much more than the scale or complexity of capital."

1 "The New Physics of Financial Services: How Artificial Intelligence Is Transforming the Financial Ecosystem," World Economic Forum, Aug. 15, 2018: www.weforum.org/reports/the-new-physics-of-financial-services-how-artificial-intelligence-is-transforming-the-financial-ecosystem.
AI in the Marketplace
The financial services industry continues to invest heavily in artificial intelligence systems, leading other industries such as manufacturing, healthcare and professional services. Last year, research firm IDC said it expected the banking industry to spend more than $5 billion on artificial intelligence systems in 2019. Overall, IDC projects spending on AI systems will reach $97.9 billion in 2023, more than two and a half times the $37.5 billion spent in 2019.

Financial institutions are incorporating AI into asset management, fraud detection, credit risk management and regulatory compliance, to name a few use cases. Specifically, these organisations are turning to machine learning models as an alternative to traditional models to gain faster, more accurate and more insightful predictions and classifications in their risk management and financial management business decisions. Several types of AI components and the effect they have on organisations are provided below.
Machine Learning Models: Organisations can use AI as a modelling technique through machine learning to improve decision-making in these select areas:
• Underwriting/credit decisioning
• Personalised marketing
• Asset management
• Compliance monitoring
• Credit risk management
• Customer segmentation
• Fraud detection
• Loss forecasting
Virtual Agents: Virtual financial assistants or chatbots can guide consumers through day-to-day financial tasks, providing personalised and proactive assistance to help them stay on top of their personal finances.
Common Risks of AI
• Strategic Risk
• Operational Risk
• Technology Risk
• Financial Risk
• Regulatory and Compliance Risk
Although AI is innovative and technically complex, it has foundational components of a core model that quantifies theories, techniques and assumptions from processed input data. However, the differences with AI are the exponential increase of model complexity due to intricate algorithms, vast unstructured data sets and the potential for immense decision trees. AI — specifically, machine learning — removes the element of human subject-matter expertise from the decision process, which can result in unwanted risk exposure.

As the use of machine learning models continues to expand across the financial services industry, regulators are increasing their attention on model risk. The following three root causes can result in model risk:

• A model has fundamental errors that cause it to produce inaccurate or biased outputs when viewed against the design objective and intended business use.

• A model is implemented or used inappropriately, or its limitations or assumptions are not fully understood.

• A model is misused because of a misunderstanding of its purpose and limitations.

To avoid these challenges, organisations should consider these fundamental questions:

• Do you know how the machine learning model was built?

• Do you know its purpose?

• Do you know how to use the results and how success is defined?

The Federal Reserve Board (FRB) has reinforced that SR 11-7/OCC 2011-12 (Guidance on Model Risk Management) remains the applicable regulatory guidance on the use of AI, and has given no indication that new standards or requirements will come into place. Although SR 11-7/OCC 2011-12 provides a foundation for establishing risk management frameworks for mitigating risks posed by AI systems, guidance and expectations have not been expanded and formalised to address the dynamic changes, unintended results and bias risks posed by AI.
[Figure: The AI lifecycle — design, implementation and testing, effective challenge, and ongoing risk mitigation]
Insight into the lifecycle will help organisations navigate various considerations, including risk and compliance, governance and reporting, data management, technology, and workforce and training implications. Additionally, organisations should foster an environment of effective challenge, in which decision-making processes promote a range of views, independent testing and validation of current practices and AI solutions prior to implementation and production, and open and constructive engagement. Organisations can take the following actions now to enhance risk mitigation during the AI lifecycle:
AI Governance Build-Out
• Adapt and extend existing model governance to fit AI tools, specifically the use and maintenance of models, validation of models, and the adequate disclosure of model assumptions and limitations.

• Review and update the model risk policy regulating the definition of model risk, scope of MRM, roles and responsibilities, model approval and change process, and management of model weaknesses, to encompass the new risks that AI presents.

• Develop an AI policy consisting of requirements around use, development and ongoing monitoring, which include roles and responsibilities for business leaders, independent risk and compliance managers, and technology and operations functions.

• Configure a risk-based methodology consisting of severity tiers, which will incorporate the necessary requirements to implement AI successfully.

• Formalise a well-defined project oversight and change management framework around AI systems.

• Improve data quality programs to profile input data and strengthen data governance (i.e., embed data requirements and a rigorous data monitoring process).

• Build a data warehouse for all performance monitoring and testing data. This will allow an AI tool to easily input and manage the data repository once the structure is built.
1 Design

• Define the purpose and scope of the AI solution clearly, including its methodology, decision criteria and data requirements.

• Hold meetings with key stakeholders to understand the AI tool requirements, desired output and use cases.

• Obtain appropriate approvals and signoffs for development and use of the AI tool.

• Before developing an AI tool, map its process workflow, including data inputs, variables and monitoring triggers, to gain a full understanding of the foundation of the tool.

• Complete documentation of the AI tool's underlying model's purpose, design, assumptions, parameterisation, testing, limitations and user instruction.

• Identify scale and potential inherent risks that may be triggered with the use of an AI solution.

• Examine the amount of change that a business will be required to undergo as it relates to building and running the AI tool in production.

• Embed, understand and analyse rules and regulatory requirements in the algorithm design and monitoring.

• Define hyperparameters, including a standard set of analysis to be run on input data and output results.

• Perform quality control during pre-implementation rollout.

• Build mechanisms within the AI tool to ensure accountability and adequate access to redress. Algorithms, data and design processes should all be auditable.

• Configure consistent and recurring testing in a live environment.

• Conduct preliminary analytics on the outputs generated by the tool to understand its limitations and determine optimal parameters when building out the tool.

• Validate the parameters chosen through human subject-matter experts (SMEs) and industry benchmarks.
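The design-phase documentation bullets above can be captured in a lightweight structure so that nothing is signed off with empty sections. The sketch below is an illustrative skeleton, not a regulatory template; every field name and sample value is a hypothetical stand-in.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class AIToolRecord:
    """Design-phase documentation skeleton mirroring the bullets above;
    all field names are illustrative, not a mandated template."""
    purpose: str
    methodology: str
    data_requirements: list
    hyperparameters: dict      # settings fixed before training, per the text
    assumptions: list
    limitations: list
    approvals: list = field(default_factory=list)

    def missing_fields(self):
        """Flag empty sections before sign-off is requested."""
        return [k for k, v in asdict(self).items() if not v]

record = AIToolRecord(
    purpose="Screen card applications for manual review",       # hypothetical
    methodology="gradient-boosted trees",                       # hypothetical
    data_requirements=["12 months of application data"],
    hyperparameters={"n_trees": 200, "max_depth": 4, "learning_rate": 0.05},
    assumptions=["training data reflects the current applicant population"],
    limitations=[],  # left empty to show the completeness check firing
)
```

A check like `record.missing_fields()` would surface that the limitations and approvals sections are still blank, supporting the "obtain appropriate approvals and signoffs" action above.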
2 Implement
• Ensure the approved project plan serves as the baseline or source of record, and acts as a "contract" of the work to be performed to successfully implement the AI tool.

• Hold meetings with key stakeholders to introduce the AI and designate model owners and SMEs to monitor performance.

• Configure a cross-functional team consisting of data scientists, AI experts, model risk experts, data officers, regulatory experts, and any key stakeholders to help mitigate risks associated with the implementation of the AI tool.

• Establish and monitor controls and human override in the design of the algorithm to control inputs, processing and outcomes during implementation.

• Conduct proof-of-concept testing and/or controlled case studies before going into live production.

• Develop an implementation plan for moving the AI solution into production and assist with the implementation phase.
• Develop and formalise communication protocols to internal and external stakeholders (e.g., consumers, investors, regulators) on the use of the newly implemented AI tool.

• Perform rigorous and continuous testing of underlying/input data.

• Perform scheduled backups and parallel testing of underlying/input data.

• Conduct periodic testing of the controls in place to guardrail underlying/input data.

• Perform post-implementation AI validation testing and exceptions testing, and conduct a risk assessment.

• Review AI model findings and hold meetings with key stakeholders and SMEs to discuss key takeaways.

• Review performance threshold exception reports to identify areas of improvement for the model.

• Formalise review of key risks inherent in AI and its operational components (e.g., economic variables, qualitative factors).

• Perform a quality assurance review of surrounding business objectives, stated benefits and process flow.

• Review the choice of architecture, hyperparameters, optimisers, regularisation and activation functions.

• Conduct an independent assessment as it relates to operating within the parameters outlined in the approval documentation.

• Perform validation testing of the AI tool prior to implementation and make final updates to mitigate any material weaknesses of the tool.

• Provide insight regarding risk and compliance considerations that align to the use of AI.

• Conduct an independent audit to ensure the design and effectiveness of the controls relied upon to mitigate the model's risks.

• Perform an independent assessment of the process for establishing and monitoring limits on model use.

• Conduct a bias/variance analysis.

• Develop a challenger model using alternative algorithms to benchmark output performance.

• Perform a post-implementation analysis to determine if the change management process or methodologies need to be modified.

• If needed, redesign and recalibrate the AI model based on the findings, discussions, and risk and compliance considerations.

• Incorporate appropriate human intervention throughout each component of the AI lifecycle.

• Develop an AI feedback loop consisting of existing complaints and customer feedback to allow an organisation to understand and quickly resolve AI issues and/or defects.
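One of the actions above, developing a challenger model to benchmark output performance, can be sketched in a few lines: score both models on the same held-out data and escalate if the challenger materially outperforms the champion. The rules, the holdout data and the 0.1 escalation margin are purely hypothetical stand-ins for a production algorithm and its alternative.

```python
def accuracy(model, samples):
    """Share of samples where the model's decision matches the observed outcome."""
    return sum(model(x) == y for x, y in samples) / len(samples)

# Hypothetical champion (single-threshold rule) and challenger (two-variable
# rule) standing in for the production and alternative algorithms.
champion   = lambda x: 1 if x[0] > 0.5 else 0
challenger = lambda x: 1 if x[0] + x[1] > 0.8 else 0

# Illustrative holdout samples: ((feature_1, feature_2), observed outcome)
holdout = [((0.6, 0.1), 0), ((0.7, 0.4), 1), ((0.3, 0.2), 0),
           ((0.4, 0.6), 1), ((0.9, 0.2), 1), ((0.2, 0.1), 0)]

champ_acc = accuracy(champion, holdout)
chall_acc = accuracy(challenger, holdout)
# Escalate for review if the challenger materially outperforms the champion.
flag_for_review = chall_acc - champ_acc > 0.1
```

In practice the comparison would use the organisation's own performance metrics and thresholds; the benefit of the pattern is that a persistent champion-challenger gap becomes an auditable trigger rather than an ad hoc observation.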
Numerous organisations are intensely focused on gaining a competitive advantage through AI implementation. To succeed, organisations need to commit to monitoring and understanding the risks posed by AI.

As AI becomes more prevalent, it is crucial for organisations to move into an agile risk target state to manage AI risks. An organisation can align its MRM infrastructure with the enhanced procedures and controls, while incorporating new AI activity governance, agile implementation and effective challenge of AI tools. Establishing an AI risk framework will benefit an organisation's ability and speed to innovate. This framework can be applied to all three lines of defence and updated regularly to reflect evolving best practices and regulatory expectations. The updated framework can leverage existing governance and risk management activities while catering to AI.
With an agile AI risk framework, organisations should, at a minimum, implement the following activities and
concepts per the framework components:
1 Governance
• A formalised governance structure will establish accountability around the execution of the AI lifecycle. It will also assign the appropriate resources and processes required to assess the design and performance of the AI tool.

• Organisations will be required to ensure resources possess the appropriate skill sets needed to challenge, control and monitor the use of AI, and to verify that the AI has been efficiently integrated into the organisation's technological infrastructure without falling into algorithmic loops that overload the system. However, due to the complexity of AI, the respective skill set to govern AI effectively will need to be tailored to the sustainability and business use of each AI tool.

• With the enhancement of the governance structure, organisations will need to incorporate the following:

– A formalised, documented, clear and comprehensive definition of AI.
– Defined roles and responsibilities.
– A formalised and socialised project governance charter.

• Organisations will immediately need to revisit their tools inventory to ensure AI models are included. A robust model inventory provides management with a comprehensive overview of all models in use, including model owners, restrictions on use and the validation status. Lack of a robust method to update the model inventory on a regular basis can result in undocumented model changes, inefficient processes to risk rate models, and ineffective performance monitoring.

• The organisation's model risk assessment process, as required under regulatory guidance, will need to be formally adapted to incorporate AI. The risk assessment process will need to assess model impact risk, covering both the assumptions that are drawn from models and the impact of decisions based upon model output. Conducting a risk assessment allows an institution to understand the inherent risks of the business, products and services, as well as the effectiveness of the controls in place. A periodic risk assessment will support appropriate scheduling of monitoring to ensure resources are allocated and risk is mitigated.
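The model inventory point above can be operationalised with a simple staleness check that flags models that were never validated or whose validation has lapsed. The sketch below assumes a flat list of inventory records; all IDs, owners, dates and the 365-day review cycle are hypothetical.

```python
from datetime import date

# Illustrative inventory rows; fields follow the attributes named in the text
# (model owner, restrictions on use, validation status). Values are hypothetical.
inventory = [
    {"id": "ML-001", "owner": "Credit Risk", "restrictions": "retail only",
     "validation_status": "approved", "last_validated": date(2019, 3, 1)},
    {"id": "ML-002", "owner": "Fraud Ops", "restrictions": None,
     "validation_status": "pending", "last_validated": None},
]

def overdue(inventory, as_of, max_age_days=365):
    """Return IDs of models never validated or not validated within the cycle."""
    return [m["id"] for m in inventory
            if m["last_validated"] is None
            or (as_of - m["last_validated"]).days > max_age_days]

stale = overdue(inventory, as_of=date(2020, 6, 1))
```

Running the check on a schedule turns "lack of a robust method to update the inventory" from a latent gap into a recurring, reportable exception list.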
• Organisations will need an effective and transparent process to improve underlying or input data throughout the model's tenure. A formalised and documented model input change management process and communication plan is critical to the aggregation and quality of the underlying or input data used in the AI tool. The key stakeholders (model owner, model user, model approver and independent reviewer) will be required to maintain and/or understand the following components:

– Data quality and data set integration.
– Data architecture and data infrastructure.
– Understand > review > assess > remediate algorithms.
– Transparency of algorithms.
– Effective controls in place to guardrail underlying/input data.
• The successful development and implementation of AI solutions within an enterprise depends largely on the design and effectiveness of the control and testing process. An enhanced control framework and continuous testing can help reduce inherent risks to a residual risk level that aligns with the organisation's risk appetite and framework. Currently, organisations tend to test new initiatives within a sandbox environment; however, given the complexity and development of AI, they should consider configuring consistent and recurring testing outside a sandbox. Developing a control framework and testing process would allow organisations to identify gaps and potential options for improvement quickly. The control process should be determined and aligned by an established and enhanced risk assessment framework. The risk assessment process is critical, as it helps to determine the controls needed to mitigate the inherent risks.

• Organisations should consider the key risks generated from the use of AI. For example, data bias will require organisations to produce impartial decisions by examining the choice of data. As bias in AI can trigger costly errors, organisations will need to focus on the front end of the AI lifecycle, the development of the AI tool. One way to identify data bias is by benchmarking with other models or the opinion of SMEs. Appropriate data de-biasing techniques should be used to remove bias from development data. In addition to traditional methods such as downscaling and quantile mapping, randomisation and sample weighting should also be incorporated to correct data bias. The statistical soundness of selecting unbiased development and holdout data should be given extra emphasis for machine learning models.
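Of the de-biasing techniques named above, sample weighting is the simplest to illustrate: inverse-frequency weights make an under-represented group contribute equally in aggregate during training. The group labels below are illustrative; real work would weight on validated attributes and test the statistical soundness of the result, as the text advises.

```python
from collections import Counter

def balancing_weights(groups):
    """Inverse-frequency sample weights: each group contributes equally in
    aggregate, a simple form of the sample weighting mentioned in the text."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return [n / (k * counts[g]) for g in groups]

groups  = ["A", "A", "A", "B"]   # hypothetical: group B under-represented
weights = balancing_weights(groups)
```

With these weights, the three "A" samples and the single "B" sample each carry a total weight of 2.0, so a weighted training objective no longer favours the majority group simply because it has more rows.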
5 Ongoing Performance Monitoring
• Performance monitoring is essential to mitigating risks connected to AI tools. Effective monitoring will help an organisation draw clear conclusions to support business decisions. An effective performance monitoring function comes from a highly automated monitoring and testing program, using a common methodology and real-time reporting. Organisations can enhance the rigour of the performance monitoring function by using the techniques below:

– Real-time monitoring and bias output reporting.
– Results and output-based testing.

• Effective challenge requires the cooperation and alignment of all three lines of defence, as each plays a specific role. The first line of defence, specifically model developers and owners, works to understand and monitor the risks from the use of an AI tool. The second line, the model validators, independently establishes key protocols for risk and compliance decisions while working with model developers and owners. Lastly, the third line of defence, specifically audit, conducts its own tests to ensure that the residual model risk of the AI tool does not surpass the established risk appetite. The scope of activities by the third line of defence will stay similar in nature to the traditional MRM framework; however, the third line will be required to expand its skill set to understand how AI algorithms work and their intended use, as well as the risk they pose to technology infrastructure and operations. To have the most impact, an effective challenge requires human experts to interpret results and render analysis, as software has the potential to fail to understand the impacts of the results. Lastly, organisations should review and update policies, procedures and processes periodically to encompass the changes that AI brings, which, in turn, will help an organisation effectively evaluate an AI tool.
• As with any model, periodic independent validations will continue to be a focal point of AI monitoring. To assess the innovations of AI, model validators will need to understand the challenges, such as a model's fitness for use, and develop customised methods for validating AI tools. The validation will still be required to assess models broadly from four perspectives: conceptual soundness, process verification, ongoing monitoring and outcomes analysis.

• SR 11-7 and OCC 2011-12 require that model documentation be comprehensive and detailed enough so that a knowledgeable third party can recreate the model without having access to the model development code. The complexity of AI and the model development process are likely to make documentation of AI tools much more challenging than traditional model documentation. It is recommended that organisations standardise their model development and validation procedures for AI and provide a model documentation template that is consistent with regulatory expectations and their model risk management policies and standards.
7 Postmortem Review
• An organisation will need to plan strategically and execute effectively around the performance monitoring results, as postmortem reviews will be crucial to refining and improving the models. Organisations will need to thoroughly examine the analysis and explanation of the AI output, bias and interpretability analysis, and review performance threshold exceptions and the controls in place. Based on these examinations and reviews, organisations will need to constantly redesign and recalibrate the AI tool for continuous improvement.
Conclusion
With the continued investment in AI, the use of AI in business processes and practices is only growing larger in scope and deeper in granularity. To stay ahead and provide effective and efficient monitoring of risk, organisations will not only utilise AI as their most comprehensive and valued tool but will also need agile risk and compliance management. Competitive advantages will come not only from how organisations use AI but also from how they are able to avoid mistakes, ensure smooth customer experiences, prevent violations of law and explain to customers and regulators what AI is intended to do.

An AI tool will never be fully clear of risk, but an efficient and effective AI risk management framework will keep risk manageable and enable organisations to respond to fluctuations in the outputs and decisions generated by AI. The key for all organisations using AI today is to build and maintain AI in a responsible and transparent way, which, in turn, will help reduce operational cost and, more important, maintain the confidence of customers.
Protiviti is a global consulting firm that delivers deep expertise, objective insights, a tailored approach and unparalleled collaboration to help leaders
confidently face the future. Through its network of more than 85 offices in over 25 countries, Protiviti and its independent and locally owned Member Firms
provide clients with consulting solutions in finance, technology, operations, data, analytics, governance, risk and internal audit.
Named to the 2020 Fortune 100 Best Companies to Work For® list, Protiviti has served more than 60% of Fortune 1000 ® and 35% of Fortune Global 500 ®
companies. The firm also works with smaller, growing companies, including those looking to go public, as well as with government agencies. Protiviti is a
wholly owned subsidiary of Robert Half (NYSE: RHI). Founded in 1948, Robert Half is a member of the S&P 500 index.
Protiviti has a record of success helping clients develop strong risk management practices with the responsiveness required for an ever-changing
business environment. We work with over 75% of the world’s largest financial institutions, which benefit from our collaborative team approach to
resolving today’s risk management challenges. Our professional consultants have varied industry and regulatory backgrounds that enable our unified
financial services practice, with the seamless integration of risk and compliance, technology, data and analytics solutions, to develop customised agile risk
management approaches to meet tomorrow’s challenges today.
Business, risk, compliance and internal audit groups need to work within an integrated framework with clear accountabilities that will lead to an aligned
organisation for making sound decisions. We address risk and operational excellence as two sides of the same coin, leading to agility and optimal performance.
We understand how customer satisfaction, and in turn growth, have become elusive. While risk management is intended to drive growth, it too often becomes
an inhibitor. Our expertise positions you at the forefront of effective risk management with a unique approach to reap both immediate and long-term benefits.
© 2018 Protiviti Inc. An Equal Opportunity Employer M/F/Disability/Veterans. PRO-0918