ECA AQCG SOP 03: Analytical Procedure Lifecycle Management (APLM)
v0.6, February 2018, Final Draft
Technical Review:
Dr Phil Nethercote
On behalf of the ECA Analytical Quality Control Group
Approved by:
Dr Günter Brendelberger
On behalf of the ECA Analytical Quality Control Group
Table of Contents
Document Revision History
Expert Drafting Group
Regulatory References
Quality involvement/Responsibilities
Introduction
Rationale for this Guideline
  Limitations of the current ICH Q2(R1)
  Limitations of USP General Chapters <1224>, <1225> and <1226>
  Importance of adopting an APLM approach in the context of Data Governance
Principles of Analytical Procedure Lifecycle Management (APLM)
  Defining the requirement; The Analytical Target Profile
  Establishing Target Measurement Uncertainty
  Specifications, Acceptance Criteria and Decision Rules
Prerequisites for the APLM
  Analytical Instrument Qualification
  Computerised System Validation (CSV)
  Reference Standards
  Sampling and Sample Management
Training of analytical staff
Guidance recommendations for the 3 stages of the APLM
Stage 1: Procedure Design and Development
  Overview
  Application of AQbD Principles in Analytical Procedures Lifecycle
  Application of Quality Risk Management principles over the Analytical Procedures Lifecycle
  Process for Procedure Design and Development
  Establishing an Analytical Control Strategy
  Establishing a Replication Strategy
Procedure development and understanding
  Critical Analytical Performance Parameters
  Selectivity (Specificity)
Regulatory References
Quality involvement/Responsibilities
The extent of Quality oversight is highly dependent on individual company requirements. The organisation and nomenclature of Quality Control and Quality Assurance functions, and the assignment of responsibilities, are also highly company specific. This Guideline does not dictate or recommend specific steps that must be supervised by specific quality functions other than those required by regulation. Therefore, the term Quality Unit (QU), as used in the revised Chapter 1 of the EU GMP Guide, is used here.
Introduction
Currently, there are many activities aimed at setting requirements that better define the suitability of an analytical procedure with respect to precision and/or accuracy. This is important, for example, when assessing an atypical or out-of-specification (OOS) result, or when the procedure serves as a control parameter for the synthesis of a drug substance or the manufacture of a drug product. It especially includes the requirements for validation, which should be designed specifically for the purpose of the respective analytical procedure.
In addition, the development of an analytical procedure should be understood as a process divided into three stages: design and development, validation (qualification), and ongoing lifecycle management. This helps to define the variability of the analytical procedure and provides the ability to support a manufacturing process, especially with respect to critical performance parameters. Basic information for the integration of analytical procedures into a process is already embedded in the ICH guidelines Q8, Q9, Q10, Q11 and the draft guideline Q12.
Therefore, it is necessary to create guidance which implements current ICH requirements for analytical controls into a process covering the development, validation (qualification) and use of the procedure over its lifecycle, founded on an "Analytical Target Profile" (ATP) and including future requirements, especially with respect to validation.
This guideline is intended to apply to chemical methods of analysis, covering both univariate methods (for example HPLC and UV spectrophotometric assays) and multivariate methods (for example identification of materials using FTIR, FTNIR and Raman spectroscopies). The principles also apply to biopharmaceutical assay methods.
The ICH guideline Q2 (R1) describes a general validation protocol for the validation of chromatographic procedures
with respect to assay, determination of impurities and limit tests. Furthermore, the guideline defines a terminology
for all described validation parameters.
• Moreover, the examples given for the calculation of detection and quantification limits are based on the slope of a linear calibration model and thus cannot be used when the calibration model is non-linear (an illustrative calculation is sketched after this list).
• With respect to “linearity”, two fundamentally different aspects are confused.
1. The “ability to obtain test results which are directly proportional to the concentration (amount) of
analyte in the sample” refers to the reportable result of the analytical procedure, but is already
included in accuracy.
2. The relationship of “signals as a function of analyte concentration” refers to the analytical response
function, i.e. the calibration model, which is often the linear one described in the guideline, but may
assume any other mathematical model.
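To illustrate the first bullet above, the following sketch (an illustration only, not part of the guideline) calculates detection and quantitation limits from the slope of a linear calibration using the familiar ICH Q2(R1) relationships DL = 3.3 σ/S and QL = 10 σ/S. The concentration and response values are hypothetical.

```python
# Illustrative only: DL and QL from a linear calibration per ICH Q2(R1),
# using DL = 3.3*sigma/S and QL = 10*sigma/S, where S is the slope and
# sigma is the residual standard deviation of the regression.
# The calibration data below are hypothetical.
import numpy as np

conc = np.array([0.05, 0.10, 0.20, 0.40, 0.80])           # e.g. mg/mL
resp = np.array([510.0, 1040.0, 2010.0, 4100.0, 8050.0])  # peak areas

slope, intercept = np.polyfit(conc, resp, 1)
residuals = resp - (slope * conc + intercept)
sigma = np.sqrt(np.sum(residuals**2) / (len(conc) - 2))   # residual SD

dl = 3.3 * sigma / slope
ql = 10 * sigma / slope
print(f"DL = {dl:.3f} mg/mL, QL = {ql:.3f} mg/mL")
# These relationships presuppose a linear model; for a non-linear
# calibration model this approach is not applicable, as noted above.
```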
Furthermore, regulatory changes require a redefinition of the terms used for validation. The redefinition of "validation" should not only address the determination of each validation parameter, and its variability, at a single point in time, but should also be extended to a continuous process in order to control the analytical procedure under lifecycle conditions.
Information with respect to robustness and requirements for a stability-indicating procedure should be included.
These experiments are essential and are required for submissions. In the case of forced degradation studies there
should be clear differentiation between the need to characterise a drug substance or a drug product and the
assessment of the capability to detect relevant impurities during stability testing. Many authorities require mass
balance determinations as part of the submission.
General chapter <1225> on validation of analytical procedures was a precursor to ICH Q2(R1) and was written more
than 15 years ago with a focus on chromatographic methods. General chapters <1224> on transfer of analytical
procedures and <1226> on verification of compendial procedures are more recent and were written in response to
user needs. As these three general chapters were written separately there is no overall consistent lifecycle
approach. The intention by USP is to write a new general chapter <1220> which covers the analytical lifecycle and all
the topics in the previous three general chapters.
The current regulatory climate has highlighted deficiencies and concerns regarding data integrity and security in
laboratories in a global context. These observational findings cover both fraudulent activities and bad practices.
Therefore, the lifecycle management approach for analytical procedures is an essential part of assuring technical and
procedural controls for data integrity as illustrated in Figure 1.
Data integrity and security is concerned with more than just the generation and security of correct numbers. It is
concerned with the totality of arrangements under a QMS to ensure that data, irrespective of the format in which
they are generated, are recorded, processed, retained and used to ensure a complete, consistent and accurate
record throughout the data life cycle.
Data governance is the term used for the overall control strategy to ensure data integrity and security. Functional
aspects within the laboratory and the quality function must be under control otherwise data integrity could be
compromised despite the best efforts of all staff. The foundation of data governance is the engagement and
involvement of executive and senior management throughout any organisation. Therefore, there must be
management leadership, data governance and data integrity policies that cascade down to laboratory data integrity
procedures together with data integrity training at all levels.
[Figure 1: Data governance under the QMS (PQS) — policies, procedures and training; an ethical culture and corporate management practices; and technical and procedural controls for data integrity.]
The main elements of data integrity have been well defined in the US regulation 21 CFR 211 for many years. For example:
• § 211.68 requires that backup data are exact and complete, and secure from alteration, inadvertent
erasures, or loss
• § 212.110(b) requires that data be stored to prevent deterioration or loss
• §§ 211.100 and 211.160 require that certain activities be documented at the time of performance and that
laboratory controls be scientifically sound
• § 211.180 requires true copies or other accurate reproductions of the original records; and
• §§ 211.188, 211.194, and 212.60(g) require complete information, complete data derived from all tests,
complete record of all data, and complete records of all tests performed.
The integrity of the primary analytical record is dependent on many factors, but the analytical procedure and its ongoing control are critical.
[Figure 2: The primary analytical record in the context of Data Governance and Data Integrity 1 — corporate policies and procedures, the Site Master File, the formulation, method of manufacture and controls, and Good Manufacturing Practices and regulatory requirements, with trained implementation across all functions, feed the manufacturing batch record, the sampling plan and record, and the laboratory analysis and testing record comprising raw data, metadata and derived data and information.]
Position of this guideline with other ECA Guidelines
This guideline forms part of a set of ECA Guidelines that are available for download in the members' area of the ECA website. Previous relevant guidelines are:
OOS Guideline
OOE & OOT Guideline; v1.1 November 2016
Data Governance and Data Integrity Guideline; v2.0 October 2017
Process Validation Guideline
1
C. Burgess, R.D. McDowall, What’s In a Name?, LCGC Europe, 28(11) 621–626, 2015,
The overall lifecycle management three-stage approach to analytical procedures is illustrated in Figure 3.
[Figure 3: The three-stage lifecycle approach, centred on the Analytical Target Profile — Stage 1: Procedure Design (Development & Understanding); Stage 2: Procedure Performance Qualification; Stage 3: Continued Procedure Performance Verification (Monitoring) — with the stages linked by change control and by deviation management and change control.]
Any analytical procedure must be shown to be fit for its intended purpose before use. The current way of demonstrating this 'fitness for purpose' in analytical laboratories is a documented validation study as required by ICH Q2(R1) and USP <1225> and, if required, a verification under actual conditions of use (USP <1226>) or a transfer process (USP <1224>) to demonstrate that the procedure performs appropriately in the laboratory in which it will be routinely used. Hence these activities are seen as discrete activities, disconnected from development activities and rigidly controlled. These documents are guidance documents but have acquired the status or expectation of mandatory regulatory requirements.
Many of the elements necessary for the APLM are already in place in the current process but are not linked in a
lifecycle manner. Figure 4 shows the current process flows for both Pharmacopeial and User developed procedures.
[Figure 4: Current process flows for analytical procedure design-to-use — Pharmacopeial or user-developed procedures are validated (<1225>) or verified (<1226>), released for operational use as a controlled analytical procedure version, operated under change control, deviation management, investigations and CAPA, and transferred (<1224>) via a transfer protocol where required, e.g. to a new CDS system.]
The lifecycle concept described in ICH Q8, and now reinforced in ICH Q12, is applicable to analytical procedures if we
consider that an analytical procedure is a process and the output of this process is the reportable result, that is, the
value that will be compared to the specification or acceptance criterion 2.
‘The purpose of applying lifecycle principles to analytical procedures is to holistically align analytical procedure
variability with the requirements of the product to be tested and to improve the reliability of the procedure by
understanding, reducing, and controlling sources of variability.
Enhanced understanding of variables that affect the performance of an analytical procedure provides greater
assurance that the quality attributes of the tested product can be reliably assessed. The lifecycle management
process provides a framework for defining the criteria for and development of an analytical procedure that meets the
acceptance criteria.’ 3, 4
The overall lifecycle from specification of the purpose (the ATP) through design and development (Stage 1), qualification for routine use (confirmation of 'fitness for purpose', Stage 2) and on-going satisfactory performance (Stage 3) is illustrated in Figure 3.
2
USP40/NF35, General Notices 7.10. Interpretation of Requirements
3
Lifecycle Management of Analytical Procedures: Method Development, Procedure Performance Qualification, and Procedure
Performance Verification; Pharmacopeial Forum 39(5) 2013,
4
Proposed New USP General Chapter: The Analytical Procedure Lifecycle <1220>, Pharmacopeial Forum 43(1) 2017;
Regulatory requirements for on-going performance verification (Stage 3) are already defined under both US and EU GMPs.
21 CFR § 211.180(e) (and EU GMP Part I, Chapter 6.9) requires that an ongoing program to collect and analyze product and process data that relate to product quality be established. Annex 15 of EU GMP has similar requirements.
The data collected should include relevant process trends and quality of incoming materials or components, in-
process material, and finished products. The data should be statistically trended and reviewed by trained personnel.
The information collected should verify that the quality attributes are being appropriately controlled throughout the
process.
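As a simple illustration of statistical trending of reportable results (not a method prescribed by this guideline), the sketch below computes individuals control chart limits from the mean and the average moving range; the results and limits shown are hypothetical.

```python
# Illustrative only: Shewhart individuals (I) chart limits for trending
# reportable results, estimating sigma from the average moving range.
# The assay results below are hypothetical (% of label claim).
results = [99.8, 100.2, 99.5, 100.1, 99.9, 100.4, 99.7, 100.0, 99.6, 100.3]

mean = sum(results) / len(results)
moving_ranges = [abs(b - a) for a, b in zip(results, results[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)
sigma_est = mr_bar / 1.128          # d2 constant for subgroups of size 2

ucl = mean + 3 * sigma_est
lcl = mean - 3 * sigma_est
print(f"Mean = {mean:.2f}%, UCL = {ucl:.2f}%, LCL = {lcl:.2f}%")
out_of_control = [x for x in results if not (lcl <= x <= ucl)]
print("Points outside control limits:", out_of_control or "none")
```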
Under the EU GMPs, manufacturers should monitor product quality, as part of Product Quality Review, to ensure
that a state of control is maintained throughout the product lifecycle with the relevant process trends evaluated.
Statistical tools should be used, where appropriate, to support any conclusions with regard to the variability and
capability of a given process and ensure a state of control.
In addition, all investigations pertaining to the batch being certified (including out-of-specification and adverse trend investigations) must have been completed to a sufficient level to support certification.
In addition, USP General Chapter <1226>, Verification of Compendial Procedures, and the FDA Analytical Procedures and Methods Validation for Drugs and Biologics Guidance for Industry specify that the suitability of all testing procedures used shall be verified under actual conditions of use, quoting 21 CFR § 211.194(a)(2).
These requirements are part of ICH Q10’s (Pharmaceutical Quality System) continual improvement of process
performance and product quality section as shown diagrammatically in Figure 5 and specifically as part of the
process performance and product quality monitoring system.
In other words, a State of Control is established and confirmed during Stages 1 & 2 of the APLM, in which the set of
controls (The Analytical Control Strategy) consistently provides procedure performance assurance and is verified
during Stage 3.
The draft ICH Q12 uses the term Established Conditions (ECs) in place of the analytical control strategy and discusses
them in relation to analytical procedures in section 3.2.3.2. and in 8.1 regarding a structured approach to analytical
procedure changes over the lifecycle.
ECs related to analytical procedures should include elements which assure performance of the procedure.
Appropriate justification should be provided to support the identification of ECs for analytical procedures. The
extent of ECs could vary based on the method complexity, development and control approaches.
• Where the relationship between method parameters and method performance has not been fully studied
at the time of submission, ECs will incorporate the details of operational parameters including system
suitability.
• When there is an increased understanding of the relationship between method parameters and method
performance defined by a systematic development approach including robustness studies, ECs are
focused on method-specific performance criteria (e.g., specificity, accuracy, precision) rather than a
detailed description of the analytical procedure 5.
ICH Q12 is therefore preparing the way for performance-based procedures, as proposed by USP in 2009 6, 7, to be developed.
In addition, the PDA proposed a similar but less rigorous lifecycle model in 2012 for biotechnology products.
Their process flow is illustrated in Figure 6.
5
ICH Q12: Technical and Regulatory Considerations for Pharmaceutical Product Lifecycle Management
Step 1, Core guideline, June 2017, section 3.2.3.2
6
R.L. Williams, D.R. Abernethy, W.F. Koch, W.W. Hauck, and T.L. Cecil, Pharmacopeial Forum 35(3) 2009, Stimuli to
the Revision Process: Performance-based Monographs
7
W.W. Hauck, A.J. DeStefano, T.L. Cecil, D.R. Abernethy, W.F. Koch, R.L. Williams, Pharmacopeial Forum 35(3) 2009,
Stimuli to the Revision Process: Acceptable, Equivalent, or Better: Approaches for Alternatives to Official
Compendial Procedures
[Figure 6: PDA 2012 development and qualification lifecycle for biotechnology products (adapted and redrawn from Figure 1.2-1 8) — risk/target performance expectations, method selection (constrained by resources), method development, method qualification and a routine-use decision leading to method validation to ICH Q2(R1), with feedback loops to find an alternative, make adjustments or investigate root cause.]
8
PDA Technical Report 57-2 Analytical Method Development and Qualification for Biotechnology Products, 2012
The ultimate purpose of the analytical procedure is to obtain a reportable value that will tell us something about the
quality of the product and/or the process step where the sample is taken. The test result or reportable result will
then be used to make decisions on the batch or process step. Therefore, it is important to design an analytical
procedure that will give the ‘right’ answer to the question asked. Defining the question (i.e. the requirement) is the
essential first step in starting the process of designing the analytical procedure.
It is important to note that the requirement is independent of the analytical technique and is based on product and process understanding (knowledge gathering). The predefined objective and purpose of the analytical procedure, together with its predefined performance requirements, are captured in the Analytical Target Profile (ATP).
By way of an example, suppose that it is decided that it is necessary to travel to London from Copenhagen. This can
be done in several ways: fly, sail, go by car, train, bus, bicycle and even walk or do a combination of transport
options. So, the first part of the ATP is defining the purpose: It is necessary to travel to London.
The second part in defining the ATP is the "how to". I need to travel from Copenhagen to London because of a dinner at 19:00 today. This requirement limits my transportation options to flying only, because none of the other options will get me to London in time. However, if the requirement were stated as "I want to travel from Copenhagen to London in the most cost-effective manner", choosing to fly may not be the best option. We can continue to build the requirement by stating that the dinner is at a busy restaurant that will not hold the reservation if I am more than 10 minutes late. Then we might want to take an early flight so we are more likely to get to the restaurant in time. All these requirements and constraints are necessary to establish the ATP and to select and design an analytical procedure that is capable of providing the performance needed to meet the ATP criteria. This concept is also described in various Quality by Design papers 9, 10.
9
P. Borman, P. Nethercote, M. Chatfield, D. Thompson, K. Truman, The Application of Quality by Design to Analytical Methods, Pharmaceutical Technology, 31(10), 2007
10
G.L. Reid, J. Morgado, K. Barnett, B. Harrington, J. Wang, J. Harwood, D. Fortin, Analytical Quality by Design (AQbD) in Pharmaceutical Development, American Pharmaceutical Review, 2013
It is the measured value and the uncertainty associated with it which are used to form the reportable result.
Following the journey analogy, the TMU is the error associated with the transportation. It includes, for example, the scheduled flight duration and unforeseen delays due to traffic conditions.
The Target Measurement Uncertainty (TMU) is an essential element of the ATP, as it establishes the fitness-for-purpose requirement that the Analytical Control Strategy (ACS) must assure.
[Figure: the ATP links the target MU of the reportable value, the replication strategy for the reportable value, and the specifications.]
11
J. Weitzel, Private Communication
The TMU is calculated using the model of a normal distribution that has a central value. In practice, this value may
not be centred on the midpoint in a specification range. The commercial manufacturing range may also not be
centred and/or the analytical procedure may have a bias. Both of these instances will impact the setting of the TMU.
Technical details of setting the TMU and the associated Decision Rules are beyond the technical scope of this
guideline and reference is made to more detailed discussion papers on this topic 12,17.
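As a hedged illustration of how the replication strategy influences the uncertainty of the reportable value (the variance components and target below are hypothetical, not taken from this guideline), the standard deviation of a reportable value formed as the mean of several preparations and runs can be estimated from within-run and between-run components and compared with a target measurement uncertainty:

```python
# Illustrative only: effect of replication on the uncertainty of the
# reportable value. Assumed variance components (hypothetical values):
# repeatability (within-run) and intermediate precision (between-run).
import math

sd_repeatability = 0.8      # % of label claim, within-run
sd_between_run = 0.5        # % of label claim, between-run
target_mu = 1.0             # hypothetical target measurement uncertainty (%)

def sd_reportable(n_preparations, n_runs=1):
    """SD of a reportable value averaged over runs and preparations per run."""
    var = (sd_between_run**2 / n_runs
           + sd_repeatability**2 / (n_runs * n_preparations))
    return math.sqrt(var)

for n in (1, 2, 3, 6):
    print(f"{n} preparation(s), 1 run: SD of reportable value = "
          f"{sd_reportable(n):.2f}%  (target {target_mu:.2f}%)")
```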
A major reason for adopting the lifecycle approach to an analytical procedure is to ensure that the
reportable result is fit for use. The analytical target profile (ATP) concisely defines the requirements for a
reportable result to be fit for use. The Decision Rules (DRs) define the use of the reportable result, and can
provide the information, such as acceptable probabilities, needed to set the target measurement uncertainty
(TMU), which is defined in the International Vocabulary of Metrology 13 as a "measurement uncertainty (MU)
specified as an upper limit and decided on the basis of intended use of measurement results".
Specifications for drug substances and drug products are currently primarily set by regulation without
explicit regard for metrological uncertainty. Strictly speaking, pharmacopoeial limits are not specifications
but standards which every sample must meet within shelf life under appropriate storage conditions 14.
Therefore, pharmacopoeial limits should not be used as release criteria for a batch or lot, as they relate only to the sample tested in accordance with the monograph 15.
DRs give a procedure for the acceptance or rejection of a product based on the measurement result, its uncertainty, and the specification limit or limits 16. Additionally, DRs can take into account an acceptable level with regard to the probability of making a wrong decision. A wrong decision can mean accepting a batch whose true value is out of specification (a missed fault) or rejecting a batch whose true value is within specification (a false failure) 17.
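The guard-band idea behind such decision rules can be sketched as follows (an illustration under assumed numbers, not a rule taken from this guideline): acceptance limits are drawn inside the specification limits by a multiple of the standard measurement uncertainty, so that the probability of accepting a truly out-of-specification batch is controlled.

```python
# Illustrative only: a simple guarded-acceptance decision rule.
# Specification limits, measurement uncertainty and k-factor are assumed.
lsl, usl = 98.0, 102.0      # hypothetical specification limits (% label claim)
u = 0.6                     # standard measurement uncertainty of the reportable value
k = 1.65                    # one-sided ~95% coverage factor (assumed decision rule)

guard = k * u
accept_low, accept_high = lsl + guard, usl - guard

def decide(reportable_value):
    """Accept only if the result falls inside the guarded acceptance zone."""
    if accept_low <= reportable_value <= accept_high:
        return "accept"
    return "reject / investigate"

for x in (99.5, 98.5, 101.2):
    print(f"Result {x}: {decide(x)} "
          f"(acceptance zone {accept_low:.2f}-{accept_high:.2f})")
```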
12
C. Burgess The Basics of Measurement Uncertainty in Pharma Analysis. Pharmaceutical Technology, 37(9) 2013
13
ISO International vocabulary of metrology - Basic and general concepts and associated terms (VIM), ISO/IEC Guide
99:2007. International Organization for Standardization, Geneva; 2007.
14
L.D. Torbeck, In Defense of USP Singlet Testing, Pharmaceutical Technology, February, 38-40, 2005,
15
C. Burgess, Singlet Determination Revisited; Is there a difference between a Specification or a Standard?.
Pharmaceutical Technology, 41(10) 2014
16
C. Burgess Using the guard band to determine a risk-based specification. Pharmaceutical Technology, 38(10)
2014
17
C. Burgess, P. Curry, D.J. LeBlond, G.S. Gratzl, E. Kovacs, G.P. Martin, P.L. McGregor, P. Nethercote, H. Pappa,
J.Weitzel, Fitness for Use: Decision Rules and Target Measurement Uncertainty, Pharmacopeial Forum, 42(2) 2016
The analysis may be regarded as a process whereby the inputs of samples, standards, reagents etc. are converted, via an analytical procedure, into reportable results from transformed measurement data. Like any process, there are controllable and uncontrollable factors requiring risk mitigation. Such a process is illustrated in Figure 8.
[Figure 8: The analytical procedure as a process within a 'fit for purpose' facility — controllable factors under the Pharmaceutical Quality System (policies and procedures, validated systems, qualified instruments and apparatus, qualified and competent staff) convert the required inputs (samples, reference standards, reagents) into secured outputs (raw data, metadata, reportable results).]
As a precursor to the development and qualification of an analytical procedure, certain elements need to be in place. This section briefly describes them and references more detailed guidance. Given the importance of data governance and data integrity in a regulatory context, it is useful to note the four layers of the analytical data integrity model shown in Figure 9.
[Figure 9: The four layers of analytical data integrity — Foundation: the right culture and ethos for data governance and integrity (management leadership, DG & DI policies and procedures, staff DI training); Level 1: the right instrument or system for the right job (qualification and/or validation for intended purpose); Level 2: the right analytical procedure for the right job (validated/verified under actual conditions of use); Level 3: the right analysis for the right reportable result (data acquired and transformed that are complete, consistent and accurate).]
18
R.D. McDowall, Validation of Chromatography Data Systems, 2nd Edition, Figure 7.5, Royal Society of Chemistry 2017
Stages 1 and 2 in this APLM guideline are primarily concerned with Level 2, assuming the prerequisites of Level 1 and the Foundation are in place. Data Governance and Data Integrity are the subject of another ECA Guideline where these topics are discussed in more detail. 19 Stage 3, Performance Verification, maps more clearly to Level 3 in the Analytical Data Integrity model.
It is a basic tenet for all analytical scientists that, before the commencement of an analytical procedure, they must ensure the suitability and proper operation of any instrument or system that is part of the measurement process in order to demonstrate 'fitness for purpose'. This challenge is becoming increasingly difficult as spectroscopic applications increase 20, 21, 22 and specialist knowledge in the laboratory becomes rarer. This is particularly true for newer sensor technologies such as those used in PAT.
A large variety of analytical instruments, ranging from simple apparatus to complex computerized systems are used
in the pharmaceutical industry to acquire data to help ensure that products meet their specifications. The majority
of these instruments combine a metrological function with software control.
As a precursor to the development and qualification of an analytical procedure, any analytical instrument or system
needs to have its “fitness for purpose” established. Regulatory guidance in this matter can be found in USP General
Chapter <1058> 23 and additionally for metrological and calibration guidance in the EDQM’s technical guide for the
ELABORATION OF MONOGRAPHS 24.
There are many ways of demonstrating that an instrument is qualified and under control and these can include
qualification, calibration, validation, and maintenance. In order to assure “fitness for purpose” an integrated
approach based upon a risk assessment is recommended.
USP General Chapter <1058> provides general guidance in a scientific, risk-based approach for carrying out an
analytical instrument qualification (AIQ). Detailed instrument operating parameters to be qualified are found in the
respective general chapters for specific instrument types.
19
ECA Data Governance and Data Integrity Guideline; v2.0 October 2017
20
C. Burgess, J Hammond, Standards in spectroscopy and spectrometry for the 21st Century; Establishing trust in
measurement, Spectroscopy Europe,20(5), 24-26, 2008
21
C. Burgess, J Hammond, Exploding the Myths – an update on UV-Visible Spectrometry, Spectroscopy Europe,21(1),
22-24, 2009
22
C. Burgess, J Hammond, Exploring the ‘forgotten’ region – an update on NIR Spectrometry, Spectroscopy
Europe,21(1), 19-22, 2009
23
USP 40 2017 (1st supplement), General Chapter <1058>,ANALYTICAL INSTRUMENT QUALIFICATION
24
Technical guide for the ELABORATION OF MONOGRAPHS, 7th Edition, EDQM, 2015
It is left to each laboratory to justify and document its specific approaches. The instrument owners/users and their
management are responsible for assuring their instruments are suitably qualified and calibrated.
Analytical instrument qualification is not a single continuous process, but instead results from several discrete
activities over the life time of the instrument. For convenience, these activities can be grouped into four phases:
design qualification (DQ), installation qualification (IQ), operational qualification (OQ), and performance qualification
(PQ). Some AIQ activities cover more than one qualification phase, and analysts potentially could perform them
during more than one of the phases. However, in many instances there is need for specific order to the AIQ
activities; for example, installation qualification must occur first in order to initiate other qualification activities. All
AIQ activities should be defined and documented.
Routine analytical tests do not constitute OQ testing. OQ tests are specifically designed to verify the instrument's
operation according to specifications in the user's environment as documented in the DQ and repeating the testing
at regular intervals may not be required. However, when the instrument undergoes major repairs or modifications,
relevant OQ and/or PQ tests should be repeated to verify whether the instrument continues to operate
satisfactorily. If an instrument is moved to another location, an assessment should be made of what, if any, OQ test
should be repeated.
There is an increasing inability to separate the hardware and software parts of modern analytical instruments and systems. In many instances the software is needed to qualify the instrument, and the instrument operation is essential when validating the software. Therefore, to avoid overlap and potential duplication, software validation and instrument qualification can be integrated into a single activity.
Software used for analytical work can be classified into the following categories: firmware; instrument control, data acquisition, and processing software; and standalone software.
Where applicable, OQ testing should include critical elements of the configured application software to show that the whole system works as intended. Functions to test include data capture, data analysis and reporting of results under actual conditions of use, as well as security, access control and audit trail functions. The user should apply risk assessment methodologies and leverage the supplier's software testing to focus the OQ testing effort.
Detailed advice regarding CSV may be found in the GAMP Good Practice Guide 25, GAMP 5 26 and the PIC/S Guide 27.
25
GAMP® Good Practice Guide: Testing GxP Systems, 2nd Edition, ISPE, December 2012
26
GAMP® 5: A Risk-Based Approach to Compliant GxP Computerized Systems, ISPE, 2008
27
Pharmaceutical Inspection Convention / Pharmaceutical Inspection Co-operation Scheme, Computerised Systems
in GXP Environments (PI-011-3), 2007
In addition, there has been a proposal to harmonise the GAMP and USP <1058> approaches 28. In the proposal, the data quality triangle is expanded to include more detail on the responsibilities of the user with respect to the 4Qs model, as well as the responsibilities of the instrument manufacturer and supplier (see Figure 10).
Both the GAMP® 5 / Laboratory GPG and USP<1058> present a risk-based approach (using categorization) for
compliant laboratory computerized systems (one based on software and one on hardware) and are designed to
ensure that laboratory computerized systems are fit for purpose and operated in a controlled manner to produce
the expected results.
In addition, the proposal maps the GAMP®5 and USP <1058> in order to provide a harmonised approach (Figure 11).
Figure 10 The Proposed update to the USP <1058> Data Quality Triangle28
[Figure 11: The instrument-driven USP <1058> approach (Groups A, B and C) mapped to the GAMP®5 software categories to give a harmonised approach.]
28
L. Schuessler, M.E. Newton, P. Smith, C. Burgess and R.D. McDowall, Pharmaceutical Engineering, Jan/Feb, 46-56,
2014,
Reference Standards
Traceable reference standards are an essential prerequisite for assuring freedom from bias when developing and
using analytical procedures. Standards are required for identity and purity of chemical entities as well as for
calibration of instruments and systems. Data must be available to establish that the analytical procedures used in
testing meet proper standards of accuracy, sensitivity, specificity, and reproducibility and are suitable for their
intended purpose 29 and 30. Specific holistic testing standards should also be developed by the user to ensure
instrument, system and procedure suitability under routine conditions of use 31.
In the glossary of EU GMP 32, the role of reference standards in calibration is defined as;
‘The set of operations which establish, under specified conditions, the relationship between values indicated by a
measuring instrument or measuring system, or values represented by a material measure, and the corresponding
known values of a reference standard’
[Figure: Traceability of reference standards — primary chemical (substance) references from the EDQM (CRS Chemical, BRS Biological and HRS Herbal Reference Standards), the USP (Reference Standards, Authentic Substances, Authentic Visual References) and the WHO (antibiotics); EDQM and USP secondary standards; industrial secondary or working standards for impurities, assay, identity and related substances, specialist mixtures, spectroscopic standards, etc.; traceable artefacts (e.g. viscosity, mass, temperature, buffers, wavelength and absorbance transfer standards — primary, secondary, working) from accredited laboratories; and primary metrological references from national laboratories (NIST USA, NPL UK, NRCC CA, etc.).]
29
21 CFR 211.165(e) and 211.194(a)(2)
30
FDA Analytical Procedures and Methods Validation for Drugs and Biologics Guidance for Industry, July 2015
31
W B Furman, T P Layloff & R T Tetzlaff, J AOAC International, 77(5) 1314-1317 (1994)
32
https://ec.europa.eu/health/sites/health/files/files/eudralex/vol-4/pdfs-en/glos4en200408_en.pdf
Reference substances may also play an important role, e.g. for the determination of a correction factor.
In many cases separate analytical procedures might be necessary in order to characterise a reference standard sufficiently. These analytical procedures should also be scientifically sound, e.g. using a reduced validation programme shown to be 'fit for purpose', which could, for example, cover precision, linearity and the quantitation limit for the determination of an impurity.
Secondary or working standards should be qualified using appropriate statistical methods, for example those described in ISO Guide 80 33.
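As a simple hedged illustration (the replicate values below are hypothetical and the treatment is only a sketch, not the ISO Guide 80 procedure), a working standard can be assigned a value against the primary standard from replicate determinations, together with the standard uncertainty of that assigned value:

```python
# Illustrative only: assigning a value to a working standard from replicate
# determinations against a primary reference standard (hypothetical data).
import statistics as stats

# Purity results (%) for the working standard, each determined against
# the primary reference standard.
replicates = [99.61, 99.55, 99.70, 99.58, 99.66, 99.63]

assigned_value = stats.mean(replicates)
sd = stats.stdev(replicates)
u_assigned = sd / len(replicates) ** 0.5    # standard uncertainty of the mean

print(f"Assigned value: {assigned_value:.2f}%")
print(f"Standard uncertainty of assigned value: {u_assigned:.3f}%")
# In practice the uncertainty of the primary standard and any bias between
# procedures would also be combined into the uncertainty budget.
```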
The importance of correct sampling cannot be overemphasised 34. Based upon the testing of a sample, the purpose
of analysis is to be able to predict the properties of the batch or lot. As Deming 35 reminds us ‘The object of taking
data is to provide a basis for action’.
If the laboratory test sample is not representative of the batch or bulk material, it will not be possible to relate the analytical result, measured as an estimate of the analyte concentration, to the original material, no matter how good the analytical procedure is nor how carefully the analysis is performed. It is important to understand that sampling is always an error-generating process and that, although the reportable result may be dependent upon the analytical method, it will always be dependent upon the sampling process 36.
A guideline on sampling principles and practices was published by WHO in 2005 37 and, from the food industry, by AAFCO 38. However, the sections dealing with reduced sampling plans in the WHO guidance have to be read very carefully to avoid misinterpretation.
33
ISO Guide 80, Guidance for the in-house preparation of quality control materials (QCMs), 2014
34
L D Torbeck, Representative Sampling: Understanding the differences between convenience, target, and self-
selected samples, Pharmaceutical Technology, 36(4), 38-40, 2012
35
W Edwards Deming, Statistical Adjustment of Data, 1943
36
C Burgess, Valid Analytical Methods and Procedures, The Royal Society of Chemistry, 2000
37
World Health Organization, WHO guidelines for sampling of pharmaceutical products and related materials, WHO Technical Report Series, No. 929, Annex 4, 61-93, 2005
38
Sample and Sample Handling Working Group, Good Samples, FDA, AAFCO, AFDO, APHL and Industry, October 2015, http://www.aafco.org/Publications/GOODSamples
need. In addition, the skill base in the laboratory should cover the statistical aspects of data evaluation and
interpretation.
The practical effectiveness of training should be periodically assessed. Training programmes should be available and
approved by management as appropriate. Training records should be kept.
Training requirements cover technical and operational aspects as well as GMP needs under the QMS.
Once the ATP and the TMU have been established, an analytical technique can be selected that provides the performance needed to meet the ATP requirements. The technique selected must be available both in development and at the production sites, and the staff must have the necessary expertise to run the technique during routine operation. Typical analytical techniques required by the ICH specification guidelines (Q6A and Q6B) include those designed to evaluate physical and chemical/biological properties of the drug substance or drug product, including identity, assay (quantity and potency), purity and impurities, and safety (for sterile products). Some of the techniques may also be required to be stability indicating, in which case the ATP should also state that the analytical procedure must be able to detect physical or chemical changes due to storage conditions.
[Figure 13: Stage 1 design and development flow — the Analytical Target Profile and knowledge gathering drive the selection of the analytical technique; samples and standards feed the design; method variables (CPPs) affecting the critical quality attributes (CQAs) of the reportable result are identified and investigated through DoE and robustness studies to establish the Method Operable Design Region (MODR) and Normal Operating Range (NOR); the resulting Analytical Control Strategy defines the analytical procedure that delivers the reportable result.]
The analytical procedure is a process, and we need to start thinking in a process-wise way in order to design and control the analytical procedure. The principles of QbD developed for manufacturing processes can equally be applied to the design of analytical procedures to ensure that they are robust, will be successfully transferred from development to the manufacturing facility and deliver quality reportable results. 39,40
A comparison of the application of QbD to production processes and to analytical procedures is shown in Figure 14 and Table 1.
39
P. Nethercote, J.Ermer, Quality by Design for analytical methods: implications for method validation and transfer,
Pharmaceutical Technology, 36(10), 74–79, 2012
40
D.Thompson, K. Truman, The application of Quality by Design to analytical methods, Pharmaceutical Technology,
31(10), 142–152, 2007
Process Development Activity: The activities performed to develop a manufacturing process which will produce a product that meets the requirements of the QTPP.
Analytical Development Activity: The activities performed to develop a measurement procedure which will produce a reportable value that meets the requirements of the ATP.
Continued Process Verification: ICH Q8(R2) describes CPV as an approach to process validation that includes the continuous monitoring and evaluation of manufacturing process performance (Q8, Q9 & Q10 Questions and Answers, Appendix Q&As from training sessions, FDA, July 2012).
Procedure Performance Verification: Ongoing assurance is gained during routine commercial use that the analytical procedure remains in a state of control.
Table 1 Comparison of activities and nomenclature for QbD for Manufacturing Processes and Analytical Procedures
Application of Quality Risk Management principles over the Analytical Procedures Lifecycle
The principles of the Quality Risk Management (QRM) should be applied through all the stages of the analytical
lifecycle. It starts with the identification of the method variables and evaluation of the risks to be investigated in DoE
and robustness studies and continues through the design phase and transfer phase to establishment of the analytical
procedure in the final QC lab. Risk evaluations should be revisited whenever a problem occurs (non-conformity, OOS
or trends). In the context of analytical procedures, the “product” is the reportable result and the risks are evaluated
in relation to the impact on the quality of the reportable result.
The QRM for an analytical procedure is a systematic process for the assessment, evaluation, control and review of risks to the quality of the reportable result. Risk Management shall be carried out in order to ensure that the design of an analytical procedure is in accordance with the ATP and the TMU.
The risk management process 41,42 should include the activities shown in Figure 15.
The process of evaluating measurement uncertainty includes risk analysis. The Eurachem guide 43 gives good guidance on this topic, including identifying MU components. Significant risks need to be mitigated using the analytical control strategy.
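By way of a hedged illustration of an uncertainty budget of the kind described in the Eurachem guide (the components and values below are hypothetical), independent standard uncertainty components are combined in quadrature and then expanded with a coverage factor:

```python
# Illustrative only: combining independent measurement uncertainty
# components in quadrature (hypothetical uncertainty budget).
import math

# Relative standard uncertainties (as fractions) for an assay procedure.
components = {
    "reference standard purity": 0.0010,
    "sample weighing":           0.0008,
    "volumetric operations":     0.0012,
    "repeatability":             0.0030,
    "calibration (curve fit)":   0.0015,
}

u_combined = math.sqrt(sum(u**2 for u in components.values()))
k = 2                                   # coverage factor for ~95% confidence
u_expanded = k * u_combined

print(f"Combined relative standard uncertainty: {u_combined:.4f}")
print(f"Expanded uncertainty (k=2): {u_expanded:.4f} "
      f"({u_expanded * 100:.2f}% relative)")
```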
41
ICH HARMONISED TRIPARTITE GUIDELINE, Q9, QUALITY RISK MANAGEMENT,2005
42
ISO 14971:2012, Application of risk management to medical devices
43
Eurachem Guide for Quantifying Uncertainty in Analytical Measurement, 3rd Edition, 2012,
https://www.eurachem.org/index.php/publications/guides/quam
[Figure 15: The risk management process — Risk Assessment (risk analysis: identification of method variables and estimation of the risk(s) for each identified situation; risk evaluation), Risk Control (risk reduction, implementation of risk control measures (ACS), risk/benefit analysis, risk acceptance) and Risk Review.]
The first version of the Risk Management Plan (RMP) is established in the Design Input phase and shall be amended, as required, at each stage in the lifecycle, through the incorporation of additional material as appropriate for the next phase or phase review.
Risk Assessment
This is a stage of learning and understanding what are the hazards (method variables) that influence the quality of
the reportable result. During Risk Assessment, the following is performed:
1. Risk Analysis:
Systematic use of available information to identify any method variables and estimate the risks
associated with those method variables. Experience and prior knowledge together with tools like
Ishikawa diagrams (Fishbone or Cause and Effect diagrams) are useful in identifying analytical method
variables such as material variables related to reagents or reference standards, equipment variables,
environment variables (e.g. temperature), training, procedural and measurement variables (e.g.
instrument accuracy). Risk identification addresses the question: What might go wrong? Risk analysis
estimates the risk associated with the identified hazards and addresses the questions: What is the
likelihood (probability) it will go wrong? and Will I be able to detect (measure) if it goes wrong?
Figure 16 Example of risk identification and mapping taken from Borman et al9
2. Risk Evaluation:
Process of comparing the estimated risk against the predetermined acceptability of the risk. Once the variables have been identified, they should be evaluated in relation to their impact on the quality of the reportable result, i.e. the cQAs. Critical quality attributes of the reportable result could be, for example, precision and accuracy. Risk evaluation addresses the question: What are the consequences (severity) of going wrong?
Risk evaluation and assessment can be performed using tools such as a priority matrix and Failure Mode and Effect Analysis (FMEA).
In the example shown in Figure 17, a 3-factor FMEA is used. The FMEA is based on the evaluation of the impact of each method variable on the method CQAs in terms of Severity (S), together with a Weighting Factor (W) for each CQA, the Probability (P) of occurrence and the Detectability (D) of the failure. Each of the factors can be scored as follows:
The weighting factor is defined in relation to each CQA (e.g. precision, accuracy and sensitivity) and how important the CQA is for the quality of the final reportable result. The weighting factor is linked to the type of analytical procedure as defined by ICH Q2 (Identification, Limit test for impurities, Quantitative test for impurities and Assay). For example, an analytical procedure used for Identification will have a weighting factor of 5 for the CQA Selectivity, whereas precision and accuracy are less important (weighting factor of 1). Thus, some method variables may impact the precision of the analytical procedure (severity 9) but not be relevant for the overall impact on the reportable result.
Weighting Factor – Description:
1 – Small impact on overall quality of the reportable result
3 – Intermediate impact on overall quality of the reportable result
5 – High impact on overall quality of the reportable result
Severity = effect of the method variable on the method CQAs (e.g. precision, accuracy and sensitivity)
Severity – Importance:
1 – The potential failure has a low impact on this method CQA factor (the CQA is affected only by a severe failure in the method variable)
3 – The potential failure has an intermediate impact on this method CQA factor (the CQA factor is affected significantly by a severe failure in the method variable)
9 – The potential failure has a high impact on this method CQA factor (the CQA factor is affected significantly by a minor failure; very sensitive to method variables)
The overall Severity (S) is calculated as the sum of (severity for each CQA × weighting factor) divided by the sum of the weighting factors:

\[ \mathrm{Severity} = \frac{\sum_{i=1}^{n_{\mathrm{CQA}}} S_i \, W_i}{\sum_{i=1}^{n_{\mathrm{CQA}}} W_i} \]
Probability = likelihood of occurrence of the failure. It is related to method knowledge and controls, and includes uncertainty for new processes or process changes.
The priority of the risk is defined by the combination of Severity, Probability and Detection. The overall Risk Priority Number (RPN) is calculated from S, P and D and normalised to 100.
When the RPN is higher than 27 there is a high average severity coupled with an occasional failure and medium detection. If a CQA is weighted as having a high impact on the overall quality of the reportable result, and the failure under analysis is estimated to have a high impact on that CQA factor, then the failure is considered so critical, irrespective of the other factors, that it should be escalated and proper measures identified to mitigate it.
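A minimal sketch of the weighted-severity and RPN calculation described above follows. The CQA weights, scores and the normalisation constant are illustrative assumptions (the guideline's Figure 17 is not reproduced here), and the probability and detectability are assumed to use the same 1/3/9 scale as severity.

```python
# Illustrative sketch (not from the guideline): weighted severity and RPN
# calculation as described in the FMEA example. Scales for probability (P)
# and detectability (D) are assumed to be 1/3/9 like severity.

def weighted_severity(severities, weights):
    """Overall severity = sum(S_i * W_i) / sum(W_i) over the method CQAs."""
    return sum(s * w for s, w in zip(severities, weights)) / sum(weights)

def risk_priority_number(severity, probability, detectability, max_score=9):
    """RPN from S, P and D, normalised so that the worst case scores 100."""
    return 100 * (severity * probability * detectability) / max_score**3

# Example: a method variable scored against three CQAs
# (precision, accuracy, sensitivity) for an impurity assay.
weights = [5, 5, 3]          # weighting factors per CQA
severities = [9, 3, 1]       # severity of this variable on each CQA
S = weighted_severity(severities, weights)
rpn = risk_priority_number(S, probability=3, detectability=3)
print(f"Weighted severity: {S:.1f}, RPN: {rpn:.1f}")
```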
Figure 17 Example of a risk assessment using FMEA for an ELISA analytical procedure to measure low level of
impurities (only the first 3 steps are illustrated)
Risk Assessment should be conducted throughout the entire life cycle of the analytical procedure.
Risk Control
After the Risk Assessment has been conducted, all identified risks should be reduced. Risk Control measures should
be implemented to reduce each identified risk As Far As Possible (AFAP) or to an acceptable level.
All Risk Control options should be applied (if possible and still contributing to the risk reduction). The amount of
effort used to mitigate the risk should be proportional to the importance of the risk and the benefit of the mitigation
procedure. Implementation of the risk control measure and its effectiveness should be determined, verified and
documented.
In the case that some of the risks cannot be mitigated or reduced to an acceptable level, the analytical procedure
should be redesigned and the risk management process repeated.
Risk Review
After the Risk Control measures are applied, a Risk/Benefit analysis shall always be made. All residual risks shall be
combined (accumulated) and the resulting risk shall be balanced against the benefit of analytical procedure. Risk
review should be an on-going part of the quality management process. The performance of the analytical procedure
should be reviewed on a regular basis and as part of the monitoring process in Stage 3.
The process for procedure design and development is summarised in Figure 13. However, it should be recognised that this is an iterative process which includes the ATP itself. It may be that knowledge and understanding gained over the development lifecycle require modification of the ATP prior to Stage 2.
[Figure: Stage 1 design, development and understanding flow — the initial Analytical Target Profile and the technique selection strategy (technology capability and preferences) lead to translation of the ATP into CPPs, analytical procedure development and understanding (robustness studies, estimated target measurement uncertainty, establishing the procedure control strategy) and finalisation of the ATP and the final analytical procedure, supported throughout by Quality Risk Management (QRM) tools (assessment, control, risk review) and Knowledge Management (gathering, collating, classification, disseminating, communication).]
As part of procedure development and understanding, and guided by the risk assessment, the effect of analytical variables will need to be investigated over the proposed operational range. A powerful methodology for investigating the effect of these variables is Design of Experiments (DoE), a systematic method to determine the relationship between the variables of an analytical procedure and their impact on the analytical result. In a DoE, method variables are varied simultaneously according to a plan such as a factorial design. This methodology allows a significant reduction in the number of experiments needed to establish the size of effects and the interactions between factors.
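As a hedged illustration of a factorial DoE (the factors, levels and responses below are hypothetical and not taken from this guideline), a two-level full factorial design for three assumed HPLC method variables can be generated and the main effect of each variable on a measured response estimated:

```python
# Minimal sketch (illustrative, not from the guideline): a two-level full
# factorial DoE for three hypothetical HPLC method variables, with main
# effects on a measured response (e.g. resolution) estimated from the results.
from itertools import product

factors = {"pH": (2.8, 3.2), "column_temp_C": (30, 40), "organic_pct": (28, 32)}

# Build the 2^3 design matrix: every combination of low/high levels.
design = [dict(zip(factors, levels)) for levels in product(*factors.values())]

def main_effect(name, responses):
    """Average response at the high level minus average at the low level."""
    low, high = factors[name]
    at_high = [r for run, r in zip(design, responses) if run[name] == high]
    at_low = [r for run, r in zip(design, responses) if run[name] == low]
    return sum(at_high) / len(at_high) - sum(at_low) / len(at_low)

# Hypothetical resolution results for the eight runs, in design order.
responses = [1.8, 1.9, 2.3, 2.4, 1.6, 1.7, 2.1, 2.2]
for name in factors:
    print(f"Main effect of {name}: {main_effect(name, responses):+.2f}")
```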
Once all method variables have been identified, investigated and assessed, those evaluated as critical need to be
controlled.
Variables that are known or found to impact the quality of the reportable result are to be minimised or limited to
ranges where the impact is reduced to an acceptable level. Hence, DoEs are useful to establish the ‘design space’ or
boundaries of the analytical procedure allowing a control strategy to limit impacts on the reportable result.
The analytical procedure is a process that can be defined in terms of inputs (materials and equipment) and outputs (the reportable test result).
Defining the inputs and outputs for each process step and evaluation of the impact of each input variable is
performed during the risk assessment. Once the critical input variables have been identified, these should be
mitigated via analytical procedure control.
An analytical procedure variable is critical when its variation has a significant impact on a critical quality attribute and presents a significant risk of the procedure failing the ATP.
Under this definition, the performance of an analytical procedure depends on its Critical Method Variables (CMVs) and Critical input Materials (CMs), as illustrated in Figure 20.
The Analytical Control Strategy (ACS) is a planned set of controls, derived from an understanding of the requirements for fitness for purpose of the reportable value, an understanding of the analytical procedure as a process, and the management of risk, that assures the performance of the procedure and the quality of the reportable value, in alignment with the ATP, on an ongoing basis.
The ACS will also include the selection of performance indicators which are intended to provide a means of verifying that the procedure is performing as expected ("system suitability" tests).
The Analytical Control Strategy is the mechanism for mitigating the risks identified. Typically this covers 4 main areas
as shown in Figure 21.
[Figure 21: The four main areas covered by the ACS, each subject to risk assessment — standards and reagents preparation, sample preparation, analytical measurement, and the replication strategy — leading to the reportable result.]
A unit operation is any part of a potentially multiple-step process which can be considered to have a single function with a clearly defined boundary. Since the unit operations are sequential, and some of the process steps are linked by input-output relationships, it is important to consider controls for unit operations that have an impact on downstream processing and/or end-product quality.
44
E.Kovacs et al, Analytical Control Strategy; Pharmacopeial Forum 42(5) 2016
The discussion on criticality begins in the development stage, and it is important that a new analytical procedure is appropriate for the intended use and fulfils the scientific requirements 45 whilst keeping the respective business requirements, e.g. handling and costs, in focus.
For example, HPLC-UV/DAD is a technique which often allows a selective separation of impurities with good sensitivity and the possibility to determine variability by performing precision and/or accuracy experiments. However, it is time-consuming with respect to the experimental work, from sample preparation, through the chromatographic procedure itself, to the evaluation of the analytical data.
45
J.Ermer, P Nethercote (Eds), Method Validation in Pharmaceutical Analysis, 2nd Edition, Wiley, 192ff, 2015 (ISBN978-3-527-
33563-3)
On the other hand, NIR can in some instances be applied directly for the determination of an assay without sample preparation, giving analytical results immediately after the measurement. However, NIR does not allow the characterisation of variability based on precision and/or accuracy experiments in the same way as HPLC.
Also in Stage 1, appropriate forced degradation studies should be performed especially for chromatographic
procedures used for the determination of organic impurities and assay. Stress testing is necessary to establish a
basis for sufficient selectivity, which is a prerequisite for developing a stability-indicating analytical procedure.
In addition, it should be taken into account that robustness testing is still part of stage 1 of the APLM process and
may reveal critical aspects with respect to variability. In the case of HPLC, important method parameters include e.g. mobile phase composition, pH of the mobile phase and column temperature. In the case of NIR used, for example, to determine the assay of a drug product, other parameters are likely to be critical, such as the sources of the excipients in the formulation.
Precision – Repeatability: - + - + +
Precision – Intermediate Precision: - + (1) - + (1)
Sensitivity / Specificity (2): + + + +
Detection Limit: - - (3) + -
Quantitation Limit: - + - -
Linearity: - + - +
Range: - + - +
Table 2 Analytical Performance parameters required by procedure type to be confirmed during PPQ
Key to Table 2
Selectivity (Specificity)
According to the current ICH Guideline Q2(R1), specificity is the ability to assess unequivocally the analyte in the
presence of components which may be expected to be present. Typically these might include impurities,
degradants, matrix etc.
However, there is much discussion with respect to the correctness of the term selectivity vs. specificity.
For chromatographic procedures it can be clearly stated that the selectivity can be described as the ability of the
analytical procedure to separate all relevant impurities from the main component as well as from all other relevant
analytes.
In the case of ICP-OES or ICP-MS, selectivity can also be demonstrated using a "false-positive test", in which the presence of elemental impurities is confirmed by spiking experiments.
The definition of "specificity" is more difficult, e.g. with respect to titration methods, for which it can only be demonstrated that the titration is not influenced by the solvents or chemicals used for the respective application; specificity can then only be shown by checking the influence of the matrix, drug substance and drug product.
Identification
- To ensure the identity of an analyte. Suitable tests should be able to discriminate between compounds of
closely related structures which are likely to be present.
Examples for the identification of a drug substance include Infrared Spectroscopy (IR), identifying the drug substance by comparison with the infrared spectrum of the certified reference standard or by identifying characteristic wave numbers in the infrared spectrum.
Other identification tests, e.g. polarimetry, melting point or UV spectra, are generally not considered critical with respect to qualification.
In the case of a drug product, identification may simply be carried out, e.g. using liquid chromatography, by comparing the retention time with that of the certified reference standard.
Impurity profiling
Impurity profiling includes the identification, structure elucidation and quantitative determination of related organic impurities as well as degradation products in drug substances and pharmaceutical formulations. Impurity profiling is very important not only to identify relevant impurities but also to detect potentially toxic impurities. Identification is generally performed by HPLC and should be supported by hyphenated and/or orthogonal techniques, e.g. HILIC, HPTLC or SFC.
Stress testing
According to the ICH guideline Q1A(R2), stress testing is defined as:
"Stress testing helps to determine the intrinsic stability of the molecule by establishing degradation pathways in order to identify the likely degradation products and validate the stability indicating power of the analytical procedure"
The FDA Guideline on Analytical Procedures and Methods Validation specifies:
“To demonstrate specificity of a stability-indicating test, … a combination of challenges should be performed. Some
challenges include the use of samples spiked with target analytes and all known interferences; samples that have
undergone various laboratory stress conditions; and actual product samples that are either aged or have been stored
under accelerated temperature and humidity conditions.”
Depending on the speed of degradation, it is recommended to perform the analytical determinations at a minimum of four time points (including the zero value) in order to obtain sufficient information.
Mass balance should be checked in order to compare the sum of degraded compounds with the main component.
The calculation of the mass balance using the sum of all impurities above a reporting threshold may be appropriate.
Photostability testing according to ICH Q1B 46 should also be performed for the drug product:
• Tests on the exposed drug product outside of the immediate pack
• If necessary: Tests on the drug product in the immediate pack
• If necessary: Tests on the drug product in the marketed pack
46
ICH Q1B, Stability Testing: Photostability Testing of New Drug Substances and Products
Peak purity
Peak purity in HPLC can be checked by using photodiode array detection. In this case peak purity means the spectral homogeneity of a chromatographic peak compared to a known component.
Alternative analytical procedures such as TLC or HPLC-MS can be used in order to check the selectivity of the main analyte and its impurities. Specialist techniques such as qNMR can also be used.
Range:
According to the current ICH guideline the following ranges are defined and recommended:
The range of an analytical procedure is the interval between the upper and lower concentration (amounts) of analyte in
the sample (including these concentrations) for which it has been demonstrated that the analytical procedure has a
suitable level of precision, accuracy and linearity.
In the ICH Guideline Q3A(R2), a reporting threshold (RTh) was introduced: ICH defined reporting thresholds above which analytical results have to be reported quantitatively. As a consequence, a reporting threshold of 0.05 % means that amounts of ≤ 0.05 % are not reported. Therefore, the quantitation limit should be no greater than the reporting threshold.
Furthermore, it should be pointed out that for elemental impurities as well as for residual solvents a different approach compared to the determination of organic impurities should be applied. The guidelines ICH Q3C for residual solvents and ICH Q3D for elemental impurities give limits based on toxicological concern.
In other words, precision, accuracy and linearity should be demonstrated within or at the extremes of the specified range of the analyte in the sample matrix.
Assay: 80 – 120% of test concentration
Content uniformity: 70 – 130% or as required
Dissolution: +/- 20% over the specified range
Impurities: Reporting threshold to 120% of the specification (minimum requirement)
Note: it should be pointed out that, especially for ICP-MS or ICP-OES, larger ranges for elemental impurities are described in the respective pharmacopoeias.
47
J Ermer ref
Another recommendation on acceptance criteria was described in an Australian guideline 49 for veterinary chemical products:
For a main component peak assay the following acceptance criteria are suggested for guidance only:
HPLC:
Accuracy: 98.0 – 102.0 %
Repeatability: ≤ 1.0 % (≤ 2.0 % may be accepted in the case of e.g. poor solubility of the drug substance)
Intermediate precision: ≤ 1.0 % (≤ 2.0 % may be accepted in the case of e.g. poor solubility of the drug substance)
48
J.M. Miller, J. B. Crowther (Eds.): Analytical Chemistry in a GMP Environment, A Practical Guide, John Wiley, N.Y.
2000, p.448-449:
49
Guidelines for the Validation of Analytical Methods for Active Constituent, Agricultural and Veterinary Chemical Products;
Australian Pesticides and Veterinary Medicines Authority, Oct. 2004
Examination of the analyte response function should be performed over the specified operational range in order to
check if the chosen calibration model is fit for purpose. If the signal is directly proportional to the respective concentration then a linear model is appropriate.
Most HPLC applications are performed using a UV-detector for which in most cases a linear relationship is justified
based upon the Beer-Lambert Law.
In such cases a single-point calibration seems to be acceptable, under the prerequisite that experimental data on accuracy confirm that there is no other impact on the measurement.
If a linearity study is performed, more data at the upper and lower specified limits should be generated in order to obtain more information with respect to variability. Linearity data can also support the derivation of correction or response factors, e.g. for impurities in HPLC, based on the y-intercept and the slope for the respective analyte. However, it is not advisable to evaluate an intercept over too large a calibration range.
In the case of very small concentrations of an analyte, standard addition techniques should be used, e.g. adding a reference standard to the sample.
Detection systems other than UV, such as conductivity detection, may require evaluation using a non-linear calibration, e.g. a quadratic function. In such cases, the calibration model and the number of calibration points (concentrations) necessary to establish the calibration function have to be evaluated.
Sensitivity Analysis
One good method for detecting deviations from linearity in single-point calibration is sensitivity analysis 50.
In theory, the peak area divided by the concentration over the operational range should be a constant. However in
practice there will be some variation due to experimental noise.
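As an illustration of this approach, the following sketch (with hypothetical peak areas and concentration levels, not taken from this guideline) computes the sensitivity ratio at each calibration level and compares the drift between the extremes of the range with the overall scatter.

```python
import numpy as np

# Hypothetical calibration data: concentration (% of target) and peak areas (6 replicates each)
conc = np.repeat([70, 85, 100, 115, 130], 6).astype(float)
rng = np.random.default_rng(1)
area = 30.5 * conc + rng.normal(0, 40, conc.size)    # roughly linear response with noise

sensitivity = area / conc                            # peak area / concentration at each point
print("Mean sensitivity per level:")
for level in np.unique(conc):
    print(f"  {level:5.0f}%: {sensitivity[conc == level].mean():8.2f}")

# A simple trend check: compare the mean ratio at the lowest and highest levels
# with the overall scatter; a drift well beyond the noise suggests non-linearity.
overall_sd = sensitivity.std(ddof=1)
drift = sensitivity[conc == 130].mean() - sensitivity[conc == 70].mean()
print(f"Overall SD of sensitivity: {overall_sd:.2f}, low-to-high drift: {drift:.2f}")
```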
Figure 22 Example of a main peak sensitivity analysis over an analytical range of 70 to 130% of target
Figure 22 illustrates a well-behaved linear calibration over the operational range, with 6 replicates at each concentration. The data are evenly scattered around the mean ratio without discernible trend.
By contrast, Figure 23 shows the dangers of extrapolating a linear model, with 6 data points per concentration level, over a wide operational range. Over the smaller 60 to 120% range there is no problem, but over the full range there is both a trend, indicating non-linearity, and the variances below the 0.1% concentration are much larger than those in the 60 to 120% range.
This trend will have an impact on the integrity of the reportable analytical result (≈3%) if main peak normalisation is
used to quantify impurities at low concentration levels. Whilst this may not be of practical analytical significance, it
is a data integrity issue. In addition, the inhomogeneity of variance across the operational range invalidates the use
of ordinary linear least squares analysis.
50
J. Ermer, P. Nethercote (Eds.), Method Validation in Pharmaceutical Analysis, 2nd Edition, Wiley-VCH, page 153, 2015
[Figure 23: plot of sensitivity (peak area/concentration) against concentration (% of target) from 0.05 to 120% of target, showing the mean sensitivity over 0.05 to 120% of target and over 60 to 120% of target]
Figure 23 Example of a main peak sensitivity analysis over an analytical range of 0.05 to 120% of target
The most common calibration model or function in use in analytical procedures assumes that the analytical response
is a linear function of the analyte concentration. Most chromatographic and spectrophotometric methods use this
approach because the detection principle is based upon the Beer-Lambert Law which is a linear function.
The usual calculation approach adopted is the method of ordinary least squares (OLS) whereby the sums of the
squares of the deviations from the predicted line are minimised. However, the OLS statistical approach assumes that
all the errors are contained in the response variable, Y, and the concentration variable, X, is error free. As all
measurements are subject to error, the relationship between Y, the response variable, and X the concentration is
given by:
Y = β0 + β1 X + ε
However, the true values of β0, the intercept, and β1, the slope, are not known but are estimated from the calibration data such that
Ŷ = b0 + b1 X
where Ŷ is the calculated value of the response variable, Y, based on the least squares estimates of the true values: b1 for the slope and b0 for the intercept.
Standard works, such as Draper and Smith 51 , should be consulted for more technical and statistical details of the
least squares approach for linear regression.
The calibration model adopted should be shown to be ‘fit for purpose’ by demonstrating that the model fits the data
well, to a predetermined acceptable extent.
A good fit to the linear model is shown by the residuals being randomly scattered around zero, according to the variability expected for the given experimental conditions, without discernible shape. The recommended minimum number
of data points is 5 standard concentrations spanning the analytical operational range. Ideally these should be
independently prepared and if practicable in duplicate to allow sufficient data points to clearly indicate the presence
or absence of a pattern.
Traditionally, the correlation coefficient, R, has been used to indicate the quality of fit. This statistic has
been misinterpreted to mean a measure of a ‘good’ straight line. It is not. It is a measure of the degree to which the
variability of the calibration data is accounted for by the linear model. In other words R measures the strength of
the linear correlation between the variables X & Y. A perfect straight line has a value of unity for R. The equation for
determining R is given by:
R = SS_XY / √(SS_X · SS_Y)
where SS stands for the sum of squares for the variables X, Y and XY.
Hence it is clear that to claim that for a given calibration a value for R of say 0.99 means it is a ‘better’ straight line
than one for a value of R of 0.98 is not correct. This is demonstrated graphically and statistically in a paper by
Anscombe 52.
The adequacy of the calibration model should also be demonstrated using residual sum of squares or mean square
error;
Mean Square Error: MSE = (SS_Y − b1·SS_XY) / (n − 2)
As the purpose of the linear model is one of inverse regression, i.e. predicting a value of the sample concentration Xs from a response Ys, the 95% prediction interval is ± t(0.05, n−2) · s_Xs, where
51
N.R. Draper and H. Smith, Applied Regression Analysis, 3rd Edition, John Wiley & Sons, New York, 1998.
52
F.J. Anscombe, Graphs in Statistical Analysis, The American Statistician, 27, 17-21 (1973)
s_Xs = (√MSE / b1) · √[ 1/m + 1/n + (Ys − Ȳ)² / (b1² · Σ(Xi − X̄)²) ]
The prediction interval shown here is only an approximation but is commonly used in practice, as the exact solution is much more complex 53. m is the number of replicate measurements of the sample and n is the number of calibration points. Hence the value of s_Xs determines if the target measurement uncertainty of the ATP is likely to be met. However, this only takes into account the variability of the calibration.
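A minimal sketch of these calculations is given below, using hypothetical calibration data and variable names; it fits the line by ordinary least squares, computes the MSE and correlation coefficient, and evaluates s_Xs for an inverse prediction from m replicate sample responses.

```python
import numpy as np
from scipy import stats

# Hypothetical calibration data (n standards) and m replicate sample responses
x = np.array([50.0, 75.0, 100.0, 125.0, 150.0])           # concentration
y = np.array([1020.0, 1495.0, 2010.0, 2480.0, 3050.0])    # response
y_sample = np.array([1980.0, 2005.0, 1995.0])             # m replicates of the unknown

n, m = x.size, y_sample.size
b1, b0, r, _, _ = stats.linregress(x, y)                   # slope, intercept, correlation
mse = np.sum((y - (b0 + b1 * x)) ** 2) / (n - 2)           # mean square error

ys_mean = y_sample.mean()
xs_hat = (ys_mean - b0) / b1                               # inverse prediction of Xs
s_xs = (np.sqrt(mse) / b1) * np.sqrt(
    1 / m + 1 / n + (ys_mean - y.mean()) ** 2 / (b1**2 * np.sum((x - x.mean()) ** 2))
)
t = stats.t.ppf(0.975, n - 2)
print(f"slope = {b1:.3f}, intercept = {b0:.1f}, R = {r:.4f}, MSE = {mse:.1f}")
print(f"Xs = {xs_hat:.2f} ± {t * s_xs:.2f} (approximate 95% prediction interval)")
```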
Non-Linear
Some analytical techniques have univariate response functions which do not behave in a linear manner with
respect to analyte concentration over the required measurement range, for example ion chromatographic detection. Under these circumstances, it may be possible to transform the data mathematically into a linear
function and apply the linear least squares regression methodology described in the previous section.
However, mathematical linearisation is not always possible or desirable 54. Indeed it is not really appropriate to
do so now, as software packages handling non-linear calibration models are readily available and should be used directly where possible. Examples include Weibull models for dissolution or sigmoidal models for ELISA. The same principles apply for the demonstration of 'goodness of fit' for non-linear and linear models, namely residual
sums of squares and distribution of residuals. The correlation coefficient may also give an indication of ‘goodness
of fit’.
Statistical software packages are readily available to perform non-linear calibrations and statistical evaluation
without the need for programming.
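For illustration only (hypothetical data), the following sketch fits a quadratic calibration function directly and compares its residual sum of squares with that of a straight-line fit; the residuals of the linear fit would show the systematic curvature.

```python
import numpy as np

# Hypothetical detector response that curves slightly at higher concentrations
x = np.array([10, 25, 50, 75, 100, 125, 150], dtype=float)
y = np.array([210, 515, 990, 1430, 1835, 2200, 2520], dtype=float)

lin = np.polyfit(x, y, 1)      # linear model:    y = a1*x + a0
quad = np.polyfit(x, y, 2)     # quadratic model: y = a2*x^2 + a1*x + a0

rss_lin = np.sum((y - np.polyval(lin, x)) ** 2)
rss_quad = np.sum((y - np.polyval(quad, x)) ** 2)
print(f"Residual SS linear: {rss_lin:.1f}, quadratic: {rss_quad:.1f}")
# The distribution of the residuals (random scatter vs systematic curvature)
# should be inspected in the same way as for the linear model.
print("Linear-fit residuals:", np.round(y - np.polyval(lin, x), 1))
```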
Multivariate spectroscopic techniques such as NIR and Raman usually require more sophisticated calibration approaches, such as PLS. For more details, the specialist literature should be consulted 55, 56.
53
Reference 29 page 84
54
Dillard http://pubmedcentralcanada.ca/pmcc/articles/PMC2751416/pdf/12248_2008_Article_92260.pdf
55
H Martens, T Naes, Multivariate Calibration, Wiley 1989
56
H Mark, J Workman, Statistics in Spectroscopy, 2nd Edition, Academic Press 2003
According to ICH Q2 (R1) the accuracy of an analytical procedure expresses the closeness of agreement between the
value which is accepted either as a conventional true value or an accepted reference value and the value found.
Assay
Drug Substances (APIs)
- Accuracy can be performed using a minimum of 9 determinations (independent weighings) over a minimum
of 3 concentration levels within the specified range (3 determinations per concentration level) or 6
determinations (independent weighings) at the target concentration.
- Standard deviation, coefficient of variation and the confidence interval of the mean (p = 95 %) should be reported. The confidence interval of the mean should include 100 % of the expected value. It may be possible that the coefficient of variation is very small and the 100 % value is not included; in such cases a plausibility assessment should be applied.
- Comparison of analytical result with a second, independent and well characterized (validated) analytical
procedure
- Drug substance impurities: in this case spiking experiments at the respective levels should be applied.
- For extracts with a defined content (e.g. a dried extract), markers or lead substances are added to the extract, which can be a pure extract or a finished product, to concentrations of e.g. 80, 100 and 120 % for an assay. For these determinations the full analytical sample preparation should be applied.
- Spiking of an extract or specific reference substances into a test sample. One possibility would be to add the reference substance to a matrix in which a defined amount of the analyte is already present; the respective substance can be present at e.g. 50 %, with reference substance then added to reach total amounts of e.g. 80 / 100 / 120 %.
Drug Products
In the case of a drug product, the most common procedure is to spike the active drug substance into a matrix (placebo) at levels of 80, 100 and 120 % of the declared concentration. Three determinations per level should be prepared.
Impurities
In the case of impurities, the respective impurities should be spiked into a placebo sample also containing the drug substance (at a level of 100 %). Impurity levels between the reporting threshold and 120 % of the acceptance criterion should be spiked into the placebo solution containing the drug substance.
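A minimal sketch of the recovery calculation for such spiking experiments is given below (hypothetical recoveries at the 80, 100 and 120 % levels); it reports the per-level mean recovery, the overall mean, the coefficient of variation and the 95 % confidence interval of the mean.

```python
import numpy as np
from scipy import stats

# Hypothetical recoveries (%) from 3 determinations at each of 3 spiking levels
recovery = {
    80:  [99.2, 100.5, 98.8],
    100: [100.1, 99.6, 100.9],
    120: [101.2, 99.4, 100.3],
}
all_values = np.array([v for values in recovery.values() for v in values])

mean = all_values.mean()
cv = all_values.std(ddof=1) / mean * 100
ci = stats.t.interval(0.95, all_values.size - 1, loc=mean, scale=stats.sem(all_values))

for level, values in recovery.items():
    print(f"{level:3d}% level: mean recovery {np.mean(values):6.2f}%")
print(f"Overall mean {mean:.2f}%, CV {cv:.2f}%, 95% CI of the mean {ci[0]:.2f} to {ci[1]:.2f}%")
# The 95% confidence interval of the mean should include 100% of the expected value.
```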
Quantitation Limit QL
The quantitation limit of an individual analytical procedure is the lowest amount of analyte in a sample which can be
quantitatively determined with suitable precision and accuracy. The quantitation limit is a parameter for
quantitative assays with low levels of compounds in sample matrices and is used particularly for the determination
of impurities and/or degradation products.
• Signal-to-noise ratio: comparing measured signals from samples with known concentrations of the respective analyte with those of blank samples and establishing the minimum concentration at which the analyte can be reliably quantified. The recommended signal-to-noise ratio at the QL is 10:1.
• Standard deviation of the response: QL = 10σ/s, where σ is the standard deviation of the response and s is the slope of the calibration curve, determined from a calibration in the range of the QL. The residual standard deviation of a regression line or the standard deviation of the y-intercepts of regression lines may be used as the standard deviation (a sketch illustrating this calculation is given after this list).
• The estimate of σ may also be carried out as described in the following:
- based on the standard deviation of the blank: measurement of the analytical background response is performed by analysing an appropriate number of blank samples and calculating the standard deviation of these responses;
- based on the calibration curve, using samples containing the analyte in the range of the QL; the residual standard deviation of a regression line or the standard deviation of the y-intercepts of regression lines may be used as the standard deviation.
• Repeatability test (n of at least 6), which can also be derived from precision and/or accuracy studies. The USP describes an approach in the draft General Chapter <1210> 57.
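By way of illustration (hypothetical low-level calibration data), the sketch below estimates QL = 10σ/s using the residual standard deviation of a regression line in the region of the expected quantitation limit.

```python
import numpy as np
from scipy import stats

# Hypothetical low-level calibration near the expected quantitation limit (impurity, % w/w)
conc = np.array([0.01, 0.02, 0.03, 0.04, 0.05])
resp = np.array([105.0, 212.0, 301.0, 415.0, 508.0])

slope, intercept, *_ = stats.linregress(conc, resp)
residuals = resp - (intercept + slope * conc)
sigma = np.sqrt(np.sum(residuals**2) / (conc.size - 2))   # residual standard deviation

ql = 10 * sigma / slope
dl = 3.3 * sigma / slope
print(f"QL ≈ {ql:.4f} %  (DL ≈ {dl:.4f} %)")
# The experimental QL should lie at or below the reporting threshold (ideally around 50% of it).
```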
The experimental quantitation limit should be below the reporting threshold, also called the disregard limit in the Pharm. Eur.
Especially in the case of repeatability experiments, it is advisable in chromatography to repeat the experiments using different detectors.
Setting the QL at 50% of the reporting threshold for an impurity gives a better safety margin with respect to the real QL. Furthermore, it might also help to define a reference solution at the reporting threshold, which is required in the Pharm. Eur 58. For unknown impurities the reference solution should be at 50% of the reporting threshold.
Intermediate QL
In the case of impurities which are classified as toxic according to the ICH M7 Guideline, the so-called real quantitation limit below the reporting threshold should be provided.
For this determination all factors which may vary or impact the future applications should be included in order to define the expected routine distribution of experimental QLs.
For the estimation of the QL including a safety margin, the intermediate QL 59 may be derived from:
QL_intermediate = QL_average + 3.3 · s_QL
Detection Limit DL
The detection limit of an individual analytical procedure is the lowest amount of an analyte in a sample which can be
detected but not necessarily quantitated with any reasonable degree of precision.
57
USP General Chapter <1210>
58
Pharm. Eur. General Chapter 2.2.4.6
59
Reference 39 page 166
In general, the detection limit is not considered necessary for quantitative determinations, e.g. for organic impurities using HPLC, but may be requested by some health authorities in the case of a submission, change or variation.
PRECISION
“Precision” is one of the most important parameters to obtain information about the variability of the analytical
procedure.
According to the definition in the ICH Q2 (R1) Guideline the precision of an analytical procedure expresses the
closeness of agreement (degree of scatter) between a series of measurements obtained from multiple sampling of
the same homogeneous sample under the prescribed conditions. Precision, under ICH, is considered at three levels:
repeatability, intermediate precision and reproducibility.
However, it is also necessary to include the system or instrument precision into this precision hierarchy. The number
of measurement replicates plays an important role, especially with respect to the discussion of when "atypical results" might occur in quality control testing.
System precision
The smallest variance contribution comes from the metrological uncertainty itself. This is found by repeated
measurements of the same sample preparation by the same operator. It is not to be confused with repeatability.
Repeatability
Repeatability expresses the precision under the same operating conditions of a complete analytical procedure over a
short interval of time. Repeatability is also termed intra-assay precision.
60
Reference 39 Figure 5.6
A minimum of 9 determinations covering the specified range for the procedure (e.g. 3 concentrations, 3 replicates
each) should be performed or a minimum of 6 determinations at 100% of the test concentration which can also be
carried out including a number of replicates. For a main peak assay, where the analytical range is small, the 6 determinations at 100% are to be preferred, as the 9 determinations over the range offer little advantage in the uncertainty of the standard deviation.
Assay
• The repeatability can be performed using a representative batch of drug substance.
• For drug product, repeatability should be carried out using a homogeneous sample wherever possible e.g. a
powder prepared from a representative number of tablets.
Impurities
• Repeatability for drug substances can be performed using spiked solutions (e.g. of the specified impurities)
or using a suitable sample solution containing the respective impurities.
• With respect to the concentration one possibility is to perform the spiking experiments at the reporting
threshold in order to demonstrate the suitability of the analytical procedure. The acceptance criteria should
be set with respect to the lowest quantifiable amount of analyte.
• For drug product, spiked samples should be used or samples containing the respective impurities can be
used.
Please note that in the case of more complex tests, e.g. content uniformity or dissolution, more determinations might be necessary.
Intermediate Precision
Intermediate precision is an extension of repeatability and expresses within-laboratory variation: different days, different analysts, different equipment, different lots of materials and reagents, etc. In this experiment the impact of such random events on the precision is evaluated. The preparation of samples is identical to that described under repeatability. Typical factors include:
• Analyst
• Calibration
• Instrument or system (laboratory)
• Column (batch)
• Time (days)
• Reagents
A suitable approach for intermediate precision can be based on an experimental design as described in the next section. Designed experiments for intermediate precision should provide a sufficient minimum number of degrees of freedom and sufficient data to allow a statistical evaluation. The use of a statistician is recommended for all but the simplest experimental designs.
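For a simple balanced design (the same homogeneous sample analysed in several series, e.g. day/analyst combinations, with replicates in each), the repeatability and intermediate precision can be estimated from one-way ANOVA variance components, as in the hedged sketch below using hypothetical data.

```python
import numpy as np

# Hypothetical assay results (%): 4 series (e.g. day/analyst combinations), 3 replicates each
series = np.array([
    [99.8, 100.2, 99.9],
    [100.6, 100.9, 100.4],
    [99.5, 99.7, 99.9],
    [100.3, 100.1, 100.5],
])
k, n = series.shape

ms_within = series.var(axis=1, ddof=1).mean()            # repeatability (within-series) variance
ms_between = n * series.mean(axis=1).var(ddof=1)         # between-series mean square
var_between = max((ms_between - ms_within) / n, 0.0)     # between-series variance component

repeatability_rsd = np.sqrt(ms_within) / series.mean() * 100
intermediate_rsd = np.sqrt(ms_within + var_between) / series.mean() * 100
print(f"Repeatability RSD: {repeatability_rsd:.2f}%, intermediate precision RSD: {intermediate_rsd:.2f}%")
```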
Reproducibility
Reproducibility expresses the precision between laboratories (collaborative studies, usually applied to
standardisation of methodology; round-robin tests). The number of samples tested per site may be dependent on
the variability of the respective analytical procedure. Statistical approaches are required in order to evaluate if the
transfer was successful based upon predetermined acceptance criteria.
Reproducibility is not required for a first submission; however, a transfer of analytical procedures between two sites has to be carefully documented (incl. protocol and report) and can become relevant for inspections.
Trueness
According to ISO guide 3534, trueness can be explained as the closeness of agreement between an average value
obtained from a large series of measurements and an accepted reference value whereby trueness implies the lack of
bias.
According to the proposed chapter <1220> on the analytical procedure lifecycle 61, the following table provides an example of the impact on bias as well as on precision:
61
Analytical Lifecycle Procedure <1220>
Implementing this process into an ATP, the following aspects should be considered if relevant:
- Sample
- Matrix in which the analyte is detected
- Establishment of the TMU by checking the allowable error for the measurement with respect to accuracy
(bias) and precision
- Establishment of respective acceptance criteria and assurance that the measurement uncertainty and risk
criteria are met.
Different models can be used to perform a DoE, e.g. in order to optimise the chromatographic or sample preparation conditions.
Design of experiments (DoE) is an important tool to obtain the maximum amount of information about an analytical procedure from a minimum number of experiments.
It should be pointed out that there are some limitations: when performing DoE experiments, the ranges of the analytical parameters under test should lie within a linear range.
One of the easiest possibilities is to perform a three-factor, two-level full factorial design, which may be visualised as a cube whose vertices indicate the combinations of the tested parameters. As a consequence, a minimum of 8
analytical determinations should be performed in order to get a maximum amount of information with respect to
the impact of the tested parameters and their interactions. Details of suitable experimental designs can be found in
standard textbooks and software packages 62, 63.
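As a simple illustration (hypothetical factors and responses, not taken from this guideline), the sketch below generates the 8 runs of a three-factor, two-level full factorial design and estimates the main effect of each factor from the coded levels.

```python
import itertools
import numpy as np

# Three hypothetical HPLC factors at coded low (-1) and high (+1) levels:
# e.g. % organic modifier, mobile-phase pH, column temperature
runs = np.array(list(itertools.product([-1, 1], repeat=3)))   # 8 vertices of the cube
print("Design matrix (coded):")
print(runs)

# Hypothetical measured responses (e.g. resolution of a critical peak pair) for the 8 runs
response = np.array([1.8, 2.1, 1.7, 2.0, 2.6, 3.0, 2.5, 2.9])

# Main effect of each factor = mean response at +1 minus mean response at -1
for i, name in enumerate(["%Organic", "pH", "Temperature"]):
    effect = response[runs[:, i] == 1].mean() - response[runs[:, i] == -1].mean()
    print(f"Main effect of {name}: {effect:+.2f}")
```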
There are different software systems available in order to model the conditions being tested.
One of the most widely applied systems for chromatography is "Dry Lab®", a system which mainly works on the basis of the average k' value, which shows a linear relationship to the organic solvent concentration in e.g. reversed-phase liquid chromatography. Furthermore, the program can be applied to other chromatographic procedures.
One of the most common applications within robustness testing for chromatographic procedures is the variation of the mobile phase composition (e.g. acetonitrile/methanol ratio, pH of the mobile phase and column temperature) for the determination of assay or organic impurities.
Other optimisation software is available, e.g. Chromsword®, which is based on the respective pK values and is also a suitable tool for the determination of robustness, especially with respect to the selection of the analytical HPLC column.
62
For example, D.C. Montgomery, Design and Analysis of Experiments, 8th Edition, Wiley 2012 and the companion
volume, Minitab Manual Design and Analysis of Experiments
63
Design-Expert version 10, Stat-Ease
It should be pointed out that the results obtained by computer simulation models should always be verified
experimentally.
System suitability testing (SST) is an integral part of specific analytical procedures and is, in part, a confirmation of the analytical control strategy. SSTs can be thought of as "in-process checks" on the performance of the analytical procedure to ensure the ATP requirements are met.
• The tests are based on the concept that the equipment, electronics, analytical operations and samples to be
analysed constitute an integral system that can be evaluated as such.
• SST Parameters to be established for a particular procedure depend on the type of procedure being
validated. See pharmacopoeias for additional information.
However, it is also possible to introduce SST tests for other analytical procedures, e.g. in the case of titration, by using a titrimetric standard before starting the titration of the test sample.
Many analytical procedures produce data which are univariate from a statistical point of view. That is to say that a
test result is dependent on a single analytical metrological outcome. Examples include:
• Weights from an analytical balance
• A pH measurement
• A melting point
• A peak area
Hence a simple statistical test, such as a t test, may be employed for comparison purposes with a standard value for
example. In such instances univariate statistics are employed. Be aware, however, that statistical significance may
not be the same as identifying analytically significant differences.
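For example, a one-sample t test comparing replicate assay results with a nominal value could be sketched as follows (hypothetical numbers); a statistically significant difference must still be judged against the allowable error defined in the ATP.

```python
import numpy as np
from scipy import stats

# Hypothetical replicate assay results (%) compared with the nominal value of 100%
results = np.array([99.6, 100.1, 99.8, 100.3, 99.7, 100.0])
t_stat, p_value = stats.ttest_1samp(results, popmean=100.0)
print(f"mean = {results.mean():.2f}%, t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value indicates a statistically detectable bias from 100%,
# which must still be judged against the allowed error in the ATP.
```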
However, there are many analytical procedures which are inherently multivariate, i.e. there is no one-to-one correspondence between an analytical parameter and the analytical measurement.
Examples include:
• Spectral data, UV, NIR etc for identification in spectral libraries in qualitative analysis
• Spectral data, UV, NIR etc for development of calibration models in quantitative analysis
• Impurity profiles
• Design of Experiment data
Mathematical models are important tools for describing the analytical procedure and for the analytical
instrumentation used in measurement. For this reason, it is scientifically sound practice for chemometric methodologies to be employed during Stage 1 for compendial and pharmaceutical applications. Some examples are given in Table 3. Chemometrics is the discipline which applies statistical and mathematical methods to plan analytical experiments in an optimal design and to evaluate the resulting data so as to obtain all relevant information with respect to e.g. robustness, variability and, if applicable, selectivity. Some examples of multivariate calibration in analytical procedures are given in 64.
64
H Martens T Naes, Multivariate Calibration, John Wiley 1989
As described in the last section, it will not be sufficient to determine the quantitation limit once.
With respect to measurement uncertainty, the experimental errors such as random errors, systematic errors and independent (uncorrelated) errors should be taken into account.
Therefore, it is essential to repeat the QL determinations under different conditions, especially using different detectors. This might help to achieve a representative value, which is a prerequisite for a robust analytical procedure.
Health and Safety information should also be included if not documented separately.
As shown in Figure 3, the procedure performance qualification will replace the current ICH Q2 validation process described in the current ICH Guideline Q2(R1) within the APLM. It has to be demonstrated that the final analytical procedure is fit for the intended purpose, which is a current requirement and will remain a requirement in the future.
With respect to the definition of the finalised analytical target profile (ATP) which defines the objective of the test
and quality requirements for the reportable result, it has to be demonstrated that the analytical procedure, used
under operating conditions, will meet the respective requirements.
[Figure 25: Stage 1 (Design, Development & Understanding) transitions to Stage 2 (Procedure Performance Qualification) via the final ATP requirements, confirmation of proper performance of analytical instruments & systems, a performance protocol, implementation of the analytical control strategy and knowledge transfer]
Figure 25 Transition from Stage 1 to Stage 2 of APLM
The current ICH guideline Q2 describes attributes for validation which can be readily applied only to chromatographic procedures. Furthermore, the description of the validation parameters might not be sufficient with respect to the variability of the reportable result. Therefore, procedure performance qualification means the confirmation that the analytical procedure is capable of producing reproducible data which consistently meet the ATP requirements.
Procedure performance qualification will include well designed precision studies in order to define the variability of
an analytical procedure more precisely which is important, especially, for the assessment of OOS results.
The traditional concepts of analytical transfer as well as the implementation of compendial procedures will be
included in procedure performance qualification (PPQ).
The proposed qualification procedure will ensure that the attributes which are considered under traditional ICH Q2
validation are addressed. However, the new approach focusses on generating evidence in Stage 1 that the sources of
potential bias have been identified and removed as far as possible. These stage one activities include ensuring
selectivity (with forced degradation studies if the method is to be stability indicating) and a calibration to ensure that
the procedure has no bias over the intended range of use. The focus of the PPQ in stage 2 is to identify if any bias is
introduced by operating the procedure in a new laboratory and to gain an understanding of the overall precision of the procedure, ensuring that these values are no greater than those allowed by the final ATP.
The descriptions given in Stage 1, clearly indicate the requirements for the procedure performance qualification.
The purpose of performing PPQ is to confirm the ATP and demonstrate the adequacy of the analytical control
strategy in a routine operation.
Users of compendial analytical procedures are not required to validate these procedures in their laboratories as
understood by ICH Q2(R1) but documented evidence of suitability must be established under actual conditions of
use. In the United States, this requirement is mandated in Good Manufacturing Practice 21 CFR §211.194(a)(2) which
states that the “suitability of all testing methods used shall be verified under actual conditions of use.” The current
pharmacopoeial approach is described in USP General Chapter <1226>.
Especially for complex analytical procedures such as chromatography, verification should be performed using the validation parameters, e.g. precision and linearity, and in the case of the determination of impurities also the quantitation limit. Verification can also be performed within a transfer process, e.g. according to USP <1224> as described later in this section. For chromatographic procedures the pharmacopoeias give a certain adjustment range for the chromatographic parameters within which the respective chromatography can be performed. Therefore, it is recommended to extend the degree of validation depending on the extent of permissible adjustment of the described analytical procedure.
Chapter <1226> does not exactly describe which validation parameters should be assessed. The following table gives a recommendation:
Analytical procedure | Assay | Quantitative determination of impurities | Limit test | Other
HPLC / GC | Accuracy (for drug product), precision, specificity, linearity | Precision, QL | Specificity, DL | Specificity if appropriate
Spectroscopic methods | Accuracy (for drug product), precision, linearity if appropriate | Precision, QL | Specificity, DL | Specificity if appropriate
Titration | Accuracy (for drug product), precision | Precision | – | –
TLC | – | Specificity, QL | Specificity, DL | Specificity if appropriate
Electrophoresis | – | Specificity, QL | Specificity, DL | Specificity if appropriate
Table 4 Possible performance criteria for verification of a compendial analytical procedure
However, verification is not required for basic compendial test procedures that are routinely performed, such as loss
on drying, residue on ignition, various wet chemical procedures such as acid value, and simple instrumental
determinations such as pH measurements unless there is an indication that the compendial procedure is not
appropriate for the article under test. It is assumed that analysts should have the appropriate experience,
knowledge, and training to understand and be able to perform the written analytical procedure.
Verification/Transfer protocols are required during procedure transfer between laboratories to ensure ‘fitness for
purpose’. The general principles are outlined in USP General Chapter <1224>. The critical parameters to be
confirmed may be a subset of those considered in Stage 2 (PPQ) for in-house procedures.
The Verification/Transfer protocol is an agreed, documented procedure that qualifies a laboratory (the receiving unit)
to use an analytical procedure that originated in or was qualified in another laboratory (the transferring unit). This
protocol ensures that the receiving unit has the procedural knowledge and ability to perform the transferred
analytical procedure as intended.
A well-designed protocol should be discussed and approved by both parties and should indicate the intended execution strategy, including each party's roles and responsibilities.
After execution of the protocol, the receiving unit prepares a PPQ report which:
• Summarises the lifecycle requirements of the protocol
• Documents any deviations with a justification and impact assessment
• Documents and evaluates the results obtained in relation to the acceptance criteria
• If not all acceptance criteria are met, the procedure cannot be considered transferred until effective remedial steps are adopted in order to meet the acceptance criteria
Laboratory data quality management processes are a part of the overall Quality Management System as required by
Chapter 1 of EU GMP and the FDA cGMPs as specified in 21 CFR §210 & §211.
Analytical processes and procedures should be managed as part of a lifecycle concept. Laboratory data integrity and
security are critical requirements under the GMPs. Such a process is illustrated in Figure 26.
[Figure 26: Laboratory data lifecycle process. Inputs are converted into outputs (paper and electronic records) under controlled factors (policies & procedures, qualified instruments, validated systems, validated methods, qualified & trained staff) and uncontrolled factors (instrument failures, system failures, method drift, uncontrolled modifications, deviation from procedures, human error)]
All test measurement results have variation that comes from measurement (system) variation and process
performance variation.
There are two types of variation: Common Cause variation (inherent noise) and Special Cause variation owing to, for example, a process shift, drift or excessive noise.
A control chart is designed to detect the presence of special causes of variation. The normal distribution may be characterised by two particular parameters: a measure of location (the arithmetic mean or average) and a measure of dispersion (the standard deviation). If a process is unstable it means that one or both of these parameters are changing in an uncontrolled manner (Figure 27(a)).
The task is to bring these two parameters into a state of statistical control. This would entail ensuring that the mean and the standard deviation are not varying significantly. This ideal situation is illustrated in Figure 27(b). The process would then be said to be under statistical control, i.e. no special cause variation and stable common cause variation. In this state, the process is amenable to the tools of Statistical Process Control (SPC). However, a stable process may not be statistically capable of meeting the specification limits. Figure 27(c) illustrates this, showing that the red process, albeit stable, is incapable. The desired state is to arrive at the blue, capable state.
[Figure 27 panels: (a) an unstable process; (b) a stable process; (c) stable processes, capable and incapable, relative to the specification limits]
Figure 27 Analytical Processes; Stability and Capability (adapted from QMS – Process Validation Guidance GHTF/SG3/N99-10:2004 (Edition 2), Annex A, Statistical methods and tools for process validation, http://www.imdrf.org/documents/doc-ghtf-sg3.asp)
Process capability refers to the performance of the process when it is operating in control, that is to say within the measurement uncertainty specified in the ATP. Two capability indices are usually computed: Cp and Cpk. Cp measures the potential capability of the process if the process were centred (it does not take into account where the process mean is located relative to the specifications), while Cpk measures the actual capability of a process which is off-centre. If a process is centred, then Cp = Cpk.
Cp = (USL − LSL) / (6σ)
Cpk = min[ (USL − X̄) / (3s) , (X̄ − LSL) / (3s) ]
Typical values for Cp and Cpk are 0.5 to 1 for incapable processes, 1 to 2 for capable processes and >2 for highly capable processes.
A Cpk value alone is not an appropriate metric to demonstrate statistical performance or equivalence. Cpk analysis
requires a normal underlying distribution and a demonstrated state of statistical process control. When reporting a
Cpk value, a 95 or 99% confidence interval should always be reported because this takes into account the sample size
used in the calculation. 65, 66
This is extremely important because it is not always recognised that for tight confidence intervals around Cpk values
the number of data points needs to be large. Figure 28 shows that to have a confidence interval in Cpk of ±10% at
95% confidence requires in excess of 200 data points. One commonly used approximation formula for the
confidence interval is:
Cpk ± Z(α/2) · √( 1/(9n) + Cpk² / (2n − 2) )
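A hedged sketch of these calculations is given below (hypothetical process data); it computes Cp, Cpk and the approximate 95 % confidence interval for Cpk using the formula above.

```python
import numpy as np
from scipy import stats

# Hypothetical reportable results and two-sided specification limits
rng = np.random.default_rng(7)
data = rng.normal(100.2, 0.6, size=60)       # 60 in-control results
lsl, usl = 98.0, 102.0

mean, s = data.mean(), data.std(ddof=1)
cp = (usl - lsl) / (6 * s)
cpk = min((usl - mean) / (3 * s), (mean - lsl) / (3 * s))

n = data.size
z = stats.norm.ppf(0.975)                     # 95% two-sided
half_width = z * np.sqrt(1 / (9 * n) + cpk**2 / (2 * n - 2))
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}, approx. 95% CI ({cpk - half_width:.2f}, {cpk + half_width:.2f})")
```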
[Figure 28: Confidence intervals for a Cpk of 1.33 as a function of the number of data points n, with 95% and 99% confidence intervals shown; for 95% confidence the error is approximately ±40% at n = 15, ±20% at n = 55 and ±10% at n = 200]
65
ASTM E2281-15, Standard Practice for Process and Measurement Capability Indices
66
S. Kotz and N.L. Johnson, Process Capability Indices, Chapman & Hall/CRC, 1993
Hence the use of Cpk values for comparison of performance needs to be interpreted with care when n is small!
Other process metrics may be used, e.g. Cpmk, which estimates process capability around a target, T, and accounts for an off-centre process mean.
[Diagram: Procedure Performance Qualification → Procedure Performance Verification]
The purpose of Stage 3 is to generate data that the analytical process remains in a ‘state of control’ during the life
cycle by;
1. verifying that the performance of the procedure or of appropriate steps within it (for example, injection and
sample preparation variability) maintains an acceptable level over the procedure lifetime.
2. providing an early indication of potential procedure performance issues or adverse trends
3. identifying any changes required to the analytical procedure
4. providing confidence that the reportable result(s) generated are ‘fit for purpose’.
The control charting approach is based on the idea that no matter how well a process is designed there exists a
certain amount of natural variability in output measurements. When the variation in process quality is due to
random causes (process noise) alone, the process is said to be in statistical control. If the process variation includes
both random and special causes of variation, the process is said to be out of statistical control. The purpose of any
control chart is to detect the presence of special causes of variation or trends (OOT).
A trend is a sequence of time related events. Trend analysis refers to techniques for detecting an underlying pattern
of behaviour in a time or batch sequence which would otherwise be partly or nearly completely hidden by noise.
These techniques enable specific behaviours (OOT; Out of Trend) such as a shift, drift or excessive noise to be
detected.
Analytical process data, where the expectation is that there is no trend, is monitored to confirm that it is under
statistical control.
Conceptually, a control chart is simply a plot of a response variable against time or analytical run whereby the
degree of variation is predicted by the chosen distribution (mathematical model) around a mean or target value.
Hence for a continuous variable which is assumed to be normally distributed the trend plot is shown as the normal
distribution with respect to time. The decision rules regarding an out of trend result come from the likelihood of the
pattern of responses or the distance from the target or mean value. An idealised concept is shown in Figure 30.
[Figure 30: idealised control chart of a response variable against time, showing the mean, warning limits (UWL/LWL) at ±2σ (P = 95.45%) and action limits (UAL/LAL) at ±3σ (P = 99.73%); ±1σ corresponds to P = 68.27%]
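As an illustration (hypothetical monitoring data and limits), the sketch below applies Shewhart-type warning and action limits derived from an established mean and standard deviation and flags runs falling outside them.

```python
import numpy as np

# Hypothetical system-suitability metric (e.g. main peak tailing factor) per analytical run
rng = np.random.default_rng(3)
values = rng.normal(1.10, 0.03, size=30)
values[22] = 1.25                      # simulate a special-cause excursion

mean, sigma = 1.10, 0.03               # established from historical, in-control data
uwl, lwl = mean + 2 * sigma, mean - 2 * sigma    # warning limits (±2σ)
ual, lal = mean + 3 * sigma, mean - 3 * sigma    # action limits (±3σ)

for run, v in enumerate(values, start=1):
    if not lal <= v <= ual:
        print(f"Run {run}: {v:.3f} outside action limits ({lal:.3f} to {ual:.3f}) -> investigate")
    elif not lwl <= v <= uwl:
        print(f"Run {run}: {v:.3f} outside warning limits ({lwl:.3f} to {uwl:.3f})")
```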
A more detailed description of the tools and techniques applicable for Stage 3 together with commonly used
decision rules are given in the ECA AQCG LABORATORY DATA MANAGEMENT GUIDANCE Out of Expectation (OOE)
and Out of Trend (OOT) Results 67.
There are a large number of existing legacy procedures which have not been developed in accordance with the
principles and processes described in this guideline. These include the majority of Pharmacopeial methods. The
question arises as to what needs to be done with such methodologies. In practice, Stage 3 of the lifecycle is the
most important aspect for legacy procedures.
During the conduct of Product Quality Review (PQR), the performance evaluation of analytical methodologies is
required especially with respect to deviations and change control 68.
The identification and prioritisation of problematic procedures, as part of the PQR, and their remediation strategies will be based upon a risk assessment and a business case.
Figure 31 Remediation flow for problems identified in Stage 3
Issues may be resolvable by modifications to the analytical control strategy and minor technical changes triggering a
repeat of PPQ. However, the possibility exists that a redevelopment or redesign is required in some instances, necessitating a new Stage 1.
67
ECA _AQCWG_ SOP 02_OOE OOT_v1.1_November 2016_rev10 available from the member’s area on the ECA
website
68
“The Rules Governing Medicinal Products in the European Union”, Volume 4, Good Manufacturing Practice (GMP)
Guidelines 2017, Chapter 1 Quality Management System 1.10; Product Quality Review
Technical Glossary
The analytical terms are defined primarily according to guidelines (ICH, FDA) and the international standard (VIM), and the wording is taken from the references.
Acceptance criteria (validation): Numerical limits, ranges, or other suitable measures for acceptance of the final test result of a validation parameter. Acceptance criteria are defined based on the product critical quality attributes, customer requirements, specifications (regulatory) and product and process tolerances.
Acceptance criteria (specification): Numerical limits, ranges, or other suitable measures for acceptance of the final reportable results of analytical procedures. (FDA)
Accuracy: The accuracy of an analytical procedure expresses the closeness of agreement between the value which is accepted either as a conventional true value or an accepted reference value and the value found. This is sometimes termed trueness. It is typically expressed as percent recovery or percent inaccuracy compared to the theoretical value. The error associated with inaccuracy is referred to as bias or systematic error. (ICH)
Analyte: A substance or chemical constituent that is of interest in an analytical procedure. (Wiki)
Analytical Control Strategy (ACS): Elimination or limitation of variables that are known or found to have an impact on the quality of the reportable result and selection of performance indicators which are intended to provide a means of verifying that the procedure is performing as expected ("system suitability" tests). The combination of input controls and performance indicators as components of a control strategy will make sure the risk of harm to the quality of the reportable value will not exceed the TMU, thus ensuring the ATP will be met on an ongoing basis.
Analytical procedure (AP), also called Test Method Procedure: The analytical procedure refers to the way of performing the analysis. It must describe in detail the steps necessary to perform each analytical test. This may include but is not limited to: the sample, the reference standard and the reagents preparations, use of the apparatus (equipment), generation of the calibration curve, use of the formulae for the calculation, etc. Also referred to as Test Method Procedure. (ICH)
Analytical run: The act of performing analytical tests according to the analytical procedure one time, which is one execution of the analytical procedure.
Analytical Target Profile (ATP): The ATP is a predefined written record of the requirements for the quality of the reportable test result that the analytical procedure must generate, in order for the procedure to be considered fit for purpose throughout its lifecycle of use. Sometimes referred to as the URS or functional requirements for test methods. (USP)
Blank: Sample with no analyte present. Often a sample of dilution buffer.
Blank signal (background signal): Signal obtained from a phenomenon, body, or substance similar to the one under investigation, but for which a quantity of interest is supposed not to be present, or is not contributing to the signal. (VIM)
Intermediate precision condition: Condition of measurement, out of a set of conditions that includes the same measurement procedure, same location, and replicate measurements on the same or similar objects over an extended period of time, but may include other conditions involving changes. Note: for instance for inclusion of a procedure in the pharmacopoeias; such data are not part of marketing authorisation dossiers, for which only one analytical site is relevant. (VIM)
Robustness: The robustness of an analytical procedure is a measure of its capacity to remain unaffected by small, but deliberate variations in method parameters and provides an indication of its reliability during normal usage. (ICH)
Semi-quantitative test result: A test result reported as 'greater than', 'smaller than', or 'equal to' a reference or a control analysed simultaneously with the sample. For some biological tests the scale used is a combination of both qualitative and semi-quantitative measures, e.g. 'weak positive', 'dark blue'.
Sensitivity: May be expressed as Detection Limit and/or Quantitation Limit or by means of low control samples included during development and Method Design, as required by the type of Analytical Procedure.
Specificity: Specificity is the ability to assess unequivocally the analyte in the presence of components which may be expected to be present. Typically these might include impurities, degradants, matrix, etc. Lack of specificity of an individual analytical procedure may be compensated by other supporting analytical procedure(s). (ICH)
Spiking: The addition of a small known amount of a known compound to a standard, sample, or placebo, typically for the purpose of confirming the performance of an analytical procedure or the calibration of an instrument.
System Suitability Tests (SST): Tests based on the concept that the equipment, electronics, analytical operations and samples to be analysed constitute an integral system that can be evaluated as such. System suitability test parameters to be established for a particular procedure depend on the type of procedure being validated. (ICH)
Test: General term for analytical procedures that provide information on a specific characteristic of the sample (e.g. identification test, purity test). (ICH)