20 - Continual Improvement
20.B.3 Goals/Objectives/Benefits
20.B.4 Implementation
20.B.4.1 Pre-requisites for successful implementation
20.B.4.2 When to apply / when not to apply
20.B.4.3 How to implement
20.B.5 Tools
20.B.6 Variations
20.B.7 Examples
20.B.7.1 API Synthesis
20.B.7.2 Reducing cycle time
20.B.7.3 Improving the yield of a drug product
20.B.7.4 Improving the laboratory process
20.C.3 Goals/Objectives/Benefits
20.C.4 Implementation
20.C.4.1 Prerequisites
20.C.4.2 When to apply/when not to apply
20.C.4.3 How to implement
20.C.5 Tools
20.C.6 Variations
20.C.7 Examples
20.E References
20.A Preface
Up07 Thomas Peither
The pharmaceutical industry faces the challenge of providing safe medicinal products of high quality at an affordable price, in compliance with the applicable legal requirements and state-of-the-art procedures. The achievable prices for drug products are decreasing and, in order to maintain R&D investment, companies are motivated to reduce their costs without a negative impact on product quality.
There are methods available and already in place that can help to achieve this goal. Most of these methods have proved themselves in other industries over a long period of time, and in the pharmaceutical industry over recent years. They are neither legal requirements (like a change control procedure) nor necessary to be compliant with current GMP requirements. But they can help to reach the goal of compliance with less effort and cost, and in many cases they can improve quality for the patient and/or health-care practitioner.
Chapter 20 Continual Improvement delivers a basic understanding of the presented concepts, methods and tools, and provides references to sources of more detailed information for those wishing to implement them. Examples highlight perspectives for pharmaceutical manufacturers and suppliers.
This chapter gives a good overview of the methods and provides some helpful how-to descriptions, checklists and examples. But you will also read about pitfalls and difficulties in the implementation phase.
Most of the methods are used with economic objectives, to save effort in achieving targets. At the same time, all of them contribute to quality demands and were invented to reach quality goals. They do not have to be implemented, but they are often very helpful to reduce effort and cost, and to demonstrate the value of quality.
20.B Six Sigma
Up07 Rolf Staal
20.B.1 Definition
Figure 20.B-1 Six Sigma Definitions
Definition: Six Sigma is a set of practices originally developed by Motorola to systematically improve processes by eliminating defects (http://en.wikipedia.org/wiki/Six_Sigma). A defect is defined as nonconformity of a product or service to its specifications.
Other definitions: A holistic and flexible method that combines known tools with the aim to improve all types of processes. A way to increase, sustain and maximise the success of your company.
Mathematically, Six Sigma refers to a maximum of 3,4 defects per million opportunities, which is practically defect free. When six standard deviations are added to and subtracted from an average, the probability of a value falling outside these limits is as listed in Figure 20.B-2.
Within the scope of Six Sigma it is assumed that a process shifts by about plus or minus 1,5 standard deviations over the long term. This shift increases the defect rate to 3,4 parts per million opportunities.
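As a rough illustration of where these figures come from, the following short Python sketch (not part of the manual; the one-sided specification and the function name dpmo are assumptions) computes the defect rate of a normally distributed characteristic at a given sigma level, including the customary 1,5 sigma long-term shift.

from scipy.stats import norm

def dpmo(sigma_level, shift=1.5):
    # Defects per million opportunities for a one-sided specification,
    # assuming a normal distribution and a long-term shift of the mean.
    return norm.sf(sigma_level - shift) * 1_000_000

for level in (3, 4, 5, 6):
    print(f"{level} sigma -> {dpmo(level):,.1f} DPMO")
# 3 sigma -> ~66,807 DPMO; 6 sigma -> ~3.4 DPMO, i.e. practically defect free.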
Figure 20.B-2 DPMO and Sigma Levels
The output Y of a process can be written as a function of its input factors:
Y = f(x1, x2, …, xn) + E
Where:
f() represents the mathematical function which defines the relationship between the Xs and the Ys.
E is the unknown portion, since the equation might not be able to explain one hundred percent of the total variability.
The goal of any Six Sigma project (Figure 20.B-3) is to clearly define the Ys to improve and the target levels to achieve, to identify all the critical Xs in the equation and understand both the relationships among the Xs and their effect on Y, and to reduce E to a very small portion. By doing this, it becomes possible to accurately control the output of the process.
20.B.2.2 Six Sigma Expert, the Black Belt
The Black Belt is a key role in the organization. As the Six Sigma tools and methodology expert, he will lead and motivate project teams, act as the
critical link with management and be responsible for timely and successful completion of projects. This role requires a person who is customer focused,
a good team player, committed and respected, results oriented, has good computer and mathematical skills, and who is not afraid of a challenge. Black
Belts should also be approachable by others, flexible to navigate through rigid structures and unfriendly waters, and be good communicators.
A good Black Belt will also have:
■ Basic understanding of statistical procedures and techniques
■ Basic understanding of the organization’s overall business processes
■ Experience in managing projects of varying size and complexity
■ Experience in leading cross-functional teams
■ Experience in teaching, coaching and/or internal consulting roles
An organization will normally train between 1 and 3 % of their employees as Six Sigma Black Belts. These key individuals are normally hand selected
high potentials who are results oriented, able to communicate effectively at all levels and are well respected by management and operators alike. The
Black Belt role is a full time commitment which traditionally lasts for a minimum of two years after which they will often move on to a position of greater
responsibility where they are able to put their new skills and experience to good use in pursuing company objectives.
A Black Belt normally reports to a Master Black Belt or to senior management directly, and is generally responsible for mentoring a number of Green
Belts within the organization.
20.B.2.3 Six Sigma organization
The Six Sigma organization is normally structured similarly to other groups. The overall lead is taken by the CEO (Chief Executive Officer), while strategic co-ordination is led at the vice-president level, to whom all sites and groups as well as any Master Black Belts report. Black Belts are normally linked to a particular site, and so report to the site manager directly. Green Belts and team members normally report to their respective department heads, as these roles are only part time and should not take up more than a few hours a week of their regular responsibilities.
Both Master Black Belts and Black Belts can generally be deployed across all departments and groups, while Green Belts and team members remain focused on their particular areas of competence and responsibility.
Figure 20.B-4 Six Sigma Project Organization
A Master Black Belt will normally manage and support a number of Black Belts, as well as run some projects and coordinate all training activities.
With the correct Green Belt and team member support, a Black Belt should similarly be able to manage 2–3 projects simultaneously, while offering
guidance and mentoring to Green Belts and team members (see Figure 20.B-4).
20.B.2.4 Six Sigma roles and responsibilities
As within any successful organization there are a number of key roles and responsibilities which need to be established in order to create a successful
Six Sigma implementation. These can be defined as follows:
Executives
■ Generally the CEO of an organization.
■ This person helps create the vision, define the implementation, set strategic goals and determine the impact on the bottom line.
Sponsors
■ Generally members of senior management circles.
■ Act as link between senior management, champions and Master Black Belts.
■ Ensure proper resources are made available.
■ Support the Six Sigma initiative on exceptional occasions, such as annual reports, management reviews, company conferences, and when achieving targets.
Controllers
■ Estimate expected and actual savings achieved.
■ Create clear guidelines for determining direct and indirect (hard and soft) savings.
Champions
■ Generally site or operations managers.
■ Help focus Six Sigma activities on areas of greatest need and impact.
■ Set goals to be achieved with Six Sigma, and define targets to be achieved by managers.
■ Responsible for Six Sigma in their area of responsibility.
■ Create Six Sigma structures and provide resources needed.
■ Help select projects and Black Belt candidates.
■ Approve training plans and define types and frequencies of reviews.
Team Members
■ This role can be filled by almost any person within an organization.
■ Must be willing to work in a team, be guided by data, and assist in basic tasks within their work areas.
■ Team members are generally selected so as to have a cross-functional team and have most process stakeholders represented.
■ In some cases team members may receive special training, but in most cases this is done on-the-go as the project progresses.
■ Generally requires a 5–10 % time commitment for the duration of the project.
One of the main characteristics of the Six Sigma methodology is the structured approach it brings to problem solving, broken down into five principal steps: Define, Measure, Analyze, Improve and Control. The resulting DMAIC acronym has since become synonymous with the Six Sigma approach to fixing problems in established processes.
As the names imply, these steps are not much different from those advocated by Deming in the form of the PDCA cycle (Plan, Do, Check and Act). The
Six Sigma approach has however included additional depth to these steps as well as identified some key items in each which greatly improve the
likelihood of success at the end of the cycle (Figure 20.B-5).
Figure 20.B-5 DMAIC Phases, with examples of activities and tools used
Define — activities: project selection, team selection, create project charter, set metrics and goals, voice of the customer. Tools: project charter sheet (VIS), process flow diagram (SIPOC), process yield analysis (RTY), voice of the customer (VOC), Kano analysis & CTQ tree, historical data plot.
Measure — activities: define project and boundaries, collect data on current state, assess suitability of measurement system, process analysis, determine current performance levels. Tools: process mapping, data collection plan, process capability tools, measurement system analysis (MSA), Gage R & R, FMEA, cause and effect diagram, YX diagram, Pareto diagram.
Analyze — activities: identify potential causes, reduce potential causes down to the vital few. Tools: detailed process maps, graphical data analysis, hypothesis testing, variance, regression and correlation analysis.
Improve — activities: create process model based on vital factors, determine new optimum for the process, validate results. Tools: simulations, design of experiments (historical, screening, full or partial factorial, etc.), response surface designs, FMEA, improvement impact and benefit, replication opportunities.
Control — activities: confirm process is stable and capable, implement monitoring procedures, update quality systems, standardize. Tools: process capability, SPC charts, control plans, project report.
Although the Six Sigma approach clearly defines the basic steps to be followed when tackling a problem, the remainder of the approach as well as
specific tools used are generally very flexible. This permits its easy adaptation to the particular needs of the process under investigation. As mentioned
before, the goal of this approach is to investigate, understand and then determine how to proceed based on the new knowledge gained, rather than
pushing ahead on a fixed course based on a set procedure or pre-conceived idea of the problem and solution.
20.B.2.6 The DMAIC Cycle
Define Phase
The define phase of a project generally starts once a project has been scoped out and selected. This is generally carried out by the Black Belt or Master Black Belt together with management. Though a number of techniques exist for scoping projects, these generally focus on delighting the customer, aligning projects to organizational needs and targets, ensuring the necessary resources are available, and identifying the greatest cost/benefit potential in the shortest time.
Once a new project has been identified, the define phase can be started. Although most people recognize this phase to be the foundation of a good
project, it is all too often skimmed over rapidly in the rush to get on with fixing a process and pressure to produce results soon.
As the name implies, this phase focuses on defining not just the project topic in itself, but also a number of critical elements within and about the
project:
■ clear description of the problem and the process it is found in
■ project scope including deliverables
■ definition of the performance metric to be improved
■ references to historical and target performances
■ the team who will be tackling the issue and management who has sponsored the project
■ a rough time frame highlighting expected completion dates for the main DMAIC phases of the project
Determining and communicating these elements to the whole team at the very beginning of a project is critical to its success. Some or all of this
information is often summarized in a “Vital Information Sheet” or “Project Charter”, which may in addition state the planned budget and expected benefits
to the organization, identify secondary metrics, define the boundaries of the project and act as a project contract when it includes the signatures of key
stakeholders (e.g. process owner, project champion, Black Belt and financial controller).
The resulting document serves as the foundation for the project, outlining who will be working together to solve a particular issue and achieve the project
goals.
Figure 20.B-6 SMART
When describing the project, it is always a good idea to be SMART (specific, measurable, achievable, realistic and time-bound).
Once the project has been outlined, it officially starts at the “Kick-off-Meeting” between the team members and management (Figure 20.B-7).
Figure 20.B-7 Kick-off-Meeting
The Kick-off-meeting should include
Presentation of problem statement and project description, including metrics to be used, as well as historical and target performances
Agreement of team and meeting rules (timeliness, openness, confidentiality, mutual respect, etc.)
Definition of project boundaries (what will and will not fall under the responsibility of the team and project in question)
The define phase concludes upon having a signed project charter and conducting a successful gate review meeting with the Black Belt, project
champion, Master Black Belt, together with a clear action plan for next steps containing topic, deadline and person responsible (Figure 20.B-8).
Figure 20.B-8 Example of a project action list (project charter)
Project charter
■ Business case
■ Description of the problem
■ Description of the goal
■ Focus and Scope of the project
■ Potential benefits
■ Required support
■ Project cost
■ Project plan and milestones
■ Roles and responsibilities
■ General concept deliverables
Achieving a good define phase is essential as the team’s focus and concentration will be tested often throughout the project. Not getting side-tracked
into addressing side issues or tackling ‘urgent’ matters is one of the main challenges the team will face. To achieve this it is essential they have a clear
task defined and the support of their management to focus on this particular high priority issue.
Measure Phase
Once the project has been clearly defined, the measure phase focuses on identifying the true customers of the process and their needs, visualising the current state of the process concerned, evaluating the amount and quality of the data being measured, and establishing the existing performance levels. Additionally, this phase covers the initial steps towards determining important influences on the process and identifying potential root causes for the output of interest.
There are a number of tools used in this phase to help achieve the above, often including the following:
■ Critical to quality (CTQ) / critical to business (CTB) trees
■ Detailed process map showing process variables (inputs, outputs, noise factors, etc.)
■ Statistical process control (SPC) charts
■ Process capability analysis (CPK)
■ Measurement system analysis (MSA)
■ Baseline Performance calculations (Sigma level)
■ Brainstorming techniques
■ Cause and effect diagrams
■ YX and X tracking diagrams
■ Data collection plans
This list is by no means exhaustive, and it is up to each team, and particularly to the leading Black Belt, to select the most appropriate tools for each task and project. In addition, even the tools mentioned above often appear in a number of different forms, having been adapted across teams and industries. For example, in order to offer a better overview, the VOC, CTQ and CTB can be combined into a single table, giving the team a tool with which to visually link customer needs to process metrics (Figure 20.B-9).
Figure 20.B-9 Example of a relationship matrix linking VOC-CTQ/B-Metrics
Process Mapping Process mapping is critical to establishing a basic picture of the process in question, and is a whole topic in itself, with numerous
techniques available. These tend to range from a simple high level process map, showing little more than an overview of the key processes, to a detailed
value stream map (VSM), including extensive information such as duration and defect rates for each step, transport, work in progress and waiting times
(Figure 20.B-10).
Figure 20.B-10 Example of Value Stream Map
In order to obtain a good process map, it is essential for the team members to personally walk the process. This allows them both to get a much better
understanding of the process under investigation, and helps in identifying activities which do not follow the planned or expected process plan, known as
the “hidden factory” (Figure 20.B-11).
Figure 20.B-11 Example of cross functional process map
It is important at this stage that the team should only focus on obtaining the necessary information in order to understand the process and to move
forward.
Should the team on the other hand attempt to gather all possible information on a process at this stage, it is likely to turn into a rather time consuming
exercise and though educational, of little advantage to achieving the overall project goals.
Brainstorming techniques As we know, one of the most important features of the Six Sigma approach lies in the “funnelling effect” i.e. progressively
reducing the number of potential factors of interest until only those proven to have a significant effect on the process output remain.
While the more technical or analytical funnelling down of factors often takes place in the analyze phase, the initial filling of the funnel takes place in the
measure phase. This is usually achieved in part by referring to past documentation such as FMEAs (Failure Mode and Effects Analysis) for the same or
similar processes, reports from similar projects conducted in the past and general literature searches (see Chapter 10.I). The other main source
of potential factors is the team. Based on their diverse skills, experiences and points of view towards the process, a number of good factors can come to
light during brainstorming. As it is essential to start off with a comprehensive list of potential factors, initial activities tend to focus on producing quantity,
and only at a second stage are these then sifted through to identify those factors more likely to be of importance to the process and output of interest.
Similarly, this first step tends to be a purely subjective exercise, with the more analytical approach being used in the analyze phase to continue filtering
down the Xs under consideration.
Cause and effect diagrams, also known as fishbone or Ishikawa diagrams, are another useful tool for identifying potential root causes of an issue, as well as potential X factors in general (see Chapter 10.F Using a Fishbone Diagram and Figure 20.B-12). Once the large number of potential Xs has been identified, it is essential to reduce this number of factors, which can be in the dozens or even hundreds at this stage, to a more manageable level. Some of the tools used for this are the YX diagram and multi-voting techniques, which will be expanded on later. Basically these are subjective ranking techniques used to separate the likely or high-potential factors from those less likely to be important. While further activities will focus on the former, it is always possible the team will return to the latter should it become clear that some factors are missing in later phases of the project. Generally, the team will seek to come out of this exercise with 5 to 20 high-potential factors, which will be candidates for more rigorous analytical evaluation.
Figure 20.B-12 Example of a Cause and Effect Diagram
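As a simple illustration of such a subjective ranking, the Python sketch below (purely hypothetical factor names, weights and scores) mimics a YX-style prioritisation matrix: each potential X is scored against each output Y, the scores are weighted by the importance of the Y, and the Xs are ranked by their totals.

ys = {"impurity level": 10, "cycle time": 6}            # outputs and their weights (assumed)
xs = {                                                   # team scores per X and Y (assumed)
    "reaction temperature": {"impurity level": 9, "cycle time": 3},
    "stirring speed":       {"impurity level": 3, "cycle time": 1},
    "operator shift":       {"impurity level": 1, "cycle time": 1},
}

totals = {x: sum(ys[y] * score for y, score in scores.items()) for x, scores in xs.items()}
for x, total in sorted(totals.items(), key=lambda item: item[1], reverse=True):
    print(f"{x}: {total}")
# The highest-scoring factors are carried forward into the data collection plan.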
Data collection plans Once the high potential factors have been identified, a data collection plan must be created, whose purpose is to collect useful
data on each of these factors and the respective process output. This data will then be used to conduct further analysis to determine whether a particular
factor is indeed critical to the process output, and get an initial indication of how.
In general, the data needs to be collected in pairs or sets, that is recording both the X input value and the respective Y output for that particular input. In
addition the data collection plan must establish how the data is to be collected and when or how often. An example of a data collection table is shown in
Figure 20.B-13.
Figure 20.B-13 Template for Data Collection Table
Statistical process control (SPC) charts Statistical process control is used on historical data to determine whether the process is in control (natural
variation only) or out of statistical control (assignable causes of variation present). This in turn helps the team determine what techniques and priorities to
set in solving the issue at hand.
Process capability analysis In order to better understand what sort of issue is being faced, it is generally necessary to conduct a process capability analysis. The purpose of this tool is to help establish how well the current process is able to meet the requirements set. There are in principle two main issues to be assessed: centering (location) and variation, i.e. is the process centered on the target value or permitted range, and is the variation less than or greater than that allowed. Naturally, the lower the process variation relative to the permitted window, the more forgiving the situation.
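A minimal sketch of such an assessment, assuming invented assay data and specification limits, is shown below: Cp compares the process spread to the tolerance window, while Cpk additionally penalises poor centering.

import numpy as np

def capability(values, lsl, usl):
    # Cp: spread vs. tolerance window; Cpk: spread and centering combined.
    mean = np.mean(values)
    std = np.std(values, ddof=1)
    cp = (usl - lsl) / (6 * std)
    cpk = min(usl - mean, mean - lsl) / (3 * std)
    return cp, cpk

assay = np.array([99.2, 100.1, 99.8, 100.4, 99.6, 100.0, 99.9, 100.3])  # assumed data
cp, cpk = capability(assay, lsl=98.0, usl=102.0)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
# A Cpk well below Cp points to a centering problem; both being low points to
# excessive variation relative to the permitted window.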
Measurement system analysis (MSA) As the Six Sigma approach is highly focused on drawing conclusions from hard data, it is essential that the quality of the data also be checked. The measurement system analysis does just that by ensuring that only reliable and accurate data is used in decision making.
One of the main components of an MSA is the gauge repeatability and reproducibility analysis (GR & R), which checks whether a measurement system is repeatable over time and can be reproduced under different conditions, such as different pieces of equipment or operators.
Besides these short term investigations, it has been found to be of great importance to establish the long term performance of these measurement systems. They are often complex processes in their own right and need to be analyzed over a longer period of time.
Analyse Phase
The goal of the analyze phase is to use the data gathered on potential Xs and the corresponding Ys to evaluate how important they are to the output of
interest. The number of tools and techniques available at this stage is very large, and varies significantly depending on the type of process and data at
hand.
The basic procedure to be followed is however the same, called hypothesis testing. In this case, a null hypothesis is presented (known as Ho) together
with an alternative hypothesis (known as Ha).
An example might be:
■ Ho: The person reading this article is under the age of 40
■ Ha: The person reading this article is not under the age of 40
Once this is done, the team can use the data collected and appropriate tools to assess the evidence against the “Ho”. If the “Ho” is rejected, the “Ha” is taken to be the valid hypothesis statement; if the evidence is insufficient, the team fails to reject the “Ho”.
By using this relatively simple methodology on each potential X, and for each Y of interest, it is possible for the team to methodically and confidently
determine whether the factor is indeed important to the output of the process.
As it is generally not possible to completely prove or disprove the significance of a particular factor, the Six Sigma methodology uses the strength of statistics to determine whether there is sufficient evidence to reject the null hypothesis or not. This is accomplished by setting a maximum acceptable error probability (the significance level) and then assessing whether the data can explain the effect beyond this level of reasonable doubt. Although this methodology does technically allow for mistakes, these are relatively rare when the tools are used properly and by trained Black Belts.
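A hedged example of this procedure, using invented data and a standard two-sample t-test, might look as follows: the candidate X is the granulate supplier, the Y is the batch yield, and the maximum acceptable error probability is set to 5 %.

from scipy.stats import ttest_ind

# Ho: the mean yield is the same for both suppliers.
# Ha: the mean yields differ.
yield_supplier_a = [91.2, 90.8, 92.1, 91.5, 90.9, 91.7]   # assumed data
yield_supplier_b = [89.4, 90.1, 89.8, 88.9, 90.3, 89.5]   # assumed data
alpha = 0.05                                               # maximum acceptable error probability

t_stat, p_value = ttest_ind(yield_supplier_a, yield_supplier_b)
if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject Ho, the supplier appears to be a vital X")
else:
    print(f"p = {p_value:.4f} >= {alpha}: fail to reject Ho, no evidence of an effect")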
By reducing the number of factors to only those truly important, the team is better able to focus on understanding the critical aspects of how these might
interact with each other and most importantly, how they impact the final output. Gaining a clear understanding of the relationship between the inputs and
outputs of a process is the prime focus of the subsequent improve phase of any Six Sigma project.
Improve Phase
Once the team has reduced the field of potential Xs down to only those proven to be critical, they are ready to move into the improve phase. This phase concentrates on determining the solutions to be implemented. This in turn involves using everything the team has learnt thus far to reduce defects and variation and to establish the new target process. Once the new process has been tested, validated and approved by the process owner, it must then be implemented into everyday operations.
Many process improvements may also have already been implemented during earlier phases of the project, such as “Nike” events (“Just Do It”) resulting
from Kaizen workshops or 5S activities. All these actions lead to the overall new and improved process, and should be tracked in a master action plan,
regardless of when or in what phase an improvement was implemented.
Some of the more detailed analysis during this phase concentrates on investigating the relationships between factors and determining the optimal levels
of each factor in order to achieve the desired output. In addition, the team will during this phase attempt to establish a mathematical equation of the form
Y = f (x) to generally explain the relationship between Xs and Ys, as well as determine the process robustness window. This robustness window is the
result of analysing the sensitivity of the Xs to both each other and external noise sources, i.e. uncontrolled factors such as environmental conditions,
wear and tear, etc.
The tool most commonly referred to during the improve phase of a Six Sigma project (though not necessarily the most used tool) is the DOE, or Design of Experiments. This tool, as many will already know, consists of using a systematic approach towards experimental trials and supersedes the traditional “trial and error” technique still often found in use today. The major advantage of this tool lies in the ability to gain more in-depth knowledge of the process from the same number of experimental trials or fewer. As it may be very difficult to gain access to a process in order to run experiments, not to mention the cost of running the experiments and assessing the results, making the best use of these trials is paramount. The field of DOEs has by now grown into dozens of general and special designs, allowing the user to plan experiments ranging from the quick and roughly indicative to detailed and highly precise investigations of complex interactions.
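The sketch below gives a flavour of the approach under stated assumptions (the factor names, coded levels and responses are invented): a 2^3 full factorial design is built, and the main effects are estimated from the eight responses by ordinary least squares.

import itertools
import numpy as np

factors = ["temperature", "stirring speed", "reaction time"]           # assumed Xs
design = np.array(list(itertools.product([-1, 1], repeat=3)))          # 8 runs, coded levels
response = np.array([72.1, 78.4, 73.0, 79.2, 74.8, 83.5, 75.3, 84.1])  # assumed Ys

# Model matrix: intercept plus main effects (interaction columns could be added similarly).
X = np.column_stack([np.ones(len(design)), design])
coeffs, *_ = np.linalg.lstsq(X, response, rcond=None)

print(f"intercept: {coeffs[0]:.2f}")
for name, coef in zip(factors, coeffs[1:]):
    print(f"{name}: effect {2 * coef:+.2f} (change in Y from low to high level)")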
Other tools commonly used in this phase include cost-benefit matrices, RASIC charts, Gantt diagrams and general action plans. In many projects SOPs
must also be created or revised, equipment or tools modified or purchased and personnel trained, all of which require careful planning and budgeting to
ensure successful implementation.
Once the improvements have been implemented to the process owner's approval and the improve phase gate review has been completed, the team is ready to move into the final phase of a Six Sigma project: the control phase.
Control Phase
The control phase is the final phase of the DMAIC methodology and incorporates a number of measures to ensure the changes made during the improve
phase result in the output metrics achieving their target levels and long term process stability. Additionally the phase incorporates a number of activities
aimed at making sure the process is handed over to the process owner smoothly and in such a way that all improvements made endure in the long term.
Some of these activities include establishing metrics to be monitored, as well as how and when to monitor them. Control charts are particularly useful in
determining who, when, how and how often particular aspects of the process need be monitored. The use of statistical control charts in this phase
greatly enhances the ease of monitoring a process and its behaviour.
Standardisation, documentation and communication aspects are also looked at in order to create more transparency, and allow quicker and more
accurate responses in case the process shows signs of deviating from its normal range. By creating clear and concise reaction plans, users are able to
rapidly and effectively intervene in the case of minor events, and major issues can be all but eliminated.
Another important aspect of the control phase involves a confirmation of the overall benefit to the organization. It is important to understand whether the
improved metrics have indeed had a positive and measurable effect. Should this not be the case, greater care must be taken during selection of future
projects to align these to the organizational big picture and their key performance indicators (KPIs). These naturally do not need to be solely financial
benefits as they may focus on other priorities such as customer satisfaction and loyalty, community support, employee health, safety and motivation
and a myriad of other important concerns to any organization.
One of the final steps in the control phase involves creating a project report for knowledge sharing and documentation. It is also advisable at this stage to look at spreading best practices, to present recommendations for further activities (potentially new projects) and to identify possible snowballing effects, that is, areas where the new process knowledge can be applied with little or no need for additional analysis, so that additional benefits can be reaped at very little cost or effort.
20.B.2.7 Six Sigma Training
Basic training for the main Six Sigma roles is as follows, though naturally a vast number of additional options are available for increasing skill levels and
effectiveness (Figure 20.B-14).
Figure 20.B-14 Six Sigma training
Six Sigma training
These are the most predominant types of workshops and training programmes commonly available and the generally accepted minimum durations
necessary for effectively communicating the respective topic.
Figure 20.B-15 Training weeks for Black Belts (Black Belt training over 4 weeks)
Week 2 covers, among other topics, cause and effect analysis, Failure Mode and Effects Analysis (FMEA), and the analyze phase: graphical and statistical data analysis, hypothesis testing, and correlation and regression analysis.
The Green Belt, Black Belt and Master Black Belt trainings are generally certifiable on completion of a theory and practical portion. Although there is
not yet a governing body for this sort of training, certain commonly accepted standards are beginning to emerge such as the ASQ CSSBB certification.
The Black Belt training is designed to follow the natural progress of a project. This set-up allows the candidate to learn the necessary methodology and
tools in class, and then gain practical experience by applying these to a real issue within their organization. Additional onsite support is generally
provided by MBBs during this training phase. The four training weeks can roughly be split up as shown in Figure 20.B-15.
All training weeks should include a theory portion, in-class and homework exercises, and hands-on practical team exercises. As the training is meant to run in parallel to a real project led by the BB candidate, time is also usually set aside for in-class project reviews and coaching. Additionally, some time might be reserved for a weekly test to ensure the candidates have correctly understood the theory and that minimum standards are achieved.
20.B.3 Goals/Objectives/Benefits
In order to remain competitive in today's aggressive and demanding markets, companies need to remain finely tuned to their customers and their needs, while keeping their own operations efficient and flexible in the face of ever-changing demands and constraints.
In order to remain competitive, organizations need to focus on:
■ knowing their customers
■ knowing their products
■ knowing their processes
Simply put, knowledge of their surroundings is key, and as can be expected, proactive organizations are on a continuous search for the best practices,
concepts and techniques to build this. Over the past two decades many organizations large and small alike have discovered and continued to improve
the Six Sigma methodology, helping it become one of today’s key initiatives being applied to help companies rise to the challenges they face.
While there are a number of official goals for implementing a Six Sigma programme in an organization, these invariably boil down to one core goal, which in general terms can be described as: to continuously improve an organization's ability to fulfil its internal and external customers' requirements, completely and efficiently, by steadily increasing its knowledge of its processes and using this knowledge to unrelentingly improve its products, services and customer satisfaction.
By doing so, organizations are able to provide better products faster, with less waste and at lower cost, leading to greater sales and improved margins. This is mostly achieved by recognizing key processes, systematically and analytically identifying their key input factors,
understanding how these influence the output, learning how to control the process accurately, and ensuring long term stability of the resulting process,
i.e. by applying the Six Sigma DMAIC or a similar process optimisation procedure. One of the great advantages of the Six Sigma methodology is that it
is a concept applied to improve all kinds of processes, be these in production, in product or process development, or in transactional and service industry
environments. In any of these cases, the output of any process has key characteristics which can be measured in regards to quality, cost, or time.
The projects undertaken thus tend to target one or more of the following areas:
■ Quality improvements
■ Efficiency improvements
■ Reduction of waste
■ Reduction in processing times
■ Reduction of total costs
■Improvement of competitiveness (i.e. where quality, cost and time have been improved to levels which are world class and have not been achieved
by anyone else)
■ Customer satisfaction (“delighting the customer“)
■ Cultural Change
The Six Sigma concept, in contrast to most other process optimisation or business excellence concepts, incorporates an additional big-picture goal: it uses the opportunity to hand pick, train and groom potential next generation leaders. In essence, the programme acts as an in-house training and development programme for high potential candidates from within the organization, training these people in both the analytical and soft skills necessary to successfully lead others in roles of increasing responsibility. Rather than this training being an investment and expense to the organization, the Six Sigma approach means the candidates gain knowledge and hands-on experience while saving money and generally working towards the organization's strategic goals. This aspect has become one of the cornerstones of a Six Sigma deployment and has been so successful that companies such as GE have made Black Belt certification a requirement for managerial roles.
20.B.4 Implementation
20.B.4.1 Pre-requisites for successful implementation
In order for a Six Sigma roll-out to be successful, there are a few key elements which must exist and be carefully considered in any organization:
■ There must be a willingness to change at a high hierarchical level. Although this does not necessarily require the CEO's buy-in, it is necessary that they not be against such an initiative, and that another senior figure is willing to drive it (see next point). Management must be open to new ideas and willing to support change where justified and beneficial to the organization.
■It is highly desirable to have some form of sponsor at senior management level. This has both the effect of highlighting the importance of the
programme to all those involved, and as a key remover of roadblocks, particularly those which go beyond local areas of influence. While a roll-out
is still possible from a bottom up approach, having the sponsor will often speed up the time it takes to establish the programme and see significant
results.
■The Six Sigma concept needs to be customised to meet the particular needs of the organization, focusing on those areas and aspects most in
line with the top level strategy and particular circumstances.
■It is highly recommended that Black Belt training be linked to real projects, thus providing the candidates with an immediate opportunity to
test and learn tools and techniques in a real world environment, while simultaneously drastically reducing the payback time for the training and
support provided to the candidates.
■ In the case of projects without a clear ROI (Return on Investment), it is essential to get the consensus of all involved, particularly of the sponsor and champion, before kicking the project off. If this is not possible, it is a good sign the project is not a current priority and therefore unlikely to be closed successfully or on time.
■ In order to achieve results in a timely and efficient manner, it is vital to apply the DMAIC system rigorously. While the selection of many tools is left to the Black Belt or Green Belt, it is essential that the basic steps are carefully followed and all gate reviews completed before proceeding. Although this may result in the occasional step back to an earlier stage (e.g. if there are insufficient vital X's to control the process in the improve phase), the end result will be a deeper understanding of the process and the ability to achieve the project goals over and over again.
■As Six Sigma projects invariably lead to change and some form of conflict, it is highly recommended that the Black Belt training include a
workshop on Managing Change as part of the curriculum.
■For best results it is recommended that during the initial stages individual on-site coaching of Black Belts be provided, particularly during
the training and to the end of their first project.
■Continuing coaching of the Black Belts after the training and for at least the first year helps ensure they are able to reach full potential earlier,
leading to faster and greater benefits to the organization.
■ Achieving best results and clear communication by standardising tools also applies to Six Sigma. It is recommended to use standard software packages, reporting formats and requirements, and performance metrics across the whole organization.
Overall, any successful implementation should only take place in an environment which shows a sincere interest in learning and in tackling its problems, and never in one which has no intention of investing the time or resources necessary to see success, or of respecting the sometimes painful truth of the current situation and using it as a stepping stone to improvement.
20.B.4.3 How to implement
There are a number of theories regarding the successful implementation of a Six Sigma programme. When all is said and done, there is no “one-size-fits-all” solution, and the best approach will very much depend on the individual organization and its culture. You will find below a brief description of the main approaches used. In order to know which might suit your organization best, it is recommended to perform a “Six Sigma assessment workshop” with the aid of a trusted Six Sigma expert or organization.
The four most common approaches are:
Top-down The top-down approach involves the training and roll-out of resources progressively from the top of the organization down to the sponsor,
champions, black belts and team members. In this case senior management decides to roll-out the programme, assigns a top management sponsor and
executor, makes an appropriate announcement about the importance of the programme, brings in or makes the necessary leadership resources
available, selects and trains high potential internal candidates, and continues to show interest and support throughout the roll-out phases.
Big Bang The big bang approach is where you do everything at the same time (or as close to it as you can) in order to minimize deployment risk, get
the full benefits of the company wide deployment almost immediately and to minimise the effect of local issues on the overall results. On the down side,
it also means there is little time to learn from mistakes and adjust new knowledge, and takes a much more co-ordinated deployment effort than some of
the other approaches.
Pilot Implementation The pilot implementation roll-out is based on creating one or a few pilot programmes which then act as the seeds for future growth. This approach requires a reduced investment in time and resources compared to the big bang approach, and can be run as a bottom-up or top-down approach. Once the pilot programmes have shown their success, and the resources have gained first-hand knowledge, these become the trainers of future generations of Black Belts. By using this approach the organization can allow the programme to grow organically from within. On the downside, pilot programmes, even when successful, can become marginalised, and the benefits to the organization will be limited by the pace at which the programme grows.
Outsourced support This approach is used when no resources are available internally and roll-out is wanted immediately. By using external
resources such as hiring experts or bringing in consultants, the organization benefits from the immediate availability of experienced resources to help
tackle high priority areas, prove the value of the programme and train internal resources as they become available. Although the external support
resources are often expensive, the payback may often be similar to other approaches as experienced Black Belts are often able to close projects faster
and with greater success than candidates in training.
In order to select the appropriate implementation strategy, it is essential that the organization understand their strategic goals, both financially and time
wise, determine potential constraints such as budget or resource availability, and run a cost-benefit analysis on the different scenarios.
20.B.5 Tools
■ Analysis of Variance
■ Brainstorming
■ Cause and Effect Analysis
■ Correlation Analysis
■ Critical to Quality (CTQ)
■ Critical to Business (CTB)
■ Cross-Functional-Process-Mapping (see Chapter 10.E)
■ Data collection Plan, Data collection
■ Design of Experiments (DoE), Response Surface Designs
■ Failure Mode and Effects Analysis (FMEA, see Chapter 10.I)
■ Five S (5S)
■ Hypothesis testing
■ Gantt Chart
■ Key Performance Indicators (KPI)
■ Kick-Off-Meeting
■ Long Term Measurement System Analysis (LMSA)
■ Measurement System Analysis (MSA) also called Gauge R & R
■ Multi-voting
■ Project Charter
■ Pareto Principle and Pareto Diagram
■ Pay back and Pay back time
■ Process Capability
■ Process flow diagram (SIPOC)
■ Process Stability
■ Process Mapping (see Cross-Functional-Process-Mapping)
■ Process Robustness Analysis
■ Project Identification
■ Project Report
■ Project Selection
■ RASIC Charts
■ Regression Analysis
■ Statistical Process Control
■ Supplier-Input-Process-Output-Customer (SIPOC) see Process flow diagram SIPOC
■ Voice of the Customer (VOC)
20.B.6 Variations
As is to be expected, the strategy which best fits your organization may be none of the above, or a combination of them (see Chapter 20.B.4 Implementation). In some cases it is necessary or recommended that the basic approach be modified to meet specific demands. This may include, but is not limited to, rebranding in order to provide ownership, or adaptations to better tie the programme into existing programmes and techniques.
Should an organization wish to customise its programme, it is highly recommended that it do so with the aid of an experienced Six Sigma resource, in order to avoid devaluing the original concept or repeating mistakes others have already had to suffer. All in all, the Six Sigma approach has become well established and has enjoyed almost two decades of success. Any variations from this tried and tested approach should therefore be undertaken with extreme care and careful planning.
20.B.7 Examples
Motorola saved more than 11 Billion dollars since the implementation of Six Sigma. It gained about 12 % productivity improvement per year on the
average. Motorola was able to reduce cost of quality by more than 84 %. They achieved a reduction of 99,7 % in process defects, resulting in a metric of
5,6 Sigma.
Allied Signal reported 1,5 Billion dollars in savings on the implementation of Six Sigma and had about 7 % productivity improvement per year.
General Electric reported 20 % of their earnings due to Six Sigma and a total of more than 4 Billion dollars in savings since the implementation.
Many pharmaceutical companies have invested in Six Sigma. Few pharmaceutical companies report on the specifics in regards to savings and earnings.
Aventis started implementing Six Sigma in 1999 and achieved savings of over 20 Million Euros over a period of three years.
20.B.7.1 API Synthesis
One pharmaceutical company produced an active pharmaceutical ingredient. The content of one impurity had a specification limit of 110 ppm. During production the company experienced levels higher than 110 ppm, resulting in additional cost, lower capacity and increased cycle time. About 11 % of all batches had to be re-crystallized in order to get the concentration of the impurity below the specification limit.
A Six Sigma team was formed and the Black Belt used the DMAIC sequence to solve this problem. Within six months the team found four root causes and eliminated them. The result looked like the chart in Figure 20.B-16.
Figure 20.B-16 Individual Chart for impurity in Active Ingredient
After the improvement the yield improved, the cycle time was reduced and the re-calculation indicated a reduction in manufacturing cost of 23 %.
This example also proves that a chemical reaction in a batch type process can be “in statistical control”. It is also capable, since there is a buffer
between the upper process limit and the specification limit.
20.B.7.2 Reducing cycle time
One subsidiary of a global pharmaceutical company produced and packaged tablets in a small site. The cycle time varied substantially with an average
of 16 days.
A Six Sigma team was formed and the Black Belt followed the DMAIC sequence. The result after 7 months was a substantial reduction of cycle time: the new average was 6 days, including the release by the quality department.
20.B.7.3 Improving the yield of a drug product
One pharmaceutical company experienced relatively high losses at the automated optical inspection station. A Six Sigma team was formed and the Black Belt used the DMAIC cycle to identify the root causes for the losses. Within seven and a half months the losses were reduced by 75 %. The savings amounted to over 600 000 Euros.
20.B.7.4 Improving the laboratory process
One pharmaceutical company had problems in precisely determining the amount of impurities in an active pharmaceutical ingredient. This resulted in repeats of the analysis and sometimes in rejecting batches, even though there were no indications that anything had changed in production.
A Six Sigma team was formed and the Black Belt used the DMAIC cycle to improve the analytical method. First he used a Design of Experiments (DoE) to find the process parameters which affected the analysis most. In a second Design of Experiments he optimized the process parameters which had the highest impact on the laboratory process. During the control phase the company experienced an 80 % reduction in rejected batches, resulting in savings of more than 300 000 Euros.
Summary
Six Sigma is a systematic approach to improve processes by eliminating defects. It is applied in order to increase, sustain and maximize the success
of a company.
One of the key functions within the Six Sigma organization is the Black Belt. He represents a critical link between management and the Six Sigma
Team organization and is responsible for the timely and successful completion of projects.
One of the main characteristics of the Six Sigma methodology is the structured approach brought to problem solving. It can be broken down into the five
principal steps followed to tackle any problem: Define, Measure, Analyze, Improve and Control. The resulting DMAIC acronym has since become
synonymous with the Six Sigma approach to fixing problems in established processes.
The core goal of Six Sigma is the continuous improvement of an organization's ability to fulfill its internal and external customer's requirements,
completely and efficiently, by steadily increasing their process knowledge and using this knowledge to unrelentingly improve their products and
services, and customer's satisfaction.
A successful implementation of Six Sigma requires willingness to change at a high hierarchical level and sponsoring by management. Training of the
Six Sigma concept is a prerequisite for correct application of the DMAIC cycle to real projects. Several approaches are possible for the implementation
of Six-Sigma, e.g. top-down or Big Bang.
A large number of methods and tools are used in the context of Six Sigma. Many of them belong to statistics and project management.
20.C Statistical Process Control (SPC)
Up09 Rolf Staal
20.C.1 Definition
Statistical Process Control (SPC) is a tool used to monitor, analyze, and improve the variability of a process. Its main function is to distinguish between
natural common cause variability and un-natural special cause variability. Processes which display only common cause variability are predictable,
and generally offer higher quality, lower costs, shorter cycle times, lower risks and lower inventory levels. Should the process capability be above 1,3, the
risk of producing defects or off-standard material for this type of process is extremely small.
He also found that the total variation of real processes consists of two types (see Figure 20.C-1): the natural portion of variation, which is part of the process and which can be reduced but not eliminated (common cause variability), and the un-natural portion, which is not part of the original design and which can be eliminated if its root causes are known (special cause variability).
This concept can be described by the following equation:
Y = f(x1, x2, …, xn) + E
In this equation, Y is the output of a process, the xi represent the input factors which have an impact on the output Y, and f() represents the mathematical function which defines the relationship between the Xs and the Ys. E is known as the error, and is the unknown portion, since the equation might not be able to explain one hundred percent of the total variability.
E in this case is the special cause variability which makes the process unpredictable. If this portion exists and in particular if it is large, mathematics
do not apply and all the equations and statistical models might no longer be applicable.
In the model mentioned above, the common cause variability is described by the equation:
Y = f(x1, x2, …, xn)
Such a system is affected by chance only and is therefore predictable. All the mathematical functions, equations and statistical models apply.
Figure 20.C-2 Total variation in the process
The concept of total variation is summarized in Figure 20.C-2.
20.C.2.2 Descriptive methods and their shortcomings
It is common practice to describe systems and in particular processes with numbers. The average X-bar and standard deviation s are well known
parameters. What is not so well known is the fact that the standard deviation is not well suited to describing a process over time. This can be
demonstrated with the aid of the example data in Figure 20.C-3.
Figure 20.C-3 Average and Standard deviation
In this example 30 measurements have been taken from a process over time. The process is a bowl of white and colored beads. A sample of 100 beads
is taken at regular intervals and the number of white beads is determined.
The first measurement was 90 white beads, the second measurement counted 81 and the last sample taken contained 88 white beads. When the standard deviation is calculated, the first step involves calculating the average by summing all values and dividing the total by the number of values. In this example the average X-bar is 83,1.
The next step involves determining for each measurement the difference between the value and this average, as shown in the third column of the table. In
the fourth column the difference from the third column is squared. These differences are then added to obtain the sum of the squared differences, which
in this example is 323,5. This value is then used in the equation for the standard deviation, where the sum of the squared differences is being divided by n
– 1, and then the square root taken. The resulting standard deviation in this example is 3,34.
Figure 20.C-4 Average and standard deviation (Example)
As can be seen from the equations in Figure 20.C-3, as long as the sum of the squared differences is constant, the standard deviation will also remain constant. In Figure 20.C-4 the values of the process have been plotted in the form of control charts. On the left-hand side the values have been placed in chronological order. On the right-hand side of the figure the same values have been sorted in order of magnitude. Since both sets of data contain the same values, the average and the standard deviation are the same. However, we can clearly see that the different sequence of the values produces very different control charts. We can therefore conclude that the standard deviation, like the mean, cannot correctly or accurately describe a process which is a sequence of events over time.
The consequence is that we need something better than the average and the standard deviation in order to describe a process which is running over time. The only tool capable of describing such a process is the control chart.
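The point can be reproduced with a few lines of code (the counts below are invented, not the original bead data): the mean and the standard deviation are identical for the chronological and the sorted series, while the moving ranges, on which the control chart is based, are not.

import numpy as np

chronological = np.array([90, 81, 84, 79, 86, 82, 88, 80, 85, 88])  # assumed counts
sorted_values = np.sort(chronological)

for label, series in (("chronological", chronological), ("sorted", sorted_values)):
    moving_ranges = np.abs(np.diff(series))
    print(f"{label}: mean={series.mean():.1f}, s={series.std(ddof=1):.2f}, "
          f"average moving range={moving_ranges.mean():.2f}")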
20.C.2.3 Control charts
The main purpose for the use of control charts is to highlight, and ideally predict an event which is or will become “out of control”. This is also often
referred to as “not in statistical control” or NISC. In layman’s terms this means it is possible to determine objectively when an event or series of events
step outside of what is considered to be normal levels of variation for that process, where normal levels of variation are determined by the historical
performance of the process. A NISC event therefore refers to an event which is statistically different from past performance. Over the past few decades,
control charts were thus applied across many industries to monitor process outputs in order to identify when a problem had occurred or was imminent,
thus driving the necessary containment actions in order to prevent the issue from reaching the customer. In recent years there has however been an
additional push towards using these same control charts to monitor process inputs. The logic behind this is that if an organization has reasonable
knowledge on how the variation of an input will affect the process output, there is much greater benefit to be achieved by monitoring and reacting to NISC
events on the input before they become severe enough to cause a NISC situation on the output of a process, thus saving significant containment
resources.
Figure 20.C-5 lists the most common control charts used nowadays. For attribute data these are the C- and U-chart for data that follow the Poisson
distribution, and the NP and P-charts for data following the binomial distribution. For further detail please see Chapter 20.C.2.7 Control charts for attribute
data.
Figure 20.C-5 Control charts
For variable data there are several control charts, depending on the application (Figure 20.C-5). Use of the control chart for individual values and moving range, also known as the I-MR control chart, is widespread in the pharmaceutical industry because many processes generate individual values.
Applications are found for the following:
■ yield as a percent of theoretical,
■ cycle time,
■ quality parameters such as
■ hardness, weight, film thickness, dissolution and dimensions of tablets,
■ pH of solutions,
■ assay of active pharmaceutical ingredients,
■ particle size distribution and concentration of impurities in active pharmaceutical ingredients.
These control charts are based on the assumption that the data is coming from a normal distribution.
X-bar and R charts (average and range charts) are used when several values are generated from one sample and when the variability within the sample is similar to the variability between the samples. This might be the case with a tabletting machine where several tablets might be
measured and analyzed for dissolution, weight or thickness. The X-bar and R chart is independent of the distribution of the data, as long as a suitably
large sample size is used. For details please see the literature cited in Chapter 20.E References.
The X-bar and s chart (average and standard deviation) generally provides less information than the X-bar and R chart, even though more effort is used
to construct it.
The CuSum Chart is slightly more sensitive for the start-up of a campaign. The I-MR chart and the X-bar and R chart can be applied as well in these
situations.
The Multivariate chart is ideal for monitoring data from clean room testing as it is particularly sensitive to small drifts and trends in uninterrupted
processes.
The Tool-wear control chart or Short Run SPC is recommended in case the process average is moving due to wear and tear of a tool or the system
itself and where this results in only a small amount of data available for establishing the control chart.
20.C.2.4 Set up of control charts
I-MR charts for individual values and moving ranges
The first study with a limited number of measurements is called the initial study. This initial study is established by using a minimum of 20 samples but not
more than 30 samples. From these measurements the average as well as the average moving range is then calculated. The distance 3s used for
determining the control limits is calculated via the range method as indicated in Figure 20.C-6.
Figure 20.C-6 Control limits for charts for I-MR charts (worksheet)
Figure 20.C-7 Control limits for I-MR charts
(data from the example bead game)
In addition the upper control limit (UCL) for the ranges is calculated as shown. As recommended in Figure 20.C-6, the chart for moving ranges is
established first by using the values for the average moving range and the upper control limit for the moving ranges. A check for biased limits is then
performed as indicated. The reason for this check is to ensure that narrow limits are used so that special cause variability is detected as soon as possible, as it could otherwise cause errors in the calculation of the final control chart limits.
If either the range chart is out of control using this limit, or 2/3 of the range values are below the average moving range, then the revised limits are
calculated using the median moving range. A subsequent check for the need to use revised limits is obtained by comparing 2,66 × R2-bar with the corresponding value from the worksheet (Figure 20.C-6); this indicates whether the bias of the initial study is significant or not. In case the revised limits are not necessary, the 3s based on 2,66 × R2-bar is appropriate and can be used to calculate the control limits for the individual values chart.
The calculation of control limits of the data from the bead game example is shown in Figure 20.C-7.
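The calculation steps described above can be sketched in a few lines of Python. This is a minimal sketch, not the worksheet itself: the factors 2,66 and 3,267 are the standard constants for moving ranges of two consecutive values, and the biased-limits check based on the median moving range is omitted.

def imr_limits(values):
    """Initial-study control limits for an I-MR chart (individual values
    and moving ranges of two consecutive measurements)."""
    n = len(values)
    x_bar = sum(values) / n
    moving_ranges = [abs(values[i] - values[i - 1]) for i in range(1, n)]
    mr_bar = sum(moving_ranges) / len(moving_ranges)

    # Standard factors for subgroups of size 2 (assumed; Figure 20.C-6 may add further checks)
    ucl_x = x_bar + 2.66 * mr_bar    # upper control limit, individuals chart
    lcl_x = x_bar - 2.66 * mr_bar    # lower control limit, individuals chart
    ucl_mr = 3.267 * mr_bar          # upper control limit, moving-range chart

    return {"x_bar": x_bar, "mr_bar": mr_bar,
            "ucl_x": ucl_x, "lcl_x": lcl_x, "ucl_mr": ucl_mr}

In practice the check for biased limits described above would be applied before these limits are frozen for routine use.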
In the pharmaceutical industry many processes provide data in the form of individual values. The release data e.g. give many examples:
■ Hardness of tablets
■ Assay of an active pharmaceutical ingredient
■ Weight of tablets
■ Concentration of impurities
■ Thickness of tablets
■ pH
■ Dissolution of tablets
■ Yield in % of input (packaging)
■ Disintegration of tablets
■ Cycle time
In all of these examples a single value represents the quality of a batch or lot. Each sample provides one value only. In such cases it is appropriate to
use the I-MR control chart for individual values and moving ranges.
The control chart for individual values and moving ranges of the measurements used in the example in Figure 20.C-3 is displayed in Figure 20.C-8.
Figure 20.C-8 Control chart for individual values and moving ranges
These statistical rules (which are a selection only) are used by many companies throughout the industry and are also used in software programs. For
better understanding at the shop floor level, these rules can be visualized as demonstrated in Figure 20.C-13.
Figure 20.C-13 Statistical rules
For defining whether a process is “in statistical control” (ISC) or “not in statistical control” (NISC), the absence or presence of data points beyond the control limits is used. These rules were identified with a * in the listing above and are marked with the numbers 1 and 3 in Figure 20.C-13.
If one or more statistical rules apply, special cause variability is likely to be present. Although it is statistically possible for example for an ISC process to
display a data point outside of the control limits, this is a very rare and unlikely event, and as such it is more likely that the data point is due to a special
cause and so is an indication of an NISC situation.
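Because the full listing of statistical rules is shown in the figures, only an illustrative sketch is given here. It implements the decisive rule for the ISC/NISC decision (a point beyond a control limit) plus one commonly used run rule; the run length of 8 is an assumption, as rule sets differ between companies and software packages.

def nisc_signals(values, lcl, ucl, run_length=8):
    """Flag indices that suggest special cause variability.

    Rule A: a point beyond a control limit (used for the ISC/NISC decision).
    Rule B: a run of `run_length` consecutive points on one side of the
            centre line (illustrative only; the exact rule set varies).
    """
    centre = (ucl + lcl) / 2  # for an individuals chart this equals the process average
    signals = []
    for i, v in enumerate(values):
        if v > ucl or v < lcl:
            signals.append((i, "beyond control limit"))
    side = [1 if v > centre else -1 for v in values]
    for i in range(run_length - 1, len(values)):
        window = side[i - run_length + 1 : i + 1]
        if all(s == window[0] for s in window):
            signals.append((i, f"run of {run_length} on one side"))
    return signals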
If this special cause variability has a negative impact on the process the team should investigate, maybe by using additional quality tools such as
brainstorming and Ishikawa diagram, in order to find the root causes and to eliminate them. If this special cause variability has a positive impact on the
process the team should also investigate and find the root cause in order to make it part of the standard process.
It is important to point out the significance of the statement that the process is ISC or “in statistical control”. In this case the process only displays
natural variability or as Dr. Deming stated, common cause variability. The benefit lies in the fact that such processes are stable over time and thus
predictable. It also means that all variables which have an impact on the output are known or at least well controlled. Therefore the parts or items between the tests, which have not been tested, can be expected to behave like the overall ISC process. This assurance naturally only
exists and applies for processes which are truly in statistical control. The consequences of bringing a process into statistical control could be: reducing
inventory or reducing the frequency of testing. This aspect should be considered when discussing Risk-Management.
As discussed in Chapter 20.C.2.1 The two types of variation, process variation can be described by the equation
Y = f(X1, X2, …, Xn) + E
where the Xs are the input process parameters which have an impact on the output Y, and E represents the special cause variability. A process which is ISC has no or only a negligible E component. Therefore the equation for the ISC process becomes:
Y = f(X1, X2, …, Xn)
In summary it can be stated that processes which are ISC, when compared to other processes, are likely to have
■ the lowest risk,
■ the highest quality (provided that the capability is given),
■ the lowest cost,
provided the natural variability is within the specification limits. For details please see Chapter 20.C.2.10 Process capability.
20.C.2.6 Use of the initial study
Once the initial study has been established the control limits are extrapolated for use in the future as shown in Figure 20.C-14.
Figure 20.C-14 Initial study and control to standard
Each time a new measurement is generated the data is plotted in the control chart. The statistical rules are then used to detect any anomaly, trend or
change in the process. Mathematically the new data is compared with the data from the initial study in order to detect statistically significant changes
between the initial study and the new data points. As long as the process does not undergo any modifications, the initial study should continue to be
representative of a stable or ISC process indefinitely. Conversely, an initial study will no longer be representative of the process after the process has
been modified in any way, requiring a new initial study be completed.
Sometimes very slow trends affect a process. If the control limits are recalculated with each new data point the trend might not be detected. It is
therefore recommended not to recalculate too often. In general there are four rules to keep in mind when considering the recalculation of control limits.
These are listed in Figure 20.C-15.
Figure 20.C-15 Prerequisites for recalculation of control limits
When all four of these rules are met, a new initial study can be run and new limits for the process determined.
20.C.2.7 Control charts for attribute data
Variable data often provide information regarding a quality characteristic which is measured on a scale. Many times these characteristics have
specifications and are registered in the dossier. Examples are listed in Figure 20.C-16.
Attribute data often refer to defects and defective parts. In these cases the tests are of a “go / no-go” nature. There are only two possible outcomes: the
product or the parts inspected are “good” or some of these parts are “not good”. Some examples of attribute data are also given in Figure 20.C-16.
Figure 20.C-16 Attribute and variable data
For attribute data the most common control charts used nowadays are the “c” and “u” charts for the Poisson distribution and the “np” and “p” charts for
the binomial distribution.
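As an illustration of how limits for a p-chart can be derived, here is a minimal sketch using the standard binomial formula and assuming a constant sample size; the worksheet values behind Figure 20.C-5 are not reproduced.

import math

def p_chart_limits(defectives, sample_size):
    """Control limits for a p-chart (fraction defective per sample),
    assuming the same sample size n for every sample."""
    fractions = [d / sample_size for d in defectives]
    p_bar = sum(fractions) / len(fractions)
    sigma_p = math.sqrt(p_bar * (1 - p_bar) / sample_size)
    ucl = p_bar + 3 * sigma_p
    lcl = max(0.0, p_bar - 3 * sigma_p)  # a fraction defective cannot be negative
    return p_bar, lcl, ucl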
As test results in the form of attribute data do not provide any information regarding how bad the bad parts are, there is a tendency to convert these tests in order to obtain variable data. This is most commonly achieved by creating a Likert scale which establishes several degrees of severity for a
particular attribute, ranging from perfectly acceptable to absolutely not acceptable. Each of these levels can then be supported with a standard, allowing
for a repeatable qualification of the severity. For more information on this topic please consult Chapter 20.E References.
In addition the level of defects considered to be acceptable has decreased drastically over the last decades, especially in the pharmaceutical industry and particularly in Japan. For example, the printing on the labels of a drug bottle has to be perfect and will otherwise be rejected, even though such a
defect has no impact on the quality of the drug itself. Such low acceptance levels for defects also necessitate an increased sample size. If the
acceptable defect rate for bulk tablets is 3 broken tablets in 300.000 tablets, the sample size must be quite large in order to find a broken tablet. If the
test is of destructive nature then such sample size is intolerable.
Since the acceptable defect rate is ever decreasing, we may ask ourselves whether it is useful to establish a control chart with an average defect rate and an upper control limit, to bring such a process into statistical control and to keep it constant (see Figure 20.C-16). The aim for defects is
always toward zero. It is therefore often suggested that defects be handled with other tools than with control charts within the process improving
strategy.
The first step in reducing defects would be a Pareto Analysis with regard to the type of defects. Alternatively, the ranking can be based on the defects expressed in monetary terms. Once the ranking is established, each defect should be addressed individually in order to reduce it to zero or to an acceptable level
as close to zero as possible. Instead of control charts for attribute data, tally sheets and defect record sheets can often be used with great success (see
Figure 20.C-16).
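A Pareto analysis of defect types, as suggested above, might be sketched as follows; the defect categories and counts are purely hypothetical.

from collections import Counter

def pareto(defect_records):
    """Rank defect types by frequency and report their cumulative share."""
    counts = Counter(defect_records)
    total = sum(counts.values())
    cumulative = 0
    for defect, count in counts.most_common():
        cumulative += count
        print(f"{defect:<20} {count:>5}  {100 * cumulative / total:5.1f} % cumulative")

# Hypothetical tally-sheet data
pareto(["print smeared"] * 42 + ["label skewed"] * 17 + ["cap loose"] * 6 + ["broken tablet"] * 3)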
In case the reader is interested in the set-up and theory of control charts for attribute data please refer to Chapter 20.E References.
20.C.2.8 Automation and SPC
Some processes in the pharmaceutical industry are heavily automated, such as tabletting, encapsulation or the filling of ampoules, vials and bottles. The
question remains whether SPC can or should be applied. These processes in question should first be analyzed in order to see whether they are ISC (in
statistical control) or if they are NISC (not in statistical control).
Figure 20.C-17 Automation and SPC
SPC – Statistical Process Control: long-term
APC – Automatic Process Control: short-term
Common objectives:
■ Elimination and / or reduction of variability
■ Improvement of process transparency
■ Improvement of processes
As shown in the left hand half of Figure 20.C-17 the aim of SPC is to highlight and hence eliminate the special cause variability by finding the root
causes and eliminating them. This monitoring and finding of root causes is performed by the employees within the “continuous improvement process”.
Since it is generally the output characteristics that are used to monitor the health of a process, the cost for these tests can be substantial. Similarly the
cost for elimination of the root causes can also be substantial.
In comparison, automatic process control tries to compensate for disturbances which may occur over time. In order to accomplish this, some process characteristics are measured continuously or at intervals and compared with set points. Any difference between the actual measurement and the set point is compensated for by a signal sent to the actuator in the process. In these cases the input process parameters (often referred to as Xs
from our equation in Chapter 20.C.2.1 The two types of variation) which have an impact on the output parameter have to be known in order to correct the
process output. There are several techniques to achieve this, the most widespread of which today is the use of the Six Sigma methodology. This technique is used to create, amongst other benefits, a more intimate knowledge of the process, including the identification of all main influencing factors, and a quantification of how they affect the process output (see Chapter 20.B Six Sigma).
Figure 20.C-18 shows a closed loop controller, which measures certain critical aspects of the process and sends the signal to the automatic controller.
Figure 20.C-18 Combination of SPC and APC
When necessary, a signal is generated back to the machine to correct the process, thus creating a closed feedback loop. This figure also shows the
control loop for the SPC activities. As one can easily observe the possibilities available to the team are more advanced and might have an impact on
manpower, the method and the material as well as on the machine itself.
The fact that processes are automated does not necessarily mean that these are ISC. As an example, a bottle filling machine for a drug product was
installed and validated. After some time a “process robustness analysis” was performed. The analysis showed some significant deviations. The process
was definitely NISC and in addition some of the measurements also were well beyond the specifications (Figure 20.C-19).
Figure 20.C-19 Fill weight of drug bottles
This analysis is based on IPC data. Due to the fact that the process is NISC it is unknown how the process performs between the IPC tests. It may well
be that the measurements outside the specifications were just the tip of the iceberg and nobody really knew how many OOS (out of specification) units
actually went to the customer.
Similar results have been experienced with tabletting processes which are also highly automated. The example shown in Figure 20.C-20 displays data
from the dissolution test. Each time the dissolution is tested six tablets are dissolved. The minimum value, the maximum value, the average and the
standard deviation are all recorded. For reasons of clarity all individual values have been plotted.
Figure 20.C-20 Dissolution of tablets
Figure 20.C-20 shows the shift in the average as well as the points beyond the control limits. Therefore this process is NISC. It is impossible to predict
how the dissolution values vary between the tests. If more tablets are analyzed it could very well be that there are some tablets with a dissolution below
the specification of 85. This indicates a relatively high risk, as is always the case with processes which are NISC.
20.C.2.9 The link between the measurement system and the manufacturing process
When measurements from samples are taken from a process and characteristics such as hardness, dissolution, cycle time or yield are determined, one
commonly observes variation similar to that seen in Figure 20.C-19 and Figure 20.C-20. This overall control chart variation (generally called VCC) has two
sources as shown in Figure 20.C-21.
Figure 20.C-21 The link between process and measurement system
One part of the total variation is derived from the actual process (VP) while a second part comes from the measurement system (VMS). In order to
detect changes in the process, the total variation of the measurement system has to be small relative to the variability of the process. Over the
past few decades there have been several standards. The tendency today is to accept the total variation of the measurement system VMS to be a
maximum of 15 % of the specification width (or of the total process variation). Some software packages suggest accepting only 10 % of the specification
width as the total variation of the measurement system. On the other hand, for very complex measurement systems, it must be admitted that sometimes up to 30 % may have to be accepted.
Naturally, the measurement system should also be ISC, because otherwise it could have an unpredictable influence on the process parameter being measured.
When discussing the total variation of the measurement system one should not just think of the variation due to the instrument. As the name implies
it is a whole system within which many potential sources of error can be listed: taking the sample, cleanliness of the sampler and container, transport to
the laboratory, storage of the sample, sample preparation, preparation of the machine, performing the analysis/measurement, calculation of the results,
documenting the results and finally reporting of the results. Therefore one should not confuse this total variation of the measurement system with the
validation of the method used. Often, the conditions during validation are not comparable with the conditions of the routine measurement system.
Specific investigations in the pharmaceutical industry have shown that the variation of the routine measurement system was up to 3,5 times higher than
the variation during the validation.
This link between the measurement system and the process leads to a clear consequence. If the control chart shows a signal that something has
changed (as evidenced by one or more of the statistical rules), one should keep in mind that the root cause could be within the process or within the
measurement system.
In cases where the measurement system is ISC and its total variability is less than 15 % of the specification width or the process variation, then the
probability that any signal might have its root cause in the (production) process is quite high. This implies that the measurement/signal found is at least
3 standard deviations (of the measurement system) apart from any specification limit.
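A quick way to express this acceptance check is sketched below; the 15 % threshold is the value quoted above (some organizations use 10 %, or up to 30 % for complex systems), and the example figures passed to the function are hypothetical.

def ms_share_of_specification(vms_6s, usl, lsl, threshold=0.15):
    """Compare the total variation of the measurement system (expressed as a
    6s width) with the specification width and the quoted acceptance threshold."""
    share = vms_6s / (usl - lsl)
    return share, share <= threshold

share, acceptable = ms_share_of_specification(vms_6s=1.2, usl=95, lsl=70)
print(f"measurement system uses {share:.1%} of the specification width; acceptable: {acceptable}")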
20.C.2.10 Process capability
In the previous chapter the importance and the benefits of a process which is ISC compared to a process which is NISC were described. When
evaluating a process, one should always also consider the capability of the process, that is, the comparison of the total variation of a process which is ISC to the specification width defined by the customer, within which the process must operate.
Figure 20.C-22 shows a graphical example of the concept by considering the relative size of a car and the garage you want to park it in. On the left hand
side the car is comfortably smaller than the garage, which means the car fits into the garage with a buffer zone or room to spare on either side. In the example on the right the car is almost as wide as the garage. In this case the car is a very tight fit and the slightest movement to either
side would mean the car will hit the garage walls and cause damage. This could be considered the bare minimum capability. The example displayed in
the middle of Figure 20.C-22 shows a car which clearly does not fit into the garage.
Figure 20.C-22 Capability of a process: Cars and garages
Figure 20.C-23 The four (six) possibilities based on stability and capability
The question “ISC/NISC” and the question “capable/not capable” lead to a total of four distinct possibilities when analyzing parameters with specifications. We get six possibilities when we include yield and cycle time, which normally do not have specifications. Nevertheless, for business reasons it is a very important question whether these parameters are in statistical control or not – both yield and cycle time have monetary aspects for the company. The possibilities mentioned here are illustrated in Figure 20.C-23.
In this figure the column in the middle refers to the cases where all the data analyzed are within the specifications. The column to the right refers to
cases where some or all of the data analyzed fall beyond the specifications. The rows refer to data which is either ISC or NISC. There is also a fifth/sixth
option for data without any specification as may be the case with some forms of yield and cycle time.
Figure 20.C-24 Process capability
The capability of a process, that is how well it fits the specifications, can be expressed with numbers. It is however important to note that only ISC
processes should be analyzed. If a process is NISC, or in other words unpredictable, the calculation of these indices is meaningless and
the results could be misleading. In some articles this is even referred to as “data terrorism”. This is why the Cpk values in the lower row of Figure 20.C-23
have been crossed out. If these indices are calculated anyhow the outcome is often deceptive and may indicate results which deviate widely from the
true capability of the process.
It is the aim of process improvement activities in general as well as in the interest of the customer and of regulatory agencies to only ever encounter
process characteristics which are ISC and capable. Process characteristics such as yield and cycle time should always be ISC. As a result, many
companies analyze their processes based on the model displayed in Figure 20.C-23, in order to find out which of the many quality characteristics of their
products fall into which category. Such an analysis is called a Process Robustness Analysis.
In Figure 20.C-24 we can see how the Cp value is the ratio of the process tolerance (or the difference between the upper and lower specification limits)
and the total variation of the process, given as 6s or six times its standard deviation. It is important to note however that this index does not consider the
position of the process with respect to the specification limits. It therefore only informs us of the relative spread of the process to the specification
window. If the total variation of the process (6s) is larger than the specification width (USL–LSL), then the Cp value will be smaller than 1. If the total
variation of the process is equal to the specification width, then the Cp value will be equal to 1,0, and if the total variation of the process is smaller than
the specification width, then the Cp value will be larger than 1. The Cp is thus generally referred to as the process capability entitlement or the best
capability the process could achieve simply by centering the process within the specification limits. A Cp-value of 1,0 is generally considered the
minimum acceptable, and if one takes into account the difficulties surrounding process centering, a value between 1,33 and 2,0 is often expected.
If we wish to also consider the position of the process with respect to the specification limits in this evaluation, the Cpk is the correct coefficient to use.
First the Cpu is calculated to determine the position of the process with regard to the upper specification limit. Then the Cpl is calculated to determine
the position of the process with regard to the lower specification limit (see Figure 20.C-24). The Cpk is then the smaller of these two values since it
represents the “poorest” capability, which is the situation where the process is closest to one of the two limits and so at highest risk of producing results
outside of the specification limits.
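The calculations described for Figure 20.C-24 can be sketched as follows, using the standard definitions of Cp, Cpu, Cpl and Cpk, with s being the (short-term) standard deviation of an ISC process.

def capability_indices(mean, s, lsl, usl):
    """Cp, Cpu, Cpl and Cpk for a two-sided specification."""
    cp  = (usl - lsl) / (6 * s)      # spread of the process vs. specification width
    cpu = (usl - mean) / (3 * s)     # position vs. upper specification limit
    cpl = (mean - lsl) / (3 * s)     # position vs. lower specification limit
    cpk = min(cpu, cpl)              # the "poorest" of the two positions
    return cp, cpu, cpl, cpk

For a perfectly centered process Cp equals Cpk; any off-center shift lowers Cpk while Cp stays constant.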
The process capability indices can also notably be divided into short-term capability, referred to as Cp or Cpk, and long-term capability, referred to
as Pp and Ppk. The only difference between the two indices relates to the nature of the standard deviation estimation used in their calculation. The
short-term standard deviation refers to data representing a period of the process during which there are no known assignable causes of variation as well
as no drift of the process, and only a limited number of causal factors affect the output. In the case of a traditional production process this might for
example mean data originating from only random variation within a subgroup of overall process variability, such as that from a period which used the
same machines, setup, operators, material batches, and similar. The data collected over a longer period of time, that is within and across several
subgroups, including additional sources of assignable cause variation as well as process drift over time, provide an estimate of the long-term process
variation. Such a situation might be the result of data originating over multiple machines, setups, operators, material batches or changing environmental
conditions. As can be deduced from the definitions above, the duration of a single subgroup period is highly dependent on the nature of the process at
hand and the variables it is influenced by, and can be as short as a few minutes and as long as hours or days. In general, data is considered to be long-
term if it is thought to include at least 75–80 % of the overall process variability. Naturally, the long-term variability
of a process will generally be greater than its short-term equivalent, and so the Pp or Ppk value will generally be smaller than the corresponding Cp or Cpk
value. Today the widespread availability of process data as well as the use of computers and statistical programs such as Minitab, Statgraphics or other
software packages make calculating both indices a very quick and simple task.
If the data do not come from a pure chance system, i.e. special cause variation is present, then the Cpk will be higher than the Ppk. If the data truly come from a chance system and display only common cause variability, then the Cpk and Ppk are identical. When using statistical software programs it is very easy to get both coefficients. It is therefore suggested to look at both values. If these are very different, then there is probably not a true chance system and it is very likely that special cause variability is present.
Figure 20.C-25 summarizes the various possibilities regarding Cp and Cpk, which apply equally to the Pp and Ppk.
Figure 20.C-25 Process capability: Distribution curves
The first three rows show the Cp values. The fourth row shows a process in which the process average is centered and 5s distant from the specification
limits. The Cpk for this process is thus 1,67 (5s/3s or 5 divided by 3). The portion of the process beyond the upper specification limit is 0,00003 %. As
the process shifts to the right, the portion of the process outside the upper specification limit increases and the Cpk becomes smaller. The Cpk
becomes zero if the process average is equal to a specification limit, meaning half the process lies beyond this specification limit. If the process average
lies outside the specification limit then the Cpk becomes negative. If the buffer between the process (assumed to be 6s wide or 3s to each side of its
mean) and the specification limit is at least 1s the Cpk value grows to 1,33 (4/3). In cases where the buffer between the process and the specification
limit is at least 2s, the Cpk reaches a value of 1,67 (5/3) and if the buffer is 3s the Cpk becomes 2,0 (6/3). When looking at Figure 20.C-25 we see the
process moving to the right. Even though the total variation is constant (Cp is constant) the Cpk is decreasing. When the process average is outside the
specification then the Cpk becomes negative.
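The relationship between the Cpk and the fraction of the process beyond the nearer specification limit can be illustrated with a short sketch, assuming a normally distributed process; the 0,00003 % quoted above corresponds to a Cpk of 1,67.

import math

def fraction_beyond_limit(cpk):
    """Fraction of a normally distributed process beyond the nearer
    specification limit, given its Cpk (distance to that limit = 3 * Cpk * s)."""
    z = 3 * cpk
    return 0.5 * math.erfc(z / math.sqrt(2))

for cpk in (1.0, 1.33, 1.67, 2.0):
    print(f"Cpk = {cpk:4.2f}  ->  {fraction_beyond_limit(cpk):.7%} beyond the limit")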
When calculating the long-term capability it is suggested to collect the data over a longer period of time in order to ensure sufficient long-term variability
has been included. In the pharmaceutical industry it is often recommended this should be at least six months, though it generally depends on the
processes being studied. Companies should set objectives for the Cpk values of critical characteristics as well as for the time frame over which data for
determining whether the process is ISC or not are collected. In 2009 a good process capability in the pharmaceutical industry was considered to be a
Cpk of 1,3 or greater.
It was mentioned earlier that the process has to be in or close to statistical control when calculating the process capability. Using data which are clearly
not ISC can lead to unreliable and possibly misleading results. Figure 20.C-26 explains the reasoning behind this prerequisite. The data from the bead
game as explained in Chapter 20.C.2.2 Descriptive methods and their shortcomings have been used. For this example the upper specification limit has
been set as 95 and the lower specification limit as 70. The data in chronological order were used to construct a control chart for individual values and
moving ranges. This can be seen on the left hand side in Figure 20.C-26. The standard deviation was calculated according to the general formula and
resulted in a value of 3,34.
Figure 20.C-26 Process capability: data in chronological order vs. sorted by magnitude
The Cpk was calculated using the average range as the basis for the calculation of the standard deviation s, with the d2 = 1,128 (as n = 2 when
calculating the range between two consecutive values). The resulting Cpk is 1,24. The Ppk was also calculated using the general formula (with n = 30).
The Ppk obtained is 1,18. Since the data is ISC this calculation is acceptable. The values for Cpk and Ppk are similar, with only a slightly worse
capability over the long-term.
The same data was then sorted in order of magnitude. This control chart for individual values and moving ranges is displayed on the right side in Figure
20.C-26. The standard deviation for this set of data is the same since the sequence or chronological order has no impact on the general formula for s.
Also, the value for the Ppk is the same as in the case with the chronologically ordered data. Due to the sorting the ranges have however changed
significantly. Therefore the estimate for s used in the short-term capability calculation is different, leading to a very different Cpk value of 9,24 instead of
the 1,24 obtained previously. In this case it is obvious that the process is NISC, as it is not constant over time and is not predictable. The calculation of process capability, however, only makes sense for ISC processes, as mentioned above.
The same results are obtained if the capability is calculated using a software package such as Minitab or Statgraphics. In this case the difference comes
from the two different estimates of s calculated as within each subgroup (as used for estimating Cpk) or overall (as used for calculating the Ppk value).
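The two estimates of s referred to here can be sketched for individual values as follows: the within (short-term) estimate from the average moving range divided by d2 = 1,128, and the overall (long-term) estimate from the general formula. This reflects the approach behind the bead game numbers quoted above; the function name is illustrative only.

import statistics

def cpk_ppk(values, lsl, usl):
    """Short-term (Cpk) and long-term (Ppk) capability for individual values."""
    mean = statistics.mean(values)

    # Short-term s: average moving range of two consecutive values divided by d2 = 1.128
    mrs = [abs(values[i] - values[i - 1]) for i in range(1, len(values))]
    s_within = (sum(mrs) / len(mrs)) / 1.128

    # Long-term s: overall sample standard deviation (general formula, n - 1)
    s_overall = statistics.stdev(values)

    cpk = min((usl - mean) / (3 * s_within), (mean - lsl) / (3 * s_within))
    ppk = min((usl - mean) / (3 * s_overall), (mean - lsl) / (3 * s_overall))
    return cpk, ppk

Sorting the same data by magnitude shrinks the moving ranges and hence the within estimate of s, which is why the sorted data produced a misleading Cpk of 9,24 while the Ppk stayed at 1,18.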
We hope this example will lead you into the habit of always asking two questions when reporting or discussing capabilities:
■ Is the process ISC?
■ What is the time span for the data used for this analysis?
20.C.3 Goals/Objectives/Benefits
One of the main objectives of using SPC is to bring a process into statistical control (ISC) from a previous state of NISC. This usually involves finding the
root causes for special cause variability. If this variability has a negative impact on the process, as is often the case, the root causes should be
eliminated. If the special cause variability has on the other hand a positive impact on the process, it is equally important that it should be investigated
and understood, with the goal of making it a permanent part of a more stable and improved process.
Once the process is ISC another objective could be to improve the process capability. In such cases the process parameters should be systematically
analyzed with regard to their impact on the process output, either directly and individually, or through interactions with other parameters. This is
commonly achieved by using Design of Experiments (DoE). Since this endeavour generally requires a greater level of resources and knowledge, it is
often ideally suited to the Six Sigma methodology, where a Black Belt tackles the issue through five distinct phases known as Define, Measure,
Analyze, Improve and Control (DMAIC) in order to gain a deeper understanding of the process's inner workings and thus improve the process (see Chapter 20.B Six Sigma).
Another objective might be to reduce the overall risk associated with the process. This can be achieved by again bringing the process into ISC and
achieving a high capability (e.g. >1,6) which greatly reduces the probability of an out of control situation occurring.
Yet another objective could be to monitor excellent processes (meaning ISC and capable, or with a Cpk > 1,3) to prevent a negative trend from developing
or getting out of hand. Since nature does not prefer the status of ISC, it is very likely that after some time a process will become affected by some form
of special cause variability. The aging of processes is a good example of how previously ISC processes can over time become NISC. Long-term
monitoring can nowadays be achieved quite easily by electronic means. Technology is now readily available to take data from various data sources and
automatically generate control charts (be it a case of initial study or control to standard), as well as monitor ongoing processes and warn of any NISC
events or trends. These automatically generated results can then be posted on the intranet or internet at specified intervals, allowing remote monitoring
as well as building confidence in the high standards achieved. It is also possible to generate e-mails or other forms of general notification in case any
points are showing signs of NISC, which in turn permits a rapid, focused and well informed preventive intervention and effective correction of the process.
One further tool should be mentioned: Process Robustness Analysis (PRA). This is a simple yet powerful tool which is used by companies to evaluate
all their processes with regard to particular quality characteristics, yield and cycle time. The PRA analysis leads to the creation of a portfolio of
opportunities for further improvement as well as creating a risk-overview of the process. The Process Robustness Index (PRI) can then be used as the
yardstick for internal and external benchmarking, evaluation of subcontracted manufacturer’s capabilities and by management for setting objectives
concerning the continuous improvement process and Six Sigma projects.
20.C.4 Implementation
20.C.4.1 Prerequisites
There are certain fundamentals which, while not being indispensable, generally lead to a more effective use of SPC. First of all, we suggest always using
first time through data (FTT data) rather than release data! The intention of using SPC is to establish a true picture of the process and this can be
achieved best by using the original process FTT data, that is before any form of sorting or rework is undertaken. Sometimes some organizations like to
present a more “positive” picture and so prefer to use release data which have generally already been “cleaned” of most non-conformances. However, this only encourages the so-called “hidden factory”, which focuses on reworking non-conforming product rather than fixing the problem at the root cause. A
simple means of detecting this situation is by constructing a frequency distribution of the data. If the data is “cut-off” at or close to the specification value,
chances are high that either some lots have been removed or that rework was done prior to this measurement being obtained, neither of which is in
reality adding value to the organization.
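A frequency distribution for this purpose can be tallied with a few lines; in this minimal sketch the bin width is illustrative, and a pile-up of counts just inside a specification limit would be the warning sign described above.

from collections import Counter

def frequency_distribution(values, bin_width):
    """Tally values into bins of the given width and print a simple text histogram."""
    bins = Counter((v // bin_width) * bin_width for v in values)
    for lower in sorted(bins):
        print(f"{lower:6.1f} - {lower + bin_width:6.1f}: {'#' * bins[lower]}")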
Another common practice for organizations is the use of data storage systems where only one value can be stored for a particular parameter in each
production lot. In cases where the first measurement results in an unacceptable value, a second sample might be taken and analyzed. The new value
might then overwrite the first result. In such cases it might be difficult to obtain true FTT data from the data storage system, and an alternative data
collection system might be needed.
Several levels of hierarchy are involved in the implementation and application of SPC, as will be outlined later in Chapter 20.C.4.3 How to implement. As
such it is very important that each level of the hierarchy gets the appropriate training. A one day basic training (8 hours) is considered essential in most
cases. Here the two types of variability are explained, the set up of control charts practiced, and process improvements with the aid of control charts
experienced in real-life examples and exercises. Such training is a good investment for all management levels. All managers including the site manager
should have detailed enough knowledge of the topic to ask relevant questions as well as to identify potential issues in review meetings. It is strongly
recommended for example that site managers should review the progress with regard to the application and use of SPC in their organizations at least
twice per year.
20.C.4.2 When to apply/when not to apply
In situations where SPC is implemented in order to improve processes in manufacturing and production, the processes should ideally run fairly evenly
over the whole year. Specifically one should avoid very large time gaps where the process does not run in the initial period of application, as this may
lead to confusion in the implementation, errors in the evaluation of results and lapses in the corrective steps required. Excellent results have been
achieved where dedicated equipment was running 24 hours per day with the same product, though less frequent but regular runs are just as effective a
target for SPC applications.
As is generally the case, best results are achieved when senior management clearly supports the use of SPC, promoting and encouraging shop floor
employees to be actively engaged in the resulting continuous improvement process.
Some people fear that difficulties might be encountered in process development where only a few lots of the same size are generally produced. The
measurement systems in process development can however easily be monitored as well since specific SPC tools allow for different batch sizes as well
as other common process characteristics.
20.C.4.3 How to implement
In cases where SPC will be used for process improvement, senior management should be involved in the implementation and the use of SPC, as well as
defining how it will be used to monitor and guide improvement efforts. Ideally the site manager in cooperation with the site management team should set
objectives for individual managers and their respective areas, while these should then break the objectives down for all employees in their group. Shop
floor employees could then have individual objectives agreed upon, or common objectives could be created for each group or shift. Once the objectives
are agreed upon, everyone from the site manager to the shop floor employee should be trained in the basic principles of SPC. It is essential that
everyone should know his or her role with regard to the use of SPC in the organization.
■ The site manager’s role, as mentioned previously, should be to include the SPC data in his regular review meetings, evaluations and decision
processes.
■ The production managers on the other hand should identify the processes and the characteristics to be monitored by the shop floor personnel. They should also review the status of SPC on a regular basis, e.g. every month, with their supervisors and shift employees. The production
managers should have the necessary training and experience to interpret the general messages the SPC data are relaying and to know when it is
necessary to dig deeper in order to get clear answers.
■ The supervisors should have the training necessary in order to set up a control chart and to recalculate the control limits if necessary, even if this
may ordinarily be carried out by a different person or generated automatically. They should also be able to interpret the control chart accurately, as
well as to identify conflicting or illogical situations.
■ The shop floor employees should participate in the data collection as well as the regular meetings where the status of the process is discussed
with the supervisor. The shop floor employees should also participate in brainstorming for identifying possible root causes of special cause
variability, as they are often the people with the most hands-on experience on the process. They can also be involved in collecting data in order to
identify and test these potential root causes, and naturally will be involved in defining and implementing any improvements agreed upon.
Since the root causes for special cause variability can also be within the measurement system or method itself, it is a good idea to evaluate these
measurement systems on a long-term basis. Specifically one should consider those measurement systems where the production samples may
experience OOS (out of specification) values. Here we have the same situation as in production, and so the manager of the laboratory should have the
same level of training as the laboratory assistant who runs the control charts on a regular basis.
Another scenario might be one where SPC is used for monitoring excellent processes electronically. In such cases the site management team should
decide which functional area is in charge of this activity, which experts are in charge of the set up of such a system and who will be the experts who
review the control charts on a regular basis in order to detect deviations or new NISC situations. It will also be necessary to determine where to publish
the control charts. In cases where a notification will be sent automatically to a particular list of recipients by email or similar, the recipients have to be
determined and trained, and the necessary actions have to be decided upon in order to prevent a NISC event from being mismanaged and progressing
from a preventive warning to a full blown corrective intervention.
In such automatic or electronic systems it is also important to decide whether the limits are to be calculated from
a) all available data and recalculated with every new value or
b) using the control to standard method where the initial study has been determined as the basis.
In most cases option b) is used because deviations from the initial study performance levels as well as slow trends are both detected earlier than when
using option a). Should the process undergo a change, then a new initial study should be undertaken, rather than allowing a very large and slow drifting
set of data to mask the changes. Please also consider the prerequisites for recalculating control limits as described in Chapter 20.C.2.6 Use of the initial
study.
20.C.5 Tools
■ Control charts
■ Brainstorming
■ Pareto Analysis
■ Regression Analysis
20.C.6 Variations
One company the author has come across took a very different approach to applying SPC. Here the QC department generated control charts of all quality
characteristics where the measurements were generated in the laboratory based on samples from production. These control charts were updated every
24 hours during night time and published on the intranet. Anyone within the organization could view these charts. After two years of application, they
decided to reorganize this approach due to the low efficiency of this kind of SPC in creating true improvements to the processes.
20.C.7 Examples
Company X produced an active pharmaceutical ingredient. The process was not stable and 11 % of all the lots had to be recrystallized due to the high
amount of impurities. A team was formed and SPC was implemented. With the aid of the SPC charts, four different special causes were identified and
eliminated. After these improvements, the production was ISC, and the manufacturing cost had been reduced by 23 %.
Company Y had a problem with several quality characteristics being NISC and also experienced several out of specification incidents over the year. The
QC manager decided to investigate all measurement systems (methods) over a longer period of time. From a total of 48 different measurement systems
less than 50 % were ISC and capable. SPC was introduced to the QC department and all employees involved in measurement systems were trained in
SPC. All the measurement systems were monitored with control charts. After three years of application of SPC, the number of methods reported as ISC
and capable had increased significantly.
Company Z introduced SPC in the packaging area. Bulk tablets were packaged and distributed after release by QC. The original cycle time was about 16
days including the release process. After the application of SPC, the highlighting of special cause variation and the resulting continuous improvement actions, the
cycle time was reduced to less than 7 days including the release process. This improvement was achieved within 6 months.
Summary:
Statistical Process Control (SPC) is a tool used to monitor, analyze, and improve the variability of a process.
The average and standard deviation are well known parameters which are often used to characterize a process. However the standard deviation is not
well suited to describe a process over time.
Control charts display data in a chronological order with relation to boundaries which have been derived from the process itself (e.g. initial study),
therefore they can highlight, and ideally predict a process change which might lead to the situation “out of control” if no one reacts to the signal.
The set-up of control charts requires data from an initial study. The interpretation of control charts is done with the aid of statistical rules.
The aim of SPC is to highlight and hence eliminate the special cause variability by finding the root causes and eliminating them. In comparison to that,
the automatic process control tries to compensate disturbances which may occur over time.
The process capability compares the total variation of an ISC process to the specification width defined by the customer, and gives information about
the position of the process with relation to the upper and lower specification limits.
The objective of SPC is to bring processes into statistical control as a prerequisite for improvement of the process capability. Capable processes bear a
lower risk of out-of-control-situations and show higher efficiency.
One of the prerequisites for implementing SPC is the use of first time through data.
In cases where SPC will be used for process improvement, senior management should be involved in the implementation and the use of SPC. All
hierarchies involved should know their roles and objectives and receive adequate training.
20.D Process Analytical Technology (PAT)
Up09 Seamus O'Neill, Bronwyn Grout, Brad Diehl, Elen Garsthein, Steve Hammond, Mojgan Moshgbar, Simon Maris, Joep Timmermans,
Karl Redl, John O'Sullivan
Here you will find answers to the following questions:
■ What is Process Analytical Technology (PAT)?
■ How is PAT being introduced in process development and in manufacturing?
■ How can PAT add real business benefit?
■ What is the regulatory perspective on PAT?
■ How do you introduce PAT effectively in a GMP environment?
■ What type of applications have been successful?
■ How will PAT change manufacturing and quality assurance of pharmaceuticals?
This chapter covers the role of Process Analytical Technology in pharmaceutical manufacturing and development. It details GMP considerations for PAT
application and outlines how PAT is helping to enable the evolution of quality systems. A number of examples are provided to show how PAT is currently
being used.
20.D.1 Definition
Process Analytical Technology (PAT) can be defined as follows:
A system for designing, analyzing, and controlling manufacturing through timely measurements (i.e. during processing) of critical quality and
performance attributes of raw and in-process materials and processes, with the goal of ensuring final product quality.
From a holistic perspective PAT should be seen as more than the introduction of in-line (measurement in vessel), on-line (measurement in sampling
loop) or at-line (measurement close to process) analytical instrumentation. PAT should be viewed as an approach that drives the use of process
analyzers, process control tools, multivariate tools and knowledge management tools to deliver effective process understanding and control throughout
the lifecycle of a product, in a manner that adequately addresses associated risk and helps to ensure product quality.
20.D.2.2 Role of PAT in Process Development and Design
The aim of pharmaceutical development is to design a quality product and its manufacturing process to consistently deliver the intended performance of
the product. The key focus of PAT application during process development and design is the building of process understanding and the development of
the control strategy for manufacturing.
Structured experimental approaches are often used during development to fully understand relationships between critical quality attributes of the product
and process parameters. Process analyzers (i.e. PAT) can be used during experimental runs and during scale up to provide real time data and maximize
process understanding. PAT data collected during development runs may also be used to help to define the control strategy for the manufacturing
process. This control strategy may include the introduction of process analyzers to monitor and control critical quality attributes during subsequent
manufacturing operations.
The complexity of the control strategy and the level of PAT integration will vary from process to process.
20.D.2.3 Role of PAT in Manufacturing for Monitoring and Control
In manufacturing PAT should be employed to increase process understanding and/or to demonstrate and maintain a state of control for manufacturing
processes.
Most companies initially introduce PAT to help understand manufacturing processes by collecting data in real time/near time, in-line/on-line or at-line,
during manufacturing. These datasets are often used to support process optimization (e.g. optimize a cooling profile during API manufacturing to
improve a crystallization process based on in-line particle size data) or to introduce attribute based endpoints for unit operations (e.g. determining the
end point of a drying step based on moisture level rather than time). PAT provides a new window to the manufacturing process, and the process
understanding, gained from the PAT data collected from a well designed monitoring and experimentation strategy, can be very valuable. Manufacturing
processes need to be understood before they can be optimized and as a prerequisite for effective control.
PAT can also be employed to help monitor and control manufacturing processes, as part of the control strategy. The use of PAT may be focused on an
individual unit operation or across the entire manufacturing process.
20.D.2.4 PAT and Evolving Quality Systems
The pharmaceutical industry is currently showing increasing interest in the application of continuous quality verification (CQV) and real time release
(RTR) as part of the evolving manufacturing quality systems. Effective use of PAT systems is key for both CQV and RTR.
The role of PAT for Continuous Quality Verification (CQV)
CQV is an approach to process validation (see Chapter 20.D.5 Application of PAT in a GMP environment) where the performance of manufacturing
processes (or supporting utility systems) is continuously monitored, evaluated and adjusted (as necessary). It is a science-based approach to verify that
a process is capable (see Chapter 20.C.2.10 Process capability) and will consistently produce product meeting its predetermined critical quality
attributes.
PAT systems can be used as part of a CQV strategy to continuously demonstrate that a manufacturing step is producing material with the expected
attributes, and to evaluate and adjust the process, as necessary.
With enhanced process controls, Continuous Quality Verification (CQV) can be implemented in lieu of traditional three-batch process validation, where
appropriate and justified. With a CQV approach, validation is not a discrete exercise; rather, each batch is continuously monitored to demonstrate quality assurance.
The role of PAT to enable Real Time Release Testing (RTRt)
RTR testing is the ability to evaluate and ensure the acceptable quality of in-process and/or final product based on process data, which typically includes
a valid combination of measured material attributes and process controls (ICH Q8, see Chapter E.8).
PAT systems can be used to provide the initial process understanding, to help define process boundaries and to provide real time information on quality
attributes during manufacturing. It may be possible, based on process understanding, to adjust manufacturing processes during operation to ensure that
product attributes are within specifications. Product release can be based on a combination of process parameters and real time material attributes in
lieu of finished product (FP) testing.
Figure 20.D-2 shows the typical components of an RTR strategy.
Figure 20.D-2 Components of a RTR strategy
In all cases, the implementation of RTR must be based on the fundamental understanding of the process to demonstrate that real time monitoring and
controls can suitably eliminate the need for off-line testing.
An example of the use of PAT in a RTR approach is the use of real time potency data for drug product release. Potency of solid oral dose products is
traditionally monitored off-line by HPLC analysis, based on the assay of a small number of samples. By using an on-line Near Infrared (NIR)
spectrophotometer to continuously monitor the potency of either the finished blend or finished tablets/capsules, the sample size can be significantly
increased compared with traditional off-line end product testing. Ultimately the analysis of this increased sample size provides enhanced data with which
to make the release decision on the product, and also allows the traditional end product off-line HPLC analysis to be eliminated as a testing/release
requirement.
20.D.2.5 Enabling new manufacturing paradigms
Continuous Manufacturing
Pharmaceutical manufacturing has historically been predominantly based on batch processing. Manufacturers are increasingly evaluating continuous processing as a means of increasing throughput and decreasing process cost. The realization of the full benefits of continuous processing
depends to a large extent on the appropriate use of PAT.
Effective use of PAT is essential to ensure adequate control of some continuous processes. Well designed continuous processes are controlled to reach and maintain “steady state”; however, drifts or transient upsets can happen during the long processing runs associated with some continuous processes,
due to variations in starting material as well as other variations in key and critical process parameters. PAT can be used to effectively monitor the
continuous processes to identify and highlight the occurrence of unwanted trends or process upsets, to enable troubleshooting and to enable the
segregation of portions of non-conforming material. In a more sophisticated system undesirable trends would be attenuated by providing data to
Advanced Controllers that would act to maintain the process in an optimal steady state.
Furthermore, the use of PAT during pre- and post- scale-up trials can generate valuable data and information on scale-dependency of various parameters,
which may be used in support of a regulatory filing.
PAT based Advanced Process Control (APC)
PAT provides a significant advantage over traditional process control in terms of monitoring unit process or batch maturity and determining the process or batch endpoint. Of even greater potential benefit is the use of PAT based Advanced Process Control (APC) to optimally control the process outcomes in terms of both product quality, such as improved potency and reduced impurities, and business attributes such as yield, batch time and processing costs.
The premise of PAT based APC is to monitor process performance and product quality with PAT, and to use the real-time PAT data together with appropriate process models to control the critical process parameters within the desired operating range, thereby optimizing process performance and outcomes.
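A deliberately simplified feedback loop can illustrate this premise: a PAT measurement of a quality attribute is compared with its target, and a critical process parameter is adjusted proportionally while being kept within its permitted operating range. The gain, parameter names and ranges below are assumptions for illustration; real APC implementations rely on validated process models and qualified control systems.

def apc_step(measured_cqa, target_cqa, current_cpp, gain, cpp_range):
    # One illustrative proportional-control step driven by a real-time PAT measurement.
    error = target_cqa - measured_cqa
    new_cpp = current_cpp + gain * error
    low, high = cpp_range
    return min(max(new_cpp, low), high)  # keep the parameter inside its operating range

# Hypothetical example: adjust a feed rate (kg/h) to hold a measured attribute at its target
feed_rate = 12.0
for reading in [0.92, 0.95, 0.97, 0.99]:  # successive PAT readings of the attribute
    feed_rate = apc_step(reading, target_cqa=1.00, current_cpp=feed_rate,
                         gain=5.0, cpp_range=(8.0, 16.0))
    print(f"PAT reading {reading:.2f} -> adjusted feed rate {feed_rate:.2f} kg/h")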
PAT and enabling Lean Manufacturing
PAT is now being implemented by a number of pharmaceutical companies focused on improving the agility of manufacturing processes. PAT offers the
potential to contribute to reduction in manufacturing and testing lead times by
■ enhancing product understanding to enable process optimization, improve process robustness and reduce rework,
■ moving the verification of product attributes upstream, thereby facilitating RTR and reducing cycle time for unit operations,
■ enabling innovative manufacturing approaches such as continuous processing.
In tandem with FDA’s cGMPs for the 21st Century initiative, the FDA liaised with ASTM (American Society for Testing and Materials) in 2003 to form the
ASTM Committee E55 on Manufacture of Pharmaceutical Products.
As of 2009, ASTM E55 is composed of three subcommittees, two of which support application of PAT:
■ E55.01 – PAT System Management
■ E55.02 – PAT System Implementation and Practice
These sub-committees are focused on developing consensus standards, in coordination with vested stakeholders (including manufacturers, regulators,
professional trade associations etc.), that facilitate implementation of PAT by industry and clarify FDA expectations.
Currently approved standards specifically for PAT include:
■ E2474-06 Standard Practice for Pharmaceutical Process Design Utilizing Process Analytical Technology
■ E2363-06a Standard Terminology Relating to Process Analytical Technology in the Pharmaceutical Industry
Additional PAT–related standards are under development. The current status of these and other activities covering the manufacture of pharmaceutical
products can be found at the link to the ASTM E55 committee website:
http://www.astm.org/COMMIT/COMMITTEE/E55.htm
20.D.3.7 Pharmacopoeia Guidance
PAT related guidance within the USP and EP is captured in monographs for specific analytical techniques such as NIR spectroscopy. The most
notable are
■ USP <1119> Near-Infrared Spectroscopy,
■ USP <1120> Raman Spectroscopy and
■ EP 2.2.40 Near-Infrared Spectrophotometry.
These monographs provide details about the instrument capabilities, diagnostic system suitability, method validation and other scientific practices for the
associated techniques.
20.D.3.8 Regulatory Submission Requirements for PAT
Several regulatory agencies are working with industry to develop guidance around PAT. FDA and EMA, for example, have invited the pharmaceutical industry to submit applications and variations involving PAT applications as part of pilot schemes. NIHS (Japan) and TGA (Australia) have engaged in an open dialog with industry on PAT activities and applications.
In general the strategy and content of any regulatory filing (application or variation) involving PAT applications depends on many factors, including, but not
limited to:
■ the level of knowledge of the manufacturing process,
■ whether the PAT method is a replacement of, or an alternative to, an existing method,
■ whether product specifications are expected to be impacted.
Therefore, the preparation of an application or variation involving a PAT application typically requires input from quality operations, manufacturing, regulatory affairs and others.
Submissions to the FDA can originate through already established regulatory processes or by contacting the FDA PAT Team for open dialog. The
contacts can be found at http://www.fda.gov. These contacts can help outline the expected documentation content for a submission.
Submissions in the EU involving PAT applications can be processed through the current EMA pilot scheme or through established regulatory processes. The EMA PAT documents (http://www.ema.europa.eu/Inspections/PAThome.html), including a Reflection Paper on the chemical, pharmaceutical and biological information to be included in dossiers when PAT is employed, provide expectations for the content of the submission.
Most regional and national regulatory agencies have shown an interest in having early and open dialog for discussing and piloting PAT implementation
activities.
2. Assess the suppliers’ quality compliance capabilities and the robustness of the instruments that are produced under their quality system. In other words, assess how the stability and longevity of the instrument are ensured by the supplier’s quality system. This assessment may involve a number of steps:
■ a determination of whether the supplier has been evaluated previously and/or whether the supplier’s instrument is already commonly used within the firm or by other firms within the pharmaceutical industry for the same application and use.
■ a supplier quality assessment (e.g. questionnaire), to get information about the potential supplier’s quality system and a sense of the potential risks associated with this supplier. Previous assessments for the same supplier may be leveraged to reduce the need to perform multiple assessments on the same supplier.
■ a PAT system robustness assessment (e.g. questionnaire). The integrity of the measurement system will be dependent on data being generated by robust PAT systems, where quality is built into the design of the instrument. Therefore, a system robustness assessment should be considered essential where the instrument is used for monitoring and control purposes, but could be optional for instruments used in development.
Documented evidence should be available to demonstrate the robustness of the measurement system according to customer specifications.
The extent of this documented evidence will depend on the PAT application and is based on the intended use as well as risk to quality and
regulatory compliance. Supplier documentation, including qualification documents, can be used if deemed satisfactory.
■ Where there are risks relating to the supplier’s ability to provide a robust PAT system or to provide documentation to verify that the delivered equipment/systems meet specifications for their intended use, these risks should be addressed and mitigated.
A formal audit of the supplier is recommended for PAT systems used for monitoring and control of manufacturing processes, where the impact on
product quality and regulatory compliance is considered to be critical. The audit should serve to assess and document how the supplier is using its
quality system to ensure a robust and reliable PAT system, and typically involves a review of the supplier’s quality system, test plans, test results,
defect tracking and software code management system, to assure compliance based on intended use of the system and its associated methods. This
audit may not be required where a previous audit of the same supplier has covered the specific PAT instrument being considered, and all the information
is available.
A systematic approach to PAT instrument supplier qualification provides increased confidence in vendors and the robustness of the PAT measurement
system and reduces the risk of buying a product that is not fit for purpose. Pharmaceutical companies will get significant benefits from forming
partnerships with PAT suppliers to ensure that the highest quality PAT instrumentation is deployed.
PAT usually plays a key role in the implementation of CQV approaches, as the availability of PAT data on a real-time basis facilitates the confirmation
that processes are continuously operating in a validated state during product manufacture.
In Figure 20.D-8, the left-hand plot shows key spectral changes (based on increasing levels of a released protecting group as the reaction progresses). The right-hand plot shows the MIR-predicted reaction completion.
Figure 20.D-8 MIR reaction monitoring example spectra and time plot
In-line reaction monitoring offers an excellent window for process understanding, allowing observation of chemical changes as they happen. The real time
data generated by the PAT system can be readily used as an input to the process control system and this can facilitate optimum reaction progression,
with opportunity to maximize yield and minimize impurity formation.
Crystallization Reaction monitoring in API Manufacturing
In this example an intermediate in the API synthesis is converted to API in a reactive crystallization step. The crystallization process was not well
understood and some batches were difficult to isolate because of their small particle size (fines) or variable particle size.
Figure 20.D-9 shows the number of ‘fine’ particles generated during the crystallization step.
Figure 20.D-9 FBRM example time vs particle count
A Focused Beam Reflectance Measurement (FBRM) system was installed in the reaction vessel to monitor particle size. The following benefits were achieved with this PAT application (an illustrative sketch of a simple real-time fines alert follows the list below):
■ FBRM highlighted the atypical crystallization pathway of problem lots.
■ FBRM allowed the crystallization to be observed in real-time, increasing understanding of factors that impact crystallization.
■ FBRM allowed rapid assessment of the impact of process changes on particle size, and detected atypical crystallizations.
■ Process understanding from FBRM supported the optimization of the crystallization (e.g. reduced agitation rate, shorter granulation time, modified water addition).
■ FBRM allowed monitoring of critical quality attributes during crystallization that correlate to properties of isolated product.
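As an illustrative sketch only (the alert limit, window length and counts are assumed values, not data from this example), real-time FBRM fines counts could be screened against an alert limit so that an atypical crystallization pathway is flagged while the batch can still be adjusted:

def fines_alert(fine_counts, alert_limit, window=3):
    # Raise an alert when the fines count exceeds the limit for `window` consecutive readings.
    consecutive = 0
    for index, count in enumerate(fine_counts):
        consecutive = consecutive + 1 if count > alert_limit else 0
        if consecutive >= window:
            return index  # reading at which the alert is raised
    return None

# Hypothetical FBRM fines counts (counts/s) recorded during a crystallization
counts = [120, 150, 180, 420, 460, 510, 300, 250]
print("Atypical fines trend detected at reading index:", fines_alert(counts, alert_limit=400))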
In-line monitoring by NIR spectroscopy was implemented in the dryer, providing a real-time picture of the product condition during drying and enabling improvements in process efficiency. Off-line sampling and analysis were subsequently replaced by in-line NIR.
Figure 20.D-10 shows a typical real time drying profile, where the percentage of water content is plotted versus time, determined by NIR during the drying
step for one lot.
Figure 20.D-10 % Water by NIR vs. time
The installation of NIR enabled a switch to an attribute-based drying process, which allowed API to be released from the dryer based on the level of moisture rather than time (an illustrative sketch of such an endpoint check follows the list below). Optimization of the drying process delivered the following benefits:
■ Redundant time in dryer was reduced.
■ In-process sampling and laboratory testing of these samples was eliminated.
■ There was an increase in process understanding.
■ NIR spectral data provided information on the impact of drying on product particle size.
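The following sketch (assumed release limit, confirmation window and moisture values, illustrative only) shows how such an attribute-based endpoint might be detected from the real-time NIR moisture trend: drying is considered complete once the predicted water content stays at or below the release limit for a defined number of consecutive readings.

def drying_endpoint(water_profile, limit_pct, confirm=3):
    # Return the index at which % water has stayed at/below the limit for `confirm` consecutive readings.
    below = 0
    for i, water in enumerate(water_profile):
        below = below + 1 if water <= limit_pct else 0
        if below >= confirm:
            return i
    return None  # endpoint not yet reached

# Hypothetical NIR-predicted % water values sampled during drying
profile = [8.4, 6.1, 4.0, 2.5, 1.4, 0.9, 0.8, 0.8, 0.7]
print("Dryer can be stopped at reading index:", drying_endpoint(profile, limit_pct=1.0))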
Variance plots of univariate absorbance data for API and excipient, used as a qualitative approach, were very useful for comparative purposes. These proved especially important during product transfers, validation of a product in different equipment, and/or for up-scale or down-scale of processes. In-process blend uniformity data were also used in lieu of end-product content uniformity testing as part of a real time release testing (RTR) strategy.
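To illustrate one common way such univariate data are evaluated (an assumed approach for this sketch, not one prescribed by the example above), a moving-block relative standard deviation of the NIR absorbance can be tracked during blending, with a low and stable %RSD taken as an indication that the blend has become homogeneous:

from statistics import mean, stdev

def moving_block_rsd(absorbances, block=5):
    # Moving-block %RSD of a univariate NIR absorbance signal recorded during blending.
    rsd = []
    for i in range(len(absorbances) - block + 1):
        window = absorbances[i:i + block]
        rsd.append(100.0 * stdev(window) / mean(window))
    return rsd

# Hypothetical absorbance values at an API-specific wavelength over successive blender revolutions
signal = [0.52, 0.47, 0.55, 0.50, 0.49, 0.50, 0.51, 0.50, 0.50, 0.49]
print([round(value, 1) for value in moving_block_rsd(signal)])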
Content uniformity of Solid Oral Dose Products
NIR spectroscopy offers the possibility of performing non-destructive, whole-tablet potency testing at very high speed and, because of the large sample size involved, provides increased process understanding during compression and enhanced quality assurance. In this example, content uniformity was monitored by measuring the potency of tablets in-line after compression and applying statistically relevant control limits to a large sample set.
Figure 20.D-12 shows the predicted potency by NIR versus sample number for an in-line sample set.
Figure 20.D-12 NIR potency plot
The NIR data collected provided useful information on a number of important process issues, for example potency variability at the following stages:
■ during the run of the tablet press (intra batch)
■ between runs on a tablet press (inter batch)
■ between tablet presses
■ with different blending or granulation conditions
■ correlation with variation in physical properties, such as hardness, which could in turn be related to dissolution.
It should be noted that setting specifications for large sample sizes is an evolving area, and that regulators and industry are currently discussing and optimizing approaches for large-N acceptance criteria.
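Purely to illustrate how a large in-line sample set might be screened (the criteria below are a deliberate simplification chosen for this sketch, not the parametric two-stage test cited in the footnotes), the mean and standard deviation of the NIR potency predictions can be compared against pre-defined acceptance limits:

from statistics import mean, stdev

def large_n_uniformity_screen(potencies, mean_range=(97.0, 103.0), max_sd=4.0):
    # Simplified large-N content uniformity screen on NIR potency predictions (% of label claim).
    m, s = mean(potencies), stdev(potencies)
    return {"n": len(potencies), "mean": round(m, 2), "sd": round(s, 2),
            "pass": mean_range[0] <= m <= mean_range[1] and s <= max_sd}

# Hypothetical NIR potencies for a large in-line tablet sample (truncated here)
tablets = [99.1, 101.2, 98.5, 100.4, 100.9, 99.7, 100.1, 99.4]
print(large_n_uniformity_screen(tablets))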
In-line monitoring of tablet content uniformity can be a key element of Real Time Release (RTR).
Summary
PAT is playing an important role in the manufacturing of pharmaceuticals.
PAT implementation should not be viewed as a goal in its own right. However, significant benefits can be realized through effective application of PAT.
PAT deployment should be tied to a clear business strategy, e.g.
■ increasing process understanding,
■ reducing process lead time,
■ reducing process variability,
■ enabling innovative manufacturing,
■ improving process control,
■ increasing agility of operations,
■ enabling continuous quality verification, etc.
Decisions on the implementation of PAT should be driven by the process requirements. An effective PAT strategy should ensure that firms are
“monitoring what matters and controlling what is critical”.
With due consideration, PAT systems can be easily accommodated within existing quality systems, and meet GMP requirements. As manufacturing
processes evolve, PAT systems are expected to play an increasingly important role in demonstrating product quality.
A high level of support has been shown by regulators for PAT and existing regulatory systems can readily accommodate PAT submissions.
1 Parametric two-stage sequential quality assurance test of dose content uniformity, Journal of Biopharmaceutical Statistics 17(1), 143–157 (2007)
2 Development of a content uniformity test suitable for large sample sizes, Drug Information Journal 40(3), 337–344 (2006)
20.E References
Up09
Regulatory Requirements Europe
1. EMA Guidance Reflection Paper “Chemical, pharmaceutical and biological information to be included in dossiers when Process Analytical
Technology (PAT) is employed” (EMEA/INS/277260/2005), March 2006.
2. EMA Note for Guidance on the Use of Near Infrared Spectroscopy by the Pharmaceutical Industry and the Data Requirements for New
Submissions and Variations (CPMP/QWP/3309/01 and EMEA/CVMP/961/01), August 2003.
3. European Pharmacopoeia
Chapter 2.2.40: Near-infrared spectrophotometry
Regulatory Requirements US
4. Pharmaceutical cGMPs for the 21st Century: A Risk Based Approach, FDA, 2002.
5. Guidance for Industry: PAT – A Framework for Innovative Pharmaceutical Development, Manufacturing, and Quality Assurance, FDA, 2004 (see
Chapter D.11).
6. General Principles of Software Validation; Final Guidance for Industry and FDA Staff, FDA, 2002.
7. United States Pharmacopoeia
USP <1119> Near-Infrared Spectroscopy
USP <1120> Raman Spectroscopy
8. ASTM Standards
E2474-06 Standard Practice for Pharmaceutical Process Design Utilizing Process Analytical Technology
E2363-06a Standard Terminology Relating to Process Analytical Technology in the Pharmaceutical Industry
ICH
9. ICH Q8(R2) Pharmaceutical Development (see Chapter E.8)
10. ICH Q9 Quality Risk Management (see Chapter E.9)
11. ICH Q10 Pharmaceutical Quality System (see Chapter E.10)