Statistics and Errors in Pharmaceutical Calculations


PHARMACEUTICAL ANALYSIS

ERRORS AND STATISTICAL VALIDATION

PRESENTED BY: RITIKA BHATIA M.PHARMA-II SEM


GUIDED BY: MR. RAKESH MARWAHA ASSISTANT PROFESSOR DEPTT. OF PHARMACY M.D.U ROHTAK

INTRODUCTION
Statistics is concerned with the presentation, organization, and summarization of data. In this presentation we will discuss: 1. Errors 2. Statistical validation

ERRORS IN PHARMACEUTICAL INDUSTRY

INTRODUCTION
The term ERROR refers to the numerical difference between the measured value and the true value. In comparative methods it is universally accepted that the percentage composition of a standard sample provided and certified by NIST, BPCRS, or the European Pharmacopoeia must be treated as absolutely correct, pure, and authentic when evaluating a new analytical procedure.

CLASSIFICATION OF ERRORS

DETERMINATE (SYSTEMATIC) ERROR


These are errors that possess a definite value together with a reasonable, assignable cause. They are avoidable and can be measured and accounted for.

TYPES OF DETERMINATE ERRORS


PERSONAL ERRORS
These arise from the personal equation of the analyst and have no bearing on the prescribed procedure or methodology.

INSTRUMENTAL ERRORS
These are caused by faulty and uncalibrated instruments.

REAGENT ERRORS
These are introduced by the individual reagents, e.g., impurities inherently present in a reagent, or the unwanted introduction of foreign substances by the action of a reagent on porcelain or glassware.

CONSTANT ERRORS

Their absolute value is independent of the magnitude of the quantity measured, so they become less significant as that magnitude increases.

PROPORTIONAL ERRORS

The absolute value of this kind of error changes with the size of the sample in such a way that the relative error remains constant.

ERRORS DUE TO METHODOLOGY

Both improper sampling and incompleteness of a reaction often lead to serious errors

ADDITIVE ERRORS

Additive errors are independent of the quantity of substance actually present in the assay.

EXTERNAL ERRORS

These errors are usually caused by environmental conditions over which the observer has no control. They cannot be eliminated, but the necessary corrections may be applied.

INDETERMINATE (RANDOM) ERROR


These cannot be pinpointed to any specific, well-defined cause. They usually arise from the minute variations that occur in successive measurements performed by the same analyst, with the utmost care, under almost identical experimental conditions.

SALIENT FEATURES

Random fluctuations
Random variation
Random scattering
Recognition of specific deviant values that are beyond anyone's control

PRECISION AND ACCURACY

ACCURACY
Accuracy is the measure of the degree of closeness of a measured value to the true value.

PRECISION
Precision is defined as the agreement among a cluster of experimental results; under similar conditions the results are reproducible.

GRAPH
[Figure: probability density vs. value, showing accuracy as the offset of the distribution from the reference value R and precision as the spread of the distribution.]

Accuracy indicates the proximity of the measurement results to the true value; precision is the reproducibility of the measurements.
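As a concrete illustration (the replicate values and the reference value below are hypothetical, not from the slides), a minimal Python sketch separating the two ideas: the bias of the mean from an assumed reference value (accuracy) versus the scatter among the replicates (precision).

# Hypothetical replicate assay results (% of label claim) against an
# assumed reference (true) value of 100.0.
replicates = [98.2, 98.4, 98.1, 98.3, 98.2]
reference = 100.0

mean = sum(replicates) / len(replicates)
bias = mean - reference                          # accuracy: closeness to the true value
sd = (sum((x - mean) ** 2 for x in replicates) / (len(replicates) - 1)) ** 0.5
rsd = 100 * sd / mean                            # precision: spread of the replicates

print(f"mean = {mean:.2f}, bias = {bias:+.2f} (accuracy)")
print(f"SD = {sd:.3f}, RSD = {rsd:.2f}% (precision)")

Here the replicates agree closely with one another (good precision) but sit about 1.8% below the reference value (poor accuracy), mirroring the distinction drawn above.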

MINIMIZING SYSTEMATIC ERRORS


Calibration of instruments, apparatus & applying necessary corrections
Instruments and apparatus commonly used for analytical purposes must be duly calibrated, and the necessary corrections must be applied to the original measurements.

Performing a parallel control determination

It consists of performing an altogether separate estimation, under almost identical experimental conditions, on a quantity of a standard substance containing the same weight of the component as the unknown sample.

Blank determination

It is established by performing a separate, parallel estimation without the sample, under the same experimental conditions.

Cross checking of results by different methods of analysis

The accuracy of the results may be cross-checked by performing another analysis of the same substance by a different method.

Method of std. addition

A small quantity of the component is added to the sample, which is then analysed for the total amount of the component present. This gives the recovery of the added quantity of the component.

Method of internal standards

A fixed quantity of a reference standard (internal standard) is added to a series of unknown concentrations of the material to be assayed. It is used in chromatographic and spectroscopic determinations.
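To make the last two approaches concrete, here is a minimal Python sketch with hypothetical figures (the amounts and signals are illustrative only): a standard-addition recovery calculation followed by an internal-standard response ratio.

# Hypothetical standard addition: a known quantity of the component is
# added (spiked) and its recovery is calculated.
found_unspiked = 4.90   # mg of component found in the original sample
spike_added    = 5.00   # mg of pure component added
found_spiked   = 9.75   # mg of component found after spiking

recovery = 100 * (found_spiked - found_unspiked) / spike_added
print(f"recovery of the added component = {recovery:.1f}%")

# Hypothetical internal standard: the analyte signal is normalised against
# a fixed quantity of internal standard added to every sample.
analyte_signal  = 1520.0
internal_signal = 1005.0
print(f"analyte / internal-standard response ratio = {analyte_signal / internal_signal:.3f}")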

STATISTICAL VALIDATION

INTRODUCTION
The data generated may show fluctuations and may be random in nature. A powerful and effective technique is employed to render such random results into a set pattern; this technique is called STATISTICS. Specific statistical treatment of calibration data, aided by programmable calculators and microcomputers, yields a fairly accurate and more presentable determination of the graphs.

STATISTICAL TREATMENT OF FINITE SAMPLES


MEAN
MEDIAN
MODE

MEAN
It is the average of a series of results: X̄ = (x1 + x2 + … + xn) / N = Σxi / N. This is the arithmetic mean. The mean is a measure of central tendency.

MEDIAN
The median is the value such that half of the data points fall above it and half below it.

MODE
The mode is the most frequently occurring value (category).

THE MEAN, MEDIAN, AND MODE ARE MEASURES OF CENTRAL TENDENCY
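A minimal Python sketch (the data set is illustrative only) computing the three measures of central tendency with the standard library.

import statistics

# Illustrative data, e.g. replicate titration results
data = [10.2, 10.4, 10.3, 10.3, 10.5, 10.3, 10.1]

print("mean   =", statistics.mean(data))    # arithmetic mean, Σxi / N
print("median =", statistics.median(data))  # half the points above, half below
print("mode   =", statistics.mode(data))    # most frequently occurring value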

MEASURE OF DISPERSION

A measure of dispersion refers to how closely the data cluster around the measure of central tendency.

RANGE
1. The range is the difference between the highest and the lowest value.
2. Because the range is unstable from one sample to another and changes when new subjects are added, another measure, the interquartile range (midspread), is used.
3. Interquartile range = QU − QL, the difference between the upper and lower quartiles.

INDEX OF DISPERSION
D = K(N² − Σfi²) / [N²(K − 1)]
where K = number of categories, fi = number of ratings in each category, and N = total number of ratings.
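A short Python sketch of these dispersion measures (the data set and the ratings are hypothetical); the last lines follow the index-of-dispersion formula above.

import statistics

# Range and interquartile range of a small, illustrative data set
data = [12, 15, 11, 14, 18, 13, 16, 15]
rng = max(data) - min(data)
q1, _, q3 = statistics.quantiles(data, n=4)   # lower (QL) and upper (QU) quartiles
iqr = q3 - q1                                 # interquartile range = QU - QL

# Index of dispersion D = K(N² - Σfi²) / [N²(K - 1)] for categorical ratings
ratings = {"low": 4, "medium": 7, "high": 9}  # fi = number of ratings per category
K = len(ratings)
N = sum(ratings.values())
D = K * (N**2 - sum(f**2 for f in ratings.values())) / (N**2 * (K - 1))

print(f"range = {rng}, IQR = {iqr}, index of dispersion = {D:.3f}")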

MEAN DEVIATION
The sum of the deviations of any set of numbers around its mean is 0; the mean deviation is therefore computed from the absolute deviations.

VARIANCE
Variance is the average of the squared deviations of the individual values from the mean, i.e., the sum of the squared deviations divided by N.

STANDARD DEVIATION
It is the square root of the variance.

Mean deviation = Σ|xi − X̄| / N

s² = Σ(xi − X̄)² / N

Adding a constant to every number does not change the deviations from the mean, and hence the SD is unchanged.
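A minimal Python sketch (illustrative values) of the mean deviation, variance, and standard deviation, which also shows that adding a constant to every value leaves the SD unchanged.

data = [4.8, 5.1, 5.0, 4.9, 5.2]   # illustrative values only
N = len(data)
mean = sum(data) / N

mean_dev = sum(abs(x - mean) for x in data) / N     # Σ|xi - mean| / N
variance = sum((x - mean) ** 2 for x in data) / N   # average squared deviation
sd = variance ** 0.5                                # square root of the variance

shifted = [x + 10 for x in data]                    # add a constant to every value
mean_s = sum(shifted) / N
sd_shifted = (sum((x - mean_s) ** 2 for x in shifted) / N) ** 0.5

print(f"mean deviation = {mean_dev:.3f}, variance = {variance:.4f}, SD = {sd:.4f}")
print(f"SD after adding a constant = {sd_shifted:.4f} (unchanged)")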

SKEWNESS & KURTOSIS


Skewness refers to the degree of asymmetry of the curve.

[Figure: Curve A is skewed right (positive skew); Curve B is skewed left (negative skew).]

[Figure: three symmetric curves, A, B, and C, differing in how flat or peaked they are.]

The three curves in the above graph are symmetric (skew = 0) but differ with respect to how flat or peaked they are; this property is called kurtosis.

Curve A: bell-shaped (normal distribution) - MESOKURTIC DISTRIBUTION
Curve B: peaked - LEPTOKURTIC DISTRIBUTION
Curve C: flatter - PLATYKURTIC DISTRIBUTION

BOX PLOT

It is one of the most powerful graphing techniques. It was introduced by John Tukey.

GRAPHS

[Figure: mean, mode, and median in a symmetric distribution, where all three coincide.]

[Figure: mean, mode, and median in a skewed distribution, where the three measures separate.]

SIGNIFICANT FIGURES
All digits that are known with certainty, plus the first digit that contains some uncertainty, are significant figures.

COMPUTATIONAL RULES

Addition & subtraction
16.48 + 9.375 + 118.9 - 3.5450: round each value to one more decimal place than the least precise number (118.9), then round the sum to one decimal place: 16.48 + 9.38 + 118.9 - 3.55 = 141.21 ≈ 141.2

Multiplication & division
2.64 × 3.126 × 0.8524 × 32.9453: rounding the factors gives 2.64 × 3.126 × 0.852 × 32.95 ≈ 231.68 (5 significant figures)

Rounding numbers
8.62 ≈ 8.6; 9.38 ≈ 9.4. If the digit to be dropped is 5, round the preceding digit to the nearest even number, e.g., 8.75 ≈ 8.8 and 8.65 ≈ 8.6.
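As a quick cross-check of the rounding convention, Python's built-in round() also applies "round half to even" (banker's rounding); the values below are only a demonstration, and exact results for numbers ending in 5 can occasionally differ because of binary floating-point representation.

print(round(8.62, 1))   # 8.6
print(round(9.38, 1))   # 9.4
print(round(8.75, 1))   # 8.8  (the preceding digit becomes the even value 8)
print(round(8.65, 1))   # 8.6  (6 is already even)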

STANDARD DEVIATION &STANDARD ERROR


SEm = SD / √N, the standard deviation divided by the square root of the sample size. The SD reflects how closely individual scores cluster around their mean; the SE shows how close mean scores from repeated samples may be to the true mean. Z = (X̄ − μ) / SEm
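A short Python sketch (illustrative results and an assumed true mean) of the standard error of the mean and the corresponding Z value.

import statistics

sample = [99.1, 98.7, 99.4, 99.0, 98.9, 99.2]   # illustrative assay results
true_mean = 99.0                                # assumed population (true) mean

sd = statistics.stdev(sample)                   # sample standard deviation
se = sd / len(sample) ** 0.5                    # SEm = SD / √N
z = (statistics.mean(sample) - true_mean) / se  # Z = (mean - μ) / SEm

print(f"SD = {sd:.4f}, SE = {se:.4f}, Z = {z:.3f}")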

COMPARISON OF RESULTS
Statistically, it is possible to ascertain whether the analytical procedure adopted is accurate and precise, and whether one of two methods is superior to the other.

TWO METHODS FOR COMPARISON

STUDENT t-TEST

ANOVA

DEGREE OF FREEDOM
It is the number of individual observations which can be allowed to vary under the condition that the mean (X̄) and SD, once determined, are held constant.

STUDENT t-TEST
This test focuses on the distribution of differences between the two groups, so that a different hypothesis is tested. Under the null hypothesis it is presumed that the differences arise from a distribution of differences with a mean of zero and an SD related to that of the original distribution. It is used to compare the mean obtained from a sample with an accepted value, and to express a certain degree of confidence in the significance of the comparison. t = (X̄ − μ)√N / s
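A minimal Python sketch of the t statistic, using hypothetical assay results and an assumed accepted (true) value; in practice the computed t is compared with the tabulated value at the chosen confidence level.

import statistics

results = [10.10, 10.16, 10.08, 10.12, 10.09]   # illustrative assay results
mu = 10.06                                      # assumed accepted (true) value

n = len(results)
xbar = statistics.mean(results)
s = statistics.stdev(results)                   # sample SD

t = (xbar - mu) * n ** 0.5 / s                  # t = (X̄ - μ)√N / s
print(f"t = {t:.3f} with {n - 1} degrees of freedom")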

ANOVA
This test uses the ratio of the variances of two sets of results to determine whether their SDs are significantly different: F = s1² / s2², where s1 and s2 are the SDs of the two sets of results. It is used to compare results obtained from two different laboratories or from two different analytical procedures.
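The variance-ratio comparison can be sketched in a few lines of Python (the results from the two laboratories are hypothetical); the computed F is compared with the tabulated value for (n1 - 1, n2 - 1) degrees of freedom.

import statistics

lab_1 = [49.8, 50.1, 50.0, 49.9, 50.2]   # illustrative results, laboratory 1
lab_2 = [49.5, 50.4, 50.1, 49.7, 50.3]   # illustrative results, laboratory 2

s1, s2 = statistics.stdev(lab_1), statistics.stdev(lab_2)
# By convention the larger variance is placed in the numerator, so F >= 1.
F = max(s1, s2) ** 2 / min(s1, s2) ** 2
print(f"F = {F:.2f}")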

METHOD OF LEAST SQUARE


Experimental points rarely fall exactly upon a straight line, by virtue of the indeterminate errors in instrumental readings. It is tedious to obtain, from the observed points, the best straight line for a standard plot such that the error is reduced to the least possible extent. The method of least squares is used for this purpose.

REGRESSION LINE
It is the straight line passing through the data that minimizes the sum of the squared differences between the original data and the fitted points; this best-fitted line is the regression line. SS = Σ(y − ŷ)², where SS is the sum of squares, y is an observed value, and ŷ is the value obtained by plugging the corresponding x value into the regression equation.
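A minimal least-squares sketch in Python (the calibration data are hypothetical): it fits the regression line and then evaluates the sum of squared differences between the observed and fitted points.

# Hypothetical calibration data: concentration (x) vs. instrument response (y)
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [0.105, 0.198, 0.304, 0.402, 0.495]

n = len(x)
x_mean, y_mean = sum(x) / n, sum(y) / n

# Least-squares estimates of the slope and intercept
slope = (sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, y))
         / sum((xi - x_mean) ** 2 for xi in x))
intercept = y_mean - slope * x_mean

# Sum of squared differences between observed and fitted values
fitted = [intercept + slope * xi for xi in x]
ss = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))

print(f"y = {intercept:.4f} + {slope:.4f}x, SS = {ss:.6f}")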

SAMPLE SIZE
The sample size withdrawn from a heterogeneous material is guided by V = K/n, where V is the sampling variance, n is the actual number of sampling increments, and K is a constant.
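A tiny Python sketch (K is a hypothetical sampling constant) showing how the sampling variance V = K/n falls as the number of sampling increments increases.

K = 0.36                        # hypothetical sampling constant for the material

# V = K / n: the sampling variance falls as more increments are taken
for n in [2, 4, 8, 16]:
    print(f"n = {n:2d} increments -> sampling variance V = {K / n:.4f}")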

CONCLUSION
Statistical analysis is widely used in the pharmaceutical industry. Beyond it, statistics is widely employed in the form of biostatistics, medical statistics, chemometrics, epidemiology, etc. It is also used in various analyses for NDD.

REFERENCE
1. Geoffrey R. Norman, David L. Streiner. Biostatistics: The Bare Essentials, 2nd edition. BC Decker Inc, Hamilton, 2000.
2. Ashutosh Kar. Pharmaceutical Drug Analysis, 2nd edition. New Age International Publishers, New Delhi, 2008, pp. 71-87.
3. A.L. Nagar, R.K. Das. Basic Statistics, 2nd edition. Oxford University Press, New Delhi, 2000, pp. 1-6, 30-56.
4. Sanford Bolton, Charles Bon. Pharmaceutical Statistics: Practical and Clinical Applications, 4th edition. Marcel Dekker, New York, 2004.

THANK YOU
