Why Perform Method Validation?

The following overview summarizes selected validation parameters, what they provide information about, how they are determined, and how they are evaluated.

Robustness — provides information about the susceptibility to disturbance parameters that can occur during future routine measurements; determined by, e.g., small variations in method parameters; evaluated via the relative difference with respect to the normal condition.

Trueness and precision can be explained with the aid of dartboards.

Trueness — determined by, e.g., spiking experiments.

Repeatability — provides information about random errors within one measurement series in one laboratory (short period of time); determined by repeated, consecutively performed measurements; evaluated via the mean, standard deviation, and relative standard deviation.

The standard deviation of the individual deviations of the measured Y values above and below the fitted line (the residual standard deviation) is:

s_{y/x} = \sqrt{ \frac{\sum_i (y_i - \hat{y}_i)^2}{n - 2} }

The above calculations can be programmed on a computer, but before every use the program must be validated using the example given in the corresponding section. The same procedure can also be used to obtain the LOD and LOQ of the method from recovery test results, by plotting the fortified concentration on the X-axis and the obtained concentration on the Y-axis.

In this example, the linear regression equation is employed to determine the extent of linear response of a detector to a reference analytical standard in the concentration range of about 0.

The details are presented in Table 2, in which the standard solution concentrations and the mean detector responses (peak area counts) are fitted to a linear equation. The calculations are presented in Table 3. Using the above parameters, calculate the following: a (intercept), b (slope), and r (correlation coefficient). Note: sometimes r² is also used to express the goodness of fit. Next, calculate the standard deviations of a and b. The standard deviation of a is calculated as

s_a = s_{y/x} \sqrt{ \frac{\sum_i x_i^2}{n \sum_i (x_i - \bar{x})^2} }

and the standard deviation of b is calculated as

s_b = \frac{s_{y/x}}{ \sqrt{ \sum_i (x_i - \bar{x})^2 } }.
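As noted above, these calculations can be programmed. The following sketch assumes a hypothetical calibration data set (not the values of the article's Tables 2 and 3) and computes a, b, r, the residual standard deviation, the standard deviations of a and b, and LOD/LOQ estimates based on the common 3.3·σ/S and 10·σ/S conventions.

```python
# Minimal sketch: least-squares calibration statistics for a detector response.
# The concentrations and peak areas below are illustrative placeholders.
import math

x = [0.05, 0.10, 0.20, 0.40, 0.80]   # standard concentrations (e.g., mg/mL)
y = [1020, 2010, 4080, 8150, 16200]  # mean peak area counts

n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n

sxx = sum((xi - x_bar) ** 2 for xi in x)
sxy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
syy = sum((yi - y_bar) ** 2 for yi in y)

b = sxy / sxx                    # slope
a = y_bar - b * x_bar            # intercept
r = sxy / math.sqrt(sxx * syy)   # correlation coefficient

# Residual standard deviation: scatter of the y values about the fitted line
s_yx = math.sqrt(sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y)) / (n - 2))

# Standard deviations of the intercept (a) and slope (b)
s_a = s_yx * math.sqrt(sum(xi ** 2 for xi in x) / (n * sxx))
s_b = s_yx / math.sqrt(sxx)

# LOD/LOQ estimates from the residual SD and the slope (ICH-style conventions)
lod = 3.3 * s_yx / b
loq = 10 * s_yx / b

print(f"a={a:.2f}  b={b:.2f}  r={r:.5f}  r2={r**2:.5f}")
print(f"s(y/x)={s_yx:.2f}  s(a)={s_a:.2f}  s(b)={s_b:.2f}")
print(f"LOD={lod:.4f}  LOQ={loq:.4f}")
```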

Note: Assay procedures vary from highly exacting analytical determinations to subjective evaluations of attributes; therefore, different test methods require different validation schemes. Category I: analytical methods for quantitation of major components or active ingredients in finished products. Category II: analytical methods for determination of impurities or degradation compounds in finished goods.

These methods include quantitative assays and limit tests, as well as titrimetric and bacterial endotoxin tests. Category III: analytical methods for determination of performance characteristics (e.g., dissolution or drug release). Data Elements Required for Assay Validation. Unlike the case in which a solution of the matrix is spiked with a standard solution, such influences are suppressed, which leads to overestimated recovery values. The percentage of recovery can also be influenced by the precision of the analyte addition process (weighing and mixing), which introduces additional variation into the recovered value.

For biological medicines and herbal medicines, for which a matrix free of the analyte is not available or cannot be prepared, a procedure such as that described above may be performed. For this, one can fortify an authentic sample of the medicament with known amounts of standard. In order to provide more meaningful evidence, it is recommended that the upper fortification level not be too far above the routine working level.

This approach may be influenced by the dependence of precision on the determined concentration, which is the sum of the original content of the matrix and the added amount of analyte. The approach allows the accuracy to be evaluated at the same levels of the range proposed for routine application of the method in the laboratory. However, the influence of the matrix is halved.

Thus, if it is proven that there is no matrix effect, such a procedure is adequate. If available, a sample with a low concentration of the analyte of interest can be fortified, thereby maintaining the full influence of the matrix at the appropriate concentration levels.

The recovery can be calculated with Equation 7. Whenever the standard addition procedure is used, it is recommended that the same reference substance be used to obtain the fortified matrix samples; in this way, errors related to uncertainty about the purity of the analyte of interest are minimized. The assessment of the accuracy of pure substances by addition of standard to the matrix has limited applicability, and it becomes extremely difficult to evaluate accuracy when CRMs or reference methods are not available.
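Equation 7 is not reproduced in this excerpt; a commonly used form of the recovery calculation for a fortified (spiked) matrix sample, with generic symbols that are not necessarily those of the original equation, is:

```latex
% Recovery of a spiked sample: amount found in the fortified sample minus the
% amount found in the unfortified sample, relative to the amount added.
\mathrm{Recovery}\;(\%) =
  \frac{C_{\text{found,fortified}} - C_{\text{found,unfortified}}}{C_{\text{added}}} \times 100
```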

However, every effort should be made to identify an appropriate method of comparison. Instead of a quantitative comparison, the results could be supported by another analytical technique, such as verification of the very high purity of a drug substance by differential scanning calorimetry (DSC). For an impurities assay, use of the standard addition technique may be a viable alternative. However, greater variability may be expected at low concentration ranges due to the more pronounced effects of the matrix.

Generally, at low concentrations, a representative fraction of the analyte may be chemically associated with the matrix. The maximum acceptable variation for the recovery percentage depends on factors such as the analyte fraction in the sample, sample processing, and the level of quality associated with the methods used. However, such statistical tests do not consider the practical relevance of the variations.

For instance, small variabilities at one or more accuracy levels, which present no practical risk for routine application, may nevertheless be flagged as statistically significant.

The t-test describes the relationship between the difference of two means and a standard deviation, with the maximum allowable difference given as a function of the standard deviation. In turn, the mean recovery may be tested statistically against the theoretical value. If the theoretical value is not included within the confidence interval (CI) but the observed standard deviations are acceptable, additional evaluations should be performed to weigh statistical significance against practical relevance. In contrast to significance tests, where the confidence interval must include the theoretical value, in equivalence tests the confidence interval must lie within an acceptable, user-defined range.

Here, the user can define an acceptable difference. Another alternative is to use absolute acceptance limits, defined as the maximum acceptable absolute difference for the recovery. Such an approach may be derived from practical experience gained during various validation processes carried out in the same laboratory.
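To illustrate the significance-versus-equivalence reasoning above, the sketch below tests a set of recovery values against the theoretical 100% and then checks whether their confidence interval lies within a user-defined window; the recovery values and the ±2% window are assumptions for the example, not limits taken from the text.

```python
# Sketch: significance test vs. user-defined acceptance limits for recovery.
# The recovery values and the +/-2% window are illustrative assumptions.
import statistics
from scipy import stats

recoveries = [99.1, 100.4, 99.7, 100.9, 99.5, 100.2]  # % recovery, hypothetical
mean_rec = statistics.mean(recoveries)
sd_rec = statistics.stdev(recoveries)
n = len(recoveries)

# One-sample t-test of the mean recovery against the theoretical 100%
t_stat, p_value = stats.ttest_1samp(recoveries, popmean=100.0)

# 95% confidence interval of the mean recovery
t_crit = stats.t.ppf(0.975, df=n - 1)
half_width = t_crit * sd_rec / n ** 0.5
ci = (mean_rec - half_width, mean_rec + half_width)

# Equivalence-style check: does the CI fall entirely inside the acceptance window?
lower_limit, upper_limit = 98.0, 102.0
equivalent = lower_limit <= ci[0] and ci[1] <= upper_limit

print(f"mean={mean_rec:.2f}%  sd={sd_rec:.2f}%  p={p_value:.3f}")
print(f"95% CI = ({ci[0]:.2f}, {ci[1]:.2f})  within [{lower_limit}, {upper_limit}]: {equivalent}")
```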

In addition, it is recommended that the results obtained be plotted in order to detect trends or concentration dependence. The dispersion of the results may be influenced by the concentration of the analyte in the matrix, as well as by the concentration at which it is determined analytically. Acceptance limits can also be stipulated considering the dispersion of the values as a function of concentration.

Normally, the associated deviations increase as the analyte concentration decreases, both in terms of the fraction found in the matrix and the analytical concentration. According to the range of analyte concentration present in the matrix, acceptable recovery values may be assigned according to Table 1. When recovery is determined by fortifying a matrix that already contains the analyte of interest, the original content of the sample plus the quantity added should be considered when applying Table 1.

Analytical results are influenced by systematic (determinate) and random (indeterminate) errors. Systematic errors are caused by problems that persist throughout the entire experiment and may be methodological, instrumental, or personal mistakes. Such errors are repeatable within a set of measurements, shifting the experimental results away from the true value in a consistent direction.

Conversely, random errors are inconsistent and unrepeatable, caused by uncontrollable or unknown variables that lead to dispersion in the data. These errors cannot be corrected or eliminated, and they characterize the precision of the analytical method. The precision of an analytical method represents the closeness among multiple measurements acquired through the analysis of homogeneous samples under specified, similar conditions.

This analytical validation parameter should be assessed for tests and assays involving quantitative determinations, and it is usually expressed as the coefficient of variation (CV) or relative standard deviation (RSD), which is the ratio between the standard deviation and the mean, multiplied by 100. This normalization allows for direct comparison. Within an analytical procedure, each step has its own variability that contributes to the overall dispersion of the results.
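Written out, with s the standard deviation and \bar{x} the mean of the replicate results:

```latex
% Relative standard deviation (coefficient of variation) in percent
\mathrm{RSD}\;(\%) = \mathrm{CV}\;(\%) = \frac{s}{\bar{x}} \times 100
```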

However, some authors categorize the dispersion of the results in terms of four precision levels: system precision, repeatability, intermediate precision, and reproducibility (Figure 4). Figure 4: Representation of precision levels and their respective contributions. According to current guidelines, system precision, or instrumental precision, is not considered a level that must be assessed.

This level of precision addresses the variability of the analytical system, mainly of the instrument. Although not required, knowing such variability can be essential for establishing criteria for system suitability tests, which are carried out through sequential, repetitive injections of the same sample.

The within-laboratory variability of an analytical method must be determined through repeatability and intermediate precision tests. Repeatability reflects the agreement among values obtained through successive measurements under the same operating conditions, by the same analyst, within a short period of time.

Repeatability evaluates the contribution of sample preparation to the variability of the method and can be influenced by dilution, weighing, homogenization, and extraction. The term is considered synonymous with intra-day precision and differs from instrumental precision.

In practice, samples should be prepared independently from the start of the analytical procedure; for solid and semi-solid samples, the same stock solution cannot be reused across preparations. Considering that precision varies with the concentration of the analyte, especially if the analytical method covers a large concentration range, and that the samples tested should be representative of the whole, the points tested for this parameter should ideally span the limits established by the method.

In this approach, acceptance criteria are defined and justified according to the test performed and its objective, based on the intrinsic variability of the method, the working range, and the analyte concentration in the sample. When the results obtained do not meet the acceptance criteria, new solutions should be prepared; if they fail again, the possible causes of error must be investigated.

It is important to note that the distribution of the repeatability reflects the complexity of the sample, its preparation, and the analytical technique used. It is thus possible to define repeatability limits, which enable the analyst to decide whether the difference between the analyses conducted is significant at a specified level of confidence.

The limit may be calculated using Equation 8; a commonly used form is sketched after this paragraph. The acceptance criteria for intermediate precision and reproducibility are calculated in a similar way, replacing the repeatability SD with the SD obtained for the intermediate precision or the reproducibility. Intermediate precision expresses the effect of within-laboratory variations due to events such as different days, analysts, and equipment, or a combination of these factors, in order to reflect the expected routine laboratory variability.
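Equation 8 is not reproduced in this excerpt; a commonly used form of the repeatability limit at the 95% confidence level (e.g., as given in ISO 5725) is the following, where the factor 2.8 ≈ 1.96·√2 reflects the comparison of two single results:

```latex
% Repeatability limit at ~95% confidence (2.8 = 1.96 * sqrt(2), rounded)
r = 2.8 \times s_{\text{repeatability}}
```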

Intermediate precision includes the influence of additional random effects according to the intended use of the method in the same laboratory and can be regarded as an estimate of the long-term variability. Moreover, such an evaluation assesses the capacity of the procedure to provide the same results, considering that in different analytical runs changes in reagent lots or suppliers may occur, as well as variations in calibration standards, equipment recalibration, and temperature.

Intermediate precision, also referred to as inter-assay precision, can be determined through the analysis of similar samples on different days, by different analysts, and on different instruments.

The required number of determinations and the levels tested in order to evaluate intermediate precision follow the same recommendations as for repeatability, and the results can likewise be expressed as the RSD. Moreover, planning and execution should use the same approach in terms of concentration levels and the same number of determinations previously performed in the repeatability assessment.

It is very important to address intermediate precision appropriately, since it is an estimate of the variability expected in routine use. First, the RSD for the two series of analyses (repeatability and intermediate precision tests) should be calculated.

The F-test evaluates whether the observed variances of the two groups of measurements are statistically equivalent. The t-test is then used to verify whether the means of the results of the two groups can be considered statistically equal. However, the two series of measurements may sometimes differ significantly according to such statistical tests.

This is particularly frequent for well-performing measurements in which both sets show little scatter. If, at the level of significance adopted, there is no significant difference between the means, the method is considered to have adequate intermediate precision.
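A minimal sketch of this two-step comparison, assuming two hypothetical series of results (one from the repeatability study, one from the intermediate precision study); the data and the 5% significance level are placeholders.

```python
# Sketch: F-test on variances, then a t-test on means, for two series of results
# (repeatability vs. intermediate precision). The data are hypothetical.
import statistics
from scipy import stats

series_repeat = [99.8, 100.1, 99.6, 100.3, 99.9, 100.0]   # % of label claim
series_interm = [99.5, 100.4, 99.9, 100.6, 99.7, 100.2]

var_r = statistics.variance(series_repeat)
var_i = statistics.variance(series_interm)

# Two-sided F-test: larger variance in the numerator
if var_r >= var_i:
    f_stat, dfn, dfd = var_r / var_i, len(series_repeat) - 1, len(series_interm) - 1
else:
    f_stat, dfn, dfd = var_i / var_r, len(series_interm) - 1, len(series_repeat) - 1
p_f = min(2 * stats.f.sf(f_stat, dfn, dfd), 1.0)
equal_var = p_f > 0.05   # variances statistically equivalent at the 5% level

# t-test on the means (pooled if variances are equivalent, Welch otherwise)
t_stat, p_t = stats.ttest_ind(series_repeat, series_interm, equal_var=equal_var)

print(f"F = {f_stat:.2f} (p = {p_f:.3f}) -> equal variances: {equal_var}")
print(f"t = {t_stat:.2f} (p = {p_t:.3f}) -> means statistically equal: {p_t > 0.05}")
```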

However, when there is a difference between the precision levels, the cause needs to be identified by investigating the individual effects of the various factors. Depending on the cause, the recommended solution consists of defining absolute upper limits for the various precision levels, provided they are duly justified. The last test used to evaluate the precision of a method is reproducibility, which expresses the agreement among results obtained in different laboratories that analyze homogeneous samples.

This parameter provides the largest expected dispersion because it is obtained by varying all the factors that may affect the results. Reproducibility should be measured in at least two laboratories, although IUPAC recommends a minimum of five, ideally eight. Acceptance criteria like those established for repeatability and intermediate precision also apply to reproducibility.

As an acceptance criterion for reproducibility, the equation proposed by Horwitz et al. can be used. This equation establishes an exponential relationship between the RSD and the analyte concentration C, expressed as a dimensionless mass fraction:

\mathrm{RSD}_R\,(\%) = 2^{\,1 - 0.5 \log_{10} C}

The predicted relative standard deviation of reproducibility (RSDr) obtained with the Horwitz equation is independent of the nature of the analyte, the matrix, and the analytical technique.
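The sketch below applies the Horwitz relationship given above to a hypothetical case and forms the ratio of the observed to the predicted RSD (the RazHor discussed in the next paragraph); the concentration and observed RSD are illustrative assumptions.

```python
# Sketch: predicted reproducibility RSD from the Horwitz equation and the
# observed/predicted ratio. Concentration is a dimensionless mass fraction.
import math

def horwitz_rsd(mass_fraction: float) -> float:
    """Predicted reproducibility RSD (%) according to the Horwitz equation."""
    return 2 ** (1 - 0.5 * math.log10(mass_fraction))

c = 0.001           # hypothetical analyte level: 0.1% m/m
rsd_observed = 4.2  # % RSD from the inter-laboratory study (hypothetical)

rsd_predicted = horwitz_rsd(c)
ratio = rsd_observed / rsd_predicted   # RazHor-style ratio; satisfactory when close to 1

print(f"predicted RSD = {rsd_predicted:.2f}%, observed/predicted = {ratio:.2f}")
```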

The reproducibility of the method is satisfactory when the RazHor value (the ratio between the observed RSD and the RSD predicted by the Horwitz equation) is close to 1; the acceptable limit is up to 2. Values greater than 2 indicate that the analytical method performs poorly and that the participating laboratories should review their techniques and procedures in order to identify possible errors. The robustness of an analytical method describes its ability to withstand small and deliberate variations in analytical parameters while maintaining acceptable precision and accuracy.

The primary goal of robustness studies is to identify the method variables that are critical to ensuring the reliability and reproducibility of the results, so that they can be monitored during routine analysis.

Most experimental conditions are susceptible to normal fluctuations and occasional mistakes. Robustness testing provides essential information for predicting the behavior of the results, helps maintain the quality of the analysis, and occasionally guides troubleshooting during the daily execution of the method.

These parameters should ideally be assessed during development of the method, prior to validation, since their effects can easily be evaluated while manipulating the method to reach the optimal conditions. The changes in chromatographic conditions applied during method development are often harsh; nevertheless, they help to indicate which parameters should be examined over narrower ranges during validation.

There is no standard that prescribes which parameters should be evaluated in a robustness analysis. They must be determined by the analyst and will differ with the equipment and techniques applied. Some suggestions of parameters to choose are shown in Table 2. As mentioned above, these are suggestions of commonly evaluated parameters, and nothing prevents the analyst from including any pertinent parameter that may cause a detectable deviation in selectivity or signal intensity.

For instance, the flow rate may be changed from the nominal 1 mL min⁻¹ to a slightly higher value. If such changes, which are inherent to the equipment, are probable, then their influence must be verified during the robustness tests.

The response variable used to quantify and evaluate the robustness of the method will also depend on the purpose of the method; it may differ for each parameter and is sometimes directly related to specific ones.

For instance, if the purpose of the method is identification of a specific analyte among its impurities by LC, the resolution, peak purity, and capacity factor may be good response variables to evaluate, since these parameters demonstrate the selectivity for the analyte. Where relevant, the analyst may add any other quantifiable response variable. Robustness tests are conducted in univariate and multivariate ways.

The univariate approach involves varying each parameter individually in order to identify the influence of that change. The deviation limits that are acceptable in univariate experiments are assessed graphically and statistically. Graphical evaluation is useful for pronounced effects.

Most analysts apply the univariate approach in every situation; however, this type of investigation is only practical for evaluating a few parameters and becomes impractical as the number of parameters increases. For example, if the test has 7 parameters to be varied, the analyst must run a large number of analyses, varying each factor individually, to cover all possible combinations of conditions (2⁷ = 128 runs if each factor is tested at two levels).

Given the impractical number of experiments, the analyst may adopt a systematic approach using a multivariate experimental design, a mathematical tool that minimizes the number of runs by using combinatorial designs to vary parameters simultaneously rather than one at a time. This approach is more effective for evaluating a larger number of parameters and allows detection of the effect of each parameter individually, as well as of their synergies.

There are several ways to design a multifactorial experiment; the most common examples are fractional factorial and Plackett-Burman designs. In such saturated screening designs, however, interactions between the different factors cannot be detected, because they are confounded with the main effects.
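As an illustration of the multivariate approach, the sketch below builds a two-level 2^(7-4) fractional factorial screening design (7 factors in 8 runs, similar in spirit to a Plackett-Burman design) and estimates the main effect of each factor from a hypothetical response; the factor names and response values are assumptions for the example.

```python
# Sketch: 2^(7-4) fractional factorial screening design (7 factors, 8 runs)
# and main-effect estimation. Factor names and responses are hypothetical.
from itertools import product

factors = ["flow", "temp", "pH", "organic%", "wavelength", "inj_vol", "column_lot"]

runs = []
for a, b, c in product((-1, 1), repeat=3):        # base 2^3 full factorial
    d, e, f, g = a * b, a * c, b * c, a * b * c   # generators: D=AB, E=AC, F=BC, G=ABC
    runs.append([a, b, c, d, e, f, g])

# Hypothetical response (e.g., resolution between the analyte and its nearest impurity)
response = [2.1, 2.3, 1.8, 2.0, 2.2, 2.4, 1.9, 2.1]

# Main effect of each factor: mean response at +1 minus mean response at -1
effects = {}
for j, name in enumerate(factors):
    high = [y for run, y in zip(runs, response) if run[j] == 1]
    low = [y for run, y in zip(runs, response) if run[j] == -1]
    effects[name] = sum(high) / len(high) - sum(low) / len(low)

# Rank the effects by magnitude (the basis of Pareto-style evaluation)
for name, eff in sorted(effects.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"{name:<12s} effect = {eff:+.3f}")
```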

There are normal and half-normal probability plots. Normal probability plots are used to assess whether the data set is approximately normally distributed, while half-normal probability plots can identify the important parameters and the interactions between factors. Probability plots draw a line through the data, and a point that deviates from the line corresponds to an effect considered critical to the method (Figure 5).

The effect plot uses bars as its graphical representation, and the Pareto chart shows the magnitude of the effects, that is, the influence of individual and joint effects on the evaluated response (Figure 5). Figure 5: Half-normal probability plot (A) and Pareto chart (B).

When no significant effects are found in these graphical plots, the method may be considered robust with respect to the specific parameters tested. Regardless of the chosen parameters, the response variable, and the outcome of the robustness assessment, it is possible to continue the validation process, since robustness is not a parameter for approval or rejection of the analytical condition.

The results of robustness testing indicate what is critical to the method and which factors must be monitored carefully to ensure the reproducibility of the results. Chemical compounds may decompose during the preparation of solutions or during storage after preparation and prior to analysis (short-term and long-term storage). Therefore, pre-establishing the handling and storage conditions is fundamental for proper analytical development, as well as for the subsequent analytical validation.

Pre-determining the stability profile of the analytical solutions in the early stages of method development makes it possible to reduce the expense of preparing fresh solutions for every test while maintaining reliability. Additionally, the experimental data help us to understand the limitations of the analytical method, assisting in planning the analytical validation procedure.


