Guide to Method Validation of Test Procedures

Thursday, October 4, 2018

Method validation of test procedures is the process by which one establishes that the testing protocol is fit for its intended analytical purpose. This process has been the subject of various regulatory requirements. For example, in its Current Good Manufacturing Practice (CGMP) for Finished Pharmaceuticals (21 CFR Part 211), the U.S. Food and Drug Administration (FDA) states that “the accuracy, sensitivity, specificity, and reproducibility of test methods … shall be established and documented.” Likewise, the U.S. Pharmacopeia (USP) requires that certain procedural steps be followed in the validation of compendial procedures (First Supplement to USP 40-NF 35).

To provide a harmonized regulatory framework for the validation of analytical procedures, the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) has promulgated guidelines for the validation of analytical procedures (ICH Q2(R1)). Using ICH Q2(R1) as its primary reference, the FDA has offered guidance for industry on analytical procedures and method validation for drugs and biologics, and the USP has published specific guidelines for the validation of compendial procedures.

Parameters to validate

The ICH, FDA, and USP define the test procedure parameters to validate as encompassing accuracy, precision (repeatability, intermediate precision, and reproducibility), specificity, limit of detection, limit of quantitation, linearity, robustness, and system suitability testing. Since there is no single accepted procedure for conducting method validation, it is generally performed iteratively, with adjustments or improvements made as the validation itself dictates.

Below are the parameters to validate and some methods used to validate them.

Accuracy

The accuracy of an analytical procedure is defined as the closeness of the test results for a specific analyte to the true value. In the case of trace analysis, accuracy can be established through the analysis of a certified reference material, a comparison to data obtained by an independent validated method, or an interlaboratory comparison involving a laboratory accredited by the International Organization for Standardization (ISO) and compliant with the general requirements for the competence of testing and calibration laboratories (ISO/IEC 17025). Alternatively, accuracy can be established through spike recovery experiments and the use of standard additions.

When a drug substance is to be assayed, accuracy can be determined by applying the analytical procedure to an analyte of known purity or by comparing the results of the procedure with those of a second, well-characterized procedure whose accuracy has been stated or defined. When a drug in a formulated product is to be assayed, accuracy can be determined by analyzing synthetic mixtures of the drug product components to which known amounts of analyte have been added. For the quantitative analysis of impurities, accuracy can be assessed on samples spiked with known amounts of impurities. Accuracy is calculated as the percent recovery by the assay of the known added amount of analyte in the sample, or as the difference between the mean and the accepted true value, together with confidence intervals.
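The percent-recovery calculation described above can be sketched in a few lines; the concentrations below are purely hypothetical, and the helper function is an illustration rather than part of any cited guideline.

```python
# Spike-recovery sketch: percent recovery of a known added amount of analyte.
# All numeric values are hypothetical illustrations.

def percent_recovery(measured_spiked, measured_unspiked, amount_added):
    """Recovery (%) of the amount of analyte spiked into the sample."""
    return 100.0 * (measured_spiked - measured_unspiked) / amount_added

# Example: a sample measured at 2.0 ppm before and 6.9 ppm after a 5.0 ppm spike.
recovery = percent_recovery(6.9, 2.0, 5.0)
print(f"Recovery: {recovery:.1f}%")  # prints "Recovery: 98.0%"
```

Acceptance criteria for recovery (e.g., 98–102% for an assay) depend on the analyte level and the purpose of the method and should be set in the validation protocol.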

Precision

The precision of an analytical procedure (as measured by standard deviation or relative standard deviation) is the degree of agreement among individual test results when the procedure is applied repeatedly to multiple samplings of a homogeneous sample.

Precision can be expressed as: repeatability, or intra-assay precision (a measure of precision under the same operating conditions over a short interval of time); intermediate precision (within-laboratory variations: different days, different analysts, and different equipment); and reproducibility, which is the precision between collaborating laboratories.

Single-laboratory precision, or repeatability, can initially be based on one homogeneous sample and is expressed as a standard deviation.
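A repeatability estimate of this kind reduces to computing the standard deviation (and usually the relative standard deviation) of replicate results; the replicate values below are hypothetical.

```python
import statistics

# Repeatability sketch: standard deviation and relative standard deviation
# (%RSD) from replicate analyses of one homogeneous sample.
replicates = [10.1, 9.9, 10.2, 10.0, 9.8, 10.1]  # hypothetical results

mean = statistics.mean(replicates)
sd = statistics.stdev(replicates)   # sample standard deviation
rsd = 100.0 * sd / mean             # relative standard deviation, %

print(f"mean = {mean:.2f}, SD = {sd:.3f}, %RSD = {rsd:.2f}")
```

The %RSD form is convenient because acceptance criteria are usually stated relative to the measured level rather than in absolute units.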

Specificity

Specificity is the ability to unambiguously measure the content or potency of an analyte in the presence of other components such as impurities, degradation products, and matrix components. In the case of trace analysis, and in the context of inductively coupled plasma-optical emission spectroscopy (ICP-OES) and inductively coupled plasma-mass spectrometry (ICP-MS), specificity can be evaluated by confirming that the interferences in the measurement process are not significant. Spectral interference studies founded upon single-analyte analyses are generally the basis for the evaluation of specificity, and the incorporation of carefully designed synthetic matrices that mimic the samples under study is particularly useful when evaluating specificity for complex sample types.

Limit of detection

Limit of detection expresses the lowest amount of analyte in a sample that can be reliably distinguished, with a stated confidence level, from the absence of that substance when using a specific test method. This measurement can be determined using several approaches, depending on whether the procedure is non-instrumental or instrumental. These can be based on visual evaluation, the signal-to-noise ratio, the standard deviation of the response, and the slope of the instrument response.

In the case of trace analysis, the limit of detection is defined as 3 × SD0, where SD0 is the standard deviation as the concentration of the analyte approaches zero. The value of SD0 can be obtained by extrapolation from a plot of standard deviation versus concentration, in which three concentrations spanning the low, mid, and high regions of interest are each analyzed approximately 11 times in a matrix that matches the sample matrix.
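The extrapolation just described can be sketched numerically: fit standard deviation against concentration, take the intercept as SD0, and multiply by 3. The concentrations, standard deviations, and the `linear_fit` helper below are hypothetical illustrations.

```python
# Sketch of the trace-analysis approach: estimate SD0 (the standard deviation
# as analyte concentration approaches zero) by linear extrapolation of SD
# versus concentration, then take LOD = 3 * SD0.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Standard deviations observed at three concentrations (low, mid, and high
# regions of interest), each from ~11 replicates in a matrix-matched solution.
concentrations = [1.0, 5.0, 10.0]    # hypothetical, e.g. ppb
sds            = [0.05, 0.09, 0.15]  # hypothetical SD at each concentration

slope, sd0 = linear_fit(concentrations, sds)
lod = 3 * sd0
print(f"SD0 = {sd0:.4f}, LOD = {lod:.4f}")
```

In practice the plot should be inspected to confirm that SD actually trends linearly toward a positive intercept before the extrapolation is trusted.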

Limit of quantitation

Limit of quantitation is the lowest amount of analyte in a sample that can be determined with acceptable precision and accuracy. As with limit of detection, numerous approaches can be used, depending on whether the procedure is instrumental or non-instrumental. These can be based on visual evaluation, the signal-to-noise ratio, the standard deviation of the response and the slope, the standard deviation of the blank, and the calibration curve.

In the case of trace analysis, the limit of quantitation is defined as 10 × SD0, with an uncertainty of about 30% at the 95% confidence level.
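Because the trace-analysis limits of detection and quantitation both derive from the same extrapolated SD0 (described under Limit of detection), the two definitions reduce to simple multiples; the SD0 value below is purely hypothetical.

```python
# Minimal sketch relating LOD and LOQ to SD0, the standard deviation
# extrapolated to zero analyte concentration (hypothetical value).
sd0 = 0.037          # hypothetical extrapolated standard deviation
lod = 3 * sd0        # limit of detection
loq = 10 * sd0       # limit of quantitation
print(f"LOD = {lod:.3f}, LOQ = {loq:.3f}")
```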

Linearity

This refers to the linearity of the relationship between concentration and assay measurement across the range of the analytical procedure. In validation, if linearity is not attainable, a nonlinear model can be used. The goal is to have a model, whether linear or nonlinear, that closely describes the concentration–measurement relationship.

In the case of trace analysis, linearity applies over the interval between the limit of quantitation and the point at which a plot of concentration versus response becomes nonlinear.

Range

Range is the interval between the upper and lower concentration (amounts) of analyte in the sample (including these concentrations) for which it has been demonstrated that the analytical procedure has a suitable level of precision, accuracy, and linearity.

For trace analysis, the range extends from the limit of quantitation to the point at which a plot of concentration versus response becomes nonlinear.

Robustness

Robustness is the capacity of a method to remain unaffected by deliberate variations in method parameters.

In the case of trace analysis using ICP, robustness parameters include temperature (laboratory and spray chamber), concentration of reagents, RF power, nebulizer, spray chamber, torch design, torch height, sampler and skimmer cone design/construction material, integration time, detector design, reaction/collision cell type or conditions, and resolution capability. Deliberately altering these parameters reveals whether the reliability of the determination is affected.

System suitability testing

The idea behind system suitability testing is that the equipment, electronics, analytical operations, and samples to be analyzed together constitute an integral system that can be evaluated as a whole. The test parameters to be established for a particular procedure depend on the type of procedure being validated.

Importance of method validation

Method validation of test procedures is an important aspect of compliance with the various regulations. Compliance requires that the validation of a test procedure be conducted before its introduction into routine use. Whenever the experimental conditions for which a test procedure has been validated change, the procedure must be revalidated. Once a test procedure has been developed and validated, a report should be prepared that includes the scope of the test procedure and the methods followed to validate it. This validation report becomes part of the documented evidence demonstrating that the test procedure complies with the applicable regulations and is fit for its intended purpose.

Therefore, it is advisable for a laboratory to develop a master plan to validate its test procedures to ensure its continued compliance with the various regulations and requirements, including the FDA’s CGMP regulations and the USP’s compendial test requirements.

For more information on method validation of test procedures, please visit www.inorganicventures.com.

Additional reading

  1. https://www.fda.gov/downloads/drugs/guidances/ucm386366.pdf
  2. https://hmc.usp.org/sites/default/files/documents/HMC/GCs-Pdfs/c1225_1SUSP40.pdf
  3. http://www.ich.org/fileadmin/Public_Web_Site/ICH_Products/Guidelines/Quality/Q2_R1/Step4/Q2_R1__Guideline.pdf

Lina Genovesi, Ph.D., JD, is a technical, regulatory, and business writer based in Princeton, NJ, U.S.A.; e-mail: [email protected]; www.linagenovesi.com.

