Negative Percent Agreement Vs Specificity

December 13, 2020


Mike Burroughs

To avoid confusion, we recommend always using the terms positive percent agreement (PPA) and negative percent agreement (NPA) when describing the agreement of these tests. We have seen product information for a COVID-19 rapid test that uses the terms "relative" sensitivity and "relative" specificity compared with another test. The word "relative" is misleading: it implies that you could use these "relative" ratios to calculate the sensitivity/specificity of the new test from the sensitivity/specificity of the comparator test. That is simply not possible.

We also consider data from a study conducted in the United States and the Netherlands for a new sepsis diagnostic test [25]. Three independent diagnoses per patient were made by expert panelists based on information in the case report forms, and the combination of the three diagnoses was used to determine the overall confidence of the classification for each patient, as described in S2 Supporting Information ("A Method for Estimating Patient Classifications by an Expert Panel Comparator"). Erroneous classifications were introduced at random, weighted by the distribution of uncertainty observed in the patient classifications, as described in S3 Supporting Information ("Weighting for Misclassified Classification Events"). To present a statistically valid representation of the randomness of the selection, each injection of classification noise was drawn at random from the distribution of uncertainty observed in the study, the process was repeated for 100 iterations, and aggregate results are reported.

Four different patient selections from the study as a whole (N = 447) were made and analyzed separately: (1) the subset of patients (N = 290; 64.9% of all patients) who received unanimous diagnoses from the external expert panelists and to whom the same diagnosis was assigned by the researchers at the clinical sites. We regarded this as the "super-unanimous" group and assumed that if the external panel experts and the clinical-site researchers all agreed, the diagnoses were very likely correct. These patients represent the study cohort stratum with the lowest probability of error in the comparator; (2) the subset of patients (N = 410; 91.7% of the total) who received a consensus (majority) diagnosis from the external panel. This subgroup excluded 37 patients classified as "indeterminate" because the experts had not reached a consensus diagnosis; (3) all patients (N = 447) with a forced diagnosis of positive or negative, regardless of the degree of uncertainty associated with each patient; (4) the subset of patients whose clinical records indicated respiratory disorders (N = 93; 20.8% of the total), for whom relatively high classification uncertainty was expected and observed.
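The noise-injection procedure described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the study's actual code: the function name, the toy labels, and the per-patient uncertainty weights are all assumptions; the only idea taken from the text is that each label is flipped with a probability drawn from its observed classification uncertainty, repeated over 100 iterations, with results aggregated.

```python
import random

def average_injected_misclassification(labels, uncertainty, iterations=100, seed=0):
    """Flip each reference label with probability equal to that patient's
    classification uncertainty; repeat for `iterations` rounds and return
    the mean fraction of labels flipped per round (hypothetical sketch)."""
    rng = random.Random(seed)
    flip_rates = []
    for _ in range(iterations):
        flipped = sum(1 for _label, u in zip(labels, uncertainty)
                      if rng.random() < u)  # noise draw weighted by uncertainty
        flip_rates.append(flipped / len(labels))
    return sum(flip_rates) / iterations

# Toy example: 5 patients with hypothetical uncertainty weights.
labels = [1, 0, 1, 1, 0]
uncertainty = [0.05, 0.10, 0.30, 0.02, 0.15]
avg_rate = average_injected_misclassification(labels, uncertainty)
print(f"average injected misclassification rate: {avg_rate:.3f}")
```

Averaging over many iterations makes the reported agreement statistics reflect the uncertainty distribution rather than any single random draw.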

Each of these 4 patient selections had an expected misclassification rate, determined from the average residual uncertainty measured in the evaluations of the three external panelists, as described in S4 Supporting Information. In the FDA's latest guidance for laboratories and manufacturers, "Policy for Coronavirus Disease-2019 Tests During the Public Health Emergency," the FDA explains that a clinical agreement study should be used to establish performance characteristics (sensitivity/PPA, specificity/NPA). While the concepts of sensitivity and specificity are widely known and used, the terms PPA and NPA are less familiar. Although the formulas for positive and negative percent agreement are identical to those for sensitivity and specificity, it is important to distinguish them because the interpretation is different: sensitivity and specificity are measured against true disease status, whereas PPA and NPA are measured against another, imperfect test, so a disagreement does not tell you which test was wrong.
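The point that the formulas coincide while the interpretation differs can be made concrete with a small calculation. This is a minimal sketch; the function name and the 2x2 counts are hypothetical, and the columns of the table represent the comparator test's results, not true disease status.

```python
def percent_agreement(tp, fp, fn, tn):
    """Compute PPA and NPA from a 2x2 table comparing a new test (rows)
    against a comparator test (columns).

    The arithmetic is identical to sensitivity = TP / (TP + FN) and
    specificity = TN / (TN + FP), but the denominators count comparator
    positives/negatives, not patients with/without the disease."""
    ppa = tp / (tp + fn)
    npa = tn / (tn + fp)
    return ppa, npa

# Hypothetical counts: 100 comparator-positive and 100 comparator-negative results.
ppa, npa = percent_agreement(tp=90, fp=5, fn=10, tn=95)
print(f"PPA = {ppa:.1%}, NPA = {npa:.1%}")  # PPA = 90.0%, NPA = 95.0%
```

Reporting these numbers as "sensitivity" and "specificity" would silently assume the comparator test is a perfect reference, which is exactly the confusion the PPA/NPA terminology is meant to avoid.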
