
Laboratory Analysis: Terms and Concepts, Summaries of Mathematics

An overview of various terms and concepts related to laboratory analysis, including accuracy, bias, precision, imprecision, control, and statistical methods. It covers topics such as analytical and diagnostic sensitivity, specificity, and predictive value, as well as quality control and proficiency testing.

What you will learn

  • What is the difference between bias and imprecision?
  • What is accuracy in laboratory analysis?
  • How is precision measured in laboratory analysis?

Typology: Summaries

2021/2022

Uploaded on 03/13/2022

MGolden 🇺🇸

1. Accuracy: without error; closeness to the true value.
2. Analytical Measurement Range (AMR): also known as the linear or dynamic range; the range of analyte concentrations that can be measured directly, without dilution, concentration, or other pretreatment.
3. Bias: systematic discrepancy between a measurement and the true value; may be constant or proportional and may adversely affect test results. It is the difference between the observed mean and the reference mean: negative bias indicates that test values tend to be lower than the reference value, whereas positive bias indicates test values are generally higher. Bias is a type of constant systematic error.
4. Clinical Laboratory Improvement Amendments (CLIA): regulations signed into federal law in 1988 that mandate standards in clinical laboratory operations and testing.
5. Clinically Reportable Range (CRR): range of analyte that a method can quantitatively report, allowing for dilution, concentration, or other pretreatment used to extend the AMR.
6. Confidence Interval: range of values that includes the true value with a specified probability, usually 90% or 95%. For example, consider a 95% confidence interval of 0.972 to 0.988 for a slope from a method comparison experiment: if the same experiment were conducted 100 times, the slope would fall between 0.972 and 0.988 in 95 of the 100 runs. Confidence intervals convey and quantify the variability of estimates.
7. Constant Error: a type of systematic error in the same direction and of the same magnitude; the magnitude of change is constant and not dependent on the amount of analyte.
8. Control: a substance or material of determined value, used to monitor the accuracy and precision of a test; controls are run with patient specimens.
9. Control Limit: threshold at which a value is statistically unlikely.
10. Descriptive Statistic: statistics or values (e.g., mean, median, and mode) used to summarize the important features of a group of data; analysis that describes, shows, or summarizes a data set in a meaningful way so that, for example, patterns might emerge from the data.
11. Dispersion: the spread of data; most simply estimated by the range, the difference between the largest and smallest observations. The most commonly used statistic for describing the dispersion of groups of single observations is the standard deviation, usually represented by the symbol s.
12. Histogram: graphical representation of data in which the number or frequency of each result is placed on the y-axis and the value of the result is plotted on the x-axis.
13. Imprecision: dispersion of repeated measurements about the mean due to analytic error.
14. Inaccuracy: difference between a measured value and its true value due to systematic error, which can be either constant or proportional.
15. Inferential Statistics: values or statistics used to compare the features of two or more groups of data; techniques that allow representative samples to support generalizations (inferences, probabilities) about the populations from which the samples were drawn.
16. Levey-Jennings Control Chart: a chart illustrating the allowable limits of error in laboratory test performance, the limits being a defined deviation from the mean of a control serum, most commonly ±2 standard deviations.
17. Limit of Detection (LoD): lowest amount of analyte accurately detected by a method.
18. Linear Regression: statistical calculation of the slope (b), the y-intercept (a), the standard deviation of the points about the regression line (sy/x), and the correlation coefficient (r) to compare two methods.
19. Multirule Procedure: decision criteria for determining whether an analytic run is in control; used to detect random and systematic error over time.
20. Negative Predictive Value: chance that an individual does not have a given disease or condition if the test is within the reference interval; NPV = TN/(TN + FN) × 100.
Chapter 3 Terms
Study online at quizlet.com/_2hj0zy
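The linear regression statistics used for method comparison (slope b, intercept a, sy/x, and r) can be computed with a few lines of arithmetic. A minimal sketch in plain Python; the paired results below are hypothetical, with x from a reference method and y from the test method:

```python
import math

# Hypothetical paired glucose results (mg/dL): reference method x, test method y
x = [70, 85, 100, 120, 150, 180, 210, 250]
y = [72, 86, 99, 123, 149, 184, 212, 254]

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# Least-squares slope (b) and y-intercept (a)
sxx = sum((xi - mean_x) ** 2 for xi in x)
sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
b = sxy / sxx
a = mean_y - b * mean_x

# Standard deviation of the points about the regression line (sy/x)
residuals = [yi - (a + b * xi) for xi, yi in zip(x, y)]
s_yx = math.sqrt(sum(res ** 2 for res in residuals) / (n - 2))

# Correlation coefficient (r)
syy = sum((yi - mean_y) ** 2 for yi in y)
r = sxy / math.sqrt(sxx * syy)

print(f"slope b = {b:.3f}, intercept a = {a:.2f}, sy/x = {s_yx:.2f}, r = {r:.4f}")
```

A slope near 1, an intercept near 0, and a small sy/x suggest the two methods agree; a slope away from 1 indicates proportional error, and a nonzero intercept indicates constant error.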
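Several of the quality-control terms above (dispersion, imprecision, bias, control limits, SDI) reduce to simple arithmetic on repeated control measurements. A minimal sketch; the control values and target mean below are invented for illustration:

```python
import math

# Hypothetical repeated QC results for one control material (mg/dL)
target_mean = 100.0
qc = [99.1, 100.4, 98.7, 101.2, 100.0, 99.6, 100.9, 98.9, 100.2, 99.8]

n = len(qc)
mean = sum(qc) / n
# Sample standard deviation s: the usual measure of dispersion/imprecision
s = math.sqrt(sum((v - mean) ** 2 for v in qc) / (n - 1))
cv = 100 * s / mean                     # coefficient of variation, %
bias = mean - target_mean               # estimate of constant systematic error
limits = (mean - 2 * s, mean + 2 * s)   # ±2 SD Levey-Jennings control limits

# Standard Deviation Index (SDI) for a new control value
new_value = 101.5
sdi = (new_value - mean) / s

print(f"mean={mean:.2f}  s={s:.2f}  CV={cv:.2f}%  bias={bias:.2f}")
print(f"2-SD control limits: {limits[0]:.2f} to {limits[1]:.2f}; SDI of {new_value} = {sdi:.2f}")
```

A control value falling outside the ±2 SD limits, or an SDI well away from 0, signals that the run may be out of control and should be evaluated (for example, with a multirule procedure).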


21. Nonparametric Method: statistical test that makes no specific assumption about the distribution of the data. Nonparametric methods rank the reference data in order of increasing size. Because the majority of analytes are not normally (Gaussian) distributed, nonparametric tests are the recommended analysis for most reference intervals.
22. Parametric Method: statistical test that assumes the observed values, or some mathematical transformation of those values, follow a Gaussian (normal) distribution.
23. Population: includes all the data you are interested in.
24. Positive Predictive Value: chance of an individual having a given disease or condition if the test is abnormal; PPV = TP/(TP + FP) × 100.
25. Precision: the closeness of repeated results; quantitatively expressed as standard deviation or coefficient of variation.
26. Predictive Value Theory: refers to diagnostic sensitivity, specificity, and predictive value. The predictive value of a test can be expressed as a function of sensitivity, specificity, and disease prevalence.
27. Proficiency Testing: confirmation of the quality of laboratory testing by means of "unknown" samples; the results are compared with those of other external laboratories to give an objective indication of test accuracy.
28. Proportional Error: a type of systematic error whose magnitude changes as a percentage of the analyte present; error dependent on analyte concentration.
29. Quality Control (QC): system for recognizing and minimizing analytical errors. The purpose of a quality control system is to monitor analytical processes, detect analytical errors during analysis, and prevent the reporting of incorrect patient values. Quality control is one component of the quality assurance system.
30. Random Error: a type of analytical error; random error affects precision and is the basis for disagreement between repeated measurements. Increases in random error may be caused by factors such as technique and temperature fluctuations.
31. Reference Interval: the usual values for a healthy population; also called the normal range.
32. Reference Method: an analytical method used for comparison; a method with negligible inaccuracy in comparison with its imprecision.
33. Sample: smaller data set used to represent the larger population.
34. Sensitivity (Analytic): ability of a method to detect small quantities of an analyte.
35. Sensitivity (Diagnostic): the ability of a test to detect a given disease or condition; the proportion of patients with a given disease or condition in whom a test intended to identify that disease or condition yields positive results; % Diagnostic Sensitivity = TP/(TP + FN) × 100.
36. Shift: a sudden change in the data and the mean.
37. Specificity (Analytic): ability of a method to measure only the analyte of interest; with regard to quality control, the ability of an analytical method to quantitate one analyte in the presence of others in a mixture such as serum.
38. Specificity (Diagnostic): the ability of a test to correctly identify the absence of a given disease or condition; % Diagnostic Specificity = TN/(TN + FP) × 100.
39. Standard Deviation Index (SDI): the difference between a measured value and the mean, expressed as a number of SDs. An SDI = 0 indicates the value is accurate (in 100% agreement); an SDI = 3 is 3 SDs away from the target (mean) and indicates error. SDI may be positive or negative.
40. Systematic Error: results in inaccuracy; a type of analytical error that arises from factors contributing a constant difference, either positive or negative, and directly affects the estimate of the mean. Increases in systematic error can be caused by poorly made standards or reagents, failing instrumentation, poorly written procedures, etc.
41. Total Error: random error plus systematic error.
42. Trend: a gradual change in the data and the mean.
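The diagnostic sensitivity, specificity, PPV, and NPV formulas above all come from the same 2×2 table of true/false positives and negatives. A small worked example; the counts below are hypothetical:

```python
# Hypothetical 2x2 results for a test evaluated against a gold standard:
# TP = diseased, test positive; FN = diseased, test negative
# TN = healthy, test negative;  FP = healthy, test positive
TP, FP, TN, FN = 90, 15, 180, 10

sensitivity = 100 * TP / (TP + FN)  # % of diseased patients the test flags
specificity = 100 * TN / (TN + FP)  # % of healthy patients the test clears
ppv = 100 * TP / (TP + FP)          # chance a positive result means disease
npv = 100 * TN / (TN + FN)          # chance a negative result means no disease

print(f"sensitivity = {sensitivity:.1f}%  specificity = {specificity:.1f}%")
print(f"PPV = {ppv:.1f}%  NPV = {npv:.1f}%")
```

Note that sensitivity and specificity are properties of the test itself, while PPV and NPV also depend on disease prevalence in the tested population, which is the point of predictive value theory.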