Evidence-Based Practice Glossary

Entries in this glossary were adapted from a number of sources.

The definitions often differ slightly among these sources. Although we have selected a definition for each entry to get you started, your understanding of these concepts will grow if you examine the definitions provided by those sources.

This glossary is a work in progress; entries will be added and updated over time and in response to feedback from users. Please send your comments and suggestions to ncep@asha.org.

Base rate

The prevalence (i.e., percentage) of people with a disorder in a sample

For example, if 20 people in a sample of 100 have dysphagia, the base rate of dysphagia in this sample is 20%.

Baseline

A benchmark, measurement, or calculation used as a basis for comparison

Bias

Systematic deviation of study results from the true results, because of the way(s) in which the study was conducted

Blinding/masking/concealment

The practice of keeping investigators or subjects of a study ignorant of the group to which a subject has been assigned as a means of minimizing bias

Case series

An uncontrolled description of events and outcomes for a sequence of individual cases (i.e., patients, clients, or students)

Case study

An uncontrolled observational (descriptive) report of events and outcomes in a single case

Case-control study

A retrospective, observational study comparing a group of people with a disorder (cases) and a group of people free of the disorder (controls) to determine whether differences in the groups' previous exposures, experiences, risk factors, etc. could explain their different outcomes

Clinical practice guideline

A statement or recommendation developed by a group of experts to assist practitioners and clients with clinical decision making. Clinical practice guidelines may or may not be based on a systematic review and critical appraisal of the scientific evidence supporting the recommendation.

Cohort study

An observational study in which a sample of participants is followed over time in an effort to determine the factors leading to different outcomes

Confidence interval

For a certain value of a variable in a sample, a confidence interval is a range of values within which the value of that variable in the population is thought to lie with a specified probability

For the sake of correctness, a small technical point is necessary. Often, confidence intervals are misinterpreted as "the confidence interval has a 95% chance of containing the population value." However, the correct interpretation of a confidence interval requires a bit of imagination. If exactly the same study (as that reporting a confidence interval) were carried out many times with samples of the same size from the same population(s), and a confidence interval were calculated for each set of results, 95% of the confidence intervals would contain the population value.

Let's say a study of literacy in 5th graders yields a 95% confidence interval for the overall score on the TOWRE reading test with a lower bound of 80 and an upper bound of 120. If the same study were carried out in 100 different samples, the confidence intervals computed from about 95 of them would contain the population value. In a practical sense, then, we conclude that the population value is likely to lie between 80 and 120.
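
To make the arithmetic concrete, here is a minimal sketch (in Python) of how a 95% confidence interval for a sample mean might be computed. The scores are invented for illustration, and the normal approximation (1.96 standard errors) is used for simplicity; a t critical value would be more appropriate for a sample this small.

```python
import math
import statistics

# Hypothetical reading-test scores for a small sample of 5th graders (invented data).
scores = [92, 105, 88, 110, 97, 101, 85, 118, 95, 109]

n = len(scores)
mean = statistics.mean(scores)
sd = statistics.stdev(scores)      # sample standard deviation
se = sd / math.sqrt(n)             # standard error of the mean

# Approximate 95% confidence interval for the population mean.
lower = mean - 1.96 * se
upper = mean + 1.96 * se
print(f"95% CI for the mean: ({lower:.1f}, {upper:.1f})")
```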

Confounder

A factor extraneous to the main question in a study that nonetheless affects the outcome and thus distorts the true relationship between study variables

For example, a researcher wants to compare the speech production outcomes of children who are deaf or hard of hearing receiving one of two different sensory devices: hearing aids [HAs] or cochlear implants [CIs]. However, by nature of the post-implantation process, children in the CI group receive more intervention sessions than do children with HAs. In this case, treatment intensity (amount of treatment) is a confounder that makes it difficult to attribute group differences to the effects of the sensory devices themselves.

Controlled study

A study involving a comparison (control) group

Critical appraisal

The process of assessing and interpreting evidence by systematically considering its validity, results, and relevance

Cross-sectional study

A study of a single sample at one point in time in an effort to understand the relationships among variables in the sample

Cross-over trial

A study in which participants first receive one type of treatment and then are switched to a different type of treatment

Effect size

In general terms, an effect size is a statistical measure of the size of a relationship that is being investigated. For example, when two groups (such as a treatment group and a control group) are being compared, an effect size reflects the size or magnitude of any difference found between them. In a correlational study an effect size reflects the strength of the association between two variables. And in a study of a diagnostic tool for identifying people who have a certain characteristic or condition, effect size measures indicate the accuracy of the new measure when compared to the gold standard diagnostic measure.

There are dozens of types of effect size. Reports of studies on the clinical utility of diagnostic protocols often include measures of effect size such as positive likelihood ratio (i.e., the value of a test result for ruling in the presence of a communication disorder) and negative likelihood ratio (i.e., the value of a test for ruling out the presence of a communication disorder).

Several variations of Cohen's (1988) d are often reported in group comparison treatment studies. These index the degree of change from pre-treatment to post-treatment, or the degree of change across groups undergoing different treatment experiences. The effect sizes termed relative risk ratio and odds ratio are often used to assess the success of a treatment protocol, in a categorical sense, against a particular outcome criterion (e.g., discharge to an independent-living environment, entry into mainstream kindergarten, self-feeding with an unrestricted diet, quality-of-life score exceeding a critical value, hearing aid satisfaction with 10 hours of aural habilitation).

Clinicians and researchers may use estimates of effect size together with statistical probability to achieve a fuller interpretation of a research outcome than is possible through an inspection of statistical probability alone. That is, measures of effect size provide one means of moving beyond statistical significance toward clinical significance. Furthermore, estimates of effect size make it possible to combine results from several related studies. A meta-analysis pools estimates of effect size taken from several research studies addressing a common clinical question. In that manner, the weight of accumulated scientific evidence on a certain clinical protocol can be assessed and interpreted.
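
As an illustration of one common effect size, the sketch below computes the Cohen's d mentioned above for two hypothetical groups. The scores are invented, and the pooled-standard-deviation formula shown is only one of several variants in use.

```python
import statistics

# Hypothetical post-treatment scores for two groups (invented data).
treatment = [78, 85, 82, 90, 76, 88, 84, 81]
control   = [70, 74, 68, 79, 72, 75, 71, 73]

mean_t, mean_c = statistics.mean(treatment), statistics.mean(control)
sd_t, sd_c = statistics.stdev(treatment), statistics.stdev(control)
n_t, n_c = len(treatment), len(control)

# Pooled standard deviation, then Cohen's d as the standardized mean difference.
pooled_sd = (((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)) ** 0.5
d = (mean_t - mean_c) / pooled_sd
print(f"Cohen's d = {d:.2f}")
```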

Effectiveness

The extent to which an intervention produces favorable outcomes under usual or everyday conditions

Efficacy

The extent to which an intervention produces favorable outcomes under ideally controlled conditions

Evidence-Based Practice

The integration of (a) clinical expertise, (b) current best evidence, and (c) client values to provide high-quality services reflecting the interests, values, needs, and choices of the individuals served

Experimental study

A study in which the investigator actively manipulates (alters) one or more variables in order to contrast the experimental and control conditions

Gold standard/reference standard

A method, procedure, or measurement that is widely regarded as the best available, against which new diagnostic tests should be compared

Incidence

The number of new cases of a condition occurring in a population over a specified period of time

Intention-to-treat analysis

An analysis of a randomized controlled trial where participants are analyzed according to the group to which they were initially randomly allocated, regardless of whether or not they had dropped out, fully complied with the treatment, or crossed over and received the other treatment. Because it maintains the original randomization, an intention-to-treat analysis contributes to the internal validity of a treatment study.

For example, some participants in treatment studies may drop out before the study ends, may attend few treatment sessions, or may receive treatments other than the ones they were assigned to receive. However, excluding the data from these individuals could bias the results (e.g., if participants who remained in the study were more highly motivated to comply with treatment recommendations than participants who dropped out).

Levels of evidence

An approach to evaluating evidence based on its quality, using criteria such as protection from bias and confounding, effect size, and precision.

Likelihood Ratio

The ratio of the probability that a given score on a diagnostic test would be expected in a patient with the disease or condition of interest to the probability that the same result would be expected in a patient without that disease.

The positive likelihood ratio (LR+) of a diagnostic test reflects the probability that a person with the target disorder will obtain a score in the positive (affected) range, divided by the probability that a person without the disorder will score in the affected range. To calculate the LR+, divide the sensitivity of the test by 1 minus its specificity.

The negative likelihood ratio (LR-) of a diagnostic test reflects the probability that a person with the target disorder will obtain a negative (unaffected) score divided by the probability of a negative score in a person who does not have the disorder. To calculate the LR-, divide 1 minus the test's sensitivity by the test's specificity.
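
A minimal sketch of these two calculations, assuming a hypothetical test with sensitivity 0.90 and specificity 0.95 (the same figures used in the sensitivity and specificity examples later in this glossary):

```python
# Hypothetical test accuracy values (assumed for illustration).
sensitivity = 0.90
specificity = 0.95

lr_positive = sensitivity / (1 - specificity)    # LR+ = sensitivity / (1 - specificity)
lr_negative = (1 - sensitivity) / specificity    # LR- = (1 - sensitivity) / specificity

print(f"LR+ = {lr_positive:.1f}")   # 18.0: a positive score is 18 times as likely in affected people
print(f"LR- = {lr_negative:.2f}")   # 0.11: a negative score is far more likely in unaffected people
```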

Meta-analysis

A specialized form of systematic review in which the results from several studies are summarized using a statistical technique to yield a single weighted estimate of their findings
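
As a rough illustration of the "single weighted estimate" idea, the sketch below pools three invented effect sizes using fixed-effect, inverse-variance weights. Real meta-analyses involve additional steps (e.g., heterogeneity testing, random-effects models), so this is only a conceptual sketch.

```python
# Hypothetical effect sizes (d) and their variances from three studies (invented numbers).
effects   = [0.40, 0.55, 0.30]
variances = [0.04, 0.09, 0.02]

# Inverse-variance weighting: more precise studies receive more weight.
weights = [1 / v for v in variances]
pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
print(f"Pooled effect size = {pooled:.2f}")
```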

Negative predictive value

The proportion of people who score in the negative (unaffected) range on a diagnostic test who actually are free of (do not have) the disorder of interest

Observational study

A study in which events are observed as they unfold, without any experimental manipulation

Odds ratio

A ratio of odds of an event in one group to the odds of the same event in a different group

Odds ratio (OR) is a measure of effect size. It is often reported in case-control studies designed to assess a certain clinical outcome as it occurs in (a) a group of individuals receiving a certain treatment protocol, and (b) another group of individuals who do not receive that protocol. In this usually retrospective research design, an important clinical outcome is assessed using a dichotomous dependent variable (e.g., independent feeding - or not, satisfaction with hearing aid fit - or not, return to work - or not, grade-appropriate reading level - or not).

In technical terms, OR is the odds of a particular clinical outcome among individuals exposed to a clinical protocol (i.e., experimental group) divided by the odds of that outcome among individuals who are not exposed to the protocol (i.e., control group). Values of OR exceeding 1.0 indicate that the clinical outcome is more likely to occur in the experimental group. Values of OR that are less than 1.0 indicate that the outcome is more likely in the control group. When OR equals 1.0, the outcome is just as likely in the control and experimental groups.
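
A minimal sketch of the odds ratio calculation from a hypothetical 2 x 2 table (all counts invented for illustration):

```python
# Hypothetical 2 x 2 table for a dichotomous outcome:
#                       outcome present   outcome absent
# exposed (treated)            30               20
# unexposed (control)          15               35
a, b = 30, 20   # exposed group: outcome present / absent
c, d = 15, 35   # unexposed group: outcome present / absent

odds_exposed = a / b          # odds of the outcome in the exposed group
odds_unexposed = c / d        # odds of the outcome in the unexposed group
odds_ratio = odds_exposed / odds_unexposed
print(f"OR = {odds_ratio:.2f}")   # values above 1.0 favor the outcome in the exposed group
```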

Positive predictive value

The proportion of people who score in the positive (affected) range on a diagnostic test who actually have the disorder of interest

Prevalence

The proportion of people with a finding or disease in a given population at a given time

Randomized controlled trial (RCT)

A study in which people are assigned at random (by chance alone) to receive one of several treatment conditions, including the experimental treatment and either a different type of treatment or no treatment

Relative risk ratio (RR)

Relative risk ratio is a measure of effect size. It is often reported in a cohort study designed to assess a certain clinical outcome as it occurs in (a) a group of individuals receiving a certain treatment protocol, and (b) another group of individuals who do not receive that protocol. In this usually prospective research design, an important clinical outcome is assessed using a dichotomous dependent variable (e.g., elimination of vocal nodules - or not, increasing two or more points on a scale of functional communication - or not, discharge to independent living - or not, passing/failing a hearing screening).

In technical terms, relative risk ratio (RR) is the incidence rate of a particular clinical outcome among individuals exposed to a clinical protocol (i.e., experimental group) divided by the incidence rate of that outcome among individuals who are not exposed to the protocol (i.e., control group). Values of RR exceeding 1.0 indicate that the clinical outcome is more likely to occur in the experimental group. Values of RR that are less than 1.0 indicate that the outcome is more likely in the control group. When RR equals 1.0, the outcome is just as likely in the control and experimental groups.
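
A minimal sketch of the relative risk calculation, using the same invented 2 x 2 counts as the odds ratio sketch above so the two measures can be compared:

```python
# Hypothetical 2 x 2 table (invented counts, matching the odds ratio example).
a, b = 30, 20   # treated group: outcome present / absent
c, d = 15, 35   # control group: outcome present / absent

risk_treated = a / (a + b)    # incidence proportion in the treated group
risk_control = c / (c + d)    # incidence proportion in the control group
relative_risk = risk_treated / risk_control
print(f"RR = {relative_risk:.2f}")   # RR > 1.0: the outcome is more likely in the treated group
```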

Reliability

The extent to which a measurement instrument yields consistent, stable, and uniform results over repeated observations or measurements under the same conditions each time

Single-subject designs

Also known as single-case experimental designs, these designs allow researchers to closely examine changes within each participant. Each participant serves as their own control (i.e., is compared with themselves), and researchers measure the outcome (dependent) variable repeatedly across phases (e.g., baseline, intervention, and withdrawal phases). There are many variations of single-subject designs.

Sensitivity

The proportion of people previously diagnosed with a disorder according to a gold standard or reference test who score in the positive (affected) range on a new (index) test

For example, a new test to diagnose stuttering is administered to 100 adults who have previously been diagnosed as people who stutter (PWS). Of these PWS, 90 obtain a score on the new test that identifies them as people who stutter. In this sample, the sensitivity of the new (index) test is 90/100 = 0.90 (or 90%), indicating that the new test only missed 10 of the people who stutter.

Specificity

The proportion of people previously identified as free of a particular disorder who score in the negative (unaffected) range on a new diagnostic test

For example, a new test to diagnose stuttering is administered to 100 adults who have never been suspected or diagnosed as people who stutter. Of these, 95 obtain a score within the negative (unaffected) range. The specificity of the new test is 95/100 = 0.95 (or 95%), indicating that in this sample the new test only misidentified 5 typical speakers as people who stutter.
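
Pulling the last few entries together, the sketch below computes sensitivity, specificity, and the two predictive values from a single 2 x 2 table built from the counts in the stuttering examples above. Note that the predictive values depend on the base rate of the disorder in the sample (50% here), so they will differ in samples with other base rates.

```python
# Counts taken from the stuttering examples above (index test vs. gold standard).
tp = 90   # people who stutter correctly identified by the new test
fn = 10   # people who stutter missed by the new test
tn = 95   # typical speakers correctly classified as unaffected
fp = 5    # typical speakers misidentified as people who stutter

sensitivity = tp / (tp + fn)   # 0.90
specificity = tn / (tn + fp)   # 0.95
ppv = tp / (tp + fp)           # positive predictive value in this sample
npv = tn / (tn + fn)           # negative predictive value in this sample

print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")
```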

Systematic review

A summary of the scientific literature in which explicit methods are used to perform a comprehensive search and critical appraisal of individual studies. In some systematic reviews, known as meta-analyses, statistical techniques are used to summarize the degree or extent of the findings across studies.

Validity

The degree to which a measurement, an inference, or a conclusion is likely to be true and free of systematic error
