Evidence-Based Practice

If an intervention or assessment is "evidence-based," it has been shown to produce meaningful and reproducible results in carefully controlled research studies.

The phrase evidence-based practice (or EBP) is heard frequently today in both educational and health care settings. In the past, it was not uncommon for an intervention method to be promoted and used just because it seemed to "make sense," and because one or more "experts" could share anecdotes of how well a given procedure worked in their experience. Today, however, there is a growing demand from consumers, health care insurers, and policy makers for actual proof that a procedure is effective.

There are many ways to evaluate and document the effectiveness of a method, and knowledge of the principles of applied research and experimental design is needed to evaluate the quality of evidence offered in support of a particular method. At a minimum, however, it is appropriate for professionals and consumers alike to ask about the evidence base for any assessment or treatment procedure they are considering.

The gold standard of evidence for any intervention is to have its effectiveness demonstrated in a large, carefully controlled experimental research study involving hundreds or even thousands of subjects who are randomly assigned to experimental and control (or comparison) groups. The advantage of these designs is that they allow us to conclude, with a specified level of confidence (e.g., 95% or 99%), that the reported results for an intervention were actually due to that particular intervention and not to other uncontrolled factors, such as changes in environment or health status. Such designs are referred to as "true experimental designs" (Campbell & Stanley, 1963).
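To make the logic of random assignment and confidence levels concrete, here is a minimal sketch in Python. Everything in it is hypothetical and invented for illustration: the participant pool, the group sizes, the outcome scores, and the simple t-test, which stands in for the fuller statistical analyses a real trial would use.

```python
# A minimal sketch of the logic behind a randomized controlled trial.
# All names and numbers here are hypothetical, for illustration only.
import random
from scipy import stats

random.seed(42)

# Hypothetical pool of 200 participants, identified only by index.
participants = list(range(200))

# Random assignment: each participant has an equal chance of landing in
# the treatment or the control group, so pre-existing differences tend
# to balance out across the two groups.
random.shuffle(participants)
treatment_group = participants[:100]
control_group = participants[100:]

# Hypothetical outcome scores measured at the end of the study.
# (In a real trial these would come from the assessment instrument.)
treatment_scores = [random.gauss(60, 10) for _ in treatment_group]
control_scores = [random.gauss(50, 10) for _ in control_group]

# An independent-samples t-test asks: how likely is a difference this
# large if the intervention actually had no effect?
result = stats.ttest_ind(treatment_scores, control_scores)

# With alpha = 0.05 we accept a 5% risk of a false positive -- this is
# the "95%" confidence level described above.
alpha = 0.05
print(f"p-value = {result.pvalue:.4f}")
print("Difference unlikely to be due to chance" if result.pvalue < alpha
      else "Difference could plausibly be due to chance")
```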

In reality, however, it is often not practical or even possible to conduct randomized trials with large sample sizes when we want to study the effectiveness of a communication intervention method for use with individuals who have the most severe disabilities. For this reason, we must be willing and able to evaluate the quality of evidence along a continuum from least to most convincing. Although there are many ways to design studies or combine research designs, we can identify three basic points along this continuum of evidence: case reports < controlled single-subject and quasi-experimental designs < true experimental designs.

With very rare conditions or disabilities, or when a procedure is very new, the only evidence available may be a published case study. Case studies are written by clinicians to share information about unique cases or treatment results. High-quality case studies will include detailed documentation of the specific history and characteristics of the individual who received the treatment, as well as a detailed description of the intervention used and the apparent outcomes. By definition, a case study involves no experimental controls, so the results must be considered with great caution. Although a case study may sound convincing, it must be remembered that any number of uncontrolled variables might actually account for the positive outcome of an intervention. Although a case study report may include descriptions of more than one case, this still does not compensate for the lack of experimental control.

For most interventions that are employed with individuals who have severe disabilities, the evidence base will consist of studies that have used either a single-subject (N=1) or a quasi-experimental research design. In a single-subject design, treatment conditions are scheduled so that each participant serves as his or her own experimental control. For example, the target response may be measured before any treatment begins, then during treatment, and then again after treatment is withdrawn. If the response improves only during treatment and declines when treatment is withdrawn, we can be more confident that the treatment caused the improvement. In most single-subject designs there are actually several participants, and the effect of the treatment is replicated across them. Standards for single-subject research have been provided by Kratochwill and colleagues (2013).
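The within-participant logic described above can be sketched in a few lines. The Python example below uses invented session counts for a hypothetical A-B-A-B (withdrawal) design; in an actual study, visual analysis of the graphed data, following standards such as Kratochwill et al. (2013), would take the place of the simple phase means computed here.

```python
# A minimal sketch of an A-B-A-B (withdrawal) single-subject design.
# The session data below are hypothetical, for illustration only.
from statistics import mean

# Number of correct target responses per session, one participant.
phases = {
    "A1 (baseline)":   [2, 3, 2, 3, 2],    # no treatment
    "B1 (treatment)":  [6, 8, 9, 9, 10],   # treatment in place
    "A2 (withdrawal)": [4, 3, 3, 2, 3],    # treatment removed
    "B2 (treatment)":  [8, 9, 10, 11, 11], # treatment reinstated
}

# If the response rises in each B phase and falls in each A phase, the
# participant is serving as his or her own control: the change tracks
# the treatment itself, not the mere passage of time.
for label, sessions in phases.items():
    print(f"{label}: mean = {mean(sessions):.1f} responses/session")
```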

In addition to single-subject designs, there are a number of creative, quasi-experimental designs in which groups of subjects are compared to assess the effect of a particular intervention, but without the randomized assignment required for true experimental designs. For example, a researcher may select two preschool classrooms serving children who are similar in age and socioeconomic background and then provide an experimental intervention to the children in Classroom A, but not Classroom B. If the children in Classroom A score higher on the target response measure than the children in Classroom B at the end of the study period, the researcher may conclude that the difference is due to the intervention. The stronger the evidence that these two natural groups of children were truly comparable at the outset of the study, the more credible this conclusion becomes. There are many variations and combinations of such quasi-experimental designs, and these represent by far the largest evidence base for most communication services provided to individuals with severe disabilities.
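Here is a minimal sketch of that two-classroom comparison, again with invented numbers. The baseline-equivalence check in the first step corresponds to the evidence, described above, that the two natural groups were truly comparable at the outset of the study.

```python
# A minimal sketch of the two-classroom quasi-experimental comparison
# described above. All scores are hypothetical, for illustration only.
from statistics import mean

# Pre- and post-intervention scores on the target response measure.
classroom_a = {"pre": [10, 12, 11, 9, 13, 10],   # received intervention
               "post": [18, 20, 19, 17, 21, 18]}
classroom_b = {"pre": [11, 10, 12, 10, 12, 11],  # no intervention
               "post": [12, 13, 12, 11, 14, 12]}

# Step 1: check baseline equivalence. Because children were not
# randomly assigned, the argument depends on the two natural groups
# being comparable before the intervention began.
print(f"Pretest means:  A = {mean(classroom_a['pre']):.1f}, "
      f"B = {mean(classroom_b['pre']):.1f}")

# Step 2: compare gains. A larger gain in Classroom A (the treated
# group) is the evidence offered for the intervention's effect.
gain_a = mean(classroom_a["post"]) - mean(classroom_a["pre"])
gain_b = mean(classroom_b["post"]) - mean(classroom_b["pre"])
print(f"Gains: A = {gain_a:.1f}, B = {gain_b:.1f}")
```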

Bottom Line: At the most basic level, Evidence-Based Practice (EBP) means that there is empirical evidence to document the effectiveness of a particular treatment procedure or assessment instrument. Such evidence is increasingly required before an insurance company will pay for a procedure or a state education agency will approve funding for a particular program.

References

  • Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Boston, MA: Houghton Mifflin.
  • Kratochwill, T. R., Hitchcock, J. H., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2013). Single-case intervention research design standards. Remedial and Special Education, 34(1), 26–38. doi:10.1177/0741932512452794

More Information

Chapters, Articles, and Books

  • Dollaghan, C. (2004, April 13). Evidence-based practice: Myths and realities. The ASHA Leader, p. 12.
  • Frattali, C., Bayles, K., Beeson, P., Kennedy, M. R. T., Wambaugh, J., & Yorkston, K. M. (2002). Development of evidence-based practice guidelines: Committee update. Journal of Medical Speech-Language Pathology, 11(3), ix–xviii.
  • Gillam, S. L., & Gillam, R. B. (2006). Making evidence-based decisions about child language intervention in schools. Language, Speech, and Hearing Services in Schools, 37(4), 304–315.
  • Joseph, G. E., & Strain, P. S. (2003). Comprehensive evidence-based social-emotional curricula for young children: An analysis of efficacious adoption potential. Topics in Early Childhood Special Education, 23(2), 65–76.
  • Sackett, D. L., Rosenberg, W. M. C., Gray, J. A. M., Haynes, R. B., & Richardson, W. S. (1996). Evidence-based medicine: What it is and what it isn't. British Medical Journal, 312, 71–72.
  • Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin.
  • Sigafoos, J., & Drasgow, E. (2003). Empirically validated strategies, evidence-based practice, and basic principles in communication intervention for learners with developmental disabilities. Perspectives on Augmentative and Alternative Communication, 12(4), 7–10.
  • Yorkston, K. M., Spencer, K., Duffy, J., Beukelman, D., Golper, L. A., & Miller, R. (2001). Evidence-based medicine and practice guidelines: Application to the field of speech-language pathology. Journal of Medical Speech-Language Pathology, 9(4), 243–256.

Web Resources

  • What Works Clearinghouse is funded by the U.S. Department of Education and presents information on evidence-based standards for educational practices.
  • Hill, K., & Romich, B. (2002). AAC evidence-based clinical practice: A model for success. AAC Institute Press, 2(1), 2–6.