
What system of levels of evidence should be used in communication sciences and disorders?

One of the central tenets of evidence-based practice (EBP) is that not all evidence is equal. Most of us will agree that the findings from some studies are much more persuasive than others. Does the famous saying about pornography attributed to Supreme Court Justice Potter Stewart, "I don't know how to define it, but I know it when I see it," apply here, or is there some set of objective criteria on which we can reach widespread agreement about what constitutes strong evidence?

The usual approach to this issue is the application of a scheme of levels of evidence, i.e., a formal system of categorizing evidence based on study design and, in some cases, study quality and relevance. There are over 100 such schemes in use today (Lohr, 2004). ASHA's adoption of a single scheme of levels of evidence for use throughout the Association would offer clear benefits. Communication would be simpler with a single standard and a terminology familiar to all. A single scheme would also help to ensure consistency across documents (such as practice guidelines and systematic reviews) developed by ASHA. An additional benefit would follow if ASHA adopted a scheme developed and in widespread use outside the Association: doing so would help counter suspicions of bias by Association staff or members in their assessment of evidence.

Are there any existing schemes that would provide the ideal fit for ASHA?

Is it even theoretically possible to have a single scheme used throughout the Association, or should there perhaps be separate schemes for diagnostic studies, treatment efficacy studies, cost-effectiveness, etc.?

Is there any potential harm in the adoption of a single system of levels of evidence throughout the Association?

With what tradeoffs between reliability (everyone using the same scheme) and validity (the imperfections of that scheme) are we comfortable?

What are the characteristics of the ideal scheme of levels of evidence for ASHA?

ASHA's Advisory Committee on Evidence-Based Practice and staff of the National Center for Evidence-Based Practice in Communication Disorders are currently grappling with these questions.


Lohr, K.N. (2004). Rating the strength of scientific evidence: Relevance for quality improvement programs. International Journal for Quality in Health Care, 16(1): 9–18.


Evidence-based practice section of the ASHA Web site.

Some of the most widely used schemes of levels of evidence:

Oxford Centre for Evidence-Based Medicine

Liddle J, Williamson M, Irwig L. (1996). Method for evaluating research and guideline evidence. Sydney: New South Wales Department of Health.

Ropka ME & Spencer-Cisek P. (2001). PRISM: Priority Symptom Management Project, Phase 1. Oncology Nursing Forum, 28(10):1585–1594.

Canadian Medical Association

Discussions of some of the issues surrounding levels of evidence and CSD:

Robey RR. (2004). A five-phase model for clinical-outcome research. Journal of Communication Disorders, 37(5):401–411.

Horner RH, et al. (2005). The use of single-subject research to identify evidence-based practices in special education. Exceptional Children 71(2):165–179.

Odom SL, et al. (2005). Research in special education: Scientific methods and evidence-based practice. Exceptional Children 71(2):137–148.


This article first appeared in the June 2006 issue of Access Academics and Research.
