Evidence-Based Practice in Communication Disorders: An Introduction

Technical Report

Research and Scientific Affairs Committee


About this Document

This technical report was developed by the Research and Scientific Affairs Committee of the American Speech-Language-Hearing Association (ASHA) and approved by ASHA's Executive Board on August 20, 2004. Members of the Committee included Christine A. Dollaghan (chair), Raquel T. Anderson, M. Patrick Feeney, John H. Grose, Peggy B. Nelson, D. Kimbrough Oller, Elena Plante, C. Melanie Schuele, Linda M. Thibodeau, Sharon E. Moss (ex officio), and Brenda L. Lonsbury-Martin (monitoring officer).

The Research and Scientific Affairs Committee would like to acknowledge the peer reviewers who made written comments concerning the original draft of this document; their contributions resulted in a substantially improved report. We also thank the former members of the Research and Scientific Affairs Committee who contributed to earlier stages of this work: Vera F. Gutierrez-Clellen, Christopher A. Moore, Lori O. Ramig, and Julie L. Wambaugh. Finally, special thanks to Julie J. Masterson for her enthusiastic support during her tenure as Vice President for Research and Technology.



Definition of Topic

Evidence-based practice (EBP) is a perspective on clinical decision making that originated in evidence-based medicine and has been defined as “… the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients … [by] integrating individual clinical expertise with the best available external clinical evidence from systematic research” (Sackett, Rosenberg, Gray, Haynes, & Richardson, 1996, p. 71). Recent discussions of EBP (e.g., Guyatt et al., 2000; Sackett, Straus, Richardson, Rosenberg, & Haynes, 2000) have emphasized the need to integrate patient values and preferences, along with best current research evidence and clinical expertise, in making clinical decisions.

The EBP orientation has the potential to improve the quality of the evidence base supporting clinical practice in speech-language pathology and audiology, and ultimately to improve the quality of clinical services to patients with speech, language, and hearing disorders. Accordingly, this technical report has four purposes: (a) to provide an overview of some of the principles and procedures of EBP; (b) to describe the relevance of EBP to current clinical issues in speech-language pathology and audiology; (c) to raise awareness of the importance of EBP research as one component of the research mission of the American Speech-Language-Hearing Association; and (d) to recommend potential steps toward increasing the quantity of credible evidence to support clinical activities in the professions. It is not possible to include or address all of the issues and information concerning EBP within the scope of this report; the final section provides a list of sources for individuals interested in learning more about EBP.



Overview of Evidence-Based Practice

An important impetus for the EBP orientation has been the growing awareness of the limitations of expert opinion as the sole basis for clinical decision making.

As noted by Sackett, Haynes, Guyatt, and Tugwell (1991), the history of medicine includes a number of cases in which the recommendations of respected authorities have turned out to be wrong or harmful when subjected to scientific investigation. These cases range from William Osler's 19th-century recommendation that opium be used to treat diabetes to the 1940s-era “best practice” of oxygenating premature infants to prevent retrolental fibroplasia, a condition that careful research eventually showed to be caused, not cured, by this treatment (Meehl, 1997). More recent examples are easy to find (e.g., Barrett-Connor, 2002). At the time they were made, all of these recommendations were consistent with the clinical thinking of the day; only when they were evaluated by rigorous scientific tests were they discounted (Sackett et al., 1991). For this reason, the EBP orientation accords greater weight to evidence from high-quality studies than to the beliefs and opinions of experts.

In the EBP framework, explicit criteria are used to evaluate the quality of evidence available to support clinical decisions. Some of these criteria are common to all scientific investigations, but others are specific to studies of clinical activities. Many systems for ranking the credibility of evidence have been proposed; in some cases evidence “grades” are then assigned to clinical recommendations according to the strength of their supporting evidence. The criteria for evaluating evidence differ somewhat according to whether the evidence concerns screening, prevention, diagnosis, therapy, prognosis, or healthcare economics; the Oxford Centre for Evidence-based Medicine (http://cebm.jr2.ox.ac.uk/docs/levels) describes a set of criteria relevant to each type of clinical question. Table 1 shows a system specifically designed for rating evidence from studies of treatment efficacy; other criteria are needed to rank evidence from studies of other questions, such as those concerning treatment effectiveness or diagnostic accuracy. However, regardless of the particular question being addressed, five common themes appear to contribute to ratings of evidence quality in the EBP literature. Each of these is described briefly in the following section.

Table 1. Levels of evidence for studies of treatment efficacy, ranked according to quality and credibility from highest/most credible (Ia) to lowest/least credible (IV) (adapted from the Scottish Intercollegiate Guidelines Network, www.sign.ac.uk).

Level  Description
Ia     Well-designed meta-analysis of >1 randomized controlled trial
Ib     Well-designed randomized controlled study
IIa    Well-designed controlled study without randomization
IIb    Well-designed quasi-experimental study
III    Well-designed nonexperimental studies (i.e., correlational and case studies)
IV     Expert committee report, consensus conference, clinical experience of respected authorities



Five Themes in Evidence Ratings

1. Independent confirmation and converging evidence

It is extremely rare for a single study to provide the definitive answer to a scientific or clinical question, but a body of evidence comprising high-quality investigations can be synthesized to approach a definitive answer even when, as is likely, results vary across studies. When the question concerns treatment efficacy, the highest evidence ranking goes to well-designed meta-analyses that summarize results across a number of scientifically rigorous studies. In many cases, results are expressed using both summary statistics and a graphic representation of the direction, size, and precision of findings from individual studies. This level of evidence remains relatively rare even in medicine, but a growing number of studies of treatment efficacy are eligible for meta-analysis, and meta-analyses are beginning to appear in the communication disorders literature (e.g., Casby, 2001; Robey, 1998). A number of organizations sponsor reviews of evidence according to explicit and stringent criteria; these include the U.S. Department of Health and Human Services' Agency for Healthcare Research and Quality (http://www.ahrq.gov), the Cochrane Collaboration (www.cochrane.org), and the Scottish Intercollegiate Guidelines Network (www.sign.ac.uk). A single meta-analysis or systematic review of evidence may not yield results that are so uniform as to preclude disagreement and debate, especially if the number of high-quality studies available for inclusion is relatively small. However, the principle of seeking converging evidence from multiple strong studies is inextricably linked to the EBP orientation.
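
To make the synthesis step concrete, the following minimal sketch uses inverse-variance (fixed-effect) pooling, one common method in which more precise studies receive more weight. The effect sizes and standard errors are invented for illustration and are not drawn from any study cited above.

```python
# Minimal sketch of fixed-effect meta-analysis via inverse-variance weighting.
# The (effect size d, standard error) pairs below are hypothetical.
import math

studies = [(0.45, 0.20), (0.30, 0.15), (0.60, 0.25)]

weights = [1 / se**2 for _, se in studies]          # precision weights
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled effect (normal approximation)
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled d = {pooled:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```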



2. Experimental control

The design features of individual studies also influence ratings of evidence quality. In the EBP framework, evidence from studies that are controlled (i.e., that contrast an experimental group with a control group) and that employ prospective designs (in which patients are recruited and assigned to conditions before the study begins) is rated more highly than evidence from retrospective studies in which previously collected data are analyzed, because the reliability and accuracy of many measures are difficult or impossible to ensure post hoc. In addition, group comparison studies are rated more highly when patients are randomly assigned to groups than when they are not, because random assignment reduces the chance that groups might differ systematically in some unanticipated or unrecognized ways other than the experimental factor being investigated.
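
As a minimal illustration of the logic of random assignment, the sketch below (with hypothetical participant IDs) shuffles an enrollment list and splits it into two groups, so that unmeasured characteristics are distributed by chance rather than by clinician or patient choice.

```python
# Minimal sketch of simple random assignment; participant IDs are hypothetical.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical enrollees
random.seed(42)                                     # fixed seed for a reproducible example
random.shuffle(participants)

treatment, control = participants[:10], participants[10:]
print("Treatment:", treatment)
print("Control:  ", control)
```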

Lower evidence ratings generally are assigned to quasi-experimental studies, including cohort studies in which patients with and without a variable of interest are followed forward in time to compare their outcomes, and case-control designs in which patients with and without an outcome are identified and compared for their previous exposure to a variable of interest. Evidence from quasi-experimental studies ranks lower than evidence from controlled studies because only through random assignment can the risk of differences due to unknown biases be minimized. Evidence from nonexperimental designs such as correlational studies, case studies (N = 1), and case series is rated even lower due to the lack of a control group, but even evidence from nonexperimental study designs outranks statements of belief and opinion in EBP rating schemes.

It is worth emphasizing that experimental control is only one of the themes important to rating evidence quality; regardless of its design, every study addressing a clinical question also must be evaluated with respect to such factors as the potential for subjectivity and bias, the importance of its results, and its relevance and feasibility. Especially when they are designed so as to maximize experimental control and to minimize bias, quasi- and nonexperimental studies can provide evidence that is crucially important to the early stages of investigation into a phenomenon and can lay the necessary groundwork for studies with larger samples, random assignment, and strict experimental control. For example, investigators can provide some evidence of experimental control in single-subject studies by comparing treated and control goals in a multiple-baseline design or by randomly assigning treatment and control conditions to different time periods in a multiple crossover or alternating treatments design. In addition, as noted by Sackett et al. (2000), well-designed single-subject studies can be extremely helpful in assessing the effectiveness of treatment for an individual patient. Thus, carefully conducted single-subject studies should be recognized as having an important role to play in EBP although their results will always require confirmation via stronger designs.



3. Avoidance of subjectivity and bias

An important criterion for credible evidence is that observers, investigators, statisticians, others involved with patients, and, if possible, the patients themselves be kept unaware of information that could potentially influence, or bias, the results of a study. This tactic is known as blinding, concealment, or masking. Blinding addresses a particular threat to the validity of patient-oriented evidence: the seemingly inescapable bias that clinicians have toward believing that their efforts are beneficial. George Pickering (1964; cited in Barrett-Connor, 2002) observed that belief in the value of one's efforts is a prerequisite to clinical practice, but such belief is at odds with the objectivity that is fundamental to the scientific method (Meehl, 1997). There is persuasive empirical evidence of the need for blinding in studies of medical treatment; one analysis showed that estimates of treatment effects from studies without blinding were substantially larger than those from studies in which treatment conditions were concealed (Schulz, Chalmers, Hayes, & Altman, 1995). Observer expectations have been shown to influence even such seemingly objective measurements as recording fetal heart rates from monitors (Sackett et al., 1991). The fact that relatively few studies in speech-language pathology and audiology employ strategies to ensure adequate blinding may be one reason that the literature on communication disorders is underrepresented in evidence-based reviews. Complete blinding of patients and clinicians may be impossible in some studies, especially for behavioral treatments for which a placebo condition cannot be constructed. However, even in such studies a number of steps can be taken to minimize the potential for bias, such as ensuring that treatment effects (positive or negative) are measured not by the clinician, the investigator, or a family member but rather by independent examiners who rate patients without knowing their treatment assignments. Similarly, examiners can rate unlabeled, randomly ordered recordings from different stages in the course of intervention (e.g., pre-, intra-, and post-treatment) to minimize the potential influence of their expectations about treatment effects.
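
The last tactic can be made concrete with a small sketch. Assuming hypothetical file names and treatment stages, the code below assigns neutral codes to recordings, randomizes their order, and keeps the code-to-stage key away from the examiners until rating is complete.

```python
# Minimal sketch of preparing recordings for blinded rating; file names are hypothetical.
import random

recordings = {
    "pat01_pre.wav": "pre", "pat01_mid.wav": "intra", "pat01_post.wav": "post",
    "pat02_pre.wav": "pre", "pat02_mid.wav": "intra", "pat02_post.wav": "post",
}

files = list(recordings)
random.shuffle(files)

key = {}  # held by the study coordinator, not the raters
for i, fname in enumerate(files, start=1):
    code = f"sample_{i:03d}"
    key[code] = (fname, recordings[fname])
    print(code)  # examiners see only the neutral code, in random order
```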

Another important control for potential bias that influences evidence ratings is the requirement that outcomes be reported for every patient originally enrolled in a study, not just for the patients who complete it. This ensures that patients who did not complete the study as planned are taken into account in analyzing effects, avoiding the understandable tendency to focus only on patients who have positive outcomes. In randomized trials this approach, known as the “intention-to-treat” analysis, means that patients must be analyzed as part of the treatment group to which they were originally assigned even if they did not actually receive the treatment as planned (e.g., Moher, Schulz, & Altman, 2001; Sackett et al., 2000).
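
The intention-to-treat principle can be illustrated schematically with invented outcome data: each patient is analyzed in the arm to which he or she was originally randomized, whether or not treatment was completed.

```python
# Minimal sketch of an intention-to-treat analysis; all data are invented.
# Each record: (assigned arm, completed treatment?, outcome score).
patients = [
    ("treatment", True, 78), ("treatment", False, 55),  # dropout still analyzed as treatment
    ("treatment", True, 82), ("control", True, 60),
    ("control", False, 58), ("control", True, 63),
]

def arm_mean(arm):
    # Group by original assignment, ignoring completion status.
    scores = [score for a, _, score in patients if a == arm]
    return sum(scores) / len(scores)

print(f"Treatment mean (ITT): {arm_mean('treatment'):.1f}")
print(f"Control mean (ITT):   {arm_mean('control'):.1f}")
```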



4. Effect sizes and confidence intervals

The EBP orientation emphasizes that studies of clinical questions should specify and justify the size of effect that is deemed clinically important and should provide evidence that statistical power is adequate to detect an effect of this magnitude. Appreciation of the need to consider not just statistical significance (i.e., the probability that differences or effects were not chance events), but also practical significance (i.e., the magnitude of differences or effects, usually in the form of a standardized metric such as d or omega-squared) has been growing for at least 25 years, culminating in the mandate that information on effect sizes and statistical power be included in every published study (Wilkinson & APA Task Force on Statistical Inference, 1999). A variety of effect size indices exist (e.g., Huberty, 2002); according to Cohen (1990, p. 1310) the important point is to convey “… the magnitude of the phenomenon of interest appropriate to the research context.”
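
As a minimal worked example of one such index, the sketch below computes Cohen's d for two independent groups from invented scores, using the pooled standard deviation.

```python
# Minimal sketch of computing Cohen's d for two independent groups; data are invented.
import math
from statistics import mean, variance

treated = [12, 15, 14, 16, 13, 17, 15, 14]
control = [10, 11, 12, 10, 13, 11, 12, 10]

n1, n2 = len(treated), len(control)
pooled_sd = math.sqrt(((n1 - 1) * variance(treated) + (n2 - 1) * variance(control))
                      / (n1 + n2 - 2))
d = (mean(treated) - mean(control)) / pooled_sd
# Cohen's rough benchmarks: 0.2 small, 0.5 medium, 0.8 large
print(f"Cohen's d = {d:.2f}")
```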

The EBP orientation also emphasizes the need for investigators to report the confidence interval (CI) associated with an experimental effect. CIs reflect the precision of the estimated difference or effect, specifying a range of values within which the “true” value is expected to fall with a given probability (determined by the chosen level of Type I error). Narrower CIs offer stronger (i.e., more precise and interpretable) evidence than wider CIs; studies in which samples are large and measurement error is small yield narrower CIs. This fact explains why, all else being equal, evidence from studies with large samples is likely to be ranked higher than evidence from studies involving smaller samples. It is increasingly common for investigators to provide CIs in published reports of their studies. Sackett et al. (2000, Appendix 1) provide a helpful review of the interpretation of CIs as well as procedures for calculating CIs for various types of diagnostic and treatment studies.
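
The effect of sample size on CI width can be shown with a minimal sketch (normal approximation, invented values): the same mean difference yields a much narrower interval when group sizes increase.

```python
# Minimal sketch of a 95% CI for a difference in means (normal approximation);
# all values are invented to show how sample size affects interval width.
import math

def diff_ci(diff, sd1, sd2, n1, n2, z=1.96):
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)   # standard error of the difference
    return diff - z * se, diff + z * se

# Same mean difference and SDs, two different sample sizes per group
print(diff_ci(5.0, 8.0, 8.0, 10, 10))    # small samples: wide interval
print(diff_ci(5.0, 8.0, 8.0, 100, 100))  # large samples: much narrower interval
```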



5. Relevance and feasibility

Relevance and feasibility are also considered frequently in rating the quality of patient-oriented evidence. Relevance is considered highest when the patients studied are typical of those commonly seen in clinical practice (Ebell, 1998) and/or when the clinical decision being studied is one that is difficult to make. Feasibility or applicability (Scottish Intercollegiate Guidelines Network, 2002) is high when the screening, diagnostic, or treatment activity being investigated is one that could reasonably be applied or used by practitioners in real-world settings. For example, some conditions can be diagnosed as accurately by interview as by time-consuming and expensive tests; the former would accordingly outrank the latter on feasibility. It may not be possible to provide evidence of relevance and feasibility for results from studies at the early stages of investigation into a clinical question, but these factors must ultimately be included in evaluating the strength of evidence as a line of inquiry progresses.



EBP and Current Issues in Speech-Language Pathology and Audiology

The EBP orientation has obvious relevance for many aspects of clinical practice in speech-language pathology and audiology. The growing number of randomized controlled clinical trials in the communication disorders literature, as well as efforts such as those undertaken by the Academy of Neurologic Communication Disorders and Sciences (ANCDS; Yorkston et al., 2001a, 2001b) to develop practice guidelines based on systematic evidence reviews, are encouraging developments. However, there is an enormous need for additional work aimed at applying EBP principles to communication disorders. Studies designed and conducted in accordance with EBP criteria could help to resolve questions about the nature and defining characteristics of controversial diagnostic categories such as childhood apraxia of speech, auditory processing disorder, nonverbal learning disability, and many others. Evaluating diagnostic procedures and measures according to EBP criteria would provide a rational basis for selecting the maximally informative and cost-effective diagnostic protocols from among the hundreds of diagnostic tools that are reported or advertised each year. There is a critical need for evidence concerning the effectiveness of efforts aimed at preventing and remediating communication disorders. EBP offers criteria and approaches for tackling these difficult questions.

A vivid example of the need for increased use of EBP principles in studies of communication disorders can be found in an evidence summary prepared for the U. S. Preventive Services Task Force. The review panel concluded that due to design flaws in existing studies, “…the evidence is insufficient to recommend for or against routine screening of newborns for hearing loss during the postpartum hospitalization” (Helfand et al., 2001, p. 1). Specifically, the panel reported an absence of high-quality evidence that children whose hearing losses were detected by newborn screening had better language outcomes at age 3 years than did infants whose hearing losses were identified later in infancy. Designing studies so that they meet the EBP appraisal criteria will result in stronger evidence concerning not just universal newborn hearing screening but virtually all other activities aimed at improving outcomes for clients with communication disorders.

Of course, EBP is not a panacea. Several analysts have discussed real and potential limitations of the EBP framework and have noted that the question of whether EBP has positive effects on clinical care should itself be studied empirically (Cohen, Stavri, & Hersh, 2004; Sackett et al., 1996, 2000). Some studies of the impact of evidence on medical practice are beginning to appear (Majumdar, McAlister, & Soumerai, 2003). In addition, some of the EBP criteria and procedures may need to be adapted to meet the particular challenges of studying complex behavioral conditions such as communication disorders. However, the potential benefits of EBP appear to far outweigh the potential harms (Woolf et al., 1999). Awareness of the principles of EBP by researchers and practitioners in speech-language pathology and audiology seems likely to improve substantially the quality of evidence available to support clinical decisions, one step in ongoing efforts to provide optimal care to people with communication disorders.



EBP Research as a Key Component of the Research Mission of the Association

Basic research aimed at understanding the fundamental mechanisms and processes of normal and abnormal functioning is extremely important. However, it is unwarranted to assume that findings from such studies are necessarily relevant to clinical practice. Speculation about the clinical implications of basic research findings, being based on opinion rather than research, ranks low on the evidence quality scale. Accordingly, the distinction between basic research and research designed to provide credible evidence on clinical issues should be acknowledged, and both types of endeavor should be encouraged and valued equally within the research mission of the Association. The EBP literature shows that research into clinical questions demands not only the scientific acumen needed for more theoretically oriented investigations, but also additional expertise specific to designing and conducting studies of patients and clinicians and to analyzing the resulting data. Ensuring that investigators in communication disorders have the knowledge and skills needed to conduct high-quality studies of clinical activities should have a prominent place on the research agenda of the Association over the coming years.



Potential Steps Toward Increasing the Quantity of Credible Evidence to Support Clinical Activities in the Professions

  1. Make educational offerings concerning EBP widely available to Association members, to increase their knowledge and skills with respect to the principles, processes, and uses of EBP in their clinical and scholarly pursuits. Raise awareness of the potential contributions of EBP to increasing accountability to other health care providers and to funding agencies. Publicize the wealth of free information on EBP that is readily available on the Internet and assist members in accessing it, for example by including links to information sources on the Association's Web site.

  2. Assist university programs in including information on EBP in their curricula by sponsoring conference sessions aimed at current and future faculty members and by supporting Internet-based instruction and sharing of course materials.

  3. Ensure that editors, reviewers, and authors of publications in ASHA journals are familiar with recommendations made by the CONSORT (Moher, Schulz, & Altman, for the CONSORT Group, 2001) and STARD (Bossuyt et al., for the STARD Group, 2003) groups for improving the quality of published reports concerning studies of treatment and diagnosis, respectively. Discourage speculation about the clinical implications of studies not explicitly designed to address clinical questions in ASHA publications.

  4. Highlight exemplary uses of EBP principles by researchers and clinicians, both on the Association's Web site and at the annual Convention.

  5. Support the creation of an independent, broadly representative EBP task force including researchers, clinicians, members of related professions, and consumers. This group initially would be charged with identifying and prioritizing clinical questions in communication disorders and with recommending a process by which evidence reviews on these questions could be conducted. Allocate resources to publicize this effort broadly, seeking collaborative relationships with other professional organizations to plan, conduct, and disseminate results from evidence reviews.

  6. Recognize that full-fledged systematic evidence reviews require a great deal of time, resources, and training, and that impartiality is crucial to their credibility. The Scottish Intercollegiate Guidelines Network (SIGN; www.sign.ac.uk) publications provide a detailed description of the process. According to SIGN, 24 months is a reasonable estimate of the minimum time required to go from identifying a clinical question worthy of review to the point at which evidence ratings can be disseminated. SIGN also describes costs and sources of potential funding support for such efforts.



Selected References

Barrett-Connor, E. (2002). Hormones and the health of women: past, present and future. Menopause, 9, 23–31.

Bossuyt, P. M., Reitsma, J. B., Bruns, D. E., Gatsonis, C. A., Glasziou, P. P., Irwig, L. M., Lijmer, J. G., Moher, D., Rennie, D., & de Vet, H. C. W. , for the STARD Group. (2003). Toward complete and accurate reporting of studies of diagnostic accuracy: The STARD initiative. Annals of Internal Medicine, 138, 40–45.

Casby, M. W. (2001). Otitis media and language development: A meta-analysis. American Journal of Speech-Language Pathology, 10, 65–80.

Cohen, J. (1990). Things I have learned (so far). American Psychologist, 45, 1304–1312.

Cohen, A. M., Stavri, P. Z., & Hersh, W. R. (2004). A categorization and analysis of the criticisms of evidence-based medicine. International Journal of Medical Informatics, 73, 35–43.

Guyatt, G. H., Haynes, R. B., Jaeschke, R. Z., Cook, D. J., Green, L., Naylor, C. D., Wilson, M. C., & Richardson, W. S. , for the Evidence-Based Medicine Working Group. (2000, September 13). Users' guides to the medical literature: XXV. Evidence-based medicine: principles for applying the users' guides to patient care. JAMA, 284, 1290–1296.

Helfand, M., Thompson, D., Davis, R., McPhillips, H., Lieu, T. L., & Homer, C. J. (2001). Newborn hearing screening: A summary of the evidence for the U.S. Preventive Services Task Force. Rockville, MD: Agency for Healthcare Research and Quality. Accessed at http://www.ahcpr.gov/clinic/3rduspstf/newbornsum1.htm.

Huberty, C. J. (2002). A history of effect size indices. Educational and Psychological Measurement, 62, 227–240.

Majumdar, S. R., McAlister, F. A., & Soumerai, S. B. (2003). Synergy between publication and promotion: Comparing adoption of new evidence in Canada and the United States. American Journal of Medicine, 115, 467–472.

Meehl, P. E. (1997). Credentialed persons, credentialed knowledge. Clinical Psychology: Science and Practice, 4, 91–98.

Moher, D., Schulz, K. F., & Altman, D. G. , for the CONSORT Group. (2001, April 14). The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomized trials. Lancet, 357, 1191–1194.

Robey, R. R. (1998). A meta-analysis of clinical outcomes in the treatment of aphasia. Journal of Speech, Language, and Hearing Research, 41, 172–187.

Sackett, D. L., Haynes, R. B., Guyatt, G. H., & Tugwell, P. (1991). Clinical epidemiology: A basic science for clinical medicine (2nd ed.). Boston: Little, Brown.

Sackett, D. L., Rosenberg, W. M. C., Gray, J. A. M., Haynes, R. B., & Richardson, W. S. (1996). Evidence-based medicine: What it is and what it isn't. British Medical Journal, 312, 71–72. Accessed at http://www.cebm.jr2.ox.ac.uk/ebmisisnt.

Sackett, D. L., Straus, S. E., Richardson, W. S., Rosenberg, W., & Haynes, R. B. (2000). Evidence-based medicine: How to practice and teach EBM. Edinburgh: Churchill Livingstone.

Scottish Intercollegiate Guidelines Network. (2002). Glossary of key terms. Accessed at http://www.sign.ac.uk.

Schulz, K. F., Chalmers, I., Hayes, R. J., & Altman, D. G. (1995). Empirical evidence of bias: Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. Journal of the American Medical Association, 273, 408–412.

Wilkinson, L. , & American Psychological Association Task Force on Statistical Inference. (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist, 54, 594–604.

Woolf, S. H., Grol, R., Hutchinson, A., Eccles, M., & Grimshaw, J. (1999). Potential benefits, limitations, and harms of clinical guidelines. British Medical Journal, 318, 527–530.

Yorkston, K. M., Spencer, K., Duffy, J., Beukelman, D., Golper, L. A., Miller, R., Strand, E., & Sullivan, M. (2001a). Evidence-based medicine and practice guidelines: Application to the field of speech-language pathology. Journal of Medical Speech-Language Pathology, 4, 243–256.

Yorkston, K. M., Spencer, K., Duffy, J., Beukelman, D., Golper, L. A., Miller, R., Strand, E., & Sullivan, M. (2001b). Evidence-based practice guidelines for dysarthria: Management of velopharyngeal function. Journal of Medical Speech-Language Pathology, 4, 257–274.



Additional Resources

Note: The amount of information on evidence-based practice in healthcare and other fields is expanding rapidly. An Internet search on the topic will yield many resources and sites; the list below provides an entrée to some of the oldest and most widely used sites, but it is far from exhaustive.

  • http://www.cebm.utoronto.ca: The Centre for Evidence-Based Medicine at the University of Toronto Health Network; provides teaching suggestions, an excellent glossary, and a comprehensive list of other EBP resources, with descriptions and links.

  • http://www.cebm.net: Oxford Centre for Evidence-Based Medicine; provides excellent resources such as an EBM toolbox, practice problems, and links to many other EBP sites and journals.

  • http://bmj.com/collections: British Medical Journal site; includes a section listing resources and collections concerning EBP as well as a compilation of disease-specific information; also links to the new evidence-based mental health journal at http://ebmh.bmjjournals.com

  • http://www.poems.msu.edu/InfoMastery: This site, copyrighted by Mark H. Ebell, MD (Department of Family Practice, Michigan State University) in 1998 and 1999, includes self-tutorials on EBP, with separate instructional modules addressing how to evaluate articles about diagnosis, prevention, therapy, prognosis, meta-analysis, and decision analysis.

  • www.ahrq.gov: Agency for Healthcare Research and Quality site; allows investigators to search for evidence about a large number of health conditions with direct links to studies as well as summary statements and funding opportunities.

  • www.guideline.gov: National Guideline Clearinghouse site (also accessible via AHRQ), allows searches for evidence according to condition, disease or treatment; interested investigators can receive free weekly guideline updates by e-mail.



Index terms: evidence-based practice

Reference this material as: American Speech-Language-Hearing Association. (2004). Evidence-based practice in communication disorders: an introduction [Technical Report]. Available from www.asha.org/policy.

© Copyright 2004 American Speech-Language-Hearing Association. All rights reserved.

Disclaimer: The American Speech-Language-Hearing Association disclaims any liability to any party for the accuracy, completeness, or availability of these documents, or for any damages arising out of the use of the documents and any information they contain.

doi:10.1044/policy.TR2004-00001
