Research reports, even those studies that are dazzlingly intelligent and make immensely useful contributions to clinical practice, often go ignored by practicing clinicians. Although required to read research in our academic preparation, we may have experienced professional drift once we went to work. Yet knowledge of the recent and relevant research literature is basic to achieving and maintaining clinical competence. And staying abreast of clinically relevant research is especially important in the current climate in health service delivery with an increasing emphasis on evidence-based practice. Thus, reading research is a "basic" that many of us may need to get "back to."
Motivation: What's In It for Me?
While there is no empirical evidence to support the notion that familiarity with your profession's current body of knowledge improves clinical practice, heuristically it makes sense. Reading published articles, especially case studies and single-case research, provides clinicians with a means to refine and improve clinical procedures and formalize services provided to clients. The result is improved accountability. The methods presented in single-case research reports are essentially those employed every day by clinicians as they seek to determine the outcome of an intervention.
An additional motivation for reading research is to inform our clinical decision-making and to develop practice guidelines (Yorkston et al., 2001). Evidence-based practice guidelines are intended to assist clinicians in making clinical decisions about the management of specific populations based on the best available research evidence, prevailing expert opinion, and client/patient preferences. The goals are to decrease variations in practice, improve the quality of services, identify the most cost-effective interventions, prevent unfounded practices, and stimulate research. Practice guidelines typically are derived from systematic reviews (SRs) of the relevant research in a given area, conducted by a group of reviewers who determine and rank the level of evidence available to support specific practices and summarize the results to indicate what constitutes current best practice. Armed with a practice guideline, a clinician can select treatments that are substantiated by a body of evidence and can evaluate their outcomes by comparing research-supported practices with his or her own experience.
Although the "best evidence" comes from systematic reviews, research evidence alone is not enough. Patient preferences and the clinician's experience are additional factors that must be taken into account when designing treatment in an evidence-based framework. Bringing the clinician's expertise into the decision keeps a patient from being tyrannized by evidence that may not be appropriate for him or her. It permits including the patient's specific predicaments, rights, and preferences in making decisions about care. Thus, clinicians should be motivated to read research as a basic tool of clinical practice that can be integrated with their training, experiences, and expertise.
Studying the Studies
It is difficult to put research into practice when we have had little guidance as to how to go about what Riegelman (2005) called "studying a study." The traditional approach to reading the research literature in graduate school was, "Here's the Journal of Speech, Language, and Hearing Research. Read it!"
For those slightly more fortunate, there was a gentler immersion method of introduction called "find the flaw." This exercise can foster a destructive attitude toward research rather than developing constructive consumers of it. Of course, research reports should be read critically. Errors can occur in published research, but not every flaw is fatal. The goal in reading the research literature is to recognize a study's limitations and avoid putting flawed findings into practice. As Kent (1985) cautioned, caveat emptor applies to all consumers of research. Perhaps the best approach is to be a believer with reasonable doubts. Proceed with an open-minded skepticism.
The following are our suggestions for reading research reports:
Read with a purpose. Let's consider some of the reasons one might choose to read a specific research report. First, clinicians may be seeking information about an appropriate treatment. For this purpose, seek single-case research (SCR) reports. Typically, in SCR reports the treatment employed may be more easily replicated because it is described in greater detail than it is in a report of a group treatment study.
Second, clinicians may want to find empirical evidence to support the efficacy of treatment for a particular diagnosis or when using a particular procedure. For this purpose, seek out reports of randomized controlled trials (RCTs) that randomly assigned patients to treatment and no-treatment conditions. Single-case reports are inappropriate for this purpose, because, as Robey and Schultz (1998) observe, "…efficacy is a property of a treatment delivered to a population and inference to a population requires a group experiment…and, as single-subject experiments do not provide inference to a population…they do not and cannot index efficacy" (p. 805).
Third, clinicians may seek a tutorial on a particular subject, a review of clinical cases, or an expert opinion. Journals accept a variety of manuscripts, and these articles may assist clinicians in focusing their purpose.
Remember that your purpose may not be the same as the author's "purpose." For example, the title "A Systematic Review of Early Intervention Approaches to Aural Habilitation of Children with Cochlear Implants" may describe the author's purpose in that it surveys the best evidence to support the efficacy of a treatment approach with a particular population. However, if a clinician reads this paper with the purpose of finding detailed descriptions of a variety of treatments, disappointment is likely.
Seek Out Systematic Reviews. Internet access to databases and to reports drawn from systematic reviews of the research literature is a gift to the clinician. Dollaghan (2004) suggests busy clinicians should access systematic reviews that have been conducted in an area of interest, as this is likely to provide good information without investing a great deal of time and effort. Guidelines for accessing electronic libraries and reviews can be found posted with this article in The ASHA Leader Online.
Ask Questions. It may be useful to pose and seek the answers to a series of questions as we travel through the manuscript. It is important to remember, as Riegelman (2005) reminds us, our goal is not to find the flaws, but "to find the truth" (p. vii). Consider the following questions:
1. Does the research report present a rationale? Do you understand its purpose, and are there specific questions the study is designed to answer? The author should indicate his or her motivation, or rationale, for conducting the study. The most frequent rationale is that something is not known. While this justifies a study, the rationale should also provide a good reason for seeking the missing evidence; otherwise one may be left with the question, "Who cares?"
After we have ascertained the rationale, with the justification usually developed from the literature review, we look for a specific statement of the investigation's purpose. The best ones are stated clearly, for example, "The purpose of this investigation was…" The purpose should be consistent with, logically linked to, and justified by the rationale. The statement of purpose also permits a comparison between the author's purpose and the reader's purpose.
Next, look for the hypotheses or the research questions. These provide a map that indicates where the research is going and where it has been. The author's subsequent methodology and analyses are dictated by the research question. If the methods and analyses are not designed to answer the research question, they are inappropriate. Similarly, the research question should be consistent with the rationale and purpose. Formulating a research question is the most difficult task in conducting research. But, it is essential. It puts the train on the right track and ensures every station along the way is the right station.
2. Are the research subjects appropriate, and is the number of subjects sufficient to answer the research question? Robey and Schultz (1998) tell us one difference between efficacy and effectiveness outcomes research is that the former includes "ideal" treatment candidates, and the latter includes "typical" treatment candidates. Thus, if the investigation is designed to determine a treatment's efficacy, rigid selection and exclusion criteria should be employed. Moreover, are the study participants described adequately, including the important biographical, medical, and behavioral characteristics? And, is the description consistent with the selection and exclusion criteria? Adequate description permits the reader to determine to whom the results apply.
In group studies, do the groups differ in important characteristics? If study participants meet the selection criteria and are assigned randomly to groups, typically random assignment will equate groups on those important characteristics. But, investigators should assure us that the groups are balanced by providing data on all of the important variables. If random assignment to groups is not employed, is there a potential for selection bias?
3. Are the procedures appropriate and sufficiently described to permit replication? Again, the research questions dictate the procedures. Every procedure should answer the question, "Why did the investigator do that?" If the answer is not consistent with the investigator's purpose or the answer cannot be found in a research question, the procedure is inappropriate.
In addition, the procedures should be described in sufficient detail to permit replication, which is in short supply in most sciences. Too often "facts" are established on the basis of a single study only to be questioned later by someone who was sufficiently impertinent to ask, "Can that be right?" in a replication. Sometimes, journal space does not permit sufficient elaboration of procedures. Nevertheless, glossing over the procedures frequently leaves the reader skeptical, asking questions that need not have arisen. Questions about procedures should be few, and the report should provide enough detail that the reader can obtain any missing information.
4. Are the outcome measures appropriate for the research questions? When one goes looking for something, there should be a means for finding it. Typically, the outcome measures provide that means. For example, if one is interested in the influence of a treatment designed to improve functional communication, an outcome measure that assesses functional communication and change in functional communication is appropriate.
Outcome measures should be psychometrically sound and have demonstrated validity and reliability. And, with regard to reliability, we want to know the reliability of the measures as employed in the investigation and not just the reliability that resides in the measures' manuals. Thus, investigators need to tell us how reliable they were in employing the measures and not simply appeal to "established" reliability.
5. Are the analyses appropriate for answering the research questions? Again, the research questions will dictate the analyses. If one is interested in relationships, a variety of correlational analyses are available, and the selection is influenced by the properties of the measures employed (nominal, ordinal, interval, or ratio). And, one should not confuse a relationship with predictability. A correlation will provide the former, but a regression analysis is essential to determine the latter. If one is interested in making group comparisons, an analysis of variance may suffice. However, if groups differ on specific variables that may influence outcome, an analysis of covariance is preferred.
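The relationship-versus-prediction distinction can be made concrete in a few lines of code. The sketch below is ours, not drawn from any study discussed here; the data and function names are invented for illustration. A correlation coefficient tells you how strongly two variables covary, while a regression equation is what actually lets you predict one from the other.

```python
from statistics import mean

def pearson_r(x, y):
    """Strength of the linear relationship between x and y (ranges -1 to 1)."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def regression_line(x, y):
    """Least-squares slope and intercept for predicting y from x."""
    mx, my = mean(x), mean(y)
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y)) /
             sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

# Invented example: hours of treatment vs. outcome scores.
hours = [2, 4, 6, 8]
scores = [55, 60, 70, 75]
print(round(pearson_r(hours, scores), 2))  # strong relationship: 0.99
slope, intercept = regression_line(hours, scores)
print(intercept + slope * 10)              # predicted score at 10 hours: 82.5
```

The correlation says only that hours and scores rise together; the regression line is what turns that relationship into a prediction for a new patient.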
Sometimes, we like our analyses so much we overindulge. Gluttony in statistical analysis can be eased by a Bonferroni procedure (Dunn, 1961), typically dividing the alpha (e.g., .05) by the number of analyses to control for experimentwise error, or a modified Bonferroni procedure (Larzelere & Mulaik, 1977). If a research report includes numerous analyses and the experimenters have not controlled for experimentwise error, they are not attempting to answer their questions; they are looking for a favorable tide.
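As a simple arithmetic illustration (the code and numbers are ours, not from any cited source), the standard Bonferroni procedure just divides the familywise alpha by the number of analyses:

```python
def bonferroni_alpha(alpha, n_tests):
    """Per-test significance threshold after a standard Bonferroni correction."""
    return alpha / n_tests

# Ten analyses at an overall alpha of .05: each individual test must now
# reach p < .005 before it is declared significant.
print(round(bonferroni_alpha(0.05, 10), 6))  # prints 0.005
```

The stricter per-test threshold is what keeps ten chances at significance from quietly becoming a favorable tide.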
6. Are the results clear, internally consistent, and presented in sufficient detail? In treatment studies, have the effect size and confidence interval been reported? Reading results can be one of the most difficult tasks in reading research. A plethora of statistical terms, symbols, and comparison of tables and figures with text can immerse the reader in an incomprehensible marinade of numbers. A recent report by the ASHA Research and Scientific Affairs Committee (ASHA, 2004) reminds us that information on effect size and statistical power needs to be included in every published investigation. To allow us to draw recommendations from studies of clinical questions to treatment, the investigators should "justify the size of the effect that is deemed clinically important and should provide evidence that statistical power is adequate for detecting an effect of this magnitude" (p. 4).
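To make the effect-size idea concrete, here is a minimal sketch of Cohen's d, a common standardized effect-size measure for a two-group comparison. The code and the example scores are our own invention, offered only as an illustration of what "effect size" quantifies.

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d(group1, group2):
    """Standardized mean difference: (mean1 - mean2) / pooled SD."""
    n1, n2 = len(group1), len(group2)
    pooled_var = ((n1 - 1) * stdev(group1) ** 2 +
                  (n2 - 1) * stdev(group2) ** 2) / (n1 + n2 - 2)
    return (mean(group1) - mean(group2)) / sqrt(pooled_var)

# Hypothetical post-treatment scores. By Cohen's conventions, d >= 0.8
# is considered a large effect, so this difference would be very large.
treated = [12, 14, 15, 13, 16]
control = [10, 11, 9, 12, 10]
print(round(cohens_d(treated, control), 2))  # prints 2.61
```

Unlike a p-value, which only says whether a difference is unlikely to be chance, d expresses how big the difference is in standard-deviation units, which is what "clinically important" judgments rest on.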
Lastly, it is reasonable to expect results to be orderly. A way of satisfying this preference is for the investigators to say, "The first research question asked…" and "to answer this question, we…" Then the results are presented and followed by a statement that indicates how the results answer the research question. If the author does not do that, the reader can use the same sequence as a map to find his or her way through the thicket. For example, we ask: "What research question is being addressed? What analysis is being employed to address it? What are the results? And what is the answer to the question provided by the results?"
7. Is the Discussion a Discussion? Too often a research report's Discussion is little more than a reiteration of the Results. One feels as if he or she is experiencing déjà vu. This can be avoided if the investigators employ an orderly sequence, for example: address each research question; supply the answer provided by the results; compare their answer with previous reports and, if syzygy (alignment with prior findings) is not apparent, provide an explanation for why differences may have occurred; and discuss the implications (theoretical, clinical, or other) of the answer to the question.
As always, caveat emptor! There can be drift between the Results and the Discussion. The Discussion may contain more of what the investigators hoped to observe than what they actually observed. This is seldom overt, and it is probably unintentional. But, if the investigators discarded a sizeable percentage of their data and used only the data that support their theory, be suspicious. In a Discussion, we want to know what was observed, what that means, and how it relates to previous findings. Essentially, as readers of research, we seek answers to Wendell Johnson's (1946) questions: What do you mean? How do you know? And, what difference does it make?
Proceed with Caution
The above has been a quick romp through why one might want to read research and how one might do that. Apply the same open-minded skepticism we have recommended for reading research to what we have said here. We have made a few suggestions and posed a few questions for your consideration. Riegelman (2005) warns that reading the research literature can be habit-forming. Some may find it enjoyable. Moreover, developing a taste for the research literature may tempt clinicians to evaluate empirically their own interventions in an effort to increase accountability or to conduct studies in their specific clinical settings. In evidence-based practice, it is the responsibility of clinicians to apply the research results and to put research reports into the context of their own practices and experience. Reading reviews can be a good start in developing a relationship with the research literature. Accessing ASHA journals online now gives you the option to include PubMed journals and other journals in your online search, and the individually designed Weekly Literature Review (available to ASHA members by subscription) is only a mouse click away. Visit The ASHA Leader Online for more information.
Portions of this paper appeared in Golper, L. and Wertz, R. T. (2002, June). Back to Basics: Reading Research. Perspectives: Neurophysiology and Neurogenic Speech-Language Disorders, 12(2), 27-31.