
Step 3: Assess the Evidence

Now that you’ve identified evidence to address your client’s problem or situation, the next step in the EBP process is to assess the internal and external evidence. When assessing the evidence, keep in mind that each type of evidence serves a unique purpose for your clinical decision making.

Internal evidence, the data and observations collected on an individual client, serves both to document your sessions and to track the client’s performance. When assessing the internal evidence, you are determining whether an intervention has impacted your client. You may analyze your data to address the following questions (adapted from Higginbotham & Satchidanand, 2019):

  • Is your client demonstrating a response to the intervention?
  • Is that response significant, especially for the client?
  • How much longer should you continue the intervention?
  • Is it time to change the therapy target, intervention approach, or service delivery model?
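The questions above can be sketched as a simple check on session data. Everything in this example is hypothetical: the scores, the 10-point threshold, and the decision rule are placeholders meant only to illustrate comparing baseline and intervention performance, not a prescribed clinical criterion.

```python
from statistics import mean

# Hypothetical session data: percentage of correct responses per session.
baseline = [35, 40, 38]              # sessions before the intervention
intervention = [45, 52, 58, 63, 66]  # sessions during the intervention

baseline_mean = mean(baseline)
intervention_mean = mean(intervention)
change = intervention_mean - baseline_mean

# Illustrative decision heuristic: flag a response if the intervention mean
# exceeds the baseline mean by more than a chosen threshold (here, 10 points).
responding = change > 10

print(f"Baseline mean: {baseline_mean:.1f}%")
print(f"Intervention mean: {intervention_mean:.1f}%")
print(f"Change: {change:+.1f} points -> responding: {responding}")
```

In practice, you would also look at the trend across sessions and at whether the change matters to the client, not just at a single threshold.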

External evidence, found in scientific research literature, answers clinical questions such as whether an assessment measures what it intended to measure or whether a treatment approach is effective in causing change in individuals. Because the quality of external evidence is variable, this step of assessing the evidence is crucial and includes determining the reliability, importance, and applicability of the relevant scientific research to your client’s condition and needs.

Critically appraising the external evidence can help you determine if the conclusions from one or more studies can help guide your clinical decision. To assess the external evidence, you should determine its relevance to your question, appraise its validity and trustworthiness, and review its results and conclusions.

Determine the Relevance to Your Question

Relevance refers to how closely connected the study's elements (e.g., study aim, participants, method, results) are to your clinical question and how well the external evidence fits your needs. Relevant research literature increases the likelihood that you can generalize the results and outcomes to your client.

Ask yourself:

  • Does this study investigate a population similar to my client?
  • Does the study review an intervention that I could use to advance my client's goals?
  • Are the study's outcomes related to my question?
  • If the study does not perfectly match my PICO elements, can it provide clinically relevant insights to guide clinical decisions?

Use your clinical judgment to decide whether the study's elements are comparable and/or generalizable to the population, intervention, comparison, and/or outcome in your PICO question.

Example: You are providing cognitive intervention to a teenager with traumatic brain injury, and most studies you’ve found examine cognitive treatments for veterans with blast injuries. You will need to decide whether these studies are clinically relevant and applicable to your client, despite their focus on a somewhat different population.

Quick Tip:

If there's no relevant research available, you may need to reconsider your PICO question and return to your search, or continue to Step 4 of the EBP process.

Appraise the Validity and Trustworthiness of the Evidence 

Appraising the validity of the external evidence means that you have considered whether the study effectively investigates its aim. The study should be transparent about its methodology: the research procedure, the data collection methods, and the analysis of data and outcomes. This transparency helps you decide whether the research evidence is trustworthy and whether you can have confidence in its results.

Ask yourself:

  • Will this research design help me answer my question?
  • What are the limitations of the research evidence?
  • Is the external evidence from a trusted source of information?

Research Design and Study Quality

To appraise the validity of the external evidence for a clinical question, it is necessary to consider both the study design and the methodological quality of the study. Because certain research designs offer better controls against bias, many EBP hierarchies rank study quality solely based on study design. However, these hierarchies often fall short because research design alone does not necessarily equate to good external evidence. Moreover, as noted in Step 2, no one study design can answer all types of PICO questions. The chart below details the types of study designs that are best suited for various types of clinical questions. 

Screening/Diagnosis (accuracy in differentiating clients with or without a condition)

  • Example: Is an auditory brainstem response screening more accurate than an otoacoustic emissions screening in identifying newborns with hearing loss?
  • Preferred study design: Prospective, blind comparison to a reference standard
  • Other relevant study designs: Cross-sectional

Treatment/Service Delivery (efficacy of an intervention)

  • Example: What is the most effective treatment to improve cognition in adults with traumatic brain injury?
  • Preferred study design: Randomized controlled trial
  • Other relevant study designs: Controlled trial; single-subject/single-case experimental design

Etiology (identify causes or risk factors of a condition)

  • Example: What are the risk factors for speech and language disorders?
  • Preferred study design: Cohort
  • Other relevant study designs: Case control; case series

Quality of Life/Perspective (understand the opinions, experiences, and perspectives of clients, caregivers, and other relevant individuals)

  • Example: How do parents feel about implementing parent-mediated interventions?
  • Preferred study designs: Qualitative studies (e.g., case study, case series)
  • Other relevant study designs: Ethnographic interviews or surveys of the opinions, perspectives, and experiences of clients, their caregivers, and other relevant individuals

Limitations of the Evidence

In addition to considering research design, you should also consider study methodology to identify any limitations of the external evidence. Limitations are the shortcomings or external influences for which the investigators of a study could not, or did not, control. Because study limitations can influence the outcomes of an investigation, it is crucial to identify any sources of bias or systematic errors in methodology.

To help determine what limitations exist, you can appraise the methodological quality of each study using one of many available research design–specific checklists. Depending on the checklist, you can appraise some or all of the following features:

  • The study had a clearly stated and focused aim or objective. 
  • Investigators used methods to reduce bias, such as blinding or random assignment.
  • The study clearly described the methods used, the intervention protocol applied, and the participants involved (e.g., age, medical diagnosis, severity of condition).
  • The study objectively identified and accounted for any other confounding factors (e.g., restrictions of design, implementation fidelity).

Although other sources of bias exist, they are not typically assessed as part of these checklists. Other sources of bias to consider include conflicts of interest and publication bias.

  • Conflict of interest refers to factors that may compromise the investigator's objectivity in conducting or reporting their research. Financial funding from product developers or employment with the sponsoring organization are common examples of conflicts of interest within research. Be sure to interpret with caution any sources that appear to (a) sensationalize information, (b) lack editorial peer review, or (c) have an alternative agenda.
  • Publication bias occurs when the results of a study influence whether or not the study is published. This may result in studies with positive or significant findings being more likely to be published than those with null or negative findings.

When an investigator takes steps to minimize bias, clinicians can have greater confidence in the study findings.

Quick Tip:

Information is abundant and easy to find, but it may not always be trustworthy or valid. Save time by using resources that have reviewed the included studies for quality and bias:

  • Synthesized research such as evidence-based systematic reviews, meta-analyses, and guidelines usually include quality and/or bias assessment of the included studies.
  • ASHA's Evidence Maps appraise and summarize synthesized research evidence for you.

Review the Results and Conclusions

Once you determine that the research is applicable and valid, you are ready to examine the findings. The results can tell you if the desired outcome of the study was achieved (i.e., “Was there a benefit from the intervention or assessment, or was there no effect?”) and whether any adverse events occurred (i.e., harm). Knowing the extent of the effects ultimately determines if the results of a study are clinically meaningful and important.

Ask yourself:

  • Does the study provide statistical information—such as confidence intervals or effect sizes—about the outcomes?
  • Is this information strong enough to help me make a clinical decision?
  • Can I generalize the results to my client or clinical situation?

When examining the results and conclusions, consider the study's

  • statistical analyses,
  • direction and consistency, and
  • applicability and generalizability.

Statistical Analyses

Review the data and/or the statistical outcomes reported in the study to determine the magnitude of the results (i.e., “How large is the treatment effect?”) and whether the results are significant and clinically important. In other words, determine whether the results are due to chance and, if not, whether they are meaningful enough to consider in clinical practice. Information such as sample size, confidence interval, and effect size allows you to decide how large and precise the intervention effect is. A p value can help you determine whether the results of a study are statistically significant (in other words, they likely did not occur by chance), but it cannot tell you whether the results are clinically significant or clinically important. For example, a study may find a statistically significant difference between the outcomes of two groups, yet the real-life impact for the individuals in each group could be similar. Researchers can use measures such as relative risks and the minimal clinically important difference (also referred to as the minimally important difference) to report clinical significance.
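To make these quantities concrete, the sketch below computes a mean difference, an approximate 95% confidence interval, and Cohen's d for two hypothetical groups of outcome scores. The data and the normal-approximation interval are assumptions for demonstration only (a t-based interval is more exact for small samples), not a prescribed analysis.

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical outcome scores from a two-group treatment study.
treatment = [62, 68, 71, 75, 66, 70, 73, 69]
control = [58, 61, 64, 60, 63, 59, 65, 62]

n1, n2 = len(treatment), len(control)
m1, m2 = mean(treatment), mean(control)
s1, s2 = stdev(treatment), stdev(control)

# Cohen's d with a pooled standard deviation: the size of the effect
# in standard-deviation units, independent of sample size.
pooled_sd = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
cohens_d = (m1 - m2) / pooled_sd

# Approximate 95% confidence interval for the mean difference
# (normal approximation using the standard error of the difference).
diff = m1 - m2
se_diff = sqrt(s1**2 / n1 + s2**2 / n2)
ci_low, ci_high = diff - 1.96 * se_diff, diff + 1.96 * se_diff

print(f"Mean difference: {diff:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
print(f"Cohen's d: {cohens_d:.2f}")
```

A confidence interval that excludes zero suggests a statistically reliable difference, while the effect size (here, Cohen's d) speaks to magnitude; neither alone establishes clinical importance.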

Direction and Consistency 

Consider the results from individual studies and determine whether the overall conclusions across studies are similar. For example, taken together, are the results from the body of external evidence similarly positive or negative? Do the direction and consistency of the evidence support a change in clinical practice?

Be sure to factor in any details (e.g., participant sample size and heterogeneity of participants) that you identified in the individual studies that may limit the applicability of the results.
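Checking direction across a body of evidence can be as simple as tallying the sign of each study's effect. The effect sizes below are hypothetical, chosen only to illustrate the tally:

```python
# Hypothetical standardized effect sizes (e.g., Cohen's d) reported by
# four studies of the same intervention; positive values favor treatment.
study_effects = {
    "Study A": 0.45,
    "Study B": 0.62,
    "Study C": 0.30,
    "Study D": -0.05,
}

favoring = [name for name, d in study_effects.items() if d > 0]
not_favoring = [name for name, d in study_effects.items() if d <= 0]

# Direction is consistent only if every study points the same way.
consistent = len(favoring) == len(study_effects) or len(not_favoring) == len(study_effects)

print(f"Favoring treatment: {favoring}")
print(f"Not favoring treatment: {not_favoring}")
print(f"Direction consistent across studies: {consistent}")
```

A single discordant study (here, the hypothetical Study D) does not settle the question; its sample size, population, and methodological quality determine how much weight it deserves.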

Applicability and Generalizability

Although studies reporting definitive outcomes are ideal, sometimes the results from individual studies or the body of external evidence are inconclusive. In other cases, there may be very little to no scientific evidence available. In these instances, it may be valuable to consider research evidence from similar populations or interventions and to determine whether the results are generalizable to your client or clinical situation. In this circumstance, it is even more critical to collect and consider data taken from your client’s performance to determine whether the approach you are taking is having the intended effect.

Quick Tip:

Research results and conclusions require careful consideration to determine whether they could be clinically meaningful to your client.

References

Higginbotham, J., & Satchidanand, A. (2019, April). From triangle to diamond: Recognizing and using data to inform our evidence-based practice. Academics and Research in Context. https://academy.pubs.asha.org/2019/04/from-triangle-to-diamond-recognizing-and-using-data-to-inform-our-evidence-based-practice/
