Step 3: Assess the Evidence

Now that you have identified potential research to address your client's problem or situation, the next step in the EBP process is to assess the reliability, importance, and applicability of the external scientific evidence. Critically appraising the evidence can help you determine if the conclusions from one or more studies can help guide your clinical decision.

To assess the evidence, you should:

Determine the Relevance to Your Question

Relevance refers to how closely connected the study's elements are to your clinical question and how well the evidence fits your needs. Relevant research literature increases the likelihood that you can generalize the results and outcomes to your client.

Ask yourself:

  • Does this study investigate a population similar to my client?
  • Does the study review an intervention that I need?
  • Are the study's outcomes related to my question?
  • If the study does not perfectly match my PICO elements, can it provide clinically relevant insights?

Use your clinical judgment to decide if the study's elements are comparable or generalizable to the population, intervention, comparison, and/or outcome in your PICO question.

Example: You are providing cognitive intervention to a teenager with traumatic brain injury and find studies that examine cognitive treatments for veterans with blast injuries. You will need to decide if the studies are clinically relevant and applicable to your client, despite the different population.

Quick tip:

If there's no relevant research available, you may need to reconsider your PICO question and go back to your search, or continue to Step 4 of the EBP process.

Appraise the Validity and Trustworthiness of the Evidence 

Appraising the validity of the evidence means that you have considered whether the study effectively investigates its aim. The study should be transparent about its methodology: the research procedure, the data collection methods, and the analysis of the data and outcomes. This helps you decide if the evidence is trustworthy and if you can have confidence in its results.

Ask yourself:

  • Will this research design help me answer my question?
  • What are the limitations of the evidence?
  • Who is the author, and is there a conflict of interest?
  • Is the evidence from a trusted source of information?

Research Design and Study Quality

To appraise the validity of the evidence for a clinical question, it is necessary to consider both the study design and the methodological quality of the study. Because certain research designs offer better controls against bias, many EBP hierarchies rank study quality solely based on study design. However, these hierarchies often fall short because research design alone does not necessarily equate to good evidence. Moreover, as noted in Step 2, no single study design can answer all types of PICO questions. The chart below details the types of study designs that are best suited for various types of clinical questions. 

Type of question: Screening/Diagnosis (accuracy in differentiating clients with or without a condition)
  Example: Is an auditory brainstem response screening or an otoacoustic emissions screening more accurate in identifying newborns with hearing loss?
  Preferred study design(s): Prospective, blind comparison to reference standard
  Other relevant study design(s): Cross-sectional

Type of question: Treatment/Service Delivery (efficacy of an intervention)
  Example: What is the most effective treatment to improve cognition in adults with traumatic brain injury?
  Preferred study design(s): Randomized, controlled trial
  Other relevant study design(s): Controlled trial; single-subject/single-case experimental design

Type of question: Etiology (identify causes or risk factors of a condition)
  Example: What are the risk factors for speech and language disorders?
  Preferred study design(s): Cohort
  Other relevant study design(s): Case control; case series

Type of question: Quality of Life/Perspective (clients' opinions and experiences)
  Example: How do parents feel about implementing parent-mediated interventions?
  Preferred study design(s): Qualitative studies (e.g., case study, case series)

Limitations of the Evidence

In addition to research design, you must also consider study methodology to identify any limitations of the evidence. Limitations are the shortcomings or external influences for which the investigators of a study could not, or did not, control. Because study limitations can influence the outcomes of an investigation, it is crucial to identify any sources of bias or systematic errors in methodology.

To help determine what limitations exist, you can appraise the methodological quality of each study using one of many available research design–specific checklists. Depending on the checklist, some or all of the following features are appraised:

  • The study had a clearly stated and focused aim or objective. 
  • Investigators used methods to reduce bias, such as blinding or randomized group assignment.
  • The study clearly described the methods used, the intervention protocol, and the participants involved (e.g., age, medical diagnosis, severity of condition).
  • The study objectively identified and accounted for any other confounding factors (e.g., restrictions of design, implementation fidelity).

Although other sources of bias exist, they are not typically assessed as part of these checklists. Other sources of bias to consider include conflicts of interest and publication bias.

  • Conflict of interest refers to factors that may compromise the investigator's objectivity in conducting or reporting their research. Financial funding from product developers or employment with the sponsoring organization are common examples of conflicts of interest within research.
  • Publication bias refers to where the research is published and disseminated. Peer-reviewed scientific journals, particularly those related to the discipline of communication disorders, are typically reputable research resources. Be sure to interpret with caution any sources that appear to sensationalize information, lack editorial peer review, or have an alternative agenda.

When an investigator takes steps to minimize bias, clinicians can have greater confidence in the study findings.

Quick tip:

Information is abundant and easy to find, but it may not always be trustworthy or valid. Save time by using resources that have reviewed studies for quality and bias:

  • Synthesized research such as systematic reviews, meta-analyses, and guidelines usually include quality and/or bias assessment of the included studies.
  • ASHA's Evidence Maps appraise and summarize evidence for you.

Review the Results and Conclusions

Once you determine that the research is applicable and valid, you are ready to examine the findings. The results can tell you if the desired outcome of the study was achieved (i.e., Was there a benefit or no effect from the intervention or assessment?) and whether any adverse events occurred (i.e., harm). Knowing the extent of the effects ultimately determines whether the results of a study are clinically meaningful and important.

Ask yourself:

  • Does the study provide statistical information about the outcomes such as confidence intervals or effect sizes?
  • Is this information strong enough to help me make a clinical decision?
  • Can I generalize the results to my client or clinical situation?

When examining the results and conclusions, consider the:

Study's statistical analyses

Review the data and/or the statistical outcomes reported in the study to determine the magnitude of the results (i.e., how large is the treatment effect?) and whether the results are both statistically significant and clinically important; that is, whether the results are unlikely to be due to chance and, if so, whether they are meaningful enough to consider in clinical practice. Information such as sample size, confidence intervals, and effect sizes allows you to judge how large and precise the intervention effect is. A p-value can help you determine whether the results of a study are statistically significant (in other words, whether they likely did not occur by chance), but it cannot tell you whether the results are clinically significant or clinically important. For example, a study may find a statistically significant difference between the outcomes of two groups, yet the real-life impact for the individuals in each group could be similar. Researchers can use measures such as relative risk and the minimal clinically important difference (also referred to as the minimal important difference) to report clinical significance.
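To illustrate the distinction between statistical and clinical significance, the hypothetical Python sketch below simulates post-treatment scores for two groups that differ by less than one point on a 100-point scale. With large samples, the p-value falls well below .05 even though the standardized effect size (Cohen's d) is very small. All group names and values here are invented for illustration; they do not come from any study.

```python
import math
import random

# Illustrative only: simulated post-treatment scores for two groups.
# The simulated "true" difference is tiny (0.8 points on a 100-point
# scale), but large samples can still make it statistically significant.
random.seed(1)
n = 20_000
group_a = [random.gauss(50.0, 10.0) for _ in range(n)]  # comparison group
group_b = [random.gauss(50.8, 10.0) for _ in range(n)]  # treatment group

def mean(xs):
    return sum(xs) / len(xs)

def sample_sd(xs):
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

ma, mb = mean(group_a), mean(group_b)
sa, sb = sample_sd(group_a), sample_sd(group_b)

# Effect size (Cohen's d): the group difference in standard-deviation units.
pooled_sd = math.sqrt((sa ** 2 + sb ** 2) / 2)
cohens_d = (mb - ma) / pooled_sd

# Large-sample two-tailed z-test for the difference in means.
se = math.sqrt(sa ** 2 / n + sb ** 2 / n)
z = (mb - ma) / se
p_value = math.erfc(abs(z) / math.sqrt(2))

print(f"p-value   = {p_value:.2e}")   # statistically significant (p < .05)
print(f"Cohen's d = {cohens_d:.2f}")  # yet a very small standardized effect
```

A clinician would then ask whether a difference this small exceeds the minimal clinically important difference for the outcome measure; if not, the "significant" result may not warrant a change in practice.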

Direction and consistency 

Consider the results from individual studies, and determine if the overall conclusions across studies are similar. For example, taken together, are the results from the body of evidence similarly positive or negative? Do the direction and consistency of the evidence support a change in clinical practice?

Be sure to factor in any limitations that you identified in the individual studies, such as small sample sizes, short study durations, and participant heterogeneity, that may limit the applicability of the results.

Applicability/Generalizability

Although studies reporting definitive outcomes are ideal, sometimes results from individual studies or the body of evidence are inconclusive. In other cases, there may be very little to no scientific evidence available. In these instances, it is important to consider evidence from similar populations or interventions and determine if the results are generalizable to your client or clinical situation.

Quick tip:

Research results and conclusions require careful consideration to determine whether they could be clinically meaningful to your client.
