Steps in the Process of Evidence-Based Practice
Step 3: Assessing the Evidence
There are at least two important factors to keep in mind when assessing a systematic review.
The first is the relevance of the review to your specific clinical question (see framing the clinical question). If the brain-injured patient whose care prompted your question is a member of a cultural or linguistic minority, for example, how useful is a brain-injury review that excludes or makes no specific mention of culturally or linguistically diverse populations? If you are treating an autistic teenager, of what relevance are reviews based primarily on studies of younger children? Once again, the expertise and experience of the individual clinician are an essential part of evidence-based practice.
The second factor to consider is who wrote and published the review. While many reviews are produced by academic institutions and interdisciplinary collaborations, others are produced by advocacy groups or payors. It is important to consider who produced a review and to what extent they would likely be affected by positive or negative findings. That said, a review from a "trusted" source is not guaranteed to be of high quality, just as a review from a less objective source is not necessarily flawed.
As noted elsewhere, publication of a study in a peer-reviewed scientific journal is no guarantee of quality. Individual studies are generally assessed along two dimensions: level of evidence and study quality. Level of evidence refers to a hierarchy of study designs ranked by how well each design protects against bias. While no single hierarchy is universally accepted, randomized controlled trials (RCTs) are considered the design least susceptible to bias, and the various hierarchies descend from there through observational studies to non-experimental designs. The table below is one example of a hierarchy of levels of evidence.
1. Well-designed meta-analysis of more than one randomized controlled trial
2. Well-designed randomized controlled study
3. Well-designed controlled study without randomization
4. Well-designed quasi-experimental study
5. Well-designed non-experimental studies (e.g., correlational and case studies)
6. Expert committee report, consensus conference, or clinical experience of respected authorities
Adapted from the Scottish Intercollegiate Guidelines Network
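The hierarchy above is simply an ordered ranking: the lower a design's level number, the better its protection against bias. As an illustration only, the ranking can be sketched as a small lookup in Python; the list entries, the `evidence_level` helper, and its behavior are hypothetical conveniences for this example, not part of any official grading scheme:

```python
# Illustrative sketch: encode the six-level hierarchy above as an ordered
# list, where index 0 is the design with the strongest protection against
# bias. The names and the helper function below are hypothetical.
EVIDENCE_HIERARCHY = [
    "meta-analysis of >1 randomized controlled trial",
    "randomized controlled study",
    "controlled study without randomization",
    "quasi-experimental study",
    "non-experimental study (e.g., correlational or case study)",
    "expert committee report / consensus / clinical experience",
]

def evidence_level(design: str) -> int:
    """Return the 1-based level for a study design (1 = strongest)."""
    return EVIDENCE_HIERARCHY.index(design) + 1

# A randomized controlled study ranks higher (lower level number) than a
# quasi-experimental study:
assert evidence_level("randomized controlled study") < evidence_level("quasi-experimental study")
```

The point of the sketch is only that level of evidence orders study *designs*; it says nothing about how well a particular study of that design was carried out, which is the separate dimension of study quality discussed next.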
Study quality is an assessment of the extent to which a study, of whatever design, was designed and implemented appropriately. Again, there is no single universally accepted set of criteria for what constitutes a high-quality study. For examples of study quality criteria, see the Scottish Intercollegiate Guidelines Network.
See also: Tutorials on assessment of individual studies.