June 13, 2006 Feature

Bias and Blinding: Self-Fulfilling Prophecies and Intentional Ignorance

Primer on Research: Part 2

Scientists, like everyone else, can be led astray by their beliefs and prejudices. Because of this, scientists have taken steps to protect themselves against bias in order to ensure an objective knowledge base. In the long run, this is essential if clinicians are going to trust the information that supports their understanding of communication disorders. The goal of this article, as part of the series on science offered by the ASHA Research and Scientific Affairs Committee, is to examine the influence of bias on research and the methods for minimizing it. For this article, bias is defined as a systematic distortion of research findings resulting from subjective influences. Perhaps the most insidious form of bias is a self-fulfilling prophecy, because it taps into our good intentions.

Types of Bias

A self-fulfilling prophecy is a prediction that comes true because our expectations lead us to see what we want to see. The effects of self-fulfilling prophecies have been documented across various contexts, ranging from experimenter effects on animal behavior to teacher effects on student learning (Rosenthal, 1994). The problem with self-fulfilling prophecies is that personal expectations are subjective; their influence therefore undermines the building of an objective knowledge base. Consequently, scientists are concerned with self-fulfilling prophecies, especially in the form of experimenter bias that can occur when conducting research.

Experimenter bias encompasses a range of concerns that can introduce subjectivity into the research process and lead to results that might not have occurred otherwise. One of these is experimenter expectancy, which occurs when the experimenter's hypothesis plays an unintended role in determining the study's outcome. As with a self-fulfilling prophecy, the experimenter's expectations increase the probability that the results will occur as predicted (e.g., Rosenthal & Lawson, 1964). This outcome is especially likely if the experimenter is also the person who has personal contact with the participants during the research. The specific mechanism for this influence is unclear; although the experimenter does not appear to do anything intentional to bias the results, a range of cues, such as the experimenter's tone of voice, posture, facial expressions, and delivery of instructions, can nonetheless influence participants' responses (Kazdin, 2003). At the same time, participants are likely attending to those cues because they are motivated to determine the purpose of the experiment and to respond in a way that will support the hypothesis being tested (Orne, 1962). For example, if the experimenter expects one hearing aid to be preferred over another, participants with hearing loss may glean this preference from the experimenter's behavior and rate their own preferences accordingly.

Observer bias is another form of experimenter bias that can arise from the experimenter's expectations during the study. It occurs when those expectations affect the measurement of the participants' behavior. This type of bias does not depend on personal interaction between experimenter and participant. Rather, the experimenter's expectations produce errors of observation that skew the results in the direction of the study's hypothesis. For example, if the observer anticipates that a treatment condition will reduce stuttering frequency, she might count unambiguous stuttering moments but inadvertently discount or explain away milder ones, thus obtaining the lower stuttering frequency she predicted.

Finally, interpretive bias is a third form of experimenter bias. It occurs when the experimenter's evaluation and judgment of the data, during and at the completion of a study, are tilted in favor of the experimenter's hypothesis. This bias can appear in various ways (Kaptchuk, 2006). For example, the experimenter might unwittingly engage in confirmation bias: judging data that support the hypothesis favorably and applying minimal scrutiny to their shortcomings, while ignoring or dismissing data that raise doubts about the hypothesis by holding them to a tougher standard. Another possibility is that the experimenter openly acknowledges the contradictory data but suggests that they would not have occurred if conditions had been different. This attempt to rescue the study is referred to as an "ad hoc hypothesis" (Kaptchuk, 2006). Examples of ad hoc bias were evident in recent media accounts (Kolata, 2006) of the health benefits of low-fat diets, in which it was argued that clinical trials finding that reduced-fat diets did not lower the risk of cardiovascular disease in women (Howard et al., 2006) had produced that result because the investigators had targeted the wrong kind of fat. This ad hoc hypothesis suggested that the outcome would have been different if specific kinds of fat had been reduced rather than overall fat.

Blinding

The most effective control for minimizing experimenter bias is blinding. The term blinding appears to have emerged during the 18th century in studies of mesmerism, a therapeutic approach in which treatment was based on inducing a hypnotic state in the patient using a force called animal magnetism. These investigations, overseen by Benjamin Franklin, included conditions in which participants were blindfolded so that they would not know which of their physical ailments were being targeted for treatment (Kaptchuk, 1998). The results showed that mesmerism was ineffective when participants were blind to the treatment condition. As this example suggests, the goal of blinding is to minimize the impact of bias by creating conditions of deliberate ignorance about key aspects of the experiment.

There are two types of blinding. A single-blind condition is one in which the participants are intentionally kept unaware of which treatment condition they are in, as described above. It is typically implemented when there are concerns that participants' knowledge of the treatment might influence their perceptions of the treatment and its outcome. It may also be important to employ deception, so that participants are led to believe the condition is one thing when in reality it is another. In a recent study, for example, participants were told that they were comparing the effectiveness of two hearing aids, a digital aid versus a conventional aid, when in reality the aids were identical and only the labels had been changed (Bentler, Niebuhr, Johnson, & Flamme, 2003). The results showed that labeling biased the participants' preferences in favor of the "digital aid."

A double-blind condition is one in which both the participants and the experimenters who have direct contact with them are unaware of the experimental condition. This is believed to control for expectancy bias because experimenters cannot inadvertently cue the participants about the expected outcome if they do not know what it is. Implementing blinding, however, does not guarantee that it worked.

Experimenters and participants will likely try to discern the "real" purpose of the experiment regardless of blinding. Thus, it is essential for the investigator overseeing the study to conduct an integrity check to determine whether the experimenters and participants figured out the study's actual hypothesis (Schulz, Chalmers, & Altman, 2002). Two controls often used in conjunction with blinding are randomization and placebos, but their discussion is beyond the scope of this article.

Blinding also can be implemented to control for observer bias. This means the experimenter responsible for measuring the participants' behavior is kept in the dark about which experimental condition is in effect when the behavior is assessed, which minimizes the influence of expectancy on the measurement process. This control is especially important in studies where experimenters cannot be kept naïve about the treatment condition. Independent assessors, not involved in any other aspect of the study and naïve to the experimental hypothesis and test conditions, should be recruited to double-check the experimenter's measurements.
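To make that double-check concrete, the following minimal sketch (written in Python, with invented numbers) illustrates one way an investigator might compare an experimenter's stuttering-frequency counts against those of a blinded, independent assessor. The counts, the function name, and the simple percent-agreement index are illustrative assumptions only, not a prescribed procedure.

    # Hypothetical illustration: comparing the experimenter's stuttering counts
    # with a blinded assessor's counts for the same five speech samples.
    # All numbers are invented; percent agreement is only one possible index.
    experimenter_counts = [12, 8, 15, 6, 10]   # stuttering moments per sample
    blinded_counts = [14, 9, 15, 9, 11]        # counts from the independent assessor

    def percent_agreement(a_counts, b_counts):
        """Average, across samples, the smaller count divided by the larger count."""
        ratios = []
        for a, b in zip(a_counts, b_counts):
            if a == 0 and b == 0:
                ratios.append(1.0)             # both observers report no stuttering
            else:
                ratios.append(min(a, b) / max(a, b))
        return 100 * sum(ratios) / len(ratios)

    agreement = percent_agreement(experimenter_counts, blinded_counts)
    print(f"Inter-observer agreement: {agreement:.1f}%")

A low agreement value would alert the investigator that expectancy may be influencing the measurements and that the observation procedures need to be re-examined.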

Blinding is a powerful control, but it cannot minimize interpretive bias, because the evaluation and judgment of data typically occur after the study's completion, when it is written up for publication. The first level of control for interpretive bias is the investigators themselves: most scientists understand that they should be honest and open-minded and should apply the same critical analysis to their own work that they apply to the work of others. Interpretive bias is more likely to be revealed, however, when independent readers identify issues through journal peer review or in letters to the editor. The best control occurs when findings are replicated by other researchers who do not share the original investigator's commitment to the experimental outcome.

Controlling for Bias

Scientists understand and respect the problems that bias can introduce when developing an objective knowledge base; therefore, they have developed controls for minimizing and accounting for its effects. In the era of evidence-based practice, however, it is important for clinicians to understand that they, too, are prone to self-fulfilling prophecies; their clinical expectancies can cloud their judgments about a treatment's outcome and why it worked. It would be unrealistic to expect clinicians to implement the level of controls that experimenters use to minimize bias. At the same time, clinicians should guard against naïvely believing that their expectancies do not affect their judgments. Clinicians, like experimenters, need to be honest with themselves, remain open-minded, and apply critical thinking to their own clinical outcomes. Their clients will be the beneficiaries.

Patrick Finn is an associate professor in Speech, Language, and Hearing Sciences at the University of Arizona. He has published journal articles and chapters on the scientific evaluation of stuttering treatment and the nature of recovery from stuttering. He is currently writing on critical thinking skills and evidence-based practice. Contact him by e-mail at pfinn@email.arizona.edu.

cite as: Finn, P. (2006, June 13). Bias and Blinding: Self-Fulfilling Prophecies and Intentional Ignorance: Primer on Research: Part 2. The ASHA Leader.

References

Bentler, R. A., Niebuhr, D. P., Johnson, T. A., & Flamme, G. A. (2003). Impact of digital labeling on outcome measures. Ear & Hearing, 24, 215-224.

Cole, K. C. (1985, September). Is there such a thing as scientific objectivity? Discover, 6, 98-99.

Howard, B. V., Van Horn, L., Hsia, J., Manson, J. E., Stefanick, M. L., Wassertheil-Smoller, S., et al. (2006). Low-fat dietary pattern and risk of cardiovascular disease: The Women's Health Initiative randomized controlled dietary modification trial. Journal of the American Medical Association, 295(6), 655-666.

Kaptchuk, T. J. (1998). Intentional ignorance: A history of blind assessment and placebo controls in medicine. Bulletin of the History of Medicine, 72, 389-433.

Kaptchuk, T. J. (2006). Effect of interpretive bias on research evidence. British Medical Journal, 326, 1453-1455.

Kazdin, A. E. (2003). Research design in clinical psychology (4th ed.). Boston, MA: Allyn and Bacon.

Kolata, G. (2006, February 14). Maybe you're not what you eat. The New York Times.

Orne, M. T. (1962). On the social psychology of the psychological experiment: With particular reference to demand characteristics and their implications. American Psychologist, 17, 776-783.

Rosenthal, R. (1994). Interpersonal expectancy effects: A 30-year perspective. Current Directions in Psychological Science, 3, 176-178.

Rosenthal, R., & Lawson, R. (1964). A longitudinal study of the effects of experimenter bias on the operant learning of laboratory rats. Journal of Psychiatric Research, 2, 61-72.

Schulz, K. F., Chalmers, I., & Altman, D. G. (2002). The landscape and lexicon of blinding in randomized trials. Annals of Internal Medicine, 136, 254-259.



  
