May 2, 2006 Feature

Primer on Research: An Introduction


With this article, the ASHA Research and Scientific Affairs Committee introduces the "Primer on Research," a series of articles that will appear in upcoming issues of The ASHA Leader. The series will include discussion by committee members and invited guests of some basic concepts in research and their applications to the speech, language, and hearing sciences.

As clinicians we rely on research to inform our daily clinical decisions regarding diagnosis and treatment. This knowledge base is generated by researchers in the speech, language, and hearing sciences who gather data in the lab, and by clinical researchers who gather data from their patients. Both endeavors are needed to advance the professions and secure a better future for our patients. The recent emphasis on training in evidence-based practice highlights the need to base our clinical decisions on data rather than dogma and intuition. Evidence-based practice involves "the integration of best research evidence with clinical expertise and patient values" (Sackett et al., 2000, page 1).

Research is about asking questions and solving problems. The scientific method involves the identification of a problem that can be studied by asking testable questions (or stating hypotheses) and devising experiments to help answer the questions by collecting data. The data are then analyzed and conclusions are drawn to help answer the original question and to direct further study.

Research Questions

A key to research is the ability to ask good questions: questions that are important to those who know the field and focused enough to lead to valuable answers. Good questions don't always just roll off the tongue, even for linguists. For example, if you don't have a firm grasp of your field of study, you may develop a research question that has already been answered (and that's not a good question). So perhaps the first place to start if you want to ask good questions is the literature in your field. As Sir Isaac Newton put it in 1676, "If I have seen further, it is by standing on ye shoulders of Giants." You climb onto the shoulders of those giants by reading the literature in your field and by talking with some of the living giants who are known to frequent research conferences. In the first article in this series, Peggy Nelson and Lisa Goffman address the issue of asking good research questions.

Experiments

After you have developed your good question, you are ready to devise an experiment. You may have asked, "What is the effect of the level of the 226 Hz admittance probe on the acoustic stapedius reflex threshold?" as Jessica Day recently did in my laboratory. This was a good question because she stood on the shoulders of some giants (for example, Terkildsen, Osterhammel & Nielsen, 1970) who were interested in a similar question of the effect of the probe level on the acoustic reflex growth function.

Commercial admittance systems don't allow the 226 Hz probe level to vary from its typical level of 85 dB SPL, so Jessica used an experimental admittance system, which allowed her to vary the probe level from 70 to 85 dB SPL. Using this system, she was able to obtain estimates of the contralateral acoustic reflex threshold for a 1000 Hz activator stimulus for 40 young-adult subjects at each of four 226 Hz probe levels.

She analyzed the reflex threshold data using an analysis of variance and found that the probe level did affect the reflex threshold. As the probe level increased, the reflex threshold decreased. The average effect of changing from a 70 to an 85 dB SPL probe level was a reflex threshold that was 2.6 dB lower.

This is small in relation to the 5 dB step size we typically use in the clinic for this procedure. However, some subjects had as much as a 12 dB decrease in reflex threshold as the probe level increased, which is a large individual effect that is clinically significant.
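For readers curious about what such an analysis might look like in practice, here is a minimal sketch of a one-way repeated-measures analysis of variance in Python. The data below are entirely hypothetical (generated only to roughly mimic a small average threshold decrease across four probe levels) and are not the data from this experiment; the variable names are invented for illustration.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)

# Hypothetical design: 40 subjects, each tested at four 226 Hz probe levels.
probe_levels = [70, 75, 80, 85]  # dB SPL
rows = []
for subject in range(40):
    baseline = rng.normal(85, 3)  # subject's threshold at the 70 dB SPL probe
    for level in probe_levels:
        # Assume thresholds drop slightly as the probe level increases, plus noise.
        threshold = baseline - 0.17 * (level - 70) + rng.normal(0, 1)
        rows.append({"subject": subject, "probe_level": level,
                     "reflex_threshold": threshold})
df = pd.DataFrame(rows)

# One-way repeated-measures ANOVA: does probe level affect reflex threshold?
result = AnovaRM(df, depvar="reflex_threshold",
                 subject="subject", within=["probe_level"]).fit()
print(result)
```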

Some experimental results may be difficult to interpret because they are on a scale, such as a percent-correct score, that doesn't readily allow a judgment of their practical significance. One way of examining effect size is to normalize the mean difference between a control and treatment group by dividing it by the standard deviation of scores. If the mean difference were equal to the standard deviation of scores, the effect size would be 1.0, a relatively large effect size, which indicates that the mean of the treatment group is at the 84th percentile of the control group. This and other ways to evaluate the effect size will be explored in a future article in this series by Melanie Schuele and Laura Justice.
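As a concrete illustration of this kind of normalization, the sketch below computes a standardized mean difference for two small sets of hypothetical percent-correct scores; the scores are invented, and dividing by the control group's standard deviation is just one common convention (a pooled standard deviation is another).

```python
import numpy as np
from scipy.stats import norm

# Hypothetical percent-correct scores (illustrative values only)
control = np.array([62, 58, 65, 60, 59, 63, 61, 57], dtype=float)
treatment = np.array([65, 61, 68, 62, 62, 66, 64, 59], dtype=float)

# Effect size: mean difference normalized by the control group's standard deviation
effect_size = (treatment.mean() - control.mean()) / control.std(ddof=1)

# Under a normal model, an effect size of 1.0 places the treatment mean
# at about the 84th percentile of the control distribution.
percentile = norm.cdf(effect_size) * 100
print(f"effect size = {effect_size:.2f}, "
      f"treatment mean near the {percentile:.0f}th percentile of controls")
```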

We are often interested in how two variables are related. For example, we might study the relationship between the overall middle-ear admittance and acoustic reflex threshold in a group of young adults. A Pearson Product Moment correlation statistic may be used to characterize the relationship between these variables.

Let's say there is a positive correlation between our two variables, such that subjects with larger middle-ear admittance tend to have higher reflex thresholds. In this case, the positive correlation helps describe the strength of the relationship between the variables, but does not suggest causation, i.e., that the middle-ear admittance variable caused the reflex threshold to vary. A detailed discussion of the interpretation of correlation will be undertaken by Kim Oller in a future article in this series.
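A minimal sketch of how such a correlation might be computed is shown below; the admittance and threshold values are hypothetical, chosen only so that the two variables are positively related.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical measurements for ten young adults (illustrative values only)
admittance = np.array([0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3])  # mmho
threshold = np.array([82, 84, 83, 86, 85, 88, 87, 90, 89, 92])             # dB HL

r, p_value = pearsonr(admittance, threshold)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")

# A positive r describes the strength of the linear association, but it does
# not show that admittance causes the reflex threshold to change.
```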

Bias

Investigators must always guard against bias when obtaining and interpreting data during experiments. Consider an experiment to evaluate two new therapies for stuttering, Treatment A and Treatment B. If the experimenter's hypothesis was that Treatment A was likely more effective than Treatment B, this bias might influence the way the experimenter interacted with participants in Treatment A or measured their behavior in comparison to participants in Treatment B.

For example, the experimenter might unwittingly communicate different expectations of the outcome to the two groups, influencing the participants' beliefs about how well the treatment they were receiving would ameliorate their problem. Thus, the participants in Treatment A may behave differently because they are anticipating a better outcome than the participants in Treatment B, who are, in turn, anticipating a lesser outcome.

The experimenter's bias might also influence the way in which different participants' behavior is measured and interpreted, such that there is a systematic bias in favor of the amount of change that occurs as the result of Treatment A relative to Treatment B. One way the investigator might minimize this type of bias is to make certain that experimenters who interact with or measure the participants' behavior are unaware of the investigator's hypothesis concerning the expected treatment effects, so that in a sense they are "blinded" to the anticipated outcome of the study. In the next article in this series, Patrick Finn will explore the intricacies of experimenter bias and blinding.
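To make the idea of blinding concrete, here is a small, purely hypothetical sketch of one way an investigator might set it up: participants are assigned to the two treatments at random, and the raters who measure behavior receive only opaque group codes rather than treatment names. The function and file names are invented for this example.

```python
import csv
import random

def blinded_assignment(participant_ids, treatments=("Treatment A", "Treatment B"), seed=None):
    """Randomly assign participants to treatments; return the investigator's key
    and a coded list for raters that hides the treatment names."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)

    key, coded = [], []
    for i, pid in enumerate(ids):
        code = f"GRP-{i % len(treatments) + 1}"       # opaque label, e.g., GRP-1
        key.append({"participant": pid, "code": code,
                    "treatment": treatments[i % len(treatments)]})
        coded.append({"participant": pid, "code": code})
    return key, coded

key, coded = blinded_assignment(range(1, 21), seed=42)

# The key stays with the investigator; raters work only from the coded list.
with open("assignment_key.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["participant", "code", "treatment"])
    writer.writeheader()
    writer.writerows(key)
```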

We hope that our Research Primer series provides you with food for thought (a smorgasbord, really) and serves as a springboard to further discussion with your colleagues. If you have comments or questions concerning this series, contact Sharon Moss, ASHA's ex officio for the Research and Scientific Affairs Committee, at smoss@asha.org.

Patrick Feeney is associate professor and chief of audiology in the Department of Otolaryngology-Head and Neck Surgery at the University of Washington. He serves on the ASHA Research and Scientific Affairs Committee and is the coordinator of ASHA Special Interest Division 6, Hearing and Hearing Disorders: Research and Diagnostics. Contact him at pfeeney@u.washington.edu.

cite as: Feeney, P. (2006, May 02). Primer on Research: An Introduction. The ASHA Leader.

References

Sackett, D.L., Straus, S.E., Richardson, W.S., Rosenberg, W., & Haynes, R. B. (2000). Evidence-based medicine: How to practice and teach EBM. Edinburgh: Churchill Livingstone.

Terkildsen, K., Osterhammel, P., & Nielsen, S.S. (1970). Impedance measurements: probe-tone intensity and middle-ear reflexes. Acta Otolaryngologica, 263, 205-207.



  
