June 7, 2011 Features

Deciphering Single-Subject Research Design and Autism Spectrum Disorders

The number of children diagnosed with autism spectrum disorders (ASDs) has risen at an astonishing rate over the past decade. Ten years ago, one in 10,000 children was diagnosed as having an ASD. Today, one in 110 children is diagnosed with an ASD (Centers for Disease Control and Prevention, 2010). As the number of ASD cases continues to increase, autism research focusing on prevention, treatment, and intervention has become a high priority for the National Institutes of Health and other research organizations. It is imperative that speech-language pathologists read and understand the most current research available on this growing population.

Several different research methodologies are used with individuals with ASD; each has specific advantages and disadvantages as well as quality indicators of scientific rigor.

The Challenges of Conducting Research with Individuals with Autism

Tamiko Azuma

Research group designs, which compare two or more comparable groups of participants, are difficult to conduct with individuals with ASD and are used less frequently than single-subject research designs (SSRDs). Group designs pose challenges for several reasons. First, group designs typically require random selection, and random selection requires a sufficiently large, homogeneous sample, which is rarely available in populations with ASD. The range of symptoms and the variation in their severity result in a heterogeneous population and, therefore, difficulty in finding comparable groups. Also, given that intensive early intervention is recommended for individuals with autism (Interagency Autism Coordinating Committee [IACC], 2011), ethical concerns arise if one group is placed in a treatment condition while a second group is left in a no-treatment condition.

Finally, researchers also must be concerned with statistical power: the probability that a study will detect a particular effect if that effect does, in fact, exist. The difficulty of finding homogeneous groups of participants with autism makes it challenging to achieve adequate power and to analyze treatment effects.
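To make the power concern concrete, here is a minimal Python sketch (not from the article; the function name and sample sizes are illustrative) that estimates the power of a two-group comparison using a standard normal approximation. It shows why the small, heterogeneous samples typical of ASD research make group designs hard to power adequately.

```python
import math

def normal_cdf(x):
    # Standard normal cumulative distribution, via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def two_group_power(effect_size, n_per_group):
    """Approximate power of a two-sample comparison (two-tailed, alpha = .05).

    effect_size: standardized mean difference (Cohen's d)
    n_per_group: number of participants in each group
    """
    z_alpha = 1.96  # two-tailed critical value for alpha = .05
    # Distance between the null and alternative sampling distributions
    delta = effect_size * math.sqrt(n_per_group / 2.0)
    return normal_cdf(delta - z_alpha)

# A medium effect (d = 0.5) with 10 children per group: power is roughly .20,
# so the study would usually miss a real treatment effect
print(round(two_group_power(0.5, 10), 2))

# Reaching the conventional .80 power requires roughly 64 children per group,
# a homogeneous sample rarely attainable in ASD research
print(round(two_group_power(0.5, 64), 2))
```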

Teresa Cardon

This high degree of heterogeneity among children with ASDs poses serious challenges for researchers (National Research Council; NRC, 2001). A fundamental assumption of any rigorous research study is that the observed sample is representative of the general population. This assumption allows the researcher to extend the results beyond the study to the larger population. Given the heterogeneous nature of autism, it may not be feasible to assume that the results observed in a given sample can be readily extended to a larger population. This lack of homogeneity poses threats to both internal and external validity (NRC, 2001). Threats to internal validity arise when children are not, or cannot be, matched across potentially influential variables (e.g., age, cognitive level, verbal ability). Thus, observed differences in behavior may be due to the treatment, but also could be due to demographic or other differences. Additionally, unless all study participants are receiving identical interventions outside of the study, carryover effects could be responsible for any observed changes in behavior (NRC, 2001).

To accommodate the inherent heterogeneity in this population, Schreibman (2000) recommends that researchers include clear and specific descriptions of their study participants. Consumers of research should look for diagnostic information that includes complete descriptions of the assessment tools and the qualifications of the examiner (Reichow, Volkmar, & Cicchetti, 2008). Extensive descriptions that include demographic and behavioral profiles allow researchers to closely match the original study protocol when they attempt to replicate or extend the results and allow SLPs the opportunity to determine the clinical relevance of the research. Additionally, clear descriptors may reveal specific cognitive or behavioral profiles that interact with different treatment effects.

Single-Subject Research Designs  

The most common type of treatment design used when studying individuals with ASD is an SSRD. These designs allow for a degree of experimental control and provide information beyond the traditional descriptive case study (Horner, Carr, Halle, McGee, Odom, & Wolery, 2005). SSRDs differ from group designs in that independent variables require systematic replication during baseline and treatment sessions rather than at single time points; the effectiveness of treatments is traditionally determined via visual analysis (i.e., a review of graphs depicting both dependent and independent variables) rather than statistical analysis; and causal and functional results are based on behavioral observations of baseline vs. treatment conditions rather than on the confirmation of theoretical hypotheses or correlational relationships (NRC, 2001; Horner et al., 2005; Kazdin, 1982).

The two most commonly used types of SSRD in autism research are the withdrawal (or reversal) design and the multiple-baseline design (NRC, 2001). Withdrawal designs involve administration and then removal of treatment to reveal how the absence of treatment affects the target behavior (Richards, Taylor, Ramasamy, & Richards, 1999). Withdrawal designs are typically shown using a schematic in which A represents the baseline phase and B represents the treatment phase (See Figure 1 [PDF]). For example, to study the effect of picture-card use on verbal expression, the A1 (no treatment) phase would establish how many verbal expressions were made prior to treatment. During the B1 (treatment) phase, picture cards would be used and verbal expressions would be assessed. In the A2 (withdrawal) phase, the picture cards are removed and verbal expression is again measured. Typically, another B2 (treatment) phase would be implemented to replicate the treatment effect. Withdrawal designs are helpful in identifying a functional or causal relationship between the target behavior and the treatment (Horner et al., 2005; Richards et al., 1999).
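The A1-B1-A2-B2 logic of the picture-card example can be sketched numerically. The session counts below are hypothetical and invented for illustration; the sketch simply shows the pattern a researcher looks for: the behavior rises in each treatment phase and falls back toward baseline when treatment is withdrawn.

```python
# Hypothetical counts of verbal expressions per session in each phase
phases = {
    "A1 (baseline)":   [2, 3, 2, 3],
    "B1 (treatment)":  [6, 8, 9, 9],
    "A2 (withdrawal)": [3, 4, 3, 3],
    "B2 (treatment)":  [8, 10, 9, 11],
}

def phase_mean(data):
    return sum(data) / len(data)

means = {name: phase_mean(scores) for name, scores in phases.items()}
for name, m in means.items():
    print(f"{name}: mean = {m:.2f}")

# A functional relation is suggested when the behavior increases in each
# treatment phase and reverses on withdrawal of the treatment
treatment_effect = (
    means["B1 (treatment)"] > means["A1 (baseline)"]
    and means["A2 (withdrawal)"] < means["B1 (treatment)"]
    and means["B2 (treatment)"] > means["A2 (withdrawal)"]
)
print("Pattern consistent with a treatment effect:", treatment_effect)
```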

One of the strongest advantages of withdrawal designs is ease: they are simple to implement and do not require extensive training. They also can help identify and confirm a causal relationship between the treatment and the behavior, particularly when multiple series of the baseline and treatment phases are introduced in the same study (Richards et al., 1999). If a functional relationship is present, a withdrawal design can reveal it quickly and powerfully.

However, withdrawal designs are not always appropriate—for instance, in situations in which a target behavior cannot be eliminated after being learned (Richards et al., 1999). If the acquired skill—such as matching objects or reading words—cannot be "unlearned" later, then a withdrawal design is not appropriate. Important ethical issues also should be considered prior to implementing a withdrawal design. If the withdrawal of treatment could lead to harmful consequences for the child (e.g., the child is engaging in serious self-injurious behaviors that disappear during treatment), it may be unethical to withdraw the treatment for the sake of replication. Remember, the research design must be suitable to answer the proposed research question.

Another common SSRD is the multiple-baseline design, in which the baseline and treatment conditions are replicated multiple times within the same study. In contrast to the withdrawal design, treatment is never removed; instead, baseline measures are collected simultaneously across participants (or behaviors, or settings) until stable trends are established. As treatment starts for one participant or behavior, baseline continues for the others, so each enters the treatment phase at a different point in time. Multiple-baseline designs are typically used to measure treatment effects across behaviors, environmental settings, or participants (Richards et al., 1999).

For example, a researcher may be interested in determining the effectiveness of a verbal elicitation treatment protocol. Specifically, the researcher may want to examine three children with ASD from the same preschool to determine if their verbal productions increase after exposure to the same verbal elicitation treatment (replication of the treatment effect across three children). Figure 2 [PDF] shows a schematic representation of a typical multiple-baseline design in which A represents the baseline condition and B represents the treatment condition. Following the example, all three children would begin phase A (baseline) with measures of current verbal output, until a stable trend is established. Baselines typically consist of a minimum of three points. Participant 1 would then begin phase B (verbal elicitation treatment) while the other two continued with phase A. Participant 2 would then enter phase B while Participant 3 continued with phase A (baseline). After Participant 3 completed baseline, all three children would be in phase B (verbal elicitation treatment). During a multiple-baseline study, treatment is implemented at different times across the participants to establish that changes in the target behavior are, in fact, related to the treatment and not due to other extraneous factors. Multiple-baseline designs also can be used to analyze different target behaviors (e.g., self-help skills, play skills, picture matching) with the same child or to determine if one child can learn a target behavior across three different settings (e.g., home, school, child care).
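The staggered phase structure in this three-child example can be laid out programmatically. This sketch is illustrative only (the function name and session numbers are invented, and real studies extend each baseline until it is stable rather than using fixed session counts); it shows how each participant enters treatment at a different point while the others remain in baseline.

```python
def multiple_baseline_schedule(participants, first_start, stagger, total_sessions):
    """Return each participant's phase (A = baseline, B = treatment) per session.

    Treatment begins at a staggered session for each successive participant;
    everyone remains in baseline until their own start point.
    """
    schedule = {}
    for i, name in enumerate(participants):
        start = first_start + i * stagger
        schedule[name] = [
            "B" if session >= start else "A"
            for session in range(1, total_sessions + 1)
        ]
    return schedule

sched = multiple_baseline_schedule(
    ["Participant 1", "Participant 2", "Participant 3"],
    first_start=4, stagger=3, total_sessions=12,
)
for name, phase_list in sched.items():
    print(name, "".join(phase_list))
# Each row shifts the A-to-B transition later, so a change that appears
# only when B begins for that child points to the treatment, not to
# extraneous factors affecting all children at once.
```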

It has been suggested that the multiple-baseline design is appropriate for clinical and educational research for several reasons (Baer, Wolf, & Risley, 1968; Gast, 2010). First, treatment does not have to be withdrawn, so there are no concerns over the ethical ramifications of removing treatment or the feasibility of behavior returning to baseline levels. Second, several individuals could potentially benefit from treatment at the same time. Third, strong functional relationships between the independent variable and the behavior should be evident after a visual inspection of the data (Horner et al., 2005).

However, there is a concern that multiple-baseline designs provide weaker causal evidence than the withdrawal design (Richards et al., 1999). As there is no removal of treatment, it can be unclear if the behavior would continue to occur without treatment. To account for this possibility, some researchers strengthen their multiple-baseline designs by including a phase C (withdrawal), in which treatment is removed to determine the effect it has on the target behavior. Further, in multiple-baseline studies, the treatment is typically applied in only one condition. There is no return to baseline and no introduction of additional treatment variables, thereby limiting replication and extended treatment effects. Lastly, a multiple-baseline design requires more time to implement than a withdrawal design, because all participants must establish a clear baseline trend in behavior prior to the implementation of treatment. In research with multiple-baseline designs, stable baseline trends should be evident for each participant. In particular, target behaviors that appear to change during baseline—before treatment has been implemented—should lead to cautious interpretations and a closer evaluation of internal validity.  

The preferred method for analyzing SSRDs is visual inspection of the data (Kromrey & Foster-Johnson, 1996). Indeed, over the past 30 years, visual analysis has accounted for 90% of the analyses of SSRDs (Busk & Marascuilo, 1992). Researchers visually analyze the data to observe two specific outcomes: changes in level (and variability) of the dependent measure, and consistent changes (trends) across data points as the independent variable is manipulated (Richards et al., 1999). In the above example, the change in level of verbal output would be measured across the sessions of verbal elicitation treatment. It is important to analyze the data visually within and between phases to examine the variability of the data. Close inspection of the data should allow the researcher to determine whether changes are due to natural variability or to the treatment. Thus, having sufficient data points within each phase is critical to establishing natural variability patterns. Researchers then examine the data to identify any apparent trends in the behavior.
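Level and trend, the two quantities a visual analyst reads off a graph, can also be computed directly. The sketch below is a simple numeric stand-in for visual analysis (the data are hypothetical and the least-squares slope is just one way to quantify a trend, not the article's method): a flat baseline followed by a treatment phase with both a higher level and an upward trend.

```python
def level(data):
    # Mean of the phase: the "level" a visual analyst eyeballs
    return sum(data) / len(data)

def trend(data):
    """Least-squares slope of the data over session number: a numeric
    stand-in for the trend line a visual analyst would draw."""
    n = len(data)
    mean_x = (n - 1) / 2
    mean_y = level(data)
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(data))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

baseline  = [2, 3, 2, 3, 2]   # stable: low level, no trend
treatment = [4, 6, 7, 9, 10]  # higher level, clear upward trend

print(f"level change: {level(treatment) - level(baseline):+.1f}")
print(f"baseline trend:  {trend(baseline):+.2f} per session")
print(f"treatment trend: {trend(treatment):+.2f} per session")
```

A stable baseline (slope near zero) is exactly what makes the treatment-phase change interpretable; a baseline that was already trending upward would undermine the claim that the treatment caused the change.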

Overall, visual analysis is a holistic approach that has a number of advantages for consumers of research articles. First, in SSRDs, the data are displayed concisely, allowing readers to inspect the results visually. Second, visual inspection does not require extensive knowledge of statistical calculations (Kromrey & Foster-Johnson, 1996). Readers can identify clinically significant changes via visual analysis even when those same changes may not be large enough to be statistically significant (Richards et al., 1999). Finally, readers can glean important information from the visual data displayed in SSRDs. Specifically, readers can analyze the participants' starting skill level, the number of treatment sessions that were implemented, and how many treatment sessions were required before changes in the target behavior were observed.

Other Design Considerations 

Additional design concerns important to consider when reviewing autism research involve the replication of research findings (NRC, 2001; Odom, Brown, Frey, Karasu, Smith-Canter, & Strain, 2003). Novel treatments are often replicated in the same labs by the same researchers: that is, the originators of an intervention method and the students they have mentored are those who replicate findings. This situation is problematic because the effectiveness of the treatment may be due to the specific set of researchers and their students. There also could be a possibility of experimenter bias. This lack of diversified replicability affects the contributions the treatment could be making to the growing evidence base (Reichow, Volkmar, & Cicchetti, 2008). For example, an autism intervention known as Reciprocal Imitation Training (RIT) was developed by Ingersoll and Schreibman in 2006. Although the intervention has shown promise, only Ingersoll and colleagues had published results of RIT research until recently (Ingersoll, 2008a; Ingersoll, 2008b; Ingersoll & Gergans, 2007; Ingersoll & Lalonde, 2010; Ingersoll, Lewis, & Kroman, 2006). New evidence supporting RIT as a beneficial treatment program for children with autism has emerged as researchers from different labs across the country have replicated the original findings (Cardon & Wilcox, 2010).

It also is imperative that consumers of research understand issues related to treatment fidelity. Wheeler and colleagues (2006) conducted a review to determine if behavioral intervention studies on children with ASD met the requirements for treatment fidelity, which the researchers defined as "the degree to which an independent variable is implemented as intended" (p. 45). In their analysis, they assessed how clearly the independent variables (i.e., the treatments) were defined and how well they could be measured. The researchers identified 60 studies that met their inclusionary criteria (e.g., behavioral interventions, participants with ASD, publication in recognized journals) and analyzed them for evidence of treatment fidelity. Of the 60 studies, only 18% operationally defined the independent variable and reported measures of treatment fidelity. Surprisingly, 68% of the studies included no information on treatment fidelity (Wheeler, Baggett, Fox, & Blevins, 2006). Those investigating research on individuals with ASD should seek out research articles that operationally define a target behavior, provide a concise description of the intervention with step-by-step instructions, and define the expected outcomes of the intervention.

Quality Indicators for Single-Subject Research Designs 

Given that the focus of intervention research with individuals with ASD is to examine strategies that enhance individual development, SSRD is an appropriate method of evaluation (Reichow, Volkmar, & Cicchetti, 2008). SLPs examining autism research should look for quality indicators that have been recommended by both special education and autism researchers (e.g., Horner, et al., 2005; Parker & Hagan-Burke, 2007; Reichow, Volkmar, & Cicchetti, 2008; see Table 1). As the incidence of autism has continued to rise, the need for effective autism intervention also has increased. The translation of research results into clinical application could provide valuable tools for individuals with ASD and their families.

Teresa A. Cardon, PhD, CCC-SLP, an assistant professor at Washington State University-Spokane, is pursuing a research career working with young children on the autism spectrum. She has worked with individuals on the autism spectrum for more than 19 years. Contact her at teresa.cardon@wsu.edu.

Tamiko Azuma, PhD, is an associate professor in the Department of Speech and Hearing Science at Arizona State University. Her research focuses on working memory and language processing in young adults, older adults, and individuals with dementia. Dr. Azuma has taught undergraduate- and graduate-level courses focusing on research methods. She can be reached at tamiko.azuma@asu.edu.

cite as: Cardon, T. A.  & Azuma, T. (2011, June 07). Deciphering Single-Subject Research Design and Autism Spectrum Disorders. The ASHA Leader.


Baer, D. M., Wolf, M. M., & Risley, T. R. (1968). Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 1, 91–97.

Brookman-Frazee, L. (2004). Using parent/clinician partnerships in parent education programs for children with autism. Journal of Positive Behavior Interventions, 6(4), 195–213.

Busk, P., & Marascuilo, L. (1992). Statistical analysis in single-case research: Issues, procedures, and recommendations, with applications to multiple behaviours. In T. Kratochwill & J. Levin (Eds.), Single case research design and analysis: New directions for psychology and education (pp. 159–185). Hillsdale, New Jersey: Lawrence Erlbaum Associates.

Cardon, T., & Wilcox, M. J. (2010). Promoting imitation in young children with autism: A comparison of reciprocal imitation training and video modeling. Journal of Autism and Developmental Disorders. doi:10.1007/s10803-010-1086-8

Centers for Disease Control and Prevention (CDC). (2010). Autism spectrum disorders: Data and statistics. Retrieved May 11, 2010, from http://www.cdc.gov/ncbddd/autism/data.html

Durand, V. M., & Rost, N. (2005). Does it matter who participates in our studies? Journal of Positive Behavior Interventions, 7(3), 186–188.

Gast, D. L. (2010). Single subject research methodology in behavioral sciences. New York, New York: Taylor & Francis.

High, R. (2008). Important factors in designing statistical power analysis studies. Retrieved Nov. 22, 2008, from cc.uoregon.edu/cnews/summer2000/statpower.html.

Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S., & Wolery, M. (2005). The use of single-subject research to identify evidence-based practice in special education. Council for Exceptional Children, 71(2), 165–179.

Ingersoll, B. (2008a). The social role of imitation in autism: Implications for the treatment of imitation deficits. Infants & Young Children, 21, 107–119.

Ingersoll, B. (2008b). Teaching imitation to children with autism: A focus on social reciprocity. Journal of Speech-Language Pathology and Applied Behavior Analysis, 2(3), 269–277.

Ingersoll, B., & Gergans, S. (2007). The effect of a parent-implemented imitation intervention on spontaneous imitation skills in young children with autism. Research in Developmental Disabilities, 28, 163–175.

Ingersoll, B., & Lalonde, K. (2010). The impact of object and gesture imitation training on language use in children with autism. Journal of Speech, Language, and Hearing Research, published online July 10, 2010, as doi:10.1044/1092-4388(2009/09-0043).

Ingersoll, B., Lewis, E., & Kroman, E. (2006). Teaching the imitation and spontaneous use of descriptive gestures in young children with autism using a naturalistic behavioral  intervention. Journal of Autism and Developmental Disorders, 37, 1446–1456.

Ingersoll, B., & Schreibman, L. (2006). Teaching reciprocal imitation skills to young children  with autism using a naturalistic behavioral approach: Effects on language, pretend play, and joint attention. Journal of Autism and Developmental Disorders, 36, 487–505.

Interagency Autism Coordinating Committee (2011). The 2011 Interagency Autism Coordinating Committee Strategic Plan for Autism Spectrum Disorder Research.  Retrieved March, 25, 2011 from http://iacc.hhs.gov/strategicplan/2011/print_version.jsp

Kanner, L. (1943). Autistic disturbances of affective contact. Nervous Child, 2, 217–250.

Kazdin, A. (2001). Behavior modification in applied settings. Toronto, Canada: Wadsworth.

Kromrey, J. D., & Foster-Johnson, L. (1996). Determining the efficacy of intervention: The use of effect sizes for data analysis in single subject research. The Journal of Experimental Education, 65, 73–93.

National Research Council (2001). Educating Children with Autism. Committee on Education Interventions for Children with Autism. Catherine Lord and James P. McGee, eds. Division of Behavioral and Social Sciences and Education. Washington, DC: National Academy Press.  

Odom, S. L., Brown, W. H., Frey, T., Karasu, N., Smith-Canter, L. L., & Strain, P. S. (2003). Evidence-based practices for young children with autism: Contributions or single-subject design research. Focus on Autism and Other Developmental Disabilities, 18(3), 166–175.

Olive, M., & Smith, B. (2005). Effect size calculations and single subject designs. Educational Psychology, 25(2-3), 313–324.

Parsonson, B., & Baer, D. M. (1986). The graphic analysis of data. In A. Poling & R. W. Fuqua (Eds.), Research methods in applied behavior analysis (pp. 157–186). New York: Plenum.

Parker, R. I., & Hagan-Burke, S. (2007). Useful effect size interpretations for single case research. Behavior Therapy, 38, 95–105.

Power Analysis (n.d.). Retrieved Nov. 18, 2008, from www.statsoft.com/textbook/stpowan.html.

Reichow, B., Volkmar, F. R., & Cicchetti, D. (2008). Development of the evaluative method for evaluating and determining evidenced-based practice in autism. Journal of Autism and Developmental Disorders, 38, 1311–1319.

Richards, S., Taylor, R., Ramasamy, R., & Richards, R. (1999). Single subject research: Applications in educational and clinical settings. Belmont, California: Wadsworth Group.

Schreibman, L. (2000). Intensive behavioral/psychoeducational treatments for autism: Research needs and future directions. Journal of Autism and Developmental Disorders, 30(5), 373–378.

Sparrow, S., Balla, D., & Cicchetti, D. (1984). Vineland Adaptive Behavior Scales. Circle Pines, MN: American Guidance Service.

Wheeler, J., Baggett, B., Fox, J., & Blevins, L. (2006). Treatment integrity: A review of intervention studies conducted with children with autism. Focus on Autism and Other Developmental Disabilities, 21(1), 45–54.

Yoder, P. & Compton, D. (2004). Identifying predictors of treatment response. Mental Retardation and Developmental Disabilities Research Reviews, 10, 162–168.
