Single Subject Randomization Design For Clinical Research

During the week of April 23 – 29, 2017, Susan Ebbels curated WeSpeechies on the topic Carrying Out Intervention Research in SLP/SLT Practice. Susan kicked off the week with a link to her excellent paper discussing the strengths and limitations of various procedures for conducting intervention research in the clinical setting. As we would expect, a parallel groups randomized control design was deemed to provide the best level of experimental control. Many ways of studying treatment-related change within individual clients, with increasing degrees of control, were also discussed. However, all of the ‘within participant’ methods described were vulnerable, to varying degrees, to confounding by threats to internal validity such as history, selection, practice, fatigue, maturation or placebo effects.

One design was missing from the list because it is only now appearing in the speech-language pathology literature: the Single Subject Randomization Design. The design (actually a group of designs in which treatment sessions are randomly allocated to treatment conditions) provides the superior internal validity of the parallel groups randomized control trial by controlling for extraneous confounds through randomization. As an added benefit, the results of a single subject randomization design can be submitted to a statistical analysis, so that clear conclusions can be drawn about the efficacy of the experimental intervention. At the same time, the design can be feasibly implemented in the clinical setting and is perfect for answering the kinds of questions that come up in daily clinical practice. For example, randomized control trials have shown that speech perception training is, on average, an effective adjunct to speech articulation therapy when applied to groups of children, but you may want to know whether it is a necessary addition to your therapy program for a specific child.
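To make the core idea concrete, here is a minimal sketch (entirely hypothetical: the condition names, session count, and function name are illustrative, not taken from any cited study) of the randomization at the heart of such a design: each treatment session is randomly allocated to one of two conditions, constrained so that both conditions occur equally often.

```python
import random

def randomize_sessions(n_sessions=20,
                       conditions=("perception + articulation", "articulation only"),
                       seed=1):
    """Randomly allocate treatment sessions to two conditions,
    constrained so each condition occurs equally often."""
    assert n_sessions % 2 == 0, "use an even number of sessions"
    schedule = list(conditions) * (n_sessions // 2)
    random.Random(seed).shuffle(schedule)
    return schedule

# Each condition appears in exactly half the sessions, in random order.
schedule = randomize_sessions()
```

Because the schedule is generated before treatment begins, extraneous influences such as maturation or fatigue are equally likely to fall on either condition, which is what licenses the statistical analysis.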

Furthermore, randomized single subject experiments are now accepted as a high level of research evidence by the Oxford Centre for Evidence Based Medicine. An evidence hierarchy has also been created for rating single subject trials, placing randomized single subject experiments at the top, as shown in the following table, taken from Romeiser Logan et al. (2008).

 

Tanya Matthews and I have written a tutorial showing exactly how to implement and interpret two versions of the Single Subject Randomization Design, a phase design and an alternation design. The accepted manuscript is available but behind a paywall at the Journal of Communication Disorders. In another post I will provide a mini-tutorial showing how the alternation design could be used to answer a clinical question about a single client.

Further Reading

Ebbels, Susan H. 2017. ‘Intervention research: Appraising study designs, interpreting findings and creating research in clinical practice’, International Journal of Speech-Language Pathology: 1-14.

Kratochwill, Thomas R., and Joel R. Levin. 2010. ‘Enhancing the scientific credibility of single-case intervention research: Randomization to the rescue’, Psychological Methods, 15: 124-44.

Romeiser Logan, L., R. Hickman, R.R. Harris, S.R. Harris, and C. Heriza. 2008. ‘Single-subject research design: recommendations for levels of evidence and quality rating’, Developmental Medicine and Child Neurology, 50: 99-103.

Rvachew, S. 1988. ‘Application of single subject randomization designs to communicative disorders research’, Human Communication Canada (now Canadian Journal of Speech-Language Pathology and Audiology), 12: 7-13. [open access]

Rvachew, S. 1994. ‘Speech perception training can facilitate sound production learning’, Journal of Speech and Hearing Research, 37: 347-57.

Rvachew, Susan, and Tanya Matthews. in press. ‘Demonstrating Treatment Efficacy using the Single Subject Randomization Design: A Tutorial and Demonstration’, Journal of Communication Disorders.

 

Single Subject Designs and Evidence Based Practice in Speech Therapy

I was really happy to see the tutorial on Single Subject Experimental designs in November’s issue of the American Journal of Speech-Language Pathology, by Byiers, Reichle, and Symons. The paper does not present anything really new, since it covers ground previously published by authors such as Kearns (1986). However, with the current focus on RCTs as the be-all and end-all of evidence based practice, it was a timely reminder that single-subject designs have a lot to offer for EBP in speech therapy. It really irritates me when I see profs tell their students that speech therapy practice does not have an evidentiary base: many of our standard practices are well grounded in good quality single subject research (not to mention some rather nice RCTs from the sixties as well, but that is another story, maybe for another post).

Byiers et al. do a nice job of outlining the primary features of a valid single-subject experiment. The internal validity of the standard designs is completely dependent upon a stable baseline, with no improving trend in the data prior to the introduction of the treatment. They indicate that “by convention, a minimum of three baseline data points are required to establish dependent measure stability.” Furthermore, it is essential not to see carry-over effects from treatment of one target to a second target prior to the introduction of treatment for the second target; in other words, performance on any given target must remain stable until treatment for that specific target is introduced. The internal validity of the experiment is voided when stable baselines for each target are not established and maintained throughout their respective baseline periods. This is true even for the multiple-probe design, a variation on the multiple-baseline design in which the dependent measure is sampled at irregular intervals tied to the introduction of successive phases of the treatment program (as opposed to the regular and repeated measurement that occurs during each and every session of a multiple-baseline design). Even with the multiple-probe design, a series of closely spaced baseline probes is required at certain intervals to demonstrate stability of the baselines just before a new treatment phase begins. Furthermore, the design is an inappropriate choice unless a “strong a priori assumption of stability can be made” (see Horner and Baer, 1978).
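One simple way to operationalize the “no improving trend” requirement (my own illustrative sketch, not a method from Byiers et al.) is to fit a least-squares slope to the baseline probes: a slope near zero is consistent with stability, while a positive slope signals an improving trend that would undermine the design.

```python
def baseline_slope(scores):
    """Least-squares slope of baseline probe scores across sessions.
    A slope near zero supports the stability assumption; a clearly
    positive slope indicates an improving trend before treatment."""
    n = len(scores)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Hypothetical baseline of three probes, all at 10% correct -> slope 0.0
print(baseline_slope([10, 10, 10]))
```

Visual inspection of the plotted baseline remains the conventional check; the slope simply gives the inspection a number.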

I am interested in the multiple probe design because it is the preferred design of the research teams that claim that the “complexity approach” to target selection in phonology interventions is effective and efficient. However, it is clear that the design is not appropriate in this context (in fact, given the research question, I would argue that all single subject designs are inappropriate in this context). The reasoning behind the complexity approach is that treating complex targets results in generalization of learning to less complex targets. This is supposed to be more efficient than treating the less complex targets first because these targets are expected to improve spontaneously without treatment (e.g., as a result of maturation) while not resulting in generalization to more complex targets. The problem of course is that improvements in less complex targets while you are treating a more complex one (especially when you get no improvement on the treatment target, see Cummings and Barlow, 2011) cannot be interpreted as a treatment effect. By the logic of a single-subject experiment, this outcome indicates that you do not have experimental control. To make matters worse, these improvements in generalization targets are often observed prior to the introduction of treatment – and indeed the a priori assumption is that these improvements in less complex targets will occur without treatment – that is the whole rationale behind avoiding them as treatment targets! And therefore, by definition, both the multiple baseline and multiple probe designs are invalid approaches to the test of the complexity hypothesis. Without a randomized control trial one can only conclude that the changes observed in less complex targets in these studies are the result of maturation or history effects.
(If you want to see what happens when you test the efficacy of the complexity approach using a randomized control trial, check out my publications: Rvachew & Nowak, 2001; Rvachew & Nowak, 2003; Rvachew, 2005; Rvachew & Bernhardt, 2010).

Some recent single subject studies have reported really nice outcomes for some children. Ballard, Robin and McCabe (2010) demonstrated an effective treatment for improving prosody in children with apraxia of speech, showing that work on pseudoword targets generalizes to real word dependent measures. Skelton (2004) showed that you can literally randomize your task sequence and get excellent results for the treatment of /s/ with carryover to the nonclinic environment (in other words, you don’t have to follow the usual isolation-syllable-word-phrase-sentence sequence; rather, you can mix it up by practicing items at random difficulty levels on every trial). Both of these studies showed uneven outcomes for different children, however. Francoise and I suggested at ASHA 2012 that the “challenge point framework” helps to explain variability in outcomes across children. The trick is to teach targets that are at the challenge point for the child – not uniformly complex but carefully selected to be neither too simple nor too complex for each individual child.

Both of these studies (Ballard et al. and the Skelton study) used a multiple baseline design. This design tends to encourage the selection of complex targets because consistent 0% correct is as stable as you can get in a baseline. If you want to pick targets that are at the “challenge point” you may be working on targets for which the child is demonstrating less stable performance. Fortunately there is a single subject design that does not require a stable baseline for internal validity – it is called a single subject randomization design. We are using two different variations on this design in our current study of different treatments for childhood apraxia of speech. I will describe our application of the design in another post.
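The statistical logic that frees the randomization design from the stable-baseline requirement can be sketched as a randomization (permutation) test; the session scores below are entirely hypothetical, and this Monte Carlo version is just one of several valid ways to analyze an alternation design. Because sessions were randomly assigned to conditions, the observed difference between conditions can be compared against the distribution of differences produced by re-shuffling the condition labels.

```python
import random

def randomization_test(scores_a, scores_b, n_perm=10_000, seed=1):
    """One-sided Monte Carlo randomization test: how often does a
    random re-labelling of the sessions produce a between-condition
    difference at least as large as the one actually observed?"""
    mean = lambda xs: sum(xs) / len(xs)
    observed = mean(scores_a) - mean(scores_b)
    pooled = list(scores_a) + list(scores_b)
    n_a = len(scores_a)
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if mean(pooled[:n_a]) - mean(pooled[n_a:]) >= observed:
            hits += 1
    return hits / n_perm  # approximate one-sided p-value

# Hypothetical percent-correct scores under each treatment condition
p = randomization_test([55, 60, 62, 58, 65], [40, 45, 43, 50, 47])
```

A small p-value indicates that a difference this large would rarely arise from the random session assignment alone, which is the basis for the clear conclusions the design permits.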