What is a control group?

I have a feeling that my blog might become less popular in the next little while because you may notice an emerging theme: more on research design, and less on speech therapy procedures specifically! But identifying evidence based procedures requires knowledge of research design, and it has come to my attention, as part of the process of publishing two randomized controlled trials (RCTs) this past year, that there are a lot of misperceptions in the SLP and education communities, among both clinicians and researchers, about what an RCT is. Therefore, I am happy to draw your attention to this terrific blog by Edzard Ernst, and in particular to an especially useful post, “How to differentiate good from bad research”. The writer points out that a proper treatment of this topic “must inevitably have the size of a book” because each of the indicators that he provides “is far too short to make real sense.” So I have taken it upon myself in this blog to expand upon one of his indicators of good research – one that I know causes some confusion, specifically:

  • Use of a placebo in the control group where possible.

Recently the reviewers (and editor) of one of my studies were convinced that my design was not an RCT because the children in both groups received an intervention. In the absence of a “no-treatment control”, they said, the study could not be an RCT! I was mystified about the source of this strange idea until I read Ernst’s blog and realized that many people, recalling their research courses from university, must be mistaking “placebo control” for “no-treatment control.” However, a placebo control condition is not at all like the absence of treatment. Consider the classic example of a placebo control in a drug trial: each patient randomized to the treatment arm visits the nurse, who hands them a white paper cup holding 2 pink pills containing active ingredient X plus some other ingredients that do not impact the patient’s disease (i.e., inactive ingredients); each patient randomized to the control arm also visits the nurse, who hands them a white paper cup holding 2 pink pills containing only the inactive ingredients. In other words, the experiment is designed so that all patients are “treated” exactly the same, except that only patients randomized to treatment receive (unknowingly) the active ingredient. Therefore, all changes in patient behavior that are due to those aspects of the treatment that are not the active treatment (visiting the nice nurse, expecting the pills to make a difference, etc.) are equalized across arms of the study. These are called the “common factors” or “nonspecific factors”.

In the case of a behavioral treatment it is equally important to equalize the common factors across all arms of the study. Therefore, in my own studies I deliberately avoid “no treatment” controls. In my very first RCT (Rvachew, 1994), for example, the treatment conditions in the two arms of the study were as follows:

  • Experimental: 10 minutes of listening to sheet vs Xsheet recordings and judging correct vs incorrect “sheet” items (active ingredient) in a computer game format followed by 20 minutes of traditional “sh” articulation therapy, provided by a person blind to the computer game target.
  • Control: 10 minutes of listening to Pete vs meat recordings and judging correct vs incorrect “Pete” items in a computer game format followed by 20 minutes of traditional “sh” articulation therapy, provided by a person blind to the computer game target.

It can be seen that the study was designed to ensure that all participants experienced exactly the same treatment except for the active ingredient, which was reserved for children who were randomly assigned to the experimental treatment arm: specifically, the experience of listening to and making perceptual judgments about a variety of correct versions of words beginning with “sh” and versions with distorted “sh” – the sound that the children misarticulated. Subsequently I have conducted all my randomized controlled studies in a similar manner. But, as I said earlier, I run across readers who vociferously assert that the studies are not RCTs because an RCT requires a “no treatment” control. In fact, a “no treatment” control is a very poor control indeed, as argued in this blog that explains why the frequently used “wait list control group” is inappropriate. For example, a recent trial on the treatment of tinnitus claimed that a wait list control had merit because “While this comparison condition does not control for all potential placebo effects (e.g., positive expectation, therapeutic contact, the desire to please therapists), the wait-list control does account for the natural passing of time and spontaneous remission.” In fact, it is impossible to control for common factors when using a wait list control, and it is unlikely that patients are actually “just waiting” when you randomize them to the “wait list control” condition; therefore Hesser et al.’s defense of the wait list control is optimistic, although their effort to establish how much change you get in this condition is worthwhile.

We had experience with a “wait list” comparison condition in a recent trial (Rvachew & Brosseau-Lapré, 2015). Most of the children were randomly assigned to one of four different treatment conditions, matched on all factors except the specific active ingredients of interest. However, we also had a nonexperimental wait list comparison group* to estimate change for children outside of the trial. We found that parents were savvy about maximizing the treatment that their children could receive in any given year. Our trial lasted six weeks, the public health system entitled them to six weeks of treatment, and their private insurance entitled them to six to twelve weeks of therapy depending on the plan. Parents would agree to enroll their child in the trial with randomization to a treatment arm if their child was waiting for the public service, OR they would agree to be assessed in the “wait list” arm if their child was currently enrolled in the public service. They would use their private insurance when all other options had been exhausted. Therefore the children in the “wait list” arm were actually being treated. Interestingly, we found that the parents expected their children to obtain better results from the public service because it was provided by a “real” SLP rather than the student SLPs who provided our experimental treatments, even though the public service was considerably less intense! (As an aside, we were not surprised to find that the reverse was true.) Similarly, as I have mentioned in previous blogs, Yoder et al. (2005) found that the children in their “no treatment” control accessed more treatment from other sources than did the children in their treatment arm. And parents randomized to the “watchful waiting” arm of the Glogowska et al. (2000) trial sometimes dropped out, because parents will do what they must to meet their child’s needs.

In closing, a randomized controlled trial is simply a study in which participants are randomly assigned to an experimental treatment and a control condition (even in a cross-over design, in which all participants experience all conditions, as in Rvachew et al., in press). The nature of the control should be determined after careful thought about the factors that you are attempting to control, which can be many – placebo, Hawthorne, fatigue, practice, history, maturation and so on. These will vary from trial to trial, obviously. Placebo control does not mean “no treatment” but rather a treatment that excludes everything except the “active ingredient” that is the subject of your trial. As an SLP, when you are reading about studies that test the efficacy of a treatment, you need to pay attention to what happens to the control group as well as the treatment group. The trick is to think, in every case: What is the active ingredient that explains the effect seen in the treatment group? What else might account for the effects seen in the treatment arm of this study? If I implement this treatment in my own practice, how likely am I to get a better result compared to the treatment that my caseload is currently receiving?

* A colleague sent me a paper (Mercer et al., 2007) in which a large number of researchers, advocating for the acceptance of a broader array of research designs in order to focus more attention on external validity and translational research, got together to discuss the merits of various designs. During the symposium it emerged that there was disagreement about the use of the terms “control” and “comparison” group. I use the terms in accordance with a minority of their attendees, as follows: control group means that the participants were randomly assigned to a group that did not experience the “active ingredient” of the experimental treatment; comparison group means that the participants were not randomly assigned to the group that did not experience the experimental intervention, a group that may or may not have received a treatment. This definition was ultimately not adopted by the attendees – I don’t know why; somehow they decided on a different definition that doesn’t make any sense to me at all. I invite you to consult p. 141 and see if you can figure it out!

References

Glogowska, M., Roulstone, S., Enderby, P., & Peters, T. (2000). Randomised controlled trial of community based speech and language therapy in preschool children. British Medical Journal, 321, 923-928.

Hesser, H., Weise, C., Rief, W., & Andersson, G. (2011). The effect of waiting: A meta-analysis of wait-list control groups in trials for tinnitus distress. Journal of Psychosomatic Research, 70(4), 378-384. doi:http://dx.doi.org/10.1016/j.jpsychores.2010.12.006

Mercer, S. L., DeVinney, B. J., Fine, L. J., Green, L. W., & Dougherty, D. (2007). Study Designs for Effectiveness and Translation Research: Identifying Trade-offs. American Journal of Preventive Medicine, 33(2), 139-154.e132. doi:http://dx.doi.org/10.1016/j.amepre.2007.04.005

Rvachew, S. (1994). Speech perception training can facilitate sound production learning. Journal of Speech and Hearing Research, 37, 347-357.

Rvachew, S., & Brosseau-Lapré, F. (2015). A randomized trial of twelve week interventions for the treatment of developmental phonological disorder in francophone children. American Journal of Speech-Language Pathology, 24, 637-658. doi:10.1044/2015_AJSLP-14-0056

Rvachew, S., Rees, K., Carolan, E., & Nadig, A. (in press). Improving emergent literacy with school-based shared reading: Paper versus ebooks. International Journal of Child-Computer Interaction. doi:http://dx.doi.org/10.1016/j.ijcci.2017.01.002

Yoder, P. J., Camarata, S., & Gardner, E. (2005). Treatment effects on speech intelligibility and length of utterance in children with specific language and intelligibility impairments. Journal of Early Intervention, 28(1), 34-49.

Using effect sizes to choose a speech therapy approach

I am quite intrigued by the warning offered by Adrian Simpson in his paper “The misdirection of public policy: comparing and combining standardised effect sizes”.

The context for the paper is the tendency of public policy makers to rely on meta-analyses to make decisions such as, for example, should we improve teachers’ feedback skills or reduce class sizes as a means of raising student performance? Simpson shows that meta-analyses (and meta-analyses of the meta-analyses!) are a poor tool for making these apples-to-oranges comparisons and cannot be relied upon as a source of information when making public policy decisions such as this. He identifies three specific issues with research design that invalidate the combining and comparing of effect sizes. I think that these are good issues to keep in mind when considering effect sizes as a clue to treatment efficacy and a source of information when choosing a speech or language therapy approach.

Recall that an effect size is a standardized mean difference, whereby the difference between means (i.e., the mean outcome of the treatment condition versus the mean outcome of the control condition) is expressed in standard deviation units. The issue is that the standard deviation units, which are supposed to reflect the variation in outcome scores between participants in the intervention trial, actually reflect many different aspects of the research design. Therefore if you compare the effect size of an intervention as obtained in one treatment trial with the effect size for another intervention as obtained in a different treatment trial, you cannot be sure that the difference is due to differences in the relative effectiveness of the two treatments. And yet, SLPs are asking themselves these kinds of questions every day: should I use a traditional articulation therapy approach or a phonological approach? Should I add nonspeech oral motor exercises to my traditional treatment protocol? Is it more efficient to focus on expressive language or receptive language goals? Should I use a parent training approach or direct therapy? And so on. Why is it unsafe to combine and compare effect sizes across studies to make these decisions?
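To make the definition concrete, here it is in notation – a minimal sketch assuming the common Cohen’s d estimator with a pooled standard deviation (variants such as Hedges’ g add a small-sample correction):

```latex
d = \frac{\bar{X}_{T} - \bar{X}_{C}}{s_{pooled}},
\qquad
s_{pooled} = \sqrt{\frac{(n_T - 1)\,s_T^2 + (n_C - 1)\,s_C^2}{n_T + n_C - 2}}
```

Every design choice that Simpson discusses enters through the denominator: anything that changes the spread of scores in a particular trial changes d, even when the raw benefit of the treatment is identical.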

The first issue that Simpson raises is that of comparison groups. Many, although not all, treatment trials compare an experimental intervention to either a ‘no treatment’ control group or a ‘usual care’ condition. The characteristics of the ‘no treatment’ and ‘usual care’ controls are inevitably poorly described, if they are described at all. And yet meta-analyses will combine effect sizes across many studies despite having a very poor sense of what the control condition was in the studies that are included in the final estimate of treatment effect. Control group and intervention descriptions can be so paltry that in some cases the experimental treatment of one study may be equivalent to the control condition of another study. The Law et al. (2003) review combined effect sizes for a number of RCTs evaluating phonological interventions. One intervention compared a treatment that was provided in 22 twice-weekly half-hour sessions over a four-month period to a wait list control (Almost & Rosenbaum, 1998). Another intervention involved monthly 45-minute sessions provided over 8 months, in comparison to a “watchful waiting” control in which many parents “dropped out” of the control condition (Glogowska et al., 2000). Inadequate information was provided about how much intervention the control group children accessed while they waited; almost anything is possible relative to the experimental condition in the Glogowska trial. For example, Yoder et al. (2005) observed that their control group actually accessed more treatment than the kids in their experimental treatment group, which may explain why they did not obtain a main effect of their intervention (or not, who knows?). The point is that it is hard to know whether a small effect size in comparison to a robust control is more or less impressive than a large effect size in comparison to no treatment at all. Certainly, the comparison is not fair.

The second issue raised concerns range restriction in the population of interest. I realize now that I failed to take this into account when I repeated (in Rvachew & Brosseau-Lapré, 2018) the conclusion that dialogic reading interventions are more effective for low-income children than for children with developmental language impairments (Mol et al., 2008). Effect sizes are inflated when the intervention is provided to only a restricted part of the population and the selection variables are associated with the study outcomes. However, the inflation is greatest for the children near the middle of the distribution and least for children at the tails of the distribution. This fact may explain why effect sizes for vocabulary size after dialogic reading intervention are highest for middle-class children (.58, Whitehurst et al., 1988), in the middle for lower-class but normally developing children (.33, Lonigan & Whitehurst, 1998), and lowest for children with language impairments (.13, Crain-Thoreson & Dale, 1999). There are other potential explanatory factors in these studies, but this issue of restricted range is of obvious importance in treatment trials directed at children with speech and language impairments. The low effect size for dialogic reading obtained by Crain-Thoreson and Dale should not, by itself, discourage use of dialogic reading with this population.
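To see how this works, here is a small simulation sketch (the numbers are hypothetical, not data from any of the cited studies): the same raw treatment gain produces a much larger d when the sample is recruited from a narrow slice of the population, because the within-sample standard deviation shrinks.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population of pretest scores (mean 100, SD 15).
population = rng.normal(100, 15, 100_000)

# Assume the treatment adds the same raw gain in both scenarios.
raw_gain = 5.0

def effect_size(sample, gain):
    # Standardized mean difference for a fixed raw gain, using the
    # recruited sample's own standard deviation as the standardizer.
    return gain / sample.std(ddof=1)

# Scenario 1: the trial recruits from the full population.
d_full = effect_size(population, raw_gain)

# Scenario 2: the trial recruits only children near the middle of
# the distribution (pretest scores between 95 and 105).
restricted = population[(population > 95) & (population < 105)]
d_restricted = effect_size(restricted, raw_gain)

print(f"d with full range:       {d_full:.2f}")        # about 0.33
print(f"d with restricted range: {d_restricted:.2f}")  # about 1.8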

Finally, measurement validity plays a huge role, with longer, more valid tests yielding larger effect sizes than shorter, less valid tests. This might be important when comparing the relative effectiveness of therapy for different types of goals. Law et al. (2003) concluded, for example, that phonology therapy appeared to be more effective than therapy for syntax goals. For some reason the outcome measures in these two groups of studies tend to be very different. Phonology outcomes are typically assessed with picture naming tasks that include 25 to 100 items, with the outcome often expressed as percent consonants correct; therefore, at the consonant level there are many items contributing to the test score. Sometimes the phonology outcome measure is created specifically to probe the child’s progress on the specific target of the phonology intervention. In both cases the outcome measure is likely to be a sensitive measure of the outcomes of the intervention. Surprisingly, in Law et al., the outcomes of the studies of syntax interventions were quite often omnibus measures of language functioning, such as the Preschool Language Scale, or worse, the Reynell Developmental Language Scale, neither test containing many items targeted specifically at the domain of the experimental intervention. When comparing effect sizes across studies, it is crucial to be sure that the outcome measures have equal reliability and validity as measures of the outcomes of interest.
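One part of this mechanism can be sketched with a standard result from classical test theory (my gloss, not a formula from Simpson’s paper): measurement error inflates the observed standard deviation, so the observed standardized mean difference shrinks in proportion to the square root of the test’s reliability:

```latex
d_{\mathrm{observed}} \approx d_{\mathrm{true}} \times \sqrt{r_{xx}}
```

So a focused probe with reliability around .90 preserves about 95% of the true effect, whereas an omnibus scale with reliability around .50 preserves only about 71% – enough by itself to make a syntax intervention look weaker than a phonology intervention of identical true benefit.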

My conclusion is that it is important not to make a fetish of meta-analyses and effect sizes. These kinds of studies provide just one kind of information that should be taken into account when making treatment decisions. Their value is only as good as the underlying research. Overall, effect sizes are most trustworthy when they come from the same study or a series of studies involving the exact same independent and dependent variables and the same study population. Given that this is a rare occurrence in speech and language research, there is no real substitute for a deep knowledge of an entire literature on any given subject. Narrative reviews from “experts” (a much maligned concept!) still have a role to play.

References

Almost, D., & Rosenbaum, P. (1998). Effectiveness of speech intervention for phonological disorders: a randomized controlled trial. Developmental Medicine and Child Neurology, 40, 319-325.

Crain-Thoreson, C., & Dale, P. S. (1999). Enhancing linguistic performance: Parents and teachers as book reading partners for children with language delays. Topics in Early Childhood Special Education, 19, 28-39.

Glogowska, M., Roulstone, S., Enderby, P., & Peters, T. (2000). Randomised controlled trial of community based speech and language therapy in preschool children. British Medical Journal, 321, 923-928.

Law, J., Garrett, Z., & Nye, C. (2003). Speech and language therapy interventions for children with primary speech and language delay or disorder (Cochrane Review). Cochrane Database of Systematic Reviews, Issue 3. Art. No.: CD004110. doi:10.1002/14651858.CD004110.

Lonigan, C. J., & Whitehurst, G. J. (1998). Relative efficacy of parent and teacher involvement in a shared-reading intervention for preschool children from low-income backgrounds. Early Childhood Research Quarterly, 13(2), 263-290.

Mol, S. E., Bus, A. G., de Jong, M. T., & Smeets, D. J. H. (2008). Added value of dialogic parent-child book readings: A meta-analysis. Early Education and Development, 19, 7-26.

Rvachew, S., & Brosseau-Lapré, F. (2018). Developmental Phonological Disorders: Foundations of Clinical Practice (Second Edition). San Diego, CA: Plural Publishing.

Simpson, A. (2017). The misdirection of public policy: comparing and combining standardised effect sizes. Journal of Education Policy, 1-17. doi:10.1080/02680939.2017.1280183

Whitehurst, G. J., Falco, F., Lonigan, C. J., Fischel, J. E., DeBaryshe, B. D., Valdez-Menchaca, M. C., & Caulfield, M. (1988). Accelerating language development through picture book reading. Developmental Psychology, 24, 552-558.

Yoder, P. J., Camarata, S., & Gardner, E. (2005). Treatment effects on speech intelligibility and length of utterance in children with specific language and intelligibility impairments. Journal of Early Intervention, 28(1), 34-49.

How to choose phonology goals?

I found out via Twitter (don’t you love Twitter!) that “teach complex sounds first” is making the rounds again (still!) and I am prompted to respond. Besides the fact that I have disproven the theoretical underpinnings of this idea, it bothers me that so many of the assumptions wrapped up in the assertion are unhelpful to a successful intervention. Specifically: we should not be treating “sounds”; there is no agreed-upon and universal ordering of targets from simple to complex; and there is no reason to teach the potential targets one at a time in some particular order anyway. So what should we do? I will describe a useful procedure here with an example.

There is a curious rumour, which I must lay to rest, that I promote a “traditional developmental” approach to target selection. In fact, I have made it clear that I promote a dynamic systems approach. An important concept is the notion of nonlinearity: if you induce gradual linear changes in several potential targets at once, a complex interaction will result, causing a nonlinear change across the system known as a phase shift. How do you choose the targets to work on at once? Françoise and I show how to use a “quick multilinear analysis” to identify potential targets at all levels of the phonological hierarchy, in other words phrases, trochaic or iambic feet, syllables, onsets, rimes or codas, clusters, features or individual phonemes. Many case studies and demonstrations are laid out in our book, which will shortly appear in a beautiful second edition. Then we show how to select three targets for simultaneous treatment using Grunwell’s scheme, designed to facilitate progressive change in the child’s phonological system. I will demonstrate both parts of this process here, using a very brief sample from a case study that is described in our book. The child’s speech is delayed for her age of two years, as can be established by comparing her word shape and phonetic repertoires to expectations established by Carol Stoel-Gammon (1987).

[Image: case-study-6-3-sample-for-blog – speech sample for Case Study 6-3]

Potential treatment targets can be identified by considering strengths and weaknesses at the prosodic and segmental tiers of the phonological hierarchy (full instructions for this quick multilinear analysis are contained in our book). The table below describes units that are present and absent. Note that since her language system is early developing, her phonology is probably word-based rather than phoneme-based; therefore ‘distinction’ refers to the presence of a phonetic distinction rather than a phonemic contrast.

[Image: case-study-6-3-quick-multilinear-analysis – quick multilinear analysis for Case Study 6-3]

Now that we have a sense of potential targets from across the whole system, how do we select targets using Grunwell’s scheme? We want to ensure that we address both word shape and segmental goals. We also want to choose one goal to stabilize a variable structure in the system, another to extend something that is established to a new context, and a third to expand the system to include something new. Here are my choices (others are possible):

[Image: case-study-6-3-grunwell-goals – goals selected with Grunwell’s stabilize-extend-expand scheme]

There is a good chance that fricatives and codas will emerge spontaneously with this plan because we will have laid down the foundation for these structures. If they don’t, it should not be hard to achieve them during the next therapy block. The idea that you can only induce large change in the system by teaching the most complex targets first is clearly not true, as I have explained previously – in fact, complex sounds emerge more easily when the foundation is in place. Furthermore, Schwartz and Leonard (1982) recommended, on the basis of their study of selection effects in early phonological development, that it is best to teach IN words to children with small vocabulary sizes – in other words, to expand the vocabulary gradually by using word shapes and phonemes that are in the inventory, but combined in new ways.

It would be possible to use the stabilize-extend-expand scheme and choose different, more complex goals. For example, we could consider the nonreduplicated CVCV structure (cubby, bunny, bootie) to be the stabilize goal. Then we could introduce word-final labial stops as the extend goal, generalizing these phones from the onset, where they are well established, to a new word position (up, tub, nap). Finally, we could introduce a word-initial fricative as the expand goal (see, sock, soup). This plan with more complex targets might work, but you are risking slower progress, given the empirical findings reported in Rvachew and Nowak (2001) and in Schwartz and Leonard (1982). Furthermore, you would be failing to recognize a major constraint on the structure of her syllables (the limitation to only 2 segments, VV or CV, with CVV and CVC currently proscribed). If you focus only on introducing “complex sounds” without attending to this major issue at the prosodic levels of her phonological system, you will be in for a rough ride.

I attach here another example, this one a demonstration from the second edition of our book, chapter-8-demonstration-8-2, to appear in December 2016. Françoise and I have made a great effort to show students how to implement an evidence based approach to therapy. I invite readers to take a peek!

Reading List

Rvachew, S., & Brosseau-Lapré, F. (2018). Developmental Phonological Disorders: Foundations of Clinical Practice (Second Edition). San Diego, CA: Plural Publishing. (Ready for order in December 2016)

Grunwell, P. (1992). Processes of phonological change in developmental speech disorders. Clinical Linguistics & Phonetics, 6, 101-122.

Stoel-Gammon, C. (1987). Phonological skills of 2-year-olds. Language, Speech & Hearing Services in Schools, 18, 323-329.

Rvachew, S., & Bernhardt, B. (2010). Clinical implications of the dynamic systems approach to phonological development. American Journal of Speech-Language Pathology, 19, 34-50.

Rvachew, S. & Nowak, M. (2001). The effect of target selection strategy on sound production learning. Journal of Speech, Language, and Hearing Research, 44, 610-623.

Schwartz, R., & Leonard, L. (1982). Do children pick and choose? An examination of selection and avoidance in early lexical acquisition. Journal of Child Language, 9, 319-336.


CAMs & Speech Therapy

In this final post on the potential conflict between Evidence Based Practice (EBP) and Patient Centred Care (PCC) I consider those situations in which your client or the client’s family persists in a course of action that you may feel is not evidence based. This is a very common occurrence although you may not be aware of it. Increasing numbers of surveys reveal that the families of children with disabilities use Complementary and Alternative Medicines/Therapies (CAMs), usually without telling their doctor and other health care providers within the “standard” health care environment.

Despite a growing number of studies it is difficult to get an exact estimate of the prevalence of CAM use among such families (see reading list below). Some estimates are low because families are reluctant to admit to using CAMs. Other estimates are ridiculously high because CAM users are responding to insurance company surveys in order to promote funding for these services and products. However, the best estimates are perhaps as follows: about 12% of children in the general population are exposed to CAMs; the proportion roughly doubles for children with developmental disabilities in general and doubles again for children with autism. The most commonly used CAMs are dietary supplements or special diets, followed by “mind and body practices” (sensory integration therapy, yoga, etc.); the use of dangerous practices such as chelation therapy is mercifully much less frequent. Predictors of CAM use are high levels of parental education and stress. The child’s symptoms are not reliably associated with CAM use. The hypothesized reasons for these correlations are that educated parents have the means to find out about the CAMs and the financial means to access them. Having had some personal experience with this, I think that educated parents are very used to feeling in control of their lives, and nothing shatters that sense of control as much as finding that your child has a developmental disability. I find it very interesting that the studies listed below counted CAM use after specifically excluding prayer! I may be wrong, but I expect that many well-educated parents, even those who pray, would look for a more active solution than putting their family exclusively in the hands of God. Educating yourself through internet searches and buying a miracle cure feels like taking back control of your life (although months later, when you realize you have thousands of dollars of worthless orange gunk in your basement, you are feeling out of control again AND stupid, but that is another story). Anyway, this is why I think (an untested hypothesis, I admit) that patient centred care is actually the key to preventing parents from buying into harmful or useless therapies.

When the parent asks (or demands, as used to happen when I had my private practice) that you use a therapy that is not evidence based, how do you respond in a way that balances evidence based practice with patient centred care?

The most important strategy is to maintain an open and respectful dialogue with the family at all times so that conversation about the use of CAMs can occur. Parents often do not reveal the use of these alternative therapies and sometimes there are dangerous interactions among the many therapies that the child is receiving. It is critical that the parent feels comfortable sharing with you and this will not occur if you are critical or dismissive of the parents’ goals and choices. A PCC approach to your own goal setting and intervention choices will facilitate that dialogue. It is actually a good thing if the parent asks you to participate in a change in treatment approach.

Find out what the parent’s motivations are. Possibly the parent’s concerns are not in your domain. For example, dad might ask you to begin sessions with relaxation and breathing activities. You launch into a long lecture about how these exercises will not improve speech accuracy. It turns out that the exercises are meant to calm anxiety, a new issue that has arisen after a change in medication and some stresses at school. As an SLP, you are not actually in a position to be sure about the efficacy of the activity without some further checking, and going along with the parent is not going to hurt in any case.

Consider whether your own intervention plan is still working and whether your own goals are still the most pertinent for the child. Sometimes we get so wrapped up in the implementation of a particular plan that we miss the fact that new challenges in the child’s life obligate a course correction. Mum feels like her child needs something else and looks around for an alternative. After some discussion you may find that switching your goal from morphosyntax to narrative skills might work just as well as introducing acupuncture!

Talk with the parent about where the idea to use the CAM came from and how the rest of the family is adapting to the change. It is possible that mum knows the diet is unlikely to work but dad and dad’s entire family have taken it on as a family project to help the child. In some ways the diet is secondary to the family’s sense of solidarity. On the other hand, mum may be isolating herself and the child from the rest of the family by committing to an intervention that everyone else thinks is bonkers! This will be difficult, but efforts to engage the family with counseling might be in order.

Explore ways to help the parent establish the efficacy of the CAM. With the family’s consent you might be able to find information about the alternative approach from sources that are more credible than Google. You might be able to help the parent set up a monitoring program to document changes in behavior or sleep habits or whatever it is that the parent is trying to modify. You may even be able to implement a single subject randomized experiment to document the efficacy of the therapy for the child, as sketched below. Dad may enjoy helping to plot the data in a spreadsheet.
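For the quantitatively inclined, such a single subject randomized experiment can be as simple as randomly ordering “CAM” and “no CAM” days and then running a randomization test on the daily ratings. Here is a minimal sketch in Python; the ratings, the schedule, and the behaviour being rated are all made up for illustration:

```python
import random

# Hypothetical daily ratings (e.g., parent-scored behaviour, 1-10)
# collected under a randomly ordered schedule of CAM vs no-CAM days.
cam_days    = [6, 7, 5, 7, 6, 8]
no_cam_days = [5, 6, 6, 5, 7, 6]

observed = (sum(cam_days) / len(cam_days)
            - sum(no_cam_days) / len(no_cam_days))

# Randomization test: reshuffle the day labels many times and count
# how often a difference at least this large arises by chance alone.
all_days = cam_days + no_cam_days
n_cam = len(cam_days)
random.seed(0)
count, n_perms = 0, 10_000
for _ in range(n_perms):
    random.shuffle(all_days)
    diff = (sum(all_days[:n_cam]) / n_cam
            - sum(all_days[n_cam:]) / (len(all_days) - n_cam))
    if diff >= observed:
        count += 1

print(f"Observed difference:       {observed:.2f}")
print(f"Approximate one-sided p:   {count / n_perms:.3f}")
```

If the p-value is large, the family has learned, with their own child’s data, that the CAM is probably not doing what they hoped.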

Finally and also crucially, model evidence based thinking in all your interactions with the family. When you are suggesting new goals or approaches to intervention explain your decisions. Involve the family in those choices, describing the potential benefits and costs of the various options by referencing the scientific literature. Let the parent know that you are making evidence based hypotheses all the time and watching their child carefully to confirm whether your hypotheses were correct. Involve families in this process so that they become used to thinking in terms of educated guesses rather than phony certainties.

Reading list

Bowen, C. & Snow, P. C. (forthcoming, about January 2017). Making Sense of Interventions for Children’s Developmental Difficulties. Guildford: J&R Press. ISBN 978-1-907826-32-0 

Levy, S. E., & Hyman, S. L. (2015). Complementary and Alternative Medicine Treatments for Children with Autism Spectrum Disorders. Child and Adolescent Psychiatric Clinics of North America, 24(1), 117-143.

Owen-Smith, A. A., Bent, S., Lynch, F. L., Coleman, K. J., Yau, V. M., Pearson, K. A., . . . Croen, L. A. (2015). Prevalence and predictors of complementary and alternative medicine use in a large insured sample of children with Autism Spectrum Disorders. Research in Autism Spectrum Disorders, 17, 40-51.

Salomone, E., Charman, T., McConachie, H., Warreyn, P., Working Group 4, & COST Action “Enhancing the Scientific Study of Early Autism”. (2015). Prevalence and correlates of use of complementary and alternative medicine in children with autism spectrum disorder in Europe. European Journal of Pediatrics, 174, 1277-1285.

Valicenti-McDermott, M., Burrows, B., Bernstein, L., Hottinger, K., Lawson, K., Seijo, R., . . . Shinnar, S. (2014). Use of Complementary and Alternative Medicine in Children With Autism and Other Developmental Disabilities: Associations With Ethnicity, Child Comorbid Symptoms, and Parental Stress. Journal of Child Neurology, 29(3), 360-367.


Do our patients prove that speech therapy works?

The third post in my series on Evidence Based Practice versus Patient Centred Care addresses the notion that the best source of evidence for patient centred care comes from the patient. I recall that when I was a speech-language pathology student in the 1970s, my professors were fond of telling us that we needed to treat each patient as a “natural experiment”. I was reminded of this recently when a controversy blew up in Canada about a study on Quebec’s universal daycare subsidy: the author of the study described the introduction of the subsidy as a “natural experiment”, and then this same economist went on to show himself completely confused about the nature of experiments! So, if you will forgive me, I am going to take a little detour through this study about daycare before coming back to the topic of speech therapy, with the goal of demonstrating why your own clients are not always the best source of evidence about whether your interventions are working, as counter-intuitive as this may seem.

Quebec introduced a universal daycare program in 1997, and a group of economists have published a few evaluations using data from the National Longitudinal Survey of Children and Youth (NLSCY), one looking at anxiety in younger kids and the more recent one describing crime rates when the kids were older. The studies are rather bizarre in that children who accessed daycare (or not) do not provide data for these studies. Rather, province-wide estimates of variables such as likelihood of using daycare and childhood anxiety are obtained from the NLSCY, which is a survey of 2000 children from across Canada, conducted every two years but followed longitudinally; the researchers then estimated province-wide youth criminal activity from a completely different survey rather than using the self-report measures from the NLSCY. Differences in these estimates (see post-script) from pre-daycare cohorts to post-daycare cohorts are compared for Quebec versus the ROC (rest of Canada, which does not have any form of universal childcare program). One author described the outcome this way: “looking at kids in their teens, we find indicators of health and life satisfaction got worse, along with teens being in more trouble with the law.” The statistical analysis and design are so convoluted I was actually hoodwinked into thinking youth crime was going up in Quebec, when in fact youth crime was declining, just not as fast as in the ROC. Youth crime legislation and practices vary so dramatically across provinces, and particularly between Quebec and the ROC, that it is difficult indeed to compare rates of youth crime using the variable cited in the NBER paper (rates of accused or convicted youths; for discussion see Sprott). Then they attribute this so-called rise but actual decline in crime to “the effects of a sizeable negative shock to non-cognitive skills due to the introduction of universal child care in Quebec”.

Notwithstanding this nonsense summary of the results of these really weird studies, the most inaccurate thing that Milligan (one of the authors) said is that this study was a “natural experiment” which is “akin to a full randomized experiment such as Perry Preschool, but on a larger scale”. But the thing is, a “natural experiment” is not an experiment at all, because when the experiment is natural, you cannot determine the cause of the events that you are observing (although when you have enough high quality pairs of data points you can sometimes make decent inferences, NOT the case in this particular study). The economists know how to observe and describe naturally occurring events. They can estimate an increase in daycare use and changing rates of child anxiety and youth crime convictions in Quebec versus the ROC and compare changing rates of things between these jurisdictions. What they cannot do is determine why daycare use changed, or reported anxiety changed, or convictions for youth crime changed. To answer the question “why”, you need an experiment. What’s more, experiments can only answer part of the “why” question.

So let’s return to the topic of speech therapy. We conduct small-scale randomized controlled trials in my lab precisely because we want to answer the “why” question. We describe changes in children’s behavior over time, but we also want to know whether one or more of our interventions were responsible for any part of that change. In our most recently published RCT we found that even children who did not receive treatment for phonological awareness improved in this skill, but children who received two of our experimental interventions improved significantly more. Control group children did not change at all in articulation accuracy, whereas experimental group children did improve significantly. In scatterplots posted on my blog, we also showed that there are individual differences among children in the amount of change that occurs within the control group that did not experience the experimental treatments and within the experimental groups. Therefore, we know that there are multiple influences on child improvement in phonological awareness and articulation accuracy, but our experimental treatments account for the greater improvement in the experimental groups relative to the control group. We can be sure of this because of the random assignment of children to treatments, which controls for history and maturation effects and other potential threats to the internal validity of our study. How do we apply this information as speech-language pathologists when we are treating children one at a time?

When a parent brings a child for speech therapy it is like a “natural experiment”. The parent and maybe the child are concerned about the child’s speech intelligibility and social functioning. The parent and the child are motivated to change. Coming to speech therapy is only one of the changes that they make and given long waits for the service it is probably the last in a series of changes that the family makes to help the child. Mum might change her work schedule, move the child to a new daycare, enlist the help of the grandparent, enroll the child in drama classes, read articles on the internet, join a support group, begin asking her child to repeat incorrect words, check out alliteration books from the library and so on. Most importantly, the child gets older. Then he starts speech therapy and you put your shiny new kit for nonspeech oral motor exercises to use. Noticing that the child’s rate of progress picks up remarkably relative to the six month period preceding the diagnostic assessment, you believe that this new (for you) treatment approach “works”.

What are the chances? It helps to keep in mind that a “natural experiment” is not an experiment at all. You are in the same position as the economists who observed historical change in Quebec and then tried to make causal inferences. One thing they did was return to the randomized controlled trial literature, ironically citing the Perry Preschool Project, which showed that a high quality preschool program reduced criminality in high risk participants. On the other hand, most RCTs find no link between daycare attendance and criminal behavior at all. So their chain of causal inferences seems particularly unwise. In the clinical case you know that the child is changing, maybe even faster than a lot of your other clients. You don’t know which variable is responsible for the change. But you can guess by looking at the literature. Are there randomized controlled trials indicating that your treatment procedures cause greater change relative to a no-treatment or usual care control group? If so, you have reason for optimism. If not, as in the case of nonspeech oral motor exercises, you are being tricked by maturation effects and history effects. If you have been tricked in this way you shouldn’t feel bad, because I know some researchers who have mistaken history and maturation effects for a treatment effect. We should all try to avoid this error, however, if we are to improve outcomes for people with communication difficulties.

*******************************************

PS If you are interested in the difference-in-difference research method, here is a beautiful YouTube video about this design, used to assess handing out bicycles to improve school attendance by girls in India. In this case the design includes three differences (a difference-in-difference-in-differences design) and the implementation is higher quality all round compared to the daycare study that I described. Nonetheless, even here, a randomized controlled trial would be more convincing. A toy illustration of the basic arithmetic follows.
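For readers unfamiliar with the design, the basic difference-in-differences estimate is just a double subtraction: the pre-to-post change in the affected jurisdiction minus the pre-to-post change in the comparison jurisdiction. A minimal sketch with invented numbers (not the actual Quebec or ROC figures) shows how two declining rates can still yield a positive estimate:

```python
# Hypothetical outcome rates (e.g., per 1,000 youth), invented for
# illustration; these are not the actual Quebec or ROC figures.
quebec_pre, quebec_post = 20.0, 18.0   # declined by 2
roc_pre, roc_post       = 20.0, 15.0   # declined by 5

# Difference-in-differences: change in Quebec minus change in ROC.
did = (quebec_post - quebec_pre) - (roc_post - roc_pre)
print(f"DiD estimate: {did:+.1f}")     # +3.0: Quebec declined less

# Note the trap discussed above: both rates went DOWN, yet the DiD
# estimate is positive, which can be (mis)read as the policy
# "raising" the outcome.
```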

Full Engagement with Evidence and Patients in SLP Practice

This is the second in my promised series on the topic of Evidence Based Practice versus Patient Centred Care. I will respond to the implication that EBP constrains the choices of the patient and the SLP, thus conflicting with PCC and minimizing the role of clinical expertise. I will argue that the evidence is less constraining than generally believed. The SLP must understand the general principles revealed by the research and understand the patient’s needs well enough to apply the evidence. As Sue Roulstone put it, “Clinical expertise supports the skillful application of that research to the practice situation”.

It is necessary to know the research deeply in order to understand the clinical implications beyond the simple “what works” messages that might be found in the paper abstracts. I will provide two examples. Recently Françoise Brosseau-Lapré and I published a randomized controlled trial (RCT) in which we reported that 2 of 4 interventions worked well (one focusing on intense speech practice and the other focusing on listening and responding to speech inputs). Both interventions involved 6 weeks of individualized speech therapy followed by a 6-week parent-education program. We have been talking about this trial in the local community for some time now, and unfortunately the take-away message seems to have been that parent training works. SLPs are using the trial as a justification for the implementation of parent training initiatives without an accompanying individualized therapy component. And yet, a careful reading of our work reveals that the trial is not, ultimately, about parent training at all. Our research was about alternative means of providing intense focused inputs to children with phonological disorders. In one way the results of the trial increased options for SLPs and families by showing that two approaches can be equally effective, as shown in a previous blog post; at the same time the trial should constrain SLP choices somewhat by focusing attention on intensity and coherence in the selection of treatment procedures.

Another example is provided by the parent-administered focused stimulation program for treatment of morphosyntax delays described by Marc Fey and Pat Cleave many years ago. They reported that a parent-implemented intervention could be as effective as an SLP-implemented intervention while being very efficient, requiring about half as much SLP time per child. I remember that shortly thereafter many SLP programs in Canada cut their ration from 16 to 8 weeks/hours per child and strongly promoted parent-mediated interventions as the primary mode of care. The thing is that the parent program described in Fey’s trial required 21 hours per child to implement! Furthermore, a follow-up study revealed that parents were not able to implement it as effectively when their children’s goals became more complex.

There are many studies that involve parent-implemented interventions, and overall what they tell us is that (1) intensity matters – an effective speech and language intervention gets as much “data” to the child as possible; (2) parent involvement in the therapy program is just one way to achieve an increase in intensity; (3) parents can be effective if they are very well trained and supported and the home program is focused on the achievement of specific goals; (4) training and support take time and effort on the part of the SLP; and (5) not all parents are equipped to implement speech therapy for all goals. There is a lot of nuance here, and SLPs should be empowered to apply this research evidence to meet their clients’ needs.

I know that SLPs prefer to make decisions in the best interests of their patients without being constrained by evidence based care guidelines. But the flipside of understanding the evidence well is understanding your patient’s needs equally well. When I was a young SLP, I made many mistakes in this regard until I learned some things the hard way. I recall that when my daughter was small and needed a lot of therapies, the various providers fervently believed that it was best if I provided all those practice exercises at home in the natural environment. Furthermore, somebody, without consulting me, decided that a speech-language pathologist was not a necessary member of my daughter’s rehab team because I could fulfil that role myself! I ended up in the nurse coordinator’s office crying, ‘I am her mother, not her therapist’. Mercifully, a whole new plan was put in place. The thing is that, for PCC to work, you have to really understand what the evidence says and then you have to understand the needs of your patients.

An excellent qualitative study that shows how breakdowns in PCC lead to poor outcomes is found in Annals of Family Medicine.  The authors identify four archetypes of engagement in shared decision making (SDM): full engagement (SDM present, subjective experience positive); simulated engagement (SDM present, subjective experience negative); assumed engagement (SDM absent, subjective experience positive); and nonengagement (SDM absent, subjective experience negative). I strongly recommend reading the paper and the vignettes made available in the online supplemental material. They make for fascinating reading. Full engagement is characterized by shared decision making and mutual trust. The other situations often involve one or both parties making assumptions about the other person’s feelings or motives, leading to a lack of disclosure of important information.

The research evidence might tell you that an intervention can work. Patient centred care is necessary to make sure that it will work.

Evidence Based Practice versus Patient Centred Care

@WeSpeechies is again proving to be a source of fascinating conversation. During the week of October 25 – 31 (David Kinnane, Consumer protections and speech pathology services: Are we doing the right things at the right times?), an excellent paper by Sue Roulstone was posted, “Evidence, expertise, and patient preference in speech-language pathology”, in the context of a discussion about whether evidence based practice (EBP) is inconsistent with patient centred care (PCC). There are a number of loosely connected propositions that might lead to this conclusion, and I am going to list them here and then discuss them in separate blogposts. Ultimately I will conclude that patient centred care demands that we practice in an evidence based manner.

The arguments in favour of the idea that EBP and PCC are in conflict come from both directions: either there is a worry that the patient’s preferences will be in conflict with the evidence, or there is concern that applying the evidence means ignoring the patient, not to mention clinical expertise.

The first objection is that PCC means selecting treatment approaches and practices in accordance with the patient’s preferences and values. I will argue in my first blogpost that there are several different models of PCC, but none of them are the same as “consumer driven decision making”, in other words, simply doing what the patient asks. The preferred model, “shared decision making”, is fully consistent with EBP.

A second objection is that EBP implies that there is only one treatment option for every case; therefore there is no room for taking the patient’s preferences and values into account. I will argue that the evidence is nearly always about probabilities and general principles. Therefore it is the role of the SLP to work with the patient to determine which evidence is most applicable and then jointly choose among the best alternative courses of action.

A third perspective is that the most patient centred form of care is to apply a treatment to each individual patient and then watch to see if it “worked”, because, after all, RCTs only apply to groups of other patients, not your current specific patient. Therefore, clinical expertise should be added to the evidence hierarchies as a form of evidence for treatment efficacy. I will argue that you can never determine treatment efficacy by simply observing change in a single patient.

Finally, the arguments made in all of these blogposts will seem a bit abstract. What do you do when the patient persists in a course of action that appears to be in conflict with all evidence? I will recount my experience with this situation and suggest a course of action. As always I invite your comments.

Scatterplots and Speech Therapy

I have been looking for an opportunity to try out a neat spreadsheet for creating scatterplots as an alternative to the standard bar graph as a way of presenting the results of a treatment trial. This week the American Journal of Speech-Language Pathology posted our manuscript “A randomized trial of twelve-week interventions for the treatment of developmental phonological disorder in francophone children”. In the paper we compare outcomes (speech production accuracy and phonological awareness) for the four experimental groups in comparison to a no-treatment group using the standard bar graphs. Weissgerber et al. disparage this presentation as “visual tables” that mask distributional information. They provide a spreadsheet that allows the researcher to represent data so that the underlying individual scores can be seen. I am going to show some of the speech accuracy data from the new paper that Françoise and I have just published in this form.

In our trial we treated 65 four-year-old francophone children. Each child received the same treatment components: six one-hour individual therapy sessions targeting speech accuracy, delivered once per week in the first six weeks; followed by six one-hour group therapy sessions targeting phonological awareness, delivered once per week in the second six weeks; simultaneously, in the second six weeks, parents received a parent education program. The nature of the individual therapy and parent education programs was varied, however, with children randomly assigned to four possible combinations of intervention as follows: Group 1 (Output-oriented Individual Intervention and Articulation Practice Home Program); Group 2 (Output-oriented Individual Intervention and Dialogic Reading Home Program); Group 3 (Input-oriented Individual Intervention and Articulation Practice Home Program); Group 4 (Input-oriented Individual Intervention and Dialogic Reading Home Program). The Output-oriented Individual Intervention and the Articulation Practice Home Program both focused on speech production practice, so this was a theoretically consistent combination. The Input-oriented Individual Intervention and the Dialogic Reading Home Program included procedures for providing high quality inputs that required the child to listen carefully to those inputs with no explicit focus on speech accuracy; the child might be required to make nonverbal responses or might choose to make verbal responses, but adult feedback would be focused on the child’s meaning rather than on speech accuracy directly. This combination was also theoretically consistent. The remaining two combinations mixed and matched these components in a way that was not theoretically consistent. All four interventions were effective relative to the no-treatment control, but the theoretically consistent combinations were the most effective. The results are shown in bar graphs in Figures 2 and 3 of the paper.

Here I will represent the results for the two theoretically consistent conditions in comparison to the no-treatment control condition, using the Weissgerber Paired Data Scatterplot Template to represent the pre- to post-treatment changes in Percent Consonants Correct (PCC) scores on our Test Francophone de Phonologie. The first chart shows the data for the Output-oriented/Articulation Practice intervention (Group 1) compared to the no-treatment group (Group 0). You might be surprised by how high the scores are for some children pre-treatment; this is normal for French, in which expectations for consonant accuracy are higher than in English because consonants are mastered at an earlier age, even though syllable structure errors persist and may not be mastered until first or second grade. The important observations are that the difference scores for the no-treatment group are tightly clustered around 0, whereas the difference scores in the treated group are spread out, with the average (median) amount of change being 7 points higher than 0.

[Scatterplot: Group 0 versus Group 1]

Next I show the same comparison for the Input-oriented/Dialogic Reading intervention (Group 4) in comparison to Group 0. In this case the median of the difference scores is 9, slightly higher than for Group 1, possibly because the pretreatment scores are lower for this group. In any case, it is clear that a treatment effect is observed for both combinations of interventions, which is striking because in one group the children practiced speech with direct feedback from the SLP and parent about their speech accuracy, whereas in the other group direct speech practice and feedback about speech accuracy was minimal!

[Figure: Paired data scatterplot, Group 0 vs. Group 4]
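If you would like to make this kind of chart for your own data, the logic is simple: plot each child’s pre- and post-treatment scores joined by a line, and then plot the difference scores with the median marked. Here is a minimal sketch in Python using matplotlib; the PCC scores in it are invented for illustration and are not the trial data:

```python
# Minimal paired-data scatterplot in the spirit of the template used
# above: each child's pre- and post-treatment scores joined by a line,
# plus the distribution of difference scores with the median marked.
# The scores below are invented for illustration only.
import numpy as np
import matplotlib.pyplot as plt

pre  = np.array([62, 70, 75, 80, 85, 88, 90, 72, 78, 83])  # hypothetical pre-treatment PCC
post = np.array([70, 77, 85, 86, 92, 89, 95, 80, 86, 90])  # hypothetical post-treatment PCC
diff = post - pre

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))

# Left panel: one line per child, pre to post
for p0, p1 in zip(pre, post):
    ax1.plot([0, 1], [p0, p1], marker="o", color="gray", alpha=0.7)
ax1.set_xticks([0, 1])
ax1.set_xticklabels(["Pre", "Post"])
ax1.set_ylabel("Percent Consonants Correct")
ax1.set_title("Individual change")

# Right panel: difference scores, with 0 and the median marked
ax2.scatter(np.zeros_like(diff), diff, color="gray", alpha=0.7)
ax2.axhline(0, linestyle="--", color="black")
ax2.axhline(np.median(diff), color="red",
            label=f"median = {np.median(diff):.0f}")
ax2.set_xticks([])
ax2.set_ylabel("Post minus pre PCC")
ax2.set_title("Difference scores")
ax2.legend()

plt.tight_layout()
plt.show()
```

The right-hand panel makes exactly the point made above: difference scores for a no-treatment group should cluster around 0 while those for a treated group spread upward.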

Do these scatterplots provide any additional information relative to the traditional bar charts that are shown in the AJSLP paper? One thing that is clearer in this representation is that there are children in Group 1 and in Group 4 who did not respond to the treatment. Randomized control trials tell us about the effectiveness of interventions on average. They can help me, as a researcher, suggest general principles (such as: given a short treatment interval, a theoretically consistent intervention is probably better than an “eclectic” one). As a speech-language pathologist, however, you must make the best choice of treatment approach for each individual child who walks into your treatment room. Providing an evidence base to support those decisions requires access to large research grants for very large multi-site trials. There is only so much we can learn from small trials like this one.

I hope that you will check out the actual paper, however, which includes as supplemental information our complete procedure manual, with a description of all target selection and treatment procedures, equally applicable to English and French.

Using Orthographic Representations in Speech and Language Therapy

Word learning, and in particular productive word learning, is associated with three important processes in the phonological domain: first, the child must encode the acoustic-phonetic form of the word from the language input; second, the child must transform this representation into a lexical representation, generally considered to take a more abstract phonological form; finally, the child must retrieve the representation in order to reproduce it. The first process relies on speech processing abilities that have been shown to be impaired in many children with speech, language and reading deficits, as shown, for example, by Ben Munson and colleagues (@benjyraymunson) and by Nina Kraus and colleagues. Phonological encoding is enhanced by access to repeated high-quality but variable inputs, as shown by Richtmeier et al. for normally developing children and by Rice et al. for children with SLI. The majority of children with SSD have difficulties with encoding: we have a paper in press with the American Journal of Speech-Language Pathology showing that speech accuracy in these children can be improved with an approach that focuses largely on the provision of intense, high-quality input – I will have more to say on this subject when it (finally) emerges in print.

The second process, forming a phonological representation and storing it in the lexicon, involves articulatory recoding, which can be a serious problem for children with severe SSD, accounting for deficits in speech accuracy (especially in association with inconsistency), nonword repetition, word learning, productive vocabulary, word finding, rapid automatic naming, and other phonological processing skills. These children are often diagnosed with motor planning disorders, but I have pointed out previously that the problem is actually at the level of phonological planning. I have further pointed out the very close relationship between speech planning and memory. Children who are having difficulty with phonological planning may not show the same benefit from a therapy approach that is focused on the provision of high-quality inputs. Therefore a new paper on the use of orthographic inputs to teach new words caught my eye. Ricketts et al. taught children with SLI and ASD, as well as younger and age-matched children with typical language, to label nonsense objects with new names, using a computer program. For some words the children were exposed only to the object–auditory word pairing; for others they saw the object, heard the word, and saw a printed version (orthographic representation) as well. All of the children found it easier to learn the new words when they were exposed to the orthographic representation along with the auditory word.

This study reminded me of the research we are doing with children who are referred to our clinic with an apraxia diagnosis due to inconsistent speech errors. So far, 40% of those children have shown difficulty with phonological planning rather than motor planning, as revealed by the syllable repetition test, as I have explained in a previous blog. We have been using a single subject randomization design to compare the relative efficacy of two treatment approaches with these children. The Phonological Memory & Planning (PMP) intervention pairs the phonemes in the target words with visual referents that include letters, as shown here. Imitative models are avoided; the child is encouraged to create his or her own phonological plan and to produce the word using the visual symbols when necessary. An alternative treatment, the Auditory-Motor Integration (AMI) treatment, is quite different, with a heavy emphasis on prior auditory stimulation and on self-judgments of the match between auditory inputs and outputs. A third condition is a usual-care control (CON) condition focusing on high-intensity practice. In all cases we teach nonsense words paired with real objects, with the words structured to target each child’s phonological needs in the segmental and prosodic domains.

The results are assessed by applying a resampling test to the probe scores and then combining p-values across the children. These are the statistical results (F and t tests by resampling test) for the Same Day Probe scores, with p-values combined across the five children who have proven to have phonological planning problems in concert with a severe inconsistent speech disorder:

[Table: TASC PMP results, August 2015]

The results in the third column show that all of the children obtained a significant treatment effect. The findings in the remaining columns pertain to planned comparisons, with positive t values being in the expected direction. The combined p-values indicate that all treatments are significantly different from each other, and inspection of the mean scores across children shows that the pattern of results is PMP > CON > AMI. The result is made more interesting by the fact that the pattern is the exact opposite for children with a motor planning disorder. Tanya Matthews and I will compare these two subgroups with data and video during our presentation at ASHA 2015 in Denver this coming fall.

Session Number: 1429
Session Title: Differential Diagnosis of Severe Phonological Disorder & Childhood Apraxia of Speech
Day: Friday, November 13, 2015
Time: 1:00 PM – 3:00 PM
Session Format: Seminar 2-hours
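As an aside for the methodologically inclined, the analysis behind the table above has two steps: a randomization (resampling) test computed within each child, and then Fisher’s method to combine the per-child p-values. The sketch below illustrates that two-step logic in Python with invented probe scores and a simplified two-condition comparison; it is not our actual analysis code.

```python
# Two-step logic: (1) a randomization test on one child's probe scores,
# shuffling the session-to-treatment assignment; (2) Fisher's method to
# combine p-values across children. All data here are invented, and the
# real analysis compared three conditions (PMP, AMI, CON), not two.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def randomization_test(scores, labels, n_perm=10_000):
    """One-sided p-value for mean(A) > mean(B) by permutation."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    observed = scores[labels == "A"].mean() - scores[labels == "B"].mean()
    hits = 0
    for _ in range(n_perm):
        perm = rng.permutation(labels)
        if scores[perm == "A"].mean() - scores[perm == "B"].mean() >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one correction

# Hypothetical same-day probe scores for three children
children = [
    ([8, 9, 7, 5, 4, 6], ["A", "A", "A", "B", "B", "B"]),
    ([6, 7, 8, 6, 5, 4], ["A", "A", "A", "B", "B", "B"]),
    ([9, 8, 9, 7, 6, 7], ["A", "A", "A", "B", "B", "B"]),
]
p_values = [randomization_test(s, l) for s, l in children]

# Fisher's method: -2 * sum(ln p) follows a chi-square distribution
# with 2k degrees of freedom under the null hypothesis
chi2_stat = -2 * np.sum(np.log(p_values))
combined_p = stats.chi2.sf(chi2_stat, df=2 * len(p_values))
print("per-child p:", p_values, "combined p:", round(combined_p, 4))
```

The appeal of this design for clinic-based research is that each child serves as his or her own control, yet the combined p-value still lets you say something about the group.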

For now, the take-away message is that learning new words involves (at least) three important processes: encoding the sound of the new word; memory processes for storing and retrieving the phonological representation; and motor planning processes for planning and programming articulatory movements prior to production of the new word. There are published studies showing that intervention procedures targeting each of these processes help children with speech, language and reading difficulties. Increasing the frequency of high-quality input improves the quality of the acoustic-phonetic representation. Pairing phonological segments with visual symbols helps with storage and retrieval of the phonological representation. High-intensity speech practice with appropriate stimulation and feedback improves motor planning and motor programming. The trick is to figure out which children require which procedures at which time.

Thinking about ‘Dose’ and SLP Practice: Part III

Continuing my discussion of the concept of ‘dose’ as applied to speech therapy, I finally get to the heart of the matter: the optimal ‘dose’ of speech therapy to achieve the desired outcome, which in our context is generalization of a phonology goal to untreated words. In previous blogs I discussed the definition of ‘dose’ in terms of the number of effective teaching episodes and the need to identify the effective ingredients of your intervention beyond the therapeutic alliance. Here I will discuss ‘dose’ specifically: how many effective teaching episodes are enough to achieve a good outcome in phonology intervention?

Let’s begin by returning to the pharmacology context from which the concept of dosage is borrowed. How is the concept helpful to physicians? First, it is important to know the optimum dose (or dose range) for the average patient so as to avoid harming the patient. If the prescribed dose is too low the patient may not improve, and the continuance or worsening of symptoms and disease will be harmful to the patient. If the dose is too high the medication itself may be toxic and harm the patient directly. Second, the patient’s response to the medication is diagnostic. If the maximum safe dose has been prescribed and the patient is not responding favorably, the physician must seek the reason: Is the patient complying with the prescribed treatment regimen? Is the patient doing something else that interferes with the effectiveness of the medication? Is the health care system administering the dose as prescribed? Does this patient respond to medications in an individualized fashion, such that a switch to another medication is required? Is the diagnosis wrong, such that an entirely different treatment is called for? I will describe the research on appropriate dose in the case of meaningful minimal pairs therapy (applied to preschool-aged children with moderate or severe phonological disorders), and we can consider whether these questions are relevant in the speech therapy context.

The method of meaningful minimal pairs is a uniquely linguistic approach to therapy that has the goal of changing the child’s production of an entire sound class. The procedure has two key components: (1) teaching the child pairs of words that differ by a single phoneme; and (2) arranging the environment so that the child experiences a communication breakdown if both words in a pair are produced as homophones. (SLPs and researchers usually get the first part right but often forget the second!) The method is directed at the child’s phonological knowledge and therefore should not be applied until after phonetic knowledge of the contrasting phonemes, in both the perceptual and articulatory realms, has been established.

There is a lot of research involving this method and at least two papers have carefully documented the dose that leads to generalization from trained to untrained words/targets. More than 50% generalization is the outcome of interest because we know from other studies that you can discontinue direct treatment on the target pattern at this point and the child will continue to make spontaneous gains. The two papers that I will discuss have the further benefit of allowing the reader to count the “dose” precisely as the number of practice trials. The papers also provide information about the number of sessions and the number of minimal pairs over which the practice trials were distributed.

Weiner (1981) demonstrated that the method was effective with two children, using a multiple baseline design and treating deletion of final consonants (DFC), stopping of fricatives (ST) and fronting (F). Four minimal pairs were taught per target pattern, and use of the pattern was probed continuously for treatment words and on a session-by-session basis for generalization words. The results do not show much difference across target patterns, but the response across children was markedly different, with one child making much faster progress than the other on all targets. For example, Child A reduced DFC to below 50% in treated words after 120 practice trials and in generalization words after 300 trials. Child B, on the other hand, required 200 and 480 trials respectively to reach the same milestones for DFC. Furthermore, Child A was able to accomplish many more trials in a session (e.g., 400 practice trials over 5 sessions, or 80 trials/session, for Child A vs. 570 practice trials over 13 sessions, or roughly 44 trials/session, for Child B). Despite this large variance in rate of progress across children, the study suggests that an SLP should expect a good treatment response with this method after no more than 500 trials.

This finding was replicated in a larger sample (n = 19) by Elbert, Powell and Swartzlander (1991). In this study a behaviorist approach was taken to teaching the minimal pair words, in contrast to Weiner’s procedure, which emphasized the communication breakdown as an important part of the method. The children were taught one pair at a time, in series, and the study was structured to determine how many children would achieve generalization to untreated words, at a level of at least 50%, after learning 3, 5 or 10 pairs of words. The authors found that 59% of the children generalized after learning 3 word pairs, which took an average of 487 practice trials (range 180 to 1041) administered over approximately five 20-minute treatment sessions; 21% of the children needed to learn 5 word pairs (1221 practice trials on average) and 14% needed to learn 10 word pairs (2029 practice trials on average) before generalization occurred. This left 7% of the children who did not generalize at all.

How can we use these data about dosage in our treatment planning? There is a lot of useful information here. First, we know that it is possible to achieve 80 to 100 practice trials in 20 minutes. Therefore, if your treatment sessions are 20 minutes long you can target one phonological pattern, and if they are 60 minutes long you can target three. Second, the data show us that children do not usually generalize in under 180 practice trials (and I would argue that it is the number of practice trials, rather than the number of sessions, that matters). What harm might arise if you provide a child with the government-mandated six annual treatment sessions, targeting three patterns, but fail to achieve more than 100 practice trials for each target pattern across the six sessions? We can predict that the child will not start to generalize before the end of the block and therefore will not continue to make spontaneous gains after treatment stops. When the next block begins, the child may be discouraged and less cooperative with the next SLP. The parent may become discouraged and seek out complementary or alternative interventions that are even more useless or harmful than speech therapy provided at insufficient intensity!
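To make this arithmetic concrete, here is a back-of-the-envelope calculation in Python. The thresholds (roughly 80 trials per 20 minutes at the conservative end of the range, and generalization typically beginning somewhere between about 180 and 500 trials per target) come from the two studies discussed above; the session parameters are hypothetical examples.

```python
# Back-of-the-envelope dose planning using the figures discussed above:
# roughly 80 practice trials fit in 20 minutes (conservative end of the
# 80-100 range), and generalization typically begins somewhere between
# ~180 and ~500 trials per target. Session parameters are hypothetical.
TRIALS_PER_20_MIN = 80

def trials_per_target(session_minutes, n_sessions, n_targets):
    """Total practice trials accumulated per target across a block,
    assuming session time is split evenly among the targets."""
    per_session = TRIALS_PER_20_MIN * (session_minutes / 20)
    return per_session * n_sessions / n_targets

# A mandated block of six 20-minute sessions spread across 3 targets:
print(trials_per_target(20, 6, 3))   # 160.0 -- below the ~180-trial floor

# The same block devoted to a single target:
print(trials_per_target(20, 6, 1))   # 480.0 -- approaching the ~500-trial
                                     # decision point for reassessment
```

On these assumptions, the mandated block spread over three targets never reaches the generalization floor for any of them, which is exactly the harm scenario described above.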

What if the child has achieved more than 500 practice trials and has not generalized? At this point you have more than enough reason to reassess your diagnosis and/or your approach. Child B in Weiner’s study, for example, did finally achieve many practice trials, but did so slowly because he was unable to achieve the recommended intensity, producing far fewer than 80 practice trials per session. This child also failed to generalize after 500 trials for one of his targets. Perhaps this child lacked the necessary prerequisites, such as stable perceptual and articulatory representations for the target phonemes. Or perhaps the child viewed the communication breakdowns as the SLP’s listening problem rather than his own speech problem, so that a disconnect at the level of the therapeutic alliance was hampering his learning. What about the children in Elbert et al. who did not generalize at all? It was eventually revealed in the paper that these children presented with many “soft signs” indicative of both speech and oral motor apraxia. Therefore, continuing to almost 3000 practice trials with these children was most assuredly harmful, given that they were not benefiting from the approach and were deprived of the opportunity to experience a treatment approach better suited to their needs.

I am hoping that this example, in the specific context of minimal pairs intervention, demonstrates that the concept of dosage can be very useful in speech therapy. We need much more research that establishes typical ranges of ‘dose’ for optimum outcomes for any given intervention procedure that we use. Then we need to track these dosages as we apply the procedures in our interventions. It is important to remember that the dose is not the number of sessions or visits by the child or family to the SLP. Rather, the dose is the number of learning opportunities experienced by the child. When the child is not learning and we know that the child has experienced the optimum dose of practice trials, we can adjust our intervention procedures with greater confidence. We can also set evidence-based goals for our clients and document their progress objectively against these expectations. In addition to these benefits for individual clients, this kind of information will allow us to evaluate the efficacy of our services at the program level with an objectivity that is currently lacking. Imagine if a government or an insurance company suggested saving money by reducing the dose of our medications below effective levels! We should not allow this solution to be proposed as a way to reduce the cost of speech therapy services. The only way to protect ourselves and our clients is with more research and greater specificity about how our treatments work. We must know the right dosage.