What is a control group?

I have a feeling that my blog might become less popular in the next little while because you may notice an emerging theme on research design, and away from speech therapy procedures specifically! But identifying evidence based procedures requires knowledge of research design, and it has come to my attention, as part of the process of publishing two randomized controlled trials (RCTs) this past year, that there are many misperceptions about what an RCT is in the SLP and education communities, among both clinicians and researchers. Therefore, I am happy to draw your attention to the terrific blog by Edzard Ernst, and in particular to an especially useful post, “How to differentiate good from bad research”. The writer points out that a proper treatment of this topic “must inevitably have the size of a book” because each of the indicators that he provides “is far too short to make real sense.” So I have taken it upon myself in this blog to expand upon one of his indicators of good research – one that I know causes some confusion, specifically:

  • Use of a placebo in the control group where possible.

Recently the reviewers (and editor) of one of my studies were convinced that my design was not an RCT because the children in both groups received an intervention. In the absence of a “no-treatment control,” they said, the study could not be an RCT! I was mystified about the source of this strange idea until I read Ernst’s blog and realized that many people, recalling their research courses from university, must be mistaking “placebo control” for “no-treatment control.” However, a placebo control condition is not at all like the absence of treatment. Consider the classic example of a placebo control in a drug trial: each patient randomized to the treatment arm visits the nurse, who hands the patient a white paper cup holding two pink pills containing active ingredient X along with some other ingredients that do not impact the patient’s disease (i.e., inactive ingredients); each patient randomized to the control arm also visits the nurse, who hands the patient a white paper cup holding two pink pills containing only the inactive ingredients. In other words, the experiment is designed so that all patients are “treated” exactly the same except that only patients randomized to treatment receive (unknowingly) the active ingredient. Therefore, all changes in patient behavior that are due to those aspects of the treatment that are not the active treatment (visiting the nice nurse, expecting the pills to make a difference, etc.) are equalized across arms of the study. These are called the “common factors” or “nonspecific factors.”

In the case of a behavioral treatment it is important to equalize the common factors across all arms of the study. Therefore, in my own studies I deliberately avoid “no treatment” controls. In my very first RCT (Rvachew, 1994), for example, the treatment conditions in the two arms of the study were as follows:

  • Experimental: 10 minutes of listening to sheet vs Xsheet recordings and judging correct vs incorrect “sheet” items (active ingredient) in a computer game format followed by 20 minutes of traditional “sh” articulation therapy, provided by a person blind to the computer game target.
  • Control: 10 minutes of listening to Pete vs meat recordings and judging correct vs incorrect “Pete” items in a computer game format followed by 20 minutes of traditional “sh” articulation therapy, provided by a person blind to the computer game target.

It can be seen that the study was designed to ensure that all participants experienced exactly the same treatment except for the active ingredient that was reserved for children who were randomly assigned to the experimental treatment arm: specifically, exposure to the experience of listening to and making perceptual judgments about a variety of correct and incorrect versions of words beginning with “sh” – the sound that the children misarticulated. Subsequently I have conducted all my randomized controlled studies in a similar manner. But, as I said earlier, I run across readers who vociferously assert that the studies are not RCTs because an RCT requires a “no treatment” control. In fact, a “no treatment” control is a very poor control indeed, as argued in this blog post that explains why the frequently used “wait list control group” is inappropriate. For example, a recent trial on the treatment of tinnitus claimed that a wait list control had merit because “While this comparison condition does not control for all potential placebo effects (e.g., positive expectation, therapeutic contact, the desire to please therapists), the wait-list control does account for the natural passing of time and spontaneous remission.” In fact, it is impossible to control for common factors when using a wait list control, and it is unlikely that patients are actually “just waiting” when you randomize them to the “wait list control” condition; therefore Hesser et al.’s defense of the wait list control is optimistic, although their effort to establish how much change you get in this condition is worthwhile.

We had experience with a “wait list” comparison condition in a recent trial (Rvachew & Brosseau-Lapré, 2015). Most of the children were randomly assigned to one of four different treatment conditions, matched on all factors except the specific active ingredients of interest. However, we also had a nonexperimental wait list comparison group* to estimate change for children outside of the trial. We found that parents were savvy about maximizing the treatment that their children could receive in any given year. Our trial lasted six weeks, the public health system entitled them to six weeks of treatment, and their private insurance entitled them to six to twelve weeks of therapy depending on the plan. Parents would agree to enroll their child in the trial, with randomization to a treatment arm, if their child was waiting for the public service, OR they would agree to be assessed in the “wait list” arm if their child was currently enrolled in the public service. They would use their private insurance when all other options had been exhausted. Therefore the children in the “wait list” arm were actually being treated. Interestingly, we found that the parents expected their children to obtain better results from the public service because it was provided by a “real” SLP rather than the student SLPs who provided our experimental treatments, even though the public service was considerably less intense! (As an aside, we were not surprised to find that the reverse was true.) Similarly, as I have mentioned in previous blogs, Yoder et al. (2005) found that the children in their “no treatment” control accessed more treatment from other sources than did the children in their treatment arm. And parents randomized to the “watchful waiting” arm of the Glogowska et al. (2000) trial sometimes dropped out because parents will do what they must to meet their child’s needs.

In closing, a randomized controlled trial is simply a study in which participants are randomly assigned to an experimental treatment and a control condition (even in a cross-over design, in which all participants experience all conditions, as in Rvachew et al., in press). The nature of the control should be determined after careful thought about the factors that you are attempting to control, which can be many – placebo, Hawthorne, fatigue, practice, history, maturation, and so on. These will vary from trial to trial, obviously. Placebo control does not mean “no treatment” but rather a treatment that excludes everything except the “active ingredient” that is the subject of your trial. As an SLP, when you are reading about studies that test the efficacy of a treatment, you need to pay attention to what happens to the control group as well as the treatment group. The trick is to ask in every case: What is the active ingredient that explains the effect seen in the treatment group? What else might account for the effects seen in the treatment arm of this study? If I implement this treatment in my own practice, how likely am I to get a better result compared to the treatment that my caseload is currently receiving?
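Random assignment is the defining feature, and it requires very little machinery. Here is a minimal sketch, in Python, of blocked randomization to two arms; the participant IDs, the block size of four, and the seeded random number generator are illustrative assumptions, not the procedure from any particular trial.

```python
# A minimal sketch of blocked randomization to two trial arms;
# the participant IDs and block size are invented for illustration.
import random

random.seed(42)  # a real trial would conceal the sequence, not just seed it

participants = [f"P{i:02d}" for i in range(1, 13)]
assignments = {}

# Each block of four holds two slots per arm, shuffled independently,
# so the two groups stay balanced in size throughout recruitment.
for start in range(0, len(participants), 4):
    block = ["experimental", "experimental", "control", "control"]
    random.shuffle(block)
    for pid, arm in zip(participants[start:start + 4], block):
        assignments[pid] = arm

for pid, arm in assignments.items():
    print(pid, arm)
```

Because chance alone decides who receives the active ingredient, maturation, history, and the common factors are, on average, equalized across the arms.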

* A colleague sent me a paper (Mercer et al., 2007) in which a large number of researchers, advocating for the acceptance of a broader array of research designs in order to focus more attention on external validity and translational research, got together to discuss the merits of various designs. During the symposium, disagreement arose about the use of the terms “control” and “comparison” group. I use the terms in accordance with a minority of the attendees, as follows: “control group” means that the participants were randomly assigned to a group that did not experience the “active ingredient” of the experimental treatment; “comparison group” means that the participants were not randomly assigned to the group that did not experience the experimental intervention, a group that may or may not have received a treatment. This definition was ultimately not used by the attendees; I don’t know why, but somehow they decided on a different definition that didn’t make any sense at all. I invite you to consult p. 141 and see if you can figure it out!

References

Glogowska, M., Roulstone, S., Enderby, P., & Peters, T. (2000). Randomised controlled trial of community based speech and language therapy in preschool children. British Medical Journal, 321, 923-928.

Hesser, H., Weise, C., Rief, W., & Andersson, G. (2011). The effect of waiting: A meta-analysis of wait-list control groups in trials for tinnitus distress. Journal of Psychosomatic Research, 70(4), 378-384. doi:10.1016/j.jpsychores.2010.12.006

Mercer, S. L., DeVinney, B. J., Fine, L. J., Green, L. W., & Dougherty, D. (2007). Study designs for effectiveness and translation research: Identifying trade-offs. American Journal of Preventive Medicine, 33(2), 139-154. doi:10.1016/j.amepre.2007.04.005

Rvachew, S. (1994). Speech perception training can facilitate sound production learning. Journal of Speech and Hearing Research, 37, 347-357.

Rvachew, S., & Brosseau-Lapré, F. (2015). A randomized trial of twelve week interventions for the treatment of developmental phonological disorder in francophone children. American Journal of Speech-Language Pathology, 24, 637-658. doi:10.1044/2015_AJSLP-14-0056

Rvachew, S., Rees, K., Carolan, E., & Nadig, A. (in press). Improving emergent literacy with school-based shared reading: Paper versus ebooks. International Journal of Child-Computer Interaction. doi:10.1016/j.ijcci.2017.01.002

Yoder, P. J., Camarata, S., & Gardner, E. (2005). Treatment effects on speech intelligibility and length of utterance in children with specific language and intelligibility impairments. Journal of Early Intervention, 28(1), 34-49.

Using effect sizes to choose a speech therapy approach

I am quite intrigued by the warning offered by Adrian Simpson in his paper “The misdirection of public policy: comparing and combining standardised effect sizes” (Simpson, 2017).

The context for the paper is the tendency of public policy makers to rely on meta-analyses to make decisions such as, for example, should we improve teachers’ feedback skills or reduce class sizes as a means of raising student performance? Simpson shows that meta-analyses (and meta-analyses of the meta-analyses!) are a poor tool for making these apples-to-oranges comparisons and cannot be relied upon as a source of information when making public policy decisions such as this. He identifies three specific issues with research design that invalidate the combining and comparing of effect sizes. I think that these are good issues to keep in mind when considering effect sizes as a clue to treatment efficacy and as a source of information when choosing a speech or language therapy approach.

Recall that an effect size is a standardized mean difference, whereby the difference between means (i.e., the mean outcome of the treatment condition versus the mean outcome of the control condition) is expressed in standard deviation units. The issue is that the standard deviation units, which are supposed to reflect the variation in outcome scores between participants in the intervention trial, actually reflect many different aspects of the research design. Therefore, if you compare the effect size of an intervention as obtained in one treatment trial with the effect size for another intervention as obtained in a different treatment trial, you cannot be sure that the difference is due to differences in the relative effectiveness of the two treatments. And yet SLPs are asking themselves these kinds of questions every day: Should I use a traditional articulation therapy approach or a phonological approach? Should I add nonspeech oral motor exercises to my traditional treatment protocol? Is it more efficient to focus on expressive language or receptive language goals? Should I use a parent training approach or direct therapy? And so on. Why is it unsafe to combine and compare effect sizes across studies to make these decisions?
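To make the arithmetic concrete, here is a minimal sketch of the computation with invented outcome scores for a hypothetical two-arm trial; the pooled standard deviation in the denominator is exactly the quantity that absorbs all of those design-related sources of variation.

```python
# A minimal sketch of a standardized mean difference (Cohen's d) for a
# two-arm trial; all of the outcome scores are invented.
import statistics

treatment = [12, 15, 14, 17, 13, 16, 18, 14]
control = [11, 12, 13, 12, 10, 14, 13, 11]

mean_diff = statistics.mean(treatment) - statistics.mean(control)

# Pooled standard deviation: the denominator that reflects not just
# participant variation but every design choice that shapes the scores.
n_t, n_c = len(treatment), len(control)
var_t = statistics.variance(treatment)  # sample variance, n - 1 denominator
var_c = statistics.variance(control)
pooled_sd = (((n_t - 1) * var_t + (n_c - 1) * var_c) / (n_t + n_c - 2)) ** 0.5

d = mean_diff / pooled_sd
print(f"Cohen's d = {d:.2f}")
```

Anything that shrinks the denominator – a homogeneous sample, a highly reliable measure – inflates d without the treatment itself being any more effective.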

The first issue that Simpson raises is that of comparison groups. Many, although not all, treatment trials compare an experimental intervention to either a ‘no treatment’ control group or a ‘usual care’ condition. The characteristics of the ‘no treatment’ and ‘usual care’ controls are inevitably poorly described, if they are described at all. And yet meta-analyses will combine effect sizes across many studies despite having a very poor sense of what the control condition is in the studies that are included in the final estimate of treatment effect. Control group and intervention descriptions can be so paltry that in some cases the experimental treatment of one study may be equivalent to the control condition of another study. The Law et al. (2003) review combined effect sizes for a number of RCTs evaluating phonological interventions. One trial compared a treatment that was provided in 22 twice-weekly half-hour sessions over a four month period to a wait list control (Almost & Rosenbaum, 1998). Another involved monthly 45-minute sessions provided over 8 months, in comparison to a “watchful waiting” control in which many parents “dropped out” of the control condition (Glogowska et al., 2000). Inadequate information was provided about how much intervention the control group children accessed while they waited – almost anything is possible relative to the experimental condition in the Glogowska trial. For example, Yoder et al. (2005) observed that their control group actually accessed more treatment than the kids in their experimental treatment group, which may explain why they did not obtain a main effect of their intervention (or not, who knows?). The point is that it is hard to know whether a small effect size in comparison to a robust control is more or less impressive than a large effect size in comparison to no treatment at all. Certainly, the comparison is not fair.

The second issue raised concerns range restriction in the population of interest. I realize now that I failed to take this into account when I repeated (in Rvachew & Brosseau-Lapré, 2018) the conclusion that dialogic reading interventions are more effective for low-income children than for children with developmental language impairments (Mol et al., 2008). Effect sizes are inflated when the intervention is provided to only a restricted part of the population and the selection variables are associated with the study outcomes. However, the inflation is greatest for children near the middle of the distribution and least for children at the tails of the distribution. This fact may explain why effect sizes for vocabulary size after dialogic reading intervention are highest for middle class children (.58, Whitehurst et al., 1988), intermediate for lower class but typically developing children (.33, Lonigan & Whitehurst, 1998), and lowest for children with language impairments (.13, Crain-Thoreson & Dale, 1999). There are other potential explanatory factors in these studies, but restricted range is a variable of obvious importance in treatment trials directed at children with speech and language impairments. The low effect size for dialogic reading obtained by Crain-Thoreson and Dale should not by itself discourage use of dialogic reading with this population.
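A toy simulation (my own illustration, not an analysis from Simpson’s paper) shows the mechanism: dividing the same fixed raw gain by the standard deviation of a restricted slice of the population inflates the standardized effect, and more so for a slice from the middle of the distribution than for one from a tail. The population parameters, cutoffs, and raw gain below are all invented.

```python
# Range restriction shrinks the SD and thereby inflates d, even though
# the raw treatment gain is identical in every group.
import random
import statistics

random.seed(1)
population = [random.gauss(100, 15) for _ in range(100_000)]

def effect_size(sample, raw_gain=5.0):
    """Standardize the same fixed raw gain by each sample's own SD."""
    return raw_gain / statistics.stdev(sample)

samples = {
    "full range": population,
    "middle slice (90-110)": [x for x in population if 90 <= x <= 110],
    "low tail (<= 78)": [x for x in population if x <= 78],
}

for name, group in samples.items():
    sd = statistics.stdev(group)
    print(f"{name:>22}: SD = {sd:5.1f}, d = {effect_size(group):.2f}")
```

The same 5-point gain looks roughly twice as large, in d units, in the restricted samples as in the full population.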

Finally, measurement validity plays a huge role, with longer, more valid tests yielding larger effect sizes than shorter, less valid tests. This might be important when comparing the relative effectiveness of therapy for different types of goals. Law et al. (2003) concluded that phonology therapy appeared to be more effective than therapy for syntax goals, for example. For some reason the outcome measures in these two groups of studies tend to be very different. Phonology outcomes are typically assessed with picture naming tasks that include 25 to 100 items, with the outcome often expressed as percent consonants correct; therefore, at the consonant level, there are many items contributing to the test score. Sometimes the phonology outcome measure is created specifically to probe the child’s progress on the specific target of the phonology intervention. In both cases the outcome measure is likely to be a sensitive measure of the outcomes of the intervention. Surprisingly, in Law et al., the outcomes of the studies of syntax interventions were quite often omnibus measures of language functioning, such as the Preschool Language Scale or, worse, the Reynell Developmental Language Scale, neither test containing many items targeted specifically at the domain of the experimental intervention. When comparing effect sizes across studies, it is crucial to be sure that the outcome measures have equal reliability and validity as measures of the outcomes of interest.
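Classical test theory makes the same point quantitatively: measurement error attenuates an observed standardized effect by roughly the square root of the outcome measure’s reliability. The sketch below pairs invented test lengths with invented reliability values simply to show how quickly a true effect shrinks on a short, noisy measure.

```python
# Attenuation of an observed effect size by measurement error (classical
# test theory): observed d is approximately true d * sqrt(reliability).
# The true effect and the reliability values are invented for illustration.
true_d = 0.60

for items, reliability in [(100, 0.95), (25, 0.80), (10, 0.60)]:
    observed_d = true_d * reliability ** 0.5
    print(f"{items:>3}-item measure, reliability {reliability:.2f}: "
          f"observed d = {observed_d:.2f}")
```

On these numbers the same true effect of 0.60 appears as roughly 0.58 on a long, reliable probe but only about 0.46 on a short omnibus scale – a difference that has nothing to do with the interventions being compared.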

My conclusion is that it is important to not make a fetish of meta-analyses and effect sizes. These kinds of studies provide just one kind of information that should be taken into account when making treatment decisions. Their value is only as good as the underlying research—overall, effect sizes are most trustworthy when they come from the same study or a series of studies involving the exact same independent and dependent variables and the same study population. Given that this is a rare occurrence in speech and language research, there is no real substitute for a deep knowledge of an entire literature on any given subject. Narrative reviews from “experts” (a much maligned concept!) still have a role to play.

References

Almost, D., & Rosenbaum, P. (1998). Effectiveness of speech intervention for phonological disorders: a randomized controlled trial. Developmental Medicine and Child Neurology, 40, 319-325.

Crain-Thoreson, C., & Dale, P. S. (1999). Enhancing linguistic performance: Parents and teachers as book reading partners for children with language delays. Topics in Early Childhood Special Education, 19, 28-39.

Glogowska, M., Roulstone, S., Enderby, P., & Peters, T. (2000). Randomised controlled trial of community based speech and language therapy in preschool children. British Medical Journal, 321, 923-928.

Law, J., Garrett, Z., & Nye, C. (2003). Speech and language therapy interventions for children with primary speech and language delay or disorder (Cochrane Review). Cochrane Database of Systematic Reviews, Issue 3. Art. No.: CD004110. doi:10.1002/14651858.CD004110.

Lonigan, C. J., & Whitehurst, G. J. (1998). Relative efficacy of parent and teacher involvement in a shared-reading intervention for preschool children from low-income backgrounds. Early Childhood Research Quarterly, 13(2), 263-290.

Mol, S. E., Bus, A. G., de Jong, M. T., & Smeets, D. J. H. (2008). Added value of dialogic parent-child book readings: A meta-analysis. Early Education and Development, 19, 7-26.

Rvachew, S., & Brosseau-Lapré, F. (2018). Developmental Phonological Disorders: Foundations of Clinical Practice (Second Edition). San Diego, CA: Plural Publishing.

Simpson, A. (2017). The misdirection of public policy: comparing and combining standardised effect sizes. Journal of Education Policy, 1-17. doi:10.1080/02680939.2017.1280183

Whitehurst, G. J., Falco, F., Lonigan, C. J., Fischel, J. E., DeBaryshe, B. D., Valdez-Menchaca, M. C., & Caulfield, M. (1988). Accelerating language development through picture book reading. Developmental Psychology, 24, 552-558.

Yoder, P. J., Camarata, S., & Gardner, E. (2005). Treatment effects on speech intelligibility and length of utterance in children with specific language and intelligibility impairments. Journal of Early Intervention, 28(1), 34-49.

CAMs & Speech Therapy

In this final post on the potential conflict between Evidence Based Practice (EBP) and Patient Centred Care (PCC) I consider those situations in which your client or the client’s family persists in a course of action that you may feel is not evidence based. This is a very common occurrence although you may not be aware of it. Increasing numbers of surveys reveal that the families of children with disabilities use Complementary and Alternative Medicines/Therapies (CAMs), usually without telling their doctor and other health care providers within the “standard” health care environment.

Despite a growing number of studies it is difficult to get an exact estimate of the prevalence of CAM use among such families (see reading list below). Some estimates are low because families are reluctant to admit to using CAMs. Other estimates are ridiculously high because CAM users are responding to insurance company surveys in order to promote funding for these services and products. However, the best estimates are perhaps as follows: about 12% of children in the general population are exposed to CAMs; the proportion probably doubles for children with developmental disabilities in general and doubles again for children with autism. The most commonly used CAMs are dietary supplements or special diets, followed by “mind and body practices” (sensory integration therapy, yoga, etc.); the use of dangerous practices such as chelation therapy is mercifully much less frequent. Predictors of CAM use are high levels of parental education and stress. The child’s symptoms are not reliably associated with CAM use. The hypothesized reason for these correlations is that educated parents have the means to find out about CAMs and the financial means to access them.

Having had some personal experience with this, I think that educated parents are very used to feeling in control of their lives, and nothing shatters that sense of control as much as finding that your child has a developmental disability. I find it very interesting that the studies shown below counted CAM use after specifically excluding prayer! I may be wrong, but I expect that many well-educated parents, even those who pray, would look for a more active solution than putting their family exclusively in the hands of God. Educating yourself through internet searches and buying a miracle cure feels like taking back control of your life (although months later, when you realize you have thousands of dollars of worthless orange gunk in your basement, you are feeling out of control again AND stupid, but that is another story). Anyway, this is why I think (an untested hypothesis, I admit) that patient centred care is actually the key to preventing parents from buying into harmful or useless therapies.

When the parent asks (or demands, as used to happen when I had my private practice) that you use a therapy that is not evidence based, how do you respond in a way that balances evidence based practice with patient centred care?

The most important strategy is to maintain an open and respectful dialogue with the family at all times so that conversation about the use of CAMs can occur. Parents often do not reveal the use of these alternative therapies and sometimes there are dangerous interactions among the many therapies that the child is receiving. It is critical that the parent feels comfortable sharing with you and this will not occur if you are critical or dismissive of the parents’ goals and choices. A PCC approach to your own goal setting and intervention choices will facilitate that dialogue. It is actually a good thing if the parent asks you to participate in a change in treatment approach.

Find out what the parent’s motivations are. Possibly the parent’s concerns are not in your domain. For example, dad might ask you to begin sessions with relaxation and breathing activities. You launch into a long lecture about how these exercises will not improve speech accuracy. It turns out that the exercises are meant to calm anxiety, a new issue that has arisen after a change in medication and some stresses at school. As an SLP, you are not actually in a position to be sure about the efficacy of the activity without some further checking, and going along with the parent is not going to hurt in any case.

Consider whether your own intervention plan is still working and whether your own goals are still the most pertinent for the child. Sometimes we get so wrapped up in the implementation of a particular plan that we miss the fact that new challenges in the child’s life obligate a course correction. Mum feels like her child needs something else and looks around for an alternative. After some discussion you may find that switching your goal from morphosyntax to narrative skills might work just as well as introducing acupuncture!

Talk with the parent about where the idea to use the CAM came from and how the rest of the family is adapting to the change. It is possible that mum knows the diet is unlikely to work but dad and dad’s entire family have taken it on as a family project to help the child. In some ways the diet is secondary to the family’s sense of solidarity. On the other hand, mum may be isolating herself and the child from the rest of the family by committing to an intervention that everyone else thinks is bonkers! This will be difficult, but efforts to engage the family with counseling might be in order.

Explore ways to help the parent establish the efficacy of the CAM. With the family’s consent you might be able to find information about the alternative approach from sources that are more credible than Google. You might be able to help the parent set up a monitoring program to document changes in behavior or sleep habits or whatever it is that the parent is trying to modify. You may even be able to implement a single-subject randomized experiment to document the efficacy of the therapy for the child. Dad may enjoy helping to plot the data in a spreadsheet.
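For readers curious about what such a single-subject experiment could look like, here is a minimal sketch: treatment and no-treatment days are scheduled in random order, and a randomization test asks how often a difference as large as the observed one could arise by chance. The eight-day schedule and the parent’s nightly ratings are invented for illustration.

```python
# A minimal single-case randomized experiment with a randomization test.
import itertools
import statistics

# (CAM administered that day?, parent's nightly rating) - invented data.
days = [(True, 6), (False, 4), (True, 5), (False, 5),
        (True, 7), (False, 4), (True, 6), (False, 5)]

def mean_diff(assignment, scores):
    on = [s for a, s in zip(assignment, scores) if a]
    off = [s for a, s in zip(assignment, scores) if not a]
    return statistics.mean(on) - statistics.mean(off)

actual_assignment = [a for a, _ in days]
scores = [s for _, s in days]
observed = mean_diff(actual_assignment, scores)

# Compare against every other way four "on" days could have been scheduled.
perms = set(itertools.permutations(actual_assignment))
extreme = sum(mean_diff(p, scores) >= observed for p in perms)
print(f"observed difference = {observed:.2f}, p = {extreme / len(perms):.3f}")
```

Because the on/off schedule itself was randomized, this little test controls for maturation and history in a way that simple before-and-after observation never can.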

Finally, and also crucially, model evidence based thinking in all your interactions with the family. When you are suggesting new goals or approaches to intervention, explain your decisions. Involve the family in those choices, describing the potential benefits and costs of the various options by referencing the scientific literature. Let the parent know that you are making evidence based hypotheses all the time and watching their child carefully to confirm whether your hypotheses were correct. Involve families in this process so that they become used to thinking in terms of educated guesses rather than phony certainties.

Reading list

Bowen, C. & Snow, P. C. (forthcoming, about January 2017). Making Sense of Interventions for Children’s Developmental Difficulties. Guildford: J&R Press. ISBN 978-1-907826-32-0 

Levy, S. E., & Hyman, S. L. (2015). Complementary and Alternative Medicine Treatments for Children with Autism Spectrum Disorders. Child and Adolescent Psychiatric Clinics of North America, 24(1), 117-143.

Owen-Smith, A. A., Bent, S., Lynch, F. L., Coleman, K. J., Yau, V. M., Pearson, K. A., . . . Croen, L. A. (2015). Prevalence and predictors of complementary and alternative medicine use in a large insured sample of children with Autism Spectrum Disorders. Research in Autism Spectrum Disorders, 17, 40-51.

Salomone, E., Charman, T., McConachie, H., Warreyn, P., Working Group 4, & COST Action “Enhancing the Scientific Study of Early Autism”. (2015). Prevalence and correlates of use of complementary and alternative medicine in children with autism spectrum disorder in Europe. European Journal of Pediatrics, 174, 1277-1285.

Valicenti-McDermott, M., Burrows, B., Bernstein, L., Hottinger, K., Lawson, K., Seijo, R., . . . Shinnar, S. (2014). Use of Complementary and Alternative Medicine in Children With Autism and Other Developmental Disabilities: Associations With Ethnicity, Child Comorbid Symptoms, and Parental Stress. Journal of Child Neurology, 29(3), 360-367.

Do our patients prove that speech therapy works?

The third post in my series on Evidence Based Practice versus Patient Centred Care addresses the notion that the best source of evidence for patient centred care comes from the patient. I recall that when I was a speech-language pathology student in the 1970s, my professors were fond of telling us that we needed to treat each patient as a “natural experiment”. I was reminded of this recently when a controversy blew up in Canada about a study of Quebec’s universal daycare subsidy: the author of the study described the introduction of the subsidy as a “natural experiment,” and then this same economist went on to show himself completely confused about the nature of experiments! So, if you will forgive me, I am going to take a little detour through this study about daycare before coming back to the topic of speech therapy, with the goal of demonstrating why your own clients are not always the best source of evidence about whether your interventions are working, as counter-intuitive as this may seem.

Quebec introduced a universal daycare program in 1997, and a group of economists have published a few evaluations using data from the National Longitudinal Survey of Children and Youth (NLSCY), one looking at anxiety in younger kids and the more recent one describing crime rates when the kids were older. The studies are rather bizarre in that the children who accessed daycare (or not) do not provide data for these studies; rather, province-wide estimates of variables such as the likelihood of using daycare and childhood anxiety are obtained from the NLSCY, a survey of 2000 children from across Canada that is conducted every two years with the children followed longitudinally. The economists then estimated province-wide youth criminal activity from a completely different survey rather than using the self-report measures from the NLSCY. Differences in these estimates (see post-script) from pre-daycare cohorts to post-daycare cohorts are compared for Quebec versus the ROC (rest of Canada, which does not have any form of universal childcare program). One author described the outcome this way: “looking at kids in their teens, we find indicators of health and life satisfaction got worse, along with teens being in more trouble with the law.”

The statistical analysis and design are so convoluted that I was actually hoodwinked into thinking youth crime was going up in Quebec, when in fact youth crime was declining, just not as fast as in the ROC. Youth crime legislation and practices vary so dramatically across provinces, and particularly between Quebec and the ROC, that it is difficult indeed to compare rates of youth crime using the variable cited in the NBER paper (rates of accused or convicted youths; for discussion see Sprott). The authors then attribute this so-called rise but actual decline in crime to “the effects of a sizeable negative shock to non-cognitive skills due to the introduction of universal child care in Quebec”.

Notwithstanding this nonsense summary of the results of these really weird studies, the most inaccurate thing that Milligan said is that this study was a “natural experiment” which is “akin to a full randomized experiment such as Perry Preschool, but on a larger scale”. But the thing is, a “natural experiment” is not an experiment at all, because when the experiment is natural you cannot determine the cause of the events that you are observing (although when you have enough high quality pairs of data points you can sometimes make decent inferences, NOT the case in this particular study). The economists know how to observe and describe naturally occurring events. They can estimate an increase in daycare use and changing rates of child anxiety and youth crime convictions in Quebec versus the ROC, and compare changing rates of things between these jurisdictions. What they cannot do is determine why daycare use changed, or reported anxiety changed, or convictions for youth crime changed. To answer the question “why”, you need an experiment. What’s more, experiments can only answer part of the “why” question.

So let’s return to the topic of speech therapy. We conduct small scale randomized controlled trials in my lab precisely because we want to answer the “why” question. We describe changes in children’s behavior over time, but we also want to know whether one or more of our interventions were responsible for any part of that change. In our most recently published RCT we found that even children who did not receive treatment for phonological awareness improved in this skill, but children who received two of our experimental interventions improved significantly more. Control group children did not change at all in articulation accuracy, whereas experimental group children did improve significantly. In scatterplots posted on my blog, we also showed that there are individual differences among children in the amount of change that occurs within the control group that did not experience the experimental treatments and within the experimental groups. Therefore, we know that there are multiple influences on child improvement in phonological awareness and articulation accuracy, but our experimental treatments account for the greater improvement in the experimental groups relative to the control group. We can be sure of this because of the random assignment of children to treatments, which controls for history and maturation effects and other potential threats to the internal validity of our study. How do we apply this information as speech-language pathologists when we are treating children, one at a time?

When a parent brings a child for speech therapy it is like a “natural experiment”. The parent and maybe the child are concerned about the child’s speech intelligibility and social functioning. The parent and the child are motivated to change. Coming to speech therapy is only one of the changes that they make, and given long waits for the service it is probably the last in a series of changes that the family makes to help the child. Mum might change her work schedule, move the child to a new daycare, enlist the help of a grandparent, enroll the child in drama classes, read articles on the internet, join a support group, begin asking her child to repeat incorrect words, check out alliteration books from the library, and so on. Most importantly, the child gets older. Then he starts speech therapy and you put your shiny new kit for nonspeech oral motor exercises to use. Noticing that the child’s rate of progress picks up remarkably relative to the six month period preceding the diagnostic assessment, you believe that this new (for you) treatment approach “works”.

What are the chances? It helps to keep in mind that a “natural experiment” is not an experiment at all. You are in the same position as the economists who observed historical change in Quebec and then tried to make causal inferences. One thing they did was return to the randomized controlled trial literature, ironically citing the Perry Preschool Project, which showed that a high quality preschool program reduced criminality in high risk participants. On the other hand, most RCTs find no link between daycare attendance and criminal behavior at all, so their chain of causal inferences seems particularly unwise. In the clinical case, you know that the child is changing, maybe even faster than a lot of your other clients. You don’t know which variable is responsible for the change. But you can guess by looking at the literature. Are there randomized controlled trials indicating that your treatment procedures cause greater change relative to a no-treatment or usual care control group? If so, you have reason for optimism. If not, as in the case of nonspeech oral motor exercises, you are being tricked by maturation effects and history effects. If you have been tricked in this way you shouldn’t feel bad, because I know some researchers who have mistaken history and maturation effects for a treatment effect. We should all try to avoid this error, however, if we are to improve outcomes for people with communication difficulties.

*******************************************

PS If you are interested in the difference-in-differences research method, here is a beautiful YouTube video about this design, used to assess handing out bicycles to improve school attendance by girls in India. In this case the design includes three differences (a difference-in-difference-in-differences design) and the implementation is higher quality all around compared to the daycare study that I described. Nonetheless, even here, a randomized controlled trial would be more convincing.
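For readers new to the method, the basic two-group, two-period arithmetic is tiny; the sketch below uses invented outcome rates that do not come from either study discussed above.

```python
# A toy difference-in-differences estimate: compare the pre-to-post change
# in the "policy" region against the same change in a comparison region,
# so that time trends shared by both regions cancel out.
outcomes = {
    #                  pre    post   (hypothetical outcome rates)
    "quebec":         (20.0, 26.0),
    "rest_of_canada": (21.0, 24.0),
}

change_quebec = outcomes["quebec"][1] - outcomes["quebec"][0]
change_roc = outcomes["rest_of_canada"][1] - outcomes["rest_of_canada"][0]

did_estimate = change_quebec - change_roc
print(f"difference-in-differences estimate = {did_estimate:+.1f}")
# Note: without random assignment this removes only the trends common to
# both regions; it cannot rule out region-specific history effects.
```

As the final comment notes, the subtraction controls for shared trends but not for anything that changed in one region alone, which is precisely why this design is weaker than a randomized trial.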

Full Engagement with Evidence and Patients in SLP Practice

This is the second in my promised series on the topic of Evidence Based Practice versus Patient Centred Care. I will respond to the implication that EBP constrains the choices of the patient and the SLP, thus conflicting with PCC and minimizing the role of clinical expertise. I will argue that the evidence is less constraining than generally believed. The SLP must understand the general principles revealed by the research and understand the patient’s needs well enough to apply the evidence. As Sue Roulstone put it, “Clinical expertise supports the skillful application of that research to the practice situation.”

It is necessary to know the research deeply in order to understand the clinical implications beyond the simple “what works” messages that might be found in the paper abstracts. I will provide two examples. Recently Françoise Brosseau-Lapré and I published a randomized controlled trial (RCT) in which we reported that two of four interventions worked well (one focusing on intense speech practice and the other focusing on listening and responding to speech inputs). Both interventions involved six weeks of individualized speech therapy followed by a six week parent-education program. We have been talking about this trial in the local community for some time now, and unfortunately the take-away message seems to have been that parent training works. SLPs are using the trial as a justification for the implementation of parent training initiatives without an accompanying individualized therapy component. And yet, a careful reading of our work reveals that the trial is not, ultimately, about parent training at all. Our research was about alternative means for providing intense focused inputs to children with phonological disorders. In one way the results of the trial increased options for SLPs and families by showing that two approaches can be equally effective, as shown in a previous blog post; at the same time the trial should constrain SLP choices somewhat by focusing attention on intensity and coherence in the selection of treatment procedures.

Another example is provided by the parent-administered focused stimulation program for treatment of morphosyntax delays described by Marc Fey and Pat Cleave many years ago. They reported that a parent-implemented intervention could be as effective as an SLP-implemented intervention while being very efficient, requiring about half as much SLP time per child. I remember that shortly thereafter many SLP programs in Canada cut their ration from 16 to 8 weeks/hours per child and strongly promoted parent-mediated interventions as the primary mode of care. The thing is that the parent program described in Fey’s trial required 21 hours per child to implement! Furthermore, a follow-up study revealed that parents were not able to implement the intervention as effectively when their children’s goals became more complex.

There are many studies that involve parent-implemented interventions, and overall what they tell us is that (1) intensity matters – an effective speech and language intervention gets as much “data” to the child as possible; (2) parent involvement in the therapy program is just one way to achieve an increase in intensity; (3) parents can be effective if they are very well trained and supported and the home program is focused on the achievement of specific goals; (4) training and support take time and effort on the part of the SLP; and (5) not all parents are equipped to implement speech therapy for all goals. There is a lot of nuance here, and SLPs should be empowered to apply this research evidence to meet their clients’ needs.

I know that SLPs prefer to make decisions in the best interests of their patients without being constrained by evidence based care guidelines. But the flipside of understanding the evidence well is understanding your patient’s needs equally well. When I was a young SLP I made many mistakes in this regard, until I learned some things the hard way. I recall that when my daughter was small and needed a lot of therapies, the various providers fervently believed that it was best if I provided all those practice exercises at home in the natural environment. Furthermore, somebody, without consulting me, decided that a speech-language pathologist was not a necessary member of my daughter’s rehab team because I could fulfil that role myself! I ended up in the nurse coordinator’s office crying, ‘I am her mother, not her therapist.’ Mercifully, a whole new plan was put in place. The thing is that for PCC to work you have to really understand what the evidence says, and then you have to understand the needs of your patients.

An excellent qualitative study that shows how breakdowns in PCC lead to poor outcomes is found in Annals of Family Medicine.  The authors identify four archetypes of engagement in shared decision making (SDM): full engagement (SDM present, subjective experience positive); simulated engagement (SDM present, subjective experience negative); assumed engagement (SDM absent, subjective experience positive); and nonengagement (SDM absent, subjective experience negative). I strongly recommend reading the paper and the vignettes made available in the online supplemental material. They make for fascinating reading. Full engagement is characterized by shared decision making and mutual trust. The other situations often involve one or both parties making assumptions about the other person’s feelings or motives, leading to a lack of disclosure of important information.

The research evidence might tell you that an intervention can work. Patient centred care is necessary to make sure that it will work.

Shared Decision Making in SLP Practice

This is the first in my promised series on the topic of Evidence Based Practice versus Patient Centred Care. This blogpost is a response to the concern that patients’ preferences may lead to the selection of treatment options that are not evidence based. This is increasingly possible given the large amount of consumer health information on the internet. However, this information, while sometimes emanating from credible sources, tends to be supplier- rather than consumer-driven, and information by itself does not help patients make the best decisions about their own care. Furthermore, enabling patients to “choose” options that are not in their own best interests can hardly be described as “patient centred care”.

The current best model for PCC is shared decision making, in which there is a two-way exchange of information between the SLP and the patient and/or the patient’s family. This decision making model can be contrasted with the paternalistic model, in which the health care provider makes the decision with limited input from the patient, and the informed patient choice model, in which the patient is provided with information and then expected to make their own decision independently.

The SLP provides expert information and decision making support, and the patient/family provides information about preferences and expectations. The exchange should clarify expectations about the goal of the intervention, identify evidence based interventions and service delivery options, highlight the benefits and risks of those options, and reveal the patient’s values, supports, and decision making needs. A tool that has been developed to help patients make decisions about varying health care options is the Ottawa Personal Decision Guide, available in several languages. Note that the patient is expected to review the information with the health care provider so that they can make the decision together.

We can consider this model in the context of an extremely common but controversial treatment decision. Let’s say that you have assessed an 18-month-old boy who has low normal receptive language skills and produces only three words. The child has a severely restricted phonetic repertoire when considering both meaningful and nonmeaningful utterances (one stop, one nasal, one vowel). Mum is especially concerned because her own brother is dyslexic and her sister’s son is currently receiving speech-language services at school. You discuss three potential treatment options with the parent in view of the potential risks and benefits and the mother’s preferences:

  1. Watchful waiting: this is the standard recommendation of the hospital in this case because the evidence suggests that late talkers tend to have a good outcome and 18 months is too early to predict language impairment on the basis of late talking. The risk is that the child will not catch up, and there is evidence that reading outcomes are better in the case of late talking plus a family history of dyslexia when the child achieves good vocabulary scores at an early age. Mum is not favourable to this option because her brother and nephew both struggled at school, academically and socially, and she wishes to prevent that outcome for her son at all costs.
  2. Referral to a parent group that is offered by the local health unit, in which the SLP teaches the parents responsive interaction techniques. The benefit is that she may learn some useful strategies for stimulating her child’s language at home and she will have the opportunity to connect with parents of other late-talking children. If her son’s language skills do not start to increase rapidly, she will be connected to a community resource that can act quickly. The risk is that she will invest time in an intervention that has not been shown to be more effective than watchful waiting over the long term for this population. Mum is not favourable to this option because her husband is about to have hip surgery and he will need her support over the next three months. She does not feel that she will have the time resources to participate in this program at the present time.
  3. Private therapy, using her work-based insurance plan, with grandma taking the son to therapy sessions (12 sessions are covered under the plan). The benefit is that the private SLP can focus on the child’s speech sound repertoire as well as his expressive vocabulary development. Mum will also know that she is doing all she can to accelerate her son’s language development. Grandma will likely be spending a lot of time with her grandson due to the other stresses currently in the family, and she has the time resources to attend and follow through on the therapy sessions. The risks are associated with the opportunity costs – there is a possibility that the child’s language development may pick up without therapy, and the child and family might be better off engaging in other valuable pursuits. Mum discusses this option with the rest of the family and they decide to proceed, given the family history of slow language development and dyslexia. They are particularly anxious to prevent the need for therapy during school if at all possible. The SLP agrees that this is a reasonable course of action given the concomitant speech delay (restricted phonetic repertoire) and family history, although she would have been willing to wait six months and then reassess. She helps the family to identify an appropriate private service provider. She recommends a hospital reassessment in one year’s time.

The goal is to make decisions jointly with the patient, taking into account the patient’s needs, the family’s values and particular circumstances, the benefits and risks of the possible actions as informed by the research evidence, and the constraints of the service providers in the local community. In a future blogpost I will bring in some qualitative research on the nature of these interactions and the features that lead to a successful outcome.

Evidence Based Practice versus Patient Centred Care

@WeSpeechies is again proving to be a source of fascinating conversation. During the week of October 25-31 (David Kinnane, “Consumer protections and speech pathology services: Are we doing the right things at the right times?”), an excellent paper by Sue Roulstone was posted, “Evidence, expertise, and patient preference in speech-language pathology”, in the context of a discussion about whether evidence based practice (EBP) is inconsistent with patient centred care (PCC). There are a number of loosely connected propositions that might lead to this conclusion, and I am going to list them here and then discuss them in separate blogposts. Ultimately I will conclude that patient centred care demands that we practice in an evidence based manner.

The arguments in favour of the idea that EBP and PCC are in conflict come from both directions: either there is a worry that the patient’s preferences will be in conflict with the evidence, or there is concern that applying the evidence means ignoring the patient, not to mention clinical expertise.

The first objection is that PCC means selecting treatment approaches and practices in accordance with the patient’s preferences and values. I will argue in my first blogpost that there are several different models of PCC, but none of them are the same as “consumer driven decision making”, in other words, simply doing what the patient asks. The preferred model, “shared decision making”, is fully consistent with EBP.

A second objection is that EBP implies that there is only one treatment option for every case; therefore there is no room for taking the patient’s preferences and values into account. I will argue that the evidence is nearly always about probabilities and general principles. Therefore it is the role of the SLP to work with the patient to determine which evidence is most applicable and then jointly choose among the best alternative courses of action.

A third perspective is that the most patient centred form of care is to apply a treatment to each individual patient and then watch to see if it “worked” because after all, RCTs only apply to groups of other patients, not your current specific patient. Therefore, clinical expertise should be added to the evidence hierarchies as a form of evidence for treatment efficacy. I will argue that you can never determine treatment efficacy by simply observing change in a single patient.

Finally, the arguments made in all of these blogposts will seem a bit abstract. What do you do when the patient persists in a course of action that appears to be in conflict with all evidence? I will recount my experience with this situation and suggest a course of action. As always I invite your comments.

Which SLPs are Effective?

In my last two blog posts I have been talking about how to ensure that your speech therapy program “works”; in other words, how can you be sure that what you do is effecting change in your patient over and above the change that would occur due to maturation and history effects alone? I have suggested that if you choose treatment approaches that have been validated as effective via randomized controlled trials, and if you demonstrate that your patient is improving, you can be reasonably sure that you are having a positive effect on your patient. I have further cautioned that you need to read the original research carefully and implement the treatment approaches in accordance with the treatment efficacy trials with respect to procedures, treatment intensity, and so on in order to ensure that you will get the same effect. These details – the treatment procedures that you decide to implement with your patient – are referred to as the specific ingredients of your treatment program. Throughout my research career I have been focused on the relative efficacy of these specific ingredients: is it effective to use perception training or stimulability training or prepractice with visual cues in comparison to usual care? For example, one-year follow-up of the children treated in Rvachew, Nowak, and Cloutier (2004) showed that 50% of the children who received usual care + speech perception training started school with normalized speech versus only 19% of the children who received usual care + dialogic reading. I obviously feel that an important role of the SLP is to know the scientific literature and choose the right specific ingredients for their patients.

In contrast, Ebert and Kohnert (2010) point out that the effectiveness of speech therapy might also be due to “common factors,” which include (following Grencavage and Norcross) the patient, the clinician, the patient-clinician alliance, change processes, and treatment structure. Studies on the effectiveness of teachers and psychotherapists are starting to appear with increasing frequency, but I am not aware of any published systematic studies of SLP effectiveness that take a “common factors” approach. Ebert and Kohnert re-analyzed the data from one of my studies (Rvachew & Nowak, 2001, discussed in my previous blog) and concluded that although target selection strategy accounted for a larger proportion of variance in outcomes, individual differences in clinician effectiveness accounted for 20% of the variance in outcomes. These researchers also surveyed SLPs in Minnesota and asked them to rate various factors for their importance in determining client outcomes. The results showed that SLPs weight client-clinician factors very highly, with “rapport” being the item rated as having the greatest impact on therapeutic outcomes. Recently Geraldine Wotton wrote a blog post on the power of the therapeutic relationship that expresses this commonly held view. The thing is, however, that I knew the SLPs who provided the intervention in Rvachew and Nowak, and I can tell you that there were no discernible differences in rapport between these SLPs and their clients. Furthermore, at the time I was the research coordinator for allied health in the hospital and I was responsible for the client satisfaction questionnaire. Families reported high levels of satisfaction with their clinicians while reporting varying levels of satisfaction with their child’s outcomes. I was always impressed by the fact that parental satisfaction with their child’s speech outcomes and objective measures of child outcomes were highly correlated (given that I was running several RCTs at the time, I could look at this) but uncoupled from the uniformly high satisfaction ratings for the relationship with the therapist. I certainly agree that a strong positive relationship between SLPs and their patients is an important factor in treatment efficacy – I just don’t agree that it explains variations in treatment outcomes. Think about this carefully: SLPs are selected to have strong interpersonal skills and we are very good at establishing rapport with our patients, but we do not all get the same results. There is something else going on here.

Françoise and I recently completed an RCT involving 72 francophone children in which the clinicians were student SLPs from McGill. We have six video-recorded therapy sessions for each child, representing more than a dozen student SLPs. Unfortunately we have run out of funds, so we haven’t been able to analyze all the video, but two students, Amanda Langdon and Hannah Jacobs, obtained summer research bursary funds from the Faculty of Medicine to conduct a pilot project in which they coded the videos for six student clinicians, attempting to identify common factors that might differentiate between more and less effective SLPs. In this case the supervising clinical educators told us which student SLPs were more or less effective in their opinion, rating them as “accomplished” or “struggling”. Then Hannah and Amanda coded the videos for factors related to the clinician, the clinician-child alliance, and change processes.

Interestingly, the factors that differentiated “accomplished” versus “struggling” student SLPs were not those that would be ascribed to the “clinician” category in Grencavage and Norcross’ model. Rather, we found large differences in variables that could be categorized as “change processes”. In Grencavage and Norcross’ paper a lot of the factors categorized as change processes are specific to psychotherapy, but some are common to speech therapy as well, for example “acquisition and practice of new behaviors”, “provision of a therapeutic rationale”, “naming the problem”, and “contingency management”. Applied to speech therapy, we can hypothesize that SLPs may vary in their ability to communicate and/or negotiate the goals of the therapy program with the patient, maintain a high response rate during sessions so as to ensure that most of the session is spent practicing new behaviors, and manage contingencies so that the patient receives appropriate feedback about their responses during practice.

We observed changes in these skills across the six-week treatment program for student SLPs who were rated as “accomplished” or “struggling” by their supervising clinical educators. We found that all the students increased the amount of time devoted to direct therapy in their sessions during the course of their practicum. Accomplished students began with good contingency management skills and improved those skills to an even higher level after six weeks of practice. On the other hand, struggling students began and ended the practicum with poor contingency management skills – in particular, these students did not provide appropriate feedback after incorrect responses by their clients. Interestingly, in comparison to struggling students, accomplished students spent more rather than less time in “off task” behavior, which may mean that they had more resources available for the kind of conversation that serves to establish rapport with clients. Struggling students spent a lot of time “manipulating materials”, and their disorganized approach to the therapy sessions may therefore have interfered with the SLP-client alliance. Unfortunately this study is tiny; the coding is hugely time consuming and expensive. However, I think that it is crucial for our profession that resources be expended to study these therapeutic processes and the means to help students acquire these skills during their preprofessional training.

I’d love to hear from student SLPs about your experiences learning these skills. What could your clinical educators do to help? I’d also love to hear from practicing SLPs: do you agree that skill in engaging change processes is an important factor in therapeutic effectiveness? Which change processes do you think are most important in speech therapy?

Don’t get tricked: Why it pays to read original sources.

In my last blog post I suggested that you can have confidence in the effectiveness of your clinical practice if you select treatment practices that have been validated by research. Furthermore, I provided links to some resources for summaries of research evidence. In this blog post I want to caution that it is important to read the original sources and to view the summaries, including meta-analyses, with some skepticism. Excellent clinical practice requires a deep knowledge of the basic science that is the foundation for the clinical procedures that you are using. Familiarity with the details of the clinical studies that address the efficacy of those procedures is also essential. I will provide two examples where a lack of familiarity with those details has led to some perverse outcomes.

Two decades ago it was quite common for children receiving services from publicly funded providers in Canada to receive 16-week blocks of intervention. Then we went through the recession of the nineties and there was much pressure on managers in health care to cut costs. Fey, Cleave, Long, and Hughes (1993) conveniently published an RCT demonstrating that a parent-administered intervention was just as effective at improving children’s expressive grammar as direct intervention provided by the SLP; the icing on the cake was that the parent-provided service required half as many SLP hours as the direct SLP-provided service. All across Canada, direct service blocks were cut to 8 weeks and parent-consultation services were substituted for the direct therapy model. About a decade later I made a little money myself giving workshops to SLPs on evidence-based practice. The audiences were always shocked when I presented the actual resource inputs for Fey et al.’s interventions: (1) direct SLP intervention, costing 40 hours of SLP time per child over 20 weeks, versus (2) parent-administered intervention, costing 21 hours of SLP time per child over 20 weeks. In other words, even the “cheaper” parent-administered model consumed far more SLP time per child than the truncated service blocks that replaced direct therapy. So you see, the SLPs had been had by their managers! The SLPs would have been better positioned to resist this harmful change in service delivery model if they had been aware of the source of the claim that you could halve your therapy time by implementing a home program and get the same result. I don’t know that our profession could have changed the situation by being more knowledgeable about the research on service delivery models, because the political and financial pressures at the time were extreme, but at least we and our patients would have had a fighting chance!
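To make the arithmetic concrete, here is a trivial back-of-envelope calculation in Python using only the resource inputs reported above:

```python
# Back-of-envelope comparison of SLP time per child under the two
# service models, using the resource inputs reported above.
direct_hours, parent_hours, weeks = 40, 21, 20

print(f"direct model:  {direct_hours / weeks:.2f} SLP hours/child/week")  # 2.00
print(f"parent model:  {parent_hours / weeks:.2f} SLP hours/child/week")  # 1.05
print(f"'half as many hours' still means {parent_hours} SLP hours "
      f"per child over {weeks} weeks")
```

Half of a generous resource input is still a substantial resource input, which is exactly the detail that never survived the trip from the original paper to the managers’ spreadsheets.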

Another reason that you have to be vigilant is that the authors of research summaries have been known to engage in some sleight of hand. An example is the chapter on the complexity approach by Baker and Williams in the book Interventions for Speech Sound Disorders in Children. This book is pretty cool because each chapter describes a different approach and is usually accompanied by a video demonstration. Each author was asked to identify all the studies that support the approach and place them on a “levels of evidence” table. As indicated in a previous blog post, the complexity approach to selecting targets for intervention is supposedly supported by a great many studies employing the multiple-probe design, a fairly low level of evidence because it does not control for maturation or history effects. In the Baker and Williams “levels of evidence” table all of these single-subject studies are listed, so it looks pretty impressive. The evidence looks even more impressive when you notice that two randomized controlled trials appear at a higher level on the table. The table leads you to believe that the complexity approach is supported by a large amount of data at the highest level of evidence, until you realize that neither of those two RCTs, Dodd et al. (2008) and Rvachew and Nowak (2001), supports the complexity approach. Even when you read the text, it is not clear that these RCTs fail to support the approach, because the authors are a bit waffly about this fact. Before I noticed this table I couldn’t understand why clinicians would tell me proudly that they were using the complexity approach because it is evidence based. It is pretty hard to keep up with the evidence when you have to watch out for tricks like this!

In the comments to my last blog post there were questions about how you can be sure that your treatment is leading to change beyond maturation alone. An RCT is designed to answer just that question, so I am going to discuss the results of Rvachew and Nowak (2001), as detailed in a later paper: Rvachew, S. (2005). Stimulability and treatment success. Topics in Language Disorders, 25(3), 207–219. Unfortunately this paper is hard to get, so a lot of SLPs are not aware of the implications of our findings for the central argument that motivates the use of the complexity approach to target selection. Gierut (2007) grounds the complexity approach in learnability theory: paradoxically, the notion that language is essentially unlearnable and thus the structure of language must be innately built in. Complex language inputs are necessary to trigger access to this innate knowledge. Because of the hierarchical structure of this built-in knowledge, exposure to complex structure will “unlock the whole,” having a cascading effect down through the system. On the other hand, she claims that “it has been shown that simpler input actually makes language learning more difficult because the child is provided with only partial information about linguistic structure” (p. 8).

We tested this hypothesis in our RCT. Each child received a 15-item probe of their ability to produce all the consonants of English in the initial, medial, and final positions of words. The phonemes that the child had not mastered were then ordered according to productive phonological knowledge and developmental order. Michele Nowak selected potential treatment targets for each child from both ends of this continuum. I independently and blindly (without access to the child’s test information or knowledge of the targets that Michele had selected) randomly assigned each child to a treatment condition, either ME or LL. In the ME condition the child was treated for phonemes for which the child had the most productive phonological knowledge, which are usually early developing; in the LL condition the child was treated for phonemes for which the child had the least productive phonological knowledge, which are usually late developing. The children were treated in two six-week blocks, with a change in treatment targets for the second block using the same target selection procedure. The figure below shows probe performance for several actual and potential targets per child: the phoneme being treated in a given block, the phoneme to be treated in the next block (or that was treated in the previous block), and the phonemes that would have been treated if the child had been assigned to the other treatment condition. As a clinician, I am interested in learning and retention of the treated phonemes, relative to maturation. As a scientist testing the complexity approach, Gierut is interested in cross-class generalization, regardless of whether the child learns the targeted phoneme. We can look at these two outcomes across the two groups.
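For readers who want to see what independent, blind random assignment amounts to procedurally, here is a toy sketch in Python. It is not the study’s actual allocation procedure; the point is simply that the allocator touches only child IDs, never the probe data or the candidate targets.

```python
# Toy sketch of concealed random allocation: the allocator sees child
# IDs only, never probe scores or candidate treatment targets.
import random

def assign_conditions(child_ids, seed=2001):
    """Randomly split child IDs into two equal-sized arms, ME and LL."""
    rng = random.Random(seed)  # fixed seed only so the demo is reproducible
    ids = list(child_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {cid: ("ME" if i < half else "LL") for i, cid in enumerate(ids)}

print(assign_conditions([f"child{n:02d}" for n in range(1, 9)]))
```

Because the assignment depends on nothing but the IDs and the random draw, any baseline difference between arms can arise only by chance, which is what licenses the causal comparisons that follow.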

Let’s begin with the question of whether the children learned the target phonemes and whether there is any evidence that this learning was greater than what we would see with maturation alone. In the chart, learning during treatment is shown by the solid lines, whereas dotted lines indicate periods when those sounds were not being treated. A1 is the assessment before the first treatment block, A2 is the assessment after the first block and before the second, and A3 is the final assessment after the second treatment block. On the left-hand side, we see that the ME group was treated during the first block for phonemes that were mastered in one word position but not in the other two (average score of 6/15 prior to treatment). The slopes of the solid versus dotted lines show you that change from A1 to A2 was greater than change from A2 to A3; these targets showed more change while they were being treated in the first block than when they were not being treated during the second block. During the second block, we treated slightly harder sounds that were not mastered in any word position, with a starting probe score of 3/15 on average. These phonemes improved from A1 to A2 even though they were not being treated, but the rate of improvement is much higher between A2 and A3, when they were being treated. Interestingly, the solid lines are parallel to each other and the dotted lines are parallel to each other, with the solid (treated) slopes the steeper of the two; this replication across blocks is your treatment effect, the proof that treating is more effective than not treating.

As further proof we can look at the results for the LL group. We have a similar situation with parallel solid and dotted lines for the phonemes that were treated in the first and second blocks, at the bottom of the chart. There is less improvement for these phonemes because they were very difficult, unstimulable, late-developing sounds (targets consistent with the complexity approach). Nonetheless, the outcomes are better while the phonemes are being treated than when they are not (in fact there are slight regressions during the blocks when these sounds are not being treated). At the same time, the phonemes for which the children have the most knowledge improve spontaneously (Gierut would attribute this change to cross-class generalization whereas I attribute it to maturation). The interesting comparison, however, is across groups. Notice that the ME group shows a change of 4 points for treated “most knowledge” phonemes versus a change of 3 points for the untreated “most knowledge” phonemes in the LL group. This is not a very big difference, but nonetheless, treating these phonemes results in slightly faster progress than not treating them.
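If the parallel-slopes logic is easier to follow in numbers, here is a toy calculation. The A1 scores (6/15 and 3/15) come from the text above; the A2 and A3 values are invented to mirror the pattern in the chart, so treat the printout as illustrative only.

```python
# Toy illustration of reading the treatment effect from phase-specific
# gains. A1 scores come from the text; A2 and A3 values are INVENTED
# to mirror the chart's pattern.
probes = {
    # name: (A1, A2, A3, phase in which the phoneme was treated)
    "ME block-1 targets": (6, 10, 11, "A1->A2"),
    "ME block-2 targets": (3, 4, 8, "A2->A3"),
}
for name, (a1, a2, a3, treated_phase) in probes.items():
    gain_1, gain_2 = a2 - a1, a3 - a2
    treated = gain_1 if treated_phase == "A1->A2" else gain_2
    untreated = gain_2 if treated_phase == "A1->A2" else gain_1
    print(f"{name}: treated-phase gain = {treated}, "
          f"untreated-phase gain = {untreated}")
# Both phoneme sets gain more during their treated phase; the untreated-
# phase gains estimate maturation, so the gap is the treatment effect.
```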

In our 2001 paper we reported that progress for treated targets was substantially better for children in the ME condition than for children in the LL condition (in the latter group, the children remained unstimulable for 45% of targets after six weeks of therapy). However, the proponents of the complexity approach are not interested in this finding. If the child does not learn the hard target, that is an acceptable price to pay so long as cross-class generalization occurs and the child learns easier untreated phonemes. If you look at the right-hand side of the chart by itself, the chart can be taken as support for the complexity approach, because spontaneous gains are observed for the “most knowledge” phonemes. The problem is that the proponents of this approach have argued that exposure to “simpler input actually makes language learning more difficult”; it is literally supposed to be impossible to facilitate learning of harder targets by teaching simpler targets. Therefore the real test of the complexity approach is not in the right-hand chart. We have to compare the rate of change for the unstimulable targets across the two groups. It is apparent that the gain for UNTREATED unstimulable phonemes (ME group, gain = 2) is double that observed for TREATED unstimulable phonemes (LL group, gain = 1). The results shown on the left clearly show that treating the easier sounds first facilitated improvement for the difficult phonemes. I have explained this outcome by reference to dynamic systems theory in Rvachew and Bernhardt (2010). From my perspective, it is not just that my RCT shows that the complexity approach doesn’t work; it is that my RCT is part of a growing and broad-based literature that invalidates the “learnability approach” altogether. Françoise and I describe and evaluate this evidence, while promoting a developmental approach to phonology, in our book Developmental Phonological Disorders: Foundations of Clinical Practice.
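To pull the four gain scores discussed above into one place (all four numbers are taken from the text; only the arrangement is mine):

```python
# The four gain scores discussed above (probe points), arranged so that
# the two critical comparisons are explicit.
gains = {
    ("most-knowledge phonemes", "treated (ME group)"):   4,
    ("most-knowledge phonemes", "untreated (LL group)"): 3,
    ("unstimulable phonemes",   "untreated (ME group)"): 2,
    ("unstimulable phonemes",   "treated (LL group)"):   1,
}
for (phonemes, condition), gain in gains.items():
    print(f"{phonemes:25s} {condition:22s} gain = {gain}")
# Comparison 1: 4 > 3, so treating the easy sounds beats leaving them alone.
# Comparison 2: 2 > 1, so the hard sounds improved MORE when the easy
# sounds were treated instead, contradicting the complexity prediction.
```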

 

[Figure: Probe Scores for Treated and Untreated Phonemes]
The larger point that I am trying to make here is that SLPs need to know the literature deeply. The evidence summaries tend to take a bit of a “horse race” approach, grading study quality on the basis of sometimes questionable checklists and then drawing conclusions on the basis of how many studies can be amassed at a given level of the evidence table. This is not always a clinically useful practice. It is necessary to understand the underlying theory, to know the details of the methods used in those studies, and to draw your own conclusions about the applicability of the treatments to your own patients. This means reading the original sources. To achieve this level of knowledge we need to reorganize our profession to encourage a greater number of specialists in the field, because no individual SLP can have this depth of knowledge about every type of patient they might treat. But it should be possible to encourage the development of specialists who are given the opportunity to stay current with the literature and to provide consultation services to generalists on the front lines. Even if we could ensure that SLPs had access to the best evidence as a guide to practice, however, there are some “common factors” that have a large impact on outcomes even when treatment approach is controlled. In my next post I will address the role of the individual clinician in ensuring excellent client outcomes.