Research Engagement with SLPs

I still have days when I miss my former job as a research coordinator in a hospital speech-language department. As a faculty researcher, I try to embed my research in clinical settings as often as I can, but it is not easy. Administrators in particular, and speech-language pathologists on occasion, may be leery of the time requirement, and they often worry that the project might shine too bright a light on everyday clinical practices that may not meet the highest evidence-based standard. I always try to design projects that are mutually beneficial to the research team and the clinical setting. As potential support for the promise of mutual benefit, I was pleased to read a recent paper in the British Medical Journal, “Does the engagement of clinicians and organizations in research improve healthcare performance: a three-stage review”. On the basis of an hourglass-shaped review, using an interpretive synthesis of the literature on the topic, Boaz, Hanney, Jones, and Soper drew the following conclusions:

Some papers reported an association between hospital participation in research and improved patient outcomes. Some of these findings were quite striking: for example, survival from ovarian cancer was significantly worse in “non study hospitals” than in hospitals involved in research trials (my sister-in-law died from this terrible disease last month, so I couldn’t help but notice this).

A majority of papers reported an association between hospital participation in research and improved processes of healthcare. This includes the adoption of innovative treatments as well as better compliance with best practice guidelines. 

Different causal mechanisms may account for these findings when examining impacts at the clinician versus organization level. For example, involvement in a clinical trial may include staff training and other experiences that change clinician attitudes and behaviors. Higher up, participation in the trial may require the organization to acquire new infrastructure or adopt new policies.

The direction of cause and effect may be difficult to discern. Specifically, a hospital that is open to involvement in research may have a higher proportion of research-active staff with unique skills, specializations, or personal characteristics. These characteristics may jointly improve healthcare outcomes in that setting and make those staff more amenable to engagement with research.

This last point resonates well with my experience at the Alberta Children’s Hospital in the ’80s and ’90s. The hospital had a very large SLP department, up to 30 SLPs, permitting considerable specialization among us. Furthermore, as a teaching hospital we had a good network of linkages to the two universities in the province and to a broad array of referral sources. Our working model, which was based on multidisciplinary teams, also supported involvement in research. Currently, in Montreal, I am able to set up research clinics in healthcare and educational settings from time to time, but none of them have the resources that we enjoyed in Alberta three decades ago.

Of course, direct involvement in research is not the only way for SLPs to engage with research evidence. Another paper, published in Research in Developmental Disabilities, used a survey to explore “Knowledge acquisition and research evidence in autism.” Carrington et al. found that researchers and practitioners had somewhat different perspectives. The researcher group (n=256) and the practitioner group (n=422) identified sources of information that they used to stay up to date with current information on autism. Researchers were more likely to identify scientific journals and their colleagues, whereas practitioners were more likely to identify conferences/PD workshops and non-academic journals. Respondents also identified sources of information that they thought would help practitioners translate research to practice. Researchers thought that nontechnical summaries and interactions with researchers would be most helpful. Practitioners identified academic journals as the best source of information (although the paper doesn’t explain why they were not using these journals as their primary source).

Finally, the most interesting finding for me was that neither group used or suggested social media as a helpful source of information. I thought this was odd because social media is a potential access point to academic journal articles, or summaries of those articles, as well as a way of interacting directly with scientists.

The authors concluded that knowledge translation requires that practitioners be engaged with research and researchers. For that to happen they suggest that “research should focus on priority areas that meet the needs of the research-user community” and that “attempts to bridge the research-practice gap need to involve greater collaboration between autism researchers and research-users.”

Given that the research shows that practitioner involvement in research actually improves care and outcomes for our clients and patients, I would say that it is past time to bring down barriers to researcher-SLP collaboration and bring research right into the clinical setting.


Maternal Responsiveness to Babbling

Over the course of my career, the most exciting change in speech-language pathology practice has been the realization that we can have an impact on speech and language development by working with the youngest patients, intervening even before the child “starts to talk”. Our effectiveness with these young patients depends on the growing body of research on the developmental processes that underlie speech development during the first year of life. Now that we know that the emergence of babbling is a learned behavior, influenced by auditory and social inputs, this kind of research has mushroomed, although our knowledge remains constrained because these studies are hugely expensive, technically difficult, and time-consuming to conduct. Therefore, I was very excited to see a new paper on the topic in JSLHR this month:

Fagan, M. K., & Doveikis, K. N. (2017). Ordinary Interactions Challenge Proposals That Maternal Verbal Responses Shape Infant Vocal Development. Journal of Speech, Language, and Hearing Research, 60(10), 2819-2827. doi:10.1044/2017_JSLHR-S-16-0005

The purpose of this paper was to examine the hypothesis that maternal responses to infant vocalizations are a primary cause of the age-related change in the maturity of infant speech during the period from 4 through 10 months of age. This time period encompasses three stages of infant vocal development: (1) the expansion stage, that is, producing vowels and a broad variety of vocalizations that are not speech-like but nonetheless exercise vocal parameters such as pitch, resonance, and vocal tract closures; (2) the canonical babbling stage, that is, producing speech-like CV syllables, singly or in repetitive strings; and (3) the integrative stage, that is, producing a mix of babbling and meaningful words. In the laboratory, contingent verbal responses from adults increase the rate at which infants produce mature syllables. Fagan and Doveikis asked whether this shaping mechanism, demonstrated in the laboratory, explains the course of infant speech development in natural interactions in real-world settings. They coded 5¼ hours of natural interactions between mothers and infants, recorded in the home environment from 35 dyads in a cross-sectional study. Their analysis focused on maternal behaviors in the 3-second interval following an infant vocalization, defined as a speech-like vowel or syllable-type utterance. They were specifically interested in whether maternal vocalizations in this interval would be responsive (prompt, contingent, and relevant to the infant’s vocal behavior, e.g., affirmations, questions, imitations) or nonresponsive (prompt but not meaningfully related to the infant’s vocal behavior, e.g., activity comments, unrelated comments, redirects). This is a summary of their findings:

  • Mothers vocalized 3 times more frequently than infants.
  • One quarter of maternal vocalizations fell within the 3 sec interval after an infant vocalization.
  • About 40% of the prompt maternal vocalizations were responsive and the remainder were nonresponsive, according to definitions derived from Bornstein et al. (2008).
  • Within the category of responsive maternal vocalizations, the most common were questions and affirmations.
  • A maternal vocalization of some kind occurred promptly after 85% of all infant utterances.
  • Imitations of the infant utterance (also in the responsive category) occurred after approximately 11% of infant utterances (my estimate from their data).
  • Mothers responded preferentially to speech-like vocalizations but not differentially to CV syllables versus vowel-only syllables. In other words, it did not appear that maternal reinforcement or shaping of mature syllables could account for the emergence and increase in this behavior with infant age.
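For the quantitatively minded, the response-window coding used by Fagan and Doveikis can be sketched in a few lines. Everything here is my own hypothetical illustration — the data format, the function name, and the toy transcript — not their actual coding scheme:

```python
# Sketch of a 3-second response-window coding, in the spirit of Fagan and
# Doveikis (2017). The data format and transcript are hypothetical.

WINDOW = 3.0  # seconds after an infant vocalization

def count_prompt_responses(utterances, window=WINDOW):
    """Count infant vocalizations that are followed by a maternal
    vocalization beginning within `window` seconds."""
    prompt = 0
    infant_total = 0
    for i, (speaker, onset) in enumerate(utterances):
        if speaker != "infant":
            continue
        infant_total += 1
        # scan forward for the next maternal utterance
        for later_speaker, later_onset in utterances[i + 1:]:
            if later_speaker == "mother":
                if later_onset - onset <= window:
                    prompt += 1
                break
    return prompt, infant_total

# Hypothetical toy transcript: (speaker, utterance onset in seconds)
transcript = [
    ("infant", 0.0), ("mother", 1.2),    # prompt maternal response
    ("infant", 10.0), ("mother", 15.0),  # response, but too late to count
    ("infant", 20.0), ("mother", 21.5),  # prompt maternal response
]
prompt, total = count_prompt_responses(transcript)
print(f"{prompt} of {total} infant utterances drew a prompt maternal response")
```

Run over a full session of time-stamped utterances, the same routine would yield the kind of prompt-response percentage reported above.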

One reason I like this paper so much is that some of the results accord with data that we are collecting in my lab, in a project coordinated by my doctoral student Pegah Athari, who is showing great skill and patience, having worked her way through 10 hours of recordings from 5 infants in a longitudinal study (3 months of recording from each infant, but covering ages 6 through 14 months overall). The study is designed to explore mimicry specifically, as a responsive utterance that may be particularly powerful (mimicry involves full or partial imitation of the preceding utterance). We want to be able to predict when mimicry will occur and to understand its function. In our study we examine the 2-second intervals that precede and follow each infant utterance. Another important difference is that we record the interactions in the lab but with no experimental procedures; we arrange the setting and materials to support interactions that are as naturalistic as possible. These are some of our findings:

  • Mothers produced 1.6 times as many utterances as their infants.
  • Mothers said something after the vast majority of the infant’s vocalizations just as observed by Fagan and Doveikis.
  • Instances in which one member of the dyad produced an utterance that is similar to the other were rare, but twice as common in the direction of mother mimicking the infant (10%), compared to the baby mimicking the mother (5%).
  • Infant mimicry of the mother is significantly (but not completely) contingent on the mother modeling one of the infant’s preferred sounds in her utterance (mean contingency coefficient = .34).
  • Maternal mimicry is significantly (but not completely) contingent on perceived meaningfulness of the child’s vocalization (mean contingency coefficient = .35). In other words, it seems that the mother is not specifically responding to the phonetic character of her infant’s speech output; rather, she makes a deliberate attempt to teach meaningful communication throughout early development.
  • The number of utterances that the mother perceives to be meaningful increases with the infant’s age, although this is not a hard and fast rule because regressions occur when the infant is ill and the canonical babbling ratio declines. Mothers will also respond to nonspeechlike utterances in the precanonical stage as being meaningful (animal noises, kissing, and so forth).
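For readers wondering what lies behind the contingency coefficients reported above, here is a minimal sketch assuming Pearson’s contingency coefficient computed from a 2×2 table of counts; the table values are hypothetical, not our actual data:

```python
import math

def contingency_coefficient(table):
    """Pearson's contingency coefficient, C = sqrt(chi2 / (chi2 + n)),
    for a 2x2 table of counts [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    chi2 = 0.0
    for i, observed_row in enumerate(table):
        for j, observed in enumerate(observed_row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    return math.sqrt(chi2 / (chi2 + n))

# Hypothetical counts: rows = child utterance perceived meaningful (yes/no),
# columns = mother mimicked the utterance (yes/no)
table = [[30, 20], [10, 40]]
print(round(contingency_coefficient(table), 2))  # 0.38
```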

We want to replicate our findings with another 5 infants before we try to publish our data, but I feel confident that our conclusions will be subtly different from Fagan and Doveikis’, despite general agreement with their suggestion that self-motivation factors and access to auditory feedback of the infant’s own vocal output play a primary role in infant vocal development. I think that maternal behavior may yet prove to have an important function, however. It is necessary to think about learning mechanisms in which low-frequency random inputs are actually helpful. I have talked about this before on this blog, in a post about the difference between exploration and exploitation in learning. Exploration is a phase during which trial-and-error actions help to define the boundaries of the effective action space and permit discovery of the actions that are most rewarding. Without exploration, one might settle on a small repertoire of actions that are moderately rewarding and never discover others that will be needed as problems become more complex. Exploitation is the phase during which you use the actions that you have learned to accomplish increasingly complex goals.

The basic idea behind the exploration-exploitation paradox is that long-term learning is supported by using an exploration strategy early in the learning process. Specifically, many studies have shown that more variable responding early in learning is associated with easier learning of difficult skills later in the learning process. For early vocal learning, the expansion stage corresponds to this principle nicely: the infant produces a broad variety of vocalizations, such as squeals, growls, yells, raspberries, quasiresonant and fully resonant vowels, and combinations called marginal babbles. These varied productions lay the foundations for the production of speech-like syllables during the coming canonical babbling stage. Learning theorists have demonstrated that environmental inputs can support this kind of free exploration. Specifically, a high reinforcement rate will promote a high response rate, but it is important to reinforce variable responses early in the learning process.
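The exploration-exploitation tradeoff can be illustrated with a toy simulation. This is a deliberately simplified epsilon-greedy “bandit” learner with deterministic, hypothetical payoffs — not a model of infant vocal learning: an agent that never explores settles on the first moderately rewarding action, while one that explores early discovers the best action and then exploits it.

```python
import random

random.seed(1)  # arbitrary seed for reproducibility

def run_bandit(epsilon_schedule, true_rewards, trials=2000):
    """Epsilon-greedy learner: with probability epsilon, explore (try a
    random action); otherwise exploit the best action found so far.
    Rewards are deterministic to keep the toy example simple."""
    n = len(true_rewards)
    estimates = [0.0] * n
    counts = [0] * n
    total = 0.0
    for t in range(trials):
        if random.random() < epsilon_schedule(t):
            action = random.randrange(n)  # explore
        else:
            action = max(range(n), key=lambda a: estimates[a])  # exploit
        reward = true_rewards[action]
        counts[action] += 1
        estimates[action] += (reward - estimates[action]) / counts[action]
        total += reward
    return total

actions = [0.2, 0.5, 0.9]  # hypothetical payoffs of three "actions"
never_explore = run_bandit(lambda t: 0.0, actions)
explore_early = run_bandit(lambda t: 0.5 if t < 200 else 0.01, actions)
print(never_explore, explore_early)
```

With these payoffs, the never-explore agent earns 0.2 per trial forever because the first action it tries looks good enough, while the early explorer ends up exploiting the 0.9 action.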

In the context of mother-infant interactions, it may be that mothers reinforce many different kinds of infant vocalizations in the early stages because they are trying to teach words, but the infant is not really capable of producing real words and so the mother has to work with what she hears. She does do something after almost every infant utterance, however, so she encourages many different practice trials on the part of the infant. It is also possible (although not proven) that imitative responses on the part of the mother are particularly reinforcing to the infant. In the short excerpt of a “conversation” between a mum and her 11-month-old infant shown here, it can be seen that she responds to every one of the infant’s utterances, encouraging a number of variable responses and specifically mimicking those that are most closely aligned with her intentions.

[IDV11E03A excerpt]

It is likely that the infant’s vocalizations will be more repetitive when alone in the crib, permitting more specific practice of preferred phonetic forms such as “da” (infants are known to babble more when alone than in dyadic interactions, especially when scientists feed back their vocalizations over loudspeakers). The thing is, the infant’s goals are not aligned with the mother’s. In my view, the most likely explanation for infant vocal learning is self-supervised learning. The infant is motivated to produce specific utterances and finds achievement of those utterances to be intrinsically rewarding. What kind of utterances does the infant want to produce? Computer models of this process have settled on two factors: salience and learning progress. That is, the infant enjoys producing sounds that are interesting and that are not yet mastered. The mother’s goals are completely different (teach real words) but her behaviors in this regard serve the infant’s goals nonetheless by: (1) supporting perceptual learning of targets that correspond to the ambient language; (2) encouraging sound play/practice by responding to the infant’s attempts with a variety of socially positive behaviors; (3) reinforcing variable productions by modeling a variety of forms and accepting a variety of attempts as approximations of meaningful utterances when possible; and (4) increasing the salience of speech-like utterances through mimicry of these rare utterances. The misalignment of the infant’s and the mother’s goals is helpful to the process because if the mother were trying to teach the infant specific phonetic forms (CV syllables, for example), the exploration process might be curtailed prematurely and self-motivation mechanisms might be hampered.

What are the clinical implications of these observations? I am not sure yet. I need a lot more data to feel confident that I can predict maternal behavior in relation to infant behavior. But it strikes me that SLPs engage in a number of parent teaching practices that assume that responsiveness by the parent is a “good thing”, even though it is not certain that parents typically respond to their infants’ vocalizations in quite the ways that we expect. In the meantime, procedures to encourage vocal play are a valuable part of your toolbox, as described in Chapter 10 of our book:

Rvachew, S., & Brosseau-Lapre, F. (2018). Developmental Phonological Disorders: Foundations of Clinical Practice (Second ed.). San Diego, CA: Plural Publishing, Inc.


Testing Client Response to Alternative Speech Therapies

Buchwald et al. published one of the many interesting papers in a recent special issue on motor speech disorders in the Journal of Speech, Language, and Hearing Research. In their paper they outline a common approach to speech production, one that is illustrated and discussed in some detail in Chapters 3 and 7 of our book, Developmental Phonological Disorders: Foundations of Clinical Practice; Buchwald et al., however, apply it in the context of Acquired Apraxia of Speech. They distinguish between patients who produce speech errors subsequent to a left-hemisphere cerebrovascular accident as a consequence of motor planning difficulties versus phonological planning difficulties. Their study included four such patients, two in each subgroup. Acoustic analysis was used to determine whether their cluster errors arose during phonological planning or in the next stage of speech production, motor planning. The analysis involves comparing the durations of segments in triads of words like this: /skæmp/ → [skæmp], /skæmp/ → [skæm], /skæm/ → [skæm]. The basic idea is that if segments such as [k] in /sk/ → [k] or [m] in /mp/ → [m] are produced as they would be in a singleton context, then the errors arise during phonological planning; alternatively, if they are produced as they would be in the cluster context, then the deletion errors arise during motor planning. This led the authors to hypothesize that patients with these different error types would respond differently to intervention. So they treated all four patients with the same treatment, described as “repetition based speech motor learning practice”. Consistent with their hypothesis, the two patients with motor planning errors responded to this treatment and the two with phonological planning errors did not, as shown in the table of pre- versus post-treatment results.
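The duration logic can be sketched as a simple nearest-match rule. The durations below are hypothetical, chosen only to illustrate the comparison, and the function is my own toy construction rather than Buchwald et al.’s actual analysis procedure:

```python
def classify_deletion_error(observed_ms, singleton_ms, cluster_ms):
    """Toy nearest-match rule: if the surviving segment's duration is closer
    to its typical singleton duration, the cluster was likely reduced during
    phonological planning; if closer to its typical within-cluster duration,
    the reduction likely arose during motor planning."""
    if abs(observed_ms - singleton_ms) < abs(observed_ms - cluster_ms):
        return "phonological planning"
    return "motor planning"

# Hypothetical durations for the [k] that survives a /sk/ -> [k] reduction:
# a singleton onset [k] is typically longer than a [k] inside an /sk/ cluster.
print(classify_deletion_error(observed_ms=95, singleton_ms=100, cluster_ms=70))
# prints "phonological planning"
print(classify_deletion_error(observed_ms=72, singleton_ms=100, cluster_ms=70))
# prints "motor planning"
```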

[Table: Buchwald et al. pre- versus post-treatment results (corrected)]

However, as the authors point out, a significant limitation of this study is that the design is not experimental. Because experimental control was not established either within or across speakers, it is difficult to draw conclusions.

I find the paper to be of interest on two accounts nonetheless. First, their hypothesis is exactly the same hypothesis that Tanya Matthews and I posed for children who present with phonological versus motor planning deficits. Second, their hypothesis is fully compatible with the application of a single subject randomization design. Therefore it provides me with an opportunity to follow through on my promise from the previous blog post: to demonstrate how to set up this design for clinical research.

For her dissertation research, Tanya identified 11 children with severe speech disorders and inconsistent speech sound errors who completed our full experimental paradigm. These children were diagnosed with either a phonological planning disorder or a motor planning disorder using the Syllable Repetition Task and other assessments, as described in our recent CJSLPA paper, available open access here. Using those procedures, we found that 6 had a motor planning deficit and 5 had a phonological planning deficit.

Then we hypothesized that the children with motor planning disorders would respond to a treatment that targeted speech motor control. Much like Buchwald et al.’s treatment, it included repetition practice according to the principles of motor learning during the practice part of each session; during prepractice, however, children were taught to identify the target words and to identify mispronunciations of the target words so that they would be better able to integrate feedback and self-correct during repetition practice. Notice that direct and delayed imitation are important procedures in this approach. We called this the auditory-motor integration (AMI) approach.

For children with phonological planning disorders, we hypothesized that they would respond to a treatment based on principles suggested by Dodd et al. (i.e., the core vocabulary approach). Specifically, the children were taught to segment the target words into phonemes, associating the phonemes with visual cues. Then we taught the children to chain the phonemes back together into a single word. Finally, during the practice component of each session, we encouraged the children to produce the words, using the visual cues when necessary. An important component of this approach is that auditory-visual models are not provided prior to the child’s production attempt; the child is forced to construct the phonological plan independently. We called this the phonological memory and planning (PMP) approach.

We also had a control condition that consisted solely of repetition practice (CON condition).

The big difference between our work and Buchwald et al.’s is that we tested our hypothesis using a single subject block randomization design, as described in our recent tutorial in the Journal of Communication Disorders. The design was set up so that each of the 11 children experienced all three treatments. We chose 3 treatment targets for each child, randomly assigned the targets to the three treatments, and then randomly assigned the treatments to each of three sessions, scheduled to occur on different days of the week, 3 sessions per week for 6 weeks. You can see from the table below that each week counts as one block, so there are 6 blocks of 3 sessions, for 18 sessions in total. The randomization scheme was generated blindly and independently, using computer software, for each child. The diagram below shows the treatment schedule for one of the children with a motor planning disorder.
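A randomization scheme of this kind takes only a few lines of code to generate. This sketch uses our treatment labels, but the seed is arbitrary, so it will not reproduce any child’s actual schedule:

```python
import random

def block_randomization_schedule(treatments, n_blocks, seed=None):
    """Within each block (one week), each treatment occurs exactly once,
    in an independently shuffled order."""
    rng = random.Random(seed)
    schedule = []
    for block in range(1, n_blocks + 1):
        order = list(treatments)
        rng.shuffle(order)
        for session, treatment in enumerate(order, start=1):
            schedule.append((block, session, treatment))
    return schedule

# 3 treatments x 6 weekly blocks = 18 sessions, as in the study design
schedule = block_randomization_schedule(["AMI", "PMP", "CON"], n_blocks=6, seed=42)
for block, session, treatment in schedule:
    print(f"week {block}, session {session}: {treatment}")
```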

[Figure: block randomization treatment schedule for child TASC02]

This design allowed us to compare response to the three treatments within each child using a randomization test. For this child, the randomization test revealed a highly significant difference in favour of the AMI treatment as compared to the PMP treatment, as hypothesized for children with motor planning deficits. I don’t want to scoop Tanya’s thesis because she will finish it soon, before the end of 2017 I’m sure, but the long and the short of it is that we have very clear results in favour of our hypothesis using this fully experimental design and the statistics that are licensed by it. I hope you will check out our tutorial on the application of this design: we show how flexible and versatile it can be for addressing many different questions about speech-language practice. There is much exciting work being done in the area of speech motor control, and this is a design that gives researchers and clinicians an opportunity to obtain interpretable results with small samples of children with rare or idiosyncratic profiles.
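To make the logic of the randomization test concrete, here is a Monte Carlo sketch. The outcome scores are entirely hypothetical (they are not Tanya’s data), and the test statistic — the mean AMI-PMP difference per block — is one reasonable choice rather than necessarily the one we used. Within each block, outcomes are re-shuffled across the treatment labels, mirroring the way treatments were randomly assigned to sessions:

```python
import random
import statistics

def randomization_test(blocks, tx_a, tx_b, n_perm=5000, seed=7):
    """Monte Carlo randomization test for a block-randomized design.
    `blocks` is a list of {treatment: outcome} dicts, one per week.
    The reference distribution re-shuffles outcomes across treatment
    labels within each block, mirroring the original random assignment."""
    rng = random.Random(seed)
    labels = list(blocks[0])

    def mean_diff(assigned):
        return statistics.mean(b[tx_a] - b[tx_b] for b in assigned)

    observed = mean_diff(blocks)
    at_least_as_extreme = 0
    for _ in range(n_perm):
        shuffled = []
        for b in blocks:
            outcomes = list(b.values())
            rng.shuffle(outcomes)
            shuffled.append(dict(zip(labels, outcomes)))
        if mean_diff(shuffled) >= observed:
            at_least_as_extreme += 1
    # add-one correction gives a valid Monte Carlo p-value
    return observed, (at_least_as_extreme + 1) / (n_perm + 1)

# Hypothetical weekly probe scores for one child (e.g., percent correct)
blocks = [
    {"AMI": 40, "PMP": 10, "CON": 15},
    {"AMI": 45, "PMP": 12, "CON": 20},
    {"AMI": 55, "PMP": 15, "CON": 18},
    {"AMI": 60, "PMP": 14, "CON": 22},
    {"AMI": 70, "PMP": 18, "CON": 25},
    {"AMI": 75, "PMP": 20, "CON": 24},
]
observed, p = randomization_test(blocks, "AMI", "PMP")
print(f"observed AMI - PMP difference = {observed:.1f}, p = {p:.4f}")
```

With scores this lopsided in favour of AMI, almost no permutation produces a difference as large as the observed one, so the p-value is very small.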

Reading

Buchwald, A., & Miozzo, M. (2012). Phonological and Motor Errors in Individuals With Acquired Sound Production Impairment. Journal of Speech, Language, and Hearing Research, 55(5), S1573-S1586. doi:10.1044/1092-4388(2012/11-0200)

Rvachew, S., & Matthews, T. (2017). Using the Syllable Repetition Task to Reveal Underlying Speech Processes in Childhood Apraxia of Speech: A Tutorial. Canadian Journal of Speech-Language Pathology and Audiology, 41(1), 106-126.

Rvachew, S., & Matthews, T. (2017). Demonstrating treatment efficacy using the single subject randomization design: A tutorial and demonstration. Journal of Communication Disorders, 67, 1-13. doi:https://doi.org/10.1016/j.jcomdis.2017.04.003


Single Subject Randomization Design For Clinical Research

During the week of April 23-29, 2017, Susan Ebbels curated @WeSpeechies on the topic Carrying Out Intervention Research in SLP/SLT Practice. Susan kicked off the week with a link to her excellent paper discussing the strengths and limitations of various procedures for conducting intervention research in the clinical setting. As we would expect, a parallel groups randomized control design was deemed to provide the best level of experimental control. Many ways of studying treatment-related change within individual clients, with increasing degrees of control, were also discussed. However, all of the ‘within participant’ methods described were vulnerable, to varying degrees, to confounding by threats to internal validity such as history, selection, practice, fatigue, maturation, or placebo effects.

One design was missing from the list because it is only now appearing in the speech-language pathology literature: the Single Subject Randomization Design. The design (actually a group of designs in which treatment sessions are randomly allocated to treatment conditions) provides the superior internal validity of the parallel groups randomized control trial by controlling for extraneous confounds through randomization. As an added benefit, the results of a single subject randomization design can be submitted to a statistical analysis, so that clear conclusions can be drawn about the efficacy of the experimental intervention. At the same time, the design can be feasibly implemented in the clinical setting and is perfect for answering the kinds of questions that come up in daily clinical practice. For example, randomized control trials have shown that speech perception training is, on average, an effective adjunct to speech articulation therapy when applied to groups of children, but you may want to know whether it is a necessary addition to your therapy program for a specific child.

Furthermore, randomized single subject experiments are now accepted as a high level of research evidence by the Oxford Centre for Evidence-Based Medicine. An evidence hierarchy has been created for rating single subject trials, putting randomized single subject experiments at the top, as shown in the following table, taken from Romeiser Logan et al. (2008).

[Table: Romeiser Logan levels of evidence for single-subject research designs]

Tanya Matthews and I have written a tutorial showing exactly how to implement and interpret two versions of the Single Subject Randomization Design, a phase design and an alternation design. The accepted manuscript is available but behind a paywall at the Journal of Communication Disorders. In another post I will provide a mini-tutorial showing how the alternation design could be used to answer a clinical question about a single client.

Further Reading

Ebbels, Susan H. 2017. ‘Intervention research: Appraising study designs, interpreting findings and creating research in clinical practice’, International Journal of Speech-Language Pathology: 1-14.

Kratochwill, Thomas R., and Joel R. Levin. 2010. ‘Enhancing the scientific credibility of single-case intervention research: Randomization to the rescue’, Psychological Methods, 15: 124-44.

Romeiser Logan, L., R. Hickman, R.R. Harris, S.R. Harris, and C. Heriza. 2008. ‘Single-subject research design: recommendations for levels of evidence and quality rating’, Developmental Medicine and Child Neurology, 50: 99-103.

Rvachew, S. 1988. ‘Application of single subject randomization designs to communicative disorders research’, Human Communication Canada (now Canadian Journal of Speech-Language Pathology and Audiology), 12: 7-13. [open access]

Rvachew, S. 1994. ‘Speech perception training can facilitate sound production learning.’, Journal of Speech and Hearing Research, 37: 347-57.

Rvachew, Susan, and Tanya Matthews. in press. ‘Demonstrating Treatment Efficacy using the Single Subject Randomization Design: A Tutorial and Demonstration’, Journal of Communication Disorders.


How to choose a control condition for speech therapy research

This post is an addendum to a previous post, “What is a control group?”, inspired by a recently published paper (“Control conditions for randomized trials of behavioral interventions in psychiatry: a decision framework”, Early View, Lancet Psychiatry, March 2017). Following a brief review of the literature on effect sizes associated with different types of control conditions, the authors offer a framework for choosing an appropriate control condition in behavioral trials. The types of control conditions discussed are as follows:

  • Active comparator
  • Minimal treatment control
  • Nonspecific factors control
  • No-treatment control
  • Patient choice
  • Pill placebo
  • Specific factors component control
  • Treatment as usual
  • Waitlist control

The considerations for choosing one of these control conditions for testing a behavioral intervention are (1) participant risk, (2) trial phase, and (3) available resources. With respect to participant risk, more active interventions should be provided as the control condition when the risk of withholding treatment is high (especially when known effective treatments are available). Therefore, when making this decision, characteristics of the participant population and characteristics of the available treatments will play a role in the decision-making process.

Regarding trial phase, early-stage exploratory trials should be concerned with the risk of Type II error; in other words, the researcher will want to maximize the chances of finding a benefit of a potentially helpful new intervention. Therefore, a waitlist control group might be appropriate at this stage of the research process, given that waitlist controls are associated with large effect sizes in behavioral trials. In the later stages of the research program, the researcher should strive to minimize Type I error; in other words, it is important to guard against concluding that an ineffective treatment is helpful. In this case an active comparator would be a logical choice, although the sample size would need to be large given that the effect size is likely to be small.
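The point about sample size can be made concrete with the standard normal-approximation formula for a two-group comparison, n = 2((z_alpha/2 + z_beta)/d)² per group. The effect sizes below are illustrative benchmarks (Cohen’s conventional large and small values), not figures from the Lancet Psychiatry paper:

```python
import math

def n_per_group(d, z_alpha=1.96, z_beta=0.8416):
    """Approximate per-group n for a two-sample comparison of means:
    n = 2 * ((z_alpha + z_beta) / d)^2, where d is the standardized
    effect size; defaults give two-sided alpha = .05 and power = .80."""
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

print(n_per_group(0.8))  # large effect, e.g., vs. a waitlist control: 25
print(n_per_group(0.2))  # small effect, e.g., vs. an active comparator: 393
```

A small expected effect multiplies the required sample size roughly sixteen-fold relative to a large one, which is exactly why active-comparator trials must be big.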

Finally, the resources available to the researchers will influence the choice of control condition. For example, in a late stage trial an active comparator provided by trained and monitored study personnel would be the best choice in most circumstances; however, in this case the provision of the control may be at least as expensive as the provision of the experimental treatment. When sufficient resources are lacking, the cost effective alternative might be to ask the usual community provider to administer treatment as usual although every effort should be made to describe the control intervention in detail.

A very nice graphic is provided (Figure 2) to illustrate the decision framework and can be applied to speech therapy trials. There are a number of interventions that have been in use or are emerging in speech therapy practice with a minimal evidence base. We can consider the choice of appropriate control condition for the assessment of these interventions.

Ultrasound intervention for school-aged children with residual speech errors has been examined in quite a number of single subject studies but is now overdue for a randomized control trial. Given that the exploratory work has been completed in single subject trials, I would say that we could proceed to a phase 3 RCT. The risk to the participant population is more difficult to conceptualize. You could say that it is low because these children are not at particular risk for poor school outcomes or other harmful sequelae of non-intervention, and the likelihood of a good speech outcome will not change much after the age of nine. The cost of providing an active control would be high because these children are often low priority for intervention in the school setting. Therefore, according to Figure 2, a no-treatment control would be appropriate when you make this assumption. On the other hand, you could argue that the participant risk of NOT improving is very high: all the evidence demonstrates that residual errors do not improve without treatment after this age. If you consider the participant risk to be higher, especially considering community participation and psychosocial factors, then the appropriate control condition would be something more rigorous: patient choice, an active comparator, a nonspecific factors component control, or a specific factors component control. Given the relatively early days of this research, small trials utilizing these control conditions, in that order, might be advisable.

Metaphon as a treatment for four-year-olds with severe phonological delay and associated difficulties with phonological processing has not, to my knowledge, been tested in a large-scale RCT. The population would be high risk by definition, due to the likelihood of delays in the acquisition of literacy skills if the speech delay is not resolved prior to school entry. Effective treatment options are known to exist. Therefore, the appropriate control condition would be an active comparator, in other words, another treatment that is known to be effective with this population. Another option would be a specific factors component control that examines the efficacy of specific components of the Metaphon approach. For example, the meaningful minimal pairs procedure could be compared directly to the full Metaphon approach, with speech and phonological processing skills as the outcome variables. Similar trials have been conducted by Anne Hesketh and in my own lab (although not involving Metaphon specifically).

PROMPT has still not been tested in good quality single-subject or parallel-groups research. If a phase 2 trial were planned for three-year-olds with suspected apraxia of speech, treatment as usual would be the appropriate control condition according to Figure 2. The speech condition is too severe to ethically withhold treatment, and the research program is not advanced enough for a specific factors component control, although this would be the next step.

Finally, an RCT of the effectiveness of Speech Buddies to stimulate /s/ in three-year-olds with speech delay could be implemented. In this case, the participant group would be low risk due to the likelihood of spontaneous resolution of the speech delay. Given a phase 2 trial, either a no-treatment or a waitlist control could be implemented.

The authors of this framework conclude by recommending that researchers justify their choice of control condition in every trial protocol. They further recommend that a waitlist control is acceptable only when it is the only ethical choice, and state that “no behavioral treatment should be included in treatment guidelines if it is only supported by trials using a waitlist control group or meta-analytic evidence driven by such trials.” To me, this is eminently sensible advice for speech and language research as well.

And this I believe concludes my trilogy of posts on the control group!

Further Reading

What is a control group? Developmental Phonological Disorders blog post, February 5, 2017

Using effect sizes to choose a speech therapy approach, Developmental Phonological Disorders blog post, January 31, 2017

Gold, S. M., Enck, P., Hasselmann, H., Friede, T., Hegerl, U., Mohr, D. C., & Otte, C. (2017). Control conditions for randomised trials of behavioural interventions in psychiatry: a decision framework. The Lancet Psychiatry. doi:10.1016/S2215-0366(17)30153-0

Hesketh, A., Dima, E., & Nelson, V. (2007). Teaching phoneme awareness to pre-literate children with speech disorder: a randomized controlled trial. International Journal of Language and Communication Disorders, 42(3), 251-271.

Rvachew, S., & Brosseau-Lapré, F. (2015). A Randomized Trial of 12-Week Interventions for the Treatment of Developmental Phonological Disorder in Francophone Children. American Journal of Speech-Language Pathology, 24(4), 637-658. doi:10.1044/2015_AJSLP-14-0056

Who to refer for speech therapy?

Morgan et al. have recently published a very important paper: Who to refer for speech therapy at 4 years of age versus who to “watch and wait”? This longitudinal study reports speech outcomes at age 7 years for children who received GFTA and DEAP assessments at age 4. The children were recruited from an Australian community cohort study (the Early Language in Victoria study) that recruited almost 2000 infants between 7 and 10 months of age for long-term follow-up.

The data reported in Morgan et al. are interesting in their own right, as follows:

  1. Eleven percent of 1496 children tested at age 4 had speech errors qualifying the child for repeat assessment at age 7 years (the 11% finding interested me because we settled on 11% as the best estimate for prevalence of developmental phonological disorders at school entry in the review that we reported in DPD).
  2. At age 7 years, approximately 40% of the children who had speech errors at age 4 still had speech errors.
  3. Children at age 4 who had speech delay (typical speech errors; 60% of the sample) were most likely to show resolution of the speech problem. Specifically 70% of these children were classed as “resolved” and 30% as “persistent” at age 7 years.
  4. Children at age 4 who had a speech disorder (atypical speech errors; 40% of the sample) were less likely to show resolution of the speech problem. Specifically, 40% of these children were classed as resolved and 60% as persistent.
  5. None of the other variables in the study (sex, SES, family history, language skills, nonverbal IQ) predicted speech outcome; neither did these variables predict “delay” versus “disorder” group membership.
  6. Apparently, reliable data on receipt of SLP services and outcomes were not available, but there was some suggestion that children with “speech delay” who received therapy were more likely to resolve than children with “speech disorder” who received therapy.

Therefore, in this paper, published in a journal for pediatricians, the conclusion was: “our data call into question whether the ‘watch and wait’ approach should be universally applied to all preschool children. Rather these data suggest an efficient model may guide children with disorder at age 4 years to be fast-tracked for speech therapy…”.

The data provided in this paper are exceptionally important for SLPs and the development of service delivery guidelines, but I am a little uncomfortable with the conclusions that were drawn. The first assumption, I suppose, is that doctors are not referring any 4-year-olds, so if we could get them to refer some, that would help. The second assumption seems to be that the reason we refer 4-year-olds with speech errors to speech therapy is to eliminate the speech errors. This is only partially true. More importantly, we have the goal of preventing the sequelae that are known to be associated with delayed/disordered speech at school entry. These are mostly in the area of literacy but also in the psychosocial domains. It is clear that children who show early speech delays are at risk for persistent literacy difficulties regardless of whether the speech problem resolves by age 7. The important cut-off is resolution of the speech problem before school entry. The risk for literacy difficulties is predicted by direct measures of phonological processing and not by an examination of speech error types. Certain speech error types are associated with phonological processing difficulties and a heightened risk for literacy problems, but they are poor predictors of this risk. I will come back to this point with some case histories below.

The second problem that I have with the conclusions is that they are delivered to pediatricians, who are in no way qualified to differentiate typical from atypical speech errors. In fact, SLPs themselves find this hard enough to do reliably. The difference between speech delay and speech disorder is both qualitative and quantitative; in other words, the dividing line between delay and disorder is a very large grey area. Family doctors should not attempt to make this differentiation. In the paper, Morgan et al. do point out that the real issue is intelligibility. When the child is unintelligible past the age of 3 or 4, the physician should refer to an SLP, who should determine the best course of action. In our review of the literature for SAC, Susan Raffat and I proposed wait-time recommendations for children who are “producing so many speech sound errors that speech intelligibility falls below expectations given the speaker’s age and experience with the language being spoken.” All children in the 4 to 6 year age group were considered by us to be high priority for a rapid assessment by an SLP. Any child with speech intelligibility problems who is expected to start school in the year of referral and/or presenting with phonological processing difficulties would be considered a high priority for immediate intervention.

Now to some case studies that I draw directly from our DPD text (Rvachew and Brosseau-Lapré), showing only portions here to make a point about speech delay, speech disorder and literacy outcomes. The first example is a clear case of speech disorder (data shown from age 7;4 assessment, right).

Complete information is provided in DPD, showing that two years earlier this child also presented with a severe speech disorder and severely delayed phonological processing skills. His error types were atypical and inconsistent throughout the longitudinal follow-up period, despite much speech therapy targeting motor aspects of his speech. At age 7 his nonword reading skills were slightly below normal limits and 14 points below his receptive vocabulary scores. We can predict that he will struggle with the acquisition of reading and spelling in addition to continuing to have highly unintelligible speech for some time. Interestingly, his mother reported that his speech accuracy finally started to improve after a systematic phonics program was instituted to help him with his reading in second grade. The outcomes reported at age 7 will not surprise anyone.

The interesting findings for me were associated with the children with milder speech delay. The second child shown here (age 6;9 assessment, left) had a mild speech delay at age 4 but a severe delay in phonological processing skills that was, fortunately for him, treated appropriately by the SLP program in the local children’s hospital. At age 7 his speech delay is more-or-less resolved. His nonword reading skills are borderline normal, but there is a 28-point gap between his nonword reading score and his receptive vocabulary score. I think that this child is essentially dyslexic. He is coping well because he is exceptionally bright with excellent inputs from his family and the community service providers. That does not mean that the outcome would have been as good without those services, however. The 30% of kids with speech delay who don’t resolve by themselves? Someone has to watch out for those kids, especially since they are numerically the larger group. As an SLP, I make it my job to worry about them.


What is a control group?

I have a feeling that my blog might become less popular in the next little while because you may notice an emerging theme on research design and away from speech therapy procedures specifically! But identifying evidence-based procedures requires knowledge of research design, and it has come to my attention, through the process of publishing two randomized controlled trials (RCTs) this past year, that there are a lot of misperceptions in the SLP and education communities, among both clinicians and researchers, about what an RCT is. Therefore, I am happy to draw your attention to a terrific blog by Edzard Ernst, and in particular to an especially useful post, “How to differentiate good from bad research”. The writer points out that a proper treatment of this topic “must inevitably have the size of a book” because each of the indicators that he provides “is far too short to make real sense.” So I have taken it upon myself in this blog to expand upon one of his indicators of good research, one that I know causes some confusion, specifically:

  • Use of a placebo in the control group where possible.

Recently the reviewers (and editor) of one of my studies were convinced that my design was not an RCT because the children in both groups received an intervention. In the absence of a “no-treatment control”, they said, the study could not be an RCT! I was mystified about the source of this strange idea until I read Ernst’s blog and realized that many people, recalling their research courses from university, must be mistaking “placebo control” for “no-treatment control.” However, a placebo control condition is not at all like the absence of treatment. Consider the classic example of a placebo control in a drug trial: patients randomized to the treatment arm visit the nurse, who hands them a white paper cup holding two pink pills containing active ingredient X along with other ingredients that do not impact the patient’s disease, i.e., inactive ingredients; patients randomized to the control arm also visit the nurse, who hands them a white paper cup holding two pink pills containing only the inactive ingredients. In other words, the experiment is designed so that all patients are “treated” exactly the same except that only patients randomized to treatment receive (unknowingly) the active ingredient. Therefore, all changes in patient behavior that are due to those aspects of the treatment that are not the active treatment (visiting the nice nurse, expecting the pills to make a difference, etc.) are equalized across arms of the study. These are called the “common factors” or “nonspecific factors”.

In the case of a behavioral treatment it is important to equalize the common factors across all arms of the study. Therefore, in my own studies I deliberately avoid “no treatment” controls. In my very first RCT (Rvachew, 1994), for example, the treatment conditions in the two arms of the study were as follows:

  • Experimental: 10 minutes of listening to sheet vs Xsheet recordings and judging correct vs incorrect “sheet” items (active ingredient) in a computer game format followed by 20 minutes of traditional “sh” articulation therapy, provided by a person blind to the computer game target.
  • Control: 10 minutes of listening to Pete vs meat recordings and judging correct vs incorrect “Pete” items in a computer game format followed by 20 minutes of traditional “sh” articulation therapy, provided by a person blind to the computer game target.

It can be seen that the study was designed to ensure that all participants experienced exactly the same treatment except for the active ingredient, which was reserved for children randomly assigned to the experimental treatment arm: exposure to the experience of listening to and making perceptual judgments about a variety of correct and incorrect versions of words beginning with “sh” or distorted versions of “sh”, the sound that the children misarticulated. Subsequently I have conducted all my randomized controlled studies in a similar manner. But, as I said earlier, I run across readers who vociferously assert that the studies are not RCTs because an RCT requires a “no treatment” control. In fact, a “no treatment” control is a very poor control indeed, as argued in this blog post that explains why the frequently used “wait list control group” is inappropriate. For example, a recent trial on the treatment of tinnitus claimed that a wait list control had merit because “While this comparison condition does not control for all potential placebo effects (e.g., positive expectation, therapeutic contact, the desire to please therapists), the wait-list control does account for the natural passing of time and spontaneous remission.” In fact, it is impossible to control for common factors when using a wait list control, and it is unlikely that patients are actually “just waiting” when you randomize them to the “wait list control” condition; therefore Hesser et al.’s defense of the wait list control is optimistic, although their effort to establish how much change you get in this condition is worthwhile.

We had experience with a “wait list” comparison condition in a recent trial (Rvachew & Brosseau-Lapré, 2015). Most of the children were randomly assigned to one of four different treatment conditions, matched on all factors except the specific active ingredients of interest. However, we also had a nonexperimental wait list comparison group* to estimate change for children outside of the trial. We found that parents were savvy about maximizing the treatment that their children could receive in any given year. Our trial lasted six weeks, the public health system entitled them to six weeks of treatment, and their private insurance entitled them to six to 12 weeks of therapy depending on the plan. Parents would agree to enroll their child in the trial with randomization to a treatment arm if their child was waiting for the public service, OR they would agree to be assessed in the “wait list” arm if their child was currently enrolled in the public service. They would use their private insurance when all other options had been exhausted. Therefore the children in the “wait list” arm were actually being treated. Interestingly, we found that the parents expected their children to obtain better results from the public service because it was provided by a “real” SLP rather than the student SLPs who provided our experimental treatments, even though the public service was considerably less intense! (As an aside, we were not surprised to find that the reverse was true.) Similarly, as I have mentioned in previous blogs, Yoder et al. (2005) found that the children in their “no treatment” control accessed more treatment from other sources than did the children in their treatment arm. And parents randomized to the “watchful waiting” arm of the Glogowska et al. (2000) trial sometimes dropped out, because parents will do what they must to meet their child’s needs.

In closing, a randomized controlled trial is simply a study in which participants are randomly assigned to an experimental treatment and a control condition (even in a cross-over design, in which all participants experience all conditions, as in Rvachew et al., in press). The nature of the control should be determined after careful thought about the factors that you are attempting to control, which can be many: placebo, Hawthorne, fatigue, practice, history, maturation, and so on. These will vary from trial to trial, obviously. Placebo control does not mean “no treatment” but rather a treatment that excludes everything except the “active ingredient” that is the subject of your trial. As an SLP, when you are reading about studies that test the efficacy of a treatment, you need to pay attention to what happens to the control group as well as the treatment group. The trick is to think in every case: What is the active ingredient that explains the effect seen in the treatment group? What else might account for the effects seen in the treatment arm of this study? If I implement this treatment in my own practice, how likely am I to get a better result compared to the treatment that my caseload is currently receiving?

* A colleague sent me a paper (Mercer et al., 2007) in which a large number of researchers, advocating for the acceptance of a broader array of research designs in order to focus more attention on external validity and translational research, got together to discuss the merits of various designs. During the symposium it emerged that there was disagreement about the use of the terms “control” and “comparison” group. I use the terms in accordance with a minority of their attendees, as follows: control group means that the participants were randomly assigned to a group that did not experience the “active ingredient” of the experimental treatment; comparison group means that the participants were not randomly assigned to the group that did not experience the experimental intervention, a group that may or may not have received a treatment. This definition was ultimately not used by the attendees; I don’t know why. Somehow they decided on a different definition that didn’t make any sense at all. I invite you to consult p. 141 and see if you can figure it out!

References

Glogowska, M., Roulstone, S., Enderby, P., & Peters, T. (2000). Randomised controlled trial of community based speech and language therapy in preschool children. British Medical Journal, 321, 923-928.

Hesser, H., Weise, C., Rief, W., & Andersson, G. (2011). The effect of waiting: A meta-analysis of wait-list control groups in trials for tinnitus distress. Journal of Psychosomatic Research, 70(4), 378-384. doi:http://dx.doi.org/10.1016/j.jpsychores.2010.12.006

Mercer, S. L., DeVinney, B. J., Fine, L. J., Green, L. W., & Dougherty, D. (2007). Study Designs for Effectiveness and Translation Research: Identifying Trade-offs. American Journal of Preventive Medicine, 33(2), 139-154.e132. doi:http://dx.doi.org/10.1016/j.amepre.2007.04.005

Rvachew, S. (1994). Speech perception training can facilitate sound production learning. Journal of Speech and Hearing Research, 37, 347-357.

Rvachew, S., & Brosseau-Lapré, F. (2015). A randomized trial of twelve week interventions for the treatment of developmental phonological disorder in francophone children. American Journal of Speech-Language Pathology, 24, 637-658. doi:10.1044/2015_AJSLP-14-0056

Rvachew, S., Rees, K., Carolan, E., & Nadig, A. (in press). Improving emergent literacy with school-based shared reading: Paper versus ebooks. International Journal of Child-Computer Interaction. doi:http://dx.doi.org/10.1016/j.ijcci.2017.01.002

Yoder, P. J., Camarata, S., & Gardner, E. (2005). Treatment effects on speech intelligibility and length of utterance in children with specific language and intelligibility impairments. Journal of Early Intervention, 28(1), 34-49.

Using effect sizes to choose a speech therapy approach

I am quite intrigued by the warning offered by Adrian Simpson in his paper “The misdirection of public policy: comparing and combining standardised effect sizes”.

The context for the paper is the tendency of public policy makers to rely on meta-analyses to make decisions such as, for example: should we improve teachers’ feedback skills or reduce class sizes as a means of raising student performance? Simpson shows that meta-analyses (and meta-analyses of the meta-analyses!) are a poor tool for making these apples-to-oranges comparisons and cannot be relied upon as a source of information when making public policy decisions such as this. He identifies three specific issues with research design that invalidate the combining and comparing of effect sizes. I think that these are good issues to keep in mind when considering effect sizes as a clue to treatment efficacy and a source of information when choosing a speech or language therapy approach.

Recall that an effect size is a standardized mean difference, whereby the difference between means (i.e., the mean outcome of the treatment condition versus the mean outcome of the control condition) is expressed in standard deviation units. The issue is that the standard deviation units, which are supposed to reflect the variation in outcome scores between participants in the intervention trial, actually reflect many different aspects of the research design. Therefore if you compare the effect size of an intervention as obtained in one treatment trial with the effect size for another intervention as obtained in a different treatment trial, you cannot be sure that the difference is due to differences in the relative effectiveness of the two treatments. And yet, SLPs are asking themselves these kinds of questions every day: should I use a traditional articulation therapy approach or a phonological approach? Should I add nonspeech oral motor exercises to my traditional treatment protocol? Is it more efficient to focus on expressive language or receptive language goals? Should I use a parent training approach or direct therapy? And so on. Why is it unsafe to combine and compare effect sizes across studies to make these decisions?
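The arithmetic behind this point can be made concrete. Below is a minimal sketch of the standard pooled-SD effect size (Cohen's d); the function name and the toy score lists are mine, not from Simpson's paper. Note how the same raw 5-point treatment advantage yields very different "effect sizes" depending only on how variable the samples are:

```python
import statistics

def cohens_d(treatment, control):
    """Standardized mean difference: (M_t - M_c) / pooled SD."""
    n_t, n_c = len(treatment), len(control)
    m_t, m_c = statistics.mean(treatment), statistics.mean(control)
    v_t, v_c = statistics.variance(treatment), statistics.variance(control)
    pooled_sd = (((n_t - 1) * v_t + (n_c - 1) * v_c) / (n_t + n_c - 2)) ** 0.5
    return (m_t - m_c) / pooled_sd

# Both comparisons show the same 5-point mean advantage for treatment,
# but the noisier samples (SD = 15 instead of 5) shrink d from 1.0 to 0.33.
d_homogeneous = cohens_d([85, 90, 95], [80, 85, 90])    # d = 1.0
d_heterogeneous = cohens_d([75, 90, 105], [70, 85, 100])  # d = 0.33
```

Anything in the design that changes the denominator, such as the sample's heterogeneity, the reliability of the outcome measure, or what the control group actually experienced, changes d without any change in the treatment itself.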

The first issue that Simpson raises is that of comparison groups. Many, although not all, treatment trials compare an experimental intervention to either a ‘no treatment’ control group or a ‘usual care’ condition. The characteristics of the ‘no treatment’ and ‘usual care’ controls are inevitably poorly described, if at all. And yet meta-analyses will combine effect sizes across many studies despite having a very poor sense of what the control condition is in the studies that are included in the final estimate of treatment effect. Control group and intervention descriptions can be so paltry that in some cases the experimental treatment of one study may be equivalent to the control condition of another study. The Law et al. (2003) review combined effect sizes for a number of RCTs evaluating phonological interventions. One trial compared a treatment provided in 22 twice-weekly half-hour sessions over a four-month period to a wait list control (Almost & Rosenbaum, 1998). Another involved monthly 45-minute sessions provided over 8 months, in comparison to a “watchful waiting” control in which many parents “dropped out” of the control condition (Glogowska et al., 2000). Inadequate information was provided about how much intervention the control group children accessed while they waited; almost anything is possible relative to the experimental condition in the Glogowska trial. For example, Yoder et al. (2005) observed that their control group actually accessed more treatment than the kids in their experimental treatment group, which may explain why they did not obtain a main effect of their intervention (or not, who knows?). The point is that it is hard to know whether a small effect size in comparison to a robust control is more or less impressive than a large effect size in comparison to no treatment at all. Certainly, the comparison is not fair.

The second issue raised concerns range restriction in the population of interest. I realize now that I failed to take this into account when I repeated (in Rvachew & Brosseau-Lapré, 2018) the conclusion that dialogic reading interventions are more effective for low-income children than children with developmental language impairments (Mol et al., 2008). Effect sizes are inflated when the intervention is provided to only a restricted part of the population, and the selection variables are associated with the study outcomes. However, the inflation is greatest for the children near the middle of the distribution and least for children at the tails of the distribution. This fact may explain why effect sizes for vocabulary size after dialogic reading intervention are highest for middle class children (.58, Whitehurst et al. 1988), in the middle for lower class but normally developing children (.33, Lonigan & Whitehurst, 1998), and lowest for children with language impairments (.13, Crain-Thoreson & Dale, 1999). There are other potential explanatory factors in these studies but this issue with restricted range is an important variable that is of obvious importance in treatment trials directed at children with speech and language impairments. The low effect size for dialogic reading obtained by Crain-Thoreson & Dale should not by itself discourage use of dialogic reading with this population.
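The range-restriction mechanism is easy to demonstrate with a toy simulation (my own illustration, not Simpson's analysis). Assume a fixed raw treatment gain of 5 points on a test with a population SD of 15; if the trial recruits only children from the middle of the distribution, the sample SD shrinks and the standardized effect size inflates even though the treatment did nothing different:

```python
import random
import statistics

random.seed(1)

def d_for_sample(scores, raw_gain=5):
    """Effect size for a fixed raw gain, standardized by this sample's SD."""
    return raw_gain / statistics.stdev(scores)

# Simulated population of standard scores (mean 100, SD 15).
population = [random.gauss(100, 15) for _ in range(10_000)]

# Range-restricted subsample: only children scoring near the middle.
middle = [s for s in population if 90 <= s <= 110]

d_full = d_for_sample(population)   # roughly 5/15 = 0.33
d_restricted = d_for_sample(middle) # larger, because the sample SD is smaller
```

The same logic runs in reverse for children selected from a tail of the distribution, which is one reason effect sizes from trials of clinically selected samples are not directly comparable to effect sizes from community samples.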

Finally, measurement validity plays a huge role, with longer, more valid tests improving effect sizes in comparison to shorter, less valid tests. This might be important when comparing the relative effectiveness of therapy for different types of goals. Law et al. (2003) concluded that phonology therapy appeared to be more effective than therapy for syntax goals, for example. For some reason the outcome measures in these two groups of studies tend to be very different. Phonology outcomes are typically assessed with picture naming tasks that include 25 to 100 items, with the outcome often expressed as percent consonants correct; therefore at the consonant level there are many items contributing to the test score. Sometimes the phonology outcome measure is created specifically to probe the child’s progress on the specific target of the phonology intervention. In both cases the outcome measure is likely to be a sensitive measure of the outcomes of the intervention. Surprisingly, in Law et al., the outcomes of the studies of syntax interventions were quite often omnibus measures of language functioning, such as the Preschool Language Scale or, worse, the Reynell Developmental Language Scale, neither test containing many items targeted specifically at the domain of the experimental intervention. When comparing effect sizes across studies, it is crucial to be sure that the outcome measures have equal reliability and validity as measures of the outcomes of interest.
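Two standard psychometric results quantify this point (they are textbook classical test theory, not specific to Law et al.): the Spearman-Brown prophecy formula predicts how reliability grows as a test is lengthened, and measurement error attenuates an observed effect size roughly as d_observed = d_true × √reliability. A sketch, with made-up numbers:

```python
def lengthened_reliability(r, k):
    """Spearman-Brown prophecy: reliability of a test lengthened k-fold."""
    return k * r / (1 + (k - 1) * r)

def observed_d(true_d, reliability):
    """Measurement error inflates the observed SD, shrinking the observed d."""
    return true_d * reliability ** 0.5

# A true effect of d = 0.6 measured with a short, unreliable probe (r = .50)
# versus the same probe lengthened fourfold (r rises to .80).
d_short = observed_d(0.6, 0.50)                          # about 0.42
d_long = observed_d(0.6, lengthened_reliability(0.50, 4))  # about 0.54
```

So a trial using a 100-item consonant probe and a trial using a brief omnibus scale can report very different effect sizes for interventions of identical true potency.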

My conclusion is that it is important not to make a fetish of meta-analyses and effect sizes. These kinds of studies provide just one kind of information that should be taken into account when making treatment decisions. Their value is only as good as the underlying research: overall, effect sizes are most trustworthy when they come from the same study or a series of studies involving the exact same independent and dependent variables and the same study population. Given that this is a rare occurrence in speech and language research, there is no real substitute for a deep knowledge of an entire literature on any given subject. Narrative reviews from “experts” (a much maligned concept!) still have a role to play.

References

Almost, D., & Rosenbaum, P. (1998). Effectiveness of speech intervention for phonological disorders: a randomized controlled trial. Developmental Medicine and Child Neurology, 40, 319-325.

Crain-Thoreson, C., & Dale, P. S. (1999). Enhancing linguistic performance: Parents and teachers as book reading partners for children with language delays. Topics in Early Childhood Special Education, 19, 28-39.

Glogowska, M., Roulstone, S., Enderby, P., & Peters, T. (2000). Randomised controlled trial of community based speech and language therapy in preschool children. British Medical Journal, 321, 923-928.

Law, J., Garrett, Z., & Nye, C. (2003). Speech and language therapy interventions for children with primary speech and language delay or disorder (Cochrane Review). Cochrane Database of Systematic Reviews, Issue 3. Art. No.: CD004110. doi:10.1002/14651858.CD004110.

Lonigan, C. J., & Whitehurst, G. J. (1998). Relative efficacy of a parent teacher involvement in a shared-reading intervention for preschool children from low-income backgrounds. Early Childhood Research Quarterly, 13(2), 263-290.

Mol, S. E., Bus, A. G., de Jong, M. T., & Smeets, D. J. H. (2008). Added value of dialogic parent-child book readings: A meta-analysis. Early Education and Development, 19, 7-26.

Rvachew, S., & Brosseau-Lapré, F. (2018). Developmental Phonological Disorders: Foundations of Clinical Practice (Second Edition). San Diego, CA: Plural Publishing.

Simpson, A. (2017). The misdirection of public policy: comparing and combining standardised effect sizes. Journal of Education Policy, 1-17. doi:10.1080/02680939.2017.1280183

Whitehurst, G. J., Falco, F., Lonigan, C. J., Fischel, J. E., DeBaryshe, B. D., Valdez-Menchaca, M. C., & Caulfield, M. (1988). Accelerating language development through picture book reading. Developmental Psychology, 24, 552-558.

Yoder, P. J., Camarata, S., & Gardner, E. (2005). Treatment effects on speech intelligibility and length of utterance in children with specific language and intelligibility impairments. Journal of Early Intervention, 28(1), 34-49.

How to choose phonology goals?

I find out via Twitter (don’t you love Twitter!) that “teach complex sounds first” is making the rounds again (still!) and I am prompted to respond. Besides the fact that I have disproven the theoretical underpinnings of this idea, it bothers me that so many of the assumptions wrapped up in the assertion are unhelpful to a successful intervention. Specifically: we should not be treating “sounds”; there is no agreed-upon and universal ordering of targets from simple to complex; and there is no reason to teach the potential targets one at a time in some particular order anyway. So what should we do? I will describe a useful procedure here with an example.

There is a curious rumour that I promote a “traditional developmental” approach to target selection, a rumour that I must lay to rest. In fact, I have made it clear that I promote a dynamic systems approach. An important concept is the notion of nonlinearity: if you induce gradual linear changes in several potential targets at once, a complex interaction will result, causing a nonlinear change across the system known as a phase shift. How do you choose the targets to work on at once? Françoise and I show how to use a “quick multilinear analysis” to identify potential targets at all levels of the phonological hierarchy, in other words phrases, trochaic or iambic feet, syllables, onsets, rimes or codas, clusters, features or individual phonemes. Many case studies and demonstrations are laid out in our book, which will shortly appear in a beautiful second edition. Then we show how to select three targets for simultaneous treatment using Grunwell’s scheme, which is designed to facilitate progressive change in the child’s phonological system. I will demonstrate both parts of this process here, using a very brief sample from a case study that is described in our book. The child’s speech is delayed for her age of two years, as can be established by comparing her word shapes and phonetic repertoire to the expectations established by Carol Stoel-Gammon (1987).

[Image: Case Study 6-3 speech sample]

Potential treatment targets can be identified by considering strengths and weaknesses at the prosodic and segmental tiers of the phonological hierarchy (full instructions for this quick multilinear analysis are contained in our book). The table below describes units that are present and absent. Note that since her language system is at an early stage of development, her phonology is probably word-based rather than phoneme-based; therefore ‘distinction’ refers to the presence of a phonetic distinction rather than a phonemic contrast.

[Table: Case Study 6-3 quick multilinear analysis]

Now that we have a sense of potential targets from across the whole system, how do we select targets using Grunwell’s scheme? We want to ensure that we address both word shape and segmental goals. We also want to choose one goal to stabilize a variable structure in the system, another to extend something that is established to a new context, and a third to expand the system to include something new. Here are my choices (others are possible):

[Table: Case Study 6-3 goals selected using Grunwell’s scheme]

There is a good chance that fricatives and codas will emerge spontaneously with this plan because we will have laid down the foundation for these structures. If they don’t, it should not be hard to achieve them during the next therapy block. The idea that you can only induce large change in the system by teaching the most complex targets first is clearly not true, as I have explained previously – in fact, complex sounds emerge more easily when the foundation is in place. Furthermore, Schwartz and Leonard (1982) recommended, on the basis of their study of selection effects in early phonological development, that it is best to teach IN words (words built from sounds already in the child’s inventory) to children with small vocabularies – in other words, expand the vocabulary gradually by using word shapes and phonemes that are in the inventory, but combined in new ways.

It would be possible to use the stabilize-extend-expand scheme and choose different, more complex goals. For example, we could consider the nonreduplicated CVCV structure (cubby, bunny, bootie) to be the stabilize goal. Then we could introduce word-final labial stops as the extend goal, generalizing these phones from the onset, where they are well established, to a new word position (up, tub, nap). Finally, we could introduce a word-initial fricative as the expand goal (see, sock, soup). This plan with more complex targets might work, but you risk slower progress, given the empirical findings reported in Rvachew and Nowak (2001) and in Schwartz and Leonard (1982). Furthermore, you would be failing to recognize a major constraint on the structure of her syllables (the limitation to only two segments, VV or CV, with CVV and CVC currently proscribed). If you focus only on introducing “complex sounds” without attending to this major issue at the prosodic levels of her phonological system, you will be in for a rough ride.

I attach here another example, this one a demonstration from the second edition of our book, chapter-8-demonstration-8-2, to appear in December 2016. Françoise and I have made a great effort to show students how to implement an evidence-based approach to therapy. I invite readers to take a peek!

Reading List

Rvachew, S., & Brosseau-Lapré, F. (2018). Developmental Phonological Disorders: Foundations of Clinical Practice (Second Edition). San Diego, CA: Plural Publishing. (Ready for order in December 2016)

Grunwell, P. (1992). Processes of phonological change in developmental speech disorders. Clinical Linguistics & Phonetics, 6, 101-122.

Stoel-Gammon, C. (1987). Phonological skills of 2-year-olds. Language, Speech & Hearing Services in Schools, 18, 323-329.

Rvachew, S., & Bernhardt, B. (2010). Clinical implications of the dynamic systems approach to phonological development. American Journal of Speech-Language Pathology, 19, 34-50.

Rvachew, S. & Nowak, M. (2001). The effect of target selection strategy on sound production learning. Journal of Speech, Language, and Hearing Research, 44, 610-623.

Schwartz, R., & Leonard, L. (1982). Do children pick and choose? An examination of selection and avoidance in early lexical acquisition. Journal of Child Language, 9, 319-336.

Using Phonetics to Teach Phonology

Françoise and I have been working on the second edition of our book for some time now, and the book is finally in the production stage – counting down to a December 2016 release date. One of the decisions we have had to make is whether to keep all the figures that were in the first edition – we must pay the copyright holders (note: not the authors!) for the right to reproduce those figures and tables in our book. It is a difficult decision for each and every figure, given that the costs vary from approximately $100 to $1,000 per figure and there are 99 of them in the book!

Consider the figure shown at the bottom of this post – it illustrates data from research by Goffman and Malin (1999) in which adults and children produced nonsense words with either a trochaic stress pattern (strong-weak) or an iambic stress pattern (weak-strong). Kinematic tracings of lower lip movements are shown. The surprise was that the children modulated the stress pattern of the iambic words in a fairly adult-like manner, albeit with less consistency than the adults. The children did not modulate the stress pattern of the trochaic word, producing it like a spondee, with equal stress on both syllables – an unexpectedly immature pattern. Why did I choose to keep this figure in a book on phonology? Surely the whole point of phonology is to convert speech to an abstract form like this: [ˈpʌpəp] and [pəˈpʌp]? In the end I decided to keep it because I so much want my students to see it – it encapsulates many of the primary themes in our book, as follows:

  1. Basic concepts are essential to understand, and for multilingual students in particular, the figure provides a beautiful visual representation of trochee, spondee, and iamb that is much more effective than a string of phonetic symbols.
  2. What you get is not always what you hear! If you were to transcribe the child saying the word “puppet” with the kinematics shown in the lower left quadrant of the figure, the odds are that you would write [ˈpʌpət], which would represent what you expect to hear rather than exactly what the child said. I spend quite a bit of time talking about the limits of phonetic transcription in the first chapter of the book.
  3. The development of prosody is fundamental to the development of phonology: prosodic frames – word templates made up of syllable shapes and stress patterns that are characteristic of the ambient language – emerge early and support the acquisition of phonemes. These two levels of the phonological hierarchy are intimately interconnected – it really is time to stop teaching linear phonology.
  4. Phonology is fully dependent upon phonetics – you cannot understand phonological development without understanding the articulatory and perceptual substrates.
  5. Having said that, it is not true that phonological development is determined by maturation of the motor system. If it were, the trochaic pattern would emerge first, before the iambic stress pattern, whereas the reverse is shown in the figure. This demonstration can be the trigger for an interesting discussion of competing approaches to intervention.
  6. The figure is a beautiful illustration of the operation of lexical contrast. Why does the child learn to modulate the strong-strong stress pattern to produce a weak-strong iamb before properly mastering the (for English) canonical strong-weak pattern? Because she must do so in order to produce a contrast between these two word templates in the mind of the listener.
  7. The figure is a lovely illustration of how phonology emerges from the dynamic interplay of phonetic, semantic, and social factors; a dynamic systems approach to development is a coherent thread throughout the book.

The thing about a book, however, is that I can only build possibilities into it – the teaching and the learning are constrained by the imagination of the teachers and the learners. I don’t know how many readers will discover, in a paragraph on the development of “interarticulator coordination”, a plethora of important messages about the development of phonology.

Figure 3-7

Figure 3–7. Time and amplitude normalized kinematic tracings of displacement of the lower lip during productions of the nonsense words [ˈpʌpəp] (left) and [pəˈpʌp] (right), recorded from an adult (top) and child (bottom). The corresponding spatiotemporal indexes for the repeated productions shown are: (A) adult trochee STI = 8.56, (B) adult iamb STI = 8.99, (C) child trochee STI = 18.15, and (D) child iamb STI = 14.24. Adapted from Goffman & Malin (1999). Metrical effects on speech movements in children and adults. Journal of Speech, Language, and Hearing Research, 42, Figure 5, p. 1009. Used with permission of the American Speech-Language-Hearing Association.
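For readers curious about what those STI numbers mean: the spatiotemporal index is conventionally computed by time-normalizing and amplitude-normalizing each repetition of a movement record, then summing the standard deviations across repetitions at a fixed set of relative time points (50 in the standard formulation). The sketch below is illustrative only – the function names are my own, not from Goffman and Malin’s analysis code, and it assumes simple linear interpolation for the time normalization:

```python
import math

def _resample(rec, n):
    """Time normalization: linearly interpolate a record onto n evenly spaced points."""
    m = len(rec)
    out = []
    for i in range(n):
        pos = i * (m - 1) / (n - 1)   # fractional index into the original record
        lo = int(pos)
        hi = min(lo + 1, m - 1)
        frac = pos - lo
        out.append(rec[lo] * (1 - frac) + rec[hi] * frac)
    return out

def _zscore(rec):
    """Amplitude normalization: rescale a record to zero mean and unit SD."""
    mean = sum(rec) / len(rec)
    sd = math.sqrt(sum((x - mean) ** 2 for x in rec) / len(rec))
    return [(x - mean) / sd for x in rec]

def spatiotemporal_index(records, n_points=50):
    """Sum the across-repetition standard deviations at each normalized time point."""
    norm = [_zscore(_resample(r, n_points)) for r in records]
    sti = 0.0
    for i in range(n_points):
        col = [rec[i] for rec in norm]
        mean = sum(col) / len(col)
        sti += math.sqrt(sum((x - mean) ** 2 for x in col) / len(col))
    return sti
```

Identical repetitions yield an STI of zero, and greater trial-to-trial variability in movement patterning yields a larger STI – which is why the child’s tracings in the figure carry higher values than the adult’s.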

References

Rvachew, S., & Brosseau-Lapré, F. (2018). Developmental Phonological Disorders: Foundations of Clinical Practice (Second Edition). San Diego, CA: Plural Publishing. http://pluralpublishing.com/publication_dpd2e.htm

Goffman, L., & Malin, C. (1999). Metrical effects on speech movements in children and adults. Journal of Speech, Language, and Hearing Research, 42, 1003-1015.

(edited on August 26, 2016 to correct the copyright date for DPD2e. The second edition will be released in December 2016)