Is Acoustic Feedback Effective for Remediating “r” Errors?

I am very pleased to see a third paper published in the speech-language pathology literature using the single-subject randomization design that I have described in two tutorials, the first in 1988 and the second more recently. Tara McAllister Byun used the design to investigate the effectiveness of acoustic biofeedback treatment to remediate persistent “r” errors in 7 children aged 9 to 15 years. She used the single-subject randomized alternation design with block randomization, including a few unique elements in her implementation of the design. She and her research team provided one traditional treatment session and one biofeedback treatment session each week for ten weeks; however, the order of the traditional and biofeedback sessions was randomized each week. Interestingly, each session targeted the same items (i.e., “r” was the speech sound target in both treatment conditions): rhotic vowels were tackled first and consonantal “r” was introduced later, in a variety of phonetic contexts. (This procedure is a departure from my own practice, in which, for example, Tanya Matthews and I randomly assign different targets to different treatment conditions.)

Another innovation is the outcome measure: a probe constructed of untreated “r” words was given at the beginning and end of each session, so that change over the session (Mdif) was the outcome measure submitted to statistical analysis (our tutorial explains that an advantage of the SSRD is that a nonparametric randomization test can be used to assess the outcome of the study, yielding a p value). In addition, 3 baseline probes and 3 maintenance probes were collected so that an effect size for overall improvement could be calculated. In this way there are actually three time scales for measuring change in this study: (1) change from baseline to maintenance probes; (2) change from baseline to treatment performance, as reflected in the probes obtained at the beginning of each session and plotted over time; and (3) change over a session, reflected in the probes given at the beginning and the end of each session. Furthermore, it is possible to compare differences in within-session change for sessions provided with and without acoustic feedback.
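To make the logic of the randomization test concrete, here is a minimal sketch in Python of how the block-randomized alternation design can be analyzed. The Mdif values and the weekly condition orders below are invented for illustration (they are not Byun’s data); the test statistic is the difference between mean within-session change in the biofeedback and traditional conditions, and the p value comes from enumerating every possible weekly ordering.

```python
from itertools import product

# Hypothetical within-session change scores (Mdif = ending probe minus beginning
# probe), one pair per week: (first session of the week, second session of the week).
# These numbers are invented for illustration; they are NOT Byun's data.
weekly_mdif = [(2, 5), (1, 4), (0, 3), (2, 2), (1, 6),
               (3, 4), (0, 5), (2, 3), (1, 4), (2, 6)]
# True = the biofeedback session happened to come first that week (the actual coin flips).
observed_order = (True, False, True, True, False,
                  False, True, False, True, False)

def mean_difference(order):
    """Mean Mdif in biofeedback sessions minus mean Mdif in traditional sessions."""
    bf, trad = [], []
    for (first, second), bf_first in zip(weekly_mdif, order):
        if bf_first:
            bf.append(first); trad.append(second)
        else:
            bf.append(second); trad.append(first)
    return sum(bf) / len(bf) - sum(trad) / len(trad)

observed_stat = mean_difference(observed_order)

# Under the null hypothesis the condition labels within each week are exchangeable,
# so the randomization distribution is built from all 2**10 = 1024 possible orderings.
all_stats = [mean_difference(order) for order in product([True, False], repeat=10)]

# One-sided p value: the proportion of possible randomizations that yield a statistic
# at least as large as the one actually observed.
p_value = sum(s >= observed_stat for s in all_stats) / len(all_stats)
print(f"observed difference = {observed_stat:.2f}, p = {p_value:.3f}")
```

With only ten blocks the full enumeration is tiny, which is part of what makes the design practical in a clinical caseload.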

I was really happy to see the implementation of the design, but it is fair to say that the results were a dog’s breakfast, as summarized below:

Byun 2017 acoustic biofeedback

The table indicates that two participants (Piper, Clara) showed an effect of biofeedback treatment and generalization learning. Both showed rapid change in accuracy after treatment was introduced, in both conditions, and maintained at least some of that improvement after treatment was withdrawn. Garrat and Ian showed identical trajectories in the traditional and biofeedback conditions, with a late rise in accuracy during treatment sessions, large within-session improvements during the latter part of the treatment period, and good maintenance of those gains. However, neither boy achieved 60% correct responding at any point in the treatment program. Felix, Lucas and Evan demonstrated no change in probe scores in either condition across the 20 sessions of the experiment. Lucas started at a higher level and therefore his probe performance was more variable: because he actually showed a within-session decline during traditional sessions while showing stable performance within biofeedback sessions, the statistics indicate a treatment effect in favour of acoustic biofeedback even though no actual gains were observed.

So, this long description of the results brings me to two conclusions: (1) the alternation design was the wrong choice for the hypothesis in this experiment; and (2) biofeedback was not uniquely effective for these children: even in those cases where it looks like there was an effect, the children were responsive to both biofeedback and the traditional intervention.

In a previous blog post I described the alternation design; however, there is another version of the single-subject randomization design that would be more appropriate for Tara’s hypothesis. The thing about acoustic biofeedback is that it is not fundamentally different from traditional speech therapy, involving a similar sequence of events: (i) the SLP says a word as an imitative model; (ii) the child imitates the word; (iii) the SLP provides informative or corrective feedback. In the case of incorrect responses in the traditional condition in Byun’s study, the SLP provided information about articulatory placement and reminded the child that the target involved certain articulatory movements (“make the back part of your tongue go back”). In the case of incorrect responses in the acoustic biofeedback condition, the SLP made reference to the acoustic spectrogram when providing feedback and reminded the child that the target involved certain formant movements (“make the third bump move over”). First, the first two steps are identical in both conditions; second, the articulatory cues given in the traditional condition can be expected to be remembered, so their effects will carry over into the biofeedback sessions. Therefore we can consider acoustic biofeedback to be an add-on to traditional therapy, and what we want to know about is the value added.

For that question the phase design is more appropriate. In this case there would be 20 sessions (2 per week over 10 weeks, as in Byun’s study), and each session would be planned with the same format: beginning probe (optional), 100 practice trials with feedback, ending probe. The difference is that the starting point for the introduction of acoustic biofeedback would be selected at random. All the sessions that precede the randomly selected start point would be conducted with traditional feedback and all the remainder would be conducted with acoustic biofeedback. The first three sessions would always be designated as traditional and the last three as biofeedback, within the 26-session protocol described by Byun. Across the 7 children this would end up looking like a multiple baseline design, except that (1) the duration of the baseline phase would be determined by random selection for each child; and (2) the baseline phase is actually the traditional treatment, with the experimental phase testing the value-added benefit of biofeedback. There are three possible categories of outcome: no change after the introduction of biofeedback, an immediate change, or a late change. As with any single subject design, the change might be in level, trend or variance, and the test statistic can be designed to capture any of those types of change. The statistical analysis asks where the obtained test statistic falls within the distribution of all possible results, given all of the possible random selections of the starting point. Rvachew and Matthews (2017) provide a more complete explanation of the statistical analysis.
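Here is a minimal sketch of the corresponding randomization test for the phase design. The probe scores, the constraint that each phase contain at least three sessions, and the “randomly drawn” start point are all invented for illustration (they are not values from Byun’s study or from Clara’s data); the same machinery accepts any test statistic that captures a change in level, trend or variance.

```python
# A minimal sketch of the randomization test for the phase design, using
# invented beginning-of-session probe scores (not data from Byun's study).
probe_scores = [10, 12, 15, 18, 22, 25, 27, 30, 31, 33,
                33, 35, 34, 36, 35, 37, 36, 38, 37, 39]   # 20 sessions

# Possible start points for the biofeedback phase: assuming the first three and the
# last three sessions are always traditional and biofeedback respectively, the phase
# change can begin at any session from 4 through 18.
possible_starts = range(4, 19)

def mean_shift(start):
    """Mean probe score in the biofeedback phase minus the traditional phase."""
    trad = probe_scores[:start - 1]          # sessions 1 .. start-1
    bf = probe_scores[start - 1:]            # sessions start .. 20
    return sum(bf) / len(bf) - sum(trad) / len(trad)

actual_start = 11                            # the start point that was randomly drawn
observed_stat = mean_shift(actual_start)

# The randomization distribution: the same statistic computed for every start point
# that could have been drawn.
all_stats = [mean_shift(s) for s in possible_starts]
p_value = sum(s >= observed_stat for s in all_stats) / len(all_stats)
print(f"observed shift = {observed_stat:.2f}, p = {p_value:.3f}")
```

The p value is simply the rank of the obtained statistic within the set of statistics that could have been obtained under every permissible start point.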

I show below an imaginary result for Clara, using the data presented for her in Byun’s paper, as if the traditional treatment came first and then the biofeedback intervention. If we pretend that the randomly selected start point for the biofeedback intervention occurred exactly in the middle of the treatment period, the test statistic is the difference between the M(bf) and the M(trad) scores, in this case -2.308. All other possible random selections of the starting point lead to 19 other possible mean differences, and 18 of them are bigger than the obtained test statistic, leading to a p value of 18/20 = .9. In this data set the probe scores are actually bigger in the earlier part of the intervention, when the traditional treatment is used, and they do not get bigger when the biofeedback is introduced. These are the beginning-of-session probe scores obtained by Clara, but Byun obtained a significant result in favour of biofeedback by using block randomization and examining change across each session. However, I am not completely sure that the improvements from beginning to ending probes are a positive sign: this result might reflect a failure to maintain gains from the previous session in one condition or the other.

Hypothetical Clara in SSR Phase Design

There are several reasons to think that both interventions used in Byun’s study might result in unsatisfactory generalization and maintenance. We discuss the principles of generalization in relation to theories of motor learning in Developmental Phonological Disorders: Foundations of Clinical Practice. One important principle is that the child needs a well-established representation of the acoustic-phonetic target. All seven of the children in Byun’s study had poor auditory processing skills, but no part of the treatment program addressed phonological processing, phonological knowledge or acoustic-phonetic representations. Second, it is essential to have the tools to monitor and use self-produced feedback (auditory, somatosensory) to evaluate success in achieving the target. Both the traditional and the biofeedback intervention put the child in the position of being dependent upon external feedback. The outcome measure focused attention on improvements from the beginning of the practice session to the end; however, the first principle of motor learning is that practice performance is not an indication of learning. The focus should have been on the sometimes large decrements in probe scores from the end of one session to the beginning of the next. The children had no means of maintaining any of those performance gains. Acoustic feedback may be a powerful means of establishing a new response, but it is a counterproductive tool for maintenance and generalization learning.

Reading

McAllister Byun, T. (2017). Efficacy of Visual–Acoustic Biofeedback Intervention for Residual Rhotic Errors: A Single-Subject Randomization Study. Journal of Speech, Language, and Hearing Research, 60(5), 1175-1193. doi:10.1044/2016_JSLHR-S-16-0038

Rvachew, S., & Matthews, T. (2017). Demonstrating treatment efficacy using the single subject randomization design: A tutorial and demonstration. Journal of Communication Disorders, 67, 1-13. doi:10.1016/j.jcomdis.2017.04.003

 


Maternal Responsiveness to Babbling

Over the course of my career the most exciting change in speech-language pathology practice has been the realization that we can have an impact on speech and language development by working with the youngest patients, intervening even before the child “starts to talk”. Our effectiveness with these young patients is dependent upon the growing body of research on the developmental processes that underlie speech development during the first year of life. Now that we know that the emergence of babbling is a learned behavior, influenced by auditory and social inputs, this kind of research has mushroomed, although our knowledge remains constrained because these studies are hugely expensive, technically difficult and time-consuming to conduct. Therefore I was very excited to see a new paper on the topic in JSLHR this month:

Fagan, M. K., & Doveikis, K. N. (2017). Ordinary Interactions Challenge Proposals That Maternal Verbal Responses Shape Infant Vocal Development. Journal of Speech, Language, and Hearing Research, 60(10), 2819-2827. doi:10.1044/2017_JSLHR-S-16-0005

The purpose of this paper was to examine the hypothesis that maternal responses to infant vocalizations are a primary cause of the age-related change in the maturity of infant speech during the period from 4 through 10 months of age. This time period encompasses three stages of infant vocal development: (1) the expansion stage, that is, producing vowels and a broad variety of vocalizations that are not speech-like but nonetheless exercise vocal parameters such as pitch, resonance and vocal tract closures; (2) the canonical babbling stage, that is, producing speech-like CV syllables, singly or in repetitive strings; and (3) the integrative stage, that is, producing a mix of babbling and meaningful words. In the laboratory, contingent verbal responses from adults increase the production rate of mature syllables by infants. Fagan and Doveikis asked whether this shaping mechanism, demonstrated in the laboratory, explains the course of infant speech development in natural interactions in real-world settings.

They coded five and a quarter hours of natural interactions recorded between mothers and infants in the home environment, from 35 dyads in a cross-sectional study. Their analysis focused on maternal behaviors in the 3-second interval following an infant vocalization, defined as a speech-like vowel or syllable type utterance (a small sketch of this kind of windowed coding follows the list below). They were specifically interested in whether maternal vocalizations in this interval would be responsive (prompt, contingent, relevant to the infant’s vocal behavior, e.g., affirmations, questions, imitations) or nonresponsive (prompt but not meaningfully related to the infant’s vocal behavior, e.g., activity comment, unrelated comment, redirect). This is a summary of their findings:

  • Mothers vocalized 3 times more frequently than infants.
  • One quarter of maternal vocalizations fell within the 3 sec interval after an infant vocalization.
  • About 40% of the prompt maternal vocalizations were responsive and the remainder were nonresponsive, according to their definitions (derived from Bornstein et al., 2008).
  • Within the category of responsive maternal vocalizations, the most common were questions and affirmations.
  • A maternal vocalization of some kind occurred promptly after 85% of all infant utterances.
  • Imitations of the infant utterance (also in the responsive category) occurred after approximately 11% of infant utterances (my estimate from their data).
  • Mothers responded preferentially to speech-like vocalizations but not differentially to CV syllables versus vowel-only syllables. In other words, it did not appear that maternal reinforcement or shaping of mature syllables could account for the emergence and increase in this behavior with infant age.
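For readers who like to see the mechanics, here is a small sketch of the kind of windowed coding described above, applied to time-stamped transcripts. The timestamps, the category labels, and the rule of counting at most one response per vocalization are my own illustrative assumptions; only the 3-second window and the responsive/nonresponsive distinction come from the description of the study.

```python
# A minimal sketch (with invented timestamps) of tallying contingent maternal
# responses from time-stamped transcripts: a maternal utterance counts as prompt
# if it begins within 3 seconds of the end of an infant vocalization.
infant_vocalizations = [(12.0, 12.6), (20.3, 20.9), (31.5, 32.0)]   # (onset, offset) in seconds
maternal_utterances = [(13.1, "affirmation"), (25.0, "redirect"), (32.8, "imitation")]

RESPONSIVE = {"affirmation", "question", "imitation"}
WINDOW = 3.0

prompt, responsive = 0, 0
for _, infant_offset in infant_vocalizations:
    # Find a maternal utterance that begins within the 3-second window.
    for onset, category in maternal_utterances:
        if 0 <= onset - infant_offset <= WINDOW:
            prompt += 1
            if category in RESPONSIVE:
                responsive += 1
            break   # count at most one prompt response per infant vocalization

print(f"{prompt}/{len(infant_vocalizations)} infant vocalizations received a prompt response")
print(f"{responsive} of those responses were in a responsive category")
```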

One reason I like this paper so much is that some of the results accord with data that we are collecting in my lab, in a project coordinated by my doctoral student Pegah Athari, who is showing great skill and patience, having worked her way through 10 hours of recordings from 5 infants in a longitudinal study (3 months of recording from each infant, but covering ages 6 through 14 months overall). The study is designed to explore mimicry specifically, as a responsive utterance that may be particularly powerful (mimicry involves full or partial imitation of the preceding utterance). We want to be able to predict when mimicry will occur and to understand its function. In our study we examine the 2-second intervals that precede and follow each infant utterance. Another important difference is that we record the interactions in the lab, but there are no experimental procedures; we arrange the setting and materials to support interactions that are as naturalistic as possible. These are some of our findings:

  • Mothers produced 1.6 times as many utterances as their infants.
  • Mothers said something after the vast majority of the infant’s vocalizations just as observed by Fagan and Doveikis.
  • Instances in which one member of the dyad produced an utterance that is similar to the other were rare, but twice as common in the direction of mother mimicking the infant (10%), compared to the baby mimicking the mother (5%).
  • Infant mimicry of the mother is significantly (but not completely) contingent on the mother modeling one of the infant’s preferred sounds in her utterance (mean contingency coefficient = .34).
  • Maternal mimicry is significantly (but not completely) contingent on perceived meaningfulness of the child’s vocalization (mean contingency coefficient = .35; see the sketch after this list). In other words, it seems that the mother is not specifically responding to the phonetic character of her infant’s speech output; rather, she makes a deliberate attempt to teach meaningful communication throughout early development.
  • The number of utterances that the mother perceives to be meaningful increases with the infant’s age, although this is not a hard and fast rule because regressions occur when the infant is ill and the canonical babbling ratio declines. Mothers will also respond to nonspeechlike utterances in the precanonical stage as being meaningful (animal noises, kissing and so forth).
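The contingency coefficients quoted above can be computed from a simple 2x2 cross-tabulation. The sketch below uses Pearson’s contingency coefficient, C = sqrt(chi2 / (chi2 + N)), which I assume is the statistic intended; the counts are invented for illustration and are not our lab’s data.

```python
import math

# A minimal sketch of a Pearson contingency coefficient for a 2x2 table of
# invented counts: rows = mother modeled a preferred sound (yes/no),
# columns = infant mimicked the mother (yes/no).
table = [[18, 22],    # preferred sound modeled:     mimicry, no mimicry
         [ 5, 55]]    # preferred sound not modeled: mimicry, no mimicry

row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
n = sum(row_totals)

# Pearson chi-square statistic computed from observed and expected counts.
chi2 = 0.0
for i, row in enumerate(table):
    for j, observed in enumerate(row):
        expected = row_totals[i] * col_totals[j] / n
        chi2 += (observed - expected) ** 2 / expected

contingency_coefficient = math.sqrt(chi2 / (chi2 + n))
print(f"chi2 = {chi2:.2f}, C = {contingency_coefficient:.2f}")
```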

We want to replicate our findings with another 5 infants before we try to publish our data, but I feel confident that our conclusions will be subtly different from Fagan and Doveikis’, despite general agreement with their suggestion that self-motivation and access to auditory feedback from the infant’s own vocal output play a primary role in infant vocal development. I think that maternal behavior may yet prove to have an important function, however. It is necessary to think about learning mechanisms in which low-frequency, random inputs are actually helpful. I have talked about this before on this blog, in a post about the difference between exploration and exploitation in learning. Exploration is a phase during which trial-and-error actions help to define the boundaries of the effective action space and permit discovery of the actions that are most rewarding. Without exploration, the learner might settle on a small repertoire of actions that are moderately rewarding and never discover others that will be needed as problems become more complex. Exploitation is the phase during which the learner uses the actions that have been learned to accomplish increasingly complex goals.

The basic idea behind the exploration-exploitation trade-off is that long-term learning is supported by using an exploration strategy early in the learning process. Specifically, many studies have shown that more variable responding early in learning is associated with easier learning of difficult skills later in the learning process. For early vocal learning, the expansion stage corresponds to this principle nicely: the infant produces a broad variety of vocalizations, including squeals, growls, yells, raspberries, quasiresonant and fully resonant vowels, and combinations called marginal babbles. These varied productions lay the foundations for the production of speech-like syllables during the coming canonical babbling stage. Learning theorists have demonstrated that environmental inputs can support this kind of free exploration. Specifically, a high reinforcement rate will promote a high response rate, but it is important to reinforce variable responses early in the learning process.
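A toy simulation, not a model of infant vocal learning, can make the exploration-exploitation idea concrete: a learner that samples its options broadly before settling down usually ends up exploiting a better action than a learner that commits to the first thing that works. Everything in the sketch (the three actions, their payoffs, the trial counts) is arbitrary.

```python
import random

# Toy illustration of exploration versus exploitation with three "actions"
# that pay off at different average rates. The agent does not know the rates.
random.seed(1)
true_means = [0.3, 0.5, 0.8]          # action 2 is actually the best

def pull(action):
    """Return 1.0 with probability true_means[action], else 0.0."""
    return 1.0 if random.random() < true_means[action] else 0.0

def run(explore_trials, total_trials=200):
    counts = [0, 0, 0]
    totals = [0.0, 0.0, 0.0]
    reward = 0.0
    for t in range(total_trials):
        if t < explore_trials:
            action = random.randrange(3)                      # explore: try anything
        else:
            estimates = [totals[a] / counts[a] if counts[a] else 0.0 for a in range(3)]
            action = estimates.index(max(estimates))          # exploit: best estimate so far
        r = pull(action)
        counts[action] += 1
        totals[action] += r
        reward += r
    return reward / total_trials

print("greedy from the start :", run(explore_trials=1))
print("explore for 30 trials :", run(explore_trials=30))
```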

In the context of mother-infant interactions, it may be that mothers reinforce many different kinds of infant vocalizations in the early stages because they are trying to teach words: the infant is not really capable of producing real words, so the mother has to work with what she hears. She does do something after almost every infant utterance, however, so she encourages many different practice trials on the part of the infant. It is also possible (although not completely proven) that imitative responses on the part of the mother are particularly reinforcing to the infant. In the short excerpt of a “conversation” between a mum and her 11-month-old infant shown here, it can be seen that she responds to every one of the infant’s utterances, encouraging a number of variable responses and specifically mimicking those that are most closely aligned with her intentions.

IDV11E03A EXCERPT

It is likely that when alone in the crib, the infant’s vocalizations will be more repetitive, permitting more specific practice of preferred phonetic forms such as “da” (infants are known to babble more when alone than in dyadic interactions, especially when scientists feed back their vocalizations over loudspeakers). The thing is, the infant’s goals are not aligned with the mother’s. In my view, the most likely explanation for infant vocal learning is self-supervised learning. The infant is motivated to produce specific utterances and finds achievement of those utterances to be intrinsically motivating. What kind of utterances does the infant want to produce? Computer models of this process have settled on two factors: salience and learning progress. That is, the infant enjoys producing sounds that are interesting and that are not yet mastered. The mother’s goals are completely different (teach real words), but her behaviors serve the infant’s goals nonetheless by: (1) supporting perceptual learning of targets that correspond to the ambient language; (2) encouraging sound play/practice by responding to the infant’s attempts with a variety of socially positive behaviors; (3) reinforcing variable productions by modeling a variety of forms and accepting a variety of attempts as approximations of meaningful utterances when possible; and (4) increasing the salience of speech-like utterances through mimicry of these rare utterances. The misalignment of the infant’s and the mother’s goals is helpful to the process because, if the mother were trying to teach the infant specific phonetic forms (CV syllables, for example), the exploration process might be curtailed prematurely and self-motivation mechanisms might be hampered.
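As a schematic illustration of that idea (my own sketch, not any published model of vocal learning), a learner guided by salience and learning progress would prefer targets that are both interesting and still improving:

```python
# A schematic sketch of target selection driven by salience and learning progress.
# Each candidate sound has a salience score and a history of recent error values;
# "learning progress" is the recent drop in error, and the learner prefers targets
# that are both salient and still improving. All numbers are made up.
candidates = {
    "ba": {"salience": 0.6, "recent_errors": [0.50, 0.48, 0.47]},   # nearly flat: little progress
    "da": {"salience": 0.8, "recent_errors": [0.70, 0.55, 0.40]},   # steep drop: high progress
    "ii": {"salience": 0.3, "recent_errors": [0.20, 0.18, 0.17]},   # nearly mastered
}

def learning_progress(errors):
    """Drop in error from the oldest to the newest attempt in the window."""
    return max(errors[0] - errors[-1], 0.0)

def interest(info):
    # A simple product; the weighting scheme is arbitrary, for illustration only.
    return info["salience"] * learning_progress(info["recent_errors"])

next_target = max(candidates, key=lambda name: interest(candidates[name]))
print("next practice target:", next_target)   # "da" under these made-up numbers
```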

What are the clinical implications of these observations? I am not sure yet. I need a lot more data to feel confident that I can predict maternal behavior in relation to infant behavior. But in the meantime it strikes me that SLPs engage in a number of parent teaching practices that assume that responsiveness by the parent is a “good thing”, even though it is not certain that parents typically respond to their infant’s vocalizations in quite the ways that we expect. Meanwhile, procedures to encourage vocal play remain a valuable part of your tool box, as described in Chapter 10 of our book:

Rvachew, S., & Brosseau-Lapre, F. (2018). Developmental Phonological Disorders: Foundations of Clinical Practice (Second ed.). San Diego, CA: Plural Publishing, Inc.

 

Single Subject Randomization Design For Clinical Research

During the week of April 23–29, 2017, Susan Ebbels curated WeSpeechies on the topic Carrying Out Intervention Research in SLP/SLT Practice. Susan kicked off the week with a link to her excellent paper that discusses the strengths and limitations of various procedures for conducting intervention research in the clinical setting. As we would expect, a parallel-groups randomized control design was deemed to provide the best level of experimental control. Many ways of studying treatment-related change within individual clients, with increasing degrees of control, were also discussed. However, all of the ‘within participant’ methods described were vulnerable, to varying degrees, to confounding by threats to internal validity such as history, selection, practice, fatigue, maturation or placebo effects.

One design was missing from the list because it is only just now appearing in the speech-language pathology literature: the Single Subject Randomization Design. The design (actually a group of designs in which treatment sessions are randomly allocated to treatment conditions) provides the superior internal validity of the parallel-groups randomized control trial by controlling for extraneous confounds through randomization. As an added benefit, the results of a single subject randomization design can be submitted to a statistical analysis, so that clear conclusions can be drawn about the efficacy of the experimental intervention. At the same time, the design can be feasibly implemented in the clinical setting and is perfect for answering the kinds of questions that come up in daily clinical practice. For example, randomized control trials have shown that speech perception training is an effective adjunct to speech articulation therapy on average, when applied to groups of children, but you may want to know if it is a necessary addition to your therapy program for a specific child.

Furthermore, randomized single-subject experiments are now accepted as a high level of research evidence by the Oxford Centre for Evidence-Based Medicine. An evidence hierarchy has been created for rating single-subject trials, putting randomized single-subject experiments at the top, as shown in the following table taken from Romeiser Logan et al. (2008).

 

Tanya Matthews and I have written a tutorial showing exactly how to implement and interpret two versions of the Single Subject Randomization Design, a phase design and an alternation design. The accepted manuscript is available but behind a paywall at the Journal of Communication Disorders. In another post I will provide a mini-tutorial showing how the alternation design could be used to answer a clinical question about a single client.

Further Reading

Ebbels, Susan H. 2017. ‘Intervention research: Appraising study designs, interpreting findings and creating research in clinical practice’, International Journal of Speech-Language Pathology: 1-14.

Kratochwill, Thomas R., and Joel R. Levin. 2010. ‘Enhancing the scientific credibility of single-case intervention research: Randomization to the rescue’, Psychological Methods, 15: 124-44.

Romeiser Logan, L., R. Hickman, R.R. Harris, S.R. Harris, and C. Heriza. 2008. ‘Single-subject research design: recommendations for levels of evidence and quality rating’, Developmental Medicine and Child Neurology, 50: 99-103.

Rvachew, S. 1988. ‘Application of single subject randomization designs to communicative disorders research’, Human Communication Canada (now Canadian Journal of Speech-Language Pathology and Audiology), 12: 7-13. [open access]

Rvachew, S. 1994. ‘Speech perception training can facilitate sound production learning.’, Journal of Speech and Hearing Research, 37: 347-57.

Rvachew, Susan, and Tanya Matthews. 2017. ‘Demonstrating treatment efficacy using the single subject randomization design: A tutorial and demonstration’, Journal of Communication Disorders, 67: 1-13.

 

How to choose a control condition for speech therapy research

This post is an addendum to a previous post, “What is a control group?”, inspired by a recently published paper (“Control conditions for randomised trials of behavioural interventions in psychiatry: a decision framework”, Early View, The Lancet Psychiatry, March 2017). Following a brief review of the literature on effect sizes associated with different types of control conditions, the paper offers a framework for choosing an appropriate control condition in behavioral trials. The types of control conditions discussed are as follows:

  • Active comparator
  • Minimal treatment control
  • Nonspecific factors control
  • No-treatment control
  • Patient choice
  • Pill placebo
  • Specific factors component control
  • Treatment as usual
  • Waitlist control

The considerations for choosing one of these control conditions for testing a behavioral intervention are (1) participant risk; (2) trial phase; and (3) available resources. With respect to participant risk, more active interventions should be provided as the control condition when the risk of withholding treatment (especially when known effective treatments are available) is high. Therefore, when making this decision, characteristics of the participant population and of the available treatments will play a role in the decision-making process.

Regarding trial phase, early-stage exploratory trials should be concerned with the risk of Type II error; in other words, the researcher will want to maximize the chances of finding a benefit of a potentially helpful new intervention. Therefore, a waitlist control group might be appropriate at this stage of the research process, given that waitlist controls are associated with large effect sizes in behavioral trials. In the later stages of the research program, the researcher should strive to minimize Type I error; in other words, it is important to guard against concluding that an ineffective treatment is helpful. An active comparator would then be a logical choice, although the sample size would need to be large given that the effect size is likely to be small.
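A rough sample-size calculation shows why this matters. The sketch below uses the standard normal-approximation formula for a two-group comparison with purely illustrative effect sizes (they are not estimates from any particular speech therapy trial):

```python
from statistics import NormalDist

# Normal-approximation sample size per group for a two-group comparison:
# n per group ~= 2 * (z_{1-alpha/2} + z_{power})^2 / d^2, where d is the
# expected standardized effect size. Effect sizes below are placeholders.
z = NormalDist().inv_cdf

def n_per_group(effect_size, alpha=0.05, power=0.80):
    return 2 * (z(1 - alpha / 2) + z(power)) ** 2 / effect_size ** 2

print(f"d = 0.8 (e.g., vs waitlist)      : ~{n_per_group(0.8):.0f} per group")
print(f"d = 0.3 (e.g., vs active control): ~{n_per_group(0.3):.0f} per group")
```

Under this formula, roughly 25 participants per group suffice to detect d = 0.8 with 80% power, whereas d = 0.3 requires on the order of 175 per group.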

Finally, the resources available to the researchers will influence the choice of control condition. For example, in a late-stage trial an active comparator provided by trained and monitored study personnel would be the best choice in most circumstances; however, in this case the provision of the control may be at least as expensive as the provision of the experimental treatment. When sufficient resources are lacking, the cost-effective alternative might be to ask the usual community provider to administer treatment as usual, although every effort should be made to describe the control intervention in detail.

A very nice graphic is provided (Figure 2) to illustrate the decision framework and can be applied to speech therapy trials. There are a number of interventions that have been in use or are emerging in speech therapy practice with a minimal evidence base. We can consider the choice of appropriate control condition for the assessment of these interventions.

Ultrasound intervention for school-aged children with residual speech errors has been examined in quite a number of single-subject studies but is now overdue for a randomized control trial. Given that the exploratory work has been completed in single-subject trials, I would say that we could proceed to a phase 3 RCT. The risk to the participant population is more difficult to conceptualize. You could say that it is low because these children are not at particular risk for poor school outcomes or other harmful sequelae of non-intervention, and the likelihood of a good speech outcome will not change much after the age of nine. The cost of providing an active control will be high because these children are often low priority for intervention in the school setting. Therefore, according to Figure 2, a no-treatment control would be appropriate when you make this assumption. On the other hand, you could argue that the participant risk of NOT improving is very high: all the evidence demonstrates that residual errors do not improve without treatment after this age. If you consider the participant risk to be higher, especially considering community participation and psychosocial factors, then the appropriate control condition would be something more vigorous: patient choice, an active comparator, a nonspecific factors control or a specific factors component control. Given the relatively early days of this research, small trials utilizing these control conditions, in that order, might be advisable.

Metaphon as a treatment for four-year-olds with severe phonological delay and associated difficulties with phonological processing has not, to my knowledge, been tested in a large-scale RCT. The population would be high risk by definition, due to the likelihood of experiencing delays in the acquisition of literacy skills if the speech delay is not resolved prior to school entry. Effective treatment options are known to exist. Therefore, the appropriate control condition would be an active comparator; in other words, another treatment that is known to be effective with this population. Another option would be a specific factors component control that examines the efficacy of specific components of the Metaphon approach. For example, the meaningful minimal pairs procedure could be compared directly to the full Metaphon approach, with speech and phonological processing skills as the outcome variables. Similar trials have been conducted by Anne Hesketh and in my own lab (although not involving Metaphon specifically).

PROMPT has still not been tested in good-quality single-subject or parallel-groups research. If a phase 2 trial were planned for three-year-olds with suspected apraxia of speech, treatment as usual would be the appropriate control condition according to Figure 2. The speech condition is too severe to ethically withhold treatment, and the research program is not advanced enough for a specific factors component control, although this would be the next step.

Finally, an RCT of the effectiveness of Speech Buddies to stimulate /s/ in 3-year-olds with speech delay could be implemented. In this case, the participant group would be low risk, due to the likelihood of spontaneous resolution of the speech delay. Given a phase 2 trial, either a no-treatment or a waitlist control could be implemented.

The authors of this framework conclude by recommending that researchers justify their choice of control condition in every trial protocol. They further recommend that a waitlist control be used only when it is the only ethical choice, and state that “no behavioral treatment should be included in treatment guidelines if it is only supported by trials using a waitlist control group or meta-analytic evidence driven by such trials.” To me, this is eminently sensible advice for speech and language research as well.

And this I believe concludes my trilogy of posts on the control group!

Further Reading

What is a control group? Developmental Phonological Disorders blog post, February 5, 2017

Using effect sizes to choose a speech therapy approach, Developmental Phonological Disorders blog post, January 31, 2017

Gold, S. M., Enck, P., Hasselmann, H., Friede, T., Hegerl, U., Mohr, D. C., & Otte, C. Control conditions for randomised trials of behavioural interventions in psychiatry: a decision framework. The Lancet Psychiatry. doi:10.1016/S2215-0366(17)30153-0

Hesketh, A., Dima, E., & Nelson, V. (2007). Teaching phoneme awareness to pre-literate children with speech disorder: a randomized controlled trial. International Journal of Language and Communication Disorders, 42(3), 251-271.

Rvachew, S., & Brosseau-Lapré, F. (2015). A Randomized Trial of 12-Week Interventions for the Treatment of Developmental Phonological Disorder in Francophone Children. American Journal of Speech-Language Pathology, 24(4), 637-658. doi:10.1044/2015_AJSLP-14-0056

Advocacy and Research

On May 9th, 2014, at the annual conference of Speech-Language and Audiology Canada, I was immensely honoured to receive the Eve Kassirer Award for Outstanding Professional Achievement. At the time I understood that I had two minutes to make some remarks, but then we were asked to reduce them to one minute, so I improvised what I recall was pretty much babble. I have therefore decided to expand upon those remarks in my blog, with cross-posting to the SAC site. I do recall that I had enough presence of mind to thank the award committee and my nominators, Françoise Brosseau-Lapré and Susan Rafaat, to whom I am extremely grateful.

Judy Meintzer, President of SAC, made a lovely introduction that focused on some of my administrative accomplishments, many having to do with student education, and therefore it is perhaps not surprising that my most accomplished student, Françoise, now an Assistant Professor at Purdue University, nominated me for this award. In my own mind, however, my career has been primarily marked by my efforts to conduct research that will have direct implications for clinical practice or health care policy, and then to communicate those implications to clinicians and policy makers. Over the course of my career I have been gratified by the recognition that these efforts have received. My doctoral dissertation on infant babble, for example, was not such a large thing, but subsequent efforts to highlight early vocal development as an important stage of language development were recognized with CASLPA’s media award in 2000. Similarly, my contribution to research on the topic of maximum performance tasks is tiny, but my efforts to teach SLPs to apply this assessment technique accurately and to promote its use even with young patients were recognized with a CASLPA Editor’s award in 2007. My work in the area of phonological awareness and speech sound disorders is well known, but it was my communication of the implications of this work to pediatricians that was recognized with the Dr. Noni MacDonald award, also in 2007. The international recognition that I received with ASHA Fellowship in 2012 reflected in part the clinical nature and reach of my research. I think that it is no accident that I received the Eve Kassirer award now, when I am fully immersed in the Wait Times Benchmark project, a Pan-Canadian Alliance initiative coordinated by Susan Rafaat that I will write more about in a forthcoming blog post. Again, my focus is not just on ensuring that the wait times recommendations are evidence-based but also on developing an effective and well-branded communication strategy for promoting the use of those benchmarks.

So now I get to the points that I was trying to make somewhat inarticulately on the evening of May 9th. I had spent much of the conference talking to attendees about the Wait Times Benchmark for Speech Sound Disorders while handing out the cards announcing the new recommendation. I had many interesting conversations about the challenges of reducing wait times in different jurisdictions across Canada. I know that individual SLPs often feel powerless to effect change or to make a contribution to solving a problem that big. The solutions, however, lie simultaneously in advocacy and research. This is where membership in your national association (in the Canadian context, SAC) is so critical. SAC has proven itself to be absolutely superb at advocacy, and the power of SAC’s voice is completely dependent upon the size of its membership. Effective advocacy is also reliant upon good information, reliable and relevant to the practices and policies we are promoting. SAC has used survey research very effectively to communicate about interprovincial variation in the achievement of national standards for infant hearing screening, for example, and its chart showing SLP and audiologist numbers per capita is stunning. Just as important is the need for more clinical research to help clinicians deliver services more effectively and efficiently if we are going to meet benchmarks for timely and effective provision of care. It is a matter of great concern to me that Canada has no research funding body equivalent to the National Institute on Deafness and Other Communication Disorders, and therefore it is very difficult to get funding in Canada for applied research in speech-language pathology or audiology. The SAC Clinical Research Grants program is a minuscule first step, but one that must be encouraged and expanded.

To recap, if we are going to ensure that children and adults with hearing, communication and swallowing difficulties get the services that they need when they need them, the most important actions that we can take as individuals are to join SAC, encourage our colleagues to join SAC, and promote SAC’s efforts to fund clinical research.