OME and Speech Therapy

A new paper has been published (Brennan-Jones et al., JSLHR, 2020) that examines the relationship between the outcome of a single tympanometry assessment at age 6 and PPVT scores at ages 6 and 10 and CELF scores at age 10. My doctoral thesis was on the topic of otitis media with effusion (OME), and I noticed that the authors curiously omitted the most important large-sample prospective study from consideration in both their introduction and their discussion. The omission seems to have been strategic: the authors were motivated to compare their own study to others that had been flawed by ascertainment bias. However, there are other excellent studies that used a prospective design with good-quality sampling procedures. Furthermore, these other studies have the advantage of multiple assessments of middle ear function at an age that is of particular relevance to language development. It is instructive to consider the findings of the literature as a whole when attempting to draw conclusions about the clinical implications of Brennan-Jones et al.’s findings. The study cannot stand alone. For this reason, I offer my own commentary.

  1. OME is Normal

The first and most important fact to understand about otitis media with effusion is that it is normal. Because it is a “silent” condition, the fluid in the child’s middle ear can remain unnoticed for 30 or more days. Even worse, common treatments such as antibiotics or decongestants are quite useless when it comes to clearing up the fluid. Although infection is dangerous to the child’s health, it is the fluid that impairs hearing and it is the fluid that is hardest to cure. So children can spend a lot of their life with suboptimal hearing. That study that Brennan-Jones et al. ignored? It involved frequent prospective monitoring of middle-ear status in 2253 infants, from 61 days until 2 years of age (the Pittsburgh study by Paradise et al.). The proportion of infants who were observed to have middle ear effusion at least once was 48%, 79% and 91% at ages 6, 12, and 24 months respectively. On average these infants spent about 20% of their life with fluid in one or both ears. A similar study conducted in Boston (Teele, Klein & Rosner) followed children from birth to age 3 and recorded a range of 0 to 500 days with middle ear effusion and an average of 116 days. Half the sample experienced more than 90 days with middle ear effusion, and almost half the sample had a bout of OME during their first year. To summarize, nearly every child gets at least one ear infection, but the number of days with middle ear effusion varies greatly from child to child.

  2. OME Causes Significant Hearing Loss

Almost every paper that discusses the conductive hearing loss associated with OME describes it as “mild” because most children achieve pure tone average thresholds of 20 to 25 decibels during an episode of OME and only 10% suffer losses greater than 40 dB (see Roberts et al. for review). However, the amount of hearing loss varies greatly over the course of each episode and across the population of children who have OME. Furthermore, children require a much greater signal-to-noise ratio than adults to achieve the same perceptual performance when identifying and discriminating speech signals. The same level of hearing loss that is mild for an adult with normal language abilities is significant for an infant or young child who is engaged in the task of learning his or her first language.

  3. OME is Associated with Variations in Language Development

Both the Pittsburgh study (2253 infants monitored prospectively from birth) and the Boston study (205 infants followed prospectively from birth) found that the amount of time with middle ear effusion was correlated with language development. I am reproducing some of the data below, grouped according to days with MEE ascending down the rows and SES categories ascending across the columns. In both studies, MEE and SES are significant predictors of vocabulary knowledge. In the Pittsburgh study the vocabulary measure was parent report of productive vocabulary on the MacArthur Communicative Development Inventory when the child was 24 months old. In the Boston sample, the measure of receptive vocabulary was the Peabody Picture Vocabulary Test.

  Pittsburgh Sample (Expressive Vocabulary)

              Low SES    Mid SES    High SES
  Least MEE      70         70         79
  Mid MEE        60         63         71
  Most MEE       43         61         67

  Boston Sample (Receptive Vocabulary)

              Low SES    Mid SES    High SES
  Least MEE      97                    105
  Mid MEE        95                    103
  Most MEE       93                    100

How do we interpret these data? The first thing to notice is that variation in vocabulary size is normal (Fenson et al., 2000). At 24 months a child might produce no words at all or over 400. What accounts for this broad variation? It is common to call on genetic explanations, but environmental inputs play a large role in vocabulary development specifically, and SES and OME are both environmental variables. The point here is that OME does not cause language delay, but it is one variable that helps to explain the large variation in early vocabulary development within the normal range.

  4. What are the clinical implications of the research on OME?

It is a rather common tactic to conclude that research data indicating a correlation between OME and slightly slower growth in some aspect of language development (as reported by Brennan-Jones et al. between ages 6 and 10, for example) are of no particular clinical interest. The reason for this conclusion is that the impact of OME is taken to be “small” because the mean test scores are all within the normal range. In other words, OME does not cause language impairment and therefore there are “no clinical implications.”

Let’s think about this from the perspective of an SLP treating one particular patient. I have in mind the most common type of patient treated by the pediatric SLP anywhere in the world (I can predict this from survey data and large-scale caseload studies): a child aged somewhere between 4 and 7 with a mixed speech sound disorder and expressive language delay. We can expect an underlying impairment in phonological processing that has a heritable genetic cause (Bishop et al., 2008). The most important protective factor (Rvachew & Grawburg, 2006) will be the child’s vocabulary size, something that is highly malleable. If the child receives sufficient high-quality inputs, it will be a lot easier to bring phonological processing skills into the expected range and ensure acquisition of literacy skills. If the child has chronic OME, you don’t really care whether the OME has caused the child’s speech and language difficulties or not. Even though I would still argue that there is reason to be concerned about permanent effects of OME during the first year on the development of the auditory system, you can let the scientists worry about that. The issue is that this child cannot afford to lose a single word of language input, because right now intense, high-quality language input is all we have in our treatment tool box. Let’s make sure that each child on our caseload can hear the precious minutes of therapy input that we are providing. And when we send them back to their noisy homes and classrooms with their homework books, let’s make sure they can participate in those activities to their maximum benefit. Hearing impairment affects everybody. And this child in particular doesn’t have any days to lose.

 

 

Speech Therapy and Speech Motor Control: Part 3

In two previous blogs I discussed a recent paper by Strand in which she outlines in detail the theoretical foundation and procedural details of Dynamic Temporal and Tactile Cueing (DTTC) as a treatment for Childhood Apraxia of Speech (CAS). In Part 1 I suggested that the theoretical base, being Schmidt’s “Schema Theory of Discrete Motor Skill Learning,” was outdated. In Part 2 I discussed modern theories of speech motor control that assume a dynamic interplay of feedforward and feedback control mechanisms. In this blog I will discuss the implications for speech therapy, in relation to critical aspects of DTTC.

First, let us consider the core element of DTTC, “the focus on the movement (rather than the sound or phoneme) in terms of modeling, cueing, feedback, and target selection” (p. 4). I believe that all of us who strive to help children with CAS acquire intelligible speech agree that speech movements, as opposed to phonological contrasts, are the focus of speech therapy. Nonetheless, this statement raises questions about the nature of “speech movements.” What is the goal of a speech movement? The answer to this question is controversial: it may be a somatosensory target involving specific articulators, such as, for example, bringing the margins of the tongue blade into contact with the upper first molars; or it may be to produce a particular vocal tract shape, such as a large back cavity separated from a small front cavity by a narrow constriction; or it may be to produce an acoustic output that will be perceived as the vowel [i]. DTTC is structured to promote precise and consistent movements of the articulators, and therefore the first scenario is presumed. Furthermore, the origin of CAS is hypothesized to be a deficit in proprioceptive processing that arises from an impairment in cerebellar mechanisms. Updating the theory, this hypothesis would implicate feedforward control which, following Guenther and Vladusich (2012), “projects directly from the speech sound map [in left ventral premotor cortex and posterior Broca’s area] to articulatory control units in cerebellum and primary motor cortex” (p. 2). However, new research (Liégeois et al., 2019) identifies the locus of structural and functional impairments underlying CAS as being along a dorsal pathway of cortical structures, specifically: reduced white matter and fMRI activations in sensory motor cortex and along the arcuate fasciculus, and reduced grey matter and fMRI activations in superior temporal gyrus and angular gyrus. They explain that “this route links auditory input/representation to articulatory systems … and transforms phonological representations into motor programs … In contrast, the speech execution white matter pathway (corticobulbar) and the ventral language route (IFOF) were not altered in this family” [that showed multigenerational impairments in speech praxis]. My point is that although the cerebellum is important to speech motor control and CAS may well involve impairments in proprioceptive feedback, speech is clearly a sensory motor skill that requires close connections between articulatory and auditory representations for sounds and syllables.

In Part 2 of this blog series I indicated that adults can compensate for unexpected perturbations to articulatory trajectories or auditory feedback very rapidly by drawing on their internal model of vocal tract function. It is interesting to consider that children must cope with perturbations to articulatory gestures and expected acoustic outputs throughout speech development, because their vocal tract is changing shape, sometimes quite dramatically, throughout childhood. Callan et al. (2000) showed how the developing child can adapt to the changing vocal tract by aiming for relatively stable auditory targets (conceived of as regions in auditory space) and using auditory feedback and simulations of auditory outputs to achieve those targets even as vocal tract structure is changing. The key to this remarkable ability is a learned mapping between articulator movements, vocal tract shapes and auditory outputs. The learning and updating of this internal model of vocal tract function arises from an unsupervised learning mechanism, essentially Hebbian learning: young infants engage in a great deal of unstructured vocal play as well as somewhat more structured babbling – speech practice that allows them to learn the necessary correspondences without having specific speech goals. Infants with CAS are widely believed to skip this period of speech development; therefore, it is likely that they begin speech therapy without the internal model of vocal tract function that is foundational for goal-directed speech practice. Therefore, precise, repeated, consistent speech movements may not be the best place to start a treatment program for severe CAS; a program of unstructured vocal play that targets highly varied playful vocalizations is a better starting place for many children. Subsequently, high-intensity practice with babble (repetitive syllable production) will stabilize the mappings between articulatory gestures and the resulting vocal tract configurations and somatosensory and auditory outcomes.

One of the advantages of a well-tuned internal model of vocal tract function is that it supports “motor-equivalent speech production” given commonly occurring constraints on speech production. In other words, there are many different articulatory gestures that will produce the same acoustic-phonetic goal. When the child has a stable acoustic-phonetic target and is able to process auditory feedback in relation to that target, various articulatory solutions can be found to adapt to changing vocal tract structure or to constraints such as talking while eating or holding a pen between the teeth. Developmental changes in the way that articulators are coordinated to produce the same phoneme are well documented in the literature. Similarly, speech production varies with phonetic context. Motor-equivalent trading relations between tongue body height and lip rounding are well known for production of the vowel [u] and the consonant [ʃ], for example, and the front-back positioning of the constriction in these phonemes is highly variable across speakers and phonetic contexts. The precision with which these phonemes are produced is related to the talker’s perceptual acuity: for example, adults who have sharp perceptual boundaries between [ʃ] and [s] produce them with greater articulatory consistency as well as greater acoustic contrast between the phoneme categories. Perkell et al. (2004) speculated that “In learning to maximize intelligibility, the child with higher acuity is better able to reject poor exemplars of each phoneme (as in the DIVA model), and thus will adopt sensory goals for producing those phonemes that are further apart than the child with lower acuity.” The implications for speech therapy are that, even in the case of CAS, ensuring stable acoustic-phonetic targets for speech therapy goals is essential, whereas insisting upon SLP-defined articulatory parameters may be counter-productive. The goal is not absolute consistency in the production of specific motor movements but, rather, dynamic stability in the achievement of speaking goals.

Although it is speculated that feedforward control is weighted more heavily than feedback control in adult speech, feedback is critical to speech learning during infancy and childhood. Furthermore, auditory feedback plays a crucial role. The initial goal is an auditory target. Guenther and Vladusich (2012) explain that “the auditory feedback control subsystem [helps to] shape the ongoing attempt to produce the sound by transforming auditory errors into corrective motor commands via the feedback control map in right ventral premotor cortex” (p. 2). They further explain that repeated practice of this type eventually leads to the development of somatosensory goal regions. A particular frustration for children with CAS is perseveration, the difficulty of changing a well-learned articulatory pattern to a new one that is more appropriate. This problem with perseveration highlights the need to engage the feedback control system. There are two strategies that are essential: the first is a high degree of variation in the practice materials, which can be introduced by practicing nonsense syllables with a carefully graded increase in difficulty but with varied combinations of syllables within each difficulty level. The second strategy is to provide just the right amount of scaffolding along the integral stimulation hierarchy so that the child will be successful more often than not while still experiencing a certain amount of error. Some error ensures that corrective motor commands will be generated from time to time. Imagine practicing syllables that combine four consonants [b, m, w, f] with four vowels [i], [u], [æ], [ɑ] and five diphthongs [ei], [ou], [ɑi], [au], [oi], presented at random so that the child imitates the first syllable (Say [bi]) and then repeats it again twice (Say it again… and again…), before proceeding to another syllable. You will have a great many targets in your session but created from a small number of elements. Imagine further that you progress to a more difficult level (reduplicated syllables, [bubu], [mimi]) as soon as the child achieves 80% correct production of the single syllables. You can see that you will also be allowing the child to produce quite a bit of error. We call this the challenge point. Tanya Matthews, Francoise Brosseau-Lapré and I are working on a paper to describe how to do this and to share our experiences with the approach. You will see that it is very different from working on five words and requiring that the child achieve 15 to 20 correct productions at the imitative word level before proceeding to delayed imitation and then again before proceeding to spontaneous productions. Errorless learning is a fundamental aspect of DTTC and has a long history in speech therapy practice. However, it is not clear that it is well motivated from the perspective of developmental science.
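
To make the structure of this kind of practice concrete, here is a minimal sketch in Python. It is my own illustration, not a published protocol: the element sets, the three-repetition trial structure, the block size, and the 80% criterion come from the example above, but the function names and scoring scheme are hypothetical.

```python
import random

CONSONANTS = ["b", "m", "w", "f"]
VOWELS = ["i", "u", "ae", "a"]            # plain-text stand-ins for [i], [u], [æ], [ɑ]
DIPHTHONGS = ["ei", "ou", "ai", "au", "oi"]

def level_targets(level):
    """Level 0: single CV syllables; level 1: reduplicated syllables (e.g. 'bubu')."""
    cv = [c + v for c in CONSONANTS for v in VOWELS + DIPHTHONGS]
    return cv if level == 0 else [s + s for s in cv]

def practice_block(level, n_targets=20, repeats=3):
    """Variable practice: draw targets at random; each is imitated, then repeated twice."""
    block = []
    for _ in range(n_targets):
        target = random.choice(level_targets(level))
        block.extend([target] * repeats)   # "Say [bi]... say it again... and again"
    return block

def next_level(level, accuracy, criterion=0.8):
    """Advance in difficulty once block accuracy reaches the criterion; otherwise stay put."""
    return level + 1 if accuracy >= criterion else level

# Example: one block of 60 trials; the clinician's scoring (1 = correct) decides progression.
block = practice_block(level=0)
scores = [1] * 50 + [0] * 10               # pretend the child produced 50 of 60 correctly
print(len(block), "trials; next level:", next_level(0, sum(scores) / len(scores)))
```

Note that even after the child moves up a level, the same small set of elements keeps generating a large and unpredictable pool of targets, which is the point of variable practice.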

To summarize, there are many aspects of DTTC that are similar across all sensory-motor approaches to the treatment of CAS. In particular, high-intensity speech practice is well motivated and likely to be effective with all forms of moderate and severe speech sound disorder. Nonetheless, there are some significant differences between Strand’s approach and the approach that I recommend based on an updated theory of speech motor control. There is still a great deal of research to do, because very few of our specific speech therapy practices have received empirical validation even though speech therapy in general has been shown to be efficacious. As a guide to future research (hopefully using randomized and thus interpretable designs), I provide a table of procedures that are similar and different across the two theoretical approaches.

 

Treatment Procedures that are Similar (both approaches)

High intensity practice
Focus on speech movements (not phonemes)
Practice syllable sized units (not isolated sounds)
Attend to temporal aspects of trial structure (delayed imitation, delayed provision of feedback)
Integral stimulation hierarchy (attend to visual and auditory aspects of target)

Treatment Procedures that are Different

SCHEMA THEORY                                   AUDITORY FEEDBACK CONTROL
Focus on precise, consistent movements          Focus on dynamic stability
Over-practice: accuracy over 10-20 trials       Variable practice when possible
Errorless learning                              Challenge point: 4/5 correct, then move up
Behavioral shaping of accurate movements        Motor equivalent movements
Tactile and gestural cues to ensure accuracy    Sharpen knowledge of auditory target
“Hold” initial configurations                   Encourage vocal play, develop internal model

Readings:

Callan, D. E., Kent, R. D., Guenther, F. H., & Vorperian, H. K. (2000). An auditory-feedback-based neural network model of speech production that is robust to developmental changes in the size and shape of the articulatory system. Journal of Speech, Language, and Hearing Research, 43, 721-738.

Guenther, F. H., & Vladusich, T. (2012). A neural theory of speech acquisition and production. Journal of Neurolinguistics, 25(5), 408-422.

Liégeois, F. J., Turner, S. J., Mayes, A., Bonthrone, A. F., Boys, A., Smith, L., . . . Morgan, A. T. (2019). Dorsal language stream anomalies in an inherited speech disorder. Brain, 142(4), 966-977.

Perkell, J., Matthies, M., Lane, H., Guenther, F. H., Wilhelms-Tricarico, R., Wozniak, J., & Guiod, P. (1997). Speech motor control: Acoustic goals, saturation effects, auditory feedback and internal models. Speech Communication, 22, 227-250.

Perkell, J., Matthies, M. L., Tiede, M., Lane, H., Zandipour, M., Marrone, M., . . . Guenther, F. H. (2004). The distinctness of speakers’ /s/-/ʃ/ contrast is related to their auditory discrimination and use of an articulatory saturation effect. Journal of Speech, Language, and Hearing Research, 47, 1259-1269.

Rvachew, S., & Matthews, T. (2017). Demonstrating treatment efficacy using the single subject randomization design: A tutorial and demonstration. Journal of Communication Disorders, 67, 1-13.

Rvachew, S., & Matthews, T. (2019). An N-of-1 randomized controlled trial of interventions for children with inconsistent speech sound errors. Journal of Speech, Language, and Hearing Research, 62, 3183-3203.

Speech Therapy and Theories of Speech Motor Control: Part 2

In Part 1 of this blog series I described the theoretical basis of Dynamic Temporal and Tactile Cueing as recently published by Edy Strand. Specifically, the treatment is founded on Schmidt’s Schema Theory, in which generalized motor programs are learned. During speech production the child must select the right program and apply the correct parameters before implementing it all at once. If the parameters are selected incorrectly, a speech error will occur. It is rather like making toast. If you forget to reset your settings after toasting bagels, your Wonderbread will come out black! The problem, as stated by Schmidt, is that by the time you realize that your toast settings are wrong and your motor gestures are off track, it is too late: the toast is burned and you have said “Trat! Doast!” Learning occurs by “trial and error” — after much experience with your toaster you learn the settings (parameters) for getting the right amount of toastiness for different items. Learning to operate your toaster is similar to acquiring one “generalized motor program.” Speech motor learning is assumed to operate this way because sensory feedback is too slow to support on-line adjustments to the parameters in a direct way. I used a different analogy in the previous blog — once you have committed to swinging your golf club, you tend to follow through.

The problem with this model of speech motor control is that we know for certain that real-time modification of vocal tract movements occurs in response to somatosensory and auditory feedback. Strangely, we have known since the early eighties that the speech system is highly sensitive to error on-line; therefore, I don’t know why this idea of open-loop control persists. The proof comes from studies in which (typically) an adult is asked to repeatedly produce a particular syllable or disyllable and then experiences a perturbation in sensory feedback (either somatosensory feedback or auditory feedback). An early example of this paradigm involved productions of “aba”: during 15% of trials a mechanism placed an unexpected load on the talker’s lower lip. Here is where it gets interesting: the research participants corrected for this perturbation in the articulatory trajectory of the bottom lip very rapidly, with compensatory actions of both the top and the bottom lip (the bottom lip would need to exert greater upward force and the top lip would need to produce greater downward extent in order to produce the labial closure and the expected transitions into and out of the consonantal closure). Decades of experiments have followed involving many other perturbations in the domain of articulatory gestures, somatosensory (skin) sensations, and auditory feedback. For example, while research participants are repeatedly saying “bed” you can trick their ear into thinking they are saying “bad”, which leads to compensatory adjustments in articulation to get the expected auditory percept.

This kind of dynamic compensation across the entire vocal tract is made possible by an “internal model” — a neural model that simulates the behavior of a sensorimotor system in relation to its environment. The internal model can generate a prediction of the sensory consequences of implementing a motor plan via simulation. For speech, future outputs in the somatosensory and auditory domains are simulated; furthermore, the simulator takes into account delayed sensory feedback, noise in the perceptual system and other variables so that when feedback arrives it can be compared with the prediction and provide reliable error messages. Continuous tracking of the vocal tract state is thus permitted and forms the basis for ongoing planning of movements as speech unfolds. If an unexpected event occurs, as in the perturbation experiments that I have described, error corrections are dynamic across the entire system; therefore, if the predicted trajectory of acoustic formant transitions from the [a] into the [b] closure is not occurring, lower lip, upper lip, jaw and tongue movements can all be harnessed to produce the desired outcome.
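
As a rough illustration of this prediction-and-correction loop, here is a toy numerical sketch. It is my own simplification, not a published model: a constant perturbation (think of the lip load) pushes the produced output away from the target, the forward model’s prediction is compared with the feedback that arrives, and the unexpected part of that feedback is used to update the internal model and generate a compensatory command on the next cycle.

```python
def simulate(target=1.0, perturbation=0.3, gain=0.6, cycles=6):
    command = target          # feedforward command aimed at the sensory target
    estimate = 0.0            # the internal model's running estimate of the disturbance
    for t in range(cycles):
        predicted = command - estimate     # forward model: expected sensory consequence
        actual = command - perturbation    # feedback: outcome altered by the perturbation
        error = predicted - actual         # prediction error = the unexpected part of feedback
        estimate += gain * error           # update the internal model from the error
        command = target + estimate        # corrective command compensates on the next cycle
        print(f"cycle {t}: produced {actual:.2f}, aiming for {target:.2f}")

simulate()   # produced values climb from 0.70 back toward the 1.00 target
```

The single number standing in for the vocal tract is obviously a caricature; the point is only that prediction, comparison, and correction can run continuously rather than waiting for the movement to end.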

As Houde and Nagarajan (2011) explain, “speech motor control is not an example of pure feedback control or feedforward control” (p. 11). The acquisition of speech motor control is dependent upon the development of the internal model of vocal tract function as well as detailed knowledge of auditory targets. This understanding has implications for the treatment of childhood apraxia of speech. I will explore these implications further in the next and final blog in this series.

Readings

Abbs, J. H., & Gracco, V. L. (1983). Sensorimotor actions in the control of multi-movement speech gestures. Trends in Neurosciences, 6, 391-395.

Houde, J. F., & Jordan, M. I. (2002). Sensorimotor adaptation of speech I: Compensation and adaptation. Journal of Speech, Language & Hearing Research, 45(2), 295-310.

Houde, J. F., & Nagarajan, S. S. (2011). Speech production as state feedback control. Frontiers in Human Neuroscience, 5, doi: 10.3389/fnhum.2011.00082.

Tourville, J. A., Reilly, K. J., & Guenther, F. H. (2008). Neural mechanisms underlying auditory feedback control of speech. NeuroImage, 39, 1429-1443.

Speech Therapy and Theories of Speech Motor Control: Part I

Edy Strand recently published a detailed description of her Dynamic Temporal and Tactile Cueing treatment strategy. As she says, this is a hugely valuable paper because it provides a complete description of a treatment designed for severe speech sound disorders, especially Childhood Apraxia of Speech, and, more importantly, it summarizes in one place the theoretical foundation for the treatment. I think that, on the whole, this is an efficacious treatment. However, some procedures, derived directly from the outdated theoretical underpinnings, are questionable, and therefore I am going to devote several blogs to more recent theory and basic science research on the development of speech motor control and apraxia of speech. In this first blog, I review Schema Theory, even though this theory is just not right! But it has a long history and remains popular across almost all clinically oriented papers on motor speech disorders.

The theory that is referenced in Edy Strand’s paper is Richard Schmidt’s “Schema Theory of Discrete Motor Skill Learning,” published in Psychological Review in 1975 and subsequently brought to speech-language pathology by Ray Kent and others as a useful framework for thinking about speech therapy. The important idea underlying this theory is that motor skills are made up of brief, discrete motor acts that are executed all-at-once as open-loop generalized motor programs, adapted with specific response specifications (called parameters) for the current conditions. The theory assumes “open-loop” control because sensory feedback is often too slow to impact movement after it has started. According to this theory feedback is processed after the movement is over and incorporated into the schema for the future execution of the generalized motor program. I have used golf as an example before; even though I haven’t played much in years let’s do it again: if we are adopting this theory we would think of practice sessions as developing different generalized motor programs for each type of shot, a long drive, a short 7-iron shot, the up-and-down pitch onto the green, and the putt into the hole. Which shot you choose depends upon your recall schema: what is your target and which type of shot is likely to achieve it? I personally recall that when close to the green my pitch is better than my chip (whereas my husband has the opposite preference). How you address the ball depends upon the initial conditions (flat ground, hill, tall grass etc.). The motor control parameters (also known as response specifications) depend upon the distance to the target (how high to lift the club, speed of follow through, force applied and so on). Based on the initial conditions and the desired outcome, I launch the shot with my wedge, expecting a certain “feel” as I hit the ball based on past experience with the sensory consequences of hitting this shot; I can always “recognize” a good hit even before I see the ball land (often I just turn my back on the ball, I don’t even want to see it land!). But in any case, the actual outcome is important for updating the “recall” schema; specifically, if I have actually achieved my target, I add all this information, the initial conditions, the response specifications, the recognition schema and the recall schema to my memory. The generalized motor program is an abstraction across all these remembered practice trials, permitting correct specification of the response parameters in future shots. Furthermore, I should be able to adapt the generalized motor program to similar shots, even if the ball is a little further or closer to the green for example.
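
For readers who find the golf analogy easier to follow in schematic form, here is a toy sketch of what a recall schema might look like as a data structure. This is my own illustration of the account given above, not Schmidt’s formalism: the class name, the single-number “conditions” and “parameters”, and the selection rule are all invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class RecallSchema:
    trials: list = field(default_factory=list)    # remembered (conditions, parameters, outcome)

    def add_trial(self, conditions, parameters, outcome):
        """Feedback is processed after the movement is over and stored for future shots."""
        self.trials.append((conditions, parameters, outcome))

    def select_parameters(self, conditions, desired_outcome):
        """Pick the remembered parameters whose conditions and outcome best match the goal."""
        def mismatch(trial):
            past_conditions, _, past_outcome = trial
            return abs(past_conditions - conditions) + abs(past_outcome - desired_outcome)
        best = min(self.trials, key=mismatch)
        return best[1]          # open-loop: these parameters are then run off as a unit

# Example: past pitch shots stored as (distance to green, club-head speed, distance achieved).
pitch = RecallSchema()
pitch.add_trial(conditions=30, parameters=40, outcome=28)
pitch.add_trial(conditions=50, parameters=60, outcome=52)
print(pitch.select_parameters(conditions=45, desired_outcome=45))   # selects 60
```

The crucial feature, and the one challenged in Parts 2 and 3 of this series, is that nothing in this structure allows the parameters to be adjusted while the movement is in progress; the schema is only updated after the outcome is known.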

When applied to CAS, in which current research suggests unreliable or degraded somatosensory feedback, the use of this model focuses attention on the child’s processing of initial conditions, inaccurate planning or programming of the movement due to poor selection of response specifications, and/or poor recognition schema (not knowing when the movement “feels right”). Therefore, certain procedures are recommended. DTTC providers use manual or gestural cues to shape the child’s articulators into the “initial position” and encourage the child to “hold” the position momentarily so as to fully process those initial conditions before launching the movement. During the initial stages of therapy, the SLP uses a slow rate and co-production so that the child is getting extra feedback during the practice trial, presumably with the goal of stabilizing the recognition schema. Imitative models support the child’s knowledge of the target which, when combined with copious knowledge of results feedback should support the development of recall schema. And finally, a great deal of practice with an errorless approach ensures that the child lays down many memory traces of correctly executed motor programs.

The recommendations that are provided make a certain amount of sense in the context of schema theory (even though there is in fact no evidence for the specific efficacy of any one of these particular procedures). The problem is that it is not clear that schema theory is a reasonable foundation for modern speech therapy practice.

First, Richard Schmidt himself cautioned in 2003 that “schema theory was intended to be an account of discrete actions. Hence, continuous actions, such as steering a car or juggling, which are both of longer duration (allowing time for response-produced feedback to have a role) and more based on the performer’s interactions with the environment were outside the area for schema theory…long-duration actions might be based on interplay between open-loop subactions and feedback-based corrections… . Interestingly, tasks such as juggling seem appropriate for analysis in terms of the dynamical systems perspective” (p. 367). I would argue that our understanding of not only juggling but also speech motor control has benefited immensely from the dynamical systems perspective, and I will come back to that in the next blog. If juggling is considered too complex and continuous to be explained by schema theory, then speech is probably not a good fit either.

Second, modern theories of speech motor control have shown that on-line correction of motor action occurs even over short durations, despite the limitations of feedback control. The explanation lies in the continuous operation of feedforward control mechanisms. More on feedforward control in another blog.

References

Rvachew, S., & Brosseau-Lapré, F. (2012). Developmental Phonological Disorders: Foundations of Clinical Practice. San Diego, CA: Plural Publishing.

Schmidt, R. A. (1975). A schema theory of discrete motor skill learning. Psychological Review, 82(4), 225-260. doi:10.1037/h0076770

Schmidt, R. A. (2003). Motor schema theory after 27 years: Reflections and implications for a new theory. Research Quarterly for Exercise and Sport, 74(4), 366-375.

Strand, E. A. (2019). Dynamic Temporal and Tactile Cueing: A treatment strategy for childhood apraxia of speech. American Journal of Speech-Language Pathology. doi:10.1044/2019_AJSLP-19-0005

Using Apps for Speech Therapy

It seems like only a few days ago that I promised to write a blog post on the best uses of apps for speech therapy, when I wrote about the Werfel study in my last blogpost. But it turns out that I made that promise 3 months ago! Time flies when you are a School Director, it turns out. But also, my thinking about why you might want to substitute an app for picture cards reminded me of a particularly traumatic event in my past, and maybe I just didn’t want to revisit that memory. But here goes… when I was sent out on my first summer practicum as an undergraduate student sometime in the nineteen-seventies, I was assigned to a health unit in rural Alberta. The placement involved driving a great big Ford around to schools on country roads, which was scary enough because, although I had a driver’s licence, I had never really driven on account of not owning a car. Anyway, on the very first day my supervisor asked me to carry all our materials out to the car, so she piled my arms up with stuff: many files filled with papers, some board games, those plastic boxes full of articulation cards, and on top of that… her lunch! Of course, I dropped the load in the parking lot. You can imagine the scene — I am not going to describe the process of picking it all back up and trying to reorder everything before getting it in the back seat. To make it all worse, she then hands me the keys and tells me to drive because she is going to eat her lunch on the way. Her lunch included a can of grape pop, well shaken by its tumble onto the pavement. Now you can imagine how my glasses became painted with purple goop. All I can say is that it is lucky I did not drive the car off the road.

This story is actually relevant to the topic at hand because I want to talk about iPad apps relative to all the things I was carrying in my hands, excluding the lunch. Recall that Werfel implemented a therapy program in which the children named pictures on the screen and then swiped them off, one after the other, for 25 sessions over 8 weeks. Is this how we want to use apps? Why would we use apps? What are the advantages of apps over the boxes of pictures cards? Let’s go through the advantages one at a time.

  1. Storage

The first obvious advantage is that all the information and functionality carried in the files, the boxes of picture cards and even the board games can be stored on an iPad — a relatively small object that would have fit in the lunch bag or my purse. Not only that, the information can be password protected so it is an efficient and relatively secure way of carrying things around. At the same time the screen is large enough for two people to view and small hands to manipulate. I read that SLPs use a lot of apps built for phones because their employers do not provide them with iPads but everyone has their own iPhone. That is a real shame because the functionality of an iPad or other tablet is hard to beat.

  2. Multimedia

The second advantage of a digital app is the possibility of presenting information to children via correlated multimedia across different sensory modalities. Apps can present therapy stimuli with an integration of colourful and realistic visual representations, integrated text, sound effects and movement. Susan Neuman’s theory of synergy predicts that children learn and store more robust mental representations when they experience new information this way. Some experimental support for this idea was presented by Strouse & Ganea, who randomly assigned 102 toddler-mother pairs to a print-book or ebook shared reading condition. The results were striking:

“Toddlers who were read the electronic books paid more attention, made themselves more available for reading, displayed more positive affect, participated in more page turns, and produced more content-related comments during reading than those who were read the print versions of the books. Toddlers also correctly identified a novel animal labeled in the book more often when they had read the electronic than the traditional print books.”

In this study the animation provided by the ebooks was very simple: when the toddlers patted the page, the sound associated with the illustrated animal was presented. Therefore, we have multimedia stimulation and an interactive component contributing to engagement and learning.

  3. Interactive Features

The variety of interactive features that can be built into apps is boundless. In ebooks, “hotspots” within the text or illustrations launch a variety of effects that may advance the story and support learning. Alternatively, the animations, sound effects and games that occur when the hotspots are activated may be entertaining while not relevant to the story at all. These same kinds of features can be used to create learning activities in the context of educational games meant to teach letter sounds or vocabulary or reading or a wide range of other skills. Many games are simply digital versions of conventional board games. Other games are meant to be fun and creative, involving free-style drawing and opportunities to create characters and settings and stories in an open-ended fashion. Apps that encourage creativity are recommended for their “minds-on” properties. Hirsh-Pasek et al. presented a framework for evaluating and choosing apps that rests on four pillars of learning: (1) the app encourages active learning; (2) the child is deeply engaged by the learning task; (3) the learning experience is meaningful in that it promotes connections between new knowledge and existing knowledge; and (4) the learning activity permits high-quality social interaction or social contingency. These authors also review the science of learning and conclude that when the app is explicitly educational, the learning program should be structured to provide “scaffolded exploration toward a learning goal.” Therefore, rote learning games in which the child, for example, simply names pictures and receives a tangible reward such as points in a token-economy game would not meet these criteria. A completely open-ended game with no learning goal would also not meet these criteria.

  4. Personalization

Perhaps the most exciting opportunities offered by tablets and the associated apps are the possibilities for personalization. It is possible for children to create their own stimuli and stories using the camera, drawing, and writing tools. In this way all the practice materials for speech and language therapy can be especially meaningful and relevant to the child’s daily life and special interests.

Using Apps in Speech Therapy

The first advantage to using apps in speech therapy is that it is possible to “think outside the articulation card box” and use other tools to practice speech accuracy in authentic communicative contexts. Let us imagine that you are working on velar stops with a child who typically fronts these consonants. You want an opportunity for the child to produce the sounds in relatively complex words while you provide meaningful feedback using focused stimulation, adapted for the speech therapy context as described by Rvachew & Brosseau-Lapré (2012). There are some electronic books that lend themselves to conversation and are useful for this purpose. Consider the Nosy Crow book “Don’t Wake Up Tiger!” First, there are several opportunities to produce velar sounds in conversation: tiger (contrasted with turtle), frog, cake, candle, pelican, fox. There is an active learning component in that the child must perform specific actions to help the different animals get around the tiger without waking him up in order to set up their surprise birthday party. There are matching games and five “spot the difference” games, the last one involving the birthday party scene, providing the opportunity for distancing prompts. The idea here is that articulation drill is not the best way to improve speech accuracy for the majority of children with speech delay or disorders in any case. You will want to choose different stories or games for older children, but definitely choose apps that permit authentic conversation and minds-on learning.

It is also possible to create your own games for speech therapy drill very simply using presentation tools along with photos, clip art, or drawing tools. If you were practicing words that contain sibilants, for example, the child could bring a photo of his house. Pasted into a series of slides over top of cartoon characters, and animated to disappear upon clicking or swiping, it becomes a very simple game. In this case, the child asks the question “Whose house?” and, after swiping the house away, a simple animation reveals the answer: “It’s mouse’s house” (or sheep’s/zebra’s/seal’s, etc.). Many common software tools permit simple animations of this kind, turning a simple swipe into a game that connects meaning to the drill practice.
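
If you prefer not to build the deck by hand, the assembly can even be scripted. Here is a minimal sketch using the python-pptx library (assuming it is installed, and assuming you have the image files on disk; the file names are hypothetical). Note that python-pptx does not script animations, so the click-to-disappear effect on the house photo would still be added by hand in PowerPoint or Keynote.

```python
from pptx import Presentation
from pptx.util import Inches

OWNERS = ["mouse", "sheep", "zebra", "seal"]    # sibilant-rich answer words
HOUSE_PHOTO = "childs_house.jpg"                # the photo the child brought in (hypothetical file)

prs = Presentation()
blank_layout = prs.slide_layouts[6]             # a completely blank slide layout

for owner in OWNERS:
    slide = prs.slides.add_slide(blank_layout)
    # Character picture underneath, with the house photo pasted over top of it;
    # swiping the house away (animation added later, by hand) reveals the character.
    slide.shapes.add_picture(f"{owner}.png", Inches(3), Inches(1.5), width=Inches(4))
    slide.shapes.add_picture(HOUSE_PHOTO, Inches(3), Inches(1.5), width=Inches(4))
    # The carrier question as a text box for the clinician; the answer stays pictorial
    # so the child is listening and looking rather than reading.
    question = slide.shapes.add_textbox(Inches(0.5), Inches(0.3), Inches(9), Inches(0.8))
    question.text_frame.text = "Whose house?"

prs.save("whose_house_drill.pptx")
```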

Of course, there are many commercial apps for drill therapy or minimal pairs games. I will not make the mistake of endorsing or criticizing any particular product. However, you will want to look for common problems when you download free games or purchase more sophisticated therapy tools. One common issue is text printed on the minimal pair cards, so that the children are using letter cues rather than listening to the speech signal and referring to their own underlying representations for the words while playing the game. Another issue is poor choice of words from the point of view of phonological theory (e.g., “ball” and “bottle” are not both /l/-coda words). The old articulation card boxes had the same problem, but it was often easier to shuffle through and exclude the words that did not fit the pattern you were working on. The commercial apps may or may not be that flexible.

In any case, I am sure that most of you are more familiar with these apps than I am and have lots of creative ideas for using them. The main point I wanted to make is that we should not let the tail wag the dog. It is really important to choose the most creative minds-on apps and not let the software coax us way back to the “drill and kill” days of the sixties. We have known for some time now that phonological therapy is all about meaning. The fun part of digital tools is the opportunity that multimedia and interactivity offers for helping children make connections between new learning and their prior experience.

Would you do speech therapy like this?

I was interested to read a paper about the relative efficacy of using traditional flash cards versus tablet presentation of pictures for articulation drill therapy because I have developed iPad apps myself (e.g., see www.DIALspeech.com) and have an interest in the potential of digital tools to enhance the speech therapy experience. The paper was recently published in the Online First section of Communication Disorders Quarterly by Krystel Werfel, Marren Brooks, and Lisa Fitton.

The study used a single-subject alternating-treatments design with four kindergarten-aged subjects, none clearly exhibiting signs of speech delay but nonetheless misarticulating two phonemes that could be practiced. Some statistical analyses (rather dubiously applied to single-subject data) suggested that the children achieved mastery sooner in the flashcard condition but produced more correct responses in the tablet condition. To my eye, the data did not suggest a clear advantage to either condition. All the children did in fact master the treated phonemes, which were /z, s/, /pl, ɡl/, and /θ, ð/ (this last pair for two of the children).

The authors make clear that the study is meant to be informative on the modality of stimulus presentation and not a test of the treatment protocol itself but I found myself alarmed at the possibility that readers might think that the treatment protocol would be reasonable in regular clinical practice and therefore I would like to address the way that the intervention was implemented. Often researchers implement a speech therapy intervention in a way that they would not in a regular clinical environment in an effort to exert more experimental control over all the variables than is typically necessary or desirable in an authentic clinical context. I can only hope that this explains some of the clinical choices that were made in this case. I am going to address several in turn as follows: (1) treatment approach; (2) treatment procedure; (3) reinforcement procedures; (4) cumulative intervention intensity; and (5) discharge criteria.

First, the authors state that they chose a traditional approach to therapy because there is empirical evidence that it works and clinicians prefer it. There is evidence of efficacy, but in fact, for most preschool-aged children who qualify for speech services, a phonological approach may be more efficacious, as Francoise and I discuss in our text. Furthermore, the surveys indicating a preference for a traditional approach suggest that this preference holds in the United States but not elsewhere. Finally, there seems to be some confusion about what a “traditional” approach is. In some cases, traditional refers to a strict behaviorist intervention that focuses solely on speech production with a gradual increase in the complexity of speech units; in other cases it involves a sensory-motor approach with careful attention to variable speech practice and multiple targets; in still other cases a traditional approach means Charles Van Riper’s approach, which was properly sensory-motor, including ear training, graduated speech practice and some principles of motor learning. The implementation in this paper was highly restricted, involving only practice of single words and, when necessary, isolated sounds. If the speech therapist chooses a traditional rather than a phonological approach, it is best that the full sensory-motor protocol be implemented.

Second, the drill-based approach that was employed was again selected on empirical grounds. The evidence cited to support this approach is sound, especially when treating children who have good speech perception abilities, which was most likely the case for the children in this study, who did not have clear evidence of a speech disorder. Other approaches can be effective if procedures targeting phonological processing are incorporated into the intervention, as shown by Hesketh and colleagues in the U.K. and also by Francoise and me with French-speaking children.

The strangest part of the whole intervention is that the children experienced over 25 treatment sessions each, and throughout every session identical practice trials occurred: a stimulus prompt was presented, the child attempted to name the picture, the clinician provided feedback or extra support, and then, if the child’s response was correct, he or she was permitted to mail the flash card or swipe the picture off the tablet. That was it. For eight weeks. I’m speechless. Enough said.

Regarding cumulative intervention intensity, I indicated in previous blogs that children should receive a minimum of 50 practice trials and ideally 100 practice trials per session. Furthermore, other single-subject research using minimal pairs procedures indicates that generalization goals are not usually met with fewer than 180 practice trials (when treating children with moderate or severe phonological delays). In Werfel’s study the children received treatment for two sounds in 20 minutes, so ten minutes per sound, with 15 practice trials per sound per 10-minute block, and therefore 30 practice trials per 20-minute treatment session. Reportedly, mastery was achieved after 203 trials in the flashcard condition and 270 trials in the tablet condition (equivalent to 135 and 180 minutes of therapy respectively). However, increasing the number of practice trials to 50 during that 20-minute session could reduce the number of sessions or weeks in the intervention program by almost half. One way to do that would be to reduce the amount of feedback that was provided. The intervention was designed so that the clinician provided explicit feedback to the child after every practice attempt, whereas the principles of motor learning suggest that less feedback is often better for speech motor learning. For example, a child can name five pictures in a row and be told that four of the five productions were correct. Another strategy is to practice at the challenge point at all times, as described in detail by Francoise and me in Developmental Phonological Disorders: Foundations of Clinical Practice and also in our new undergraduate text Introduction to Speech Sound Disorders.
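
For readers who like to see the arithmetic, here is a quick back-of-the-envelope check of the dose calculation above. The trial counts are the ones reported in the paper as summarized here; the 50-trials-per-session rate (split over the two target sounds) is my recommendation, not something Werfel et al. tested.

```python
TRIALS_PER_SOUND_AS_RUN = 15        # 15 trials per sound in each 20-minute session (30 total)
TRIALS_PER_SOUND_AT_50 = 25         # 50 trials per session, still split over two sounds
TRIALS_TO_MASTERY = {"flashcards": 203, "tablet": 270}   # per-sound totals reported

for condition, trials in TRIALS_TO_MASTERY.items():
    sessions_as_run = trials / TRIALS_PER_SOUND_AS_RUN
    sessions_denser = trials / TRIALS_PER_SOUND_AT_50
    saving = 1 - sessions_denser / sessions_as_run
    print(f"{condition}: {sessions_as_run:.1f} sessions as run, "
          f"{sessions_denser:.1f} sessions at the denser rate ({saving:.0%} fewer)")
# flashcards: 13.5 -> 8.1 sessions; tablet: 18.0 -> 10.8 sessions (40% fewer in both cases)
```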

Finally, the discharge or stopping criteria in the study were set at 100% correct performance on the generalization probe over 3 consecutive sessions. The probe contained 5 treated words and 5 untreated words. This criterion meant that children practiced their targets for a long time past the point at which the practice material should have been made more difficult or the child should have been discharged to see if spontaneous generalization to natural speaking situations would occur. As Francoise and I review in Chapter 8 of our book, several studies have shown that children can be discharged after achieving between 40 and 80% correct responding on generalization probes. Most children will continue to make gains in production accuracy after this point. The four children in the Werfel et al study received an average of 5 unnecessary treatment sessions according to these criteria.

When conducting treatment studies, it is helpful to provide models of treatment procedures that are best practice in the clinical setting. Often interventions that are better than no intervention will prove to be effective in a research setting while not necessarily being best practice. These studies are confusing for a clinical audience I think. Furthermore, when asking clinical questions about new technologies it is interesting to ask, why would we want to bring it into our clinical practice? What benefit might it bring? How can we adapt these technologies so that the best of human interactions are retained and the most benefit of the technology is added? In my next blog I will address the Werfel study again, but this time imagining the questions we might ask about tablet-based implementations of articulation therapy.

Jingle and Jangle Fallacies in Levels of Representation

I have had several opportunities over the past few years to object when the investigators who conducted the Sound Start trial associated the framework outlined in our book with the theoretical foundation for their work. We have not had an opportunity to fully explore my objection, and Twitter is a bad medium for a discussion requiring this much complexity and nuance, and therefore I am going to provide a rationale in some detail in my blog. McLeod et al. justify their approach by reference to certain psycholinguistic models, with particular allegiance to Stackhouse and Wells’ (1993) important work. In our book, Francoise and I also pay homage to their model and note the historical lineage, although our framework is drawn directly from work by Munson, Edwards, and Beckman (2005). Therefore there is no argument about the use of the terms input, output, and phonological processes or representations; as we note, this basic tripartite division of speech processes is more or less universal. The difficulty is that McLeod and colleagues divide up assessment and treatment tasks according to these category labels (input, output and phonological processes) differently from us and then cite our framework. It is jarring because the error appears to reflect both jingle and jangle fallacies. For those who have not encountered these amusing terms before, a jingle fallacy refers to the assumption that two concepts with the same name are the same when they are actually different; a jangle fallacy is the assumption that a single concept referred to by different names is actually two different concepts. We can all agree that McLeod’s team and my team (and Munson’s team and so on) can all use a tripartite framework (input-output-phonological), and we can all agree to disagree about which tasks go into which division (psycholinguists have been doing that for a long time now and will continue to do so). However, when I am cited I would prefer not to have confusion about what I mean when I talk about input vs. output processes.

I will begin with the point of agreement, the tripartite division of psycholinguistic processes, citing McLeod et al. (2017) directly: “Stackhouse and Wells (1997) … proposed three core elements: Input processes (i.e., detecting and perceiving speech…), cognitive-linguistic processes (i.e., creating, storing, and accessing lexical representations of words …), and output processes (i.e., producing speech…).”

In our text, Francoise and I borrow heavily from Munson, Edwards, and Beckman to describe three types of phonological knowledge: Perceptual knowledge encoded in the form of acoustic-phonetic representations for speech sounds, abstracted from stored acoustic memories of words; articulatory knowledge encoded in the form of motor plans for syllables; and phonological knowledge, encoded as underspecified phonological units at all levels of the phonological hierarchy, and acquired as an emergent property of the lexicon itself. A variety of processes are proposed for acquiring and using these types of knowledge when perceiving, understanding, and producing speech.

The difficulty comes when we begin to assign different assessment or treatment tasks to these levels of processing or representation. In our book we describe input processes and input approaches to treatment as those that specifically target children’s acoustic-phonetic representations. Strong acoustic-phonetic representations provide support for speech perception and implicit phonological awareness. Assessment and intervention tasks will involve the provision of varied speech inputs, focusing on words but with systematic variation in acoustic cues and involving implicit learning strategies. Tasks that tap these processes may involve only listening to speech input or they may involve listening and talking; it depends upon the design of the task and the way that the children’s responses are analyzed. For example, one of my favourite studies that reveals the importance of “input processes” was conducted by Munson, Baylis, Krause, and Yim (2006). In their study children first listened passively to nonwords. After a distractor task they repeated nonwords, some of which they had previously heard during the passive listening task. Children with typical speech showed a benefit of the previous exposure in their repetition accuracy, whereas children with a speech sound disorder did not show this benefit. You can see that this task, although largely dependent upon spoken responses, is a measure of input processing! Speech perception tasks fall into this category most clearly when they reveal something about the nature of the acoustic cues that the child is using to decide which acoustic-phonetic objects form a particular word or phonetic category. Some phonological awareness tasks are also input oriented, as when the child indicates that, for example, “hat” and “bat” sound similar by matching pictures, even if the child does not have high-level metacognitive knowledge about what the similarity is.

Phonological knowledge is a more abstract form of knowledge that emerges from the organization of the lexicon and from explicit teaching, especially phonics and reading education in schools. It includes metacognitive knowledge of sublexical and subsyllabic units. Assessment and intervention tasks in this domain often involve high level expressions of this knowledge such as verbally identifying the common sound in the coda of the words “hat” and “boat” or indicating that [b] is at the beginning of the sound “boat” or differentiating 3-syllable from 2-syllable words.

In some children there are discontinuities across these levels of knowledge even when the same unit is involved. For example, a child may be able to indicate that [bæθ] and [bæt] and [bæs] correspond to different pictures (i.e., have different meanings) but have an unclear sense of the acoustic cues that distinguish the phonetic categories involved. Another child might have excellent acoustic-phonetic representations for these words and the phonetic categories that differentiate them but have immature metaphonological knowledge, being unaware that each word is composed of three phonemes and unable to tell you that they share the same head [bæ]. In our book, Françoise and I detail the kinds of tasks that can be used clinically to assess and remediate children’s knowledge at different levels of representation.

The disagreement we are having with McLeod et al. concerns the classification of all the tasks in the Phoneme Factory computer intervention as “input oriented” tasks. According to our framework, even though most tasks require the child to listen and then respond by selecting pictures or letters on a computer screen, these tasks all involve accessing phonological levels of representation and do not serve to strengthen the child’s acoustic-phonetic representations. Even the most basic level task involves associating sounds produced in isolation (e.g., [s], [d]) with a standard pictograph (e.g., [s] → “snake”). The authors mistakenly identify this task with the lowest level “input” process in Stackhouse and Wells’ model, that is, speech discrimination, but it is not a discrimination task and the stimuli do not reveal anything about the children’s knowledge of the acoustic-phonetic cues that differentiate one category of speech sounds from another. All the tasks in the program are metaphonological tasks that therefore tap phonological knowledge even though real words and word meanings are not always engaged.

At the recent NZSPA2019 Conference in Brisbane, Jane McCormack divided up the phonological awareness assessment tasks that comprise the CTOPP into input and output tasks purely on the basis of whether a spoken response was required of the child. However, I would not agree that any of these phonological awareness tasks reveal the child’s acoustic-phonetic knowledge of speech sound categories, and therefore there are no “input tasks” per se. All the tasks are tapping meta-phonological knowledge.

If this is still confusing, think of that child who says [s̪it], [s̪nek], [fes̪], and [buts̪], and who confidently identifies [mauθ] as the picture with teeth, but both [maus] and [maus̪] as the picture of the rodent. This same child is able to blend the sounds [m] – [au] – [θ] to recreate the word /mauθ/. If you ask her to say [maus] without [s] she answers [mau]. Here we have a child whose acoustic-phonetic and articulatory-phonetic knowledge of the /s/ phoneme is poor, explaining the consistent distortion in her speech; at the same time the child’s phonological knowledge of the /s/ – /θ/ contrast is good and her meta-phonological skills are good as well. Therefore, when treating this child, we would want to focus at the phonetic level. The Phoneme Factory intervention might be good for her future literacy skills but it would not be the best prescription for her speech articulation problem. We really want to have a clear understanding of the difference between these three levels of representation.

As a more general point, it is really important when citing anyone to match up terms with concepts in a way that is consistent with the cited authors’ original intent. This is hard because the use of terms undergoes so much historical and theoretical change. The changes are good I think – Munson et al. help us to understand that many children with developmental phonological disorders have difficulties in the phonetic domains (acoustic-phonetic and articulatory-phonetic representations) whereas many children with language impairments have deficits in phonological knowledge that are, in fact, a by-product of smaller lexicons. Knowing how to assess and remediate children’s knowledge in these three domains will help us to target our interventions more effectively.

References

Baker, E., Croot, K., McLeod, S., & Paul, R. (2001). Psycholinguistic models of speech development and their application to clinical practice. Journal of Speech, Language, and Hearing Research, 44, 685-702.

McLeod, S., Baker, E., McCormack, J., Wren, Y., Roulstone, S., Crowe, K., . . . Howland, C. (2017). Cluster-randomized controlled trial evaluating the effectiveness of computer-assisted intervention delivered by educators for children with speech sound disorders. Journal of Speech, Language, and Hearing Research, 60(7), 1891-1910. doi:10.1044/2017_JSLHR-S-16-0385

Munson, B., Baylis, A., Krause, M., & Yim, D.-S. (2006). Representation and access in phonological impairment. Paper presented at the 10th Conference on Laboratory Phonology, Paris, France, June 30-July 2.

Munson, B., Edwards, J., & Beckman, M. E. (2005). Phonological knowledge in typical and atypical speech-sound development. Topics in Language Disorders, 25(3), 190-206.

Rvachew, S., & Brosseau-Lapré, F. (2018). Developmental phonological disorders: Foundations of clinical practice (2nd ed.). San Diego, CA: Plural Publishing, Inc. https://www.pluralpublishing.com/publication_dpd2e.htm

Stackhouse, J., & Wells, B. (1993). Psycholinguistic assessment of developmental speech disorders. European Journal of Disorders of Communication, 28, 331-348.

Boys and Spelling

I rather like this new paper by Treiman et al. (2019) in Scientific Studies of Reading on “The unique role of spelling in the prediction of later literacy performance” (in actual fact, word reading performance, because that is the only outcome measure in this study, albeit one measured longitudinally from kindergarten to ninth grade in 970 children). The upshot is that early spelling predicts unique variance in ongoing word reading skills after taking into account early phonological awareness, vocabulary and letter knowledge skills. Presumably spelling captures other important aspects of literacy knowledge such as orthographic knowledge and, I imagine, morphological skills as well.

I have been interested in spelling for a while now because it is the aspect of literacy most likely to be impaired in children who have speech sound disorders. Furthermore, the Quebec government (which funded the research that I will describe here) had been concerned by falling literacy test scores across the province’s schools, and the scores for orthography (a combination of spelling and morphology) had been particularly low. Specifically, the percentage of children passing the province-wide literacy test with respect to orthography fell from 87% in the year 2000 to 77% in 2005, whereas the proportion of children scoring in the unsatisfactory range increased from 5% to 11% over the same period.

Therefore, a group of us set out to develop a tool to predict spelling difficulties in French-speaking children in Quebec, the result being PHOPHLO (Prédiction des Habiletés Orthographiques Par des Habiletés Langage Oral). Specifically, we hypothesized that spelling difficulties at the end of the first and third grades could be predicted by examining oral language skills at the end of kindergarten/beginning of first grade using an iPad-based screen of speech perception, speech production, rime awareness and morphological production skills (more about the test at www.dialspeech.com). The test was found to accord well with teacher predictions of spelling difficulties and objective measures of spelling at the end of first grade:

Kolne, K., Gonnerman, L., Marquis, A., Royle, P., & Rvachew, S. (2016). Teacher predictions of children’s spelling ability: What are they based on and how good are they? Language and Literacy, 18(1), 71-98. [open access]

In a larger study we documented specificity and sensitivity of 93% and 71%, respectively, for the prediction of spelling at the end of second grade:

Rvachew, S., Royle, P., Gonnerman, L., Stanké, B., Marquis, A., & Herbay, A. (2017). Development of a Tool to Screen Risk of Literacy Delays in French-Speaking Children: PHOPHLO. Canadian Journal of Speech-Language Pathology and Audiology, 41(3), 321-340. [open access]

We are especially proud of this latter paper because it won the editor’s award from CJSLPA. And I am especially proud of Alexandre Herbay because he created such beautiful software with only 6 months of funding from MITACS.
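For readers who want to unpack what those specificity and sensitivity figures mean, here is a minimal sketch (my own illustration, not code from the study) of the standard screening formulas: sensitivity is the proportion of children who went on to have spelling difficulty that the screener flagged, and specificity is the proportion of children without spelling difficulty that the screener correctly passed. The counts in the example are invented placeholders chosen only so that the results land near 71% and 93%.

```python
# Minimal sketch of how screening sensitivity and specificity are computed.
# The counts below are invented placeholders, not data from the PHOPHLO study.

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Proportion of children with later spelling difficulty that the screener flagged."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Proportion of children without spelling difficulty that the screener passed."""
    return true_neg / (true_neg + false_pos)

# Hypothetical counts for illustration only
print(round(sensitivity(true_pos=10, false_neg=4), 2))   # 0.71
print(round(specificity(true_neg=56, false_pos=4), 2))   # 0.93
```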

The reason for this blog is that it was only after publishing these papers that it occurred to me to look for gender effects in the data! I don’t know why, because the province-wide literacy test results had been flagging issues with gender differences in literacy performance all along. There has been a significant gap favouring the girls in literacy performance across all scoring criteria since 2000: even after considerable improvement in the success rate since 2005, the gender gap persists. For example, in 2010, 88.7% of children passed the orthography criterion, but the pass rate for girls was 90.1% versus 81.3% for boys. With this concern about the performance of boys looming large at the provincial level, it finally occurred to me to wonder if our PHOPHLO screener would be sensitive to gender differences.

The answer to my question is interesting on two accounts. First, there turns out to be a big gender effect in spelling outcomes, as follows: girls who passed the PHOPHLO screener obtained a second grade spelling test score of 51, compared to 40 for the girls who failed the screener; boys who passed the PHOPHLO screener achieved a spelling test score of 47, compared to 31 for the boys who failed. This means that PHOPHLO performance predicted spelling performance for both boys and girls (main effect of PHOPHLO, F(1,74) = 26.71, p < .0001), but boys obtained lower scores than girls regardless of their PHOPHLO performance (main effect of gender, F(1,74) = 6.61, p = .012), with no significant interaction.
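For anyone curious about how an analysis like this is laid out, below is a minimal sketch of a 2 x 2 between-subjects ANOVA (spelling score by PHOPHLO pass/fail and gender) using the statsmodels formula interface. The tiny data frame contains invented placeholder scores, not our study data, and the group sizes are obviously not those of the real sample; it is only meant to show the structure of the model that yields the two main effects and the interaction term reported above.

```python
# Sketch of a 2 x 2 between-subjects ANOVA: spelling ~ PHOPHLO result x gender.
# The scores below are invented placeholders, not the PHOPHLO study data.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

data = pd.DataFrame({
    "spelling": [51, 49, 40, 38, 47, 45, 31, 29],        # placeholder scores
    "phophlo":  ["pass", "pass", "fail", "fail"] * 2,    # screener result
    "gender":   ["girl"] * 4 + ["boy"] * 4,
})

model = smf.ols("spelling ~ C(phophlo) * C(gender)", data=data).fit()
print(anova_lm(model, typ=2))  # main effects of PHOPHLO and gender, plus interaction
```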

The second interesting finding, however, was that there was no gender difference in PHOPHLO scores: as measured by this screener, the children had equivalent language skills at school entry. There are three possible explanations. First, the screener is only a screener, so it is quite likely that there are differences in language performance between the boys and girls at school entry that go undetected by the PHOPHLO screener; boys and girls do have different trajectories for early language development, although typically only for language production, and it is often reported that boys have caught up by school age. Another possibility is that these early language differences cause a difference in executive functions or temperament for boys that impacts their ability to learn literacy skills in school. The third possibility is that boys are treated differently in school due to gendered social expectations for behavior, interests and social identity that discourage literacy related activities for boys. In any case, this finding raises questions about what happens to boys at school between kindergarten and first grade. Our research is currently concerned with this question and I will share those results during my keynote address at the upcoming 2019 joint conference of Speech Pathology Australia and the New Zealand Speech Therapists Association in Brisbane.

How to score iPad SAILS

As the evidence accrues for the effectiveness of SAILS as a tool for assessing and treating children’s (in)ability to perceive certain phoneme contrasts (see blog post on the evidence here), the popularity of the new iPad SAILS app is growing. Now I am getting questions about how to score the new SAILS app on the iPad so I provide a brief tutorial here. The norms are not built into the app since most of the modules are not normed. However, four of the modules are associated with normative data and can be used to give a sense of whether children’s performance is within the expected range according to age/grade level. Those normative data have been published in our text “Developmental Phonological Disorders: Foundations of Clinical Practice” (derived from the sample described in Rvachew, 2007) but I reproduce the table here and show how to use it.

When you administer the modules lake, cat, rat and Sue you will be provided with an overall Level score for all the Levels in each module as well as item-by-item scores on the Results page. As an example, I show the results page below after administering the rat module.

[Screenshot: SAILS Results page for the rat module]

The screenshot shows the item-by-item performance on the right hand side for Level 2 of the rat module. On the left hand side we can see that the total score for Level 2 was 7/10 correct responses and the total score for Level 1 was 9/10 correct responses (responses to the Practice Level are ignored). To determine if the child’s perception of “r” is within normal limits, average performance across Levels 1 and 2: [(9+7)/20]*100 = 80% correct responses. This score can be compared to the normative data provided in Table 5-7 of the second edition of the DPD text, as reproduced below:

[Table: SAILS normative data (means and standard deviations by module and grade level), reproduced from Table 5-7 of Rvachew & Brosseau-Lapré, 2018]

Specifically, a z-score should be calculated: (80 - 85.70)/12.61 = -0.45. In other words, if the child is in first grade, the z-score is calculated by taking the obtained score of 80% minus the expected score of 85.70% and dividing the result by the standard deviation of 12.61, which gives a z-score that is less than one standard deviation below the mean. Therefore, we are not concerned about this child’s perceptual abilities for the “r” sound. When calculating these scores, observe that some modules have one test level, some have two and some have three. Therefore the average score is sometimes based on 10 total responses, sometimes on 20 total responses as shown here, and sometimes on 30 total responses.
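If you prefer to see the arithmetic laid out in one place, here is a small helper of my own that follows the same steps. The norm mean and standard deviation plugged in below are the first-grade values from the worked example above; you would substitute the appropriate values from Table 5-7 for other modules and grade levels.

```python
# Helper functions that follow the SAILS scoring steps described above.
# Norm values below are from the worked example (first grade, rat module);
# substitute the appropriate Table 5-7 values for other modules/grades.

def sails_percent_correct(level_scores, items_per_level=10):
    """Average across test levels (Practice Level excluded), as a percentage."""
    return 100 * sum(level_scores) / (items_per_level * len(level_scores))

def sails_z_score(percent_correct, norm_mean, norm_sd):
    """Compare the child's percent correct to the normative mean and SD."""
    return (percent_correct - norm_mean) / norm_sd

pct = sails_percent_correct([9, 7])   # Levels 1 and 2 of the rat module
print(pct)                             # 80.0
print(round(sails_z_score(pct, norm_mean=85.70, norm_sd=12.61), 2))  # -0.45
```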

The child’s total score across the four modules lake, cat, rat and Sue can be averaged (ignoring all the practice levels) and compared against the means in the row labeled “all four”. Typically, however, you will want to know about the child’s performance on a particular phoneme, because children’s perceptual difficulties are generally linked to the phonemes that they misarticulate.

Normative data have not been obtained for any of the other modules. As a rule of thumb, however, a score of 7/10 or lower is not a good score – a score this low suggests guessing, or performance not much better than guessing, given that this is a two-alternative forced-choice task.
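To see why 7/10 is unimpressive, you can work out the chance probability directly: with two response alternatives, a child guessing at random gets 7 or more out of 10 correct about 17% of the time. Here is a quick check (my own arithmetic, not part of the SAILS materials):

```python
# Probability of scoring k or more out of n by guessing on a two-alternative
# forced-choice task (chance = .5 per trial).
from math import comb

def p_at_least(k, n=10, p=0.5):
    """Binomial tail probability of k or more correct under random guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(round(p_at_least(7), 3))   # ~0.172, i.e., 7/10 occurs by chance about 17% of the time
```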

Previously we have found that children’s performance on this test is useful for treatment planning, in that children with these speech perception problems will achieve speech accuracy faster when the underlying speech perception problem is treated. Furthermore, poor overall speech perception performance in children with speech delay is associated with slower development of phonological awareness and early reading skills.

I hope that you and your clients enjoy the SAILS task, which can be found on the App Store, with new modules uploaded from time to time: https://itunes.apple.com/ca/app/sails/id1207583276?mt=8

 

Feedback Errors in Speech Therapy

I have been spending hours reviewing video of student SLPs (SSLPs) conducting speech therapy sessions, looking for snippets to take to my upcoming talks at ASHA2018. The students are impressively skilled with a very difficult CAS population, but after this many hours of watching, repeated examples of certain categories of errors pile up in the provision of feedback to children about their attempts to produce the target words, phrases and sentences. I am going to provide some examples here with commentary. In no way am I meaning any disrespect to the students because it is my experience that the average person becomes an idiot when a camera is pointed at them. I recall hearing about studies on the “audience effect” as an undergraduate – the idea is that when your skills are shaky you get worse when someone is watching but when your skills are excellent an audience actually enhances them. My social psychology prof said this even works for cockroaches! I can’t vouch for that but it certainly works for speech pathologists. I remember one time video-taping a session that was required for a course – I thought it went really well so I gave a copy to the parents and the course instructor. Later when watching it I could see clearly that for the whole half hour the child was trying desperately and without success to tell me that I was calling him by the wrong name (I had mixed him up with his twin brother whom I was also treating). I was oblivious to this during the live session but it was clear on the video. Anyway, these examples are not reflections on the students’ skill levels overall but they are examples of common feedback errors that I see in novice and experienced SLPs alike. Interestingly, the clinical educators (CEs) who were supervising these sessions rarely mentioned this aspect of the students’ practice. Readers may find this blog useful as a template for reviewing student practice.

Category 1: No feedback

Child: [repeats 5 different sentences containing the target /s/ cluster words]

SSLP: [Turns to CE.] “What did you get?” [This is followed by 1 minute and 40 seconds of conversation about the child’s level of accuracy and strategies to improve it on the next block of trials.]

SSLP: [Turns back to child.] “You need to sit up. You got 2 out of 5 correct. Now we’re going to count them on my fingers…”

Child: “Do we have to say these?”

Comment on vignette: In this case the SSLP did finally give feedback, but too late for it to be meaningful to the child and only after telling the child off for slouching in her chair! Other variants on this are taking notes about the child’s performance, turning to converse with the child’s parent, or getting caught up in the reinforcement game and forgetting to provide feedback. In CAS interventions it is common to provide feedback on a random schedule or to provide summative feedback after a block of trials. However, the child should be able to predict the block size and have information about whether their performance is generally improving or not. Even if the child does not have a count of the number or percentage of trials correct, the child should know that the practice stimuli are getting more difficult, reflecting performance gains. Sometimes we deliberately plan to not provide feedback because we want the child to evaluate his or her own productions, but in these cases the child is told beforehand and the child is given a means of explicitly making that judgment (e.g., putting a token in a jar). Furthermore, the SSLP would be expected to praise the child for making accurate self-judgments or self-corrections. When the child does not get feedback or cannot track their own progress they will lose interest in the activity. It is common for SSLPs to change the game, thinking that it is not motivating enough, but there is nothing more motivating than a clear sense of success!

Possible solutions: Video record sessions and ask students to watch for and count the frequency of events in which the child has not received expected feedback. Provide the child with visual guides to track progress, indexed either as correct trials or as the difficulty of the practice materials.

Category 2: Ambiguous feedback

SSLP: “Say [ska].”

Child: “[skak]”

SSLP: “OK, take the fish out.”

Comment on the vignette: In this case it is not clear if the SSLP is accepting the inexact repetition of her model. In our CAS interventions we expect the child to produce the model exactly, because metathesis and other planning errors are common, and therefore I would consider this production to be incorrect. Other ambiguous feedback that I observed frequently included “Good try,” “Nice try,” and similar variants. In these cases the child has not received a clear signal that the “try” was incorrect. Another version of ambiguous feedback is to comment on the child’s behavior rather than the child’s speech accuracy (e.g., “You did it by yourself!”, in which case the “it” is ambiguous to the child and not clearly related to the accuracy of the child’s speech attempt).

Possible solutions: SSLPs really do not like telling children that they have said something incorrectly. Ask students to role play firm and informative feedback. Have the students plan a small number of clear phrases that are acceptable to them as indicators of correct and incorrect responses (e.g., “I didn’t hear your snake sound” may be more acceptable than “No, that’s wrong”). Post written copies of the phrases somewhere in the therapy room so that the SLP can see them. Track the use of vague phrases such as “nice try” and impose a mutually agreed but fun penalty for exceeding a threshold number (buy the next coffee round, for example). This works well if students are peer coaching.

Category 3: Mixed signals

SSLP: “Say [ska].”

Child: “[s:ka]”

SSLP: “Good job! Take the fish out.” [Frown on face].

Comment on the vignette: I am rather prone to this one myself due to strong concentration on next moves! But it is really unhelpful for children with speech and language delays who find the nonverbal message much easier to interpret than the verbal message.

Possible solutions: It would be better if SLP therapy rooms looked like a physiotherapy room. It annoys the heck out of me that we can’t get them outfitted with beautiful wall-to-ceiling mirrors. The child and SLP should sit or stand in front of the mirror when working on speech. Many games can be played using ticky tack, reusable stickers or dry-erase pens. The SLP will be more aware of the congruence or incongruence between facial expressions, body language and verbal signals during the session.

Category 4: Feedback that reinforces the error

SSLP: “Repeat after me, Spatnuck” [this is the name of a rocket ship in nonsense word therapy].

Child: “fatnuck”

SSLP: “I think you said fatnuck with a [f:] instead of a [s:].”

Comment on the vignette: Some SSLPs provide this kind of feedback so frequently that the child hears as many models of the incorrect form as the correct form. This is not helpful! This kind of feedback after the error is not easy for young children to process. To help the child succeed, it would be better to change the difficulty level of the task itself and provide more effective support before the next trial. After attempts, recasting incorrect tries and imitating correct tries can help the child monitor their own attempts at the target.

Possible solutions: Try similar strategies as suggested for ambiguous feedback. Plan appropriate feedback in advance. Plan to say this when the incorrect response is heard: “I didn’t hear the snake sound. Let’s try just the beginning of the word, watch me: sss-pat.” And when “spat” is achieved, plan to say “Good, I heard spat, you get a Spatnuck to put in space.”

Category 5: Confused feedback

SSLP: “Oh! Remember to curl your tongue when you say shadow.”

SSLP: “Oh! You found another pair.”

Child: “It’s shell [sʷɛo].”

SSLP: “Oh! I like the way you rounded your lips. Where is your tongue? Remember to hide your tongue.”

SSLP: “Oh! You remembered where it was. You found another pair.”

Child: “Shoes [sʷuz].”

SSLP: “Oh! I like the way you rounded your tongue.”

Comment on vignette: In this vignette the SSLP is providing feedback about three aspects of the child’s performance: finding pairs when playing memory, rounding lips when attempting “sh” sounds, and in some cases anterior tongue placement when attempting the “sh” sound as well. One aspect of her feedback that is confusing when watching the video is the use of the exclamation “Oh!” Initially it appeared to signal an upcoming correction, but it became so constant that it was not a predictable signal of any kind of feedback and was confusing. The exclamation had a negative valence to it but it might precede either a correction or positive feedback. The SSLP also confused her feedback about lips and tongue, and it was not clear whether she was expecting the child to achieve the correct lip gesture, the correct tongue gesture, or both at the same time.

Possible solutions: This can happen when there is too much happening in a session. The CE could help the SSLP restructure the session so that she can focus her attention on one aspect of the child’s behavior at a time, like this: “I want you to name these five pictures. Each time I am going to watch your lips. When you are done you can put the pictures on the table and mix them up for our game later.” If the child rounds the lips each time, switch to focusing on the tongue. When the ten cards are on the table play memory, modeling the picture names. In this way the three behaviors (rounding lips, retracting tongue, finding pairs) are separated in time and the SSLP can focus attention on each one with care, providing appropriate feedback repeatedly during the appropriate intervals.

Category 6: Confused use of reinforcement materials

SSLP: “Repeat after me, [ska].”

Child: “[θak]”

SSLP: [ska]

Child: “[θak]”

SSLP: “OK, take the fish out.”

SSLP: “Repeat after me, [ska].”

Child: [ska]

SSLP: “There you got it, take the fish out.”

SSLP: “Repeat after me, [ska].”

Child: [ska]

SSLP: “Good, and the last one, [ska].”

Child: [ska]

SSLP: “That’s good, take the fish out.”

Comment on vignette: In this vignette the child cannot tell if he gets a fish for correct answers or wrong answers or any answer. It is even worse if the child has been told that he will get a fish for each correct answer. Sometimes a student will say “Everything was going fine, we were having fun and then he just lost it!” When you look at the video you see exchanges such as the one reproduced here leading up to a tantrum by the child. The SSLP has broken a promise to the child. They don’t forgive that.

Possible solutions: This one is hard because it is a classic rookie mistake. Experience is the best cure. Reducing the number of tasks that the SSLP must do simultaneously may help. Therefore, in the early sessions the CE might keep track of the child’s correct and incorrect responses for the SSLP and allow her to focus on managing the materials and the child’s behavior. SSLPs would never think of this, but it is possible to let the child manage the reinforcement materials themselves in some cases. One of our favorite vignettes, reprinted on page 463 of DPD2e (Case Study 9-4), involved an error detection activity in which the child could put toy animals in the barn but only when the SSLP said the names of the animals correctly. The child had the toys in his hands throughout the activity. He would not put them in the barn unless the clinician said the words correctly and would get annoyed if she said them wrong, telling her “you have to say cow [kau]!” SSLPs can learn that it is not necessary to control everything.

I put these examples here for students, clinical educators and speech-language pathologists, and I hope that you will have fun finding these feedback mishaps in your own sessions. If you come up with better strategies to avoid them than I have suggested here, please share them in the comments.