Invoking the influence of emotion in central auditory processing to improve the treatment of speech impairments
===============================================================================================================

* Safa Alqudah
* Maha Zaitoun
* Sara Alqudah

## Abstract

**Objectives:** To explore the benefits of invoking unconscious sentiment to improve the treatment of stuttering and misarticulation.

**Methods:** This cross-sectional study included 80 participants with speech issues (44 patients with misarticulation and 36 with stuttering) who underwent comprehensive speech and hearing evaluations to confirm and diagnose their speech difficulties. Speech-language pathologists then calculated either the percentage of correctly pronounced sounds in misarticulation cases or Stuttering Severity Index-4 (SSI-4) scores in cases of stuttering following the use of therapeutic stimuli recorded with familiar and unfamiliar voices of similar linguistic and phonetic complexity. Descriptive and inferential statistics were used to compare the data collected following the use of familiar and unfamiliar stimuli.

**Results:** The analysis showed that the percentage of dysfluencies in cases of stuttering was significantly lower when employing familiar voices than unfamiliar voices (3% vs 12% errors; Z= -5.16, *p*<0.001). Additionally, the percentage of correctly pronounced target sounds in cases of articulation disorders was significantly higher when using familiar voices compared with unfamiliar voices (88% vs 66% PCC; Z= -5.65, *p*<0.001).

**Conclusion:** This study confirms the utility of invoking emotion in improving speech therapy and maximizing therapeutic outcomes. It also recommends engaging families and friends in providing speech services to the speech-impaired population to improve patient progress.

* articulation
* central auditory system
* emotions
* stuttering

As humans interact with and control their surroundings to ensure their survival, the environment, in turn, influences their behaviors, emotions, motivations, and moods. The emotional aspects of human personality and behavior are organized by the limbic system of the central nervous system.1 Previous research has produced detailed anatomical mappings of emotion, implicating the amygdala, a component of the limbic system, in emotional learning and memory.2-6 The auditory cortex is integral to the extraction of emotional content from speech. Tasking participants with identifying words based either on the emotion with which they were spoken or on their phonetic characteristics, Buchanan et al7 found in a functional magnetic resonance imaging study that the emotion tasks elicited higher right anterior auditory cortical activity than the phonetic tasks did, and concluded that the auditory system analyzes the emotional tone of speech in addition to its content (the actual words). Occasionally, speakers deliberately avoid saying things directly, yet listeners can infer the hidden meaning from the paralinguistic aspects of communication; in other situations, paralinguistic cues may be used to express a need for support and empathy from recipients.8 The same study7 also found that areas including the right anterior and middle frontal gyri were more strongly activated by sad words, suggesting that human beings are more reactive and attentive to unhappy tones than to happy ones.
This preference may be explained by the human tendency to assist an unhappy person and improve their mood rather than to adjust a happy person's feelings. However, these findings may not extend to individuals with impaired hearing, who have been shown to exhibit aberrant emotional processing. Ludlow et al9 found that children with deafness performed significantly worse than their normal-hearing counterparts when asked to classify the emotions of cartoon and real human faces, making more mistakes and showing considerably lower recognition than controls. Dyck et al10 identified differences in emotion-recognition skills between hearing-impaired and normal-hearing children: children with hearing loss scored poorly on both the Fluid Emotions Test and the Emotion Vocabulary Test, which evaluate, respectively, precision in distinguishing emotional alterations and the ability to label and explain emotional expressions. These findings suggest that the development of emotional knowledge might depend on auditory skills. One study investigated the potential connection between emotional processes and articulation speed in preschool-aged children who do and do not stutter: lower articulation rates were found in children with persistent stuttering after a negative emotional event, in contrast to recovered and non-stuttering children, whose articulation rates were not affected by positively or adversely provoking events.11

The sound characteristics of speech reflect the emotional state of the speaker; a speaker's psychological condition affects acoustic parameters such as fundamental frequency, rate, and intensity.12 The most noticeable causes of vocal alteration are emotional changes in the body, such as reactions of the autonomic nervous system (ANS) to sudden environmental events associated with various emotions.13,14 Different mental states are assumed to be accompanied by different types of ANS response,15,16 and one of the physiological manifestations of mental state reported in these studies is a change in vocal level. Emotional events stimulate sympathetic and parasympathetic processes, eventually causing alterations in the physiology of the speech system and in functions associated with breathing, vocal fold phonation, and articulatory movements. Unpleasant emotions such as anxiety or aggravation have been connected with increased sympathetic arousal, which raises blood supply to the muscles, pulse rate, and muscular tension to supply the body with additional force, speed, and conductivity.13,14 This kind of spontaneous bodily reaction can be associated with an increased articulation rate of the speech organs. Likewise, pleasurable feelings such as enjoyment or happiness have been connected with a fast articulation rate. Nevertheless, some literature indicates that feelings conveying unwanted and uncontrolled emotional events, such as distress, cause slower rates of speaking.11,12,15,17 The influence of emotional state ultimately depends, however, on the individual's tolerance and coping capabilities.13,14,18 Previous literature has shown that emotional status is correlated with stuttering in young children.19-21 Although it is not easy to determine whether emotions contribute to or are provoked by stuttering, a group of researchers has indicated a relationship between them, using a variety of methods such as caregiver reports and behavioral observations.22-27
Previous studies have reported that children who stutter show less flexibility toward unexpected environmental changes than their non-stuttering peers.28,29 Intense reactions to emotional conditions have been reported more frequently in children who stutter, along with lower emotional control and a higher likelihood of developing problems with the depth and duration of feelings.25,30-35 Furthermore, caregivers have reported that their children who stutter express more negative temperament and are more hypersensitive, nervous, frightened, introverted, or isolated.29,32,36,37 Experimental studies have highlighted a connection between stuttering and emotions at preschool age: children who stutter display more symptoms after emotionally agitating situations, particularly when a strong emotional response is combined with limited emotional control.23,25,26,38,39 A study carried out by Ambrose et al22 indicated that, based on parental reports, the temperamental traits of preschool children with persistent stuttering were more pronounced than those of recovered and non-stuttering children.

In the present study, a new way of assessing the efficiency and effectiveness of speech and language therapy has been pioneered by applying the practical implications of findings concerning the connection between hearing and emotions. Specifically, we speculate that familiar voices will stimulate emotional central auditory processing in addition to phonetic processing and will bolster patients' confidence and comfort while undergoing therapy. Hence, we explore the hypothesis that using therapeutic auditory material recorded by familiar voices will improve the outcomes of patients with speech impairment in terms of reducing stuttering errors and increasing the percentage of consonants correct (PCC). The data provided by this study offer new insight into the functions and adaptability of the auditory system according to the emotional content of auditory stimuli, and inform new methods of conducting speech and language interventions to ensure an optimal prognosis, particularly in cases of abnormal signal processing.

## Methods

This cross-sectional study of 80 Arabic speakers (age range: 4-22 years; male: 67.5%) was carried out at the Speech and Hearing Clinic, King Abdallah University Hospital (KAUH), Irbid, Jordan. Each participant had normal hearing and cognition and complained of either stuttering or misarticulation. To confirm their hearing function, each participant underwent a comprehensive diagnostic battery comprising an assessment of clinical history, otoscopic examination, immittance measurements, and behavioral tests. The battery used to evaluate central auditory processing included the gaps-in-noise, masking level difference, pitch pattern sequence, duration pattern sequence, and random gap detection tests. For subjects younger than 7 years, the parents completed Fisher's Auditory Problems Checklist. We obtained written informed consent from the subjects (or their parents or guardians), and the study protocol was approved by the Institutional Research Board of the Jordan University of Science and Technology (20190016). The exclusion criteria were as follows: hearing loss, normal childhood dysfluencies, an immature speech system, and abnormal signal processing or other physical/mental impairments that could impact speech comprehension or production.
Of 115 initially recruited participants, 35 were excluded for deficits in central auditory processing and 2 for other perceptual abnormalities besides their speech disorder. The participants were evaluated by a well-qualified speech-language pathologist to define the type and severity of the speech impediment and obtain baseline clinical data for each participant. The evaluation tools included an orofacial examination, the Stuttering Severity Index-4 (SSI-4), and a non-standardized language and articulation test.

Before the therapy sessions began, 4 speech-language pathologists generated an experimental plan for each participant, including tailored speech samples. To induce specific emotional states and measure their effects on speech therapy, each subject listened to emotional (negative and positive) and emotionally neutral (dry) words produced by both a caregiver and a therapist. The speech samples ultimately consisted of 4 lists: the first contained speech stimuli delivered in an emotional context and recorded by a professional speech-language pathologist; the second was recorded by a familiar person, such as a parent, sibling, or friend of the participant, in emotionally provoking situations; the third was recorded by the speech-language pathologist in a non-emotional arrangement; and the last was likewise recorded by a familiar voice, but in an objective, non-emotional context. All the lists contained the same number of words or sentences, and the phonetic and language levels were customized to be similar across all 4 lists.

The recorded materials were uploaded to software designed to facilitate the therapy sessions. Each session featured lists containing 200-syllable sentence samples for stuttering patients or 10 words for subjects with misarticulation, together with several tasks, including questions conveyed in written, audio, or graphic form. To make the software user-friendly regardless of the participant's age, responses could be given in various modes, including written, audio, or visual formats. The speech-language pathologist then presented the prompts recorded by familiar and unfamiliar voices and calculated either the percentage of correct responses for participants with articulation difficulties or SSI-4 scores for those who stuttered. The number of sessions was fixed at 3 for every patient, with an average session time of approximately 45 minutes. The therapist followed the traditional approach for articulation disorders and stuttering modification strategies for the treatment of dysfluencies. Appropriate reinforcement was introduced in the software whenever the patient achieved correct pronunciation or provided a fluent answer. To ensure the accuracy of the therapeutic speech material and the absence of ambiguity in its content, it was subjectively assessed by the participants before testing; they were asked to compare the intelligibility of the lists by reading them and rating each item on a 5-point scale, where 5 indicated "easy to understand all the time" and 1 indicated "unable to understand". Having participants preview the lists in this way would not affect the study results.

### Statistical analysis

The Statistical Package for the Social Sciences, version 16 (IBM Corp., Armonk, NY, USA) was used for all statistical analyses. The Kolmogorov-Smirnov test was first applied to check the normality of the data.
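As a rough illustration of the comparison pipeline used in this and the following paragraphs (normality screening, an overall Friedman test across the 4 therapy conditions, and Bonferroni-adjusted Wilcoxon signed-rank post hoc tests, described below), a minimal Python sketch follows. The data are randomly generated placeholders whose means and SDs merely echo the reported stuttering-group results; no variable names or values come from the study itself.

```python
# Minimal sketch of the analysis pipeline: normality screening, a Friedman
# test across the 4 related conditions, and Bonferroni-adjusted Wilcoxon
# signed-rank post hoc tests. All data here are hypothetical placeholders.
from itertools import combinations

import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
n = 36  # stuttering group size reported in the study

# Hypothetical per-participant dysfluency percentages under the 4 conditions
# (familiar/unfamiliar voice x emotional/neutral words).
conditions = {
    "familiar+emotional": rng.normal(3.0, 6.0, n),
    "unfamiliar+emotional": rng.normal(7.25, 11.9, n),
    "familiar+neutral": rng.normal(6.0, 7.0, n),
    "unfamiliar+neutral": rng.normal(12.0, 12.2, n),
}

# Normality screening (the study used Kolmogorov-Smirnov and Shapiro-Wilk).
w, p_norm = stats.shapiro(conditions["familiar+emotional"])
print(f"Shapiro-Wilk: W={w:.3f}, p={p_norm:.3f}")

# Overall test for any difference among the 4 repeated-measures conditions.
chi2, p_overall = stats.friedmanchisquare(*conditions.values())
print(f"Friedman: chi2(3)={chi2:.2f}, p={p_overall:.4g}")

# Post hoc pairwise comparisons with a Bonferroni adjustment (6 pairs).
pairs = list(combinations(conditions, 2))
for a, b in pairs:
    stat, p = stats.wilcoxon(conditions[a], conditions[b])
    p_adj = min(p * len(pairs), 1.0)  # Bonferroni correction
    print(f"{a} vs {b}: W={stat:.1f}, adjusted p={p_adj:.4g}")
```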
The demographic information for the subjects was then analyzed using descriptive statistics to summarize variables including age, duration of impairment, gender, and educational level. Each patient was exposed to 4 phonetically and linguistically equal lists, and outputs were assessed for each participant across the lists; the lists were designed to be emotionally diverse in content and to act as carriers of the verbal samples. A Shapiro-Wilk test showed that the data departed significantly from normality (W(80)=0.865, *p*<0.05). As the data did not meet parametric assumptions, the Friedman test was run to compare overall differences in the degree of misarticulation or stuttering between responses prompted by familiar and unfamiliar voices in emotional and non-emotional contexts. To pinpoint which conditions differed from each other, post hoc tests were employed; *p*-values of <0.05 indicated statistical significance in every test.

## Results

A total of 80 participants (age range: 4-22 years) with speech issues characterized as either misarticulation or stuttering were recruited. One to 2 diagnostic speech sessions were carried out before the 3 therapeutic sessions to establish an appropriate diagnosis for each participant: 45% of the recruited individuals were diagnosed with stuttering and 55% with articulation disorders. Most of the participants (67.5%) were male, reflecting the higher prevalence of speech and language impairments among males in clinical populations.40 The majority of participants were elementary school students (76.3%), and more than three-quarters of the participants' families had annual incomes below 10,000 United States dollars. The demographic details of the participants are shown in Table 1.

Table 1 - Sociodemographic characteristics of the participants.

### The severity and duration of the participants' speech impediments

The severity of each case was evaluated before commencing therapy. Those with sound-production disorders were assessed with a single-word test, and the severity of misarticulation was determined by applying the PCC. The mean PCC of 96.4% indicated mild severity, since all participants performed poorly on only one sound. These participants reported an average speech-impediment history of 5.3 years. Regarding the severity of stuttering, the percentage of dysfluencies was calculated after conducting the SSI-4. Based on the collected results, the mean percentage of dysfluencies across the subjects was 62%, indicating moderate severity (a computational sketch of both severity metrics appears at the end of this subsection). These participants reported an average stuttering history of 7 years.

### Intelligibility of the research tool

Prior to collecting the data from each subject, a research tool was designed for each patient with misarticulation to assess the target sound that required improvement. The therapeutic materials delivered to participants with stuttering, on the other hand, were not tailored.
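For illustration, the core computations behind the two severity metrics referenced above can be sketched as follows. This is a minimal sketch with hypothetical counts; the full clinical scoring of the SSI-4 and of PCC-based severity banding involves additional rules not shown here.

```python
# Minimal sketch of the two rate metrics used in this study: percentage of
# consonants correct (PCC) and percentage of stuttered syllables. All counts
# below are hypothetical illustrations, not study data.

def pcc(correct_consonants: int, total_consonants: int) -> float:
    """Percentage of consonants correct in an articulation sample."""
    return 100.0 * correct_consonants / total_consonants

def percent_dysfluencies(stuttered_syllables: int, total_syllables: int) -> float:
    """Percentage of stuttered syllables in a speech sample."""
    return 100.0 * stuttered_syllables / total_syllables

# Example: 27 of 28 consonants produced correctly on a single-word test,
# consistent with poor performance on a single sound (mild severity).
print(f"PCC = {pcc(27, 28):.1f}%")  # -> 96.4%

# Example: 6 stuttered syllables in one of the 200-syllable sentence samples.
print(f"%SS = {percent_dysfluencies(6, 200):.1f}%")  # -> 3.0%
```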
After 4 language and speech therapists carefully evaluated the lists to ensure their linguistic and phonetic equality, each patient was asked to rate each item on a 5-point Likert scale. The mean ratings were 4.93 for the familiar voice with non-emotional words, 4.88 for the unfamiliar voice with non-emotional words, 4.84 for the familiar voice with emotional words, and 4.91 for the unfamiliar voice with emotional words, indicating that the intelligibility of the lists was statistically similar (Z= -1.50, *p*>0.05; Table 2). Further review was not implemented because no participant reported any difficulty or noticeable variation between the 4 lists.

Table 2 - The intelligibility scores for the 4 therapeutic lists, as rated by the patients.

### Impact of emotions on the progression of speech therapy and the performance of the central nervous system

The results of exposing the patients to speech material recorded by the voices of familiar individuals and of speech-language pathologists in emotional and non-emotional contexts were analyzed with the Friedman test. We found that the percentage of correct sound production among participants with articulation difficulties and the percentage of dysfluencies among participants with stuttering differed significantly across the 4 conditions: i) familiar voice using emotional vocabulary; ii) unfamiliar voice using emotional vocabulary; iii) familiar voice using emotionless vocabulary; and iv) unfamiliar voice using emotionless vocabulary (stuttering group: χ²(3)=80.63, *p*<0.05; misarticulation group: χ²(3)=77.35, *p*<0.05). To examine where the actual differences occurred, separate Wilcoxon signed-rank tests were run on the pairwise combinations of the 4 treatment conditions. After a Bonferroni adjustment was applied, the Wilcoxon signed-rank tests showed that the lowest dysfluency percentage among participants with stuttering was obtained for stimuli delivered by a familiar voice in an emotional context (3%, standard deviation (SD): 6.05), compared with using emotional words alone (7.25%, SD: 11.94; Z= -4.81, *p*<0.001), a familiar voice alone (6%, SD: 7.03; Z= -4.17, *p*<0.001), or neither (12%, SD: 12.17; Z= -5.16, *p*<0.001; Table 3). Likewise, the highest percentage of correct sound production among misarticulation patients was achieved when provoking the target sound with emotional words in a familiar voice (88%, SD: 12.43), in contrast to a familiar voice alone (74%, SD: 11.81; Z= -4.41, *p*<0.001) or neither (66%, SD: 13.58; Z= -5.65, *p*<0.001). No statistically significant difference in improvement was found among patients with misarticulation when emotional words alone were used (83%, SD: 11.58; Z= -0.554, *p*>0.05; Table 4).

Table 3 - Performance of the 36 participants in the sessions employing familiar and unfamiliar voices in emotional and non-emotional contexts to treat stuttering.

Table 4 - Performance of the 44 participants in the sessions employing familiar and unfamiliar voices in emotional and non-emotional contexts to treat misarticulation.

## Discussion

This study aimed to apply the practical implications of the theory that emotions affect central auditory processing.
We found that the prognoses of dysfluency and sound-production disorders improved with the use of familiar auditory stimuli coupled with emotional contexts, emphasizing the importance of engaging caregivers, families, and friends in therapeutic activities to ensure that treatment goals are reached efficiently. In the present study, listening to speech was found to be affected by the degree of stimulus familiarity and by the emotional relevance of the auditory stimuli. While the amygdala has been broadly implicated in the conscious perception of emotion,3,5,6 the current research observed that the progress of speech therapy is accelerated by using familiar voices that the child hears frequently. The effect is stronger when words with emotional content are used to target speech production. These findings are consistent with several studies indicating a connection between children's emotional states and the articulation rates of their speech organs.13,14,17,18 Although previous literature13,14,17,18 agrees that acoustic alterations accompany emotional changes, there is debate over whether positively or adversely provoking events are responsible for enhancing the articulation rate of the movable articulators. Consequently, the focus of the present research was the combined effect of using a well-known voice and emotionally spoken messages rather than depending only on emotional words.

The fewest fluency problems in the current research were observed when parents' voices were used as carriers for emotional speech, compared with the other experimental conditions: emotionally neutral (dry) speech samples, the speech-language pathologist's voice, or sentimental expressions not spoken by a well-known voice. These findings are compatible with previous literature showing that emotional status is linked to stuttering in young populations.19-21 Although it is not easy to determine whether emotions contribute to or are provoked by stuttering, several studies have indicated a relationship between them.22-27 No previous literature has discussed the role of combining emotional contexts with familiar voices in enhancing speech therapy outcomes. The rationale for using well-known voices in this study derives from several prominent studies demonstrating that stimuli learned by repetition are memorized faster and maintained for a longer period of time.41,42

The only difference between the 2 experimental groups was that, for misarticulation patients, no obvious therapeutic advantage of combining a familiar voice with emotional contexts over using emotional words alone was observed, in contrast to patients with stuttering. Indeed, strong reactions to emotional circumstances have been noticed more regularly in children who stutter.25,30,31 While the participants in the present study were undergoing therapy, the speech-language pathologists noticed that the physical concomitants, such as rapid eye blinking, tremors of the lips or jaw, facial tics, head jerks, and clenched fists, among participants who stuttered were less severe when they were exposed to familiar voice samples than when they were exposed to unfamiliar voices.
This implies the potential of using recognizable voices in the early stages of stuttering therapy to raise patients' self-confidence, diminish negative feelings, and thus mitigate unwanted body or facial movements. The diminished frequency of these movements also suggests that invoking emotion has a salutary effect on the neural mechanisms that produce unwanted movements and exacerbate stuttering. Future research should consider the differential neural activity prompted in individuals who stutter by emotionally relevant and irrelevant stimuli to investigate this possibility further.

### Study limitations

This study had a small sample size and did not consider language impairments such as central auditory processing disorder and autism. Furthermore, these results should be validated in a larger prospective study examining patients over a period of time, specifically those with language disorders or delays. Such data would improve our understanding of the influence of emotions on central auditory mechanisms.

In conclusion, this study demonstrates the capacity of emotionally salient stimuli to maximize the benefits of speech therapy and speed its progression. Future studies will be necessary to assess the effect of sentiment on the function of the central auditory systems of children with learning disabilities, as well as the use of patients' families, caregivers, and friends to record therapeutic resources and improve the efficacy of language therapy sessions.

## Acknowledgment

*The authors gratefully thank Proof-Reading-Service.com Ltd ([Proof-Reading-Service.com](https://Proof-Reading-Service.com)) for the English language editing.*

## Footnotes

* **Disclosure.** The authors have no conflicts of interest, and the work was not supported or funded by any drug company.
* Received September 1, 2021.
* Accepted October 18, 2021.
* Copyright: © Saudi Medical Journal. This is an Open Access journal and articles published are distributed under the terms of the Creative Commons Attribution-NonCommercial License (CC BY-NC). Readers may copy, distribute, and display the work for non-commercial purposes with the proper citation of the original work.

## References

1. Hagemann D, Waldstein SR, Thayer JF. Central and autonomic nervous system integration in emotion. Brain Cogn 2003; 52: 79–87.
2. Sarter M, Markowitsch HJ. Involvement of the amygdala in learning and memory: A critical review, with emphasis on anatomical relations. Behav Neurosci 1985; 99: 342–380.
3. Davis M, Whalen PJ. The amygdala: vigilance and emotion. Mol Psychiatry 2001; 6: 13–34.
4. Parkinson JA, Robbins TW, Everitt BJ. Dissociable roles of the central and basolateral amygdala in appetitive emotional learning. Eur J Neurosci 2000; 12: 405–413.
5. Paré D, Collins DR, Pelletier JG. Amygdala oscillations and the consolidation of emotional memories. Trends Cogn Sci 2002; 6: 306–314.
6. Kryklywy JH, Nantes SG, Mitchell DGV. The amygdala encodes level of perceived fear but not emotional ambiguity in visual scenes. Behav Brain Res 2013; 252: 396–404.
7. Buchanan TW, Lutz K, Mirzazade S, Specht K, Shah NJ, Zilles K, et al. Recognition of emotional prosody and verbal components of spoken language: An fMRI study. Brain Res Cogn Brain Res 2000; 9: 227–238.
8. Ethofer T, Anders S, Erb M, Droll C, Royen L, Saur R, et al. Impact of voice on emotional judgment of faces: An event-related fMRI study. Hum Brain Mapp 2006; 27: 707–714.
9. Ludlow A, Heaton P, Rosset D, Hills P, Deruelle C. Emotion recognition in children with profound and severe deafness: Do they have a deficit in perceptual processing? J Clin Exp Neuropsychol 2010; 32: 923–928.
10. Dyck MJ, Farrugia C, Shochet IM, Holmes-Brown M. Emotion recognition/understanding ability in hearing or vision-impaired children: do sounds, sights, or words make the difference? J Child Psychol Psychiatry 2004; 45: 789–800.
11. Erdemir A, Walden TA, Jefferson CM, Choi D, Jones RM. The effect of emotion on articulation rate in persistence and recovery of childhood stuttering. J Fluency Disord 2018; 56: 1–17.
12. Bachorowski JA, Owren MJ. Vocal expression of emotion: Acoustic properties of speech are associated with emotional intensity and context. Psychol Sci 1995; 6: 219–224.
13. Johnstone T, Scherer KR. Vocal communication of emotion. In: Lewis M, Haviland-Jones JM, editors. Handbook of emotions. 2nd ed. New York (NY): The Guilford Press; 2000. p. 220–235.
14. Scherer KR. Vocal communication of emotion: A review of research paradigms. Speech Commun 2003; 40: 227–256.
15. Levenson RW. The autonomic nervous system and emotion. Emot Rev 2014; 6: 100–112.
16. Levenson RW. Blood, sweat, and fears: The autonomic architecture of emotion. Ann N Y Acad Sci 2003; 1000: 348–366.
17. Ellgring H, Scherer KR. Vocal indicators of mood change in depression. J Nonverbal Behav 1996; 20: 83–110.
18. Schröder M, Cowie R, Douglas-Cowie E, Westerdijk M, Gielen S. Acoustic correlates of emotion dimensions in view of speech synthesis. In: Proceedings of the 7th European Conference on Speech Communication and Technology (Eurospeech 2001); 2001 September 3-7; Aalborg, Denmark; 2001. p. 87–90.
19. Conture EG, Kelly EM, Walden TA. Temperament, speech and language: An overview. J Commun Disord 2013; 46: 125–142.
20. Choi D, Conture EG, Walden TA, Jones RM, Kim H. Emotional diathesis, emotional stress, and childhood stuttering. J Speech Lang Hear Res 2016; 59: 616–630.
21. Kefalianos E, Onslow M, Block S, Menzies R, Reilly S. Early stuttering, temperament and anxiety: Two hypotheses. J Fluency Disord 2012; 37: 151–163.
22. Ambrose NG, Yairi E, Loucks TM, Seery CH, Throneburg R. Relation of motor, linguistic and temperament factors in epidemiologic subtypes of persistent and recovered stuttering: Initial findings. J Fluency Disord 2015; 45: 12–26.
23. Arnold HS, Conture EG, Key AP, Walden T. Emotional reactivity, regulation and childhood stuttering: A behavioral and electrophysiological study. J Commun Disord 2011; 44: 276–293.
24. Choi D, Conture EG, Walden TA, Lambert WE, Tumanova V. Behavioral inhibition and childhood stuttering. J Fluency Disord 2013; 38: 171–183.
25. Jones RM, Buhr AP, Conture EG, Tumanova V, Walden TA, Porges SW. Autonomic nervous system activity of preschool-age children who stutter. J Fluency Disord 2014; 41: 12–31.
26. Walden TA, Frankel CB, Buhr AP, Johnson KN, Conture EG, Karrass JM. Dual diathesis-stressor model of emotional and linguistic contributions to developmental stuttering. J Abnorm Child Psychol 2012; 40: 633–644.
27. Kefalianos E, Onslow M, Ukoumunne O, Block S, Reilly S. Stuttering, temperament, and anxiety: Data from a community cohort ages 2-4 years. J Speech Lang Hear Res 2014; 57: 1314–1322.
28. Anderson JD, Pellowski MW, Conture EG, Kelly EM. Temperamental characteristics of young children who stutter. J Speech Lang Hear Res 2003; 46: 1221–1233.
29. Howell P, Davis S, Patel H, Cuniffe P, Downing-Wilson D, Au-Yeung J, et al.
Fluency development and temperament in fluent children and children who stutter. In: Theory, research and therapy in fluency disorders; 2004. p. 250–256.
30. Karrass J, Walden TA, Conture EG, Graham CG, Arnold HS, Hartfield KN, et al. Relation of emotional reactivity and regulation to childhood stuttering. J Commun Disord 2006; 39: 402–423.
31. Zengin-Bolatkale H, Conture EG, Walden TA. Sympathetic arousal of young children who stutter during a stressful picture naming task. J Fluency Disord 2015; 46: 24–40.
32. Eggers K, De Nil L, Van den Bergh B. Attention shifting in children who stutter. In: Convention of the International Association of Logopedics and Phoniatrics; 2010 August 22-26; Athens, Greece; 2010.
33. Eggers K, De Nil LF, Van den Bergh BR. Inhibitory control in childhood stuttering. J Fluency Disord 2013; 38: 1–13.
34. Felsenfeld S, van Beijsterveldt CEM, Boomsma DI. Attentional regulation in young twins with probable stuttering, high nonfluency, and typical fluency. J Speech Lang Hear Res 2010; 53: 1147–1166.
35. Schwenk KA, Conture EG, Walden TA. Reaction to background stimulation of preschool children who do and do not stutter. J Commun Disord 2007; 40: 129–141.
36. Blood GW, Blood IM. Preliminary study of self-reported experience of physical aggression and bullying of boys who stutter: Relation to increased anxiety. Percept Mot Skills 2007; 104: 1060–1066.
37. Iverach L, Jones M, McLellan LF, Lyneham HJ, Menzies RG, Onslow M, et al. Prevalence of anxiety disorders among children who stutter. J Fluency Disord 2016; 49: 13–28.
38. Johnson KN, Walden TA, Conture EG, Karrass J. Spontaneous regulation of emotions in preschool children who stutter: Preliminary findings. J Speech Lang Hear Res 2010; 53: 1478–1495.
39. Jones RM, Walden TA, Conture EG, Erdemir A, Lambert WE, Porges SW. Executive functions impact the relation between respiratory sinus arrhythmia and frequency of stuttering in young children who do and do not stutter. J Speech Lang Hear Res 2017; 60: 2133–2150.
40. Adani S, Cepanec M. Sex differences in early communication development: behavioral and neurobiological indicators of more vulnerable communication system development in boys. Croat Med J 2019; 60: 141–149.
41. Grill-Spector K, Henson R, Martin A. Repetition and the brain: neural models of stimulus-specific effects. Trends Cogn Sci 2006; 10: 14–23.
42. Segaert K, Weber K, de Lange FP, Petersson KM, Hagoort P. The suppression of repetition enhancement: a review of fMRI studies. Neuropsychologia 2013; 51: 59–66.