For example, the landmark paper of Bever and Chiarello (1974) emphasized the different roles of the two hemispheres in processing music and language, with the left hemisphere considered more specialized for propositional, analytic, and serial processing and the right hemisphere more specialized for appositional, holistic, and synthetic relations. This view has been challenged in recent years, mainly because of the advent of modern brain imaging techniques and the improvement of neurophysiological measures for investigating brain functions. Using these innovative approaches, an entirely new view of the neural and psychological underpinnings of music and speech has evolved. The findings of these more recent studies show that music and speech functions have many aspects in common and that several neural modules are similarly involved in speech and music (Tallal and Gaab, 2006). There is also emerging evidence that speech functions can benefit from music functions and vice versa. This field of research has accumulated a wealth of new information, and it is therefore timely to bring together the work of those researchers who have been most visible, productive, and inspiring in this field.
This special issue comprises a collection of 20 review and research papers that focus on the specific relationship between music and language. Of these 20 papers, 12 are research papers that report entirely new findings supporting the close relationship between music and language functions. Two papers report findings demonstrating that phonological awareness, which is pivotal for reading and writing skills, is closely related to pitch awareness and musical expertise (Degé and Schwarzer, 2011; Loui et al., 2011). Degé and Schwarzer even show that pre-schoolers can benefit from a program of musical training to increase their phonological awareness.
Three research papers focus on the relationship between tonal language expertise and musical pitch perception skills and on whether pitch-processing deficits might influence tonal language perception. Giuliano et al. (2011) demonstrated that Mandarin speakers are highly sensitive to small pitch changes and interval distances, a sensitivity that was absent in the control group. Using ERPs obtained during the pitch and interval perception tasks, their study reveals earlier ERP responses to these pitch changes (relative to no-change trials) in Mandarin speakers than in controls. In their elegant paper, Peretz et al. (2011) report that native speakers of a tone language, in which pitch contributes to word meaning, are impaired in the discrimination of falling pitches in tone sequences as compared with speakers of a non-tone language. Taken together, these two studies illustrate the cross-domain influence of language experience on the perception of pitch, suggesting that the native use of tonal pitch contours in language leads to a general enhancement in the acuity of pitch representations. Tillmann et al. (2011) examined whether subjects suffering from congenital amusia also show impaired pitch processing in speech, specifically of the pitch changes used to contrast lexical tones in tonal languages. Their study revealed that the performance of congenital amusics was inferior to that of controls for all materials, including the Mandarin stimuli, suggesting a domain-general pitch-processing deficit.
Five research papers examine either interactions between musical expertise and language functions or whether such interactions benefit phonetic perception. Ott et al. (2011) demonstrate that professional musicians process unvoiced stimuli (irrespective of whether these are speech or non-speech stimuli) differently from controls, suggesting that early phonetic processing is organized differently depending on musical expertise. Strait and Kraus (2011) report perceptual advantages in musicians for hearing and neural encoding of speech in background noise. They also argue that musicians possess a neural proficiency for selectively engaging and sustaining auditory attention to language and that music thus represents a potential benefit for auditory training. Gordon et al. (2011) examined the interaction between linguistic stress and musical meter and established that the alignment of linguistic stress and musical meter in song enhances musical beat tracking and comprehension of lyrics. Their study thus supports the notion of a strong relationship between linguistic and musical rhythm in songs. Hoch et al. (2011) investigated the effect of a musical chord's tonal function on syntactic and semantic processing and conclude that the neural and psychological resources of music and language processing strongly overlap. The fifth paper of this group (Omigie and Stewart, 2011) demonstrates that the difficulties amusic individuals have with real-world music cannot be accounted for by an inability to internalize lower-order statistical regularities but may arise from other factors. Although there are still some differences between music and speech processing, there is thus growing evidence that the two strongly overlap.
Halwani et al. (2011) examined whether the arcuate fasciculus, a prominent white-matter tract connecting temporal and frontal brain regions, differs anatomically between singers, instrumentalists, and non-musicians. They showed that long-term vocal-motor training might lead to an increase in volume and microstructural complexity (as indexed by fractional anisotropy measures) of the arcuate fasciculus in singers. Most likely, these anatomical changes reflect singers' need to strongly link frontal and temporal brain regions, regions that are typically also involved in the control of many speech functions. The beneficial impact of music on speech functions has also been demonstrated by Vines et al. (2011). They examined whether melodic intonation therapy (MIT) in Broca's aphasics can be improved by simultaneously applying anodal transcranial direct current stimulation (tDCS), and indeed showed that combining right-hemisphere anodal tDCS with MIT sped up recovery from post-stroke aphasia.
In addition to these 12 research papers, there are 8 review and opinion papers that highlight the tight link between music and language. Patel (2011) proposes the so-called OPERA hypothesis, with which he explains why music is beneficial for many language functions. The acronym OPERA stands for five conditions that might drive plasticity in speech-processing networks (Overlap: anatomical overlap in the brain networks that process acoustic features used in both music and speech; Precision: music places higher demands on these shared networks than does speech; Emotion: the musical activities that engage this network elicit strong positive emotion; Repetition: the musical activities that engage this network are frequently repeated; Attention: the musical activities that engage this network are associated with focused attention). According to the OPERA hypothesis, when these conditions are met, neural plasticity drives the networks in question to function with higher precision than is needed for ordinary speech communication. While Patel's paper is more of an opinion paper that puts musical expertise into a broader context, the seven other reviews emphasize specific aspects of the current literature on music and language. Ettlinger et al. (2011) emphasize the specific role of implicitly acquired knowledge, implicit memory, and their associated neural structures in the acquisition of linguistic or musical grammar. Milovanov and Tervaniemi (2011) underscore the beneficial influence of musical aptitude on the acquisition of linguistic skills, for example in learning a second language. Dalla Bella et al. (2011) summarize findings from the existing literature on normal and poor-pitch singing and suggest that pitch imitation may be selectively inaccurate in the music domain without being affected in speech, thus supporting the separability of the mechanisms subserving pitch production in music and language. In their extensive review of the literature, Besson et al. (2011) discuss transfer effects from music to speech, focusing specifically on musical expertise in musicians. Shahin (2011) reviews neurophysiological evidence supporting an influence of musical training on speech perception at the sensory level and discusses whether such transfer could facilitate speech perception in individuals with hearing loss; this review also explains the basic neurophysiological measures used in studies of speech and music perception. The comprehensive review by Koelsch (2011) summarizes findings from neurophysiology and brain imaging on music and language processing and integrates them into a broader "neurocognitive model of music perception." Specific emphasis is placed on musical syntax and its similarities to and differences from language syntax. Schön and François (2011) present a review focusing on a series of electrophysiological studies that investigated speech segmentation and the extraction of linguistic versus musical information. They show that musical expertise facilitates the learning of both linguistic and musical structures, and that electrophysiological measures are often more sensitive than behavioral measures for identifying music-related differences.
Taken together, this special issue provides a comprehensive summary of the current knowledge on the tight relationship between music and language functions. This evidence suggests that musical training may aid in the prevention, rehabilitation, and remediation of a wide range of language, listening, and learning impairments. At the same time, this body of evidence might shed new light on how the human brain uses shared network capabilities to generate and control different functions.
What do you have to do to bridge the gap between who you are now and who you want to become in the context of international business communication? What are the next five steps you could take? Who could you ask for help? What resources do you need?
One of the keys to success in developing your linguistic identity is to enjoy the process. Forget the tedious drills you had to learn at school. Enhancing your linguistic competence should be fun. I didn't spend a single moment not enjoying myself when learning Portuguese.
***
I came back to the bar. Nothing had changed. The rustling sound of European Portuguese was filling the room. There were so many of them! The fight-or-flight response activated in my head and I screamed inside, but at that point I'd had enough. "Oh, just shut up!" I scolded myself. Then I attached my name badge, walked up to a group of people and said my first "olá". Result: I left three hours later with a bunch of new professional contacts and an invitation to become part of London's Portuguese community.
Anna-Jane Niznikowska is a Communication Consultant who helps non-native English-speaking professionals become confident international business communicators.
She is the founder of Telegraph Street and creator of Linguistic Business Coaching (LBC) - a training and coaching methodology that increases linguistic self-confidence and enhances spontaneity of expression among non-native English speakers.
Removing "is" from your language
This next one is super interesting. Alfred Korzybski, the creator of General Semantics, was firmly convinced that the 'to be' verbs like "I am", "he is", "they are", and "we are" promoted insanity. Why? Quite simply because one thing can't be exactly equal to something else. Douglas Cartwright explains further: "This X = Y creates all kinds of mental anguish and it doesn't need to because we never can reduce ourselves to single concepts. You believe yourself to have more complexity than that, don't you? Yet unconsciously accepting this languaging constrains us to believe we operate as nothing more or less than the idea we identified ourselves with."
Read the following list of examples and you'll see immediately how different the outcomes of the statements are:
He is an idiot vs. He acted like an idiot in my eyes
She is depressed vs. She looks depressed to me
I am a failure vs. I think I've failed at this task
I am convinced that vs. It appears to me that
You, Because, Free, Instantly, New: The 5 Most Persuasive Words in English
In a terrific article, Gregory Ciotti researched the five most persuasive words in English. His list is not surprising, and yet the research behind it is extremely powerful.
You: the word "you", or a person's name, is something that's so easy to forget and yet so important for great communication.
Free: Gregory explains Ariely's principle of loss aversion. All of us naturally go for the lowest-hanging fruit, and "free" triggers exactly that.
Because: "Because" is probably as dangerous as it is useful. Creating a causal relationship is incredibly persuasive: "even giving weak reasons have been shown to be more persuasive than giving no reason at all."
Instantly: If we can trigger something immediately, our brain jumps on it like a shark, says Greg: "Words like 'instant,' 'immediately,' or even 'fast' are triggers for flipping the switch on that mid-brain activity."
Quick last fact: make three positive comments for every negative statement, and counter every negative argument with three positive ones. This comes from Andrew Newberg, whose research suggests that negative arguments have a very detrimental effect on our brain. We need to pay particular attention to not letting them take over, working against them with this 3-to-1 ratio: "When you get into a dialogue with somebody to discuss any particular issue, a three-to-one ratio is a relatively good benchmark to think about; you wind up creating the opportunity for a more constructive dialogue and hopefully a better resolution."