Development of music perception in children




Contents

1. Structural development of the human auditory system

2. Enculturation - definition
2.1. Learning to distinguish consonance and dissonance
2.2. Learning rhythmic structure

3. Music and language
3.1. The event related potential technique (ERP)
3.2. Changes in ERP pattern during development
3.3. Are there transfer effects between music and language due to overlapping processing resources?
3.4. Does extensive musical training influence the detection of incongruities in both music and language?

4. References


1. Structural development of the human auditory system

When carrying out research on the development of music perception in children, the anatomical and functional development of the auditory system has to be taken into consideration, as well as neuropsychological aspects.


Sequence of structural development of the human auditory system

Period | Cochlea | Brainstem | Cortex
Embryonic period (1st to 3rd fetal week) | Formation of cochlea and cochlear nerve | Formation of nuclei and pathways | Formation of cortical plate
2nd trimester (14th to 26th fetal week) | Maturation of cochlea and cochlear nerve | Neuronal growth and axonal maturation | Cortical growth
Transition to perinatal (27th to 29th fetal week) | | Onset of myelination | Formation of temporal lobe
Perinatal period | | Maturation of dendrites and myelin | Maturation of marginal layer axons
Transition to childhood (6 months to 1 year) | | | Onset of thalamic axon development
Early childhood (1 to 5 years) | | | Maturation of thalamic axons
Late childhood (6 to 12 years) | | | Maturation of intrinsic axons

Source: Moore J.K., Linthicum Jr. F.H. (2007) - International Journal of Audiology 46, 460–478 [1]


Anatomical and functional studies suggest substantial effects of musical training on brain development. Even at the level of the brainstem, evoked responses to sound are larger and more accurate in adult musicians than in non-musicians. These differences are maintained in auditory cortex for musical tones, with larger evoked responses in musicians than in non-musicians.


2. Enculturation - definition

Musical enculturation can be described as the process by which individuals acquire culture-specific knowledge about the structure of the music they are exposed to through everyday experiences such as listening to the radio, singing and dancing. Just as different cultures have evolved many different languages, there are many different musical systems, each with unique scales, categories and grammatical rules governing pitch and rhythmic structures. However, there are also nearly universal aspects of musical structure that might reflect innate constraints working in concert with widely shared auditory experiences, such as hearing sounds with spectral (pitch) and temporal (rhythm) patterning [2].


2.1. Learning to distinguish consonance and dissonance

Most adults encode and remember melodies in terms of relative pitch (RP), that is, the pitch distances or intervals between the notes of a melody, rather than in terms of individual absolute pitches (AP). Automatic electrophysiological responses to violations of RP structure are seen even in non-musicians [3]. Many studies have shown that infants encode pitch patterns on the basis of RP information, for example by ignoring transpositions but treating violations of a melody’s intervallic structure as novel [4]. It has been discussed controversially whether infants are born with absolute pitch processing abilities and unlearn them during development: infants have been observed to remember a familiar melody while showing no evidence of remembering its pitch level. Taken together, recent evidence best supports the idea that infants and adults alike encode both relative and absolute aspects of music, such as timbre, tempo and pitch, and can use both types of information depending on the context [5][6].

When different pitches sound at the same time, we involuntarily categorize the resulting chord as consonant or dissonant. However, whether chords are felt to be consonant or dissonant differs between cultures, owing to enculturation processes. Across all cultures, pitches whose fundamental frequencies stand in small integer ratios (e.g. octave, 2:1; perfect fifth, 3:2) form consonant intervals and elicit more positive affective responses than pitches whose fundamental frequencies stand in more complex ratios (dissonances, e.g. tritone, 45:32) [7]. Sensitivity to consonance and dissonance might be universal across cultures owing to peripheral mechanisms of the auditory system that develop early in ontogeny. Specifically, many overtones of pitches related by complex ratios are close in frequency (less than a critical bandwidth, about 1/3 of an octave), and the overlap in vibration patterns compromises the resolution of frequency on the basilar membrane, leading to beating and the perception of roughness [1]. Electrophysiological studies reveal that, even when listeners are engaged in another task, the auditory cortex flags unexpected chord sequences, pitch contours and melodic intervals automatically, in musicians and non-musicians alike [8][9][10].

In behavioural experiments, Western adults more readily detect a one-note melodic change that violates either the key or the implied harmony within the key of the melody than a change that preserves key and harmony, reflecting implicit knowledge of Western musical conventions. Eight-month-old infants, in contrast, discriminate all such changes above chance levels and equally well, suggesting no knowledge of key membership or harmony. At age 5, North American children more readily detect violations of key than violations of harmony, but by age 6 or 7 they are sensitive to both key and harmony. Acquisition of more subtle aspects of tonality continues until 9 to 12 years of age. Outlining the development of pitch processing from infancy to adolescence, sensitivity to consonance emerges earliest (and is universal), system-specific knowledge of key membership develops later (scales are found in virtually all cultures but differ in their specific composition), and knowledge of harmony is observed last (it is specific to Western music). In sum, enculturation to pitch structures follows a clear developmental trajectory in which universal aspects are grasped during or before infancy and system-specific aspects are acquired during childhood [2].
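The ratio-based account of consonance sketched above can be made concrete with a small numerical illustration. The following Python snippet is only a rough sketch under stated assumptions (the 440 Hz reference pitch, the use of the first eight harmonics and the 1/3-octave critical-band approximation mentioned in the text are illustrative choices, not parameters from the cited studies): it counts how many harmonics of two simultaneous tones fall close enough together to interact within one critical band, the situation associated with beating and perceived roughness.

# Rough sketch: tones related by complex frequency ratios have more harmonics
# falling within a critical band (approximated here as 1/3 octave) of each
# other, which is associated with beating and perceived roughness.

N_HARMONICS = 8                 # number of harmonics considered per tone (assumption)
F0 = 440.0                      # reference pitch in Hz (assumption, illustration only)
CRITICAL_BAND = 2 ** (1 / 3)    # ~1/3 octave, the approximation used in the text

def harmonics(f0, n=N_HARMONICS):
    """First n harmonics of a tone with fundamental frequency f0."""
    return [f0 * k for k in range(1, n + 1)]

def close_harmonic_pairs(f_low, f_high):
    """Count harmonic pairs that are distinct but lie within one critical band."""
    count = 0
    for a in harmonics(f_low):
        for b in harmonics(f_high):
            lo, hi = min(a, b), max(a, b)
            if lo != hi and hi / lo < CRITICAL_BAND:
                count += 1
    return count

for name, ratio in [("octave 2:1", 2 / 1),
                    ("perfect fifth 3:2", 3 / 2),
                    ("tritone 45:32", 45 / 32)]:
    pairs = close_harmonic_pairs(F0, F0 * ratio)
    print(f"{name:17s}: {pairs:2d} closely spaced harmonic pairs")

On this crude measure the tritone yields more closely spaced harmonic pairs than the fifth, and many more than the octave, mirroring the claim that complex ratios produce more basilar-membrane interaction; a real roughness model would also weight each pair by its frequency separation and amplitude.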


2.2. Learning rhythmic structure

Even more important than pitch structure might be sensitivity to temporal aspects. Rhythm is a basic structure of virtually all social musical behaviours, such as dancing, ensemble performance and communication patterns in some native cultures. The earliest age at which rhythm discrimination can be observed is two months. When listening to music, listeners tend to infer an underlying regular or ‘isochronous’ beat that determines, for example, when to tap or dance. When temporal regularity is compromised or disrupted, adults exhibit difficulties in production (i.e. tapping) and discrimination [11][12]. Like adults, infants as young as 7 months infer an underlying beat, categorizing rhythms on the basis of meter [13], and 9-month-old infants more readily notice small timing discrepancies in strongly metrical than in non-metrical rhythms [14]. Our sense of rhythm might be rooted in biological rhythms, such as walking and the heartbeat. This idea is supported by findings of multisensory interactions between movement and auditory rhythm in infants and adults [15]. The vestibular system, which develops before birth and provides continuous input to infants through bouncing, rocking and walking, also appears to be important for this early-developing interaction, as shown by the finding that rhythmic galvanic stimulation of the vestibular nerve alone biases adults to ‘hear’ an ambiguous pattern as a waltz or a march [16]. Like enculturation to tonal structures, culture-specific metrical knowledge might continue to develop throughout childhood: as children get older, they gain culture-specific knowledge of deeper hierarchical metrical levels (and thus longer time spans) [17].
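The idea of inferring an underlying isochronous beat from an uneven rhythmic surface can be illustrated with a toy computation. The sketch below is not a model from the cited studies; the onset pattern, candidate periods and 30 ms tolerance are made-up illustrative values. It simply scores each candidate beat period by how many of its pulse positions coincide with an event onset, which is one simple way a regular pulse can be recovered from a strongly metrical rhythm.

# Toy beat induction: score candidate isochronous beat periods by how well
# their pulse positions line up with the onsets of a rhythmic pattern.
# Onset times (ms) and parameters are made up for illustration.

onsets_ms = [0, 250, 500, 1000, 1250, 1500, 2000, 2500, 2750, 3000, 3500]

def alignment(onsets, period_ms, tolerance_ms=30):
    """Fraction of pulse positions (multiples of period_ms) that fall
    within tolerance_ms of some onset."""
    pulses = list(range(0, max(onsets) + 1, period_ms))
    hits = sum(any(abs(p - o) <= tolerance_ms for o in onsets) for p in pulses)
    return hits / len(pulses)

for period in (200, 250, 300, 500, 700):
    print(f"candidate beat period {period:3d} ms: alignment {alignment(onsets_ms, period):.2f}")

For this pattern a 500 ms pulse aligns with every position it predicts, whereas incompatible periods score much lower; perturbing the onsets, as in the non-metrical rhythms mentioned above, flattens the scores and makes a single best beat harder to find.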


3. Music and language

As language and music are human universals based on discrete elements organized in hierarchically structured sequences, and as there is evidence suggesting overlapping processing areas for language and music, it seems plausible that interaction effects between the two domains might be observed.


3.1. The event related potential technique (ERP)

An event-related potential (ERP) is any stereotyped electrophysiological response to an internal or external stimulus; more simply, it is any measured brain response that is directly the result of a thought or perception. Event-related brain potentials are a non-invasive measure of brain activity during cognitive processing. The transient electric potential shifts (so-called ERP components) are time-locked to stimulus onset (e.g. the presentation of a word, a sound, or an image). Each component reflects brain activation associated with one or more mental operations. In contrast to behavioural measures such as error rates and response times, ERPs provide simultaneous, multi-dimensional online measures of polarity (negative or positive potentials), amplitude, latency, and scalp distribution. ERPs can therefore be used to distinguish and identify the psychological and neural sub-processes involved in complex cognitive, motor, or perceptual tasks. Moreover, unlike fMRI (even event-related fMRI, which removes the need for blocked presentation of stimulus items), they provide extremely high temporal resolution, in the range of one millisecond.

In actual recording situations, it is difficult to see an ERP after the presentation of a single stimulus; the most robust ERPs are seen after many dozens or hundreds of individual presentations are averaged together. This averaging cancels out noise in the data, allowing only the voltage response that is time-locked to the stimulus to stand out clearly.

Language-related ERP components such as the N400, the LAN (left anterior negativity) and the P600 have proven useful in understanding the processing of language in children and adults, in native and non-native language, and in normal processing as well as in language disorders. The most important components for comparing language and music are the early left anterior negativity (ELAN) and the early right anterior negativity (ERAN). There is evidence that changes in the ELAN are linked to syntax processing in the language domain, whereas the ERAN is evoked by violations of musical regularities; both are generated in the inferior frontolateral cortex and the anterior superior temporal gyrus of the respective hemisphere.
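The averaging step described above can be demonstrated with a short simulation. The snippet below is a minimal sketch under arbitrary assumptions (the 5 microvolt Gaussian-shaped "component" peaking 100 ms after stimulus onset, the 20 microvolt background noise and the listed trial counts are illustrative values, not real EEG parameters): because the background activity is not time-locked to the stimulus, its contribution to the average shrinks roughly with the square root of the number of trials, letting the small stimulus-locked response emerge.

# Sketch of ERP extraction by trial averaging: a small stimulus-locked
# deflection is invisible in a single trial but emerges after averaging,
# since non-time-locked noise averages out roughly as 1/sqrt(n_trials).
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 0.6, 0.002)                            # 0-600 ms epoch, 2 ms steps
erp = 5e-6 * np.exp(-((t - 0.1) ** 2) / (2 * 0.02 ** 2))  # simulated component peaking at 100 ms (assumed)
noise_sd = 20e-6                                          # background EEG, far larger than the ERP

for n_trials in (1, 10, 100, 400):
    trials = erp + rng.normal(0.0, noise_sd, size=(n_trials, t.size))  # simulated single-trial epochs
    average = trials.mean(axis=0)                          # the averaged waveform
    residual_noise = (average - erp).std()                 # noise left after averaging
    print(f"{n_trials:4d} trials: residual noise ~{residual_noise * 1e6:5.2f} uV "
          f"vs. ERP peak {erp.max() * 1e6:.1f} uV")

With a single trial the residual noise is several times the size of the simulated component; after a few hundred trials it drops to a fraction of a microvolt, which is why real studies average dozens to hundreds of presentations.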


3.2. Changes in ERP pattern during development

Unlike in adults, the ERP responses of young infants are dominated by slow positive waves; by 3 to 4 months of age, faster negative components appear in response to unexpected sound features. The N1 and P2 responses to sound that are prominent in adults are so small in children under 4 to 6 years of age as to be difficult to observe. These components increase in amplitude with age, reaching a maximum around 10 to 12 years, and diminish to adult levels by 18 years. This maturational trajectory fits with anatomical data from autopsy studies showing gradual development of neurofilament in the upper cortical layers (II and upper III) between 6 and 12 years, which enables fast, synchronized firing of neurons. Because these layers contain the primary connections to other brain regions and are central to the generation of N1 and P2, the protracted development of inter-cortical projections might underlie the development of adult-like brain responses. Interestingly, musical training seems to accelerate this development: 4- to 6-year-old children taking Suzuki music lessons show larger N1 and P2 responses than children not undergoing musical training. In summary, the effects of musical training on networks in auditory cortex can be seen in early childhood [2].


3.3. Are there transfer effects between music and language due to overlapping processing resources?

There is evidence that the ERAN and the ELAN are, at least partly, generated in the same brain regions. It therefore seems plausible to expect transfer effects between music and language due to shared processing resources. Moreover, it has been observed that the ERAN is larger in adults with formal musical training (musicians) than in those without, indicating that more specific representations of musical regularities lead to heightened musical expectancies. This led Jentschke, Koelsch and Friederici (2005) to three main questions for their study [18]: How are violations of musical and linguistic syntax processed in different age groups? Is there a difference in the ERAN and the ELAN between children with and without musical training, and with or without language impairments? Can a transfer effect be found due to additional musical training, and does language impairment lead to a difference in the neural processing of musical structure?

They investigated processes during auditory sentence comprehension and music perception in children of two different age groups. A violation of harmonic expectancies led to an ERAN, and a violation of linguistic syntax to a later, sustained negativity. Furthermore, differences in these processes were found between children with and without musical training, and between linguistically non-impaired and language-impaired children. The results indicate that musical training facilitates the processing of musical structure. Interestingly, this difference was found as early as age 11, when the children had not played an instrument for more than 4 or 5 years. Despite such a relatively short period of musical training, these children may have acquired specific representations of music-syntactic regularities (e.g. more implicit and explicit knowledge of the theory of harmony underlying Western tonal music, and more specific representations of harmonic relatedness) [19].

A characteristic of the ELAN was a larger amplitude difference over the left hemisphere when comparing musicians with non-musicians. Moreover, a later, sustained negativity was found in both groups, with an enhanced amplitude in the group with musical training. This indicates a positive transfer from the music to the language domain. The finding was expected, since the neural resources underlying the processing of musical and linguistic syntax overlap to some extent, as outlined above. The fact that the negativity in response to the syntactic violation is distributed bilaterally is in accordance with an earlier study, which showed the same pattern of results: a bilateral anterior negativity between 600 and 1500 milliseconds [20]. On the other hand, no ERAN could be found in 5-year-olds with specific language impairment (SLI), whereas an ERAN could be seen in linguistically non-impaired children. The finding that an ERAN is present even at that age is in accordance with an earlier study [21].


3.4. Does extensive musical training influence the detection of incongruities in both music and language?

The idea that extensive musical training can influence processing in cognitive domains other than music has received considerable attention from the educational system and the media. Cyrille Magne, Daniele Schön, and Mireille Besson [22] therefore analyzed behavioural data and recorded event-related brain potentials (ERPs) from 8-year-old children to test the hypothesis that musical training facilitates pitch processing not only in music but also in language. They used a parametric manipulation of pitch so that the final notes or words of musical phrases or sentences were congruous, weakly incongruous, or strongly incongruous. In this way they found behavioural evidence for a common pitch-processing mechanism in language and music perception. By showing qualitative and quantitative differences in the ERPs recorded from musician and non-musician children, they gained new insights into the processes that may underlie positive transfer effects between music and language. The occurrence of an early negative component in response to the weak incongruity in music, in musician children only, may indeed reflect a greater sensitivity to pitch. Such enhanced pitch sensitivity in musician children is also reflected in a larger late positivity to weakly incongruous than to congruous words in language, which was not found in non-musician children. Taken together, these results add to the cognitive-neuroscience literature on the beneficial effects of musical education by showing positive effects of music lessons on linguistic abilities in children. These findings therefore argue in favour of music classes being an intrinsic and important part of educational programs in public schools and in all institutions that aim at improving children’s perceptive and cognitive abilities.
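To make the "parametric manipulation of pitch" concrete, the sketch below shows how a congruous final note can be turned into a weakly or strongly incongruous one by scaling its fundamental frequency. The expected note (C5) and the shift sizes (a quarter of a semitone and a full semitone) are hypothetical placeholders, not the values used by Magne and colleagues.

# Illustrative construction of congruous / weakly incongruous / strongly
# incongruous final notes by shifting the fundamental frequency of the
# phrase-final note.  All concrete values are hypothetical placeholders.

EXPECTED_FINAL_HZ = 523.25   # e.g. C5, the note implied by the melodic context (assumed)

def shifted(f0_hz, semitones):
    """Fundamental frequency shifted by the given number of equal-tempered semitones."""
    return f0_hz * 2 ** (semitones / 12)

conditions = {
    "congruous":            shifted(EXPECTED_FINAL_HZ, 0.0),
    "weakly incongruous":   shifted(EXPECTED_FINAL_HZ, 0.25),  # placeholder shift size
    "strongly incongruous": shifted(EXPECTED_FINAL_HZ, 1.0),   # placeholder shift size
}

for name, f0 in conditions.items():
    print(f"{name:21s}: final-note f0 = {f0:.1f} Hz")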


4. References

1 Moore J.K., Linthicum Jr. F.H. (2007) - The human auditory system: a timeline of development, International Journal of Audiology 46, 460–478

2 Hannon E., Trainor L.J. (2007) - Music acquisition: effects of enculturation and formal training on development, TRENDS in Cognitive Sciences Vol.11 No.11

3 Trainor L.J. et al. (2002) - Automatic and controlled processing of melodic contour and interval information measured by electrical brain activity. J. Cogn. Neurosci. 14, 430–442

4 Trainor L.J. (2005) - Are there critical periods for musical development? Dev. Psychobiol. 46, 262–278

5 Schellenberg E.G., Trehub S.E. (2003) - Good pitch memory is widespread. Psychol. Sci. 14, 262–266

6 Volkova A. et al. (2006) - Infants’ memory for musical performances. Dev. Sci. 9, 583–589

7 Koelsch S. et al. (2006) - Investigating emotion with music: an fMRI study. Hum. Brain Mapp. 27, 239–250

8 Trainor L.J. et al. (2002) - Automatic and controlled processing of melodic contour and interval information measured by electrical brain activity. J. Cogn. Neurosci. 14, 430–442

9 Fujioka T. et al. (2004) - Musical training enhances automatic encoding of melodic contour and interval structure. J. Cogn. Neurosci. 16, 1010–1021

10 Koelsch S., Siebel W.A. (2005) - Towards a neural basis of music perception, Trends Cogn. Sci. 9, 578–584

11 Repp B.H. et al. (2005) - Production and synchronization of uneven rhythms at fast tempi, Music Percept. 23, 61–78

12 Jones M.R. et al. (2002) - Temporal aspects of stimulus-driven attending in dynamic arrays. Psychol. Sci. 13, 313–319

13 Hannon E.E., Johnson S.P. (2005) Infants use meter to categorize rhythms and melodies: implications for musical structure learning. Cognit. Psychol. 50, 354–377

14 Bergeson T.R., Trehub S.E. (2006) - Infants’ perception of rhythmic patterns, Music Percept. 23, 345–360

15 Phillips-Silver J., Trainor L.J. (2007) - Hearing what the body feels: auditory encoding of rhythmic movement, Cognition 105, 533–546


16 Trainor L.J. et al. - The primal role of the vestibular system in determining musical rhythm. Cortex (in press)

17 Drake C. (2003) - Synchronizing with music: intercultural differences, Ann. N. Y. Acad. Sci. 999, 429–437

18 Jentschke S., Koelsch S., Friederici A.D. (2005) - Investigating the relationship of music and language in children, Ann. N.Y. Acad. Sci. 1060, 231–242

19 Bharucha J.J. (1984) - Anchoring effects in music: the resolution of dissonance. Cogn. Psychol. 16: 485–518.

20 Hahne A., Eckstein R., Friederici A.D. (2004) - Brain signatures of syntactic and semantic processes during children’s language development. J. Cogn. Neurosci. 15:1302–1318.

21 Koelsch S., Grossmann T. et al. (2003) - Children processing music: electric brain responses reveal musical competence and gender differences. J. Cogn. Neurosci. 15:683–693

22 Magne C., Schön D., Besson M. (2006) - Musician children detect pitch violations in both music and language better than nonmusician children: behavioral and electrophysiological approaches, Journal of Cognitive Neuroscience 18:2, 199–211