
Auditory and brain mechanisms of speech processing

Sound is produced when an object's motion generates waveforms in the surrounding air; these pressure waves travel away from the object at roughly 700 miles per hour[1][2]. The features of sound responsible for its perception are loudness (amplitude), pitch (frequency) and timbre (complexity)[3][1]. In the case of language, the motion is generated by physiological components such as the lungs, larynx and vocal tract, in combination with muscular control of the mouth, resulting in speech production[1]. When a sound stimulus is presented, the pinna of the ear funnels the waveforms into the ear canal, causing the tympanic membrane to vibrate. These vibrations are transmitted to the cochlea through the three bones of the middle ear: the malleus, incus and stapes. The stapes is attached to the membrane behind the oval window on the vestibule, connecting the middle ear with the inner ear. When the stapes presses this membrane inward, the membrane of the round window is pushed outward, allowing the fluid surrounding the organ of Corti to oscillate. Energy in the form of vibration thus travels inside the cochlea and sets the basilar membrane in motion, and the place where this motion peaks depends on the frequency of the sound[3][4]. A low-frequency sound displaces the basilar membrane most strongly towards its far end, the apex, whereas a high-frequency sound produces waves that peak near the oval window, at the base. Accordingly, a high-frequency sound activates hair cells located towards the base, while a low-frequency sound activates hair cells located towards the apex[5]. Auditory stimuli such as speech contain complex mixtures of frequencies, which means that a wide range of hair cells is activated at the same time[3][6][7].
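The frequency-to-place mapping described above is often approximated by the Greenwood function. The following minimal sketch (in Python) illustrates the idea; the parameter values A = 165.4, a = 2.1 and k = 0.88 are the commonly cited estimates for the human cochlea and are an assumption here, not taken from the sources above.

# Minimal sketch of Greenwood's frequency-to-place map for the human cochlea.
# The parameter values are the commonly cited human estimates (an assumption).

def greenwood_frequency(x: float) -> float:
    """Characteristic frequency (Hz) at relative position x on the basilar membrane.

    x = 0.0 is the apex (low frequencies); x = 1.0 is the base (high frequencies).
    """
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * x) - k)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:.2f} -> {greenwood_frequency(x):8.1f} Hz")

Running the sketch gives characteristic frequencies rising from roughly 20 Hz at the apex to about 20 kHz at the base, matching the tonotopic arrangement described above.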

Hair cells in the organ of Corti are stabilized by supporting cells attached to the basilar membrane[8]. Dendrites of the auditory nerve contact the hair cells and receive the electrical signals generated when the bundles of cilia protruding from the top of the cell bodies are deflected. The hair cells are divided into two categories, outer and inner. The roughly 12,000 outer hair cells (OHCs) are arranged in three rows and are innervated by the unmyelinated axons that make up about 5% of the auditory nerve. The remaining 95% of the axons are myelinated and innervate the roughly 3,500 inner hair cells (IHCs), which are located opposite the OHCs. All hair cells are covered by the reticular membrane, which their bundles of cilia penetrate; the cilia of the OHCs are attached to the tectorial membrane[9][3], whereas the cilia of the IHCs do not contact it. Although the OHCs are connected only to unmyelinated axons, they function as an assisting mechanism for the IHCs. When the basilar membrane's motion is produced by a faint sound, the movement must be amplified; otherwise the fluid in the cochlear duct would dampen the sound energy. This amplification depends on the OHCs, which enhance the vibration of the basilar membrane. The IHCs then bend because of the motion and activate receptive neurons that carry the sound signal into the brain. Therefore, when a very soft sound is presented as a stimulus, the OHCs are activated to enhance the displacement of the basilar membrane, making us capable of hearing sounds of very low amplitude, such as whispers. The OHCs thus make humans capable of hearing very low-amplitude sounds, but without the IHCs no sound energy could be transmitted to the brain at all, meaning that loss of the IHCs would cause total deafness[9][7][6].
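The role of the OHCs described above amounts to a level-dependent amplifier: faint basilar-membrane vibrations receive large gain, loud ones much less. The toy sketch below illustrates that compressive behaviour; the maximum gain and the compression knee are arbitrary illustrative values, not physiological measurements.

# Toy sketch of the OHC "cochlear amplifier": large gain for faint input,
# rolling off for loud input. MAX_GAIN_DB and KNEE_DB are assumed values.

MAX_GAIN_DB = 50.0   # assumed peak OHC gain for very faint sounds
KNEE_DB = 40.0       # assumed input level where the gain starts to compress

def ohc_gain_db(input_level_db: float) -> float:
    """Gain (dB) contributed by the OHCs at a given input sound level."""
    if input_level_db <= KNEE_DB:
        return MAX_GAIN_DB
    # Above the knee, the gain rolls off linearly until it reaches zero.
    return max(0.0, MAX_GAIN_DB - (input_level_db - KNEE_DB))

for level in (10, 30, 50, 70, 90):
    print(f"input {level:2d} dB -> effective {level + ohc_gain_db(level):5.1f} dB")

In this sketch a 10 dB whisper is boosted by the full 50 dB, while a 90 dB sound receives no amplification, mirroring why OHC loss mainly impairs the hearing of faint sounds.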

The cilia are responsible for transforming vibrations in the organ of Corti into electrical signals, which are then transmitted to the brain. The motion of the basilar membrane causes the flexion of the cilia, which are arranged on each hair cell from the shortest to the tallest. Each cilium is linked to the adjacent cilium by a thin fiber, the elastic filament or tip link, which connects the top of one cilium to the side of the next. The point where the elastic filament attaches to the cilium is called the insertional plaque, and receptor potentials are triggered there. When the motion of the basilar membrane bends the cilia from the tallest towards the shortest, no vibration is transformed into electrical energy, because the elastic filament is unstretched. When the filament is fully stretched, that is, when the shortest cilium pushes towards the tallest, the rate of action potentials in the cochlear nerve increases. When no movement occurs, the elastic filaments are normally slightly stretched, giving a baseline firing probability of about 10%. Thus, the more the elastic filament is stretched, the more the firing rate of the cochlear nerve increases. The force towards the tallest cilium opens ion channels in the insertional plaque[6][3]. The current that flows into the cilium is carried by K+ ions through channels activated by Ca2+. In general, K+ determines whether an electrical signal is transmitted along the axon of the receptive nerve cell (the action potential), while the entry of Ca2+ ions into the neuron affects the release of neurotransmitters (the postsynaptic potentials)[5][7].
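The relationship just described, between deflection direction and firing rate, can be pictured with a simple logistic sketch: the 10% resting probability comes from the text above, while the logistic shape and the slope value are illustrative assumptions.

import math

# Toy sketch of mechanotransduction: hair-bundle deflection -> firing probability.
# The 10% resting probability is from the text; the logistic shape is assumed.

def firing_probability(deflection: float, slope: float = 8.0) -> float:
    """Probability of cochlear-nerve firing for a given bundle deflection.

    deflection > 0: towards the tallest cilium (tip links stretch, channels open)
    deflection < 0: towards the shortest cilium (tip links slacken, channels close)
    deflection = 0: at rest, returning the ~10% baseline described above
    """
    resting = 0.10
    bias = math.log(resting / (1.0 - resting))  # anchors p(0) at the baseline
    return 1.0 / (1.0 + math.exp(-(slope * deflection + bias)))

for d in (-0.5, -0.1, 0.0, 0.1, 0.5):
    print(f"deflection {d:+.1f} -> p(fire) = {firing_probability(d):.2f}")

Deflection towards the shortest cilium drives the probability towards zero, no deflection yields the 10% baseline, and deflection towards the tallest cilium drives the probability towards saturation, as the paragraph above describes.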

In this way, the cochlear nerve receives electrical impulses generated by the inner hair cells; these are transmitted to the spiral ganglion, where the somata of the cochlear nerve are located. From there, neurons extend their axons and form synapses in the ventral and dorsal cochlear nuclei in the medulla. Neurons from both the dorsal and the ventral cochlear nuclei form synapses with neurons of both the left and the right superior olivary nuclei. Axons from the cochlear nuclei and the superior olivary nuclei ascend towards the midbrain, mostly crossing to the contralateral side, and form a bundle of fibers in each hemisphere. This fiber tract is called the lateral lemniscus, and it terminates at the inferior colliculus in the dorsal midbrain. Amplitude information is carried to the inferior colliculus from the ventral cochlear nucleus, whereas the dorsal cochlear nucleus sends pitch information. Neurons from each inferior colliculus extend axons to both the left and the right medial geniculate nuclei in the thalamus. Finally, from the medial geniculate nuclei, neurons project their axons to the primary auditory cortex, in the superior temporal gyrus[10][5][3][11]. As a result, stimuli presented to the right ear are perceived by both hemispheres, but mainly by the left, and vice versa. The representation of sounds in the primary auditory cortex reflects the pattern of basilar-membrane motion in the cochlear duct: if the frequency of a sound is high, the basilar membrane's displacement peaks near the base, and the sound is represented in the posterior part of the primary auditory cortex; low-frequency sounds, which displace the basilar membrane near the apex, are represented in its anterior part. This spatial mapping of frequency is called tonotopic representation[3][4].
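As a compact summary of the ascending pathway just described, the sketch below encodes the relay stations and the information each carries as a simple ordered list; the station names and roles are taken directly from the text above.

# Summary sketch: stations of the ascending auditory pathway described above.

AUDITORY_PATHWAY = [
    ("inner hair cells", "transduce vibration into electrical impulses"),
    ("spiral ganglion", "somata of the cochlear nerve"),
    ("cochlear nuclei, medulla", "ventral carries amplitude, dorsal carries pitch"),
    ("superior olivary nuclei", "receive input from both cochlear nuclei"),
    ("lateral lemniscus", "mainly contralateral fiber tract to the midbrain"),
    ("inferior colliculus, dorsal midbrain", "termination of the lateral lemniscus"),
    ("medial geniculate nucleus, thalamus", "thalamic relay to auditory cortex"),
    ("primary auditory cortex, superior temporal gyrus", "tonotopic representation"),
]

for station, role in AUDITORY_PATHWAY:
    print(f"{station}: {role}")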

Frequencies produced by spoken stimuli are recognized in the same way, although most of the recognition relies mainly on the primary auditory area of the left hemisphere. Recognition of words occurs first and comprehension next. Specifically, the electrical signals produced by speech sounds are transferred to the auditory cortex, where they are recognized as familiar signals. After being recognized, they are matched to mental representations. This process takes place in the posterior and middle part of the superior temporal lobe, known as Wernicke's area, where words are comprehended[5]. For instance, when we hear a word spoken in an unfamiliar language, the electrical signals are recognized as speech sounds, because of the complex frequencies that speech naturally produces, but they cannot be linked to mental representations; they are not familiar signals to the brain and consequently cannot be comprehended. When we learn a new word in another language, however, Wernicke's area is activated first, in order to recognize the signals produced by that word. Repetition of the word, in order to produce it correctly, follows next; this act takes place in Broca's area. Repeated exposure to the stimulus results in sound recognition, and the word is then mapped to a mental representation back in Wernicke's area[11]. Broca's area, which is located in the posterior part of the frontal lobe, is connected to Wernicke's area by a bundle of fibers called the arcuate fasciculus. Broca's area is responsible for language production rather than language comprehension; however, a lesion in this area also impairs comprehension, and lesions in Wernicke's area likewise affect language production[12][13]. For a word to be produced, the primary motor cortex has to be activated and to transmit electrical signals through a fiber tract, the pyramidal tract, to the spinal cord. In this way muscle movement is controlled and parts of the mouth, such as the lips and tongue, are set in motion. These movements, in combination with the physiological components, send waveforms of air away from the speaker's mouth. Once these waveforms reach a listener's ear, the cycle of speech comprehension and speech production begins again[1].
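The recognition-and-learning loop described in this paragraph can be summarized as a small toy pipeline; the dictionary standing in for stored mental representations is purely an illustrative assumption, not a claim about how the brain stores words.

# Toy sketch of the recognition/comprehension loop described above.
# The dictionary stands in for stored mental representations (an assumption).

LEXICON = {"water": "a stored mental representation of 'water'"}

def hear_word(word: str) -> str:
    """Trace a heard word through the loop sketched in the text."""
    # Primary auditory cortex: the signal is recognized as speech sounds.
    print(f"auditory cortex: '{word}' recognized as speech sounds")
    if word in LEXICON:
        # Familiar signal: Wernicke's area matches it to a representation.
        return f"Wernicke's area: comprehended as {LEXICON[word]}"
    # Unfamiliar word: rehearse its production via Broca's area, then store it.
    print(f"Broca's area: rehearsing production of '{word}'")
    LEXICON[word] = f"a newly learned representation of '{word}'"
    return f"Wernicke's area: '{word}' mapped to a new representation"

print(hear_word("water"))   # familiar word: comprehended directly
print(hear_word("agua"))    # new word: rehearsed, then mapped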

References

  1. ^ a b c d Lieberman, Philip (1977). Speech Physiology and Acoustic Phonetics. New York: Macmillan. ISBN 0-02-370620-1.
  2. ^ Fry, Dennis Butler (1979). The Physics of Speech. Cambridge; New York: Cambridge University Press. ISBN 0-521-22173-0.
  3. ^ a b c d e f g Carlson, Neil R. (2010). Physiology of Behavior. Boston: Allyn & Bacon. ISBN 0-205-66627-2.
  4. ^ a b Kalat, James W. (1998). Biological Psychology. Pacific Grove, California: Brooks/Cole Publishing Company. ISBN 0-534-34893-9.
  5. ^ a b c d Thompson, Richard F. (1985). The Brain: An Introduction to Neuroscience. New York: W. H. Freeman and Company. ISBN 0-7167-1462-0.
  6. ^ a b c Schwander, Martin; Kachar, Bechara; Müller, Ulrich (2010). "The cell biology of hearing". The Journal of Cell Biology. 190 (1): 9–20.
  7. ^ a b c Shepherd, Gordon M. (1998). The Synaptic Organization of the Brain. New York: Oxford University Press. ISBN 0-19-511824-3.
  8. ^ Afifi, Adel K.; Bergman, Ronald A. (1998). Functional Neuroanatomy: Text and Atlas. New York: McGraw-Hill. ISBN 0-07-001589-9.
  9. ^ a b Warren, Richard M. (1999). Auditory Perception. Cambridge; New York: Cambridge University Press. ISBN 978-0-521-86870-9.
  10. ^ FitzGerald, M. J. Turlough; et al. (2007). Clinical Neuroanatomy and Neuroscience. Edinburgh: Elsevier Saunders. ISBN 1-4160-3445-5.
  11. ^ a b Gazzaniga, Michael S.; Ivry, Richard B.; Mangun, George R. (1998). Cognitive Neuroscience: The Biology of the Mind. New York: W. W. Norton. ISBN 0-393-97219-4.
  12. ^ Friederici, Angela D.; Gierhan, Sarah M. E. (2013). "The language network". Current Opinion in Neurobiology. 23 (2): 250–254.
  13. ^ Hagoort, Peter (2014). "Nodes and networks in the neural architecture for language: Broca's region and beyond". Current Opinion in Neurobiology. 28: 136–141.