Speech repetition

From Wikipedia, the free encyclopedia

Children copy with their own mouths the words spoken by the mouths of those around them. This enables them to learn the pronunciation of words not already in their vocabulary.

Speech repetition occurs when one individual repeats the spoken vocalizations made by another. It requires the person making the copy to map the sensory input they hear from the other person's vocal pronunciation onto a similar motor output with their own vocal tract.

Such input–output imitation of speech often occurs independently of speech comprehension, as in speech shadowing, when a person automatically says words heard through earphones, and in the pathological condition of echolalia, in which people reflexively repeat overheard words. This indicates that speech repetition is handled in the brain separately from speech perception: repetition occurs in the dorsal speech processing stream, while perception occurs in the ventral speech processing stream. Through this route, repetitions are often incorporated without awareness into spontaneous novel sentences, either immediately or after a delay following storage in phonological memory.

In humans, the ability to map heard input vocalizations onto motor output is highly developed, because this copying ability plays a critical role in a child's rapid expansion of their spoken vocabulary. In older children and adults it remains important, as it enables the continued learning of novel words and names and of additional languages. Such repetition is also necessary for the propagation of language from generation to generation. It has further been suggested that the phonetic units out of which speech is made have been shaped by the process of vocabulary expansion and transmission, since children preferentially copy words composed of more easily imitated elementary units.

Properties

Automatic

Vocal imitation happens quickly: words can be repeated within 250–300 milliseconds,[1] both in typical individuals (during speech shadowing)[2] and during echolalia in individuals with intellectual disability.[3] The imitation of speech syllables possibly happens even more quickly: people begin imitating the second phone in the syllable [ao] earlier than they can identify it (out of the set [ao], [aæ] and [ai]).[4] Indeed, "...simply executing a shift to [o] upon detection of a second vowel in [ao] takes very little longer than does interpreting and executing it as a shadowed response".[4] Neurobiologically this suggests "...that the early phases of speech analysis yield information which is directly convertible to information required for speech production".[4] Vocal repetition can be done immediately, as in speech shadowing and echolalia, or after the pattern of pronunciation has been stored in short-term or long-term memory. It automatically uses both auditory and, where available, visual information about how a word is produced.[5][6]

The automatic nature of speech repetition was noted by the late-nineteenth-century neurologist Carl Wernicke, who observed that "The primary speech movements, enacted before the development of consciousness, are reflexive and mimicking in nature."[7]

Independent of speech

Vocal imitation arises in development before both speech comprehension and babbling: 18-week-old infants spontaneously copy vocal expressions, provided the accompanying voice matches.[8] Imitation of vowels has been found in infants as young as 12 weeks.[9] It is independent of native language, language skills, word comprehension and a speaker's intelligence. Many autistic people and some people with intellectual disability engage in echolalia of overheard words (often their only vocal interaction with others) without understanding what they echo.[10][11][12][13] Reflexive, uncontrolled echoing of others' words and sentences occurs in roughly half of those with Gilles de la Tourette syndrome.[14] The ability to repeat words and nonwords without comprehension also occurs in mixed transcortical aphasia, where it is linked to the sparing of the short-term phonological store.[15]

The ability to repeat and imitate speech sounds occurs separately from that of normal speech. Speech shadowing provides evidence of a 'privileged' input/output speech loop that is distinct from the other components of the speech system.[16] Neurocognitive research likewise finds evidence of a direct (nonlexical) link between phonological analysis input and motor programming output.[17][18][19]

Effector independent

Speech sounds can be imitatively mapped into vocal articulations despite differences in vocal tract size and shape due to gender, age and individual anatomical variability. Such variability is extensive, making the input–output mapping of speech more complex than a simple mapping of vocal tract movements. The shape of the mouth varies widely: dentists recognize three basic shapes of palate (trapezoid, ovoid and triangular), six types of malocclusion between the two jaws, nine ways teeth can relate to the dental arch, and a wide range of maxillary and mandibular deformities.[20] Vocal sound can also vary due to dental injury and dental caries. Other factors that do not impede the sensory-motor mapping needed for vocal imitation include gross oral deformations such as harelips, cleft palates or amputations of the tongue tip, as well as pipe smoking, pencil biting and teeth clenching (as in ventriloquism). Paranasal sinuses vary between individuals 20-fold in volume, and differ in the presence and degree of their asymmetry.[21][22]

Diverse linguistic vocalizations

Vocal imitation potentially occurs across a diverse range of phonetic units and types of vocalization. The world's languages use consonantal phones that differ across thirteen imitable places of articulation in the vocal tract (from the lips to the glottis). These phones can potentially be pronounced with eleven imitable manners of articulation (from nasals to lateral clicks). Speech can be copied in regard to its social accent, intonation, pitch and individuality (as with entertainment impersonators). Speech can be articulated in ways that diverge considerably in speed, timbre, pitch, loudness and emotion. Speech further exists in different forms such as song, verse, scream and whisper. Intelligible speech can be produced with pragmatic intonation and in regional dialects and foreign accents. These aspects are readily copied: people asked to repeat speech-like words imitate not only phones but also other aspects of pronunciation, such as fundamental frequency,[23] schwa-syllable expression,[23] voice spectra and lip kinematics,[24] voice onset times,[25] and regional accent.[26]

Language acquisition

Vocabulary expansion

In 1874 Carl Wernicke proposed[27] that the ability to imitate speech plays a key role in language acquisition. This is now a widely researched issue in child development.[28][29][30][31][32] A study of 17,000 one- and two-word utterances made by six children between 18 and 25 months of age found that, depending upon the particular infant, between 5% and 45% of their words might be mimicked.[28] These figures are minima, since they concern only immediately heard words; many words that seem spontaneous are in fact delayed imitations of words heard days or weeks previously.[29] At 13 months, children who imitate new words (but not ones they already know) show a greater increase in noun vocabulary at four months and non-noun vocabulary at eight months.[30] A major predictor of vocabulary increase in children at 20 months,[33] at 24 months,[34] and between 4 and 8 years of age is their skill in repeating nonword phone sequences (a measure of mimicry and storage).[31][32] This is also the case for children with Down syndrome.[35] The effect is larger even than that of age: in a study of 222 two-year-old children with spoken vocabularies ranging between 3 and 601 words, the ability to repeat nonwords accounted for 24% of the variance, compared with 15% for age and 6% for gender (girls better than boys).[34]
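The "percent of variance accounted for" figures above come from regression analyses. As a rough illustration only (the data, variable names and coefficients below are entirely synthetic inventions, not the study's), the unique contribution of each predictor can be estimated as the drop in R² when that predictor is left out of the full model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 222  # sample size matching the study; the data themselves are synthetic

# Hypothetical standardized predictors: nonword repetition score, age, gender
nonword = rng.normal(size=n)
age = rng.normal(size=n)
gender = rng.integers(0, 2, size=n).astype(float)
# Invented generative model: nonword repetition given the largest weight
vocab = 0.6 * nonword + 0.45 * age + 0.3 * gender + rng.normal(size=n)

def r_squared(preds, y):
    """R^2 of an ordinary least-squares fit with intercept."""
    X = np.column_stack([np.ones(len(y))] + list(preds))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

full = r_squared([nonword, age, gender], vocab)
# Unique variance for each predictor: drop in R^2 when it is removed
for name, others in [("nonword", [age, gender]),
                     ("age", [nonword, gender]),
                     ("gender", [nonword, age])]:
    print(name, round(full - r_squared(others, vocab), 3))
```

With these synthetic coefficients the nonword repetition score accounts for the largest unique share of variance, mirroring the ordering (nonword repetition > age > gender) reported in the study.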

Uses of imitation beyond vocabulary expansion

Imitation provides the basis for making longer sentences than children could otherwise spontaneously make on their own.[36] Children analyze the linguistic rules, pronunciation patterns and conversational pragmatics of speech by making monologues (often in crib talk) in which they repeat and manipulate, in word play, phrases and sentences previously overheard.[37] Many proto-conversations involve children (and parents) repeating what the other has said in order to sustain social and linguistic interaction. It has been suggested that the conversion of speech sound into motor responses helps aid the vocal "alignment of interactions" by "coordinating the rhythm and melody of their speech".[38] Repetition enables immigrant monolingual children to learn a second language by allowing them to take part in 'conversations'.[39] Imitation-related processes aid the storage of overheard words by putting them into speech-based short- and long-term memory.[40]

Language learning

The ability to repeat nonwords predicts the ability to learn second-language vocabulary.[41] Relatedly, adult polyglots are better at repeating nonword vocalizations than nonpolyglots, though the two groups are otherwise similar in general intelligence, visuo-spatial short-term memory and paired-associate learning ability.[42] Language delay, in contrast, is linked to impairments in vocal imitation.[43]

Speech repetition and phones

Electrical stimulation research on the human brain finds that 81% of the areas showing disruption of phone identification are also areas in which the imitation of oral movements is disrupted, and vice versa.[44] Among brain injuries in the speech areas, there is a 0.9 correlation between those causing impairments to the copying of oral movements and those impairing phone production and perception.[45]

Mechanism

Spoken words are sequences of motor movements organized around vocal tract gesture motor targets.[46] Vocalization is therefore copied in terms of the motor goals that organize it, rather than the exact movements with which it is produced. These vocal motor goals are auditory. According to James Abbs,[47] 'For speech motor actions, the individual articulatory movements would not appear to be controlled with regard to three-dimensional spatial targets, but rather with regard to their contribution to complex vocal tract goals such as resonance properties (e.g., shape, degree of constriction) and/or aerodynamically significant variables'. Speech sounds also have duplicable higher-order characteristics, such as the rates and shapes of modulations and of frequency shifts.[48] Such complex auditory goals (which often, though not always, link to internal vocal gestures) are detectable from the speech sound they create.

Neurology

Dorsal speech processing stream function

Two cortical processing streams exist: a ventral one, which maps sound onto meaning, and a dorsal one, which maps sound onto motor representations. The dorsal stream projects from the posterior Sylvian fissure at the temporoparietal junction onto frontal motor areas, and is not normally involved in speech perception.[49] Carl Wernicke identified the left posterior superior temporal sulcus (a cerebral cortex region sometimes called Wernicke's area) as a centre for the sound "images" of speech and its syllables, connected through the arcuate fasciculus to a part of the inferior frontal gyrus (sometimes called Broca's area) responsible for their articulation.[7] This pathway is now broadly identified as the dorsal speech pathway, one of the two pathways (together with the ventral pathway) that process speech.[50] The posterior superior temporal gyrus is specialized for the transient representation of the phonetic sequences used for vocal repetition.[51] Part of the auditory cortex can also represent aspects of speech such as its consonantal features.[52]

Mirror neurons

Mirror neurons have been identified that process both the perception and the production of motor movements. They do this not in terms of exact motor performance but by inferring the intended motor goals around which a movement is organized.[53] Mirror neurons that both perceive and produce the motor movements of speech have been identified.[54] According to the motor theory of speech imitation, such speech mirror neurons in infants have been selected for motor goals with vocal tract gestures that are easy to imitate, and this has shaped the nature of the phonetic units out of which spoken words are constructed.[55] Unlike the motor theory of speech perception, the motor theory of speech imitation does not link mirror neurons with speech perception.[55]

Evolution and language

Human language is a vocabulary-based form of communication that, unlike that of other animals, employs tens of thousands of lexical items and names. This requires that young humans new to language be able to quickly learn both the pronunciations and the uses of many thousands of words. If children could not repeat speech without problems, human language could not exist.[55] This makes the evolution of the capacity for speech repetition a critical innovation needed for the origin of speech.[55] The motor theory of speech imitation argues, moreover, that this need for speech to be imitable, rather than the demands of speech perception or production, underlies the evolved nature of the vowel and consonant units of phonetics.[55]

Sign language

Words in sign languages, unlike those in spoken ones, are made not of sequential units but of spatial configurations of subword unit arrangements.[56] These words, like spoken ones, are learnt by imitation. Indeed, rare cases of compulsive sign-language echolalia exist in otherwise language-deficient deaf autistic individuals born into signing families.[56] Neurobiologically, sign and vocal speech use the same cortical areas linked with imitation, with neural areas used for vocal speech, including the auditory cortex, being reused in sign language.[57]

Nonhuman animals

Birds

Birds learn their songs from those made by other birds. Several species show highly developed repetition abilities: the Sri Lankan Greater Racket-tailed Drongo (Dicrurus paradiseus) copies the calls of predators and the alarm signals of other birds,[58] and Albert's Lyrebird (Menura alberti) can accurately imitate the Satin Bowerbird (Ptilonorhynchus violaceus).[59]

Research upon avian vocal motor neurons finds that birds perceive their song as a series of articulatory gestures, as in humans.[60] Birds that can imitate humans, such as the Indian hill mynah (Gracula religiosa), imitate human speech by mimicking the various speech formants, created by changing the shape of the human vocal tract, with different vibration frequencies of their internal tympaniform membrane.[61] Indian hill mynahs also imitate such phonetic characteristics as voicing, fundamental frequencies, formant transitions, nasalization and timing, though their vocal movements are made in a different way from those of the human vocal apparatus.[61]

Nonhuman mammals

Apes

Apes taught language show an ability to imitate language signs: the chimpanzee Washoe, for example, learned with her arms a vocabulary of 250 American Sign Language gestures. However, such human-trained apes show no ability to imitate human speech vocalizations.[68]

Footnotes

  1. ^ In comparison, it takes 567 to 680 milliseconds to name a picture, 175 milliseconds of which is devoted to "conception" of the word to be said. See Indefrey P, Levelt WJ. (2004). The spatial and temporal signatures of word production components. Cognition. 92(1-2):101-44. doi:10.1016/j.cognition.2002.06.001 PMID 15037128
  2. ^ Marslen-Wilson W. (1973). Linguistic structure and speech shadowing at very short latencies. Nature, 244, 522-523. PubMed
  3. ^ Fay WH. Coleman RO. (1977). A human sound transducer/reproducer: temporal capabilities of a profoundly echolalic child. Brain and Language, 4, 396-402. PubMed
  4. ^ a b c Porter RJ. Lubker JF. (1980). Rapid reproduction of vowel-vowel sequences, Evidence for a fast and direct acoustic-motoric linkage in speech. Journal of Speech and Hearing Research, 23, 593-602. PubMed
  5. ^ Gentilucci M, Cattaneo L. (2005). Automatic audiovisual integration in speech perception. Exp Brain Res. 167:66-75. PubMed
  6. ^ Gentilucci M, Bernardis P. (2007). Imitation during phoneme production. Neuropsychologia. 45:608-15. PubMed
  7. ^ a b Wernicke K. The aphasia symptom-complex. 1874. Breslau, Cohn and Weigert. Translated in: Eling P, editor. Reader in the history of aphasia. Vol. 4. Amsterdam: John Benjamins; 1994. p. 69–89. ISBN 978-9027218933
  8. ^ Kuhl PK. Meltzoff AN. (1982). The bimodal perception of speech in infancy. Science, 218, 1138-1141. PubMed
  9. ^ Kuhl PK. Meltzoff AN. (1996). Infant vocalizations in response to speech: Vocal imitation and developmental change. Journal of the Acoustical Society of America, 100, 2425-2438. PubMed
  10. ^ Roberts JM. (1989). Echolalia and comprehension in autistic children. J Autism Dev Disord. Jun;19(2):271-81. PubMed
  11. ^ Schneider DE. (1938). The clinical syndromes of echolalia, echopraxia, grasping and sucking. Journal of Nervous and Mental Disease, 88, 18-35, 200-216. journal link
  12. ^ Schuler AL. (1979). Echolalia, Issues and clinical applications. Journal of Speech and Hearing Disorders, 44, 411-434. PubMed
  13. ^ Stengel E. (1947). A clinical and psychological study of echo-reactions. Journal of Mental Science, 93, 598-612. doi:10.1192/bjp.93.392.598
  14. ^ Lees AJ, Robertson M, Trimble MR, Murray NM. (1984). A clinical study of Gilles de la Tourette syndrome in the United Kingdom. J Neurol Neurosurg Psychiatry. 47:1-8. PubMed
  15. ^ Trojano L, Fragassi NA, Postiglione A, Grossi D. (1988). Mixed transcortical aphasia. On relative sparing of phonological short-term store in a case. Neuropsychologia, 26(4):633-8. PubMed
  16. ^ McLeod P. Posner MI. (1984). Privileged loops from percept to act. In H. Bouma D. Bouwhuis, (Eds), Attention and performance X (pp. 55-66). Hillsdale, NJ, Erlbaum. ISBN 978-0863770050
  17. ^ Coslett HB. Roeltgen DP. Rothi LG. Heilman KM. (1987). Transcortical sensory aphasia, Evidence for subtypes. Brain and Language, 32, 362-378. PubMed
  18. ^ McCarthy R. Warrington EK. (1984). A two-route model of speech production, Evidence from aphasia. Brain, 107, 463-485. PubMed
  19. ^ McCarthy RA, Warrington EK. (2001). Repeating without semantics: surface dysphasia? Neurocase.;7:77-87. PubMed
  20. ^ Bloomer HH. (1971). Speech defects associated with dental malocclusions and related abnormalities. In L. E. (Eds), Handbook of speech pathology and audiology (pp. 715-766), New York, Appleton Century. ISBN 978-0133817645
  21. ^ Williams RJ. (1967). You are extra-ordinary. New York, Random House. pp. 26-27. OCLC 156187572
  22. ^ Vocal traits also vary moreover when people get upper respiratory tract infections as the shape and size of sinus cavities is further changed with the swelling of mucous membranes.
  23. ^ a b Kappes J, Baumgaertner A, Peschke C, Ziegler W. (2009). Unintended imitation in nonword repetition. Brain Lang. 111(3):140-51. doi:10.1016/j.bandl.2009.08.008 PMID 19811813
  24. ^ Gentilucci M, Bernardis P. (2007). Imitation during phoneme production. Neuropsychologia. 1;45(3):608-15. PMID 16698051
  25. ^ Shockley K, Sabadini L, Fowler CA. (2004). Imitation in shadowing words. Percept Psychophys. 66(3):422-9. PMID 15283067
  26. ^ Delvaux V, Soquet A. (2007). The influence of ambient speech on adult speech productions through unintentional imitation. Phonetica. 64(2-3):145-73. PMID 17914281
  27. ^ Wernicke K. (1874). The aphasia symptom-complex. Breslau, Cohn and Weigert. Translated in: Eling P, editor. (1994). Reader in the history of aphasia. Vol. 4. Amsterdam: John Benjamins. p. 69-89: "The major tasks of the child in speech acquisition is mimicry of the spoken word". p. 76
  28. ^ a b Bloom L. Hood L. Lichtbown P. (1974). Imitation in language, If, when, and why. Cognitive Psychology 6, 380-420. doi:10.1016/0010-0285(74)90018-8
  29. ^ a b Miller GA. (1977). Spontaneous apprentices: Children and language. New York, Seabury Press. ISBN 978-0816493302
  30. ^ a b Masur EF. (1995). Infants' early verbal imitation and their later lexical development. Merrill-Palmer Quarterly, 41, 286-306. OCLC 89395784
  31. ^ a b Gathercole SE. Baddeley AD. (1989). Evaluation of the role of phonological STM in the development of vocabulary in children, A longitudinal study. Journal of Memory and Language, 28, 200-213. cat.inist.fr
  32. ^ a b Gathercole SE. (2006). Non word repetition and word learning: The nature of the relationship. Applied Psycholinguistics 27: 513-543. doi:10.1017/S0142716406060383
  33. ^ Hoff E, Core C, Bridges K. (2008). Non-word repetition assesses phonological memory and is related to vocabulary development in 20- to 24-month-olds. J Child Lang. 35:903-16. PubMed
  34. ^ a b Stokes SF, Klee T. (2009). Factors that influence vocabulary development in two-year-old children. J Child Psychol Psychiatry. 50(4):498-505. PMID 19017366
  35. ^ Laws G, Gunn D. (2004). Phonological memory as a predictor of language comprehension in Down syndrome: a five-year follow-up study. J Child Psychol Psychiatry. 45:326-37. PubMed
  36. ^ Speidel GE. Herreshoff MJ. (1989). Imitation and the construction of long utterances. In G. E. Speidel & K. E. Nelson, (Eds), The many faces of imitation in language learning (pp. 181-197). New York, Springer-Verlag. ISBN 978-0387968858
  37. ^ Kuczaj SA. (1983). Crib speech and language practice. New York, Springer-Verlag. ISBN 978-0387908601
  38. ^ Scott SK, McGettigan C, Eisner F. (2009). A little more conversation, a little less action--candidate roles for the motor cortex in speech perception. Nat Rev Neurosci. 10:295-302. PubMed
  39. ^ Fillmore LW. (1979). Individual differences in second language acquisition. In C. J. Fillmore, D. Kempler & W. S-Y. Wang, (Eds), Individual differences in language ability and language behavior (pp. 203-228). New York, Academic Press. OCLC 4983571
  40. ^ Gathercole SE. (1995). Is nonword repetition a test of phonological memory or long-term knowledge? It all depends on the nonwords. Mem Cognit. 23:83-94. PubMed
  41. ^ Cheng H. (1996). Nonword span as a unique predictor of second-language vocabulary learning. Developmental Psychology, 32, 867-873. OCLC 193920646
  42. ^ Papagno C, Vallar G. (1995). Verbal short-term memory and vocabulary learning in polyglots. Q J Exp Psychol A. 48:98-107. PubMed
  43. ^ Bishop DV. North T. Donlan C. (1996). Nonword repetition as a behavioral marker for inherited language impairment, Evidence from a twin study. Journal of Child Psychology and Psychiatry 37, 391-403. PubMed
  44. ^ Ojemann GA. (1983). Brain organization for language from the perspective of electrical stimulation mapping. Behavioral and Brain Sciences, 6, 189-230. OCLC 271058178
  45. ^ Kimura D. Watson N. (1989). The relation between oral movement control and speech. Brain and Language, 37, 565-590. PubMed
  46. ^ Shaffer LH. (1984). Motor programming in language production. In H. Bouma & D. G. Bouwhuis, (Eds), Attention and performance, X. (pp. 17-41). London, Erlbaum. ISBN 978-0863770050
  47. ^ Abbs JH. (1986). Invariance and variability in speech production, A distinction between linguistic intent and its neuromotor implementation. In J. S. Perkell, & D. H. Klatt, (Eds), Invariance and variability in speech processes (pp. 202-219). Hillsdale, NJ, Erlbaum. ISBN 978-0898595451
  48. ^ Porter RJ. (1987). What is the relation between speech production and speech perception? In: Allport A, MacKay D G, Prinz W G, Scheerer E, eds. Language Perception and Production. London: Academic Press: 85-106. ISBN 978-0120527502
  49. ^ Hickok G, Poeppel D. (2004). Dorsal and ventral streams: a framework for understanding aspects of the functional anatomy of language. Cognition. 92:67-99. PubMed
  50. ^ Okada K, Hickok G. (2006). Left posterior auditory-related cortices participate both in speech perception and speech production. Brain Lang. 98:112-7. PubMed
  51. ^ Wise RJ, Scott SK, Blank SC, Mummery CJ, Murphy K, Warburton EA. (2001) Separate neural subsystems within 'Wernicke's area'. Brain. 124:83-95. PubMed
  52. ^ Obleser J, Scott SK, Eulitz C. (2006). Now you hear it, now you don't: transient traces of consonants and their nonspeech analogues in the human brain. Cereb Cortex. 16:1069-76. PubMed
  53. ^ Umiltà MA, Kohler E, Gallese V, Fogassi L, Fadiga L, Keysers C, Rizzolatti G. (2001). I know what you are doing: a neurophysiological study. Neuron. 31(1):155-65. PMID 11498058
  54. ^ Hickok G. (2009). The role of mirror neurons in speech and language processing. Brain Lang. PMID 19948355
  55. ^ a b c d e Skoyles JR. (1998). Speech phones are a replication code. Med Hypotheses. 50(2):167-73. PubMed
  56. ^ a b Poizner H. Klima ES. Bellugi U. (1987). What the hands reveal about the brain. MIT Press. ISBN 978-0262660662
  57. ^ Nishimura H, Hashikawa K, Doi K, Iwaki T, Watanabe Y, Kusuoka H, Nishimura T, Kubo T. (1999). Sign language 'heard' in the auditory cortex. Nature. 397(6715):116. PubMed
  58. ^ Goodale E, Kotagama SW. (2006). Context-dependent vocal mimicry in a passerine bird. Proc Biol Sci. 273(1588):875-80. PubMed
  59. ^ Putland DA, Nicholls JA, Noad MJ, Goldizen AW. (2006). Imitating the neighbours: vocal dialect matching in a mimic-model system. Biol Lett. 2(3):367-70. PubMed
  60. ^ Williams H, Nottebohm F. (1985). Auditory responses in avian vocal motor neurons: a motor theory for song perception in birds. Science. 229(4710):279-82. PubMed
  61. ^ a b Klatt DH, Stefanski RA. (1974). How does a mynah bird imitate human speech? J Acoust Soc Am. 55(4):822-32. PubMed
  62. ^ Reiss D, McCowan B. (1993). Spontaneous vocal mimicry and production by bottlenose dolphins (Tursiops truncatus): evidence for vocal learning. J Comp Psychol. 107:301-12. PubMed
  63. ^ Foote AD, Griffin RM, Howitt D, Larsson L, Miller PJ, Hoelzel AR. (2006). Killer whales are capable of vocal learning. Biol Lett. 2(4):509-12. PubMed
  64. ^ Ralls K, Fiorelli P, Gish S, (1985). Vocalizations and vocal mimicry in captive harbor seals, Phoca vitulina. Can. J. Zool. 63(5): 1050-1056 doi:10.1139/CJZ-63-5-1050
  65. ^ Poole JH, Tyack PL, Stoeger-Horwath AS, Watwood S. (2005). Animal behaviour: elephants are capable of vocal learning. Nature. 434(7032):455-6. PubMed
  66. ^ Esser KH. (1994). Audio-vocal learning in a non-human mammal: the lesser spear-nosed bat Phyllostomus discolor. Neuroreport. 5(14):1718-20. PubMed
  67. ^ Wich SA, Swartz KB, Hardus ME, Lameira AR, Stromberg E, Shumaker RW. (2009). A case of spontaneous acquisition of a human sound by an orangutan. Primates. 50(1):56-64. PubMed
  68. ^ Hayes C. (1951). The ape in our house. Harper, New York. OCLC 1579444
