Vowel lip shapes in a 1919 lip reading manual

A viseme is any of several speech sounds that look the same on the face when produced, for example when lip reading.[1]

Visemes and phonemes do not share a one-to-one correspondence. Often several phonemes correspond to a single viseme, because they look the same on the face when produced: /k, ɡ, ŋ/ form one such group, as do /t, d, n, l/ and /p, b, m/. Thus words such as pet, bell, and men are difficult for lip-readers to distinguish, as all look alike. On one account, visemes offer (phonetic) information about place of articulation, while manner of articulation requires auditory input.[2]
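The many-to-one relationship above can be sketched as a simple lookup table. The viseme class names below are illustrative labels, not a standard inventory; the groupings are the confusion groups mentioned in the text, written in plain ASCII in place of IPA symbols:

```python
# Illustrative viseme classes: each groups phonemes that look
# alike on the face (class names are hypothetical labels).
VISEME_CLASSES = {
    "V_bilabial": ["p", "b", "m"],       # /p, b, m/
    "V_alveolar": ["t", "d", "n", "l"],  # /t, d, n, l/
    "V_velar":    ["k", "g", "ng"],      # /k, g, ng/
}

# Invert the grouping into a phoneme -> viseme lookup table.
PHONEME_TO_VISEME = {
    phoneme: viseme
    for viseme, phonemes in VISEME_CLASSES.items()
    for phoneme in phonemes
}

def visemes(phonemes):
    """Map a phoneme sequence to its viseme sequence.

    Phonemes outside the table pass through unchanged.
    """
    return [PHONEME_TO_VISEME.get(p, p) for p in phonemes]

# "pet" and "bet" start with different phonemes but yield the
# same viseme sequence, so they look identical to a lip-reader:
print(visemes(["p", "e", "t"]))  # ['V_bilabial', 'e', 'V_alveolar']
print(visemes(["b", "e", "t"]))  # ['V_bilabial', 'e', 'V_alveolar']
```

Because the mapping is many-to-one, it cannot be inverted: recovering phonemes from visemes requires extra information, which is why lip reading alone leaves these words ambiguous.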

In natural speech, however, the visual "signature" of a given gesture varies in timing and duration in ways that cannot be captured by simply concatenating still images of each mouth pattern in sequence.[3] Conversely, some sounds that are hard to distinguish acoustically are clearly distinguished by the face. For example, in spoken English /l/ and /r/ can often sound quite similar (especially in clusters, such as 'grass' vs. 'glass'), yet the visual information can disambiguate them. Some linguists have argued that speech is best understood as bimodal (aural and visual), and that comprehension can be compromised if either of these two domains is absent.[4]

Viseme ambiguity can be exploited for humorous effect, as in the phrase "elephant juice", which when lip-read appears identical to "I love you".

Applications for the study of visemes include speech processing, speech recognition, and computer facial animation.

References

  1. ^ Fisher, Cletus G. (1 December 1968). "Confusions Among Visually Perceived Consonants". Journal of Speech and Hearing Research. 11 (4): 796–804. doi:10.1044/jshr.1104.796.
  2. ^ Summerfield, Quentin (29 January 1992). "Lipreading and audio-visual speech perception". Philosophical Transactions of the Royal Society B: Biological Sciences. 335 (1273): 71–78. doi:10.1098/rstb.1992.0009. eISSN 1471-2970. ISSN 0962-8436. PMID 1348140.
  3. ^ Calvert, Gemma A.; Campbell, Ruth (1 January 2003). "Reading Speech from Still and Moving Faces: The Neural Substrates of Visible Speech". Journal of Cognitive Neuroscience. 15 (1): 57–70. doi:10.1162/089892903321107828. PMID 12590843.
  4. ^ McGurk, Harry; MacDonald, John (23 December 1976). "Hearing lips and seeing voices". Nature. 264: 746–748. doi:10.1038/264746a0.

Further reading

Chen, Tsuhan; Rao, R. R. (31 May 1998). "Audio-visual integration in multimodal communication". Proceedings of the IEEE. 86 (5). IEEE: 837–852. doi:10.1109/5.664274. eISSN 1558-2256. ISSN 0018-9219.

Chen, Tsuhan (31 January 2001). "Audiovisual speech processing". IEEE Signal Processing Magazine. 18 (1). IEEE: 9–21. doi:10.1109/79.911195. eISSN 1558-0792. ISSN 1053-5888.

Lucey, Patrick; Martin, Terrence; Sridharan, Sridha (8–10 December 2004). Confusability of Phonemes Grouped According to their Viseme Classes in Noisy Environments (PDF). 10th Australian International Conference on Speech Science & Technology. Sydney: Macquarie University. pp. 265–270. Archived from the original (PDF) on 5 July 2017.
