Gesture communication, like prosody and paralinguistic voice features, attracts attention when there is too little of it, too much of it, or when it does not seem to fit the words or the situation. The present study follows the principle that gesture resembles certain aspects of speech, particularly prosody and parts of the lexicon. The description of visual gesture articulation is therefore treated as a conservative extension of descriptions of vocal speech gesture articulation. Well-tried models of speech forms and functions are deployed, together with accounts from gesture studies ranging from psychology to robotics. Evidence is drawn from video data of story-telling in Ega, an African language, and in German, and the adequacy of descriptive and computational models of the forms and functions of speech is discussed, with a proposal for the formal modelling of speech-like timing of gesture articulators by means of Time Types in the Linear-Feature-Timing-Realtime (LFTR) model. Finally, an integrative model for combining visual and vocal gesture articulations into a comprehensive functional model of multimodal communication is proposed: the Rank Interpretation Model (RIM).
© School of English, Adam Mickiewicz University, Poznań, Poland, 2011