This article is part of a larger study on speaker-hearer allocation of attentional resources in face-to-face interactions. The goal of the paper is twofold. First, we present results concerning the degree of correlation, in computer-mediated conversation, between speakers’ timing and intensity of smiling when humor is present or absent in the conversation. The results were obtained from the analysis of five dyadic interactions between English speakers that were video- and audio-recorded, transcribed, and analyzed to establish a baseline of smiling synchronicity among participants. The analysis shows that conversational partners engaged in humorous conversations not only reciprocate each other’s smiling but also match each other’s smiling intensity. Our data led to the identification of distinct smiling and non-smiling synchronous behaviors that point to a synchronous multimodal relationship between humorous events and smiling intensity for conversational partners. Second, in the last part of the paper, we argue for the need for a multimodal conversational corpus in humor studies and present the corpus that is being collected, annotated, and analyzed at Texas A&M University–Commerce. The corpus consists of humorous interactions among dyads of native speakers of English, Spanish, and Chinese, for which video, audio, and eye-tracking data have been recorded. As part of this section of the paper, we also present preliminary results based on the analysis of one English conversation, along with an exploratory analysis of Chinese data, showing that participants pay greater attention to facial areas involved in smiling when humor is present. This study sheds light on the role of smiling as a discourse marker (Attardo, S., L. Pickering, F. Lomotey & S. Menjo. 2013. Multimodality in conversational humor. Review of Cognitive Linguistics 11(2). 400–414) and therefore as a meaningful device in verbal communication.