
Journal of Intelligent Systems

Editor-in-Chief: Fleyeh, Hasan

4 Issues per year

CiteScore 2016: 0.39

SCImago Journal Rank (SJR) 2016: 0.170
Source Normalized Impact per Paper (SNIP) 2016: 0.206

Volume 22, Issue 4 (Dec 2013)


Passing an Enhanced Turing Test – Interacting with Lifelike Computer Representations of Specific Individuals

Avelino J. Gonzalez
  • Corresponding author
  • Intelligent Systems Laboratory, University of Central Florida, Orlando, FL, USA
  • Electrical Engineering and Computer Science, University of Central Florida, PO Box 162362, 4000 Central Florida Boulevard, HEC 346, Orlando, FL 32816-2362, USA
Jason Leigh, Ronald F. DeMara, Andrew Johnson, Steven Jones, Sangyoon Lee, Victor Hung, Luc Renambot, Carlos Leon-Barth, Maxine Brown, Miguel Elvir, James Hollister, Steven Kobosko
Published Online: 2013-05-27 | DOI: https://doi.org/10.1515/jisys-2013-0016


This article describes research to build an embodied conversational agent (ECA) as an interface to a question-and-answer (Q/A) system about a National Science Foundation (NSF) program. We call this ECA the LifeLike Avatar; it can interact with its users in spoken natural language to answer both general and specific questions about particular topics. In an idealized case, the LifeLike Avatar could conceivably provide a level of interaction such that users would not be certain whether they were talking to the actual person via video teleconference. This could be considered a (vastly) extended version of the seminal Turing test. Although passing such a test is still far off, our work moves the science in that direction. The Uncanny Valley notwithstanding, applications of such lifelike interfaces could include those where specific instructors or caregivers are represented as stand-ins for the actual person in situations where personal representation is important. Areas that might benefit from these lifelike ECAs include health-care support for elderly or disabled patients in extended home care, education and training, and knowledge preservation. A more personal application would be for family members to posthumously preserve elements of a loved one's persona. We apply this approach to a Q/A system for knowledge preservation and dissemination, where the specific individual who held this knowledge was about to retire from the US National Science Foundation. The system is described in detail, and evaluations were performed to determine how well it was perceived by users.

Keywords: Embodied conversational agents; chatbots; animated pedagogical agents; dialogue management; automated question-and-answer systems



About the article

Corresponding author: Avelino J. Gonzalez, Electrical Engineering and Computer Science, University of Central Florida, PO Box 162362, 4000 Central Florida Boulevard, HEC 346, Orlando, FL 32816-2362, USA, Phone: +1-407-823-5027, Fax: +1-407-823-5835

Received: 2013-03-21

Published Online: 2013-05-27

Published in Print: 2013-12-01

Citation Information: Journal of Intelligent Systems, ISSN (Online) 2191-026X, ISSN (Print) 0334-1860, DOI: https://doi.org/10.1515/jisys-2013-0016.


©2013 by Walter de Gruyter, Berlin/Boston

Citing Articles


Avelino J. Gonzalez, James R. Hollister, Ronald F. DeMara, Jason Leigh, Brandan Lanman, Sang-Yoon Lee, Shane Parker, Christopher Walls, Jeanne Parker, Josiah Wong, Clayton Barham, and Bryan Wilder
International Journal of Artificial Intelligence in Education, 2017, Volume 27, Number 2, Page 353
