We are proud to present the following keynote speakers at CBMI 2024.

Dr. Hannes Högni Vilhjálmsson

Being Multimodal: What Building Virtual Humans Has Taught Us about Multimodality

Abstract: Intelligent Virtual Agents (IVAs) are autonomous virtual humans that are meant to exhibit human-like traits when interacting with the world around them. When such agents are imbued with sufficient social skills, communicating with them face-to-face could feel just like communicating with real humans. The allure has been that such agents would revolutionize any domain of human-computer interaction where rich social interaction with an automated system could take things to the next level, such as tutoring systems, digital home assistants, personal trainers, online sales agents and even systems providing remote care for the elderly. However, replicating human face-to-face communication skills has proved a massive theoretical and technical challenge, not least because it involves the seamless and spontaneous coordination of multiple natural modalities, including spoken words, intonation, gesture, facial expressions, posture and gaze. What to many seemed to be superfluous body motion turned out to be a tightly woven fabric of multimodal signals, evolved as an effective system of communication between humans since the very dawn of their social existence. We are, it turns out, multimodal beings down to our core. In this talk, I will start by taking us to the origins of the field of Embodied Conversational Agents (ECAs), a sub-field of Intelligent Virtual Agents that deals specifically with providing agents with face-to-face communication skills. I will review our attempts to capture, understand and analyze the multimodal nature of human communication, and how we have built and evaluated systems that engage in and support such communication. While I use communication as a particular case study in multimodality, I will explore how some of the underlying principles may have wider relevance to working with multimodal and multimedia content, and to the way we envision our data-driven future.


Speaker bio: Dr. Hannes Högni Vilhjálmsson is a Professor of Computer Science at Reykjavik University, where he leads the Socially Expressive Computing group at the Center for Analysis and Design of Intelligent Agents (CADIA), of which he was the director from 2013 to 2016. He has been researching the automatic generation of social and linguistic nonverbal behavior in autonomous agents and online avatars for nearly 30 years. His focus has been on making embodied communication in virtual environments both effective and intuitive, targeting primarily applications in training, education, healthcare and entertainment. Dr. Vilhjálmsson chaired Reykjavik University's Research Council from 2016 to 2019, and is a member of a number of academic steering and organizing committees, as well as industrial advisory and directorial boards. Prior to joining Reykjavik University in 2006, Dr. Vilhjálmsson was the technical director on the Tactical Language and Culture Training project at the University of Southern California, which used social AI and advanced language technology to teach foreign languages and culturally appropriate behavior, earning the project DARPA's Technical Achievement Award. Alongside his academic career, Dr. Vilhjálmsson has co-founded several companies that take advantage of virtual experiences, including Alelo Inc., which builds serious games for immersive language learning; MindGames, which released the first BCI mind-training games for the iPhone; and Envalys, which uses VR to assess the psychological impact of planned urban environments on prospective inhabitants before construction. He received his Ph.D. in Media Arts and Sciences from the MIT Media Lab in 2003.