Bio

Dr. George Papagiannakis, co-founder and CEO of ORamaVR, is a computer scientist specializing in computer graphics systems, extended reality algorithms, and geometric computational models. He serves as Professor of Computer Graphics in the Computer Science Department of the University of Crete, Greece; as Affiliated Research Fellow at the Human-Computer Interaction Laboratory of the Institute of Computer Science, Foundation for Research and Technology Hellas, Heraklion, Greece, where he leads the Computer Graphics Group; and as Visiting Professor of Computer Science at the University of Geneva. He has more than 100 publications in the field and is a member of the CGS (Board Member), IEEE, Eurographics, ACM, and SIGGRAPH professional societies. In 2011 he was awarded a Marie Curie Intra-European Fellowship for Career Development by the European Commission's Research Executive Agency. He was conference chair of the Computer Graphics International 2016 conference, held in cooperation with CGS, ACM, ACM SIGGRAPH, and Eurographics. In 2017 he published a Springer-Nature book on mixed reality and gamification that has achieved more than 100,000 downloads to date. His pioneering research has attracted several awards as well as significant external R&D and VC funding at FORTH-ICS and ORamaVR.
Abstract

In the realm of surgical training, the cognitive process of constructing an internal representation or "world model" is crucial for understanding complex procedures and making informed decisions under pressure. This talk investigates how the world models generated by Large Language Models (LLMs) and the experiential world models developed by surgical trainees during their training can be effectively aligned and enhanced through spatial computing and Extended Reality (XR) technologies.

At the heart of this exploration is the concept of "world models" as internal representations of the environment and scenarios encountered by both artificial intelligence systems and human learners. For LLMs, these models are constructed from vast datasets, enabling them to predict, generate, and adapt information in response to queries or tasks. In contrast, surgical trainees develop their world models through direct experience, observation, and practice, forming a mental map of surgical procedures, anatomy, and the tactile feedback associated with different surgical techniques.

The presentation delves into how spatial computing and XR technologies can serve as a bridge between these two types of world models. By leveraging the predictive power and content generation capabilities of LLMs within immersive XR environments, we can create highly realistic, interactive simulations that closely mimic the complexities of the surgical field. These simulations allow trainees not only to visualize and interact with detailed anatomical structures but also to experience varied clinical scenarios that enrich their internal world models with a breadth of experiences akin to real-life exposure.