In the Multisensory Experience Lab, we work on virtual reality and multisensory experiences, exploring the combination of different input and output modalities in interactive applications. We are interested both in the development of novel hardware and software technologies and in the evaluation of user experience. We apply our technologies to health, rehabilitation, training, learning and entertainment.
We are particularly interested in research topics including sonic interaction design for multimodal environments, the simulation of walking experiences, sound rendering and spatialisation, haptic interfaces, cinematic VR and the evaluation of user experience in multimodal environments.
The current and future work performed in the ME-Lab is centred on the use of immersive multisensory technologies (e.g., virtual and augmented reality) and falls into three broad categories:
- Basic research: We aim to improve immersive technology and understand its users (e.g., perception, cognition and affect)
- Applied research: We aim to assist and empower specific user groups by means of immersive technology
- Art and culture: We aim to explore new forms of artistic expression and preserve cultural heritage using immersive technology
The lab is primarily used by researchers and students from Medialogy and Sound and Music Computing, but can be made available for external partners and students from other departments after agreement with the lab manager.
Equipment and facilities
- An anechoic room
- A main room with a 64-channel surround sound system
- Several head-mounted displays (VIVE, Oculus Rift)
- A 16-camera motion capture system by NaturalPoint
A.C. Meyers Vænge 15, Building A, room 2.0.041
Professor Stefania Serafin, firstname.lastname@example.org, tel. +45 2916 7920