SIVE book – Official launch 2022

The “Sonic Interactions in Virtual Environments” book

Michele Geronazzo and Stefania Serafin eds.


March 12th, 2022, starting at 5:30 PM CET, virtually on Zoom.

Free registration for attendance here


Immersive audio technologies have the potential to transform the way we interact within virtual environments (VEs) and their applications. Users can navigate immersive content in Virtual Reality (VR) with six degrees of freedom in an egocentric reference frame. When auditory feedback is provided in an ecologically valid, interactive, and multisensory experience, a perceptually plausible scheme for developing 3D sonic interactions becomes possible while remaining efficient in terms of computational power, memory, and latency.

This book tackles the design of 3D spatial interactions from an audio-centered, audio-first perspective, providing the fundamental notions related to the creation and evaluation of immersive sonic experiences:

  • Immersive audio concerns the computational aspects of the acoustical-space properties of VR technologies.
  • Sonic interaction refers to human-computer interplay through auditory feedback in VE.
  • VR systems naturally support multimodal integration, impacting different application domains.

These are the key elements for drawing user attention and enhancing the sensation of place in VEs.

The mission of this book is to shape an emerging new field of study at the intersection of sonic interaction design and immersive media, embracing an archipelago of existing research spread across different audio communities. It all began in 2014 with the IEEE Virtual Reality workshop series Sonic Interactions in Virtual Environments (SIVE). SIVE concerns the study and exploitation of sound as one of the principal channels conveying information, meaning, and aesthetic and emotional qualities in immersive and interactive contexts. Contributors and editors include interdisciplinary experts from the fields of computer science, engineering, acoustics, psychology, design, humanities, and beyond.

The core motivation of this initiative is to increase awareness, among VR communities, researchers, and practitioners, of the importance of sonic elements when designing immersive environments. The book will feature state-of-the-art research on real-time auralization, sonic interaction design in VR, quality of experience in multimodal scenarios, and applications.

Program

5:30 – 5:35 Welcome

5:35 – 5:45 Introduction to the SIVE book project

Michele Geronazzo and Stefania Serafin, Ch. 1 – Sonic Interactions in Virtual Environments: the Egocentric Audio Perspective of the Digital Twin

5:45 – 6:15 Session 1 – Interactive and Immersive Audio

  • Federico Avanzini, Ch. 2 – Procedural Modeling of Interactive Sound Sources in Virtual Reality
  • Nikunj Raghuvanshi, Ch. 3 – Interactive and Immersive Auralization
  • Lorenzo Picinali, Ch. 4 – System-to-User and User-to-System Adaptations in Binaural Audio
  • Fabian Brinkmann, Ch. 5 – Audio Quality Assessment For Virtual Reality
  • Sarvesh R. Agrawal, Ch. 11 – Immersion in Audiovisual Experiences

6:15 – 6:45 Discussion Panel 1

6:45 – 6:50 Break

6:50 – 7:20 Session 2 – Sonic Experiences

Introduction by Stefania Serafin, Ch. 10 – Audio in Multisensory Interactions: from Experiments to Experiences

  • Cumhur Erkut, Ch. 7 – Embodied and Sonic Interactions in Virtual Environments: Tactics and Exemplars
  • Federico Fontana, Ch. 12 – Augmenting Sonic Experiences through Haptic Feedback
  • Liang Men, Ch. 8 – Supporting Sonic Interaction in Creative, Shared Virtual Environments
  • Victor Zappi, Dario Mazzanti, and Florent Berthaut, Ch. 13 – From the Lab to the Stage: Practical Considerations on Designing Performances with Immersive Virtual Musical Instruments

7:20 – 7:40 Discussion Panel 2

7:40 – 7:45 Closing

Presenting authors: short biographies

Federico Avanzini is Full Professor at the University of Milano. He received a Ph.D. degree in Computer Engineering in 2002 from the University of Padova, where he worked until 2017. His main research interests are in Sound and Music Computing (SMC), specifically sound synthesis and processing, non-speech sound in human-computer interfaces, and multimodal interaction. He has been principal investigator in national and international research projects, has authored about 200 publications in peer-reviewed journals and conferences, and has chaired and served on several program and editorial committees. He was Associate Editor for the journal Acta Acustica (2014-2021) and is a member of the Editorial Board of Milano University Press. He is Conference Coordinator in the International SMC Board and President of the Italian Music Informatics Association.

Nikunj Raghuvanshi is Senior Principal Researcher at Microsoft Research’s Redmond lab. His interests are in the area of interactive computer simulation of physical phenomena, with applications in spatial audio, computer graphics, virtual reality, and gaming. He has contributed novel techniques that are in wide use in the industry today, with over fifty papers and patents. In particular, his work on interactive sound propagation over the last decade (Project Triton) has been successfully employed in many major game titles and VR applications. Nikunj did his undergraduate degree at IIT Kanpur, India, and initiated interactive sound simulation research during his PhD studies at UNC Chapel Hill; that code was later acquired by Microsoft.

Lorenzo Picinali. I am a Reader in Audio Experience Design and I lead the Audio Experience Design team (https://www.axdesign.co.uk/) at Imperial College London. In the past years I have worked in Italy, France, and the UK on projects related to 3D binaural sound rendering, spatial hearing, interactive applications for visually impaired individuals, hearing aid technologies, audio and haptic interaction and, more in general, acoustical virtual and augmented reality. More information about my work can be found here: https://www.imperial.ac.uk/people/l.picinali/research.html

Fabian Brinkmann received an M.A. degree in communication sciences and technical acoustics in 2011 and a Dr. rer. nat. degree in 2019 from the Technical University of Berlin. He is a senior researcher in the Audio Communication Group at the Technical University of Berlin, where he focuses on the fields of signal processing and evaluation approaches for spatial audio. He is an anti-fascist with a weakness for unhealthy food and a questionable sense of humor.

Sarvesh R. Agrawal was born and raised in Mumbai, India. He holds an M.S. in architectural acoustics from Rensselaer Polytechnic Institute (RPI) and a B.S. in audio production with a minor in entertainment technology from Middle Tennessee State University (MTSU). An early-stage Marie Curie fellow in the RealVision ITN from 2018 to 2021, he was a research fellow at Bang & Olufsen (B&O) and affiliated with the Technical University of Denmark (DTU). Driven by the desire to bridge the gap between technology and business, Sarvesh pivoted to global product management at B&O, where he manages their Beolab line of loudspeakers. Psychoacoustics, perceptual evaluation of sound, and sensory analysis are his primary research interests.

Cumhur Erkut (M.Sc. 1997, D.Sc. 2002) is an Associate Professor of sonic and embodied interaction at Aalborg University Copenhagen. He currently focuses on interactive, model-based, neural sound and motion synthesis. He received a PhD degree in acoustics and audio signal processing from Helsinki University of Technology, Finland, with minor studies in Information Systems (Machine Learning). Dr. Erkut is an Associate Editor in Frontiers in Audio Signal Processing and serves on the steering committee of the International Conference on Movement Computing (MOCO).

Federico Fontana is currently an Associate Professor in the Department of Mathematics, Computer Science and Physics, University of Udine, Italy, teaching Auditory & Tactile Interaction and Computer Architectures. In 2001, he was Visiting Scholar at the Laboratory of Acoustics and Audio Signal Processing, Helsinki University of Technology, Espoo, Finland. In summer 2015 he visited the Institute for Computer Music and Sound Technology, Zurich University of the Arts, Zurich, Switzerland. His current interests are in interactive sound processing methods and in the design and evaluation of audio-haptic musical interfaces. Professor Fontana coordinated the EU project 222107 NIW under the FP7 ICT-2007.8.0 FET-Open call from 2008 to 2011. From 2017 to 2021, he served as Associate Editor of the IEEE/ACM Transactions on Audio, Speech, and Language Processing. Since 2021 he has served as Associate Editor of IEEE Signal Processing Letters.

Liang Men is currently a Lecturer in the School of Computer Science and Mathematics, Liverpool John Moores University. Previously, he completed an EPSRC- and AHRC-funded PhD and MSc at Queen Mary University of London. His research focuses on exploring novel user experiences, especially with the application of immersive technology. For example, one of his VR projects, LeMo, allows people to make music in room-scale VR with bare-hand interaction. More about him and his projects can be found at: https://sites.google.com/view/liangmen

Florent Berthaut is an Assistant Professor at the University of Lille, France, and a researcher in the MINT team of CRIStAL. He obtained his PhD in Computer Science from the University of Bordeaux, in the SCRIME/LaBRI and the INRIA Potioc team. He then conducted a two-year project at the University of Bristol with a Marie Curie fellowship. His research focuses on building connections between the fields of 3D User Interfaces and New Interfaces for Musical Expression. In particular, he has been exploring 3D interaction techniques adapted to musical interaction and mixed-reality displays for augmenting digital instruments on stage.

Victor Zappi is an Assistant Professor of Music Technology at Northeastern University. Being both an engineer and a musician, he focuses on the design and the use of new interfaces for musical expression. How can we use today’s most advanced technologies to build novel musical instruments? In what ways can these instruments comply with and engage the physical and cognitive abilities of performers as well as audiences? And what new forms of musical training and practice are required to master them? Victor’s research interests span virtual and augmented reality, physical modeling synthesis, music perception and cognition, and music pedagogy.

Dario Mazzanti is a technologist and researcher, passionate about the application of technology to creative and expressive contexts. With experience in virtual and augmented reality, music making, and teleoperation, Dario deeply believes in collaboration between professionals and enthusiasts with different backgrounds, seeing in interaction technologies a means to convey expression and to help make the creative experience free, accessible, stimulating, and direct. Dario is currently a Senior Technician in the Advanced Robotics research line of Istituto Italiano di Tecnologia, developing tools for virtual reality teleoperation and telepresence.

Editors: biographies

Michele Geronazzo, Ph.D., is Associate Professor at the University of Padova – Dept. of Management and Engineering, and part of the coordination unit of the EU-H2020 project SONICOM at Imperial College London. He received his M.S. degree in Computer Engineering (2009) and his Ph.D. degree in Information & Communication Technologies (2014) from the University of Padova. Between 2014 and 2021, he worked as an Assistant Professor in Digital Media at the University of Udine and as a postdoctoral researcher at Imperial College London, Aalborg University, and the University of Verona in the fields of neurosciences and simulations of complex human-machine systems. His main research interests involve binaural spatial audio modeling and synthesis, virtual & augmented reality, and sound in human-computer interaction. He is an IEEE Senior Member and has been part of the organizing committee of the IEEE VR Workshop on Sonic Interactions for Virtual Environments since 2015 (chair of the 2018 and 2020 editions). Since September 2019, Michele Geronazzo has served as an Editorial Board member for Frontiers in Virtual Reality, and he was a guest editor for Wireless Communications and Mobile Computing (John Wiley & Sons and Hindawi publishers, 2019). He is a co-recipient of six best paper/poster awards and co-author of more than seventy scientific publications. In 2015, his Ph.D. thesis was honored by the Acoustic Society of Italy (AIA) with the “G. Sacerdote” award.

Stefania Serafin is Professor of Sonic Interaction Design at Aalborg University in Copenhagen and the leader of the Multi-sensory Experience Lab together with Rolf Nordahl. She was previously appointed Associate Professor (2006-2013) and Assistant Professor (2003-2006) at the same university. She has been a visiting researcher at the University of Cambridge and KTH in Stockholm (2003) and a visiting professor at the University of Virginia (2002). Since 2014 she has been the President of the Sound and Music Computing association, and since 2018 Project Leader of the Nordic Sound and Music Computing network supported by Nordforsk. She has been part of the organizing committee of the IEEE VR Workshop on Sonic Interactions for Virtual Environments since the first edition. She is also the coordinator of the Sound and Music Computing Master at Aalborg University. Stefania received her PhD, entitled “The sound of friction: computer models, playability and musical applications”, from Stanford University in 2004, supervised by Professor Julius Smith III. She is co-author of more than 300 papers in the fields of sound and music computing, sound for virtual and augmented reality, sonic interaction design, and new interfaces for musical expression.