Geometrical acoustic simulations: all acoustical calculations were carried out using CATT-Acoustic™ v9.
This project was funded in part by the FUI-BiLi project on binaural listening and the ANR-ECHO project on digital heritage and historic auralizations. From 2013 to 2017, this work was carried out at LIMSI. Since 2017, it has been led by Sorbonne University, at the Institut d'Alembert, in the LAM group.
The project investigates our perception of space through sound and images. The journey started with a concert organized to celebrate the 850th anniversary of the Notre-Dame de Paris cathedral, staging a symphonic orchestra accompanying soloists and choirs in J. Massenet's oratorio “La Vierge” (The Virgin). Though brilliant, the performance was ephemeral, offering this 19th-century piece, enhanced by an edifice of such distinctive acoustics, to only a privileged few.
In conjunction with the concert, the Conservatoire National Supérieur recorded the event, with each instrument section and soloist captured separately. The Ghost Orchestra Project was launched to explore the possibility of recreating this concert for future spectators, providing a spatially accurate rendition and allowing one to navigate within the cathedral and experience different acoustic perspectives.
Combining research efforts in binaural audio from the FUI-BiLi project, digital heritage acoustic recreations from the ANR-ECHO project, and the development of interactive virtual reality environments in the now closed BlenderVR project, this work joined forces across projects to attempt a monumental undertaking.
Overview of the project
The following is a presentation of the project prepared for the FISM 2015 conference. Although the conference was cancelled, we provide the presentation here.
The basic premise, as far as the audio element is concerned, was to take the close-mic'd audio tracks from the different musicians and replay them in a virtual acoustic reconstruction of the cathedral, in order to recreate the proper spatial information regarding instrument positions and directivities and their relation to the room acoustics of the space. This effort required the construction of a geometrical acoustic model, calibrated against measurements also carried out in the cathedral. The goal was to ensure that the sound of an instrument playing at position A and heard by a listener at position B would be perceived exactly as in the original building (reverberation, position, presence, timbre, amplitude, etc.).
The three-dimensional room impulse response is then numerically computed for each instrument and each potential listener position. These responses are then *convolved* with the corresponding audio tracks and appropriately combined to create the Ghost Orchestra performance.
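As an illustration, the convolution step can be sketched in Python with NumPy. This is not the project's actual processing chain: the sample rate, signal lengths, and synthetic decaying-noise impulse response are placeholder assumptions.

```python
import numpy as np

def fft_convolve(dry: np.ndarray, rir: np.ndarray) -> np.ndarray:
    """FFT-based convolution of a close-mic'd (dry) track with a
    simulated room impulse response (RIR), yielding the auralized track."""
    n = len(dry) + len(rir) - 1              # length of the full convolution
    nfft = 1 << (n - 1).bit_length()         # next power of two for the FFT
    wet = np.fft.irfft(np.fft.rfft(dry, nfft) * np.fft.rfft(rir, nfft), nfft)
    return wet[:n]

# Toy example: 1 s of noise as the "instrument", a 2 s decaying-noise RIR
fs = 48000                                   # assumed sample rate
dry = np.random.randn(fs)
rir = np.exp(-np.linspace(0, 8, 2 * fs)) * np.random.randn(2 * fs)
wet = fft_convolve(dry, rir)                 # fs + 2*fs - 1 samples long
```

The per-instrument auralized tracks for a given listener position are then mixed (summed) to form the full orchestra at that position.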
Audio example of the simulated environment
The virtual reconstruction of the instruments in the cathedral during the performance can be compared to one of the recorded tracks at the conductor's position. If the acoustics and balance are perceived as appropriate at this position, we can have confidence that listening at other, unrecorded positions will still provide a realistic virtual reconstruction.
Recorded extract of the concert at the conductor's position:
Virtual reconstruction of the same extract at the conductor's position, close-mic tracks convolved with simulated room acoustics impulse responses:
These audio extracts are monophonic, corresponding to the recording microphone placed above the conductor.
Visual 3D model
To accompany the virtual acoustic reconstruction, a 3D model of Notre-Dame was created (3DS Max / Blender), mirroring the visual appearance of the place.
Then, the various instruments themselves were reproduced in the 3D environment and positioned in the virtual cathedral so that visitors could visualize the different components of the orchestra that now played in front of them.
Integration of the 3DS model into Blender, along with texturing, compositing, lighting, and rendering, was realised by David Poirier-Quinot. One component of interest was to include dynamic elements that responded to the audio track, bringing some "life" to the model. The results were limited by the resolution of the HMD hardware at the time. Further discussion of the visual elements can be found on his webpage.
The 3D virtual reconstruction can be explored interactively with an Oculus Rift Head-Mounted Display (HMD). This real-time rendering was achieved with BlenderVR.
Alternatively, the virtual world can be pre-rendered at a higher resolution as a 360° video. Such rendering provides improved graphical quality and allows presentation of the results via various 360° streaming services that support 3D audio, such as YouTube and Facebook, or off-line with players such as VLC, enabling viewing on portable devices and general VR hardware like Google Cardboard.
Demos / Videos
The immersive experience was adapted to different media, allowing for different degrees of interaction.
Free navigation on a predefined trajectory
Free navigation in the scene, equipped with a head-mounted display + binaural acoustic rendering (3rd-order Ambisonic, decoded over 16 virtual speakers, rendered binaurally).
The following video presents a recording of this live navigation. The user's head orientation is taken into account when updating both the visual and auditory rendering. The speed of the flying carpet can be controlled with a joystick.
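The virtual-loudspeaker rendering described above can be sketched as follows. The decoding matrix and HRIR set are placeholders (assumed to be precomputed elsewhere), and the array shapes are illustrative assumptions rather than the project's actual implementation.

```python
import numpy as np

def ambisonic_to_binaural(ambi, decode, hrirs):
    """Render a 3rd-order Ambisonic stream binaurally via virtual speakers.

    ambi   : (16, n) Ambisonic channels ((order+1)^2 = 16 for order 3)
    decode : (16, 16) decoding matrix: Ambisonic channels -> speaker feeds
    hrirs  : (16, 2, m) left/right HRIR for each virtual-speaker direction
    returns: (2, n + m - 1) binaural signal
    """
    feeds = decode @ ambi                        # virtual-speaker feeds
    n, m = ambi.shape[1], hrirs.shape[2]
    out = np.zeros((2, n + m - 1))
    for s in range(feeds.shape[0]):              # sum the HRIR-filtered feeds
        for ear in range(2):
            out[ear] += np.convolve(feeds[s], hrirs[s, ear])
    return out

# Toy shapes: 100 samples of 16-channel Ambisonics, 8-tap HRIRs
ambi = np.random.randn(16, 100)
decode = np.eye(16) / 16                         # placeholder decoder
hrirs = np.random.randn(16, 2, 8)
binaural = ambisonic_to_binaural(ambi, decode, hrirs)
```

Decoding to a fixed set of virtual speakers, then filtering each feed with a static HRIR pair, keeps the per-frame cost of head tracking low: only the sound field needs to be rotated, not the HRIR filtering.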
Free rotation along a predefined navigation path
Equirectangular 360° video played on a tablet or on YouTube. This off-line 360° video rendering allows for real-time orientation with a suitable viewer, providing a “steerable window” looking into the virtual world. The video is combined with either Ambisonic or decoded Ambisonic virtual-speaker tracks. The 360° audio is then converted to binaural in real time as a function of the window orientation with an appropriate player. Players have been developed in the FUI-BiLi project by partners Orange Labs and Arkamys.
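The orientation-dependent step amounts to rotating the Ambisonic sound field against the viewing direction before binaural conversion. A minimal first-order sketch of a yaw rotation (the project used 3rd order, where rotations require full spherical-harmonic rotation matrices), assuming ACN channel order W, Y, Z, X:

```python
import numpy as np

def rotate_foa_yaw(foa: np.ndarray, yaw_deg: float) -> np.ndarray:
    """Rotate a first-order Ambisonic stream (ACN order W, Y, Z, X)
    about the vertical axis, e.g. to counter the head/window yaw
    before converting to binaural. foa has shape (4, n)."""
    phi = np.radians(yaw_deg)
    w, y, z, x = foa
    x_rot = np.cos(phi) * x - np.sin(phi) * y    # the X/Y dipoles rotate
    y_rot = np.sin(phi) * x + np.cos(phi) * y    # like plane coordinates
    return np.stack([w, y_rot, z, x_rot])

# A source straight ahead (X = W) rotated by 90° ends up on the left (Y = W)
foa = np.array([[1.0], [0.0], [0.0], [1.0]])
rotated = rotate_foa_yaw(foa, 90.0)
```

Because the rotation is a small matrix applied per sample block, it can track the window orientation in real time, with the binaural decoding stage left unchanged.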
B. N. Postma and B. F. Katz, “Perceptive and objective evaluation of calibrated room acoustic simulation auralizations,” J. Acoust. Soc. Am., vol. 140, pp. 4326–4337, Dec. 2016, doi:10.1121/1.4971422.
B. N. Postma and B. F. G. Katz, “Correction method for averaging slowly time-variant room impulse response measurements,” J. Acoust. Soc. Am., vol. 140, pp. EL38–43, July 2016, doi:10.1121/1.4955006.
B. N. Postma and B. F. Katz, “Creation and calibration method of virtual acoustic models for historic auralizations,” Virtual Reality, vol. 19, no. SI: Spatial Sound, pp. 161–180, 2015, doi:10.1007/s10055-015-0275-3.
B. F. Katz, B. Postma, D. Thery, D. Poirier-Quinot, and P. Luizard, “Objective and perceptive evaluations of high-resolution room acoustic simulations and auralizations,” in Euronoise, (Crete), pp. 2107–2114, May 2018, (url).
B. N. Postma, D. Poirier-Quinot, J. Meyer, and B. F. Katz, “Virtual reality performance auralization in a calibrated model of Notre-Dame Cathedral,” in Euroregio, (Porto), pp. 6:1–10, June 2016, (url).
B. N. Postma and B. F. G. Katz, “Acoustics of Notre-Dame Cathedral de Paris,” in Intl. Cong. on Acoustics (ICA), (Buenos Aires), pp. 0269:1–10, Sept. 2016, (url).
D. Poirier-Quinot, B. N. Postma, and B. F. G. Katz, “Augmented auralization : Complimenting auralizations with immersive virtual reality technologies,” in Intl. Sym on Music and Room Acoustics (ISMRA), (La Plata), pp. 14:1–10, Sept. 2016, (url).
B. N. Postma, A. Tallon, and B. F. Katz, “Calibrated auralization simulation of the abbey of Saint-Germain-des-Prés for historical study,” in Intl. Conf. on Auditorium Acoustics, vol. 37, (Paris), pp. 190–197, Institute of Acoustics, Oct. 2015, (url).
B. F. Katz, D. Q. Felinto, D. Touraine, D. Poirier-Quinot, and P. Bourdot, “BlenderVR: Open-source framework for interactive and immersive VR,” in IEEE Virtual Reality (IEEE VR), (Arles), pp. 203–204, Mar. 2015, (url).
J-M. Lyzwa and A. Baskind, “Exemple d'une production en multicanal 5.1 utilisant des techniques mixtes binaurales et transaurales de spatialisation : la Vierge de Jules Massenet (“Example of a 5.1 multichannel production using mixed binaural and transaural spatialization techniques: Jules Massenet's La Vierge”),” presented at the Journées de la Recherche, Conservatoire National Supérieur de Musique et de Danse de Paris, 18–20 Mar 2014. (in French).
In the press
“Have you heard of the Cathedral Notre Dame de Paris’ ‘ghost orchestra’?,” Hindustan Times, June 2017, (url).
S. Sermondadaz, “VIDÉO. Pour recréer Notre-Dame de Paris en réalité virtuelle, le son aussi a son importance (“VIDEO. To recreate Notre-Dame de Paris in virtual reality, sound matters too”),” Sciences et Avenir, 28 June 2017, (url). (in French).
G. Lynch, “You can now experience what Notre Dame looks and sounds like - thanks to VR,” www.techradar.com, June 26 2017, (url).
P. Van Nuffel, “Un concert live à Notre-Dame de Paris reproduit sur base du son 3D et de la VR (“A live concert at Notre-Dame de Paris reproduced using 3D sound and VR”),” datanews.levif.be, 26 June 2017, (url). (in French).
“Improving virtual reality and exploring ear shape effects on 3-D sound,” Science Daily, 26 June 2017, (url).
L. Mondragon, “Listeners Seeing What They Hear: Virtual Reality & 3D Acoustics Integration,” The Science Times, 28 June 2017, (url).
J. Oliveira, “Notre Dame tiene una ‘orquesta fantasma’ con sonido 3D (“Notre Dame has a ‘phantom orchestra’ with 3D sound”),” El País, 5 July 2017, (url). (in Spanish).
A. Brataas, “Seeing with your ears: Novel acoustics project aims to improve virtual reality, explore ear shape effects on 3-D sound,” Phys.org, 25 June 2017, (url). (EurekaAlert).
P. Labbe, “Ghost Orchestra : une prouesse auditive en réalité virtuelle dans Notre Dame de Paris (“Ghost Orchestra: an auditory feat in virtual reality inside Notre Dame de Paris”),” www.realite-virtuelle.com, 3 July 2017, (url). (in French).
J. Edwards, “Signal Processing Supports a New Wave of Audio Research: Spatial and Immersive Audio Mimics Real-World Sound Environments [Special Reports],” IEEE Signal Processing Magazine, vol. 35, pp. 12–15, Mar. 2018, doi:10.1109/MSP.2017.2784881.
C. Rizoud, “Notre-Dame, l’acoustique aussi en question (“Notre-Dame, the acoustics also in question”),” www.forumopera.com, 29 April 2019, (url). (in French).
N. Karel, “Ghost Orchestra: сохранился цифровой отпечаток акустики Нотр-Дама (“Ghost Orchestra: preserved digital imprint of acoustics by Notre Dame”),” Radio France Internationale ru.rfi.fr, 22 April 2019, (url). (in Russian).
B. Boren, “How audio researchers preserved Notre Dame's treasured acoustics before the fire,” Los Angeles Times, 21 April 2019, (url).
“Reconstructing the Acoustics of Notre Dame,” Acoustical Society of America press release, www.newswise.com, 3 May 2019, (url).
Republished as “Computationally Reconstructing the Acoustics of Notre Dame,” in Communications of the ACM, 3 May 2019, (url).
“Recréer l'acoustique de Notre-Dame de Paris (“Recreating the acoustics of Notre-Dame de Paris”),” CNRS press release, insis.cnrs.fr, 29 April 2019, (url). (in French).
Republished on www.sorbonne-universite.fr, 30 April 2019, (url).
J. Ouellette, “Mapping Notre Dame’s unique sound will be a boon to reconstruction efforts,” Ars Technica, 14 May 2019, (url).
B. Katz & M. Pardoen, “Comment les acousticiens peuvent reconstruire le « son » de Notre-Dame (“How acousticians can reconstruct the ‘sound’ of Notre-Dame”),” theconversation.com, 16 May 2019, (url). (in French).
Republished on www.france-catholique.fr, 17 May 2019, (url).
Republished on www.infochretienne.com, 20 May 2019, (url).
Republished on Science & Vie, 25 May 2019, (url).
Highlighted in Le Monde, 24 May 2019, (url).
C. Becker, “Écoutez l’étonnante acoustique de Notre-Dame de Paris en 3D (“Listen to the astonishing acoustics of Notre-Dame de Paris in 3D”),” fr.aleteia.org, 27 Apr 2019, (url). (in French).
S. Gervais, “La recréation de l'acoustique de Notre-Dame de Paris grâce à la réalité virtuelle (“The recreation of the acoustics of Notre-Dame de Paris thanks to virtual reality”),” www.francemusique.fr, 26 Apr 2019, (url). (in French).