This project was carried out in the context of the ANR-ECHO project concerning Digital Heritage and Historic Auralizations. From 2013 to 2017, this work was carried out at LIMSI. Since 2017, it has been led by Sorbonne University, at the Institut d'Alembert, in the LAM group.
The project proposes to create realistic auralizations for applications in historical research in the field of the performing arts. The French ECHO project studies the use of voice in the recent history of theater. It is a multi-disciplinary project which combines the efforts of historians, theater scholars, and acousticians. Within the scope of this project, an audio-visual simulation was created which combines auralizations with visualizations of former configurations of the Théâtre de l'Athénée over a series of renovations, enabling researchers to realistically perceive theater performances in rooms that no longer exist. Simulations include the room, two actors on stage, and an audience. To achieve these simulations, architectural plans from archives were studied, providing details of the different theater configurations, from which the corresponding visual models and geometrical acoustics (GA) room models were created. The resulting simulations allow for 360° audio-visual presentations at various positions in the theater using standard commercial hardware.
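The room acoustic modelling itself was carried out in a calibrated commercial GA package, but the image-source principle underlying such models can be illustrated with a short sketch. The following Python fragment is not the project's code: the room dimensions, source/receiver positions, and absorption value are purely illustrative, and only the direct path plus first-order reflections of a shoebox room are considered.

```python
import numpy as np

C = 343.0    # speed of sound (m/s)
FS = 48000   # sampling rate (Hz)

def first_order_image_sources(src, rcv, room, alpha):
    """Direct path plus the 6 first-order image sources of a shoebox room.

    src, rcv : (3,) source / receiver positions in metres
    room     : (3,) room dimensions (Lx, Ly, Lz) in metres
    alpha    : average wall absorption coefficient (0..1)
    Returns a sparse room impulse response as a numpy array.
    """
    src = np.asarray(src, dtype=float)
    rcv = np.asarray(rcv, dtype=float)
    refl_gain = np.sqrt(1.0 - alpha)            # pressure reflection factor

    paths = [(src, 1.0)]                        # (image position, gain)
    for axis in range(3):
        for wall in (0.0, room[axis]):
            img = src.copy()
            img[axis] = 2.0 * wall - img[axis]  # mirror the source across the wall
            paths.append((img, refl_gain))

    dists = [np.linalg.norm(pos - rcv) for pos, _ in paths]
    ir = np.zeros(int(FS * max(dists) / C) + 1)
    for (pos, gain), d in zip(paths, dists):
        n = int(FS * d / C)                     # propagation delay in samples
        ir[n] += gain / max(d, 1e-3)            # 1/r spherical spreading
    return ir

# Hypothetical stage-to-stalls geometry (metres); values are illustrative only.
ir = first_order_image_sources(src=(5.0, 3.0, 1.7),
                               rcv=(12.0, 4.0, 1.2),
                               room=(20.0, 15.0, 10.0),
                               alpha=0.3)
```

Higher reflection orders, frequency-dependent absorption, scattering, and the actual theatre geometry are what the full GA model adds on top of this principle.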
The Cloud Theatre virtual reconstructions comprise a number of elements, combined to create the final result.
The objective of the Cloud Theatre project is to create a realistic auralization of a theatre play in the Athénée Theatre (Paris). Two actors are recorded (audio and video) performing "Ubu Roi, Act I" in an anechoic environment. The audio recordings are used to create an auralization of the actors, using a 3D calibrated model of the theatre (CATT-Acoustic). The visual recordings, realised with a Kinect v2 (depth and RGB feeds), are used to create 3D point clouds of the actors, projected into the virtual environment. The position and orientation of each actor's head are extracted from the video recordings and used to simulate head position and voice directivity in the auralization. A virtual audience is added to the theatre, simulated with a procedural set of sounds and animations. The final 360° and flat renderings are created using Blender Cycles. The overall simulation is repeated for several spectator positions in the theatre; the resulting material is used to study the impact of listener position on the visual and acoustic perception of the theatre play.
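As a rough sketch of the point-cloud step, the fragment below back-projects a Kinect-style depth frame into a 3D point cloud using a pinhole camera model. The intrinsic parameters and resolution are illustrative values for a Kinect v2-class depth sensor, not the calibrated values used in the project, and the actual pipeline (registration, filtering, Blender import) involves additional steps.

```python
import numpy as np

# Illustrative Kinect v2-class depth intrinsics and resolution; the project's
# calibrated parameters would differ.
FX, FY = 365.0, 365.0       # focal lengths (pixels)
CX, CY = 256.0, 212.0       # principal point (pixels)
W, H = 512, 424             # depth frame resolution

def depth_to_point_cloud(depth_mm, rgb=None):
    """Back-project a depth frame (H x W, millimetres) into an N x 3 point cloud.

    Optionally attaches per-point colours from an RGB image already registered
    to the depth frame (H x W x 3).
    """
    u, v = np.meshgrid(np.arange(W), np.arange(H))   # pixel coordinate grids
    z = depth_mm.astype(np.float32) / 1000.0         # depth in metres
    valid = z > 0                                    # discard missing depth samples

    # Pinhole back-projection: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    points = np.stack([x[valid], y[valid], z[valid]], axis=-1)

    colours = rgb[valid] if rgb is not None else None
    return points, colours

# Example call with a synthetic flat depth frame 2 m from the sensor.
fake_depth = np.full((H, W), 2000, dtype=np.uint16)
pts, _ = depth_to_point_cloud(fake_depth)
```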
Integration of the model into Blender, texturing, compositing, lighting, and rendering were realised by David Poirier-Quinot. Further discussion of the visual elements can be found on his webpage.
The 360° image renderings and the 3D Higher Order Ambisonic audio streams were combined into a publicly accessible format via the Facebook 360 platform. All available videos can be found on the project's Facebook playlist.
Cloud Theatre 2015 configuration - front row
Cloud Theatre original configuration - mid room
Cloud Theatre 2015 configuration - balcony
For any questions, please contact the head of the project: Brian F.G. Katz