
Harvey, C., Debattista, K., Bashford-Rogers, T., Chalmers, A., 2017.

Multi-Modal Perception for Selective Rendering

Output Type: Journal article
Publication: Computer Graphics Forum
ISSN: 0167-7055
Volume/Issue: 36 (1)
Pagination: pp. 172-183

A major challenge in generating high-fidelity virtual environments (VEs) is providing realism at interactive rates. High-fidelity simulation of light and sound is still unachievable in real time because such physical accuracy is very computationally demanding. Only recently has visual perception been used in high-fidelity rendering to improve performance through a series of novel exploitations: parts of the scene that the viewer is not currently attending to are rendered at a much lower quality without the difference being perceived. This paper investigates the effect that spatialized directional sound has on a user's visual attention towards rendered images. These perceptual effects are exploited in selective rendering pipelines via multi-modal maps. The multi-modal maps are tested through psychophysical experiments, using a series of fixed-cost rendering functions, to examine their applicability to selective rendering algorithms, and are found to perform significantly better than image saliency maps naively applied to multi-modal VEs.
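
For intuition, the sketch below shows one plausible way a multi-modal map could be formed and used to drive fixed-cost selective rendering. The Gaussian audio-attention term, the linear blend weight, and the proportional per-pixel sample allocation are illustrative assumptions for this sketch, not the formulation used in the paper.

```python
import numpy as np

def multi_modal_map(saliency, sound_px, sound_py, sigma=60.0, audio_weight=0.5):
    """Combine an image saliency map with an audio attention term.

    `saliency` is an HxW array in [0, 1]; (sound_px, sound_py) is the
    image-space position of the spatialized sound source. The Gaussian
    falloff and the linear blend are assumptions made for illustration.
    """
    h, w = saliency.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Attention pulled towards the apparent direction of the sound source.
    audio = np.exp(-((xs - sound_px) ** 2 + (ys - sound_py) ** 2)
                   / (2.0 * sigma ** 2))
    combined = (1.0 - audio_weight) * saliency + audio_weight * audio
    return combined / combined.max()

def allocate_samples(mm_map, total_budget):
    """Distribute a fixed ray budget across pixels in proportion to the map.

    The rounding means the total spent is only approximately `total_budget`;
    a real fixed-cost renderer would redistribute the remainder.
    """
    weights = mm_map / mm_map.sum()
    return np.maximum(1, np.round(weights * total_budget)).astype(int)

# Example: a 256x256 saliency map with a sound source at the image centre.
sal = np.random.rand(256, 256)
mm = multi_modal_map(sal, sound_px=128, sound_py=128)
spp = allocate_samples(mm, total_budget=256 * 256 * 4)  # samples per pixel
```

Under this scheme, pixels near the sound source and in visually salient regions receive more samples, while the overall rendering cost stays close to a fixed budget; the psychophysical experiments in the paper evaluate whether viewers perceive the resulting quality differences.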