https://doi.org/10.1140/epjs/s11734-025-01596-x
Review
From traditional algorithms to artificial intelligence: a review of the development of sensory substitution sonification methods
Institute of Industrial Ecology UB RAS, S. Kovalevskoy Str., 20, 620990, Ekaterinburg, Russia
Received: 12 February 2025
Accepted: 14 March 2025
Published online: 7 April 2025
This review investigates trends in the development of image sonification methods applied in visual-to-auditory sensory substitution systems. The authors consider both traditional visual-to-auditory sensory substitution sonification methods, such as The vOICe and EyeMusic, and modern approaches based on artificial intelligence (AI). The papers for the review were collected from the Google Scholar and Lens search databases using carefully composed search queries based on selected keywords. As a result of the review, the authors propose a classification of traditional sonification methods according to the set of visual-to-auditory mappings they use: (1) column-by-column sonification, (2) sonification of the entire image into one complex auditory signal, and (3) 3D sonification, which uses depth as the main visual input. AI-based papers were classified into algorithms that use AI for input data preprocessing and algorithms that use AI for output data postprocessing in visual-to-auditory sensory substitution systems. The authors describe the existing relationships between traditional visual-to-auditory sensory substitution sonification methods and those incorporating AI. In conclusion, the advantages and disadvantages of existing solutions are formulated; in particular, it is shown how both traditional and up-to-date solutions attempt to solve the problem of sensory–cognitive overload experienced by users of visual-to-auditory sensory substitution systems.
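To make the first category concrete, the sketch below illustrates the column-by-column mapping popularized by The vOICe: the image is scanned left to right, each pixel row is assigned a sine frequency (higher rows yield higher pitch), and pixel brightness controls amplitude. This is a minimal, hedged illustration of the general mapping principle, not a reproduction of any specific system; the parameter choices (frequency range, scan rate, sample rate) are assumptions for demonstration only.

```python
import numpy as np

def sonify_column_by_column(image, duration_per_col=0.05, sr=22050,
                            f_low=500.0, f_high=5000.0):
    """Column-by-column sonification sketch (vOICe-style mapping).

    Each row of the image is mapped to one sine frequency (top row =
    highest pitch) and each pixel's brightness to that sine's amplitude.
    Columns are rendered left to right as successive audio segments.
    """
    n_rows, n_cols = image.shape
    # One frequency per row, spaced exponentially from f_high (top) to f_low.
    freqs = f_high * (f_low / f_high) ** (np.arange(n_rows) / max(n_rows - 1, 1))
    n_samp = int(duration_per_col * sr)
    t = np.arange(n_samp) / sr
    segments = []
    for col in range(n_cols):
        amps = image[:, col].astype(float)  # brightness -> amplitude
        tone = (amps[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        peak = np.abs(tone).max()
        segments.append(tone / peak if peak > 0 else tone)  # normalize per column
    return np.concatenate(segments)

# Tiny example: a bright diagonal line produces a descending pitch sweep.
img = np.eye(8)
signal = sonify_column_by_column(img)
```

The per-column normalization and exponential pitch ladder are common design choices in such mappings, since logarithmic frequency spacing better matches human pitch perception.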
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
© The Author(s), under exclusive licence to EDP Sciences, Springer-Verlag GmbH Germany, part of Springer Nature 2025