Title: Multimodal Integration of Spatial Information: The Influence of Object-Related Factors and Self-Reported Strategies
Authors: Karimpur, Harun; Hamburger, Kai
Date issued: 2016
Record dates: 2017-05-31; 2022-11-18
URN: http://nbn-resolving.de/urn:nbn:de:hebis:26-opus-129084
URI: https://jlupub.ub.uni-giessen.de/handle/jlupub/9319
DOI: http://dx.doi.org/10.22029/jlupub-8707
Language: English
License: Attribution 4.0 International (CC BY 4.0)
Keywords: multimodality; modality; wayfinding; landmark; congruency
DDC: 150

Abstract: Spatial representations result from the integration of multisensory information. Recent findings suggest that multisensory processing of a scene can be facilitated when it is paired with a semantically congruent auditory signal. This congruency effect has been taken as evidence that audio-visual integration occurs for complex scenes. Since navigating our environment requires the seamless integration of complex sceneries, a fundamental question arises: how is human landmark-based wayfinding affected by multimodality? To address this question, two experiments were conducted in a virtual environment. The first experiment compared wayfinding and landmark recognition performance for unimodal visual and unimodal acoustic landmarks. The second experiment focused on the congruency of multimodal landmark combinations and additionally assessed subjects' self-reported strategies (i.e., whether they focused on direction sequences or on landmarks). We demonstrate (1) the equivalence of acoustic and visual landmarks and (2) a congruency effect for the recognition of landmarks. Additionally, the results indicate that self-reported strategies play a role and remain an under-investigated topic in human landmark-based wayfinding.