Title: Seeing through transparent layers
Authors: Dövencioglu, Dicle N.; van Doorn, Andrea; Koenderink, Jan; Doerschner, Katja
Date issued: 2018
Repository record dates: 2022-11-18; 2020-06-03
URN: http://nbn-resolving.de/urn:nbn:de:hebis:26-opus-151894
Handle: https://jlupub.ub.uni-giessen.de/handle/jlupub/9521
DOI: http://dx.doi.org/10.22029/jlupub-8909
Language: en
License: Attribution 4.0 International (CC BY 4.0)
DDC: 150 (Psychology)

Abstract:
The human visual system is remarkably good at decomposing local and global deformations in the flow of visual information into different perceptual layers, a critical ability for daily tasks such as driving through rain or fog, or catching that evasive trout. In these scenarios, changes in the visual input may be due to a deforming object, to a transparent medium such as structured glass or water, or to a combination of the two. How does the visual system use image deformations to make sense of layering due to transparent materials? We used eidolons to investigate equivalence classes for perceptually similar transparent layers. We created a stimulus space of perceptual equivalents of a fiducial scene by systematically varying the local disarray parameters reach and grain. This disarray in eidolon space leads to distinct impressions of transparency: high reach and grain values vividly resemble water, whereas smaller grain values appear diffuse, like structured glass. We asked observers to adjust the image deformations so that the objects in the scene looked as if they were seen (a) under water, (b) behind haze, or (c) behind structured glass. Observers adjusted the deformation parameters by moving the mouse horizontally (grain) and vertically (reach). For two conditions, water and glass, we observed high intra-observer consistency: responses were not random, and they yielded concentrated equivalence classes for water and for structured glass.
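The reach/grain manipulation described in the abstract can be illustrated with a minimal sketch. This is an assumption-laden re-implementation in NumPy/SciPy, not the authors' actual eidolon-factory code: each pixel is displaced by a smooth random vector field whose spatial scale is set by `grain` (the blur width of the noise) and whose amplitude is set by `reach` (the RMS displacement in pixels).

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def eidolon_disarray(image, reach, grain, seed=0):
    """Illustrative local-disarray sketch (hypothetical, not the
    published eidolon-factory implementation).

    grain : spatial scale of the displacement field (Gaussian sigma)
    reach : RMS displacement amplitude in pixels
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    # Smooth Gaussian noise fields give locally coherent displacements;
    # larger grain -> smoother, more "glassy" warps.
    dx = gaussian_filter(rng.standard_normal((h, w)), grain)
    dy = gaussian_filter(rng.standard_normal((h, w)), grain)
    # Normalise so the RMS displacement equals reach.
    dx *= reach / (dx.std() + 1e-12)
    dy *= reach / (dy.std() + 1e-12)
    yy, xx = np.mgrid[0:h, 0:w]
    # Resample the image at the displaced coordinates (bilinear).
    return map_coordinates(image, [yy + dy, xx + dx],
                           order=1, mode='reflect')
```

In this sketch, setting `reach = 0` leaves the image unchanged, while large `reach` with large `grain` produces the watery, coherent warps the abstract describes; small `grain` yields finer, more diffuse disarray.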