
Seeing through transparent layers

Files in this item
10.1167_18.9.25.pdf (1.685 MB)
Date
2018
Author
Dövencioglu, Dicle N.
van Doorn, Andrea
Koenderink, Jan
Doerschner, Katja
Quotable link
http://dx.doi.org/10.22029/jlupub-8909
Abstract

The human visual system is remarkably good at decomposing local and global deformations in the flow of visual information into different perceptual layers, a critical ability for daily tasks such as driving through rain or fog, or catching that evasive trout. In these scenarios, changes in the visual information might be due to a deforming object ... or deformations due to a transparent medium, such as structured glass or water, or a combination of these. How does the visual system use image deformations to make sense of layering due to transparent materials? We used eidolons to investigate equivalence classes for perceptually similar transparent layers. We created a stimulus space for perceptual equivalents of a fiducial scene by systematically varying the local disarray parameters reach and grain. This disarray in eidolon space leads to distinct impressions of transparency; specifically, high reach and grain values vividly resemble water, whereas smaller grain values appear diffuse, like structured glass. We asked observers to adjust image deformations so that the objects in the scene looked like they were seen (a) under water, (b) behind haze, or (c) behind structured glass. Observers adjusted image deformation parameters by moving the mouse horizontally (grain) and vertically (reach). For two conditions, water and glass, we observed high intraobserver consistency: responses were not random. Responses yielded a concentrated equivalence class for water and structured glass.
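As a rough illustration of the reach/grain disarray described in the abstract, the Python sketch below warps an image with a smooth random displacement field. The function name, the Gaussian-filtered noise construction, and the exact parameter scaling are assumptions made for illustration only; they are not the eidolon implementation used in the study.

# Hypothetical sketch of a reach/grain-style image disarray (not the authors' eidolon code):
# "grain" sets the spatial scale of a smoothed random displacement field,
# "reach" sets the maximum displacement amplitude in pixels.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def disarray(image, reach, grain, seed=0):
    """Warp a 2-D grayscale image with a smooth random displacement field."""
    rng = np.random.default_rng(seed)
    h, w = image.shape

    # White-noise fields for the x- and y-displacements.
    dx = rng.standard_normal((h, w))
    dy = rng.standard_normal((h, w))

    # "grain": low-pass filter the noise so displacements are locally coherent.
    dx = gaussian_filter(dx, sigma=grain)
    dy = gaussian_filter(dy, sigma=grain)

    # "reach": rescale so the largest displacement equals `reach` pixels.
    scale = reach / (max(np.abs(dx).max(), np.abs(dy).max()) + 1e-12)
    dx *= scale
    dy *= scale

    # Resample the image at the displaced coordinates (bilinear interpolation).
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.vstack([(ys + dy).ravel(), (xs + dx).ravel()])
    return map_coordinates(image, coords, order=1, mode="reflect").reshape(h, w)

# Per the abstract, smaller grain gives a diffuse, structured-glass-like impression,
# while high reach and grain values resemble water.
img = np.random.default_rng(1).random((256, 256))
glass_like = disarray(img, reach=4.0, grain=2.0)
water_like = disarray(img, reach=12.0, grain=8.0)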

URI of original publication
https://doi.org/10.1167/18.9.25
Collections
  • Publikationen im Open Access gefördert durch die UB
License
Attribution 4.0 International (CC BY 4.0)
