How we see liquids

dc.contributor.author: Assen, Jan Jacob Reindert van
dc.date.accessioned: 2023-03-28T12:42:57Z
dc.date.available: 2018-03-21T09:16:57Z
dc.date.available: 2023-03-28T12:42:57Z
dc.date.issued: 2018
dc.description.abstract: We have a remarkable understanding of the objects and materials we encounter in everyday life. It helps us to quickly identify what is predator and what is prey, what is edible and what is poisonous. Despite large image differences, our visual system extracts material properties very consistently. Liquids are a category of materials that appears particularly challenging because of their volatile nature, yet we can estimate complex liquid properties such as runniness or sliminess. How are we able to do this? How can we perceive that honey is thicker than milk, or that water in a glass is the same material as water spraying from a fountain? Four studies were conducted to achieve a better understanding of the image information we use to estimate liquid properties. In study 1 we look specifically at the contribution of optical cues when estimating a range of liquid properties. Using the same liquid shapes with different optical appearances, we studied which perceived properties (e.g., runniness) are influenced by optical or mechanical cues. We encounter liquids in many different states and contexts. In study 2 we look specifically at the constancy of viscosity perception despite radical changes in shape. How consistently do we actually perceive liquids? We simulated a range of different scenes to learn how sensitive observers are to shape changes when estimating viscosity. In study 3 we look into the specific shape features underlying visual inferences about liquids. By comparing observers' viscosity ratings with perceived shape features, we show how the brain exploits 3D shape and motion cues to infer viscosity across contexts despite dramatic image changes. In study 4 we estimate the perceived viscosity of an image with neural networks. Machine learning is a powerful tool that has enabled major breakthroughs on difficult visual tasks. Here we trained a neural network specifically designed to mimic human performance in estimating viscosity. Our results show that the perception of liquids is driven mainly by optical, shape, and motion cues. Observers show great perceptual constancy when rating viscosity across a wide range of scenes, and mid-level features (e.g., spread, pulsing) are an important and reliable source for estimating viscosity consistently across contexts.
dc.identifier.uri: http://nbn-resolving.de/urn:nbn:de:hebis:26-opus-135092
dc.identifier.uri: https://jlupub.ub.uni-giessen.de//handle/jlupub/15764
dc.identifier.uri: http://dx.doi.org/10.22029/jlupub-15146
dc.language.iso: en
dc.rights: In Copyright
dc.rights.uri: http://rightsstatements.org/page/InC/1.0/
dc.subject: material appearance
dc.subject: viscosity
dc.subject: liquid
dc.subject: recognition
dc.subject: visual features
dc.subject: perception
dc.subject: perceptual constancy
dc.subject: machine learning
dc.subject.ddc: ddc:150
dc.title: How we see liquids
dc.type: doctoralThesis
dcterms.dateAccepted: 2018-03-20
local.affiliation: FB 06 - Psychologie und Sportwissenschaft
local.opus.fachgebiet: Psychologie
local.opus.id: 13509
local.opus.institute: Abteilung Allgemeine Psychologie
thesis.level: thesis.doctoral

Files

Original bundle
Name: VanAssenJanJaap_2018_03_20.pdf
Size: 18.34 MB
Format: Adobe Portable Document Format