How humans choose where to grasp objects
We use our hands as all-purpose tools to feel and touch the world, to climb, to point, to create art (e.g., by playing musical instruments, drawing, or sculpting), to communicate, and to pick up objects. Throughout these actions the hand takes on different shapes. When reaching out to grasp something, the human hand unfolds into a shape that is appropriate for grasping and manipulating the object securely. Humans can grasp an object in a variety of ways by varying their digit placements on it. Even for a precision grip, where only the thumb and index finger are in contact with the object, for example a coffee cup, there may already be several thousand combinations of finger and thumb locations on the cup's surface to choose from. Computationally, this choice is far from trivial, yet selecting proper grasp locations is necessary to ensure successful interaction with the object. Despite that computational challenge, humans are able to determine secure, comfortable grasp locations that minimize slippage and torsion by considering an object's shape, its material characteristics, and the desired action outcome.

The aim of this thesis was to understand how humans achieve this complex task. Study 1 investigated which factors determined where humans grasped 3D objects and how these factors were prioritized according to their relative importance. We used motion tracking to record and analyze participants' grasping behavior with blocky wooden and brass objects, and merged those findings with a computational model in order to predict where novel 3D-printed plastic objects would be grasped. We found that the limits force closure imposed on a grasp, the participants' natural grip axis (NGA), and grasp aperture were the three key factors that participants considered to determine ideal grasp locations. The amount of torque a grasp would produce, and the extent to which the participant's hand would occlude the object, were factors that carried less weight in grasp choice.
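The idea of combining weighted grasp factors can be illustrated with a minimal sketch. This is not the thesis's actual model: the weight values, the cost terms, and their normalization are assumptions for illustration only; the factor names follow the text, and the ordering of the weights mirrors the reported relative importance (force closure, NGA alignment, and aperture dominating; torque and occlusion mattering less).

```python
# Hypothetical sketch of a weighted-cost grasp-selection model.
# Each candidate precision grip is scored by a weighted sum of penalty
# terms; the lowest-cost candidate is the predicted grasp. All terms
# are assumed to be normalized to [0, 1].

def grasp_cost(grasp, weights):
    """Lower cost = better grasp. Terms and weights are illustrative."""
    return (
        weights["force_closure"] * grasp["force_closure_violation"]
        + weights["nga"] * grasp["nga_misalignment"]        # deviation from the natural grip axis
        + weights["aperture"] * grasp["aperture_penalty"]   # grip size vs. comfortable hand span
        + weights["torque"] * grasp["torque"]               # torque the grasp would produce
        + weights["occlusion"] * grasp["visibility_loss"]   # how much the hand hides the object
    )

# Illustrative weights reflecting the reported ordering; per the text,
# the torque weight would additionally depend on the object's mass.
WEIGHTS = {"force_closure": 1.0, "nga": 0.8, "aperture": 0.8,
           "torque": 0.2, "occlusion": 0.1}

candidates = [
    {"force_closure_violation": 0.0, "nga_misalignment": 0.1,
     "aperture_penalty": 0.2, "torque": 0.5, "visibility_loss": 0.3},
    {"force_closure_violation": 0.4, "nga_misalignment": 0.0,
     "aperture_penalty": 0.0, "torque": 0.1, "visibility_loss": 0.1},
]

best = min(candidates, key=lambda g: grasp_cost(g, WEIGHTS))
```

In this toy comparison, the first candidate wins despite higher torque and occlusion, because violating force closure is penalized far more heavily than the weaker factors.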
Whether torque, specifically, would be considered depended on the overall weight of the object. The model predicted human grasps remarkably well, even for the novel objects, demonstrating that human grasps followed our ideal-grasp rules.

Moving on, we were interested in how grasp rules might interact. Study 2 examined whether grasp locations would be chosen so that the grasp was aligned with the NGA, or so that the grasp fell on higher-friction contact areas and resulted in a more secure grasp pose. Our goal was to understand how the interaction between these two factors influenced grasp preference. NGA alignment was manipulated by rotating a cubic target object, whereas grasp stability was manipulated by altering the object's surface characteristics. We found that participants favored the higher-friction surfaces and sacrificed alignment with their NGA in order to create stable grasp configurations.

Having investigated the various factors that influence grasp locations, their relative importance, and, to some extent, their interactions, we aimed to understand how the brain computes and combines these constraints to produce appropriate motor outputs. Study 3 used functional magnetic resonance imaging (fMRI) to examine how grasp-relevant information is represented across sensorimotor brain areas. Participants planned and carried out pre-selected grasps on multi-material 3D objects designed to disentangle how the brain coded the NGA, object mass, and grasp aperture. We found that the orientation of the grasp was predominantly encoded during grasp planning in dorsal regions. The size of the grasp was encoded during both planning and execution phases in various groupings of dorsal and ventral regions. Encoding of object mass was found predominantly during execution, throughout dorsal, ventral, and motor areas.
Taken together, these experiments provide insight into how humans combine multiple factors to decide where to place their grasps.