Differences in the Coding of Spatial Relations in Animals and Objects
Eric E. Cooper & Brian E. Brooks
We have recently conducted an experiment on how the relations among visual primitives in animals and objects are coded for the purposes of visual recognition. Specifically, we presented photographs of animals and objects, rotated in the picture plane, to participants on a computer screen.
An example of the conditions in our experiment:
Computational models of object recognition that posit categorically coded spatial relations among primitives predict the poorest recognition performance when objects are rotated 135°. Models that posit coordinate relations among primitives, in which the exact locations of the primitives are coded, on the other hand, predict that recognition will be poorest at about 180°. According to our theory, animals are recognized using a shape representation system that codes coordinate relations, while objects are recognized using categorical relations. Therefore, our theory predicts the following pattern of results:
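The coordinate-relations prediction has a simple geometric intuition: under a picture-plane rotation, the Euclidean displacement of each primitive's location grows monotonically with rotation angle and peaks at 180°. The sketch below (an illustrative calculation, not the model used in our experiment; the primitive locations are hypothetical) computes the mean displacement of a set of primitive centers at several rotation angles.

```python
import math

def coordinate_mismatch(points, theta):
    """Mean Euclidean displacement of primitive locations after a
    picture-plane rotation by theta (radians) about the origin."""
    total = 0.0
    for x, y in points:
        # standard 2D rotation of (x, y) by theta
        xr = x * math.cos(theta) - y * math.sin(theta)
        yr = x * math.sin(theta) + y * math.cos(theta)
        total += math.hypot(xr - x, yr - y)
    return total / len(points)

# hypothetical primitive centers, for illustration only
points = [(1.0, 0.0), (0.0, 1.0), (-0.5, 0.5), (0.3, -0.8)]
angles = [0, 45, 90, 135, 180]
mismatch = {a: coordinate_mismatch(points, math.radians(a)) for a in angles}

# each point is displaced by 2*r*sin(theta/2), so the mean displacement
# rises monotonically and is largest at 180 degrees
assert mismatch[180] == max(mismatch.values())
```

Because each point at radius r moves by 2·r·sin(θ/2), the mismatch is largest at θ = 180°, which is why coordinate models predict the poorest recognition there. A categorical model, by contrast, codes only relations such as above/below and left-of/right-of, and (as stated above) predicts the worst performance at 135°.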
Notice that the actual results map nicely onto our predicted results:
The results suggest that animal identification uses a shape representation employing coordinate relations, in which the precise locations of visual primitives are coded, while object recognition uses relations that are coded categorically.