Discuss the role of bottom up and top down processes in visual perception.


Kamran Khan, 9918104.                                                                        April 2000


Theories of perception tend to explain how the visual system operates in terms of two kinds of processes: bottom-up, data-driven processes, which are pre-attentive and follow from how the nervous system is structured, and top-down, concept-driven processes, which rely on experience, inference and expectation. Apart from the retina, every stage of the visual system has two-way connections (for example, the interplay between the lateral geniculate nuclei and the visual cortex), so top-down processing is an obvious implication of how human anatomy is organised.

Gestalt psychologists have described how we organise sensation to perceive separate objects, and have developed a number of rules for figure-ground organisation.

The basic Gestalt laws are proximity, similarity, closure, and good continuation. However, physiological evidence for the neural basis they proposed for these laws (which they termed isomorphism, and believed to rest on electrical fields in the brain) has not been found: Lashley et al. (1951; cited in Eysenck, 1998) were unable to affect vision by disrupting the electrical fields in the cortex of monkeys.

However, the grouping principles they described operate in perception even at durations of less than 100 ms, which suggests that they must be bottom-up processes.

Template theories suggest that copies of objects are stored in long-term memory, and that incoming images are matched against these copies to produce pattern recognition. This account is completely concept-driven: only objects already stored in memory can be recognised, and novel objects cannot be perceived at all. Since a single object needs a separate template for every shape and size that results from viewing it in 3D, an almost infinite number of templates would have to be stored and accessed.
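The template account, and its weakness, can be made concrete with a small sketch. This is purely illustrative (the representation, names, and matching threshold are my own assumptions, not part of any published model): each "template" is a set of occupied grid cells, and recognition simply picks the stored template with the greatest overlap.

```python
# A minimal sketch of template matching (illustrative; names and
# representation are assumptions). Each template is a set of "on" grid
# cells; recognition picks the stored template with the highest overlap.

TEMPLATES = {
    "L": [(0, 0), (1, 0), (2, 0), (2, 1)],
    "T": [(0, 0), (0, 1), (0, 2), (1, 1), (2, 1)],
}

def match_score(input_cells, template_cells):
    """Fraction of cells shared by input and template (Jaccard overlap)."""
    a, b = set(input_cells), set(template_cells)
    return len(a & b) / len(a | b)

def recognise(input_cells):
    """Return the best-matching stored template, or None for novel input."""
    best, score = None, 0.0
    for name, cells in TEMPLATES.items():
        s = match_score(input_cells, cells)
        if s > score:
            best, score = name, s
    return best if score > 0.5 else None
```

Note that an "L" shifted one cell down fails to match, illustrating why a literal template scheme needs a separate stored copy for every position, size, and orientation of each object.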

 

Feature theorists solve this problem by breaking an image into basic features, which are compared with similar patterns in memory. So-called feature detectors, identified by Hubel and Wiesel (1959, 1968) as single cortical cells, respond to specific (simple) features in their receptive field, such as curves, angles, or straight edges. Other, complex cells respond to combinations such as angles or movement and process the information at a different level, each level adding another layer of sophistication to the analysis of the visual stimulus.


Typified by Selfridge's pandemonium model, feature net models explain perception entirely in terms of data-driven processes. They do not account for the complexity of the human environment, nor do they explain the importance of context.
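The pandemonium idea can be sketched in a few lines. This is a toy version, not Selfridge's original: the feature inventories below are invented for illustration. "Feature demons" report which primitive features are present, "cognitive demons" shout in proportion to how many of their letter's features they hear, and a decision demon picks the loudest.

```python
# A toy pandemonium-style feature net (an illustrative sketch; the
# feature inventories are my own assumptions, not Selfridge's).

LETTER_FEATURES = {
    "A": {"left_oblique", "right_oblique", "horizontal_bar"},
    "H": {"left_vertical", "right_vertical", "horizontal_bar"},
    "T": {"vertical", "horizontal_top"},
}

def decision_demon(detected_features):
    """Each cognitive demon shouts in proportion to its matched features;
    the decision demon simply returns the loudest letter."""
    shouts = {
        letter: len(detected_features & features) / len(features)
        for letter, features in LETTER_FEATURES.items()
    }
    return max(shouts, key=shouts.get)
```

Because the decision flows strictly upward from features to letters, there is no way for context or expectation to influence the outcome, which is exactly the limitation noted above.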

Both approaches are unified in a bi-directional model of pattern recognition, such as that offered by McClelland, Rumelhart, and Hinton (1986; cited in Gleitman, 1999).

This system resolves ambiguous input, and bi-directional inhibition of alternative detectors explains why priming effects result in faster reaction times. The model begins with a knowledge-driven hypothesis, which makes the visual system more sensitive to data ...
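The priming account can be sketched numerically. This is a minimal illustration in the spirit of an interactive-activation net, not McClelland and Rumelhart's actual parameters: top-down expectation raises a detector's resting activation, so with identical sensory evidence a primed detector crosses threshold in fewer processing cycles.

```python
# A minimal sketch of top-down priming (illustrative; all quantities are
# arbitrary integer activation units, not parameters from the real model).

def cycles_to_recognise(evidence, prime=0, inhibition=1, threshold=10):
    """Count cycles until a detector's activation crosses threshold.
    `prime` is top-down excitation from context; `inhibition` is the
    suppression received from a competing detector each cycle."""
    assert evidence > inhibition, "detector would never reach threshold"
    activation = prime
    cycles = 0
    while activation < threshold:
        activation += evidence - inhibition
        cycles += 1
    return cycles

unprimed = cycles_to_recognise(3)           # no contextual expectation
primed = cycles_to_recognise(3, prime=4)    # context pre-activates the detector
```

The primed detector finishes in fewer cycles, which is the model's explanation of faster reaction times to expected stimuli.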
