Another weakness of template theory is its inability to account for the recognition of patterns viewed from non-standard angles. Once again, we would need to store additional templates for recognising objects that are rotated or upside down, which would make our long-term memory highly uneconomical in terms of storage.
Prototype theories built on template theories and suggested that humans store prototypes, typical representations of a pattern or object, rather than templates. A prototype is an abstract model stored in long-term memory which embodies the most typical features of an object or pattern. When we see an object, we compare it to our set of prototypes and identify the stimulus as soon as we have a reasonably close match. For example, you may have a stored prototype for each of your teachers, including features such as height and facial appearance; if a teacher came to class wearing glasses for the first time, you would still recognise him or her because the match would be close enough to the stored prototype.
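The matching process described above can be sketched in code. This is a minimal illustration, not a claim about how memory is actually implemented: the prototypes, feature dimensions, and threshold below are all invented for the example.

```python
import math

# Hypothetical prototypes: each teacher reduced to two illustrative
# feature dimensions (height in cm, a made-up "face shape" score).
prototypes = {
    "teacher_a": (180.0, 0.7),
    "teacher_b": (165.0, 0.3),
}

def recognise(stimulus, threshold=15.0):
    """Return the closest-matching prototype, or None if no prototype
    is reasonably close (the threshold is an arbitrary assumption)."""
    best_name, best_dist = None, float("inf")
    for name, proto in prototypes.items():
        dist = math.dist(stimulus, proto)  # distance = dissimilarity
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

# A slightly altered appearance (e.g. new glasses shifting the "face
# shape" score) still matches the stored prototype closely enough:
print(recognise((181.0, 0.8)))  # teacher_a
```

The key property of the theory survives the simplification: recognition succeeds on a close match rather than an exact one, which is what makes prototypes more flexible than templates.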
A strength of this theory is that it is far more flexible and economical than the original template theory, and it can account for the ability to recognise different pictorial representations of the same basic shape.
Further support is provided experimentally: several studies have shown that prototypes can be helpful in recognising patterns such as geometric shapes and simple line drawings (Keele and Reed).
However, there are weaknesses: the theory is vague on how an image is matched up with the appropriate prototype, and it does not adequately explain how prototypes are stored in memory.
Another problem is that it does not explain how we are able to recognise objects that are partially obscured.
Feature detection models were developed in response to the many problems found with template and prototype theories. They improved on both by focusing on the separate features of a pattern rather than on the pattern as a whole. According to this approach, we process images in terms of their basic constituent parts, matching the features of a pattern to features stored in memory rather than attempting to match a whole pattern to a template or prototype.
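To make the contrast with template matching concrete, here is a toy sketch of feature-based recognition. The feature inventories are illustrative assumptions, not a real psychological feature set: each letter is stored as a set of constituent features, and an input is recognised as the letter whose feature set it overlaps most completely.

```python
# Illustrative stored feature sets (assumed, not from the literature).
stored_features = {
    "A": {"left_diagonal", "right_diagonal", "horizontal_bar"},
    "T": {"vertical_line", "horizontal_bar"},
    "O": {"closed_curve"},
}

def recognise_letter(input_features):
    """Pick the stored letter whose feature set best overlaps the
    input's (Jaccard overlap: shared features / all features)."""
    def overlap(letter):
        stored = stored_features[letter]
        return len(stored & input_features) / len(stored | input_features)
    return max(stored_features, key=overlap)

# Recognition works on parts, not the whole pattern, so size or
# orientation of the overall letter never enters the comparison:
print(recognise_letter({"left_diagonal", "right_diagonal", "horizontal_bar"}))  # A
```

Because matching operates on parts rather than whole patterns, the same small feature inventory can cover many differently sized or drawn instances of each letter, which is exactly the economy the theory claims over templates.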
The first real feature analysis model, and one of the most influential within feature theories, was Selfridge's pandemonium model. It essentially involves a hierarchical system of "demons" that perform various information-processing tasks which eventually lead to the recognition of an image. Whereas template models involve only two stages, the pandemonium model involves several simultaneous stages and analyses the pattern or object in more detail. It also has the advantage over template models that, with a limited set of feature detectors, it can recognise a potentially infinite number of objects, and it can recognise letters in spite of changes in size, orientation, and other distortions. The pandemonium model also has an advantage over earlier theories in that it becomes more biologically plausible if one assumes that the demons function rather like neurons.
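The demon hierarchy can be sketched very crudely in code. This is a highly simplified illustration of the idea, with invented feature names and evidence values: feature demons report how strongly their feature is present, cognitive demons for each letter "shout" in proportion to the evidence for their features, and a decision demon picks the loudest shout.

```python
# Hypothetical output of the feature demons for one input image:
# each value is the strength of evidence for that feature (0 to 1).
feature_demons = {
    "vertical_line": 0.9,
    "horizontal_bar": 0.8,
    "closed_curve": 0.1,
}

# Which features each cognitive (letter) demon listens for — assumed
# feature lists, chosen only to make the example work.
cognitive_demons = {
    "T": ["vertical_line", "horizontal_bar"],
    "L": ["vertical_line"],
    "O": ["closed_curve"],
}

def decision_demon(evidence):
    """Each cognitive demon's shout is the summed evidence for its
    features; the decision demon selects the loudest shout."""
    shouts = {
        letter: sum(evidence.get(f, 0.0) for f in features)
        for letter, features in cognitive_demons.items()
    }
    return max(shouts, key=shouts.get)

print(decision_demon(feature_demons))  # T
```

The graded evidence values are the point of the sketch: because each demon reports *how strongly* a feature is present rather than a yes/no match, imperfect or distorted input can still win the shouting match, which parallels the neuron-like behaviour mentioned above.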
The hypothesis that we analyse and recognise objects with regard to distinctive features is supported by Neisser. He found that when people were asked to identify a target letter (Z) embedded among distracters that share its features, for example X and E, it took them longer to locate the Z than when it was embedded among letters that do not share its features, for example C and O. This experiment shows that distinctive features must play a role in pattern recognition; if they did not, it would have taken the same amount of time to find the target letter in both groups. A further advantage of feature models is that they also analyse the extent to which a feature is present, not just whether it is present or not. Therefore, even if a feature is imperfect, the pattern can still be recognised, which helps explain how we are able to read different handwriting.
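Neisser's result falls out of feature models quite naturally, as a toy calculation shows. The feature sets below are purely illustrative: the more features a distractor shares with the target, the harder it is to reject, so shared features serve as a crude proxy for search time.

```python
# Illustrative feature sets for the letters in Neisser's task
# (assumed for the example, not taken from the study).
features = {
    "Z": {"horizontal_bar", "diagonal"},
    "X": {"diagonal"},
    "E": {"horizontal_bar", "vertical_line"},
    "C": {"open_curve"},
    "O": {"closed_curve"},
}

def search_difficulty(target, distractors):
    """Total features the distractors share with the target — a crude
    proxy for how long each distractor takes to reject during search."""
    return sum(len(features[target] & features[d]) for d in distractors)

print(search_difficulty("Z", ["X", "E"]))  # 2 — similar distractors, slow search
print(search_difficulty("Z", ["C", "O"]))  # 0 — distinct distractors, fast search
```

On this account, the time difference Neisser observed is not a separate assumption but a prediction: zero shared features means each distractor can be dismissed immediately.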
Further support for the biological plausibility of feature models was provided by Hubel and Wiesel through their studies of individual cells in the retina and visual cortex. They measured the responses of individual neurons in the visual cortex to stimuli projected onto the retina of a cat. In simple terms, they found that there are specialised feature-detection cells in the visual cortex, with certain cells responding maximally to certain stimuli, for example straight-line stimuli in a particular orientation. Their research supports feature models because it demonstrated that the visual system builds up an image from simple stimuli into more complex representations.
However, there are weaknesses to this theory: it is too simplistic to explain how we recognise complex objects, and it has been criticised for being reductionist.