The Multistore model of memory was introduced by Atkinson and Shiffrin (1968), developing the two-store account of Waugh and Norman (1965). The model proposes that information flows through a series of separate stores: incoming information is briefly held in sensory stores, and attended information passes into short-term memory (STM). If information is rehearsed in the STM, it is transferred to long-term memory (LTM); otherwise it is forgotten. However, alternative explanations of memory can account for the same empirical findings. For example, the Multistore model does not explain differences in the amount of material recalled. Some argue that the model is over-simplified, assuming that each store functions in a uniform fashion, whereas evidence suggests that each has a number of different components. Although it is a useful conceptual tool, it does not give an adequately detailed account.
The Working Memory Model, proposed by Baddeley and Hitch (1974) and developed by Baddeley (1986), was an attempt to provide more detail, although it deals only with STM and recently activated parts of LTM. It concentrates on the systems by which information is processed in STM and emphasizes the idea of an active processor: Baddeley proposed that STM is not passive but is actively involved in processing. Empirical evidence has come from dual-task studies, in which subjects have been shown to be able to perform two tasks involving STM concurrently.
Baddeley suggested that working memory consists of a Central Executive, which controls three slave systems. The Central Executive is the most important part of the system and is engaged by demanding tasks such as problem solving. It can process information from any of the senses and can even store information for a short period of time. The three slave systems are the visuo-spatial scratch pad, the articulatory loop, and the primary acoustic store. The main problem with this model is that it does not take LTM into consideration.
Craik and Lockhart (1972) offered an alternative to the STM/LTM dichotomy by suggesting that long-term retention is mainly determined by the processing that occurs at the time of learning. This is the Levels of Processing model. Primary memory (STM) involves continuous rehearsal; secondary memory (LTM) contains material that has been processed at a deeper level, and it is this depth of processing alone that determines the duration of the memory.
The classic empirical test of this model was to present subjects with a list of questions to which they had to answer ‘yes’ or ‘no’. The questions led subjects unwittingly to process the words at a physical (shallow), phonemic, or semantic (deep) level. This was followed by an unexpected recognition test; the words that had been deeply processed were remembered best. Experiments showing how organizing material unwittingly leads to good recall also suggest that processing and memory may well be equivalent.
This model, too, is said to be over-simplified, and in some ways the depth of processing account describes rather than explains what is happening. Moreover, the notion of depth is an intuitive one and cannot be measured in any scientific sense; and while depth is important in some situations, other features, such as relevance, can account for differing results.
I believe that the Working Memory Model is the least plausible, as it does not account for LTM. Although the Multistore model may be simple and lacks a detailed account, I think it clearly shows how STM and LTM work and makes intuitive sense. The Levels of Processing model is difficult to apply because depth cannot be measured in any scientific sense.