Atkinson and Shiffrin suggested that whilst encoding in short-term memory was acoustic, encoding in long-term memory was semantic. Baddeley’s (1966) study into encoding in STM and LTM provides evidence for this idea. His aim was to explore the effects of both acoustic and semantic coding in short-term memory and long-term memory. Participants were asked to recall a list of five words in serial order immediately after presentation; the five words came from one of four separate categories: acoustically similar words, acoustically dissimilar words, semantically similar words, and semantically dissimilar words. For the long-term memory study the participants saw ten words and were tested after an interval of twenty minutes, rather than immediately.
Baddeley found from this experiment that in STM words with similar sounds were harder to recall than those which sounded dissimilar, whereas semantic similarity appeared to have no significant effect. In the long-term memory trial, participants performed much better when recalling semantically dissimilar words than semantically similar ones, while acoustic similarity appeared to have no effect.
From these findings Baddeley was able to conclude that whilst short-term memory depends heavily on acoustic encoding, long-term memory depends heavily on semantic encoding.
These findings, and the conclusion drawn from them, support the theory put forward in the Multi Store Model of memory.
Evidence for Atkinson and Shiffrin’s claim that decay is the mechanism for forgetting in short-term memory comes from the study performed by Glanzer and Cunitz (1966). In this experiment participants took part in a free recall test: they memorized a list of words and then repeated them. When allowed to recall immediately after presentation, participants remembered the words at the end of the list much more accurately than those in the middle. However, when the subjects were delayed before repeating the list, this (the ‘recency’ effect) was lost. Atkinson and Shiffrin claimed this as evidence that the words at the end of the list were still held in short-term memory at immediate recall, and that during the delay they were lost through decay.
Although areas of Atkinson and Shiffrin’s model have been supported by evidence from both real-life situations and laboratory experiments, there is also much criticism of the Multi Store Model of memory. It has been argued that the model is too linear: the flow of information through the memory system must be more interactive than it suggests. To pass a visual stimulus, such as the pattern of the letter ‘M’, from sensory memory into STM, information about letter shapes and letter sounds must first be retrieved from long-term memory and used to encode the original data. This would mean that information must be able to skip stages of the model and flow backwards, rather than moving linearly from one store to the next, which opposes the idea proposed by Atkinson and Shiffrin.
The model is also considered too simple because it views each of the three stores as unitary (each a single store with no substores); however, it is widely accepted in psychology that short-term memory is made up of more than one component.
The short-term memory store has different systems for different types of information. This idea is presented in the Working Memory Model of Baddeley and Hitch (1974), in which short-term memory comprises three sections: the central executive (responsible for controlling and coordinating the operation of the phonological loop and the visuo-spatial sketchpad), the phonological loop (the system that deals with spoken and written material), and the visuo-spatial sketchpad (used for storing and processing information in visual or spatial form). This idea was supported by Baddeley and Hitch’s (1976) investigation into whether two tasks can be performed successfully at once. They made two initial assumptions: (1) if two tasks use the same component of STM, they cannot be performed successfully in unison; (2) if two tasks use different components of STM, they can.
To investigate these hypotheses Baddeley and Hitch used the dual task technique, in which two tasks are performed at the same time: the first a digit span task involving repeating a list of numbers, and the second a verbal reasoning task involving answering true or false to a set of questions.
It was found that as the number of digits increased, the time taken to respond to the verbal reasoning questions also increased, but only by an insignificant margin (merely fractions of a second), and there was no increase in answering errors. From this it was concluded that each task used a separate substore of short-term memory: verbal reasoning required the central executive, whilst the digit span task required the phonological loop.
As Atkinson and Shiffrin’s Multi Store Model represented the stores as simplistic unitary stores, these results would be impossible according to the model. This experiment contradicts their idea and suggests that it was too simplistic.
Their model was also unable to explain exceptions, such as the case of autistic savants. These subjects challenge the theory that repetition and rehearsal are needed for information to pass from short-term memory to long-term memory: they are able to recall precise figures without rehearsal, and show no sign of decay over time, nor of any other forgetting mechanism. Because Atkinson and Shiffrin’s model is so rigid, exceptions like this cannot be explained or accounted for.
An alternative to the Multi Store Model was suggested by Craik and Lockhart (1972) in the Levels of Processing model. Their idea tackled the simplicity of Atkinson and Shiffrin’s theory by emphasising the mental processes that occur in memory, rather than a rigid structure like that of the Multi Store Model, which had led to it being labelled ‘too linear’.
Craik and Lockhart rejected the idea of separate stores of memory, instead suggesting that the stimulus inputs go through a variety of different processing operations, each processing operation coming from a spectrum organized by ‘depth’.
They proposed two terms to explain this: shallow processing and deep processing. Shallow processing merely involves recognizing the shape or visual appearance of the stimulus, for example the shape of the letters within a word. Deep processing involves understanding the stimulus and encoding the information semantically.
In the Multi Store Model of memory the idea of rehearsal was used to explain the movement of information from short-term memory to long-term memory; in Levels of Processing, ‘rehearsal’ is explained in greater detail, being divided into two types. The first is maintenance rehearsal: going over information by rote repetition, which is classed as shallow processing. The second is elaborative rehearsal, where semantic links are made to the information, an example of deep processing.
The difference between the two sorts of rehearsal is important as Craik and Lockhart claim in their model that shallow processing will only produce weak, short term retention of information, whilst deep processing will ensure that retention is strong and long-lasting.
The study performed by Hyde and Jenkins (1973) supports the assumption that the level of processing directly affects the retention of information. Presented auditorily with lists of 24 words, participants were required to perform one of these tasks: rate the words for pleasantness; estimate how frequently the words occur in the English language; detect the number of times the letter ‘e’ or ‘g’ occurred in the words; class the words by part of speech (noun, adjective, verb, etc.); or decide whether the words would fit into a given sentence frame.
Whilst half the participants were informed they would be required to recall the words afterwards (the intentional learning group), the other half were unaware (the incidental learning group). A control group learnt the words without performing any of the orienting tasks, so that their results could be compared with those of the other two groups to demonstrate the effects of the experiment.
Hyde and Jenkins tested the participants’ recall of the original 24 words, finding that the differences between the intentional and incidental learning groups were minimal, indicating that the results were not affected by knowing in advance whether recall would be required.
This finding supports Craik and Lockhart’s Levels of Processing theory. They expected little or no difference between the two groups, since according to their model retention does not depend on whether learning is intentional, but on the level of processing taking place.
It was also discovered that participants who processed the words semantically (rating them for pleasantness, or estimating their frequency of occurrence in the English language) recalled more of the words accurately than participants who performed superficial orienting tasks (such as detecting the letter ‘e’ or ‘g’ within the words).
This too supports the Levels of Processing theory: the retention of superficially analysed words is weaker and shorter term, demonstrating shallow processing, whereas the retention of semantically analysed words is stronger and longer term, demonstrating deep processing.
One finding, however, contradicted the Levels of Processing theory. Participants asked to perform the sentence frame task (deciding whether the words would fit into a given sentence frame) gave the lowest level of recall in the experiment. This task involved analysing the words semantically and so should have reflected deep processing, yet the low recall accuracy suggests only shallow processing had occurred.
Although Levels of Processing has been praised for its lack of rigid structure, fault can also be found here. There is no precise, independent definition of ‘depth’, so processing operations cannot be placed on a measurable scale; they can only be ranked relative to one another, and the terms cannot be linked to a definite number or scale.
Craik and Lockhart also failed to explain why deeper levels of processing should lead to better retention: although some evidence supported their claim, they never elaborated on it or gave a justification. Some researchers have argued that this is because the level of processing does not affect retention at all; rather, it is the amount of processing effort used in learning that determines retention.
Tyler et al. (1979) designed an investigation to test this hypothesis. They presented participants with two sets of anagrams: the first classed as easy (for example, doctro), and the second classed as difficult (for example, ocdrto). As expected, when asked to recall the words, participants remembered more of the difficult anagrams than the easy ones. They did not use deeper processing for the more difficult category, as semantic understanding was not required for either set, yet their retention was clearly better for the harder anagrams. Tyler and his colleagues claimed this as evidence that retention depended on processing effort, not processing level.
Taking this into account, Craik and Lockhart (1986) adjusted their original model, suggesting that other factors may also affect retention, such as the elaboration and distinctiveness of words. One investigation substantiating this idea is Hunt and Elliott’s (1980) experiment, which found that participants could recall more words that had distinctive patterns of tall and short letters than words with no distinctive pattern.
The key ideas of Levels of Processing may still be valid, though. Deep processing can lead to better retention, but for reasons other than those originally suggested. In order to recall a piece of information, memory often needs cues to locate and use the relevant data. Deep processing may create more elaborate mental representations, meaning the information becomes linked to a greater number of other items in memory, each of which can serve as a cue. As a greater number of associations are made with any one ‘deeply processed’ item, it is more likely that when trying to recall the information a cue will be found to trigger recall; there is a greater recall potential.
Deeply processed items ought to be more distinctively represented in the memory store as well. This should reduce the interference that Atkinson and Shiffrin suggested was the mechanism for forgetting in long-term memory. This is called release from proactive interference, whereby making new items distinctive from old items in memory decreases the chance of interference from existing memory traces. Each cue will then match only one item, so it cannot be confused with similar data already stored.
None of the models formulated by psychologists is entirely accurate; however, each has its strengths, and evidence from investigations can support the theories behind them. Atkinson and Shiffrin’s Multi Store Model of memory made the key assumption that there were three stores of memory, but it also showed many faults. It was criticised for being too simple, claiming that the three stores were unitary when other research has demonstrated substores, and for not explaining how information can be accessed from long-term memory in order to encode information for short-term memory; the model is said to be too linear and does not account for interaction between the memory stores.
Craik and Lockhart’s Levels of Processing model took a different view of memory, proposing that there were not three memory stores, but rather that the retention of information differed according to the depth at which it was processed. However, there was no independent definition of depth, so it would be difficult to classify procedures using this model. It agreed with the Multi Store Model that acoustic encoding was “short-term” whilst semantic encoding was “long-term”, an idea supported by many studies, including those of Conrad (1964) and Baddeley (1966).
The validity of the ideas within Levels of Processing was improved when psychologists suggested that although the retention of information was not always dependent on the depth of processing, it did appear to be dependent on the effort of processing, as shown by Tyler et al. (1979).