AS Psychology Coursework

An Experiment on Memory and the Effect of Different Levels of Processing

Contents

Abstract

Introduction

Method

Results

Discussion

References

Appendix 1

Consent Form

Appendix 2

Standardised Instructions

Appendix 3

Sample Answer Sheet

Appendix 4

Word List

Appendix 5

Debriefing Sheet

Appendix 6

Raw Data

Appendix 7

Statistical Calculations

Abstract

This investigation is based on the Levels of Processing model of memory, developed by Craik and Lockhart, which states that the more deeply information is processed, the more likely it is to be recalled.

The aim of the study was to investigate the relationship between the level of processing required and the relative recall rates. The experimental hypothesis was that there would be a significant positive relationship between the depth of processing required and the likelihood of a word's recall; the null hypothesis was that there would be no such relationship. An experimental research method was used with an independent measures design. The sampling method was opportunity sampling, and the sample consisted of 10 males and 10 females, all office workers from Cirencester.

The results showed no evidence of a positive relationship between the level of processing required and the likelihood of recall. On average, 36.1% of the words recalled had required visual processing, 30.5% acoustic processing and 34.9% semantic processing.

Hence, the experimental hypothesis was rejected and the null hypothesis accepted, leading to the conclusion that the level of processing of information has no effect on its likelihood of recall.

Introduction

This study has its origins in the psychology of memory.

STM and LTM

In 1890 the American philosopher and psychologist William James suggested that memory is split into two stores - short-term (STM) and long-term (LTM).

The short-term memory holds information for only a few seconds, essentially until it has been used. George Miller (1956, as cited in Askam et al., 2000) described its capacity as 'the magical number seven, plus or minus two': experiments have shown that, on average, people can hold between five and nine pieces of information, either as individual items or grouped into meaningful chunks.
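
As a minimal illustration of chunking, the short Python sketch below shows how recoding digits into meaningful groups brings a list within the five-to-nine range. The digit string and its grouping are assumptions chosen for illustration, not Miller's actual materials.

# Illustrative sketch of chunking (the example digits are assumed, not Miller's).
# Twelve digits exceed the 'seven, plus or minus two' limit as individual
# items, but recoded as three familiar dates they form just three chunks.
digits = "149217761066"

individual_items = list(digits)                                # 12 separate items
chunks = [digits[i:i + 4] for i in range(0, len(digits), 4)]   # ['1492', '1776', '1066']

print(len(individual_items))  # 12 -> outside the five-to-nine range
print(len(chunks))            # 3  -> comfortably within it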

The long-term memory differs in a few vital ways from the STM.

Firstly, it stores 'a person's knowledge of the world, their past life, and their plans for the future' (Askam et al., 2000, p. 7). The LTM also holds information for considerably longer than the STM; in fact both its duration and its capacity are unknown, and both appear to depend on several factors, including the apparent relevance of the information to the individual and how frequently it is recalled.

Models of Memory

Various models of memory have been devised over the years, each attempting to provide a logical picture of the memory stores and the processes that link them.

The first, and most influential, is the Multi-Store Model, devised by Atkinson and Shiffrin (1968, as cited in Gross, 1990), which forms the basis for all the later models.

The key point of this model is that memory involves three stores linked by various processes.

Information is received by the senses and enters the sensory store. If this information is attended to, it passes to the STM store; if not, it is lost. Likewise, if information in the STM store is rehearsed, it passes to the LTM store; if it is not, it is lost. The final point of the model is that for information to be recalled from the LTM it must first be retrieved back into the STM.
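
A minimal sketch of this flow, written in Python purely as an illustration; the function name and the attended/rehearsed flags are assumptions standing in for the attention and rehearsal processes, not part of Atkinson and Shiffrin's model itself.

def multi_store_recall(stimulus, attended, rehearsed):
    # Information enters the sensory store via the senses.
    sensory_store = stimulus
    # Unattended information is lost from the sensory store.
    if not attended:
        return None
    # Attention passes the information into the STM store.
    stm_store = sensory_store
    # Unrehearsed information is lost from the STM store.
    if not rehearsed:
        return None
    # Rehearsal passes the information into the LTM store; recall
    # retrieves it from the LTM back into the STM.
    ltm_store = stm_store
    return ltm_store

print(multi_store_recall("BOAT", attended=True, rehearsed=True))   # BOAT (recalled)
print(multi_store_recall("BOAT", attended=True, rehearsed=False))  # None (lost)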

The Working Memory Model, created by Baddeley and Hitch (1974, as cited in Gross, 1990), aimed to provide a more detailed picture of the STM. They believed it contained many components, or sub-systems, each specialising in a particular task, with one, the so-called central executive, controlling the whole operation. This means that information does not simply sit in the STM waiting to be passed into the LTM; it is constantly analysed and 'worked on'.

The final model, devised by Craik and Lockhart (1972, as cited in Gross, 1990), concerns how information is encoded and processed rather than how it is transferred between stores. They proposed that information can be processed at different levels, and that the more deeply it is processed, the better it is remembered.

To support this, they carried out an experiment comparing visual, acoustic and semantic processing. Participants were given cards bearing questions about particular words. These required either visual processing, e.g. 'Is this word in upper case? BOAT'; acoustic processing, e.g. 'Does this rhyme with leaf? TIN'; or semantic processing, e.g. 'Is PAPER an animal?'. Later, participants were asked to recall the words.
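
A brief Python sketch of how such a trial list might be represented, and recall tallied by condition. The words, questions and function name here are hypothetical illustrations, not Craik and Lockhart's actual materials.

# Hypothetical trial list: each target word is paired with the level of
# processing its question demands. The scorer tallies how many of the
# recalled words came from each condition.
trials = [
    ("BOAT",  "visual",   "Is this word in upper case?"),
    ("TIN",   "acoustic", "Does this rhyme with leaf?"),
    ("PAPER", "semantic", "Is this word an animal?"),
]

def recall_by_level(trials, recalled_words):
    counts = {"visual": 0, "acoustic": 0, "semantic": 0}
    for word, level, _question in trials:
        if word in recalled_words:
            counts[level] += 1
    return counts

print(recall_by_level(trials, {"TIN", "PAPER"}))
# {'visual': 0, 'acoustic': 1, 'semantic': 1}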

From this experiment Craik and Lockhart found that words processed most deeply, in other words semantically (by meaning), were more likely to be recalled than acoustically processed words, which in turn were more likely to be recalled than visually processed words, the latter requiring the least processing.

Since then, this experiment has been replicated many times, as it is by this study, the aim of which was to investigate the relationship between the level of processing required and the relative recall rates.

Hypotheses

Experimental Hypothesis

There will be a significant positive relationship between the depth of processing required and the likelihood of a word's recall.

Null Hypothesis

There will be no significant positive relationship between the depth of processing required and the likelihood of a word's recall.

Method

Design

The hypothesis for this study was ...
