Structured observational studies ‘have been guided by at least some of the assumptions of positivism’ (see part 2 above), and the criticisms of positivism therefore apply equally to SO (see parts 3 and 4 for more detail)[c].
By way of contrast, in unstructured observation the researcher does not accumulate empirical, quantitative data but seeks to describe occurrences in rich detail, capturing the voice and language of what is observed. The challenge here is to avoid personal bias or preconceived assumptions that may skew the interpretation of events (Research Methods in Education, 2001, p. 44). However, later categorisation of recorded interviews, tests, questionnaires, classroom settings and so forth may be translated into QN data (Data-based exercise 5, Media Guide, pp. 30-33; E891 DVD; SG, p. 142).
Categorisation presents major challenges in SO:
- A comprehensive listing of all possibilities is needed to prevent anomalies at the recording stage and difficulties in the later analysis phase (Research Methods in Education, 2001, p. 44; SG, p. 141; Wright & Walkuski, 1995, pp. 67-70; Junod et al., 2006, pp. 100-101)
- Categories may not be synchronous with those of other research, preventing the accumulation of knowledge (Hargreaves, 2007, p. 5; SG, p. 141; Wright & Walkuski, 1995, pp. 67-70), and may change over time as conceptions and understanding change (SG, p. 141). It must be noted, however, that the latter is true of all social phenomena studied in any field, and even of the natural world – global warming, for instance. The difficulty arises when temporal (position in a cycle, for instance a school’s point of evolution, or the weekly cycle and so forth (Research Methods in Education, 2001, p. 59)), spatial (environment, geography and so forth) and social aspects (for instance the researcher’s cultural and socio-economic constructs (Research Methods in Education, 2001, p.)) are not taken into account when making claims and generalisations.
- The degree, quality or extent of a category is rarely recorded (SG, p. 141)
- Validity and reliability of data, reflecting the need for accuracy and trustworthiness (SG, p. 144): adequate training and testing of coders is imperative (Wright & Walkuski, 1995, p. 67; SG, p. 143), and consistent, sustained attention over a prolonged period is required (SG, p. 143).
- Difficulty in ‘handling marginal cases’ where categories are not adequate or the unusual occurs (SG, p. 144)[d]
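The coder training and consistency demands listed above are commonly checked by having two coders code the same record and computing a chance-corrected agreement statistic such as Cohen’s kappa. A minimal sketch (the category labels and codings below are hypothetical, not taken from any of the systems cited):

```python
# Cohen's kappa: chance-corrected agreement between two coders who have
# assigned the same observation intervals to behaviour categories.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed proportion of intervals on which the coders agree.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Agreement expected by chance, from each coder's category frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codings of ten classroom intervals by two coders:
a = ["question", "lecture", "lecture", "praise", "question",
     "lecture", "praise", "lecture", "question", "lecture"]
b = ["question", "lecture", "praise", "praise", "question",
     "lecture", "praise", "lecture", "lecture", "lecture"]
kappa = cohens_kappa(a, b)  # raw agreement 0.8, kappa roughly 0.68
```

A kappa well below the raw agreement figure signals that much of the apparent consistency is chance, i.e. that coder training or category definitions need attention.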
[Word count: 563]
- An assessment of the strengths and weaknesses of structured observation from this point of view. (Approximately 1000 words.)
‘There is no automatic connection between positivism and the use of quantitative methods of enquiry’ (SG, p. 79), but the embracing of the scientific method as one of its tools clearly defines the connection, and indeed ‘quantitative research has been shaped by … positivism’ (SG, p. 78).
As in the natural sciences, both QN and QL researchers rely on ‘qualitative knowing’ (SG, p. 135) or ‘experience’ (Mouly, 1978, quoted in Cohen et al., 2001, p. 10; Medawar, 1981, quoted in Cohen et al., 2001, p. 15), which helps refine and define the hypothesis being tested. Interestingly, Polanyi (1959) recognised the subjective nature of ‘qualitative knowing’ in both the natural and the social sciences, while acknowledging that the validity of claims and ‘conclusions of scientific research’ are not necessarily negated by this factor (SG, p. 135). Even in the world of the much-esteemed randomised controlled trials of evidence-based medical research, Hargreaves recognises the idiosyncratic nature of directed research and hypothesising (Hargreaves, 2007, p. 6).
This understanding is fundamental to recognising the strength that SO has brought to educational, and to social research in general: the richness of QL data and qualitative knowing (see part 2), combined with the strength of QN data that comes from validity, reliability and adherence to the demands of scientific methodology.
To exemplify some of the weaknesses documented in the analysis of categorisation difficulties (see part 2 above), let us examine two SO systems: Cheffers’ Adaptation of Flanders’ Interaction Analysis System (CAFIAS) and Academic Learning Time – Physical Education (ALT-PE) (Wright & Walkuski, 1995; Junod et al., 2006). CAFIAS affords the recording of verbal and non-verbal codes in each of six teacher and four student behaviour patterns (Wright & Walkuski, 1995, p. 66), but records no qualification or quality of the behaviour, and no outcomes for the student which might provide a measure of the whole focus of education – student learning. Data are therefore ‘skewed’ in that they primarily record teacher behaviour (Wright & Walkuski, 1995, p. 67). CAFIAS does, however, provide an important measure which, when subjected to analysis, has been used as a tool in the successful habit-training of teachers (Wright & Walkuski, 1995, pp. 66-67). ALT-PE, on the other hand, focuses on student outcomes, but again in terms of behaviour as an indicator of teaching activity that promotes learning (Wright & Walkuski, 1995, pp. 66-67), and neither system carries any category which might illuminate the importance of emotion in the learning process – Bloom’s affective domain (Anderson et al., 2001, p. 47).
However (ignoring the quality of this particular research), as demonstrated by Junod et al. (2006), researchers may and do establish categories to best suit the variable under consideration. Controls may be dependently, temporally and spatially paired (paired within the same classroom), and/or independently controlled (a control group of the same age and era but in a different classroom, which may therefore vary ethnographically and experience very different stimuli from the treatment, or targeted, study group). Researchers may even re-brand whole categories and concepts, such as categorising silent reading as a passive rather than an active, engaged task (Junod et al., 2006, pp. 87-104).
This latter point, however, demonstrates one of the major difficulties, and an attendant weakness, of social research and of educational research in particular: its non-cumulative nature, and therefore its failure to build knowledge. Researchers tend to ignore the possibility, widely used within the sciences as an extremely important tool in the epistemological toolbox, of replication, verification and authentication of others’ research (Hargreaves, 2007, p. 5; Research Methods in Education, 2001, p. 138; Schofield, 2007, p. 195). The contextual reasons for this deficiency have been given variously and cumulatively as geographical, social, cultural and temporal differences between studies, not to mention the idiosyncrasy of human nature (Cohen et al., 2001, pp. 3-9; Gage, 2005, pp. 151-180) – but is not the natural world subject to the same inconsistencies: animal behaviour, changing environments, changing climates, seasonal effects, inter- and intra-species differences, and so forth? (See part 4.)
Directly utilising positivism in the field of SO consequently brings to SO all the weaknesses and strengths that attend positivism.
Strengths of applying positivism in SO:
- Bringing all the strengths of QN methods (see part 2) to the field
- Relatively fast processing of data compared with transcription; sample sizes may therefore be larger and a large number of replications run (adding validity and reliability) (Research Methods in Education, 2001, p. 238)
- Control groups, randomised allocation, pairing, correlation, population testing, association, identification of confounding variables, probability, degree of representation within the population tested, reproducibility, a critical and self-critical nature, and much more – in short, the entire strength of statistical analysis, validity and reliability (Research Methods in Education, 2001, p. 238; SG, Part 4)
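The statistical machinery listed above applies directly to SO tallies. A minimal sketch of one element, correlation, using hypothetical per-lesson counts (none of these figures come from the studies cited):

```python
# Pearson correlation between two observed variables -- e.g. tallies of
# teacher questions and pupil responses per lesson (figures hypothetical).
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Covariance term and the two standard-deviation terms.
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

questions = [4, 7, 2, 9, 5, 6]   # hypothetical tallies, six lessons
responses = [5, 8, 3, 10, 6, 6]
r = pearson_r(questions, responses)  # close to +1: strong association
```

An r near ±1 indicates a strong linear association, though, as noted in the weaknesses below, association alone never establishes causation.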
Weaknesses:
- Coders are restricted to recording only those behaviours pre-assigned to categories in the SO (Research Methods in Education, 2001, p. 241)
- Methods of isolated point sampling ignore the ‘essential cumulative nature of classroom talk as a continuous, contextualised pattern’ (Research Methods in Education, 2001, p. 241)
- Other research methods not reliant on positivism, such as ethnography and socio-cultural discourse analysis, claim more authentic methodologies, focusing on interrogating culture and the real-life classroom and thus raising ‘critical awareness’ (Research Methods in Education, 2001, p. 241). Despite its strength, statistical analysis cannot decipher the building of meaning within a classroom (Research Methods in Education, 2001, p. 243); it offers no way to identify student misconceptions and the consequent conflict which leads to learning (Driver et al., 2008, pp. 60-61), nor the origins of motivation and self-directed learning (Driver et al., 2004, pp. 70-71)
- Hypothesis testing and prior categorisation rest on prior assumptions, and this has been recognised as a weakness when compared with purely qualitative research
- Difficulties attend true randomisation, representative sample-size selection, avoidance of unintentional bias, systematic error introduced for instance through non-response, confounding variables, assignment to control and treatment groups, minimising background effects, ambiguity in the meaning of responses and coding, significance testing, error in ascribing causal agents, assumptions derived from merely associated or correlated variables, and failure to identify the real causal variable when it is hidden or unknown yet correlated to a co-varying, spurious, documented variable (SG, Part 4)
- And yet the greatest of these is probably that recognised in Hampden-Turner’s (1970) objection to positivism (referenced in Cohen et al., 2001, p. 18): that it ignores the all-important ‘human qualities’ and focuses instead on a predictable, repetitious nature which is, by its very nature, alien to the human spirit (Cohen et al., 2001, p. 17)[e]
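To make the randomisation difficulty listed above concrete: even the mechanical step of allocating pupils to treatment and control groups needs care (a recorded seed for reproducibility, equal group sizes). A minimal sketch with hypothetical pupil identifiers:

```python
# Seeded random allocation of a class list into equal-sized treatment
# and control groups; recording the seed makes the split reproducible.
import random

def allocate(pupils, seed):
    rng = random.Random(seed)   # dedicated generator, not the global one
    shuffled = list(pupils)     # copy so the original roll is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

pupils = ["pupil%02d" % i for i in range(20)]   # hypothetical identifiers
treatment, control = allocate(pupils, seed=2011)
```

Reproducibility of the allocation matters precisely because of the replication deficit discussed earlier: a second researcher can verify the split, though no amount of seeding addresses confounds the categories never captured.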
[Word count 1082]
- A conclusion exploring the implications of the assessment you have carried out for the use of structured observation in educational research. (Approximately 500 words.)
The most limiting weakness of the positivism/SO marriage is the limitation placed on it by categorisation – the difficulty in ascribing degrees or quality (although Junod et al. (2006) did reach a compromise by subdividing categories and reassigning a long-standing active engagement (silent reading) to a passive one, thereby finding how important active engagement is to learning in ADHD students (Junod et al., 2006, pp. 87-104)) – and also the dehumanising methodologies inherent in this research philosophy[f].
It appears to me that one very important step missing in educational epistemology is recognisable by comparison with the tables in Appendices I-III: Cohen et al.’s stages in scientific epistemology. The very basic principle is testing and repeating (by others!) and thus validation or rejection by the scientific community. This appears rarely to happen in the educational research world, with all vying to pursue their own pet theories. Thus there is no solidarity within the community of learners, ‘accepted’ knowledge is not being accumulated, and practice guidelines emanating from the research are much muddied by the sheer quantity of conflicting research: the epistemological framework is faulty. Is this because of the arguments focusing on research philosophy? But could not the SO/positivist researchers, for instance, group together and form a constructive community of learners, rather than a community made up of ones? (TR).
The real answer to this conundrum, I suspect, may[g] be that presented in the Research Methods handbook, where it is recognised that the vulnerability and exposure of practice and institution to peers is ‘to open oneself and one's colleagues to self-doubt and criticism’ (Research Methods in Education, 2001, p. 138).
But is this not what the scientist also faces? Are social scientists such delicate flowers? Do they not have the courage of their convictions and confidence in their research methodologies and tools? Or are we all prima donnas, unwilling to take the supporting role, preferring to star ourselves? Food for thought!
One way or another, a key implication (besides the epistemology) is the recognition of the power of well-derived hypotheses and categories in SO research. Various authors speak of an ‘experiential knowing’: a knowing that comes from initial, informal qualitative research and analysis – problem solving and awareness. Another key factor is the statistical power of quantitative data handling to illuminate the true patterns that may form the basis for generalisations and laws.
It is in the combination of these two factors – qualitative and quantitative methods, one informing the other in continuous circles, each leading to an offshoot that forms another circle – that the whole (having been independently verified by the learning community!) contributes to the development of generalisations, concepts and perhaps even laws. Or, of course, it may lead to rejection, perhaps via the rejection of one’s null hypothesis (!), and back to the drawing board for another spiral, each time building on the previous.
Ethics: not to be forgotten in the assessment is the benefit of the true guidance of ‘absolute ethical standards’, which allow no ‘degree of freedom’ (pardon the pun!) for the ‘ends to justify the means’ or for a ‘watered-down’ adherence to principles (Cohen et al., 2001, p. 58). Strict adherence to ethical guidelines when researching in the social sciences is crucial and extends the generalisability of findings[h].
[Word Count 493]
[Total word count: 606+563+1080+493=2742!]
References:
Anderson, L.W., & Krathwohl, D. (Eds.) (2001). A Taxonomy for Learning, Teaching and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives. New York: Longman.
Cohen, L., Manion, L., and Morrison, K. (2001). Research Methods in Education 5th Edition. Routledge Falmer.
Driver, R., Asoko, H., Leach, J., Mortimer, E., and Scott, P. (2004) ‘Constructing Scientific Knowledge in the Classroom’, in Scanlon, E., Murphy, P., Thomas, J. and Whitelegg, E. (eds) Reconsidering Science Learning. London and New York: Routledge Falmer.
Driver, R., Leach, J., Scott, P., and Wood-Robinson, C. (2008) ‘Young People’s Understanding of Science Concepts’, Block 2 Articles. SEH806 Contemporary Issues in Science Learning. Milton Keynes, The Open University.
Hargreaves, D. (2007) ‘Teaching as a research-based profession: possibilities and prospects (The Teacher Training Agency Lecture 1996)’ in Hammersley, M. (ed) Educational Research and Evidence-based Practice, London, Sage in association with The Open University (the Course Reader).
Hartley, J., and Chesworth, K. (2000) ‘Qualitative and Quantitative Methods in Research on Essay Writing: No One Way’. Journal of Further and Higher Education, Vol. 24:1.
Junod, E.V., DuPaul, J., Jitendra, A.K., Volpe, R.J., and Cleary, K.S. (2006) ‘Classroom observations of students with and without ADHD: Differences across types of engagement’. Journal of School Psychology, Vol. 44, pp. 87-104.
Medawar, P.B. (1972) The Hope of Progress. London: Methuen. Review by Rafe Champion; last accessed 6 May 2011.
Mouly, G.J. (1978) Educational Research: the Art and Science of Investigation. Boston: Allyn & Bacon.
Schofield, J. (2007) ‘Increasing the generalizability of qualitative research’, in Hammersley, M. (ed) Educational Research and Evidence-based Practice, London, Sage in association with The Open University (the Course Reader).
Wright, S., and Walkuski, J. (1995) ‘The use of systematic observation in physical education’. Teaching and Learning, 16(1), pp. 65-71.
Appendix I:
(Note to self: missing from educational research?)
The functions of science (Maslow, 1954, quoted in Cohen et al., 2001, p. 11):
- Its problem-seeking, question-asking, hunch-encouraging, hypothesis-producing function
- Its testing, checking, certifying function; its trying out and testing of hypotheses; its repetition and checking of experiments; its piling up of facts
- Its organizing, theorizing, structuring, function; its search for larger, and larger generalizations
- Its history-collecting, scholarly function.
- Its technological side; instruments, methods, techniques.
- Its administrative, executive, and organizational side.
- Its publicizing and educational functions
- Its applications to human use
- Its appreciation, enjoyment, celebration, and glorification
Appendix II
A sequential, eight-stage model of the scientific method (Box 1.6, Cohen et al., 2001, p. 16)
- Hypotheses, hunches and guesses
- Experiment designed; samples taken; variables isolated
- Correlations observed; patterns identified
- Hypotheses formed to explain regularities
- Explanations and predictions tested
(Note to self: this stage – a community of learners, scientists testing the findings of others – is, as far as I can discern thus far, not a custom in educational research; see Hargreaves, 2007, p. 5: ‘Much ed res is … non-cumulative’. He notes (p. 14) that social scientists defend this by stating that the ‘problems are of an intractable nature’ and ‘because the environment changes so that old solutions do not fit the new circumstances’ (Martin Rein, Social Science and Public Policy. Penguin Books, 1976, p. 23).)
- Laws developed or disconfirmation (hypothesis rejected)
- Generalizations made
- New theories –
(Note to self: And thus… Paradigm shifts – Thomas Kuhn, The structure of scientific revolutions, 1962)
Appendix III
Stages in the development of a science (Box 1.5, Cohen et al., 2001, p. 16)
- Definition of the science and identification of the phenomena that are to be subsumed under it.
- Observational stage at which the relevant factors, variables or items are identified and labelled, and at which categories and taxonomies are developed
- Correlation research in which variables and parameters are related to one another and information is systematically integrated as theories begin to develop
- The systematic and controlled manipulation of variables to see if experiments will produce expected results, thus moving from correlation to causality
- The firm establishment of a body of theory as the outcomes of the earlier stages are accumulated. Depending on the nature of the phenomena under scrutiny, laws may be formulated and systematized.
- The use of the established body of theory in the resolution of problems or as a source of further hypotheses.
[a]Clear/coherent account. Main focus on Cohen- limited use of SG.
Anthony Burke
[b]A little over length but not by as much as usual! Well outlined.
Anthony Burke
[c]Very good use of course material.
Anthony Burke
[d]comprehensive account of what structured observation is. Excellent. I have made limited comments throughout as there is no need- not laziness on my part!!!
Anthony Burke
[e]Strengths and weaknesses outlined well.
Anthony Burke
[f]Needs a little more explanation.
Anthony Burke
[g]Here and above – a little too much written in the first person.
Anthony Burke
[h]Important points made- however, all that was required was a focussed summary of the main points raised earlier. This was your weakest section and could have benefitted from a little redrafting to ensure it fully answered the question posed.
Anthony Burke