As previously stated, the study claims to be a randomised controlled trial (RCT), and more specifically an open trial, meaning that everyone participating was aware of which group they had been allocated to and that the study was conducted within a controlled environment. An RCT is quite appropriate here, as the questions posed by the study seem best answered by this means. Randomised controlled studies are designed to be carried out within a practice environment in which variables can be easily controlled or manipulated (Hek & Moule, 2006). Unfortunately, although experimental designs are easier to control, they do have some disadvantages. For example, they can be particularly susceptible to the Hawthorne effect, whereby participants' responses are skewed by the knowledge that they are in a trial (Carter, 1996). An open design, in this specific case, was probably the only way to make the study feasible: blinding the participants to the theme of the study would have proved extremely difficult, and arguably unethical given the ages of the participants (Parahoo, 2006).
A possible alternative would have been a more qualitative design, placing greater emphasis on the personal experience of the patient after using both the multimedia software provided and the traditional methods. A semi-structured interview, in which all of the participants are interviewed using a set of questions to provide a loose structure (Hek & Moule, 2006), would have allowed a richer, more personal view of patient experiences, but would have suffered from being entirely subjective, as well as expensive, difficult to measure and almost impossible to generalise to an entire population (Bell, 2005). It is also a method fraught with reliability issues: interviewing is not necessarily an innate skill, and those conducting the interviews would need to be experienced, so as not to inadvertently 'lead' the participant or provide cues that might influence the respondent (Hek & Moule, 2006).
The sample used in this experiment initially consisted of 246 children fitting the recruitment criteria, falling to 228 after attrition for various reasons. The sample appears to be a convenience sample: the participants were drawn from a population to which the researchers had easy access (Parahoo, 2006). It appears that the initial 246 children were those, of the 1000 children approached, who agreed to participate in the trial. This, however, is not explicitly stated, and has been inferred from the information given.
The use of convenience sampling is appropriate for the research design. This method of sampling involves selecting participants who are easily available (Herek, 1997), in this case attendees at the researchers' clinics. Convenience, or accidental, sampling can be appropriate for a number of reasons: for example, the researchers in this case needed access to a very specific group of patients, namely asthmatic children, and any asthmatic children presenting at the clinic would fit the criteria. It is therefore a cheap and easy method of recruiting participants (Parahoo, 2006). Although a large number of research papers use this method (Webb, 2003), it does have some unfortunate drawbacks. For example, patients attending the clinic would have come from a relatively small region, and therefore the sample, and by extension the research findings, could not be said to be representative of the population (Hek & Moule, 2006), something that quantitative research seeks to achieve.
An alternative method of sampling could have been stratified random sampling, whereby potential participants are randomly selected according to a specific sampling frame, accounting for variables such as ethnic group and gender, thus ensuring that a more accurate representation of the population is obtained (Hunt & Tyrrell, 2001). Unfortunately, this sampling method can become extremely complex: as only 1000 potential participants were initially identified, accounting for several different variables could have resulted in an extremely small sample (Hunt & Tyrrell, 2001). Indeed, increasing the sample size to compensate could have become extremely expensive and time-consuming.
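To illustrate the principle of proportional stratified sampling described above, the following minimal sketch (written in Python purely for illustration) draws a sample of 228 from a hypothetical frame of 1000 children stratified by gender; the frame, the stratifying variable and the sample size are assumptions for the example, not data from the study.

import random
from collections import defaultdict

# hypothetical sampling frame of 1000 children with gender recorded
frame = [{"id": i, "gender": random.choice(["male", "female"])} for i in range(1000)]

# group the frame into strata
strata = defaultdict(list)
for child in frame:
    strata[child["gender"]].append(child)

target = 228  # hypothetical total sample size
sample = []
for group, members in strata.items():
    # draw from each stratum in proportion to its share of the frame
    k = round(target * len(members) / len(frame))
    sample.extend(random.sample(members, k))

print(len(sample))  # approximately 228, split proportionally across the strata

With several stratifying variables (gender, ethnic group, age band and so on), each stratum quickly becomes very small, which is the practical difficulty noted above.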
In order to be applicable to a population, a sample size is needed that reflects the population as a whole (Parahoo, 2006). According to Hek and Moule (2006), a smaller sample size is more appropriate to a qualitative study (interested in the quality of the information) than to a quantitative study (interested in gaining enough information to make wider judgements). As this study has been identified as quantitative, a larger sample size would be expected (Hek & Moule, 2006). As approximately 20 million Americans suffer from asthma (this being an American study), a convenience sample of 228 children can hardly be representative of that number. The researchers offer no justification or explanation as to why such a small number was used, something that would be expected for such an unusually small sample (Parahoo, 2006).
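For comparison, a standard sample-size calculation for estimating a proportion, sketched below, gives an indication of the numbers conventionally expected; the 95% confidence level and 5% margin of error are assumptions chosen for the example, and this is not a calculation reported by the researchers.

import math

z = 1.96   # z-value for 95% confidence (assumed)
p = 0.5    # most conservative assumed proportion
e = 0.05   # assumed margin of error of 5%

n = math.ceil((z ** 2 * p * (1 - p)) / e ** 2)
print(n)   # 385

Even under these generic assumptions, the suggested sample of roughly 385 exceeds the 228 children actually analysed, which underlines why some justification of the sample size would have been expected.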
Data collection methods refer to the methods and tools used by the researchers to obtain their data for analysis. In this case, the data collection methods included a variety of questionnaires, as well as demographic forms and the physiological measurement of forced expiratory volume and flow: FEV1 and FEF75% respectively. The questionnaire is a popular method of data collection, especially for quantitative research, mainly because it is predetermined, standardised and structured: three important and defining factors in quantitative research (Leung, 2001). That is not to say that, when correctly used, it does not have a place within qualitative research (Hek & Moule, 2006). Questionnaires, in this case, were used for most of the data collection.
The use of questionnaires, although potentially quite difficult, can be advantageous. A well-designed questionnaire can be a fast, objective and comprehensive tool for collecting large amounts of data from very large populations in a reasonably small time frame (Milne, 1999). In this case, the questionnaires were completed within the clinic environment, meaning that not only could several participants take part at the same time, but response rates would also have been exceptionally high (a poor return rate being a common downfall of the questionnaire).
Questionnaires can be quite difficult to design and implement for several reasons. For example, they can be expensive to develop and may require piloting before mainstream use (Parahoo, 2006). In this case, the majority of the questionnaires were designed by the researchers themselves rather than drawn from previously validated instruments, an approach that can be costly (Parahoo, 2006) and that calls the reliability and validity of the tools into question. The lack of independent validation can mean that a poor-quality questionnaire is employed, which could then affect the validity of the results (Milne, 1999). The results of questionnaires can also be quite limiting: respondents may not be able to express their views fully, as there is no facility for this in a standardised format (Milne, 1999).
An alternative to the questionnaire approach could have taken the form of a debriefing interview, for example a structured, one-to-one interview with a researcher using a guiding questionnaire. The advantages of this qualitative method of data collection include the sheer volume of information that can be obtained, as well as the potential for a much deeper understanding of patients' feelings (Arthur & Nazroo, 2003). Unfortunately, such interviews would be extremely time-consuming, and therefore not appropriate for a quantitative design.
The data analysis section of the research appears initially to be quite comprehensive. The presence of a specific data analysis 'plan' indicates that this section was well thought out and planned in advance. Despite this, the statistics are quite complex and difficult to follow. Selection of data analysis techniques depends upon a number of variables, for example the sample size, sampling method and research design (Parahoo, 2006).
The researchers make quite extensive use of significance tests. For example, the t-test used to compare the mean results of the control and experimental groups, and the associated p-value threshold of 0.05, are both valid and appropriate data analysis techniques in these circumstances (Parahoo, 2006). Unfortunately, the results of the t-test are quite difficult to interpret, as the researchers have neglected to report the degrees of freedom. Without this value, it is impossible to identify the correct distribution against which to judge the reported t-values, and therefore their significance cannot be independently checked.
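As a minimal sketch of the point, using invented FEV1 values rather than the study's data, the degrees of freedom for a pooled-variance, independent-samples t-test follow directly from the two group sizes (n1 + n2 - 2), so reporting them costs nothing:

from scipy import stats

# invented FEV1 changes, not the study's data
experimental = [2.1, 2.4, 1.9, 2.6, 2.2, 2.5]
control = [1.8, 1.7, 2.0, 1.6, 1.9, 1.8]

t_stat, p_value = stats.ttest_ind(experimental, control)  # pooled-variance t-test
df = len(experimental) + len(control) - 2                 # degrees of freedom: n1 + n2 - 2
print(f"t({df}) = {t_stat:.2f}, p = {p_value:.3f}")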
The Cochran-Mantel-Haenszel (CMH) test employed to determine significant differences also seems to be an appropriate choice. Significant difference in this case refers to the likelihood that the difference between the experimental and control groups is down to chance (Statistics.com, 2007). The CMH test is one such significance test, and seems to have been employed appropriately to show that the differences between the groups were not down to chance.
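The following is a minimal sketch, using invented counts and the statsmodels library, of how a CMH test combines one 2x2 table per stratum; the strata, counts and outcome labels are assumptions for illustration and do not reflect the study's data.

import numpy as np
from statsmodels.stats.contingency_tables import StratifiedTable

# one invented 2x2 table per stratum (e.g. an age band);
# rows: improved / not improved, columns: experimental / control
tables = [np.array([[30, 18], [10, 22]]),
          np.array([[25, 15], [12, 20]])]

result = StratifiedTable(tables).test_null_odds(correction=True)
print(f"CMH statistic = {result.statistic:.2f}, p = {result.pvalue:.3f}")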
With regard to the presentation of data, two data displays were presented: a table relating to the spirometry results and a chart showing the results of the questionnaire on acceptance of interactive multimedia. The table relating to lung function shows data for both the experimental and control groups, including their mean results and the standard deviations. The mean refers to the average of the results over the 12 months (Parahoo, 2006), whereas the standard deviation indicates how widely the data are spread around the mean (Hek & Moule, 2006). From this table, it can be inferred that lung function improved in both groups over the twelve months, but improved more in the experimental group than in the control group. This method of presenting the data is both clear and easy to interpret (Donnan, 1996).
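As a small illustration of these two summary statistics, the sketch below computes the mean and standard deviation of a handful of invented FEV1 values; the figures are not taken from the paper.

from statistics import mean, stdev

fev1_results = [2.1, 2.4, 1.9, 2.6, 2.2, 2.3]   # invented litre values, not the study's data
print(f"mean = {mean(fev1_results):.2f} L, SD = {stdev(fev1_results):.2f} L")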
The chart representing acceptance of interactive multimedia shows the percentage of participants who circled each response on the relevant questionnaire. From the chart it can be inferred, for example, that 100% of participants found the program very or somewhat easy, or that 71% found it very or somewhat interesting. The relevant questionnaire was, as previously stated, designed around a 5-point Likert scale, and as such was intended to capture a spectrum of participants' feelings. It appears that the researchers have combined the two positive responses rather than breaking the results down into the five separate bands. As the Likert scale was designed to be interpreted using the five bands individually (Likert, 1932), the results are arguably presented incorrectly.
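The difference between the two presentations can be illustrated with a minimal sketch using invented responses to a single 5-point item; the counts below are assumptions for the example only.

from collections import Counter

# invented responses to one 5-point Likert item
responses = (["very easy"] * 40 + ["somewhat easy"] * 30 + ["neutral"] * 15 +
             ["somewhat difficult"] * 10 + ["very difficult"] * 5)

counts = Counter(responses)
total = len(responses)
for band in ["very easy", "somewhat easy", "neutral", "somewhat difficult", "very difficult"]:
    print(f"{band}: {100 * counts[band] / total:.0f}%")   # full five-band breakdown

top_two = counts["very easy"] + counts["somewhat easy"]
print(f"very or somewhat easy (collapsed): {100 * top_two / total:.0f}%")

The collapsed figure hides how the positive responses were distributed between the two bands, which is the interpretive detail lost in the paper's presentation.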
A wide variety of other tests could have replaced any one of the statistical tests employed. For example, instead of the CMH test, the chi-squared test could have been used: both are non-parametric tests and both look for a significant association between two variables (Parahoo, 2006).
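As a minimal sketch, again using invented counts rather than the study's data, a chi-squared test of independence on a single 2x2 table could be run as follows; unlike the CMH test, it does not adjust for strata.

import numpy as np
from scipy.stats import chi2_contingency

# invented counts; rows: improved / not improved, columns: experimental / control
observed = np.array([[55, 33],
                     [22, 42]])

chi2, p_value, df, expected = chi2_contingency(observed)
print(f"chi2({df}) = {chi2:.2f}, p = {p_value:.3f}")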
Ethically, the study appears quite sound at face value. Hek and Moule (2006) set out four ethical principles that should be considered in any research: veracity, justice, beneficence and fidelity/respect. The researchers, for example, showed beneficence. This corresponds to doing no harm to participants, ensuring that participants benefit from the study, and protecting the weak and vulnerable from harm (Hek & Moule, 2006). It is demonstrated here by the provision of at least the standard, approved intervention for all of the patients: no one was refused help.
Veracity refers to ensuring that the truth is always told to participants, and that they are entitled to full disclosure before participating in research (Hek & Moule, 2006). Again, this seems to have been implemented quite well: the fundamental design of the study was open, so that all involved were aware of which group they had been placed in.
Justice refers to treating all participants equally and not favouring some over others. It also includes being non-discriminatory and ensuring that patients' needs take priority over the study (Hek & Moule, 2006). The sampling method employed ensured that most discrimination was eliminated: the first 1000 attendees were asked to participate. Unfortunately, by providing extra support to one group, the research team could be seen to have favoured one group over the other. However, as the entire purpose of the study was to establish whether the extra intervention made a difference to outcomes, this flaw is inherently unavoidable.
Fidelity and respect refer to an array of factors, including promoting independence among participants, respecting autonomy and dignity, providing the right to self-determination and ensuring anonymity (Hek & Moule, 2006). Independence could be seen to have been promoted by the additional intervention itself: participants were encouraged to use the multimedia system as independently as possible. All patient data were anonymised by converting patient names into numbers prior to randomisation. Finally, the patients' right to withdraw is evident from the initial attrition experienced.
One serious ethical issue that seems to have been overlooked concerns consent. The researchers state that the child and caregiver had to be willing to sign a consent form before being allowed to participate; however, no mention is made of informed consent. Informed consent means that patients are given sufficient information, and sufficient time to process and understand that information, before consenting (Parahoo, 2006). It has been suggested that, for informed consent to be obtained, a 'cooling-off' period should be allowed, during which participants can change their minds and opt out.
The results of the study seem quite promising. From a wider perspective, they show that the experimental group fared better than the control group, indicating that the intervention provided was at the very least not detrimental to the overall health of the participants, and that the extra intervention improved outcomes. The results also appear to support the two hypotheses tested: lung function improved, and the improvement was shown to be statistically significant; and the additional intervention was found acceptable, again to a statistically significant degree at the stated p-value.
Looking more in depth, the analysis of the data does seem to have been slightly misconstrued, specifically in the interpretation of the data obtained from the additional-intervention questionnaire. From a clarity perspective, the results are quite difficult to interpret for an undergraduate student with little previous experience, although the target audience of the research is probably not undergraduate nursing students.
The overall validity and usefulness of the results are difficult to judge. For instance, the sample size was extremely small, and therefore when the researchers imply that multimedia education can improve all outcomes, the results do not necessarily support this claim.
The impact that this study will have on policy has the potential to be quite significant. Internet access is now quite common in the western world, and most children are familiar with computers and reasonably computer-literate. Therefore, adding this particular intervention into the current framework for care could be quite beneficial, although it would probably be more appropriate for further research to be carried out before widespread integration.
A recent study by Lintonen et al. (2007) intimated that the use of information technology in health promotion, although still in its infancy, had the potential to be developed into a powerful tool, as demonstrated by several current applications such as smoking cessation support.
The Essence of Care health promotion benchmarks (DOH, 2006) indicate that patients should have access to information in a way that meets their needs, and that a range of different methods should be used in health promotion. The system studied here could be another method by which this is achieved.
With regard to current policy, this study neither supports nor challenges it; instead, it seeks to augment current practice with the addition of another type of intervention. It does, however, raise some interesting questions regarding current policy: for example, why has this type of intervention not been further researched, or even already implemented?
As the research itself appears to be quite limited in several ways, further research into the topic could pave the way for such a system to be implemented within the UK. This further research would, of course, need to differ from the current study in several respects: a much larger sample size would help to strengthen the validity of the findings, while integrating more qualitative methods, such as interviewing a selection of participants, might provide further depth.
This particular piece of research fits into practice in a number of ways. It has allowed the exploration of several key ethical issues in more depth, such as consent issues for children and adults, and issues surrounding informed consent. As little was mentioned with regard to informed consent, it can only be assumed that this was not felt to be relevant, something that, from a healthcare perspective, can be seen as quite unprofessional. It has also shown that information technology in healthcare could be used much more comprehensively: the technology is available for health promotion systems such as this to be implemented, something that patients may find useful.
References
Arthur, S. and Nazroo, S. (2003). 'Designing fieldwork strategies and materials'. In J. Ritchie and J. Lewis (Eds.), Qualitative Research Practice (pp. 109-137). London: Sage.
Asthma UK (2004). What We Do. Available at: [accessed: 28th June, 2009]
Bell, J. (2005). Doing Your Research Project: A Guide for First-Time Researchers in Education, Health and Social Science (4th ed.). Buckingham: Open University Press.
Carter, D. E. (1996). 'Quantitative research'. In D. Cormack (Ed.), The Research Process in Nursing (4th ed.) (pp. 77-88). Oxford: Blackwell Science.
Cormack, D. and Benton, D. (2000). 'Asking the research question'. In D. Cormack (Ed.), The Research Process in Nursing (4th ed.) (pp. 77-88). Oxford: Blackwell Science.
Department of Health (2001). Hospital Admission Statistics 2001-02. Available at: [accessed: 18th June, 2009]
Department of Health (2004). Children's Asthma Framework. Available at: [accessed: 28th June, 2009]
Department of Health (2006). Essence of Care for Health Promotion. Available at: [accessed: 28th June, 2009]
Donnan, P. T. (1996). 'Quantitative analysis (descriptive)'. In D. Cormack (Ed.), The Research Process in Nursing (4th ed.) (pp. 77-88). Oxford: Blackwell Science.
Evidence-Based Medicine Working Group (1992). 'Evidence-based medicine: a new approach to teaching the practice of medicine'. Journal of the American Medical Association, 268(17), 2420-2425.
Hek, G. and Moule, P. (2006). Making Sense of Research (3rd ed.). London: Sage Publications.
Herek (1997). A Brief Introduction to Sampling. Available at: [accessed: 9th May, 2009]
Hunt, N. and Tyrrell, S. (2001). Stratified Sampling. Available at: [accessed: 12th July, 2009]
Krishna, S., Balas, E. A., Francisco, B. D. and Konig, P. (2006). 'Effective and sustainable multimedia education for children with asthma: a randomised control trial'. Children's Healthcare, 35(1), 75-90.
Leung, W. (2001). How to Design a Questionnaire. Available at: [accessed: 28th June, 2009]
Lintonen, T. P., Konu, A. I. and Seedhouse, D. (2007). 'Information technology in health promotion'. Health Education Research, June 2008, 560-566.
Maggs-Rapport, F. (2000). 'Combining methodological approaches in research: ethnography and interpretive phenomenology'. Journal of Advanced Nursing, 31(1), 219-225.
McMaster University Evidence Based Medicine Group (1996). Evidence-Based Medicine: The New Paradigm. Available at: [accessed: 10th July, 2009]
Milne (1999). Available at: [accessed: 10th July, 2009]
NMC (2008). The Code in Full. Available at: [accessed: 28th May, 2009]
Parahoo, K. (2006). Nursing Research: Principles, Process and Issues (2nd ed.). Basingstoke: Palgrave Macmillan.
Sackett, D. L. (1996). 'Evidence based medicine: what it is and what it isn't'. BMJ, 312, 71-72.
Statistics.com (2004). Generalized Cochran-Mantel-Haenszel Tests. Available at: [accessed: 10th July, 2009]
Webb, C. (2003). 'Research in brief. An analysis of recent publications in JCN: sources, methods and topics'. Journal of Clinical Nursing, 12, 931-934.