If the firm is looking for particular attitudes and beliefs connected to a service or product it offers, it is probably more advantageous to seek out a specific group that will want to use that service or product. For example, it would be of little use for a company investigating services for women suffering menopause to collect the attitudes of teenage males (or indeed anyone other than women experiencing, or worried about experiencing, the condition) (Conboy et al, 2001). Such responses would almost certainly confound the results and subsequent analysis. Simply putting the survey on the firm's own website should serve this purpose to some extent, as it will only be accessed by people with a vested interest in the subject matter. This may, however, prove too limiting. There are then two options for soliciting responses: active and passive advertisement (Bailey et al, 2000). Active advertisement involves posting requests on Internet 'newsgroups', which serve as discussion forums for specific issues. By targeting newsgroups associated with the firm's interests (such as those on menopause, or women's groups), the firm can ensure that all those involved will be informed about and interested in the survey subject. Passive advertisement involves indexing the questionnaire under search engines, so that when specific words are typed in (such as menopause, or women's problems) the survey is presented for the individual to access. Moreover, this technique can be adapted by using more or less specific search terms (menopause vs. woman) to broaden or narrow the pool of interested participants. It should be noted that such a technique may be more open to self-selection biases (Birnbaum, 2000).
The next thing to consider is the questions used within the study. Even before all the considerations regarding the presentation of the survey on the Internet, the structure of a questionnaire can be vitally important. It is necessary to decide what attitudes and beliefs are being measured, and how their existence, and subsequently their strength and nature, can best be explored. For this last point in particular, it is necessary to assess when and how often to use open and closed questions (Gillham, 2000). The closed style employs statements or questions to which the response is simply to circle one of a number of possibilities (e.g. Yes/No, or All the Time/Most of the Time/Sometimes/Never). These types of question have a number of obvious advantages, both in general and on the Web. Firstly, they are very easy to analyse statistically: each response can be given a value, and each grouping (such as the items for a particular attitude being measured) can easily be amalgamated to give a total score for that section. This is of particular benefit when the survey is presented on the Internet, as data collection and analysis can be fully automated as part of the server software. Automation even allows for instant feedback to the respondent, which may itself be an incentive to take part. Closed questions are also very effective at measuring attitude and belief strengths, as long as the presentation is counterbalanced to control for acquiescence effects (Gillham, 2000). A drawback of the closed question is that it can be very difficult, if not impossible, to tell whether a question has been misinterpreted and so answered inaccurately. This is all the more of a potential problem given that many Internet respondents may not have English as their native language (therefore another important question for the demographic form).
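The scoring scheme described above can be sketched in a few lines of code. This is a minimal illustration only: the item names, the four-point scale, and the choice of which items are reverse-coded (counterbalanced) are assumptions for the example, not part of any particular survey.

```python
# Minimal sketch of automated scoring for closed questions.
# Item names, the 4-point scale, and the reverse-coded item set
# are illustrative assumptions.

SCALE = {"Never": 1, "Sometimes": 2, "Most of the Time": 3, "All the Time": 4}
REVERSED = {"q2"}  # counterbalanced items, scored in the opposite direction

def score_section(responses):
    """Sum item values for one attitude section, flipping reversed items."""
    total = 0
    for item, answer in responses.items():
        value = SCALE[answer]
        if item in REVERSED:
            value = len(SCALE) + 1 - value  # maps 1<->4 and 2<->3
        total += value
    return total

print(score_section({"q1": "All the Time", "q2": "Never", "q3": "Sometimes"}))
```

Because each section reduces to a single number, section totals can be stored, compared and fed back to the respondent instantly, which is exactly the automation advantage described above.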
It is also important to consider the use of open questions, in which the respondent is asked about an attitude and then given a number of lines to word a response in their own style. It is much easier to tell from these responses whether a question has been misinterpreted, and they also provide extra information about beliefs and attitudes that a researcher may have overlooked and thus not included a direct question for. Two common problems with this question style are that the responses are not suitable for statistical analysis, and that they are often not answered very comprehensively or honestly, due to potential embarrassment and the need for social desirability (Schuman & Presser, 1996). The first criticism is even more important when considering translation to the Internet, as open questions do not allow for the previously mentioned automated data collection (which is useful as it is efficient, has much less potential for data input errors, and is easier for the researcher) (Michalak, 1998). Thus it may be useful for the researcher to use open questions in conjunction with closed ones, as a 'catch-all' method that can be checked manually afterwards for any information that could not be gained from the closed questions (and thus the statistical data). The second criticism is much less of a problem on the Internet: studies have shown that the extra anonymity it affords leads to a much lower level of social desirability and much less biased responses (Kiesler & Sproull, 1986). It has also been observed that responses to open questions on the Web are often much more comprehensive than their traditional paper equivalents (Krantz & Dalal, 2000). Open questions are particularly important during the vital piloting stage, as they allow the experimenter to see which areas have been overlooked and so need to be included or revised.
No matter what type of question is employed, the experimenter must be careful not to ask 'leading questions', worded in such a way as to bias responses by making one attitude or belief look more desirable than another.
When thinking about the creation of a survey, it is also very important (but often overlooked) to consider the order of the questions. It is the responsibility of the researcher to keep the respondent at ease, and to ensure that the line of questioning does not feel too personal or blunt. To this end a survey (particularly one investigating attitudes and beliefs) should start with some very light, general questions and then move gently into the more personal and serious areas. It is equally important to lead out of the questionnaire in the same style, finishing with more light-hearted questions that leave the respondent at ease. From another perspective, the order in which questions are laid out can often be used to 'cognitively guide' the respondent into a mindset that helps them give accurate responses (Schuman & Presser, 1996), and the questions can often be made to lead on from one another. It has been suggested that order is potentially jeopardised when the survey is presented on the Internet, as participants are more leisurely in their involvement and may skip between questions as they wish (Epstein et al, 2001). This need not be the case, as long as the firm creating the survey takes it into account. The program running the survey can be written so that the questions are presented only in small sets, and all fields (response blocks) must be filled in before the respondent may move on to the next set. In this way there is actually more potential control over order in Web-based surveys. While on the subject of respondent controls, two other problems are environment and presentation format on the Web. Both of these factors have been shown significantly to affect responses to questionnaires, and yet both are very difficult to control when Internet users are taking part in the survey.
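The paged-delivery idea above can be sketched as a simple server-side rule: serve the questions in small fixed sets, and refuse to advance until every field on the current page is answered. The page contents and function names here are illustrative, not a real survey package's API.

```python
# Sketch of server-side enforcement of question order: items are served
# in small fixed pages, and a page is accepted only when every field is
# answered. PAGES and the question names are illustrative assumptions.

PAGES = [["q1", "q2"], ["q3", "q4"], ["q5"]]

def next_page(answers, page_index):
    """Return the next set of questions, None if the current page is
    incomplete (the respondent may not skip ahead), or [] at the end."""
    current = PAGES[page_index]
    if any(answers.get(q) in (None, "") for q in current):
        return None  # refuse to advance until all fields are filled
    return PAGES[page_index + 1] if page_index + 1 < len(PAGES) else []

print(next_page({"q1": "Yes"}, 0))               # None: q2 missing
print(next_page({"q1": "Yes", "q2": "No"}, 0))   # ['q3', 'q4']
```

Because the respondent never sees a question before its page is reached, the 'cognitive guiding' order is preserved at least as well as on paper.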
Environment can easily be regulated in traditional surveys, but for Internet users it can range from home to school or work, and can vary in background noise, time taken, interruptions and distractions, and even time of day. The best that can be done here is to present a brief preliminary statement before the survey is administered, requesting that a few basic rules be adhered to regarding some of these factors. These should not be made too stringent, however, as otherwise potential participants may be 'scared off'. The issue of presentation format is currently much more of a concern, as it can be greatly affected by the type of computer being used, the service provider through which the Internet is accessed, and the browser software used to view the page. In both the presentation and environment cases, it may be important to ask some questions (again probably as part of the demographic form) about all of these, so that any potentially confounding variables can be considered. Once more this would be particularly useful as part of the piloting study, as specific tests can then be run on the data to see which, if any, of these factors significantly affect responses and so should be guarded against.
It is of prime importance to consider the security of the information collected when conducting a survey. The ethical guidelines of the various national psychological associations all state that it is the responsibility of any psychologist to respect the confidentiality of their subjects. Moreover, a basic prerequisite of a survey such as those discussed here is a statement that the information being collected will only be used in the form of anonymous data, and only for the purposes of this research, unless permission is specifically sought for the reproduction of any quotes. Unfortunately the very nature of the Internet means that data protection in this public domain can be difficult (Liaw, 2002). What may be necessary is some form of internal network for storage of the information, with a 'gateway' computer providing encryption and controlled access. Another option is some form of site registration with passwords. This would be an inconvenience for many users, but could be incorporated with the demographic questions to save time and effort.
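One small piece of the anonymisation requirement can be sketched in code: if the site uses registration, the login name can be replaced by a salted one-way hash before any response is stored, so the stored data remain anonymous while repeat respondents can still be recognised. The salt value and field names are invented for the example.

```python
# Sketch of one way to keep stored responses anonymous: replace any
# identifying login name with a salted one-way hash before the data
# are stored. The salt value and record layout are illustrative.

import hashlib

SALT = b"survey-2024"  # assumed secret value held only on the gateway machine

def anonymise(username):
    """Map a login name to an irreversible pseudonymous ID."""
    return hashlib.sha256(SALT + username.encode("utf-8")).hexdigest()[:12]

record = {"respondent": anonymise("jane.doe"), "q1": "Yes"}
print(record["respondent"])  # same user always maps to the same ID
```

This addresses only stored data; protecting responses in transit would still require the encryption and controlled access mentioned above.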
There are, then, a number of suggestions that can be made to a firm wishing to run attitude and belief surveys on its website. They should identify their targets and collect their sample accordingly. They should carefully consider which types of questions to employ, and how to analyse the results. They should be mindful of order effects and construct the survey accordingly. They should ensure that they are able to follow the ethical guidelines established in psychology, with particular emphasis on data protection. It is also advisable that they collect as much demographic data as possible on the respondents, so that they can understand who they are working with and what effects this may have. They should seek to control as many extraneous variables as possible. The benefits of running a survey on the Internet are obvious: it allows access to a great number of people quickly and efficiently, it is far cheaper and easier to organise than a traditional paper-and-pencil survey, and it allows the automation of much of the time-consuming data transfer and analysis. Despite all this, the last piece of advice for the firm might be that there is great benefit in running their Internet survey in conjunction with a traditional one. There are still a number of questions yet to be fully answered regarding the effective use of Web-based surveys, and still a number of potential confounds that should be explored; the validity of Web-based studies has still to be adequately assessed (Buchanan & Smith, 1999). By comparing the Web survey results to those of the same questionnaire run in the traditional manner (perhaps on a much smaller sample, just for comparative purposes), the reliability of the findings could be more accurately assessed.
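The suggested web-versus-paper comparison could begin with something as simple as comparing section-score means and spread across the two samples before moving to formal tests. The scores below are invented illustrative data, not results from any study.

```python
# Sketch of a first-pass comparison between a Web sample and a smaller
# paper-and-pencil sample on the same questionnaire section.
# The score lists are invented illustrative data.

from statistics import mean, stdev

web_scores = [12, 14, 11, 15, 13, 12, 14, 13]
paper_scores = [13, 12, 14, 13, 12]

def summarise(scores):
    """Return the mean and sample standard deviation of a score list."""
    return mean(scores), stdev(scores)

for name, scores in [("web", web_scores), ("paper", paper_scores)]:
    m, s = summarise(scores)
    print(f"{name}: mean={m:.2f} sd={s:.2f}")
```

Similar means and spreads across the two modes would lend some support to the reliability of the Web findings; a substantial difference would flag the mode itself as a confound to investigate.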
References:
Bailey, R.D., Foote, W.E., & Throckmorton, B. (2000) Human Sexual Behaviour: A Comparison of College and Internet Surveys in M.H. Birnbaum (Ed) (2000) Psychological Experiments on the Internet. Academic Press (London).
Best, S.J., Krueger, B., Hubbard, C., & Smith, A. (2001) An Assessment of the Generalisability of Internet Surveys in Social Science Computer Review # 19 (2), 131-145.
Birnbaum, M.H. (2000) Psychological Experiments on the Internet. Academic Press (London).
Buchanan, T., & Smith, J.L. (1999) Using the Internet for Psychological Research: Personality Testing on the Web in British Journal of Psychology # 90, 125-144.
Conboy, L., Domar, A., & O'Connell, E. (2001) Women at Mid-life: Symptoms, Attitudes and Choices, an Internet-Based Study in Maturitas # 38, 129-136.
Epstein, J., Klinkenberg, W.D., Wiley, D., & McKinley, L. (2001) Insuring Sample Equivalence Across Internet and Paper-and-Pencil Assessments in Computers in Human Behaviour # 17, 339-346.
Gillham, B. (2000) Developing a Questionnaire. Continuum (London).
Kiesler, S., & Sproull, L.S. (1986) Response Effects in the Electronic Survey in Public Opinion Quarterly # 50, 402-413 –Cited in Birnbaum, 2000.
Krantz, J.H., & Dalal, R. (2000) Validity of Web-based Psychological Research in M.H. Birnbaum (Ed) (2000) Psychological Experiments on the Internet. Academic Press (London).
Liaw, S.S. (2002) An Internet Survey for Perceptions of Computers and the World Wide Web: Relationship, Prediction and the Difference in Computers in Human Behaviour # 18, 17-35.
Michalak, E.E. (1998) The Use of the Internet as a Research Tool: the Nature and Characteristics of Seasonal Affective Disorder (SAD) Amongst a Population of Users in Interacting with Computers # 9, 349-365.
Schuman, H., & Presser, S. (1996) Questions and Answers in Attitude Surveys. Sage Publications (London).