INTRODUCTION TO QUANTITATIVE RESEARCH METHODS CHAPTER 4


4 Methods of Inquiry

'I have no data yet. It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.'
Sherlock Holmes, A Scandal in Bohemia

Holmes's methods of detection, he said, were 'an impersonal thing – a thing beyond myself'. The methods of quantitative social science research are similarly a thing apart from us. Our research designs and our research definitions are open to scrutiny and criticism. Even if we guess in social science research – abduction – we still need to test our guesses, our observations, with data. This is Holmes's point. 'Data' is essential before we start to make 'why' or 'because' conclusions from our observations. But recognizing what is and what is not a clue, data, is itself an art, as we saw in the last chapter.

Brother Cadfael, the monk-detective in Ellis Peters' novels, always held back on his decisions on what was and what was not a 'clue'. In The Sanctuary Sparrow a young man comes to the abbey seeking sanctuary, safety, after being chased and beaten by men seeking his death. The abbot asks the men why they are chasing the young man:

'My Lord, I will speak for all, I have the right. We mean no disrespect to the abbey or your lordship, but we want that man for murder and robbery done tonight. I accuse him! All here will bear me out. He has struck down my father and plundered his strong-box, and we are come to take him. So if your lordship will allow, we'll rid you of him.' (Peters, 1985: 11–12)

The abbey looks after the young man while Brother Cadfael investigates. 'We have time, and given time, truth will out', says Cadfael (Peters, 1985: 23). Cadfael senses that the young man is innocent but does not let this influence his thinking on innocence or guilt in his investigation.
Lord Peter Wimsey, Dorothy Sayers' aristocrat detective, is also warned by Parker, his police friend, not to accept uncritically what appears to be obvious. Wimsey is not amused.

'Five-foot ten,' said Lord Peter, 'and not an inch more.' He peered dubiously at the depression in the bed-clothes, and measured it a second time with the gentleman-scout's vade-mecum. Parker entered this particular in a neat pocket-book.
'I suppose,' he said, 'a six-foot-two man might leave a five-foot-ten depression if he curled himself up.'
'Have you any Scotch blood in you, Parker?' inquired his colleague, bitterly.
'Not that I know of,' replied Parker. 'Why?'
'Because of all the cautious, ungenerous, deliberate and cold-blooded devils I know,' said Lord Peter, 'you are the most cautious, ungenerous, deliberate and cold-blooded. Here am I, sweating my brains out to introduce a really sensational incident into your dull and disreputable little police investigation, and you refuse to show a single spark of enthusiasm.'
'Well, it's no good jumping at conclusions.'
'Jump? You don't even crawl distantly within sight of a conclusion. I believe if you caught the cat with her head in the cream-jug, you'd say it was conceivable that the jug was empty when she got there.'
'Well, it would be conceivable, wouldn't it?'
'Curse you,' said Lord Peter. (Sayers, 1989: 54–55)

INVOLVEMENT AND METHOD

A good research design reduces the risk of bias and of 'jumping the gun' on conclusions. A good research design is careful in its decision on what counts as a 'clue'. The men chasing the young man thought that they had the right clues, but they did not. This is not to say that there should be no personal involvement in research. Some methods of detection in social science research involve the researcher as the 'data collecting instrument', such as participant observation.
Participant observation – for example, living with a traditional society in a remote village in Indonesia – requires a research design. Figure 4.1 provides an overview of the relationship between methods of data collection and involvement. Social surveys and structured interviews involve standardized questions for large groups or populations.

[FIGURE 4.1 Methods of data collection and personal involvement (adapted from Worsley, 1977). The figure arranges methods from many participants and low personal involvement to few participants and high personal involvement: social surveys and structured interviews; semi-structured interviews and focus groups; in-depth interviews; observation; participant observation.]

Semi-structured interviews and focus groups involve more open questions or prompts. The researcher is not personally involved with participants. In-depth interviews, observation and participant observation, however, assume smaller numbers and may entail greater personal involvement by the researcher. Notice that 'experiment' has not been included in Figure 4.1. Experiments are a separate case. Small numbers of participants may be involved but the researcher is 'experimenter' rather than 'participant'. A participant observation study, in contrast, involves the researcher directly in the lives of the people that they are studying. The 'data' or 'evidence' in a participant observation may be the accounts of the participants and the accounts of the researcher. These 'accounts' are not necessarily measured. In quantitative studies, such as experiment, the observations are measured. As we found in Chapter 3, the collection of statistics requires a particular kind of research design. Figure 4.2 is a checklist on this design. We have, to this point, introduced the whole process associated with operationalization, including the literature review. We have not examined, however, the methods themselves or data analysis.
Most modern research methods use a range of data collection techniques – questionnaires, structured interviews, in-depth interviews, observation and content analysis. The three most common forms of data collection are case study, survey and experiment. Case studies investigate 'what is happening' and are very common in policy research and in exploratory work.

[FIGURE 4.2 Checklist for research design. The checklist runs: type of inquiry (exploration, description, explanation); units of analysis (individuals, groups, organizations); sampling (probability, non-probability); hypothesis/research question; time dimension (cross-sectional, longitudinal); method (case study, survey, experiment); measurement (operational definitions); data analysis.]

[FIGURE 4.3 Research methods and techniques of data collection (based on De Vaus, 1990: 6. Used by permission). A research question or hypothesis leads to a case study, survey or experiment, each of which can use questionnaires, interviews, content analysis or observation.]

A survey in comparison can cover a range of issues and normally results in a variable by case matrix (person by age, person by education). Questionnaire is one of the most common ways of collecting data for a variable by case data matrix, but it is not the only way. Experiments, like surveys, result in a variable by case matrix. In experiments, however, there is also the intervention by an experimenter. Figure 4.3 provides a summary of the major methods.

In the modern mind experiments are often associated with 'laboratory research', in particular experiments with rats (and white rats at that). But the motivation for 'experiments' has a long history. For Francis Bacon, a philosopher of science, the goal of an experiment is to 'put nature to the test'. Everyone knows that science does experiments, but let us investigate further how experiments differ from other types of methods for analysing observations.
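The variable by case matrix can be illustrated with a short sketch. All the names and values below are invented for illustration: each case (a person) is a row, each variable (age, education) a column, and analysis then proceeds variable by variable across cases.

```python
# A tiny variable-by-case data matrix: each row is a case (a person),
# each key is a variable. All values are invented for illustration.
cases = [
    {"person": "A", "age": 34, "education": "secondary"},
    {"person": "B", "age": 51, "education": "tertiary"},
    {"person": "C", "age": 28, "education": "tertiary"},
]

# Survey analysis works variable-wise down the columns of the matrix:
ages = [row["age"] for row in cases]
mean_age = sum(ages) / len(ages)
print(round(mean_age, 1))  # prints 37.7, the mean age across the cases
```

The same column-wise logic underlies cross-tabulations such as person by age or person by education mentioned above.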
EXPERIMENTAL DESIGN

O, vengeance!
Why, what an ass am I! This is most brave,
That I, the son of a dear father murder'd,
Prompted to my revenge by heaven and hell,
Must, like a whore, unpack my heart with words,
And fall a-cursing like a very drab,
A scullion! Fie upon't! foh! About, my brain! I have heard
That guilty creatures, sitting at a play,
Have by the very cunning of the scene
Been struck so to the soul that presently
They have proclaim'd their malefactions;
For murder, though it have no tongue, will speak
With most miraculous organ. I'll have these players
Play something like the murder of my father
Before mine uncle: I'll observe his looks;
I'll tent him to the quick: if he but blench,
I know my course. The spirit that I have seen
May be the devil: and the devil hath power
To assume a pleasing shape; yea, and perhaps
Out of my weakness and my melancholy,
As he is very potent with such spirits,
Abuses me to damn me: I'll have grounds
More relative than this: the play's the thing
Wherein I'll catch the conscience of the king.
Hamlet, Act II, Scene II

Hamlet, one of Shakespeare's most famous characters, is not your traditional detective, but he took up the role of detective. Hamlet is not a scientist, but he took up the role of experimenter. Hamlet was told by a ghost that the king had killed his father. Hamlet wanted to investigate the claim. Hamlet also wanted to create situations that tested those he thought were participants in the murder. In this case he wanted to create a play for the king which was a recreation of the king's murder of Hamlet's father. The play, Hamlet thought, would get the king to declare his guilt; at least that was the plan. Hamlet created an experiment – he wanted to manipulate situations in order to observe what the effects would be. He wanted a clear and unambiguous sign that the king was the murderer. Hamlet found, though, that life is messy.
Trying to test everyday life has its downsides. Columbo, the 1970s television detective, also took an experimental approach to his detection. When he thought that he knew who the murderer was, he would return again and again to the suspect to see what her or his reaction would be. Each time that a suspect thought that Columbo had finished questioning and was about to leave, Columbo would return to ask about '. . . one more thing'. Columbo's approach was intentionally annoying, leading the suspect to make errors.

Experiments for the scientist are the ideal way of collecting knowledge. They allow for the identification of separate variables and keep all extraneous – unwanted – variables controlled. An experiment is 'controlled observations of the effects of a manipulated independent variable on some dependent variable' (Schwartz, 1986: 5). We might want to test, for example, a new psychotherapy for people who have a fear of detective fiction. We could find a sample of sufferers, have them undergo the psychotherapy and see if their fear disappears. The problem with this approach is that even if patients improve, we cannot be sure that the therapy was responsible. It may be that people with a fear of detective fiction improve by themselves (spontaneously) or it may be that something in the therapeutic situation other than the psychotherapy itself (having someone care) was responsible for improvement. The only way to find out for sure that the psychotherapy was the 'cause' is to control for these extraneous factors by conducting a true experiment. This means creating a second group of people who fear detective fiction (called the control group) but who do not get the psychotherapy. If they improve as much as the group that does get psychotherapy, then factors other than the psychotherapy may be the answer. There is always the possibility, of course, that simply getting attention from the therapist affects those with the phobia. This is a 'placebo effect'.
A placebo control group, under such circumstances, might also get attention, although not the psychotherapy, from a therapist. If both groups improved under these conditions, then we would probably rule out the psychotherapy as the cause. Figure 4.4 gives an overview of basic experimental design. As you can see, the skill of an experiment is in the ability to control variables, including assignment to the experimental and control groups. Ideally, the experimental and control groups need to be the same before the experiment starts. If the phobias of one group are greater than those of the other, you can see that the results will not be reliable. Participants are often assigned at random to experimental and control groups in the hope that this will produce equivalent groups. The skill of experimental method also includes choosing a study that in fact requires an experimental design. Examine the statements below:

1 Women believe that they are better at dancing than men.
2 Children who are sensitive to poetry in early childhood make better progress in learning to read than those who do not.
3 Remembering a list of items is easier if the list is read four times rather than once.

All these hypotheses involve relationships between variables. However, the last item is most appropriate to experimental method. The first question is about belief, rather than behaviour. The second question involves natural language, which, by its nature, is difficult to manipulate. The last question is an obvious candidate for a classical experimental design.

[FIGURE 4.4 Basic experimental design. The dependent variable (fear of detective fiction) is measured in both the experimental group and the control group, which should be the same at the start. The experimental group is then administered the psychotherapy, and the dependent variable is remeasured in both groups to see whether there has been a change.]

Manipulating and controlling variables in social science research has its limitations.
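The control-group logic described above can be sketched in a few lines of Python. Everything here is invented for illustration (the scores, the effect sizes and the function name run_trial are assumptions, not data from any real study); the sketch simply shows why comparing the change in a treated group against the change in a placebo control isolates the treatment effect.

```python
import random
import statistics

def run_trial(baseline_scores, treatment_effect=2.0, placebo_effect=1.0):
    """Sketch of a pre-test/post-test control-group design.

    baseline_scores: list of pre-test fear scores (higher = worse).
    The effect sizes are illustrative assumptions, not real estimates.
    """
    # Random assignment: shuffle, then split, so the groups are
    # equivalent on average before the intervention.
    random.shuffle(baseline_scores)
    half = len(baseline_scores) // 2
    experimental, control = baseline_scores[:half], baseline_scores[half:]

    # Post-test: both groups benefit from attention (the placebo effect);
    # only the experimental group also gets the psychotherapy itself.
    post_exp = [s - placebo_effect - treatment_effect for s in experimental]
    post_ctl = [s - placebo_effect for s in control]

    # The estimated treatment effect is the difference between the
    # two groups' average improvements.
    change_exp = statistics.mean(experimental) - statistics.mean(post_exp)
    change_ctl = statistics.mean(control) - statistics.mean(post_ctl)
    return change_exp - change_ctl

random.seed(1)
baseline = [random.gauss(10, 2) for _ in range(100)]
print(round(run_trial(baseline), 2))  # recovers the assumed effect of 2.0
```

Because the placebo effect appears in both groups, it cancels out of the difference, which is exactly the point of the control group.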
Hamlet was planning to intervene in people's lives to see how they reacted. This raises obvious issues about right and wrong – ethics. You cannot create brain-damaged people, for example, to see how brain damage affects their driving behaviour. In such cases we would be looking at choosing brain-damaged people after they had received their injuries from accidents. Such selection is called ex post facto experimentation. The nature of the intervention in many ways defines the experimental design that is most appropriate for your study.

There can be little doubt that 'experimental science' has affected research design, society itself and people's assumptions about cause and effect. If experiments can establish causes, then identification of causes can assist all areas of life, including business. But there is a major difference between establishing cause and establishing correlation. Kaplan (1987: 238–239) demonstrates this in a simple way. He cites a newspaper article on the stressfulness of occupations. A study investigated 130 job categories and rated them on stressfulness using Tennessee hospital and death records as evidence of stress-related diseases such as heart attack and mental disorder. Jobs such as unskilled labourer, secretary, assembly-line inspector, clinical lab technician, office manager and foreperson were listed as 'most stressful', and jobs such as clothing sewer, garment checker, stock clerk, skilled craftsperson, housekeeper and farm labourer were labelled as 'least stressful'. The newspaper advised people to avoid the stressful occupations. Kaplan (1987) points out that the evidence may not warrant the newspaper's advice. It is possible that diseases are associated with specific occupations, but this does not mean that holding the jobs causes the illnesses. People with a tendency to heart attack, for example, might be more likely to select jobs as unskilled labourers.
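This pitfall, association without direct causation, can be demonstrated with a small simulation. All the numbers and variable names below are invented for illustration: economic status is made a common cause of both job stressfulness and illness, neither of which affects the other, yet the two still correlate strongly.

```python
import random
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient for two equal-length lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

random.seed(42)
n = 5000
# Economic status is the common cause (the hypothetical third variable).
economic_status = [random.gauss(0, 1) for _ in range(n)]
# Job stressfulness and illness each depend on economic status plus
# independent noise, but neither has any direct effect on the other.
job_stress = [-0.7 * e + random.gauss(0, 1) for e in economic_status]
illness = [-0.7 * e + random.gauss(0, 1) for e in economic_status]

# Clearly positive, even though neither variable causes the other.
print(round(pearson_r(job_stress, illness), 2))
```

An observed correlation like this is exactly what a naive reading of the occupation study would mistake for evidence that the jobs cause the illnesses.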
The direction of causation might be that the state of health causes job selection. Another alternative is that a third variable is having an effect. Income levels, for instance, might affect both stress and illness. 'It is well known that poor people have lower health status than wealthy people' (1987: 239). Let's look at three possible cases of causation:

1 Job → Illness
2 Illness → Job
3 Job ← Economic Status → Illness

In the first, the job causes the illness. In the second, there is a tendency of people with illnesses to select particular jobs. In the third, economic status, a third variable, affects both job choice and illness. To establish causation we would need to know that both X and Y variables co-vary, that X precedes Y in time, and that no other variable is the cause of the change.

At the beginning of the 20th century the idea that experimental social science could easily establish causes was particularly appealing to industries involved in human persuasion. The advertising industry trade journals at the beginning of the century, for example, made it clear that an understanding of the psychology of audiences was essential for advertising success and that this was what their clients were paying for. In 1920, Professor Elton Mayo, chair of Psychology and Ethics at Queensland University, gave the major address at the Second Advertising Men's Conference:

The ad. expert is an educator in the broadest and highest sense of the term. His task is the persuasion of the people to be civilized. . . . You must think for the housewife and if you do that for her and if she finds you are doing it, you will have her confidence. . . . It is necessary to understand the fear complexes that are disturbing our social serenity. It is not the slightest use meeting Satanism or Bolshevism by organized rage or hate. Your only chance of dealing with these things is by research, by discovering first and foremost the cause of this mental condition.
(cited in Braverman, 1974: 144–145)

Mayo went on to be internationally famous in the area of industrial psychology and was involved in the famous Hawthorne Experiments in the 1930s and 1940s. The linkage of scientific experimental psychological research to commercial needs was well established in the United States by 1920 with the publication of Walter Dill Scott's Psychology and Advertising. In 1922, J.B. Watson, the famous behavioural psychologist, was appointed vice-president of the advertising company J. Walter Thompson. Professor Tasman Lovell, who in 1923 became Australia's first chair of psychology, joined the chorus of voices for detailed scientific research of consumer attitudes. An advocate of behavioural psychology, he proclaimed the need for advertising men to 'become versed in the study of instinctive urges, of native tendencies for the need to assert himself, "to keep his end up", which is an aspect of the social instinct that causes him to purchase beyond what is required'.

It was not until the mid-1930s, however, when audited circulations of newspapers were available, that advertising firms introduced market analysis on a large scale. J. Walter Thompson (JWT), an established American advertising agency, employed two psychologists, A.H. Martin and Rudolph Simmat, to oversee advertising research. Martin used mental tests he had developed at Columbia University to measure consumer attitudes towards advertising. In 1927 he established the Australian Institute of Industrial Psychology in Sydney with the support of the University of Sydney's psychology department and the Chamber of Manufacturers. The Institute brought 'local business men in contact with advanced business practices'. Simmat was appointed research manager for JWT when it established its Australian branch in 1929. JWT standardized art production and research procedures, including segmentation of audiences.
The agency divided Australian society into four market segments, based on income. Classes A and B were high income housewives. Classes C and D were average or below average income housewives. Class D had 'barely sufficient or even insufficient income to provide itself with the necessities of life. Normally Class D is not greatly important except to the manufacturer of low price, necessary commodities' (Simmat, 1933: 12). Interviewing techniques were also standardized by Simmat, for whom experience had shown that women were usually more effective as fieldworkers than men. 'Experiments have indicated that persons with a very high grade of intelligence are unsatisfactory for interviewing housewives . . . usually a higher grade of intelligence is required to interview the higher class of housewife than is required to interview the lower grade housewife' (Simmat, 1933: 13). By 1932 JWT had interviewed 32,000 Australian housewives. Advertising was targeted to specific audiences, with sophistication 'definitely soft-pedaled' for Classes C and D. 'We believe that farce will be more popular with our Rinso [detergent] market than too much subtlety.'

Lever, a soap manufacturer, was one of the first and major supporters of 'scientific advertising'. Simmat expressed Lever's vision when he said that 'Advertising enables the soap manufacturer to regard as his legitimate market every country where people wash or ought to wash'. Lever was the largest advertiser of the period. In 1933–34 Lever bought 183,000 inches of advertising space in metropolitan dailies. Soap, a simple product, crossed all market segments.

The confidence among social scientists at the beginning of the 20th century that they could establish 'cause and effect' was brazen, to say the least. Psychoanalysts also sold their expertise in establishing 'causes' of behaviour.
Take, for example, the illustrious Dr Ernest Dichter of the Institute of Motivational Research, who in the 1950s lectured to packed halls of advertisers and their agents about why people buy their goods.

They must have been among the strangest gatherings held for Sydney and Melbourne businessmen. Developing his theme that 'the poorest way to convince is to give facts', he led his listeners into psycho-analysis, folklore, mythology, and anthropology. He told them of some of his case histories. There was the Case of the Nylon Bed Sheets. Women would not buy Dupont's nylon non-iron bed sheets, though they were good quality and competitively priced. In despair they consulted Dr. Dichter. He drew up his questionnaire and sent his researchers to interview the women. After exploring their answers and looking into the sexual and folk associations of bed sheets he discovered that the women were unconsciously jealous of the beautiful blonde lying on the sheets in the advertisements. (Actually, they said their husbands wouldn't like them.) When Grandma was substituted for the blonde, up went the sales. ('I'm surprised,' he said, 'that most of my theories work.') Then there was the Blood and Virility Case. Men had stopped giving blood to the Blood Bank. When consulted, Dr. Dichter discovered they unconsciously feared castration or loss of masculinity. The Bank's name was changed to the Blood Lending Bank, advertisements of beautiful girls trailing masculine blood-donors were prepared, and all went well. (Jones, 1956: 23)

Meanwhile, actual experiments were far more conservative in their conclusions and far more useful than Dichter's theories (guesses?) about the effects of advertising. Carl Hovland's experimental research on the effects of propaganda is a good example. He provided wartime research for the Information and Education division of the US army.
Early in 1945 the Army reported that morale was being negatively affected by over-optimism about an early end to the war. The Army issued a directive to the troops informing them of the difficult tasks still ahead. The Army wanted to emphasize that the war could take longer than presumed. The directive provided an ideal topic for research – which messages are best for influencing people? Hovland et al. (1971) used the directive in an experiment on the effect of presenting 'one side' versus 'both sides' in changing opinions on a controversial subject, namely the time it would take to end the war. The Armed Forces Radio Services, using official releases, constructed two programmes in the form of a commentator's analysis of the Pacific war. The commentator's conclusion was that it would take at least two years to finish the war in the Pacific after Victory in Europe.

'One Side'. The major topics included in the program which presented only the arguments indicating that the war would be long (hereafter labeled Program A) were: distance problems and other logistical difficulties in the Pacific; the resources and stock piles in the Japanese empire; the size and quality of the main bulk of the Japanese army that we had not yet met in battle; and the determination of the Japanese people. This program ran for about fifteen minutes.

'Both Sides'. The other program (Program B) ran for about nineteen minutes and presented all of these same difficulties in exactly the same way. The additional four minutes in this later program were devoted to considering arguments for the other side of the picture – U.S. advantages and Japanese weaknesses such as: our naval victories and superiority; our previous progress despite a two-front war; our ability to concentrate all our forces on Japan after V-E Day; Japan's shipping losses; Japan's manufacturing inferiority; and the future damage to be expected from our expanding air war.
These additional points were woven into the context of the rest of the program, each point being discussed where it was relevant. (1971: 469)

Hovland conducted an initial survey of the troops in the experiment to get an idea of their opinions about the Pacific war before hearing the broadcast, in order to compare these with their opinions after the broadcast. The following tables, from Hovland's data, show that the effects differed for the two ways of presenting the messages depending on the initial stand of the listener. Table 4.1 shows that two-sided messages were effective for those who already estimated a short war and one-sided messages were more effective