Discussion: Discuss, elaborate, and give examples on the topic below. Please use the reference/module I provided below. The professor won't consider outside sources. Please be careful with spelling and grammar.
Author: Jackson, S. L. (2017). Statistics plain and simple (4th ed.). Boston, MA: Cengage Learning.
When there is a between-subjects design, use a one-way between-subjects ANOVA, which uses only one independent variable. 275 words.
In this chapter, we discuss the common types of statistical analyses used with designs involving more than two groups. The inferential statistics discussed in this chapter differ from those presented in the previous two chapters. In Chapter 5, single samples were being compared to populations (z test and t test), and in Chapter 6, two independent or correlated samples were being compared. In this chapter, the statistics are designed to test differences between more than two equivalent groups of subjects.
Several factors influence which statistic should be used to analyze the data collected. For example, the type of data collected and the number of groups being compared must be considered. Moreover, the statistic used to analyze the data will vary depending on whether the study involves a between-subjects design (designs in which different subjects are used in each group) or a correlated-groups design. (Remember, correlated-groups designs are of two types: within-subjects designs, in which the same subjects are used repeatedly in each group, and matched-subjects designs, in which different subjects are matched between conditions on variables that the researcher believes are relevant to the study.)
We will look at the typical inferential statistics used to analyze interval-ratio data for between-subjects designs. In Module 13 we discuss the advantages and rationale for studying more than two groups; in Module 14 we discuss the statistics appropriate for use with between-subjects designs involving more than two groups.
• Explain what additional information can be gained by using designs with more than two levels of an independent variable.
• Explain and be able to use the Bonferroni adjustment.
• Identify what a one-way between-subjects ANOVA is and what it does.
• Describe what between-groups variance is.
• Describe what within-groups variance is.
• Understand conceptually how an F-ratio is derived.
The experiments described so far have involved manipulating one independent variable with only two levels—either a control group and an experimental group or two experimental groups. In this module, we discuss experimental designs involving one independent variable with more than two levels. Examining more levels of an independent variable allows us to address more complicated and interesting questions. Often, experiments begin as two-group designs and then develop into more complex designs as the questions asked become more elaborate and sophisticated.
Researchers may decide to use a design with more than two levels of an independent variable for several reasons. First, it allows them to compare multiple treatments. Second, it allows them to compare multiple treatments to no treatment (the control group). Third, more complex designs allow researchers to compare a placebo group to control and experimental groups (Mitchell & Jolley, 2001).
To illustrate this advantage of more complex experimental designs, imagine that we want to compare the effects of various types of rehearsal on memory. We have participants study a list of 10 words using either rote rehearsal (repetition) or some form of elaborative rehearsal. In addition, we specify the type of elaborative rehearsal to be used in the different experimental groups. Group 1 (the control group) uses rote rehearsal, Group 2 uses an imagery mnemonic technique, and Group 3 uses a story mnemonic device. You may be wondering why we do not simply conduct three studies or comparisons. Why don’t we compare Group 1 to Group 2, Group 2 to Group 3, and Group 1 to Group 3 in three different experiments? There are several reasons why this is not recommended.
You may remember from Module 11 that a t test is used to compare performance between two groups. If we do three experiments, we need to use three t tests to determine any differences. The problem is that using multiple tests inflates the Type I error rate. Remember, a Type I error means that we reject the null hypothesis when we should have failed to reject it; that is, we claim that the independent variable has an effect when it does not. For most statistical tests, we use the .05 alpha level, meaning that we are willing to accept a 5% risk of making a Type I error. Although the chance of making a Type I error on one t test is .05, the overall chance of making a Type I error increases as more tests are conducted.
Imagine that we conducted three t tests or comparisons among the three groups in the memory experiment. The probability of a Type I error on any single comparison is .05. The probability of a Type I error on at least one of the three tests, however, is considerably greater. To determine the chance of a Type I error when making multiple comparisons, we use the formula 1 − (1 − α)^c, where c equals the number of comparisons performed. Using this formula for the present example, we get
1 − (1 − .05)^3 = 1 − (.95)^3 = 1 − .86 = .14
Thus, the probability of a Type I error on at least one of the three tests is .14, or 14%.
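The familywise error calculation above is easy to reproduce; here is a minimal Python sketch of the 1 − (1 − α)^c formula (illustrative, not part of the original text):

```python
# Probability of at least one Type I error across c comparisons,
# each conducted at the same per-test alpha: 1 - (1 - alpha)^c.
alpha = 0.05
c = 3  # number of pairwise comparisons among the three groups
familywise_error = 1 - (1 - alpha) ** c
print(round(familywise_error, 2))  # 0.14, matching the text
```

Note that the exact value is .142625; the text's .14 reflects rounding at each step.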
Bonferroni adjustment A means of setting a more stringent alpha level in order to minimize Type I errors.
One way of counteracting the increased chance of a Type I error is to use a more stringent alpha level. The Bonferroni adjustment, in which the desired alpha level is divided by the number of tests or comparisons, is typically used to accomplish this. For example, if we were using the t test to make the three comparisons just described, we would divide .05 by 3 and get .017. By not accepting the result as significant unless the alpha level is .017 or less, we minimize the chance of a Type I error when making multiple comparisons. We know from discussions in previous modules, however, that although using a more stringent alpha level decreases the chance of a Type I error, it increases the chance of a Type II error (failing to reject the null hypothesis when it should have been rejected—missing an effect of an independent variable). Thus, the Bonferroni adjustment is not the best method of handling the problem. A better method is to use a single statistical test that compares all groups rather than using multiple comparisons and statistical tests. Luckily for us, there is a statistical technique that will do this—the analysis of variance (ANOVA), which will be discussed shortly.
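The adjustment itself is just a division; a quick Python sketch of the computation described above:

```python
# Bonferroni adjustment: divide the desired overall alpha by the
# number of comparisons to get the per-test significance criterion.
alpha = 0.05
n_comparisons = 3
adjusted_alpha = alpha / n_comparisons
print(round(adjusted_alpha, 3))  # 0.017, as in the text
```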
Another advantage of comparing more than two kinds of treatment in one experiment is that it reduces both the number of experiments conducted and the number of subjects needed. Once again, refer back to the three-group memory experiment. If we do one comparison with three groups, we can conduct only one experiment, and we need subjects for only three groups. If, however, we conduct three comparisons, each with two groups, we need to perform three experiments, and we need participants for six groups or conditions.
Using more than two groups in an experiment also allows researchers to determine whether each treatment is more or less effective than no treatment (the control group). To illustrate this, imagine that we are interested in the effects of aerobic exercise on anxiety. We hypothesize that the more aerobic activity engaged in, the more anxiety will be reduced. We use a control group that does not engage in any aerobic activity and a high aerobic activity group that engages in 50 minutes per day of aerobic activity—a simple two-group design. Assume, however, that when using this design, we find that both those in the control group and those in the experimental group have high levels of anxiety at the end of the study—not what we expected to find. How could a design with more than two groups provide more information? Suppose we add another group to this study—a moderate aerobic activity group (25 minutes per day)—and get the following results:
| Group | Anxiety Level |
| --- | --- |
| Control Group | High Anxiety |
| Moderate Aerobic Activity | Low Anxiety |
| High Aerobic Activity | High Anxiety |
Based on these data, we have a V-shaped function. Up to a certain point, aerobic activity reduces anxiety. However, when the aerobic activity exceeds a certain level, anxiety increases again. If we had conducted only the original study with two groups, we would have missed this relationship and erroneously concluded that there was no relationship between aerobic activity and anxiety. Using a design with multiple groups allows us to see more of the relationship between the variables.
Figure 13.1 illustrates the difference between the results obtained with the three-group versus the two-group design in this hypothetical study. It also shows the other two-group comparisons—control compared to moderate aerobic activity, and moderate aerobic activity compared to high aerobic activity. This set of graphs allows you to see how two-group designs limit our ability to see the full relationship between variables.
FIGURE 13.1 Determining relationships with three-group versus two-group designs: (a) three-group design; (b) two-group comparison of control to high aerobic activity; (c) two-group comparison of control to moderate aerobic activity; (d) two-group comparison of moderate aerobic activity to high aerobic activity
Figure 13.1a shows clearly how the three-group design allows us to assess more fully the relationship between the variables. If we had only conducted a two-group study, such as those illustrated in Figure 13.1b, c, or d, we would have drawn a much different conclusion from that drawn from the three-group design. Comparing only the control to the high aerobic activity group (Figure 13.1b) would have led us to conclude that aerobic activity does not affect anxiety. Comparing only the control to the moderate aerobic activity group (Figure 13.1c) would have led to the conclusion that increasing aerobic activity reduces anxiety. Comparing only the moderate aerobic activity group to the high aerobic activity group (Figure 13.1d) would have led to the conclusion that increasing aerobic activity increases anxiety.
Being able to assess the relationship between the variables means that we can determine the type of relationship that exists. In the previous example, the variables produced a V-shaped function. Other variables may be related in a straight linear manner or in an alternative curvilinear manner (for example, a J-shaped or S-shaped function). In summary, adding levels to the independent variable allows us to determine more accurately the type of relationship that exists between the variables.
A final advantage of designs with more than two groups is that they allow for the use of a placebo group—a group of subjects who believe they are receiving treatment but in reality are not. A placebo is an inert substance that participants believe is a treatment. How can adding a placebo group improve an experiment? Consider an often-cited study by Paul (1966, 1967) involving children who suffered from maladaptive anxiety in public-speaking situations. Paul used a control group, which received no treatment; a placebo group, which received a placebo that they were told was a potent tranquilizer; and an experimental group, which received desensitization therapy. Of the participants in the experimental group, 85% showed improvement, compared with only 22% in the control condition. If the placebo group had not been included, the difference between the therapy and control groups (85% − 22% = 63%) would overestimate the effectiveness of the desensitization program. The placebo group showed 50% improvement, meaning that the therapy’s true effectiveness is much less (85% − 50% = 35%). Thus, a placebo group allows for a more accurate assessment of a therapy’s effectiveness because, in addition to spontaneous remission, it controls for participant expectation effects.
DESIGNS WITH MORE THAN TWO LEVELS OF AN INDEPENDENT VARIABLE
| Advantages | Considerations |
| --- | --- |
| Allows comparisons of more than two types of treatment | Type of statistical analysis (e.g., multiple t tests or ANOVA) |
| Fewer subjects are needed | Multiple t tests increase chance of Type I error; Bonferroni adjustment increases chance of Type II error |
| Allows comparisons of all treatments to control condition | |
| Allows for use of a placebo group with control and experimental groups | |
Imagine that a researcher wants to compare four different types of treatment. The researcher decides to conduct six individual studies to make these comparisons. What is the probability of a Type I error, with α = .05, across these six comparisons? Use the Bonferroni adjustment to determine the suggested alpha level for these six tests.
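After working the exercise by hand, one way to check the arithmetic is to plug the numbers into the same two formulas used earlier in the module (a sketch for self-checking, not part of the original text):

```python
# Four treatments compared pairwise yield 4 * 3 / 2 = 6 comparisons.
alpha = 0.05
c = 6
familywise_error = 1 - (1 - alpha) ** c  # probability of >= 1 Type I error
bonferroni_alpha = alpha / c             # suggested per-test alpha
print(round(familywise_error, 2), round(bonferroni_alpha, 4))  # 0.26 0.0083
```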
ANOVA (analysis of variance) An inferential parametric statistical test for comparing the means of three or more groups.
As noted previously, t tests are not recommended for comparing performance across groups in a multiple-group design because of the increased probability of a Type I error. For multiple-group designs in which interval-ratio data are collected, the recommended parametric statistical analysis is the ANOVA (analysis of variance). As its name indicates, this procedure allows us to analyze the variance in a study. You should be familiar with variance from Chapter 3 on descriptive statistics. Nonparametric analyses are also available for designs in which ordinal data are collected (the Kruskal-Wallis analysis of variance) and for designs in which nominal data are collected (chi-square test). We will discuss some of these tests in later modules.
We will begin our coverage of statistics appropriate for multiple-group designs by discussing those used with data collected from a between-subjects design. Recall that a between-subjects design is one in which different participants serve in each condition. Imagine that we conducted the study mentioned at the beginning of the module in which subjects are asked to study a list of 10 words using rote rehearsal or one of two forms of elaborative rehearsal. A total of 24 participants are randomly assigned, 8 to each condition. Table 13.1 lists the number of words correctly recalled by each participant.
one-way between-subjects ANOVA An inferential statistical test for comparing the means of three or more groups using a between-subjects design.
Because these data represent an interval-ratio scale of measurement and because there are more than two groups, an ANOVA is the appropriate statistical test to analyze the data. In addition, because this is a between-subjects design, we use a one-way between-subjects ANOVA. As discussed earlier, the term between-subjects indicates that participants have been randomly assigned to conditions in a between-subjects design. The term one-way indicates that the design uses only one independent variable—in this case, type of rehearsal. We will discuss statistical tests appropriate for correlated-groups designs and tests appropriate for designs with more than one independent variable in Chapter 8. Please note that although the study used to illustrate the ANOVA procedure in this section has an equal number of subjects in each condition, this is not necessary to the procedure.
TABLE 13.1 Number of words recalled correctly in rote, imagery, and story conditions
(Individual recall scores not reproduced here.) Group means: Rote X̄ = 4; Imagery X̄ = 5.5; Story X̄ = 8; Grand Mean = 5.833
The analysis of variance (ANOVA) is an inferential statistical test for comparing the means of three or more groups. In addition to helping maintain an acceptable Type I error rate, the ANOVA has the added advantage over using multiple t tests of being more powerful and thus less susceptible to a Type II error. In this section, we will discuss the simplest use of ANOVA—a design with one independent variable with three levels.
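Since the module closes by noting that the F-ratio is derived from between-groups and within-groups variance, a small pure-Python sketch may make that concrete. The scores below are invented for illustration (they are not the actual Table 13.1 data, though they are chosen so the group means match the reported 4, 5.5, and 8):

```python
# Hypothetical recall scores for three rehearsal groups (n = 4 each);
# illustrative only, not the actual Table 13.1 data.
rote    = [3, 4, 4, 5]   # mean = 4
imagery = [5, 5, 6, 6]   # mean = 5.5
story   = [7, 8, 8, 9]   # mean = 8
groups = [rote, imagery, story]

all_scores = [x for g in groups for x in g]
grand_mean = sum(all_scores) / len(all_scores)

# Between-groups variance (MS_between): variability of the group means
# around the grand mean, weighted by group size.
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
df_between = len(groups) - 1
ms_between = ss_between / df_between

# Within-groups variance (MS_within): variability of individual scores
# around their own group mean (error variance).
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
df_within = len(all_scores) - len(groups)
ms_within = ss_within / df_within

# F is the ratio of between-groups to within-groups variance; a large F
# means group means differ more than chance variability would predict.
f_ratio = ms_between / ms_within
print(round(f_ratio, 2))
```

The larger the treatment effect relative to error variance, the larger F becomes; with no treatment effect, F is expected to be near 1.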