
The Efficacy of Learning Teams: A Comparative Analysis

Pages 336-344 | Published online: 28 Jun 2013
 

Abstract

This article investigates the effect of long-term learning teams on a student's retention of factual knowledge, ability to apply theories to analyze political events, and perception of the learning experience. The study compares two sections of an introductory world politics course—one that grouped students into long-term learning teams and one that did not. The results revealed that learning teams that met weekly helped students recall facts and cooperate with others to learn. However, learning teams did little to enhance a student's ability to use theories to explain political events. This suggests that future analysis of learning teams might focus on the dynamics found within the group and/or on how student personalities affect the learning process.

Notes

Note: χ² for factual knowledge is 21.13 with p < .05 and 11 degrees of freedom. χ² for application/analysis is 14.19 with p < .05 and 7 degrees of freedom. The χ² for factual knowledge is based on the examination of Columns A and B, while the χ² for application/analysis is based on the examination of Columns C and D. NA is not applicable.

Note: GE is greater than or equal to; ns is not significant. All significance tests are two-tailed.

Note: ns is not significant. All significance tests are two-tailed.

Hartlaub and Lancaster (2008, 380) found that small-group team approaches are employed in teaching a high proportion of both introductory and advanced political science courses.

Learning teams must be differentiated from learning communities. As Huerta (2004, 291) notes, there are several forms of learning community. One of the more common is the “linked class” learning community in which a cohort of students enrolls together in a series of courses. This cohort is encouraged to work together as they handle assignments from each of the linked classes. In some cases, these assignments are coordinated across the courses and/or the classes may examine common themes or problems from several disciplinary points of view. Another more informal type of learning community involves weekly meetings between professors and students to explore mutually agreed upon topics that may or may not be associated with a formal course. As Davis (1993, 147–148) points out, learning teams involve cooperative learning within the context of a single course.

These descriptions are based on Davis’ (1993, 147–148) discussion of learning teams. The labels were devised by the author. To clarify, informal learning teams exist for one class session. When the class ends for the day, the team dissolves. Project learning teams work on a single educational task, disbanding when the assignment is completed. Long-term learning teams (the type examined in this study) are formed early in the course and meet throughout the semester or quarter, disbanding after the final examination or final assignment for the course has been completed. Additional discussions of learning teams include explanations by Occhipinti (2003) and Huerta (2007) of how they employ learning teams in class and descriptions by Bryant (2005), Du et al. (2005), and Wilson et al. (2007) of how learning teams can be used outside formal class settings and online.

Wilson et al. (2007, 133) refer to their teams as discussion groups. These groups, however, were set up to promote student discussions and “remained constant throughout the semester,” thereby meeting the functional and structural requirements of a long-term learning team.

For additional discussions of active learning and how it is defined, see Bonwell and Eison (1991), Bean (2001, 121–181), and Du et al. (2005).

The Institutional Review Board for Human Subjects Research at Miami University (Ohio) approved this research. Miami University is a publicly financed institution in southwest Ohio with approximately 17,000 students. U.S. News ranks Miami #79 among national universities. Miami accepts approximately 79% of its applicants. Among Miami's students, 25% received a composite ACT score of 24 or lower, while 25% received a score of 28 or higher.

To determine whether student performance in fall and spring semesters differs, the author compared a spring 2010 World Politics class of 72 students with the fall class examined here. The classes met at the same time on the same days of the week and received the same lectures and assignments as were used for the present analysis. The author taught both classes, and learning teams were not employed in either one. A comparison of the factual knowledge and application scores for the two classes revealed that they were nearly identical. Hence, the author concluded that comparing classes from the fall 2010 and spring 2011 semesters did not bias this research.

The reader is reminded of the discussion in Note 3 of the distinction between informal, project, and long-term learning teams. It should be noted that research designs similar to the one employed here have been used by Brooks (2008) to investigate the effects of studying abroad, Engstrom (2008) to evaluate the use of a comparative approach when teaching introductory American government, and Omelicheva and Avdeyeva (2008) to assess the relative merits of lectures and debates for encouraging analytical thinking and recall of course material. It also is important to point out that the fall class was strictly a lecture course and had no elements of peer learning. That is, neither informal learning teams (such as discussion groups) nor project learning teams (such as small-group assignments) were employed.

The learning teams were assigned to meet outside the normal class meeting hours. As Trudeau (2005, 290) notes, this allows for the maximum coverage of course material while still providing for a substantial amount of student discussion and interaction. In addition, the learning teams met without the attendance of either the professor or a graduate student assistant to avoid having the presence of an authority figure inhibit discussion (see Clawson et al. 2002, 713 and Wilson et al. 2007, 132 on this point).

The class was provided with detailed written instructions outlining the rules the learning teams were expected to follow. In addition, the author carefully discussed the functions of the learning teams with the class.

The author evaluated these forms each week and suggested that learning teams make changes as needed.

Attendance at team meetings was good: 76% of the class missed no more than one team meeting, 84% missed two or fewer, and 90% missed three or fewer.

As was noted earlier, the scholarly literature depicts learning teams as enhancing factual recall by providing frequent reviews of course material that reinforce the presentations in class and in the readings.

The grading procedure assigned an A to answers that provided an accurate and detailed definition of a concept, a B to answers that were accurate but not very detailed, a C to answers that contained minor inaccuracies, a D to answers that had major inaccuracies but still showed some grasp of the concept, and an F to answers that provided no accurate information regarding the concept. To guard against biased grading, 15 examinations from each exam in each of the classes in the study were intermixed with examinations from previous classes and graded again. In all cases, this exercise produced the same grades as were originally assigned.

To remind the reader, the scholarly literature describes learning teams as enhancing application and analysis by providing students with a chance to collaborate in evaluating how theories may be used to solve problems.

An example of such a problem would be applying deterrence theory to analyze the outbreak of the Gulf War in 1990, the invasion of Poland in 1939, or the beginning of the Korean War in 1950. When grading these application exercises, an A was assigned to answers that explained the theory in detail and applied it flawlessly to the problem; a B answer explained the theory and applied it accurately but had minor mistakes or omissions; a C answer had major mistakes either in the explanation of the theory or in its application; a D response had major mistakes in both the explanation and the application; and an F was assigned when the student did not seem to comprehend either the theory or how it could be applied. To avoid bias, the same type of intermixing of exams that was described in Note 14 was employed with this exercise, with the same results.

The chi-square test allows one to compare two or more distributions to assess whether they are independent or appear to come from a single population (see Hodges et al. 1975, 219–237).
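
For readers who wish to see the mechanics of such a test, the following is a minimal sketch of a chi-square test of independence in Python using scipy. The grade counts are hypothetical placeholders, not the study's data, and the study's actual tables had more columns (as the 11 and 7 degrees of freedom in the table note indicate).

    # Chi-square test of independence on two grade distributions.
    # All counts below are hypothetical, not the study's data.
    from scipy.stats import chi2_contingency

    # Rows: the two classes; columns: counts of A, B, C, D, and F grades.
    observed = [
        [12, 20, 15, 6, 3],  # class with learning teams (hypothetical)
        [8, 16, 19, 9, 4],   # class without learning teams (hypothetical)
    ]

    chi2, p_value, dof, expected = chi2_contingency(observed)
    print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}, df = {dof}")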

The z test for statistical significance was employed instead of a t test because the sample sizes were relatively large and because the z test does not require that the data be normally distributed. Of course, the data in this analysis are not from a random sample, which both the z test and the t test require. Therefore, all significance tests here should be treated as approximations. All tests were two-tailed. See Hodges et al. (1975, 185–188) and Bowen and Weisberg (1980, 134–138).
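
As an illustration, a two-proportion z test of the kind that could underlie comparisons of grade proportions between the classes can be run with statsmodels. The note does not specify the exact variant employed, and the counts below are hypothetical.

    # Two-proportion z test comparing, e.g., the share of students at or
    # above a grade threshold in each class. Counts are hypothetical.
    from statsmodels.stats.proportion import proportions_ztest

    successes = [34, 28]  # students at or above the threshold (hypothetical)
    totals = [70, 72]     # class sizes (hypothetical)

    z_stat, p_value = proportions_ztest(successes, totals, alternative="two-sided")
    print(f"z = {z_stat:.2f}, two-tailed p = {p_value:.3f}")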

The absence of a difference between the classes in the proportion of A grades on the factual knowledge exams implies that the classes had an approximately equal number of students who routinely excel in class.

The classes were evaluated to assess whether a higher overall class GPA, a larger number of first-year students, or more political science majors in one class or the other affected the results. The fall class had an overall GPA of 3.7, while in the spring the overall GPA was 3.6. There was no significant difference between these figures. First-year students accounted for 11% of the class in the fall and for 13% in the spring. Political Science Department majors made up 31% of the class in the fall and 30% in the spring. A comparison of factual knowledge and application grades revealed no differences between first-year and other students or between majors and nonmajors. Hence, it seems appropriate to conclude that these variables did not affect the results. All data pertaining to the students’ GPA, year in college, and major came from the university registrar.

The “overall learning experience” and “ease of learning” scores were calculated by multiplying the number of students who responded “excellent” by 4, the number who responded “good” by 3, the number who said “fair” by 2, and the number answering “poor” by 1. These numbers were then totaled and divided by the number of responding students. The “learning was enhanced by working with other students” score is the proportion of students who responded “yes.”
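
In other words, each score is a weighted mean on a 1-to-4 scale. A minimal sketch of the computation, using hypothetical response counts:

    # Weighted-mean survey score as described above (1 = poor ... 4 = excellent).
    # Response counts are hypothetical.
    responses = {"excellent": 18, "good": 24, "fair": 9, "poor": 3}
    weights = {"excellent": 4, "good": 3, "fair": 2, "poor": 1}

    total_points = sum(weights[r] * n for r, n in responses.items())
    n_students = sum(responses.values())
    print(f"score = {total_points / n_students:.2f}")  # falls between 1.0 and 4.0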
