Teaching Bits: A Resource for Teachers of Statistics


Abstract

This column features "bits" of information sampled from a variety of sources that may be of interest to teachers of statistics. Bob abstracts information from the literature on teaching and learning statistics, while Bill summarizes articles from the news and other media that may be used with students to provoke discussions or serve as a basis for classroom activities or student projects. Bill's contributions are derived from Chance News (http://www.dartmouth.edu/~chance/chance_news/news.html). Like Chance News, Bill's contributions are freely redistributable under the terms of the GNU General Public License (http://gnu.via.ecp.fr/copyleft/gpl.html), as published by the Free Software Foundation. We realize that due to limitations in the literature we have access to and time to review, we may overlook some potential articles for this column, and therefore encourage you to send us your reviews and suggestions for abstracts.

From the Literature on Teaching and Learning Statistics

Principles and Standards for School Mathematics

by The National Council of Teachers of Mathematics, Inc. (2000), Reston, VA.

The new NCTM Principles and Standards for School Mathematics is now available. Copies of the new Principles and Standards can be purchased directly from NCTM at their web site (http://www.nctm.org). The following excerpt is from the web site’s introduction to the Principles and Standards:

With more than 400 content-packed pages, it delineates six Principles that should guide school mathematics programs and 10 Standards that propose content and process goals. The book features full-color photos and artwork and includes a coupon for a complimentary CD-ROM with the fully searchable E-Standards, the electronic edition of the book.… Principles and Standards offers vision and direction for school mathematics programs. The Principles set forth important characteristics of mathematics programs, and the Standards discuss the mathematics that students need to know and be able to do across the grades. The grade-band chapters (pre-K–2, 3–5, 6–8, 9–12) provide both specific expectations for those grades and a plethora of engaging examples to bring those ideas to life. The introductory and final chapters describe the broader vision of the document—introducing that vision and then setting forth how we need to work together to attain that vision.

“The Geometry of Statistics”

by David L. Farnsworth (2000). The College Mathematics Journal, 31(3), 200-204.

The article provides a geometric justification for the centrality of the arithmetic mean and normal density function. The presentation demonstrates that the preferred measure of central tendency for a given situation (e.g., arithmetic mean versus the median) is dependent on the geometry used to represent the situation and the definition of distance within the geometry. This article may provide a deeper understanding of central tendency, variance, and density functions for the mathematically sophisticated student.

“Discovering an Optimal Property of the Mean”

by Neil C. Schwertman (2000). The Mathematics Teacher, 93(4), 304-311.

The author describes two activities that illustrate how to determine the optimal location for a radio transmitter and for a fire station. The activities are designed to lead students to discover a unique property of the arithmetic mean: the sum of the squared deviations about any point is a minimum when the chosen point is the arithmetic mean. The activities encourage students to use reasoning and quantitative problem-solving to discover solutions to practical problems. Activity sheets are provided at the end of the article.
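For readers who want the algebra behind the property the activities lead students to discover, a standard way to verify it (not necessarily the derivation used in the article) is the identity

```latex
\sum_{i=1}^{n}(x_i - c)^2 \;=\; \sum_{i=1}^{n}(x_i - \bar{x})^2 \;+\; n(\bar{x} - c)^2 ,
```

where x̄ is the arithmetic mean of x₁, …, xₙ. The second term is nonnegative and vanishes only when c = x̄, so the sum of squared deviations about c is smallest exactly when c is the mean.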

“Using Financial Headlines and the Internet to Keep Statistics Classes Fresh”

by Marilyn B. Durkin (2000). The Mathematics Teacher, 93(4), 318-323.

The article provides suggestions on how to use newspaper headlines and the Internet to help students develop analysis, research, and writing skills. Newspaper headlines that make claims about business trends are presented in class. The claims are translated into testable hypotheses. Students then search the Internet for data and information that will address the hypotheses. The students learn how to organize the information to produce a complete picture of the situation, and to decide whether or not the newspaper claim is justified. Examples are provided that address patterns in the Dow Jones industrial average, the existence of correlations between the DJIA and other entities (e.g., stock in a company), and the question of whether one financial indicator is a better market indicator than others.

Mathematical Thinking and Learning: Special Issue on Statistical Thinking and Learning 2000, Volume 2, Numbers 1 and 2

“Statistical Thinking and Learning”

by Brian Greer (Editor’s Introduction), 1-9.

“The Longitudinal Development of Understanding of Average”

by Jane M. Watson and Jonathan B. Moritz, 11-50.

Abstract: The development of the understanding of average was explored through interviews with 94 students from Grades 3 to 9, follow-up interviews with 22 of these students after 3 years, and follow-up interviews with 21 others after 4 years. Six levels of response were observed based on a hierarchical model of cognitive functioning. The first four levels described the development of the concept of average from colloquial ideas into procedural or conceptual descriptions to derive a central measure of a data set. The highest two levels represented transferring this understanding to one or more applications in problem-solving tasks to reverse the averaging process and to evaluate a weighted mean. Usage of ideas associated with the three standard measures of central tendency and with representation are documented, as are strategies for problem solving. Implications for mathematics educators are discussed.

“Inventing Data Structures for Representational Purposes: Elementary Grade Students’ Classification Models”

by Richard Lehrer and Leona Schauble, 51-74.

Abstract: This study concerns the development of children’s understanding of data and classification. Three intact classrooms of students and their teachers (18 at 1st and 2nd grade, 25 at 4th grade, and 22 at 5th grade) worked over several sessions to categorize drawings made by other children and to progressively “mathematize” their categorization rules. The goal was to develop a model that would make explicit their initial guesses about the grade level of the artists. In small groups, students developed, applied, and made iterations of revisions to their data models. The details of instruction and duration varied somewhat in accord with the capabilities and needs of students of different ages. The youngest children evolved systems of attributes that described their categories in a post hoc fashion but failed to come to regard those rules as a model to guide classification. In contrast, 4th and 5th graders considered their category systems as models that logically constrained the members admitted to categories. These students came to appreciate dimensional attribute-value structure, although many continued to include redundant or extraneous information. They incorporated and discussed a variety of kinds of decision rules, including ways of combining information, such as differentially weighing diagnostic attributes. By engaging with data characterized by prototypic rather than crisp membership values, students had the opportunity to see the intellectual work performed by practices of data modeling.

“Controversies Around the Role of Statistical Tests in Experimental Research”

by Carmen Batanero, 75-98.

Abstract: Despite widespread use of significance testing in empirical research, its interpretation and researchers’ excessive confidence in its results have been criticized for years. In this article, the logic of statistical testing in the Fisher and Neyman-Pearson approaches are described, some common misinterpretations of basic concepts behind statistical tests are reviewed, and the philosophical and psychological issues that can contribute to these misinterpretations are analyzed. Some frequent criticisms against statistical tests are revisited, with the conclusion that most of them refer not to the tests themselves but to the misuse of tests on the part of researchers. In accordance with Levin (1998a), statistical tests should be transformed into a more intelligent process that helps researchers in their work. Possible ways in which statistical education might contribute to the better understanding and application of statistical inference are suggested.

“Assessment in Statistics Education: Issues and Challenges”

by Joan Garfield and Beth Chance, 99-126.

Abstract: There have been many changes in educational assessment in recent years, both within the fields of measurement and evaluation and in specific disciplines. In this article, we summarize current assessment practices in statistics education, distinguishing between assessment for different purposes and assessment at different educational levels. To provide a context for assessment of statistical learning, we first describe current learning goals for students. We then highlight recent assessment methods being used for different purposes: individual student evaluation, large-scale group evaluation, and as a research tool. Examples of assessment used in teaching statistics in primary schools, secondary schools, and tertiary schools are given. We then focus on 3 examples of effective uses of assessment and conclude with a description of some current assessment challenges.

“Toward Understanding the Role of Technological Tools in Statistical Learning”

by Dani Ben-Zvi, 127-155.

Abstract: This article begins with some context setting on new views of statistics and statistical education. These views are reflected, in particular, in the introduction of exploratory data analysis (EDA) into the statistics curriculum. Then, a detailed example of an EDA learning activity in the middle school is introduced, which makes use of the power of the spreadsheet to mediate students’ construction of meanings for statistical conceptions. Through this example, I endeavor to illustrate how an attempt at serious integration of computers in teaching and learning statistics brings about a cascade of changes in curriculum materials, classroom praxis, and students’ ways of learning. A theoretical discussion follows that underpins the impact of technological tools on teaching cognitive and sociocultural processes. Subsequently, I present a sample of educational technologies, which represents the sorts of software that have typically been used in statistics instruction: statistical packages (tools), microworlds, tutorials, resources (including Internet resources), and teachers’ metatools. Finally, certain implications and recommendations for the use of computers in the statistical educational milieu are suggested.

Journal of Educational and Behavioral Statistics: Teacher’s Corner

“Some Class-Participation Demonstrations for Introductory Probability and Statistics”

by Andrew Gelman and Mark E. Glickman (2000). Journal of Educational and Behavioral Statistics, 25(1), 84-100.

The authors present classroom demonstrations they use in an introductory undergraduate course in probability and statistics. The activities provide students with experience in a wide variety of topics that include estimation, experimental design, survey design, survey sampling, conditional probability, random sequences, hypothesis testing, confidence intervals, statistical power, and multiple comparisons.

The American Statistician: Teacher's Corner

“How Large Does n Have to be for Z and t intervals?”

by Dennis D. Boos and Jacqueline M. Hughes-Oliver (2000). The American Statistician, 54(2), 121-128.

Abstract: Students invariably ask the question “How large does n have to be for Z and t intervals to give appropriate coverage probabilities?” In this article we review the role of β₁(X)/n, where β₁(X) is the skewness coefficient of the random sample, in the answer to this question. We also comment on the opposite effect that β₁(X) has on the behavior of t intervals compared to Z intervals, and we suggest simple exercises for deriving rules of thumb for n that result in appropriate confidence interval coverage. Our presentation follows the format of lesson plans for three course levels: introductory, intermediate, and advanced. These lesson plans are sequentially developed, meaning that the lesson plan for an intermediate-level course includes all activities from the lesson plan for an introductory course, but with additional explanations and/or activities.
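As a rough illustration of the kind of exercise the abstract describes, the following simulation sketch (our own construction, not the authors' lesson plans) estimates the coverage of nominal 95% Z and t intervals for the mean of a strongly skewed population at several sample sizes; the Exponential(1) population and all numbers are illustrative choices.

```python
import numpy as np
from scipy import stats

def coverage(n, interval="t", n_rep=20000, conf=0.95, seed=0):
    """Monte Carlo estimate of the coverage probability of a nominal 95%
    interval for the mean of an Exponential(1) population (true mean = 1,
    skewness = 2).  Illustrative only."""
    rng = np.random.default_rng(seed)
    if interval == "t":
        crit = stats.t.ppf(1 - (1 - conf) / 2, df=n - 1)
    else:
        crit = stats.norm.ppf(1 - (1 - conf) / 2)
    hits = 0
    for _ in range(n_rep):
        x = rng.exponential(scale=1.0, size=n)
        half_width = crit * x.std(ddof=1) / np.sqrt(n)
        hits += abs(x.mean() - 1.0) <= half_width
    return hits / n_rep

# Coverage creeps toward the nominal 95% as n grows; students can vary the
# population's skewness and derive their own rule of thumb for n.
for n in (10, 30, 100, 300):
    print(f"n = {n:3d}   t: {coverage(n, 't'):.3f}   Z: {coverage(n, 'Z'):.3f}")
```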

“Spline Bottles”

by Thaddeus Tarpey and John Holcomb (2000). The American Statistician, 54(2), 129-135.

Abstract: The relationship between the volume of a liquid in a bottle as a function of the height of the liquid is used to illustrate polynomial regression and the fitting of spline models. A polynomial regression exercise is provided where students attempt to determine the shape of a bottle using only the height and volume data from the bottle. The height and volume data for bottles are easy to obtain and provide an interesting data-collecting activity for introducing spline models in a regression course.
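A minimal sketch of the polynomial-regression part of the exercise, using made-up height and volume measurements rather than the article's data: the derivative of the fitted volume-versus-height curve estimates the bottle's cross-sectional area at each height, which is what lets students reason about the bottle's shape.

```python
import numpy as np

# Hypothetical bottle data (illustrative only): water is added in 100 mL
# increments and the liquid height is recorded after each addition.
volume = np.arange(0, 1100, 100)                       # mL, 0 through 1000
height = np.array([0.0, 2.1, 4.0, 5.7, 7.2, 8.5,
                   9.6, 10.9, 12.6, 14.8, 17.5])       # cm

# Fit volume as a cubic polynomial in height, then differentiate:
# dV/dh approximates the cross-sectional area at height h.
poly = np.poly1d(np.polyfit(height, volume, deg=3))
area = np.polyder(poly)

for h in (2.0, 6.0, 10.0, 14.0):
    print(f"height {h:4.1f} cm  ->  estimated cross-section ~ {area(h):6.1f} mL/cm")
```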

“The Effects of Active Learning Methods on Student Retention in Engineering Statistics”

by Paul H. Kvam (2000). The American Statistician, 54(2), 136-140.

Abstract: An experiment was carried out to investigate the long-term effects of active learning methods on student retention in an introductory engineering statistics class. Two classes of students participated in the study: one class was taught using traditional lecture-based learning, and the other class stressed group projects and cooperative learning-based methods. Retention was measured by examining the students immediately after the course finished, and then again eight months later. The findings suggest that active learning can help to increase retention for students with average or below average scores. Graphical displays of the data, along with standard statistical analyses, help explain the observed difference in retention between students in the two different learning environments.

The Statistics Teacher Network Spring 2000, Number 54

“A Look at Statistics on One of Casio’s Color Graphing Calculators” by Margaret Deckman, 1-4.

The article provides a favorable review of the Casio 9850GA PLUS color-graphing calculator. The author describes how to perform operations on the calculator that are commonly used in statistics courses. In the conclusion, the author states “I wrote this brief article to give the readership knowledge that a lesser expensive yet comparable calculator to the TI-83 exists, namely the CASIO 9850GA PLUS. I have enjoyed using it; they seem to have thought of everything!”

“Teaching Probability to Young Children” by Cyrilla Bolster, 4-7.

The author describes a simple four-step procedure that provides an effective way to teach probability to young students. The article illustrates how students can use the procedure to determine whether a game of chance is fair.

In applying the procedure, students develop a probabilistic understanding of fairness and explore the role of chance and random events in determining the fairness of a game.

“Introducing Descriptive Statistics and Graphical Summaries” by Brad Warner, 7-8.

The author describes a ten-minute class activity designed to give students a better understanding of the roles played by descriptive statistics and graphical summaries in statistics. One student is chosen to sit facing the rest of the class with the instructor standing behind the chosen student. The instructor shows the rest of the class an object. The class tries to get the chosen student to make a correct identification by presenting adjectives that describe the object. The goal is to use as few adjectives as possible. The activity illustrates that only a few adjectives are needed to identify a complex object. The result is likened to the use of summary statistics to describe a complex dataset. A similar game can be played by using drawings instead of adjectives and likening the process to graphical summaries. The author suggests that the activity be performed prior to a formal treatment of summary statistics and graphical summaries in order to provide students with a better understanding of their purpose.

Teaching Statistics

A regular component of the Teaching Bits Department is a list of articles from Teaching Statistics, an international journal based in England. Brief summaries of the articles are included. In addition to these articles, Teaching Statistics features several regular departments that may be of interest, including Computing Corner, Curriculum Matters, Data Bank, Historical Perspective, Practical Activities, Problem Page, Project Parade, Research Report, Book Reviews, and News and Notes.

The Circulation Manager of Teaching Statistics is Peter Holmes, [email protected], RSS Centre for Statistical Education, University of Nottingham, Nottingham NG7 2RD, England. Teaching Statistics has a web site at http://science.ntu.ac.uk/rsscse/ts/.

Teaching Statistics, Summer 2000 Volume 22, Number 2

“Storks Deliver Babies (p = .008)” by Robert Matthews, 36-38.

Summary: This article shows that a highly statistically significant correlation exists between stork populations and human birth rates across Europe. While storks may not deliver babies, unthinking interpretation of correlation and p-values can certainly deliver unreliable conclusions.
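A classroom-style sketch of how such a correlation can arise from a lurking variable (this construction is ours, not the article's data): if a country's size drives both its stork population and its number of births, the two counts will correlate strongly even though neither causes the other.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# 17 hypothetical countries; "size" is the lurking variable.
size   = rng.uniform(1, 100, size=17)
storks = 10 * size + rng.normal(0, 60, size=17)        # breeding pairs
births = 0.02 * size + rng.normal(0, 0.2, size=17)     # millions per year

r, p = stats.pearsonr(storks, births)
print(f"r = {r:.2f}, p = {p:.5f}")   # a strong, highly "significant" correlation
```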

“Using Probability Intervals to Evaluate Long-Term Gambling Success” by John S. Croucher, 42-44.

The author provides examples of how simple probability calculations can be used to demonstrate the risks of gambling to statistics students.

“Assessment of Student Performance in Statistics” by Carl J. Huberty, 44-48.

Summary: This article describes assessment and scoring methods that have been used successfully in graduate-level statistics teaching.

“Six Billion Characters in Search of an Author” by Mike Fuller, 49-52.

From the Author’s Introduction: Because it permeates such a wide range of topics, people working with statistics often have to acquire information about, and an understanding of, topics far from their own. The Internet is a tremendous resource for finding relevant material on almost any area. This article firstly looks at searching using the Internet. It then goes on to apply this approach to finding web based resources about human population issues—an area in which statistical data and analysis have a lot to offer.

“Bar and Pie Charts: Ideas for the Classroom” by Pat Perks and Stephanie Prestage, 52-55.

The authors describe three hands-on activities designed to help students learn how to construct bar and pie charts by focusing student attention on representations of frequencies and proportions.

“Two-Way Interactions: They’re Top of the Pops” by Vanessa Simonite, 58-60.

The author describes an activity designed to illustrate a two-way interaction and the risks of specifying too simple a model. Students and staff answer questions about pop music from the 1960s and the 1990s. The results presented in the article show a main effect for respondent type, no main effect for musical era, and an interaction between respondent type and era. The author concludes that the activity allows quick data collection, is interesting and entertaining for students, and illustrates the main points of two-way interactions quite well.
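A minimal sketch of the kind of analysis the activity leads to, using made-up quiz scores rather than the article's data: the cell means are chosen so that respondent type matters, era does not on average, and the two factors interact.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)

# Hypothetical scores: students know 1990s pop, staff know 1960s pop.
cell_means = {("student", "1960s"): 3, ("student", "1990s"): 7,
              ("staff",   "1960s"): 8, ("staff",   "1990s"): 4}
rows = [{"group": g, "era": e, "score": s}
        for (g, e), m in cell_means.items()
        for s in rng.normal(m, 1.5, size=20)]
df = pd.DataFrame(rows)

fit = smf.ols("score ~ C(group) * C(era)", data=df).fit()
# The group and group:era terms dominate; the era main effect is negligible,
# so a model without the interaction would badly misrepresent the data.
print(anova_lm(fit, typ=2))
```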

Topics for Discussion from Current Newspapers and Journals

“In an AIDS Study, the Devil Is in the Details”

by Jerome Groopman, The New York Times, 2 April 2000, Week in Review, p. 4.

A recent medical study on sexually transmitted diseases has raised serious ethical questions. The study, led by Dr. Thomas Quinn of Johns Hopkins University, was reported in the March 30, 2000, issue of the New England Journal of Medicine. The research involved 15,127 subjects in ten clusters of rural villages in Uganda. The aim of the first part of the study was to determine if the presence of other sexually transmitted diseases such as syphilis and gonorrhea increased the risk of being infected with the HIV virus.

All subjects were initially counseled on prevention of HIV, offered free condoms and confidential HIV testing, and encouraged to inform their partners if they became infected with HIV. Then, in five of the clusters, subjects were periodically given antibiotic treatment to reduce the risk of sexually transmitted diseases; subjects in the other five clusters were not given antibiotic treatment. Five community-based surveys were conducted at ten-month intervals in order to study the incidence and progression of sexually transmitted diseases. The study found that antibiotic treatment did not decrease the prevalence of HIV, but it did decrease the number of other sexually transmitted diseases. These results were reported earlier in an article in the British journal The Lancet.

The New York Times article reports on a second aspect of the study. The researchers initially identified 415 couples who, at the beginning of the study, had one partner HIV-positive and the other HIV-negative. The doctors did not inform the uninfected partners of their partners’ infection. By the end of a 30-month follow-up period, 90 of the original HIV-negative partners had become positive. The main finding of this part of the study was that the “viral load” is the chief predictor of the risk of heterosexual transmission of HIV. In other words, and perhaps not surprisingly, the more virus you are carrying, the more likely you are to infect a partner.

In a New England Journal of Medicine editorial accompanying the study, Dr. Marcia Angell stated that a study that withheld treatment for HIV patients and did not inform their HIV-negative partners would not be approved in the U.S. After reviewing the arguments for having different guidelines for developed and underdeveloped countries, she continues to believe that ethical standards should not be different in different countries. Dr. Quinn responded by noting that Ugandan authorities had insisted that only the infected subjects had the right to tell their partners that they were infected. In addition, all subjects were given better care, including antibiotics for such diseases as gonorrhea and syphilis, than they would have obtained had they not been part of the study.

These issues bring to mind the Tuskegee study in the U.S., in which black men who had contracted syphilis were deliberately not given penicillin so that the researchers could observe the course of the disease. A major difference in the present case is that anti-HIV drugs are simply not available in Uganda. But whether that fact justifies the present study is still open to debate.

“The Politics of Race and the Census”

by Steven A. Holmes, The New York Times, 19 March 2000, Sect. 4, p. 3.

The Office of Management and Budget (OMB) is responsible for determining how the federal government collects census information on race and ethnicity. After extensive hearings and consultations, the OMB decided in 1997 to include five categories for race: (1) American Indian or Alaska Native, (2) Asian, (3) Black or African-American, (4) Native Hawaiian or other Pacific Islander, and (5) White. They established two categories for ethnicity: (1) Hispanic or Latino and (2) Not Hispanic or Latino. The so-called “minimal standards” require that respondents be offered the option of selecting one or more racial designations.

The Census follows these standards but allows for more detailed information. The first question is on ethnicity and asks whether or not you are “Spanish/Hispanic/Latino.” If you are, you can then mark a box for (1) Mexican, Mexican American, Chicano, (2) Puerto Rican, (3) Cuban, or (4) other Spanish/Hispanic/Latino. The second question asks about race. You are asked to select one or more of the following categories: (1) White, (2) Black/African-American/Negro, (3) American Indian or Alaskan Native, (4) Asian Indian, (5) Chinese, (6) Filipino, (7) Japanese, (8) Korean, (9) Vietnamese, (10) Native Hawaiian, (11) Guamanian, (12) Chamorro, (13) Samoan, (14) Other Pacific Islander, and (15) Some other race.

Mixed-race couples welcome the changes, since their children were previously forced to designate a single race. Civil rights groups had worried that any kind of “multiracial” categorization would reduce the number officially recognized as belonging to a specific minority. In response to political pressure from these groups, OMB decided that people listing themselves as white and also a member of a minority will be considered to be of that minority.

OMB guidelines also clarify the responsibilities of businesses, government agencies, and educational institutions that have to report the racial breakdown of their work force or student body. Rather than requiring that they provide data on all 63 combinations possible under the minimal standards, they will be asked to report the numbers in the five races listed in the minimal requirements and four racial combinations: (1) American Indian/Alaskan Native and white, (2) Asian and white, (3) African-American and white, and (4) American Indian/Alaskan Native and African-American.

You can listen to much more detail about how the 2000 Census is progressing in an NPR interview with Census Bureau Director Kenneth Prewitt (“Talk of the Nation,” NPR, Tuesday, March 21, 2000). Responding to concerns about privacy, Prewitt explains why the government wants to know how far people drive to work or how many flush toilets they have. The program is available online at http://www.npr.org/ramfiles/totn/20000321.totn.01.rmm

“Study: Men, Women Differ in Navigating”

by Malcolm Ritter, The Boston Globe, 21 March 2000, C2.

“Neuroscience: Taxicology”

The Economist, 18 March 2000, p. 83.

These articles provide two complementary views on what determines people’s ability to navigate.

The Globe describes a study from the April issue of the journal Nature Neuroscience. German researchers scanned the brains of 12 men and 12 women as they navigated a virtual reality maze. One difference that emerged was in the activity of the hippocampus, an inner brain structure known to be involved in memory. The brain scans showed men using both their right and left hippocampi during the task, while women used only their right. However, women appeared to also use an outer part of the brain called the prefrontal cortex.

Earlier studies have indicated that women are more inclined to use landmarks for navigation, while men rely on their sense of geometrical direction. The researchers therefore speculated that activity in the cortex was related to storing landmarks, while the hippocampus was related to geometric reasoning. The present study provides no information about whether the sex differences are genetic or learned. But the fact that similar differences have appeared in rat studies was interpreted to mean that there may be at least some genetic component.

The Economist describes related results from the current edition of the Proceedings of the National Academy of Sciences. Researchers at University College London compared brain scans of London taxi drivers to those of a reference group. Drivers must pass a rigorous exam on routes through the city to obtain a taxi badge, which the researchers interpreted as evidence of superior navigational ability. Since the sample of drivers consisted entirely of right-handed men, a reference standard was constructed by averaging scan results from 50 right-handed “ordinary” men.

The only differences observed were in the hippocampus. The back of the hippocampus was larger in the taxi drivers than in the reference group. When the differences were plotted against years of taxi driving experience, it was observed that the back of the hippocampus grows and the front parts shrink over time. This suggests that the brain may be more physiologically adaptable than neuroscientists have traditionally thought.

“The Rankings Game: Mirror, Mirror, Who’s the Most Beautiful of All?”

by John A. Paulos, ABCNEWS.com, May 2000.

This is from John Paulos’ monthly online column “Who’s Counting?” at http://abcnews.go.com/sections/science/whoscounting_index/whoscounting_index.html

Paulos discusses the volatile nature of the rankings we find so often in the popular media. He begins with the annual list of the 50 most beautiful people published by People magazine (appearing in their May 6 issue). He wonders why so many people who were on last year’s list did not make it this year. In fact, only three people held onto their spots: actors Ben Affleck and Freddie Prinze Jr., and singer Ricky Martin. All but 11 of this year’s most beautiful people were actors (18), actresses (15), or singers (6). As Paulos observes, no scientist or mathematician made this year’s list.

Paulos sees many other examples of high variability in ratings, such as lists of “best colleges” or “best places to live.” For example, in the U.S. News and World Report college rankings this year, the per-student spending on instruction and education-related services was weighted more heavily than last year. As a result, in the ranking of “best national universities,” Caltech moved from 9th place in 1999 to first place in 2000. Paulos points out that institutions don’t really change this dramatically from year to year. He suggests that those who produce these rankings have an incentive to introduce variability to keep interest up.

Finally, Paulos observes that, while stock markets have exhibited high variability of late, news reports can exaggerate the trends. For example, when the Dow is at 11,000, a 2% drop corresponds to 220 points, which the news will surely report as a bad day for the market. On the other hand, he feels that news commentators and market analysts have their own “most beautiful” internet stocks whose attraction, as with the most beautiful people, may be largely based on hype. In any case, Paulos notes that the businesses themselves don’t change as fast as their market valuations.

“A Tax on Both Your Poxes: Fed’s Report is a Strange Brew”

by Ken Ringle, The Washington Post, 29 April 2000, C1.

“Gonorrhea Rates Decline with Higher Beer Tax.” That is the startling title of a new report from the Centers for Disease Control, which investigated the effect of national alcohol policy over the period 1981-1995. In fact, in 24 of 36 states, higher beer prices were accompanied by lower gonorrhea rates. But the title reads like a textbook example of the perils of confusing association with causation. David Murray, research director of the Statistical Assessment Service (a resource for science writers) did not mince words in criticizing the report. He compared the conclusion to saying that “the sun goes down because we turn on the street lights.” But, as the present article ruefully notes, the themes of “sex, youth and alcohol” apparently proved irresistible to the press.

All of this makes one wonder what the CDC had in mind. An editorial accompanying their report warned against jumping to causal conclusions, citing two limitations of the study. First, reporting practices for gonorrhea differ across states, making direct comparisons difficult. Second, it noted that “the analysis may be subject to confounding effects of unobservable factors. Omitting these variables could cause substantial bias.” Nevertheless, Harold Chessen, a health economist at the CDC, states that alcohol use is known to be associated with risky sexual behavior. He is quoted as saying that the report is “consistent with the idea that higher taxes can reduce sexually transmitted disease rates.”

Chessen was reportedly amused when asked why Prohibition hadn’t proved helpful in reducing risky behavior back in the 1920s. He responded simply that “our study didn’t address Prohibition! I don’t know.”

“Oops, Sorry: Seems That My Pie Chart Is Half-Baked”

by Patricia Cohen, The New York Times, 8 April 2000, B7.

In her 1985 book The Divorce Revolution, sociologist Lenore Weitzman reported an alarming fallout from California’s no-fault divorce law: divorced women’s standard of living had dropped 73% on average, while that of their former husbands had increased by 43%. These figures were widely publicized, and were frequently quoted in public debate on divorce law for the next decade. But in 1996, another sociologist found that Weitzman’s data had been flawed, leading to an exaggeration of the effect she found. Divorced women’s standard of living had dropped, but only by 27%; the men’s standard had gone up, but only by 10%.

While Weitzman’s error appears to have been an honest mistake in data-processing, such cases raise the issue of when and how new research findings should be reported to the public. In the academic world, a research paper goes through a rigorous process of peer review before it is accepted for publication in a scholarly journal. But when the findings are of wider public interest, there is a great temptation to disseminate them earlier, and stories like The Divorce Revolution are often picked up by the popular press.

A recent example cited in the article concerns the debate over school choice. Edward Muir of the American Federation of Teachers charges that schools are rushing to implement new education programs whose effectiveness has never been verified by peer-reviewed research. His critics counter that Muir is agitating on behalf of the teachers’ unions and questioning only the programs that the unions oppose.

Max Frankel of the American Association for the Advancement of Science sees peer-review as critical to the acceptance of scientific results. Still, he acknowledges that there may be compelling reasons to shortcut the process, such as “when public health is at stake or Congress is about to enact a sweeping policy change.” Henry Levin of Columbia University disagrees, noting that “one study should never make a difference between moving towards policy or not.”

Other social scientists point out that we cannot reasonably expect that exciting new findings will escape media coverage. In light of this, it was encouraging to read the following letter to the editor in response to the article (14 April 2000, A30):

Reading [the story] brought back memories of my struggles with a required statistics class in my senior year at the University of Wisconsin in 1946.

The one lasting lesson I learned from that class was to read every chart, graph and survey result with skepticism. There may have been problems with the sampling, the questions asked may have been wrong, the information misinterpreted, and researchers may have made human errors, and so on. Your article reconfirms those concerns.

ELINOR POLSTER Shaker Heights, Ohio

“Lucky Numbers – A Casino Chain Finds a Lucrative Niche: The Small Spenders”

by Christina Binkley, The Wall Street Journal, 4 May 2000, A1.

The 1990s saw a boom in casino construction as more and more states moved to legalize gambling. Harrah’s was the first in many locations, only to see its business eroded by flashier new establishments. But continuing investment in ever-more-lavish facilities is an expensive way to compete. Harrah’s has turned to statistics to take a careful look at its existing customer base, particularly the “small spenders,” with an eye towards improving profits.

Harrah’s already had plenty of data available. The casino had been providing “frequent gambler cards” that customers presented before playing in order to receive such perks as free meals, rooms, or show tickets. Meanwhile, the casino was collecting information on their gambling behavior. The original idea was that indexing the give-aways to various levels of gambling would increase spending, but the results were disappointing. Then Harrah’s chairman Phil Satre got a tip from Coca-Cola’s marketing experts, who suggested hiring a chief operating officer with a background in marketing. Satre offered the job to a Harvard Business School professor named Gary Loveman, who brought sophisticated market analysis techniques to Harrah’s data.

The frequent gambler cards collected data on everything from where players lived to how fast they played slot machines. This turned out to be a gold mine for market research. Some of Loveman’s analysis uses the logic of clinical trials. In one test, Harrah’s divided a sample of frequent slot machine players into two groups. The “controls” were offered a standard marketing package worth $125, including a free room, two steak dinners, and $30 in gambling chips. “Treatment group” members were simply offered $60 in chips. Even though the latter was much less expensive for the casino, it produced more gambling. As a result, Harrah’s has now instituted the new promotion scheme and has seen it generate more than twice the profit per person-trip.
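The logic of the comparison is that of a randomized experiment: assign customers to offers at random, then compare an outcome such as per-trip revenue between groups. The sketch below illustrates that logic with entirely hypothetical numbers; the dollar amounts, distributions, and group sizes are our assumptions, not Harrah's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical per-trip gambling revenue (dollars) for two randomly
# assigned offer groups; illustrative only.
control   = rng.gamma(shape=2.0, scale=60.0, size=500)   # $125 package offer
treatment = rng.gamma(shape=2.0, scale=75.0, size=500)    # $60-in-chips offer

t_stat, p_val = stats.ttest_ind(treatment, control, equal_var=False)
print(f"mean revenue, control:   ${control.mean():7.2f}")
print(f"mean revenue, treatment: ${treatment.mean():7.2f}")
print(f"Welch t = {t_stat:.2f}, p = {p_val:.4g}")
```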

Another promotion targeted a group of individuals who lived near casinos and were identified as “avid gamblers” by the speed with which they pushed slot machine buttons. The company tried to encourage back-to-back visits by making cash and food offers that expired in consecutive two-week periods. In the target group, the average number of trips per month jumped from 1.1 to 1.4.

“If Biology Is Ancestry, Are These People Related?”

by Nicholas Wade, The New York Times, 9 April 2000, Sect. 4, p. 4.

“DNA as Detective Again, But on a Biblical Case”

by Walter Goodman, The New York Times, 21 February 2000, E7.

The first article discusses the recent use of DNA fingerprinting to determine the origin of a surname. This new technique uses the fact that, unlike other chromosomes, most of the DNA on the Y chromosome is passed on unchanged from father to son. Mutations can cause small changes in the DNA. Such changes are rare, and as they accumulate they provide a chain that can be followed backwards in time to trace ancestral lineage.

Dr. Bryan Sykes of Oxford University is an expert in using this technique to study ancient populations. Recently, he has formed a company called Oxford Ancestors, which is seeking to patent its application of the technique to trace surnames. (Further details can be found in Sykes’ research article with Catherine Irven: “Surnames and the Y Chromosome,” American Journal of Human Genetics, April 2000, 1417-1419.) Sykes has found that most English surnames have come down from a single original individual. Historically, surnames appeared in England between 1250 and 1450, and often identified trades—Smith is a familiar example. The article focuses on Sykes’ effort to trace his own surname.

Voter registration records list close to 10,000 Sykses in the U.K., many near a single town in Yorkshire. Dr. Sykes sent letters to a random sample of 269 men from three nearby counties, asking them to send him cells brushed from the inside of their cheeks on a cotton swab. He received a total of 61 responses, 48 of which provided usable cells. He found only one common “DNA signature,” which led him to conclude that there had indeed been a single original Mr. Sykes. He also estimated that half of today’s Sykses do not carry this signature, which indicates a “non-paternity event” in their family histories. (Dr. Sykes himself carries the original Sykes signature.)

So far Sykes has analyzed three other surnames and found that each arose from a unique ancestor. He points out that, if this procedure is found to be successful for other names and markers, it would have applications to genealogy and forensics.

The second Times article describes how the technique of using the Y chromosome as a marker has been used to support the claims of the Lemba people of South Africa that they have Jewish heredity. This story was featured in a PBS broadcast entitled “Lost Tribes of Israel” (Nova, WGBH Boston, 22 February 2000). PBS has a web site related to the story: http://www.pbs.org/wgbh/nova/israel/

The same kind of DNA fingerprinting was also used recently to provide evidence that Thomas Jefferson fathered a family with his slave, Sally Hemings.

“Will Juiced Ball Study Yield Fruit?”

by Dan Shaughnessy, The Boston Globe, 11 May 2000, E1.

“Behind-the-Seams Look; Rawlings Throws Open Baseball Plant Doors”

by Gary Mihoces, USA Today, 24 May 2000, 1C.

“Researchers Say Core Has Changed”

by Hal Bodley, USA Today, 26 May 2000, 8C.

In major league baseball this season, home runs are being hit at a record rate. If the trend continues, the year’s total will reach 6254, an increase of more than 13% over last year’s record of 5528. Dramatic hitting production inevitably leads fans to speculate about “juiced” or “lively” balls. Baseball commissioner Bud Selig is not inclined to accept this explanation. He points out several other factors that might be involved: new expansion teams have diluted pitching staffs, batters are getting bigger and stronger, and new ballparks are getting smaller. Finally, this year umpires have been calling pitches high in the strike zone as balls, further favoring batters.

Nevertheless, Selig does not want to overlook any possibility. He has asked Dr. Jim Sherwood, a professor of mechanical engineering at the University of Massachusetts at Lowell, to investigate whether the balls manufactured this year still meet major league specifications. According to the article, the rule book says that a ball traveling 100 mph that hits a wall of 2-inch northern white ash wood is supposed to bounce back at 57 mph. Test results from Sherwood’s lab will be available later this year.

The USA Today articles provide interesting background on the manufacture of baseballs. The first of these articles describes a tour of the factory in Costa Rica where all of the balls used in the major leagues are made. Rawlings owns the factory and has been the sole supplier of the major leagues since 1977. In the factory, wool yarn and cotton string are machine-wound around a solid rubber core. A sample of each day’s balls is tested to measure the “coefficient of restitution” (COR), the ratio of bounce-back speed to pitch speed in a collision with white ash. The article says the major league rule requires a COR between 51.4% and 57.8%. Rawlings aims for the high end of this range and insists there have been no changes in recent years.
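If the rule-book test quoted in the Globe article refers to the same measurement, the two sets of figures are consistent: the coefficient of restitution is simply the ratio of rebound speed to incoming speed,

```latex
\mathrm{COR} \;=\; \frac{v_{\text{rebound}}}{v_{\text{pitch}}}
             \;=\; \frac{57\ \text{mph}}{100\ \text{mph}} \;=\; 0.57 ,
```

which lies near the upper end of the 51.4%-57.8% range that Rawlings targets.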

The rubber cores themselves are made in Batesville, Mississippi, by the Muscle Shoals Rubber Company. This company has been the sole supplier of this product for the entire time that Rawlings has been making the balls (and supplied Spalding, the previous maker, as well). A spokesperson from Muscle Shoals is quoted as saying “We haven’t changed a thing.”

But something has measurably changed, according to the second USA Today article, which reports on a collaboration between Universal Systems of Solon, Ohio, and the energy laboratory at Penn State. Using a CAT scanner designed to test cores in the petroleum industry, the researchers compared the cores used in baseballs in the 1930s with the ones being used today. A spokesperson from Rawlings is quoted as saying that the core hasn’t changed over this period, but the researchers report finding significant differences. Still, they are not ready to say that these differences are responsible for the home run surge.
