
Teaching Bits: A Resource for Teachers of Statistics

Abstract

This column features "bits" of information sampled from a variety of sources that may be of interest to teachers of statistics. Deb abstracts information from the literature on teaching and learning statistics, while Bill summarizes articles from the news and other media that may be used with students to provoke discussions or serve as a basis for classroom activities or student projects. Bill's contributions are derived from Chance News (http://www.dartmouth.edu/~chance/chance_news/news.html). Like Chance News, Bill's contributions are freely redistributable under the terms of the GNU General Public License (http://gnu.via.ecp.fr/copyleft/gpl.html), as published by the Free Software Foundation. We realize that, given the limits of the literature we have access to and the time we have to review it, we may overlook some potential articles for this column, and we therefore encourage you to send us your reviews and suggestions for abstracts.

Research and Resources on Teaching and Learning Statistics

Teaching Statistics: Resources for Undergraduate Statistics

Thomas Moore, Editor (2000). Joint Publication of the American Statistical Association and the Mathematical Association of America (MAA Notes).

This book includes a collection of classic original articles that helped to shape the statistics education reform movement, as well as guidance for others who are working to improve their teaching. It includes descriptions of, and teacher notes on, several of the most innovative and effective projects and products developed in recent years. The book also offers ideas for using real data in teaching, advice on choosing a textbook and on using technology effectively, and guidance on assessment. It can be ordered through the ASA (see the ASA website at http://ww2.amstat.org).

“Challenges of Implementing Innovation”

Barbara S. Edwards (2000). The Mathematics Teacher (Online), v.93, n.9. http://www.nctm.org/mt/2000/12/innovation.html

In this informative paper, Edwards outlines the major steps involved in the process of undergoing educational reform, as well as some of the major obstacles that are likely to be encountered. She cites the latest research on “the process of change” thoroughly. Edwards makes an important point: “A strong desire to change does not by itself ensure a successful innovative effort. Indeed, research has shown that a commitment to change is only one ingredient needed for successful change.” Some of the factors Edwards cites as important to the process of change include a vision of what the change should be (goals), a well-defined context within which the change is to take place, an ability to compare what is practiced to what is envisioned, a means of providing teacher support, and consideration of teachers' past experiences, their beliefs about teaching and learning, and their knowledge of mathematics and pedagogy.

“Are our Teachers Prepared to Provide Instruction in Statistics at the K-12 Levels?”

Christine Franklin (2000). Mathematics Education Dialogues, October 2000, National Council of Teachers of Mathematics.

http://www.nctm.org/dialogues/2000-10/areyour.htm

In this essay, Franklin addresses the critical need for better training and preparation of pre-service and in-service K-12 teachers in the area of statistics. She stresses that teachers must be able to handle the rapid growth in the number of students taking Advanced Placement statistics courses and examinations in high school, noting that the number of AP Statistics exams taken increased from 7,500 in 1997 to 35,000 in 2000. Franklin points out that the reformed statistics curriculum requires teachers to move beyond the traditional formula-and-algorithm approach, a welcome shift that nonetheless presents a problem, since most K-12 teachers believe their backgrounds are inadequate for teaching under the reform guidelines.

“Deciding When to Use Calculators”

Anthony D. Thompson and Stephen L. Sproule (2000). Mathematics Teaching in the Middle School, v.6, n.2.

Abbreviated Abstract: The influence of technology, particularly the calculator, in the middle school classroom has become a compelling issue for both practicing and prospective teachers. The National Council of Teachers of Mathematics (1989) encourages the use of calculators in the middle grades, but teachers face a number of difficulties when they introduce calculators in their classrooms. These teachers ask, “When should I use calculators?” and “What should students know before I allow them to use calculators?” The larger question that teachers often ask is “On what basis do I make the decision to use calculators with my students?” The purpose of this article is to share a framework that we provide to middle school mathematics teachers to help them decide when to use calculators with their students. This framework helps the teacher focus less on the calculator and more directly on his or her own educational goals and the students' needs and abilities.

Teaching Ideas and Applications

“Herman Hollerith: Punching Holes in the Process”

Art Johnson (2000). Mathematics Teaching in the Middle School, v.5, n.9.

What was it like to work for the Census Bureau over one hundred years ago? This short but very interesting biography of Herman Hollerith (1860–1929), the man who designed and developed the first mechanical tabulating system, answers that question. Hollerith was a census statistician whose first task was to analyze the data from the 1880 census. His new tabulating system used punch cards to activate electric counters.

“A Career that Counts”

Art Johnson (2000). Mathematics Teaching in the Middle School, v.5, n.9.

What is it like to work for the Census Bureau today? Johnson interviews Amy Smith, who works at the Administrative Records and Methodology Research Branch of the Population Division of the Census Bureau, and provides insightful information for students in practical, interesting terms. Johnson also includes some group activities that focus on collection and summarization of data relating to the U.S. Census.

“Using Beam and Fulcrum Displays to Explore Data”

David P. Doane and Ronald L. Tracy (2000). The American Statistician, v.54, n.4, pp. 289–290.

Abstract: The beam-and-fulcrum display is a useful complement to the boxplot. It displays the range, mean, standard deviation, and studentized range. It reveals the existence of outliers and permits some assessment of shape. Embellishments to the beam-and-fulcrum diagram can show the item frequency, and/or a confidence interval for the mean. Its intuitive simplicity makes the beam-and-fulcrum an attractive tool for exploratory data analysis (EDA) and classroom instruction.
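
The abstract lists the summary statistics the display encodes; the small sketch below shows how one might compute them (this is not the authors' code, and the data are made up).

```python
import numpy as np

def beam_and_fulcrum_stats(x):
    """Summary statistics underlying a beam-and-fulcrum display:
    the beam spans the range, the fulcrum sits at the mean, and the
    studentized range expresses the range in standard-deviation units."""
    x = np.asarray(x, dtype=float)
    mean = x.mean()
    sd = x.std(ddof=1)
    data_range = x.max() - x.min()
    return {
        "mean": mean,
        "sd": sd,
        "range": data_range,
        "studentized_range": data_range / sd,
    }

# Made-up data with one large value; a big studentized range hints at an outlier
print(beam_and_fulcrum_stats([2, 4, 4, 5, 7, 9, 30]))
```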

“Well-Rounded Figures”

Yves Nievergelt (2001). The College Mathematics Journal, January, 2001.

We always tell students to be careful about rounding in the middle of a complex calculation, but a student might ask whether rounding really makes a big difference. This article discusses the consequences of improper rounding procedures in the context of calculating a variance. An example is provided in which a variance is calculated using three different rounding methods, yielding strikingly different answers: 2.88, 0.51, and even −21.00 (an impossible negative variance).
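
The numerical hazard is easy to reproduce. The sketch below is not the article's example (its data and rounding schemes are not reproduced here); it simply contrasts the stable two-pass variance formula with the one-pass shortcut formula when intermediate results are held in limited precision.

```python
import numpy as np

rng = np.random.default_rng(0)
# Data with a large mean and small spread, stored in single precision
x = (10000 + rng.normal(0, 1, size=50)).astype(np.float32)

n = len(x)
xbar = x.mean(dtype=np.float32)

# Two-pass formula: subtract the mean first, then square (numerically stable)
var_two_pass = np.sum((x - xbar) ** 2, dtype=np.float32) / (n - 1)

# One-pass "shortcut" formula: sum of squares minus n * mean^2
# (prone to catastrophic cancellation when intermediate values are rounded)
sum_sq = np.sum(x * x, dtype=np.float32)
var_shortcut = (sum_sq - n * xbar ** 2) / (n - 1)

print(var_two_pass)   # close to the true variance of about 1
print(var_shortcut)   # can be wildly wrong, sometimes even negative
```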

“Sumgo Here and Sumgo There”

David E. Meel (2000). Mathematics Teaching in the Middle School, v.6, n.4.

Abstract: The idea of “sumgo” was suggested by the game of bingo and the need to illustrate the utility of educational games, help students practice skills, and introduce new concepts. This game was designed to investigate an interesting distribution while practicing a computational skill. As a result, the activity described in this article focuses on the concepts of sample spaces and exact probabilities while providing practice in addition. In designing “sumgo,” I envisioned a mathematics class actively engaged with the game while practicing addition and learning about data interpretation, experimental and theoretical probability, and the consequences of randomness.
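
Assuming the game scores the sum of two ordinary dice (an assumption for illustration; the article's board and number of dice may differ), the exact sample space and probabilities students would investigate look like this:

```python
from collections import Counter
from fractions import Fraction
from itertools import product

# Exact distribution of the sum of two ordinary dice
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
total = sum(counts.values())  # 36 equally likely outcomes
for s in sorted(counts):
    print(s, Fraction(counts[s], total))
```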

“Teaching Probability to Young Children – Part 2”

Cyrilla Bolster (2000). The Statistics Teacher Network, n.55, pp. 5–7.

Part 1 of this article (discussed in the previous issue) helps young children develop the idea of fairness in terms of probability through the use of games. In Part 2, the following issues are examined by looking at games: How do I know if a game is fair or unfair? What am I up against and what can I do about it? How can I be sure a game is random? Can I really predict what is going to happen in a game of chance? How do I decide whether to play a game or not?

“Enriching Students' Mathematical Intuitions with Probability Games and Tree Diagrams”

Leslie Aspinwall and Kenneth L. Shaw (2000). Mathematics Teaching in the Middle School, v.6, n.4.

This article provides some probability games and tasks designed to help students overcome misconceptions and misleading intuitions connected with the concept of “fairness” in outcomes.

“Runs With No Winner in a Lottery”

Richard Iltis (2000). The College Mathematics Journal, November, 2000.

This article puts a new spin on lottery discussions. The premise is that people are more motivated to play the lottery when the likelihood of having to split the winnings is decreased, which occurs when there are more possible numbers to choose from. The article examines how many fewer winners one can expect in that situation, using the Oregon “Big Bucks” lottery as an example.

“The Case of the Missing Lottery Number”

W.D. Kaigh (2001). The College Mathematics Journal, January, 2001.

In 1998, the Arizona state lottery experienced 32 games in a row where the digit “9” was never selected as one of the three digits in the winning sequence. Was this just a situation of random chance, or was there more to it? In this article, the author explains why he believes there was more to it.
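
Under the simplest model of a fair Pick-3 game, three digits per drawing, each independently uniform on 0 through 9 (an assumption for illustration, not a detail taken from the article), the chance that a prespecified digit misses 32 straight drawings is easy to compute:

```python
# Naive check assuming a fair game with independent, uniform digits
p_miss_one_drawing = 0.9 ** 3            # digit "9" absent from all three positions
p_miss_32_drawings = p_miss_one_drawing ** 32
print(p_miss_one_drawing)                 # 0.729
print(p_miss_32_drawings)                 # roughly 4e-5
```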

“The Probability of Winning a Lotto Jackpot Twice”

Emeric T. Noone, Jr. (2000). The Mathematics Teacher (Online), v.93, n. 6.

A question that often comes up in student discussions about probability is the following: how likely is it that someone could win a lottery twice? This article provides some ideas for helping students answer the question by computing the relevant probability and thinking carefully about what it means.

“Using Deming's Funnel Experiment to Demonstrate Effects of Violating Assumptions Underlying Shewhart's Control Charts”

Ross S. Sparks and John B. F. Field (2000). The American Statistician, v.54, n.4, pp. 291–302.

Abbreviated Abstract: Deming's funnel experiment is used to demonstrate the effect of blind use of Shewhart's X-bar (sample mean) and R charts for process data that violate at least one of the assumptions underlying their correct application. Simple graphical methods of checking the assumptions are given. How to correctly apply Shewhart charts to the funnel experiment data is discussed and an application is used to illustrate a solution. This article also outlines how the funnel experiment could be used for training in the correct use of statistical process control charts.
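
For readers unfamiliar with the funnel experiment, here is a minimal simulation sketch (an illustration under simple assumptions, not the authors' code) comparing Deming's Rule 1 (leave the funnel fixed over the target) with Rule 2 (move the funnel to compensate for each deviation); tampering roughly doubles the variance.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
noise = rng.normal(0, 1, size=n)   # random error of each drop around the funnel position

# Rule 1: keep the funnel fixed over the target (position 0)
rule1_drops = noise

# Rule 2: after each drop, move the funnel opposite to the last deviation from target
funnel = 0.0
rule2_drops = np.empty(n)
for i in range(n):
    rule2_drops[i] = funnel + noise[i]
    funnel -= rule2_drops[i]       # "compensate" for the observed deviation

print(rule1_drops.var())  # about 1
print(rule2_drops.var())  # about 2: tampering doubles the variance
```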

“Spinning for Confidence”

Marie A. Revak (2000). The Statistics Teacher Network, n.55, p. 4.

Students create and spin a spinner that contains wedges of different proportions and colors, collecting data on the proportion of spins landing in each wedge (each student's spinner looks exactly the same). Each student constructs confidence intervals, which are presented on the board using a common scale. After pooling the results, the students try to estimate the true proportions; then the actual answers are revealed. Data collection for this project is very swift, and the activity fosters a sound interpretation of confidence intervals and a feeling of data ownership.
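
As a quick preview of what a class might see (the wedge proportion and number of spins below are assumptions, not values from the article), one can simulate each student's interval and count how many capture the truth:

```python
import numpy as np

rng = np.random.default_rng(2)
true_p = 0.3       # assumed true proportion of the spinner taken up by one wedge
n_spins = 50       # assumed number of spins per student
n_students = 30

covered = 0
for _ in range(n_students):
    hits = rng.binomial(n_spins, true_p)
    p_hat = hits / n_spins
    se = np.sqrt(p_hat * (1 - p_hat) / n_spins)
    if p_hat - 1.96 * se <= true_p <= p_hat + 1.96 * se:
        covered += 1

# Roughly 95% of the students' intervals should capture the true proportion
print(f"{covered} of {n_students} intervals cover p = {true_p}")
```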

“Simple and Effective Confidence Intervals for Proportions and Differences of Proportions Result from Adding Two Successes and Two Failures”

Alan Agresti and Brian Caffo (2000). The American Statistician, v.54, n.4, pp. 280–288.

Abbreviated Abstract: The standard confidence intervals for proportions and their differences used in introductory statistics courses have poor performance, the actual coverage probability often being much lower than intended. However, simple adjustments of these intervals based on adding four pseudo observations, half of each type, perform surprisingly well even for small samples. In teaching with these adjusted intervals, one can bypass awkward sample size guidelines and use the same formulas with small and large samples.
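
For a single proportion, the adjustment described in the abstract amounts to adding two successes and two failures before applying the usual Wald formula. Here is a minimal sketch of that single-sample version (the example counts are made up):

```python
import math

def plus_four_interval(successes, n, z=1.96):
    """Adjusted Wald interval for one proportion: add two successes and
    two failures, then use the ordinary normal-approximation formula."""
    p_tilde = (successes + 2) / (n + 4)
    se = math.sqrt(p_tilde * (1 - p_tilde) / (n + 4))
    return p_tilde - z * se, p_tilde + z * se

# Example: 3 successes in 10 trials, a small sample where the usual interval misbehaves
print(plus_four_interval(3, 10))
```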

“Estimation in Discrete Choice Models with Choice-Based Samples”

Donald M. Waldman (2000). The American Statistician, v.54, n.4, pp. 303–306.

Abbreviated Abstract: Students of applied statistics and econometrics need exposure to the estimation problems that arise when ideal conditions are not met. Such problems include heteroscedasticity, measurement error in the variables, and endogenous covariates. One problem that is sometimes overlooked is whether the sample has been independently drawn. This article explores the importance of random sampling in behavioral models of choice. A popular method of data collection in such models is to sample individuals who have made the same choice and then pool several such subsamples. This selection on the dependent variable presents problems in estimation. A weighted maximum likelihood estimator that overcomes the problems caused by the nonrandom nature of the sample is investigated with both a hypothetical and a real example.

Reviews

Book Review: Minitab Handbook, 4th ed. (Ryan and Joiner)

Robert W. Hayden (2000). The Statistics Teacher Network, n.55, pp. 3–4.

Hayden provides a very favorable review of the latest edition of this handbook: “Many feel that it is just the book they have been looking for…it can be very useful to you even if you do not use Minitab in teaching statistics. It is in a class by itself, somewhere between a software manual and a statistics textbook.” Hayden points out that this handbook promotes an understanding of data, and the learning of statistics within the setting of using Minitab to analyze real data sets and answer real questions. A minor distraction is that solutions are not provided for the exercises.

Program Review: The Data Detectives: A Student Simulation in Data Collection and Analysis (The Exchange Network)

Susan J. Bates (2000). The Statistics Teacher Network, n.55, pp. 1–3.

A middle school teacher discusses her positive classroom experience with this research exchange program, which pairs classrooms of students from different geographic locations together (grades 1–3, 4–6, and 7–9) to design, collaborate, and exchange field and descriptive data research. Teaching materials and readiness activities are also included. Bates notes the strong sense of ownership, interest, and discovery-based learning that took place when her class participated in this project: “We worked only two class periods a week on Data Detectives and the students could hardly wait to get back to their partner students to continue each time … I encourage you to consider becoming an exchange partner!”

Software Review: ActivStats (Paul Velleman)

Norman Preston (2001). The College Mathematics Journal, March, 2001.

Preston gives ActivStats a positive review, saying that it does just what its name implies: it teaches statistics in a way that actively involves students.

Topics for Discussion from Current Newspapers and Journals

“Talk About Slim Margins…”

by David Leonhardt, The New York Times, November 12, 2000, “Week in Review,” p. 3.

“Surprise! Elections Have Margins of Error, Too.”

by George Johnson, The New York Times, November 19, 2000, Sect. 4, p. 3.

“We're Measuring Bacteria With a Yardstick.”

by John Allen Paulos, The New York Times, November 22, 2000, p. A27.

These three articles comment on how close the presidential election was. They all echo the theme that the final difference was in some sense within the “margin of error” of our electoral process.

The first draws an analogy with Olympic sports competitions. In Sydney last summer, American swimmers Anthony Ervin and Gary Hall Jr. shared the gold medal in the 50-meter freestyle when each was timed at 21.98 seconds. Splitting it any finer was declared to be unfair after the 1972 Games, where a 400-meter race was decided by mere thousandths of a second. The article declares that “for all meaningful statistical purposes, the Florida vote was a tie.” A difference of 300 votes out of six million is 1 in 20,000, or one-tenth the size of the potential difference between swimmers Ervin and Hall. The article concludes with an (unexplained) estimate of the chance that another race this close will occur in the next century: “the probability is just a handful out of a million.” For comparison, the article notes that this is considerably smaller than the 1-in-1000 chance scientists recently gave for an asteroid colliding with the Earth in 2071. (For more on the asteroid, see the Washington Post reports later in this column.)

The second article compares the election with opinion polling, where we are accustomed to seeing margin of error statements. When the margin of error is larger than the difference measured in a poll, the difference could be attributed to chance variation. The article therefore suggests that the winner of the presidential election was effectively chosen at random. Sources of “random” variation in the election included confusing ballots, machine errors in reading the punch cards, and legal decisions on which recounts to accept. Confusion arising from the “butterfly” ballot was widely publicized. Industry spokesmen stated that the accuracy of the card readers ranged from 99% to 99.9%. This sounds impressive, but in absolute terms it potentially represents tens of thousands of errors statewide, which is more than the margin of victory. One downside of the Electoral College, according to the article, is that it “magnifies” such chance errors. For these reasons, the election might ultimately be a less accurate reflection of the will of the people than a statistically well-designed poll would be. “If we trusted statistics over counting,” writes the author, “we could dispense with elections and just go with the polls.”
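
A back-of-the-envelope comparison makes the analogy concrete; the poll size of 1,000 below is an assumption for illustration, not a figure from the article.

```python
import math

def margin_of_error(n, p=0.5):
    """Approximate 95% margin of error for a simple random sample of size n."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

print(margin_of_error(1000))   # about 0.031, i.e., roughly 3 percentage points
print(300 / 6_000_000)         # Florida margin: 0.00005, or 0.005 percentage points
```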

In the last article, John Allen Paulos (well known as the author of Innumeracy) also argues that the Florida election results amount to a tie. The difference in the certified tally is smaller than the errors obviously present: the tens of thousands of ballots disqualified for double voting, the anomalous total for Buchanan in Palm Beach, the disputed absentee ballots, and so on. The title of the article comes from Paulos' analogy that “measuring the relatively tiny gap in votes between the two candidates is a bit like measuring the lengths of two bacteria with a yardstick. The Florida electoral system, in particular, is incapable of making such fine measurements.” Paulos concludes that we might as well toss a coin to declare a winner.

The New York Times on the Web maintains a site on the 2000 Election at http://www.nytimes.com/pages/politics/elections/index.html

There you can find an interactive guide to the Florida vote, including data maps by county, and a timeline of the Bush margin through the various recounts.

By now, there have been many detailed statistical analyses of the election results. In our last edition of “Teaching Bits,” we provided links to some of them. Here is the URL of another source, which includes brief annotations describing the statistical methods used.

http://www.bestbookmarks.com/election/#links

Or, if you would like to try your own analysis, you can download the election figures from the Florida Department of State:

http://enight.dos.state.fl.us/Report.asp?Date=001107

“Speeding Object May Have a Date with Earth.”

by William Harwood, The Washington Post, November 4, 2000, p. A6.

“Risk to Earth from Object in Space is Lowered; Lab Recalculates Threat of Impact.”

The Washington Post, November 7, 2000, p. A9.

On September 29, astronomers using the Canada-France-Hawaii telescope in Hawaii discovered an object in space moving on a “near-earth orbit.” According to researchers at NASA's Jet Propulsion Laboratory in California, the object, named 2000 SG344, might be a small asteroid or part of a discarded rocket. They estimated that the object's trajectory would bring it within 3.8 million miles of Earth in the year 2030. Given uncertainties in the exact orbit, they calculated a 1-in-500 chance that the object would actually hit the Earth.

The first article quotes Don Yeomans, the manager of NASA's Near-Earth Object program, as saying that 2000 SG344 had the best chance of hitting Earth of any object detected to date. He added that “if future observations show the impact probability is increasing rather than decreasing as we expect, then we'll have to make some decisions as to whether we should mount some mitigating campaign.” As it turned out, the second article reported that the estimated probability of a collision in 2030 had been rather drastically revised downward, to zero! Additional observations of the orbit showed that the object would come no closer than 2.7 million miles to the Earth in 2030, so there is no chance of a collision then. However, there is now an estimated 1-in-1000 chance of a collision in 2071.

“Food News Can Get You Dizzy, So Know What to Swallow.”

by Linda Kulman, US News & World Report, November 13, 2000, pp. 68–72.

This article summarizes a number of cases where the news media have presented conflicting dietary advice, based on the results of epidemiological research. Prominent examples include coffee, eggs, butter/margarine, salt and fiber. Part of the problem is that the public doesn't understand the process by which hypotheses are tested and revised before a scientific consensus is reached. According to Harvard epidemiologist Walter Willett, “If things didn't shift, it would be bad science.” David Murray of the Statistical Assessment Service (STATS) sees a conflict between the timetable of careful research and the deadlines faced by news reporters. He says “While science is contingent and unfinished, journalists want something definitive. They impose closure by the deadline. Out of that, the public thinks they are always changing direction.” Nevertheless, says science writer Gina Kolata of the New York Times, the blame does not rest solely with reporters. She points out that the scientists are themselves quite enthusiastic about their own findings: “They say, ‘I myself am taking folic acid.’ I used to feed off their enthusiasm. But when you see one [study] after another fall, I've become much more of a skeptic.” Another issue is the source of funding for the studies. The article cites research partially funded by the Chocolate Manufacturers Association which found that certain compounds in chocolate may be good for the arteries.

The author of the article recommends that critical readers consider the following questions when evaluating a health study.

  1. Does the study corroborate earlier research?

  2. How big is the claimed effect? Is it attributable to eating reasonable amounts of the food in question?

  3. Does the news story provide real data or only anecdotal evidence?

  4. Does the story offer a biological explanation for the effect?

  5. Were the tests conducted on humans?

  6. Was the study published in an established journal?

  7. How might the sponsorship of the study have affected the reporting?

  8. Read past the headline.

For an explanation of the last item, the article quotes Dr. Marcia Angell, the former editor of the New England Journal of Medicine: “The breakthroughs are in the first paragraph and the caveats are in the fifth.”

“Studies Find No Short-term Links of Cell Phones to Tumors.”

by Thomas H. Maugh II, The Los Angeles Times, December 20, 2000, p. A1.

A cellular phone places a source of radio waves against the user's head, and over the years there has been public speculation that this might increase the risk of brain cancer. Two recent case control studies have found no such risk.

The first of these studies was published in the December 20 issue of the Journal of the American Medical Association (Joshua E. Muscat et al., Handheld cellular telephone use and risk of brain cancer. JAMA, 20 Dec 2000, Vol. 284, No. 23, pp. 3001–3007). The study involved 469 men and women, ages 18 to 80, who were diagnosed with brain cancer between 1994 and 1998. They were compared with 422 people who did not have brain cancer but were matched to the cancer patients by age, sex, race, years of education, and occupation. The second, similar, study appeared in the New England Journal of Medicine (Peter D. Inskip et al., Cellular-telephone use and brain tumors. NEJM, 11 Jan. 2001, Vol. 344, No. 2, pp. 79–86). In this study, 782 patients diagnosed with brain tumors between 1994 and 1998 were compared with 799 people who were admitted to the same hospitals for conditions other than cancer. Neither study found an increased risk of brain cancer among those who used cell phones over a period of two or three years.

Researchers cautioned that data on heavy, long-term cell-phone use are not yet available. However, a large study now underway in Europe should provide such evidence. That study will not be concluded until 2003.

“After Standing Up To Be Counted, Americans Number 281,421,906.”

by Steven A. Holmes, The New York Times, December 29, 2000, p. A1.

Figures from the 2000 Census have been announced, giving the US population as 281,421,906. This is about 6 million more than the estimate of 275,843,000 that the Census Bureau made on October 1. Debate continues as to whether statistical adjustment would give more accurate figures. Republicans argue that the higher than expected total shows that efforts to improve traditional counting, including an advertising campaign to encourage compliance, have paid off. Kenneth Blackwell of Ohio, who co-chairs a board that monitors the Census, said: “We may have a situation where the differential undercount is wiped out.” But Census Director Kenneth Prewitt was more cautious, commenting that “There is no way I can tell you today that these numbers are accurate. We are going to work these data backwards and forwards to find out how accurate we are, and then we're going to tell you.”

Last year the Supreme Court ruled that statistically adjusted data could not be used for apportioning seats in the House of Representatives. Thus the overall impact on the Congress seated in 2003 is already known. A total of 12 seats will shift in the reapportionment, with ten states losing seats and eight states gaining (for example, New York will lose two and California will gain one). On the other hand, the article reports that the Court did not rule on whether states could use statistically adjusted data when redrawing their own congressional districts. Census officials are expected to announce in February whether they believe that sample survey data from some 300,000 households should be used for this purpose. We can expect further political debate when the announcement comes.

“Under Suspicion: The Fugitive Science of Criminal Justice.”

by Atul Gawande, The New Yorker, 8 January 2001, pp. 50–53.

“Coins and Confused Eyewitnesses: Calculating the Probability of Picking the Wrong Guy.”

by John Allen Paulos, “Who's Counting,” ABCNEWS.com, 1 February 2001. http://www.abcnews.go.com/sections/scitech/WhosCounting/whoscounting.html

These stories highlight the difficulties in the use of police lineups for identifying criminals. The main point is that the ability of eyewitnesses to correctly identify a suspect depends critically on such factors as the make-up of the lineup, whether the suspects are presented sequentially or simultaneously, and the information provided to the witness, in particular whether the actual culprit is in the lineup. Of particular concern is the false positive rate, that is, the chance that an eyewitness will incorrectly implicate a suspect. The New Yorker article states that “in a study of sixty-three DNA exonerations of wrongfully convicted people, fifty-three involved witnesses making a mistaken identification, and almost invariably they had viewed a lineup in which the actual perpetrator was not present.”

The New Yorker article summarizes the findings of Gary Wells, a psychologist at Iowa State University, who has extensively studied the eyewitness problem. His web page http://psych-server.iastate.edu/faculty/gwells/homepage.htm includes links to many resources on the subject. There is also an online demonstration where you can try to pick a suspect out of a photo lineup after viewing a security camera videotape.

Paulos' piece for ABCNEWS.com illustrates the problem via a thought experiment on coin flipping. You are presented with a “lineup” of three pennies and are told that two are fair and one has a 75% chance of coming up heads. You previously observed that one of these pennies came up heads three times in a row. If you identify this one as the “culprit,” what is the chance you are right? Using Bayes' theorem, Paulos shows that the answer is 63%. He notes that this is not out of line with actual experience in police lineups, where “the probability of a correct identification…is frequently as low as 60 percent.” He adds that “what's worse, innocents in the lineup are picked up to 20 percent or more of the time…”
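
Paulos' 63% figure can be checked directly with Bayes' theorem (a sketch of the arithmetic, not Paulos' own code):

```python
from fractions import Fraction

prior = Fraction(1, 3)                # each penny equally likely to be the biased one
p_hhh_biased = Fraction(3, 4) ** 3    # 27/64: chance the biased coin shows HHH
p_hhh_fair = Fraction(1, 2) ** 3      # 1/8:   chance a fair coin shows HHH

# Posterior probability that the penny showing HHH is the biased one
posterior = (prior * p_hhh_biased) / (prior * p_hhh_biased + 2 * prior * p_hhh_fair)
print(posterior, float(posterior))    # 27/43, about 0.63
```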
