9/16/05

From the official MythBusters website

It's a tough job separating truth from urban legend, but the MythBusters are here to serve. Each week special-effects experts Adam Savage and Jamie Hyneman take on three myths and use modern-day science to show you what's real and what's fiction. That's right, they do more than explain how something may or may not be scientifically possible. Through trial and error they actually demonstrate it.

First, let me say I love this show! I believe that they examine myths quite well the vast, vast majority of the time. However, there are examples of times where their experiments and methodology are less than sound.

Statistics plays a huge role in science, which has been undergoing a statistical revolution for quite some time (Salsburg 2001). Essentially, statistics is the method of the scientific method, so in my opinion a show that claims to present science is required to present sound statistics. I am primarily interested in the statistical aspect of science and will examine instances where the MythBusters (as in the TV show, not any specific people) misused statistical reasoning.

**Episode 28: Is Yawning Contagious?**

The description reads

Using a specially fabricated chamber complete with two-way mirror and a hidden camera, Kari, Scottie and Tory set out to see whether a yawn, like a cold, truly can be caught.

There are not many good things that can be said about the first yawning experiment: there were too many confounding variables. They do, however, deserve major credit for recognizing, from this first experiment, the improvements that needed to be made.

For the second experiment they took a convenience sample of 50 people (which they said was the largest sample size they've *ever* used on MythBusters), which was impressive, and administered the yawning treatment to 2 out of every 3 people. This resulted in 34 people in the treatment group. The control group, the group that didn't receive the yawning treatment, was composed of the remaining 16 people.

The outcome was as follows

| | Treatment | Control | Total |
| --- | --- | --- | --- |
| Yawned | 10 | 4 | 14 |
| Didn't Yawn | 24 | 12 | 36 |
| Total | 34 | 16 | 50 |

The hypothesis being tested was

Yawning treatment has no effect (yawning is independent of treatment)

vs

Yawning treatment leads to more yawning

Now it is time to examine the statistical theory used to analyze a general 2x2 table. There are many analyses: chi-square, Yates corrected chi-square, normal approximation, and so on, but Fisher's Exact Test works for any 2x2 table regardless of cell counts.

A general 2x2 table looks like

| | Treatment | Control | Total |
| --- | --- | --- | --- |
| Outcome | k | R1-k | R1 |
| No Outcome | C1-k | R2-C1+k | R2 |
| Total | C1 | C2 | T |

One can use the hypergeometric distribution to calculate the probability of obtaining tables as extreme or more extreme than the observed one.

P(k) = [(number of ways to choose k from C1) × (number of ways to choose R1-k from C2)] / (number of ways to choose R1 from T)

= [nCr(C1, k) × nCr(C2, R1-k)] / nCr(T, R1)

= (R1! R2! C1! C2!) / [T! k! (R1-k)! (C1-k)! (R2-C1+k)!]

With the marginal totals fixed, simply let k vary from the observed value of 10 up to its maximum of 14, and add up the probabilities.
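To make the summation concrete, here is a minimal sketch in Python, using only the standard library, that carries out this calculation for the observed yawning table:

```python
from math import comb

# Marginal totals from the yawning experiment
R1, R2 = 14, 36   # yawned / didn't yawn
C1, C2 = 34, 16   # treatment / control
T = 50            # total sample size

def hypergeom_prob(k):
    """Probability of a table whose treatment-and-yawned cell is k,
    with all marginal totals held fixed."""
    return comb(C1, k) * comb(C2, R1 - k) / comb(T, R1)

# Sum over tables as extreme or more extreme than the observed k = 10
p_value = sum(hypergeom_prob(k) for k in range(10, 15))
print(round(p_value, 6))  # .512782
```

Letting k run over its entire range 0 to 14 instead would sum to 1, as a probability distribution must.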

I've received comments like 'What, do you expect them to do all of these calculations on the show?' which miss the point entirely, and in two ways. First, in practice such things are done using software (much of it free) and could be done in about a minute, including the time for data entry. Note that I'm just showing the theory here; I don't expect anybody to do Fisher's Exact Test calculations by hand. Second, because MythBusters is a television show, it is not necessary, nor does it make for entertaining television, to show the gory details (of anything) in the actual episode. A __very__ brief description and the conclusion would suffice. For example, check out the 9/05 issue of the journal __Significance__. It has an article by the statistician Martin Bland discussing his experience with a statistical analysis and how it was presented on television. Bland did the analysis for the Horizon homeopathic dilution experiment.
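To back up the 'about a minute' claim: with SciPy installed (an assumption on my part; any statistics package with Fisher's Exact Test would do), the whole analysis is a couple of lines:

```python
from scipy.stats import fisher_exact

# Observed table: rows = yawned / didn't yawn, columns = treatment / control
table = [[10, 4], [24, 12]]

# One-tailed test in the direction "treatment produces more yawning"
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(round(p_value, 6))
```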

Anyway, doing that, one obtains a p-value of .512782 (one-tailed).

Note that the p-value is not less than the traditional cutoff of .05. Therefore, there is no evidence to reject the hypothesis that the yawning treatment and whether a person yawns are independent.
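For what it's worth, the chi-square analysis mentioned earlier points the same way, though with one expected cell count below 5 (the control-group yawners, expected 14 × 16/50 = 4.48) the approximation is shaky, which is exactly why Fisher's Exact Test is preferable here. A sketch, again assuming SciPy is available:

```python
from scipy.stats import chi2_contingency

# Observed table: rows = yawned / didn't yawn, columns = treatment / control
table = [[10, 4], [24, 12]]

# Uncorrected chi-square test of independence (two-sided)
chi2, p_value, dof, expected = chi2_contingency(table, correction=False)
print(dof, round(p_value, 4))  # p-value is nowhere near .05
```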

The MythBusters looked at the percentages, 10/34 ≈ 29% and 4/16 = 25%, and concluded that there *is* evidence to reject the hypothesis, probably because they considered the percentages different enough. That is, they concluded that there is evidence that the yawning treatment has an effect, and said

Given how large our sample was, I'd say it's confirmed

and then other MythBusters chimed in *"confirmed"*.

A proper conclusion based on statistical analysis of their data would be that the myth remains plausible.

The MythBusters often revisit myths, and design improved experiments to test them again based on viewer input. I expect them to revisit the yawning experiment sometime in the future.

For the interested reader, I recommend work done by Robert Provine on scientifically studying yawning. Check out the 11-12/05 issue of __American Scientist__.
