Here are three kinds of avoidable statistics mistakes that I notice in published papers.
No Statistics at All
The most common blunder is not using statistics at all when your paper clearly deals in statistical data. If your paper uses the phrase “we report the average time over 20 runs of the algorithm,” for example, you should probably use statistics.

Here are two easy things that every paper should do when it deals with performance data or anything else that can vary randomly:
First, plot the error bars. In every figure that represents an average, compute the standard error of the mean or just the plain old standard deviation and add little whiskers to each bar. Explain what the error bars mean in the caption.
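For instance, here is a minimal sketch of the plotting side using NumPy and matplotlib; the timing numbers are made up for illustration, not from any real experiment:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical running times (seconds) from 20 runs of each system.
rng = np.random.default_rng(0)
runs = {
    "baseline": rng.normal(4.1, 0.2, 20),
    "our system": rng.normal(3.7, 0.2, 20),
}

names = list(runs)
means = [runs[n].mean() for n in names]
# Standard error of the mean: s / sqrt(N), using the sample std (ddof=1).
sems = [runs[n].std(ddof=1) / np.sqrt(len(runs[n])) for n in names]

plt.bar(names, means, yerr=sems, capsize=4)
plt.ylabel("running time (s)")
plt.title("Mean of 20 runs; whiskers show the standard error of the mean")
plt.savefig("times.pdf")
```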
Second, do a simple statistical test. If you ever say “our system’s average running time is X seconds, which is less than the baseline running time of Y seconds,” you need to show that the difference is statistically significant. Statistical significance tells the reader that the difference you found was more than just “in the noise.”
For most CS papers I read, a really basic test will work: Student’s t-test checks that two averages that look different actually are different. The process is easy. Collect N₁ and N₂ samples from the two conditions, compute the mean X̄ and the standard deviation s for each, and plug them into this formula:
$$t = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{\frac{s_1^2}{N_1} + \frac{s_2^2}{N_2}}}$$
then plug that t into the cumulative distribution function of the t-distribution to get a p-value. If your p-value is below a threshold α that you chose ahead of time (0.05 or 0.01, say), then you have a statistically significant difference. Your favorite numerical library probably already has an implementation that does all the work for you.
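In Python, for example, SciPy will do all of this in one call. Here is a sketch with made-up samples; passing `equal_var=False` selects the Welch variant of the test, which matches the formula above:

```python
import numpy as np
from scipy import stats

# Hypothetical per-run measurements (seconds) for the two conditions.
rng = np.random.default_rng(1)
ours = rng.normal(3.7, 0.2, 20)
baseline = rng.normal(4.1, 0.2, 20)

# equal_var=False: don't assume the two conditions have equal variance.
t, p = stats.ttest_ind(ours, baseline, equal_var=False)

alpha = 0.05  # chosen before looking at the data
if p < alpha:
    print(f"statistically significant: t = {t:.2f}, p = {p:.2g}")
else:
    print(f"not significant (p = {p:.2g}): conclude nothing")
```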
If you’ve taken even an intro stats course, you know all this already! But you might be surprised to learn how many computer scientists don’t. Program committees don’t require that papers use solid statistics, so the literature is full of statistics-free but otherwise-good papers, so standards remain low, and Prof. Ouroboros keeps drawing figures without error bars. Other fields are moving beyond the p-value, and CS isn’t even there yet.
Failure to Reject = Confirmation
When you do use a statistical test in a paper, you need to interpret its results correctly. When your test produces a p-value, here are the correct interpretations:

- If p<α: The difference between our average running time and the baseline’s average running time is statistically significant. Pedantically, we reject the null hypothesis that says that the averages might be the same.
- Otherwise, if p≥α: We conclude nothing at all. Pedantically, we fail to reject that null hypothesis.
Simple statistical tests like the t-test only tell you when averages are different; they can’t tell you when they’re the same. When they fail to find a difference, there are two possible explanations: either there is no difference or you haven’t collected enough data yet. So when a test fails, it could be your fault: if you had run a slightly larger experiment with a slightly larger N, the test might have successfully found the difference. It’s always wrong to conclude that the difference does not exist.
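To make that concrete, here is a small simulation sketch with entirely synthetic numbers: the two conditions really do differ by a small amount, but a t-test on 10 samples per condition usually misses it, while a test on 1,000 samples finds it easily.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two conditions whose true means really do differ (10.0 vs. 10.2),
# buried in noise with standard deviation 1.0.
for n in (10, 1000):
    a = rng.normal(10.0, 1.0, n)
    b = rng.normal(10.2, 1.0, n)
    t, p = stats.ttest_ind(a, b, equal_var=False)
    print(f"N = {n:4d}: p = {p:.3f}")

# Typical output: the N = 10 test fails to reject even though the
# difference exists; the N = 1000 test detects it.
```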
If you want to claim that two means are equal, you’ll need to use a different test where the null hypothesis says that they differ by at least a certain amount. For example, an appropriate one-tailed t-test will do.
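One standard way to package that idea is the “two one-sided tests” (TOST) procedure for equivalence, which is built from exactly the kind of one-tailed t-test mentioned above. Here is a hand-rolled sketch using Welch-style statistics; the margin `delta`, within which you are willing to call two means “the same,” is something you have to choose and justify yourself:

```python
import numpy as np
from scipy import stats

def tost_equivalent(a, b, delta, alpha=0.05):
    """True if the means of a and b are within ±delta of each other,
    according to two one-sided Welch t-tests at level alpha."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    va, vb = a.var(ddof=1) / na, b.var(ddof=1) / nb
    se = np.sqrt(va + vb)
    # Welch-Satterthwaite degrees of freedom.
    df = (va + vb) ** 2 / (va ** 2 / (na - 1) + vb ** 2 / (nb - 1))
    d = a.mean() - b.mean()
    p_lower = stats.t.sf((d + delta) / se, df)   # H0: difference <= -delta
    p_upper = stats.t.cdf((d - delta) / se, df)  # H0: difference >= +delta
    # Equivalence only if both one-sided nulls are rejected.
    return max(p_lower, p_upper) < alpha

# Hypothetical samples: claim the means agree to within 0.5 seconds.
a = [3.6, 3.8, 3.5, 3.7, 3.9, 3.6, 3.5, 3.8, 3.7, 3.6]
b = [3.7, 3.6, 3.8, 3.5, 3.7, 3.8, 3.6, 3.7, 3.5, 3.8]
print(tost_equivalent(a, b, delta=0.5))
```

If you would rather not write this by hand, libraries such as statsmodels ship ready-made TOST routines.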
The Multiple Comparisons Problem
In most ordinary evaluation sections, it’s probably enough to use only a handful of statistical tests to draw one or two bottom-line conclusions. But you might find yourself automatically running an unbounded number of comparisons. Perhaps you have n benchmarks, and you want to compare the running time on each one to a corresponding baseline with a separate statistical test. Or maybe your system works in a feedback loop: it tries one strategy, performs a statistical test to check whether the strategy worked, and starts over with a new strategy otherwise.

Repeated statistical tests can get you into trouble. The problem is that every statistical test has a probability of lying to you. The probability that any single test is wrong is small, but if you do lots of tests, the chance that at least one of them goes wrong grows quickly.
For example, say you choose α=0.05 and run one t-test. When the test succeeds—when it finds a significant difference—it’s telling you that there’s at most an α chance that the difference arose from random chance. In 95 out of 100 parallel universes, your paper found a difference that actually exists. I’d take that bet.
Now, say you run a series of n tests in the scope of one paper. Then every test has an α chance of going wrong. The chance that your paper has more than k errors in it is given by the binomial distribution:
$$1 - \sum_{i=0}^{k} \binom{n}{i} \alpha^{i} (1-\alpha)^{n-i}$$
which grows quickly with the number of tests, n. If you use just 10 tests with α=0.05, for example, your chance of having at least one test go wrong grows to about 40%. If you do 100, the probability is above 99%. At that point, it’s a near certainty that your paper is misreporting some result.
(To compute these probabilities yourself, set k=0 so you get the chance of at least one error. Then the expression above simplifies down to 1−(1−α)ⁿ.)
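Here is a quick sketch of both calculations, the general binomial form via SciPy and the k=0 shortcut, reproducing the 40% and 99% figures above:

```python
from scipy import stats

alpha = 0.05

def p_more_than_k_errors(n, k, alpha=alpha):
    # 1 - P(at most k of the n tests go wrong), errors ~ Binomial(n, alpha).
    return 1 - stats.binom.cdf(k, n, alpha)

for n in (10, 100):
    shortcut = 1 - (1 - alpha) ** n  # chance of at least one error
    print(n, p_more_than_k_errors(n, 0), shortcut)

# Prints roughly 0.40 for n = 10 and 0.994 for n = 100.
```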
This pitfall is called the multiple comparisons problem. If you really need to run lots of tests, all is not lost: there are standard ways to compensate for the increased chance of error. The simplest is the Bonferroni correction, where you reduce your per-test threshold to α/n to preserve an overall α chance of going wrong.
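Applying the correction is a one-liner. Here is a sketch with hypothetical p-values from ten tests in one imaginary paper:

```python
# Hypothetical p-values from n = 10 separate t-tests in one paper.
p_values = [0.003, 0.021, 0.048, 0.012, 0.260,
            0.001, 0.044, 0.090, 0.038, 0.007]

alpha = 0.05
corrected = alpha / len(p_values)  # Bonferroni: compare each p to alpha / n

for i, p in enumerate(p_values):
    verdict = "significant" if p < corrected else "not significant"
    print(f"test {i}: p = {p:.3f} -> {verdict} at corrected alpha = {corrected:.3f}")
```

With ten tests, the per-test threshold drops to 0.005, so several p-values in this made-up example that would have cleared α=0.05 on their own no longer count as significant. That is the price of running many comparisons.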