Holger Bösch, Fiona Steinkamp, and Emil Boller reviewed 380 studies of attempts to psychically influence randomly generated sequences of 0’s and 1’s to favor one or the other. Consistently with earlier analyses, they found slightly more results matching the value the volunteer was aiming for. Although the difference in proportions of 0’s and 1’s was only slight, the probability of obtaining such a difference purely by chance was very small. So it looked as if the effect was real: something was leading to an outcome favoring the volunteer’s target. The question was whether the difference was caused by the participants’ psychic powers, or by something else—analogous to the bent coin.

One possibility Bösch and his colleagues suggested was that it could be due to something called publication bias. This is the very real phenomenon that editors of scientific journals are more likely to publish experiments that report positive results than those that report negative results. In random-number-generator experiments of the kind outlined above, a positive result is one that shows a difference in the proportions of 0’s and 1’s in the direction the volunteer was aiming for, while a negative result is one where no such difference was found. Publication bias isn’t due to dishonesty or malevolence on the part of journal editors, but is subconscious, probably arising from the fact that it’s much more exciting to show that something, instead of nothing, has happened.

The fact that publication bias might explain the results doesn’t prove that genuine psychic powers didn’t play a role. But it is another way to account for the results. And then the onus is on those proposing the more unorthodox explanation to show why publication bias cannot account for the results—remember David Hume’s comment that he accepted an explanation only if the alternative seemed less likely.
An interesting book I picked up recently shares some statistical principles and knowledge to explain how things that seem improbable or even impossible can actually happen. I highlighted quite a few examples and passages that I found interesting, and this will be the 1st of 4 excerpts that I’ll be sharing.
As mentioned in this excerpt, do enough studies and experiments testing the same thing, and you’ll eventually find what you’re looking for :p . I remember reading somewhere in another book (sadly before I began taking notes) that if you run enough experiments on thousands of different kinds of drugs, you’ll eventually get at least 1 positive result that you can market, just by statistical probability.
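That "test enough things and something will look positive" idea is easy to see in a quick simulation. Here is a minimal sketch (my own illustration, not from the book): every "drug" in it is completely useless, each trial is just 100 fair coin flips, yet a crude significance cutoff still flags a handful of them as winners. All the numbers (100 patients, the 40/60 cutoff, 1000 trials) are arbitrary choices for the sketch.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def null_trial(n_patients=100):
    """Simulate one trial of a drug that truly does nothing:
    each 'patient' improves with probability 0.5 regardless of treatment.
    Call the result 'positive' if the improvement count strays far enough
    from the expected 50 to look interesting."""
    improved = sum(random.random() < 0.5 for _ in range(n_patients))
    # For 100 fair coin flips, landing at <=40 or >=60 successes happens
    # only a few percent of the time by chance -- a rough stand-in for
    # the usual 5% significance threshold.
    return improved <= 40 or improved >= 60

n_trials = 1000
positives = sum(null_trial() for _ in range(n_trials))
print(f"{positives} of {n_trials} useless drugs still came out 'positive'")
```

Run it and a few dozen of the 1000 useless drugs clear the bar purely by luck; if only those get written up and marketed, the literature looks very different from reality.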
Publication bias is real, and it actually keeps us from knowing the real “base rate”, i.e. how many experiments succeeded versus how many failed. That is why, when you’re looking for studies to back up your daily argument on the internet, you should be looking across multiple studies and doing a literature review. Obviously that is too much work for internet points. This might also explain why everyone is able to find something that supports their statement.
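To make the base-rate distortion concrete, here is another small sketch of mine with entirely made-up numbers: suppose only 10% of experiments on some question genuinely succeed, but journals print 90% of the successes and only 10% of the failures. The base rate you can see in the published record ends up wildly inflated.

```python
import random

random.seed(1)  # fixed seed for reproducibility

# Hypothetical numbers chosen purely for illustration.
TRUE_SUCCESS_RATE = 0.10   # how often the experiment actually succeeds
P_PUBLISH_POSITIVE = 0.90  # chance a success gets published
P_PUBLISH_NEGATIVE = 0.10  # chance a failure gets published

published = []
for _ in range(10_000):
    success = random.random() < TRUE_SUCCESS_RATE
    p_publish = P_PUBLISH_POSITIVE if success else P_PUBLISH_NEGATIVE
    if random.random() < p_publish:
        published.append(success)

observed_rate = sum(published) / len(published)
print(f"true base rate:               {TRUE_SUCCESS_RATE:.0%}")
print(f"base rate in the 'journals':  {observed_rate:.0%}")
```

With these (invented) publishing odds, a 10% true success rate shows up as roughly a coin flip in the published record, which is exactly why a casual search for supporting studies is so misleading.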