The International Agency for Research on Cancer (IARC) is a well-respected body set up by the World Health Organisation. It has conducted many large epidemiological studies into possible carcinogens. Let's take two of them. We'll call them Product X and Product Y.
There were two major findings for Product X. They were:
Odds ratio: 1.40 (1.03-1.89)
Odds ratio: 1.15 (0.81-1.62)
There were also two major findings for Product Y. They were:
Odds ratio: 0.78 (0.64-0.96)
Odds ratio: 1.16 (0.93-1.44)
You will notice that each study found one small but significant finding and one small but non-significant finding. In the case of Product Y, however, that significant finding suggested a protective effect.
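The rule being applied here can be made explicit: an odds ratio is conventionally called statistically significant when its 95% confidence interval does not include 1.0. A minimal sketch of that reading, using the four findings above (the `classify` helper is illustrative, not from any IARC publication):

```python
def classify(or_value, ci_low, ci_high):
    """Plain-language reading of an odds ratio and its 95% confidence interval."""
    if ci_low > 1.0:
        return "significant increased risk"       # whole interval above 1
    if ci_high < 1.0:
        return "significant protective effect"    # whole interval below 1
    return "not statistically significant"        # interval straddles 1

# The four findings quoted above: (odds ratio, CI lower bound, CI upper bound)
findings = {
    "Product X, finding 1": (1.40, 1.03, 1.89),
    "Product X, finding 2": (1.15, 0.81, 1.62),
    "Product Y, finding 1": (0.78, 0.64, 0.96),
    "Product Y, finding 2": (1.16, 0.93, 1.44),
}

for name, (orv, lo, hi) in findings.items():
    print(f"{name}: OR {orv} ({lo}-{hi}) -> {classify(orv, lo, hi)}")
```

Run against the table, this flags exactly one significant increased risk (Product X's 1.40) and one significant protective effect (Product Y's 0.78), matching the description above.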
None of these findings are particularly strong, but—if you had to pick—you would say that Product X was the most likely to be the real carcinogen, right? After all, both findings for Product X point towards an increased risk, and the larger of the two is not only statistically significant but suggests an excess risk more than twice the size of anything found for Product Y.
But that's not how these findings were reported at all. The WHO issued a press release saying that there was no conclusive evidence that Product X caused cancer and blamed "biases and errors" for the study's findings. The WHO also issued a press release for Product Y, saying that it definitely did cause cancer and blamed weaknesses in the study for its failure to show this more clearly.
Consequently, the BBC reported that Product X "does not appear to increase the risk" of getting cancer, but reported that Product Y represented "a definite, although small, risk" of getting cancer.
So why would the weakest associations be hyped up while the stronger associations were downplayed?
The World Health Organisation has not decided to wipe mobile phones off the face of the earth.