Collective Fraud in Plain Sight: Bad Statistics in Health Science

A letter to the editors of the Journal of the American Medical Association misinterprets its own data (Johnson et al., 1997). The statistical power of a study with a sample size of 100 to detect an effect size of 0.35 standard deviations, using a two-tailed t-test at the 0.05 significance level, is only 0.42. With samples that small, the estimate is more likely to land far from the true parameter than close to it.
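As a sanity check on that power figure, here is a minimal sketch assuming the 100 subjects split evenly into two groups of 50 (the even split is my assumption; the original letter may have used a different design). Depending on software and rounding, you get roughly the quoted value:

```python
from scipy import stats

# Power of a two-sample, two-tailed t-test at alpha = 0.05,
# assuming the 100 subjects split evenly into 50 per group,
# with a true effect size of d = 0.35 standard deviations.
n_per_group = 50
d = 0.35
alpha = 0.05

df = 2 * n_per_group - 2                    # degrees of freedom = 98
ncp = d * (n_per_group / 2) ** 0.5          # noncentrality parameter = 1.75
t_crit = stats.t.ppf(1 - alpha / 2, df)     # two-tailed critical value

# Power = P(|T| > t_crit) under the noncentral t distribution.
power = (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)
print(f"power ≈ {power:.2f}")               # ≈ 0.41, close to the 0.42 quoted
```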
See Cugelman et al., 2011 for an improper interpretation of this chart: the authors ignored statistical power when rating study quality, even though statistical power is an item on the very Downs and Black instrument they used for those quality ratings!

“We would have a radically different HIV/AIDS situation if Republicans and Ronald Reagan just acted better.”

Johnson et al., 2010:
There is a kicker: the considerable positive effects are an artifact of overusing studies on commercial sex workers ("CSWs").
The US odds ratio of 1.41 corresponds to an effect size of 0.189 in Cohen's d units, a small effect. And when the four studies looking at youth are evaluated independently, there is no statistically significant effect.
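That odds-ratio-to-d conversion is easy to reproduce with the standard logit method, d = ln(OR) × √3 / π (Chinn, 2000); a minimal sketch:

```python
import math

# Logit method for converting an odds ratio to Cohen's d:
#   d = ln(OR) * sqrt(3) / pi
# (a standard approximation; see Chinn, 2000)
def odds_ratio_to_d(odds_ratio: float) -> float:
    return math.log(odds_ratio) * math.sqrt(3) / math.pi

print(f"OR = 1.41  ->  d ≈ {odds_ratio_to_d(1.41):.3f}")  # ≈ 0.189
```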
Left: correlation between obesity and the natural log of income, by state and by county, for each year. Right: correlation between the natural log of income and type 2 diabetes (T2D), broken out by sex (the authors did not indicate why no curve for men exists; presumably it harms their case).

In Conclusion…
