Aychamo BanBan
The thing that always annoys me is when they study something that likely has zero physiological effect, with no a priori reason to believe it should, but they look at something like 20 different outcomes, and when one of them is statistically significant (even if it's clinically insignificant), they advertise how xxx is amazing for this (useless) outcome. With a significance threshold of p ≤ 0.05, each test of a true null hypothesis has up to a 1-in-20 chance of a false positive, so when they look at 20 or so outcomes, it's more likely than not that at least one will come up significant by chance alone! ("Oh look, farting twice before dinner increases heart rate variability by 5% at 9 months!")

In support of this: one study rarely amounts to anything. A few studies rarely amount to anything. Consider looking at the number of studies with positive, negative, and neutral/equivocal findings. Really, evaluating the sampling, methodological rigor, level of analysis, effect size, and the general body of literature is just a good start. Karl Popper and Thomas Kuhn would also be worthwhile reads.
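To make the multiple-comparisons point concrete, here's a back-of-the-envelope sketch (assuming the 20 outcomes are independent, which real study outcomes usually aren't, but it illustrates the scale of the problem):

```python
# Probability of at least one "statistically significant" result
# when testing 20 truly-null outcomes at alpha = 0.05.
alpha = 0.05       # per-test false-positive rate
n_outcomes = 20    # number of outcomes examined

# P(no false positives) = (1 - alpha)^n, so:
p_at_least_one = 1 - (1 - alpha) ** n_outcomes
print(f"P(at least one false positive) = {p_at_least_one:.2f}")  # ~0.64
```

In other words, under these assumptions a study fishing through 20 null outcomes has roughly a 64% chance of finding *something* to advertise, which is why corrections like Bonferroni (dividing alpha by the number of tests) exist.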