Eagle Rock Gospel Singers - What Paul Did (2013)
A new paper published in PLOS Biology argues that neuroscientific research is compromised by a significant bias in the reporting of animal studies, one that greatly exaggerates the prospects of future therapies in humans. While this is
excellent news to anyone who wishes to put an end to animal testing in science,
this study still offers several testimonies to the various maladies afflicting contemporary science. First and foremost is the utter addiction to statistics. The reality of 20th/21st-century experimental science is that if it doesn't have numbers, then it's meaningless. It is completely irrelevant that statistical
methods are changed frequently, even within the same paper, just so they'll
deliver the desired result. It's also irrelevant that the statistical tools
used are often wrong and do not fit the experimental design. It's all beside
the point: put a table rife with numbers and augment it with a line-rich graph
and you're done. As a corollary, statistical power is king: scientists in the medical and life sciences are not troubled by ethical considerations, only by financial and statistical ones. They have to come up with the largest N (group size) that will a) provide adequate statistical power and b) still be affordable. Unfortunately, in most labs around the world, particularly in neuroscience, where monkeys are considered the best animal model, b often trumps a, with authority.
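To make that trade-off concrete: the standard back-of-the-envelope calculation for the per-group N needed to detect a given effect can be sketched as below. This is a normal-approximation sketch, not any lab's actual protocol, and the "medium" effect size d = 0.5 is a made-up illustration:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group N for a two-group comparison of means
    (normal approximation; effect_size is Cohen's d)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided significance threshold
    z_power = z.inv_cdf(power)           # quantile for the desired power
    return ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

# Even a "medium" effect (d = 0.5) at the conventional 80% power
# demands roughly 63 animals per group -- far beyond the budget of
# most primate studies.
print(n_per_group(0.5))
```

Larger effects need fewer animals, which is precisely why underpowered studies are so tempting: hoping for a big effect lets the N stay small and cheap.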
A quasi-solution that is all too popular is a methodological tool called meta-analysis: pooling together a number (preferably large, though more often than not rather small) of published studies to create a larger N. To justify this seemingly scientifically absurd procedure (these are DIFFERENT papers, with different procedures, designs, subjects, and goals), authors present an elaborate scheme of inclusion/exclusion criteria for admitting papers into their analysis.
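The pooling itself is typically done by inverse-variance weighting: precise studies count for more, and the pooled standard error shrinks as if one had run a single bigger study. A minimal sketch, with purely hypothetical numbers:

```python
from math import sqrt

def pool_fixed_effect(effects, std_errors):
    """Fixed-effect meta-analytic pooling via inverse-variance weights."""
    weights = [1 / se ** 2 for se in std_errors]          # precise studies weigh more
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = sqrt(1 / sum(weights))                    # smaller than any single SE
    return pooled, pooled_se

# Three hypothetical studies: pooling mimics a single study with a larger N.
effect, se = pool_fixed_effect([0.4, 0.6, 0.3], [0.20, 0.25, 0.15])
```

The catch the post points at is visible in the signature: the formula happily pools any list of numbers, whether or not the underlying studies were ever comparable.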
A new meta-analysis conducted by researchers from Stanford takes this concept a step further and performs a meta-analysis of 160 previously published meta-analyses, covering roughly a thousand research papers in total. The topic of interest was studies of potential treatments for various human neurological disorders such as MS, Alzheimer's, Parkinson's, stroke, and spinal cord injury. The aim of the paper was to test the validity of the statistical tools used and whether suitable group sizes were chosen. To that end, the authors took the most precise study in each meta-analysis as their benchmark for the plausible effect, and asked whether the number of statistically significant results exceeded what that effect would predict.
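The logic of such an excess-significance test can be sketched roughly as follows. This is a simplified sketch of the idea, not the authors' actual code: take the effect of the most precise study as the plausible true effect, compute each study's power to detect it, and sum those powers to get the expected count of significant results. The numbers below are hypothetical:

```python
from statistics import NormalDist

def expected_significant(plausible_effect, std_errors, alpha=0.05):
    """Expected number of nominally significant studies, given a
    plausible true effect and each study's standard error."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    # Power of each study to detect the plausible effect at its own SE.
    powers = [1 - z.cdf(z_crit - plausible_effect / se) for se in std_errors]
    return sum(powers)

# Hypothetical field: a modest true effect, mostly imprecise studies.
expected = expected_significant(0.3, [0.25, 0.30, 0.20, 0.35, 0.15])
# If, say, 4 of these 5 studies nevertheless report p < 0.05,
# significance is in excess of what the field's power can deliver.
```

When the observed count of significant studies substantially exceeds this expectation, something other than the data is driving the results.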
The main finding, one that should seriously alarm funding agencies, public health policy makers, and the public in general, is that only eight (8!!!!) of the 160 papers covered can claim to have established valid statistical significance.
In addition, almost half of the studies suffered from the basic flaw of a small N. Bear in mind, these are published papers, i.e., they were approved by editors and peer reviewers.
As only 108 papers were deemed "somehow effective", the number of studies claiming statistical significance for positive results was double what it should have been. Such a statistically horrendous skew should immediately raise various ethical concerns. However, the authors are quick to dismiss the possibility of ubiquitous scientific fraud and focus on two other explanations.
The first is that investigators selectively choose the statistical tool that delivers the desired, a priori determined outcome (and that does not constitute fraud?).
The second reflects the immensely strong bias that editors of "prestigious" journals have toward publishing positive results (and preferably novel ones). This creates a reality in which an astronomical amount of data (yes, science mostly produces negative results) never sees the light of day; it is discarded and never shared with the relevant scientific community. Thus, the bulk of trials and experiments simply cannot be included in any analysis.
The authors suggest that these biases are the culprit in the inappropriate promotion of treatments from animal studies into human clinical trials. I suggest that this paper offers an extremely rare, honest glimpse into the way science is performed today. I have to qualify that: the way the BUSINESS of science is conducted today. Those are two profoundly different things, and we (as well as hundreds of thousands of innocent animals) suffer the consequences of this dissonance every day.
Tsilidis KK et al. 2013. Evaluation of excess significance bias in animal studies of neurological diseases. PLoS Biology 11(7): e1001609. doi:10.1371/journal.pbio.1001609