Monday, August 19, 2013

Why do journals no longer publish hypotheses without validation?

In the past, journals regularly published hypotheses.

The first Watson and Crick paper was little more than a hypothesis (the first sentence of that paper was: "We wish to suggest a structure for the salt of deoxyribose nucleic acid (D.N.A.)."). So was the Pauling-Corey alpha-helix paper.

The "one-gene, one-enzyme" paper by Beadle and Tatum was a hypothesis. The first DNA coding paper by Gamow was a hypothesis. 

Crick's famed tRNA paper, which introduced the "wobble hypothesis" for the third anticodon position and thereby helped in deciphering the genetic code, was also a hypothesis; although not formally published, it was widely circulated among practitioners.

The central dogma paper by Crick was a hypothesis. The so-called French-flag model of morphogen gradients in developmental biology was a hypothesis by Lewis Wolpert. Crick wrote a paper in Nature in the early 1970s on the probable physical size limit of morphogens, which was entirely a hypothesis (no morphogen had yet been identified). The proposal that eukaryotic chromosome ends must have a special structure (specifically, a hairpin, which some 15 years later was discovered as telomeres) was a hypothesis advanced by Jim Watson in the late 1960s in Nature. In 1964, Robin Holliday proposed the now famous Holliday junction model of DNA recombination, which could be directly tested only some 25 years after the publication of the hypothesis. The second realistic model of DNA recombination, the so-called Meselson-Radding model, published in PNAS, was entirely hypothetical.

In other areas of science, publishing hypotheses was the norm.

Bohr's famous paper on atomic theory was, strictly speaking, a hypothesis (consistent with past data); the general theory of relativity was a hypothesis (confirmed a few years later by observing the bending of starlight past the sun during a total solar eclipse). Schroedinger's equation paper was a hypothesis (the equation cannot be derived). Planck's famous paper that introduced the quantization of energy was a hypothesis.

 I could go on and on. 

In recent times, journal editors and reviewers have, generally and unintentionally, conspired not to publish hypotheses without validation, because of impact-factor considerations. A hypothesis without validation is hard to evaluate, and so it is risky for a journal to publish: it might be proved wrong. If proved wrong, the article would not be cited further, and that would lower the journal's impact factor.

For example, in the early 1970s a paper was published in Nature entitled "A quantum mechanical muscle model" (by C.W.F. McClare), which proposed that actin and myosin molecules generate force through a quantum mechanical "resonance" process. The proposal turned out to be untestable (not incorrect, mind you) and was hardly cited (the author's untimely death by suicide, owing to depression, may also have contributed to the paper's neglect, however).

This need not be the case. For example, the Meselson-Radding model of DNA recombination turned out to be incorrect in general (though there are some specific cases in which it is likely true), and yet it was widely cited because this (ultimately incorrect) model prompted a flurry of experimental and theoretical investigations.

As Karl Popper, the preeminent philosopher of modern science, has shown (see his "Conjectures and Refutations"), hypotheses that are proven wrong are more useful for the progress of science than hypotheses that are difficult to test.

So when the dust settles, we might look back on this age and conclude that the current journal trends might indeed have impeded the progress of science!
