I recently peeked at a post titled Three myths about scientific peer review on Michael Nielsen's blog (by way of a post on Kjerstin's blog). Now; Michael Nielsen is undoubtedly a very smart and knowledgeable guy, and I'm taking nothing away from his professional accomplishments - within his actual profession of theoretical physics and quantum computing (he is a self-professed "pioneer of quantum computing") - when I say that this particular post failed to impress me in a most spectacular manner. With his post, Nielsen aims to "debunk three widely-believed myths about peer review, myths which can derail sensible discussion of the future of peer review." The three myths are as follows:
Myth number 1: Scientists have always used peer review.
While many scientists believe that peer review has been widely employed since early in the history of science, this is false, and worse still, most scientific journals didn't routinely use peer review until the mid-20th century, Nielsen states. He then goes on to mention examples to "illustrate the point" (which point is meant to be illustrated remains somewhat unclear), including the fact that most of the Great Albert Einstein's papers did not pass through peer review. As a matter of fact, it might be that only one of his papers went through a peer-review process, and this review came back negative. Outraged, Einstein wrote a scathing letter to the editor of the journal, and submitted his work elsewhere.
A couple of things with this example. First of all; I don't recall having ever heard anyone use "Oh yeah? Well... scientists have ALWAYS used peer review" as an argument either for or against peer review. As a matter of fact, I fail to see the merit of using such an argument. The fact that something has been around forever (even though it really hasn't in this case) hardly qualifies as the sole reason for anything.
Second; Albert Einstein was a rare scientific mind, in all likelihood ranking among the most influential scientists in our history. While I'm sure that many scientists would like to think that they are close to his league, the odds are overwhelmingly against them being anywhere near his level. "Peer" means "one that is of equal standing with another". Let me ask you a rhetorical question: Do you expect a peer review system to work particularly well for a singular mind and statistical outlier like Albert Einstein? Personally, I would prefer a system which works for the vast majority of submitted work and has a somewhat lower reliability for values with large deviations from the mean. Pretty much anyone working with "real" data knows that an algorithm designed to accommodate the extremes and outliers is little more than a model of noise.
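To make the statistics point a bit more concrete, here is a minimal sketch in Python using made-up numbers (nothing to do with any real review data): a model flexible enough to chase a couple of extreme points ends up modelling noise, while a simple fit to the bulk of the data recovers the underlying trend.

import numpy as np

rng = np.random.default_rng(0)

x = np.linspace(-1.0, 1.0, 20)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.2, size=x.size)  # well-behaved "bulk" data
y[[4, 14]] += np.array([8.0, -9.0])                    # two extreme outliers

bulk = np.ones(x.size, dtype=bool)
bulk[[4, 14]] = False

line_fit = np.polyfit(x[bulk], y[bulk], deg=1)   # simple model, fit to the bulk
chasing_fit = np.polyfit(x, y, deg=15)           # flexible model that chases the outliers

grid = np.linspace(-1.0, 1.0, 400)
print("line fit (slope, intercept):", line_fit)  # close to the underlying 2x + 1
print("swing of the flexible fit:", np.ptp(np.polyval(chasing_fit, grid)))  # far larger than the data warrant

The flexible fit oscillates wildly between the points it was forced to accommodate, which is exactly the "model of noise" problem.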
Third; the reviewer was correct and Einstein was wrong in this (rare) instance. So; this is actually an example of the peer review process working like a charm - even the work of Albert Einstein could benefit from peer review (not that I'm expecting this to be a robust trend). Using Albert Einstein in an example about peer review....unimpressive.
Myth number 2: Peer review is reliable.
Here, Nielsen starts off with the time-tested "every scientist has gotten bad reviews" in a slightly new wrapping: "Every scientist has a story (or ten) about how they were poorly treated by peer review - the important paper that was unfairly rejected, or the silly editor who ignored their sage advice as a referee. Despite this, many strongly presume that the system works “pretty well”, overall."
Ya think? Now; I have personally complained about - what I perceive to be - poor reviews on this blog. Funny thing is; I typically don't make a post every time I get solid, constructive feedback, which actually happens most of the time. I also tend to make blog posts about things which are at least slightly outside of the norm. Moreover, if every scientist has a story about poor treatment, then that represents a very small fraction of the number of publications said scientist ought to have if he or she is anywhere close to productive. And you know what; every student has a tear-dripping story (or ten) about getting an undeservedly poor grade. Every actor or musician can give you a tale involving unfair reviews. Most people feel they pay too much tax, etc. You can't please everyone, and every system is flawed.
Nielsen goes on to say that there isn't much evidence to suggest that the peer review system "works pretty well", and describes a rather famous example of papers about organic superconductors by German physicist Jan Hendrik Schoen in 2000 and 2001 which were published (after passing peer review) in very prestigious journals. Eventually, the findings appeared too good to be true, and it turned out that much of Schoen's work was fraudulent. Natch, a bunch of his papers were retracted, and there was a big hubbub. I remember this very well; early in 2001 my Ph.D. advisor put me - the newest addition to his research group - on a project aiming to investigate surface-immobilized DNA structures for their possible use as superconductors, based on these papers. For me, that was a waste of time to the tune of two months. For the grad student initially assigned to the project, it was the final straw - an exit from grad school by way of an M.Sc.
Sure; it was really bad that so many manuscripts slipped past the reviewers. Good job it was caught. You know what, though? It wouldn't have caused such a big stir if slip-ups like that weren't very rare. Every time some douchebags like Sudbø, the cold fusion people*, or Schoen get caught, there is a big media production, and an outrage in the scientific community. This - to me - is comforting. Does it mean that the peer review system has flaws? Sure - plenty of them. However, this does not amount to much unless you've got an alternative which has been reliably documented to outperform the existing system. When that time comes, I'm all about the new system. I'm highly susceptible to facts and logic. Bandwagon-jumping - not so much.
Myth number 3: Peer review is the way we determine what's right and wrong in science.
I don't really know what to say about this alleged myth, as I've never heard or read about anyone using this one as an argument pro or con peer review either.
There's plenty of room for improvement when it comes to the peer review process. For example, it's a pretty obvious trend that the time between submission and acceptance of a manuscript is significantly shorter when big-shot scientists are present on the author list. Any ol' WOS search can confirm this. To a certain extent, this is due to the fact that big-shot scientists are very good at what they do. However, the productivity of some of these prominent scientists, and the rather large number of members in their research groups, strongly suggest that their direct involvement with at least some of the papers emanating from their groups is minimal. Still, reviewers might be more likely to give the benefit of the doubt to a paper from a demonstrably excellent scientist from an Ivy League university than to a work by Dr. Carlos Bandidos from the Dept. of Mexican Arctic and Alien Studies, Universidad de México. Is it fair? Arguably not. Is this particular bug fixable? You betcha. A double-blind system where the authors don't know the identities of the reviewers and vice versa is used by some journals, and unless it's a very narrow field with few players, I believe this levels the playing field. When I act as a reviewer, I don't look at the author list until after I've read the manuscript and made up my mind regarding the quality of the work. I must admit that I've been surprised on some occasions by the abysmal quality of manuscripts coming out of some pretty famous labs.
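For anyone who'd rather check that turnaround claim than take my word for it, here is a rough Python sketch of the sort of analysis I have in mind. The file name, the column names and the "big-shot author" flag are placeholders for whatever you export from a WOS search - not a data set I'm providing.

import pandas as pd
from scipy.stats import mannwhitneyu

# Hypothetical export with received/accepted dates and a flag for prominent authors.
records = pd.read_csv("wos_export.csv", parse_dates=["received", "accepted"])
records["review_days"] = (records["accepted"] - records["received"]).dt.days

big_shot = records.loc[records["has_big_shot_author"], "review_days"]
others = records.loc[~records["has_big_shot_author"], "review_days"]

# One-sided test of whether the big-shot papers tend to be handled faster.
stat, p_value = mannwhitneyu(big_shot, others, alternative="less")
print(f"median review time: big-shot {big_shot.median():.0f} days, "
      f"others {others.median():.0f} days, p = {p_value:.3g}")

A non-parametric test is used here because review times tend to be heavily skewed.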
There are plenty of unsatisfactory aspects in the peer review system. However, I have yet to see a system which outperforms it.
*Edit: By "the cold fusion people" I am referring to the famous "work" of Fleischmann and Pons in the late 80's which didn't turn out to be reproducible in any way, shape or form.
26 comments:
..every student has a tear-dripping story (or ten) about getting an undeservedly poor grade...
Naaa. I had more than ten such grades in my university days. Just another solid proof that geniuses won't get any recognition in their lifetime.
;-)
Seriously, what the dude says is that the peer review system isn't perfect. So? Did anybody think it was? They are hardly myths at all.
You have this completely backward! Peer-review is highly effective and reliable. In the case of cold fusion, hundreds of top-notch replications of cold fusion have passed peer-review and have been published in major journals, such as J. Electroanal. Chem. Therefore we can be certain that cold fusion is real.
You will not learn this fact by reading the New York Times, Scientific American or Wikipedia, because they do not apply peer-review or any other objective method. They repeat nonsensical rumors about cold fusion without fact-checking them.
When peer-review is done correctly and objectively it helps separate fact from fiction. In the case of cold fusion, you made a drastic mistake because you did not read authoritative, peer-reviewed scientific literature. You just proved yourself wrong.
I suggest you read something about cold fusion before commenting on it. See:
http://www.lenr-canr.org/
Seriously, what the dude says is that the peer review system isn't perfect. So? Did anybody think it was? They are hardly myths at all.
Exactly. Two of the myths were hardly relevant to the quality of the process, and the "myth of reliability" was not exactly debunked.
I have yet to hear someone claim that peer review is perfect. Yet for all its flaws, the most ardent critics never seem to present an alternative.
Jed: Assuming that you're being a wise-ass, I like your style.
Perhaps I am being a wise ass . . . But it is a matter of fact that hundreds of peer-reviewed papers confirmed cold fusion. You can confirm that in a university or National Laboratory library. (I have a collection of 1,200 peer-reviewed cold fusion papers mainly from the Los Alamos library.) No one can argue that these papers do not exist. You might argue that the ~2,000 authors of these papers are all wrong, but I think that is impossible. No widely replicated experiment in history has ever been wrong. If such errors could escape undetected, the experimental method itself would not work.
Regarding your comment:
"I have yet to hear someone claim that peer review is perfect. Yet for all its flaws, the most ardent critics never seem to present an alternative."
Exactly right. It is better than the alternatives. It has been abused at times, and used as a method to prevent publication of good papers. But when peer-review is done objectively, according to the rules, it is helpful. I have seen cold fusion papers much improved by the process. Cold fusion researcher Prof. Melvin Miles (Distinguished Fellow of China Lake, ret.) is one of the most careful, painstaking people I know, but he told me the peer-review process has been invaluable for finding errors and improving his papers. Martin Fleischmann, Julian Schwinger and others also told me this.
Cold fusion researchers favor peer-review because they tend to be conservative, distinguished, mainstream, fuddy-duddy professors. Otherwise they would not get funded, because the research is controversial, thanks to ignorant people who attack it.
Having said all of that in favor of peer review, let me agree that it has been abused in some cases. It has been used to destroy academic freedom and suppress information. Schwinger wrote:
"The pressure for conformity is enormous. I have experienced it in editors rejection of submitted papers, based on venomous criticism of anonymous referees. The replacement of impartial reviewing by censorship will be the death of science."
http://lenr-canr.org/acrobat/SchwingerJcoldfusiona.pdf
Even the best institutions can be brought down by stupid and evil people. Free market capitalism, for example, is enormously beneficial. But recent events on Wall Street and in the banks show that when idiots end up in charge, they can abuse capitalism and lose trillions of dollars.
The experimental method and peer-review are superb mechanisms for discovering the truth, and in the case of cold fusion they functioned exactly the way they were supposed to. By 1991 there was no question that cold fusion is a real nuclear effect and that it produces temperatures and power density as high as a uranium fission reactor core. Unfortunately, idiots have abused the peer-review system and ignored the scientific method, and they have largely prevented funding for this research. You can't blame the system for this, or for the sub-prime mortgage disaster. You have to blame the culprits who are at fault.
Perhaps I am being a wise ass . . . But it is a matter of fact that hundreds of peer-reviewed papers confirmed cold fusion.
Empirically or theoretically? Because if it is the former and you don't mind me asking, what is the main problem? Upscaling or yield?
By 1991 there was no question that cold fusion is a real nuclear effect and that it produces temperatures and power density as high as a uranium fission reactor core. Unfortunately, idiots have abused the peer-review system and ignored the scientific method, and they have largely prevented funding for this research. You can't blame the system for this, or for the sub-prime mortgage disaster. You have to blame the culprits who are at fault.
Are you suggesting that Fleischmann's findings are indeed reproducible and that all the other groups who failed to get the same results are simply incompetent?
You wrote:
"Empirically or theoretically?
Empirically. There is still no widely accepted theoretical explanation, although control factors and necessary conditions are much better understood.
"Because if it is the former and you don't mind me asking, what is the main problem? Upscaling or yield?"
In my opinion, the main problems are materials and control. This is similar to other surface catalysis effects. The Italian Nat. Labs and the NRL are making progress identifying effective materials, but it is very time consuming and difficult. The control factors are known but difficult to achieve. That is to say, when you reach a certain level of loading, open circuit voltage, flux and so on, you can be sure the effect will turn on, but the cathode is likely to disintegrate before you reach those conditions. That's the materials problem rearing its head again, as Palin would say.
"Are you suggesting that Fleischmann's findings are indeed reproducable . . ."
Absolutely. By September 1990, electrochemist Fritz Will tallied replication reports from 92 groups in 10 countries. In his 2007 book, Storms tallied ~200 groups measuring excess heat; ~70 reporting tritium and so on (with some overlap). These are all quality replications at mainstream labs; Will and Storms ignore marginal claims -- of which there are many.
". . . and that all the other groups who failed to get the same results are simply incompetent?"
Not at all! This is a very difficult experiment, as Fleischmann and others said in March 1989. It was, at that time, roughly as difficult as making a transistor was in 1948.
(Actually the failure rate for transistors even in the mid 1950s was higher than for cold fusion. In both cases, they get around the limitation by making hundreds of devices: transistors or cold fusion cathodes, and then winnowing out the ones that work. This takes months.)
There have been fleeting observations of cold fusion since 1927, reported by Paneth and Peters, Mizuno and others. Mainly charged particles and bursts of heat I think. Before they announced, Fleischmann and Pons worked for 3 or 4 years to the point where they could make the effect work most of the time, at far higher levels than anyone previously achieved. They are among the crème de la crème of electrochemists. (Fleischmann was the president of the Electrochem. Soc., author of the authoritative textbook, FRS, etc.) Oriani, among others, has told me this is the most difficult experiment he has done in his 50-year career. When they read the particulars they were astounded by the loading levels, never mind the nuclear effect.
I tallied published reports from mainstream U.S. and Canadian labs that failed to replicate in 1989. There were 20 groups with 135 people. However, they were all nuclear scientists, and I do not think they consulted with or employed experts in material science or electrochemistry. It would have been a miracle if they had succeeded. They did not realize what they were up against.
These are well-documented experiments and the reasons they failed are obvious in retrospect. Much less was known in 1989. This is normal for groundbreaking research.
The nuclear scientists working on their own resembled electrochemists trying to build a Tokamak reactor. The groups that succeeded, at Los Alamos, China Lake, BARC and elsewhere were multidisciplinary. They were mainly electrochemists who got help from nuclear people measuring tritium, neutrons and so on, and experts in calorimetry. Very few people had the full set of necessary skills.
Fleischmann famously botched the neutron measurements. Fortunately, he is an expert in calorimetry, and as he says, heat is the principal signature of the reaction. (Not all electrochemists are experts in calorimetry, but he is.)
In my opinion, the main problems are materials and control. This is similar to other surface catalysis effects.
I see. Not sure that I agree with the surface catalysis comparison though, as the main challenges here often manifest themselves as structural and dispersion issues (surface area, effective pore diameters, adequate dispersion of catalyst,..)
The control factors are known but difficult to achieve. That is to say, when you reach a certain level of loading, open circuit voltage, flux and so on, you can be sure the effect will turn on, but the cathode is likely to disintegrate before you reach those conditions.
So the system is likely to break down before or just when you reach operational conditions? Doesn't that put a hefty dent in the reproducibility? Also; how do these devices compare to a tokamak with respect to current development?
In his 2007 book, Storms tallied ~200 groups measuring excess heat; ~70 reporting tritium and so on (with some overlap). These are all quality replications at mainstream labs; Will and Storms ignore marginal claims -- of which there are many.
What type of S/N are we talking about for this "excess" heat and for how long?
Not at all! This is a very difficult experiment, as Fleischmann and others said in March 1989. It was, at that time, roughly as difficult as making a transistor was in 1948.
Well; then I'm sure you'll agree that it at least partially violates the reproducibility criterion in the peer review system if only a minute fraction of theoretically qualified scientists can produce the same effects even with the same equipment?
You wrote:
". . . So the system is likely to break down before or just when you reach operational conditions?"
Not if you select the right materials and do the experiment correctly. It only breaks down when you load too fast, or use an alloy that cannot hold together.
"Doesn't that put a hefty dent in the reproducibility?"
That depends on how you measure reproducibility. As I said, it resembles the situation with some types of transistors in 1955, when only 2 or 3 per hundred worked. Suppose you pre-test 100 cold fusion cathodes and find 3 with the necessary characteristics. These 3 will work every time. Is that 3% reproducibility, or 100%?
"Also; how do these devices compare to a tokamak with respect to current development?"
It is hard to compare the two. A Tokamak requires far more energy input than it outputs. (I do not know the ratio, but it is large.) It outputs very high power ~10 MW for a fraction of a second; a total of say ~6 MJ. A Tokamak is the size of a two-story house.
Cold fusion cathodes run with varying amounts of input depending on the technique. Input ranges from ~3 times the output, to ~1/20 of the output, to zero input in a "fully ignited" reaction (with heat after death or gas loading). Cold fusion cathodes are the size of a coin and they put out anywhere from a fraction of a watt to ~100 W, but they keep going for weeks or months. The record is around 500 MJ, I think.
The big difference is that a Tokamak costs $1 billion, and the ITER one is supposed to cost $5 billion, whereas a cold fusion experiment costs about $50,000, and sometimes a few million bucks. (The ones at Mitsubishi and Toyota cost about $10 million I think. Some of them use linear accelerators which are huge, of course, and cost hundreds of millions.)
"What type of S/N are we talking about for this 'excess' heat and for how long?"
That depends on the calorimeter, power levels and so on. SRI has a high precision one that produces a high s/n ratio even with a fraction of a watt output. Obviously the ratio is high when there is no input power and 100 W of output. That's palpable.
I do not keep track of record duration but I guess it is about 5 months. Toyota ran one for: 158 days, 101 W average, 1.5 times input, 294 MJ total. (Roulette et al., 1996) They will run indefinitely, but researchers turn them off and cut up the cathodes for mass spectrometry and other analysis, which is the hard part of the experiment.
Determining the s/n ratio for things like tritium is entirely different from excess heat. It ranges from ~60 times background, to several million times background in a few cases. Plus there is complexity because many of these experiments are "warm fusion." That is to say, modified or partial plasma fusion, that produces a far greater reaction than predicted by conventional theory. See for example Claytor (LANL). The autoradiograph of the BARC Ti cathode shown at LENR-CANR.org had 10E16 atoms of tritium where theory predicts at most 10E9. I am not sure what you would call that s/n ratio.
It is complicated.
"Well; then I'm sure you'll agree that it at least partially violates the reproducibility criterion in the peer review system if only a minute fraction of theoretically qualified scientists can produce the same effects even with the same equipment?"
That is not the case at all! Most of the world's top electrochemists with relevant experience tried doing cold fusion within a year or two, and most of them succeeded. (There are not many electrochemists in the world.)
The 20 groups of nuclear physicists in the U.S. who failed to replicate had no experience in electrochemistry, material science or calorimetry. They were not a bit qualified -- not "theoretically" or any other way. They had no idea how to do this experiment, and no idea how difficult it is. That isn't their fault! As I said, one of the world's best electrochemists (Fleischmann) did not know how to measure neutrons, something any undergrad nuclear sci. student can do.
The equipment was not the same. That was one of the major problems. The cathodes were totally different, and there is no chance they would have worked. Most important, these 20 groups of nuclear scientists tested roughly 20 cathodes, usually for a short duration, whereas electrochemists tested several or hundreds simultaneously (in an array), and they knew you must wait a few weeks before the cathode even loads. They measured loading, OCV and other control parameters that the nuclear physicists had never heard of.
Nuclear scientists are good at what they do, but what they do is far different from what electrochemists do, and vice versa. Expecting one group to master the other's job after reading a short paper in March 1989 is ridiculous.
As I said, you have to have experts from both groups work together to do this experiment. At laboratories where they did work together, the effect was replicated, often at high s/n ratios.
In 1989, there were a dozen or so people in the world who just happened to have degrees in electrochem and nuclear physics or a related field. People such as McKubre, Bockris, Miles, Oriani, Storms and Mizuno. They were doing things such as using electrochemical implantation to study the effect of hydrogen in nuclear reactor walls -- which happens to involve both disciplines. They knew exactly how to do this experiment. They replicated after 6 months to a year. Note that it takes 6 months just to set up the experiment, calibrate, pre-test cathodes, etc. (A few of them happened to have the right equipment all set up to go.) The nuclear scientists and other non-experts thought they could do this in 6 days or a few weeks. Any electrochemist could have told them they were wrong, but they did not think to ask an electrochemist.
I wrote:
"Suppose you pre-test 100 cold fusion cathodes and find 3 with the necessary characteristics. These 3 will work every time. Is that 3% reproducibility, or 100%?"
I should point out that the situation has improved over the last several years. Nowadays the Italian Nat. Nuc. Labs (ENEA), the NRL and some others in Israel and the UK can make a cathode that will work the first time and every time. (If you know how to make cathodes work, that is.)
In other words, the reproducibility problem has been largely solved by these people. Unfortunately their metallurgical methods are complicated and have not been widely disseminated or taught. I wish that I could reprint more of their work at LENR-CANR but alas they are copyrighted, and several are only in Italian as far as I know.
Plus there are several "warm fusion" methods such as ion beam loading with linear accelerators that have always worked 100% of the time, as far as I know. They are not practical, and you can't measure heat, but the nuclear effects are clear and reproducible. Obviously, this technique is performed by nuclear scientists -- not electrochemists! It has nothing to do with electrochem. The metal loads up in a fraction of a second. They tried this soon after they heard about cold fusion, and quickly confirmed anomalies in metal lattices. This is an example of nuclear experts doing what they are trained to do, and succeeding.
That depends on how you measure reproducibility. As I said, it resembles the situation with some types of transistors in 1955, when only 2 or 3 per hundred worked. Suppose you pre-test 100 cold fusion cathodes and find 3 with the necessary characteristics. These 3 will work every time. Is that 3% reproducibility, or 100%?
In my opinion, it wouldn't be reproducible in the peer review sense unless those 3% were also spread among a number of different research groups, so I'd go with the lower number as the upper limit of reproducibility.
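To put rough numbers on that distinction, here is a minimal Python sketch using the hypothetical 3-out-of-100 pre-testing figures from your example (not measured data). The point is that the "conditional" 3-out-of-3 figure carries almost no statistical weight on its own.

from math import sqrt

def wilson_interval(successes: int, trials: int, z: float = 1.96):
    """Approximate 95% Wilson score interval for a binomial success rate."""
    p = successes / trials
    denom = 1.0 + z**2 / trials
    centre = (p + z**2 / (2 * trials)) / denom
    half = z * sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2)) / denom
    return centre - half, centre + half

# "Per-attempt" view: 3 working cathodes out of 100 tested.
print("3 of 100:", wilson_interval(3, 100))  # roughly 1% to 8%
# "Conditional" view: 3 successes out of the 3 cathodes that passed pre-testing.
print("3 of 3:  ", wilson_interval(3, 3))    # roughly 44% to 100% - very uncertain

Three successes out of three tells a reviewer very little until the pre-selection step itself is documented and reproducible.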
That is not the case at all! Most of the world's top electrochemists with relevant experience tried doing cold fusion within a year or two, and most of them succeeded. (There are not many electrochemists in the world.)
Electrochemists are not that rare a species..
The equipment was not the same. That was one of the major problems. The cathodes were totally different, and there is no chance they would have worked. Most important, these 20 groups of nuclear scientists tested roughly 20 cathodes, usually for a short duration, whereas electrochemists tested several or hundreds simultaneously (in an array), and they knew you must wait a few weeks before the cathode even loads. They measured loading, OCV and other control parameters that the nuclear physicists had never heard of.
At least some of this would belong in the M&M part of the publication, right? If you expect other scientists to be able to reproduce your results, then you've got to outline both the procedures and what conditions must be met.
It's got to be reproducible both within the confines of a single lab and with respect to a multitude of research groups with qualified personnel and the necessary equipment. Until such time, .....
Now I realize that this criterion and resulting dilemma also would apply to the aforementioned transistor as well as Edison's light bulbs. The problem is that a lot comes down to trust if only a single or a handful of research groups can perform an experiment A with the outcome B.
You wrote:
". . . These 3 will work every time. Is that 3% reproducibility, or 100%?
In my opinion, it wouldn't be reproducible in the peer review sense . . ."
So, you would not publish any papers on cloning? The success rate is ~1 per 1000. How about the Top Quark, which requires trillions of collisions, and has never been replicated outside of Fermilab? That's zero independent replications. I think reproducibility is more complicated than a simple success versus failure rate ratio.
Actually, by 1991, reproducibility was much higher than 3% at Los Alamos, China Lake and many other labs, and it is close to 100% nowadays with custom designed cathodes. It was never as low as 3% for the full experiment, but only for preliminary testing of cathodes or sections of wire on a spool.
". . . unless those 3% were also spread among a number of different research groups . . ."
Hundreds of groups have replicated cold fusion, as I mentioned.
". . . (There are not many electrochemists in the world.)
Electrochemists are not that rare a species.."
In March 1989, there were not more than a dozen electrochemists equipped to do this experiment, with authorization, funding and the necessary cooperation of nuclear physics experts.
". . . They measured loading, OCV and other control parameters that the nuclear physicists had never heard of. . . .
At least some of this would belong in the M&M part of the publication, right?"
The initial publication was inadequate and rushed out the door because the university was worried about intellectual property. Fleischmann wanted to delay publication two or three years, he later told me.
In any case, an adequate description of "all you need to know" to do this experiment would fill thousands of pages. Looking at my shelf for example . . . you need to master a graduate level textbook on electrochemistry (which I have not done!), Storms' 300-page textbook on cold fusion, Hemminger & Höhne's "Calorimetry" and probably a bunch of books on nuclear physics and mass spectrometry. This is what I need just to translate and edit papers, never mind replicating the experiment. As I said, the notion that a nuclear physicist could read a few papers on cold fusion and then replicate it is absurd. It is like suggesting he might clone a sheep, or engineer a Boeing jetliner.
"If you expect other scientists to be able to reproduce your results, then you've got to outline both the procedures and what conditions must be met."
Since hundreds of other scientists have reproduced, obviously this has been done.
But it takes far more than an "outline." It takes a shelf full of books and decades of experience to reproduce these results, just as it does to reproduce cloning, a Tokamak, the Top Quark results, brain surgery and cancer treatments, or a Pentium processor.
"It's got to be reproducible both within the confines of a single lab and with respect to a multitude of research groups with qualified personnel and the necessary equipment. Until such time, ....."
"That time" came in late 1990, as I said. You can confirm that in any university library.
"The problem is that a lot comes down to trust if only a single or a handful of research groups can perform an experiment A with the outcome B.
Two hundred plus labs and ~800 peer-reviewed papers is not a "handful." It is an overwhelming number. No other claim in the history of science has been so widely replicated at such high s/n ratios and yet still doubted. In the normal course of events, 5 or 10 quality replications at places like Los Alamos would clinch the matter, and remove all doubts in the minds of all scientists. You still express doubts not because of the quality of the results, or the paucity of results, but because you have not read the literature and you do not know what has been done, who did it, or any other details.
You probably have not had the opportunity to read this literature, and you were not aware that it exists. So it is reasonable that you have not read it. But it is NOT reasonable for you to say "until that time comes . . ." when it came 18 years ago! Unless you have read several hundred of the leading papers and found errors in them, you are not in a position to challenge me on that assertion. (Anyone can find errors in the bottom ~200 papers. I can send you a list of abominable cold fusion papers, but there are lousy papers in any academic field.)
It is even more unreasonable that the editor of the Scientific American publishes nonsensical rumors, which I suppose he fished out of some Internet sewer. When the researchers and I send him actual scientific papers he refuses to read them or acknowledge them because -- he told me -- reading papers is "not my job." He goes on claiming in the magazine that no replication has been published! That is a violation of academic ethics. Unfortunately, many leading decision makers at journals, the DoE and elsewhere have acted this way. You can't condemn the whole system just because some people are bad actors and do not follow the rules, but there has been widespread abuse in the case of cold fusion.
Academic political opposition to new ideas is not uncommon. Nobel laureates and other distinguished scientists tried to "strangle" the laser (as Townes put it), amorphous semiconductors, the MRI and many other breakthroughs, not to mention technological breakthroughs such as the incandescent light, the airplane, the telegraph, the photograph and the zipper. There are hundreds of examples in the history books. But when the editor of Sci. Am. refuses to acknowledge that distinguished scientists such as the Chairman of the Indian Gov't Atomic Energy Commission have published papers in peer-reviewed journals, and when he calls such people lunatics and charlatans, the academic politics have gotten out of hand.
I wrote:
"Unless you have read several hundred of the leading papers and found errors in them, you are not in a position to challenge me on that assertion."
Not me. I am not the expert. Let me rephrase:
The late Heinz Gerischer, Director of the Max Planck Institute for Physical Chemistry in Berlin -- one of the world's top electrochemists -- reviewed the literature and met with leading researchers in 1990. He wrote:
"In spite of my earlier conclusion, - and that of the majority of scientists, - that the phenomena reported by Fleischmann and Pons in 1989 depended either on measurement errors or were of chemical origin, there is now undoubtedly overwhelming indications that nuclear processes take place in the metal alloys."
Several thousand other experts of his caliber also reviewed the literature and came to similar conclusions. Hundreds of them independently reproduced the effect. They have said so as emphatically as Gerischer did. When a professional scientist says "undoubtedly overwhelming indications," that is emphatic. He means it.
Unless you have read the papers Gerischer read, and you can give very convincing reasons why he and these others are wrong, I think you should at least reserve judgment, review the literature first, and not jump to conclusions about how "It's got to be reproducible . . . Until such time, ....." They say it is already reproducible. Don't contradict them or dismiss them until you have done your homework and you can convincingly show why they are wrong. (And frankly . . . good luck on that!)
Please note this is not a so-called "Appeal to Authority" (more properly "Misuse of Authority"). These people are genuine authorities. See:
http://www.nizkor.org/features/fallacies/appeal-to-authority.html
One FINAL note. Sorry to beat this to death, but . . . I wrote:
"Several thousand other experts of his caliber also reviewed the literature and came to similar conclusions."
Please DO NOT tell me that thousands of other experts read the literature and concluded that cold fusion is not real. That is incorrect. I have read every paper and book ever published by skeptics. There are only a dozen or so. These authors say they have not read the literature. They claim there is no such literature. They do not list any papers or have any footnotes in their books. They do not address the technical claims made in the papers. Their assertions about the experiments are entirely imaginary. For example, Hoffman claims that the heavy water used in the experiments is recycled from CANDU nuclear reactors.
I have corresponded with hundreds of scientists while working on LENR-CANR.org, mainly to answer routine inquiries, provide papers not on file, and so on. People have visited LENR-CANR 1.6 million times, so I have had many occasions to deal with the scientific public. Naturally, some readers do not believe cold fusion is real, but out of the hundreds I have communicated with, only one expressed doubts about the mainstream results, and in my opinion he has a screw loose. Aside from that fellow, I do not know any qualified professional who has read 10 or more papers and is not convinced that cold fusion is a real nuclear effect.
Obviously, scientists who have not read the literature know nothing and have no right to any opinion. You cannot judge experimental results by ESP.
So, you would not publish any papers on cloning? The success rate is ~1 per 1000. How about the Top Quark, which requires trillions of collisions, and has never been replicated outside of Fermilab? That's zero independent replications.
I think cloning was a really bad comparison here. But still; as long as the 1 ppm reproducibility stretched across the board - as being within the confidence interval of said experiment being performed in many different labs, then it could be reproducible within those limits and if reported as such be perfectly acceptable. If only one lab has the capabilities/instrumentation necessary to perform a set of experiments, then this would also be perfectly acceptable. The problem comes when there are many labs and groups failing to reproduce a set of data found in literature and then complaining about this to the source - here, the journal. There is a good deal of trust going into the peer review process in that the reviewers don't physically replicate the experiments. If other groups try to replicate the experiments without success and report this to the journal, there is a problem somewhere.
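As a back-of-the-envelope illustration of why spreading attempts across many labs matters, here is a small Python sketch. The success rates are invented for illustration (the 1-in-1000 figure echoes the cloning example above); they are not taken from any of the papers under discussion.

def p_at_least_one(p: float, attempts: int) -> float:
    """Chance of one or more successes in `attempts` independent tries."""
    return 1.0 - (1.0 - p) ** attempts

for p in (0.001, 0.03):                # per-attempt success rates: 1-in-1000 and 3%
    for attempts in (10, 100, 1000):
        print(f"p={p:.3f}, attempts={attempts:4d}: "
              f"P(at least one success) = {p_at_least_one(p, attempts):.3f}")

Even at 1-in-1000, a large enough number of independent attempts should produce some confirmed successes; if many qualified labs try and all come up empty, the claimed rate itself becomes the suspect.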
Hundreds of groups have replicated cold fusion, as I mentioned.
You keep mentioning the hundreds of groups which have replicated cold fusion...but you also keep bringing up all kinds of caveats for why replication would fail, and that's my main problem. I've got no particular axe to grind for or against this technology as such.
The initial publication was inadequate and rushed out the door because the university was worried about intellectual property. Fleischmann wanted to delay publication two or three years, he later told me.
Interesting. In my experience, the tendency would be more towards bringing in the technology transfer office at this point rather than rushing a publication.
In any case, an adequate description of "all you need to know" to do this experiment would fill thousands of pages. Looking at my shelf for example . . . you need to master a graduate level textbook on electrochemistry (which I have not done!), Storms' 300-page textbook on cold fusion, Hemminger & Höhne's "Calorimetry" and probably a bunch of books on nuclear physics and mass spectrometry. This is what I need just to translate and edit papers, never mind replicating the experiment. As I said, the notion that a nuclear physicist could read a few papers on cold fusion and then replicate it is absurd. It is like suggesting he might clone a sheep, or engineer a Boeing jetliner.
I massively disagree. We're not talking about taking someone off the street and putting them to work on this project. Rather, it's likely that - from what you describe here - one or several research groups at a chemistry or chemical engineering department should be able to possess the basic knowledge required and absorb the rest. Like I've stated before; electrochemists are not that rare.
But it takes far more than an "outline." It takes a shelf full of books and decades of experience to reproduce these results, just as it does to reproduce cloning, a Tokamak, the Top Quark results, brain surgery and cancer treatments, or a Pentium processor.
Again, it's unreasonable to assume that someone who's attempting to reproduce something like this would start from scratch. By the way; the statement that it takes decades of experience to reproduce cancer treatments or a Pentium processor is just plain wrong.
In the normal course of events, 5 or 10 quality replications at places like Los Alamos would clinch the matter, and remove all doubts in the minds of all scientists. You still express doubts not because of the quality of the results, or the paucity of results, but because you have not read the literature and you do not know what has been done, who did it, or any other details.
To be absolutely honest with you, the doubt that I'm expressing at this point is largely a result of your repeated statements of hundreds of replications of this experiment, immediately followed by all kinds of caveats as to why other groups wouldn't be able to do just that. Like I said, I've got zero axe to grind here and this is not my field. If you'd simply stated that yes; the original publication was rushed and lacking and thus retracted, but that other groups have confirmed the general phenomenon, and added something about how the technology is not sufficiently understood at this point in time, and very difficult to scale up - I'd have been absolutely fine with that.
You probably have not had the opportunity to read this literature, and you were not aware that it exists. So it is reasonable that you have not read it. But it is NOT reasonable for you to say "until that time comes . . ." when it came 18 years ago! Unless you have read several hundred of the leading papers and found errors in them, you are not in a position to challenge me on that assertion.
See above......
It is even more unreasonable that the editor of the Scientific American publishes nonsensical rumors, which I suppose he fished out of some Internet sewer. When the researchers and I send him actual scientific papers he refuses to read them or acknowledge them because -- he told me -- reading papers is "not my job." He goes on claiming in the magazine that no replication has been published! That is a violation of academic ethics.
No offense, but why would you care about Scientific American? It's not peer-reviewed (as far as I know), and it's generalized to the point of being a glorified cartoon...surely you've got other battles to fight. For example; industry is - in my experience - much less concerned with whether or not a technology or concept is sufficiently reported in high-impact, peer-reviewed journals.
Obviously, scientists who have not read the literature know nothing and have no right to any opinion. You cannot judge experimental results by ESP.
Re-read this statement and get back to me, Chief...
You wrote:
". . . as long as the 1 ppm reproducibility stretched across the board - as being within the confidence interval of said experiment being performed in many different labs, then it could be reproducible within those limits and if reported as such be perfectly acceptable."
Well, it is way higher than 1 ppm -- more like 60 to 80% nowadays, but otherwise I think that describes the situation pretty well. The irreproducibility is well understood, you might say. It has improved in recent years. Cravens and Letts just published a comprehensive guide to improving reproducibility.
"The problem comes when there are many labs and groups failing to reproduce a set of data found in literature . . ."
"Many labs" did not fail to reproduce cold fusion. Only ~20 in the U.S. failed, as I said. Plus a few others that did not publish papers, such as Georgia Tech. The ones that replicated outnumbered the failures by 1991.
There were also three failures that turned out to be false negatives. Later reviews of the data showed that they produced excess heat, albeit not much.
Failure is not a big deal. It is a normal part of experimental science. People often fail to replicate experiments in electrochemistry. It does not mean they are incompetent, or that the original claim is wrong. It means they need to work for another 6 months.
There were some dumb mistakes, too, such as exposing heavy water to air, and galvanizing cat hairs onto the cathode in one notable instance.
"If other groups try to replicate the experiments without success and report this to the journal, there is a problem somewhere."
Of course there were problems! (Cat hairs -- meow!) But they were identified. In retrospect it was clear why some of the early replication attempts failed and others succeeded.
That's how science is supposed to work, and how it did work.
"You keep mentioning the hundreds of groups which have replicated cold fusion...but you also keep bringing up all kinds of caveats for why replication would fail, and that's my main problem."
Well, it is a tough experiment.
"Rather, it's likely that - from what you describe here - one or several research groups at a chemistry or chemical engineering department should be able to possess the basic knowledge required and absorb the rest."
That is correct. That is exactly the situation. Many groups at places such as Mitsubishi and Toyota had no relevant experience in similar experiments, and had to learn from scratch how to do it. It took years of hard work. They can do it now, in some cases 100% of the time.
"Again, it's unreasonable to assume that someone who's attempting to reproduce something like this would start from scratch."
Everyone starts from scratch. People have been doing this since 1927, but the science is still young and there are many unanswered questions.
Of course it is easier now than it was in 1989. You can read detailed instructions, and consult with people who have succeeded.
". . . .of hundreds of replications of this experiments, immediately followed by all kinds of caveats as to why other groups wouldn't be able to do just that."
I never said that! Lots of other groups learned to do it. There were, as I said, only about a dozen set up to replicate in March 1989. There are now 200 to 300 (when you count heat, tritium, ion beam loading and so on). It took time, but what experiment doesn't take time? Very few important experiments in history could be replicated in weeks. The x-ray and a few others come to mind. It took a year to build the second maser.
The maser and laser are a good example. It took years to make the first one, and another year to replicate it, but now we pump out millions of lasers every day. It takes months to make one cold fusion device now, because it is done painstakingly by hand, but with a few billion dollars of R&D and a bunch of automated equipment, someday we may churn out millions every day. It isn't hard to map out how the process could be improved and automated, but it sure is hard work doing it.
"If you'd simply stated that yes; the original publication was rushed and lacking and thus retracted . . ."
It wasn't retracted. The neutron part was wrong but the rest was confirmed. And of course the effect does produce neutrons, but not as many as originally claimed -- and millions of times fewer than plasma fusion does.
". . . but that other groups have confirmed the general phenomenon . . ."
They confirmed it in detail. Just about all of the original claims stand, except I think most people consider it a surface effect rather than a bulk effect. Even the Ni claims are coming along.
". . . and added something about how the technology is not sufficiently understood at this point in time, and very difficult to scale up - I'd have been absolutely fine with that."
That is the case.
"No offense, but why would you care about Scientific American?"
I am using him to illustrate the problem with people who do not abide by the academic rules, and who do not honor peer review.
The only reason I care about him is that his attacks have stymied funding in many cases. I have seen letters from funding agencies and the Patent Office that cite the Sci. Am., the New York Times and the Wall Street Journal as evidence that cold fusion does not exist! That's ridiculous.
"Many labs" did not fail to reproduce cold fusion. Only ~20 in the U.S. failed, as I said.
Actually; if ~20 actually reported that they failed to reproduce the results, you can be reasonably certain that many more failed but didn't report this.
Failure is not a big deal. It is a normal part of experimental science. People often fail to replicate experiments in electrochemistry. It does not mean they are incompetent, or that the original claim is wrong. It means they need to work for another 6 months.
On the other hand, the probability that the original work is not reproducible or not sufficiently described rises with the number of groups who report their failures..
That is correct. That is exactly the situation. Many groups at places such as Mitsubishi and Toyota had no relevant experience in similar experiments, and had to learn from scratch how to do it. It took years of hard work. They can do it now, in some cases 100% of the time.
Cool
That is correct. That is exactly the situation. Many groups at places such as Mitsubishi and Toyota had no relevant experience in similar experiments, and had to learn from scratch how to do it. It took years of hard work. They can do it now, in some cases 100% of the time.
Not the way you described it a few comments back, with having to absorb fundamental (albeit graduate level) electrochemistry and such.
"I never said that! Lots of other groups learned to do it. There were, as I said, only about a dozen set up to replicate in March 1989. There are now 200 to 300 (when you count heat, tritium, ion beam loading and so on). It took time, but what experiment doesn't take time? Very few important experiments in history could be replicated in weeks."
Indirectly you do, as you keep bringing up the number of successful attempts, yet simultaneously bring up all kinds of explanations for why other groups fail. You don't need to do the latter provided the former can be documented in a convincing manner.
"It wasn't retracted."
If it wasn't retracted, then I stand corrected.
On a side note and from the point of view of my chosen discipline, I'd prefer for Fleischmann to be remembered for SERS.
You wrote:
"Actually; if ~20 actually reported that they failed to reproduce the results, you can be reasonably certain that many more failed but didn't report this."
Not too many. I would have heard about them by now. There were some, such as the one with the cat hair.
There were also several that succeeded but never published a report, because of academic politics. And as I said, 3 of those 20 were actually false negatives, albeit with only marginal results.
"On the other hand, the probability that the original work is not reproducible or not sufficiently described . . ."
It was not well described. Fleischmann and Pons themselves did not understand it well enough to describe it. They told me so. That is one of the reasons they wanted to wait two or three more years before publishing. However, over the next year or two much better descriptions were written, and reproducibility increased a great deal.
"Not the way you described it a few comments back, with having to absorb fundamental (albeit graduate level) electrochemistry and such."
That is exactly what they had to do. Literally, in some cases. Three young researchers at universities got PhDs in electrochem for doing this experiment. At other labs they hired experienced electrochemists.
In many cases the experiment was not replicated until 1991 simply because people were busy doing other things, or waiting for funding, or waiting for time on the ion beam. The press described a mad rush to replicate, and breathless excitement. Some people may have rushed, but the ones I met years later did it at a leisurely pace as they got around to it, just as they would do any other experiment.
The press, the DoE ERAB panel, and some scientists rushed to judgment, which was silly. The ERAB panel had made up its mind and was drafting its report in the fall of 1989, when serious replications were just getting underway. In the summer they visited Miles at China Lake and asked him if he had observed any excess heat. He said "no, but I just started." A few months later he called them back and said he was now getting excess heat. They ignored him. Soon after that they drafted the final report and listed him as a "no." Unfortunately, this report was cited as the reason to stop funding all cold fusion research at the DoE in 1990.
You wrote:
"On a side note and from the point of view of my chosen discipline, I'd prefer for Fleischmann to be remembered for SERS."
That's an interesting comment . . . It seems to me you have a limited perspective stuck in the present, and what I would call persistent negativity.
If the researchers learn to control the effect, and the academic politics can be overcome, it is likely this will become a practical source of energy. I cannot think of any technical reason that would prevent that. I think it is not likely they will overcome the politics. The researchers will probably all retire or die and be forgotten instead, because the opponents are younger. But they might succeed. In that event, Fleischmann and Pons will be remembered for discovering the most important technology since fire.
Fleischmann is well aware of this. He knows the stakes are high, and worth the sacrifice. He and the other senior scientists went into this with their eyes open, fully aware of what would happen to them. A few minutes after the 1989 press conference, he told a friend that his career was over, and his reputation would be savaged. I sense that Pons was more naive, but he soon learned.
I know dozens of professional scientists whose careers were derailed or ended because they reported positive results in cold fusion. Their personal lives and marriages were disrupted. Academic politics are fierce and unforgiving.
It has always been this way. In his autobiography, Townes described what happened when he invented the maser. Several leading scientists, including two Nobel laureates, did their best to "strangle" the discovery and prevent any more research, because they were sure it was a mistake -- theoretically impossible, that is. He wrote that if he had not had tenure, he would never have been allowed to do it. You can find any number of similar accounts of other breakthroughs such as the MRI. People get upset when someone proves their theory is wrong!
"That's an interesting comment . . . It seems to me you have a limited perspective stuck in the present, and what I would call persistent negativity."
Alternatively, what I wrote could be taken to mean that the work on SERS has had a demonstrable impact on the scientific discipline(s) I work with, and as such is more important to me personally - as well as to a significant percentage of the scientific community - than something which may or may not ever have practical implications on the society at large for any number of reasons. It seems to me you have what I would call poor reading comprehension.
You wrote:
"Alternatively, what I wrote could be taken to mean that the work on SERS has had a demonstrable impact on the scientific discipline(s) I work with, and as such more important to me personally . . ."
That's understandable. Some of Arthur Clarke's old friends and the pilots whose lives he saved probably remember him more for his work on radar than his books, because it touched them personally. They had a direct use for it.
What I meant was, from Fleischmann's point of view, and mine, it seems you are almost denying the importance of his life's work, which is cold fusion. You "prefer" he be remembered for something else. It almost sounds as if you think this work is something he should be ashamed of, or you have concluded that it will never amount to anything.
You are quite right that cold fusion "may not ever have practical implications on the society at large." Politics may ensure that it is forgotten, or marginalized for decades the way photography and other discoveries were. But I think that even as a scientific breakthrough, without practical applications, cold fusion is one of the most important discoveries in the history of science, and by far the most important discovery Fleischmann made.
Since the 1920s people have reported sporadic evidence that nuclear reactions occur in metals at room temperature. Fleischmann and Pons were the first to pin down this phenomenon, make it repeatable enough to study, and make it happen on a large scale so that it can be measured with great confidence. They get 99.9% of the credit, as far as I am concerned.
"What I meant was, from Fleischmann's point of view, and mine, it seems you are almost denying the importance of his life's work, which is cold fusion. You "prefer" he be remembered for something else. It almost sounds as if you think this work is something he should be ashamed of, or you have concluded that it will never amount to anything."
I've re-read the original statement I made, and I still don't think that your interpretation of it is the most obvious. You are assigning far more sinister motives to my statements than what I feel you have justification for.
I've been following these comments from the sideline, and they've been quite interesting. I'll add my $0.02, for what it's worth.
Cold fusion has never been accepted by the scientific community as a whole. The skeptics point to poor reproducibility, poor experimental design, other mechanisms, and background/matrix interferences as alternative explanations for the observations made in cold fusion experiments. Jed Rothwell has documented the arguments in favor of cold fusion.
Whatever the truth may be, the result is a situation where it's not easy for scientists to do cold fusion research, be it due to lack of funding, career issues, problems getting published in high-ranking journals, etc. Basically, the cold fusion devotees are in many ways isolated. This may lead to a lack of self-criticism within the community and automatic dismissal of alternative explanations raised by skeptics.
Indeed, when Wilhelm mentioned Fleischmann's contribution to SERS, Rothwell immediately went on the defensive, made the most negative interpretation possible of that statement, and treated it as an attack on cold fusion. Just another example of how polarized this topic is.
My personal spin: If cold fusion is proven to a level where it's accepted, the scientists who perform and publish that experiment are the ones who will be remembered.
As for peer review (which was the original topic): Although it sometimes happens, the peer-review system is not designed to uncover fraud or "separate fact from fiction". That is, the reviewers don't replicate the experiments, etc. The peer-review system is only there to ensure a minimum level of quality in what gets published. And although it's not perfect, it does a decent job at that.
"Although it sometimes happens, the peer-review system is not designed to uncover fraud or "separate fact from fiction". That is, the reviewers don't replicate the experiments, etc. The peer-review system is only there to ensure a minimum level of quality in what gets published."
Absolutely. And this is also a very good reason why journals take it very seriously when other research groups complain to the editor that they failed to reproduce the experiments, etc.