10 Reasons Scientific Studies Can Be Surprisingly Inaccurate

by Christopher Clifford
fact checked by Jamie Frater

Our society celebrates science for its accuracy and objectivity, and the general public usually considers evidence published in scientific journals to be unquestionably true. However, there have been a number of alarming studies, particularly focusing on life science research, that suggest scientific publishing might not be as reliable as we’d like to think.

10 Many Pre-Clinical Studies Can’t Be Reproduced

There’s currently a huge amount of research into the mechanisms of cancer. Unfortunately, translating this research into treatment targets has proven difficult, with the failure rate of clinical trials in oncology being higher than in most other areas. And it does seem that many of these failures can be blamed on problems with the pre-clinical research on which they were based.

One review found that only six out of 53 landmark pre-clinical cancer papers could be reproduced. Many of the studies that couldn’t be reproduced didn’t use blind testing—meaning that the testers knew whether they were dealing with the control group or the experimental group, potentially leading to investigator bias. Other studies were found to have presented only the results that supported their hypothesis, even if they weren’t a good representation of the data set as a whole. Astonishingly, there is no specific rule preventing this, and papers are regularly accepted for publication without presenting the entire data set obtained.

Another study looked at 67 studies, mostly in the field of oncology, and found that less than 25 percent of published data could be reproduced in the lab without major inconsistencies. This is now such a common problem that venture capital companies apparently have an unspoken rule that around 50 percent of academic studies will be impossible to reproduce in their industrial laboratories.

9 Negative Results Often Aren’t Published

A negative result occurs when researchers hypothesize that something will happen but find that their data don’t support it. One study looked at over 4,600 papers from all disciplines published between 1990 and 2007, finding that the frequency of positive results had increased by 22 percent over that period. By 2007, an astonishing 85.9 percent of papers reported positive results. This is supported by studies showing that negative phrases like “no significant differences” have dropped in usage, while significant results are much more likely to be fully reported. If they are published at all, negative results will probably only appear in low-impact journals.

This publication bias creates quite a few problems. For a start, scientists are often unable to see whether a study has already been attempted, leading to unnecessary repetition. It can also skew the results of meta-analyses, which statistically combine all of the literature on a particular issue. Publication bias also results in huge pressure to achieve positive results, which could lead to inflated findings, research misconduct, or fewer risks being taken by researchers. After all, if your future career rests mostly on whether you can get positive results to publish, that will certainly have some impact on the way you design your research and interpret your findings.
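
To make the meta-analysis problem concrete, here is a minimal simulation sketch (my own illustration, not taken from any of the studies above; the effect size, sample sizes, and the run_study helper are all hypothetical). It assumes journals only publish studies that reach a statistically significant positive result, then compares the average published effect with the true one.

```python
# A toy sketch of publication bias inflating a meta-analytic estimate.
# Assumptions (hypothetical): a small true effect, identical two-group
# studies, and journals that only publish significant positive results.
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2   # true standardized effect size
N_PER_GROUP = 30    # participants per group in each study
N_STUDIES = 2000    # number of simulated studies

def run_study():
    """Simulate one two-group study; return (effect_estimate, is_significant)."""
    control = [random.gauss(0.0, 1.0) for _ in range(N_PER_GROUP)]
    treated = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_GROUP)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = (2 / N_PER_GROUP) ** 0.5   # standard error of the mean difference (sd = 1)
    return diff, diff / se > 1.96   # "significant positive" cutoff

results = [run_study() for _ in range(N_STUDIES)]
all_effects = [d for d, _ in results]
published = [d for d, significant in results if significant]

print(f"True effect:              {TRUE_EFFECT}")
print(f"Mean over ALL studies:    {statistics.mean(all_effects):.3f}")
print(f"Mean over PUBLISHED only: {statistics.mean(published):.3f}")
# Only studies that got lucky with large estimates clear the
# significance bar, so the published-only mean overshoots badly.
```

In this setup the published-only average comes out at roughly three times the true effect, because the unlucky (and perfectly valid) small or negative estimates never make it into the literature being averaged.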


8 Peer Review Often Fails To Detect Major Errors

Passing the peer review process is currently the gold standard for research papers. However, when researchers deliberately gave error-filled biomedical papers to peer reviewers at a major academic publisher, the reviewers spotted an average of only 2.58 of nine major errors. Even more worryingly, training designed to improve their performance made only a minor difference. In fact, a full quarter of the 607 reviewers tested detected an average of one error or fewer. (In fairness, some reviewers rejected the paper without finishing their review, meaning they might have found more errors had they continued to the end.)

Reviewers were particularly bad at detecting errors related to “the analysis of data and inconsistencies in the reporting of results.” This may be linked to the poor understanding of statistics among biologists. The area least affected by training was “putting the study in context, both in terms of the pre-existing literature and in terms of the implications of the findings for policy or practice.” This makes sense, as mastering the literature on a particular topic takes a long time.

Other studies reached similar conclusions. One presented 221 reviewers with a previously published paper that had been modified to include eight errors. On average, the reviewers only detected two of the new errors. Another found that reviewers failed to detect at least 60 percent of the major errors in a paper on migraines (although most of the reviewers still didn’t recommend accepting the study for publication). Taken together, such papers suggest that the peer review process in the biomedical sciences has plenty of room for improvement.

7 Peer Reviewers Get Worse Over Time

In 2009, a 14-year-long study highlighted further problems with the peer review process. The study, which saw journal editors rate the quality of every review they received, suggested that peer reviewer performance decreased by an average of around 0.8 percent each year. These findings are supported by a 2007 paper in PLOS Medicine, which found that no type of training or experience was a significant indicator of reviewer quality, but that younger scientists generally provided higher-quality reviews. A 1998 survey also found that younger scientists performed better as peer reviewers, while those on editorial boards performed worse.

The reasons for this trend are not clear, although suggestions include cognitive decline, competing interests taking over, and rising expectations making the job of reviewing harder over time. There is also some evidence that older reviewers are more likely to make decisions prematurely, fail to comply with the requirements for structuring a review, and work from a knowledge base that has become out of date. This is worrying, since increased age often comes with increased authority.


6 Most Rejected Papers Are Eventually Published Elsewhere

Following the peer review process, journals will either reject a paper or accept it for publication. But what happens to those rejected papers? One study followed papers rejected by Occupational and Environmental Medicine, finding that 54 percent of them ended up being published elsewhere. Interestingly, more than half were published by a group of seven other major journals in the same field.

A similar study found that 69 percent of papers rejected by a general medical journal were then published elsewhere within 552 days, while another found that at least half of the studies declined by Cardiovascular Research eventually ended up in another journal. A fourth study showed that 56 percent of papers rejected by the American Journal of Neuroradiology eventually found another home.

Before you panic, this might simply mean that rejected papers are improved before being sent to other journals. Furthermore, not all papers are rejected for poor methodology. Popular journals are often difficult to get into due to space constraints, while many papers are rejected simply because the topic might not be quite right for the journal in question. (The Cardiovascular Research study accounted for this last possibility, finding that most of the papers later published elsewhere were not rejected due to an unsuitable topic.)

As studies have shown, the most prestigious journals are subject to a much higher degree of scrutiny. In fact, well-known journals tend to retract more papers than their less prestigious counterparts, simply because they receive far more attention following publication. The worry, then, is that a flawed paper can be shopped around to various journals until one agrees to publish it, allowing it to enter the scientific record in a venue where problems are less likely to be noticed. These studies do suggest a discrepancy between the quality of papers in various scientific journals.

5 Suspect Practices

With the constant pressure on scientists to publish positive results, there are indications that falsification and other suspect practices are common. One meta-analysis of 18 surveys found that 1.97 percent of scientists admitted falsifying their own work, 33 percent admitted other suspect practices, 14.1 percent said they were aware of colleagues who had falsified work, and 72 percent said they knew of other suspect practices by colleagues. The true numbers may be even higher—surveys of researchers and trainee researchers found that larger percentages would be willing to take part in such practices, even if they didn’t admit to having done so already.

Some of the questionable practices noted included “dropping data points based on a gut feeling,” “failing to publish data that contradicts one’s previous research,” and “changing the design, methodology, or results of a study in response to pressures from a funding source.” These actions can be very difficult to prove, and it will likely be impossible to prevent them entirely without shifting away from a system obsessed with publishing positive results.


4 Scientists Don’t Share Enough Information

In training, scientists are taught that experiments should be written up in a way that allows them to be completely recreated from the information included. However, a 2013 study looking at the sharing of resources in biomedical research found that 54 percent of research resources (such as the antibodies or organisms used in experiments) were not described in enough detail to allow them to be identified. Another study of research using animals found that only 59 percent of papers included the number and characteristics of the animals used. Without such information, it is very hard to reproduce experiments precisely and therefore verify the results. It also creates a barrier to detecting methodological issues with a specific resource.

There have been some attempts to address the problem, including the ARRIVE guidelines, which aim to improve the reporting of animal research. The Neuroscience Information Framework has also created antibody registries, while the prestigious journal Nature has called for better reporting of resources.

However, the authors of the 2013 study noted that only five out of 83 biomedical journals had rigorous reporting requirements. A worrying survey of biomedical researchers adds to this evidence, showing that willingness to share all resources with other researchers fell 20 percent between 2008 and 2012. Some of the researchers surveyed said they would happily share data on personal request but were less willing to put the data into an online database.

3 Hoax Papers

In 1994, the physicist Alan Sokal submitted an article to the cultural studies journal Social Text which he purposefully littered with unsubstantiated “nonsense,” while simultaneously making sure to “flatter the editors’ ideological preconceptions.” Among other things, Sokal’s paper linked quantum field theory to psychoanalysis and claimed that quantum gravity had serious political implications. Sokal wanted to prove that postmodernist theory had become divorced from reality: “Incomprehensibility becomes a virtue; allusions, metaphors and puns substitute for evidence and logic. My own article is, if anything, an extremely modest example of this well-established genre.”

While Sokal took aim squarely at cultural studies, hoax papers have also appeared in computer science. In 2014, an astonishing 120 papers were removed from publications owned by Springer and the Institute of Electrical and Electronics Engineers (IEEE) after it was discovered that they were computer-generated gibberish. The papers had been created with a piece of software called SCIgen, which was developed at MIT in 2005 as part of an attempt to prove that conferences weren’t properly vetting papers.

SCIgen strings together randomly chosen words and phrases to create hoax papers, including one promising to “concentrate our efforts on disproving that spreadsheets can be made knowledge-based, empathic, and compact.” The program apparently proved more popular than its creators had intended—the 120 removed papers had all been successfully submitted to Chinese conferences and subsequently published. Springer noted that they do peer review before publication, making it even stranger that they published 16 of the fake papers.
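
For the curious, SCIgen is reported to work by expanding a hand-written context-free grammar with random choices until only words remain. The sketch below is my own toy illustration of that idea (the GRAMMAR rules and the expand function are invented for this example and bear no relation to SCIgen’s actual grammar).

```python
# A toy illustration of generating plausible-looking but meaningless
# sentences by randomly expanding a context-free grammar (not SCIgen's
# real grammar, which is far larger).
import random

GRAMMAR = {
    "SENTENCE": [["We", "VERB_PHRASE", "that", "CLAIM", "."]],
    "VERB_PHRASE": [["demonstrate"], ["concentrate our efforts on disproving"]],
    "CLAIM": [["NOUN", "can be made", "ADJ_LIST"]],
    "NOUN": [["spreadsheets"], ["neural networks"], ["hash tables"]],
    "ADJ_LIST": [["ADJ"], ["ADJ", ",", "ADJ_LIST"]],   # recursive rule
    "ADJ": [["knowledge-based"], ["empathic"], ["compact"], ["scalable"]],
}

def expand(symbol):
    """Recursively expand a grammar symbol into a list of words."""
    if symbol not in GRAMMAR:              # terminal: an actual word or phrase
        return [symbol]
    production = random.choice(GRAMMAR[symbol])
    words = []
    for part in production:
        words.extend(expand(part))
    return words

print(" ".join(expand("SENTENCE")))
# e.g. "We demonstrate that hash tables can be made compact , scalable ."
```

Scale a rule set like this up far enough, add templates for sections, figures, and citations, and the output starts to resemble the abstract quoted above—convincing at a glance, gibberish on any real reading.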

Another hoax paper, created by John Bohannon and purporting to look at the anti-cancer properties of a type of lichen, was accepted by 157 journals (98 rejected it). This was despite the fact that “any reviewer with more than a high-school knowledge of chemistry and the ability to understand a basic data plot should have spotted the paper’s shortcomings immediately.” Bohannon wanted to highlight the fact that many open-access journals focus too heavily on making money through author publication fees, even at the cost of publishing inaccurate research.


2 Psychology As An Example Of The Problems

Psychology is badly affected by many of the issues on this list. Research has suggested that around 97 percent of published psychology studies report positive results, and the problem has persisted since at least 1959. Furthermore, one survey found that over 50 percent of psychologists said they would wait until they had positive results before deciding to publish, and more than 40 percent admitted they only reported studies that achieved a positive result. It has been suggested that this is because psychology journals tend to go for novel results over rigorous ones. Whatever the reason, psychology has been grouped with psychiatry and economics as the branches of science most likely to suffer from publication bias.

A further problem is that psychology studies are rarely replicated—since many journals will not publish straight replications, there isn’t much incentive for researchers to put in the effort. One possible solution is PsychFileDrawer, an online archive that publishes replication attempts. However, there is still little professional benefit to submitting, and some researchers have been put off by the risk of criticism from colleagues.

More positively, there has been a recent push for study replication, and one promising project found that 10 of the 13 studies it tried to replicate produced the same results. While this doesn’t solve the lack of published negative results, nor the commonly mentioned problems with statistical analysis, it is a step in the right direction.

1 Studies Done On Mice Often Can’t Be Extrapolated To Humans

Many studies in the biomedical sciences are done on animal models, particularly mice. This is because mice actually have many similarities to humans, including similar biochemical pathways. Around 99 percent of human genes have a mouse homolog (a corresponding gene descended from a common ancestor).

But while research on mice has led to some notable successes, there have been problems translating it to humans in areas such as cancer, neurological diseases, and inflammatory diseases. Recent research suggests that despite the aforementioned genetic similarities, gene sequences that code for specific regulatory proteins are very different between humans and mice.

The differences are particularly important in the nervous system, where there are many differences in brain development. The mouse neocortex—which is strongly linked to perception—is much less developed than the human equivalent. Moreover, the symptoms of neurological diseases often have a cognitive component, and the differences in cognition between mice and humans are clearly vast.

So next time you see an astonishing medical breakthrough announced on the news, be sure to check whether the research was done on mice or humans.

Christopher Clifford is currently writing his first novel, short stories in various genres, and articles on a mixture of subjects. For more writing by him you can visit his website at christopherclifford.co.uk.
