Every few years, a book puts science under the microscope and leads those of us involved in the endeavour to ask ourselves some serious questions. One such example is Bad Science by Ben Goldacre (2008). This book investigates the woeful science reporting of the media, which, seemingly without shame, can report one day that drinking a glass of red wine daily will prevent cancer and cognitive decline, and the next day report the opposite ‘findings.’
Bad Science was followed by Bad Pharma, which Goldacre published in 2012. Bad Pharma exposed the excesses of the pharmaceutical industry, whereby undesirable results are buried (known as the ‘bottom drawer phenomenon’) and big names in medicine and science are recruited to lend their names to ghostwritten manuscripts. Goldacre’s work has had a profound effect on the practice and publishing of research stemming from drug testing and clinical trials. It has helped encourage the gradual spread of the prior registration and monitoring of clinical trials, so that researchers cannot alter their purported outcomes to falsify their work or omit the publication of results from trials which show negative or no effects for novel drugs.
I have long recommended both of the above books to my own research students, and I am pleased to discover another book to add to that canon. Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth by Stuart Ritchie (2020) was shortlisted for the 2021 Royal Society Science Book Prize, and deservedly so. The book contains elements of both aforementioned texts, but it primarily focuses on the public face of research—namely, what researchers across a wide range of fields choose to publish and, also, how they publish it. Science Fictions also tackles the thorny issue of replication. Ideally, published scientific and sociological studies should be ‘replicable.’ In other words, if a publication claims that 50% of people act in a certain way under certain circumstances, another research team should be able to find similar results. Replicability undergirds the claims of validity of every social science (and, indeed, contemporary ‘science’ itself). This is why it is so distressing that, on the one hand, so few sociological studies are ever replicated and, on the other, that when they are, woefully few prove to be replicable.
Ritchie writes skilfully for a general audience, but he clearly does not write in a vacuum. Science Fictions is obviously meant to prick the conscience of any scientist who reads it, and I must admit that, at times, my own conscience was pricked as well. It has been over 40 years since I completed my own doctorate in biochemistry, and the conduct and reporting of science have changed immeasurably since then. However, it is indeed true that we focused on experiments that showed what we hypothesised; yes, we hid minor effects behind statistical parameters that merely indicated the chance of our observations occurring at random; and yes, we hyped our publications to make each study sound as if it were groundbreaking when it was merely a grain of sand on the beach of discovery in our field.
Of the 40 years since the completion of my doctorate, 30 have been spent in academic publishing. For 20 of those years, I have served as editor-in-chief of several high-profile journals in my field. From that perspective, I can confirm all of the above practices among authors. We showed an unhealthy respect for studies that ‘worked,’ insisted on statistically significant as opposed to meaningful results, and encouraged authors to make the abstracts and discussion sections of their journal articles more eye-catching. Nevertheless, we have made improvements, and that has been evident to me both as a researcher and as an editor.
In Science Fictions, Ritchie examines the motives of academics and what drives them towards poor academic practices. He also covers some of the measures that have been introduced to address these. There are some demonstrably bad apples in the world of science who display all the hallmarks of human nature, including ambition, dishonesty, and arrogance. Thankfully, such researchers are relatively rare, and with any luck we will continue to catch them.
The real problem is that something is wrong with the state of science itself. More accurately, something is wrong with the state of academia, in which the system of academic promotion is overly focused on the superficial outcomes of science rather than on the actual meaning and contribution of the findings. Thus, academics are generally judged initially by the quantity rather than the quality of their publications. Quality does enter the promotion equation, but it is judged by the standing of the journals in which the research is published and, while not entirely without meaning, this guarantees neither the veracity nor the significance of the results. Academics are also judged by a range of arbitrary metrics, such as the h-index, which is a measure based on citations. While I have come out publicly on several occasions in its defence, this metric is far from perfect. It is hard to ‘game’ the h-index—but not impossible.
Another notable aspect of Ritchie’s book is the explanation it provides of the various measures that have been introduced in recent decades to address some of the problems in the publishing and reporting of science. I was surprised here to see no reference to the range of guidelines included in the EQUATOR Network for the rigorous and standardised reporting of, for example, clinical trials and reviews of evidence. On the other hand, the development of the Open Science movement and ‘Plan S,’ an initiative of the cOAlition S group of research funders, both of which encourage more open practices in the design, conduct, and reporting of research and, in particular, open access publishing, are covered in Science Fictions. Within these frameworks, the prior registration of clinical trials is included, as well as the increasingly widespread practice of pre-printing manuscripts. Pre-printing refers to making a version of a manuscript publicly available before it is subjected to peer review and publication in an academic journal. This permits early sharing of research results (with a ‘health warning’ regarding the lack of peer review) and invites comments on the study, improvements from which may be incorporated into the final published version.
We will have to wait a few years to see what impact, if any, Science Fictions will have on the practice and reporting of science. If I have any criticism of the book, it is the lack of any solution to the promotion conundrum in academia. Unless the system changes—and there are few indications at present that it will—by shifting away from the continual hamster wheel of seeking research funding regardless of whether the resulting science will be useful, compounded by the quantification of publications and the ensuing metrics, it is hard to see how we can reach the ‘promised land.’ I also do not share Ritchie’s negative slant on the academic publishing industry. Certainly, there have been sharp practices and enormous profits involved, with publishers like Elsevier and Wiley, in particular, receiving criticism. The two publishers, both of whom I have worked with, run some very profitable journals, but they also run numerous journals that never turn a profit. Moreover, the mainstream academic publishing industry has, after some initial reluctance, embraced Open Science, with Wiley at the forefront and Elsevier arriving slightly later. The value added by working with a mainstream publisher is illustrated by the fact that I now work pro bono for a journal that charges neither authors to publish nor readers to access articles; compared with running a journal backed by a mainstream publisher, this is a Sisyphean task.