Reproducibility: Solutions (not)

Let’s go back to the topic of climate change, since BishopBlog started this series of blogposts off by suggesting that scientists who question the size of the reproducibility crisis are playing a role akin to climate change deniers, by analogy with Oreskes and Conway’s argument in Merchants of Doubt. While some corporate actors have funded doubting groups in an effort to protect their profits, as I discussed in my previous blogpost, others have capitalized on the climate crisis to advance their own interests. Lyme disease is an interesting case study in which public concern about climate change gets spun into a business opportunity, like this: climate change → increased ticks → increased risk of tick bites → more people with common symptoms including fever, fatigue and headaches → that add up to Chronic Lyme Disease Complex → need for repeated applications of expensive treatments → such as, for example, chelation because heavy metal toxicities. If I lost you on that last link, well, that’s because you are a scientist. But nonscientists like Canadian Members of Parliament got drawn into this, and now a federal framework to manage Lyme disease is under development because the number of cases almost tripled over the past five years to, get this, not quite 1000 cases (confirmed and probable). The trick here is that if any one of the links seems strong to you, the rest of the links shimmer into focus like the mirage that they are. And before you can blink, individually and collectively, we are hooked into costly treatments that have little evidence of benefit and tenuous links to the supposed cause of the crisis.

The “science in crisis” narrative has a similar structure, with increasingly tenuous links as you work your way along the chain: pressures to publish → questionable research practices → excessive number of false positive findings published → {proposed solution} → {insert grandiose claims for magic outcomes here}. I think that all of us in academia, at every level, will agree that the pressures to publish are acute. Public funding of universities has declined in the U.K., the U.S. and Canada, and I am sure in many other countries as well. Therefore the competition for students and research dollars is intense, and governments have even made what little funding there is contingent upon the attraction of those research dollars. Consequently there is overt pressure on each professor to publish a lot (my annual salary increase is partially dependent upon my publication rate, for example). Further pressure has been introduced by deliberately creating a gradient of extreme inequality among academics, so that expectations for students and early career researchers are currently unrealistically high. So the first link is solid.

The second link is a hypothesis for which there is some support, although it is shaky in my opinion given the indirect nature of the evidence. Nonetheless, it is there. Chris Chambers tells a curious story in which, from the age of 22 and continuing forward, he is almost comically enraged that top journals will not accept good-quality work because the outcome was not “important” or “interesting.” And yet there are many lesser-tier journals that will accept such work, and many researchers have made a fine career publishing in them until such time as they were lucky enough to happen upon whatever it was that they devoted their career to finding out. The idea that luck, persistence and a lifetime of subject knowledge should determine which papers get into the “top journals” seems right to me. There is a problem when papers get into top journals only because they are momentarily attention-grabbing, but that is another issue. If scientists are tempted to cheat to get their papers into those journals before their time, they have only themselves to blame. However, one big cause of “accidental” findings that end up published in top or middling journals seems to be low power, which can lead to all kinds of anomalous outcomes that later turn out to be unreliable. Why are so many studies underpowered? First, those pressures to publish play a role, since it is possible to publish many small studies rather than one big one (although, curiously, it is reported that publication rates have not changed in decades once co-authorship is controlled for, even though it seems undeniable that the pressure to publish has increased in recent times). Second, research grants are chronically too small for the proposed projects, and those grants are especially too small for women and in fields of study that are, quite frankly, gendered. In Canada this can be seen in a study of grant sizes within the Natural Sciences and Engineering Research Council and by comparing the proportionately greater size of cuts to the Social Sciences and Humanities Research Council.
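To make the low-power point concrete, here is a minimal simulation sketch (my own illustration, not drawn from any of the papers mentioned above). It assumes a modest true effect of d = 0.3 and many small two-group studies with 20 participants per group; the studies that happen to cross p < .05 badly overestimate the effect, which is one reason such findings later prove unreliable.

```python
# Illustration of how underpowered studies yield unreliable "significant" findings:
# with a modest true effect and small samples, the studies that reach p < .05
# badly overestimate the effect (the so-called winner's curse).
# The true effect size and sample size below are assumptions chosen for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_d, n_per_group, n_studies = 0.3, 20, 10_000

significant_effects = []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_d, 1.0, n_per_group)
    _, p = stats.ttest_ind(treated, control)
    if p < 0.05:
        # Pooled-SD standardized mean difference estimated by this one small study
        pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
        significant_effects.append((treated.mean() - control.mean()) / pooled_sd)

print(f"share of studies reaching p < .05 (i.e., power): {len(significant_effects) / n_studies:.2f}")
print(f"true d = {true_d}; mean estimated d among 'significant' studies: {np.mean(significant_effects):.2f}")
```

Under these assumptions only a small fraction of studies reach significance, and the ones that do report an effect more than twice the true size.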

So now we get to the next two links in the chain. I will focus on one of the proposed solutions to the “reproducibility crisis” in this blog and come back to others in future posts. There is a lot of concern about too many false positives published in the literature (I am going to let go the question of whether this is an actual crisis for the time being and skip to the next link, solutions for that problem). Let’s start with the suggestion that scientists dispense with the standard alpha level of .05 for significance and replace it with p < .005, which was declared recently by a journalist (I hope the journalist and not the scientists in question) to be a raised standard for statistical significance. An alpha level is not a standard. It is a way of indicating where you think the balance should be between Type I and Type II error. In any case, the proposed solution is essentially a semantic change. Under this proposal, if a study yields a p-value between .005 and .05 the researcher can say that the result is “suggestive,” and if it is below .005 the researcher can say that it is significant. The authors say that further evidence would need to accumulate to support suggestive findings, but of course further evidence would need to accumulate to confirm the suggestive and the significant findings alike (it is possible to get small p-values with an underpowered study, and I thought the whole point of this crisis narrative was to get more replications!). With this proposal, however, the idea seems to be to encourage studies with sample sizes about 70% larger than is currently the norm. This cost is said to be offset by the benefits but, as Timothy Bates points out, there is no serious cost-benefit analysis in their paper. And this brings me to the last link. This solution is proposed as a way of reducing false positives markedly, which in turn will increase the likelihood that published findings will be reproducible. And if everyone magically found 70% more research funds, this is possibly true. But where is the evidence that the crisis in science, whatever that is, would be solved? It is the magic in the final link that we really need to focus on.
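For what it is worth, the 70% figure is easy to reproduce with a standard power calculation. The sketch below uses Python’s statsmodels and assumes a two-sided, two-sample t-test with 80% power and a medium effect size (d = 0.5); these particular values are my assumptions for illustration, not the exact scenario worked through in the proposal.

```python
# Rough check of the ~70% sample-size claim: how much larger must each group be
# to keep 80% power when alpha drops from .05 to .005?
# Assumes a two-sided, two-sample t-test and a medium effect (d = 0.5);
# these values are illustrative assumptions, not taken from the proposal itself.
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()
n_at_05 = power_analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
n_at_005 = power_analysis.solve_power(effect_size=0.5, alpha=0.005, power=0.80)

print(f"n per group at alpha = .05:  {n_at_05:.1f}")
print(f"n per group at alpha = .005: {n_at_005:.1f}")
print(f"relative increase: {n_at_005 / n_at_05 - 1:.0%}")  # roughly 70%
```

The calculation says nothing, of course, about where those extra participants and research funds are supposed to come from.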

I am a health care researcher, so it is a reflex for me to look at a proposed cure and ask two questions: (1) does the cure target the known cause of the problem? (2) is the cure problem-specific, or is it a cure-all? Here we have a situation where the causal chain involves a known distal cause (pressure to publish) and a known proximal cause (low power). The proposed solution (rename findings with p between .005 and .05 as “suggestive”) does not target either of these causes. It does not help to change the research environment in such a way as to relieve the pressure to publish, or to help researchers obtain the resources that would permit properly powered studies (interestingly, the funders of the Open Science Collaborative have enough financial and political power to influence the system of public pensions in the United States; therefore, improving the way that research is funded and increasing job stability for academics are both goals within their means but not, as far as I can see, goals of this project). Quite the opposite, in fact: this proposal is more likely to increase competition and inequality between scientists than to relieve those pressures, and therefore the benefits that emerge in computer modeling could well be outweighed by the costs in actual application. Secondly, the proposed solution is not “fit for purpose.” It is an arbitrary catch-all that is not related to the research goals in any one field of study or research context.

That does not mean that we should do nothing and that there are no ways to improve science. Scientists are creative people, and each in their own ponds they have been solving problems since long before these current efforts came into view. However, the recent efforts that seem worthwhile to me, and that directly target the issue of power (in study design), recognize the reality that those of us who research typical and atypical development in children are not ever likely to have the resources to increase our sample sizes by 70%. So, three examples of helpful initiatives:

First, efforts to pool samples through collaboration are extremely important. One that is fully on board with the reproducibility project is of course the ManyBabies initiative. I think that this one is excellent. It takes place in the context of a field of study in which labs have always been informally interconnected, not only because of shared interests but because of the nature of the training and interpersonal skills that are required to run those studies. Like all fields of research there has been some partisanship (I will come back to this because it is a necessary part of science), but there has also been a lot of collaboration and cross-lab replication of studies in this field for decades now. The effort to formalize the replications and pool data is one I fully support.

Second, there have been ongoing and repeated efforts by statisticians and methodologists to teach researchers how to do simple things that improve their research. Doug Altman sadly died this week. I have a huge collection of his wonderful papers on my hard drive for sharing with colleagues and students who surprise me with questions like “How do I randomize?” The series of papers by Cumming and Finch on effect sizes, along with their helpful spreadsheets, is invaluable (although it is important not to be overly impressed by large effect sizes in underpowered studies!). My most recent favorite paper describes how to chart individual data points, which is really important in a field such as ours in which we so often study small samples of children with rare diagnoses. I have an example of this simple technique elsewhere on my blog. If we are going to end up calling all of our research exploratory and suggestive now (which is where we are headed, and quite frankly a lot of published research in speech-language pathology has been called that all along without ever getting to the next step), let’s at least describe those data in a useful fashion.
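For readers who want a concrete starting point, here is a minimal sketch of the kind of chart I mean: every individual score is plotted alongside the group mean. It uses Python with matplotlib, and the group labels and scores are invented purely for illustration, not data from any study.

```python
# A minimal chart that shows every individual data point alongside the group mean,
# instead of a bar chart that hides the raw scores. All scores here are invented.
import numpy as np
import matplotlib.pyplot as plt

scores = {
    "Comparison (n=8)": [12, 14, 15, 15, 16, 18, 19, 22],
    "Diagnosis (n=6)": [8, 9, 11, 13, 14, 20],
}

rng = np.random.default_rng(0)
fig, ax = plt.subplots(figsize=(4, 4))
for i, (label, values) in enumerate(scores.items()):
    x = i + rng.uniform(-0.05, 0.05, size=len(values))  # slight horizontal jitter
    ax.plot(x, values, "o", alpha=0.7)
    ax.hlines(np.mean(values), i - 0.15, i + 0.15, colors="black")  # group mean

ax.set_xticks(range(len(scores)))
ax.set_xticklabels(list(scores))
ax.set_ylabel("Test score")
ax.set_title("Every participant visible, not just the mean")
plt.tight_layout()
plt.show()
```

With samples this small, the individual points carry far more information than a bar and an error bar ever could.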

Third, if I may say so myself, my own effort to promote the N-of-1 randomized control design is a serious effort to improve the internal validity of single case research for researchers who, for many reasons, will not be able to amass large samples.
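To give a flavor of what that design involves, here is a small sketch (my own illustration, not a complete protocol) of a randomization schedule for a single participant: within each block, the order of treatment (A) and control (B) sessions is randomized, which is what protects internal validity when the sample size is one. The number of blocks and sessions per condition below are illustrative choices.

```python
# Sketch of a randomization schedule for an N-of-1 trial: within each block,
# the order of treatment (A) and control (B) sessions for the single participant
# is randomized, so drift over time cannot masquerade as a treatment effect.
# Block count and sessions per condition are illustrative choices, not a prescription.
import random

def n_of_1_schedule(n_blocks=6, sessions_per_condition=2, seed=1):
    rng = random.Random(seed)
    schedule = []
    for _ in range(n_blocks):
        block = ["A"] * sessions_per_condition + ["B"] * sessions_per_condition
        rng.shuffle(block)  # randomize the order within this block
        schedule.append(block)
    return schedule

for i, block in enumerate(n_of_1_schedule(), start=1):
    print(f"Block {i}: {' '.join(block)}")
```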

In the meantime, as for those people suggesting the p < .005 thing: it seems irresponsible to me for any scientist to make a claim such as “reducing the P-value threshold for claims of new discoveries to 0.005 is an actionable step that will immediately improve reproducibility” on the basis of a little bit of computer modeling, some sciencey-looking charts with numbers on them, and not much more thought than that. I come back to the point I made in my first blogpost on the reproducibility crisis: if we are going to improve science, we need to approach the problem like scientists. Science requires clear thinking about theory (causal models), the relationship between theory and reality, and evidence to support all the links in the chain.
