25/07/2012
The Committee on Freedom and Responsibility of the International Council for Science is asking various organisations about scientific freedom. They are concerned about the overt muzzling of scientists and reprisals against whistleblowers.
I want to start by saying that this sort of overt censorship is deplorable. It is bad for the muddling-through groping-towards-understanding that we call science. It makes us unnecessarily stupider, and can only serve people with power (else how could it be maintained?).
But in the rush to condemn the silencers, we shouldn’t forget the more pervasive and subtle pressures on researchers. Their ability to conduct research freely and report results openly is also constrained by competitive funding mechanisms and limited career options. It starts at the beginning of the research process: when deciding whether an area of research is ‘worthwhile’, scientists are weighing up effort, risk, and reward. If an area is too risky and doesn’t promise enough reward, they will not investigate it. It continues through to publication, when scientists decide which papers to write, submit, and resubmit.
The peer review system is an algorithm for moving most efficiently from our current ignorance toward greater understanding. By subjecting findings to intelligent scrutiny, scientists are trying to remove errors as quickly as possible and direct efforts to best effect. The algorithm does not work as well in practice as in theory, because of three things:
- the stakes: if the stakes are merely one’s reputation as a researcher/scientist, then the goal is to get the results right and relevant. Once the stakes are career and research funding, then one’s continued existence as a researcher is constantly under threat. It becomes a fight for survival. Survival strategies include running with the herd, camouflage, becoming an alpha and keeping the betas in check, and finding an uncontested niche.
- the mistakes: the research process includes a lot of mistakes. It is about pushing out into the unknown, so it necessarily involves getting things wrong. The individual researcher can improve their risk-return trade-off by reducing risk — doing uncontroversial work. Some unconventional work will pay off — it will be spectacularly successful. Most unconventional work will not pay off handsomely, but it can nevertheless advance science. As the stakes for mistakes grow, the appetite for risk falls. In addition, the peer review system — based on the opinion of established experts — can perpetuate both accurate knowledge and mistakes.
- the multiple goals: if the science process is supposed to produce greater knowledge, then the goal (the objective function in economic terms) is clear. However, by expressly funding science for economic growth or commercial gain, government introduces additional goals. The scientist is then seeking a solution to a complex objective function. It includes funding levels, publication counts, economic impacts, some abstract idea of knowledge produced, or even the reputation of the organisation. Publishing and generating knowledge are less important where funding must be obtained for survival or if scientists are tasked with producing economic growth.
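One way to see the problem with multiple goals is to write the scientist’s decision as a weighted objective. This is only an illustrative sketch, not a formalism from the post itself; the weights and terms are hypothetical:

\[
\max U = w_1 \cdot \text{funding} + w_2 \cdot \text{publications} + w_3 \cdot \text{economic impact} + w_4 \cdot \text{knowledge} + w_5 \cdot \text{reputation}
\]

With a single goal, \(w_4 = 1\) and all other weights are zero: the scientist simply pursues knowledge. Once survival depends on grants and commercial outputs, \(w_1\) and \(w_3\) grow large relative to \(w_4\), and effort shifts accordingly — the knowledge term is crowded out not by malice but by the structure of the rewards.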
The core of the problem is understanding what we mean by ‘scientist’. If by that we understand a person whose main function is producing and communicating new knowledge, then muzzling scientists and punishing whistleblowers is inherently wrong. More than that, funding and career arrangements that increase the rewards for safe and conventional work are just as wrong and destructive. If, on the other hand, we think that ‘scientists’ are people who work with knowledge to produce economic returns, then it makes business sense that the unprofitable ones — the ones who hurt the bottom line by their actions or inaction — should be removed.
The more that scientists tout their role in economic development, the more they trigger that second logic.