On the freedom and responsibility of science

The freedom of science is a highly valued and widely appreciated principle. Or so I thought – until Andy Borowitz reminded me of the contrary with his recent news satire in The New Yorker, in which he mocks the growing anti-knowledge attitude in some parts of the U.S. political establishment. Thus triggered to think twice, I’ll dwell a little more on the question of science and how free it actually is.

Freedom from political interference

Science as the universal quest for knowledge is one of the essential drivers of human progress. Through its long-term impact, and more immediately through its funding mechanisms, science is intimately linked with society. While acknowledging these interrelations, the principle of scientific freedom is meant to ensure that science is free from undue political influence. But what does that mean? Just consider a situation where political dogmatism narrowly defines the scientific topics to address, or even worse, mandates the desired outcomes of scientific research. We saw this, for example, when totalitarian regimes in the 20th century instrumentalised science for their political objectives, selectively accepting convenient research results while suppressing other, less welcome evidence. The principle of scientific freedom is an effective safeguard against such “policy-based evidence” (a term coined by Mark Henderson in his “Geek Manifesto”, reviewed in The Independent). As the flipside of freedom from political interference, this principle ensures that science is free to pursue research, choosing topics according to their scientific relevance, adhering to the standards enshrined in the scientific method, and subject to peer review only.

Ethical constraints

That said, science is not exempt from checks and balances. Rather, the methods employed for acquiring new knowledge are subject to ethical principles. To implement this concept in national research systems, it is standard practice to request an ethical review as a mandatory prerequisite for obtaining government funding. For example, universities in the U.S. have established institutional review boards. Due to their origins in medical research, these boards are mainly composed of specialists in areas such as ethics or the biomedical sciences. It is no surprise, then, that their judgement on research projects in disciplines like computer science or the social sciences has turned out to be ill-informed in some cases. As Sarah Zhang pointed out in Wired Magazine only last week, these review boards struggle to assess emerging technologies like big data.

Still, the concept of applying ethical constraints appears sound, demanding adherence to defined ethical standards as the precondition for funding. It’s only the actual implementation that shows room for improvement. The composition of review boards must ensure sufficiently broad and deep coverage of all relevant topics: they must incorporate expertise from across a wide range of scientific disciplines, including ethics of course, as well as evolving disciplines. Only then will review boards be equipped to fulfil their duty. While that will require dedicated effort, it is a straightforward improvement to the ethical control over the immediate planning and execution of scientific research. There is, however, a far greater challenge looming: the question of responsibility for the longer-term outcomes of scientific research.

Taking responsibility

As humankind, we have evolved to the point where we can now cause serious damage to our own future: either physically, directly jeopardizing our existence; or metaphysically, potentially changing what it means to be human. In this situation, at this crossroads of responsibility, we are forced to carefully reconsider the principle of scientific freedom and how we apply it. Here’s why.

Historically, we didn’t have the means to hurt ourselves existentially; neither intentionally nor unintentionally. And we were unaware of the detrimental longer-term effects of our technological prowess. That explains the unbridled optimism up to the beginning of the 20th century; the blind faith in the progress that science and technology would inevitably bring about. At that time, we had neither means nor awareness.

In the first half of the 20th century, we learned about the power of the atom and developed means to employ nuclear fission. For the first time, we had created a tool that could potentially destroy our very existence. And we soon became aware of that potential. We suddenly had a means that could bring serious destruction; but we considered that means to be exceptional.

Towards the end of the 20th century, we gained a broader understanding of our impact on the planet, and of the extent to which we had already put our own livelihood under severe pressure. But that was only a slow realization in hindsight, as unintended consequences of past actions became more and more visible (think of pollution, depletion of resources, shrinking biodiversity, or climate change). These effects had never been anticipated. More precisely: our ancestors didn’t foresee the outcomes that their actions, say 200 years ago, could possibly have for us today. And we cannot really blame them. Still, since the late 20th century, our awareness of unintended means of destruction has kept growing.

Nowadays, we do try to anticipate, or at least to think about, the potential impact that scientific research might have, in the well-intended attempt to preempt negative consequences. In this endeavour, we seem to focus on the transition zone between basic and applied science, on those emerging technologies that already hold significant promise for future benefits, while their concrete form is still in the making. Take artificial intelligence or genetic engineering as prominent examples that are currently subject to controversial public debate: both have the potential to change the psychological, physiological, or even moral meaning of the word “human”, for good or for bad. The outcome of developing these emerging technologies is in no way predetermined or certain. Yet, even fully aware of this uncertainty, we must accept that these technologies could plausibly become new means of destruction. Whether we like it or not, we have lost the innocence of ignorance, and instead gained access to potentially destructive means. Therefore, with maturing awareness and increasing means, today we can neither deny nor reject responsibility; our responsibility for the outcomes of scientific research, both intended and unintended.

Outlook

That poses the fundamental question that I believe our generation needs to address: What is responsible science in the 21st century? In the past, we used one simple question to guide scientific research: Could we? That is a question of mere feasibility: Could we understand, learn, know? And further on towards technology development: Could we do something with that knowledge? Science and technology were the art of the possible: we did what we could. However, given the means available in the 21st century, feasibility alone is not a good enough guideline; it is no longer sufficient.

Rather, we must complement feasibility with desirability to guide our research. We must add the question: Should we? Should we know this? And further down the road: Should we do this? That question must go beyond the methodological compliance with ethical standards to include the longer-term outcomes of research as well. And if the answer is “no” for some specific research, we’ll have to be very careful how and where to impose constraints: on the basic science in general? Or more specifically on concrete applications of research results in technology development? These are the difficult choices we’ll have to (learn how to) make.

The question of Could-we?-Should-we? presents a real dilemma: How do we avoid undesired consequences of scientific research without strangling the much sought-after benefits? That is a serious and open question, and I’m far from claiming a definitive answer. The only certainty I can offer is this: radical approaches won’t solve the problem. While a full ban of scientific research would mean the end of human progress, a laissez-faire attitude might lead to self-destruction: neither panic nor hype offer sound advice. We’ll have to make smart choices to find the sweet spot between the do-nothing and the do-anything.

Responsibility is tough, but it is the price to pay for freedom. Including the freedom of science. High time for a serious discussion.


4 thoughts on “On the freedom and responsibility of science”

  1. Thank you for approaching science and research in such a way. Although political interference is a long-standing issue in autocratic regimes that constantly overrides the principle of scientific freedom, ethical constraints and responsible science & innovation raise fundamental questions for society as a whole. Instead of the dual question of “Could we?-Should we?”, I would suggest a slightly modified dilemma. In terms of capability, science would always ask “how much more can we invent, and how fast?” On the other hand, society would ask “do we really need it?” If technology is (according to Brian Arthur) a means to fulfill a human purpose, then this human purpose should correspond to humanity’s values, taking into account all ethical constraints. Last but not least, responsible scientific research is research whose community accepts not only the positive but also the negative results and makes them available to peers, promoting reproducible research, despite the potential political or economic incentives of multinational companies and stakeholders. It is a hard task, but if achieved, it leads to freedom and responsibility of science.

    1. Thanks for sharing an interesting perspective. I’d agree that the differences between what I wrote and what you suggest are nuances, so let’s see where those ideas get us.

      Your focus on “does society really need this” is of course a fair point: if research results are not useful, or detrimental, or even against the ethical standards, then why do that research at all? While I agree with that perspective at an abstract level, I see challenges regarding the practicalities. As the full impact of research is impossible to anticipate, prejudging the utility of specific research is a fundamental challenge at best. How could we address that?

      I’d favour a dialogue across society, with scientists and researchers as an integral part of society, and of that dialogue as well: what is it that we (researchers AND society) want? That dialogue would then also allow for a discussion on ethical standards, and how they might actually evolve.

      For freedom and responsibility of science it is fundamentally important to stay fully embedded in society, and to actively engage in open dialogue about research objectives, intended results and benefits, and potential risks as well. Quoting Brian Arthur once again: technology domains and society co-evolve in a process of mutual adaptation. And that will happen anyway, even without a dialogue. But with an open dialogue, we’ll have a far better chance of minimizing the risks and maximizing the benefits that scientific research holds for our future.

      I feel that I’m only scratching the surface of a larger debate here. So I’ll probably come back to this topic in a later post, after some more thorough consideration.
      Thanks a lot for the inspiration.

      1. Fully agreed with the proposed dialogue across society. Speaking of risks and benefits to society from research, today’s unfortunate SpaceX accident reminds me of the very fundamentals. And it’s not just about the lost rocket and the destroyed satellite that Facebook would have used to bring internet to Africa and elsewhere. It is more about disruptive innovation and the willingness to take the risk for the extra step. As we continue to push the frontiers of space, there will always be both successes and failures, triumphs and setbacks. The fact that this time it wasn’t just NASA, but a constellation of private companies and governmental institutions that suffered the consequences of the unforeseen accident, should not be used to identify winners or losers from such an event. Sharing the risk can help us arrive faster at the result of our experiment. If the result is negative, it does not necessarily mean that it is unsuccessful. Negative results are very much undervalued in today’s research (especially in medicine). Responsible innovation is innovation that succeeds in preventing failures (financial, environmental, etc.) through detailed planning and social interaction with open dialogue. In addition, it learns from its mistakes and maps the lessons learned for future scientists.

      2. That’s a good reminder. Risk is an important dimension of pushing the boundaries of knowledge, and failure is unavoidable. Well, today’s failure can actually be the launch of tomorrow’s success. Very much in Thomas Edison’s sense: I didn’t fail, I only found 9,999 different ways that didn’t work.
        For responsible innovation it is then important to stay open to failure (as opposed to a timid risk-avoidance culture) and to share risk (especially the economic cost), while at the same time upholding safety standards, for humans and the environment. In the end it’s all about decent portfolio management: a small portion of a budget should be invested in high-risk, high-payoff ideas, the famous moon shots. They are always on the edge, and could result in spectacular failure or in an even more spectacular breakthrough.

