I recently attended a science forum in which the gist of the message was “such and such activity is bad” and while it raised a lot of interesting and important questions, the bulk of the message was built on vague subjective statements and conjecture. There was very little hard data or detailed research to support the underlying question of “will event X ‘leave a trace’.” I left the presentation feeling like what I had heard was less of a science lesson and more of a campaign pitch.
Ignoring the greater picture that everything anyone does “leaves a trace,” I got into a discussion with friends over the difference between applied science and activism. I’m a firm believer in the idea that science should be practical and applicable, but all too often special interests use science to support their campaign of choice. That’s bad science, and it can make for terrible policy. “How can a non-scientist know the difference between good and bad science?” I was asked. Well, you can start your litmus test by looking at how a problem and solution were reasoned out, and how bias is recognized and handled.
There are generally two ways to approach a problem or question. One way is to start at the problem and work your way back to the source. The other is to start at the source and try to recreate the problem by working your way towards it.
Starting at the source is important, because it reduces confirmation bias, or the effect that a person’s perception of the problem has on (re)creating the outcome. Sure, it’s easier to see the path when you start at the problem and work backwards, but you ignore all the other possible outcomes that you could have reached as the path branches out with new choices. The strategy of starting at the source and working towards the problem is an inherent part of the scientific method.
All science projects start with a problem. From that problem a question (or series of questions) is formed. The questions seek to address the source of the problem. By understanding the problem at its source and answering the question(s) in whole or in part, scientists develop a better understanding of how the world works.
There are two reasoning processes involved in extracting these answers. The first, and most recognizable, is deductive reasoning. It is often summarized by the popular phrase, “If you eliminate the impossible, whatever remains, however improbable, must be the truth.” Deductive reasoning creates specific outcomes based on general principles.
A common example is “Humans are mortal. I am a human. Therefore I must be mortal” (expressed mathematically as A = B, B = C, therefore A = C). The point of science is to test these propositions and the reasoning involved by actively trying to show that they are false. The scientific method could be used to attempt to falsify any of the following: A) Humans are mortal, B) I am a human, or C) I am mortal. Simply proving (or disproving) my own mortality (the specific outcome) doesn’t determine whether I am human or whether humans are mortal. By focusing on the outcome (I am mortal) and constructing general propositions from specific examples, I engage in the second process of reasoning, induction.
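As an aside for the programmers in the audience, the syllogism above can be sketched as set membership. This is only an illustration; every name below is invented for the example:

```python
# Deduction as set membership (all names here are invented for illustration).
humans = {"socrates", "ada", "me"}      # things known to be human
mortals = humans | {"sparrow", "oak"}   # premise "humans are mortal": humans is a subset of mortals

# Premises: I am a human, and every human is mortal.
assert "me" in humans
assert humans <= mortals                # subset check: humans is contained in mortals

# Deduction: therefore I am mortal.
print("me" in mortals)  # True
```

The conclusion only follows if both premises survive testing; finding a single human outside `mortals` would break it, which is exactly the kind of falsification described above.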
It would be easy to prove something simply by walking backwards from the outcome to the source. We can rephrase the first example to show induction by stating it like so, “I am mortal. I am a human. Therefore all humans must be mortal.” This particular case may be true, but the method isn’t always accurate. Consider another classic example from John Vickers. “All of the swans I’ve seen are white. All swans must be white.” In this example, the general outcome (All swans must be white) exists as a result of a specific observation or input (All of the swans I’ve seen are white). Is this accurate? Do the observations reflect reality? What about other possible observations or examples? New observations or choices could create an entirely new outcome (like seeing a black or grey swan). These kinds of questions are important when working with conclusions formed from induction.
Induction is more commonly used in the social sciences, because recreating an exact scenario is difficult or impossible (and the rules for human and social experimentation can be long and complex). Because of this, social scientists have to work hard to define the exact context in which their observations hold true and be very upfront in acknowledging sources of bias. In the case of swans, it might be that only white swans use a particular migratory path, the observations were recorded wrong, or maybe the observer was simply colorblind. Conclusions drawn from induction are often referred to in probabilistic terms as well. “Based on the observations made, we are 80% sure that all swans are white,” for example.
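To make that probabilistic framing concrete, here is a small sketch of how often a limited observer would see only white swans even when black swans exist. The 5% figure and sample size are made up purely for illustration:

```python
# How often would a small sample contain only white swans, even though
# some swans are black? The numbers below are invented for illustration.
p_white = 0.95     # assumed fraction of white swans in the population
n_observed = 20    # number of swans the observer happens to see

# Probability that every one of the observed swans is white.
p_all_white = p_white ** n_observed
print(f"P(all {n_observed} observed swans are white) = {p_all_white:.2f}")
```

Even with black swans making up 5% of the population, roughly a third of 20-swan samples would contain only white swans, so “all of the swans I’ve seen are white” is weak evidence for “all swans are white.”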
It’s important to draw the distinction between induction and deduction when talking about science and its application in policy. People should always question motives, bias, and reasoning when science is used to support an idea. When a special interest group (be it industry, conservation or environmentalist groups, or government) “supports” its position with “science” one has to ask if the process started with a foregone conclusion and worked backwards to the source, or if the work genuinely reflects an attempt to disprove the questions at the root of the problem. If all a special interest group wants to see is “white swans,” the science will certainly find nothing but white swans and wrongly conclude that all swans are white. Bias is always a concern, whether using deduction or induction, but especially so when working backwards. It’s not always easy for a lay person to tease all of these component pieces apart for review, but hopefully these points of concern will provide a basic litmus test for telling good science apart from biased activism.