The Paradox of Consensus (Keynote Address)

The following is my keynote address to the high school research science program at the Spring Conference of the Union of Bulgarian Mathematicians in April 2016:

Your presence here indicates more than a passing interest in math and science, and you have demonstrated excellent knowledge and capability in your respective fields. But at the same time, it’s valuable to step back and ask: how can we have confidence in our results?

As scientists, we have a very real interest in being correct: if we are not able to accurately apprehend reality, we are incapable of developing the tools that sustain and improve our lives. If we take that idea seriously, we have to come to terms with our inherent fallibility. We cannot take for granted that we are correct. We need to be aware of how we came to our conclusions so that we can separate objective knowledge from subjective beliefs.

This question, of how we can know what we know, is central to the branch of philosophy known as epistemology, or the study of knowledge. It’s a much harder problem than we tend to give it credit for, and I’d like to introduce it with a concrete example: How can you tell if a coin is fair?

If you have a passing understanding of statistics, you’d probably say that you could tell just by flipping the coin many times. After many flips, a fair coin would show approximately 50% heads and 50% tails. And while we can never be completely sure whether a coin is fair or biased, since the flips are random, the further the observed proportions drift from 50/50, the more likely it is that the coin is biased.

But what if you did this experiment, and found the following pattern: Heads, Tails, Heads, Tails, Heads, Tails…?

You should be able to tell very clearly that this coin is not fair, despite the fact that it produced the same proportions of heads and tails that would be expected from a fair coin. That’s because you recognized, intuitively, that there was no uncertainty or chance to this coin: it always landed on the side opposite its previous flip.

What you observe with the coin flips isn’t really the outcome itself; the sequence of heads and tails is just one variable in the system. In either case, whether the coin is weighted or alternating, that observable variable is being systematically manipulated by some other unobserved (or perhaps unobservable) factor. The point of all this is that bias comes in more than one form, and some forms are difficult, if not impossible, to differentiate from good data using the standard battery of statistical tests.
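To make that concrete, here is a minimal sketch in Python (not part of the original address; the 1,000-flip alternating sequence and the normal-approximation z-scores are illustrative assumptions). A test that looks only at the proportion of heads sees nothing wrong with the alternating coin, while a simple check for dependence between consecutive flips exposes it immediately.

```python
import math

# Hypothetical data: a perfectly alternating sequence H, T, H, T, ...
n = 1000
flips = ["H" if i % 2 == 0 else "T" for i in range(n)]

# Check 1: proportion of heads (what a naive fairness test looks at).
# Under a fair coin, heads ~ Binomial(n, 0.5); use a normal approximation.
heads = flips.count("H")
z_prop = (heads - 0.5 * n) / math.sqrt(n * 0.25)
print(f"heads: {heads}/{n}, z-score for proportion: {z_prop:.2f}")  # ~0: looks fair

# Check 2: serial dependence (does a flip depend on the previous one?).
# For independent fair flips, the number of adjacent repeats ~ Binomial(n-1, 0.5).
repeats = sum(1 for a, b in zip(flips, flips[1:]) if a == b)
z_dep = (repeats - 0.5 * (n - 1)) / math.sqrt((n - 1) * 0.25)
print(f"adjacent repeats: {repeats}/{n-1}, z-score: {z_dep:.2f}")  # far from 0: not random
```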

Indeed, this sort of bias can exist in all kinds of real-life applications. In some cases, additional confirmatory data can actually reduce the likelihood that a hypothesis is valid. This is called the paradox of unanimity: each additional “confirmation” supports the conclusion while at the same time increasing the likelihood of a bias that would invalidate it. At some point, the marginal confirmation’s effect on the likelihood of bias starts to outweigh its effect in differentiating the so-called null and alternative hypotheses.

Since this is a bit hard to see, here’s another example. Suppose we have a criminal trial where a person is accused of murder. With three witnesses, the testimony is credible: one person reports seeing the defendant’s car outside the building where the murder happened, another reports seeing him enter the building, and a third reports seeing blood on his shirt shortly after the event. This evidence all tends strongly toward the defendant’s guilt.

But if three dozen witnesses testify, a seed of doubt is planted. Sure, it could be that the defendant really did commit the crime in broad daylight in front of many bystanders. But it seems a bit more likely that the prolific testimony is the product of something else: maybe those witnesses were pressured by the government or a criminal organization to offer testimony in a show trial.
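The effect can be made quantitative with a small Bayesian sketch (again, not part of the original address). All of the numbers below are hypothetical assumptions chosen only to illustrate the shape of the effect: a 50% prior probability of guilt, witnesses who incriminate a guilty defendant 90% of the time and an innocent one 10% of the time, and a 1% prior chance that the trial is rigged so that every witness incriminates the defendant regardless of the truth.

```python
def p_guilty_given_unanimous(k, prior_guilt=0.5, accuracy=0.9,
                             false_positive=0.1, p_rigged=0.01):
    """Posterior probability of guilt after k unanimous incriminating witnesses.

    With probability p_rigged the trial is rigged and every witness incriminates
    the defendant no matter what; otherwise witnesses testify independently with
    the given accuracy and false-positive rate. All parameters are hypothetical.
    """
    # Likelihood of unanimous testimony under each hypothesis, mixing the two regimes.
    like_guilty = p_rigged + (1 - p_rigged) * accuracy ** k
    like_innocent = p_rigged + (1 - p_rigged) * false_positive ** k
    num = prior_guilt * like_guilty
    return num / (num + (1 - prior_guilt) * like_innocent)

for k in (1, 3, 5, 10, 36, 100):
    print(f"{k:3d} unanimous witnesses -> P(guilty) = {p_guilty_given_unanimous(k):.3f}")

# Under these assumptions, confidence peaks around three witnesses (~0.985) and then
# falls: with 36 unanimous witnesses the posterior drops back toward ~0.76, because
# unanimity itself has become strong evidence that the trial is rigged.
```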

In other words, when the data is too good, or a conclusion too widely supported, it becomes indicative of a bias that undermines the entire research project. Indeed, if you’ve seen the news in the last year or so, you know that Volkswagen programmed its engines to behave differently when tested than when driven on the road, so that it could fool environmental regulators. In fact, it was the relative consistency of the testing data that tipped people off to the fraud, just as securities regulators look for financial portfolios that somehow always beat the market.

Now, there’s an obvious tension here. Truth should produce consensus. As general knowledge about a subject increases, everyone should believe the same thing. The trouble is that truth is not the only thing that produces consensus. Most dangerously, consensus can be reached when a group of people all think the same way and use political power, economic pressure, or social ostracism to reinforce their pre-existing beliefs and suppress dissent. Huge amounts of damage have been caused by people who thought they knew The Truth and then tried to impose it on others, rather than letting their ideas and data stand or fall on their own merits.

I believe there is a similar danger in science, and particularly in how we use and interpret the scientific literature. Taking climate change as an illustrative example, we hear routinely, at least in the US, that “97% of climate scientists agree” that climate change is occurring and that human industrial activity is at the center of it.

Now, I hasten to say that I am not a climate scientist, and I have not personally read deeply into the extensive literature. But awareness of the paradox of unanimity gives me some pause in accepting the conclusion of that 97% at face value. First, it may not be true that there is a consensus, and second, it may not be true that the consensus is legitimate.

I do not mean to say that climate science is invalid; again, I am not a climate scientist. But I do mean to say that we should keep in mind that consensus does not necessarily indicate truth, and that citing wide agreement among experts, on its own, is an appeal to authority rather than an argument. It is not the consensus that indicates truth, but the underlying evidence and the validity of the methods used to gather and interpret it.

Certainly, the 97% could be correct. Successive confirmation could increase our confidence, but each piece of data needs to be independent of all the others. If each of those studies drew on a different, independent data set and a variety of methods, we’d have more confidence that the consensus is rooted in truth rather than in hidden biases. The same goes for ruling out alternative factors, such as fluctuations in solar activity. And of course, if the models of that 97% were able to predict future temperatures accurately, we’d have great confidence in their conclusions.

But we cannot discount the possibility, in climate science or any other field, that there’s something going on in the background. Perhaps there’s something in the peer review process that makes it impossible for dissenting voices to get published, leading them to abandon the field. Maybe there are issues with social ostracism that make it impossible for them to retain funding or positions of authority in the academy. Perhaps each of those 97% of climate scientists is using the same data or methodology, which would make all of the studies redundant and therefore worthless. The predominant methodology might turn out to be flawed.

To put a bit of a finer point on things, “science” doesn’t say anything – people do, and we ought to be wary when someone asserts what “science” or “the scientific consensus” says. All people, including you and me, bring our biases with us. Sometimes those are personal biases, but very often they are cultural or institutional. Identifying those biases is an extremely difficult task, and one that is too often ignored in the pursuit of publications, headlines, and prestige.

To be effective scientists, we need to be vigilant against our biases, whether personal or cultural. As you conduct your background research, be discriminating. Don’t merely take the claims in a publication as truth, no matter how many experts’ voices are behind them, especially if you don’t understand the ideas and the process behind those conclusions. Without that vigilance, you’ll lead yourself into many intellectual dead ends. With it, you will find your work interesting, productive, and immensely satisfying.

Thank you for your time, and congratulations again on your excellent work.
