Social media can be great, but who couldn’t have imagined some downsides to being in constant communication with our friends and those random people who somehow ended up on our friends lists? The internet debate, for instance. Sometimes it feels like every comments section is a ticking time bomb just waiting for its chance to explode in a bloodbath of messy logic and ruined friendships.
Civil discussion is great, but there’s one thing about these debates (besides the fact they’re often anything but civil) that really gets to me. Maybe you’ve noticed that people have been throwing studies around like some kind of angry glitter. “I see you making a point there. Unfortunately for you, I happen to have a study here. Yeah, a study.” I kind of get the reasoning behind it. Citing your sources sounds pretty legit, and dropping that relevant 2009 study from Rutgers University practically wins the argument for you, right?
But it’s actually so, so wrong. Most people using research to support arguments on Facebook and all over the internet seem to have very little idea of how research actually works and what it can (and can’t) do for their arguments. So let’s hit on three main reasons why bringing up studies in debates doesn’t automatically win you any points.
#1: A single study isn’t meaningful on its own.
I can’t count the number of debates I’ve seen on huge, complicated topics where someone goes all “Yeah? Well look at this.” And we’re left with a link, apparently expected to sit in wonder and amazement at this person’s superior intellectual abilities. But showing that a study exists that has concluded the same thing you think doesn’t prove a damn thing.
The biggest reason is that research works on consensus. A single study is practically useless outside the context of the rest of the research that has been carried out on that same topic, often over many decades. Any conclusions drawn from research come from looking at the big picture of the whole body of work.
And let’s be real. Someone who throws out studies like the Trump administration throws out alternative facts is not aware of all research on all topics. Not even scientists and researchers are, because it’s just not humanly possible to know everything about everything. And it’s usually painfully obvious that reckless citers didn’t even bother to read much about the one study they’re citing.
#2: Research doesn’t prove things.
As more and more research is done on a certain topic, the body of research to draw conclusions from grows. And often (but not always) a consensus emerges, an idea that more of the research than not seems to support. But it’s important to realize that careful scientists avoid the word “prove”. They’ll say that the body of research supports this or that idea, but not that it proves it.
It might seem like being unnecessarily picky about wording, but it matters because the conclusion we’ve come to based on available research can always turn out to be incorrect. We could be missing important information that gives us a different picture. Science is, in its purest form, a search for information. If we turn research into nothing but a platform to “prove” our already-formed opinions, it becomes useless.
Of course, all people are biased, even researchers. But we need to be aware of our own biases, and know ourselves well enough to know if we can be swayed. If you’re presented with really good points that go against your strong opinions, can you think seriously about those ideas? Science is the search for the truth, not the search for what we want the truth to be.
#3: Not all studies are created equal.
It’s easy to think that taking a consensus of available research means that if we get 10 negatives and 40 positives, the most likely answer to the question at hand is positive. But it’s not quite so simple. To accurately come to any kind of useful consensus, the design of the research has to be evaluated based on its strengths and weaknesses.
This means looking at a bunch of factors, from how well the subjects in the study actually represent the population the researchers were trying to learn about, to how the researchers went about measuring whatever it was they were measuring. Sometimes you’ll see studies with very small sample sizes, say 10 or so people, or case studies about just one or two people. These are interesting, but they can’t really be used to draw any conclusions. In other words, when it comes to the consensus, they’re not going to get much weight.
Or let’s say researchers did a study on whether or not something’s an effective treatment for depression, but they made up their own short questionnaire to measure depression and everyone else agrees it’s actually a pretty shitty measure of depression. This is a quality issue that undermines the study’s validity, and therefore the weight that study’s results are going to get.
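To make the weighting idea concrete, here’s a toy sketch with entirely made-up numbers. It’s nothing like a real meta-analysis (those use effect sizes, confidence intervals, and formal risk-of-bias tools), but it shows why a raw vote of findings can mislead when the studies differ wildly in size and quality:

```python
# Hypothetical studies: finding is +1 (supports) or -1 (contradicts);
# n and quality are invented stand-ins for sample size and design quality.
studies = [
    {"finding": +1, "n": 400, "quality": 0.9},  # large, well-designed
    {"finding": +1, "n": 250, "quality": 0.8},
    {"finding": -1, "n": 10,  "quality": 0.3},  # tiny sample: barely counts
    {"finding": -1, "n": 8,   "quality": 0.2},  # tiny and poorly measured
]

def weighted_consensus(studies):
    """Weight each finding by sample size times quality, then average.

    Returns a score in [-1, +1]; the sign suggests which way the
    weighted body of evidence leans.
    """
    total = sum(s["n"] * s["quality"] for s in studies)
    score = sum(s["finding"] * s["n"] * s["quality"] for s in studies)
    return score / total

raw_vote = sum(s["finding"] for s in studies)  # 2 for, 2 against: a "tie"
weighted = weighted_consensus(studies)         # strongly positive instead
```

Counting heads says the evidence is split; weighting by size and quality says the two weak negative studies barely register. That’s the whole point of evaluating design before tallying conclusions.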
Don’t get me wrong. It’s not necessarily a bad thing that more people are getting interested in research. But I can’t help but think that not everyone using research for their arguments is all that concerned about the great and unending search for knowledge that research is all about. Plenty of people have just been exposed to a lot of not-so-scientific science journalism that declares that the latest study proves this idea or smashes that idea to bits. Those make great headlines, but they’re terrible science.
If there’s one thing that’s crucial to research, it’s nuance. The uncomfortable uncertainty that leaves us open to learning more and to contradicting our own ideas when we need to. Understanding this can lead to a more realistic understanding of how research works and what it can actually do for us, like helping us develop informed opinions about the complex world around us. But automatically winning arguments? That’s not on the list. Because it turns out that dropping links to unread articles about unevaluated studies isn’t the sick burn we’ve somehow been led to believe it is.