As I’ve been browsing the major news networks’ health sections over the last several days, digging for inspiration for this week’s segment, I have quickly reminded myself why I stopped reading them. They’re a constant assault on the nerves of innocent hypochondriacs and those of us simply interested in our personal health. Journalists are driven to report newsworthy information daily, and they will grasp at every whisper of “data” regarding a health threat or benefit, often exaggerating the facts just to generate interest. But can we really blame them, when the phrase “scientific study” is tossed around as if it were a staple of our vocabulary, without proof or merit, without specification or definition? Beyond that, it has become nearly impossible to trust the legitimacy of modern science itself, because we so often lack the means to verify that the story we’re being fed is legitimate. Was the study conducted by an unbiased party? Is it conclusive? Was the data manipulated, maliciously or even accidentally? Were the parameters of the study designed to maximize the applicability of the results? This week, rather than explaining a scientific phenomenon in everyday technology, I will pontificate about the scientific method in everyday news, often with some cynicism, as I never claimed this was a wholly unbiased column. This, my friends, is what happens when you don’t email me with your topic questions and ideas like you’re supposed to be doing (email@example.com).
The Scientific Method
To be called “scientific” at all, a method of inquiry must follow a certain set of guidelines that might seem intuitive. It often starts with an observation, usually of a physical event, but it can also simply be the acknowledgment of an unanswered question. Once a question has been posed, someone must offer a possible answer. Scientific study does not move forward blindly; it is purposeful. A hypothesis has to be constructed and then investigated in order to give the work a strong direction. This potential solution is tested through experimentation, which produces empirical, measurable results that ideally show us whether the idea is right or wrong.
Of course, it’s rarely that clear. Most of the time, the results are inconclusive, neither supporting nor discrediting the hypothesis, which is usually more frustrating. If a hypothesis is wrong, at least that can be communicated directly to the scientific community at large. The process of elimination can be extremely beneficial when the question is complex enough.
Random fluctuation or important phenomenon?
The challenge for any researcher who is setting up an experiment is the establishment of an objective, focused, and reproducible environment. The world is full of random fluctuations, microscopic events, and unlikely malfunctions. With so many possible inputs, you can never trace one event to its cause with 100% reproducible accuracy. A machine could break. A gust of wind could blow. The table you’re taking a measurement on could vibrate slightly. A co-investigator could take the same measurement as you with slightly different eyes. A medical patient being monitored could die in a random accident. So instead, we accept that “truth” is a matter of percentages and majorities. If something happens 95% of the time, we say that the other 5% was just due to these expected deviations, and we explain that with an entire sub-field of mathematics called statistical analysis.
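The arithmetic behind that 95% can be sketched in a few lines. Below is a minimal, hypothetical illustration (the “true value,” noise level, and sample size are all invented for the example) of how repeated noisy measurements get summarized with a confidence interval, so that random fluctuations are expected rather than alarming:

```python
import random
import statistics

# Hypothetical example: 30 repeated measurements of the same quantity
# (true value 10.0), each nudged by random "table vibration" noise.
random.seed(42)
true_value = 10.0
measurements = [true_value + random.gauss(0, 0.5) for _ in range(30)]

mean = statistics.mean(measurements)
stdev = statistics.stdev(measurements)

# Approximate 95% confidence interval for the mean (normal approximation):
# we expect the interval to miss the true value about 5% of the time,
# purely because of those expected deviations.
margin = 1.96 * stdev / (30 ** 0.5)
low, high = mean - margin, mean + margin

print(f"mean = {mean:.2f}, 95% CI = ({low:.2f}, {high:.2f})")
```

The point isn’t the particular numbers; it’s that the 5% of “misses” is built into the method on purpose, not swept under the rug.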
This scrutinized method of data collection takes time. The average length of a new drug study is anywhere from five to twenty years, sometimes more. Collecting enough data, under just the right circumstances to minimize outside factors, necessitates a very involved process. And yet, in the case of human life, even twenty years is hardly enough time to follow long-term trends.
This implies some very disturbing things about what counts as truth in modern science, and consequently, in our modern world. The truth we accept is that the drugs we take are safe, that the foods we eat are healthy, that the devices we use are harmless. And these things are true in the closest way they can be, because practical truth demands a certain kind of evidence, or a complete lack thereof.
Know your sponsors, know your motives
How many of us have bought vitamin water simply because we were led to believe it was a healthier alternative? How many of us have become obsessed with healthy chocolate, acai berries, or the “all-natural” label simply because we’ve been told that “scientific evidence” supports the claims these product manufacturers make? How many of us jump on the newest health buzzword crazes: nutrients like vitamin B, vitamin D, magnesium, antioxidants, and superfoods? Studies show that an increased amount of blank can dramatically improve your blank! Who is conducting these studies, and can they advertise false information?
While the federal government doesn’t require companies to verify the statements they plaster on labels (provided they add a disclaimer about the uncertainty of their claims), that doesn’t necessarily mean the claims are completely bogus. Many are based on some kind of research. However, all studies have to be funded, and money directs results. It’s very easy to fudge a point here, nudge a line there, toss out the data you don’t want, and maybe do a little Photoshop editing, all in the name of scientific inquiry, especially if your proof can’t easily be traced. How can we tell, then, whether these scientific studies are suspect? How about with a scientific study? A CNN article cited the following statistic:
“A 2007 study found that research on health drinks that was funded entirely by beverage companies was between four and eight times more likely to find a favorable result than research with no industry support.”
Does anyone else besides me see the irony here?
This is the cycle.
Personal Experience > Large-Scale Data
Another challenge to the scientific community is the power of personal experience. The mob will always believe the anecdote of one impassioned speaker over the collected stories of thousands of people when they have been reduced to cold, impersonal data. This is evidenced in another recent issue that popped up in my health-article perusing: the vaccination debate and its potential link to the outbreak of whooping cough in California.
Vaccination rates in the US are strong; however, a growing percentage of people are choosing not to vaccinate their children. Skeptical parents are questioning the motives of overzealous drug companies. They believe their children aren’t at risk, that certain diseases are things of the past. They count on herd immunity. They fear side effects more than the disease. These individuals will often cite single examples of vaccinated children with chronic health conditions as evidence.
But while it is true that no therapy, no drug, no medical practice, and no natural health product is 100% safe and/or effective, many of them are pretty darn close, and a large volume of historical results stand behind practices like vaccination. Everyone’s immune system is different. Everyone’s environment is different. Not just by a small amount, either -- there are billions of circumstance permutations, and it is impossible to tear apart the tapestry of causality one thread at a time. What we have, and what we can count on, is the majority, which gives us net positive results. More children than we can possibly conceive of are being saved every day because of vaccines, orders of magnitude more than are being hurt by them. We have forgotten the disfiguring paralysis of polio, the massive outbreaks of infection that could kill in minutes, but these microbes are still out there; they haven’t simply vanished.
There is also still lingering confusion over a now largely discredited link between vaccines and autism, stoked in part by the presence of a mercury-based preservative called thiomersal. This is another brilliant irony. A single study conducted by a single group caused waves of fear, discounting years of evidence regarding the safety of this practice, in the very face of the accusation that drug companies were using fear to push their own massive for-profit-only scheme. This study was later retracted because it was determined that the researchers acted unethically, using a biased selection of patients. It was called “dishonest and irresponsible.” The Centers for Disease Control and Prevention confirmed that their own rigorous study yielded absolutely no evidence that this preservative had any link to the risk of autism.
Why I remain a faithful scientist
I conduct research for a living. I’m going to graduate school, where I will do nothing but research. Every day I am presented with data in the form of images, tissue samples, doctor’s reports, surveys, and measurements. My boss has a brilliant theory, and his livelihood depends on his being right. Does this introduce a pretty significant bias? I should think so. But he believes very strongly that he is right, and he is doing everything he can to gather evidence for his case. The process he has to go through to convince the scientific community is staggeringly rigorous. Sometimes our measurements fail in ways that we understand. Other times the failure is completely inexplicable and wildly off scale. In that instance, the wind might have blown. The table might have moved. The power might have surged. Is it ethical to simply throw this random data out as nothing more than a statistical anomaly?
I believe in the scientific method because I believe in large numbers. I believe in the complexity of a system. I accept that, sometimes, 95% is close enough. But I also believe in scrutiny. It’s not hard to “prove” something using the scientific method if you want it badly enough, so it’s important to see who published a study, who verified it, and to give it some time to simmer. The process is so fine, so rigorous, and yet so uncertain that truth in results comes slowly. But it’s the best way we have to understand the world around us. Just read carefully and skeptically, and look for results that adhere to the tenets of the scientific method.
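That warning about proving whatever you want badly enough can be illustrated numerically. Here is a minimal sketch (every number in it is invented for the illustration) of why a single “significant” finding deserves time to simmer: run enough studies on pure noise, and the conventional 95% threshold guarantees that a few of them will look like discoveries anyway:

```python
import random
import statistics

# Hypothetical illustration: many "studies" comparing two groups that are
# actually drawn from the SAME distribution -- there is nothing to find.
random.seed(7)

def looks_significant(n=50):
    """Crude two-sample z-test on two groups of pure noise."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = ((statistics.variance(a) + statistics.variance(b)) / n) ** 0.5
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(z) > 1.96  # the conventional 95% cutoff

trials = 1000
false_positives = sum(looks_significant() for _ in range(trials))
print(f"{false_positives} 'significant' findings out of {trials} null studies")
# On the order of 5% come out "significant" purely from random fluctuation.
```

This is exactly why who published a study, who verified it, and how many independent replications exist matter more than any single headline result.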
If you have any feedback, questions, or topic suggestions that you would like to see featured in the future, feel free to sound off in the comments section or email me at firstname.lastname@example.org. Until next time!