Few people remember the specifics of what they learned in their high school science classes. But science has everyday importance. Let’s say you’re deciding whether to evacuate in the face of an approaching hurricane. It would be helpful to know that the destructive force of a storm increases as the square of the wind velocity, because kinetic energy = ½mv², where m is mass and v is velocity. Thus, Category 5 Hurricanes Irma and Maria, with sustained winds originally of 185 mph, had roughly two and a half times as much energy as a Category 3 hurricane with 115 mph winds. Knowing that—and taking the appropriate precautions—could save your life.
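To check that figure, hold the storm’s mass constant and take the ratio of the squared wind speeds quoted above; this is a back-of-the-envelope calculation, not a meteorological model:

$$\frac{E_{\text{Cat 5}}}{E_{\text{Cat 3}}} = \left(\frac{185}{115}\right)^{2} \approx 2.6$$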

Science has other practical advantages, like the understanding that salt lowers the temperature at which water freezes, which is why we spread it on icy highways; and that adding salt to water raises the boiling point, so that salted water is hotter than 212 degrees Fahrenheit when it boils. It’s also critical to all manner of public policy, from deciding where to locate nuclear power plants to approving new technologies to making vaccines.
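For readers who want the quantitative version: boiling-point elevation follows a standard colligative-property formula. As an illustrative case, chosen for round numbers, dissolving one mole of table salt (about 58 grams) in a kilogram of water raises the boiling point by only about one degree Celsius:

$$\Delta T_b = i\,K_b\,m \approx 2 \times 0.512\,^{\circ}\mathrm{C}\cdot\mathrm{kg/mol} \times 1\,\mathrm{mol/kg} \approx 1\,^{\circ}\mathrm{C}$$

where i ≈ 2 is the van ’t Hoff factor for dissociated NaCl and K_b is water’s ebullioscopic constant.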

So, what is science? We associate it with facts and experiments. Scientists organize facts in ways that provide insights into how the world around us works—which is what we see in the periodic table of the elements, for example, and in the explanations of why the planets revolve around the sun. Scientists also perform experiments to gain a deeper understanding of living things, such as dissecting frogs to learn about their anatomy. But above all, science is a method for ensuring that experiments and the data derived from them are reproducible and valid. The scientific method is a set of procedures and practices whose aim is to provide valid data. When all goes well, science brings us to a deeper understanding of natural phenomena.

The specifics of that method were laid out in a short primer published recently by the Society of Environmental Toxicology and Chemistry called “Sound Science.” It defines sound science as “organized investigations and observations conducted by qualified personnel using documented methods and leading to verifiable results and conclusions.” In other words, it encapsulates the essence of the scientific method: that what we know results from rigorously obtained, empirical, and data-driven observations. If any of those characteristics is missing, the investigations—from lab experiments to clinical and environmental studies—are unlikely to be reliable or reproducible.

“Organized investigations and observations” require a readily testable hypothesis—for example, that treatments A and B for relieving a headache are equally effective. To test it, scientists conduct an experiment in which subjects randomly receive either A or B. The results of the two treatments are then compared, and appropriate statistical methods are applied to determine whether we can reject the null hypothesis that the effects of the treatments are the same; if we can, we accept the alternative hypothesis, that A and B differ. That’s the essence of the process for testing a new drug and accumulating evidence to be submitted to regulators for approval. Sometimes, the results of an investigation are published in a peer-reviewed journal, where the researchers provide the details of their methods, statistical analysis, and conclusions.
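To make that concrete, here is a minimal sketch of such a comparison in Python, using SciPy’s standard two-sample t-test. The headache-relief scores are invented for illustration; a real trial would also involve blinding, pre-registration, and far more careful design.

```python
# Minimal sketch of comparing two treatments, A and B.
# The data are simulated; in a real trial they would come from
# subjects randomly assigned to each treatment.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated headache-relief scores (higher = more relief).
relief_a = rng.normal(loc=5.0, scale=1.5, size=100)
relief_b = rng.normal(loc=5.4, scale=1.5, size=100)

# Two-sample t-test of the null hypothesis that A and B
# have the same mean effect.
t_stat, p_value = stats.ttest_ind(relief_a, relief_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# If p falls below a pre-specified threshold (conventionally 0.05),
# we reject the null hypothesis and accept the alternative
# hypothesis that the treatments differ.
```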

That might seem straightforward. The scientific method is in theory well understood, and experts in a given field routinely evaluate the methods, results, and conclusions of research performed by “qualified personnel”—i.e., scientists—via regulatory evaluations and the process of peer review. In practice, however, it’s anything but straightforward, especially when politics and other special interests intrude.

According to a survey of 1,576 researchers conducted last year by the journal Nature, “more than 70% of researchers have tried and failed to reproduce another scientist’s experiments, and more than half have failed to reproduce their own experiments.” There are innumerable reasons for that, most of which don’t involve misconduct: Reagents can differ from lab to lab, or even batch to batch, and different labs might inadvertently use different strains of mice, rabbits, or cultured cells. And many experiments are so diabolically complicated that it’s easy to make honest mistakes.

Still, that figure is alarming, and the problem is likely to become worse with the proliferation of “predatory publishers.” These publishers, such as the many listed here, put out journals like the Journal of Internal Medicine Reviews and Advance [sic] in Agriculture & Biology. Anyone can read them online without a subscription fee. The problem is that the papers are published without genuine peer review. These journals are less concerned with providing valid, data-driven knowledge and understanding than with generating cash: they eagerly and uncritically accept virtually any submitted paper as long as the authors pay a hefty publication fee. There are now thousands of such journals, publishing tens of thousands of papers a year.

Another, equally worrisome trend is the growing volume of flawed “advocacy research”: studies designed to produce a predetermined result that supports a particular cause or position, which activists can cite long after the findings have been discredited. The articles describing such “experiments” are often found in predatory open-access journals.

A good example was a 2012 article by biologist Gilles-Éric Séralini and colleagues, which purportedly showed that rats fed a variety of genetically engineered corn developed more tumors and died earlier than control animals. The study’s numerous methodological flaws elicited widespread condemnation from the scientific community and led to its retraction by Food and Chemical Toxicology, the journal that originally published it. The authors then republished the paper elsewhere, essentially unchanged. And it was only one of many flawed publications from Séralini, an avowed opponent of the genetic engineering of crops who has numerous conflicts of interest. Even after retraction or publication in predatory journals, his articles are widely cited by the anti-genetic-engineering lobby and continue to show up on the internet as “evidence” of the risks of “GMOs.”

Adding to the confusion, non-scientists who rely on scientific findings to promote a cause or argument frequently conflate association and causation. When a study finds an association, two events or findings are merely correlated; causation means that one event actually leads to the other. The cock crowing and the sun rising are associated, but one doesn’t cause the other. A more subtle example is the claim that organic foods cause autism, simply because organic food sales and the incidence of autism have increased in tandem. Similarly, activists claim that a certain class of pesticides harms bees because its increasing use coincided with an alleged decline in bee populations; but in the absence of careful, controlled field experiments that expose bees to the pesticide and measure what happens to the population, that claim is insupportable.
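The organic-food example is easy to reproduce numerically. The sketch below (in Python, with entirely made-up numbers) generates two independent series that each trend upward over time; they show a strong correlation even though neither has any effect on the other.

```python
# Two unrelated quantities that both grow over time will correlate
# strongly, with no causal link. All numbers here are invented.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2000, 2020)

# Independent upward trends plus noise (arbitrary units).
organic_sales = 1.0 + 0.5 * (years - 2000) + rng.normal(0, 0.5, years.size)
autism_cases = 10.0 + 2.0 * (years - 2000) + rng.normal(0, 2.0, years.size)

# The correlation is high purely because both series share a time trend.
r = np.corrcoef(organic_sales, autism_cases)[0, 1]
print(f"correlation = {r:.2f}")  # typically well above 0.9
```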

Complicating the interpretation of data still further is the fact that even if there is causation between events, it may be unclear which is the cause and which the effect. For example, a number of studies have found an association between longevity and having a partner or spouse, but that doesn’t prove that having a partner or spouse confers longevity. It could be that people who are healthier and therefore more likely to live longer are also more likely to find a partner.

A related confounding practice, called “data dredging” or “data mining,” occurs when an investigator combs a large number of variables for statistically significant associations and formulates a hypothesis only after the analysis is done. That’s how we end up with spurious headlines like this one in USA Today: “Drinking four cups of coffee daily lowers risk of death.” Such conclusions arise when researchers ask their subjects numerous questions about what they eat and drink, and about their activities (exercise, smoking, occupation, hobbies, etc.)—and then try to correlate those answers with health outcomes. If the numbers of questions and outcomes are large enough, spurious statistical associations are inevitable, even where there is no causation. Unfortunately, as a result of such statistical jiu-jitsu, many people now believe that drinking lots of coffee will actually boost their longevity, when there’s no real evidence to suggest that.
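The arithmetic behind that inevitability is worth seeing. In the hypothetical Python sketch below, a hundred survey questions are tested against a single health outcome, and every variable is pure random noise; at the conventional 0.05 significance level, roughly five of the hundred tests will come up “significant” by chance alone.

```python
# Why data dredging guarantees spurious findings: test enough
# unrelated variables and some will clear p < 0.05 by chance.
# Every value here is pure noise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subjects, n_questions = 500, 100

outcome = rng.normal(size=n_subjects)                 # e.g., a health score
answers = rng.normal(size=(n_questions, n_subjects))  # e.g., diet/lifestyle items

false_positives = 0
for question in answers:
    r, p = stats.pearsonr(question, outcome)
    if p < 0.05:
        false_positives += 1

# With 100 independent tests at the 0.05 level, we expect about
# five spurious "associations," although nothing is related.
print(f"'significant' associations found: {false_positives}")
```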

Non-scientists are likely to be fooled or manipulated by such claims because scientific illiteracy runs deep. A 2001 study sponsored by the U.S. National Science Foundation found that only about half of all people surveyed understood that the earth circles the sun once a year, while only 45 percent could give an “acceptable definition” of DNA and only 22 percent understood what a molecule was. More recent research by Jon Miller, Professor of Integrative Studies at Michigan State University, found that 70 percent of Americans cannot understand the science section of the New York Times.

Such widespread illiteracy has an impact on policy. In his 2005 book The March of Unreason, British polymath Dick Taverne warned, “in the practice of medicine, popular approaches to farming and food, policies to reduce hunger and disease, and many other practical issues, there is an undercurrent of irrationality that threatens science-dependent progress, and even the civilized basis of our democracy.” We see evidence of such irrationality in significant opposition to important products and technologies such as vaccines, nuclear power, and genetic engineering—and in the embrace of herbal dietary supplements and organic foods.

At least a modicum of scientific literacy is important for citizens. As the Society of Environmental Toxicology and Chemistry’s primer observed: “It is imperative that policy-makers, the media, and the general public are able to distinguish the facts from mere interpretations of a biased constituency. Decision-makers and those who inform them must be able to judge the quality of the science and reasoning that supports a position and must know whether a set of scientific findings is really meaningful to a decision.”

However, even when the science is sound and the data are “meaningful,” politicians and government officials commonly ignore them, often in the cause of bureaucratic empire-building, advancing some ideological goal, or capitulating to activists. When I was an official at the Food and Drug Administration, I once heard a Clinton administration undersecretary of agriculture, who previously had headed an anti-technology advocacy group, deconstruct science: “You can have ‘your’ science or ‘my’ science or ‘somebody else’s’ science. By nature, there is going to be a difference.” Translation: “I don’t care about data or the consensus in the scientific community. My opinions are just as valid.”

The beauty of the scientific method, when done right, is that it protects us from ideology and bias, and helps us understand what is true and what really works. At its best, science can inform sound public policy. But when we ignore or misinterpret science, we move backwards toward a time when irrationality and superstition prevailed.  

--  

Henry I. Miller, a physician and molecular biologist, is the Robert Wesson Fellow in Scientific Philosophy and Public Policy at Stanford University's Hoover Institution. He was the founding director of the FDA’s Office of Biotechnology.
