In my blog I sometimes use lots of numbers and refer to the results of studies that draw certain conclusions. But how trustworthy are those studies? Should we accept their findings simply because they are “scientific”?
In an article in Pacific Standard (May/June) called “The Reformation,” Jerry Adler asks these questions, particularly with regard to studies in the behavioral sciences.
He notes that “for the last several years, a crisis of
self-confidence has been brewing in the world of experimental social science,
and in psychology especially.”
Papers have been retracted, and prominent researchers have
resigned their posts, including Marc Hauser, an evolutionary psychologist and
acclaimed author, and Diederik Stapel, a Dutch psychologist, “who admitted that
many of his eye-catching results were based on data he made up.”
Hoaxes: Some scholars have tried to expose these problems through hoaxes. In 2011, the psychologist Joseph P. Simmons and two colleagues published a paper purporting to show that listening to the Beatles’ “When I’m Sixty-Four” makes people younger. That paper actually appeared in the journal Psychological Science. It was a deliberate hoax: the authors exploited common statistical shortcuts to show how easily an impossible result can be made to look legitimate.
This isn’t just a problem for the academic community. “A
whole industry has grown up around marketing the surprising-yet-oddly-intuitive
findings of social psychology, behavioral economics and related fields,” writes
Adler. Think of TED talks, Malcolm Gladwell, the Freakonomics guys.
TED Talks: Many of us are complicit in this. We want fast, definitive answers. One critic even mocked TED talks for being simplistic by delivering his critique in the form of a simplistic TED talk.
Why do these scientists fake or manipulate their data? There is enormous pressure in the academic world to publish cutting-edge material, reinforced by incentives such as tenure, advancement, grants and prizes. And with fewer dollars available to fund research, scientists are loath to spend the money and time on an experiment and not get the results they want.
Often, researchers begin an experiment with a hypothesis. If their data doesn’t quite support that hypothesis, they may keep reworking the analysis until it does, or simply alter the data.
Simmons points out that “psychologists who deploy enough
statistical sleight of hand can find ‘significance’ in almost any data set.”
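To get a feel for what that sleight of hand looks like, here is a toy simulation of my own (it is not from Adler’s article): run a standard significance test on twenty comparisons of pure random noise, and on average one of them will still clear the conventional p < 0.05 bar.

```python
# Toy illustration (my own sketch, not from the article): with enough
# comparisons, pure noise will produce "significant" results by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

false_positives = 0
for _ in range(20):                # twenty unrelated "measures"
    group_a = rng.normal(size=30)  # 30 subjects, no real effect anywhere
    group_b = rng.normal(size=30)
    _, p_value = stats.ttest_ind(group_a, group_b)
    if p_value < 0.05:             # the conventional threshold
        false_positives += 1

print(f"'Significant' findings from pure noise: {false_positives} of 20")
```

A researcher who reports only the comparisons that “worked” will appear to have a publishable finding every time.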
One research psychologist, Brian Nosek, directs the Reproducibility Project, which reruns published experiments to see whether they yield the same results. A failed replication doesn’t necessarily disprove a study’s findings, and it may even help refine them. But it may also turn up a failure so glaring that the paper must be retracted.
Fraud: Adler notes that problems of fraud, statistical analysis and
replication also apply to other fields. Biomedical research is one example. In
search of new drugs, researchers at the pharmaceutical firm Amgen “selected 53
promising basic-research papers from leading medical journals and attempted to
reproduce the original findings with the same experiments,” Adler writes. “They
failed approximately 9 times out of 10.”
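To put that in plain numbers: roughly nine failures out of ten across 53 papers means only about five or six of those promising findings held up.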
Those involved with the Reproducibility Project and other
scientists are part of “a whole new approach to experimental social science,
emphasizing cooperation over competition and privileging the slow accretion of
understanding over dramatic, counterintuitive results,” writes Adler.
He goes on to point out that “pretty much all scientific findings are tentative.” The Apostle John urged believers to “test the spirits.” Scientists need to keep testing their results.