Just Say No—To Bad Science

No one is saying that researchers cheat, but how they design a study of sex education can practically preordain the results.

When Doug Kirby sat down recently to update his 2001 analysis of sex-education programs, he had 111 studies that were scientifically sound, using rigorous methods to evaluate whether a program met its goals of reducing teen pregnancy, cutting teens' rates of sexually transmitted diseases and persuading them to practice abstinence (or, if they didn't, to use condoms). He also had a pile of studies that were too poorly designed to include. That pile stood three feet high.

For us civilians, it's hard to grasp how much of science is subjective, and especially how much leeway there is in choosing how to conduct a study. No one is alleging that scientists stack the deck on purpose. Let's just say that depending on how you design a study, you can practically preordain the outcome. "There is an amazing array of things people do to botch a study," says Rebecca Maynard of the University of Pennsylvania.

For instance, 153 out of 167 government-funded studies of bisphenol-A, a chemical used to make plastic, find toxic effects in animals, such as low sperm counts. No industry-funded studies find any problem. It's not that the taxpayer-funded scientists are hallucinating, or that the industry scientists are blind. But here's a clue: many industry studies tested this estrogenlike chemical on a strain of rat that is insensitive to estrogen. That's like trying to measure how stress affects lactation ... using males.

Choosing the wrong methodology can lead science, and the public, astray. Early studies of hormone therapy compared women who chose to take estrogen pills and women who did not. The studies concluded that the pills prevent heart disease. Wrong. Women who chose to take hormones after menopause were healthier and more plugged into the medical system than women who did not. Differences in the women, not the effect of hormones, explained the difference in heart disease.
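To see how a lurking difference like that can manufacture a benefit out of thin air, here is a toy simulation in Python. Every number in it is invented for illustration; all it demonstrates is the arithmetic of confounding.

```python
# Invented numbers: a hidden "healthy" factor drives both hormone use and
# heart disease, so the pills look protective even though their true
# effect in this simulation is exactly zero.
import random

random.seed(0)
N = 100_000
outcomes = {"pills": [], "no_pills": []}
for _ in range(N):
    healthy = random.random() < 0.5                          # hidden confounder
    takes_pills = random.random() < (0.7 if healthy else 0.3)
    disease = random.random() < (0.05 if healthy else 0.15)  # pills play no role
    outcomes["pills" if takes_pills else "no_pills"].append(disease)

for group, results in outcomes.items():
    print(group, round(sum(results) / len(results), 2))
# pills 0.08, no_pills 0.12: a "benefit" created entirely by who chose
# to take the hormones, not by the hormones themselves
```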

Which brings us to sex ed. In April, scientists released the most thorough study of abstinence-only programs ever conducted. Ordered up by Congress, it followed 2,000 kids from rural and urban communities who, starting in grades 3 through 8, had been randomly assigned either to an abstinence-only program or to a control group. Result: kids in abstinence-only "were no more likely to abstain from sex than their control group counterparts ... [both] had similar numbers of sexual partners and had initiated sex" at the same age.

Earlier studies gave abstinence-only glowing evaluations, as social conservatives publicized. The Heritage Foundation, for one, claimed in 2002 that abstinence-only had been proven "effective in reducing early sexual activity." But this is not a case of dueling studies, with no way to tell which to believe. If you dig into the earlier studies' methodology, you can see how they reached their conclusions.

Many evaluated programs where kids take a virginity pledge. But kids who choose to pledge are arguably different from kids who spurn the very idea. "There's potentially a huge selection issue," says Christopher Trenholm of Mathematica Policy Research, which did the abstinence study for the government. "It could lead to an upward bias on effectiveness."
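Here is that pitfall in miniature, as a toy Python simulation with invented numbers. Kids inclined to abstain anyway are the ones most likely to pledge, so comparing volunteers makes a worthless pledge look potent; randomly assigning the same kids, as Mathematica did, makes the mirage vanish.

```python
# Invented numbers: the pledge has zero effect on behavior in this
# simulation, yet the self-selected comparison shows a big one.
import random

random.seed(1)
N = 100_000
self_selected = {"pledge": [], "no_pledge": []}
randomized = {"pledge": [], "no_pledge": []}
for _ in range(N):
    inclined = random.random() < 0.5                        # pre-existing trait
    had_sex = random.random() < (0.2 if inclined else 0.6)  # pledge irrelevant
    chooses = random.random() < (0.8 if inclined else 0.2)  # inclined kids volunteer
    assigned = random.random() < 0.5                        # coin-flip assignment
    self_selected["pledge" if chooses else "no_pledge"].append(had_sex)
    randomized["pledge" if assigned else "no_pledge"].append(had_sex)

for design, groups in (("self-selected", self_selected), ("randomized", randomized)):
    print(design, {g: round(sum(v) / len(v), 2) for g, v in groups.items()})
# self-selected: pledgers ~0.28 vs ~0.52 -- the pledge "works"
# randomized:    both groups ~0.40 -- the honest answer
```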

Claims for abstinence-only also rest on measurements not of sexual activity but of attitudes. The Bush administration ditched behavioral measures in favor of assessing whether, after an abstinence-only program, kids knew that abstinence can bring "social, psychological, and health gains." If enough answered yes, the program was deemed effective. Anyone who is or was a teen can decide whether knowing the right answer is the same as saying no to sex.

Other studies relied on kids' memory. But up to half of kids forget whether they took a virginity pledge, or pretend they never did. Those who fall off the abstinence wagon are likely to "forget" they pledged, while those who remain chaste might attribute it to a pledge they never made. Both factors inflate the measured efficacy of pledge programs.
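How much damage can memory alone do? A toy Python simulation, with rates invented purely for illustration, shows selective recall conjuring an effect where none exists.

```python
# Invented rates: the pledge changes nothing, but failures "forget" it
# and some of the chaste claim a pledge they never made.
import random

random.seed(2)
N = 100_000
true_groups = {"pledged": [], "no_pledge": []}
reported = {"pledged": [], "no_pledge": []}
for _ in range(N):
    pledged = random.random() < 0.5
    had_sex = random.random() < 0.4                  # unaffected by pledging
    if pledged and had_sex and random.random() < 0.5:
        says_pledged = False                         # half the failures "forget"
    elif not pledged and not had_sex and random.random() < 0.2:
        says_pledged = True                          # some abstainers invent one
    else:
        says_pledged = pledged
    true_groups["pledged" if pledged else "no_pledge"].append(had_sex)
    reported["pledged" if says_pledged else "no_pledge"].append(had_sex)

for label, groups in (("true", true_groups), ("reported", reported)):
    print(label, {g: round(sum(v) / len(v), 2) for g, v in groups.items()})
# true:     both ~0.40 -- pledging did nothing
# reported: "pledgers" ~0.22 vs ~0.56 -- memory manufactures the effect
```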

A study of another abstinence program found it did a phenomenal job of getting girls to postpone their first sexual encounter. One problem: it evaluated only girls who stayed in the program, says Maynard. Girls who had sex were thrown out. In a related lapse, some studies of true sex ed, not the just-say-no variety, follow kids for only a few months, says Kirby of ETR Associates, a research contractor. But to see any difference between kids who took the class and those who did not, you have to let enough time go by for kids (in the latter group, one hopes) to have sex and get pregnant. A short time horizon may miss a program's effectiveness.
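The throw-out-the-failures trick is the easiest of all to quantify. In this toy Python sketch, again with invented numbers, a program that changes nothing looks impressive once most of the girls who had sex are dropped before the final tally.

```python
# Invented numbers: the program has no effect, but girls who have sex
# tend to leave (or be removed), so "completers" look remarkably chaste.
import random

random.seed(3)
N = 100_000
everyone, completers = [], []
for _ in range(N):
    had_sex = random.random() < 0.4              # program changes nothing
    everyone.append(had_sex)
    dropped = had_sex and random.random() < 0.8  # most who have sex leave
    if not dropped:
        completers.append(had_sex)

print("all girls:", round(sum(everyone) / len(everyone), 2))       # ~0.40
print("completers:", round(sum(completers) / len(completers), 2))  # ~0.12
```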

Authors of the problematic studies say they did the best they could with the time and money they had. OK, but as Trenholm says, "there is such a thing as good science and less good science." And you really can tell the difference.