Thursday, September 29, 2005

Science, meet capitalism.

There was an accident on the freeway this morning, which meant I listened to more NPR than my usual getting-to-work dose. Possibly, my peevishness at the boneheads who snarled the roads by colliding so inconsiderately is spilling over into peevishness at the scientists in the stories I heard. You be the judge.

Want to know the sex of your fetus really fast? You might be tempted to get the Baby Gender Mentor test (although you might not be tempted to say "Baby Gender Mentor test" five times fast). But, according to reporter Nell Boyce, you won't have much data to reassure you that this is money well spent.

Basically, Acu-Gen, the biotech firm offering the test, says that they can make an accurate fetal sex determination from a blood sample by 5 weeks after conception. They point to a bunch of publications that purportedly show that what they are offering is scientifically plausible and not a scam at all. The thing is, one of the scientists whose work is cited as supporting Acu-Gen's method, Farideh Bischoff, said in the NPR interview that she was skeptical that such high accuracy could be obtained so early in a pregnancy. (It's worth noting that Acu-Gen screws up the citation, giving it as "Farideh et al." rather than "Bischoff et al.")

Acu-Gen is the only company selling this test, so naturally, the details of the test are proprietary. But there seems to be no body of, say, clinical trials that they can point to in order to reassure people who get the test that it's "99.9% accurate". In an interview on the Today Show, Sherry Bonelli, the CEO of PregnancyStore (the only retailer that sells the Baby Gender Mentor test), said "They've actually followed more than 2000 women throughout their pregnancies and they've never been wrong." Of course, the emissary of Acu-Gen who responded to NPR's questions about the accuracy of the test said, in effect, come back in a year when we've followed a bunch of pregnancies to term and can answer your concerns about accuracy.

Do you know that your test is accurate or don't you? (And by "know" I don't mean "know in your heart of hearts" so much as "know from the results of well-designed and well-conducted scientific studies with sufficiently large sample size that the results are reliable".) Pressed on whether tests of the accuracy had already been performed (as Bonelli claimed) or are currently being performed (as the Acu-Gen email to NPR suggested), the head of Acu-Gen, Chang Ming Wang, was evasive.

Diana Bianchi, a fetal DNA expert and another scientist cited by Acu-Gen in support of their method, said in the NPR interview that she was concerned about the claims of high test accuracy, given anecdotal evidence (from sonograms) that the test had given bad results in at least a handful of cases taken together with the lack of persuasive data to support the claims of high accuracy. Sherry Bonelli, Baby Gender Mentor test retailer (who was apparently much more willing to speak on the record than anyone who works for Acu-Gen), said that these scientists are skeptical because ... (wait for it) ... they're jealous of Acu-Gen! See, even though Acu-Gen hasn't provided any evidence that their claim (99.9% accuracy determining the fetus' sex at 5 weeks gestation from a drop of the mother's blood) is true, their critics haven't provided any evidence that Acu-Gen's claim is false. Given that the precise details of Acu-Gen's test are not available to these skeptical scientists (because they're proprietary), it's not obvious how they're supposed to produce such evidence. But that's not going to stop Bonelli from selling the test!

Of course, fetal sex determination via a drop of blood counts as a non-medical test, so the FDA doesn't regulate it (even though results from this test might well lead to a decision to pursue various obviously medical decisions for the expectant mother). Given that there are none of the clinical trials you'd expect for an FDA-regulated test, we're talking about something that, from the point of view of supporting data, is on par with "dietary supplements" advertised late at night on basic cable. Classy!

I wonder how other scientists in the biotech industry feel about this kind of thing. It seems like they should be concerned about a product selling itself to consumers as science-based while putting up very little science to back the claims that are separating consumers from their money. The more this kind of thing happens, the more opportunity there is for consumers to feel screwed over by shoddy science (and in case Acu-Gen's lawyers are reading this, I'm not claiming the Baby Gender Mentor test is a scam -- I'm just pointing out that without any data to support it, there is no earthly reason a scientist or an educated consumer would accept its claims!). And, feeling screwed over by shoddy science would tend to feed into a low opinion of scientists. That would sure make it harder for serious scientists in biotech to connect with consumers. And, it would make things harder for scientists in general, even if they're not trying to sell anything but knowledge. Guilt by association sucks, but it's hard to avoid if you don't stand up and call bullsh*t on a member of your community who may be using the mantle of Science to make a fast buck.

One more quickie: this story of a biomedical firm whose listing on the New York Stock Exchange has been delayed. The apparent reason for this delay (NYSE hasn't given an official explanation)? Animal rights groups may have put pressure on the Exchange, because the firm in question, Life Sciences Research, Inc. does a lot of animal testing.

My regular readers (hi, Julie!) know that, while I like doggies and bunnies and duckies, I'm no animal liberationist. But, I'm not all indignant on behalf of Life Sciences Research, Inc. See, this is an example of market forces working, isn't it? There is certainly a demand for animal testing (which is why Life Sciences Research, Inc. has a thriving business), but there are also folks who are agin' it. In a free market (or whatever kind of market it is that we have), consumer opinions make a difference. Even the NYSE is influenced by public opinion.

Babe, that's just another cost of doing business.


Wednesday, September 28, 2005

Research with human subjects -- mine.

Yes, I'm getting ready to study some scientists (and scientists-in-training) at my university. Of course, there will be much Serious Philosophical Analysis, but what I'm planning to analyze are actual practices in science departments. ("Honey, look! There's a philosopher who's paying attention to the real world rather than starting from first principles! Turn on the sprinklers!")

The thing is, getting information on the practices means I'll be asking scientists and scientists-in-training questions, both in interviews and in questionnaires. And that means getting Institutional Review Board approval for my protocol (because respondents to questionnaires and interviews conducted as part of a research project are human subjects). And that means my protocol must ensure that "subjects are fully informed of their rights and of the potential risks and benefits of participation in the research."

Umm, potential risks of sharing information with a philosopher?

  • Participants might become reflective about their everyday professional activities, which can eat up a lot of time that could be spent on other necessary functions, like compiling assessment data or looking for a parking space.
  • Participants might become aware of gaps between the outcomes their professional activities aim at and the outcomes actually achieved. This could lead to feelings of disappointment. Alternatively, this might encourage adjustments of the professional activities to better attain the desired outcomes — another time sink (see above).
  • Student participants might become reflective about their role in the learning process, leading to alienation from their peers.
  • Faculty and student participants may be drawn into discussions with each other about the effectiveness of their various interactions. Discussions take time (see above).
  • Participants might become ensorcelled by the siren song of Philosophy (as the lead investigator of the proposed research did), putting them at risk for additional coursework and professional training, not to mention the stigma of having left a respectable field for ... Philosophy.
  • There is a small risk that participants may sustain paper-cuts from the questionnaire.

What am I forgetting here?

On a related note, there is a danger in giving someone like me a multipage policy for the protection of human research subjects, because it puts ideas in my head. For instance, I now want to find a way to incorporate into this research "taste and food quality evaluation" of "wholesome foods without additives". Possibly brownies.

Would a brownie with your questionnaire count as a potential benefit or a potential risk of participating in the study?


Tuesday, September 27, 2005

Teaching scientific reasoning.

Actually, make that "Trying to teach scientific reasoning to a group of students, the majority of whom are kind of freaked out about science."

Julie said I should blog about my online Philosophy of Science class. (She was a student on its maiden voyage; if she wants it blogged, it must be blogged!) So, given my thematic focus on this particular blog, I thought I'd discuss one particular type of activity I use in that class that aims to give students a feel for how scientists reason.

Of course, there are loads of activities that could fit under the broad umbrella of "scientific reasoning". There are computations (the kinds of things science majors do on problem sets). There is the activity of interpreting outcomes of experiments in the lab (and, unless things have changed a lot since I was an undergraduate, the task of working out a plausible explanation in the lab report of why things didn't work as planned). There is the challenge of framing a question and designing a research study or experiment that will lead to an answer. I could go on ... But, in my Philosophy of Science courses, as in most others, we don't have labs, and if I asked my students to do computation-heavy problem sets, they'd come after me.

So I go with what philosophy offers here: the thought experiment. For Philosophy of Science, though, these are thought experiments that are conducted in small groups. This is not just a strategy for making the online class feel more like a class (with other people! and discussion!); I give an analogous set of activities to the "live" version of this class.

The thought experiments I give are frequently drawn from incidents in the history of science, or from classic science-fiction-style examples in the philosophical literature. The groups of students are presented with a scenario and given a task along the lines of figuring out how to classify particular substances (by macroscopic properties? by microscopic properties?), or how to choose between two competing theories, or how to design tests for kooky-sounding hypotheses. In most of these tasks, the students start with some sort of interpretive framework cobbled together from the course readings, any prior scientific training they might have, and common sense. They need to use this to respond to certain bits of evidence or information. Before they can draw the conclusions they need to answer the questions I'm asking them to answer, they usually need to get more evidence, or adjust their interpretive framework, or both.

As a bonus, because this is a group exercise, the members of the group frequently have different interpretive frameworks and background assumptions. This means that part of the discussion is the gladiatorial battle between different interpretive frameworks and background assumptions. At the very least, the students become aware of their own assumptions and interpretive frameworks through this clash. (Usually they're so light you can hardly tell you're wearing them!) Often, the groups will succeed in coming to something like consensus by the end. Believe it or not, the consensus is usually built by the exchange of reasoned arguments.

This is lesson #1: Scientific exchanges often involve significant disagreements, but scientists work to come to agreement through rational engagement with each other. (This is why the groups are essential; most of my students don't disagree with themselves enough to learn this lesson from a solo project.)

In terms of dealing with the particular tasks I ask the students to complete, they run into some interesting problems. One is that they are fairly slavish in their loyalty to the facts they learned in high school science. Remember, these are students who, as a group, are scared of science. (There are a few notable exceptions -- but these exceptions tend not to be the ones who hold the high school textbook as the last word on the physical world.) Happy as I am that these students have retained something from their science education, it makes it harder to get them into the thought experiments sometimes, since the scenarios frequently ask how you would make a decision given certain sorts of experimental outcomes that might not jibe with reality as catalogued by the high school science class. To get some insight into what scientists do to get to knowledge, they have to imagine themselves into situations in which they might not know all the stuff that we know now and/or the phenomena in the scenario are slightly different from those in the universe they actually inhabit.

Indeed, this connects to another difficulty the students have with the scenarios drawn from the history of science: they think that the most important thing for them to do is "decide" the classification or theory choice in the way that Science actually decided the matter. It's almost like they see the actual judgment of science as the answer in the back of the book. So far, no one has actually overtly tried to reverse engineer their group response from the "right" historical answer, but they tend to use this answer as a selection criterion unless I intervene.

The thing I want them to experience here is that scientists don't have a "back of the book" to look to and check their answers. They're doing the best they can with partial information and an interpretive framework that proves itself in its usefulness. As such, it is completely possible, at certain junctures, that different groups of scientists could come to different conclusions about a particular problem. In the long run, the different groups can be expected to engage each other with reasoned arguments to come to consensus. But, the consensus doesn't come from having absolute proof that you have The One Right Answer. A lot of scientific decisions, after all, have to do with how to draw our categories, or what we want out of a theory or a model or a measuring device.

I mentioned before that I use activities like these in my live classes (where class meetings are 75 minutes long), but I've found them especially effective online. When the discussion happens on the online discussion board, students seem to work harder to express themselves clearly (because they're writing their contributions). They seem also to respond more seriously to the contributions of others in their groups (again, perhaps because they're written). The students have even been known to respond carefully and critically to their own contributions. And, rather than having to discuss and achieve consensus in 75 minutes, my online students typically have a week or two (depending on the complexity of the task) to really dig into the discussion and try to persuade each other. Not surprisingly, the greater length of time makes for a deeper discussion.

Bonus: Since I am privy to all the groups' discussions, not only can I play devil's advocate, dispense encouragement, and clarify any parameters that need clarifying, but I also know who any free-riders are, and I can award credit (or lack thereof) accordingly.

These are little steps toward building up a better understanding of what scientists are doing for the lay persons I teach. But I think they actually convey something that you don't usually get from canned lab experiments, either. (The handful of science majors who have taken this course strengthen my conviction about this.) A lot of the work in real science is figuring out what to do next given what you've got. The scientist has a general plan of attack and a bucketful of strategies that seem promising and/or have worked in the past, but a lot of scientific reasoning starts out feeling like a shot in the dark.

Philosophers know a lot about taking shots in the dark!


Monday, September 26, 2005

Blueprint to improve science journalism.

Is there a way to get science journalism to work better? (What do I mean by “better”? The facts are reported accurately, and the non-scientist reader has a sense not only of why the science matters, but also of how the science was produced.) Could good science journalism go beyond helping people make rational decisions about what to eat, what to drive, and how to understand various bits of their world, to helping people have a better grasp of scientific reasoning — maybe even helping them see what is creative and beautiful and cool about science?

This post at Pharyngula has made me optimistic. PZ Myers, recognizing the good work of science journalist William Souder, reprints Souder’s take on why so much science journalism disappoints. It’s a beautiful analysis (where "it" = Souder, W. (2005) Of men and deformed frogs: a journalist's lament. In: Lannoo, M. 2005. Amphibian Declines: The Conservation Status of United States Species. U. Calif. Press, Berkeley, pp. 344-347.), and, to my mind, it points to some ways in which things could actually be improved if journalists and scientists put their minds to it. I’ll hit the key points Souder makes, and the optimistic places my mind went envisioning solutions to the problem.

[R]eporters—and more importantly their editors—tend not to see science as a developing story, but rather as a perplexing and boring process that produces "news" only intermittently, usually in the form of a readily digestible "discovery." This, for the most part, eliminates from science reporting what is elsewhere the gold standard in journalism—enterprise. A reporter trying to cover an ongoing story in science is likely to find room for only fragments of it in the paper or on the evening news. Very small fragments. Imagine that newspaper and TV journalists reported the results of elections, but said not a word about the campaigns leading up to voting day, and you begin to get an idea of the disparity.

What would it take for reporters to view science as a developing story? Why not a radical rethinking of the “science beat”? Reporters in the field in research laboratories and at scientific conferences. Reporters getting a feel not only for how the research is being done, but what’s motivating it, and what surprising twists and turns present themselves on the road between formulating a question and coming to something like an answer.

I just can’t imagine that embedded reporters in a lab would get in the way any more than they do in, say, a military operation. Plus, they might wash some glassware while talking with the researchers.

[J]ournalists are overly reliant on findings published in the scientific literature. In most forms of journalism getting "scooped" is a disaster. In science reporting, it's almost a requirement. The surest way to convince an editor to go with a science story is to show him or her that it has already been published in a scientific journal—or, preferably, that it will appear in one on the very same day you are proposing you run with your version. Here, I think, journalism and science must shoulder the blame equally. Journalists, in choosing only to cover periodic developments, give a false picture of the nature of scientific progress. A paper in a journal reporting a set of findings rarely represents a comprehensive view of the whole field of knowledge about a particular issue; rather it is a snapshot of one facet of our knowledge, incomplete and lacking context. No wonder the public often sees scientific discoveries as contradictory of one another. The public—that is to say, you and I—may feel a little like it's listening to a radio broadcast of a football game in which the plays aren't reported, but only a score is given every few minutes. In a seesaw game (and science is very much a seesaw game) you would never know when one reality might supplant another. At the same time, the scientific community makes better, more continuing coverage of science difficult when most journals require researchers to embargo their findings as a condition of publication. Why don't reporters do a better job of keeping track of what scientists are up to? Much of the time it is because scientists keep it a secret. Embargoing scientific findings that are in press enhances the status and confirms the supreme power of scientific journals—but it inhibits a full public understanding of what science does or does not know about many subjects of vital importance.

Souder is quite right that keeping results close to the vest until your paper in Nature or Science comes out fosters a popular picture of science as a collection of results rather than an ongoing process that involves corrections. The public sees a see-saw when the view from within science is of a jigsaw puzzle with a gazillion pieces. What to do?

Well, the scientific embeds would help by reporting on other parts of the scientific process besides the results. The fear, of course, is that the dispatches from Dr. Smartguy’s lab would undercut Dr. Smartguy’s ability to actually bring results to press in a top-flight, peer-reviewed scientific journal. (Won’t Dr. Smartguy’s competitors become avid consumers of science news?) Who in their right mind would agree to let journalists observe them if it cuts into their publication record? (And then there are grant proposals …)

But surely, just as there are certain sensitive details journalists embedded in military operations cannot report, it should be possible to specify sensitive details in the lab that are off limits until a publication has seen the light of day. Even holding those details back, there are many interesting and important stories to tell about how scientific knowledge is built. Indeed, science journalism might allow for more stories about beautifully designed experiments that didn’t work and promising leads that haven’t panned out (yet) than do the scholarly scientific journals. (Really, though, this might be something the scientific journals should rethink. I would have found it enormously helpful, when I was doing research in chemistry, if there had been a body of research on experiments that just didn’t work on my system. Would have saved me some time!)

Peer reviewing, of course, is what makes a result scientific knowledge, endorsed by the community of science. So, there is a danger in reporting (even obliquely) results that haven’t yet gotten through peer review. So another change that might help here would be to speed up the rate of peer review. To do this without sacrificing the quality of peer reviewing, you’d need more qualified peer reviewers — and they’d need to have the time to actually work through the manuscripts at a reasonable clip. To make that happen, it might be necessary for scientists (and the institutions that employ them) to recognize peer reviewing as an important scientific contribution. I’m not saying that reviewing a manuscript should “count” as much as producing one, but it could certainly be counted more than it is at present.

On a related note, an important part of the science beat should include an examination of peer review. Why is it so important to science? How does the process look, through the eyes of the reviewer and through the eyes of the reviewee? Once a paper has passed through peer review, why is that not the last word on the subject (and how do scientists deal with post-peer-review differences of opinion)? If the lay reader got even a bit of her head wrapped around this, it would be a big improvement over the status quo.

Finally, journalists, and (to a lesser but still substantial degree) scientists as well, place an inordinate significance on human health concerns with respect to ecological problems. Human health, of course, is a paramount consideration, but you should not have to have evidence of people keeling over or growing extra legs to sell an editor on a story about deformed frogs (or ozone depletion, global warming, endocrine disruption, water scarcity, shrinking biodiversity, etc., etc., unto oblivion).

Did I mention that Souder wrote about deformed frogs? But the point is generalizable. There are plenty of stories about science that the public would benefit from knowing even if they don’t have immediate implications for what we eat or drive or whatever. My hunch is that the public’s natural tendency is to be curious about what scientists are doing and what scientists think they’ll learn by doing it. Only the combined force of science education that makes science seem boring and impossibly hard, and scientists saying, “Look, don’t worry what it does, you wouldn’t understand” has deadened this natural curiosity. (C’mon, people are curious about Paris Hilton. Ain’t no way Paris Hilton is more interesting than a superconducting supercollider!)

Here again, I think scientists need to help journalists to make the situation better. Scientists really need to work out at least a good cocktail party explanation of what they’re studying, how they’re studying it, and why it matters. And, the “why it matters” part needn’t be closely linked to human health or gas efficiency or economic productivity. Scientists, and science journalists, can help the public understand that sometimes we are benefited by simply coming to better understanding of a piece of the world that intrigues us. But, if scientists can’t explain why this matters, no fair blaming the journalists for not explaining it.

So, now that we have an idea what needs to be done, let’s get to it!


Friday, September 23, 2005

Anti-science chickens coming home to roost.

Remember when I was worrying about the government's relations with science? How I thought government interference with scientists and their results might undermine the ability for the government to actually produce science that anyone can trust?

We may be on our way, folks.

In a story this afternoon on The California Report (sorry, no permalink -- it's the first story in the September 23 archive), it was reported that the U.S. Department of Education is withholding a study on bilingual education. It only took three years and a couple million dollars to do the study, so no big deal. As the description of the story puts it, "Officials say the research failed to meet standards for quality. But skeptics question whether the decision is politically motivated."

See what happens? You get a reputation for trying to thwart the release of scientific results that go against your policy objectives (or, say, those of your big donors). Then, if you withhold a study whose results might have implications for your policy objectives, people will see this as business as usual. Even if there are actually valid scientific reasons for rejecting the study, no one is going to believe you didn't make them up. Which means, of course, that folks will be suspicious as well if a better version of the study comes out and happens to support the position you (or your donors) prefer.

Dr. Free-Ride's Better Half views this as a little victory for the cynical enemies of science. They've got things to the point that a piece of science produced under the government's auspices can be dismissed out of hand regardless of its actual scientific merit or shortcomings. And from there, it's not such a stretch to cutting science out of the public policy dialogue altogether.

I'm a little less negative about this. For one thing, science not done under the government's auspices can still hold its own under scrutiny. For another, it's not obvious to me that the public ends up agreeing that the science doesn't matter. If there were a serious politically motivated effort to withhold a scientific study, wouldn't that indicate that the pols were scared of the science? Wouldn't that be a clue that they knew, deep down, that the science should matter in the public policy dialogue?


Thursday, September 22, 2005

Numbers don’t lie … unless they’re statistics.

A colleague of mine was nice enough to point me to this news item from The Economist on the reliability of medical research papers. The article in question is “premium content”, but the article it discusses, “Why Most Published Research Findings Are False”, is freely available. (Rock on, Public Library of Science!)

The paper is by John Ioannidis, an epidemiologist. It goes into loving detail, looking at factors like prior probability of your hypothesis being true, statistical power of the study, and the level of statistical significance, to show why certain common practices in medical research increase the probability that a “statistically significant” result is fairly meaningless. (If you’re not a math fan, it’s tough going. Even if you are a math fan but it’s been a while since you’ve mucked around with prob/stat, you may want to chew carefully.)
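If you want the gist of the math without wading through the paper, the key quantity Ioannidis works with is the positive predictive value: the probability that a “statistically significant” finding reflects a true relationship, given the pre-study odds that the tested relationship is real, the study’s power, and its significance threshold. Here is a minimal back-of-the-envelope sketch (the function name and the example numbers are mine for illustration, not taken from the paper):

```python
def ppv(prior_odds, power, alpha):
    """Positive predictive value of a 'statistically significant' finding.

    prior_odds: pre-study odds that a tested relationship is real
                (true relationships per false relationship tested)
    power:      probability the study detects a real effect (1 - beta)
    alpha:      significance threshold (type I error rate)
    """
    true_positives = power * prior_odds   # real effects that get flagged
    false_positives = alpha               # false ones that get flagged anyway
    return true_positives / (true_positives + false_positives)

# A field testing long-shot hypotheses: 1 real relationship per 10 false ones.
print(ppv(0.1, 0.8, 0.05))  # well-powered study: PPV ~ 0.62
print(ppv(0.1, 0.2, 0.05))  # underpowered study: PPV ~ 0.29
```

Even with a respectable alpha of 0.05, most “significant” findings in the underpowered case are false, which is the flavor of Ioannidis’s first corollary in a nutshell.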

It’s an interesting read (and, I imagine, an important one for researchers who want to avoid some of the pitfalls Ioannidis indicates). Rather than recapping his argument here, which would necessarily involve either going into way more mathematical detail than you want me to, or dumbing it down, I’m just going to give you his corollaries:

Corollary 1: The smaller the studies conducted in a scientific field, the less likely the research findings are to be true.

Corollary 2: The smaller the effect sizes in a scientific field, the less likely the research findings are to be true.

Corollary 3: The greater the number and the lesser the selection of tested relationships in a scientific field, the less likely the research findings are to be true.

Corollary 4: The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true.

Corollary 5: The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true.

Corollary 6: The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true.

A number of these didn’t surprise me. (“Financial interest might bias my results? Really?”) But I wasn’t expecting Corollary 4 at all. It does seem reasonable that if people in a research area don’t agree on precisely what they are, or should be, measuring, it’s harder to find out what’s really going on in that area. As Ioannidis puts it,

Adherence to common standards is likely to increase the proportion of true findings. The same applies to outcomes. True findings may be more common when outcomes are unequivocal and universally agreed (e.g., death) rather than when multifarious outcomes are devised (e.g., scales for schizophrenia outcomes). Similarly, fields that use commonly agreed, stereotyped analytical methods (e.g., Kaplan-Meier plots and the log-rank test) may yield a larger proportion of true findings than fields where analytical methods are still under experimentation (e.g., artificial intelligence methods) and only “best” results are reported.

As Ioannidis notes, the stage of research at which you’re working out what pieces of the system are important, what kinds of experiments might tell you something useful, etc., is very important for hypothesis generation. But, it would seem, to test these hypotheses in reliable ways you have to move from flexibility to rigidity — in other words, you need to separate the process of generating hypotheses from the process of testing those hypotheses. (Sir Karl Popper, in his grave, does whatever one would do in situations where rolling is not appropriate.)

Similarly, Corollary 6 is not as obvious. It would seem that more people working on a question would lead to better results than fewer people working on it. But hot fields are ones that are often relatively new (with appropriate standards still being worked out — see Corollary 4 again) and, perhaps more importantly, they are fields in which research teams are in fierce competition:

With many teams working on the same field and with massive experimental data being produced, timing is of the essence in beating competition. Thus, each team may prioritize on pursuing and disseminating its most impressive “positive” results. “Negative” results may become attractive for dissemination only if some other team has found a “positive” association on the same question. In that case, it may be attractive to refute a claim made in some prestigious journal.

In other words, the stakes of the competition can influence the flow of information from the labs to the literature. Moreover, it wouldn’t be surprising if the desire to get good results into print first makes mildly promising experimental results look better. (“Holy cow, we found it! Write it up, stat!”)
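A back-of-the-envelope simulation shows how much this selective flow can matter (again my own sketch; the prior probability that a tested hypothesis is true, the statistical power, and the alpha level are all made-up illustrative numbers): if only "positive" results reach the journals, a surprisingly large share of what gets published is false.

```python
import random

random.seed(1)

def published_literature(n_studies, prior_true=0.1, power=0.8, alpha=0.05):
    """Simulate n_studies independent hypothesis tests where only
    'positive' results get published. Returns (true_positives,
    false_positives) among the published results."""
    tp = fp = 0
    for _ in range(n_studies):
        if random.random() < prior_true:      # hypothesis actually true
            if random.random() < power:       # and the study detects it
                tp += 1
        else:                                 # hypothesis actually false
            if random.random() < alpha:       # but the study false-alarms
                fp += 1
    return tp, fp

tp, fp = published_literature(100_000)
print(f"share of published positives that are true: {tp / (tp + fp):.2f}")
```

With these invented numbers, roughly a third of the published "findings" are false alarms, before any of the hot-field timing pressures Ioannidis describes make things worse.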

Of course, scientists are human. Scientists can't help but be biased by what they expect to see and by what they want to see. But that's where the community of science is supposed to come in. All the individual biases are supposed to somehow cancel out in the scientific knowledge the community produces and endorses. Even if you see something in the data, if other scientists can't see it, the community won't accept it. And, the way the story is usually told, competition between scientists for discoveries and recognition and use-through-citation and groupies is supposed to make each of them scrutinize their competitors' research findings and try to find fault with them.

Ioannidis seems to be saying, though, that it doesn’t always work out that way. Whether because scientists are trying to observe phenomena that are hard to pin down, or because negative findings only turn into publications when they show someone else’s positive findings were mistaken, or because scientists don’t always understand statistical methods well enough to set up good experiments of their own or critique the statistical methods used by papers in the literature, the community of science is ending up with less clarity about what it knows and with what kind of certainty. But, perhaps the insights of Ioannidis and others will help improve the situation.

Also worth noting, of course, is the fact that Ioannidis is concerned with the medical literature, rather than all scientific literature across all fields. There may be special difficulties that come from studying wicked-hard complex systems (like sick humans) that you don’t encounter, say, dealing with a chemical reaction in a beaker. Beyond dealing with different kinds of phenomena and agreed upon (or disputed) ways to characterize and probe them, scientists in different fields deal with different expectations as to what “scholarly productivity” looks like; I’m told the pressure to publish multiple papers a year is especially high in biomedical fields. Finally, while most of us couldn’t find a useful real-life application for string theory if we had spinach between our teeth, biomedical research seems pretty relevant to everyday concerns. Not only does the public eagerly anticipate scientific solutions to a panoply of problems with human health, but there’s a big old pharmaceutical industry that wants to be a part of it. But hey, no pressure!

My favorite section title in Ioannidis’ paper is “Claimed Research Findings May Often Be Simply Accurate Measures of the Prevailing Bias.” This is where the prescription lies: good science will depend on vigilant attention to what prevailing biases there are — in a research group, a department, a scientific field, the community of science, and in the larger societal structures in which scientists are embedded — and serious attempts to ensure that those biases are stripped out of the knowledge that scientists produce.

(And, for an extreme example of how research can give us more reliable information about researcher bias than about the system under study, check out this survey about motherhood and career choices given to freshman and senior women at Yale. If scientific and social-scientific journals had fashion spreads, this would be the “don’t” picture.)

ETA: For some reason, Blogger doesn't want you to see that last link. Here's the URL:

Another URL or two: There were Blogspot server problems the night this went up, so apparently the links are haunted. Here's where to find Dr. Ioannidis' article:

Here's the URL for the Public Library of Science - Medicine:


Tuesday, September 20, 2005

What scientists know (or don't)

I worry a lot about what lay people don't know about science. Sometimes the problem is that smart lay people have been scared away from thinking about science (by the people who tell them that you need a gigantic brain to do science). Other times, people seem to think they can hold forth on scientific debates despite the fact that they don't actually grasp the basics of the science they're talking about.

For an example of the "anyone can weigh in on science X" sort of person, consider the case of Timothy Birdnow, a property manager who has taken it upon himself to set out all manner of problems with the theory of evolution. Useful dissections of Mr. Birdnow's claims have been given by P.Z. Myers and The Questionable Authority. The diagnosis from The Questionable Authority is that Mr. Birdnow overlooks the fact that you can't coast on just your native intelligence here — you actually need to know some biology if you're going to weigh in on a biological debate without looking like a doofus:

In fact, scientists who dare to suggest that they might be more qualified to comment on scientific matters than non-scientists run the risk of being branded as "elitists". It's not elitism, folks, it's specialization. Modern society is far, far too complex for everyone to be good at everything. Most people select career paths that are specialized fields, and they are (normally) better at their own field than they are in other fields. This means that they are (normally) better able to develop informed opinions about matters within their fields than are people who lack the strong background in that area.

Not knowing Mr. Birdnow, it's hard for me to know whether this diagnosis is correct. My experience has been that lay people are much more likely to under-estimate their ability to "get" science than to over-estimate it. There are, of course, people who think they know everything, but they tend to be less likely to barge in to science than, say, philosophy (because how hard could philosophy be?). So my own completely unsubstantiated hunch is that perhaps Mr. Birdnow has a science coach on the side. And maybe that science coach has, um, a political agenda in whose service Mr. Birdnow is spokesmodeling. Or, maybe the science coach really does have a good grasp of biology (and the proper relation of DNA and RNA, etc.), and it's just that the cell phone keeps cutting out during the coaching.

But, it turns out, the specialization isn't just a matter of scientists doing science and property managers doing the property managing. Indeed, it would seem physicists might not know as much as they think about the state of evolutionary theory. Among other things, it seems that math-y scientists (like physicists) might not fully appreciate that theories that don't look like sets of equations can be perfectly good scientific theories. Also, physicists may not know how evolutionary biologists subject their theories to empirical test. It may all be science, but we have different scientific disciplines for a reason. While physicists may "get" the science in evolutionary theory better than, oh, property managers, they probably won't get it as well as someone who works on evolutionary theory for a living.

(Another discussion of the physicist who prompted these observations notes a certain irony in a string theorist policing the goodness of other scientific theories. I'm not going to cast aspersions. Some of my best friends used to work on string theory.)

So: good to know some science to speak with authority about science. Good to know specific science X to speak with authority about specific science X.

But, economics? It seems a large number of economists (78% of 200 surveyed at the 2005 meeting of the American Economic Association) picked the wrong answer to what was supposed to be a straightforward economics problem. Could it be that the problem was not that straightforward? That not even economists are qualified to hold forth on economics? That economics, while dismal, is not a science?

Dude, I don't have all the answers; just some questions and some links.


Who'll protect kids from the EPA?

Via my better half, an article from the Baltimore Sun reporting that new EPA rules allow testing of pesticides on children.

Now, the thing is, these are new rules that were prompted by criticisms of earlier problems -- including pesticide studies whose human subjects didn't know what they had been exposed to, nor even the purposes of the studies in which they were subjects! So, the idea was that the new rules would address these problems. Right?

Quoting from the article:

In unveiling the new rules last week, the EPA promised full protection for those most at risk of unethical testing.

"We regard as unethical and would never conduct, support, require or approve any study involving intentional exposure of pregnant women, infants or children to a pesticide," the rule states.

But within the 30 pages of rules are clear-cut exceptions that permit:

  • Testing of "abused or neglected" children without permission from parents or guardians.
  • "Ethically deficient" human research if it is considered crucial to "protect public health."
  • More than minimal health risk to a subject if there is a "direct benefit" to the child being tested, and the parents or guardians agree.
  • EPA acceptance of overseas industry studies, which are often performed in countries that have minimal or no ethical standards for testing, as long as the tests are not done directly for the EPA.

Shall we take these point by point?

1. It is not on the face of it outrageous to think that there might be certain instances in which participating in a scientific study could have a potential benefit for a human subject. If that human is a competent adult, the idea is to explain the potential benefit, as well as the potential harms, and let that competent adult make her own decision. If the human is a child, generally we look to the parent or guardian to make the decision that is in the best interests of the child. (And, there may be an argument here that the adult rendering consent for a child needs to be extra careful, not only about the immediate costs and benefits to the child, but also about how this decision may affect the child's later range of choices in certain, sometimes irreversible ways.)

An abused or neglected child, arguably, does not have access to parents or guardians who can make decisions in the best interests of the child. (Set aside, for the moment, concerns about who gets labeled as abused or neglected; I think there are really worries to raise here, but even if there weren't, we've still got stuff to worry about.) So, the abused or neglected child can't get proper consent from a parent or guardian to participate in a scientific study. But, they get to participate in a study without their parent's permission.

Why is the default position here getting to be a subject in a pesticide study? We're not talking about testing a new, promising drug (late in the drug-approval process, past the stage of figuring out how much of the compound a body can take without getting sick) when no other treatment is available for an illness that is doing you serious harm. We're talking about being exposed to pesticides. What is the potential benefit for the child?

2. I would love to see a succinct explanation of the "ethically deficient" research that is being allowed. Are we talking sloppy notebooks? Lying to the IRB? Being mean to the human subjects? Help me out here.

Also, what are the guidelines for what is crucial to protect public health? How precisely are we defining public health, and what are the boundaries on what is permissible in its protection? (Surely having nourishing food, safe water, and proper sanitation is essential to public health, and sometimes the government ... eh, just doesn't get around to it right away.)

Are unethical scientific experiments suddenly going to lead to big improvements in public health?

3. If we're going to allow kids to undertake more than a minimal health risk, someone needs to spell out what the "direct benefit" to the child being tested could be. Again, we're talking about exposing kids to pesticides. Is this expected to make them healthier? Smarter? What is the likely payoff that justifies the risk they are being asked to undertake? (And c'mon, if the parents are the ones rendering consent here, or if the kids are infants, they aren't even being asked, so it's even more important not to screw up.)

Or are we counting as the "direct benefit" something along the lines of warm meals and a ride to the clinic in an air-conditioned car? Because honestly, that hasn't worked out so well in the past. (Ask the Public Health Service.)

4. Overseas industry studies, performed in places where human subjects are not afforded the protections they are here ... because if you need to have the science to sell your products, you might as well be able to do the studies somewhere life is cheap. It's just good economic sense.

While I suppose it is possible to get scientifically valid data from studies where one treats human subjects unethically, it isn't something scientists like to do. Medical journals tend to have policies against publishing such results. Studies that treat human subjects badly could make it harder for other scientists, no matter how ethical, to find human subjects to participate in future studies -- especially if it makes the news. (In the aftermath of the Tuskegee syphilis experiment, how much harder was it to find willing participants in AIDS research than it might have been otherwise?)

Possibly there is some scientific knowledge that it would be hard to get except by asking human subjects to undertake significant risks. Depending on the nature of those risks, and the age of the subjects, it might even be that the clearest ways to get that knowledge would be, by accepted definitions, unethical.

Is this a good reason to relax the definition of what is ethical?

First, there may be other good ways to answer the scientific question that are not unethical. Sometimes ethical constraints make scientists more clever in how they approach problems. (On the other hand, it would seem that an overabundance of humans on whom to experiment made some of the medical researchers in Nazi Germany absolute morons, scientifically speaking.)

Second, it seems like we shouldn't even get to the let's-lighten-up-on-the-ethics stage unless the scientific knowledge in question is absolutely essential. The harm of not finding a workable answer to the scientific question has to be big, and it has to harm more than just the R&D team trying to bring a new product to market, or the shareholders, or the CEO.

If testing pesticides on children is so essential, and promises so much benefit to the children, then maybe we should go right to the children of the pesticide industry. No, not the kids of the parents working the line in the pesticide factory -- the kids of the CEOs, the stockholders, the lobbyists, etc.

The EPA will make sure it's OK.


Monday, September 19, 2005

Academic blogging survey/meme

I picked it up at Pharrryngula, where every day can be Talk Like a Pirate Day with the mere click of a mouse.

The following survey is for bloggers who are actual or aspiring academics (thus including students). It takes the form of a go-meme to provide bloggers a strong incentive to join in: the 'Link List' means that you will receive links from all those who pick up the survey 'downstream' from you. The aim is to create open-source data about academic blogs that is publicly available for further analysis. Analysts can find the data by searching for the tracking identifier-code: "acb109m3m3". Further details, and eventual updates with results, can be found on the original posting:

Simply copy and paste this post to your own blog, replacing my survey answers with your own, as appropriate, and adding your blog to the Link List.

Important (1) Your post must include the four sections: Overview, Instructions, Link List, and Survey. (2) Remember to link to every blog in the Link List. (3) For tracking purposes, your post must include the following code: acb109m3m3

Link List (or 'extended hat-tip'):
1. Philosophy, et cetera
2. Pharyngula
3. Adventures in Ethics and Science
4. Add a link to your blog here.


Age - 37
Gender - Female
Location - San José, California, USA
Religion - Undecided
Began blogging - This one (the one that took) February 2005; earlier attempts started May 2003
Academic field - Philosophy of Science
Academic position [tenured?] - Assistant Professor [not yet]

Approximate blog stats
Rate of posting - almost daily (4-10 times per week, depending on what else is going on)
Average no. hits - 20/day
Average no. comments - 0/day
Blog content - all more or less on the theme of responsible science; some more about current events, some more personal.

Other Questions
1) Do you blog under your real name? Why / why not?
- No, but there are quite enough bread crumbs that anyone who wants to know my real identity can figure it out. The pseudonym is because it's not about me, and because I'm not interested in what the high and mighty at my university might have to say about my opinions.

2) Do colleagues or others in your department know that you blog? If so, has anyone reacted positively or negatively?
- Those who know what a blog is, know I have one. A subset of them read it. The comments have been positive.

3) Are you on the job market?
- No, and, if I get tenure, I hope never to be on the market again. (I like it here!)

4) Do you mention your blog on your CV or other job application material?
- No.

5) Has your blog been mentioned at all in interviews, tenure reviews, etc.? If so, provide details.
- Not that I'm aware of.

6) Why do you blog?
- Originally, the idea was to keep the students in my "Ethics in Science" class thinking beyond class time and assigned readings, to keep them up with relevant current events, etc. Then, the blog became a way for ME to think about issues I teach and things I'm working out in my research.

Also, I find it's a good way to write regularly (in smaller bites), and occasionally a good way to get feedback.

Plus, all the cool kids are doing it.

Pass it on!

Thursday, September 15, 2005

Democratizing science.

It's nice to know I'm not the only one who gets all exercised about some of the issues I talk about here. To wit: T.W. McKinney's comment on this post in which PZ Myers calls for knowledge to the people. McKinney points out that democratizing science might not be such a red-hot idea, depending on just what you have in mind when you say "democratizing":

I think this blog's ideological opponents might think the same of themselves, i.e. that they're "democratizing the process of scientific research." In a very broad sense, they're correct, too-- opening scientific methods to scrutiny by the court of public opinion is one way of "democratizing" science. They also take aim at the community of scientific experts-- who, it must be said, have (and commonsensically) wielded a lot of influence over public policy in various areas, etc. I believe that second aim of "democratization" renders the anti-science conservatives appealing to a lot of average people. If experts are appealed to in legal and political matters, their views (which, in most cases, resemble our best guess at the truth) appear to 'count more' than the average Joe's. So I, somewhat hesitatingly, might suggest that the anti-science folks are also aiming for democratization-- they are, as it were, populists challenging the hallowed ground of academia, and all of those scientists who think they're the one-and-only arbiters of 'the facts.' I'm not sympathetic to the leaders of this movement-- because they're attempting to destroy one of the most successful edifices of western civilization (probably *the* most, if I weren't so cautious in my phrasing)-- but I'm also aware that appeals to expert opinion in certain matters can leave people feeling like their views haven't been counted (or don't matter). Hence, the backlash.

It's good for people to have a say in what happens. It's even better if their say is backed up by reasons and, when possible, by evidence. Thus, we can not only hear what everyone has to say, but also try to find a sensible way to evaluate what has been said and figure out where to go from there.

This is how things are supposed to go in science's public square, too. But the crowd in science's public square has been trained to pay attention to issues like relevance and credibility when someone makes a contribution. The rules of engagement are rather better defined here (if you saw it in your lab, you have to explain how we can see it in other labs, else we ain't buying) than they are in the broader public square where debates over politics and the fall TV line-up take place. Face it, I can insist that it just isn't relevant that Candidate X would be a good guy to have a beer with, but my insistence doesn't rule it out of the debate.

But, as McKinney notes, appealing to certain scientific standards to disqualify contributions to a scientific debate often looks to non-scientists like a power-play designed to maintain control of "expert" status and tell everyone else what to do. And, he notes (as I have in the past) that crappy science instruction may be a culprit here, leaving the public at large unable to distinguish legitimate moves within scientific discourse from unwarranted silencing of legitimate voices in the debate.

So again, let's fix science education.

But there's another relevant difference between the scientific public square and the broader public square: the goal that brings the scientific community together and brings their voices to the square is the shared goal of achieving a better understanding of how various bits of the world work. People coming to the broader public square nowadays don't seem to have a common goal that would, in its being attained, make everyone in the community better off. Rather, there's a struggle for goods -- for my guy to win so I get the stuff I want (which means your guy won't take my goodies and give them to you). Sad to say, it seems a situation designed to bring out the worst in people rather than, say, reasoned debate.

So maybe it's not that science needs more democracy (although I applaud Pharyngula and sites like it for bringing science to the people). Maybe what we really need is for the general public to engage each other more like scientists do.


Tuesday, September 13, 2005

Hostile workplace.

Guest blogging over at Brian Leiter's blog, Jessica Wilson writes about just how bad things have gotten for government scientists since November 2000 or so. She includes some essential links, which I'll duplicate here since they're just that important:

2003 Waxman Report, "Politics and Science in the Bush Administration"
Survey of Fish and Wildlife Services (FWS) scientists

And, if you cruise on over to the Government Reform Minority Office Politics & Science page, you'll find plenty more relevant links in the sidebar.

Long story short: Scientists in the government's employ have had their findings suppressed and distorted. The distortions and suppressions have tended to be such that "Science" seems to support, or at least not to undermine, administration policy initiatives; they have also tended to be very pro-industry. The scientists in the government's employ (plus scientists in general) think the pervasive "political intervention to alter scientific results" is bogus.

Indeed, it would be bogus no matter what the political aims of an administration that interfered with science in the ways alleged. Completely bogus.

I can't help wondering, though, if these political efforts will be self-undermining. The reason to try to interfere with what the science says is that what the science says is taken to be important. If science says that we can drill in ANWR and drive Hummers while doing no harm to the environment, then by golly we can! If science says that condom use is an effective way to stop the spread of HIV, then policies that discourage condom use are going to need to address this and provide some other justification.

But, if you screw with the science, and if scientists (and the public) notice that you're screwing with the science, then what the science says can't play the same justificatory role for your policies. Why should anyone care whether your bogus, in-house science seems to support your policy initiatives? It seems like it would matter more what real (independent) scientists have to say about the matter.

Suddenly, we're plunged into a world of the real scientists vs. the government scientists (which at this point is not what we've got -- there are lots of scientists working for the government who value their scientific integrity maybe more than their jobs, but if they all get fired ...). Everyone in the scientific community would know where to turn for objective scientific reports. That information would certainly get out to others.

Why, then, would the government even bother to keep their own stable of scientists (as opposed to, say, press release writers)?

By the same token, though, the government (via NSF and NIH and that crowd) funds an awful lot of science by folks who do not think of themselves as "government scientists". No reason to think that there might be more pressure brought to bear on these "independent" scientists to think of their results as "deliverables" that ought to be tailored to the needs of the funders.

Hard to imagine, in this nightmare scenario, that the real scientists would keep taking government funding. If they couldn't find some other source of funding that would let them do independent, objective science, it's easy to imagine them going someplace else where the climate for science is more favorable.

Then the government wouldn't have to worry about wasting its money supporting science that undermines policy initiatives. Heck, once the scientists have fled, science wouldn't even need to be a part of the debate. Just pure, unadulterated politics.

Of course, our economy will be just fine without objective scientific research, so no worries. (C'mon, you're not telling me the people in charge would even think of doing something that could seriously harm the economy …)

UPDATE: Discussing Chris Mooney's book The Republican War on Science, Amanda Marcotte makes some very nice points about how this is all part of a larger war on reality. The killer point is that the war against science seems not to be one that can be won on ideas. Else, why not be open about saying, "Science sucks, theology rules, vote for us!"?


Monday, September 12, 2005

Getting clear on what we're talking about (or, picking nits).

This from the "John Holbo does the extensive reading so I can pick the nits" Department:

Micklethwait and Wooldridge, in their book The Right Nation, claim that "the Right is clearly extending the battle of ideas into new territories." One of these targeted "new territories" is scientific discourse. I, of course, would like to take note of where the "battle of ideas" is a battle within science (such as a battle about what we know, or what we ought to be able to find out to count as getting the job done) and where it is a battle about science (especially about, given a particular piece of scientific knowledge, what one ought to do).

Here are some of the bits John Holbo quoted over at Crooked Timber:

Chapman, a committed Christian, first got interested in the subject because of worries about free speech: in 1995 he rallied to the defense of a California science professor who was threatened with the sack merely for arguing that evolution does not explain everything. …

The intelligent design movement is an example of the Right’s growing willingness to do battle with what it regards as the liberal "science establishment" on its own turf, using scientific research of its own. Right-wing think tanks have attacked scientific orthodoxy on stem cells, arguing that there is no need to harvest embryos, as it should be possible to extract stem cells from adults. They have also pored over the data on global warming. Bjorn Lomborg, the author of The Skeptical Environmentalist (2001), an indictment of green overstatement, is a cult hero in places like the AEI and Discovery. There are also battles brewing on animal rights, euthanasia and the scientific origins of homosexuality. So far the science establishment has given little ground to the conservative upstarts, particularly on intelligent design. In Ohio, some scientists equated supporters of intelligent design with the Taliban. But the Right is clearly extending the battle of ideas into new territories, just as Milton Friedman and others did in economics forty years ago.

The issues, and my take on them:

1. Does evolution explain everything? Does it need to? There's good, healthy discussion within the scientific community about how much any particular scientific theory explains, and how much any particular scientific theory ought to have to explain. And, this is not always a determination you can make quickly — it may take a while to figure out all that a particular theory is capable of explaining (and, some of the things that look like they will be good explanations end up falling apart).

Of course, Karl Popper (the philosopher of science who scientists love the most) famously said that one should be cautious of theories that explain everything. That way lies pseudo-science.

At any rate, the real question one would want to take up is: does evolution explain enough of what we demand that it explain? And if not, do we have a promising alternative that can get the explanatory job done? These are clearly questions within science. And, there are plenty of places you can go to see what the biologists have to say about them.

2. Extracting stem cells from adults rather than embryos. Here, there is the complex of technical issues: Can stem cells be extracted from adult cells? Can this be done with results as reliably good as we would get extracting stem cells from embryos? Will the stem cells extracted this way function in the required or desired ways (i.e., will they be suitable to a particular application, or will the fact that they have been extracted from adult cells mean they won't behave in certain of the ways that embryonic stem cells would)? These are all questions within science.

The question of whether it is morally (or politically) better to get stem cells from adult cells than from embryos is not a question within science but a question about how science ought to be used. Scientists ought to be involved in this discussion, but intellectually honest scientists (and others) can come down on different sides of this issue and the principles of science will not be enough to bring them to agreement.

3. Global warming and "green overstatement". The big scientific questions are what are the data, and what can we conclude (and, especially, predict) from them? Modeling and making accurate predictions are hard, but scientists are pretty good at engaging each other about their modeling practices.

Given that the predictions of even a very good model are no guarantee that what the model predicts will actually come to pass, it's a separate question how one ought to bet about what will happen and what steps, if any, ought to be undertaken to prevent certain possible outcomes. While both of these questions build off of what scientists know (at least, what seems likely and what seems possible given their best models), they also involve value judgments that go beyond the scope of what science can tell you. Is (say) saving a particular city from hurricane-related flooding important enough to warrant the funds required to shore up the levees? This probably depends on what other worthy goals one hopes to accomplish with a limited pot of money. Also, how you bet on the foreseeable possible outcomes depends at least in part on how bad you would judge it to be to (a) not undertake a preventative measure that could make a huge difference in avoiding a catastrophic outcome, or (b) look like you expected outcome A when what actually comes to pass is outcome B.

Building good models and collecting good data to shape and test those models = scientific question.
Using those models to decide what to do = question about how to use science. (It would be good to have someone who understands the models in the room when you make your decisions, though.)

4. Animal rights and euthanasia. From their brief mention, I can only guess that these are questions about what science ought to do or facilitate. Science can tell us a lot about pain and consciousness in humans and in non-human animals. But there is a further value decision about what one is obligated to do (or entitled to do) in the light of all the information science can provide. Science can quantify my pain, but science can't tell me whether that pain is meaningful or meaningless.

5. The scientific origins of homosexuality. Presumably, this is about whether there is scientific evidence to establish that homosexuality is innate/genetic/biological/"natural". Of course, all manner of traits (including behavioral traits) are subjects of biological study. Science may end up working out a fairly clear biological story about why individuals end up with the sexual orientations they do. Or, science may end up saying that there isn't any straightforward set of marching orders from your genes (or the hormones to which you were exposed in utero, or the kind of parental influences to which you were exposed while growing up) to your sexual orientation.

Here, I get the feeling that people outside of science really want science to deliver a particular answer. Because then, they can use the official ruling from science to support whatever evaluation they themselves would like to make of homosexuality.

But the thing is, whether or not a particular trait has a biological basis is an entirely separate question from whether we value that trait or try to cure/forbid/eradicate it. You want me to be nice to other people whether my niceness comes naturally or only with great effort. If Lance Armstrong is a mutant and that's why he's such a powerful cyclist, it doesn't make his cycling any less cool (or any more worthy of eradication). So ultimately, I'm suspicious of all the non-biologists (on all sides) who feel like they have a lot at stake in whatever biologists can tell us about homosexuality. Because the scientific story won't settle the matter of who we value as human beings and why we value them.


Friday, September 09, 2005

Reasonable people, reasonable disagreements.

Did you know that the Rio Rancho Public Schools, in Rio Rancho, New Mexico, have a district policy on science education that has been criticized by the New Mexico Academy of Science, a number of science chairs from the University of New Mexico, and the Faculty Senate of the New Mexico Institute of Mining and Technology?

You did if you've been reading The Panda's Thumb.

Anyway, today I had a look at the actual district policy and thought it worth dissecting. Unless, you know, you have a note from a parent or guardian excusing you from the dissection.

Here's the infamous Policy #401 in its entirety:

The Rio Rancho Board of Education recognizes that scientific theories, such as theories regarding biological and cosmological origins, may be used to support or to challenge individual religious and philosophical beliefs. Consequently, the teaching of science in public school science classrooms may be of great interest and concern to students and their parents.

The Board also acknowledges the conditional trust parents place in public education, as well as the requirements of the Constitution and New Mexico education law, that the classroom not be used to indoctrinate students into any religious or philosophical belief system.

Because of these concerns, this policy recognizes that the Rio Rancho Public Schools should teach an objective science education, without religious or philosophical bias, that upholds the highest standards of empirical science.

Therefore, science teachers in Rio Rancho Public Schools will align their instruction with the District’s approved curricula and fully comply with the requirements of the New Mexico 2003 revised Science Content Standards, Benchmarks, and Performance Standards. Age-appropriate emphasis will be given to Strand I, Science Thinking and Practice; Strand II, The Content of Science; and Strand III, Science and Society. When appropriate and consistent with the New Mexico Science Content Standards, Benchmarks, and Performance Standards, discussions about issues that are of interest to both science and individual religious and philosophical beliefs will acknowledge that reasonable people may disagree about the meaning and interpretation of data.

(Adopted: 8-22-05)

I've thrown in some bold emphasis above to highlight the points that I think warrant further discussion.

" … scientific theories, such as theories regarding biological and cosmological origins, may be used to support or to challenge individual religious and philosophical beliefs." I suppose this is true if one has religious and/or philosophical beliefs that include particular commitments.

For example, I hold a certain set of beliefs about the deity, including the belief that the deity affixed the words "love one another," in indelible ink, to the wall of the convenience store down the block, three weeks ago. But when I mention this to the clerk at the store, he whips out the surveillance tape, time and date stamped from three weeks ago, showing a neighborhood kid (Nelson) using a Sharpie to affix those very words to that very spot.

It seems I have a religious belief that is being challenged. Can it survive this challenge? It can if I am not committed to the veracity of surveillance videotape (certainly an option — I saw Minority Report) or if I accept the idea of the deity working through Nelson to affix the words. In other words, my religious or philosophical belief is only challenged to the extent that it is entangled with other specific commitments I might have.

"… the classroom not be used to indoctrinate students into any religious or philosophical belief system …" What precisely is indoctrination? I take it this involves not just the presentation of a view, but an exhortation to commit to it, or else. (Or else what? Get a failing grade? Look dorky in front of your friends? Burn in eternal hellfire?)

I'm a professional educator and I'm not exactly sure how I'd indoctrinate my students even if I wanted to. The ideas I'd like to shove down their throats (Learning is good for you! Doing your homework will make life in this class better!) don't take, at least for a large proportion of the students. But, we're not here to teach me mind control; we're here to dissect the policy.

The intent of this bit seems to be to say, to the extent that science might be a set of commitments resembling religious commitments, science ought not to be something students are forced to commit to, or else. Because that would be akin to religious indoctrination.

We'll return to this point in a minute.

"… an objective science education, without religious or philosophical bias, that upholds the highest standards of empirical science …" I take it this means laying out for the students how scientists work to build and test their theories from empirical evidence. Here are some observations. Here's a theory that seems to account for them. Here's something else that theory would predict; let's go to the lab and see if it happens.

This is good, practical knowledge, although significantly more complex than, say, learning how to conjugate French verbs. You can certainly learn how scientists do science without becoming a scientist yourself (in the same way a 14-year-old can learn all about safe sex without having any sex). And, you can master the mechanics of a theory, at least as a problem-solving tool, without buying into it (as attested by the high grades of certain former physics students who doubt the world is really as quantum mechanics says it is).

"… reasonable people may disagree about the meaning and interpretation of data." If they do it in a reasonable way, they may.

Scientists, of course, frequently encounter observations that are subject to a judgment call. (Is that really a hint of pink I see in the flask, or am I still some ways off from the endpoint of this titration? Is this an accurate reading, or is my detector on the blink?) Nothing unreasonable about that. You get more data to clarify the situation.

But we're concerned with places where "the meaning and interpretation of data" supposedly pits science against religion. Here, there is certainly room for disagreement, but it is much more reasonable when people put their commitments on the table.

Dr.F: The deity wrote "love one another" on your wall.
Apu: The surveillance tape proves that Nelson wrote "love one another" on my wall.
Dr.F: I don't believe videography gives reliable data about happenings in the world.
Apu: You could ask Nelson.
Nelson: Yeah, I wrote it.
Dr.F: I don't believe Nelson, either.
Apu: But do you see how, if you accepted the videotape and Nelson's testimony, you'd think Nelson did it?
Dr.F: Yes, I understand the inference. I just don't believe it.

This is a reasonable disagreement. It rests on a clearly articulated disagreement about what should count as meaningful data. Clearly, there is lots of room for further discussion of why certain types of data are deemed reliable or unreliable, or of what sorts of demonstrations might change one's mind about such things. Depending on one's commitments, coming down on one side may seem more "reasonable" than coming down on the other. (What's rational, it seems to me, is a matter of the context -- of what else I know and what else I'm committed to.)

Here's what's not reasonable:

I'm committing to a deity that has properties X and has performed acts Y as detailed in this set of sacred scriptures AND I'm committing to the whole set of empirical facts and the best scientific theories we have to date (with the understanding that these may be updated as further facts and theories present themselves) AND in the cases where these commitments come into conflict, rather than flagging it as part of the mysterious nature of the deity or the as-yet imperfectly understood nature of reality, I am committed to SCIENCE being the party in error, but Lordy, I'm still totally down with science.

You can have science and you can have religion. You can learn about both without practicing either. Like playing accordion and roller skating, they are not mutually exclusive. On the other hand, if you start taking advice on accordion playing from your roller derby coach, or on skating from your accordion teacher, it's possible you'll run into problems.

Thursday, September 08, 2005

Part of the solution, or part of the problem?

I've spent a fair bit of time in these parts bemoaning the low level of scientific education/literacy/competence among the American public. Indeed, I recently expressed the opinion that college graduates ought to do the equivalent of a minor in a particular science. I tell anyone who asks me (and a lot of people who don't) that science is fun. Some of the very best teachers I know are science teachers.

But I wonder sometimes whether I'm helping turn the educational tide or just letting the current drag us in the wrong direction.

You see, I teach a philosophy of science course. (Actually, I teach multiple sections of it, and I teach it every semester.) And, at this university, that philosophy of science course satisfies the upper division general education requirement in science.

Yes, that's right. Students can dodge taking an actual science course by taking a philosophy of science course instead. This yields throngs of students who are scared silly of anything scientific, and who know exactly one fact about philosophy: it's in the Humanities college. (Humanities = fluffy, unthreatening classes where you read novels or watch films or look at paintings, and it's all about what you think is going on, with no right or wrong answers. At least, this is what certain of my students assume before enrolling for this course.)

How on earth, given my aforementioned peevishness about science-scared students and community members, can I live with my role enabling the flight from learning some science?

It doesn't hurt that some of the other options for filling this requirement have well-earned reputations for being "gut" courses (or as some like to say, "science-lite"). Notably absent from the list are many of the standard, science-major-y fundamentals. Instead, the list is heavy on physics for musicians, nutrition and exercise, and astronomy for people who will not do math under any circumstances. (The main exception: the offerings from geology and meteorology seem significantly more "macho" ways to fulfill the requirement. Go earth and atmospheric scientists!) My course, I'm told, is actually kind of challenging. So even if the students are escaping a class in a science department, with me they're not escaping work.

Also, the general education requirement was structured specifically to make students pay attention to the scientific method, to understand the difference between science and pseudo-science, and to understand science as an endeavor conducted by humans that has impacts on humans. As a former beauty queen science student, taking only the hard-core science courses, my experience is that we saw a lot of patterns of scientific reasoning, and we learned to extend these patterns to deal with new problems ... but we didn't have loads of time to get reflective about the scientific method. For me, that reflective awareness didn't really happen until the semester I (1) started doing research, and (2) took a philosophy of science course.

For the brief span of years in which I would have counted as a scientist, I think what I got out of philosophy of science made me a better scientist. (That I fell prey to philosophy's charms and left science is another issue for another post.) And, the small cadre of science majors who take my course (perhaps because they'd be embarrassed to take a "physics for poets" kind of course) seem to get something useful from the course that they can bring back to their science-department understanding of science. In short, the science-y folk seem to think the course gives a pretty reasonable picture of the scientific method and the philosophical questions one might ask about its operations.

But what about the scared-of-science folk?

I can't deny that there's a part of me that wants to sign them up for intro chemistry (and biology, and physics). But I know full well that their hearts would burst before they even got to the first quiz. And, sadly, some of their instructors would decide up front that some of them were just too dumb to learn science.

I'm foolish enough to think even the ones who are scared of science can come to understand something about the way scientists try to connect theories and evidence. I'm naive enough to ask them to think about how scientists make decisions, and to make them do exercises where they have to try to think like scientists. I'm silly enough to make them do research in the scholarly scientific literature, and to ask them to make some kind of sense of some of the articles they find there.

They may start out seeing my course as a way to dodge science, but by the end they are not as scared of science as they were at the beginning. (Or perhaps, they've shifted their fear to philosophy instead …)

Am I right that my course might be making the situation just a little bit better, or am I living a lie?


Tuesday, September 06, 2005

Science journalism: let's see some.

Do you ever miss good science journalism? I do, and I'm not the only one.

Panda's Thumb notes the story by Chris Mooney and Matthew C. Nisbet in the Columbia Journalism Review on how science journalists have done in covering the evolution vs. intelligent design battles.

The short answer: not so well.

The average American who reads a newspaper — and not just a little local paper, but a paper like The Washington Post, The New York Times, or the Atlanta Journal-Constitution — could hardly be blamed for thinking there's a real live scientific controversy brewing here, given the kind of coverage the battles get. Scrupulously "even handed", the articles print claims from evolution opponents with parallel claims from scientists, then leave it for the reader to sort out. But how do you do that over your coffee if you don't have a firm foundation in scientific reasoning?

"Objective" reporting seems to have become a formula whereby you quote an equal number of people on each side of an issue and leave it at that. Context? What's that?

Turns out, it's what your reader needs to make sense of science reporting.

But of course, people who go into journalism aren't necessarily any better versed in scientific reasoning than are their readers. So maybe science writers put together this kind of "fair and balanced" reporting because, not having a firm scientific foundation with which to assess what they're observing and getting from their sources, they can't figure out how to avoid the "experts disagree" storyline. In other words, the scientifically naive are leading the scientifically naive.

Mooney and Nisbet make some recommendations:

So what is a good editor to do about the very real collision between a scientific consensus and a pseudo-scientific movement that opposes the basis of that consensus? At the very least, newspaper editors should think twice about assigning reporters who are fresh to the evolution issue and allowing them to default to the typical strategy frame, carefully balancing “both sides” of the issue in order to file a story on time and get around sorting through the legitimacy of the competing claims. As journalism programs across the country systematically review their curriculums and training methods, the evolution “controversy” provides strong evidence in support of the contention that specialization in journalism education can benefit not only public understanding, but also the integrity of the media. For example, at Ohio State, beyond basic skill training in reporting and editing, students focusing on public-affairs journalism are required to take an introductory course in scientific reasoning. Students can then specialize further by taking advanced courses covering the relationships between science, the media, and society. They are also encouraged to minor in a science-related field.

With training in covering science-related policy disputes on issues ranging from intelligent design to stem-cell research to climate change, journalists are better equipped to make solid independent judgments about credibility, and then pass these interpretations on to readers. The intelligent-design debate is one among a growing number of controversies in which technical complexity, with disputes over “facts,” data, and expertise, has altered the political battleground. The traditional generalist correspondent will be hard-pressed to cover these topics in any other format than the strategy frame, balancing arguments while narrowly focusing on the implications for who’s ahead and who’s behind in the contest to decide policy. If news editors fail to recognize the growing demand for journalists with specialized expertise and backgrounds who can get beyond this form of writing, the news media risk losing their ability to serve as important watchdogs over society’s institutions.

Yes, yes, a thousand times yes — more science education for the reporters! And, while we're at it, more for the people who read the newspapers. And, might I add, scientific reasoning seems like good subject matter for any reporter, at least to the extent that reporters are in the business of collecting facts and testimony, reporting them accurately, and trying to present a reasonable story about what they mean. Every time a reporter goes to a story knowing what the story is going to be before conducting any interviews or collecting any facts, we may be getting an interesting story, but we're not getting news.

There certainly is a political dimension to the evolution vs. intelligent design wrestling match — as there is to other scientific stories. But the politics can't be divorced from questions that are intimately related to what the scientific community is out to accomplish and how scientists set about getting the job done. In this case, you can't just choose which side to stand with; you're also choosing whether you're on the side of sound scientific reasoning. I'm not sure Joe Q. Newspaper-reader gets that this is the choice he's being asked to make.

Sunday, September 04, 2005

Another loss from Katrina.

The attention at this point should still be focused on immediate matters of life and death -- getting people hit by the hurricane and the flooding out of harm's way, getting them water, food, shelter, medical attention, helping them to find their loved ones.

But after all that comes the larger project of rebuilding lives. And among the group of people trying to pick up and go on will be scientists (including post-docs and graduate students) who were doing scientific research at colleges and universities in the stricken areas.

I don't know (nor, I imagine, does anyone else yet) the extent of the loss: experimental apparatus, reagents, stored data, lab notebooks. Hoods and vacuum lines and all that good stuff. Years of work (potentially) gone.

I do not know what kind of delays to our scientific knowledge might result. In the grand scheme of things, we can live with them. I'm more concerned about what will happen if multiple cohorts of science students and early-career researchers have to suddenly start from scratch.

Yes, I know, lots of people will be starting from scratch. People who have been evacuated with nothing will be trying to find work and get re-established. But, arguably, the kind of help most people will need to get back on track will come down to things that are easier to provide -- food, shelter, clothes appropriate for job interviews and work, leads on jobs, etc. For undergraduates, the needs are different (classes to complete a degree, books, room, board, and tuition), but they could be met by any number of colleges and universities stepping up to take on the students displaced by the hurricane. (See, for example, this list, thoughtfully linked by Bitch Ph.D, of schools jumping into the breach and this letter to a university dean and provost.)

But imagine you're an nth year graduate student. You've been struggling in the lab (as graduate students do), and finally, in your (n-1)th year, you've gotten the experimental system to behave and accumulated some really good data. You've turned the corner, not only in terms of making a real (if little) contribution to knowledge in your field, but also in terms of feeling like you could really be a scientist when you grow up.

And now? With the whole lab washed away? Can you believe that you'll be able to get back on track without investing another n years of your life? Is trying to be a scientist still a rational decision?

Let's face it, certain moments in graduate school are soul-crushing enough without Mother Nature screwing with you. I think it would be heartbreaking if a major break in the science pipeline were one of the consequences of Katrina.

Are any of those with university labs in a position to help displaced graduate students and post-docs? Is anyone looking into what sorts of arrangements could be made, and figuring out how, when things have calmed down a bit, to identify the displaced scientists and find out what kinds of help they might need?

Right now, people need food, water, and shelter. But in the long run, we all need scientists. Besides, no one needs extra slings and arrows in graduate school. I'm hopeful that the scientific community will show itself to be a real community here, characterized by its compassion as well as its mad scientific skillz.
