Tuesday, April 26, 2005

I believe the bloggers are the future ...

... let them link and they will lead the way.

OK, I'm back from bashing my head against the wall to get that song out of my head. Back to the matter at hand:

Only hours ago, I was using this space to bemoan the difficulty of getting the public right with science. A big reason I noted was that science is complicated enough that it can be hard to explain clearly. Well, the blogosphere comes through again, yielding a pair of very nice posts that add clarity without dumbing down the scientific issues.

First, check out Majikthise on the scientific status of overweight. Obesity has been grabbing headlines as a big public health crisis. Then, a week or two ago, we got news that being a little overweight might actually be healthier than being of "normal" weight. People celebrated the news over tiramisu, saying to each other, "Those darned scientists! When will they figure this stuff out?"

As you might guess, the story behind the adjustment of the scientific read on what weight is healthy is complicated. The linked story does a beautiful job working through the complications. This is the kind of explanation of science people can use. Sure, you have to read more than a headline, but it might give you something to look at while you're on the stationary bike.

Next, have a look at this post at Philosophy of Biology about why teaching Intelligent Design Theory arguments against evolutionary theory in the high school biology classroom isn't actually a great idea. (Read the comments while you're there.) Michael Sprague lays out a very persuasive case here, making it quite clear what ought to be on the table in a science classroom. He writes:

As a general principle, before you can effectively criticize a theory, you must first learn what the theory is.

The common sense is starkly beautiful. Add to it this comment from Chris:

If you ask me, the "teach the controversy" movement is dangerous not because it threatens justified scientific orthodoxy, but because it threatens the very foundations of science education. Those foundations are not particular theories, empirical findings, etc. They are the basic principles of scientific investigation. They involve teaching the concepts of "theory," "evidence," etc., in ways that are consistent with their actual use in the practice of science.

If we can get the public to the blogosphere, maybe there's hope.

And someone at The Daily Show must have heard my plea for more fake-news coverage of science. Tonight, in coverage of legal doings in Texas, Jon Stewart examined an exchange on CNN where one woman claimed that a study showed children in foster care in Illinois in the homes of gay couples were 11 times more likely to be sexually abused than foster children not in the care of gay couples. Quoth she, "It is a proven fact, and that was a research study done in the State of Illinois." Well, Randall Ellis, an LGBT rights lobbyist involved in the CNN exchange, responded by noting that he was unaware of such a study. (My guess is that, as a lobbyist, he'd be pretty up on the literature so he could respond to it.)

Did Kyra Phillips, the CNN anchor, cut through the spin to get to the truth of whether any credible studies showed the alleged higher rates of abuse? "It's an interesting debate, a good debate. Thank you both very much."

Jon Stewart: "Really? Good debate? Cause it kinda seemed like the one lady was lying. ... Why don't you call them on their bullsh*t on the air? You're an anchor, for f*ck's sake!"

I retract my earlier concerns. I like The Daily Show's science coverage just fine.

Communicating science to the public.

I was planning to write a topical, link-y post (and was amassing a stack of articles to discuss and everything). But, seeing as how a cold has taken up residence in my skull, I feel like crud. And, since I need to "comfort-blog," I've decided I'd much rather take up some of the issues that came up in my class today. (For the record, this class is chicken-free matzoh-ball soup for this cranky academic's soul. Or, you know, whatever it is that keeps wiggling my pineal gland.)

Today we were discussing the challenges of communicating scientific information effectively between scientists and the public. One of the problems that came up is crappy science reporting, even at major news outlets. Not just science reporters who don't get what the scientists are trying to explain to them, but science writers who make up their minds about how the story's going to go, then contact the scientists looking for quotes to support the story. And, of course, ignoring (or spinning) the quotes from scientists that don't support the stories they've already decided to write. And being unresponsive to complaints from the scientists that they've been misrepresented. (I'm not dropping names on this one. But I will say, as I leaf through the paper of record, that one particular author who has got the scientists worked up somehow gets the refrain from a Rupert Holmes hit stuck in my head.)

But honestly? This was more of a shock to my students when I first taught an ancestor of this course in 1999. The mass media seems to have gone to Hell in a handbasket since then, and lots of people today know it. You can't trust what you read anymore, nor even what you see on the network news. If Jon Stewart doesn't cover it, how can we believe it? (Sadly, while The Daily Show has very good coverage of the religion beat, it doesn't have regular science coverage.)

So, if the mass media isn't a reliable conduit for communication of Important Information between scientists and non-scientists, what's left?

Well sure, noted environmental scientist Cameron Diaz has a new show about the environment on MTV. But most of the folks with scientific knowledge to communicate don't have that kind of time (or agent). There are well-established scientists who write for a broader audience (as Stephen Jay Gould did with his columns in Scientific American; unlike Isaac Asimov, Gould hasn't done a lot of posthumous writing). But it's not obvious that most scientists are good at this kind of writing, or that there will be any professional reward for doing so (where by reward, I have in mind something like it not being held against them when they come up for tenure). It's not clear, for that matter, that there's much of an audience for such writing.

And that may be the fundamental problem: maybe the public just doesn't want to know. After all, most people come out of school thinking science is impossibly hard, or boring, or both. (There is, I am sure, a special circle of Hell reserved for the brain trust who has brought this sad state of affairs into being.) People who are not scientists would, most of the time, rather not have to think about science. If they wanted to think about science, maybe they'd have become scientists. How do scientists know what's going on with global climate? Who cares, just tell me if I can still drive my SUV. What's the best way to study the effects of diet on human health? Dude, just tell me whether I should eat carbs or not. Maybe the public wants access to particular bits of scientific knowledge, but the details of how that knowledge was generated are not going to have a big or willing audience.

Indeed, even though the public funds science, a lot of the time the public just doesn't want to hear what the heck is going on in the lab. Maybe we buy science (through our tax dollars) because it's something we think we need (like homeowner's insurance, or an appendix) even though we've never actually made (conscious) use of it. It's almost like we need to know it's there, but we don't want to have to think about it.

Why do we want to know it's there? Maybe because we see science as a generator of objective knowledge, of facts. Some day, those may come in handy. Yes, when the scientists talk amongst themselves, the straightforward facts can get to sounding pretty complicated, but when it gets translated to USA Today, it's all pretty direct.

But how do people in the public actually use these facts? Do they use them at all? People seem inclined to latch onto the headline about the medical or scientific study whose conclusion they agree with (or would prefer to believe). Me? I like the studies that say coffee is good for you. There doesn't seem to be much inclination to root around and find out what most of the scientists think is the case. Maybe it's a side effect of our political discourse, where each opinion seems just as good as the others provided it is delivered in a suitably pleasing and persuasive manner.

But this isn't a healthy diet of scientific information. We're gorging on the chips.

I don't like to be the conspiracy theorist, but there do seem to be quite a few interests that may benefit from the relative ignorance of the American public about matters scientific. Ultimately, I think this ignorance will hurt scientists and the public. I think the scientists will need to work out some reliable ways to get the word out. But I think the public will also need to put some more work into it as well.

More work for everyone! Yeah, you're all pretty psyched. But the alternative is not going to be good for most of us.

Thursday, April 21, 2005

... but maybe the blogging is too much.

Brian Leiter provides an excerpt from an article in the Chronicle of Higher Education about the effect the internet might be having on scholarship in universities.

Short story: maybe not a wholly positive effect.

Here's an excerpt of the excerpt:

David M. Levy, [computer scientist and professor at University of Washington, says] "We're losing touch with the contemplative roots of scholarship, the reflective dimension ... When you think that universities are meant to be in effect the think tanks for the culture, or at least one of the major forms of thinking, that strikes me as a very serious concern."

I am so there. Too much information coupled with too little original thought or analysis and you've got squat. (This is why I actually discourage additional research on certain of the assignments I give.) You have to figure out what's important and then set the other stuff aside. You have to trust your gut on that decision, else you spend all your time combing the mountain of information for the best set of standards for distinguishing the important stuff from the other stuff.

On the other hand, some of us really have an easier time with the contemplation and reflection if we've got others we can bounce ideas off of. The emails and the weblogs and such can actually help with that.

Just one more mean to try to achieve, I guess.

Wednesday, April 20, 2005

Science blogs II: Electric Blogaloo.

Geeky Mom reports many more great issues to think about in terms of scientists and their weblogs. She includes issues relevant to teaching, research, and reading the weblogs of others. Go have a look!

Best. Example. Ever!

This is old, but I'm just finding it now, due to helpful links from Bitch. Ph.D.

Larry Summers (yes, again with Larry Summers) has been widely defended for floating a hypothesis. Scientists are supposed to float hypotheses, right? Karl Popper said the more improbable the hypothesis, the better, since the whole point is to try to falsify them. So, why on earth would all those women scientists (scientists!) be bothered that Summers posed an improbable hypothesis, namely, that innate biological factors account for the greater number of men than women in the higher strata of science?

Well, Sean Carroll offers a lovely example to illustrate some of the bother:

Entertaining hypotheses can, in context, be offensive. Summers' own defense was that he was simply offering an hypothesis, and even hoped that he would be proven wrong. How can innocent scientific inquiry upset people so much? We should be devoted to the truth, right?

Okay, imagine you like to play chess, but the only person you know with a chess set is your friend (let's call him "Larry"), so you have to play with him over and over. You believe that the two of you are evenly matched, so the games should be competitive. Except that, while you are an extremely polite and considerate player, Larry is consistently obnoxious. When it is your turn to move, Larry likes to take out his trumpet and practice scales (he's a terrible trumpet player). Also, he tends to flick the light switch on and off while you are thinking. And he is consistently jiggling the chessboard slightly, so that the pieces are vibrating around. Occasionally, at crucial points during the game, he will poke you in the side with a sharp stick. And more than once, when it looked like you were about to win the game, he would "accidentally" spill his coffee on the board, knocking over the pieces, and declare the game a draw by forfeit.

You put up with this behavior (he does, after all, own the chess set), but you are only able to win about ten percent of the games. Eventually, in frustration, you complain that his behavior is unfair and he should cut it out. "Well," says Larry, "let's entertain the hypothesis that you usually lose because you just aren't as good a chess player as I am. I suggest that you are just a sore loser with inferior cognitive capacity, although I'd love to be wrong about this."

Perhaps he is correct -- but in context, you have every right to slap him. Nobody should be against seeking the truth and exploring different hypotheses. But when systematic biases are widespread and perfectly obvious, and these biases are strongly affecting the representation of a group such as women, people have every right to be offended when the president of the most famous university in the world suggests that discrimination is imaginary, and it's women's own fault that there aren't more female scientists. Of course psychologists and sociologists should continue to do research on all sorts of hypotheses, and perhaps some day we will have a playing field that is sufficiently level that any remaining differences in the numbers of working scientists can be plausibly attributed to innate capacities. But in the meantime, we should be focused on overcoming the ridiculous biases that plague our field, not in pretending that they don't exist.


I'd say that's about right.

By the way, Sean Carroll is a physicist, not a philosopher, so he probably wasn't heavily schooled in rhetoric. Just so you know ...

Job opportunities for philosophers: a data point.

In a guest post at Panda's Thumb, Steven Thomas Smith reports on a debate at Harvard Law School about whether teaching Intelligent Design (at least, as science) in the public schools is constitutional. The ID proponent seemed to have had the rhetorical edge, if not the weight of the scientific evidence. In response to this, Smith notes:

This is not a scientific debate — this is a rhetorical conflict, and the Wedge does a much better job with rhetoric than scientists, who are trained to convince each other with facts and evidence alone.

Because there is no scientific debate about the validity of evolution, or the fatuity of Intelligent Design creationism, scientists must not debate these subjects on the same stage as creationists because they will only serve the creationist rhetorical end of being taken seriously. But that does not mean that scientists and supporters of scientists cannot attend discussions where nonscientific issues are the focus, and employ the same rhetorical methods used by our opponents.


(That bold emphasis? You know it's mine.)

There are two ways to read Smith's worry here. You could decide the best thing to do would be to bring scientists up to speed on rhetoric. But, maybe that would end up undermining the purity of scientific dialogue, where scientists are trying to establish publicly verifiable facts and to resolve disagreements by appeal to those facts. (Yes, I know. I've read scientific papers. I've written grant proposals. Scientists are not innocent of rhetoric. But most of them aren't trial lawyers, either.)

The other option, the one that would be less of a threat to the purity of scientific dialogue? Make friends with some philosophers.

Tuesday, April 19, 2005

Policy decisions and scientific uncertainty.

By way of Crooked Timber, an article by Chris Mooney in Mother Jones about the stuff that comes out of industry-funded think tanks and passes for science.

So, anyone who has listened to the politicians in Washington is aware that there's a controversy about global climate change. Funny thing is, the politicos don't present it as a controversy about how we should respond to global climate change. Rather, they present it as a controversy about whether global climate change is even a real phenomenon. Why is this funny? Because if you talk to the scientists rather than the politicians, there's really no controversy at all.

Of course, as with all scientific hypotheses, there was quite a bit of scientific skepticism early on. That's OK, though. You go out and do some science. Pretty soon, the thinking goes, you'll have more data and you'll be able to figure out whether your hypothesis is a crazy one or whether it holds up.

It's no surprise that, while the "global warming" hypothesis was still wet behind the ears, folks in industries making the stuff suspected of causing global warming were rooting for the data to undermine this hypothesis. If you're ExxonMobil, you want to keep selling oil. And while the scientific jury is still out, you'll argue that the burden of proof should be on those who suspect that the gas you're selling is hurting the environment rather than on you to prove that it's not. No one is shocked that you want the public to bet your way, given the uncertainties; you want to stay in business.

But scientific juries seldom stay out forever. Mooney writes:

Even as industry mobilized the forces of skepticism, however, an international scientific collaboration emerged that would change the terms of the debate forever. In 1988, under the auspices of the United Nations, scientists and government officials inaugurated the Intergovernmental Panel on Climate Change (IPCC), a global scientific body that would eventually pull together thousands of experts to evaluate the issue, becoming the gold standard of climate science. In the IPCC’s first assessment report, published in 1990, the science remained open to reasonable doubt. But the IPCC’s second report, completed in 1995, concluded that amid purely natural factors shaping the climate, humankind’s distinctive fingerprint was evident. And with the release of the IPCC’s third assessment in 2001, a strong consensus had emerged: Notwithstanding some role for natural variability, human-created greenhouse gas emissions could, if left unchecked, ramp up global average temperatures by as much as 5.8 degrees Celsius (or 10.4 degrees Fahrenheit) by the year 2100. “Consensus as strong as the one that has developed around this topic is rare in science,” wrote Science Editor-in-Chief Donald Kennedy in a 2001 editorial.

(Bold emphasis added.)

You want science to make an objective assessment of whether a hypothesis stands up? Get together thousands of scientific experts. From different countries. (Some of these scientists are bound to hate each other!) Let them loose on the data. If they come to something like agreement, you're about as close as you're ever going to get to proving a theory true.

Back to Mooney's article:

Even some leading corporations that had previously supported “skepticism” were converted. Major oil companies like Shell, Texaco, and British Petroleum, as well as automobile manufacturers like Ford, General Motors, and DaimlerChrysler, abandoned the Global Climate Coalition, which itself became inactive after 2002.

Yet some forces of denial—most notably ExxonMobil and the American Petroleum Institute, of which ExxonMobil is a leading member—remained recalcitrant. In 1998, the New York Times exposed an API memo outlining a strategy to invest millions to “maximize the impact of scientific views consistent with ours with Congress, the media and other key audiences.” The document stated: “Victory will be achieved when…recognition of uncertainty becomes part of the ‘conventional wisdom.’” It’s hard to resist a comparison with a famous Brown and Williamson tobacco company memo from the late 1960s, which observed: “Doubt is our product since it is the best means of competing with the ‘body of fact’ that exists in the mind of the general public. It is also the means of establishing a controversy.”


(Again, the bold emphasis is mine.)

OK, here we have the industry-funded think tank at its slimiest. Under the guise of doing good science, or of educating the public, they present as uncertain a scientific hypothesis that has more data and consensus behind it than most good hypotheses will ever get. They go with the "scientists are always skeptical" angle to say, hey, these thousands of scientists could all be wrong! It's not proven 100% so ... there's still a chance it's false! And when in doubt, buy lots of oil!

Of course, the public isn't necessarily quite this stupid. If I show up at your doorstep and hand you a lottery ticket and say, "Hey, there's a non-zero probability that this ticket's a winner!" you know it would not be a good idea to advance me $10,000. (You do know that, right? If not, please email me your street address and a good time for me to swing by.) So, the think tanks have to go a little further to convince the public that there's a live scientific controversy. They have to find themselves some scientists to speak out against the hypothesis.
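The intuition behind the lottery-ticket example is just expected value: a non-zero probability of winning doesn't make the bet worth $10,000. A minimal sketch, with illustrative numbers I've made up (the post doesn't specify a prize or odds):

```python
# Expected value of the doorstep lottery offer.
# All figures below are hypothetical, chosen only to illustrate the point.
ticket_price = 10_000        # the advance I'm asking for
jackpot = 1_000_000          # hypothetical prize if the ticket wins
p_win = 1 / 10_000_000       # "non-zero probability" -- but tiny

expected_value = p_win * jackpot - ticket_price
print(f"Expected value of the deal: ${expected_value:,.2f}")
# Deeply negative: "there's a chance it's a winner" is true and still a terrible bet --
# the same move the think tanks make with "it's not proven 100%."
```

The point carries over directly: "the scientists could all be wrong" names a non-zero probability, but rational betting weighs that probability against everything else.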

This is no mean feat. As Mooney notes, a review of almost 1,000 scientific papers on global warming that were published in the decade from 1993 to 2003 "was unable to find one that explicitly disagreed with the consensus view that humans are contributing to the phenomenon". But you gotta stay the course.

So who do they have critiquing the scientific consensus? Michael "Jurassic Park" Crichton. He has an M.D., which ... qualifies him as an expert on climate science?

Or maybe Steven Milloy, a FoxNews.com columnist and an adjunct scholar at the libertarian Cato Institute (which got $75,000 from ExxonMobil). Mooney doesn't identify Milloy's scientific credentials, though that doesn't necessarily mean he's without them. Right-leaning news outlets, after all, did believe him enough to pick up his attack on the Arctic Climate Impact Assessment (ACIA). Quoth Mooney:

Citing a single graph from a 146-page overview of a 1,200-plus- page, fully referenced report, Milloy claimed that the document “pretty much debunks itself” because high Arctic temperatures “around 1940” suggest that the current temperature spike could be chalked up to natural variability. “In order to take that position,” counters Harvard biological oceanographer James McCarthy, a lead author of the report, “you have to refute what are hundreds of scientific papers that reconstruct various pieces of this climate puzzle.”

There is an interesting pattern that emerges: most of the "scientific" opposition to global climate change is heavily funded by ExxonMobil. Back when these guys were checking their crib notes on science and telling each other, "Skepticism! Awesome, let's go with that!" they managed to overlook "disinterested."

Mooney quotes Robert Hahn, a fellow at the American Enterprise Institute, as saying, “Climate science is a field in which reasonable experts can disagree.” The thing is, when those reasonable experts are also scientists, they tend to look for facts -- honest to goodness data and analyses that can be agreed upon even by the scientists on the other side of the fence -- to back up their positions. Despite the impression the politicians, the pundits, and the industry-funded think tanks may be trying to convey, that's not what's happening in the "debate" over global warming.

Monday, April 18, 2005

Weblogs as a species of scientific communication.

From Geeky Mom, the link to a presentation she gave about blogging science. (The link takes you to a summary. There are additional links you can follow from the summary page.)

Lots of good stuff to think about here: Who's the intended audience for science blogs (other scientists vs. lay people)? What kind of communication are they intended to provide (communicating new results vs. correcting bad science journalism)? How is the back-and-forth of ideas mediated in blogs vs. other arenas? Etc.

The blogosphere is still young, so I suspect many of the issues raised in this talk are still being negotiated by the scientific community. Perhaps, given the communicative potential available here, this would be a good time for the scientific community to examine some of these issues and make some thoughtful decisions.

There's always an upside for somebody ...

Remember when I wrote about conscience clauses and fretted about potential conflicts between personal commitments and professional duties? Well, as usual, I only really looked at one side of the story (although I had essentially set up the other).

See, I was really just thinking about the plight of the people seeking medical care. I totally forgot to take into account that this might be a great thing for folks looking for cushy jobs.

Well, over at Crooked Timber, Belle Waring's got it covered. Quoth she:

Just get certified as a pharmacist, hired at Walgreen’s, and then reveal that you are a Christian Scientist and it is against your religion to dispense any medicine at all. Then just sit back, read chick magazines, and eat expired candy while the money rolls in. “I’d like to fill this prescription for an asthma inhaler?” “Sorry, ma’am, that’s against my religion.” And you can’t get fired! Awesome.

Awesome indeed!

Sunday, April 17, 2005

Science and priorities.

For scientists, doing science is often about trying to satisfy deep curiosity about how various bits of our world work. For society at large, it often seems like science ought to exist primarily to solve particular problems -- or at least, that this is what science ought to be doing, given that our tax dollars are going to support it. It's not a completely crazy idea. Even if tax dollars weren't funding lots of scientific research and the education of scientists (even at private universities), the public might expect scientists to focus their attention on pressing problems, simply because they have the expertise to solve these problems and other members of society don't.

This makes it harder to get the public to care about funding science for which the pay-off is not obviously useful. For example, space exploration. In this article Rick Weiss, a science writer for the Washington Post, bemoans the threats to funding of NASA projects like Voyager (still sending home data from the edge of the solar system). More generally, he expresses concern that "Americans have lost sight of the value of non-applied, curiosity-driven research -- the open-ended sort of exploration that doesn't know exactly where it's going but so often leads to big payoffs." Weiss goes through an impressive list of scientific projects that started off without any practical applications but ended up making possible all manner of useful applications. Limit basic science and you're risking economic growth. Of course, Weiss doesn't want to say the only value in scientific research is in marketable products. Rather, he says, an even more important reason for the public to support research is

Because our understanding of the world and our support of the quest for knowledge for knowledge's sake is a core measure of our success as a civilization. Our grasp, however tentative, of what we are and where we fit in the cosmos should be a source of pride to all of us. Our scientific achievements are a measure of ourselves that our children can honor and build upon.

I confess, that leaves me a little choked up.

But, I don't know ... Scientists have to become the masters of spin to get even their practical research projects funded. Will the scientists also have to take on the task of convincing the public at large that a scientific understanding of ourselves and of the world we live in should be a source of pride? (Do you hear that, Dover, PA?) Will a certain percentage of the scientist's working budget have to go to public relations? ("Knowledge: It's not just for dilettantes any more!") Maybe the message that knowledge for knowledge's sake is a fitting goal for a civilized society is the kind of thing that people would just get as part of their education. Only it's not on the standardized tests, and it seems like that's the only place the public wants to put up money for education any more. Sometimes not even then.

Problem: Scientists value something that the public at large seems not to value. The scientists think the public ought to value it. Meanwhile, the public supports science, but feels like science ought to deliver practical results ASAP. Can this marriage be saved?

Of course, when scientists do tackle real-life problems and develop real-life solutions, it's not like the public is always so good about accepting them. For example, it turns out there's now a vaccine against human papilloma virus (HPV) that is nearly through the approval process. HPV is the leading cause of cervical cancer. (Not a totally harmless virus for men: it causes genital warts.) Add another vaccination to the battery of routine childhood immunizations and HPV is outta there. So, here's a perfect example of science doing precisely what the public wants it to do. Except, politically, there's a little problem:

In the US, for instance, religious groups are gearing up to oppose vaccination, despite a survey showing 80 per cent of parents favour vaccinating their daughters. "Abstinence is the best way to prevent HPV," says Bridget Maher of the Family Research Council, a leading Christian lobby group that has made much of the fact that, because it can spread by skin contact, condoms are not as effective against HPV as they are against other viruses such as HIV.

"Giving the HPV vaccine to young women could be potentially harmful, because they may see it as a licence to engage in premarital sex," Maher claims, though it is arguable how many young women have even heard of the virus.


(If you want to read a spot-on rant about this, hie yourself to Amanda Marcotte's post at Pandagon. She's done the ranting so I don't have to.)

(The scientist scratches her head.) Let me get this straight: Y'all want to cut funding for the basic science because you don't think it will lead to practical applications. But when we do the research to solve what seems like a real problem -- people are dying from cervical cancer -- y'all tell us this is a problem you didn't really want us to solve?

But here, to be fair, it's not everyone who wants to opt out of the science, just a part of the population with a fair bit of political clout at the moment. The central issue here seems to be that our society is made up of a bunch of people (including scientists) with rather different values, which lead to rather different priorities. In thinking about where scientific funding comes from, we talk as though there were a unitary Public with whom the unitary Science transacts business. It might be easier were that really the case. Instead, the scientists get to deal with the writhing mass of contradictory impulses that is the American public. About the only thing that public knows for sure is that it doesn't want to pay more taxes.

How can scientists direct their efforts at satisfying public wants, or addressing public needs, if the public itself can't come to any agreement on what those wants and needs are? If science has to prove to the public that the research dollars are going to the good stuff, will scientists have to, um, stretch things a little in the telling?

Or might it actually be better if the public (or the politicians acting in the public's name) spent less time trying to micro-manage scientists? Maybe it would make sense, if the public decided that having scientists in society was a good thing for society, to let the scientists have some freedom to pursue their own scientific interests, and to make sure they have the funding to do so. I'm not denying that the public has a right to decide where its money goes, but I don't think putting up the money means you get total control. Because if you demand that much control, you may end up having to do the science yourself. Also, once science delivers the knowledge, it seems like the next step is to make that knowledge available. If particular members of the public decide not to avail themselves of that knowledge (because they feel it would be morally wrong, or maybe just silly, as in the case of pet cloning), that is their decision. We shouldn't be making life harder for the scientists for doing what good scientists do.

It's clear that there are forces at work in American culture right now that are not altogether comfortable with all that science has to offer at the moment. Discomfort is a normal part of sharing society with others who don't think just like you do. But hardly anyone thinks it would be a good idea to ship all the scientists off to some place else. We like our headache medicines and our satellite TV and our DSL and our Splenda too much for that.

So hey, for a few moments, can we give the hard-working men and women of science a break and thank them for the knowledge they produce, whether we know what to do with it or not?

Friday, April 15, 2005

Who's in the club, and why does it matter?

For some reason, I was resisting taking the issue up in this weblog, but the furniture I keep bumping into in the blogosphere makes me think I really ought to take it up.

How much does it matter that certain groups (like women) are under-represented in the tribe of science?

I'm not, at the moment, taking up the causes (nor am I looking for any piss-poor "Barry Winters"-style theories as to the causes). At present, the bee in my bonnet is the effects.

And this is not a hypothetical situation. This post over at Thanks for Not Being a Zombie links to an article from the New York Times with some sobering statistics:

Even as the number of women earning Ph.D.'s in science has substantially increased - women now account for 45 percent to 50 percent of the biology doctorates, and 33 percent of those in chemistry - the science and engineering faculties of elite research universities remain overwhelmingly male. And the majority of the women are clustered at the junior faculty rank.

As some of you know, I'm one of those women with a Ph.D. in chemistry who is no longer a chemist.

Meanwhile, in the small world that is the blogosphere, I learned that kmsqrd, someone I know from a completely different context, is another woman scientist/engineer who is planning on "leaking" out of the pipeline.

Many of us have really good reasons for leaving science and engineering. The big question is, what effect does it have on science and engineering, both as professions and as producers of knowledge and technologies, that so many of us leave?

There are some fairly predictable outcomes. For one, it may make the environment for women considering careers in science and engineering a bit less attractive, there not being so many other women. Not that it's necessarily a deal-breaker. Some women don't care that much whether there are other women in their field. (Some women actually enjoy it, I'm told, because it makes them special. Some women like it so much that they'll actively discourage other women from getting into the field. Whatever.) And some women are so driven by the questions that keep them up at night that they find themselves having to pursue them even if these questions are best pursued in a field that is male-dominated. This is not to say one won't be lonely while pursuing them. It can be really hard to be in a field where you don't have too many connections to people who understand you in particular ways. (Dude, analytic philosophy is pretty male-dominated. And, even if I were a man, the fact that none of my colleagues have little kids, as I do, would still make me feel somewhat isolated.)

It seems that increasing the number of women scientists and engineers, especially at the senior level, would exert positive feedback, leading to a further increase in the number of women entering science and engineering. Perhaps each woman who leaves makes it just a bit harder for the next woman to break into the field.
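The feedback idea can be made concrete with a toy model. To be clear, this is purely illustrative -- the function, the parameter values, and the linear feedback term are all my own assumptions, not fitted to any data -- but it shows how a field's senior-level composition could settle at a low equilibrium when attrition outpaces the visibility effect.

```python
# Toy positive-feedback model of the "pipeline" (illustrative only).
# Assumption: women's share of each incoming cohort rises with the
# share of women visible at the senior level, and is reduced by an
# attrition gap (women leaving at a higher rate than men).

def next_senior_fraction(senior_frac, base_entry=0.2, feedback=0.5,
                         attrition_gap=0.1):
    """Advance the model one 'generation' of the pipeline.

    base_entry:    share of women entering with no senior role models
    feedback:      boost to entry per unit of senior-level visibility
    attrition_gap: extra share of women (relative to men) who leave
                   before reaching senior rank
    """
    entry = min(1.0, base_entry + feedback * senior_frac)
    return max(0.0, entry - attrition_gap)

frac = 0.05  # hypothetical starting share of women at senior rank
for generation in range(10):
    frac = next_senior_fraction(frac)
print(f"senior share after 10 generations: {frac:.2f}")
```

In this sketch the steady-state share works out to (base_entry - attrition_gap) / (1 - feedback), so every bump in the attrition gap drags the long-run share down -- the "each woman who leaves makes it harder for the next" effect, in miniature.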

(But, it's not like we've left in disgrace -- we haven't flunked out or been fired. We've balanced our interests and responsibilities -- all of them -- to make the best decisions we can. If anything, we stand as evidence that women do have the aptitude and talent to take on science and engineering. But aptitude and talent for X is not the same as desire to do X for the rest of one's professional life.)

A big question is the extent to which the direction of knowledge and technology production is affected by the relatively low numbers of women scientists and engineers. It's hard to know what the answer is here. But, we can make some guesses. (Male birth control pill? Still waiting. Viagra? You betcha.)

An even harder question is whether the tribe of science might come to different conclusions when faced with the same world if that tribe is a men's club rather than a group of men and women in equal number. The standard story has been that there's just one scientific method, and that anyone who's read the manual can apply it, rather mechanically, to get the same answer as anyone else applying the same methodology. This would mean, of course, that it wouldn't actually affect the kind of knowledge you produced if it happened that there were only men doing science. But anyone who has actually done science knows that there's a lot more interpretation that goes into figuring out just what it is you know. What pushes us toward the interpretations we come to? Who the heck knows? It seems like controlling for all sorts of potential influences on our interpretations (such as being male, or being female) might be a sensible part of setting up a good experiment.

So why, oh why, in proposed "Academic Bills of Rights," are the sciences given a special status? (Brian Leiter has an interesting discussion of the proposal before the Florida legislature.)

Florida HB 837 Section 1004.09 (1) says:

"Students have a right to expect a learning environment in which they will have access to a broad range of serious scholarly opinion pertaining to the subject they study. In the humanities, the social sciences, and the arts, the fostering of a plurality of serious scholarly methodologies and perspectives should be a significant institutional purpose."

(Bold emphasis added.)

This makes it sound like there is exactly one "serious" scholarly methodology and perspective in the sciences. And if that's the case, it really shouldn't worry us what the composition is of the community applying it.

So, who cares if there aren't more women in science and engineering?

I don't know about you, but I'm not quite ready to stop worrying about this. (At the same time, I'm not exactly ready to take my lab coat out of mothballs. Another indication that I am a bum? You be the judge!)

Thursday, April 14, 2005

We send Canada mad hamburgers, they save our asses.

At least, this story in CounterPunch sure makes it look that way.

OK, first, the part you've probably heard (and can read about, thanks to the Cincinnati Enquirer, here, or administer aurally, thanks to All Things Considered, here):

Flu test kits were sent out to labs all over the world by a company that makes flu test kits. Flu test kits come with samples of flu viruses of various sorts, which one uses for comparison when characterizing the influenza strains current flu sufferers present with.

What is less common (one assumes from the media attention) is for flu test kits to come with samples of flu strains that have killed millions of people.

This makes a certain amount of sense. You don't want your lab technicians, in a moment of distraction, to make a mistake that will lead to the release of a virus that could start a deadly flu pandemic.

The linked NPR story presents the inclusion of the H2N2 strain of the influenza A virus (which henceforth I'll refer to as "death-flu") in these test kits as an accident. The Cincinnati Enquirer story, however, says it wasn't. From the story:

[CDC director Julie] Gerberding said the inclusion of the virus in Meridian test kits sent to 3,747 labs does not appear to have been accidental.

"They made the decision to include that particular influenza isolate," she said. "We don't have any details as to how or why that decision was made. That will be something we'll be exploring as we move forward."

Expanding on that point later, Gerberding said, "I'm sure it wasn't an inadvertent use because it's impossible to believe they did not know they were dealing with H2N2 ... Our information right now is that Meridian created these proficiency test products knowing that the H2N2 viruses were in them."


If it wasn't an accident that the company included this death-flu, I hear you ask, what on Earth were they thinking?

Well, the test kits were made to meet the specifications of the College of American Pathologists. Quoting again from the Cincinnati Enquirer article (I'm adding the bold emphasis):

Dr. Jared Schwartz, secretary-treasurer of the pathologists organization, said the organization's arrangement with Meridian called for the provision of an influenza A strain with a biosafety level 2 rating from the CDC. The H2N2 virus carries that rating. The CDC said it will conduct an "expedited review" to raise it to a level 3.

"The vendor ... looked in their freezer and found an influenza virus they thought was appropriate," Schwartz said, "and even though they knew it was an H2N2, they felt it was a safe virus because they felt it had been attenuated (or diluted)."

Schwartz said the pathologists did not ask for an H2N2 virus in its flu test kits and will be more specific as to virus and pathogen subtypes in future orders. He said he expects to continue doing business with Meridian.


So technically, Meridian delivered the product it was supposed to (a test kit including an influenza A strain with a biosafety level 2 rating). It didn't need to be a sample of death-flu, but that's what they had on hand. And, the CAP didn't say that it couldn't be death-flu ...

"When do we get to Canada in this story?" I hear you ask. Right about now.

So, nearly 4000 labs receive these test kits. They expect that the test kit includes an influenza A strain with a biosafety level 2 rating. But, they have no reason to think that it's death-flu. And they might have gone on thinking the test kit contained a relatively benign level 2 bug had it not been for the vigilance of a lab technician in Winnipeg, Manitoba on March 25. Note that these test kits had been out and about for about six months when the Canadian lab tech recognized the death-flu in the test kit. So, it would seem, American labs either don't do this kind of quality control check, or they just hadn't gotten around to it yet.

The CounterPunch article linked above discusses (OK, rants about) the implications of this whole situation for US funding priorities, especially in terms of regulatory agencies and the health care system more generally.

At a time like this, I happily recall that when I took the "Which Canadian province are you?" quiz, I was Manitoba.

Wednesday, April 13, 2005

Speaking of holding back information ...

One of my Canadian informants alerted me to this tale of whistle-blowing. It seems that, possibly, the U.S. Department of Agriculture might be really anxious to get cattle and beef moving across the U.S.-Canada border again.

Lester Friedlander, now a consumer advocate, was fired from his job as head of inspections at a large meat-packing plant in Philadelphia in 1995 after criticizing what he called unsafe practices.

Mr. Friedlander said U.S. Department of Agriculture veterinarians sent suspect cow brains to private laboratories, which confirmed they were infected with mad cow disease. Samples from the same animals, however, were cleared by government labs.


If Friedlander is telling the truth, this is a Very Big Deal. At the very least, this would be a case of withholding potentially relevant information. ("Here's your beef. Our lab says it's fine, but this other lab says maybe Mad Cow, so ... maybe you'll want to be careful, eh? You guys still say 'eh', right?") At worst, it would mean USDA labs were ... making up the results they wanted? Guys, this is veterinary biomedicine, not economics!

And for sure, the government has an interest in trade. But perhaps the government's scientists could keep their focus on the matters of scientific fact and let the other bits of the bureaucracy (which, I keep hearing, are legion) attend to the other stuff?

Hacking to be helpful.

This afternoon on the drive home I heard this story about John Hering, USC undergrad and entrepreneur, and a vulnerability in Bluetooth. It seems young John has developed the "BlueSniper rifle" which can hack into Bluetooth-enabled cell phones and PDAs from more than a mile away. Of course, he didn't make the BlueSniper rifle with the goal of actually hacking into people's wireless devices to steal their information, because ... why alert the press to that? (Unless, of course, he were as dumb as some plagiarists I have known.) Rather, the point was to demonstrate that Bluetooth has this vulnerability so people using Bluetooth technology can take adequate precautions.

The Bluetooth folks, perhaps not surprisingly, are reputed not to have been totally forthcoming about potential vulnerabilities with their product. Here's what NPR gives us from their side:

The industry's Bluetooth Special Interest Group says it takes security "very seriously." In a statement, the group says that "so far no security holes have been discovered in the Bluetooth specification itself. Vulnerabilities that have come to light either exploit the Bluetooth link as a conduit, much like the Internet to the PC, or are a result of the implementation of Bluetooth technology within the device -- as such, we constantly work with our members to assist in implementing Bluetooth technology more effectively." Security flaws that are revealed "are typically solved by new software builds and upgrades," it says.

Someone more hip to the lingo than I (Julie?) can probably decode this. I'm reading it as, "Dude, don't blame us."

So here's the thing: Is John Hering something like a whistle-blower here? He's not working for Bluetooth, but he is sharing information with the public -- information that he thinks the public needs to know to protect themselves -- that, arguably, Bluetooth is not providing. Conceivably, Bluetooth lawyers could come after Hering, so there is some risk involved in publicizing this knowledge. (Of course, there may also be profit -- Hering has a company whose business is exposing security vulnerabilities, so it must be possible to make a buck doing so, right?)

Yet, a part of me (the part thinking about the cell phone in my pocket, no doubt) was thinking, "There's a guy on the radio telling people how to hack into my cell phone!" It was precisely the same feeling I had, while a grad student, when the school paper ran a piece wherein Campus Police outlined E-Z procedures for defeating a U-lock. On the one hand, I suppose it was good not to have a false sense of security about the lock securing my bike to an immovable object. On the other hand, suddenly a bunch of yahoos who were not already in the know knew how to steal my bike!

If the consumer has real alternatives to Bluetooth technology that are not so vulnerable, then calling Bluetooth out is probably a good thing. If Bluetooth needs to have its vulnerabilities aired in the national media before the company will step up and fix them, then calling Bluetooth out is probably a good thing. But what if putting this information out there leads to evildoers exploiting the vulnerability before Bluetooth fixes it or the consumer has time to switch over to the more secure technology? Is there any way the information can be used for good without being available for evil in a case like this?

Edited to add: Go to the comments, where Julie has added a link to her very helpful discussion of the ethical terrain here -- especially the distinction between the stuff that is the responsibility of Bluetooth SIG and the stuff that is the responsibility of the implementers at the cell phone company. Thanks, Julie!

The government and the scientists.

I'm on the wrong coast to attend this talk in DC tonight, but it promises to be interesting. Chris Mooney is a science writer, and the talk is titled “Abuses of Science in Politics and Journalism.” So, I imagine there are two main threads he'll take up:

  1. The ways appointments to "science panels" in various bits of the government bureaucracy have, it is reported by scientists, taken a turn for the political. Specifically, it has been said that scientists have been queried about who they voted for rather than about, say, their scientific qualifications. Given that scientists are pretty committed to the idea that what they're doing in their scientific capacity ought to be guided by sound scientific methodology and empirical facts rather than by anyone's political agenda, you can see how the scientific community might not like this development. (While some political operatives have essentially responded, "Dude, the party in power gets to appoint who it wants," it does make you wonder why you would then label the panels resulting from such appointments scientific panels rather than "panels of our political allies who happen to be scientists.")
  2. The ways public controversies over such things as global warming and intelligent design theory are presented in the popular media. In particular, a number of scientists feel that certain public controversies have been presented as if they were raging scientific controversies, even when the scientific community has come as close as it ever does to reaching consensus. Why does this happen? Who in journalism school is telling the science writers of tomorrow that you must present both sides of each of these scientific controversies as if they had equal scientific standing? How on earth does this give the lay person good information -- information that they can trust when participating in public dialogue about important policies -- about what the scientists know?


Yes, as it turns out, I have a strong opinion or two about some of this stuff.

I don't think these issues about the relation between science and the government, and about the presentation of science to the public by journalists, are just about scientists maintaining relative autonomy while feeding at the public trough. There really is a concern that the government's efforts to control science to suit a particular political agenda will undermine the integrity of the scientific enterprise that labors under this control. Similarly, there is a worry that the false impression the public gets of science and the state of scientific knowledge due to shoddy science journalism will make the public more willing to let the government direct science in overtly political ways. To the extent that the public actually depends on good scientific knowledge from sound scientific research, this will be a Bad Thing for nearly everyone. (Maybe not so much for the politicians in the short term, but I hold out the naive hope that their genetically-modified, bird-flu-laden chickens will come home to roost.)

I suspect a lot of scientists are actually kind of burned out on the extent to which PR plays a role in their being able to do their research, especially given the sorry state of public discourse lately. I think this is part of the explanation for the scientific boycott of hearings on intelligent design theory in Kansas. The Kansas State Board of Education is wrestling over the teaching of evolution and "alternatives" in biology classrooms. Again. (Dear Kansas BOE, Could you please stop recycling agendas from years past? It makes it very hard for doddering academics like me to keep track of what year it is. Love, Doctor Free-Ride)

Ahem.

Anyway, there's a hearing scheduled with a raft of supporters of IDT slated to testify. There are some scientists, but none has published any scientific research to support IDT. The scientific mainstream -- including the scientists from Kansas's six major universities -- has declined to participate, because they don't see this as a real scientific debate. Rather, it seems pretty clearly to be driven by the religious agenda of certain school board members, and the scientists don't really have the time for a religious debate. That is not what scientists do, at least not in their professional capacity.

It will be very interesting to see what the press makes of this supposed scientific hearing with no scientists. My cynical side worries that it will be spun as the scientists lying low because they know this evolution stuff is on shaky ground. I hope the science writers bother to interview the scientists opting out of the show trial.

But I've survived a lot of shoddy science journalism of late, so I am not holding my breath.

Tuesday, April 12, 2005

Another famous scientist ...

... turns out to be a jerk.

Yes, Jonas Salk developed a vaccine against polio. Yes, it was even a significant innovation, as it was a "killed virus" vaccine rather than a "live virus" vaccine. (Given the spate of cases of polio in the 1980s -- contracted by parents changing the diapers of their babies who received the "live virus" vaccine -- I really am happy that Salk fought to get the go ahead to develop and test his "killed virus" vaccine.)

But, as with most scientific achievements of this sort, Salk didn't do it on his own. He relied on the scientific labors and talents of many others. None of whom, as it turns out, he bothered to mention in the formal announcement that the vaccine was a success. This story from Morning Edition has an interview with one of the overlooked scientific collaborators (who is much more hurt than resentful).

Kinda makes me want to dig Salk up and punch him in the nose.

(I wouldn't actually exhume the body of a dead scientist to punch him in the nose. However, if I were to run into James Watson in a dark alley ...)

Monday, April 11, 2005

Compounding a problem but good!

Let's say you've got a big, multimillion dollar, global research program. In the interests of solving your scientific problem as soon as possible (not to mention making pharmaceutical companies, who could benefit quite a bit from the success of your research program, very happy), you pretty much ignore safety considerations raised by your division's chief of human research protection. (You ignore some concerns about the science, too, because what do those safety types know?) How do you go the extra mile for your research program?

Perhaps by sexually harassing some of the medical and compliance officers?

These guys are pros; don't try this at home!

Questionable content?!

Have you ever tried to post a comment on a blog and had it rejected because of "questionable content"?

I have, just now. There was no swearing. The grammar and spelling were even passable.

Grrr!

Edited to add: You can look at the comment I tried to post in one of my other blogs. (Hush!) It's in the entry for April 12 titled "Weblogs, professional development, and the academic food-chain." Xanga, of course, doesn't make it easy to put in a permalink to a specific entry, so you may have to hunt.

Conscience versus professional duties.

There has been a fair bit of media coverage lately of conscience clauses for physicians and, more recently, pharmacists. The idea of such clauses is that one ought not to be forced, as part of one's professional duties, to participate in an act one objects to on moral grounds. In other words, under a conscience clause, a physician with a moral objection would not have to perform abortions. A pharmacist who had a moral objection to contraception would not have to fill prescriptions for contraceptives. A nurse who has a moral objection to choosing not to prolong life by any available medical means could ignore a Do-Not-Resuscitate order that a patient has entered herself.

The recent generation of conscience clauses goes even further. They allow health care professionals to refuse to refer patients to professionals who might offer the services the patients are seeking.

Obviously, we've got a tug-of-war here between the moral convictions of the health care professionals and the moral convictions of the patients.

One claim that has been made in support of these conscience clauses is that patients can always find someone willing to provide a legal procedure or prescription. There is some question, though, of how easily they can find that someone, how far they will have to travel, how long the wait will be, and what it will cost. Is it ethically permissible that health care professionals withhold knowledge as well as service?

It may be that a more profound holding back of knowledge is happening at the level of education of new health care professionals. The word is that fewer and fewer medical students have access to training in controversial procedures (like abortions) -- whether or not the students themselves have any moral objections to these procedures.

A few big ethical issues are tangled together here. One is what the relationship between health care professional and patient/client ought to be. Does the doctor or pharmacist have a responsibility for the well-being of the patient/client? Is this responsibility for the medical well-being, or the moral well-being as well? What duties does the health care professional have to respect the moral values of the patient/client seeking care?

Is it unjustifiably paternalistic for the health care professional not only to withhold service but also to withhold information? (Wouldn't it be more ethical to provide information on how to locate other providers, even if this information were accompanied by an explanation of the first health care professional's objection?)

Is this a case where specialized knowledge and training really do bring with them a duty towards the people who depend on the services that can only be provided by those with such knowledge and training?

Are certain moral views fundamentally incompatible with becoming an ethical health care professional? (I'm not just thinking of extreme cases -- hedonistic cannibals ought not become surgeons. Should Christian Scientists become pharmacists? Should Jehovah's Witnesses become phlebotomists? Should people utterly opposed to abortion and contraception of any kind, under any circumstances, become OB/GYNs?)

It might be interesting to see what the professional codes of ethics for various health care professions say about these issues. (The exercise is left for the reader ...)

Friday, April 08, 2005

Bextra - costs and benefits.

In an interview on All Things Considered today, a physician who specializes in pain treatment argued that maybe the pain doctors have been drowned out by the cardiologists in the decisions on Bextra and Vioxx and the other NSAIDs.

His point was something like this: It's true that there are real cardiovascular (and other) risks associated with taking these drugs, especially at high doses and for a long time. However, it's not clear that the FDA really considered the benefit of these drugs to people with severe pain, or the harms done to these patients if their severe pain is left untreated. The doctor argued that, with the drugs that are left now that Bextra and Vioxx have been pulled, there will be patients whose pain cannot be adequately treated. And even the over-the-counter NSAIDs that remain (like ibuprofen and naproxen) have non-negligible side effects (especially at doses at which they'd make any kind of dent in real pain).

So maybe the FDA is not aware of all the costs and benefits that ought to be balanced? Or maybe the FDA is being paternalistic here? (Or is my brother the law student gonna explain to me how this all hinges on reducing someone's legal liability?)

The numbers are a little too good ...

In the project to build a new Bay Bridge, there are more problems. Beyond the allegations that a bunch of the welds are defective (as if that weren't problem enough), there are concerns about the safety record for workers on the bridge. Specifically, the worry is that the safety record is too good. For the number of people working on a project like this, statistically, there should have been more injuries than there were. Because, you know, accidents happen. When many fewer accidents than you'd expect are reported, the agency to which you're reporting them gets ... suspicious. It's not impossible that you could have so few injuries. Maybe you've got a really lucky work site. But maybe ... you're underreporting the injuries.

And why would you underreport injuries? Could it be because you've actually incurred more injuries than you ought to (because your work site is not as safe as it should be)?
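Here's roughly the kind of sanity check that makes a reporting agency suspicious. This is a sketch with made-up numbers -- the expected and reported injury counts below are hypothetical, not figures from the Bay Bridge project -- but the reasoning is standard: if injuries happen more or less independently, the count over a fixed period is approximately Poisson-distributed, and you can ask how likely a count this low would be by chance.

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for a Poisson-distributed count with mean lam."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i)
               for i in range(k + 1))

# Hypothetical numbers: industry injury rates for a project of this
# size and duration suggest ~40 expected recordable injuries, but
# only 15 were reported.
expected = 40
reported = 15

p = poisson_cdf(reported, expected)
print(f"P(seeing {reported} or fewer injuries by chance) = {p:.2e}")
```

A probability that small doesn't prove underreporting -- maybe it really is a lucky work site -- but it's exactly the sort of number that makes an agency start asking harder questions.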

I find it intriguing that falsifiers are often like comic book bad guys -- they get caught because of overreaching. They get greedy. In making up the numbers, they paint too rosy a picture of the reality they're making up.

And those meddling kids always manage to figure it out!

Thursday, April 07, 2005

Current events -- again!

**The FDA has asked Pfizer to pull Bextra off the market. Bextra is a COX-2 inhibitor, like Vioxx (remember Vioxx?). The problem, according to the FDA, is that Bextra has a raft of serious side effects (like Vioxx), including cardiovascular problems, stroke, and "a potentially life-threatening skin condition called Stevens-Johnson syndrome, an allergic reaction that usually begins as a blistering of the mouth and lips and can spread to the rest of the body." And, for all the risks, it has no added advantages as a painkiller.

I don't know for sure that the research that brought Bextra to market tested it against a placebo instead of against other well-characterized painkillers ... but it wouldn't surprise me.

**Stephen Johnson, President Bush's choice to head the EPA, may have his nomination blocked owing to ethical concerns about an EPA study he oversaw about children and pesticides. While the study has been suspended, it has not been cancelled. The idea of the study was to identify families who used pesticides and monitor the children of these families to see if the family pesticide use was having any effects on the kids. The problem? In recruiting families for the study -- mostly low-income families -- the EPA did not make it sufficiently clear that they were only looking to study families already using pesticides in the home. Plus, they offered about $900 and a free camcorder for participation. ("Hey honey, if we spray for roaches we can get some cash!") And, in recruiting families with babies as young as 3 months, it's not clear that the researchers provided potential participants with the information that is known about potential effects of pesticide exposure in babies and children.

Free and fully informed consent, anyone?

Oh, and it happens that the American Chemistry Council (an industry group) provided funding for the study in question. Not that that necessarily means there would be a conflict of interest ... but it could color the kind of information about pesticides being offered to potential participants in the study.

And speaking of conflicts of interest ...

**Due at least in part to the threatened departure of some prominent scientists, the NIH may relax its rules on conflicts of interest. The current rules prevent NIH administrators and scientists from holding lucrative consulting contracts with drug companies. They also require that NIH administrators and scientists divest themselves of stocks in biomedical companies and put real restrictions on consulting and other "work for hire" that academic scientists frequently do for extra income. But concerns have been raised over whether these strict rules will make it difficult to attract and retain talented researchers at the NIH. Further, the argument has been advanced that these protections against outside influence are more important for administrators than for research scientists. (Research scientists are pure of heart, after all.)

Now, I'm the last person who's going to tell you that researchers working for the government (or for public universities) are well-compensated. And the government does not have the buckets of money sitting around that is did in rosier economic times. But might there not still be reason to believe that consulting contracts with, or stock holdings in, private drug companies could have some influence on what the scientists see in their data?

Stay tuned for more updates from the world out there ...

Wednesday, April 06, 2005

A practical question.

Let's say you've worked very hard on a project. You've been part of the organizing from the outset. You've done a lot of thinking and writing and rewriting. You've worked hard to build consensus. You've done loads of personal outreach to try to build a community around the project (including "cold-emailing" people you don't know personally). You've been the dependable facilitator. You've laminated a bloody sign.

You are the first to acknowledge that the project that you've been working very hard on is a local implementation of someone else's broader project. Because, after all, one should give credit where credit is due.

Let's further suppose you are a lowly assistant professor trying to build your tenure case. Your colleagues in your department have an inkling of what kind of effort you've put into this project (which will appear under the heading of "service" in your list of accomplishments).

So, you open the paper and, in a story about a professor from another department receiving a service award -- one you are very much inclined to feel this professor deserves, mind you -- you discover a quote from an administrator crediting the professor who has won the award with "creating" this project.

Who did the what now?

You struggle with the sense of "create" that could be appropriate here. Certainly, award winning professor didn't create the idea ex nihilo since some other guy did. (And wrote a book about it!) Award winning professor did raise the idea of trying a local implementation of this project, in an ongoing online discussion that award winning professor can legitimately claim credit for. But, while award winning professor has been a font of encouragement and moral support, award winning professor has not been involved in the torturous details of getting the project to actually come off.

Let us be clear that we have no reason to believe award winning professor is trying to seize credit for this project. Instead, we have a high-profile administrator heaping this credit on award winning professor in a periodical whose headlines are sometimes misspelled. So, there is a non-zero probability that the administrator was misquoted. All the same ...

So here's the practical question: what ought you to do? Arguably, your significant contribution is being overlooked. And, a claim that the achievement really belongs to someone else is being publicized to (among others) people outside your department who will judge your tenure case on College and University Retention, Tenure, and Promotion Committees. If they believe what they've read in the paper (and, with no glaring misspellings in the article, why wouldn't they?), it is entirely possible that your claim that this project constitutes real service on your part will be regarded as puffery or worse.

At the same time, recall that you're a lowly assistant professor. It's not like you can be all "'Fraid not!" with regard to the high profile administrator's quote.

Is there any way to set the record straight without stepping on a political landmine? Or do you have to just let it go?

(It's interesting, I think, how this scenario illuminates the trickiness of power relations in academe. As a lowly assistant professor, your contributions often don't get the notice of high profile administrators or other powerful people of note. Instead, there's an assumption that really good things that happen are due to the known galaxy of powerful people. Yet, the lowly must somehow jump up and down to get their contributions recognized, else they don't get to stay in the club long enough to become powerful people of note themselves.)

This has more to do than you think it does with the issue of authorship, to be taken up in class tomorrow. Stay tuned!

Monday, April 04, 2005

Are there too many Ph.D. programs (and if so, is this ethically problematic)?

I know for prospective grad students, applying to schools and hoping to get a slot, this seems like a ludicrous question. However, to recent Ph.D.s looking for jobs (especially academic jobs) the problem is real.

An entry at In Favor of Thinking takes up this issue quite eloquently:

... the ethics of this profession are crap. Realistically, half of the PhD programs ought to shut their doors -- there's no point in churning out so many PhDs (a particular problem in English) when there aren't enough jobs.

(Yes, this is a professor in the humanities writing, but as I've noted before, a similar problem exists in the sciences. In my second year of graduate study in chemistry, I discovered, from Chemistry & Engineering News, that there were about 30% more chemistry Ph.D.s than the market could employ. I discovered this while I was in school to earn a Ph.D. in chemistry. Did this leave me feeling warmly toward my graduate program? It did not.)

Anyway, playing devil's advocate, I responded:

I have a bit of concern on the market-argument for fewer Ph.D. programs ... It assumes that everyone who gets a Ph.D. plans to use it to profess, and that becoming a professor is really the only sane or sensible thing to do with a Ph.D. Even in the humanities (hell, even in *philosophy*), that's not the case. That said, it wouldn't kill graduate programs to give their students more accurate projections of their academic job prospects!

The estimable New Kid on the Hallway added the following response:

I wanted to respond quickly to Janet's comment about the market/PhD program issue - personally, I'm pretty cynical, but I tend to think the only reason anyone should enter a PhD program (at least as PhD programs stand) is if they do want to go on to become an academic, defined as a professor. Being a professor is about the only job I know of that you HAVE to have a PhD to do. I could also agree with someone who knows for sure that they DON'T want to profess, who has deep personal reasons for learning more, entering a PhD program, but I think that as they stand, such a person stands a deep risk of getting sucked into the academic (tenure-track) rat race, or of being looked down upon (and not supported) for not wanting to be part of that rat race. My personal feeling is that if you love a subject and want to continue learning, there are LOTS of other ways to do so than entering a PhD program. So I would go along with limiting the enrollment of PhD programs because I don't actually think getting a PhD benefits anyone much unless they actually do want to profess.

This is based on PhD programs as they stand, mind you, not what they could and probably should be. I guess I think it would be easier to restrict enrollments and for those with interest in something to find other ways to learn about it, than it would be to change the structure/attitude of PhD programs so that they would give students more realistic ideas about employment options. Because the people running grad programs are the ones perpetuating/benefiting by the kind of hierarchy that Mel describes here.


I think this is an interesting response -- and interestingly different from Shrader-Frechette's take on what one ought to do with a science Ph.D. The argument here hinges on what one ought to do to fulfill one's duties to oneself. Basically, doing a Ph.D. might put you in a position where you are limiting your choices and placing yourself under significant duress that may be bad for you. (It sounds strange ... but not so much to someone who has been part of a graduate school cohort going on the market when jobs are scarce and your self-worth hinges on getting an academic job.)

For the record, here was my reply to New Kid:

I completely agree with you that, as things stand now, most Ph.D. programs (especially in the humanities) are set up to groom more rats for the academic rat race, and to look at you as a very odd duck indeed if you make noises about doing anything else with your Ph.D. Because, you know, doesn't *everyone* want to be a professor? So, yes, the undertow toward an academic job is pretty strong, especially for those who haven't, in the mad dash to finish writing up their dissertation, paused to reflect on what it is they might *really* want to do.

But I'm inclined to think that reflecting on what you really want to do is something you ought to be doing at regular intervals (especially given that it's a moving target for some of us). A lot of people just can't tell, until they're in the belly of the beast (or of their graduate department, anyway) whether they'd want to profess. Undergraduates don't see enough of what it's really like to be able to assess the option fully. And, given the sort of personal growth (or trauma) a lot of people seem to go through during a graduate program, I'm not sure most undergraduates have enough data about *themselves* to know whether professing would be a suitable career. This is not to say that there aren't people who could, given 5 minutes' honest reflection, figure out that they should be doing something else without sinking a few years of their lives into a doctoral program. But for a lot of people, kind of "trying it on" is the only way to make a sensible decision.

The next question becomes what one should do, say, after 3 or 4 years of "trying it on" in a Ph.D. program and getting the strong impression that joining the professoriate would be a bad move. Do you walk away and cut your losses? Maybe, but I know some people who, in this position, said "Dammit, I'm seeing this project through!" If nothing else, the Ph.D. was a personal victory, tangible evidence of triumph over adversity. That has a certain kind of value.

As far as the pursuing knowledge for personal growth angle, I'm not sure it's always so easy to really pursuit a subject on your own in the same way without a Ph.D. program. Certainly this is the case in scientific fields. As a college senior, I decided to go to grad school in chemistry because I could certainly pursuit philosophy on my own. Well, four years later, I was quite certain I didn't want to be a chemist, and I was applying to grad school in philosophy because I craved the sort of intellectual community one hardly ever finds outside a graduate program. Granted, there was no philosophy-rich blogosphere back then, but I still think a flesh-and-blood community of fellow thinkers coming together in graduate seminars and over beers gives you a different kind of experience.

Perhaps my experience in grad school was odd, given that a large number of my classmates (in both disciplines I pursued) were fairly certain at the outset that they'd be doing something else with their Ph.D.s besides professing. I worry a little that restricting Ph.D. enrollments might have meant that I wouldn't have encountered some of these folks in my programs, and that this might have made it easier for me to get sucked into the mindset that a Ph.D. can or should only be used to become a professor.


Maybe Ph.D. programs in the sciences are a different matter ... but maybe not.

Sunday, April 03, 2005

Dem Bones

An upcoming piece of legislation about what counts, legally, as Native American remains may have serious implications for archaeological research in the U.S.

The previous legislation on this subject has asserted that existing tribes have a claim to the remains of their members, and that archaeologists have to turn over the remains they find to the appropriate tribe rather than keeping them for study. This seems pretty sensible; most people wouldn't be that psyched to let archaeologists exhume Great Aunt Agnes for the sake of scientific knowledge. The modified legislation, however, will apply to much older remains which aren't obviously the remains of an existing tribe -- the presumption will be that they belong to some tribe rather than being legitimate objects of scientific study. Indeed, some suggest the new version is actually part of a creationist plot to destroy evidence.

So, where does one draw the line here between respect for the dead and the value of scientific study of such remains?

Friday, April 01, 2005

Rights versus Bargains.

We haven't talked about many specifics of patents or intellectual property yet, but as the recording industry goes to court against the P2P successors of Napster, I found this discussion interesting. Not often that we think of individual rights being granted (not recognized) for the express purpose of furthering the good of society.

(Of course, reading Vandana Shiva's chapters will put this aspect of patents in historical context. Stay tuned!)