Monday, October 31, 2005

Lies that "don't matter"? (Van Parijs follow-up)

The conduct of fired MIT biology professor Luk Van Parijs, as reconstructed in the investigation of his work of the last eight years or so, gets curiouser and curiouser. From the October 29th Boston Globe, a follow-up story by Bombardieri and Cook tells us that problems have surfaced not only with the research Van Parijs did at MIT, but also with papers he authored about research he did at Brigham and Women's Hospital while a graduate student. But the twist here is that it's not entirely clear how his fraud in these cases would have helped him. From the Globe article:
The new revelation deepens the mystery about a rising star who was popular with students and colleagues and appeared to be a gifted biologist. In both of the new cases, it appears that Van Parijs said he had done work that he had not done, work that would have been a small part of the overall experiment.

In one case, the data in question would not have affected the conclusion, said Dr. Abul Abbas, who directed the Brigham laboratory where Van Parijs worked and was the senior author on both papers. For the second paper, the questionable data may have affected the outcome, Abbas said.

So, it seems we have a guy fabricating or falsifying data that might not even change the conclusion of the papers for which these "data" were created.

Why?

I can think of a couple of plausible explanations here. One might be that he felt he needed more data to wave around to strengthen the impact of the actually good data he collected. (Replication is good, and more is better.) Another is that possibly some of the "good" data wasn't all that good either, but it was more convincingly faked. This might not be that crazy an idea. Suspicions about Van Parijs's work from his Brigham and Women's years are tied to some plots that look more similar than they should:
In the two papers, Van Parijs was investigating the function of T cells, which are part of the immune system. Van Parijs ran samples of the cells through a device known as a flow cytometer, which sorts the cells by the characteristics in which the scientists are interested. This produces plots, essentially diagrams with large numbers of dots, with each dot representing a cell.

In both papers, there are plots that appear to be almost identical, even though the paper says they are sets of cells from different mice. Using only one mouse would have saved time. The plots are not exact copies, though, which Abbas told the Globe has made him more concerned, because if the data are fraudulent, it implies they were done intentionally. Changing data or inventing it is considered a very serious offense, regardless of the effect the act has on the conclusions made in a research paper, scientists said.

(Bold emphasis added.)
This is the kind of fakery that was bound to be caught -- with a little reflection, Van Parijs could surely have turned out a better faked plot.
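To see why near-duplicate plots are such a red flag, it helps to remember that a flow cytometry plot is just a set of (x, y) points, one per cell. Two samples from genuinely different mice should look statistically similar but never match point for point. Here's a minimal, hypothetical sketch (not anything from the actual investigation) of how one could quantify that: count how many points in one plot have an almost-exact twin in the other. The function name, tolerances, and simulated data are all my own invention for illustration.

```python
# Hypothetical illustration: a "plot" is a list of (x, y) points, one per cell.
# Independent biological samples should share almost no exact point locations;
# a copied-and-lightly-tweaked plot will match its source nearly point for point.

import random

def fraction_of_near_twins(plot_a, plot_b, tol=1e-6):
    """Fraction of points in plot_a with an almost-exact match in plot_b."""
    matches = 0
    for (xa, ya) in plot_a:
        if any(abs(xa - xb) < tol and abs(ya - yb) < tol for (xb, yb) in plot_b):
            matches += 1
    return matches / len(plot_a)

rng = random.Random(0)
# Two independent simulated samples ("different mice"):
mouse_1 = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(200)]
mouse_2 = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(200)]
# A copy of mouse_1 with tiny perturbations -- the "not exact copies" scenario:
copied = [(x + rng.uniform(-1e-8, 1e-8), y) for (x, y) in mouse_1]

print(fraction_of_near_twins(mouse_1, copied))   # near 1.0: suspicious
print(fraction_of_near_twins(mouse_1, mouse_2))  # near 0.0: what real data look like
```

The point of the sketch is just the asymmetry: real replicates resemble each other in their overall distribution, not in their individual dots, which is exactly why "almost identical but not exact copies" reads as deliberate alteration rather than coincidence.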

The other possibility here, which seems very weird, is that the fabrication and falsification were not done with the intention of producing "better" results, nor of affecting the reported results at all. But this would make Van Parijs ... a scientist who is lying to other scientists just because he can? Is this the scientific equivalent of torturing cats before moving on to your first murder of a human?

Perhaps. Again from the Globe article:
It is not unusual to see cases of fraud involving data that are tangential to the main point of a research paper, as is alleged in some of Van Parijs's work, according to C.K. Gunsalus, a special counsel at the University of Illinois at Urbana-Champaign and a specialist on research integrity.

''It is very common, and there is also a common defense, which is 'I have a PhD and I wouldn't have done something so stupid,' " said Gunsalus. Often, she said, this defense is successful. She also said that it was common to see a pattern of escalation, with small infractions building over time to larger ones.

(Again, the bold emphasis is mine.)

Do scientists who lie about insignificant things lose their taste for gathering real data, or do they get a taste for putting one over on other scientists? Either way, it seems clear that, in a field that is all about figuring out how things really work, telling lies is a Very Bad Thing. At this point, I'd imagine, citing a paper on which Van Parijs is an author would add about nothing, in terms of evidential support, to anyone else's serious scientific work -- this despite the fact that Van Parijs's postdoctoral advisor, David Baltimore, told the Globe that "he knows from work that his lab has done following up on Van Parijs's research that a lot of what he did is, in fact, verifiable." (That "knowledge" rests on the assumption, of course, that members of the lab are doing legitimate experiments and analyses of these ... because surely Van Parijs was the only one who would ever dare to do otherwise.)

By the way, C.K. Gunsalus is the authority to consult on scientific integrity (and lack thereof) in university research settings. Despite some quite reasonable worries people have expressed (like YoungFemaleScientist in this excellent post) about folks accused of scientific misconduct being ruined forever even if the charges turn out to be baseless, Gunsalus has argued that more frequently the lack of real penalties allows the cheats to stay in the system and cheat again. There's a fairly high recidivism rate on cheating in science, according to Gunsalus; it's hardly ever the case that someone is caught for misconduct without having a history of similar deeds. (And that seems to be how the Van Parijs case is shaping up.) And what's the message to the rest of the scientific community if someone is caught fabricating and falsifying data, but is only given a slap on the wrist because it didn't affect the conclusions (or maybe it did, but other labs have "validated" the results)? The message is that lying isn't really a big deal.

Do you see now why some of us get worked up about dishonesty that seems insignificant in the grand scheme of things?

Gunsalus has a downloadable offprint that bundles two of her best articles: "How to Blow the Whistle and Still Have a Career Afterwards" and "Preventing the Need for Whistleblowing: Practical Advice for University Administrators." Both are beautifully written and full of practical advice. If you're a scientist or a science student (or a university administrator), you need to read them!


Sunday, October 30, 2005

Aw Mom, scientific misconduct again?!

Can you believe there's another story in the news about a scientist caught fabricating and falsifying data? Also, the sky is blue.

This time, the miscreant scientist is Luk Van Parijs, who was an associate professor of biology at MIT until they fired his ass. His offenses include "fabricating data in a published scientific paper, in unpublished manuscripts, and in grant applications"; apparently, he also admitted to falsifying data. Gareth Cook and Marcella Bombardieri write about the incident for the Boston Globe.

Why does this kind of thing keep happening? Maybe if we knew, we could put a stop to it. In the meantime, here are my thoughts about the various players in this affair. (All quotes are from the above-linked Boston Globe article unless noted otherwise.)

The other researchers in Van Parijs's lab at MIT
The investigation began in August 2004, when a group of researchers in Van Parijs's laboratory brought their concerns to university administrators.

Well done, other researchers! The Globe article doesn't give details about who these researchers were. They may have included postdocs, graduate students, technicians, and visiting scholars. In any case, it is a good bet that they were lower on the foodchain than Van Parijs.
[Alice] Gast [associate provost and vice president for research at MIT] praised the scientists who made the initial allegations, saying that the university depends on all of its members to defend the integrity of research, even if it means the awkwardness of challenging friends and colleagues.

Keep in mind, if the other researchers really were lower on the foodchain than Van Parijs, that we're talking about something significantly more awkward than challenging a friend and colleague — we're talking about challenging the boss. You know, the guy whose letter of recommendation will make a big difference to your future as a scientist. The guy who needs to sign off on your dissertation. The guy who pays your salary.

Were there conversations in which these other researchers raised their concerns with Van Parijs before they brought their concerns to university administrators? The article doesn't say, but it's hard to imagine that there weren't. Maybe the questions were oblique. But you want to be reassured that the research projects of which you are a part are on the level. Really, there are so many messy consequences that flow from your boss being a cheat that any other theory with evidence to support it is preferable. Only when you're sure that he's really crossed to the other side do you get the administrators involved.

Standing up for the integrity of one's own research, and of scientific research as a whole? That's a good thing. Mad props to the whistle-blowers.

MIT
MIT said Van Parijs quickly admitted fabricating data, as well as falsifying data, which means changing it in a misleading way. The confidential investigation was conducted by MIT scientists whom Gast declined to name. Van Parijs was placed on paid administrative leave in September 2004, and did not have access to his lab, she said...

Gast said that MIT is working with his coauthors on retracting published errors and that all of Van Parijs's colleagues have been very cooperative. The university is also preparing a report to the US Office of Research Integrity, an agency of the Department of Health and Human Services that investigates scientific misconduct involving federal funds, so that it can perform its own investigation. She said the university immediately stopped the spending on his grants when the problem was discovered, and would work with the government to determine what money needed to be returned.

MIT did well in taking the concerns about Van Parijs's work seriously. They started investigating the allegations quickly. They stopped spending the grant money, recognizing that the funders of the research thought they were funding the collection of actual data rather than fabrications. They are involved in correcting the problems in the published scientific literature stemming from Van Parijs's misconduct. MIT gets that science demands a high standard of integrity, and they don't want the MIT brand associated with behavior unbecoming a scientist.

At the beginning of the investigation, they put Van Parijs on paid administrative leave. After more than a year of getting the facts, they were comfortable enough in their assessment of his deeds (and his character) to fire him. Even if Van Parijs had been tenured (he was not), MIT would have been able to remove him "with cause". Scientific misconduct is a pretty clear cause for removal.

Kudos to MIT for finding the facts and taking appropriate action to remove a bad actor from its faculty. Beyond cleaning its own house, MIT acquits itself well by its "ongoing work to correct the scientific record". Not mentioned in the story is how MIT handled the dissolution of the Van Parijs lab; I hope efforts were made to find reasonable positions for the postdocs, grad students, and technicians displaced by this dissolution.

Luk Van Parijs
It goes without saying that a scientist ought not to fabricate or falsify data. Fabrication and falsification suck. These deeds are varieties of deception. Deceiving the people reading your papers in the journals, or the people reading your grant applications and deciding whether to fund your research, is crappy. And, fabrication and falsification suck even more when done in papers on which you have coauthors. You're dragging good scientists down with you. Even when you've been taken out of the game, they still have to worry about corrections, retractions, and the lasting impact on their reputations.

But wait, what's this?
In an e-mail sent to the Globe last night from his MIT account, Van Parijs said, ''I was shocked at the timing and manner in which MIT made the announcement since I had cooperated with the investigation to the fullest of my capabilities."

Excuse me? Dude, you were caught. Indeed, when confronted, you confessed your wrongdoing. MIT had to fire you. It's not like anyone was going to be able to trust any of your data again. You knew what you were doing was wrong ... and you still did it.

You cooperated? Great. But that doesn't mean MIT can, or should, keep your bad acts a secret. They can't protect your reputation without simultaneously hiding the fact that you put crap into the pool of scientific knowledge. They are not going to keep things hush-hush so you can, maybe, get a job in science someplace else and not have to work under a cloud of suspicion because of your prior bad acts.

Get a clue. You're not a bewildered first year grad student who doesn't know the rules of science. (Indeed, it's quite possible that your grad students were the ones with the moral compass and the courage to rat you out.) You're an associate professor in your mid-30s. If you don't know how to be an ethical scientist by now, is there any good reason to think you'll pick it up at some point in the future?

Van Parijs's mentors
He worked in the lab of Dr. Abul Abbas at Brigham and Women's Hospital for three or four years, until about 1997, according to Abbas, now chairman of the pathology department at the University of California, San Francisco. Abbas said he did not see any indication at the time that Van Parijs might falsify data. He said he is talking with people at Brigham and Women's Hospital to decide whether to investigate the work Van Parijs did while in Abbas's lab.

One imagines that if Abbas had seen any indication that Van Parijs, or any other graduate student under his supervision, might falsify data, he would have grabbed him by the shoulders and given him a good shaking. A good research advisor wants you to learn how to do good research. Learning how to obtain clean, reliable data is a lot harder than learning how to make up data that suits your hypothesis.

[H]is career had been highly promising, including being hired as a postdoctoral fellow in the lab of famed Nobel laureate David Baltimore.

''He was a very personally attractive, excited, and thoughtful guy who cared about a wide range of science," Baltimore, now president of the California Institute of Technology, said in an interview yesterday. ''When I first heard there was a question about his work, it came as a very great surprise to me."

Dr. Baltimore, we meet again. Why is it that you are always surprised when someone raises a question about a collaborator's work? Is it that you have an optimistic view about your collaborators? Or do you maintain a certain ... distance from the day-to-day details of what they're up to? I'm not going to call it willful ignorance, but maybe you should consider collaborating more closely when you undertake collaborations.

The broader culture of science
By no means do I want to say that circumstances made Van Parijs cheat. He owns this mess. But, I have to wonder just a little if, given some changes in the culture in which he was working, we'd have fewer cases like this.
''He got a job and finished his postdoc training much faster than average people," said Xiao-Feng Qin, who worked on the same laboratory bench as Van Parijs at Caltech and is now an assistant professor in immunology at MD Anderson Cancer Center in Houston. ''I think people thought he was a golden boy, because he finished so fast."

You have to get results, and publish them, to stay in the game of science. Doing more, and doing it faster, gives you an edge over the competition for the scarce resources of jobs and grants. Getting really amazing results helps too.

But getting good results, in the real world, takes time. Doing things right sometimes means doing things over. Doing things right often means your hypotheses don't pan out. Writing up clear articles that present your findings accurately and explain what they do, and do not, mean takes time too.

Valorizing high output at great speed over careful research may be a problem. At the very least, we ought not simply to marvel at the golden boy's productivity; we ought to look at his output very carefully to make sure it holds up. Science isn't supposed to be about an impressive biography or being "personally attractive" and fun to talk with. We're supposed to be trading in real results!

Will the community of science learn the lessons from the good and bad actors in this case, take them to heart, and nip future misconduct in the bud? It would be nice to think so. But it wouldn't surprise me if we've got another case like this six months from now.

"Prove me wrong, kids. Prove me wrong!"


Saturday, October 29, 2005

Protecting the meaning

Science, by its nature, is an activity that has communication built into it. It's a big world, so scientists need to split up the job of figuring out what's going on with it, and they need to report their findings back to the team. Moreover, the sharing of information between scientists is a way for the individual scientist to be sure what she's observing is a real phenomenon rather than an equipment malfunction or a figment of her imagination. And, of course, scientists pass on information to non-scientists, whether practical information ("Hey, you might want to cook that at a higher temperature if you want to eat it and not get violently ill.") or more esoteric information ("Here are the cool things we could learn by accelerating particles and smashing them into each other; can we have some money?").

A perennial source of frustration for scientists who go to the trouble of telling lay people what they've found is that lay people manage to misunderstand the scientists so frequently. Certain science journalists have been known to make the problem worse by quoting scientists out of context or by playing "balanced reporting" games that don't accurately reflect the center of gravity of scientific opinion. In some ways, this seems like a risk inherent in any kind of communication: whatever you say (or write) can be misinterpreted by someone. Short of spending all your time trying to straighten out the folks who are confused or just not sharing any information in the first place, what are you going to do?

If you're the National Academy of Sciences or the National Science Teachers Association, you're going to do something that makes Dr. Free-Ride very, very proud.

You see, NAS and NSTA have published science education standards. They went to great lengths to make them good, and to put them into words that communicate clearly what students ought to understand about what science knows and how science knows it. In writing and publishing these standards, these organizations clearly hoped that they would be put to use in designing quality science classes.

They did not hope that their standards would be taken and modified in such a way as to remove or alter key parts of what NAS and NSTA were trying to communicate about scientific methodology and scientific knowledge. According to NAS and NSTA, that's precisely what happened when the Kansas State Board of Education took the two sets of standards and substantially altered them to create the state's science education standards. And, rather than let the Kansas State Board of Education misuse and misrepresent these carefully crafted standards, NAS and NSTA have decided to withhold copyright permission.

To really grasp the righteousness of this decision, it's worth looking at a chunk of the Kansas Science Education Standards, and the NAS response to it. I've taken both from the NAS review of the Kansas Science Education Standards, 14 pages of downloadable shock and awe.

First, from the statement on the development of the Kansas standards:
The 1998-2001 science standards committee was able to build upon and benefited from a great deal of prior work done on a national level; the National Science Education Standards published by the National Research Council; Benchmarks for Science Literacy from Project 2061 of the American Association for the Advancement of Science (AAAS); and Pathways to the Science Standards, published by the National Science Teachers Association (NSTA). This allowed the foundation for the Kansas Science Education Standards (2001) to be based on research and on the work of over 18,000 scientists, science educators, teachers, school administrators and parents across the country that produced national standards as well as the school district teams and thousands of individuals who contributed to the benchmarks.

Now, the response of Barbara Schaal and Jay Labov, the NAS reviewers:
This statement suggests that the Kansas Standards are based in large part on these three documents from the NRC, AAAS, and NSTA. However, all three documents are clear about the central role of evolution to the life and physical sciences. Because of the changes that a minority of members of the Kansas State Board of Education made to those state standards in removing aspects of biological and physical evolution and related topics, all three organizations denied copyright permission to the Kansas Board in 1999 (see http://www4.nationalacademies.org/news.nsf/isbn/s09231999?OpenDocument). When the composition of the State Board of Education changed in 2000 and these areas of science were returned to the Kansas Science Standards, our three organizations jointly issued a statement praising this action of the Board (see http://www.nasonline.org/site/PageServer?pagename=NEWS_statement_president_02142001_BA_science_education).

In other words: If you're going to put together standards that fundamentally mislead about the state of scientific knowledge and the proper methodology of science, you'll have to do it without trying to anchor your standards in the authority of NAS and NSTA.

And indeed, it is clear, from the point-by-point analysis presented in the 14-page review of the standards, that this is exactly what the Kansas State Board of Education was trying to do. Download it, read it, and marvel.

No one is stopping the folks in Kansas from crafting their own science education standards. But slipping their own alterations into the published standards of NAS and NSTA indicates that they thought the authority of NAS and NSTA on matters scientific was worth borrowing. Indeed, in order to protect that very authority, NAS and NSTA essentially called shenanigans on the Kansas State Board of Education. And that's a use of copyrights I can really get behind.

(Hat tip to Jack Krebs at Panda's Thumb, whose post brought this matter to my attention.)


Thursday, October 27, 2005

An appeal to my readers

Yeah, I'm talkin' to you!

My department has become the Source of All Required Ethics Training for a number of departments, programs, and colleges at my university, and this seems to flow more in some cases from external pressures (e.g., what the accrediting agency or funding agency requires) than from a deep respect for the value of a philosophical grounding in ethics. Of course, I have blogged about this a little.

But now, in a setting where you all are not pressing my department for an ethics course that will achieve some necessary end for you, I would like to ask for your honest opinions:

What kind of ethics training does a scientist really need?

If you are a scientist (or scientist-in-training), what pieces of information are most useful in your day-to-day scientific activities from the point of view of being a responsible scientist? Is it an ethical theory? A piece of policy? A rule of thumb for deciding how to go forward in a tricky situation? Where or how did you learn it? Are there aspects of being a "responsible" scientist that you wish you had learned more about, and if so, what are they?

Do you think being a responsible scientist in your particular field puts special requirements on you, or creates particular challenges? (I'm interested in a broad swath of "science" here -- including experimental and theoretical sciences, natural sciences, physical sciences, social sciences, and computer science and mathematics.)

Another way to cast the question is: If you (or a student in your field) had to take a class in ethics, what would you put in that class to make it maximally relevant?

Or: What do you want all the scientists in your field to understand about what it means to be a responsible scientist?

If you are a non-scientist, I'm guessing you're here because you have some interest in how science interacts with the other stuff going on in our world (such as policy decisions, education, etc.). Let me put a related question to you all:

What do you want scientists to know about ethics/how to be a responsible scientist? You can, of course, answer based on your favorite dystopian vision of what happens to everyone else if scientists don't have or don't use this crucial ethical information.

You can respond in the comments, or by email to me (dr.freeride AT gmail DOT com). If you could identify yourself as a scientist (with your field) or a non-scientist, that would be helpful.

Thanks for your feedback!

Tuesday, October 25, 2005

How's the inventory in the (scientific) marketplace of ideas?

Scientists, as much as any other people of reason, have an interest in creating a bustling marketplace of ideas. So, you might suppose that their attitude toward scientific theories, and even toward the wild ideas that aren't yet theories but suggest new directions, would be "the more, the better." And, you might be especially puzzled by the reluctance of scientists to support the "teach the controversy" position on the continuing Intelligent Design versus evolutionary biology contretemps.

Certainly, this is a view you might come to from the testimony of sociologist Steve Fuller in the Dover case. Here's how it was reported in the York Daily Record:
Fuller said intelligent design is a scientific theory that should be taught in school.

But during cross-examination, he said intelligent design — the idea that the complexity of life requires a designer — is "too young" to have developed rigorous testable formulas and sits on the fringe of science.

He suggested that perhaps scientists should have an "affirmative action" plan to help emerging ideas compete against the "dominant paradigms" of mainstream science.

The pool of peer reviewers is smaller than it has been because, as scientific research gets more and more specialized, there are fewer people in that specialty and even fewer of them are willing to peer review pieces, Fuller said. Consequently, grant money also goes to fewer researchers, he said.

"People don't want to judge the validity of a scientific theory based on who is talking about it and promoting it." ...

Fuller told the court that one of the problems of science is with the very definition of "scientific theory," which is the idea of well substantiated explanations that unify a broad range of observations. He said by requiring a theory to be "well substantiated," it makes it almost impossible for an idea to be accepted scientifically.


Let us grant that scientists can be creatures of habit, and that they can be suspicious of new ideas, just like anyone else. Let us also grant that there are ways in which actually-executed peer review departs from the ideal. And, let us grant that new ideas (especially the wacky-looking ones) may require some time before the scientific community can get a good read on whether they will amount to anything.

Does all this mean that there ought to be affirmative action in the Halls of Science for "emerging" and/or seemingly crackpot ideas?

First off, even if the answer to this question is "yes", it's not at all clear that a high school science classroom (most of whose denizens will not grow up to be scientists) would be the best place for this. The point of science pedagogy at this level is to convey something about our current understanding of various facets of the physical and natural world and, more importantly, about the methodology used to come to this understanding. Alternative paradigms probably pack more punch in later science classes (I'd say past the introductory level of college instruction or, even better, in one's first foray into research as an undergraduate). You need to really understand the dominant paradigm to be able to appreciate alternatives to it. And, at least when I was a student, undergraduates and graduate students had enough of a taste for iconoclasm that they could be counted on to give most of the alternative paradigms on offer a good looking over. They might do this while the boss's eyes were averted — but this just reinforces my sense that the newish scientists are much less in the thrall of authority or tradition than Fuller seems to be suggesting.

Indeed, given the lore within different scientific disciplines about scientists who made great discoveries (and won prizes and stuff for doing so) by questioning the conventional wisdom of their field, you better believe there is a cachet associated with doing research at the paradigm's edge. It may not be the project you're working on that is the easiest to fund. It may not be a line of research that colleagues in your department think the most of (since they're busy doing research on other edges of the paradigm). But you bloody well keep looking to see if there's anything to it, because you would hate for some other young pup to be the one to convince the community of science that this alternative paradigm is the real deal.

And, while there may be instances where a more serious look at an alternative paradigm is called for, scientists will require at minimum that the alternative paradigm points the way to actual research that could be done to test it or solve live scientific problems with it. The requirement is not, despite what Fuller seems to suggest, that the new paradigm have all the answers. But, it has to fit with enough of the evidence and show real promise of solving outstanding problems that there's a sensible way to use it to guide scientific research. An idea that can't do that isn't ready for this particular marketplace of ideas.

Ed Brayton, also responding to Fuller's testimony, puts it rather well:
Fuller is of course correct to point out that there have been scientific revolutions in the past that have overturned much of what we thought we knew about the world. But those revolutions were the result of scientists actually doing science - building theories and models, testing them against the data, publishing the results for their peers to see and arguing over the results to reach a consensus - not by hiring PR firms and lobbying legislatures and school boards.


Finally, as for ID:

  • Not a new idea

  • Not especially helpful in guiding experimental research (else where are all the experimental reports of research guided by it?)

  • Both potentially iconoclastic and politically popular (at least among governmental folks and people with private think tanks that have lots of money to give to researchers) -- thus, there's no reason to think mavericks wouldn't be pursuing it if they felt like it.


So, I'm not really feeling the need for ID affirmative action.

Of course you want scientists to be open minded. But, isn't that what all this scientist-on-scientist career competition is supposed to ensure? (If not, could we maybe ease up on the tenure-track-red-in-tooth-and-claw business?)


Technorati tags: ,

Monday, October 24, 2005

Talking down, or keeping things real?

At The Panda's Thumb, guest contributor Joe Meert has a post about the Geological Society of America meeting and, specifically, about what was said there about Intelligent Design.

I'm always happy to see what the scientists are talking about when they get together, and since I don't have much personal contact with geologists these days, it was useful to see what's on the geologists' minds. Given the importance of geology as a source of evidence for past biological goings on, they have, as you might imagine, rather strong reactions to efforts to impose ID in science classrooms.

But, there is a piece of the meeting (as reported by Meert) that has been gnawing at me a little. It's the strategy suggested by Don Wise. Of Wise's talk, Meert says:

He also noted that we should take our cues from politics. We live in an age of sound bites and using words like “incompetent design” can be more effective than trying to explain in scientific detail why it’s bad science. Wise encourages geologists to take lessons from politics; (1) don’t be defensive (2) keep your points simple and easy to remember (3) use humor to make your points (4) aim your points at the voters.

(Bold emphasis added.)

Now, I don't want to quibble with the claim that a more political strategy might be more effective than the standard scientific approach of looking at evidential support, testability, logical consistency and the like. Given the endumbening effects of years of TV and standardized tests, punchy catchphrases and slanderous mambos may be the most effective way to turn public opinion.

But ... part of what's groovy about science is that it isn't a popularity contest. It's about grabbing onto empirical data, building theories that hold up despite repeated attempts to knock them over, and coming up with an account of the world and its many phenomena that reflects something about how the world really is rather than just how we want it to be.

Now, you can participate in this kind of serious scientific debate and still have a sense of humor. (And if you can't, PZ and Orac can. So can most of the science teachers I've had since the tenth grade.) But my fear is not that bringing the funny will undercut the devastating logic of the scientific arguments. Rather, my fear is that bringing the funny may come to replace the devastating logic of the scientific arguments.

And that would be a problem.

The reason to talk to a scientist, rather than a witty political pundit, a funny pastor, or a stand-up comic, is to get the word on what we know from science and, more importantly, how we know it. Without the scientific explanations, all we have is a bunch of folks trying to win you over with their sparkling rhetoric. With the scientific explanations, there is something like a rational basis to believe the rhetoric or decide it's full of crap. I know it's terribly old fashioned, but I like to make up my mind about things based on reason rather than punchy delivery (or, for that matter, political power). Part of what wins people over to science is that power, while not entirely absent, is greatly modulated by the fact that every scientist in the community is supposed to be accountable to the same world, and to the community of other scientists trying to figure out that world. Losing that reality-based character of the scientific enterprise would be a big mistake. Indeed, leaving the reality-based character of the scientific enterprise (not just the claim to have the backing of reality, but the demonstration of how the story has to fit with reality in specific ways or be abandoned) out of interactions with lay people would be a mistake, too. Otherwise, it really does devolve into a "we say, they say."

(For the record, I think some of the strategies Don Wise outlines in the abstract of his talk really involve engaging in scientific arguments rather than eschewing them for surface flash. But I don't want political considerations in these frustrating times to end up sucking all the goodness out of scientific interactions with non-scientists. So let's be careful out there!)


Technorati tags: , ,

Friday, October 21, 2005

Do you want them to learn it in the schoolyard?

I just got back from talking with an outside evaluator about the federally funded training grant project at my university that tries to get more of our students to graduate school in science. The evaluator is here not at the behest of the funding agency, but rather at the request of the science professor here who oversees the program. Because, you know, he wants to know how good a job we're doing at what we think we're doing, so we can make improvements if needed, and he figured an outside guy who has evaluated other such programs could give us some good insight here.

Let me pause to note that the folks who are this serious about making their efforts successful are a big part of why I love working here.

Anyway, I was on the agenda because I teach the ethics course the federal funding agency requires of students supported by training grants like this one. (That is to say, they require some course on research ethics; I developed the particular course they're taking.) We had an interesting discussion, the evaluator and I, about the genesis of the course, the enrollment, the syllabus, and such. And, in the course of this discussion, we arrived at one of the nagging worries I have about courses like this:

It is possible that learning ethics (even ethics-for-scientists) from a class in a philosophy department will have less of an impact on science students than learning ethics from their science professors would have.

Part of this, I'm afraid, is the curse of the required class. Kids hate required classes, even if what they're required to take has potential value for their lives down the road. Anyone who has taught a class with prerequisites has probably had experience with students who took a class they were required to take and promptly forgot almost all of it ... because check the transcript, you did that class. That shouldn't mean you need to waste valuable brain real estate remembering anything you learned.

And seriously, a class from a philosopher? What the heck does that have to do with learning to be a good scientist? (Set aside the fact that a science professor approached me to develop this course, and that one of the science departments here decided, without consulting me about it, to make this class a requirement for their majors, with at least one other science department thinking about following suit.)

In my case, I actually have some ammunition by way of my misspent scientific youth. Y'all are looking at going to grad school in a science? Been there, done that, wrote the dissertation, got a Ph.D. But, there are other schools where the science majors take their ethics from philosophy departments and the philosophers can't necessarily throw down so effectively.

There's still the worry that, if you put all the discussions of ethics in a one semester course over in some other department, you convey the distinct impression that: (1) thinking about these issues for a semester is sufficient, and/or (2) no one in your home department can teach you what you need to know about ethics, and/or (3) grown-up scientists don't actually need to pay attention to ethics. Obviously, I think all of these are misapprehensions. Indeed, the science professors I've talked to here are really good at highlighting responsible conduct of research (RCR) issues in all kinds of contexts. These professors understand that RCR is still relevant in their work, and they even seem to talk to each other about the best ways to conduct their research rather than letting things blow up and calling in the ethicists in Haz-Mat suits.

It has also been pointed out to me that a bunch of science professors would actually enjoy coming to the ethics class but they don't for fear that it would stifle the class discussions. Given how discussions in this class tend to be (brutally honest, with lots of critical examination of how things are done in real labs -- some good, and some bad), it's probably true that the presence of an authority figure from a science department would change the dynamic. So I guess this is a real advantage of offering the course in the philosophy department: students from different scientific disciplines get to discuss their experiences among different branches of the tribe of science on neutral turf.

Still, I can't help thinking it would be better if there were some kind of forum for discussions of ethics back in the students' home departments -- a lunch group, a once a month seminar or group-meeting type thing, something. This would help the students understand that their professors really care about this stuff, too. And, it would give the faculty a regular channel for talking together about RCR issues -- because people seem to do better with ethical decisions when they can chew them over with a group.

I imagine I'll keep thinking of ways to optimize this. If my history here is any guide, the science professors I'll be looking to for help implementing my harebrained schemes will be receptive.

They provide a nice contrast to the chair of another department here, who showed up, frantic, at our department the other day. Their department failed to get accreditation because the course (in our department) they had been using as an ethics course didn't satisfy the accrediting agency. So, they wanted us (of course) to whip out a specialized ethics course that would satisfy the accrediting agency. Immediately. In trying to impress upon our department chair just how badly they needed us to solve this problem for them, the chair of the other department exclaimed, "We don't know anything about ethics!"

Sugar-dumpling, that's what scares me ...

It does seem to reinforce the notion that ethics instruction is valuable wherever you can find it. But is it worth $13K? You be the judge!

My blog is worth $13,548.96.
How much is your blog worth?

Technorati tags: ,

Wednesday, October 19, 2005

Tangled Bank #39 has arrived

The 39th edition of The Tangled Bank is being hosted by The Questionable Authority. Go read about science and be enriched.

Tuesday, October 18, 2005

National Chemistry Week: women in chemistry?

On Day 3 of National Chemistry Week 2005, I thought I'd poke gingerly at that perennial blogosphere question: where are the women?

In 2005, more of them are in chemistry departments than there were a decade or two ago. While some (who I won't link directly, because puh-leez!) would have you believe that women can't hack hard sciences like chemistry (and thus can't write hard SF), there are loads of women who manifestly can because they do. Some even have quantum lectures that have cracked the top 100 podcasts. (A woman chemist who blogs is a "where are the women" twofer, right?)

But if we want to talk numbers, we're not looking at anything close to parity. From the data the American Chemical Society has compiled, in 1989 women earned 38.7% of the U.S. bachelors degrees granted in chemistry and 25.7% of the Ph.D.s. (My entering class of more than 50 in the fall of 1989 was roughly a quarter female, which seems to jibe with these stats.) In 1999, women were up to 45.5% of the bachelors and 29.7% of the Ph.D.s. In other words, even though it feels to me like there are significantly more chicks with Ph.D.s in chemistry than when I was in school, it's not actually that big an increase. The ACS Women Chemists 2000 report says that about 43% of the women with chemistry Ph.D.s work in academia, with proportionately more teaching at schools granting AAs, BSs and MSs than at schools that grant Ph.D.s or at medical schools. There are more male chemistry professors with tenure than female, but the male-to-female ratio is closer to even among the younger professors (assistant and associate level) than among the older ones.
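The arithmetic behind that "not actually that big an increase" claim is easy to check. Here's a minimal sketch in Python, using only the ACS percentages quoted above (the script itself is just illustrative, not anything from the ACS):

```python
# Percentage of U.S. chemistry degrees earned by women,
# per the ACS figures quoted above.
share = {
    "bachelors": {1989: 38.7, 1999: 45.5},
    "Ph.D.": {1989: 25.7, 1999: 29.7},
}

for level, by_year in share.items():
    # Change over the decade, in percentage points.
    change = by_year[1999] - by_year[1989]
    print(f"{level}: {by_year[1989]}% (1989) -> {by_year[1999]}% (1999), "
          f"a gain of {change:.1f} percentage points")
```

That works out to gains of 6.8 and 4.0 percentage points over the decade -- well under a point per year -- which is the sense in which the increase is real but modest.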

It looks like things are improving. On the down side, the academic job market is relatively tight, and there's more of an expectation that one will line up a string of postdocs and/or visiting assistant professorships between graduate school and the first "real" academic appointment. This is hard on everyone, male or female. But if you want to do something crazy like have kids, it can make it much harder not to exit the pipeline. In chemistry, this isn't just a matter of temperament. There are a great many experimental systems with which one ought not to interact if one is trying to gestate a healthy human. It's one thing if you're doing computational chemistry, but if you're doing organometallic syntheses ... let's just say, you're not working up till your due date.

People have gone round and round about whether, from the point of view of the science produced, it really makes much difference if the proportion of female chemists is high or low. I think it's an empirical question. But, in the meantime, the proportion of women can matter quite a lot in terms of how many females get into chemistry in the first place. (I don't just mean get into it as a field of study or career, but get into it as a way of thinking they enjoy and are good at.) Dealing with male professors who let you know that they don't expect females to be any good at the science they are teaching can put you off that science right quick -- not because you think they're right, but because it's tiring dealing with that bullsh*t on a regular basis. Studying chemistry can be tons of fun, but it's hard sometimes to consider joining the professoriate if there's no one in their ranks who looks like you. Seeing more chemistry professors, male and female, who are working out ways to balance research and teaching with "outside" commitments like family would be a good thing for the profession as a whole. These female chemists say a lot of useful things about how we should rethink our comic book ideals about what it means to be a good chemist. I don't agree with some of the sweeping generalizations (women being more cooperative and such -- it's not an essential trait, dude), but it does seem like there are many styles and skills that would build a healthier community of chemistry.

We're not there yet, but I think there's movement in the right direction.

Edited to add: The Chemical Heritage Foundation had a traveling exhibition about women's contributions to chemistry. It traveled to science fairs and such, to get the word out to impressionable young minds that women have always been among those able to hack chemistry.


Technorati tags:
,

Monday, October 17, 2005

Students as a vulnerable population.

I just read Pat Shipman's article in the latest issue of American Scientist, in which Shipman cautions scientists not to be complacent about the Intelligent Design vs. Evolution brouhaha. Shipman makes a case that many scientists have made (especially in the blogosphere), but a few pieces of this article really jumped out at me. The thing that connects them is the idea that children are being put at risk.

Speaking about the Dover case, Shipman lays out the standard reasons to take Intelligent Design as "scientifically unimportant," but then notes:
I might have settled back into complacency had I not learned that students in the public high school in my town—a town dominated by a major university—can "opt out" of learning about evolution if their parents send a letter to the school. Allowing students to "opt out" of learning the basic facts and theories of biology is about as wise as allowing them to "opt out" of algebra or English: It constitutes malfeasance.


Even if parents don't ask that Billy and Susie be excused from learning about evolution, they might not get to learn about it:
In at least 40 states, ID is being considered as an addition to the required science curriculum in public schools. This year a poll by the National Science Teachers Association showed that one-third of science teachers feel pressured to include ID, creationism or other "nonscientific alternatives" in their science classrooms. Some teachers are so intimidated by the threat of parental complaints that they skip material dealing with evolution in their classes.

In other words, what's on the menu in science class is being shaped in a significant way by public opinion rather than, say, by sound principles of science pedagogy. If there were a public backlash against the Law of Sines, would it be proper for Trigonometry teachers to drop it quietly from the curriculum? (Given current rates of "non-standard" punctuation among the general population, should grammar texts be consigned to the flames and graders wielding red pens be told to take a flying leap?)

Also in this article, Shipman brings up the case of Bryan Leonard, doctoral candidate in science education at Ohio State University. You'll probably recall that Leonard's Ph.D. defense was put on hold (by his advisor) because his committee wasn't properly constituted according to university policy, given that it lacked a faculty member with expertise in science education. This wouldn't have been news, except that Leonard testified as an expert witness at the evolution hearing held by the Kansas Board of Education. Leonard, a high school teacher, apparently wrote his dissertation on the research question:

When students are taught the scientific data both supporting and challenging macroevolution, do they maintain or change their beliefs over time? What empirical, cognitive and/or social factors influence students' beliefs?

One might quibble with the precise wording of the question (especially given that the consensus view among biologists is that there aren't any scientific data that challenge macroevolution). Setting that aside, there is an interesting question here about what makes a scientific theory believable to a high school student. Knowing the answer to a question like this might have all sorts of useful implications for efforts to improve science education. But, if how you get at the answer is by "teaching" kids something that isn't science as if it were science, then again we're talking about actions that constitute malfeasance. As put by three OSU professors in a letter to the dean of the graduate school (quoted by Shipman):
There are no valid scientific data challenging macroevolution. Mr. Leonard has been misinforming his students if he teaches them otherwise. His dissertation presents evidence that he has succeeded in persuading high school students to reject this fundamental principle of biology. As such, it involves deliberate miseducation of these students, a practice that we regard as unethical.

And, the professors asked the logical question: where was the IRB in this research involving human subjects?

Here's what the government has to say about research with children:
The issue of children as research subjects is a complex one since they are not considered able to make informed choices independently. Further, exposure of children, particularly healthy children, to more than minimal risks must be weighed carefully.

When including children in research, the role of the family should be considered in devising the protocol as well as in obtaining informed consent from the parents or guardians. If the research is based in schools, appropriate involvement and permission must be obtained from the school. Adequate measures must be developed to protect children's privacy and to ensure that their participation does not stigmatize them in the present or future.


The key question here is what the risks are to children who participate in the research, and how these compare to the potential benefits to the subject of participation.

One risk, it seems, is that the students could get an utterly misleading view of what science knows, and, more importantly, of how science works. Potentially, this could interfere with the students' ability to learn in subsequent science courses. (It might also discourage these students from pursuing further coursework in science.) Bad preparation in science could have a direct effect on the courses of study available to the college-bound students, and could impair the ability of the students to develop a sufficiently good understanding of scientific reasoning not to fall prey to all manner of quacks and charlatans (go ask Orac -- Orac knows). These outcomes are all too common given the current baseline of high school science education. It seems clear that the risks would be much greater if science education were actually designed to mislead in the service of answering Mr. Leonard's research question.

(Besides, in the rare cases when deception is part of an approved research protocol, afterwards the subjects are supposed to be told that they have been deceived, as well as why the deception was necessary to answer the research question. Did Mr. Leonard's research protocol do this?)

The overarching question here is what secondary education is supposed to do for (and to) the children who receive it, and whether certain ways of delivering it (including ID, leaving out evolution, misleading about the state of scientific knowledge or about the process of reasoning that produces that knowledge) undercut those aims. This is an important question to answer given that children are a vulnerable population. Their ability to make informed choices independently is not yet fully developed, and indeed education is supposed to help them develop this ability. Screwing around with this puts children's long-term well-being at risk.

It might be objected that the parents should actually be the ones with the primary, or sole, responsibility for helping their children develop the ability to make informed choices independently. Unfortunately, some parents may not be able to do this, and others may actually view it as contrary to their own interests to make independent thinkers of their kids. (Certain teenagers make this view understandable.) Given that we have public schools (for a little while more, anyway), this seems to reflect a commitment to the idea that there are certain things children ought to learn, not only because it is good for them, but also because it is good for society as a whole. We owe the children a certain basic education, and we're better off as a society if they all get it. Similarly, we owe children certain basic kinds of protection regardless of what their parents might think about it. Is "excusing" your child from learning about evolutionary theory on the basis of religious objections ethically equivalent to denying your child a life-saving blood transfusion on the basis of religious objections? Perhaps not. But, I'm not sure it's as different as some science-semi-literate parents think it is.


Technorati tags: ,

Sunday, October 16, 2005

National Chemistry Week is here.

Today is the first day of National Chemistry Week 2005. And, even though Bill Carroll, the current president of the American Chemical Society, is keeping a blog of his "Extreme Tour" in celebration of National Chemistry Week, I figured I should blog about it, too.

What's so great about chemistry? Of course, if you're a kid, chemistry has the allure of magic -- something might explode! (For those averse to permanent damage, there are plenty of cool chemistry activities that are much safer than whatever my brother did with his store-bought chemistry set to scorch the hell out of our parents' card table.) But I suspect its real charm for students, at least when it's taught right, is that it's a science that looks for the "whys" pretty early in the game. In general, introductory chemistry doesn't involve much memorization (whether of equations, as in physics, or of Linnaean taxonomy, cell organelles, phases of mitosis, or any of the other important details one has to remember in a biology class). Rather, you learn how to use the Periodic Table almost like a decoder ring to figure out why various substances behave the way they do. From the very beginning, the chemistry student is thinking not just in terms of facts, but in terms of rationalizing those facts. For every weird exception you learn to a regular pattern, the challenge is to understand why it breaks the pattern.

In this chemical universe the student enters, things start to make sense in a way that everyday life hardly ever does. It can be downright seductive. But of course, the orderly chemical universe to which the student is exposed is the product of much labor in laboratories. What happens in the labs can seem chaotic rather than orderly, and sometimes it is only the determination of the chemists to find the underlying order that keeps them going back to the bench to tame the chaos. Needless to say, finding the order in chaos can be seductive, too.

While chemistry often gets props for being a practical subject to pursue (where "practical" usually means leading to gainful employment, and the contrast class is something like philosophy), a lot of the people I know who went into chemistry were led by their hearts more than their heads. Chemistry just felt like the right way to engage with the world.

Primo Levi expressed this as well as anyone else has. Writing about his experiences as a chemistry student in Italy during the rise of Fascism on the eve of World War II, he said he felt

That the nobility of Man, acquired in a hundred centuries of trial and error, lay in making himself the conqueror of matter, and that I had enrolled in chemistry because I wanted to remain faithful to that nobility. That conquering matter is to understand it, and understanding matter is necessary to understanding the universe and ourselves: and that therefore Mendeleev's Periodic Table, which just during those weeks we were laboriously learning to unravel, was poetry, loftier and more solemn than all the poetry we had swallowed down in liceo; and come to think of it, it even rhymed! ...

[T]he chemistry and physics on which we fed, besides being nourishments vital in themselves, were the antidotes to Fascism ... because they were clear and distinct and verifiable at every step, and not a tissue of lies and emptiness like the radio and newspapers


(The Periodic Table, pp. 45-46.)

Why does it choke me up to see Levi want to conquer matter by understanding it, or to see that his motivation to understand matter is a desire to understand the universe and himself? Coming at a science like this, you can see why a couple centuries ago it was called natural philosophy. As nuts and bolts as the work of a chemist can be -- and Levi was for most of his career a chemist who took on problems in different industrial labs, including an IG-Farben lab while he was a prisoner at Auschwitz -- the drive here is to understand the substance of reality, to get at knowledge we can be sure of and can hold in common with others. Wanting something like this -- to understand the universe we're in and how we fit into it, to share our experience with our fellow human beings -- feels like the most human of impulses. Science is not the show-offy acting out of the maladjusted brainiac, but the labor of the human spirit.

Maybe if more of that got across to science students, and to the public at large, cultivating scientific literacy wouldn't seem so much like taking a dose of castor oil.


Technorati tags: , ,

Friday, October 14, 2005

Science in crisis (?)

Today, Inside Higher Ed has a story about a crisis in the training of new scientists, engineers, and mathematicians that I swear I've seen at least half a dozen times in the last dozen years. (And, given that for large periods of time within those dozen years I was paying more attention to research and thesis-writing than I was to the world, it's likely that I missed a few iterations.) The crisis is that, while in the last decade U.S. enrollments in science, technology, engineering, and mathematics (or STEM) have increased at the bachelors and masters level, doctoral programs in these fields have seen a decrease in enrollment. So, we have more students staying in the science pipeline longer than a decade ago (i.e., not leaking out in high school or college), but fewer are making it to the end of the pipeline and a Ph.D.

The first question to ask is whether this constitutes a real crisis (which means figuring out what would make it a crisis). Then, if it is a crisis, we'd need to figure out what to do about it.

Since I was an undergraduate a loooong time ago, I have heard doom-and-gloom stories about critical shortages of science Ph.D.s in the U.S. Of course, while in graduate school in chemistry I found out that U.S. universities were turning out something like 30% more chemistry Ph.D.s than the U.S. market could handle. So I confess to being a bit skeptical about the target numbers of science Ph.D.s certain folks think we ought to be reaching. Is this one of those situations where we're not actually striving to reach the optimal number of Ph.D. scientists to fully staff a healthy and active scientific community but rather pursuing growth in output for growth's sake? Does our economy depend on an ever-increasing production of Ph.D. scientists? (What's the futures market like on string theorists?) Is this just one more number we'd like to be able to hold up to compare ourselves favorably to Germany, or China, or India?

I hope not; it would be a silly reason to make more people suffer through the slings and arrows of a doctoral program. But there may well be good reasons. While people with bachelors and masters degrees in the sciences (and engineering and math, of course -- assume they're included; even economics) can find work in research labs, it's generally the folks with the Ph.D.s (or M.D.s) who are driving the original research. Maybe this is the cost of producing fewer Ph.D.s and more B.S.s and M.S.s: we move away from discovery, innovation, and deeper understanding and toward just being technicians. The market can probably absorb a lot more technicians than PIs, but leaving the original research to other countries that are better at producing large numbers of Ph.D. scientists might come back to bite us -- especially if there are skilled technicians who can be had for cheaper than those trained in the U.S.

And it's not clear that the costs will be primarily economic ones. Sure, with fewer Ph.D.s we may have a harder time supporting a thriving biotech industry, or attracting international students to our research universities (where they often inject cash into the system that their American classmates do not), or winning Nobel Prizes at the rate to which we've become accustomed. But who exactly will teach the increased number of students in the sciences at the bachelors and masters level when our Ph.D. output falls below the replacement rate?

(Given the large number of Ph.D. scientists looking for permanent positions, many with multiple post-docs under their belts, this does seem rather far fetched. I'm just reaching for a plausible explanation of the "crisis" here.)

The big distinction between Ph.D.s and scientists with lower levels of education -- that Ph.D.s are the ones who "really" do research -- might point to another reason to be concerned about our progress in increasing science enrollments at the lower levels. In many instances, centers of scientific research place all the emphasis on research, to the exclusion of attention to teaching, public outreach, and other potentially useful functions a scientist could perform. (I'm not making this up: Sean Carroll talks about it here.) This already has an impact on all those undergraduate science majors. I would never let my children be undergraduate chemistry majors in my graduate chemistry department. The level of instruction from the professors (those guys with the Ph.D.s) was, with a few exceptions, pretty dismal. Occasionally, some good instruction could be had from a TA whose graduate advisor had not yet impressed on him or her that time spent on teaching was lost forever to what really mattered -- research. And, all too often undergraduate research experiences were supervised by graduate students rather than the professors. So, lots of those people getting B.S.s at the high powered research universities may know a lot less than they ought to about their scientific field and about research in it.

And, this is less than ideal if we want a population that actually understands something about science. Once you're out of school, your facility cranking out problem sets doesn't do much for you. It would be much more useful to have a grasp of how science tackles problems (problems whose solutions are not yet in the back of the book!), how science uses the tools it has and develops new tools, and how scientific patterns of thought set up conditions where we really can build a body of knowledge that we can count on (in part because many pairs of hands and eyes scrutinize it and continually update it). You may not need this kind of understanding of science to be an adequate technician (although it can be helpful when unexpected outcomes present themselves). On the other hand, if all you have is mad technical skillz, you may be replaced with a robot someday.

I think the crisis I'm feeling with science has less to do with the numbers game and more to do with recognizing that the value of the Ph.D. scientist is, and ought to be, more than that of a machine for producing original research. Yet, in many places, this is how scientists (especially scientists trying to build tenure cases) are regarded. When the rational choice is to shut out the rest of the world so you can do your research and get your high-impact publications, it makes it harder for people outside the system to get value out of the science Ph.D. Unless teaching is recognized as valuable too, the people who might be able to teach us the most about what science is up to right now will have other things they have to do. (And is it a surprise that advisors who haven't given much thought to classroom teaching sometimes have serious difficulties teaching their advisees in the lab?)

This isn't a trivial worry. If Ph.D. scientists are isolated from everything but their research, it becomes easier to marginalize them -- to assume that what they have to say when they make their rare appearances in broader discourses just doesn't matter. We let the public discourse write off science at our peril. So, it seems like maybe we need to start cultivating our scientists as communicators -- not just in journal articles, but in the public square. And we can't really cultivate it without making it count. Research universities ought to recognize that building public interest in and understanding of science is, in the long run, good for the health of the research university and good for the health of science education.

Having more science Ph.D.s might be a good thing, but doing more and better things with the science Ph.D.s we have might be even better.


Technorati tags:
,
,

Thursday, October 13, 2005

Skeptics ahoy!

The 19th Skeptics' Circle is up at Time to Lean. Check out the fairway, gawk at the oddities, and don't forget to have a funnelcake before you head home.

Wednesday, October 12, 2005

Impact factories.

Via Crooked Timber, a story in the Chronicle of Higher Education about how the impact factor may be creating problems rather than solving them.

The impact factor is supposed to be a way to measure the importance of a journal, and of the articles published in that journal, in the great body of scientific research. To compute the impact factor for Journal X for a given year, you take the number of citations in that year to articles published in Journal X in the two preceding years. Then, you divide that by the total number of articles published in Journal X in the two preceding years. What does this give you? A measure of how important other researchers in the field think the articles in Journal X are (since, the thinking goes, you cite articles that are important, and articles that are important get cited). Because we're looking at a ratio, we get a sense of the proportion of important articles in a journal rather than just the raw number of cited articles. So, if in 2005 there are 100 citations of articles published in 2003 and 2004 in PZ's Journal of Squid Canoodling, and 100 citations of articles published in 2003 and 2004 in Deep Thoughts from the Discovery Institute, but PZ J. Sq. Can. published 1000 articles in 2003-2004 while DTDI published only 300 in that time, the impact factor of PZ J. Sq. Can. is 0.1, while the impact factor of DTDI is 0.33.
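The arithmetic here is just a ratio, which a few lines of code make plain (the journal names and numbers are, of course, from my made-up example above):

```python
def impact_factor(citations_this_year: int, articles_prior_two_years: int) -> float:
    """Citations this year to a journal's articles from the two preceding
    years, divided by the number of articles it published in those years."""
    return citations_this_year / articles_prior_two_years

# The made-up journals from the example above:
pz_j_sq_can = impact_factor(100, 1000)  # 100 citations, 1000 articles -> 0.1
dtdi = impact_factor(100, 300)          # 100 citations, 300 articles -> about 0.33
```

Note that the same 100 citations yield wildly different impact factors depending only on the denominator, which is exactly the lever the "tight rein on the number of articles" strategy pulls.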

As scientists at prestigious research universities know all too well, the impact factor is a way for tenure committees and granting agencies to judge how good your publication record really is. Instead of simply counting your publications, folks can also look at the impact factor of the journals in which you've published. This could be a helpful thing if one has a small number of publications but they're in widely cited journals. Similarly, it could bite you in the butt if your articles happen to be in journals that hardly get cited at all.

But, as seems to be the case any time you reduce something like the importance of a journal (or of your scholarship) to a number, there are ways the impact factor doesn't tell you all you might want or need to know. For one thing, given that it's often useful to cite review articles, journals that publish lots of review articles get cited more, thus raising their impact factor. This doesn't tell you much about the impact of those journals' articles on original research. Also, remember that the impact factor is calculated based on citations of articles from the preceding two years. It sometimes happens that really important results, for whatever reason, are not recognized (and cited) that quickly. So a three-year-old article that is cited like crazy will do nothing for the impact factor of the journal it's in. And of course, "little" journals that focus on fairly specialized scientific subfields have a much harder time getting high impact factors simply because there are fewer scientists who work in these subfields to cite the articles. (If these journals keep a really tight rein on the number of articles they publish, they could offset the low number of articles cited. But, this isn't necessarily what you want to do in a little subfield that is just taking off -- building the literature seems like a more natural impulse.)

Also (as the "journals" in my example above should suggest), sometimes articles are cited a lot to be made fun of.

The Crooked Timber discussion of the article talks about ways journal editors might try to game the system, raising some legitimate worries. Any place you are clear on the selection criterion, you can usually figure out multiple strategies to satisfy it. For example:

The gaming has grown so intense that some journal editors are violating ethical standards to draw more citations to their publications, say scientists. John M. Drake, a postdoctoral researcher at the National Center for Ecological Analysis and Synthesis, at the University of California at Santa Barbara, sent a manuscript to the Journal of Applied Ecology and received this e-mail response from an editor: "I should like you to look at some recent issues of the Journal of Applied Ecology and add citations to any relevant papers you might find. This helps our authors by drawing attention to their work, and also adds internal integrity to the Journal's themes."

Because the manuscript had not yet been accepted, the request borders on extortion, Mr. Drake says, even if it weren't meant that way. Authors may feel that they have to comply in order to get their papers published. "That's an abuse of editorial power," he says, "because of the apparent potential for extortion."

Robert P. Freckleton, a research fellow at the University of Oxford who is the journal editor who sent the message to Mr. Drake, says he never intended the request to be read as a requirement. "I'd be upset if people read it that way," he says. "That's kind of a generic line we use. We understand most authors don't actually do that." He changed the wording in the form letter last week to clear up misunderstandings, he said.


The benign reading of the editorial suggestion to add citations is, "Hey, there's other work out there, which you might not have noticed, that bears on yours in an interesting way -- have a look!" And, sure, who can argue against keeping up with the literature that relates to your own work? It might have been more persuasively above-board had the literature-to-look-at list included articles from other journals (in a Macy's sends you to Gimbel's kind of move), but you can't blame an editor for knowing his own journal best. The less benign reading is that the editors are taking active steps to artificially boost their journals' impact factors.

What gets lost in all this gamesmanship is the idea that scientific work ought to be evaluated on its own merits. Peer reviewers are supposed to be assessing the soundness of the science, not the sexiness of the finding. Sure, the sheer number of scientific journals in most fields makes it hard for any mortal to "keep up with the literature", which means that scientists look for quick ways to locate the papers most likely to be important. But the quick ways may not be the most reliable ways. And, depending on how sensitive peer review decisions are to impact factor gamesmanship, it is conceivable that things could reach a point where being published in a high impact journal has less to do with the soundness of your science than with the fashionability of your findings. At the extreme, scientists might have to spend a lot more time replicating published results, and might spend quite a bit of time ignorant of important findings that are published in out of the way places, or are waiting in the queue at the journals with high impact factors. And that would suck.


Technorati tags: ,

Tuesday, October 11, 2005

Art appreciation.



This is an unsolicited piece of artwork from my offspring, apparently in support of teaching the controversy. I am unsure why the meatballs in this illustration are so large; perhaps it has some bearing on the progress in the Dover case.

Meanwhile, I suspect that the insects in the picture are subliminal advertising for the Circus of the Spineless but am unable to confirm that money changed hands.

Truth in advertising (university Nobel laureate edition).

In yesterday's Los Angeles Times there's an interesting piece on how universities count "their" Nobel laureates. It goes without saying that this is not silent, internal, beaming with pride at the accomplishment of someone dear to us counting. Rather, we're talking about Nobel counts that get put out in university communications with the world.

First, what's going on with the counting?

There seem not to be uniform, agreed-upon standards for identifying which institution gets to claim a Nobel laureate. Some, like UC Santa Barbara, are fairly strict, laying claim only to professors who won their Nobels while at UCSB and who are still active members of the UCSB faculty. Other schools, like the University of Chicago, claim as theirs Nobel laureates who were students at Chicago, or did research at Chicago, or are past or current members of the faculty. There's probably an argument of the "it takes a village" variety that could be made to support this kind of practice … but it seems like actually making an explicit argument about why you count Bobby Brainiac as your Nobel laureate if he did two years of graduate school with you before transferring out and winning a Nobel a dozen years later might just serve to draw attention to the fact that your laureate count is inflated. (If you can legitimately claim a 15% contribution to Brainiac's scientific trajectory, does that make him only a 0.15 laureate for your count? What does that do to the total number of Nobel laureates you can legitimately report?)

From the linked article:

There is good reason for ambiguity in the accounting.

"A university does many things," said David J. Gross, a UC Santa Barbara professor who won the Nobel Prize for physics in 2004. "It teaches, so it's proud of its students who went on to do good things. They're proud of their researchers who worked at the institution who have done good things. And of course they're proud of the people who are there now and their impact on current research and current teaching."

So one school might claim a Nobel laureate who was there as an undergraduate, another for graduate work, another for advanced research, and several for being on the faculty. Who can say which school is most worthy?

Caltech President David Baltimore, whose 1975 Nobel Prize for physiology or medicine is claimed by MIT, Rockefeller University and Caltech, sees no need to attempt an answer.

"It is sort of a game, and you might as well play it by whatever rules you want, like solitaire," he said.


(Bold emphasis added.)

Is it really OK for universities to count Nobel laureates by any rules they want?

To answer that, we need to understand why Nobel laureate counts are supposed to matter. Again, quoting from the article:

Nobel Prizes make schools attractive to prospective students, faculty and donors, conferring the aura of a winner. A university's roster of laureates is "probably more significant than [the college rankings in] U.S. News and World Report," said F. Sherwood Rowland, a UC Irvine professor who won the Nobel Prize for chemistry in 1995.

The fact that a school has one or more Nobel laureates is supposed to tell you something about that school. Just what it's supposed to tell you is a bit nebulous. If a university has a laureate who won the Nobel based on research done at that university, it tells you that high-powered research happens there. If a university has laureates who are still active in research, it tells you there may be opportunities for graduate students to learn from them. (Students might also learn from these luminaries in the classroom, but that's not clear from the mere presence of Nobel laureates on the faculty.) If a university was where a laureate completed his or her bachelor's degree, it tells you that an undergraduate education there is not sufficient to put people off research.

But closer inspection can tell you some other important things about a university. Some universities are the places laureates did the research recognized with Nobel prizes, while others tend to make senior hires of Nobel laureates. This tells you something not only about research conditions at the different kinds of schools but also about the philosophy for building excellent faculties (grow them yourself vs. buy them already famous). How many of a university's claimed laureates were, say, denied tenure while in the midst of their ground-breaking research? This says something about the university's receptiveness to cutting-edge ideas as well as shining a light on its standards for tenure.

Universities are reaching different audiences when they announce their Nobel body counts: students, faculty, administrators, alumni and other potential donors, funding agencies contemplating the feasibility of new research at a given institution, prospective students (and their tuition-payers), and prospective employees. For certain purposes (like securing donations for a new science building), just rolling off the list of your Nobel laureates may do the job of giving the donors warm fuzzies about the institution. But from the point of view of presenting prospective students with an accurate picture of how their experience will be enhanced, certain ways of counting your Nobel laureates seem less than informative, if not downright deceptive. High school senior Carly Cranium might be better off knowing whether this university is one that has provided an excellent undergraduate education for Nobel laureates, or whether courses in her intended major will be taught by a Nobel laureate, or whether she'll have an opportunity to do research in a lab where prize winning research was done. Otherwise, the number of Nobel laureates ends up being a statistic with as much meaning to the prospective student as the number of acres the campus occupies and the percent of alumni who give money to the endowment.

For early-career scientists picking a suitable environment (for research, teaching, and maintaining a baseline level of sanity), it seems like the laureate count is meaningless without further information. It's nice to know a university will think fondly of you once you've won fame and fortune, but it's better still to know whether a place will nurture you before the world knows you're a star.


Technorati tags: , ,

Monday, October 10, 2005

When unfalsifiability is your business plan.

This is a follow-up to my discussion last month of the Acu-Gen Baby Gender Mentor test, prompted by the report today on Morning Edition that authorities in Illinois are investigating whether the claims the company makes to consumers in marketing the test rise to the level of consumer fraud.

So, for those just tuning in, the deal is that this test promises an accurate determination (99.9% accurate, if you want the numbers) of fetal gender, as early as 5 weeks into a pregnancy, from only a few drops of the mother's blood. And, they promise a 200% refund if the test is wrong.

Set aside, for the moment, concerns about whether there's good scientific evidence that this test could be so accurate. (That was the subject of the last entry.) Cast your gaze, for a moment, on what seems to be the standard operating procedure when a consumer tries to get a refund for an inaccurate test result.

Scenario 1: Baby Gender Mentor test says boy, but the sonogram says girl. The lab does a retest. If the retest still says "boy", the consumer is told that sonograms give inaccurate gender identification 20% of the time. Still a chance Baby Gender Mentor will be right, so no refund yet.

Scenario 2: Baby Gender Mentor test says boy, but amniocentesis says girl. The lab does a retest. If the retest still says "boy", the consumer is told that the pregnancy began with a set of fraternal twins, one boy and one girl, and that the boy was a "vanishing twin". No refund, even though the boy Baby Gender Mentor detected vanished.

Scenario 3: Sonogram indicates a single fetus, after which Baby Gender Mentor test says boy and amniocentesis says girl. The lab does a retest. If the retest still says "boy", the consumer is told that there was a vanishing boy twin, and that the sonogram that indicated that it was a single fetus is no proof of anything, since sonograms give inaccurate gender identification 20% of the time. No refund, because you can't prove Baby Gender Mentor was wrong!

Do you see the pattern here?

It is true, of course, that sonograms don't always give enough information to make an accurate determination, either of gender or of how many fetuses are present. (A woman of my acquaintance discovered, in her eighth month of pregnancy, that she was expecting twins — on the fifth sonogram.) But the great part here for the folks selling Baby Gender Mentor is they've got an excuse worked out for any mismatch between their test results and what other diagnostic tools (including visual inspection of the newborn) indicate. No, they aren't producing independent evidence that there was a vanishing twin, but since it could have happened, that's enough for them to say that their test result hasn't been falsified. And that, it seems, means that they'll never have to act on their double-your-money-back pledge.
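The scenarios above amount to a decision procedure in which every possible piece of contradicting evidence maps to a ready-made excuse. A tongue-in-cheek sketch of that procedure (my reconstruction of the pattern as reported, not anyone's actual policy):

```python
# Each way of contradicting the test result has a pre-packaged excuse,
# as described in the scenarios above.
EXCUSES = {
    "sonogram": "sonograms misidentify gender 20% of the time",
    "amniocentesis": "there must have been a vanishing twin",
    "newborn": "vanishing twin, and the single-fetus sonogram proves nothing",
}

def refund_due(contradicting_evidence: str) -> bool:
    """A refund is owed only if no excuse covers the evidence offered."""
    return contradicting_evidence not in EXCUSES

# Every form of evidence a consumer could actually produce is covered,
# so the double-your-money-back pledge can never be triggered:
assert not any(refund_due(e) for e in ("sonogram", "amniocentesis", "newborn"))
```

The function returns True only for kinds of evidence nobody can supply, which is precisely what makes the pledge unfalsifiable in practice.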

With a business plan like that, maybe it's time to branch out from biotech to religion.


Technorati tags: , ,

Saturday, October 08, 2005

Getting excited about science: the 2005 Ig Nobel prizes.

On Thursday, they awarded the 2005 Ig Nobel Prizes. They don't carry the same cachet as those other Nobel prizes, nor the same hefty cash awards, but the Ig Nobels are often awarded for scientific findings everyday folk can wrap their minds around.

For example, the Ig Nobel in economics went to the inventor of "clocky", an alarm clock that hides so you can't hit the snooze. Who can't appreciate an invention like that? The Ig Nobel in chemistry went to a pair of engineers who studied whether people swim faster or slower in syrup relative to water. (Turns out to be a wash because of the trade-off between drag and leverage.) And, a prize in fluid dynamics was awarded for calculation, from physical principles, of the pressure built up inside penguins before they defecate. It's really the untold story behind March of the Penguins.

Some of the recognized research, I think, is much easier for a scientist to appreciate than for the lay person. The winners of the prize in physics, one of whom was alive to receive the prize, had been conducting an experiment following drops of pitch as they dripped through a funnel -- since 1927. Since the pitch drops fell at the rate of about one every nine years, that means that in the 78 years they ran the experiment there were 8 or 9 drops. How's that for careful empiricism? And, if 8 or 9 data points seems too sparse for your tastes, there's the winner in nutrition who photographed and retrospectively analyzed every meal he has eaten for the last 34 years. You'd think the efforts of these scientists could at least make graduate students feel better ... but I'm not sure folks who haven't done the time to try to get an experiment to work, or to get enough data to get meaningful results, would be sufficiently impressed.

Also, a few of the Ig Nobel prizes usually go to projects that, uh, don't seem to yield much in the way of new knowledge. The stand-out this year is the prize in medicine, which was awarded to the inventor of neuticles. I'm not saying that a dog might not retain his self-esteem, post-neutering, better with neuticles than without ... but as far as I can tell there wasn't much canine self-esteem research. Yes, different sizes and levels of firmness offer the consumer/pet guardian lots of choices ... but this is a medical achievement? If you ask me, it just doesn't rise to the same level as watching pitch drops for 78 years. (This year may just be a fluke. The 2004 Ig Nobel Prize in medicine went to research on the effect of country music on suicide.)

I'm hopeful that schoolchildren hearing news reports about some of these winners will be inspired to pursue science, engineering, and medicine in the hopes of snagging one of these prizes for themselves some day. In my heart of hearts, though, I fear they'll be more inspired by the winners of this year's Ig Nobel Prize for literature:

The Internet entrepreneurs of Nigeria, for creating and then using e-mail to distribute a bold series of short stories, thus introducing millions of readers to a cast of rich characters -- General Sani Abacha, Mrs. Mariam Sanni Abacha, Barrister Jon A Mbeki Esq., and others -- each of whom requires just a small amount of expense money so as to obtain access to the great wealth to which they are entitled and which they would like to share with the kind person who assists them.

Why do the poets get more respect than the scientists?


Technorati tag:

Thursday, October 06, 2005

Getting philosophical, getting committed.

There's something about the ongoing evolution versus intelligent design fisticuffs that's been festering with me. It's one of the criticisms that's been leveled at evolutionary theory by folks like Phillip Johnson: the claim that evolution is a philosophical theory. Here's the claim, in context, as presented by a student newspaper at the University of New Mexico covering a talk Johnson gave there:

Johnson said the theory of evolution, or any theories like it, will not survive the 21st century because evolution is a philosophical theory.

He went on to say that one of the major flaws of the theory of evolution is that it excludes the possibility of divine intervention within the creation of living organisms.

“What we have is a theory that supports a moral view that nature is all there is and God is completely out of the picture,” Johnson said.

He said that one of the reasons God is left out of the theory is because scientists are either atheists or very liberal about religion.

Johnson’s speech concluded on the proposal that students should be taught a variety of theories regarding the way life is started — not just evolution.


Now, I know there's a long history of trash-talking various moments in the history of science by saying they look more like philosophy than science. Thomas Kuhn noted that a sure sign that your paradigm is in trouble is that the discussion gets philosophical. He wrote:

It is, I think, particularly in periods of acknowledged crisis that scientists have turned to philosophical analysis as a device for unlocking the riddles of their field. Scientists have not generally needed or wanted to be philosophers. Indeed, normal science usually holds creative philosophy at arm's length, and probably for good reasons. (Structure of Scientific Revolutions, p. 88)

Closer to home, I know what it's like to call one's mother and tell her her child is leaving a perfectly reputable scientific field to become a philosopher. I get that dropping the phi-bomb on a scientific theory is intended to damage reputations and hurt feelings.

But can we pull back for a moment to look at what the content of the slur is supposed to be?

PZ Myers responds (in part) to Johnson and his posse of trash-talkers this way:

You could also claim that Christianity, capitalism, and democracy are "philosophical theories"—that doesn't imply at all that they are going to expire. Evolution is not speculation and faith and guesswork, there is evidence…and what evolution tries to do is explain the evidence.

While I'm grateful for the assurance that philosophical theories aren't about to be yanked off the shelf like expired milk, PZ is gesturing towards a line one should draw that separates evolutionary theory from "philosophical theories" like Christianity, capitalism, and democracy. Johnson seems to be recognizing the same line, but disagreeing about what side evolutionary theory is on. PZ suggests that the "philosophical" side is where you'll find the theories based on speculation, faith, and guesswork. Johnson (as portrayed in the linked article -- even given his track record, my experiences with the school paper here make me hesitant to assume the student paper at UNM necessarily got it right) seems to be saying "philosophical theories" are the ones that use their metaphysical commitments to support certain moral views and undermine others.

So, is the ideal supposed to be that scientific theories are utterly and completely free of philosophy? May I gently remind my scientific friends that, in the medieval university, we'd all be in the same department (or at least, on the same hall)?

Of course scientific theories bring some philosophy with them. You think the data we collect today can help us make good predictions about what will happen tomorrow? That reflects a metaphysical commitment you have about what kind of universe you're living in. And there's nothing wrong with having that commitment. Indeed, it's what helps some of us get out of bed in the morning. You want to show me the analysis that shows your results are statistically significant? Fine, but don't forget that the claim of statistical significance rests on metaphysical commitments about the normal distribution of data in the bit of the world you're studying. If you didn't start with some metaphysical hunches, there would be no way to do any science.
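To make the statistical-significance point concrete: the familiar t statistic is only readable as evidence against the odds of chance because of background assumptions about how the data are distributed. A minimal sketch using just the standard library (the sample numbers are invented for illustration):

```python
import math
from statistics import mean, stdev

def t_statistic(sample: list[float], mu0: float) -> float:
    """One-sample t statistic: how far the sample mean sits from mu0,
    in units of estimated standard error. Reading this number off a
    t table as a p-value presupposes roughly normally distributed data --
    a metaphysical hunch about the bit of the world you're studying."""
    n = len(sample)
    return (mean(sample) - mu0) / (stdev(sample) / math.sqrt(n))

# Invented measurements, tested against a hypothesized mean of 1.5:
t = t_statistic([2.1, 1.9, 2.0, 2.2, 1.8], mu0=1.5)  # about 7.07
```

The calculation itself is innocent arithmetic; it's the step from "t is about 7.07" to "this result is significant" that leans on the distributional assumption.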

But, clearly, there is a difference between doing this and jumping into a "philosophical theory" of the sort Johnson and PZ seem to have in mind. And here, let me be the millionth person to point out that there is an important distinction between what one takes up as a methodological strategy and what one takes on as a metaphysical commitment. To Johnson, the fact that God is not mentioned anywhere in evolutionary theory is equivalent to biologists saying they're committed to the non-existence of God. To biologists, on the other hand, the non-mention of God reflects a methodological commitment to explain phenomena in the natural world by pointing to natural causes. Saying, "I'm only going to accept causes of types X, Y, and Z in explanations of this sort of phenomena" is not the same as saying, "There's nothing there but causes of types X, Y, and Z." If, as I pour a flask of water on a spoonful of table salt, I dance the tarantella, it would be silly to accuse the chemist, who explains why the salt dissolved by pointing to the structure of the salt and the structure of the water, of denying the existence of the tarantella. Clearly, the tarantella exists, but the chemist doesn't need it to explain why the salt dissolved.

(Occam's razor? Also philosophical. Don't let it freak you out.)

The deal with science — the thing that makes it different from some "philosophical theories" you might worry about — is that there's a serious attempt to do the job of describing, explaining, and manipulating the universe with a relatively lean set of metaphysical commitments, and to keep many of the commitments methodological. If you're in the business of using information from the observables, there are many junctures where the evidence is not going to tell you for certain whether P is true or not-P is true. There has to be a sensible way to deal with, or to bracket, the question of P so that science doesn't grind to a halt while you wait around for more evidence. Encounter a phenomenon that you're not sure is explainable in terms of any of the theories or data you have at the ready? You can respond by throwing your hands up and hypothesizing, "A wizard did it!", or you can dig in and see whether further investigation of the phenomenon will yield an explanation. Sometimes it does, and sometimes it doesn't. In cases where it does not, science is still driven by a commitment to build an explanation in terms of stuff in the natural world, despite the fact that we may have to reframe our understanding of that natural world in fairly significant ways.

So really, philosophy is not the problem here. Rather, the problem is hanging certain metaphysical commitments on science that are extraneous to the job it's doing.

Which commitments are separable from which others, and which commitments are joined at the hip, can be a tricky business if you're not used to thinking carefully. (See, for example, the current debate over whether you can support disability rights and also support physician assisted suicide.) Even people who think for a living can let their assumptions go unquestioned if they've been humming along for a while. But, it seems to me, if you want to know what a scientific theory commits you to, you might want to talk to some scientists who use the theory. If you're really brave, you could even ask a philosopher of science.


Technorati tags: , ,