Monday, November 28, 2005

Just where is this handbasket headed?

It's time for our regularly scheduled soul searching about what is happening to science (and its future) in the U.S. I was already going to blog about this, given recent news that Stanford has lost two scientists it was trying to recruit to its Institute for Cancer and Stem Cell Biology and Medicine; they went instead to Singapore's Institute of Molecular and Cell Biology (which is viewed as a more favorable environment for this cutting-edge research than even the deep-blue San Francisco Bay Area). But, not surprisingly, PZ Myers brought up similar worries in response to an article in Wired.

The big worries are that the U.S. is not supporting science in important ways -- U.S. universities aren't turning out enough science Ph.D.s (and science education at K-12 and undergraduate levels is not very good), there's not enough money being put into federally funded scientific research (or industry R&D), ideologically motivated restrictions on research are making it hard to do the research the really good scientists want to do, and scientists generally aren't getting the respect they deserve in the public square. The fear is that all these factors will coalesce in such a way that scientists start fleeing, first particular states in the U.S. (as suggested in this look at how different states fund or prohibit stem cell research), then the U.S. itself. At which point, presumably, everyone left in the U.S. will either be a frycook, a member of the punditry, or a participant in a reality show.

Are we totally screwed? Hard to tell from here, but it seems like this is a good moment for reflection about the precise nightmare scenario we're afraid of.

It is hard to deny that individual scientists will be influenced in their choices of where to go by the circumstances in the different locations under consideration. If you want to do research with stem cells, you probably don't want to take a university position in a state where stem cell research is illegal. If besides having a scientific career you also want to raise kids, you might want to do science in a place where the schools provide a good education (including in matters scientific). If you're a woman doing science, you might want your job to be in a country where it's legal for you to drive yourself to work. At the same time, scientists' choices are not always completely determined by their scientific agendas. You might forego a position at a ridiculously well-funded research institution, where you could get any state-of-the-art piece of equipment you want, in favor of a position at a less-well-funded institution where you can also contribute to excellent undergraduate education or live in a community you prefer; the trade-off would be finding creative ways to pursue your research with fewer resources. But scientists make this kind of choice all the time. (My suspicion is that the scientists with tight resources tend to be more clever about experimental design because they have to be.) Moreover, there are more than a few scientists who tend to work, whether vocally or quietly, to change local circumstances (lobbying for more funding or more sensible regulations, supporting K-12 education, talking to their neighbors). These people manipulate experimental systems for a living -- are they going to accept their environment as immutable? Not so much (although some prefer moving to a different environment over tinkering with the environment they have). So, it would surprise me a lot if the sun rose one day on a rapture-like disappearance of all the scientists from the U.S. or even from Kansas.

At the same time, it is an important feature of science that you need a scientific community in which to do it. To keep research going, you need a critical mass of people in the lab -- not just to run the experiments, but to bounce ideas off each other and challenge each other's interpretations of the results. Because scientists have finite lifespans, this means you need to keep putting new scientists into the system even to maintain equilibrium. So, cutting back funding of science (including the training of new scientists in graduate school) could set up a situation where whole lines of research die out simply because there aren't enough scientists to maintain them. It's not clear, either, that "thinning the herd" of research lines this way would make for a stronger body of scientific research in the long run, given that it's often unclear in advance which lines of research will be the most important from the point of view of basic knowledge or practical implications. And, there's no guarantee that defunct research lines could be successfully resurrected later when more funding becomes available.

None of this would mean that there wouldn't still be people in the U.S. interested in pursuing science. However, to do so in the context of a scientific community, they would need to: (1) go somewhere that has a suitable scientific community already, (2) find a way to participate remotely in such a community, or (3) figure out how to constitute such a community ex nihilo where they are. The resources for exercising the second option have been steadily improving, but there are still many kinds of research one probably shouldn't do in jammies in one's garage. The third option is pretty hard without many like-minded friends and buckets of money. Assuming one is good at picking up languages, the first option may seem like the path of least resistance. But if the scientists all go to Singapore, that may have an impact not only on the science education available in the U.S., but also on who earns the royalties on patented inventions ... like pharmaceuticals. So, as has been said before, this would not be a development without consequences for Joe Q. Public.

At the same time, futzing with one little piece of the current situation without addressing the others may lead to bad consequences, too. For example, increasing the number of science Ph.D.s produced in the U.S. only improves the scientific climate in the U.S. if there are ways for these Ph.D.s to stay engaged in science in the U.S. For starters, making sure there are enough jobs for all these Ph.D.s (non-exploitative jobs, not an endless cycle of post-docs until you achieve nirvana) would be a good thing. Large numbers of well-trained but under-employed and under-appreciated scientists enhance the feeling that the U.S. is not a good place to do science.

It does seem to me that the current situation presents an opportunity to consider cultivating a truly international community of science. Concerns have been voiced about how many foreign students come to the U.S., get science Ph.D.s, and then go home with them, thus shifting resources (in the form of education) from the U.S. to other countries. Setting aside the fact that foreign students pay lots of money to study in Ph.D. programs (whereas U.S. students are generally fully supported), as Ph.D.s in their home countries these scientists are well-situated to collaborate with U.S. scientists (and with U.S.-trained scientists in other countries). At a moment when it may be getting harder to do science locally, it seems like it should be getting easier to do it globally. Moreover, collaborations between scientists in different parts of the world who have different cultural and religious backgrounds but who share a commitment to the project of science just might give the rest of us a model for figuring out how to get along in a diverse community. It might even help us get in touch with our common humanity.

As for the point PZ was making in the Pharyngula discussion: Are we, as a nation, spending money on other things (like wars, or corporate tax breaks, or reality shows) that would be better spent supporting science and the benefits that flow from science to society as a whole? I'm no economist, but I think the answer is probably yes.



Tuesday, November 22, 2005

Teaching undergraduate science classes at research universities

Inside Higher Ed reports a story that shouldn't surprise anyone: TA’s as the Key to Science Teaching. You see, according to Elaine Seymour, recently retired director of ethnography and evaluation research at the University of Colorado at Boulder, a lot of college students who start out as science majors leave the sciences, largely due to poor teaching in their science classes. And, one of the sources of poor teaching in science classes is graduate teaching assistants with little to no guidance about how to teach.
... she [Seymour] said the sad fact is that most science TA’s don’t receive much help in doing their jobs well. Many get absolutely no preparation and those who do tend to receive a brief session with no ongoing mechanism to learn about their teaching.

“Training,” Seymour says, is “a very unfortunate word,” and she prefers to talk about “educational professional development” for TA’s. One of the major problems, she found, is that when teaching assistants want to become better teachers, they often feel the need to keep quiet about it. “They consistently told us that if they want to teach and they are interested in that, they keep that to themselves. They are afraid that they will be taken as less serious students” by the professors supervising their work, Seymour says.

Word!

It is rare, especially in the sciences, that the faculty supervising graduate students pay any attention to the need for "professional development" in the area of teaching. Nearly all the focus is put on learning to be a competent researcher. A charitable interpretation of this is that the faculty regard research as an activity relatively unknown to new graduate students, one that can only be learned by full immersion, while they regard good teaching as something that will come naturally to any intelligent graduate student who gives it a try.

A less charitable interpretation is that the faculty don't actually care about undergraduate teaching. Does undergraduate teaching bring in multimillion dollar grants? Not the way a kick-ass research program does. Does undergraduate teaching lead to high prestige publications or Nobel Prizes? Not so much. Will well-to-do parents keep ponying up $40K and more a year so junior gets the name of the Big Prestigious University on the diploma, setting him up to earn the big bucks, even if junior never gets within 20 feet of an actual professor in a science class? It would seem so.

Of course, even if faculty at such universities had some kind of commitment to undergraduate education (perhaps of the majors in their department, if not the swarms of pre-meds), it wouldn't necessarily mean that these faculty would also be committed to helping their graduate students learn how to teach. Most of these grad students are TAs in the pre-med courses, anyway. And indeed, large numbers of pre-meds who need science courses (and from whom faculty most want to distance themselves) are the reason that the large research universities take in as many science graduate students as they do. Given the reality that there are more science Ph.D.s being produced than there are Ph.D.-level jobs for them, it's pretty clear that a large number of graduate students are used primarily to assume the necessary but unpleasant teaching tasks with which the faculty would otherwise be bothered and to generate scads of data for their advisors' research programs.

Any time a graduate student spends developing pedagogy is time taken away from running experiments. For the graduate students who won't find jobs with their Ph.D.s (because there are too damn many of them), learning to be a good teacher is wasted effort, and it doesn't maximize the department's return on its investment (by moving the pre-meds through and getting more data at the bench). And, for those lucky graduate students who will find jobs with their Ph.D.s -- well, of course, they'll want jobs at big research universities like the ones they trained at, and everyone knows that what really matters there is research, not teaching.

One wonders what will happen if the research money dries up, and if tuition payers start demanding quality teaching for their tuition dollar. It might be time to think about alternate strategies for training graduate students.



Monday, November 21, 2005

Ethics and alternative medicine

As requested by a fellow blogger, I'm weighing in with some thoughts about "complementary and alternative medical therapies" and the ethical implications thereof. There are plenty of other good posts to read on this general subject, most recently at Pharyngula, retired doc's thoughts, Notes from Dr. RW, and In the Pipeline.

Where I start on this is that doctors are in the odd position of trying to work from scientific knowledge and at the same time trying to provide care to human beings. A patient is not a petri dish. Not only is a human being wildly more complex than most of the experimental systems scientists try to tackle in the lab (where there are, after all, lots of controls in place), but the patient generally has strong opinions about his or her own experience. Sure, these opinions are subjective, but for the person who has them, they're mighty real. (Even if the excruciating pain I feel is "all in my head", the fact that it's in my head makes it real enough to me!)

Scientists are in the business of getting data on the observable features of the systems they're studying (and also working out clever ways to observe more of the features of those systems). They use that data to work out the structure of the system, the cause-and-effect connection, ways to predict what's coming in the system given certain conditions, and maybe even ways to alter conditions to bring about different outcomes. This can be challenging even in vitro. It is a much more impressive achievement in vivo. But it's worth noting that most in vivo research projects are vastly simplified compared to what you might think of as "the real world" of organisms (e.g., studying hundreds of mice that are very similar genetically, are housed in identical conditions, fed the same Mouse Chow at the same times every day, etc.).

In other words, to the extent that physicians are applying the fruits of scientific research to the treatment of their patients, they are often trying to extend results obtained in very controlled conditions to autonomous humans "in the wild". Given that most patients probably wouldn't want to adopt the correspondingly controlled environment, that's fine. But, it means that our best prediction of the likely outcome of intervention X is less certain than it would be under the conditions of the experiments that produced the best current knowledge about intervention X. Complicating things, even under experimental conditions, there is usually a range of outcomes observed in response to a given intervention. So, even while there may be very good evidence to support the prediction that intervention X will produce an outcome somewhere in a particular range, there may not be grounds for predicting that a particular individual will have an outcome in one part of the range rather than another.

"What does all this have to do with alternative medicine?" I hear you ask.

One of the big worries about complementary and alternative therapies is that they just don't have the science to back their efficacy. Yet, the public seems to be clamoring for them. In working out just what they should do about this state of affairs, physicians probably ought to examine why the alternatives are so attractive to so many patients. At least part of this, I think, can be explained in terms of what a tough-minded scientific approach to medicine brings with it, and how this might complicate the delivery of care (not just interventions) to the patient.

A patient shows up at the doctor's office looking for care that addresses a medical issue (an injury, a disease, prevention of some future ill). The doctor gets information about the patient's condition (via examination, lab work, and talking with the patient). Ideally, a medical intervention will effectively address the issue (repair the injury, cure disease or at least manage its symptoms, change the state of affairs in such a way as to lower the likelihood of the future ill). If the intervention can address the issue without making the patient feel like crap, that's a plus. The set of effective interventions the physician has to offer are generally those that have been proven in clinical trials. Ideally, you want trials in which administering intervention X is significantly more effective than administering an appropriately similar placebo (easier when we're talking about pills than non-drug interventions) in a double-blind set up, where neither the clinician nor the patient knows who's getting the intervention and who's getting the placebo. Such trials generate the "evidence" in evidence based medicine.
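(For readers who like to see the machinery, here's a minimal sketch of the comparison such a trial is making. All the numbers and names below are made up for illustration, and real trials involve blinding, randomization protocols, and fancier statistics than this; the point is just that "evidence" here means the treatment group did better than the placebo group by more than chance shuffling would explain.)

```python
import random

# Hypothetical trial outcomes: 1 = patient improved, 0 = did not.
# Suppose 60 of 100 patients improved on intervention X, and 45 of 100 on placebo.
treatment = [1] * 60 + [0] * 40
placebo = [1] * 45 + [0] * 55

observed_diff = sum(treatment) / len(treatment) - sum(placebo) / len(placebo)

# Permutation test: if the group labels were meaningless (intervention X does
# nothing beyond placebo), how often would random relabeling of the same
# patients produce a gap in improvement rates at least as large as observed?
pooled = treatment + placebo
n_treat = len(treatment)
n_placebo = len(placebo)
extreme = 0
n_shuffles = 10_000
for _ in range(n_shuffles):
    random.shuffle(pooled)
    diff = sum(pooled[:n_treat]) / n_treat - sum(pooled[n_treat:]) / n_placebo
    if abs(diff) >= abs(observed_diff):
        extreme += 1

p_value = extreme / n_shuffles
print(f"Observed difference in improvement rates: {observed_diff:.2f}")
print(f"Approximate p-value (chance of a gap this big by luck alone): {p_value:.3f}")
```

A small p-value says something about intervention X across the whole group; as noted above, it says much less about where any particular patient will land in the range of outcomes.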

At present, very few of the alternative therapies people seek out (homeopathy, herbal medicines, chiropractic, acupuncture, etc.) have been put through double-blind clinical trials. By the evidence-based physician's lights, there is just no good reason to think they will be effective.

But for the patient who is given the best interventions evidence-based medicine has to offer, with no resolution to the issue that made him seek care in the first place, an alternative therapy may feel like something to try that might help the problem. In the same way that a 95% success rate of intervention X may be of no comfort at all if you're in the 5% not helped by that intervention, you may not care a fig if the alternative you try hasn't been demonstrated to be broadly effective -- you only care if it helps you. And even if your science-minded doctor tells you that it probably won't help you, if you feel better, you feel better. Even if it's just the placebo effect, it's still an effect, and feeling better is at least part of what you're looking for.

This does not mean, however, that physicians (and the medical schools that train physicians) ought to jump headlong into the uncritical acceptance of the whole plethora of alternative therapies. While patients need care from their physicians, they also expect that there is scientific knowledge guiding that care. Instead, it seems perfectly reasonable for the medical profession to subject the alternatives to the same sorts of clinical trials undergone by mainstream medical therapies. (Many such studies have already been done.) Such testing may turn up interventions that are as effective as mainstream interventions (remember that willow bark is an ancestor of aspirin). More importantly, it may turn up interventions that are harmful, about which patients should be informed. What to do with the interventions that prove to be neither demonstrably effective nor harmful? The physician ought to be clear that the best scientific evidence doesn't give any reason to believe that the intervention will take care of the medical issue -- but that there is little reason to expect it to be harmful, either. If the patient chooses to explore the alternative, it's probably better that this happen with the physician's knowledge than on the sly. (It's better for the doctor to know how many parameters are being tweaked at a time, especially if the patient's condition changes.)

There is much more that could be said here -- about who should conduct the clinical trials of alternative therapies (the medical establishment or the people selling the alternative therapies), about the psychological effects that paying a lot for a therapy has on perceived improvement, about the feeling of alienation one gets from exactly three minutes of face-time with the treating physician and how that plays into perceptions of improvement, about the suspicions one might form that physicians are biased toward the pharmaceutical companies by something beyond scientific evidence (as I type this, my gaze falls on "Floxie", a bright blue plush drop of Fluoxetine -- regifted by a physician relative, who got it from a drug company rep). But on the question of how, ethically, a physician ought to handle the issue of alternative medical interventions, I would make the following suggestions:

  1. Be clear about what is known from clinical trials and what it means to the individual patient. This goes for both mainstream and alternative therapies. Being clear about what is known (desired effects and side-effects) and what is uncertain (including just how a particular patient will respond to a particular intervention) is crucial. Saying, "This will work for you" and being wrong about it undermines your credibility.
  2. Call the patient's attention to interventions clearly demonstrated to be dangerous and/or ineffective. This is worth doing even if the patient has shown no visible interest in these interventions; patients may choose not to discuss these with their physician (in the same way they may not be completely forthcoming about how much booze they drink, or how few vegetables they eat). To the extent that risks are known, especially for interventions readily accessible in the marketplace, patients need to know, too.
  3. Let patients know which interventions are still untested, and what their untested status means to the patient. While "We don't know what X does to people in your condition" leaves open the possibility of good outcomes, it also leaves open the possibility of bad outcomes. Patients may have different risk-taking strategies than physicians when faced with uncertainties. This doesn't necessarily mean patients are muddle-headed; they're just concerned with a different payoff (getting better vs. having firm support from evidence). The physician ought not to get paternalistic here, but instead to give the patient a clear enough explanation of what is known and what is not known that the patient can make well-informed use of his or her autonomy.

There are quacks aplenty looking to make a buck on snake oil. But, the physician shouldn't let anger at the quacks spill over into hostility towards or impatience with the patients who are trying to figure out their options. Being upfront about what is known, and what is uncertain, about alternative and mainstream therapies is a good way for physicians to set themselves apart from the quacks.



Tuesday, November 15, 2005

Tangled Bank #41

Tangled Bank #41. Hosted by Flags and Lollipops. It's up already! People can read their science stories early!!

Yes, I'm a little excited.

Scare-mongering, or avoiding miscommunication?

Over at Crooked Timber, there's an interesting post about "quote doctoring" in the context of the battle over the reality of global climate change. In an interesting twist, John Quiggin's post looks at the doctoring of a quote about communicating scientific information to the public to miscommunicate what the original speaker was trying to communicate.

The quote in question, from Stanford University climatologist Stephen Schneider, is an interesting one:
On the one hand, as scientists we are ethically bound to the scientific method, in effect promising to tell the truth, the whole truth, and nothing but – which means that we must include all the doubts, the caveats, the ifs, ands, and buts. On the other hand, we are not just scientists but human beings as well. And like most people we’d like to see the world a better place, which in this context translates into our working to reduce the risk of potentially disastrous climatic change. To do that we need to get some broadbased support, to capture the public’s imagination. That, of course, entails getting loads of media coverage. So we have to offer up scary scenarios, make simplified, dramatic statements, and make little mention of any doubts we might have. This ‘double ethical bind’ we frequently find ourselves in cannot be solved by any formula. Each of us has to decide what the right balance is between being effective and being honest. I hope that means being both.

[Schell, J. (1989). "Our fragile earth," Discover 10(10): 44-50, October.]

The quote-doctoring discussed at CT seemed intended to make it look like Schneider was saying that it's OK to make scary, inflated proclamations to the public (while in your lab coat, naturally) if it serves the greater good of protecting the environment. But, interestingly, a number of the commenters to the post asked: isn't that what the full Schneider quote is saying?

I don't think Schneider is advocating punking the public in defense of Mother Earth. Rather, I think he's trying to capture some of the difficulty of communicating science to a lay audience. Let's go in for the close reading:

"On the one hand, as scientists we are ethically bound to the scientific method, in effect promising to tell the truth, the whole truth, and nothing but – which means that we must include all the doubts, the caveats, the ifs, ands, and buts."
Scientists know that their conclusions are tentative. They are drawing the best conclusions they can with limited data. They know that what they know rests on an intricate inferential edifice, and that new data or wonky instruments or a change in reasonable background assumptions could result in changes -- sometimes big ones -- in what it's reasonable to conclude. When talking to other scientists, there is no problem with including doubts, caveats, ifs, ands, buts. Other scientists understand that these do not mean the conclusions are worthless, nor do they mean that the scientists reporting the conclusions don't know anything. Indeed, the scientists may know quite a lot. But, they are sensitive to the relations between their data, their assumptions, and their conclusions.

To put it more simply: the attitude in science is generally that you ought not to conclude anything that isn't fully supported by the data, and that you never know what the next datum will be. This is why good scientists actually bother to collect data rather than just making it all up. Scientist-to-scientist, the question is not so much "What do you think is going on here?" as "How does the data support this conclusion?" Rather than come to a conclusion that turns out to be wrong, a scientist anticipates ways it might go wrong and data that might undermine it.

"On the other hand, we are not just scientists but human beings as well. And like most people we’d like to see the world a better place, which in this context translates into our working to reduce the risk of potentially disastrous climatic change."
In other words, scientists don't just pursue knowledge for knowledge's sake. Having learned something about the world, they sometimes feel that they should use that knowledge to avoid Very Bad Outcomes.

"To do that we need to get some broadbased support, to capture the public’s imagination. That, of course, entails getting loads of media coverage."
Lay people don't generally read peer-reviewed scientific journals, attend group meetings in the science labs at nearby universities, or talk to their scientist neighbors about issues much beyond lawn care, mass transit, and the occasional school board election. Also, people hardly read the newspaper, maybe watch TV news while they're doing other things, and might miss a science-related news item on the car radio because they're screaming at the guy who just cut them off. If lay people don't seek out scientific information, then scientists who want to share the information (and enlist the help) necessary to avoid Very Bad Outcomes need to seek out the audience. And, unless you want them calling you at home (during dinner time!), this means they need to get help from the mass-media.

"So we have to offer up scary scenarios, make simplified, dramatic statements, and make little mention of any doubts we might have."
Is the mass-media going to give you enough air time (or column-inches above the fold) to explain your conclusions, plus the evidence that supports them, plus the uncertainties and how big these are relative to the strength of your conclusion, in the loving detail a fellow scientist would want to see? Here's a hint: will it help sell advertising? If not, you have to make your point clearly and efficiently. You have to make it in such a way that an audience accustomed to tuning out science will pay attention. You have to make it clear how the information you are presenting matters, in this case as a means to avoid the Very Bad Outcomes. Oh, and you have 60 seconds. Go!

Having tuned out science so effectively, members of the public generally have no understanding of error bars, which means they can't tell big ones from little ones. Rather than recognizing that some doubt is a concomitant of the scientific method, they think that any expression of doubt means that you're only guessing. (Of course, if the point of science were merely to produce wild guesses, there are far cheaper ways to achieve that goal.) But how much time do you have to give your lay audience the full review of the basics of the scientific method, including how certain our inferences from empirical data can be? Will they check out of this review before you even get to the (suitably qualified) conclusions to which you want them to pay attention so we have time to avert the Very Bad Outcomes?

Maybe we start out with the "big picture" of what we've observed, what we think it means, and why we think it matters. Then, given a reasonable amount of time, we can answer questions. Which, I think, is what Schneider is saying here: "This ‘double ethical bind’ we frequently find ourselves in cannot be solved by any formula. Each of us has to decide what the right balance is between being effective and being honest. I hope that means being both."

Given the lay person's state of understanding about scientific methodology, trying to communicate scientific conclusions with all the scientific caveats may itself be a kind of mis-communication, since the lay person may end up with mistaken impressions about the goodness of scientific knowledge and the actual methodology (and its limitations) that goes into producing it. It doesn't help, of course, that scientists from industry and think tanks take pains to make relatively small uncertainties look big -- big enough to drive a Hummer through.

It would help the communication immensely if the public actually understood enough about how science works to be able to comprehend the truth, the whole truth, and nothing but the truth from scientists who would like their knowledge to benefit even those who aren't scientists. But, it would seem, the public doesn't, in general, have that level of understanding. Until we figure out how to do something about that, it really does the least violence to the scientists' conclusions, and to the public, if the scientists translate their findings -- and their degree of confidence in them -- into layman's terms.



Monday, November 14, 2005

Talking the talk vs. walking the walk (plagiarism update)

Earlier, I wrote about plagiarism in the engineering school at Ohio University. A masters graduate, Thomas Matrka, raised concerns about widespread instances of plagiarism in masters theses, while the administration ... well, didn't seem to view it as such a big problem as Mr. Matrka did.

Well, Mr. Matrka commented on my earlier post. I'm reproducing his comment, in its entirety, here:
Dear Dr. Free-Ride,

You pose five simple questions regarding plagiarism. Regardless of the context, the answer to all five questions is an indisputable, YES! An explanation follows.

1) “Do the practices Matrka identified constitute plagiarism?” Everyone knows that copying text word for word from a textbook without quotations is plagiarism. Some cases can be inadvertent, but the many of the cases I have discovered at Ohio University are extended and obviously intentional. There is no honest explanation for verbatim copies that include the same errors as the original, or misleading citations for works other than the actual work from which the text is stolen.

2) “Do the faculty have a duty to deal with past acts of plagiarism… and if so, how?” Plagiarism is a violation of university policy. Failure to distinguish theses containing plagiarism from those that are done honestly perpetuates the deception and ambiguity. How do administrators explain to the student who received a failing grade for plagiarizing a history paper that an engineering student who plagiarized received a master’s degree? How does a researcher properly cite work that is plagiarized? Some universities revoke degrees when plagiarism is discovered; ignoring past cases of plagiarism is inexcusable.

3) “Does pervasive plagiarism in a graduate program undermine the value of a degree granted by that program?” Most alumni, employers, and students expect a university to uphold its own policies. They do not ask, “do you allow plagiarism?” Cataloging theses and dissertations known to contain plagiarism unfairly creates suspicion around the work of all students and faculty.

4) “Do scientists and engineers have a common understanding of what counts as plagiarism?” Graduate students know they are plagiarizing when they open a book, copy it, and submit it for a thesis or dissertation.

5) "Do scientists and engineers agree that plagiarism is a species of scientific misconduct?” I am certain that any scientist or engineer would not appreciate their work being stolen and passed off as original work by another.

Most people agree that acts of plagiarism are very serious and intolerable. The problem at Ohio University’s Russ College of Engineering and Technology is the lengthy history of faculty approved theses and dissertations containing plagiarism. Acting on the evidence will undoubtedly raise questions of fraud and corruption. I will gladly share details with all that are interested.

First, let me thank Mr. Matrka for commenting on the post. It's nice to have the view of someone "on the ground" in a scenario to which I respond.

And (you knew this was coming), let me respond to these comments.

My questions in the original post were not meant to suggest that engineering students (or engineering profs) were playing hooky when the other kids in school got The Talk about plagiarism. I'm sure almost all could provide at least a rough and ready definition of plagiarism if pressed to do so. But, as someone who spends a lot of time thinking about ethics, I'm drawn to questions about the differences between the values people say they are committed to and the values that seem to guide their behavior. In other words, if you talk the talk, but don't walk the walk (meaning, presumably, that you're walking a different walk), are your values the ones you're talking or the ones you're walking? And, does how we talk (and how we walk) change when we move from group to group (say, from an engineering school to the larger university community, or from a graduate classroom to a professional meeting of engineers)?

Mr. Matrka writes: "Everyone knows that copying text word for word from a textbook without quotations is plagiarism." This seems true. But it might be less clear what to make of use-without-citation of ideas, equations, or even common graphs from a widely used textbook. Often scientific training includes absorbing a more or less canonical body of knowledge that will serve as a common framework for discussions. It's conceivable that, after absorbing this knowledge, one might forget exactly which piece of knowledge came from which textbook (especially if that piece of knowledge is included in multiple textbooks, not to mention lectures). And, in some sense, the "textbook knowledge" is assumed to be so much the common grounding of a field that no one cites his source for it -- it's something one could find "in any textbook" on the subject.

I wonder whether the standard view that you don't have to cite ideas from the textbooks (because they appear pretty uniformly in all the textbooks) has made students and professors lazy about the citation of word for word quotations from textbooks. If you were putting it in your own words, you probably wouldn't cite it. Why put it in the textbook authors' words? Maybe because it's so clear in the textbook. Maybe because that way you know you haven't misstated the fact. I can only imagine that the faculty reading theses with uncited word-for-word quotations from textbooks either (1) don't realize that these are word-for-word quotations, since the faculty haven't spent quality time with the textbook in so long, or (2) do realize that they're word-for-word quotations but view this as a minor instance of plagiarism since it's not the kind of thing that would require citation if the thesis writer had expressed it in his own words.

Myself, I agree with Mr. Matrka that plagiarism is plagiarism. But, I understand the thinking behind saying this sort of plagiarism, while undoubtedly a sign of intellectual slothfulness, is not as evil as stealing someone else's description of a new experiment or someone else's bold new interpretation of experimental results.

Mr. Matrka writes: "Plagiarism is a violation of university policy. ... How do administrators explain to the student who received a failing grade for plagiarizing a history paper that an engineering student who plagiarized received a master’s degree?"

The talk: "As part of the university, we're committed to upholding university policy."
The walk: "Well, maybe we're not as committed to upholding it as are other departments."

It does seem here like university policies lose their force if they're only enforced some of the time, or only by some of the departments. Working at a university myself, I know that there are some policies I would defend to the death and others that I think are really wrong-headed ... but, there are plenty of ways for faculty to lobby to change the policies with which they don't agree. Just ignoring those policies, rather than at least voicing your objections to them, doesn't seem like much of a principled stand.


Mr. Matrka writes: "Most alumni, employers, and students expect a university to uphold its own policies. ... Cataloging theses and dissertations known to contain plagiarism unfairly creates suspicion around the work of all students and faculty."

In other words, if an institution (like a department, or a university) is known not to uphold its own policies, then folks dealing with that institution have no good reason for thinking that institution feels any commitment to the values contained in those policies. If you don't do anything about plagiarism, we don't actually get much out of your saying that plagiarism is bad; if you really thought it was bad, wouldn't you do something about it? And, if you have a policy against plagiarism which it is clear you have not enforced, is there good reason for us to believe that you have conscientiously enforced your other policies?

This might be a place where, if the institution in question really does have certain core values embedded in its policies, it ought to just drop the policies that aren't reflective of the values of the institution. It's a choice between being clear about what you stand for and looking like maybe you don't stand for anything at all (save the intake of tuition dollars). Yes, there may be difficulties if, say, the engineering department rejects certain values the rest of the university embraces, but it would probably be better to be clear about this than to have it revealed in a scandal.

Mr. Matrka writes: "I am certain that any scientist or engineer would not appreciate their work being stolen and passed off as original work by another." Again, I'm in total agreement. But, can we say "No scientist or engineer would appreciate X" implies "X is a species of scientific misconduct"? (Let X = having sugar put in your gas tank by a competitor.)

The official government definition(s) of scientific misconduct include plagiarism alongside fabrication and falsification. Most scientists seem to agree that having someone steal your words and/or ideas and present them as her own is a Very Bad Thing. But if the community of science as a whole (or even a subdiscipline within this community -- say, the community of engineers) agreed that this was not a very bad thing in certain circumstances -- say, they agreed that the contents of textbooks were community property for everyone to make use of as desired -- then it might be reasonable to formulate a more precise definition of plagiarism as recognized by the scientific community.

Please note that I'm not claiming that the scientific community or any of its disciplinary sub-communities actually holds this view about textbooks. But if they did, it seems to me, it would be better to be clear about it than to quietly be guided by a different set of values than those recognized in official policies.

If your values are good enough to walk, aren't they good enough to talk? And, if you wouldn't want to be caught talking them, why the heck would you walk them?



Wednesday, November 09, 2005

"Intelligent Design": not even interesting as philosophy


I'm probably too tired to blog, but I've been goaded into it by, of all things, commentary on a local school board election. In Minnesota.

At Pharyngula, here's the commentary on the returns:
All of the candidates have disavowed ID as a fit subject for science courses. It's clearly perceived as a toxic issue and all have tried to distance themselves from it. That's a good sign; unfortunately, it also makes it difficult to tell who is on what side. ... Adams, Langseth, and Maes, have said they don't support ID creationism, but they also waffle with vague suggestions that it's an "interesting idea", and maybe it could be taught in a philosophy course. Sorry, ladies, it is not interesting, and I really wish people would stop treating philosophy as a safe dumping ground for any crap idea that comes along.

Of course, I love it when a non-philosopher speaks up for philosophy as not a dumping ground for crap, and said as much in the comments. But in the comments came the question: "Oh yeah? Where the hell are the philosophers publicly decrying 'Intelligent Design' as a crock? If the History of Science Society is on record as against the ID movement, why has the Philosophy of Science Association been silent on the matter?"

These are good questions. This post offers my best guesses (which, given the above-mentioned fatigue, are not exhaustively researched -- I'm shooting from the hip tonight).


  1. Philosophers are speaking out against ID. I'm a philosopher, and I've gone off on ID repeatedly, both here and in my 3-dimensional existence. Philosophy of Biology (where Michael Ruse blogs) does too. So does Brian Leiter. So does Sahotra Sarkar. Anyone with a good search engine could turn up lots more in the blogosphere. Anyone who gains entry to a building with philosophy department offices could find a bunch just randomly knocking doors. We talk to people. We write letters to the editor. Critical thinking is our bag, baby -- how the hell do you think we feel about ID and the whole ID movement?

    By the way, this open letter to the Dover school board from University of Pennsylvania faculty is signed by philosophers.

    But maybe people have gotten so accustomed to tuning out philosophers (the most loathed of all those annoying "intellectuals" the American people cannot abide) that no one notices us shaking our tiny fists.

  2. Philosophers have been speaking out against ID and creationism for a long time, and think the whole thing is played out. There really are some issues that philosophers have achieved "closure" on, even if folks outside the philosophical bubble are still working on them. Once the good philosophers have worked out all there is to say about an issue like creationism/ID, it's time to move onto a live problem.

    For example, Philip Kitcher wrote Abusing Science: The Case Against Creationism (whose cover you see above) for a general audience. It was published in 1982 and is astoundingly clear. Many of his critiques of creationism carry right over to ID. Does he need to write this book again every decade just to earn his anti-ID cred? Or, would it be fair to ask members of the public to crack a book?

    More recently, Elliott Sober has delivered scathing critiques of the "detection" of design claimed by Dembski. If I could be bothered tonight (what with the fatigue), I could generate a list of philosophical take-downs of both ID claims and the flaky anti-evolution strategies that accompany them. The philosophers have been on this a long time. Anyone remember David Hume's Dialogues Concerning Natural Religion? I think the 18th century counts as old skool, yo!

    Again, this is one of those cases where being tuned out by the public may play an explanatory role. Here, given that the public has not attended to what the philosophers have said about the issue, the philosophers decide to tune out the public and do some more fulfilling philosophical work.

  3. Intelligent Design is pretty sissy even for a philosophical theory. I say this as someone who has the utmost respect for purveyors of the interesting-but-clearly-wrong philosophical theories that one encounters in reading the history of philosophy. (Descartes is my favorite guy in this crowd. He's convinced me I can believe in my own existence, but his attempts to get us back reliable empirical data with which to do science don't work nearly as well as he wanted them to. Still, an interesting problem, and Meditations is a good read!)

    Really, what is ID offering? Every time we can't imagine how something could come about through natural processes, we holler "Intelligent Designer"?

    Now, there may be interesting philosophical questions to be asked in the general vicinity. For instance, what conclusions are warranted when our imaginations fail us? Is "design" something of which humans are or could be reliable detectors? Is there a structural "floor" beyond which reduction does not succeed? But, for all of these questions, a serious philosopher would want to subject them to serious examination, maybe even considering real world cases of various sorts. The ID proponents don't seem to be doing any of that.

    ID isn't a scientific theory. It's not even a convincing imitation of a scientific theory. Not much interest there for philosophers who want to understand scientific theory building and testing.

    Maybe ID is interesting to folks who do philosophy of religion. Undoubtedly ID carries with it an interesting bundle of assumptions about the nature of the Designer. I confess that I am fairly ignorant of philosophy of religion, so I can't say whether ID is or should be a hot topic in that field.


In short: I can't think of why one would include ID in a philosophy class except as an example of how not to do philosophy (or science).

So why is the Philosophy of Science Association not on record against ID? I don't know, but I'll try to find out. Stay tuned!



Sunday, November 06, 2005

Doctor Free-Ride's Film Corner


A member of the Adventures in Ethics and Science Field Team brought me a DVD to review, "Ethics in Biomedical Research". This is a DVD produced by the Howard Hughes Medical Institute. According to the HHMI website, the online catalogue offers "a variety of award-winning publications, videos and other materials—all free." That means this DVD is free for the asking, too.

As the title suggests, the focus of the DVD is the ethical issues around biomedical research. There are four parts: Overview (28 minutes), Animal subjects (19 minutes), Genetic alteration (17 minutes) and Scientific integrity (15 minutes). I was a bit surprised that human subjects didn't get their own dedicated section, but they are discussed in the Overview and the Genetic alteration parts.

The overarching message of the DVD is that ethical issues come up especially where scientists doing biomedical research and the public have overlapping interests (what can be cured, what kind of research is necessary to develop the cure, what will it cost, etc.). However, attention is also paid to ethical questions that come up within scientific communities, quite apart from the public's interests and concerns. The filmmakers make it clear that ethical issues are complicated, requiring serious efforts to balance risks and benefits (including future outcomes which are uncertain). But, the DVD encourages scientists to face the ethical questions rather than setting them aside for someone else to handle. Indeed, the message is that taking concerns from different quarters seriously, and discussing them ahead of time (rather than after something bad has happened) ought to be part of the everyday activity of doing science.

The DVD has the kind of lovely footage you'd expect of laboratory apparatus, imaging of microbiological systems, and well-maintained laboratory animals. (I swear, they even made the fruit-flies cute.) There is also the standard footage of principal investigators sitting in their office chairs holding forth about the responsible conduct of research, members of Congress (and the President) speaking about stem-cell research, recipients of treatments that resulted from biomedical advances, protestors of various sorts, and a few professional ethicists. More surprising: we also get to hear the opinions of scientists who are not principal investigators -- actual students and lab technicians. And, there are at least two separate research groups having laboratory meetings devoted to discussing ethical issues in scientific research. (The coolness of watching such a group meeting is undercut a bit by the shaky-cam.)

As far as content goes, there are some important historical mileposts (the Nazi "medical" experiments and the Tuskegee syphilis experiment, the 1975 Asilomar meeting to evaluate the risks of recombinant DNA research). There is also mention of institutional, federal, and international standards that apply to particular kinds of biomedical research (especially research with animal and human subjects). The DVD does include a brief discussion of the three guiding ethical principles in the Belmont Report, and while it can't, for obvious reasons, present all the salient information from institutional guidelines and policy manuals, mentioning that such guidelines and manuals exist conveys useful information to the scientist and the scientist-in-training.

But, as the introduction to the DVD makes clear, the sections of the DVD "pose questions but few answers." And in this regard, the DVD is extremely impressive. The interviewees present a wide range of opinions about various ethical issues, from germline alteration to authorship, from financial conflicts of interest to the pressures inherent in the competitive world of cutting-edge research. All of the views in the DVD are presented as worth taking seriously, and the filmmakers seem to have made a real effort to find some that might challenge more comfortable assumptions within the world of science. (For example, one of the interviewees in the Animal subjects section is Tom Regan.) The aim of the DVD is clearly not to cram "all the answers" into 79 minutes of footage, but rather to raise questions and to open discussions -- not only between scientists and non-scientists, but also among scientists. The introduction claims that the DVD content is "presented to stimulate more in-depth discussion, such as in a research group meeting or a classroom setting."

Would I use this DVD in a classroom setting? While it doesn't add any content to my course, it might be useful to my students to see scientists talking seriously about scientific issues. Too, seeing the diversity of views the scientists express in the DVD, and their apparent willingness to work with others to figure out the most responsible course of action in different situations would probably be good for the handful of students I have who start out inclined to reject the whole enterprise of ethics because there are "no right answers" and it's all "just made up". But, I could see this DVD coming in handy in a course designed to prepare students to conduct independent laboratory research, especially in the biomedical sciences. (The Scientific integrity section would work well for students or scientific trainees in pretty much any scientific field.)

A notable absence in this DVD is any explicit role for philosophical tools like ethical theories. Possibly the filmmakers thought ethical theories would be of no use to scientists in the trenches ... but then, I have to ask, how should we reconcile this with the tendency to push off students' ethical training onto philosophy departments? Indeed, I have this nagging worry that DVDs like this (and, I really think this is an excellent DVD) will be substituted for discussions in classroom settings or in research group meetings. "Why yes, we take research ethics seriously. See? We have the DVD!"

Lest you think I'm being overly pessimistic, the member of the Adventures in Ethics and Science Field Team found this DVD tucked away on a bookshelf in a research laboratory. Apparently, it had been provided to the research group by the funding agency. It was still in the shrink wrap.



Friday, November 04, 2005

You want to communicate? Then let's communicate!

Over at The Panda's Thumb, Wesley R. Elsberry, who had been attending the Dover trial, reports the following:
Robert Gentry was in the courtroom in the morning, and noticed me sitting with the plaintiffs. At a break, he told me that he was retracting his permission for me to provide his papers on my website. Along the way, he made a rather insulting insinuation that I would alter his materials in some way. Now, back at that press conference, Gentry complained that scientists did not want people to see his papers. I made a good faith offer to host them. I hosted “scientific creationism” files on my BBS back in the old days of direct dial-up, and I certainly did not alter those. I’m a scientist, and I definitely want to rebut the notion that I’m somehow engaged in keeping people from seeing the arguments made by antievolutionists. Far from it. I think antievolution materials make the case for keeping non-science out of science classrooms quite well.

(Bold emphasis added.)

What I was just saying about intellectually honest debate? I think you can see it in this case as well. Scientists cling to the belief that we're all better off if we consider all the arguments -- even (especially!) the ones opposed to our preferred theory or interpretation of the data. And the reason we're better off seeing them is that then we can subject all the arguments -- even our own -- to serious scrutiny, after which we might not have a final resolution but we will at least have better arguments to work with.

The problem is that certain parties seem not to want a serious scientific back-and-forth here. They don't want to subject their arguments to scientific scrutiny. They don't want to respond to any scientific critique, whether on the basis of evidence or logic or methodology. They don't want to be put in a position where they might have to abandon their hypotheses, so they avoid situations in which these hypotheses might be subjected to serious tests. In other words, these folks want no part of the intellectual engagement with the scientific community that is at the heart of the scientific method.

And, that's fine, unless they also want to claim that their hypotheses and arguments are perfectly good science and that those other naughty scientists are ignoring them.

If you want to engage in a scientific debate, engage in a scientific debate. If not, have a spine and be honest about it.



Scientific communication with scientists who might not get it

PZ Myers is blogging on the scientific ethics beat ... so maybe I should blog about zebrafish? But honestly, there's plenty of material to go around on scientific ethics. No worries!

The question today is whether it's a good idea for scientists to grant permission to Creation Magazine to reproduce their figures and video clips. The scientists in question were studying the pollen-launching mechanisms of the bunchberry dogwood, among other plants. Their findings were published in Nature. Chad at Uncertain Principles (from whom PZ got the story) describes a colloquium with Dwight Whitaker, one of the scientists on the project:
During the question period, somebody raised the issue of "Intelligent Design," asking if this is the sort of thing that wing nuts are likely to pick up on, and how this sort of structure evolves. Dwight gave a very good explanation of the evolutionary origin, pointing out that the basic structural elements that make the little trebuchets are present in lots of other plants in the dogwood family, so the change from existing plants would be very small. He also explained how it would be evolutionarily advantageous for this particular dogwood plant to have an effective pollen-launching mechanism, as it's a shrubby little thing that can easily benefit from both wind-borne and insect-borne pollination.

In other words, this pollen-launching system fits nicely into evolutionary explanations. But, given that Creation Magazine has contacted the scientists asking for permission to reproduce figures and videos with which they demonstrated their findings, it seems pretty clear that someone thinks these materials can be used to bolster the case for creationism.

You're a scientist. On the one hand, you have a duty to share your findings with the community of scientists, because the other scientists in the community are supposed to be able to replicate them, chime in with their reasoned responses to them, use them as the starting point for further research, etc. On the other hand, you don't want your findings to be misrepresented -- to be identified as showing something they do not. You especially don't want your findings misrepresented by someone claiming the authority of Science who, you suspect, doesn't actually understand how scientific reasoning or scientific discourse works.

Do you grant the permission to reproduce the figures and video clips?

In this case, the scientists did. Quoth Chad:
In the end, they decided that they had an ethical obligation as scientists to make their data freely available, even to wing nuts (they did insist that the article include pointers to the original source, which isn't peddling nonsense). I tend to agree, but it is an interesting question: If you knew that your work was going to be used as "evidence" to support pseudo-science, would you give the whack jobs permission to use your figures?

Quoth PZ:
I wouldn't have to think twice. I'd give my approval. The data is there, and if I trust it to be an accurate reflection of the real world, of course I would want it disseminated, even if the agent were as untrustworthy as a creationist.

Besides, I'd love to face off against a creationist who tried to use some of my zebrafish development movies, for instance, as an argument against me. It would be a perfect Annie Hall moment.

Although, if I had so much clout and influence that my denial would actually have an impact on their ability to spread their lies, I might have to rethink that. There is no real risk of that happening, though—I'm not the National Academy of Sciences.

Some things to note about these responses:

  1. They opt for disseminating information rather than restricting it -- especially once the information has been published. This is part of the Mertonian norm of communism -- scientific knowledge belongs to everyone in the scientific community.
  2. There is an assumption that, to a certain extent, the facts really do speak for themselves -- if you lay them out for people to see, there are certain interpretations that will seem better (scientifically speaking) and others that will show themselves as transparent efforts to twist the facts.
  3. In insisting on pointers back to the original source of the figures and video clips, the authors are reinforcing their interest not only in providing accurate context for understanding the findings but also in staying involved in the dialogue about the findings and what they mean. In other words, it's within the rights of another scientist to disagree with how we interpret our findings -- but it's also within our rights to disagree with their position and try to explain why our interpretation is better.
  4. While there is some concern that the requested use of the figures and clips might not be in the service of actual science, the authors seem willing to give the benefit of the doubt until presented with contrary evidence. Of course, there is the implicit threat that, if the findings are used in ways that aren't scientifically legitimate, there will be a response from the scientists who granted permission, calling out the other "scientists" on methodological shenanigans.
  5. And, there is just a trace of worry that the authority of the scientists who presented the original findings might be hijacked to make a scientifically crummy claim look scientifically respectable to a lay audience that doesn't know any better.


Indeed, connected to this last point, PZ invokes the decision of the National Academy of Sciences not to grant copyright permission to the Kansas State Board of Education precisely because NAS wanted to head off an attempt to have its scientific authority hijacked. But I think, at least on the face of things, these two cases are different in some important ways.

First, NAS wasn't restricting use of scientific findings by other scientists (or, for that matter, by non-scientists). Rather, it was restricting use of painstakingly crafted educational standards in a way that would have fundamentally changed them, while at the same time claiming that the modified standards were still "based on research and on the work of over 18,000 scientists, science educators, teachers, school administrators and parents across the country that produced national standards as well as the school district teams and thousands of individuals who contributed to the benchmarks." In other words, the Kansas State Board of Education was trying to say, "These standards are an accurate reflection of what all these scientists and science educators say" -- when they were not.

How is that different from a creationist magazine taking pollen-launching mechanism findings and trying to claim that they are evidence for intelligent design? The scientific literature provides a way for the scientists who feel their work is being misrepresented, or misinterpreted, to respond. Publishing a paper doesn't end the discussion in science; the discussion keeps on going until scientists are done with it. On the other hand, educational standards have to be rather more "settled" in order to guide curricula. And, there's no obvious way for scientists to respond to the analogous misrepresentations and misinterpretations in state standards once they've been adopted (nor is there reason to assume that the folks adopting them are scientists who share the same assumptions about intellectually honest dialogue). NAS knew it wasn't dealing with a scientific question so much as a policy question, and it refused to provide cover for the policy by allowing misuse of its standards.

On the other hand, research scientists are working on knowledge that is growing, participating in discussions that are ongoing. There is a risk in including, in these discussions, people who don't really buy into the norms of good scientific dialogue ... but there is also an opportunity. Sometimes intellectual honesty and serious engagement with hard questions wins people over and really brings them into the community of scientists. (Sometimes it also impresses the lay people who are paying attention to the exchange.) Ethically impeccable scientific communication may have a bigger impact on those now suspicious of science than any piece of data.



Wednesday, November 02, 2005

Closer to home

In other news, today I got word from the IRB that my research protocol has been approved and I can start collecting data. Yay me!

With luck, this morning's attempt to fix our department photocopier was successful. I'll be needing that bad boy to duplicate surveys and consent forms.

Imagine how embarrassed I would have been (as a "local authority" on responsible conduct of research) if the protocol had been rejected ...

Completing the misconduct trifecta: plagiarism

As I've just recently been discussing fabrication and falsification in the news, it seems inevitable that I should take up a news story about the "P" in FFP (fabrication, falsification, and plagiarism, the core of the government's definition of scientific misconduct; and no, I don't think the government is thinking of dropping the "P" because the "P" comes between it and its fans ...)

Inside Higher Ed delivers the relevant news item about concerns raised by a graduate of a master's program in engineering at Ohio University. Briefly, Thomas Matrka, the engineer in question, feels that plagiarism in the master's theses at Ohio University — and, even more, complacency on the part of the faculty about acts of plagiarism in these theses — undermines the value of his degree. From the linked article:
Thomas Matrka did not set out to become a whistle blower.

In 2003, 10 years into his engineering career, he enrolled at Ohio to get a master’s degree. He got good grades, but as he worked on his thesis, he says, his adviser, M.K. Alam, the Moss Professor of Mechanical Engineering, repeatedly expressed dissatisfaction with his work. (Alam did not respond to requests for comment for this article.) Hoping for insight into projects that had previously won Alam’s approval, Matrka spent some time in the university’s library in the summer of 2004 thumbing through past theses.

He was struck by what he found. As he looked the papers over, Matrka says, he noted similarities — occasionally blatant, extended ones — between many of them. He discovered four theses, for example, in which the third chapters on “fluent and multiphase models” were virtually word for word. Two were from 1997 and two from 1998. Three others, from as many as six years apart, contained paragraphs and drawings that were almost identical. (Matrka provided pages from some of these theses to Inside Higher Ed for review.)

“Some of them were so blatantly obvious, where there was page after page copied from one another or from a textbook,” says Matrka. Some of the overlap is so obvious, he says, that it would be impossible for the professors who oversaw the theses not to have known about it. “It’s a faculty approval problem,” Matrka says. “It’s hard not to conclude that advisers condoned this.”


There is a cluster of connected questions here:

  1. Do the practices Matrka identified in these theses constitute plagiarism?
  2. Do the faculty have a duty to deal with past acts of plagiarism (e.g., in theses of students already granted degrees) and, if so, how?
  3. Does pervasive plagiarism in a graduate program undermine the value of a degree granted by that program?
  4. Do scientists and engineers have a common understanding of what counts as plagiarism?
  5. Do scientists and engineers agree that plagiarism is a species of scientific misconduct?


What is plagiarism? On Being a Scientist: Responsible Conduct in Research, Second Edition (1995) describes it as "using the ideas or words of another person without giving appropriate credit". So the central question is just what constitutes appropriate credit. And, it's not inconceivable that the standards of appropriateness are different in different contexts. The context in which a master's thesis is written may be quite different from the context in which a journal article, or a grant proposal, is written. Statements that may be regarded as part of the common pool of knowledge of credentialed professionals may be exactly the kind of thing a graduate student is expected to explicate in a thesis. In your manuscript for the journal, you probably don't footnote every textbook fact that you mention in your discussion. However, if you use the exact wording of the textbook author, or reproduce a figure exactly ... it seems like it would be better to err on the side of citing it, just to avoid confusion about whose words or whose figures they are.

So, is it clear to engineering professors and science professors (and working engineers and scientists) what "giving appropriate credit" amounts to? Is it clear to their students? It seems that, at the very least, what's going on at Ohio University indicates a mismatch in expectations between professors and graduate students. Matrka sees practices that look like plagiarism to him, and he expects the faculty to see this as a violation of the norms of the community and do something about it. The faculty ... well, possibly see some of these practices as plagiarism, but they don't seem to think they need to do very much to address past acts. And, it's not clear what they have in mind to head off future such acts ... if, in fact, they see these acts as problematic at all.

[Dennis] Irwin [engineering dean at Ohio University] says the college has begun “briefing graduate students on the nature of plagiarism, its consequences, and how to avoid plagiarizing others’ work,” and that it now requires electronic submission of theses and dissertations and a statement of originality signed by all students. Beginning this winter, he says, the college will “begin using comparison software to screen all of the theses submitted against all of those we have in electronic form.”

What the college will not do, he says, is ask his faculty to review what could be “tens of thousands of pages” of hard-copy theses and dissertations in the library. That could take a huge amount of faculty time for an uncertain payoff, he says, “so you can probably see our problem in meeting any demand that all instances of plagiarism be removed from the library.”

Irwin adds: “I know Mr. Matrka is not satisfied with our actions to date, but all I’ve heard are accusations, and I haven’t been presented with any evidence that those accusations are true.”

Part of the problem, the dean says, may be a “difference in interpretation between what [Matrka] considers to be plagiarism” and the university’s own interpretation. With technical works like engineering theses, he says, “there are going to be similarities, particularly in equations and diagrams.” He adds: “If the same two people worked on the same experiment or apparatus, it is conceivable that they would jointly develop schematic drawing of that that might be used in both of their theses.”

Matrka admits that that possibility could explain some of the cases that looked fishy to him – which is why he has encouraged the university to turn the review over to faculty members more knowledgeable than he is.

But many of the other examples he has identified, he says, don’t require an expert’s eye. “Some of this stuff you wouldn’t get away with in high school.”
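The dean's mention of "comparison software" is doing a fair bit of work here, so it may be worth seeing how little machinery a first-pass screen actually requires. Here is a minimal sketch in Python (my own illustration, not the tool Ohio University plans to use, and the toy "theses" are invented): each document is reduced to a set of overlapping word n-grams ("shingles"), and any pair of documents sharing a suspiciously large fraction of shingles gets flagged for a human reader.

    # Minimal sketch of text-overlap screening via word n-gram shingles.
    # Illustrative only -- not the actual software any university uses.
    import re
    from itertools import combinations

    def shingles(text, n=8):
        """Return the set of n-word shingles in a document (lowercased, punctuation ignored)."""
        words = re.findall(r"[a-z0-9]+", text.lower())
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

    def jaccard(a, b):
        """Jaccard similarity of two shingle sets (size of intersection over size of union)."""
        if not a and not b:
            return 0.0
        return len(a & b) / len(a | b)

    def flag_overlaps(documents, n=8, threshold=0.05):
        """Compare every pair of documents; return pairs whose shingle overlap exceeds the threshold."""
        shingle_sets = {name: shingles(text, n) for name, text in documents.items()}
        flagged = []
        for (name1, s1), (name2, s2) in combinations(shingle_sets.items(), 2):
            score = jaccard(s1, s2)
            if score >= threshold:
                flagged.append((name1, name2, score))
        return sorted(flagged, key=lambda item: item[2], reverse=True)

    if __name__ == "__main__":
        # Toy corpus: "thesis_c" copies a passage from "thesis_a" nearly verbatim.
        corpus = {
            "thesis_a": "The multiphase model treats each phase as an interpenetrating continuum, "
                        "and momentum is exchanged between phases through an empirical drag law.",
            "thesis_b": "We measured heat transfer coefficients across a range of Reynolds numbers "
                        "and compared the results with established correlations from the literature.",
            "thesis_c": "The multiphase model treats each phase as an interpenetrating continuum, "
                        "and momentum is exchanged between phases through an empirical drag law, "
                        "as implemented in the solver.",
        }
        for doc1, doc2, score in flag_overlaps(corpus, n=6, threshold=0.05):
            print(f"{doc1} vs {doc2}: Jaccard similarity {score:.2f}")

A screen this crude would catch "page after page copied from one another," but it cannot distinguish legitimately shared material (standard equations, boilerplate from a common advisor, jointly developed schematics) from plagiarism; that judgment still falls to the faculty.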


That the college of engineering has, apparently, just begun talking to graduate students about the nature of plagiarism suggests that in the past it was assumed that what counts as plagiarism in engineering was self-evident — or, perhaps, that plagiarism was not very important. The clear “difference in interpretation between what [Matrka] considers to be plagiarism” and the university’s own interpretation gives the lie to the assumption that what counts as plagiarism is self-evident. If plagiarism is something the college of engineering regards as important, there needs to be a more explicit discussion of what it is and why it matters.

And, if the message about why plagiarism matters is to be taken seriously, this may commit the faculty to doing something about past acts of plagiarism. Leaving theses with plagiarized pieces in the library sends a message about the acceptability of plagiarism. (Who looks at the old theses the most? People writing their own theses. Why? Because they're trying to discern what a good thesis looks like.) Also, tolerating plagiarism sends a message to the people who committed it that this is how we do it in this field. If you discover that someone who graduated 10 years ago made inappropriate use of sources, I'm not sure you yank their master's degree, but it seems like, at the least, you ought to contact them and make it clear why they ought not to make similar inappropriate use of sources in the future.

If plagiarism doesn't matter to engineers, that's one thing. Maybe they should just go on record as saying, "We think plagiarism is no big thing. Fabrication and falsification are right out, but ownership of words or ideas is a canard." Then there would be no confusion.

However, if plagiarism does matter to engineers, they have to do something about it. They have to be clear about what constitutes appropriate use and what does not, and they have to, as a community, bring the hammer down on inappropriate use. Otherwise, the community will be judged by its actions rather than its words, and the mismatch between the two won't earn the community much respect.

Scientific parents: Please take the opportunity to talk about plagiarism and proper attribution with your scientific offspring today!



Tangled Bank #40, for your examination.

Hop up on the table in The Examining Room of Dr. Charles and have a look at Tangled Bank #40.

You don't need to put on the paper gown unless you really want to ...