Adventures in Ethics and Science
Sunday, January 22, 2006
Welcome Koufax voters!
I'm excited to be one of the many fine nominees for the "Best New Blog" Koufax Award for 2005. As noted in the previous post, "Adventures in Ethics and Science" has a cool new home over at ScienceBlogs. But, a lot of the good stuff is still here where it all started.
Because I know you want to make an informed decision about your vote (or, you know, put off doing actual work for a little while), here's a quick tour of my posts here. A few of these are big-traffic posts via search engine results, while others are posts that are dear to my heart (the "unsung heroes" of the archives). It's my hope that these will give you a taste of some of the issues in ethics and science that seize my hands and make me blog.
Scientific Misconduct (fabrication, falsification, plagiarism, and their pals)
When scientists get caught doing bad stuff -- especially when it's in the news -- it tends to set me off. I rant, but I also try to draw some lessons from it.
- Aw Mom, scientific misconduct again?! (my initial thoughts on Luk van Parijs)
- Lies that "don't matter"? (Van Parijs follow-up)
- Completing the misconduct trifecta: plagiarism
- Talking the talk vs. walking the walk (plagiarism update)
The Farm League
Some of the instances that might not rise (sink?) to the level of full-blown misconduct, but that are slimy enough that they ought to make responsible scientists glower.
- Communicating science to the public? More like an advertising blitz.
- Little white lies to the popular press (follow up).
- Science, meet capitalism.
- When unfalsifiability is your business plan.
Research with animals
Research with human subjects
- Face transplants
(Written in March 2005, before the recent face transplant in France.)
- Who'll protect kids from the EPA?
- Students as a vulnerable population.
Playing well with other scientists
A round-up of day-to-day issues in the responsible conduct of scientific research.
- Stem cell drama continues (and "magic hands" are raised) (about the tricky balance between having great bench technique and having an experiment that's reproducible)
- All kinds of trouble: more on the Korean stem cell saga (wherein I examine the issue of authorship and responsibility)
- Authorship matters. (about ghostwriting in the medical literature)
- Science blogs for intra-scientific community communication.
- Crackpottery, etiquette, and ethical duties. (When things get awkward at the professional conference ...)
- Who's in the club, and why does it matter? (Should the scientific community worry about its gender makeup?)
Teaching science, teaching ethics
- Part of the solution, or part of the problem?
- The problem with cheaters.
- What's the big deal about high school biology class?
- When parental involvement is maybe a bad idea …
Science for the rest of us
The public funds science; what are the public's interests where science is concerned? And what kind of duties do scientists have when it comes to getting the public to understand what science is up to?
- Academic freedom, academic responsibility.
- Policy decisions and scientific uncertainty.
- Uncertainties, prudent planning, and duties.
- Science and priorities.
- Communicating science to the public.
- I miss Sir Karl! (falsifiability and loopiness need not be mutually exclusive)
That should give you a feel for where I've been so far -- I'm looking forward to taking on a lot more, and I welcome your comments on all of it.
UPDATE: I think all the links are working properly now. Thanks to commenters and emailers who pointed out the broken ones.
Wednesday, January 11, 2006
Big news for this blog!
The cool new digs are being hosted by Seed Media Group (which also puts out Seed Magazine). While the rants will still be mine, new tools will be coming online over the course of the coming month(s) that are designed to better serve both blogger (me) and reader (you) -- and, we hope, will allow folks to navigate across blogs in the network in some novel ways.
I'm excited to be making this move because ScienceBlogs includes more than a dozen other great science bloggers (people you ought to be reading if you're not already) and is gunning to be "the largest and most engaged global conversation about science". This kind of conversation is exactly the kind of thing I think might improve science literacy, augment quality science journalism (while helping people tell good science journalism from dreck), and give non-scientists a better understanding of how the scientific marketplace of ideas works when it's working well. Also, it promises to be a rollicking good time!
This site will remain up (knock wood) as the archive of "Adventures in Ethics and Science" up to this point, but my new posts will go up on the new site. Two other tiny features you'll find on the new site:
- My real name.
- An unblurred picture of me (though it is about 20 years old ...)
What I do for my loyal readers!
Finally, since it's International De-Lurking Week, y'all can tell me what you think about the whole thing, either in this comment thread or in the comments on the new site.
Tuesday, January 10, 2006
Using unethical means to expose unethical conduct.
An interesting piece of the Korean stem cell fiasco that escaped my notice the first time around is that the Korean investigative television program, "PD Notebook," that exposed the faking of photographs for the now-discredited Science article did so using techniques that violated journalistic ethics.
Take a moment to let that sink in.
Here's a lab that is reporting what looks to be great success with cutting edge scientific research. Then Choi Seung Ho, producer of "PD Notebook," gets an anonymous email from someone who claims to be a member of Hwang Woo-suk's laboratory, claiming that Hwang faked data in the Science paper. A good investigative journalist wants to get to the bottom of this to find out whether the stunningly successful research group really is stunningly successful or whether its fame rests on a pile of falsified data.
So, you have to talk to some of Hwang's co-workers, right? The question of journalistic ethics turns on how you talk to them. Here's what James Brooke writes (in the International Herald Tribune):
Choi is in the journalistic doghouse partly for tearing down a national icon, a charismatic, handsome scientist who was the modern, successful face that Koreans yearned to show to the world.
But he is also in the doghouse partly for allowing South Korea's ultra-competitive journalism world to spur him to use techniques that tarnished his work.
In one critical interview, a former co-worker of Hwang, now working at the University of Pittsburgh, is led to believe that his former boss is about to be arrested back home for fraud. When the worker, Kim Seon Jong, starts to talk about faking photographs for the Science article, he can be seen nervously asking if the interview was being filmed. No answer comes from the producer, who is holding a bag with a hidden camera. Instead, the producer hints that if he cooperates with MBC, they will protect him from arrest. To this date, no one has been arrested in the case.
For those keeping score at home, we have:
- Conveying the (false) information that the interviewee's boss is being arrested for fraud in the research of which the interviewee was a part.
- Suggesting (falsely) that cooperation in the interview (which would presumably include ratting Hwang out) will protect the interviewee from (impending) arrest.
- Filming an interviewee who did not want to be filmed without alerting him to the fact that he was being filmed or securing his permission to be filmed.
Unauthorized filming with a wee bit of coercion thrown in. Classy! As reported in the International Herald Tribune:
The Korean Broadcasting Commission reviewed the tapes, and a spokesman said the panel had "judged that it is highly likely that the program violated regulations on fairness, objectivity, human rights and statistics and public surveys under the Broadcast Law."
So, "PD Notebook" exposed Hwang's fabrications (which subsequent investigation by Seoul National University has determined really were fabrications), but used unethical means to secure the interview central to this exposure. You might wonder why this matters. Why should folks engaged in deceiving the scientific community, the business community, and the Korean public (who reportedly viewed Hwang as something of a national hero) have any right to expect other people to be honest with them? Isn't there poetic justice in lying to the liars to get the information with which to expose their lies?
Maybe there is. However, what kind of impact does this have on the next scientist with concerns about the boss's misdeeds? Is this scientist going to be brave enough to email a tip to a journalist, knowing that this might expose him or her to coercive treatment and secret videotaping?
After all, it's not like the interactions between the many scientists on the research team (including "senior author" Gerry Schatten) were, by themselves, enough to head off or correct the fabrication and falsification. There is a legitimate question as to whether the depths of the deception would ever have been revealed if the press had not gotten involved. But rough handling of the journalists' sources of information this round will make it that much harder to find willing sources of information next round.
And, it's worth examining whether this departure from journalistic ethics is an aberration or part of a pattern. Here's what JoongAng Daily had to say:
The roles of producers and journalists are clearly separated in other countries, while producers play both those roles in "PD journalism" in Korea - they go out in the field, investigate, edit the content and write the scripts for the final program.
PD journalism was a byproduct of the authoritarian Korean regimes, media experts said. Academics in Korea do not reject the notion that PD journalism contributed greatly in promoting democracy here, where media have long been censored.
The distinctive form of PD journalism has both some good and bad points, media experts said. "PD journalism was referred to as a window to the truth and an excessive expression of subjective opinions at the same time," said Yoon Ho-jin, a senior researcher with the Korea Broadcasting Institute.
"Current affairs programs made by producers have played the role of critics of the government, and that role was valuable in some ways," said Kim Dong-yule, a senior researcher with the Korea Development Institute. "But the times have changed now, and producers must refrain from investigating with a conclusion already in mind."
"PD journalism has contributed to the maturing of our society with in-depth reporting," a senior executive at MBC said. "But, there have been many cases in which such programs were shaky in gate-keeping and verification."
In blog entries of yore I have noted that it is a serious problem when science writers make up their minds about how the story's going to go, then contact the scientists looking for quotes to support the story, sometimes even ignoring (or spinning) the quotes from scientists that don't support the stories they've already decided to write. In this respect good investigative journalism ought to be a lot like good science: you can start out with a hypothesis to guide your investigation, but in the end you must take pains to let your conclusions be guided by the evidence. "Taking pains" here means seriously considering the likelihood that your hunch is wrong, given the facts you've amassed. In this particular instance, it might mean considering whether coercion might make a frightened scientist offer testimony that isn't reliable (because he thought that this was the testimony that might save him from some threatened bad outcome). It would certainly mean, as well, seeking information that might explain behaviors that look suspicious. (This might mean asking Hwang or other co-workers to explain the Science photographs, for example.) Starting an investigation with your conclusion set in stone is just as intellectually dishonest as making up the data you report.
I suppose it's understandable why a reporter -- even one who hadn't made up his or her mind in advance -- might pressure an interviewee in order to obtain information that might be really hard to get otherwise. Indeed, given how good scientists can be at keeping secrets (some information is proprietary, after all, and other information you keep close to the vest until you're ready to publish it so as not to get scooped), it's possible that "PD Notebook" provided the crucial break that brought the house of cards tumbling down. But this just doesn't strike me as a sustainable model for keeping scientific researchers honest. Honest communication with the public (and a recognition that science is ultimately accountable to the public) is an important piece of fostering ethical conduct and rooting out misconduct. However, even more effective would be honest communication between scientists and a recognition that each scientist is accountable to the whole community of scientists. Tipping off a reporter is one way to try to head off misconduct in your lab, but there need to be mechanisms for tipping off folks in the community of science who have a vested interest in eliminating misconduct from science and its products from the scientific literature.
Knowing scientists as I do, it seems to me that pissed-off scientists could deliver a much better smackdown to a fabricator than any media outlet ever could.
Monday, January 09, 2006
Working to do human subjects research right.
Today, some news that makes me smile (and not that bitter, cynical smile): UCSF has announced that it has received full accreditation for its program to protect research participants from the Association for the Accreditation of Human Research Protection Programs (AAHRPP).
This is a voluntary accreditation -- nothing the federal government requires, for example -- that undoubtedly required a great deal of work from UCSF investigators and administrators to obtain. (AAHRPP describes the process as including a preliminary self-assessment, followed by appropriate modifications of your institution's human subject protection program, preparation of a detailed written application, an on-site evaluation of your program by a team of experts, and review of these materials by the AAHRPP council on accreditation.) Here's what the UCSF news report has to say about the process:
While the US Department of Health and Human Services requires federally funded medical research centers to provide written assurance that all human research is in compliance with federal regulations and is guided by national ethical principles, the AAHRPP assessments are more rigorous and comprehensive.
AAHRPP reviews all programs involving research participants or their biological specimens -- not only those programs that are federally funded -- and its assessment includes additional protections not required by the federal agency, such as community education and quality improvement activities.
The AAHRPP accreditation process took more than a year of preparation and several months of review, including evaluation of complex research protocols and related safety and privacy measures, as well as the caliber of training of investigators and the strength of the institutionwide commitment to human subjects' protection.
In the process, scrutiny was brought to bear on the effectiveness of interactions between various units needed to ensure the best protection, such as the institution's investigational drug pharmacy program, clinical research centers, committee examining potential conflicts of interest, office overseeing sponsored research, and overall medical center organization.
Notice that the focus here goes beyond whether an institution is following the letter of the law. Federal regulations on research with human subjects only extend to federally funded research. The AAHRPP is looking at all the research programs with human subjects at an institution -- whether funded by the feds, private donors, pharmaceutical companies, or any other entity -- to evaluate the human participant protections. And, as noted above, not only is AAHRPP asking for "additional protections not required by the federal agency, such as community education and quality improvement activities," but it is also attentive to institutional features (e.g., "the effectiveness of interactions between various units needed to ensure the best protection") that are connected to how well programs to protect research participants function. The question is more than whether the institution is in compliance with standards right now, but whether the institution is set up in such a way that continued compliance is a robust part of the way things are done.
Why, you might ask, in the already busy crush of trying to get research done, would an institution take on the extra burden of applying for an accreditation that is not required of it? Part of the answer may be in the attention to how interactions between different units of an institution make compliance more natural. It's easier, in the long run, not to have to struggle to meet the necessary federal regulations -- having a system where the different units are all looking after subject protection, while maybe requiring more effort to set up, is a lower maintenance way to stay in compliance. Moreover, getting this sort of voluntary accreditation sends an unambiguous signal to the people in your organization that the institution is really committed to protecting human subjects, not just grudgingly meeting a bunch of onerous regulations imposed by the government. And, as AAHRPP points out,
Each time a new organization becomes accredited, the global benchmark for human research protection in science is raised.
In other words, working to make things good for human subjects at your institution is a way to make things better for human subjects everywhere.
The AAHRPP website makes for interesting reading, especially its discussion of five domains of standards for human research protection programs (Organization; Research Review Unit, including IRBs; Investigator; Sponsored Research; and Participant Outreach). Also, their advice for an institutional self-assessment looks like it would be valuable for any institution doing research with human subjects, regardless of whether that institution wanted to seek the AAHRPP accreditation.
Kudos to UCSF. Keep making us proud!
Sunday, January 08, 2006
Is all animal research inhumane?
I received an email from a reader in response to my last post on PETA's exposing of problems with the treatment of research animals at UNC. The reader pointed me to the website of an organization concerned with the treatment of lab animals in the Research Triangle, www.serat-nc.org. And, she wrote the following:
Some people may think that PETA is extreme. However, the true "extreme" is what happens to animals in labs. If the public knew, most would be outraged. But, of course our government hides such things very well. Those researchers who abuse animals in labs (which is ALL researchers, by my definition), cannot do an about turn and go home and not abuse animals or humans at their homes. Animal researchers are abusers, and there is enough research on people who abuse to know that abuse does not occur in isolation. The entire industry must change.
There are a bunch of claims here, some of which I'm going to pretty much leave alone because I don't have the expertise to evaluate them. Frankly, I don't know whether even the folks we would all agree are abusing animals in the lab are full-fledged abusers who cannot help but go forth and abuse spouses, children, family pets, neighbors, and such. (I'm not a psychologist or a sociologist, after all.) And, while I'd like to believe that the public would be outraged at unambiguous cases of animal abuse, the public seems not to be outraged by quite a lot of things that I find outrageous.
I would, however, like to consider the claim that ALL researchers who do research with animals are abusing those animals.
First, I imagine there may be some research projects involving animal subjects where it's hard to locate an actual harm to the animals. (Consider, for example, positive reinforcement experiments that train pigeons to type. The pigeons are put in the unnatural position of having to interact with a typewriter, but they get food, are protected from predators, etc. Is this a worse life than scrounging through garbage and avoiding city buses? What if we throw in a daily hot stone massage?) For the sake of argument, let's set those aside.
I take it the animal research that is of real concern is that which brings about pain in the animals, or that which ends with the animals being "sacrificed" (i.e., killed). If we agree that animal pain and killing of animals are harms to be avoided (and not everyone will -- the U.S. is a meat eating nation, after all), does that mean that all research that causes animal pain or the killing of animals ought to be stopped?
We'd need to consider the sorts of harms that might come from ceasing animal research. It would, for example, have a marked effect on biomedical research -- including research with human subjects. The very first item in the Basic Principles in the World Medical Association Declaration of Helsinki reads:
Biomedical research involving human subjects must conform to generally accepted scientific principles and should be based on adequately performed laboratory and animal experimentation and on a thorough knowledge of the scientific literature. (Emphasis added.)
In other words, ceasing research with animals heads off much new biomedical research with humans. You can't test a new drug on humans if it hasn't yet been tested in the appropriate animal system, no matter how promising that new drug may be. Unless the WMA were to make significant revisions to the Declaration of Helsinki, an end to animal experimentation might mean an end to drug development and other lines of biomedical research. Ending animal suffering in the lab might mean there is more human suffering that we're unable to address with medical treatment.
Of course, there's the legitimate question of whether animal models are actually adequate models for the human conditions such biomedical science aims to address. (I've been told that we could cure most mouse cancers in fairly short order, but we're still quite a ways off on the human cancers the mouse cancers were intended to model.) Lately, researchers have developed an array of alternatives to animal research (in vitro studies, computer models, etc.), but these approaches have their limits, too. Surely the best system for studying human diseases and their treatments would be humans, but experimentation on humans is no less ethically problematic than research on animals.
Goodness of fit between a model and the system that is the target of the modeling is something scientists have to grapple with all the time. There are always ways that the model departs from the target. The practical question is how to work out models that get the important features of the target right. It may be the case that, imperfect as animal models are, they are still the best models we have for certain phenomena we are trying to figure out. But especially when our model systems come with ethical costs (not only animal research but epidemiological studies with humans), it seems like critically examining the model and keeping an eye out for alternative models that might work better is a good idea.
One could object that some of the research done with animals is simply unnecessary. For example, two of the research studies flagged by SERAT as especially problematic are a study of binge-drinking using rats and a study of gambling using primates. Even if binge-drinking and gambling are human behaviors that are problematic and need to be addressed, it's not obvious that the only ways to address them require a complete understanding of the underlying physiological mechanisms of these behaviors. Even if the physiology of binge-drinking or compulsive gambling were to remain something of a black box, there might be ways to change the environment to head off these behaviors.
Scientists might respond that knowing the physiological mechanism is of value even if we don't need that knowledge to solve the problem of heading off harmful behavior. Sometimes knowledge is a good in itself. However, if that knowledge comes at a cost, it's worth considering how that cost stacks up against the value of that knowledge. (Consider the costs of the Tuskegee syphilis experiment, whose aim was knowledge about the natural history of untreated syphilis. Certainly, such information would have value, but its value didn't justify the harms it brought to the subjects of the experiment.) So, the fact that scientists are curious and would like to get a piece of information does not, in itself, justify all the costs that getting to that knowledge might incur.
Careful readers that you are, you will have noticed that I've taken a consequentialist approach to this issue -- one of balancing competing harms and benefits. While this is how most of those who worry about ethical use of animals usually frame the problem, there are others who feel that a more Kantian approach is in order. (Maybe "Kantian" is not quite the right label, since Kant was concerned with respect for persons, and with not undermining the rational capacity in oneself or others. But stick with me here.) In research with human subjects, there are some lines you are not allowed to cross, regardless of the potential benefits of crossing them. For example, here are two items in the Helsinki Declaration's principles governing non-therapeutic research with human subjects:
1. In the purely scientific application of medical research carried out on a human being, it is the duty of the physician to remain the protector of the life and health of that person on whom biomedical research is being carried out. ...

4. In research on man, the interest of science and society should never take precedence over considerations related to the well-being of the subject. (Emphasis added.)
The health and the life of a human research subject are always to be protected by the researcher. No matter what the payoff might be, whether in terms of solving practical problems for society or building scientific knowledge, you can't sacrifice the subject's health or life. This is a non-negotiable point -- like Kant's respect for persons -- around which your consequentialist calculations have to work.
Perhaps there are such lines we ought to recognize with animals in scientific research. I think when they are working as they should, IACUCs are trying to find and respect those lines. But it is also clear that we live in a society that has few qualms about doing fairly nasty things to animals for the sake of cheap food production, or entertainment (think cock-fights), or biker-wear. That society at large doesn't recognize a clear line separating appropriate and inappropriate ways to treat animals doesn't mean there isn't a line there we should recognize (as feminists, anti-racists, and the like will be happy to explain to you). My own hunch is that within a few generations, we may get to a point where certain ways of treating animals that are prevalent right now become unthinkable. But, not having gotten to that point makes it harder to argue for thoroughgoing changes in the rules for animal experimentation. As the situation at UNC illustrates, sometimes it's hard to get people to even follow the rules that are in place.
That said, let me suggest again that it is a strength of the community of scientists that scientists don't all march in lockstep on the matter of what humane treatment of laboratory animals requires. Because different scientists have different views on this matter, they're more likely to actually talk to each other about it. In the course of these talks, scientists sometimes come up with clever strategies to get more scientific information with less -- or no -- animal harm. Given that scientists, as a group, have shown themselves to be quite good at answering hard questions using limited data, it might not take all that long for them to work out good ways to eliminate the need for animals in certain research projects, and to minimize the need for animals in others.
And scientists probably ought to care about animal-use worries of the public, not just of other scientists. At the same time, though, scientists should be ready to explain to the public how their animal use is essential to the research, and how that research benefits the public. Then, if members of the public disagree with the scientists (e.g., deciding to forego a bird flu vaccine if it involves animal research of which these members of the public do not approve), that's their choice. If no one used a medical treatment because of ethical qualms, the demand for that treatment would evaporate, and the researchers would turn their attentions elsewhere.
So, to my email correspondent: I'm not sure I think the problem of animal research is as black-and-white as you think it is. But, I'm inclined to think that science is moving toward higher standards for ethical use of animals, at least gradually. And, I think continued discussion on this issue is how that movement happens.
Thursday, January 05, 2006
Just because they're out to get you doesn't mean they don't have a point.
Since I'm in the blessed wee period between semesters, it's time to revisit some "old news" (i.e., stuff that I had to set aside in the end-of-semester crush). Today, a story from about a month ago, wherein Rick Weiss of the Washington Post reports on the University of North Carolina's troubles obeying animal welfare regulations in its research labs.
You knew that the National Institutes of Health had all sorts of regulations governing the use of animals in research (and even an Office of Laboratory Animal Welfare, whose webpages have a bunch of helpful links for those involved in such research), right? You'd assume that the folks running a major research university (like UNC) would know that, too. Because you know who else knows it? PETA. And somehow, PETA had an inkling that researchers at UNC were maybe not taking the regulations on animal use all that seriously.
From the WaPo article linked above:
At the center of the storm is the University of North Carolina, which in the past four years has twice had the misfortune of hiring animal laboratory technicians who turned out to be undercover agents for People for the Ethical Treatment of Animals.
The first instance produced embarrassing video footage taken by the employee (one clip showed a lab worker using scissors to cut the heads off of baby rats while saying: "I don't put them to sleep. Maybe it's illegal, but it's easier."). It led to a damning report from the federal Office of Laboratory Animal Welfare. But no sanctions came down from that office, part of the National Institutes of Health, because by the end of that investigation OLAW had determined that the problems had been corrected.
By then, however, PETA had managed to have a new agent hired by UNC. After an 11-month tour of duty, that employee released a new batch of evidence, including more photos and videos, and OLAW opened a new investigation.
The recently released report of that second investigation is remarkable for its similarity to the first report, PETA activists note -- including its conclusion that no action needs to be taken because of reassurances that the university has again resolved the problems.
"We looked at the new report and thought, 'Did they just cut and paste the old one or what?' " said Kate Turlington, the PETA investigator who conducted the first undercover operation, during which she wore a hidden video camera while caring for sick lab animals and talking to co-workers.
Goodness gracious, where to start?
First, the NIH, whose regulations we're talking about. It seems like only yesterday I was blogging about problems that flow from having rules without meaningful enforcement. Maybe the NIH is applying major pressure to UNC behind the scenes to really address the problems with the treatment of its laboratory animals. (Maybe NIH can send in undercover agents as lab technicians!) Or maybe, being part of the federal government, NIH is not having such an easy time functioning as we would like it to, what with resource issues and political pressures.
Not that I'm at all cynical about the government these days.
What about the UNC employees whose conduct PETA recorded and brought to light? It seems pretty clear that they were not only violating the regulations, but were also aware that they were violating the regulations. ("Maybe it's illegal, but it's easier.") Folks, this isn't lobbying or energy trading. This is laboratory science. It would also be easier to experiment on just two rats rather than hundreds. Or, for that matter, to just say you experimented on some rats and make up some persuasive data. Easy isn't what's driving the process here, and breaking the law is frowned upon.
So, UNC gets caught violating animal welfare regulations. They get hit with the "damning report" from OLAW. No sanctions from NIH yet, but the unpleasant publicity from PETA. You would think at that point that someone in charge at UNC would take serious action to make sure everyone doing animal research at UNC cleaned up his or her act. Otherwise, you'd be risking sanctions from an NIH angered that the "damning report" from OLAW had been ignored. And, you'd be risking putting your institution in a position where what PETA claims about it is true. Which, from a public relations point of view, seems like a mighty big risk.
All of which makes the subsequent PETA exposé pretty damn embarrassing (or should, if any self-awareness and shame is still possible among those who oversee animal research at UNC). It almost looks like, institutionally, UNC doesn't care about animal welfare regulations. This is a problematic stance if, say, you'd like to take money from the federal government to support your research with animals. Moreover, paying lip service to the regulations without making sure they are followed is lax management at best; if you have a principled disagreement with the regulations, presenting reasoned arguments against them is much less slimy than winking at them and taking the money.
The most horrifying part of this all for UNC has got to be making PETA look (comparatively) reasonable. PETA doesn't want any animals used for research (or food, or clothing). PETA is not generally viewed as a voice of reason or moderation. PETA would have you believe that research with animals is usually inhumane, and that scientists and lab technicians can't be trusted to follow the animal welfare regulations.
Thanks to UNC, they have some hard evidence to back up that claim. This, coupled with the NIH's seeming unwillingness to actually enforce the regulations, has got to make things harder, at least on the PR front, for other scientists doing research with animals, even those who follow the animal welfare regulations scrupulously. When the public sees this kind of story, what's that going to do to the center of gravity of public opinion on animal research in particular and on the trustworthiness of science in general?
But now, the very best part of the WaPo story:
Tony Waldrop, UNC's vice chancellor for research and development, said that many of the problems found in the second inspection were remnants of problems from earlier on, which were still in the process of being corrected. "It was not new information," he said, noting that a recent follow-up inspection resulted in "an absolute clean bill of health and full accreditation."
Perhaps most important, UNC says it has updated its screening and background checks for new hires.
In other words: It takes time to get the lab techs to actually treat animals humanely rather than breaking the law because it's easier; it's not like we can just tell our employees what to do! But in the meantime, we'll make damn sure we don't hire anyone who has worked with PETA or who shows other indications of a concern for animal welfare. That hasn't worked out so well for us.
Wednesday, January 04, 2006
The problem with cheaters.
[Finally I'm actually healthy again, and not in a hotel charging $10 a day for internet access. So, on with the blog!]
It must be a law of nature that when past and current graduate students dine together at the end of December the conversation turns, sooner or later, to cheaters. First, of course, you discuss the head-slappingly stupid techniques cheating students employ. ("If they thought we wouldn't notice them doing that, they must think we're really stupid!") Then, you recount a sting operation or two (like planting someone next to a habitual cheater during an exam and having the plant spend the exam period writing utter nonsense -- all dutifully copied by the cheater onto her own exam). Finally, there is the wringing of hands over how the graduate students' efforts against cheaters are for nought given the policies at certain universities that, basically, don't let you do jack to the cheaters.
It's that last part that's been sticking in my craw since the cheating cheaters discussion of which I was a part on New Year's Eve.
Maybe I just lack the necessary perspective here. I am not now, nor have I ever been, a university administrator. I do not grant degrees, nor do I take in tuition money. I've just been in the trenches teaching. From my point of view, assignments and exams are tools for assessing how well my students have learned the material I have tried to teach them (and, therefore, how effectively I've taught the material), and how well they're reasoning about this material. Cheating, therefore, is a subversion of the communication that's supposed to tell us how well we've done with the process. What it ends up communicating, when detected (and detection is far more frequent than students seem to think it will be), is that the cheater doesn't actually care about learning the material on offer. And, I get that there are lots of things about which one may legitimately not care, but it seems like a good idea then not to take a course on them. Or, if one must take a course on them (as a requirement, say, for a major about which one does care), it seems like a better strategy to try to find something to care about in the material -- how is it connected to the thing I do care about, for example.
Indeed, part of what I find most offensive about cheating in my courses is that it is an attempt to appear as if one cares about the material that reveals the absence of actual effort to learn the material. Cheaters care about my course instrumentally, as a means to get a necessary requirement filled or to get a desired grade. And, they seem to think that I won't feel ill-used by their cheating.
But I'm not ranting about the students today. I'm down on systems that let cheating persist unchecked. On New Year's Eve, I heard tell of policies at three major research universities that make it next to impossible to do anything to a student you've caught cheating. One where a student isn't "caught" without multiple witnesses to the act -- one of whom has to be the professor of record for the course. (Teaching assistants, the prof always sticks around when the exam is being administered, right?) Another where professors and TAs are expressly forbidden from being in the room while students are taking exams (which leaves witnessing and reporting the cheating up to students ... who are not always so invested in taking up this responsibility). At all three of these institutions, there seems to be serious pressure from the administrative forces in the system not to impose sanctions (like suspension, or even failing grades) on even habitual cheaters. And the lack of institutional will to take a stand against cheating seems to have made some of the profs just ... give up trying to do anything about it in their own courses. (Who wants to take on the procedural nightmare involved even in administering a slap on the wrist?)
What the hell are these administrative forces thinking?
I'm sure much of their thinking is informed by legitimate concerns for the rights of the students to due process. If I were cynical, I might suggest that their thinking is also informed by the likelihood that the parents of the cheaters, the captains of industry paying upward of $40K a year for junior to get a name-brand diploma, may be inclined to call those administrative forces to lobby for junior to get a second (or third, or fourth, ...) chance. Certainly, it would be a problem if the system were set up in such a way that profs and TAs could merely allege cheating, without proving it, and thereby end a student's college career. But that's not what's happening. Rather, we seem to have a situation where habitual cheaters are not held to account at all, except for perhaps having to repeat a course.
This at universities where, occasionally, faculty members are booted for fabricating data.
My gut says the root problem here is the model of the university that the students and the administrative forces seem to have in mind. The operative assumption is that the student is a consumer and the university is providing a product. (I paid my money, whaddaya mean I don't get my degree?!) On this model, exams with the right answers are just the necessary paperwork you have to turn in to get the degree you came for. How much, really, should it matter how you got that paperwork filled out?
A better model, at least from where I sit, is one of community. While each of us has our individual interests, we have certain interests in common (like the honest exchange of information and ideas, or the creation of conditions that foster learning). This is why cheating is an abomination -- it strikes at our common interests and makes it impossible for us to function well as a community. Administrative actions that fail to recognize or address this aspect of cheating further undermine the community, reinforcing the cheater's sense that the community doesn't matter.
Community may be the key to dealing with cheaters in the world of science, too. In many of the high-profile cases of fabrication, falsification, and plagiarism, it comes out that the cheater is a habitual cheater -- someone who has been cheating for some time, and who may even have been caught doing so but let go with a slap on the wrist. I've heard it said (and it seems reasonable to me) that the tendency to let scientists go with a slap on the wrist is reinforced by a lack of intermediate-level penalties for cheating; if all you have is the scientific equivalent of a death penalty, you may look for reasons to let people off. But, having been let off, sometimes repeatedly, the cheaters may start to get the message that cheating doesn't really matter all that much. Their "youthful" offenses are kept quiet, lest a promising young researcher's career be ruined.
Wouldn't it be better to bring the "youthful" offenses out into the light so the scientific community could make it clear how these kinds of behavior hurt the community and undermine the project science is trying to do? Shouldn't the community, in the process of training new scientists, take an active role in keeping these new scientists honest? Mercy comes from an understanding that people sometimes falter in their judgment; working together as a community to help members exercise good judgment seems like a better approach than leaving someone who has screwed up on his own with just the warning not to screw up again. The community ought to know about "prior bad acts", not so it can isolate the actors or consider them evil (because if that were the goal, you'd just boot them from the community on the first offense), but so the community can help the actors interact with the community in better ways and earn back the community's trust.
Truly evil actors will need to be booted, of course. But it seems reasonable that only a small proportion of cheaters are irredeemably evil. One of the strengths of community is that it can help bring you back after you've gotten off track. The trick, it seems, is understanding that you're part of a community in the first place.