Tuesday, January 24, 2006

Baltimore Sun on science fraud

Julie Bell of the Baltimore Sun wrote about scientific fraud.

One issue Bell raised was the lack of available mechanisms to combat fraud:

Five or six times a year when Dr. Richard Smith was editor of the British Medical Journal, he would receive scientific papers that he suspected were fabricated or had other ethical problems. Simply declining to publish them didn't seem like enough to protect the public.

So Smith would look for someone, anyone, to investigate - an employer, a government agency in the researcher's country, a fellow journal worried that the scientist ultimately would get a flawed study published elsewhere if it weren't discredited.

But all too often, Smith said, he wouldn't be able to find anyone to help - or enough time or money for the journal to pursue such an investigation itself.

"As editors, you have an obligation to say you want to find someone to look at it, unless it becomes completely impossible," he said of suspect research.


***
When I was on the Ethics Task Force of the American Chemical Society (ACS), I tried to create a mechanism whereby people questioning data or conclusions of an article in an ACS journal could present those questions in the ACS journal. The majority of the Task Force determined that editors were the preferred arbiters. Richard Smith's statements indicate that some editors think they could use some help.
***

Bell also notes:

The problems aren't just academic. The world's thousands of journals perform a gatekeeping function arguably as important as that of the Food and Drug Administration, though they have no regulatory authority. Most print only studies that have been reviewed by panels of volunteer experts, making publication a sort of stamp of approval from the world of science.

A study published in a credible journal can influence the trajectory of a researcher's career, the focus of future science and - perhaps of the most immediate importance to the public - the practice of medicine itself.

"The stakes are huge," Brian C. Martinson, a researcher who has studied competition and misconduct among scientists, said, singling out the Hwang case. "The financial stakes, the prestige stakes, are huge for the Korean government as well as for the scientist."

The underlying problem the journals are dealing with is the competitive nature of science run amok, argues Martinson, a research investigator for HealthPartners Research Foundation in Minneapolis.

Martinson co-authored a paper in Nature last year that indicated misconduct might be far more prevalent than high-profile anecdotes indicate. The study found that 33 percent of researchers surveyed acknowledged engaging in more "mundane" kinds of misbehavior, such as "changing the design, methodology or results of a study in response to pressure from a funding source."

Bell discusses some of the proposed remedies:

New safeguards "could include, for example, requiring all authors to detail their specific contributions to the research submitted, and to sign statements of concurrence with the conclusions of the work," Editor-in-Chief Donald Kennedy said in a statement last week. "We are implementing improved methods of detecting image alteration, although it appears improbable that they would have detected problems in this particular case." [IPBiz note: at a minimum, journals should require declarations analogous to those required by the United States Patent Office.]

At the Journal of Thoracic and Cardiovascular Surgery, Editor Andrew S. Wechsler said researchers will be banned from publication for up to two years if they fail to disclose an important financial relationship with a research sponsor.

Still, journals say they can only do so much.

"The issue is, the whole field of science depends on trust," said Dr. Catherine DeAngelis, the JAMA editor in chief. "We're not detectives."

In fact, many journal editors say, peer review - as the vetting of journal articles for publication is known - was never meant to detect fraud. Generally, they say, the unpaid volunteer scientists who review papers aren't looking for it. Their job isn't to drill into mountains of data to determine whether each submitted statistic is accurate (there can be tens of thousands of such data points in some research), but to review the researchers' techniques and whether their conclusions are supported.

"It's well-known that peer review cannot protect against fraud," Wechsler said. "In the instance where fabricated data meets statistical muster and scientific credibility, we're stuck."

Others take issue with that reasoning.

"That's sort of letting themselves off the hook," said Dr. Peter Lurie, deputy director of Public Citizen's Health Research Group. "It's a poorly performed role because too many reviewers do a sloppy job: They tend to be generous to their colleagues."

But even when peer review fails, journal editors point out, the world often finds out about fraud because the larger system of scientific inquiry catches it. After publication, other scientists who read about an experiment may find its flaws by trying to replicate it, a way of validating a paper's hypothesis.

"The real arena to catch error is after publication," said Peter Suber, a research professor at Earlham College and a proponent of free, online publication of scientific research.

Arthur Caplan, a University of Pennsylvania professor of bioethics, believes the overall system of journal-sponsored peer review still works well. If "there seems to be more fraud than there used to be, it's probably a function of more science and more experiments," he said.

But today's blistering pace of science, the growth of company-backed research and the increasingly international nature of experimentation have convinced some, including former British Medical Journal editor Smith, that the system needs fortification.

Adil Shamoo, editor of the journal Accountability in Research, suggests mandatory classes on research ethics for scientists. He teaches research ethics at the University of Maryland, though the course is still an elective for many. He also has long supported random audits of grant-backed scientific papers by an independent agency.

"If somebody applies for a grant, he has to promise he's subject to that random audit if he's chosen," Shamoo explained.

Smith, who left BMJ in 2004 and became an executive at UnitedHealth Group, said the Singh case taught him that journals can't handle such investigations alone. As he and an account in the BMJ detailed, after months of seeking additional data from Singh, he once received a box in the mail - filled with sheets of data handwritten in pencil.

"Nobody knows how many scientific journals there are," Smith said, estimating there could be as many as 50,000. "We were the BMJ, one very small part of the forest. It could be that everywhere else in the forest, everything's fine. But I doubt it."





[IPBiz post 1172]
