
Mistakes happen — in life, in the lab, and, inevitably, in research papers, too. Journals use corrections and retractions to resolve those mistakes. But one particularly high-profile case is now drawing fresh attention to the problems with journals’ process for addressing concerns about research integrity.

Late last year, Stanford University announced that it was opening an investigation into its president, neuroscientist Marc Tessier-Lavigne, over allegations of research misconduct. Five studies co-authored by Tessier-Lavigne are now under the microscope over allegedly altered images: a 1999 Cell study, a 2008 paper in the EMBO Journal, a 2003 Nature study, and two studies published in 2001 in Science. Cell, EMBO, and Science have each opened their own investigations into the articles, and both Science and Cell have published “editorial expressions of concern” for the papers, which warn readers that the information in the articles may be questionable, but stop short of making corrections or retractions.


But in some cases — the Cell paper and the two studies that appear in Science — the journals were actually alerted to the issues years ago by Tessier-Lavigne himself. The journals never posted corrections. Cell said that after Tessier-Lavigne reached out with concerns about the images in 2015, the journal’s editors decided at the time that no action was needed. Science, meanwhile, didn’t follow through “due to an error,” as Holden Thorp, editor-in-chief of the Science family of journals, said in a statement.

“We know from correspondence that we obtained from Dr. Tessier-Lavigne that the Errata were discussed and agreed to with our editors in 2016,” a spokesperson for Science told STAT via email. “Our email retention policy only lasts for five years, so we don’t have records that allow us to ascertain why they were never posted. It’s clear that they were approved by the editors, but we can’t say anything beyond that. We take full responsibility for this mistake.”

Once again: mistakes happen. But the incident raises the question of why, once potential errors are brought to journals’ attention — whether by scientists volunteering their scrutiny on the online forum PubPeer or by a paper author — it can take journals a long time to address them.


Part of the issue comes down to practical matters of overwhelmed editors who lack sufficient resources to deal with an onslaught of allegations, according to experts. But there’s also a culture of fear around corrections and retractions that is hampering efforts to maintain the integrity of scientific research.

So what would it take to rev up the process? STAT spoke with former and current journal editors, research integrity officers at journals and universities, and volunteer investigators on the hunt for answers.

How the corrections process works

There’s little consensus about how, exactly, the corrections or retraction process is supposed to work. According to the guidelines published by the Committee on Publication Ethics, editors should consider a correction when only a small part of a paper has validity problems. For papers where the data or content are so flawed that they might affect findings and conclusions, guidelines suggest a retraction. And journals should, in general, have mechanisms in place to address allegations that there are problems with papers.

Beyond such general recommendations, journals’ steps for addressing post-publication errors vary. The process can get messy, between angry whistleblowers, reticent authors, lots of red tape, and lack of communication between journals and universities trying to address potential concerns. Rather than an orderly investigation, the procedure typically looks like “one of those old cartoons, where you see a picture of a bar fight and there’s all these explosions and there’s like an arm coming out here and the table coming out there,” said James Heathers, chief scientific officer at Cipher Skin, a health tech company.

Various parties play a part as guardians of the scientific record. Heathers compares the roles of those involved to a multi-layer cake. One tier includes independent sleuths like himself and Elisabeth Bik, a microbiologist and research integrity expert who raised red flags about four papers Tessier-Lavigne co-authored from 2001 to 2008. Bik, Heathers, and Gideon Meyerowitz-Katz, all of whom co-authored a paper earlier this year on solving the problem of slow corrections, are just a few of the people who make up a volunteer army that points out potential data and image problems to journals.

Another layer of Heathers’ cake is composed of the senior academics who run the journals, set policies at scientific organizations, and influence the culture of research integrity. The next layer is made up of research integrity czars at journals, employed to police papers for errors, and research integrity officers (RIOs) at universities, who are the points of contact to handle research misconduct and might run responsible conduct of research training or deal with compliance issues.

Then there’s Congress, which controls the purse strings for federal organizations such as the National Institutes of Health, the National Science Foundation, and the Office of Science and Technology Policy. While, together, these players dictate a lot of the large-scale policy and dollars for integrity in federally funded research, they’re generally directly involved in everyday issues only when cases require sanctions against a researcher, Heathers said.

In practice, according to Heathers, there are very few resources available for dealing with potential corrections, whether at the journal, university, or government level.

And for a long time, journals didn’t assume much responsibility for dealing with potential errors at all. But they were forced to change amid a cultural shift spurred by the rise of the digital age.

From ‘we’re not the police’ to ‘we have a responsibility’

Back in 2003, when Susan Garfinkel started working at the Office of Research Integrity (ORI) in Washington, journals felt like “we’re not the police. We’re not supposed to be looking at what people are doing. And they looked at our office as the bad guys who go after these people, and that wasn’t the truth of the matter at all,” she said. The ORI is a government agency that handles allegations of research misconduct related to federally funded research, but it doesn’t go out looking for allegations, instead addressing only those that others flag or that officers might happen upon while reviewing a paper of concern, said Garfinkel, now associate vice president of research compliance at Ohio State University.

“I just don’t think that journals thought that it was an extensive problem,” Garfinkel said. The prevailing thought at the time was that, because science is self-correcting, any data problems would sort themselves out eventually when future researchers found the same results impossible to reproduce or build upon. Editors also thought pre-publication checks and balances, such as peer review and scrutiny from lead authors’ colleagues, were enough to prevent bad data from getting through.

“In science, the only thing that you have to go on is your word.”

Susan Garfinkel, associate vice president of research compliance at Ohio State University

A lot has changed since then. The allegations used to pour in from people who worked in the authors’ lab or other whistleblowers who knew the research intimately. Then journals began publishing papers online and digital tools became available to check these papers, blowing data-checking access wide open. Suddenly the scientific community could scan the literature online to look for problems in anything from western blots and northern blots to cell images and mouse pictures.

Concerned parties also started sending notices directly to the journals instead of to the ORI or the researchers’ universities. It became harder for editors, flooded with potential error notices, to miss — or dismiss — the magnitude of the issue.

“And that’s when journals really started getting on board,” Garfinkel said. Their new perspective could be summarized as, “We have a responsibility in this area, too. We have to make sure that the data that we publish is correct. Otherwise we spend a lot of time dealing with it,” she said.

Garfinkel recalls Mike Rossner leading the charge in the early 2000s as managing editor of the Journal of Cell Biology. This approach was novel at the time, Garfinkel said, because it was hard for journal editors to accept a new mindset on their relationships with authors. They had to let go of the idea that questioning what researchers submitted indicated a lack of trust.

“In science, the only thing that you have to go on is your word,” Garfinkel said. “When you submit a paper for publication, you’re saying this paper is accurate, and that’s all you have, is your word. There was always a very trusting relationship, because once somebody realized that somebody couldn’t be trusted, that was the end of it for them; that really harmed their reputation.”

For Rossner, the decision to start proactively checking papers for image integrity came about by happenstance. Cell Biology was getting files in the wrong formats all the time, so editors would ask authors to resend. One day in 2002, Rossner was up against the production clock. When he tried to fix the image files himself in Photoshop, he found a telltale sign of image manipulation in a western blot. He confirmed the problem when the source data he requested from the authors didn’t match what was slated for publication.

That was the summer Cell Biology launched an image-screening policy on accepted manuscripts. The journal developed visual screening methods Rossner still uses today in his consulting work through his company Image Data Integrity.

But now image manipulation has gotten stealthier, with AI-mediated techniques and the problem of paper mills, companies that roll out fabricated or manipulated scientific manuscripts to order. While industrialized cheating has existed for a while, researchers are now more fully aware of systematic fabrication, said image data integrity analyst Jana Christopher. With AI-generated images, it’s tougher to detect fabrications. Now journals face an uphill battle with the volume of issues that require attention and articles that need correcting or retracting, she said.

Publishers who have been targeted by paper mills also find themselves heaped with a backlog of problematic papers to investigate, data scientist Adam Day told STAT. “This means that, taken out of context, a delay might look unnecessary if we don’t know that the individual(s) dealing with it also have hundreds of other cases to deal with,” he said.

Christopher and other image integrity experts like Bik are working with AI detection tools such as Imagetwin. When she spoke to STAT, Christopher was planning to head to a master class with STM Integrity Hub where software companies would be showcasing their data detection tools.

There are still plenty of other obstacles that journals face in dealing with timely reviews of potential errors — including the fear of lawsuits from researchers whose reputations are on the line. Fighting legal action is expensive and time-consuming. To avoid lawsuits or legal threats, journals have to look into allegations and assess the risk of moving forward with them, which can get complicated.

The fear of scarring researchers’ reputations

The standards as to what constitutes scientific misconduct, as well as how to prove it and how to address it, differ among individual journals and universities. That means journals might have a tougher time working with research integrity officers at different universities to address potential errors.

According to COPE guidelines, when journals receive allegations of problematic data, they should reach out directly to the paper’s authors. If they don’t get a response or if they determine the response they receive to be insufficient, then journals should contact the institutions where the paper’s authors work. At that point, the institution begins to evaluate allegations of research misconduct.

But journals have traditionally had a relationship with the authors, not with their institutions, explained Garfinkel. Figuring out who to contact, and when, between institutions and journals is one of the reasons the corrections process takes so long, she said.

There’s also the problem of resource scarcity on the university’s side, with no more than four RIO staffers at larger institutions and fewer at smaller universities, all handling multiple responsibilities outside of research misconduct. While it can only be good for journals to hire more image integrity experts and broadly trained research integrity czars, “if you’re going to contact an institute, they might only have one person for however many hundred or thousand researchers that work there,” said Christopher. “So then it’s still gonna bottleneck somewhere.”

Universities’ confidentiality requirements around potential research misconduct proceedings may also stymie progress in making corrections. University officials may not even make contact with the journal until their case is closed and they’ve determined not only whether research misconduct took place but who was responsible. That process can take years, leaving potentially incorrect data in the literature in the meantime.

Questions around intent don’t necessarily slow the process for editors if they can act on the data itself and aren’t concerned about who’s responsible. So some journals will just publish corrections or retractions based on the data, without including names. But this is where the balance between speedy corrections and fair process can get tricky.

If journals issue corrections but don’t say who was responsible, then all co-authors can be “scarred by it,” Garfinkel explained. By contrast, if journals name the person or people responsible for the error, then the other co-authors are “let off the hook,” Garfinkel said.

Naming a person and calling it research misconduct is the most transparent way you could address the correction, she said. “But then you get into the problem of it taking so long to get there.”

Thorp of Science tackled this gridlock in an editorial earlier this year, calling for a two-stage review that he noted was Bik-approved: a first stage in which journals assess a paper’s validity without assigning any blame, and a second stage in which universities investigate whether there was fraud or research misconduct. Journals could then move forward with their corrections process with all the information they needed from the universities.

Anonymous accusers and academic retaliation

Also complicating the timeline for addressing corrections is the matter of how to handle anonymous allegations. If someone flagging a potential issue with a paper wants to be taken seriously, the most surefire way is to be Bik. Her reputation for accurate detections precedes her, and it’s clear there’s no conflict of interest. And she’s respectful, not accusatory, said Isabel Casas, director of publication at the American Society for Biochemistry and Molecular Biology.

But Casas also recognizes that other sleuths feel they have to remain anonymous for fear of backlash from the research community in their field. “Retaliation in academia has been proven,” she said. “It’s not something like a boogeyman.”

Meyerowitz-Katz, an epidemiologist at the University of Wollongong, said he’s one of the lucky ones whose position allows him to go public with his correction notices. But if he were an average Ph.D. student who depended on a stipend for his rent, he wouldn’t do this work. It’s too easy for academics to go after junior researchers, he said.

“Retaliation in academia has been proven.”

Isabel Casas, director of publication at the American Society for Biochemistry and Molecular Biology

Guidelines from COPE as well as research integrity hub STM urge journal editors to consider allegations even when the source is anonymous. Each allegation has to be evaluated on its merits, Rossner said, “no matter who it comes from” — whether it arrives via PubPeer, an anonymous email, or a piece of paper slipped under a journal editor’s door. Journals and institutions have indeed begun to take the mostly anonymous comments on PubPeer seriously, a departure from a not-so-long-ago era when editors, publishers, and RIOs wouldn’t heed them, said Rossner. That’s “really a positive shift in this scientific research integrity environment,” he said.

Rossner understands the plight of editors slammed with error notices — in his Cell Biology days, he’d receive some “wacky” allegations. Many don’t turn out to be valid — but the one or two that do make the process of slogging through all of them worth it.

Still, not all journals have come around on the issue of anonymity. Heathers said he has seen journals tell whistleblowers that they’re not going to do anything with the notice they’ve received unless the whistleblower identifies themselves.

Changing the culture of corrections

Journals do have the option of posting an expression of concern while they’re waiting to make the call on a correction or retraction. But many have become more hesitant to do so in an era characterized by an outsized fear of post-publication failure.

Expressions of concern were once not such a big deal for researchers’ reputations. Even retractions didn’t necessarily deliver a huge blow, so long as the mistakes were unintentional. But in the digital age, with sites such as Retraction Watch highlighting those papers, and a concurrent rise in public attention to research misconduct, potential corrections and retractions are now frequently interpreted to signal more than incorrect data.

“Now, no matter how many times you say a retraction does not mean there was research misconduct, the community kind of uses it as, ‘Oh, my God! Somebody had a paper retracted. I wonder if there was something wrong,’” Garfinkel said. “And that makes it very difficult to just take a paper and say this data is not correct and retract it, or [get] a correction from a journal.”

“No matter how many times you say a retraction does not mean there was research misconduct, the community kind of uses it as, ‘Oh, my God! I wonder if there was something wrong.'”

Susan Garfinkel, associate vice president of research compliance at Ohio State University

While public scrutiny has raised the stakes when it comes to corrections and retractions, attention from the internet can also spur journals to take quicker action in some cases. PubPeer can drive pressure by calling out editors over not having caught errors, Meyerowitz-Katz said. Twitter can be useful because papers tend to get retracted very quickly if they go viral for egregious errors, he added. He cited one such medical journal article, an attractiveness study that, after years of controversy, was retracted soon after it went viral.

While greater online visibility can force editors’ hands, the problem is “it’s a very ad hoc and arbitrary mechanism with which to force action,” Meyerowitz-Katz said. It won’t work if a study isn’t seen as all that important, or if the person raising concerns doesn’t have a huge social media following.

Thorp also said that, contrary to popular criticism, journal editors generally don’t shy away from posting a correction or retraction. They want to keep a corrected scientific record — but the stigma associated with such notices is alive and well on the researchers’ side, Thorp wrote in an editorial he co-authored last week alongside Science colleagues Jake Yeston and Valda Vinson. The editors explained that when they contact authors with concerns about their papers, they’re “often met with defensiveness and denial. That needs to change.”

To kick things off, Science is adding a third criterion for retractions. Up until now, the journal’s standard was to retract papers either when a case clearly indicated research misconduct or when errors undermined key conclusions in the article. The new category is for papers that have received enough corrections or include enough errors “to cause the editors to lose confidence in it.”

If one of the biggest barriers to a quicker corrections timeline is cultural, some experts hope to rebuild the culture itself. For Garfinkel and her colleagues at Ohio State University, Northwestern University, and George Washington University, who’ve formed a working group of research integrity experts, progress depends on getting the two parties most directly responsible for addressing errors — universities and journals — to better cooperate.

Like Thorp, Garfinkel and her colleagues are also calling for a joint effort on the part of universities and journals to disentangle issues of intent from questions of scientific validity. That means, for example, allowing for corrections even when investigations are ongoing. It also means emboldening editors to think beyond current COPE guidance to first contact the author when an issue arises; if a case looks fishy, they suggest, editors might be better served by reaching out directly to the RIO instead.

The group hopes to build bridges so journal editors and RIOs at universities can talk more openly about potential problems. On the journals’ side, that means easing editors’ concerns that a conversation will trigger a formal university investigation that ties them up in red tape; on the universities’ side, it means easing the fear that a journal will retract a paper, without adequate explanation or assignment of responsibility, while RIOs are in the middle of a proceeding for potential research misconduct.

Garfinkel and her crew are focused on widely distributing the preprint that came out of these working group sessions. She presented it at the last Association of Research Integrity Officers annual meeting, and plans to further test the waters. The next step is to get more editors and RIOs to buy into this information-sharing process — and ultimately start to reshape a culture of research integrity bottlenecked by fear.

“It will make people start talking to one another,” Garfinkel said. “And if you could get people to start talking to each other, the problems will work themselves out.”
