On August 6, FDA announced that Novartis’s application for approval of Zolgensma contained “manipulated” data and that the company knew this while the application was pending, but did not tell the agency. Three days later a group of Senators wrote FDA a letter asking, among other things, why the agency had withdrawn a proposed regulation that would have required “sponsors of certain clinical trials to promptly report suspected data falsification to FDA.” It may be helpful to review the concerns that people raised. Details after the jump.
The Proposal (here)
FDA proposed to amend its rules governing nonclinical (laboratory and animal) studies and clinical (human) trials, as well as its rules governing various other submissions (such as color additive petitions and food additive petitions). For nonclinical studies, for instance, a new regulation would have said:
- When a sponsor becomes aware of information indicating that any person has, or may have, engaged in the falsification of data in the course of reporting study results, or in the course of proposing, designing, performing, recording, supervising, or reviewing studies conducted by or on behalf of a sponsor or relied on by a sponsor involving studies subject to this part, the sponsor must report this information to FDA.
- A sponsor must report this information regardless of whether the sponsor has evidence as to the intent of the person who has, or may have, falsified data.
- The sponsor must report this information to FDA promptly, but no later than 45 calendar days after the sponsor becomes aware of the information.
Notice what sorts of entities this would have applied to.
For nonclinical studies, the term “sponsor” meant (1) any individual or entity that initiates and supports a nonclinical study, (2) any individual or entity that submits a study to FDA in support of an application for research or marketing, and (3) a testing facility that both initiates and conducts the study. For clinical trials, the term meant any individual or entity that takes responsibility for and initiates a clinical trial.
Thus the proposed reporting obligation would have applied to natural persons, drug companies, testing companies, research universities, and academic medical centers, among others.
Notice also that the proposal would have required reporting the falsification of data, even if those data had not been submitted (and never would be submitted) to FDA.
The proposal related to any studies — including ongoing studies that might eventually support a marketing application (or might not), and including data that might never be submitted to FDA in the first instance. What might never be submitted? Here’s one example: data from animal testing at a research university, if no company ends up pursuing the drug for clinical trials and approval.
Why go after the falsification of data that might never be used to support a regulatory decision? FDA was concerned about “non-compliant investigators” working for multiple companies. It explained that sometimes a sponsor monitoring an ongoing study will discover an issue with an investigator, terminate the investigator, and exclude the bad data from its regulatory submissions, without giving FDA the termination details. Although this means the bad data won’t affect any regulatory decisions, the same individual might be working on other studies for other companies. FDA wanted to more rapidly identify these “non-compliant investigators” and, it said, more effectively address the risk they present.
FDA received roughly four dozen comments from not only pharmaceutical companies but also research universities, hospitals, academic health centers, clinical researchers, laboratory research facilities, and the National Cancer Institute (part of the National Institutes of Health). Here are the three biggest concerns raised.
First, there are other federal research misconduct regulations already on the books.
This was the primary concern identified by FDA’s sister agency, the National Cancer Institute, as well as several research universities. For example, any institution that receives funding from the Public Health Service (part of HHS) for biomedical research must comply with the PHS policies on research misconduct, which appear in 42 C.F.R. part 93. Those rules address the investigation, resolution, and reporting of research misconduct — including data falsification. They conform to the Federal Policy on Research Misconduct, issued in December 2000, which applies to all federally funded research and has been adopted across the federal government.
The FDA proposal was inconsistent with this framework. For example:
- Timing and Evidentiary Threshold. The PHS regulations permit a longer period to review allegations (60 days), and they require reporting (to the federal Office of Research Integrity) only if the matter has moved from an “inquiry” stage to the “investigation” stage. Thus reporting is required when there is a “reasonable basis for concluding that the allegation falls within the definition of research misconduct.” In contrast, FDA would have imposed a 45-day deadline and required reporting under a looser “may have” occurred standard.
- What Constitutes Data Falsification. The PHS regulations distinguish between fabrication and falsification. Fabrication is making up data or results and recording or reporting them. Falsification is manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record. In contrast, FDA defined data falsification differently and in a way that seems to include both (“creating, altering, recording, or omitting data in such a way that the data do not represent what actually occurred”).
Many argued that these inconsistencies would be a problem. Having one set of federal regulations out of step with the rest of the federal government would breed confusion, and some entities would be subject to both sets of rules at once, which many commenters (including a group of academic medical centers) found troubling. The National Cancer Institute said that using "falsification" at FDA to mean what both "falsification" and "fabrication" mean at HHS would confuse the institutions subject to both regimes. It also argued that, where both sets of rules apply, having to report early to FDA could undermine the methodical process for inquiry and investigation laid out in the more comprehensive PHS framework.
Second, unlike the PHS regulations, the proposed FDA regulation was muddled about the relevance of intent.
FDA’s proposed definition of “data falsification” — creating, altering, recording, or omitting data in such a way that the data do not represent what actually occurred — seemed to include honest mistakes. This would describe laboratory values that are skewed because an instrument was calibrated incorrectly, as well as dates recorded incorrectly (such as 09 instead of 08 for August).
But the proposed regulation also stated that "errors" were not to be reported. And FDA's explanatory notice distinguished at length between falsification and error. The agency admitted that "significant errors could potentially compromise the integrity of data submitted to FDA," but said that, in general, such errors are better addressed through FDA inspections, sponsor monitoring activities, and the agency's application review processes than falsification is.
And if "errors" are not to be reported, then some investigation is necessary whenever an anomaly pops up. The intent of the person entering the data turns out to be relevant.
And yet when FDA discussed the evidentiary threshold — information "suggesting" that a person "may have" engaged in falsification — it stated that a sponsor must report regardless of whether it has enough information to determine the person's intent. In other words, do not complete an initial inquiry (such as the initial inquiry under the PHS regulations): report any discrepancy that might be falsification, even though it might actually be an error and thus, perhaps, not reportable at all. Hence the confusion.
Under the circumstances, sponsors might err on the side of reporting everything, especially because violation of the reporting requirement would be a federal crime. (And some in industry said this would happen.) But FDA does not want this; it exempted errors because “requiring sponsors to report every observed error in data recording and processing could overwhelm the agency with information.”
Which leads me to this final point: what are we to think of the fact that FDA claimed the new regulation would not result in any additional reports of falsification? The agency reported that it currently receives around 73 reports of data falsification per year, across all Centers, and that it expected to receive the same number of reports if the proposed rule were adopted. Is that a mistake? Or did it genuinely think this regulation would not change anything?
Third, some argued that the FDA reporting standard was too low (as compared to the PHS standard) and that it would implicate many investigators who would eventually be exonerated.
These concerns reflect the intuition that there is a distinction between errors, poor documentation practices, and deliberate falsification. In one key respect, of course, they are equally problematic, because they always mean the resulting data are unreliable. But normatively they are not the same. And being reported to the government for potential data falsification can have professional and reputational consequences.
This is why the National Cancer Institute favored the more measured approach of the PHS regulations: the longer period for investigation and the “reasonable basis” standard give researchers some due process before they are reported to the government as possibly having engaged in research misconduct. The academic medical centers added their concern that claims against individual investigators could be raised and made public based on “frivolous, unsubstantiated, or even malicious charges.” One pharmaceutical company added that it could become harder to recruit people to conduct studies that fall under FDA’s regulations, if those people are subject to reporting for possible falsification of data simply because of an anomaly, even though a thorough investigation would later exonerate them.
What Happens Next
FDA hasn’t said why it withdrew the proposal. Perhaps it realized it needs to think more about the relationship between its regulations and the broader federal framework governing investigation and reporting of research misconduct. Some of the comments didn’t mince words. The University of Texas System wrote, for instance, that the consequences of this proposal for the research community were “significant enough” that FDA should simply “forego the proposed amendment to its regulations.”
But this doesn’t mean the issue is dead. Presumably FDA still has the same policy objectives that it identified in the original Federal Register notice, and many comments supported the basic exercise — they just thought FDA was going about it wrong. So maybe FDA will propose adopting the PHS approach. Or perhaps it will work from the PHS regulations but tailor them to fit its own objectives.