As FDA has noted, artificial intelligence (AI) and machine learning (ML) technologies have the potential to transform healthcare. These technologies could be used at the agency, by other agencies involved in healthcare regulation and finance, by private participants in the healthcare delivery system, and by medical device manufacturers. They can also be embedded in conventional medical devices or, indeed, serve as standalone medical devices. For several years now, FDA has been exploring how its existing medical device framework applies to, or should be adapted to apply to, software “as” a medical device as well as software “in” a medical device. In this post, I review four recent law review articles exploring legal and policy issues presented by the emergence of artificial intelligence and machine learning in medical devices and the healthcare setting more generally.
The articles posted in 2022 are complementary, in the sense that the authors do not disagree fundamentally with each other. Professor Sara Gerke (Penn State Dickinson Law) concerns herself with FDA regulation of medical devices that depend on, or contain, artificial intelligence and machine learning software. She suggests modest revisions to the FDA framework that will, she says, improve the safety and patient outcomes associated with AI-based medical devices. Really, though, she is criticizing the 510(k) clearance process altogether and recommending broader changes that would also apply to AI-based devices. Professor Nicholson Price (University of Michigan Law School) provides an essay focusing more on uses of artificial intelligence that probably would not take the form of “devices” regulated by FDA. He’s skeptical about the potential for effective centralized review and validation and asks for “policy support” for “distributed” (local) governance. Professor Mason Marks (Florida State University) discusses FDA’s own use of artificial intelligence in decision making, teasing (but not answering) numerous important administrative law questions, and ultimately recommending a list of “good simulation practices” for the agency. Professor Charlotte Tschider (Loyola University Chicago School of Law) returns to traditional medical devices that include artificial intelligence but focuses on ethical and privacy concerns tied to data collection, rather than regulatory concerns. All four have published extensively on artificial intelligence in the past, as well.
In Health AI for Good Rather Than Evil? The Need for a New Regulatory Framework for AI-Based Medical Devices (Yale Journal of Health Policy, Law, and Ethics), Professor Gerke argues that Congress should amend FDA’s statute to ensure that all health-related medical products that use artificial intelligence are considered “devices” subject to FDA regulation and, also, to ensure that they are all subject to rigorous premarket review. First, she takes issue with the exclusion of some clinical decision support (CDS) software from the statutory definition of “device” — see section 520(o)(1)(E) of the FDCA — because some excluded software will be used to inform clinical management for serious or critical health care conditions. She would amend the statute to include all clinical decision support software and AI-based models intended for use in prediction or prognosis of diseases and other conditions, understanding that FDA might still exercise enforcement discretion in low-risk situations. Second, she takes issue with the fact that most AI/ML-based medical devices currently on the market were cleared through the 510(k) premarket notification process, citing the Institute of Medicine’s observation that the clearance process cannot assure device safety and effectiveness. She cites, favorably, FDA’s development of a “Safety and Performance Based Pathway” for moderate-risk devices — pursuant to which the developers of well-understood device types assess their devices primarily against performance standards rather than in comparison to predicate devices. (FDA has said this approach means both more streamlined submissions for companies and higher quality data for the agency.) And, in fact, she suggests this pathway should be the primary (indeed, exclusive) path forward for all 510(k)-eligible medical devices. Until they are well understood, she adds, moderate-risk AI-based devices should go through the de novo pathway. She also argues, at the end of her article, that FDA should consider AI-based medical devices to be “systems” rather than “just devices.” This framing seems unduly complicated; the essence of her argument is simply that during review the agency should (1) take account of human factors considerations, and (2) look not only for safety but also for improved patient outcomes. All of this the agency could already do, and to some extent already does.
Section IV of Professor Gerke’s article interested me the most, because it considers two ways that AI/ML devices challenge FDA’s traditional approach to medical product regulation: (1) they may depend on algorithms that are impossible for humans to understand, and (2) they can learn and improve their performance as they go. The first strikes me as not so unprecedented; the agency has approved medicines, especially biological products, on the basis of clinical effect even where the mechanism of action is not well understood. And she argues, reasonably, that FDA should be cautious about requiring explainability as a prerequisite for marketing authorization. “If there is sufficient proof that a black-box model performs better than a white-box model and is reasonably safe and effective, and the accuracy increase outweighs the loss of model interpretability,” she argues, “then regulators should generally permit marketing of the black-box AI/ML model … to facilitate innovations.” The second issue, that AI/ML devices will evolve on their own over time, presents a greater challenge, and her solution, broadly speaking, is continuous monitoring.
In the short essay Distributed Governance of Medical AI (SMU Science & Technology Law Review), Professor Price suggests that centralized regulation for AI systems involved in medical care — regulation as a “device” by FDA, ensuring safety and effectiveness before entry into commerce and practice — is unworkable. Many “AI products” do not pass before FDA in the first instance; consider, for example, the AI tools embedded in electronic health record systems. Some do pass before FDA, but Professor Price is even less enthusiastic about the 510(k) clearance process than Professor Gerke. Putting aside the ones that pass before FDA, one key objection he raises is that some AI systems need to be responsive to local environments — predictors of patient deterioration, for instance, may depend on patient populations, infrastructure, and the like. Professor Price’s own contributions come in sections III and IV of his essay, in which he argues for “distributed, localized governance” that can complement national regulation. For instance, some academic medical centers already develop, validate, and deploy their own AI. Others, though, may not have the capacity to engage in this “governance,” which leads to his suggestions that (1) their governance tasks be performed by other organizations, and (2) policymakers consider investment in the infrastructure needed to enable them to develop this capacity.
In Automating FDA Regulation (Duke Law Journal), Professor Marks focuses on FDA’s own use of computer models and simulations, including machine learning, to improve regulatory decision-making. In his view, reliance on these tools “may further erode the FDA’s evidentiary standards and undermine agency credibility.” (The reference to “further” eroding stems from his brief discussion of the controversial Aduhelm (aducanumab) approval.) Part II of the paper describes technologies that FDA uses to simulate biological systems: molecular modeling, virtual humans, and simulated clinical trials. He describes the potential benefits and the potential risks of each, though the section places greater emphasis on the risks, and the overall tone of the article is critical. Among other things, he cites concerns about the credibility of computer models, a risk of erroneous conclusions if models are adopted too hastily, a risk of biased predictions if models are trained with data from articles in some scientific fields to the exclusion of others, incorrect use of patient-specific models (virtual humans), and misinterpretation of the predictions generated by those virtual humans. Part III briefly notes a disagreement among administrative law scholars about whether replacing agency judgment with algorithmic judgments raises concerns about impermissible delegation of legislative authority. The rest of Part III, labeled as relating to transparency and accountability issues, argues that “outsourcing the design and control of algorithms exacerbates transparency and accountability concerns,” and that FDA “typically resists attempts to compel transparency.” Here, Marks also briefly considers whether adoption of a computer model constitutes a legislative rule that would require notice and comment. Part IV, really the second half of his paper, discusses FDA guidance documents on computational modeling and simulation, building toward five recommendations that would further “good simulation practices.”
Finally, in Prescribing Exploitation (Maryland Law Review), Professor Tschider engages with the privacy and ethical issues associated with AI devices, rather than regulatory issues. As Tschider notes, patients are increasingly reliant on devices that use artificial intelligence infrastructures and “dynamically inscrutable algorithms” (Gerke’s “black box” algorithms). She uses the example of devices that require data in order to deliver tailored medicine — such as medical diagnostics, artificially intelligent surgical robotics, implantable devices, and medical wearables (smart hearing aids). And she argues that data collection and use expose the users to “continuous surveillance” and “compromised privacy” — which is a form of “exploitation” with both “deontological risks” (inherent, dignitary, or moral harms) and “consequentialist risks” (potential for monetary loss, job loss, denial of entitlements, etc.). It’s exploitative in part because of the power differential and the opacity of the algorithms and data use; the patient faces a “Hobson’s choice” — to use the technology (and consent to data collection) or to forgo treatment options. (“Your privacy or your life,” she writes.) Her remedies are the adoption of reasonable preventive measures and of an “information fiduciary” framework; organizations should owe extra duties to the vulnerable to mitigate the exploitation risk.
Want more on artificial intelligence in the healthcare sector? There is much to recommend (though not all of it relates to FDA regulatory issues), including:
- FDA’s 2021 action plan for AI/ML-based software as a medical device
- Prof. Tschider’s heavily downloaded Regulating the IoT: Discrimination, Privacy, and Cybersecurity in the Artificial Intelligence Age (broader scope than healthcare uses)
- And her paper, Medical Device Artificial Intelligence: The New Tort Frontier
- Also on liability issues, there’s Prof. Gerke (with Prof. Price and Prof. Glenn Cohen), Liability for Use of Artificial Intelligence in Medicine
- Prof. Price’s Medical AI and Contextual Bias (continuing the theme noted above that healthcare differs between high-resource and low-resource settings, with implications for development and use of AI) as well as his 7-page primer, Artificial Intelligence in Health Care: Applications and Legal Implications
- Prof. Marks’s works include Emergent Medical Data: Health Information Inferred by Artificial Intelligence (which relates to the use of AI to infer health data from data or behavior that has no apparent connection to a person’s health — e.g., identifying pregnant customers from their retail purchases).
Happy reading!