US FDA takes step towards new, tailored review framework for artificial intelligence-based medical devices
Artificial intelligence and machine learning have the
potential to fundamentally transform the delivery of health care. As technology
and science advance, we can expect to see earlier disease detection, more
accurate diagnosis, more targeted therapies and significant improvements in
personalized medicine.
The ability of artificial intelligence and machine learning
software to learn from real-world feedback and improve its performance is
spurring innovation and leading to the development of novel medical devices.
Today, we’re announcing steps to consider a new regulatory
framework specifically tailored to promote the development of safe and
effective medical devices that use advanced artificial intelligence algorithms.
Artificial intelligence algorithms are software that can
learn from and act on data. These types of algorithms are already being used to
aid in screening for diseases and to provide treatment recommendations. Last
year, the FDA authorized an artificial intelligence-based device for detecting
diabetic retinopathy, an eye disease that can cause vision loss. The agency
also authorized a second artificial intelligence-based device for alerting
providers of a potential stroke in patients.
The authorization of these technologies was a harbinger of
progress that the FDA expects to see as more medical devices incorporate
advanced artificial intelligence algorithms to improve their performance and
safety. Artificial intelligence has helped transform industries like finance
and manufacturing, and I’m confident that these technologies will have a
profound and positive impact on health care. I can envision a world where, one
day, artificial intelligence can help detect and treat challenging health
problems, for example by recognizing the signs of disease well in advance of
what we can do today. These tools could provide more time for intervention,
help identify effective therapies and ultimately save lives.
We’re taking the first step toward developing a novel and
tailored approach to help developers bring artificial intelligence devices to
market by releasing a discussion paper. Other steps in the future will include
issuing draft guidance that will be informed by the input we receive. Our
approach will focus on the continually evolving nature of these promising
technologies. We plan to apply our current authorities in new ways to keep up
with the rapid pace of innovation and ensure the safety of these devices.
The artificial intelligence technologies granted marketing authorization or
cleared by the agency so far generally rely on "locked" algorithms, which don't
continually adapt or learn every time the algorithm is used. These locked
algorithms are modified by the manufacturer at intervals, a process that
includes "training" the algorithm on new data, followed by manual
verification and validation of the updated algorithm. But there's a great deal
of promise beyond locked algorithms that’s ripe for application in the health
care space, and which requires careful oversight to ensure the benefits of
these advanced technologies outweigh the risks to patients. These machine
learning algorithms that continually evolve, often called “adaptive” or
“continuously learning” algorithms, don’t need manual modification to
incorporate learning or updates. Adaptive algorithms can learn from new user
data presented to the algorithm through real-world use. For example, an
algorithm that detects breast cancer lesions on mammograms could learn to
improve the confidence with which it identifies lesions as cancerous or may
learn to identify specific subtypes of breast cancer by continually learning
from real-world use and feedback.
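The distinction between locked and adaptive algorithms can be made concrete with a small sketch. The example below is purely illustrative and is not drawn from any FDA-authorized device; the synthetic data, the choice of a scikit-learn SGDClassifier and the update cadence are all assumptions.

```python
# Illustrative sketch only: contrasts a "locked" algorithm, whose parameters
# change only through a discrete, manually verified retraining step, with an
# "adaptive" algorithm that updates incrementally as new real-world data arrive.
# All names and data here are hypothetical.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def real_world_batch(n=200, n_features=5):
    """Simulate a batch of new real-world cases with ground-truth labels."""
    X = rng.normal(size=(n, n_features))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    return X, y

# --- Locked algorithm: trained once, then frozen until a manual update cycle ---
X_train, y_train = real_world_batch(1000)
locked_model = SGDClassifier(random_state=0).fit(X_train, y_train)
X_new, y_new = real_world_batch()
locked_preds = locked_model.predict(X_new)  # in use, parameters never change

# A manufacturer-initiated update: retrain on accumulated data, then verify and
# validate the candidate model before it replaces the locked one in the field.
candidate = SGDClassifier(random_state=0).fit(
    np.vstack([X_train, X_new]), np.concatenate([y_train, y_new])
)

# --- Adaptive algorithm: learns incrementally from each batch of real-world data ---
adaptive_model = SGDClassifier(random_state=0)
adaptive_model.partial_fit(X_train, y_train, classes=np.array([0, 1]))
for _ in range(3):  # each iteration stands in for a period of real-world use
    X_batch, y_batch = real_world_batch()
    adaptive_model.partial_fit(X_batch, y_batch)  # parameters shift with use
```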
We are exploring a framework that would allow modifications to algorithms to be
made based on real-world learning and adaptation, while still ensuring that the
safety and effectiveness of the software as a medical device are maintained. A
new approach to these technologies would address the
need for the algorithms to learn and adapt when used in the real world. It
would be a more tailored fit than our existing regulatory paradigm for software
as a medical device. For traditional software as a medical device, when
modifications are made that could significantly affect the safety or
effectiveness of the device, a sponsor must make a submission demonstrating the
safety and effectiveness of the modifications. With artificial intelligence,
because the device evolves based on what it learns while it's in real-world
use, we're working to develop an appropriate framework that allows the software
to evolve in ways that improve its performance, while ensuring that changes
meet our gold standard for safety and effectiveness throughout the product's
lifecycle, from premarket design through the device's use on the market. Our ideas are the
foundational first step to developing a total product lifecycle approach to
regulating these algorithms that use real-world data to adapt and improve.
We’re considering how an approach that enables the evaluation
and monitoring of a software product from its premarket development to
post-market performance could provide reasonable assurance of safety and
effectiveness and allow the FDA’s regulatory oversight to embrace the iterative
nature of these artificial intelligence products while ensuring that our
standards for safety and effectiveness are maintained. This first step in
developing our approach outlines the information the agency might require for
premarket review of devices that include artificial intelligence algorithms
that make real-world modifications. That information includes the algorithm's
performance, the manufacturer's plan for modifications and the manufacturer's
ability to manage and control the risks of those modifications.
The agency may also review what's referred to as the software's predetermined
change control plan. The predetermined change control
plan would provide detailed information to the agency about the types of
anticipated modifications based on the algorithm’s re-training and update
strategy, and the associated methodology being used to implement those changes
in a controlled manner that manages risks to patients. Consistent with our
existing quality systems regulation, the agency expects software developers to
have an established quality system that is geared toward developing,
delivering and maintaining high-quality products throughout the lifecycle, and
that conforms to the agency's standards and regulations.
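One way to picture what a predetermined change control plan might capture is as a structured specification of the anticipated modifications, the retraining and update strategy, and the performance objectives the modified algorithm must continue to meet. The sketch below is hypothetical; the FDA has not prescribed a machine-readable format, and every field name and value is an assumption for illustration.

```python
# Hypothetical sketch of what a predetermined change control plan might record.
# Not agency terminology or a prescribed format; all fields are illustrative.
from dataclasses import dataclass, field


@dataclass
class PerformanceObjective:
    metric: str          # e.g. "sensitivity" or "specificity"
    minimum: float       # pre-specified floor the modified algorithm must meet


@dataclass
class AnticipatedModification:
    description: str             # what kind of change is anticipated
    retraining_trigger: str      # when re-training would occur
    validation_protocol: str     # how the change would be verified and validated


@dataclass
class PredeterminedChangeControlPlan:
    device_name: str
    objectives: list[PerformanceObjective] = field(default_factory=list)
    anticipated_modifications: list[AnticipatedModification] = field(default_factory=list)


plan = PredeterminedChangeControlPlan(
    device_name="HypotheticalMammographyCAD",
    objectives=[
        PerformanceObjective(metric="sensitivity", minimum=0.90),
        PerformanceObjective(metric="specificity", minimum=0.85),
    ],
    anticipated_modifications=[
        AnticipatedModification(
            description="Re-train lesion classifier on newly collected, labeled exams",
            retraining_trigger="Every 5,000 new labeled exams",
            validation_protocol="Evaluate on a sequestered test set before release",
        )
    ],
)
```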
The goal of the framework is to assure that ongoing algorithm
changes follow pre-specified performance objectives and change control plans,
use a validation process that ensures improvements to the performance, safety
and effectiveness of the artificial intelligence software, and are accompanied
by real-world monitoring of performance once the device is on the market to ensure
safety and effectiveness are maintained. We’re exploring this approach because
we believe that it will enable beneficial and innovative artificial
intelligence software to come to market while still ensuring the device’s
benefits continue to outweigh its risks.
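To make the idea of pre-specified performance objectives and real-world monitoring concrete, here is a minimal sketch assuming hypothetical metrics and thresholds; it is not an FDA-specified workflow, and the names and numbers are illustrative only.

```python
# Minimal, hypothetical sketch of how pre-specified performance objectives might
# gate the release of a modified algorithm and drive post-market monitoring.
# Thresholds, metric names and workflow are assumptions for illustration only.

def meets_objectives(measured: dict[str, float], objectives: dict[str, float]) -> bool:
    """Return True only if every pre-specified metric meets or exceeds its floor."""
    return all(measured.get(metric, 0.0) >= floor for metric, floor in objectives.items())


# Pre-specified objectives from a (hypothetical) predetermined change control plan.
objectives = {"sensitivity": 0.90, "specificity": 0.85}

# Results of validating a re-trained candidate algorithm on a sequestered test set.
candidate_results = {"sensitivity": 0.93, "specificity": 0.88}

if meets_objectives(candidate_results, objectives):
    print("Candidate modification may be released under the change control plan.")
else:
    print("Candidate modification is held back for further review.")

# Post-market: periodically re-check performance estimated from real-world use,
# so a drop below the pre-specified floors triggers investigation.
postmarket_results = {"sensitivity": 0.88, "specificity": 0.87}
if not meets_objectives(postmarket_results, objectives):
    print("Real-world monitoring flags a performance shortfall for investigation.")
```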
We have more work to do to build out this initial set of
ideas and we’ll rely on comments and feedback from experts and stakeholders in
this space to help inform the agency as we continue to think about how we’ll
regulate artificial intelligence technologies to improve patient care. We
anticipate several more steps in the future, including issuing draft guidance
that’ll be informed by the feedback on today’s discussion paper.
As with all of our efforts in digital health, collaboration
will be key to developing this appropriate framework. We encourage feedback and
welcome a diversity of opinions and thoughtful discourse, which will contribute
to building the foundation of this regulatory paradigm. As algorithms evolve,
the FDA must also modernize our approach to regulating these products. We must
ensure that we can continue to provide a gold standard of safety and
effectiveness. We believe that guidance from the agency will help advance the
development of these innovative products.
We’ve taken similar steps to advance other novel oversight
frameworks for new technologies. Our Digital Health Innovation Action Plan laid
the groundwork for new approaches to foster innovation in digital health. We’re
building our Digital Health Center of Excellence to develop more efficient ways
to ensure the safety and effectiveness of technologies like smart watches with
medical apps. Our Software Precertification Pilot Program is allowing us to
test a new approach for product review.