Understanding the Future Regulatory Concerns of AI in Pharma to Harness Its Full Power

Shepherding a new drug to market is a long, expensive, and uncertain process. On average, it takes 10 to 12 years and $2.6 billion to bring a new drug from discovery to regulatory approval and market launch. Approval success rates for drug candidates range from 10% to 20% in the United States, the European Union, and Japan.

During the last decade, the return on investment (ROI) in drug development has steadily declined. According to a Deloitte report, pharmaceutical ROI fell from 10.1% in 2010 to just 1.9% in 2018. As a result, pharmaceutical firms may be incentivized to concentrate resources on more lucrative areas, such as cancer treatment, at the expense of other diseases.

In this context, reducing the cost of bringing new drugs to market while increasing their approval success rate is a business and public health imperative. To achieve this goal, pharmaceutical executives are actively exploring ways to deploy artificial intelligence (AI) solutions across a range of business functions: drug discovery, clinical trial design and monitoring, drug manufacturing, quality assurance/quality control (QA/QC), and product management.

However, despite AI’s high potential to reshape the pharmaceutical industry, regulatory authorities are concerned about its associated challenges, particularly around data provenance, as well as the reliability, understandability, performance, and monitoring of AI models.

With this in mind, to maximize the benefits of AI while mitigating its related risks, pharmaceutical incumbents and startups need to implement AI governance processes and tools.

Promising AI Use Cases in Pharma

Although we’re still at the beginning of the AI revolution in pharma, promising use cases have already been identified:

Drug discovery: In drug discovery, AI models are predicting the 3D structure of target proteins at an unprecedented scale. This helps scientists match a newly designed drug to the chemical environment of a specific region of the target protein, and it enables them to predict both the compound’s effect on the target and its safety risks. This progress is reflected in the variety of tools now available to scientists to find suitable drug molecule candidates, predict the 3D structures of proteins, and estimate the toxicity of thousands of compounds (a toy toxicity-prediction sketch follows these use cases). The potential for AI in drug discovery has also led to massive partnership deals between top pharma companies and startups working in this space.

Clinical trials: Evaluating the safety and efficacy of new drugs consumes up to half of the total cost of drug development. This is an intrinsically data-intensive activity, and scientists are increasingly using AI models to identify meaningful patterns in vast datasets (e.g., clinical trials, medical literature, patient health records, and post-market surveillance data) to improve clinical trial design, patient-trial matching, and recruitment (a simplified matching sketch also appears below). It is now also possible to create digital twins of patients to enable smaller and faster studies, and innovative startups have secured significant funding through this approach.
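To make the toxicity-prediction use case concrete, here is a minimal, illustrative sketch in Python: it featurizes molecules as Morgan fingerprints with the open-source RDKit library and trains a scikit-learn classifier. The SMILES strings and toxicity labels are placeholders, not real assay data, and production models are far more sophisticated.

```python
# Minimal sketch: predicting compound toxicity from molecular structure.
# Assumes RDKit and scikit-learn are installed; labels are illustrative only.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def featurize(smiles: str) -> np.ndarray:
    """Convert a SMILES string into a 2048-bit Morgan fingerprint."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
    return np.array(fp)

# Toy training set of (SMILES, toxic?) pairs -- placeholder labels.
train = [("CCO", 0), ("c1ccccc1", 0), ("CC(=O)Oc1ccccc1C(=O)O", 0),
         ("ClC(Cl)(Cl)Cl", 1), ("C1=CC=C(C=C1)N", 1)]

X = np.stack([featurize(s) for s, _ in train])
y = np.array([label for _, label in train])
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Score a new candidate molecule.
candidate = featurize("CCN(CC)CC").reshape(1, -1)
print(f"Predicted toxicity probability: {model.predict_proba(candidate)[0, 1]:.2f}")
```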
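Similarly, here is a toy illustration of patient-trial matching: candidate trials are ranked for a patient by the similarity of simple feature vectors. The encodings and trial identifiers are invented for illustration; real systems rely on validated eligibility criteria and far richer representations.

```python
# Toy sketch of patient-trial matching via vector similarity.
# Feature encodings and trial IDs below are hypothetical placeholders.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Illustrative encodings: [normalized age, biomarker level, prior therapies].
patient = np.array([0.6, 0.9, 0.2])
trials = {
    "TRIAL-A": np.array([0.5, 0.8, 0.1]),  # similar biomarker profile
    "TRIAL-B": np.array([0.1, 0.2, 0.9]),  # heavily pretreated population
}

# Rank trials by similarity to the patient's profile.
for trial_id, vec in sorted(trials.items(),
                            key=lambda kv: cosine(patient, kv[1]),
                            reverse=True):
    print(f"{trial_id}: similarity {cosine(patient, vec):.2f}")
```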

Current Guidelines Point the Way to Future Regulation on AI in Pharma

AI advances in pharma are remarkable in important respects, but also create unique challenges. Regulatory agencies have started formulating high-level guidelines to address these challenges. Last year, the International Coalition of Medicines Regulatory Authorities (“ICMRA”), which includes the U.S. Food and Drug Administration (FDA), released a report making a series of recommendations to its members. Industry actors operating in any of these member countries should pay particular attention to the following recommendations:

  • Understandability/explainability: “Regulators may need to elaborate a risk-based approach to assessing and regulating AI, and this could be informed through exchange and collaboration in ICMRA. The scientific or clinical validation of AI use would require a sufficient level of understandability and regulatory access to the employed algorithms and underlying datasets.”
  • Governance processes: “Sponsors, developers, and pharmaceutical companies should establish strengthened governance structures to oversee algorithm(s) and AI deployments that are closely linked to the benefit/risk of a medicinal product, such as trial conduct automation or product use depending on individual data-based algorithms.”
  • AI oversight compliance officer: “Regulators should consider establishing the concept of a Qualified Person responsible for AI/algorithm(s) oversight compliance (similar to legally accountable natural persons for medical devices or pharmacovigilance).”
  • Data provenance and transparency: “Regulatory guidelines for AI development and use with medicinal products should be developed in a number of areas, including data provenance, reliability, transparency and understandability, validity (construct, content, external, etc.), development and use for pharmacovigilance purposes, real-world performance and monitoring.”
  • AI version tracking: “In the EU, to address the rapid, unpredictable, and potentially opaque nature of AI updates, the post-authorization management of medicines, including the Variation framework, may need to be adapted to accommodate updates to AI software linked to a medicinal product. There may be an advantage to defining major vs. minor updates, in a risk-based approach, for all digital tools that impact the quality, safety, or efficacy of a medicinal product and thus linked to its benefits and risks.”

In parallel, regulatory authorities are conducting various initiatives to map AI use cases and their associated risks. For instance, the Heads of Medicines Agencies (HMA) and European Medicines Agency (EMA) Joint Big Data Task Force held a workshop on AI in medicines regulation where they called for the promotion of transparent and auditable AI.

Smart Steps Companies Can Take Today

Regulators are clearly telegraphing their concerns, which forward-looking companies can and should proactively address through appropriate governance processes and tools. Doing so would not only reduce their exposure to upcoming regulations but also accelerate the drug development process for the benefit of patients. To this end, pharma executives should:

  • Embed explainability across the model lifecycle: It is understandable that regulatory authorities are concerned about the “black box” nature of AI models in such a high-stakes domain and thus require a high level of transparency into how models make their predictions. To meet this expectation, pharma companies need to embed explainability across the model lifecycle, from 1) development, to 2) evaluation and validation, to 3) ongoing monitoring after deployment (see the first sketch after this list).
  • Implement model evaluation and monitoring capabilities: Various studies have established that, without proper oversight, AI may degrade in performance, replicate human bias, and lead to unintended consequences. To mitigate these risks, companies must ensure that deployed AI solutions are effectively governed through robust diagnostic and monitoring capabilities that help data scientists continuously debug and improve their models (see the second sketch after this list).
  • Define clear lines of accountability: In the financial services industry, model risk managers (MRMs) oversee the risks of adverse impacts from decisions based on incorrect or misused models. As regulators consider requiring a “Qualified Person responsible for AI/algorithm(s) oversight compliance,” pharma companies should learn from other regulated industries, such as banking, where similar roles already exist.
  • Anticipate compliance with the EU AI Act: The ICMRA suggests that regulators “may need to elaborate a risk-based approach to assessing and regulating AI.” This approach is similar to that of the European Commission. Indeed, the EU AI Act is a comprehensive regulatory proposal that classifies AI applications under four distinct categories of risk: 1) unacceptable risk, 2) high risk, 3) limited risk, and 4) minimal risk. Pharma companies should start with a careful review of the EU AI Act, especially the quality management and conformity assessment procedures for high-risk use cases, and introduce appropriate risk mitigation processes. This is particularly important because the EU AI Act, like the GDPR, will have a global impact on any company looking to do business in the EU, including pharma companies based in the U.S.
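
As a concrete illustration of the first point, below is a minimal sketch of post-hoc explainability using the open-source SHAP library: it attributes each prediction of a trained model to individual input features, which is the kind of visibility into model behavior regulators are asking for. The model and data here are synthetic stand-ins, not a validated pharma workflow.

```python
# Minimal explainability sketch with SHAP; assumes shap and scikit-learn
# are installed. The features stand in for, e.g., patient covariates.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                    # synthetic covariates
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)    # synthetic outcome

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer computes per-feature attributions for each prediction,
# showing reviewers what drives the model's output.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])
print(shap_values.shape)  # (5, 4): one attribution per feature per row
```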
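And for the second point, here is a minimal monitoring sketch using the Population Stability Index (PSI), a distribution-shift metric long used in credit-risk model monitoring. The data and alert threshold are illustrative assumptions, not a prescribed standard.

```python
# Minimal drift-monitoring sketch using the Population Stability Index (PSI).
# All data is synthetic and purely illustrative.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare a live distribution ('actual') against a reference one
    ('expected'); PSI above ~0.2 is a common rule-of-thumb alert level."""
    # Interior cut points taken from the reference distribution's quantiles.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))[1:-1]
    e_frac = np.bincount(np.searchsorted(edges, expected), minlength=bins) / len(expected)
    a_frac = np.bincount(np.searchsorted(edges, actual), minlength=bins) / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)  # avoid dividing by or logging zero
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 10_000)  # model scores at validation time
live = rng.normal(0.3, 1.1, 10_000)       # scores observed after deployment
print(f"PSI = {psi(reference, live):.3f}")  # a shift this large warrants review
```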

Moving Forward

AI holds the potential to vastly improve pharma operations and help meet the needs of patients in new ways, ranging from faster drug discovery cycles to improved diagnostic assistance and more personalized medical treatment. However, regulators have also expressed legitimate concerns about its potential risks. Fortunately, a variety of processes and tools are available to help pharma executives mitigate these risks and fully harness the power of AI. Implementing these governance measures now allows them to capture the promise of AI while limiting its downside and ensuring compliance with future regulation.

Lofred Madzou

    Lofred Madzou is Director of Strategy and Business Development at TruEra, where he helps enterprises adopt AI responsibly. Lofred is also a Research Associate at the Oxford Internet Institute (University of Oxford) where he is focused on the governance of AI systems through audit processes.
